New second order Mumford-Shah model based on Γ-convergence approximation for image processing
Duan, Jinming; Lu, Wenqi; Pan, Zhenkuan; Bai, Li
2016-05-01
In this paper, a second order variational model named the Mumford-Shah total generalized variation (MSTGV) is proposed for simultaneous image denoising and segmentation; it combines the original Γ-convergence approximated Mumford-Shah model with the second order total generalized variation (TGV). For image denoising, the proposed MSTGV can eliminate both the staircase artefact associated with the first order total variation and the edge blurring effect associated with the quadratic H1 regularization or the second order bounded Hessian regularization. For image segmentation, the MSTGV can obtain clear and continuous boundaries of objects in the image. To improve computational efficiency, the implementation of the MSTGV does not directly solve its high order nonlinear partial differential equations but instead exploits the efficient split Bregman algorithm. The algorithm benefits from the fast Fourier transform, an analytical generalized soft thresholding equation, and Gauss-Seidel iteration. Extensive experiments are conducted to demonstrate the effectiveness and efficiency of the proposed model.
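The split Bregman solver mentioned in this abstract hinges on a closed-form shrinkage step. As a hedged sketch (not the authors' exact operator; the isotropic two-component form below is an assumption about how a gradient field is shrunk), the scalar and generalized (vector) soft-thresholding maps look like:

```python
import math

def soft_threshold(x, lam):
    """Scalar soft-thresholding: argmin_z 0.5*(z - x)**2 + lam*|z|."""
    return math.copysign(max(abs(x) - lam, 0.0), x)

def vector_shrink(vx, vy, lam):
    """Isotropic (generalized) shrinkage of the vector (vx, vy), the kind of
    closed-form update used for gradient fields in split Bregman TV solvers."""
    norm = math.hypot(vx, vy)
    if norm == 0.0:
        return 0.0, 0.0
    scale = max(norm - lam, 0.0) / norm
    return scale * vx, scale * vy
```

Because these updates are closed-form, each Bregman iteration reduces to cheap pointwise operations plus the linear solves handled by FFT or Gauss-Seidel.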
Ordered cones and approximation
Keimel, Klaus
1992-01-01
This book presents a unified approach to Korovkin-type approximation theorems. It includes classical material on the approximation of real-valued functions as well as recent and new results on set-valued functions and stochastic processes, and on weighted approximation. The results are not only of qualitative nature, but include quantitative bounds on the order of approximation. The book is addressed to researchers in functional analysis and approximation theory as well as to those that want to apply these methods in other fields. It is largely self-contained, but the reader should have a solid background in abstract functional analysis. The unified approach is based on a new notion of locally convex ordered cones that are not embeddable in vector spaces but allow Hahn-Banach type separation and extension theorems. This concept seems to be of independent interest.
Adaptive approximation of higher order posterior statistics
Lee, Wonjung
2014-02-01
Filtering is an approach for incorporating observed data into time-evolving systems. Instead of a family of Dirac delta masses, as widely used in Monte Carlo methods, we here use the Wiener chaos expansion for the parametrization of the conditioned probability distribution to solve the nonlinear filtering problem. The Wiener chaos expansion is not the best method for uncertainty propagation without observations. Nevertheless, the projection of the system variables onto a fixed polynomial basis spanning the probability space might be a competitive representation in the presence of relatively frequent observations, because the Wiener chaos approach not only leads to an accurate and efficient prediction for short time uncertainty quantification, but also allows one to apply several data assimilation methods that can be used to yield a better approximate filtering solution. The aim of the present paper is to investigate this hypothesis. We answer in the affirmative for the (stochastic) Lorenz-63 system, based on numerical simulations in which the uncertainty quantification method and the data assimilation method are adaptively selected according to whether the dynamics is driven by Brownian motion and to the near-Gaussianity of the measure to be updated, respectively.
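The prediction (uncertainty-propagation) step for the stochastic Lorenz-63 system can be sketched with a plain Euler-Maruyama scheme; the parameter values and additive-noise form below are standard textbook assumptions, not taken from the paper:

```python
import math
import random

def lorenz63_em(x, y, z, dt, n_steps, noise=0.0, rng=None,
                sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Euler-Maruyama integration of a stochastic Lorenz-63 system with
    additive noise. With noise=0.0 it reduces to the deterministic Euler scheme."""
    rng = rng or random.Random(0)
    sqdt = math.sqrt(dt)
    for _ in range(n_steps):
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        x += dx * dt + noise * sqdt * rng.gauss(0.0, 1.0)
        y += dy * dt + noise * sqdt * rng.gauss(0.0, 1.0)
        z += dz * dt + noise * sqdt * rng.gauss(0.0, 1.0)
    return x, y, z
```

In a filtering experiment this forward model would generate the prior ensemble (or feed a chaos expansion), with the observation update applied between such prediction sweeps.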
Gauge-invariant intense-field approximations to all orders
International Nuclear Information System (INIS)
Faisal, F H M
2007-01-01
We present a gauge-invariant formulation of the so-called strong-field KFR approximations in the 'velocity' and 'length' gauges and demonstrate their equivalence in all orders. The theory thus overcomes a longstanding discrepancy between the strong-field velocity and the length-gauge approximations for non-perturbative processes in intense laser fields. (fast track communication)
Local facet approximation for image stitching
Li, Jing; Lai, Shiming; Liu, Yu; Wang, Zhengming; Zhang, Maojun
2018-01-01
Image stitching aims at eliminating multiview parallax and generating a seamless panorama given a set of input images. This paper proposes a local adaptive stitching method that achieves both accurate and robust image alignment across the whole panorama. A transformation estimation model is introduced by approximating the scene as a combination of neighboring facets. Then, the local adaptive stitching field is constructed using a series of linear systems of the facet parameters, which enables parallax handling in three-dimensional space. We also provide a concise but effective global projectivity preserving technique that smoothly varies the transformations from locally adaptive to globally planar. The proposed model is capable of stitching both normal images and fisheye images. The efficiency of our method is quantitatively demonstrated in comparative experiments on several challenging cases.
Ordering, symbols and finite-dimensional approximations of path integrals
International Nuclear Information System (INIS)
Kashiwa, Taro; Sakoda, Seiji; Zenkin, S.V.
1994-01-01
We derive a general form of finite-dimensional approximations of path integrals for both bosonic and fermionic canonical systems in terms of symbols of operators determined by operator ordering. We argue that for a system with a given quantum Hamiltonian such approximations are independent of the type of symbols up to terms of O(ε), where ε is the infinitesimal time interval determining the accuracy of the approximations. A new class of such approximations is found for both c-number and Grassmannian dynamical variables. The actions determined by the approximations are non-local and have no classical continuum limit except in the cases of pq- and qp-ordering. As an explicit example the fermionic oscillator is considered in detail. (author)
Practical implementation of a higher order transverse leakage approximation
International Nuclear Information System (INIS)
Prinsloo, Rian H.; Tomašević
2011-01-01
Transverse integrated nodal diffusion methods currently represent the standard in full core neutronic simulation. The primary shortcoming in this approach, be it via the Analytic Nodal Method or Nodal Expansion Method, is the utilization of the quadratic transverse leakage approximation. This approach, although proven to work well for typical LWR problems, is not consistent with the formulation of nodal methods and can cause accuracy and convergence problems. In this work an improved, consistent quadratic leakage approximation is formulated, which derives from the class of higher order nodal methods developed some years ago. In this new approach, only the information relevant to describing the transverse leakage terms in the zero-order nodal equations is obtained from the higher order formalism. The method yields accuracy comparable to full higher order methods, but does not suffer from the computational burden which these methods typically incur. (author)
Approximation of Analytic Functions by Bessel's Functions of Fractional Order
Directory of Open Access Journals (Sweden)
Soon-Mo Jung
2011-01-01
We will solve the inhomogeneous Bessel differential equation x²y″(x) + xy′(x) + (x² − ν²)y(x) = ∑_{m=0}^{∞} a_m x^m, where ν is a positive nonintegral number, and apply this result to the approximation of analytic functions of a special type by Bessel functions of fractional order.
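Substituting a power series y(x) = ∑_m c_m x^m into the left-hand side turns the equation into the recurrence (m² − ν²)c_m + c_{m−2} = a_m, which is solvable term by term precisely because ν is nonintegral (so m² − ν² never vanishes). A small numerical sketch of that idea (my own illustration, not the paper's construction):

```python
def series_solution_coeffs(a, nu, n_terms):
    """Coefficients c_m of a particular power-series solution
    y(x) = sum_m c_m x**m  of  x^2 y'' + x y' + (x^2 - nu^2) y = sum_m a_m x**m.
    Substituting the series gives (m^2 - nu^2) c_m + c_{m-2} = a_m, which is
    solvable for every m because nu is nonintegral."""
    c = []
    for m in range(n_terms):
        rhs = a[m] if m < len(a) else 0.0
        prev = c[m - 2] if m >= 2 else 0.0
        c.append((rhs - prev) / (m * m - nu * nu))
    return c

def eval_series(c, x):
    """Evaluate the truncated power series at x."""
    return sum(cm * x ** m for m, cm in enumerate(c))
```

Checking the ODE residual of the truncated series with finite differences confirms the recurrence.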
Bilinear reduced order approximate model of parabolic distributed solar collectors
Elmetennani, Shahrazed
2015-07-01
This paper proposes a novel, low dimensional and accurate approximate model for the distributed parabolic solar collector, by means of a modified Gaussian interpolation along the spatial domain. The proposed reduced model, taking the form of a low dimensional bilinear state representation, enables the reproduction of the heat transfer dynamics along the collector tube for system analysis. Moreover, because it is presented as a reduced order bilinear state space model, the well-established control theory for this class of systems can be applied. The approximation efficiency has been demonstrated by several simulation tests, which have been performed considering parameters of the Acurex field under real external working conditions. Model accuracy has been evaluated by comparison to the analytical solution of the hyperbolic distributed model and its semi-discretized approximation, highlighting the benefits of using the proposed numerical scheme. Furthermore, model sensitivity to the different parameters of the Gaussian interpolation has been studied.
Symmetries of nth-Order Approximate Stochastic Ordinary Differential Equations
Fredericks, E.; Mahomed, F. M.
2012-01-01
Symmetries of $n$th-order approximate stochastic ordinary differential equations (SODEs) are studied. The determining equations of these SODEs are derived in an Itô calculus context; these determining equations are not themselves stochastic. SODEs are normally used to model natural phenomena (e.g., earthquakes) or to test the safety and reliability of models in construction engineering when looking at the impact of random perturbations.
Understanding operational risk capital approximations: First and second orders
Directory of Open Access Journals (Sweden)
Gareth W. Peters
2013-07-01
We set the context for capital approximation within the framework of the Basel II/III regulatory capital accords. This is particularly topical as the Basel III accord is shortly due to take effect. In this regard, we provide a summary of the role of capital adequacy in the new accord, highlighting along the way the significant loss events that have been attributed to the Operational Risk class that was introduced in the Basel II and III accords. Then we provide a semi-tutorial discussion on the modelling aspects of capital estimation under a Loss Distributional Approach (LDA). Our emphasis is to focus on the important loss processes with regard to those that contribute most to capital, the so-called "high consequence, low frequency" loss processes. This leads us to provide a tutorial overview of heavy tailed loss process modelling in OpRisk under Basel III, with discussion on the implications of such tail assumptions for the severity model in an LDA structure. This provides practitioners with a clear understanding of the features that they may wish to consider when developing OpRisk severity models in practice. From this discussion on heavy tailed severity models, we then develop an understanding of the impact such models have on the right tail asymptotics of the compound loss process and we provide a detailed presentation of what are known as first and second order tail approximations for the resulting heavy tailed loss process. From this we develop a tutorial on three key families of risk measures and their equivalent second order asymptotic approximations: Value-at-Risk (the Basel III industry standard), Expected Shortfall (ES), and the Spectral Risk Measure. These then form the capital approximations. We then provide a few example case studies to illustrate the accuracy of these asymptotic capital approximations, the rate of convergence of the asymptotic result as a function of the LDA frequency and severity model parameters, and the sensitivity
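The first-order ("single-loss") tail approximation referenced here states that for a compound Poisson(λ) sum with subexponential severity distribution F, P(S > x) ≈ λ·F̄(x), so VaR_q(S) ≈ F⁻¹(1 − (1 − q)/λ). A hedged sketch with a Pareto severity (the parameter choices are my own), checked against crude Monte Carlo:

```python
import math
import random

def pareto_var_single_loss(q, lam, alpha, x_m):
    """First-order (single-loss) VaR approximation for a compound Poisson(lam)
    sum of Pareto(alpha, x_m) severities: invert P(S > x) ~ lam*(x_m/x)**alpha."""
    return x_m * (lam / (1.0 - q)) ** (1.0 / alpha)

def compound_var_mc(q, lam, alpha, x_m, n_sims, seed=1):
    """Crude Monte Carlo VaR of the compound Poisson-Pareto loss, for comparison."""
    rng = random.Random(seed)
    totals = []
    for _ in range(n_sims):
        # Poisson(lam) draw by CDF inversion (fine for small lam)
        n = 0
        p = math.exp(-lam)
        cdf = p
        u = rng.random()
        while u > cdf:
            n += 1
            p *= lam / n
            cdf += p
        # Pareto severities by inverse-CDF sampling; 1-U avoids U == 0
        totals.append(sum(x_m * (1.0 - rng.random()) ** (-1.0 / alpha)
                          for _ in range(n)))
    totals.sort()
    return totals[int(q * n_sims)]
```

The first-order formula ignores the bulk of the distribution, so it typically undershoots the simulated quantile; the second-order terms discussed in the abstract correct for exactly that.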
Two angle dependent reactive infinite order sudden approximation
International Nuclear Information System (INIS)
Jellinek, J.; Kouri, D.J.
1984-01-01
The reactive infinite order sudden approximation is redeveloped in a manner in which the initial and final arrangement internal angles γ_λ and γ_ν enter as independent quantities. The analysis parallels that due to Khare, Kouri, and Baer, except that matching of the wave function from different arrangements is done in such a manner that no single γ_ν angle is associated with a particular γ_λ angle. As a consequence, the matching surface parameter B_{λν} does not occur
Approximate Schur complement preconditioning of the lowest order nodal discretizations
Energy Technology Data Exchange (ETDEWEB)
Moulton, J.D.; Ascher, U.M. [Univ. of British Columbia, Vancouver, British Columbia (Canada); Morel, J.E. [Los Alamos National Lab., NM (United States)
1996-12-31
Particular classes of nodal methods and mixed hybrid finite element methods lead to equivalent, robust and accurate discretizations of 2nd order elliptic PDEs. However, widespread popularity of these discretizations has been hindered by the awkward linear systems which result. The present work exploits this awkwardness, which provides a natural partitioning of the linear system, by defining two optimal preconditioners based on approximate Schur complements. Central to the optimal performance of these preconditioners is their sparsity structure which is compatible with Dendy's black box multigrid code.
Wave vector modification of the infinite order sudden approximation
International Nuclear Information System (INIS)
Sachs, J.G.; Bowman, J.M.
1980-01-01
A simple method is proposed to modify the infinite order sudden approximation (IOS) in order to extend its region of quantitative validity. The method involves modifying the phase of the IOS scattering matrix to include a part calculated at the outgoing relative kinetic energy as well as a part calculated at the incoming kinetic energy. An immediate advantage of this modification is that the resulting S matrix is symmetric. We also present a closely related method in which the relative kinetic energies used in the calculation of the phase are determined from quasiclassical trajectory calculations. A set of trajectories is run with the initial state being the incoming state, and another set is run with the initial state being the outgoing state, and the average final relative kinetic energy of each set is obtained. One part of the S-operator phase is then calculated at each of these kinetic energies. We apply these methods to vibrationally inelastic collinear collisions of an atom and a harmonic oscillator, and calculate transition probabilities P_{n_i→n_f} for three model systems. For systems which are sudden, or nearly so, the agreement with exact quantum close-coupling calculations is substantially improved over standard IOS ones when Δn = |n_f − n_i| is large, and the corresponding transition probability is small, i.e., less than 0.1. However, the modifications we propose will not improve the accuracy of the IOS transition probabilities for any collisional system unless the standard form of IOS already gives at least qualitative agreement with exact quantal calculations. We also suggest comparisons between some classical quantities and sudden predictions which should help in determining the validity of the sudden approximation. This is useful when exact quantal data are not available for comparison.
Wave vector modification of the infinite order sudden approximation
Sachs, Judith Grobe; Bowman, Joel M.
1980-10-01
A simple method is proposed to modify the infinite order sudden approximation (IOS) in order to extend its region of quantitative validity. The method involves modifying the phase of the IOS scattering matrix to include a part calculated at the outgoing relative kinetic energy as well as a part calculated at the incoming kinetic energy. An immediate advantage of this modification is that the resulting S matrix is symmetric. We also present a closely related method in which the relative kinetic energies used in the calculation of the phase are determined from quasiclassical trajectory calculations. A set of trajectories is run with the initial state being the incoming state, and another set is run with the initial state being the outgoing state, and the average final relative kinetic energy of each set is obtained. One part of the S-operator phase is then calculated at each of these kinetic energies. We apply these methods to vibrationally inelastic collinear collisions of an atom and a harmonic oscillator, and calculate transition probabilities P_{n_i→n_f} for three model systems. For systems which are sudden, or nearly so, the agreement with exact quantum close-coupling calculations is substantially improved over standard IOS ones when Δn = |n_f − n_i| is large, and the corresponding transition probability is small, i.e., less than 0.1. However, the modifications we propose will not improve the accuracy of the IOS transition probabilities for any collisional system unless the standard form of IOS already gives at least qualitative agreement with exact quantal calculations. We also suggest comparisons between some classical quantities and sudden predictions which should help in determining the validity of the sudden approximation. This is useful when exact quantal data are not available for comparison.
Fractal image coding by an approximation of the collage error
Salih, Ismail; Smith, Stanley H.
1998-12-01
In fractal image compression an image is coded as a set of contractive transformations, and is guaranteed to generate an approximation to the original image when iteratively applied to any initial image. In this paper we present a method for mapping similar regions within an image by an approximation of the collage error; that is, range blocks can be approximated by a linear combination of domain blocks.
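The guarantee cited above is the contraction mapping (collage) theorem: because each block map is contractive, iterating the decoder from any starting image converges to the same attractor. A toy 1-D decoder illustrating this (block sizes, maps, and coefficients are invented for illustration):

```python
def fractal_decode(transforms, n, n_iters=60, start=None):
    """Iterate a fractal code on a 1-D 'image' of length n. Each transform
    (r0, d0, s, o) rebuilds the length-2 range block at r0 from the length-4
    domain block at d0, downsampled by pairwise averaging, then mapped by
    the contractive affine map x -> s*x + o (|s| < 1)."""
    img = list(start) if start is not None else [0.0] * n
    for _ in range(n_iters):
        new = [0.0] * n
        for (r0, d0, s, o) in transforms:
            for k in range(2):
                d = 0.5 * (img[d0 + 2 * k] + img[d0 + 2 * k + 1])  # downsample
                new[r0 + k] = s * d + o
        img = new
    return img
```

Starting the iteration from an all-zero image and from an arbitrary image yields the same attractor up to numerical precision, which is exactly why the encoder only needs to make the collage error small.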
The impact of approximations and arbitrary choices on geophysical images
Valentine, Andrew P.; Trampert, Jeannot
2016-01-01
Whenever a geophysical image is to be constructed, a variety of choices must be made. Some, such as those governing data selection and processing, or model parametrization, are somewhat arbitrary: there may be little reason to prefer one choice over another. Others, such as defining the theoretical framework within which the data are to be explained, may be more straightforward: typically, an 'exact' theory exists, but various approximations may need to be adopted in order to make the imaging problem computationally tractable. Differences between any two images of the same system can be explained in terms of differences between these choices. Understanding the impact of each particular decision is essential if images are to be interpreted properly, but little progress has been made towards a quantitative treatment of this effect. In this paper, we consider a general linearized inverse problem, applicable to a wide range of imaging situations. We write down an expression for the difference between two images produced using similar inversion strategies, but where different choices have been made. This provides a framework within which inversion algorithms may be analysed, and allows us to consider how image effects may arise. In this paper, we take a general view, and do not specialize our discussion to any specific imaging problem or setup (beyond the restrictions implied by the use of linearized inversion techniques). In particular, we look at the concept of 'hybrid inversion', in which highly accurate synthetic data (typically the result of an expensive numerical simulation) are combined with an inverse operator constructed on the basis of theoretical approximations. It is generally supposed that this offers the benefits of using the more complete theory without the full computational costs. We argue that the inverse operator is as important as the forward calculation in determining the accuracy of results. We illustrate this using a simple example, based on imaging the
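A minimal sketch of the setting the authors analyse: the same data inverted under two defensible regularization choices yields two different images, and the difference is attributable purely to that choice. Everything below (operator, data, damping values) is an invented toy example, not the paper's experiment:

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for small dense systems."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def tikhonov(G, d, lam):
    """Damped least squares image: m = (G^T G + lam*I)^(-1) G^T d."""
    n = len(G[0])
    GtG = [[sum(G[k][i] * G[k][j] for k in range(len(G)))
            + (lam if i == j else 0.0) for j in range(n)] for i in range(n)]
    Gtd = [sum(G[k][i] * d[k] for k in range(len(G))) for i in range(n)]
    return solve(GtG, Gtd)
```

With a light damping the toy model is recovered almost exactly; with a heavy damping a visibly different "image" is obtained from identical data, which is the kind of choice-driven difference the paper quantifies.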
Higher Order Improvements for Approximate Estimators
DEFF Research Database (Denmark)
Kristensen, Dennis; Salanié, Bernard
Many modern estimation methods in econometrics approximate an objective function, through simulation or discretization for instance. The resulting "approximate" estimator is often biased, and it always incurs an efficiency loss. We here propose three methods to improve the properties of such approximate estimators at a low computational cost. The first two methods correct the objective function so as to remove the leading term of the bias due to the approximation. One variant provides an analytical bias adjustment, but it only works for estimators based on stochastic approximators, such as simulation-based estimators. Our second bias correction is based on ideas from the resampling literature; it eliminates the leading bias term for non-stochastic as well as stochastic approximators. Finally, we propose an iterative procedure where we use Newton-Raphson (NR) iterations based on a much finer
Bilinear reduced order approximate model of parabolic distributed solar collectors
Elmetennani, Shahrazed; Laleg-Kirati, Taous-Meriem
2015-01-01
This paper proposes a novel, low dimensional and accurate approximate model for the distributed parabolic solar collector, by means of a modified Gaussian interpolation along the spatial domain. The proposed reduced model, taking the form of a low dimensional bilinear state representation, enables the reproduction of the heat transfer dynamics along the collector tube for system analysis.
Higher-Order Approximation of Cubic-Quintic Duffing Model
DEFF Research Database (Denmark)
Ganji, S. S.; Barari, Amin; Babazadeh, H.
2011-01-01
We apply an Artificial Parameter Lindstedt-Poincaré Method (APL-PM) to find improved approximate solutions for strongly nonlinear Duffing oscillations with cubic-quintic nonlinear restoring force. This approach yields simple linear algebraic equations instead of nonlinear algebraic equations...
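For the cubic-quintic oscillator u'' + αu + βu³ + γu⁵ = 0, a one-term harmonic-balance estimate (a simpler relative of the APL-PM; treating it as representative of that method is my assumption) gives ω² ≈ α + (3/4)βa² + (5/8)γa⁴ for amplitude a. A sketch that checks this against direct integration:

```python
import math

def hb_frequency(a, alpha=1.0, beta=0.2, gamma=0.05):
    """One-term harmonic-balance frequency for u'' + alpha*u + beta*u**3
    + gamma*u**5 = 0 with initial amplitude a (parameter values are mine)."""
    return math.sqrt(alpha + 0.75 * beta * a ** 2 + 0.625 * gamma * a ** 4)

def simulated_period(a, alpha=1.0, beta=0.2, gamma=0.05, dt=1e-4):
    """Measure the period with classical RK4; starting from rest at u = a,
    the first zero of the velocity occurs at half a period."""
    def acc(u):
        return -(alpha * u + beta * u ** 3 + gamma * u ** 5)
    u, v = a, 0.0
    for step in range(1, 400000):
        k1u, k1v = v, acc(u)
        k2u, k2v = v + 0.5 * dt * k1v, acc(u + 0.5 * dt * k1u)
        k3u, k3v = v + 0.5 * dt * k2v, acc(u + 0.5 * dt * k2u)
        k4u, k4v = v + dt * k3v, acc(u + dt * k3u)
        u2 = u + dt * (k1u + 2 * k2u + 2 * k3u + k4u) / 6
        v2 = v + dt * (k1v + 2 * k2v + 2 * k3v + k4v) / 6
        if v < 0.0 and v2 >= 0.0:  # velocity crosses zero at t ~ T/2
            frac = -v / (v2 - v)   # linear interpolation of the crossing
            return 2.0 * ((step - 1) + frac) * dt
        u, v = u2, v2
    raise RuntimeError("no period found")
```

For moderately strong nonlinearities the one-term estimate already tracks the numerically measured period to within a couple of percent, which is the kind of accuracy the higher-order schemes in the abstract then improve on.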
Lowest order Virtual Element approximation of magnetostatic problems
Beirão da Veiga, L.; Brezzi, F.; Dassi, F.; Marini, L. D.; Russo, A.
2018-04-01
We give here a simplified presentation of the lowest order Serendipity Virtual Element method, and show its use for the numerical solution of linear magneto-static problems in three dimensions. The method can be applied to very general decompositions of the computational domain (as is natural for Virtual Element Methods) and uses as unknowns the (constant) tangential component of the magnetic field $\\mathbf{H}$ on each edge, and the vertex values of the Lagrange multiplier $p$ (used to enforce the solenoidality of the magnetic induction $\\mathbf{B}=\\mu\\mathbf{H}$). In this respect the method can be seen as the natural generalization of the lowest order Edge Finite Element Method (the so-called "first kind Nédélec" elements) to polyhedra of almost arbitrary shape, and as we show on some numerical examples it exhibits very good accuracy (for being a lowest order element) and excellent robustness with respect to distortions.
An improved corrective smoothed particle method approximation for second‐order derivatives
Korzilius, S.P.; Schilders, W.H.A.; Anthonissen, M.J.H.
2013-01-01
To solve (partial) differential equations it is necessary to have good numerical approximations. In SPH, most approximations suffer from the presence of boundaries. In this work a new approximation for the second-order derivative is derived and numerically compared with two other approximations.
Fast and Analytical EAP Approximation from a 4th-Order Tensor
Directory of Open Access Journals (Sweden)
Aurobrata Ghosh
2012-01-01
Generalized diffusion tensor imaging (GDTI) was developed to model complex apparent diffusivity coefficient (ADC) using higher-order tensors (HOTs) and to overcome the inherent single-peak shortcoming of DTI. However, the geometry of a complex ADC profile does not correspond to the underlying structure of fibers. This tissue geometry can be inferred from the shape of the ensemble average propagator (EAP). Though interesting methods for estimating a positive ADC using 4th-order diffusion tensors were developed, GDTI in general was overtaken by other approaches, for example, the orientation distribution function (ODF), since it is considerably difficult to recuperate the EAP from a HOT model of the ADC in GDTI. In this paper, we present a novel closed-form approximation of the EAP using Hermite polynomials from a modified HOT model of the original GDTI-ADC. Since the solution is analytical, it is fast, differentiable, and the approximation converges well to the true EAP. This method also makes the effort of computing a positive ADC worthwhile, since now both the ADC and the EAP can be used and have closed forms. We demonstrate our approach with 4th-order tensors on synthetic data and in vivo human data.
Fast and Analytical EAP Approximation from a 4th-Order Tensor.
Ghosh, Aurobrata; Deriche, Rachid
2012-01-01
Generalized diffusion tensor imaging (GDTI) was developed to model complex apparent diffusivity coefficient (ADC) using higher-order tensors (HOTs) and to overcome the inherent single-peak shortcoming of DTI. However, the geometry of a complex ADC profile does not correspond to the underlying structure of fibers. This tissue geometry can be inferred from the shape of the ensemble average propagator (EAP). Though interesting methods for estimating a positive ADC using 4th-order diffusion tensors were developed, GDTI in general was overtaken by other approaches, for example, the orientation distribution function (ODF), since it is considerably difficult to recuperate the EAP from a HOT model of the ADC in GDTI. In this paper, we present a novel closed-form approximation of the EAP using Hermite polynomials from a modified HOT model of the original GDTI-ADC. Since the solution is analytical, it is fast, differentiable, and the approximation converges well to the true EAP. This method also makes the effort of computing a positive ADC worthwhile, since now both the ADC and the EAP can be used and have closed forms. We demonstrate our approach with 4th-order tensors on synthetic data and in vivo human data.
SAR image regularization with fast approximate discrete minimization.
Denis, Loïc; Tupin, Florence; Darbon, Jérôme; Sigelle, Marc
2009-07-01
Synthetic aperture radar (SAR) images, like other coherent imaging modalities, suffer from speckle noise. The presence of this noise makes the automatic interpretation of images a challenging task, and noise reduction is often a prerequisite for successful use of classical image processing algorithms. Numerous approaches have been proposed to filter speckle noise. Markov random field (MRF) modelling provides a convenient way to express both data fidelity constraints and desirable properties of the filtered image. In this context, total variation minimization has been extensively used to constrain the oscillations in the regularized image while preserving its edges. Speckle noise follows heavy-tailed distributions, and the MRF formulation leads to a minimization problem involving nonconvex log-likelihood terms. Such a minimization can be performed efficiently by computing minimum cuts on weighted graphs. Due to memory constraints, exact minimization, although theoretically possible, is not achievable on the large images required by remote sensing applications. The computational burden of the state-of-the-art algorithm for approximate minimization (namely the α-expansion) is too heavy, especially when considering joint regularization of several images. We show that a satisfying solution can be reached, in a few iterations, by performing a graph-cut-based combinatorial exploration of large trial moves. This algorithm is applied to joint regularization of the amplitude and interferometric phase in urban area SAR images.
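On 1-D signals the MRF energy with a TV prior can be minimized exactly by dynamic programming over a discrete label set, which makes the trade-off the paper studies in 2-D easy to see. A hedged sketch (the paper's graph-cut moves address 2-D images; the quadratic data term below is my simplification of the nonconvex log-likelihood):

```python
def chain_mrf_denoise(signal, labels, beta):
    """Exact minimization of  sum_i (x_i - l_i)^2 + beta * sum_i |l_i - l_{i+1}|
    over a discrete label set, via dynamic programming (Viterbi) on the chain."""
    n, L = len(signal), len(labels)
    cost = [(signal[0] - l) ** 2 for l in labels]
    back = []
    for i in range(1, n):
        new, arg = [], []
        for j, l in enumerate(labels):
            best, bk = float("inf"), -1
            for k in range(L):
                c = cost[k] + beta * abs(labels[k] - l)
                if c < best:
                    best, bk = c, k
            new.append(best + (signal[i] - l) ** 2)
            arg.append(bk)
        cost, back = new, back + [arg]
    j = min(range(L), key=lambda k: cost[k])
    out = [j]
    for arg in reversed(back):  # backtrack the optimal labelling
        j = arg[j]
        out.append(j)
    return [labels[j] for j in reversed(out)]
```

The chain solver is exact but costs O(n·L²); on a 2-D grid no such exact sweep exists, which is why graph cuts and large trial moves are needed there.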
Virial theorem and the Born-Oppenheimer approximation at different orders of perturbation
International Nuclear Information System (INIS)
Olivier, Gabriel; Weislinger, Edmond
1977-01-01
The link between the virial theorem and the adiabatic approximation is studied for a few orders of perturbation. It is shown that the total energy of the system is distributed between the mean values of the kinetic and potential energies of the nuclei and the electrons at each order of perturbation. No static approximation connected with the Hellmann-Feynman theorem is made.
Nodal approximations of varying order by energy group for solving the diffusion equation
International Nuclear Information System (INIS)
Broda, J.T.
1992-02-01
The neutron flux across the nuclear reactor core is of interest to reactor designers and others. The diffusion equation, an integro-differential equation in space and energy, is commonly used to determine the flux level. However, the solution of even a simplified version of this equation, when automated, is very time consuming. Since the flux level changes with time, in general, this calculation must be made repeatedly. Therefore, solution techniques that speed the calculation while maintaining accuracy are desirable. One factor that contributes to the solution time is the spatial flux shape approximation used. It is common practice to use the same order of flux shape approximation in each energy group even though this may not be the most efficient choice. The one-dimensional, two-energy-group diffusion equation was solved, for the node average flux and core k-effective, using two sets of spatial shape approximations for each of three reactor types. A fourth-order approximation in both energy groups forms the first set. The second set combines a second-order approximation in energy group one with a fourth-order approximation in energy group two. Comparison of the results from the two approximation sets shows that the use of a different order of spatial flux shape approximation results in a considerable loss of accuracy for the pressurized water reactor modeled. However, the loss of accuracy is small for the heavy water and graphite reactors modeled. The use of different order approximations in each energy group thus produces mixed results. Further investigation into the accuracy and computing time is required before any quantitative advantage of using the second-order approximation in energy group one and the fourth-order approximation in energy group two can be determined.
Generalized frameworks for first-order evolution inclusions based on Yosida approximations
Directory of Open Access Journals (Sweden)
Ram U. Verma
2011-04-01
First, general frameworks for first-order evolution inclusions are developed based on A-maximal relaxed monotonicity, and then, using the Yosida approximation, the solvability of a general class of first-order nonlinear evolution inclusions is investigated. The role of A-maximal relaxed monotonicity is significant in the sense that it not only empowers the first-order nonlinear evolution inclusions but also generalizes the existing Yosida approximations and their characterizations in the current literature.
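Concretely, for the classical maximal monotone operator A = ∂|·| (the subdifferential of the absolute value), the resolvent J_λ = (I + λA)⁻¹ is soft thresholding and the Yosida approximation A_λ = (I − J_λ)/λ is the single-valued, (1/λ)-Lipschitz clamp of x/λ to [−1, 1]. A small numeric sketch of that standard example (my illustration, not the paper's generalized operator class):

```python
def resolvent_abs(x, lam):
    """J_lam(x) = (I + lam * d|.|)^{-1}(x): soft thresholding by lam."""
    if x > lam:
        return x - lam
    if x < -lam:
        return x + lam
    return 0.0

def yosida_abs(x, lam):
    """Yosida approximation A_lam = (I - J_lam)/lam of A = d|.|: a Lipschitz,
    single-valued surrogate for the (set-valued) subdifferential."""
    return (x - resolvent_abs(x, lam)) / lam
```

As λ → 0 the clamp steepens towards the sign function, recovering the subdifferential in the limit; this regularizing behaviour is what makes Yosida approximations useful for proving solvability of evolution inclusions.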
Hybrid approximations via second order combined dynamic derivatives on time scales
Directory of Open Access Journals (Sweden)
Qin Sheng
2007-09-01
This article focuses on the approximation of the conventional second order derivative via the combined (diamond-$\alpha$) dynamic derivative on time scales, with the necessary smoothness conditions embedded. We will show the constraints under which the second order dynamic derivative provides a consistent approximation to the conventional second derivative; the cases where the dynamic derivative approximates the derivative only via a proper modification of the existing formula; and the situations in which the dynamic derivative can never approximate consistently, even with the help of available structure correction methods. Constructive error analysis will be given via asymptotic expansions for practical hybrid modeling and computational applications.
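On the uniform time scale T = hZ, the Δ- and ∇-derivatives reduce to forward and backward difference quotients, and the diamond-α derivative is their convex combination; that reduction is the assumption behind this classical-calculus sketch of the consistency question the article studies. For smooth f, the α = 1/2 combination of the two one-sided second-order quotients is second-order accurate, while either quotient alone is only first-order:

```python
import math

def delta2(f, x, h):
    """Forward (Delta-type) second difference quotient."""
    return (f(x + 2 * h) - 2 * f(x + h) + f(x)) / h ** 2

def nabla2(f, x, h):
    """Backward (nabla-type) second difference quotient."""
    return (f(x) - 2 * f(x - h) + f(x - 2 * h)) / h ** 2

def diamond2(f, x, h, alpha=0.5):
    """Convex combination of the two quotients; alpha = 1/2 is symmetric and
    cancels the leading O(h) error term of each one-sided quotient."""
    return alpha * delta2(f, x, h) + (1 - alpha) * nabla2(f, x, h)
```

For other values of α the O(h) terms no longer cancel, mirroring the article's finding that consistency holds only under particular constraints or after a structural correction of the formula.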
Photogrammetric Processing of Planetary Linear Pushbroom Images Based on Approximate Orthophotos
Geng, X.; Xu, Q.; Xing, S.; Hou, Y. F.; Lan, C. Z.; Zhang, J. J.
2018-04-01
Efficiently producing planetary mapping products from orbital remote sensing images remains a challenging task. Photogrammetric processing of planetary stereo images suffers from several difficulties, such as the lack of ground control information and of informative features; among these, image matching is the most difficult step in planetary photogrammetry. This paper presents a photogrammetric processing framework for planetary remote sensing images based on approximate orthophotos. Both tie-point extraction for bundle adjustment and dense image matching for generating a digital terrain model (DTM) are performed on approximate orthophotos. Since most planetary remote sensing images are acquired by linear scanner cameras, we mainly deal with linear pushbroom images. To improve the computational efficiency of orthophoto generation and coordinate transformation, a fast back-projection algorithm for linear pushbroom images is introduced. Moreover, an iteratively refined DTM-and-orthophoto scheme is adopted in the DTM generation process, which helps reduce the search space of image matching and improve the matching accuracy of conjugate points. With the advantages of approximate orthophotos, the matching results for planetary remote sensing images can be greatly improved. We tested the proposed approach with Mars Express (MEX) High Resolution Stereo Camera (HRSC) and Lunar Reconnaissance Orbiter (LRO) Narrow Angle Camera (NAC) images. The preliminary experimental results demonstrate the feasibility of the proposed approach.
A comparative study of the second-order Born and Faddeev-Watson approximations: Pt. 3
International Nuclear Information System (INIS)
Roberts, M.J.
1988-01-01
Singularities which arise in the second-order Born and Faddeev-Watson approximations for ionisation processes are examined. A regularisation procedure for the latter is suggested. Comparison with He(e,2e)He+ experimental data in symmetric coplanar energy-sharing kinematics shows that the second-order Faddeev-Watson approximation is inferior to the second-order Born results of Byron et al. (1985, J. Phys. B: At. Mol. Phys. 18, 3203). (author)
International Nuclear Information System (INIS)
Lu Yujie; Zhu Banghe; Rasmussen, John C; Sevick-Muraca, Eva M; Shen Haiou; Wang Ge
2010-01-01
Fluorescence molecular imaging/tomography may play an important future role in preclinical research and clinical diagnostics. Time- and frequency-domain fluorescence imaging can acquire more measurement information than the continuous wave (CW) counterpart, improving the image quality of fluorescence molecular tomography. Although diffusion approximation (DA) theory has been extensively applied in optical molecular imaging, high-order photon migration models need to be further investigated to match the quantitation provided by nuclear imaging. In this paper, a frequency-domain parallel adaptive finite element solver is developed with simplified spherical harmonics (SP_N) approximations. To fully evaluate the performance of the SP_N approximations, a fast time-resolved tetrahedron-based Monte Carlo fluorescence simulator suitable for complex heterogeneous geometries is developed using a convolution strategy to realize the simulation of the fluorescence excitation and emission. The validation results show that high-order SP_N approximations can effectively correct the modeling errors of the diffusion equation, especially when the tissues have high absorption characteristics or when high modulation frequency measurements are used. Furthermore, the parallel adaptive mesh evolution strategy improves the modeling precision and the simulation speed significantly on a realistic digital mouse phantom. This solver is a promising platform for fluorescence molecular tomography using high-order approximations to the radiative transfer equation.
On the validity of localized approximation for an on-axis zeroth-order Bessel beam
International Nuclear Information System (INIS)
Gouesbet, Gérard; Lock, J.A.; Ambrosio, L.A.; Wang, J.J.
2017-01-01
Localized approximation procedures are efficient ways to evaluate the beam shape coefficients of laser beams, and are particularly useful when other methods are ineffective or inefficient. Several papers in the literature have reported the use of such procedures to evaluate the beam shape coefficients of Bessel beams. Examining the specific case of an on-axis zeroth-order Bessel beam, we demonstrate that localized approximation procedures are valid only for small enough axicon angles, and argue that this conclusion must remain true for any kind of Bessel beam.
First-order corrections to random-phase approximation GW calculations in silicon and diamond
Ummels, R.T.M.; Bobbert, P.A.; van Haeringen, W.
1998-01-01
We report on ab initio calculations of the first-order corrections in the screened interaction W to the random-phase approximation polarizability and to the GW self-energy, using a noninteracting Green's function, for silicon and diamond. It is found that the first-order vertex and self-consistency
International Nuclear Information System (INIS)
He Qiu-Yan; Yuan Xiao; Yu Bo
2017-01-01
A performance analysis of the generalized Carlson iterating process, which can realize the rational approximation of a fractional operator of arbitrary order, is presented in this paper. The reasons why the generalized Carlson iterating function possesses such excellent properties as self-similarity and exponential symmetry are also explained. The K-index, P-index, O-index, and complexity index are introduced for the performance analysis. Considering nine different operational orders and choosing an appropriate rational initial impedance for each, the rational approximation impedance functions calculated by the iterating function satisfy computational rationality, positive reality, and operational validity; they are thus capable of reproducing the operational performance of fractional operators and of being physically realized. The approximation performance of the impedance function with respect to the ideal fractional operator and the circuit network complexity are also exhibited. (paper)
International Nuclear Information System (INIS)
Liu Chunliang; Xie Xi; Chen Yinbao
1991-01-01
The universal nonlinear dynamic system equation is equivalent to a nonlinear Volterra integral equation, and an approximate analytical solution of any order of this integral equation can be obtained by an exact analytical method, thus giving an alternative derivation procedure as well as an alternative computation algorithm for the solution of the universal nonlinear dynamic system equation.
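The equivalence used above can be illustrated in the simplest setting: u'(t) = F(u(t)) with u(0) = u0 is equivalent to the Volterra integral equation u(t) = u0 + int_0^t F(u(s)) ds, which successive (Picard) approximation solves order by order. The sketch below is a generic numerical illustration of this idea, not the paper's exact analytical method:

```python
import math

def picard(F, u0, t, n_iter, n_quad=2000):
    """Approximate u(t) for u' = F(u), u(0) = u0 by n_iter Picard
    iterations of the Volterra equation, evaluating the integral with the
    composite trapezoidal rule on n_quad panels."""
    ts = [t * k / n_quad for k in range(n_quad + 1)]
    u = [u0] * (n_quad + 1)          # zeroth iterate: the constant u0
    for _ in range(n_iter):
        g = [F(v) for v in u]
        new = [u0]
        acc = 0.0
        for k in range(1, n_quad + 1):
            acc += 0.5 * (g[k - 1] + g[k]) * (ts[k] - ts[k - 1])
            new.append(u0 + acc)
        u = new
    return u[-1]

# u' = u, u(0) = 1: the n-th Picard iterate is the degree-n Taylor
# polynomial of e^t, so 20 iterations at t = 1 come very close to e
print(picard(lambda v: v, 1.0, 1.0, 20))
```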
Physical Applications of a Simple Approximation of Bessel Functions of Integer Order
Barsan, V.; Cojocaru, S.
2007-01-01
Applications of a simple approximation of Bessel functions of integer order, in terms of trigonometric functions, are discussed for several examples from electromagnetism and optics. The method may be applied in the intermediate regime, bridging the "small values regime" and the "asymptotic" one, and covering, in this way, an area of great…
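The article's specific trigonometric formula is not reproduced here; as a sketch of the general idea, the standard large-argument asymptotic J_n(x) ~ sqrt(2/(pi*x)) cos(x - n*pi/2 - pi/4) can be compared against the integral definition J_n(x) = (1/pi) int_0^pi cos(n t - x sin t) dt:

```python
import math

def bessel_J(n, x, panels=4000):
    """Bessel function of integer order via its integral representation,
    evaluated with the composite trapezoidal rule."""
    h = math.pi / panels
    total = 0.0
    for k in range(panels + 1):
        t = k * h
        w = 0.5 if k in (0, panels) else 1.0
        total += w * math.cos(n * t - x * math.sin(t))
    return total * h / math.pi

def bessel_J_asym(n, x):
    """Standard large-argument trigonometric (asymptotic) approximation."""
    return math.sqrt(2.0 / (math.pi * x)) * math.cos(x - n * math.pi / 2 - math.pi / 4)

# the two agree increasingly well as x grows
for x in (5.0, 10.0, 20.0):
    print(x, bessel_J(0, x), bessel_J_asym(0, x))
```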
Convex order approximations in case of cash flows of mixed signs
Dhaene, J.; Goovaerts, M.J.; Vanmaele, M.; van Weert, K.
2012-01-01
In Van Weert et al. (2010), results are obtained showing that, when allowing some of the cash flows to be negative, convex order lower bound approximations can still be used to solve general investment problems in a context of provisioning or terminal wealth. In this paper, a correction and further
Breakdown of the single-exchange approximation in third-order symmetry-adapted perturbation theory.
Lao, Ka Un; Herbert, John M
2012-03-22
We report third-order symmetry-adapted perturbation theory (SAPT) calculations for several dimers whose intermolecular interactions are dominated by induction. We demonstrate that the single-exchange approximation (SEA) employed to derive the third-order exchange-induction correction E_exch-ind^(30) fails to quench the attractive nature of the third-order induction E_ind^(30), leading to one-dimensional potential curves that become attractive rather than repulsive at short intermolecular separations. A scaling equation for E_exch-ind^(30), based on an exact formula for the first-order exchange correction, is introduced to approximate exchange effects beyond the SEA, and qualitatively correct potential energy curves that include third-order induction are thereby obtained. For induction-dominated systems, our results indicate that a "hybrid" SAPT approach, in which a dimer Hartree-Fock calculation is performed in order to obtain a correction for higher-order induction, is necessary not only to obtain quantitative binding energies but also to obtain qualitatively correct potential energy surfaces. These results underscore the need to develop higher-order exchange-induction formulas that go beyond the SEA.
Approximating second-order vector differential operators on distorted meshes in two space dimensions
International Nuclear Information System (INIS)
Hermeline, F.
2008-01-01
A new finite volume method is presented for approximating second-order vector differential operators in two space dimensions. This method allows distorted triangle or quadrilateral meshes to be used without the numerical results being too much altered. The matrices that need to be inverted are symmetric positive definite; therefore, the most powerful linear solvers can be applied. The method has been tested on a few second-order vector partial differential equations arising in elasticity and fluid mechanics. These numerical experiments show that it is second-order accurate and locking-free. (authors)
Repfinder: Finding approximately repeated scene elements for image editing
Cheng, Ming-Ming
2010-07-26
Repeated elements are ubiquitous and abundant in both manmade and natural scenes. Editing such images while preserving the repetitions and their relations is nontrivial due to overlap, missing parts, deformation across instances, illumination variation, etc. Manually enforcing such relations is laborious and error-prone. We propose a novel framework where user scribbles are used to guide detection and extraction of such repeated elements. Our detection process, which is based on a novel boundary band method, robustly extracts the repetitions along with their deformations. The algorithm only considers the shape of the elements, and ignores similarity based on color, texture, etc. We then use topological sorting to establish a partial depth ordering of overlapping repeated instances. Missing parts on occluded instances are completed using information from other instances. The extracted repeated instances can then be seamlessly edited and manipulated for a variety of high level tasks that are otherwise difficult to perform. We demonstrate the versatility of our framework on a large set of inputs of varying complexity, showing applications to image rearrangement, edit transfer, deformation propagation, and instance replacement. © 2010 ACM.
Higher order analytical approximate solutions to the nonlinear pendulum by He's homotopy method
International Nuclear Information System (INIS)
Belendez, A; Pascual, C; Alvarez, M L; Mendez, D I; Yebra, M S; Hernandez, A
2009-01-01
A modified He's homotopy perturbation method is used to calculate the periodic solutions of a nonlinear pendulum. The method has been modified by truncating the infinite series corresponding to the first-order approximate solution and substituting a finite number of terms in the second-order linear differential equation. As can be seen, the modified homotopy perturbation method works very well for high values of the initial amplitude. Excellent agreement of the analytical approximate period with the exact period has been demonstrated not only for small but also for large amplitudes A (the relative error is less than 1% for A < 152 deg.). Comparison of the results obtained using this method with the exact ones reveals that this modified method is very effective and convenient.
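For reference, the exact pendulum period against which such approximations are judged is T = (4/omega0) K(sin(A/2)), with K the complete elliptic integral of the first kind. The sketch below evaluates this exact period via the arithmetic-geometric mean; it does not implement the homotopy perturbation formula itself:

```python
import math

def agm(a, b, tol=1e-15):
    """Arithmetic-geometric mean of a and b."""
    while abs(a - b) > tol:
        a, b = 0.5 * (a + b), math.sqrt(a * b)
    return a

def exact_period(amplitude, omega0=1.0):
    """Exact pendulum period T = (4/omega0) K(k), k = sin(A/2), using
    K(k) = pi / (2 * AGM(1, sqrt(1 - k^2)))."""
    k = math.sin(amplitude / 2.0)
    K = math.pi / (2.0 * agm(1.0, math.sqrt(1.0 - k * k)))
    return 4.0 * K / omega0

# ratio of the exact period to the small-angle period 2*pi/omega0
for A_deg in (10, 90, 150):
    A = math.radians(A_deg)
    print(A_deg, exact_period(A) / (2 * math.pi))
```

The ratio grows from about 1.002 at 10 degrees to well above 1 near 180 degrees, which is why amplitude-dependent approximate period formulas are needed at large amplitudes.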
HQET at order 1/m. Pt. 1. Non-perturbative parameters in the quenched approximation
Energy Technology Data Exchange (ETDEWEB)
Blossier, Benoit [Paris XI Univ., 91 - Orsay (France). Lab. de Physique Theorique; Della Morte, Michele [Mainz Univ. (Germany). Inst. fuer Kernphysik; Garron, Nicolas [Universidad Autonoma de Madrid (Spain). Dept. Fisica Teorica y Inst. de Fisica Teorica UAM/CSIC; Edinburgh Univ. (United Kingdom). School of Physics and Astronomy - SUPA; Sommer, Rainer [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC
2010-01-15
We determine non-perturbatively the parameters of the lattice HQET Lagrangian and those of heavy-light axial-vector and vector currents in the quenched approximation. The HQET expansion includes terms of order 1/m_b. Our results allow one to compute, for example, the heavy-light spectrum and B-meson decay constants in the static approximation and to order 1/m_b in HQET. The determination of the parameters is separated into universal and non-universal parts. The universal results can be used to determine the parameters for various discretizations. The computation reported in this paper uses the plaquette gauge action and the 'HYP1/2' action for the b-quark described by HQET. The parameters of the currents also depend on the light-quark action, for which we choose non-perturbatively O(a)-improved Wilson fermions. (orig.)
Relaxation approximations to second-order traffic flow models by high-resolution schemes
International Nuclear Information System (INIS)
Nikolos, I.K.; Delis, A.I.; Papageorgiou, M.
2015-01-01
A relaxation-type approximation of second-order non-equilibrium traffic models, written in conservation or balance law form, is considered. Using the relaxation approximation, the nonlinear equations are transformed into a semi-linear diagonalizable problem with linear characteristic variables and stiff source terms, with the attractive feature that neither Riemann solvers nor characteristic decompositions are needed. In particular, it is only necessary to provide the flux and source term functions and an estimate of the characteristic speeds. To discretize the resulting relaxation system, high-resolution reconstructions in space are considered. Emphasis is placed on a fifth-order WENO scheme and its performance. The computations reported demonstrate the simplicity and versatility of relaxation schemes as numerical solvers.
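The relaxation idea can be sketched on the scalar Burgers equation u_t + (u^2/2)_x = 0 as a stand-in for the traffic models: the Jin-Xin relaxation system u_t + v_x = 0, v_t + a u_x = (f(u) - v)/eps has linear characteristics +/- sqrt(a), and in the stiff limit v = f(u). The sketch below uses first-order upwinding on the characteristic variables rather than the paper's fifth-order WENO reconstruction:

```python
import math

def relaxed_burgers(u, a, dx, dt, steps):
    """Relaxed (eps -> 0) Jin-Xin scheme for u_t + (u^2/2)_x = 0 on a
    periodic grid, with first-order upwinding on the characteristic
    variables w+ = v + c*u (speed +c) and w- = v - c*u (speed -c)."""
    c = math.sqrt(a)  # subcharacteristic condition: c >= max |f'(u)|
    n = len(u)
    for _ in range(steps):
        v = [0.5 * ui * ui for ui in u]                  # relaxed closure v = f(u)
        wp = [v[i] + c * u[i] for i in range(n)]
        wm = [v[i] - c * u[i] for i in range(n)]
        lam = c * dt / dx                                # CFL number
        wp_new = [wp[i] - lam * (wp[i] - wp[i - 1]) for i in range(n)]
        wm_new = [wm[i] + lam * (wm[(i + 1) % n] - wm[i]) for i in range(n)]
        u = [(wp_new[i] - wm_new[i]) / (2 * c) for i in range(n)]
    return u

# Riemann-like initial data on a periodic grid
n = 200
dx = 1.0 / n
u0 = [1.0 if i < n // 2 else 0.0 for i in range(n)]
u = relaxed_burgers(u0, a=1.5, dx=dx, dt=0.5 * dx / math.sqrt(1.5), steps=100)
print(min(u), max(u))  # monotone first-order scheme: values stay in [0, 1]
```

Note that only the flux f(u) and the bound a on the characteristic speeds enter; no Riemann solver is used, which is the attractive feature mentioned in the abstract.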
A second-order approximation of particle motion in the fringing field of a dipole magnet
International Nuclear Information System (INIS)
Tarantin, N.I.
1980-01-01
The radial and axial motion of charged particles in the fringing field of an arbitrary dipole magnet has been considered to second order in small quantities. The dipole magnet has an inhomogeneous field and oblique entrance and exit boundaries in the form of second-order curves. The region of the fringing field has a variable extension, and a new definition of the effective boundary of the real fringing field of the dipole magnet is used. A better understanding of the influence of the fringing magnetic field on the motion of charged particles in the pole gap of the dipole magnet has been obtained. In particular, it is shown to be important to take into account, in the second approximation, some terms formally related to the next approximations. The results are presented in a form convenient for practical calculations. (orig.)
Second order approximation for optical polaron in the strong coupling case
International Nuclear Information System (INIS)
Bogolubov, N.N. Jr.
1993-11-01
Here we propose a method for constructing a second-order approximation to the ground-state energy for a class of model Hamiltonians with interactions linear in Bose operators in the strong-coupling case. As an application of the method, we consider the polaron model and construct a set of nonlinear differential equations determining the ground-state energy in the strong-coupling case. The radially symmetric case is also considered. (author). 10 refs
Directory of Open Access Journals (Sweden)
Veyis Turut
2013-01-01
Two techniques were implemented for solving nonlinear partial differential equations of fractional order: the Adomian decomposition method (ADM) and multivariate Padé approximation (MPA). The fractional derivatives are described in the Caputo sense. First, the fractional differential equation is solved and converted to a power series by the Adomian decomposition method (ADM); then the power series solution of the fractional differential equation is put into multivariate Padé series form. Finally, the numerical results are compared and presented in tables and figures.
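The Padé step can be sketched in the univariate case (the paper uses the multivariate generalization applied to the ADM power series): given Taylor coefficients c0, c1, ..., the [L/M] Padé approximant p(x)/q(x) with q(0) = 1 matches the series through order L + M, and the denominator coefficients solve a small linear system:

```python
import math

def pade(c, L, M):
    """[L/M] Pade approximant from Taylor coefficients c[0..L+M].
    Returns numerator and denominator coefficient lists (q[0] = 1)."""
    # denominator: solve sum_{j=1..M} b_j c_{L+i-j} = -c_{L+i}, i = 1..M
    A = [[(c[L + i - j] if 0 <= L + i - j < len(c) else 0.0)
          for j in range(1, M + 1)] for i in range(1, M + 1)]
    rhs = [-c[L + i] for i in range(1, M + 1)]
    # Gaussian elimination with partial pivoting
    for k in range(M):
        piv = max(range(k, M), key=lambda r: abs(A[r][k]))
        A[k], A[piv] = A[piv], A[k]
        rhs[k], rhs[piv] = rhs[piv], rhs[k]
        for r in range(k + 1, M):
            m = A[r][k] / A[k][k]
            for col in range(k, M):
                A[r][col] -= m * A[k][col]
            rhs[r] -= m * rhs[k]
    b = [0.0] * M
    for k in range(M - 1, -1, -1):
        b[k] = (rhs[k] - sum(A[k][j] * b[j] for j in range(k + 1, M))) / A[k][k]
    q = [1.0] + b
    # numerator from the convolution of the series with q
    p = [sum(c[i - j] * q[j] for j in range(0, min(i, M) + 1)) for i in range(L + 1)]
    return p, q

# Taylor coefficients of exp(x); the [2/2] Pade of exp is
# (1 + x/2 + x^2/12) / (1 - x/2 + x^2/12)
c = [1.0 / math.factorial(k) for k in range(5)]
p, q = pade(c, 2, 2)
x = 1.0
approx = sum(pc * x**i for i, pc in enumerate(p)) / sum(qc * x**i for i, qc in enumerate(q))
print(approx, math.e)  # the rational form is far more accurate than the raw series at x = 1
```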
Kiefer, Claus; Wichmann, David
2018-06-01
We extend the Born-Oppenheimer type of approximation scheme for the Wheeler-DeWitt equation of canonical quantum gravity to arbitrary orders in the inverse Planck mass squared. We discuss in detail the origin of unitarity violation in this scheme and show that unitarity can be restored by an appropriate modification which requires back reaction from matter onto the gravitational sector. In our analysis, we heavily rely on the gauge aspects of the standard Born-Oppenheimer scheme in molecular physics.
Discrete Ordinates Approximations to the First- and Second-Order Radiation Transport Equations
International Nuclear Information System (INIS)
FAN, WESLEY C.; DRUMM, CLIFTON R.; POWELL, JENNIFER L.
2002-01-01
The conventional discrete ordinates approximation to the Boltzmann transport equation can be described in a matrix form. Specifically, the within-group scattering integral can be represented by three components: a moment-to-discrete matrix, a scattering cross-section matrix and a discrete-to-moment matrix. Using and extending these entities, we derive and summarize the matrix representations of the second-order transport equations
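The matrix form described above can be sketched in 1-D slab geometry: a discrete-to-moment matrix D with entries w_m P_l(mu_m), a diagonal scattering matrix Sigma = diag(sigma_l), and a moment-to-discrete matrix M with entries (2l+1)/2 P_l(mu_m). The following illustration (not the paper's code) uses a hardcoded 4-point Gauss-Legendre quadrature, for which D*M is exactly the identity on the retained moments:

```python
def legendre(l, x):
    """Legendre polynomial P_l(x) via the three-term recurrence."""
    p0, p1 = 1.0, x
    if l == 0:
        return p0
    for k in range(1, l):
        p0, p1 = p1, ((2 * k + 1) * x * p1 - k * p0) / (k + 1)
    return p1

# 4-point Gauss-Legendre quadrature on [-1, 1]
mu = [-0.8611363115940526, -0.3399810435848563,
       0.3399810435848563,  0.8611363115940526]
w  = [ 0.3478548451374538,  0.6521451548625461,
       0.6521451548625461,  0.3478548451374538]
L = 3  # retain moments l = 0..3

# discrete-to-moment: phi_l = sum_m w_m P_l(mu_m) psi_m
D = [[w[m] * legendre(l, mu[m]) for m in range(4)] for l in range(L + 1)]
# moment-to-discrete: psi_m = sum_l (2l+1)/2 P_l(mu_m) phi_l
M = [[(2 * l + 1) / 2.0 * legendre(l, mu[m]) for l in range(L + 1)] for m in range(4)]

# D @ M is the identity on moment space (the quadrature is exact here)
DM = [[sum(D[l][m] * M[m][lp] for m in range(4)) for lp in range(L + 1)]
      for l in range(L + 1)]
for row in DM:
    print([round(x, 12) for x in row])

# within-group scattering source S = M Sigma D psi with Sigma = diag(sigma_l)
sigma = [1.0, 0.3, 0.1, 0.0]          # illustrative Legendre scattering moments
psi = [1.0, 2.0, 2.0, 1.0]            # sample angular flux at the 4 ordinates
phi = [sum(D[l][m] * psi[m] for m in range(4)) for l in range(L + 1)]
S = [sum(M[m][l] * sigma[l] * phi[l] for l in range(L + 1)) for m in range(4)]
```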
Second-order Born approximation for the ionization of molecules by electron and positron impact
Energy Technology Data Exchange (ETDEWEB)
Dal Cappello, C. [Universite Paul Verlaine-Metz, Laboratoire de Physique Moleculaire et des Collisions, Institut Jean Barriol (FR2843), 1 Boulevard Arago, F-57078 Metz Cedex 3 (France); Rezkallah, Z.; Houamer, S. [Laboratoire de Physique Quantique et Systemes Dynamiques, Departement de Physique, Faculte des Sciences Universite Ferhat Abbas, Setif 19000 (Algeria); Charpentier, I. [Universite Paul Verlaine-Metz, Laboratoire de Physique et Mecanique des Materiaux UMR 7554, Ile du Saulcy, F-57045 Metz Cedex 1 (France); Hervieux, P. A. [Institut de Physique et Chimie des Materiaux de Strasbourg, 23 Rue du Loess, BP 43, F-67034 Strasbourg Cedex 2 (France); Ruiz-Lopez, M. F. [Nancy-University, Equipe de Chimie et Biochimie Theoriques, UMR CNRS-UHP 7565, BP 239, F-54506 Vandoeuvre-les-Nancy (France); Dey, R. [Max-Planck Institut fuer Plasmaphysik, Boltzmannstr. 2, D-85748 Garching (Germany); Roy, A. C. [School of Mathematical Sciences, Ramakrishna Mission Vivekananda University, Belur Math 711202, West Bengal (India)
2011-09-15
Second-order Born approximation is applied to study the ionization of molecules. The initial and final states are described by single-center wave functions. For the initial state a Gaussian wave function is used while for the ejected electron it is a distorted wave. Results of the present model are compared with recent (e,2e) experiments on the water molecule. Preliminary results are also presented for the ionization of the thymine molecule by electrons and positrons.
High-order harmonic generation in solid slabs beyond the single-active-electron approximation
Hansen, Kenneth K.; Deffge, Tobias; Bauer, Dieter
2017-11-01
High-harmonic generation by a laser-driven solid slab is simulated using time-dependent density functional theory. Multiple harmonic plateaus up to very high harmonic orders are observed already at surprisingly low field strengths. The full all-electron harmonic spectra are, in general, very different from those of any individual Kohn-Sham orbital. Freezing the Kohn-Sham potential instead is found to be a good approximation for the laser intensities and harmonic orders considered. The origins of the plateau cutoffs are explained in terms of band gaps that can be reached by Kohn-Sham electrons and holes moving through the band structure.
Ganji, S. S.; Domairry, G.; Davodi, A. G.; Babazadeh, H.; Seyedalizadeh Ganji, S. H.
The main objective of this paper is to apply the parameter expansion technique (a modified Lindstedt-Poincaré method) to calculate the first-, second-, and third-order approximations of the motion of a nonlinear oscillator arising in rigid rod rocking back. The dynamics and frequency of motion of this nonlinear mechanical system are analyzed. Careful attention is paid to the effects of the introduced nonlinearity on the amplitudes of the oscillatory states and on the bifurcation structures. The synchronization and frequency of the systems are also examined. Numerical simulations confirm and complement the results obtained by the analytical approach. The approach offers a way to overcome the difficulty of computing the periodic behavior of oscillation problems in engineering. The solutions of this method are compared with the exact ones in order to validate the approach and assess the accuracy of the solutions. In particular, APL-PM works well for the whole range of oscillation amplitudes, and excellent agreement of the approximate frequency with the exact one has been demonstrated. The approximate period derived here is accurate and close to the exact solution. The method has the distinguishing feature of being simple to use, and it agrees with the exact solutions for various parameters.
Higher-order tensors in diffusion imaging
Schultz, T.; Fuster, A.; Ghosh, A.; Deriche, R.; Florack, L.M.J.; Lim, L.H.; Westin, C.-F.; Vilanova, A.; Burgeth, B.
2014-01-01
Diffusion imaging is a noninvasive tool for probing the microstructure of fibrous nerve and muscle tissue. Higher-order tensors provide a powerful mathematical language to model and analyze the large and complex data that is generated by its modern variants such as High Angular Resolution Diffusion
High-order harmonic propagation in gases within the discrete dipole approximation
International Nuclear Information System (INIS)
Hernandez-Garcia, C.; Perez-Hernandez, J. A.; Ramos, J.; Jarque, E. Conejero; Plaja, L.; Roso, L.
2010-01-01
We present an efficient approach for computing high-order harmonic propagation based on the discrete dipole approximation. In contrast with other approaches, our strategy is based on computing the total field as the superposition of the driving field with the field radiated by the elemental emitters of the sample. In this way we avoid the numerical integration of the wave equation, as Maxwell's equations have an analytical solution for an elementary (pointlike) emitter. The present strategy is valid for low-pressure gases interacting with strong fields near the saturation threshold (i.e., partially ionized), which is a common situation in the experiments of high-order harmonic generation. We use this tool to study the dependence of phase matching of high-order harmonics with the relative position between the beam focus and the gas jet.
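The superposition idea can be sketched with scalar pointlike emitters, whose radiated field e^{ikr}/r is known analytically, so no wave equation needs to be integrated; the geometry and dipole amplitudes below are illustrative, not the paper's configuration:

```python
import cmath
import math

k = 2 * math.pi  # wavenumber for unit wavelength

def emitted_field(emitters, obs):
    """Superpose scalar spherical waves amp * e^{i k r} / r from a list of
    (position, complex amplitude) pairs at the observation point obs."""
    total = 0 + 0j
    for (x, y, z), amp in emitters:
        r = math.dist((x, y, z), obs)
        total += amp * cmath.exp(1j * k * r) / r
    return total

# a short line of in-phase unit emitters along x, observed far away on-axis:
emitters = [((0.1 * m, 0.0, 0.0), 1.0 + 0j) for m in range(5)]
field = emitted_field(emitters, (0.0, 0.0, 100.0))
print(abs(field))  # nearly 5 * (1/100): the emitters add almost coherently
```

The total field of the sample would be this emitted field superposed with the driving field, evaluated emitter by emitter, which is the step that replaces the numerical integration of the wave equation.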
Zúñiga-Aguilar, C. J.; Coronel-Escamilla, A.; Gómez-Aguilar, J. F.; Alvarado-Martínez, V. M.; Romero-Ugalde, H. M.
2018-02-01
In this paper, we approximate the solution of fractional differential equations with delay using a new approach based on artificial neural networks. We consider fractional differential equations of variable order with the Mittag-Leffler kernel in the Liouville-Caputo sense. With this new neural network approach, an approximate solution of the fractional delay differential equation is obtained. Synaptic weights are optimized using the Levenberg-Marquardt algorithm. The neural network effectiveness and applicability were validated by solving different types of fractional delay differential equations, linear systems with delay, nonlinear systems with delay and a system of differential equations, for instance, the Newton-Leipnik oscillator. The solution of the neural network was compared with the analytical solutions and the numerical simulations obtained through the Adams-Bashforth-Moulton method. To show the effectiveness of the proposed neural network, different performance indices were calculated.
A new implementation of the second-order polarization propagator approximation (SOPPA)
DEFF Research Database (Denmark)
Packer, Martin J.; Dalskov, Erik K.; Enevoldsen, Thomas
1996-01-01
We present a new implementation of the second-order polarization propagator approximation (SOPPA) using a direct linear transformation approach, in which the SOPPA equations are solved iteratively. This approach has two important advantages over its predecessors. First, the direct linear...... and triplet transitions for benzene and naphthalene. The results compare well with experiment and CASPT2 values, calculated with identical basis sets and molecular geometries. This indicates that SOPPA can provide reliable values for excitation energies and response properties for relatively large molecular...
Second-order symmetric eikonal approximation for electron capture at high energies
Energy Technology Data Exchange (ETDEWEB)
Deco, G R; Rivarola, R D [Rosario Univ. Nacional (Argentina). Dept. de Fisica
1985-06-14
A symmetric eikonal approximation for electron capture in ion-atom collisions at high energies has been developed within the Dodd and Greider (1966, Phys. Rev. 146, 675) formalism. Implicit intermediate states are included through the choice of distorted initial and final wavefunctions. Explicit intermediate states are considered by the introduction of a free-particle Green's function G_0^+. The model is applied to resonant charge exchange in H^+ + H(1s) collisions. Also, the characteristic dip of the continuum distorted-wave model is analysed when higher orders are included at 'realistic' high energies.
An approximate framework for quantum transport calculation with model order reduction
Energy Technology Data Exchange (ETDEWEB)
Chen, Quan, E-mail: quanchen@eee.hku.hk [Department of Electrical and Electronic Engineering, The University of Hong Kong (Hong Kong); Li, Jun [Department of Chemistry, The University of Hong Kong (Hong Kong); Yam, Chiyung [Beijing Computational Science Research Center (China); Zhang, Yu [Department of Chemistry, The University of Hong Kong (Hong Kong); Wong, Ngai [Department of Electrical and Electronic Engineering, The University of Hong Kong (Hong Kong); Chen, Guanhua [Department of Chemistry, The University of Hong Kong (Hong Kong)
2015-04-01
A new approximate computational framework is proposed for computing the non-equilibrium charge density in the context of the non-equilibrium Green's function (NEGF) method for quantum mechanical transport problems. The framework consists of a new formulation, called the X-formulation, for single-energy density calculation based on the solution of sparse linear systems, and a projection-based nonlinear model order reduction (MOR) approach to address the large number of energy points required for large applied biases. The advantages of the new methods are confirmed by numerical experiments.
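The projection-based MOR step can be sketched for a generic parameterized linear system A(E)x = b, used here as a toy stand-in for the single-energy equations: build an orthonormal basis V from a few full solves at sample energies, then solve the small Galerkin system (V^T A(E) V) y = V^T b and take x ~ V y:

```python
def solve(A, b):
    """Dense linear solve by Gaussian elimination with partial pivoting."""
    n = len(b)
    A = [row[:] for row in A]
    b = b[:]
    for k in range(n):
        piv = max(range(k, n), key=lambda r: abs(A[r][k]))
        A[k], A[piv], b[k], b[piv] = A[piv], A[k], b[piv], b[k]
        for r in range(k + 1, n):
            m = A[r][k] / A[k][k]
            A[r] = [A[r][j] - m * A[k][j] for j in range(n)]
            b[r] -= m * b[k]
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        x[k] = (b[k] - sum(A[k][j] * x[j] for j in range(k + 1, n))) / A[k][k]
    return x

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def A_of(E, n=8):
    """Toy energy-dependent system: a tridiagonal matrix shifted by E."""
    A = [[0.0] * n for _ in range(n)]
    for i in range(n):
        A[i][i] = 2.0 + E
        if i:
            A[i][i - 1] = A[i - 1][i] = -1.0
    return A

n = 8
b = [1.0] * n
# snapshots: full solves at a few sample energies
snapshots = [solve(A_of(E), b) for E in (0.0, 0.5, 1.0)]
# orthonormal basis V via modified Gram-Schmidt
V = []
for s in snapshots:
    for v in V:
        d = sum(a * c for a, c in zip(s, v))
        s = [a - d * c for a, c in zip(s, v)]
    nrm = sum(a * a for a in s) ** 0.5
    if nrm > 1e-12:
        V.append([a / nrm for a in s])

def reduced_solve(E):
    """Galerkin-projected solve: x ~ V y with (V^T A V) y = V^T b."""
    AV = [matvec(A_of(E), v) for v in V]
    Ar = [[sum(V[i][t] * AV[j][t] for t in range(n)) for j in range(len(V))]
          for i in range(len(V))]
    br = [sum(v[t] * b[t] for t in range(n)) for v in V]
    y = solve(Ar, br)
    return [sum(y[j] * V[j][t] for j in range(len(V))) for t in range(n)]

# at a snapshot energy the projection is exact; between snapshots it is a
# close approximation, at the cost of only a tiny reduced solve per energy
for E in (0.5, 0.25):
    x_red, x_full = reduced_solve(E), solve(A_of(E), b)
    print(E, max(abs(a - c) for a, c in zip(x_red, x_full)))
```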
Convergence acceleration for time-independent first-order PDE using optimal PNB-approximations
Energy Technology Data Exchange (ETDEWEB)
Holmgren, S.; Branden, H. [Uppsala Univ. (Sweden)
1996-12-31
We consider solving time-independent (steady-state) flow problems in 2D or 3D governed by hyperbolic or "almost hyperbolic" systems of partial differential equations (PDE). Examples of such PDE are the Euler and the Navier-Stokes equations. The PDE is discretized using a finite difference or finite volume scheme with arbitrary order of accuracy. If the matrix B describes the discretized differential operator and u denotes the approximate solution, the discrete problem is given by a large system of equations.
HQET at order 1/m. Pt. 3. Decay constants in the quenched approximation
Energy Technology Data Exchange (ETDEWEB)
Blossier, Benoit [CNRS et Paris-Sud XI Univ., Orsay (France). Lab. de Physique Theorique; Della Morte, Michele [Mainz Univ. (Germany). Inst. fuer Kernphysik; Garron, Nicolas [Universidad Autonoma de Madrid (Spain). Dept. de Fisica Teorica e Inst. de Fisica Teorica IFT-UAM/CSIC; Edinburgh Univ. (United Kingdom). SUPA, School of Physics; Hippel, Georg von [Mainz Univ. (Germany). Inst. fuer Kernphysik; DESY, Zeuthen (Germany). NIC; Mendes, Tereza [DESY, Zeuthen (Germany). NIC; Sao Paulo Univ., Sao Carlos (Brazil). IFSC; Simma, Hubert; Sommer, Rainer [DESY, Zeuthen (Germany). NIC
2010-06-15
We report on the computation of the B_s meson decay constant in Heavy Quark Effective Theory on the lattice. The next-to-leading order corrections in the HQET expansion are included non-perturbatively. We estimate higher-order contributions to be very small. The results are extrapolated to the continuum limit; the main systematic error affecting the computation is therefore the quenched approximation used here. The Generalized Eigenvalue Problem and the use of all-to-all propagators are important technical ingredients of our approach that allow us to keep statistical and systematic errors under control. We also report on the decay constant f_{B_s'} of the first radially excited state in the B_s sector, computed in the static limit. (orig.)
Higher-order meshing of implicit geometries, Part II: Approximations on manifolds
Fries, T. P.; Schöllhammer, D.
2017-11-01
A new concept for the higher-order accurate approximation of partial differential equations on manifolds is proposed, in which a surface mesh composed of higher-order elements is automatically generated from level-set data. This enables a completely automatic workflow from the geometric description to the numerical analysis without any user intervention. A master level-set function defines the shape of the manifold through its zero-isosurface, which is then restricted to a finite domain by additional level-set functions. The surface elements are kept sufficiently continuous and shape regular by manipulating the background mesh. The numerical results show that optimal convergence rates are obtained with a moderate increase in the condition number compared to handcrafted surface meshes.
Approximate solution of space and time fractional higher order phase field equation
Shamseldeen, S.
2018-03-01
This paper is concerned with a class of space-time fractional partial differential equations (STFDE) with the Riesz derivative in space and the Caputo derivative in time. The proposed STFDE is considered as a generalization of a sixth-order partial phase field equation. We describe the application of the optimal homotopy analysis method (OHAM) to obtain an approximate solution of the suggested fractional initial value problem. An averaged-squared-residual error function is defined and used to determine the optimal convergence-control parameter. Two numerical examples, with periodic and non-periodic initial conditions, are studied to demonstrate the efficiency and accuracy of the adopted iterative approach. The dependence of the solution on the orders of the fractional derivatives in space and time and on the model parameters is investigated.
HQET at order 1/m. Pt. 3. Decay constants in the quenched approximation
International Nuclear Information System (INIS)
Blossier, Benoit; Della Morte, Michele; Garron, Nicolas; Edinburgh Univ.; Hippel, Georg von; DESY, Zeuthen; Mendes, Tereza; Sao Paulo Univ., Sao Carlos; Simma, Hubert; Sommer, Rainer
2010-06-01
We report on the computation of the B_s meson decay constant in Heavy Quark Effective Theory on the lattice. The next-to-leading-order corrections in the HQET expansion are included non-perturbatively. We estimate higher order contributions to be very small. The results are extrapolated to the continuum limit; the main systematic error affecting the computation is therefore the quenched approximation used here. The Generalized Eigenvalue Problem and the use of all-to-all propagators are important technical ingredients of our approach that allow us to keep statistical and systematic errors under control. We also report on the decay constant f_{B_s'} of the first radially excited state in the B_s sector, computed in the static limit. (orig.)
Charge and finite size corrections for virtual photon spectra in second order Born approximation
International Nuclear Information System (INIS)
Durgapal, P.
1982-01-01
The purpose of this work is to investigate the effects of finite nuclear size and charge on the spectrum of virtual photons emitted when a relativistic electron is scattered in the field of an atomic nucleus. The method consists in expanding the scattering cross section in terms of integrals over the nuclear inelastic form factor, with a kernel that is evaluated in second order Born approximation and derived from the elastic electron-scattering form factor. The kernel can be evaluated analytically provided the elastic form factor contains only poles; for this reason a Yukawa form factor is used. Before calculating the second order term, the first order term containing finite size effects in the inelastic form factor is studied. The virtual photon spectrum is found to be insensitive to the details of the inelastic distribution over a large range of energies and to depend only on the transition radius. This gives the freedom of choosing an inelastic distribution whose form factor has only poles; a modified form of the exponential distribution is chosen, which enables the matrix element to be evaluated analytically. The remaining integral over the physical momentum transfer is performed numerically. Virtual photon spectra for E1 and M1 transitions are evaluated for a variety of electron energies using several nuclei, and the results are compared with distorted wave calculations. Except for low energy and high Z, the second order results compare well with the distorted wave calculations.
Successive approximation algorithm for cancellation of artifacts in DSA images
International Nuclear Information System (INIS)
Funakami, Raiko; Hiroshima, Kyoichi; Nishino, Junji
2000-01-01
In this paper, we propose an algorithm for cancellation of artifacts in DSA images. We have already proposed an automatic registration method based on the detection of local movements. When motion of the object is large, it is difficult to estimate the exact movement, and the cancellation of artifacts may therefore fail. The algorithm we propose here is based on a simple rigid model. We present the results of applying the proposed method to a series of experimental X-ray images, as well as the results of applying the algorithm as preprocessing for a registration method based on local movement. (author)
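A registration step under a simple rigid model can be sketched as below; the brute-force search over integer translations, the SSD criterion, and the toy images are illustrative assumptions, not the authors' algorithm:

```python
import numpy as np

def rigid_translation_register(mask, contrast, max_shift=3):
    """Brute-force search over integer translations (a crude rigid model)
    for the shift minimizing the sum of squared differences, then return
    the registered subtraction (DSA-like) image and the shift found."""
    best, best_err = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(mask, dy, axis=0), dx, axis=1)
            err = np.sum((contrast - shifted) ** 2)
            if err < best_err:
                best, best_err = (dy, dx), err
    dy, dx = best
    registered = np.roll(np.roll(mask, dy, axis=0), dx, axis=1)
    return contrast - registered, best

rng = np.random.default_rng(0)
mask = rng.random((32, 32))
contrast = np.roll(mask, 2, axis=1)  # object moved 2 pixels between exposures
dsa, shift = rigid_translation_register(mask, contrast)
```

In practice such a rigid estimate would serve, as in the abstract, as preprocessing before a local-movement registration.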
Assessment of cavitation in artificial approximal dental lesions with near-IR imaging
Simon, Jacob C.; Darling, Cynthia L.; Fried, Daniel
2017-02-01
Bitewing radiography is still considered the state-of-the-art diagnostic technology for assessing cavitation within approximal carious dental lesions, even though radiographs cannot resolve cavitated surfaces and are instead used to measure lesion depth in order to predict cavitation. Clinicians need new technologies capable of determining whether approximal carious lesions have become cavitated, because not all lesions progress to cavitation. Assessing lesion cavitation with near-infrared (NIR) imaging methods holds great potential due to the high transparency of enamel in the NIR region from λ = 1300-1700 nm, which allows direct visualization and quantified measurement of enamel demineralization. The objective of this study was to measure the change in lesion appearance between non-cavitated and cavitated artificially generated lesions using two-dimensional NIR imaging at λ = 1300 nm and λ = 1450 nm and three-dimensional cross-polarization optical coherence tomography (CP-OCT) at λ = 1300 nm. Extracted human posterior teeth with sound proximal surfaces were chosen for this study and imaged before and after artificial lesions were made. A high-speed dental handpiece was used to create artificial cavitated proximal lesions in the sound samples, which were then imaged. The cavitated artificial lesions were then filled with hydroxyapatite powder to simulate non-cavitated proximal lesions.
Leveraging Gaussian process approximations for rapid image overlay production
CSIR Research Space (South Africa)
Burke, Michael
2017-10-01
Full Text Available The next sample is chosen at the point of maximum posterior variance, x_s = argmax_{x*} [K(x*, x*) − K(x*, x) K(x, x)^{-1} K(x, x*)]. This selection process can be slow, but could be bootstrapped using Latin hypercube sampling. Empirically, a 240-sample Gaussian process approximation takes roughly the same amount of time to compute as the full blanked overlay. (A boxplot figure compared ratings of Gaussian process approximations using 50 to 400 samples against the full overlay and the Itti-Koch method.)
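The variance-maximization sampling rule quoted in this extract can be sketched as follows; the RBF kernel, its length scale, and the 1D candidate grid are illustrative assumptions:

```python
import numpy as np

def rbf_kernel(a, b, ell=0.5):
    """Squared-exponential kernel between two 1D point sets."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ell) ** 2)

def next_sample(x_train, x_candidates):
    """Pick the candidate with the largest GP posterior variance,
    K(x*, x*) - K(x*, x) K(x, x)^{-1} K(x, x*)."""
    K = rbf_kernel(x_train, x_train) + 1e-8 * np.eye(len(x_train))
    Ks = rbf_kernel(x_candidates, x_train)
    var = 1.0 - np.einsum("ij,ij->i", Ks @ np.linalg.inv(K), Ks)
    return x_candidates[np.argmax(var)], var

x_train = np.array([0.0, 1.0])
x_cand = np.linspace(0.0, 1.0, 101)
x_next, var = next_sample(x_train, x_cand)
```

With training points at the two endpoints, the highest posterior variance (and hence the next sample) falls at the midpoint, which matches the intuition that the GP is least certain farthest from its data.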
Directory of Open Access Journals (Sweden)
Wang Yajun
2008-12-01
Full Text Available In order to address the complex uncertainties caused by the interplay between fuzziness and randomness in the safety problem for embankment engineering projects, and to evaluate the safety of such projects more scientifically and reasonably, this study presents fuzzy logic modeling of the stochastic finite element method (SFEM) based on the harmonious finite element (HFE) technique, using a first-order approximation theorem. Fuzzy mathematical models of safety repertories were introduced into the SFEM to analyze the stability of embankments and foundations, in order to describe the fuzzy failure procedure for the random safety performance function. The fuzzy models were developed with membership functions following half depressed gamma, half depressed normal, and half depressed echelon distributions. The fuzzy stochastic mathematical algorithm was used to comprehensively study the local failure mechanism of the main embankment section near Jingnan on the Yangtze River, in terms of numerical analysis for the probability integration of reliability on the random field affected by three fuzzy factors. The result shows that the middle region of the embankment is the principal zone of concentrated failure due to local fractures. There is also some local shear failure on the embankment crust. This study provides a referential method for solving complex multi-uncertainty problems in engineering safety analysis.
International Nuclear Information System (INIS)
Gougam, L.A.; Taibi, H.; Chikhi, A.; Mekideche-Chafa, F.
2009-01-01
The problem of determining an analytical description for a set of data arises in numerous sciences and applications and can be referred to as data modeling or system identification. Neural networks are a convenient means of representation because they are known to be universal approximators that can learn data. The desired task is usually obtained by a learning procedure which consists in adjusting the synaptic weights. For this purpose, many learning algorithms have been proposed to update these weights. The convergence of these learning algorithms is a crucial criterion for neural networks to be useful in different applications. The aim of the present contribution is to use a training algorithm for feed-forward wavelet networks used for function approximation. The training is based on the minimization of the least-squares cost function. The minimization is performed by iterative second order gradient-based methods. We make use of the Levenberg-Marquardt algorithm to train the architecture of the chosen network; the training procedure starts with a simple gradient method, which is followed by a BFGS (Broyden, Fletcher, Goldfarb and Shanno) algorithm. The performances of the two algorithms are then compared. Our method is then applied to determine the energy of the ground state associated with a sextic potential. In fact, the Schrödinger equation does not always admit an exact solution and one generally has to solve it numerically. To this end, the sextic potential is first approximated with the above outlined wavelet network and then implemented into a numerical scheme. Our results are in good agreement with those found in the literature.
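A hedged sketch of this kind of training, a small wavelet network fitted by Levenberg-Marquardt via SciPy, is given below; the Mexican-hat mother wavelet, the target function, and the network size are illustrative assumptions, not the authors' exact setup:

```python
import numpy as np
from scipy.optimize import least_squares

def wavelet(u):
    # "Mexican hat" mother wavelet (an illustrative choice)
    return (1.0 - u**2) * np.exp(-u**2 / 2.0)

def network(params, x, n_units=5):
    w = params[:n_units]                 # output weights
    t = params[n_units:2 * n_units]      # translations
    s = np.exp(params[2 * n_units:])     # dilations, kept positive
    return sum(w[i] * wavelet((x - t[i]) / s[i]) for i in range(n_units))

# target function standing in for a potential/energy curve
x = np.linspace(-3.0, 3.0, 200)
y = np.sin(x) * np.exp(-x**2 / 4.0)

n_units = 5
p0 = np.concatenate([np.ones(n_units),
                     np.linspace(-2.0, 2.0, n_units),
                     np.zeros(n_units)])
# second order gradient-based minimization of the least-squares cost
fit = least_squares(lambda p: network(p, x) - y, p0, method="lm")
rmse = np.sqrt(np.mean((network(fit.x, x) - y) ** 2))
```

The `method="lm"` option selects a Levenberg-Marquardt solver, mirroring the second-order training strategy described in the abstract.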
Comparison of the methods for discrete approximation of the fractional-order operator
Directory of Open Access Journals (Sweden)
Zborovjan Martin
2003-12-01
Full Text Available In this paper we present some alternative discretization methods (discrete approximations) for the fractional-order (FO) differentiator and their application to an FO dynamical system described by an FO differential equation (FDE). Two effective methods - the Muir expansion of the Tustin operator, and the continued fraction expansion (CFE) method with the Tustin operator and the Al-Alaoui operator - are compared against the analytical solution and a numerical solution obtained by the power series expansion (PSE) method. In addition to a detailed mathematical description, simulation results are presented. From the Bode plots of the FO differentiator and the FDE, and from the solution in the time domain, we can see that the CFE is a more effective method than the PSE method, although there are some restrictions on the choice of the time step. The Muir expansion is almost unusable.
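For orientation, the PSE branch of such discretizations can be sketched with the Grünwald-Letnikov binomial weights of (1 − z⁻¹)^α; this is an illustrative stand-in, not the paper's Muir or CFE implementations:

```python
import numpy as np

def gl_coefficients(alpha, n):
    """Grünwald-Letnikov / power-series-expansion weights of (1 - z^-1)^alpha,
    via the recursion c_0 = 1, c_k = c_{k-1} * (1 - (alpha + 1) / k)."""
    c = np.empty(n)
    c[0] = 1.0
    for k in range(1, n):
        c[k] = c[k - 1] * (1.0 - (alpha + 1.0) / k)
    return c

def fractional_derivative(y, alpha, h):
    """Approximate the alpha-order derivative of uniformly sampled y, step h."""
    c = gl_coefficients(alpha, len(y))
    return np.array([c[: k + 1] @ y[k::-1] for k in range(len(y))]) / h**alpha

# sanity check: alpha = 1 reduces to the ordinary backward difference
h = 0.01
t = np.arange(0.0, 1.0, h)
d = fractional_derivative(t**2, 1.0, h)
```

For integer α = 1 the weights collapse to (1, −1, 0, …), so the scheme reproduces the backward difference exactly, which is a convenient correctness check before using non-integer orders.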
DEFF Research Database (Denmark)
Eriksen, Janus Juul; Solanko, Lukasz Michal; Nåbo, Lina J.
2014-01-01
2) wave function coupled to PCM, we introduce dynamical PCM solvent effects only in the Random Phase Approximation (RPA) part of the SOPPA response equations, while the static solvent contribution is kept in both the RPA terms as well as in the higher order correlation matrix components of the SOPPA response equations. By dynamic terms, we refer to contributions that describe a change in environmental polarization which, in turn, reflects a change in the core molecular charge distribution upon an electronic excitation. This new combination of methods is termed PCM-SOPPA/RPA. We apply this newly defined method to the challenging cases of solvent effects on the lowest and intense electronic transitions in o-, m- and p-nitroaniline and o-, m- and p-nitrophenol and compare the performance of PCM-SOPPA/RPA with more conventional approaches. Compared to calculations based on time-dependent density…
High-order above-threshold ionization beyond the electric dipole approximation
Brennecke, Simon; Lein, Manfred
2018-05-01
Photoelectron momentum distributions from strong-field ionization are calculated by numerical solution of the one-electron time-dependent Schrödinger equation for a model atom including effects beyond the electric dipole approximation. We focus on the high-energy electrons from rescattering and analyze their momentum component along the field propagation direction. We show that the boundary of the calculated momentum distribution is deformed in accordance with the classical three-step model including the beyond-dipole Lorentz force. In addition, the momentum distribution exhibits an asymmetry in the signal strengths of electrons emitted in the forward/backward directions. Taken together, the two non-dipole effects give rise to a considerable average forward momentum component of the order of 0.1 a.u. for realistic laser parameters.
Zeroth order regular approximation approach to electric dipole moment interactions of the electron
Gaul, Konstantin; Berger, Robert
2017-07-01
A quasi-relativistic two-component approach for an efficient calculation of P ,T -odd interactions caused by a permanent electric dipole moment of the electron (eEDM) is presented. The approach uses a (two-component) complex generalized Hartree-Fock and a complex generalized Kohn-Sham scheme within the zeroth order regular approximation. In applications to select heavy-elemental polar diatomic molecular radicals, which are promising candidates for an eEDM experiment, the method is compared to relativistic four-component electron-correlation calculations and confirms values for the effective electric field acting on the unpaired electron for RaF, BaF, YbF, and HgF. The calculations show that purely relativistic effects, involving only the lower component of the Dirac bi-spinor, are well described by treating only the upper component explicitly.
Directory of Open Access Journals (Sweden)
Hui Huang
2017-01-01
Full Text Available Considering the pros and cons of the contourlet transform and the demands of multimodality medical imaging, we propose a novel image fusion algorithm that combines nonlinear approximation of the contourlet transform with image regional features. The most important coefficient bands of the contourlet sparse matrix are retained by nonlinear approximation. Low-frequency and high-frequency regional features are also used to fuse the medical images. The results strongly suggest that the proposed algorithm can improve the visual effect and quality of medical image fusion, as well as image denoising and enhancement.
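Nonlinear approximation, retaining only the largest-magnitude transform coefficients, can be sketched as below; a plain 2D FFT stands in for the contourlet transform, and the toy image and keep-fraction are illustrative assumptions:

```python
import numpy as np

def nonlinear_approximation(coeffs, keep_fraction=0.1):
    """Retain only the largest-magnitude fraction of transform coefficients
    (nonlinear approximation) and zero out the rest."""
    flat = np.abs(coeffs).ravel()
    k = max(1, int(keep_fraction * flat.size))
    thresh = np.partition(flat, -k)[-k]   # k-th largest magnitude
    return np.where(np.abs(coeffs) >= thresh, coeffs, 0)

# toy image; a plain 2D FFT stands in for the contourlet transform
rng = np.random.default_rng(1)
img = rng.random((64, 64))
C = np.fft.fft2(img)
C_approx = nonlinear_approximation(C)
recon = np.real(np.fft.ifft2(C_approx))
kept = int(np.count_nonzero(C_approx))
```

In a fusion setting, the retained coefficient bands from each modality would then be combined according to the regional features described in the abstract.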
Examining the accuracy of the infinite order sudden approximation using sensitivity analysis
International Nuclear Information System (INIS)
Eno, L.; Rabitz, H.
1981-01-01
A method is developed for assessing the accuracy of scattering observables calculated within the framework of the infinite order sudden (IOS) approximation. In particular, we focus on the energy sudden assumption of the IOS method and our approach involves the determination of the sensitivity of the IOS scattering matrix S^IOS with respect to a parameter which reintroduces the internal energy operator h_0 into the IOS Hamiltonian. This procedure is an example of sensitivity analysis of missing model components (h_0 in this case) in the reference Hamiltonian. In contrast to simple first-order perturbation theory a finite result is obtained for the effect of h_0 on S^IOS. As an illustration, our method of analysis is applied to integral state-to-state cross sections for the scattering of an atom and rigid rotor. Results are generated within the He+H_2 system and a comparison is made between IOS and coupled states cross sections and the corresponding IOS sensitivities. It is found that the sensitivity coefficients are very useful indicators of the accuracy of the IOS results. Finally, further developments and applications are discussed.
Examining the accuracy of the infinite order sudden approximation using sensitivity analysis
Eno, Larry; Rabitz, Herschel
1981-08-01
A method is developed for assessing the accuracy of scattering observables calculated within the framework of the infinite order sudden (IOS) approximation. In particular, we focus on the energy sudden assumption of the IOS method and our approach involves the determination of the sensitivity of the IOS scattering matrix S^IOS with respect to a parameter which reintroduces the internal energy operator h_0 into the IOS Hamiltonian. This procedure is an example of sensitivity analysis of missing model components (h_0 in this case) in the reference Hamiltonian. In contrast to simple first-order perturbation theory a finite result is obtained for the effect of h_0 on S^IOS. As an illustration, our method of analysis is applied to integral state-to-state cross sections for the scattering of an atom and rigid rotor. Results are generated within the He+H_2 system and a comparison is made between IOS and coupled states cross sections and the corresponding IOS sensitivities. It is found that the sensitivity coefficients are very useful indicators of the accuracy of the IOS results. Finally, further developments and applications are discussed.
Morphing Continuum Theory: A First Order Approximation to the Balance Laws
Wonnell, Louis; Cheikh, Mohamad Ibrahim; Chen, James
2017-11-01
Morphing Continuum Theory (MCT) is constructed under the framework of Rational Continuum Mechanics (RCM) for fluid flows with inner structure. This multiscale theory has been successfully employed to model turbulent flows. The framework of RCM ensures the mathematical rigor of MCT, but introduces new material constants related to the inner structure. The physical meanings of these material constants have yet to be determined. Here, a linear deviation from the zeroth-order Boltzmann-Curtiss distribution function is derived. When applied to the Boltzmann-Curtiss equation, a first-order approximation of the MCT governing equations is obtained. The integral equations are then related to the appropriate material constants found in the heat flux, Cauchy stress, and moment stress terms of the governing equations. These new material properties associated with the inner structure of the fluid are compared with the corresponding integrals, and a clearer physical interpretation of these coefficients emerges. The physical meanings of these material properties are determined by analyzing previous results obtained from numerical simulations of MCT for compressible and incompressible flows. The implications for the physics underlying the MCT governing equations are also discussed. This material is based upon work supported by the Air Force Office of Scientific Research under Award Number FA9550-17-1-0154.
Anderson, Daniel M.; McLaughlin, Richard M.; Miller, Cass T.
2018-02-01
We examine a mathematical model of one-dimensional draining of a fluid through a periodically layered porous medium. A porous medium, initially saturated with a fluid of high density, is assumed to drain out the bottom, with a second, lighter fluid replacing the draining fluid. We assume that the draining fluid is sufficiently dense that the dynamics of the lighter fluid can be neglected relative to those of the heavier draining fluid, and that the height of the draining fluid, represented as a free boundary in the model, evolves in time. In this context, we neglect interfacial tension effects at the boundary between the two fluids. We show that this problem admits an exact solution. Our primary objective is to develop a homogenization theory in which we find not only leading-order, or effective, trends but also capture higher-order corrections to these effective draining rates. The approximate solution obtained by this homogenization theory is compared to the exact solution for two cases: (1) the permeability of the porous medium varies smoothly but rapidly, and (2) the permeability varies as a piecewise constant function representing discrete layers of alternating high/low permeability. In both cases we are able to show that the corrections in the homogenization theory accurately predict the position of the free boundary moving through the porous medium.
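At leading order, the effective permeability of a layered column with flow perpendicular to the layers is the thickness-weighted harmonic mean; the sketch below illustrates this standard homogenization result (the layer values and thicknesses are arbitrary illustrative numbers, not the paper's data):

```python
import numpy as np

def effective_permeability(k_layers, thicknesses):
    """Leading-order (effective) permeability for 1D Darcy flow across
    layers in series: the thickness-weighted harmonic mean."""
    k = np.asarray(k_layers, float)
    t = np.asarray(thicknesses, float)
    return t.sum() / np.sum(t / k)

# alternating high/low permeability layers of equal thickness
k_eff = effective_permeability([1.0, 0.1] * 5, [0.1] * 10)
```

The harmonic mean is dominated by the low-permeability layers, which is why the effective draining rate sits much closer to 0.1 than to 1.0 here; the higher-order corrections discussed in the abstract refine this leading-order picture.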
Practical error estimates for Reynolds' lubrication approximation and its higher order corrections
Energy Technology Data Exchange (ETDEWEB)
Wilkening, Jon
2008-12-10
The Reynolds lubrication approximation is used extensively to study flows between moving machine parts, in narrow channels, and in thin films. The solution of the Reynolds equation may be thought of as the zeroth order term in an expansion of the solution of the Stokes equations in powers of the aspect ratio ε of the domain. In this paper, we show how to compute the terms in this expansion to arbitrary order on a two-dimensional, x-periodic domain and derive rigorous, a priori error bounds for the difference between the exact solution and the truncated expansion solution. Unlike previous studies of this sort, the constants in our error bounds are either independent of the function h(x) describing the geometry, or depend on h and its derivatives in an explicit, intuitive way. Specifically, if the expansion is truncated at order 2k, the error is O(ε^{2k+2}) and h enters into the error bound only through its first and third inverse moments ∫_0^1 h(x)^{-m} dx, m = 1, 3, and via the max norms ‖(1/ℓ!) h^{ℓ-1} ∂_x^ℓ h‖_∞, 1 ≤ ℓ ≤ 2k+2. We validate our estimates by comparing with finite element solutions and present numerical evidence suggesting that even when h is real analytic and periodic, the expansion solution forms an asymptotic series rather than a convergent series.
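The inverse moments entering these bounds are straightforward to evaluate numerically; the sketch below uses a midpoint rule and an assumed smooth periodic gap profile h(x), chosen only for illustration:

```python
import numpy as np

def inverse_moment(h, m, n=10_000):
    """Numerically evaluate the inverse moment integral of h(x)^(-m)
    over [0, 1] (midpoint rule), which enters the a priori error
    bound for the truncated lubrication expansion."""
    x = (np.arange(n) + 0.5) / n
    return float(np.mean(h(x) ** (-m)))

# illustrative smooth, periodic gap profile
h = lambda x: 1.0 + 0.5 * np.sin(2.0 * np.pi * x)
M1 = inverse_moment(h, 1)   # first inverse moment
M3 = inverse_moment(h, 3)   # third inverse moment
```

For this profile the first inverse moment has the closed form 1/sqrt(1 − 0.25) = 2/sqrt(3), which gives a handy check on the quadrature.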
Response analysis for an approximate 3-D image reconstruction in cone-beam SPECT
International Nuclear Information System (INIS)
Murayama, Hideo; Nohara, Norimasa
1991-01-01
Cone-beam single photon emission computed tomography (SPECT) offers the potential for a large increase in sensitivity as compared with parallel-hole or fan-beam collimation. Three-dimensional image reconstruction was accomplished approximately by backprojecting filtered projections using a two-dimensional fan-beam algorithm. The cone-beam projection data were formed from mathematical phantoms as analytically derived line integrals of the density. In order to reduce the processing time, the filtered projections were backprojected into each plane parallel to the circle on which the focal point moved. Discrepancy of source position and degradation of resolution were investigated by computer simulation in three-dimensional image space. The results suggest that the nearer a point lies to the central plane or the axis of rotation, the less its image is degraded. By introducing a parameter for the angular difference between the focal point and a fixed point in the image space during rotation, the degradation of the reconstructed image can be estimated for any cone-beam SPECT system. (author)
Li, X.; Li, S. W.
2012-07-01
In this paper, an efficient global optimization algorithm from the field of artificial intelligence, named Particle Swarm Optimization (PSO), is introduced into close range photogrammetric data processing. PSO can be applied to obtain the approximate values of the exterior orientation elements under the condition that multi-intersection photography and a small portable plane control frame are used. PSO, put forward by the American social psychologist J. Kennedy and the electrical engineer R. C. Eberhart, is a stochastic global optimization method based on swarm intelligence, inspired by the social behavior of bird flocking or fish schooling. The strategy for obtaining the approximate values of the exterior orientation elements using PSO is as follows: from the observed image coordinates and the space coordinates of a few control points, the equations for calculating the image coordinate residual errors can be written. The image coordinate residual error is defined as the difference between the observed image coordinate and the image coordinate computed through the collinearity condition equations, and the sum of the absolute values of these residual errors is taken as the objective function to be minimized. First, a coarse search region for the exterior orientation elements is given, and the other parameters are then adjusted so that the particles fly within this region. After a certain number of iterations, satisfactory approximate values of the exterior orientation elements are obtained. By doing so, procedures such as positioning and measuring space control points in close range photogrammetry can be avoided. Obviously, this method can greatly improve surveying efficiency and at the same time decrease surveying cost. During this process, only one small portable control frame with a couple of control points is employed, and there are no strict requirements on the spatial distribution of the control points. In order to verify the effectiveness of this algorithm, two experiments are
Directory of Open Access Journals (Sweden)
X. Li
2012-07-01
Full Text Available In this paper, an efficient global optimization algorithm from the field of artificial intelligence, named Particle Swarm Optimization (PSO), is introduced into close range photogrammetric data processing. PSO can be applied to obtain the approximate values of the exterior orientation elements under the condition that multi-intersection photography and a small portable plane control frame are used. PSO, put forward by the American social psychologist J. Kennedy and the electrical engineer R. C. Eberhart, is a stochastic global optimization method based on swarm intelligence, inspired by the social behavior of bird flocking or fish schooling. The strategy for obtaining the approximate values of the exterior orientation elements using PSO is as follows: from the observed image coordinates and the space coordinates of a few control points, the equations for calculating the image coordinate residual errors can be written. The image coordinate residual error is defined as the difference between the observed image coordinate and the image coordinate computed through the collinearity condition equations, and the sum of the absolute values of these residual errors is taken as the objective function to be minimized. First, a coarse search region for the exterior orientation elements is given, and the other parameters are then adjusted so that the particles fly within this region. After a certain number of iterations, satisfactory approximate values of the exterior orientation elements are obtained. By doing so, procedures such as positioning and measuring space control points in close range photogrammetry can be avoided. Obviously, this method can greatly improve surveying efficiency and at the same time decrease surveying cost. During this process, only one small portable control frame with a couple of control points is employed, and there are no strict requirements on the spatial distribution of the control points. In order to verify the effectiveness of this algorithm
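A generic PSO loop of the kind described can be sketched as follows; the inertia and attraction coefficients and the toy absolute-residual objective are illustrative assumptions, not the authors' photogrammetric objective function:

```python
import numpy as np

def pso(objective, bounds, n_particles=30, iters=200, seed=0):
    """Minimal particle swarm optimizer: each particle tracks its own
    best position, and all particles are attracted to the swarm's
    global best position."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    x = rng.uniform(lo, hi, (n_particles, lo.size))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.array([objective(p) for p in x])
    g = pbest[np.argmin(pbest_f)].copy()
    w, c1, c2 = 0.7, 1.5, 1.5   # inertia, cognitive, social weights
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)   # keep particles in the search region
        f = np.array([objective(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, float(pbest_f.min())

# toy "residual" objective standing in for image-coordinate residuals
target = np.array([1.0, -2.0, 0.5])
obj = lambda p: np.sum(np.abs(p - target))
best, fbest = pso(obj, (np.full(3, -5.0), np.full(3, 5.0)))
```

In the photogrammetric setting, `objective` would be the sum of absolute image coordinate residuals computed from the collinearity equations, and the bounds would delimit the coarse search region for the exterior orientation elements.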
Quick, Christopher M; Venugopal, Arun M; Dongaonkar, Ranjeet M; Laine, Glen A; Stewart, Randolph H
2008-05-01
To return lymph to the great veins of the neck, it must be actively pumped against a pressure gradient. Mean lymph flow in a portion of a lymphatic network has been characterized by an empirical relationship (P(in) - P(out) = -P(p) + R(L)Q(L)), where P(in) - P(out) is the axial pressure gradient and Q(L) is mean lymph flow. R(L) and P(p) are empirical parameters characterizing the effective lymphatic resistance and pump pressure, respectively. The relation of these global empirical parameters to the properties of lymphangions, the segments of a lymphatic vessel bounded by valves, has been problematic. Lymphangions have a structure like blood vessels but cyclically contract like cardiac ventricles; they are characterized by a contraction frequency (f) and the slopes of the end-diastolic pressure-volume relationship [minimum value of resulting elastance (E(min))] and end-systolic pressure-volume relationship [maximum value of resulting elastance (E(max))]. Poiseuille's law provides a first-order approximation relating the pressure-flow relationship to the fundamental properties of a blood vessel. No analogous formula exists for a pumping lymphangion. We therefore derived an algebraic formula predicting lymphangion flow from fundamental physical principles and known lymphangion properties. Quantitative analysis revealed that lymph inertia and resistance to lymph flow are negligible and that lymphangions act like a series of interconnected ventricles. For a single lymphangion, P(p) = P(in) (E(max) - E(min))/E(min) and R(L) = E(max)/f. The formula was tested against a validated, realistic mathematical model of a lymphangion and found to be accurate. Predicted flows were within the range of flows measured in vitro. The present work therefore provides a general solution that makes it possible to relate fundamental lymphangion properties to lymphatic system function.
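The derived single-lymphangion relations quoted above can be applied directly; the parameter values below are arbitrary illustrative numbers, not measured lymphatic data:

```python
def lymphangion_flow(P_in, P_out, E_min, E_max, f):
    """Mean lymph flow from the empirical relation
    P_in - P_out = -P_p + R_L * Q_L, using the derived parameters
    P_p = P_in * (E_max - E_min) / E_min and R_L = E_max / f."""
    P_p = P_in * (E_max - E_min) / E_min   # effective pump pressure
    R_L = E_max / f                        # effective lymphatic resistance
    return (P_in - P_out + P_p) / R_L

# illustrative values: pumping against an adverse pressure gradient
Q = lymphangion_flow(P_in=2.0, P_out=6.0, E_min=1.0, E_max=5.0, f=0.1)
```

Note that a positive flow is obtained even though P_out exceeds P_in, reflecting the abstract's point that lymph is actively pumped against a pressure gradient.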
Fiori, A.; Zarlenga, A.; Jankovic, I.; Dagan, G.
2017-12-01
Natural gradient steady flow of mean velocity U takes place in heterogeneous aquifers of random logconductivity Y = ln K, characterized by the normal univariate PDF f(Y) and autocorrelation ρ_Y, of variance σ_Y² and horizontal integral scale I. Solute transport is quantified by the breakthrough curve (BTC) M at planes at distance x from the injection plane. The study builds on the extensive 3D numerical simulations of flow and transport of Jankovic et al. (2017) for different conductivity structures. The present study further explores the predictive capabilities of the Advection Dispersion Equation (ADE), with macrodispersivity α_L given by the First Order Approximation (FOA), by checking its applicability in a quantitative manner. After a discussion of the suitable boundary conditions for the ADE, we find that the ADE-FOA solution is a sufficiently accurate predictor for applications, the many other sources of uncertainty prevailing in practice notwithstanding. We checked by least squares and by comparison of travel times of quantiles of M that the analytical inverse Gaussian M with α_L = σ_Y² I is indeed able to fit well the bulk of the simulated BTCs. It tends to underestimate the late arrival time of the thin and persistent tail. The tail is better reproduced by the semi-analytical MIMSCA model, which also allows for a physical explanation of the success of the inverse Gaussian solution. Examination of the pertinent longitudinal mass distribution shows that it is different from the Gaussian one commonly used in the analysis of field experiments, and it captures the main features of the plume measurements of the MADE experiment. The results strengthen the confidence in the applicability of the ADE and the FOA for predicting longitudinal spreading in solute transport through heterogeneous aquifers of stationary random structure.
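The inverse Gaussian BTC implied by the ADE-FOA combination can be sketched as follows; the velocity, travel distance, and heterogeneity parameters below are illustrative assumptions, not values from the cited simulations:

```python
import numpy as np

def inverse_gaussian_btc(t, x, U, sigma_Y2, I):
    """Inverse-Gaussian travel-time density for the ADE, with the
    first-order macrodispersivity alpha_L = sigma_Y^2 * I."""
    alpha_L = sigma_Y2 * I
    D = alpha_L * U              # longitudinal macrodispersion coefficient
    mu = x / U                   # mean arrival time
    lam = x**2 / (2.0 * D)       # inverse-Gaussian shape parameter
    return np.sqrt(lam / (2.0 * np.pi * t**3)) * np.exp(
        -lam * (t - mu) ** 2 / (2.0 * mu**2 * t))

t = np.linspace(0.01, 60.0, 5000)
g = inverse_gaussian_btc(t, x=10.0, U=1.0, sigma_Y2=0.5, I=1.0)
dt = t[1] - t[0]
mass = float(g.sum() * dt)          # total mass, should be ~1
mean_t = float((t * g).sum() * dt)  # mean arrival time, should be ~x/U
```

The distribution integrates to unity with mean arrival time x/U, and its positive skew produces the persistent late-time tail that the abstract compares against the simulated BTCs.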
Chen, Zhenhua; Hoffmann, Mark R
2012-07-07
A unitary wave operator, exp(G), G† = -G, is considered to transform a multiconfigurational reference wave function Φ to the potentially exact, within basis set limit, wave function Ψ = exp(G)Φ. To obtain a useful approximation, the Hausdorff expansion of the similarity transformed effective Hamiltonian, exp(-G)H exp(G), is truncated at second order and the excitation manifold is limited; an additional separate perturbation approximation can also be made. In the perturbation approximation, which we refer to as multireference unitary second-order perturbation theory (MRUPT2), the Hamiltonian operator in the highest order commutator is approximated by a Møller-Plesset-type one-body zero-order Hamiltonian. If a complete active space self-consistent field wave function is used as reference, then the energy is invariant under orbital rotations within the inactive, active, and virtual orbital subspaces for both the second-order unitary coupled cluster method and its perturbative approximation. Furthermore, the redundancies of the excitation operators are addressed in a novel way, which is potentially more efficient compared to the usual full diagonalization of the metric of the excited configurations. Despite the loss of rigorous size-extensivity possibly due to the use of a variational approach rather than a projective one in the solution of the amplitudes, test calculations show that the size-extensivity errors are very small. Compared to other internally contracted multireference perturbation theories, MRUPT2 only needs reduced density matrices up to three-body even with a non-complete active space reference wave function when two-body excitations within the active orbital subspace are involved in the wave operator, exp(G). Both the coupled cluster and perturbation theory variants are amenable to large, incomplete model spaces. Applications to some widely studied model systems that can be problematic because of geometry dependent quasidegeneracy, H4, P4
A simple and accurate approximation for the order fill rates in lost-sales Assemble-to-Order systems
Hoen, K.M.R.; Güllü, R.; van Houtum, Geert-Jan; Vliegen, Ingrid
2010-01-01
In this paper we consider an Assemble-to-Order system with multiple end-products. Demands for an end-product follow a Poisson process and each end-product requires a fixed set of components. We are interested in the order fill rates, i.e., the percentage of demands for which all requested components
Four-quadrant propeller modeling: A low-order harmonic approximation
Digital Repository Service at National Institute of Oceanography (India)
Haeusler, A.J; Saccon, A.; Hauser, J; Pascoal, A.M.; Aguiar, A.P.
We explore the connection between the propeller thrust, torque, and efficiency curves and the lift and drag curves of the propeller blades. The model originates from a well-known four-quadrant model, based on a sinusoidal approximation...
International Nuclear Information System (INIS)
Fargher, H.E.; Roberts, M.J.
1983-01-01
Simplified versions of the second-order Born and Faddeev-Watson approximations are applied to the excitation of the n=2 levels of atomic hydrogen by the impact of 54.4 eV electrons. The theories are compared with the measurements of differential cross sections and angular correlation parameters. The results indicate that the Born approximation is better at low angles of scattering but that the Faddeev-Watson approximation is better at high angles. The importance of the phases of the two-body T matrices in the Faddeev-Watson approximation is illustrated. (author)
Higher order saddlepoint approximations in the Vasicek portfolio credit loss model
Huang, X.; Oosterlee, C.W.; van der Weide, J.A.M.
2006-01-01
This paper utilizes the saddlepoint approximation as an efficient tool to estimate the portfolio credit loss distribution in the Vasicek model. Value at Risk (VaR), the risk measure chosen in the Basel II Accord for the evaluation of capital requirement, can then be found by inverting the loss
Higher-order convex approximations of Young measures in optimal control
Czech Academy of Sciences Publication Activity Database
Matache, A. M.; Roubíček, Tomáš; Schwab, Ch.
2003-01-01
Roč. 19, č. 1 (2003), s. 73-97 ISSN 1019-7168 R&D Projects: GA ČR GA201/00/0768; GA AV ČR IAA1075005 Institutional research plan: CEZ:AV0Z1075907 Keywords : Young measures * approximation * error estimation Subject RIV: BA - General Mathematics Impact factor: 0.926, year: 2003
International Nuclear Information System (INIS)
Belendez, A; Pascual, C; Fernandez, E; Neipp, C; Belendez, T
2008-01-01
A modified He's homotopy perturbation method is used to calculate higher-order analytical approximate solutions to the relativistic and Duffing-harmonic oscillators. He's homotopy perturbation method is modified by truncating the infinite series corresponding to the first-order approximate solution before introducing this solution in the second-order linear differential equation, and so on. We find that this modified homotopy perturbation method works very well for the whole range of initial amplitudes, and the excellent agreement of the approximate frequencies and periodic solutions with the exact ones has been demonstrated and discussed. The approximate formulae obtained show excellent agreement with the exact solutions, and are valid for small as well as large amplitudes of oscillation, including the limiting cases of amplitude approaching zero and infinity. For the relativistic oscillator, only one iteration leads to high accuracy of the solutions, with a maximal relative error for the approximate frequency of less than 1.6% for small and large values of oscillation amplitude, while this relative error is 0.65% for two iterations with two harmonics and as low as 0.18% when three harmonics are considered in the second approximation. For the Duffing-harmonic oscillator the relative error is as low as 0.078% when the second approximation is considered. Comparison of the results obtained using this method with those obtained by harmonic balance methods reveals that the former is very effective and convenient.
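As a plausibility check on approximate-frequency methods for the Duffing-harmonic oscillator ẍ + x³/(1 + x²) = 0, the textbook first-order harmonic-balance estimate ω² = 3A²/(4 + 3A²) (a standard result, not the paper's modified homotopy formula) can be compared against direct numerical integration:

```python
import math

def rk4_step(state, dt):
    """One RK4 step for x' = v, v' = -x**3 / (1 + x**2)."""
    def f(s):
        x, v = s
        return (v, -x**3 / (1.0 + x**2))
    x, v = state
    k1 = f(state)
    k2 = f((x + 0.5 * dt * k1[0], v + 0.5 * dt * k1[1]))
    k3 = f((x + 0.5 * dt * k2[0], v + 0.5 * dt * k2[1]))
    k4 = f((x + dt * k3[0], v + dt * k3[1]))
    return (x + dt * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6.0,
            v + dt * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6.0)

A = 1.0                                        # oscillation amplitude
omega_hb = math.sqrt(3 * A * A / (4 + 3 * A * A))  # first-order harmonic balance
T_hb = 2 * math.pi / omega_hb

# Measure the period numerically from successive zero crossings of x.
dt, t, state = 1e-3, 0.0, (A, 0.0)
crossings = []
while len(crossings) < 2 and t < 100.0:
    new = rk4_step(state, dt)
    if state[0] * new[0] < 0:  # sign change: linearly interpolate the crossing
        crossings.append(t + dt * state[0] / (state[0] - new[0]))
    state, t = new, t + dt
T_num = 2 * (crossings[1] - crossings[0])  # crossings are T/2 apart
print(T_hb, T_num)
```

For A = 1 the lowest-order harmonic-balance period is within a few percent of the numerically measured one, which is the kind of gap the higher-order methods in the abstract are designed to close.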
Algebraically approximate and noisy realization of discrete-time systems and digital images
Hasegawa, Yasumichi
2009-01-01
This monograph deals with approximation and noise cancellation of dynamical systems which include linear and nonlinear input/output relationships. It also deals with approximation and noise cancellation of two-dimensional arrays. It will be of special interest to researchers, engineers and graduate students who have specialized in filtering theory, system theory and digital images. This monograph is composed of two parts. Part I and Part II deal with approximation and noise cancellation of dynamical systems and digital images, respectively. From noiseless or noisy data, reduction will be
Higher-order scene statistics of breast images
Abbey, Craig K.; Sohl-Dickstein, Jascha N.; Olshausen, Bruno A.; Eckstein, Miguel P.; Boone, John M.
2009-02-01
Researchers studying human and computer vision have found description and construction of these systems greatly aided by analysis of the statistical properties of naturally occurring scenes. More specifically, it has been found that receptive fields with directional selectivity and bandwidth properties similar to mammalian visual systems are more closely matched to the statistics of natural scenes. It is argued that this allows for sparse representation of the independent components of natural images [Olshausen and Field, Nature, 1996]. These theories have important implications for medical image perception. For example, will a system that is designed to represent the independent components of natural scenes, where objects occlude one another and illumination is typically reflected, be appropriate for X-ray imaging, where features superimpose on one another and illumination is transmissive? In this research we begin to examine these issues by evaluating higher-order statistical properties of breast images from X-ray projection mammography (PM) and dedicated breast computed tomography (bCT). We evaluate kurtosis in responses of octave bandwidth Gabor filters applied to PM and to coronal slices of bCT scans. We find that kurtosis in PM rises and quickly saturates for filter center frequencies with an average value above 0.95. By contrast, kurtosis in bCT peaks near 0.20 cyc/mm with kurtosis of approximately 2. Our findings suggest that the human visual system may be tuned to represent breast tissue more effectively in bCT over a specific range of spatial frequencies.
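The kurtosis-of-filter-responses measurement can be illustrated on synthetic images (this sketch uses a generic Gabor filter and toy images, not the mammography or bCT data): a Gaussian noise field yields near-zero excess kurtosis, while a sparse, feature-dominated image yields the heavy-tailed, high-kurtosis responses associated with natural scenes.

```python
import numpy as np
from scipy.signal import fftconvolve
from scipy.stats import kurtosis

rng = np.random.default_rng(0)

# Gabor filter with illustrative parameters (not the paper's filter bank).
sigma, wavelength, half = 2.0, 4.0, 10
yy, xx = np.mgrid[-half:half + 1, -half:half + 1]
gabor = np.exp(-(xx**2 + yy**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xx / wavelength)
gabor -= gabor.mean()   # remove the DC response

def response_kurtosis(img):
    """Excess (Fisher) kurtosis of the Gabor filter responses."""
    resp = fftconvolve(img, gabor, mode='valid')
    return kurtosis(resp.ravel())

noise_img = rng.standard_normal((512, 512))                   # Gaussian field
sparse_img = (rng.random((512, 512)) < 0.002).astype(float)   # rare spikes

k_noise = response_kurtosis(noise_img)    # ~0: responses stay Gaussian
k_sparse = response_kurtosis(sparse_img)  # >> 0: heavy-tailed responses
print(k_noise, k_sparse)
```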
Accuracy of the Bethe approximation for hyperparameter estimation in probabilistic image processing
International Nuclear Information System (INIS)
Tanaka, Kazuyuki; Shouno, Hayaru; Okada, Masato; Titterington, D M
2004-01-01
We investigate the accuracy of statistical-mechanical approximations for the estimation of hyperparameters from observable data in probabilistic image processing, which is based on Bayesian statistics and maximum likelihood estimation. Hyperparameters in statistical science correspond to interactions or external fields in the statistical-mechanics context. In this paper, hyperparameters in the probabilistic model are determined so as to maximize a marginal likelihood. A practical algorithm is described for grey-level image restoration based on a Gaussian graphical model and the Bethe approximation. The algorithm corresponds to loopy belief propagation in artificial intelligence. We examine the accuracy of hyperparameter estimation when we use the Bethe approximation. It is well known that a practical algorithm for probabilistic image processing can be prescribed analytically when a Gaussian graphical model is adopted as a prior probabilistic model in Bayes' formula. We are therefore able to compare, in a numerical study, results obtained through mean-field-type approximations with those based on exact calculation
Boistard, H.; Lopuhää, H.P.; Ruiz-Gazen, A.
2012-01-01
This paper is devoted to rejective sampling. We provide an expansion of joint inclusion probabilities of any order in terms of the inclusion probabilities of order one, extending previous results by Hájek (1964) and Hájek (1981) and making the remainder term more precise. Following Hájek (1981), the
International Nuclear Information System (INIS)
Estiot, J.C.; Salvatores, M.; Palmiotti, G.
1981-01-01
We present the characteristics of SAMPO, a one-dimensional transport theory code system, which is used for the following types of calculation: sensitivity analysis for functionals that are linear or bilinear in the direct or adjoint flux and their ratios, and classical perturbation analysis. First order calculations, as well as higher order ones, can be performed.
Recursive estimation of high-order Markov chains: Approximation by finite mixtures
Czech Academy of Sciences Publication Activity Database
Kárný, Miroslav
2016-01-01
Roč. 326, č. 1 (2016), s. 188-201 ISSN 0020-0255 R&D Projects : GA ČR GA13-13502S Institutional support: RVO:67985556 Keywords : Markov chain * Approximate parameter estimation * Bayesian recursive estimation * Adaptive systems * Kullback–Leibler divergence * Forgetting Subject RIV: BC - Control Systems Theory Impact factor: 4.832, year: 2016 http://library.utia.cas.cz/separaty/2015/AS/karny-0447119.pdf
Many particle approximation of the Aw-Rascle-Zhang second order model for vehicular traffic.
Francesco, Marco Di; Fagioli, Simone; Rosini, Massimiliano D
2017-02-01
We consider the follow-the-leader approximation of the Aw-Rascle-Zhang (ARZ) model for traffic flow in a multi population formulation. We prove rigorous convergence to weak solutions of the ARZ system in the many particle limit in presence of vacuum. The result is based on uniform BV estimates on the discrete particle velocity. We complement our result with numerical simulations of the particle method compared with some exact solutions to the Riemann problem of the ARZ system.
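The follow-the-leader construction can be sketched as an ODE system. This is a minimal single-population sketch with an illustrative pressure law p(ρ) = ρ (the paper's multi-population setting and rigorous BV analysis are not reproduced): each vehicle carries a Lagrangian invariant w_i and moves with speed v_i = w_i − p(ℓ/(x_{i+1} − x_i)), while the leader travels at its free-flow speed.

```python
import numpy as np

# Follow-the-leader sketch for an ARZ-type model.
# rho_i = ell / (x_{i+1} - x_i); v_i = w_i - p(rho_i); leader: v = w (rho = 0).
def p(rho):
    return rho  # illustrative pressure law

N, ell = 50, 1.0
w = np.full(N, 2.0)                      # Lagrangian invariants w_i = v + p(rho)
rng = np.random.default_rng(1)
gaps = rng.uniform(1.5, 3.0, N - 1)      # initial headways (> ell: no overlap)
x = np.concatenate([[0.0], np.cumsum(gaps)])

dt = 0.01
for _ in range(1000):
    rho = ell / np.diff(x)               # discrete density seen by each follower
    v = np.empty(N)
    v[:-1] = w[:-1] - p(rho)             # followers slow down in dense traffic
    v[-1] = w[-1]                        # leader moves at free-flow speed
    x = x + dt * v

# Discrete maximum principle: vehicles never overlap (gaps stay above ell).
min_gap = np.diff(x).min()
print(min_gap)
```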
Entropy Viscosity Method for High-Order Approximations of Conservation Laws
Guermond, J. L.; Pasquetti, R.
2010-09-17
A stabilization technique for conservation laws is presented. It introduces into the governing equations a nonlinear dissipation function of the residual of the associated entropy equation, bounded from above by a first order viscous term. Different two-dimensional test cases are simulated - a 2D Burgers problem, the "KPP rotating wave" and the Euler system - using high order methods: spectral elements or Fourier expansions. Details on the tuning of the parameters controlling the entropy viscosity are given. © 2011 Springer.
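The mechanism can be sketched in one dimension for the inviscid Burgers equation (a minimal finite-difference illustration with ad hoc tuning constants; the paper's spectral/Fourier setting is not reproduced): the viscosity follows the local entropy residual and is capped by a first order, upwind-scale viscous term.

```python
import numpy as np

# Entropy viscosity sketch for 1D periodic Burgers: u_t + (u^2/2)_x = 0.
# Entropy E = u^2/2, entropy flux F = u^3/3. Tuning constants are illustrative.
N, c_e, c_max = 100, 1.0, 0.5
h = 1.0 / N
x = np.arange(N) * h
u = np.sin(2 * np.pi * x)
dt, steps = 0.002, 150          # runs past shock formation (t = 1/(2*pi))

E_old = 0.5 * u**2
for _ in range(steps):
    up, um = np.roll(u, -1), np.roll(u, 1)
    E = 0.5 * u**2
    F = u**3 / 3.0
    # Residual of the entropy equation, E_t + F_x (central in space):
    res = (E - E_old) / dt + (np.roll(F, -1) - np.roll(F, 1)) / (2 * h)
    norm = np.max(np.abs(E - E.mean())) + 1e-14
    nu_e = c_e * h**2 * np.abs(res) / norm   # entropy-based viscosity
    nu_max = c_max * h * np.abs(u)           # first order (upwind-scale) cap
    nu = np.minimum(nu_e, nu_max)
    E_old = E
    # Central flux plus entropy-viscosity dissipation:
    u = (u - dt * (up**2 - um**2) / (4 * h)
           + dt * nu * (up - 2 * u + um) / h**2)

print(np.max(np.abs(u)))   # stays bounded through the shock
```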
Singh, Brajesh K; Srivastava, Vineet K
2015-04-01
The main goal of this paper is to present a new approximate series solution of the multi-dimensional (heat-like) diffusion equation with time-fractional derivative in Caputo form using a semi-analytical approach: fractional-order reduced differential transform method (FRDTM). The efficiency of FRDTM is confirmed by considering four test problems of the multi-dimensional time fractional-order diffusion equation. FRDTM is a very efficient, effective and powerful mathematical tool which provides exact or very close approximate solutions for a wide range of real-world problems arising in engineering and natural sciences, modelled in terms of differential equations.
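The flavour of the method can be illustrated in the classical limit (α = 1, the ordinary heat equation u_t = u_xx): the reduced differential transform turns the PDE into the recursion U_{k+1} = (∂²U_k/∂x²)/(k + 1), and for u(x, 0) = sin x the second derivative maps sin to −sin, so the recursion reduces to scalar coefficients c_{k+1} = −c_k/(k + 1) and the partial sums converge to the exact solution e^{−t} sin x. A minimal sketch (the fractional case would replace 1/(k + 1) by Γ(kα + 1)/Γ((k + 1)α + 1)):

```python
import math

def rdtm_heat_sin(x, t, K=10):
    """Partial sum of the reduced differential transform series for
    u_t = u_xx with u(x, 0) = sin(x): U_k(x) = c_k*sin(x), c_{k+1} = -c_k/(k+1)."""
    c, total = 1.0, 0.0
    for k in range(K + 1):
        total += c * math.sin(x) * t**k
        c = -c / (k + 1)
    return total

x0, t0 = 1.0, 0.5
approx = rdtm_heat_sin(x0, t0, K=10)
exact = math.exp(-t0) * math.sin(x0)   # closed-form solution e^{-t} sin x
print(approx, exact)
```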
Ghost imaging with third-order correlated thermal light
International Nuclear Information System (INIS)
Ou, L-H; Kuang, L-M
2007-01-01
In this paper, we propose a ghost imaging scheme with third-order correlated thermal light. We show that it is possible to produce the spatial information of an object at two different places in a nonlocal fashion by means of a third-order correlated imaging process with a third-order correlated thermal source and third-order correlation measurement. Concretely, we propose a protocol to create two ghost images at two different places from one object. This protocol involves two optical configurations. We derive the Gaussian thin lens equations and plot the geometrical optics of the ghost imaging processes for the two configurations. It is indicated that third-order correlated ghost imaging with thermal light exhibits richer correlated imaging effects than second-order correlated ghost imaging with thermal light
HQET at order 1/m. Pt. 2. Spectroscopy in the quenched approximation
International Nuclear Information System (INIS)
Blossier, Benoit; Della Morte, Michele; Garron, Nicolas; Edinburgh Univ.; Hippel, Georg von; DESY, Zeuthen; Mendes, Tereza; Sao Paulo Univ., Sao Carlos; Simma, Hubert; Sommer, Rainer
2010-06-01
Using Heavy Quark Effective Theory with non-perturbatively determined parameters in a quenched lattice calculation, we evaluate the splittings between the ground state and the first two radially excited states of the B_s system at static order. We also determine the splitting between first excited and ground state, and between the B_s* and B_s ground states to order 1/m_b. The Generalized Eigenvalue Problem and the use of all-to-all propagators are important ingredients of our approach. (orig.)
Approximate solution of integro-differential equation of fractional (arbitrary) order
Directory of Open Access Journals (Sweden)
Asma A. Elbeleze
2016-01-01
In the present paper, we study integro-differential equations which are a combination of differential and Fredholm–Volterra equations of fractional order with constant coefficients, using the homotopy perturbation method and the variational iteration method. The fractional derivatives are described in the Caputo sense. Some illustrative examples are presented.
High-Order Approximation of Chromatographic Models using a Nodal Discontinuous Galerkin Approach
DEFF Research Database (Denmark)
Meyer, Kristian; Huusom, Jakob Kjøbsted; Abildskov, Jens
2018-01-01
by Javeed et al. (2011a,b, 2013) with an efficient quadrature-free implementation. The framework is used to simulate linear and non-linear multicomponent chromatographic systems. The results confirm arbitrary high-order accuracy and demonstrate the potential for accuracy and speed-up gains obtainable...
DEFF Research Database (Denmark)
Etches, Adam; Madsen, Christian Bruun; Madsen, Lars Bojer
A correction term is introduced in the stationary-point analysis on high-order harmonic generation (HHG) from aligned molecules. Arising from a multi-centre expansion of the electron wave function, this term brings our numerical calculations of the Lewenstein model into qualitative agreement...
Probabilistic image processing by means of the Bethe approximation for the Q-Ising model
International Nuclear Information System (INIS)
Tanaka, Kazuyuki; Inoue, Jun-ichi; Titterington, D M
2003-01-01
The framework of Bayesian image restoration for multi-valued images by means of the Q-Ising model with nearest-neighbour interactions is presented. Hyperparameters in the probabilistic model are determined so as to maximize the marginal likelihood. A practical algorithm is described for multi-valued image restoration based on the Bethe approximation. The algorithm corresponds to loopy belief propagation in artificial intelligence. We conclude that, in real world grey-level images, the Q-Ising model can give us good results
Directory of Open Access Journals (Sweden)
Christer Dalen
2017-10-01
A model reduction technique based on optimization theory is presented, where a possibly higher order system/model is approximated with an unstable DIPTD model by using only step response data. The DIPTD model is used to tune PD/PID controllers for the underlying possibly higher order system. Numerous examples, both linear and nonlinear models, are used to illustrate the theory. The Pareto Optimal controller is used as a reference controller.
Off-shell properties of the second-order Born approximation for laser-assisted potential scattering
International Nuclear Information System (INIS)
Trombetta, F.
1991-01-01
A formal method is presented to evaluate the second-order Born approximation of laser-assisted potential scattering. It is an implicit closure technique that includes intermediate virtual-state transitions and enables one to find the exact explicit expression of the transition amplitude. This is of interest from two standpoints: first, one can deal with ranges of parameters in which the first-order Born approximation is a poor one; second, one can assess the limits of validity of on-shell approximations, which are also widely used to analyze recent laser-assisted experiments. The off-shell character yields new terms in the exact amplitude and, in particular, is shown to play a crucial role in forward scattering from a long-range potential.
Approximate tensor-product preconditioners for very high order discontinuous Galerkin methods
Pazner, Will; Persson, Per-Olof
2018-02-01
In this paper, we develop a new tensor-product based preconditioner for discontinuous Galerkin methods with polynomial degrees higher than those typically employed. This preconditioner uses an automatic, purely algebraic method to approximate the exact block Jacobi preconditioner by Kronecker products of several small, one-dimensional matrices. Traditional matrix-based preconditioners require O(p^(2d)) storage and O(p^(3d)) computational work, where p is the degree of basis polynomials used, and d is the spatial dimension. Our SVD-based tensor-product preconditioner requires O(p^(d+1)) storage, O(p^(d+1)) work in two spatial dimensions, and O(p^(d+2)) work in three spatial dimensions. Combined with a matrix-free Newton-Krylov solver, these preconditioners allow for the solution of DG systems in linear time in p per degree of freedom in 2D, and reduce the computational complexity from O(p^9) to O(p^5) in 3D. Numerical results are shown in 2D and 3D for the advection, Euler, and Navier-Stokes equations, using polynomials of degree up to p = 30. For many test cases, the preconditioner results in similar iteration counts when compared with the exact block Jacobi preconditioner, and performance is significantly improved for high polynomial degrees p.
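The core algebraic step, approximating a block matrix by a Kronecker product of small matrices, is the classical nearest-Kronecker-product problem solved by a rank-1 SVD of a rearranged matrix (the Van Loan–Pitsianis construction). A minimal sketch of that step, not the authors' full preconditioner:

```python
import numpy as np

def nearest_kronecker(A, m1, n1, m2, n2):
    """Best B (m1 x n1), C (m2 x n2) minimizing ||A - kron(B, C)||_F,
    via a rank-1 SVD of the rearranged matrix (Van Loan-Pitsianis)."""
    R = np.empty((m1 * n1, m2 * n2))
    for i in range(m1):
        for j in range(n1):
            block = A[i * m2:(i + 1) * m2, j * n2:(j + 1) * n2]
            R[i * n1 + j] = block.ravel()    # row-major vec of each block
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    B = np.sqrt(s[0]) * U[:, 0].reshape(m1, n1)
    C = np.sqrt(s[0]) * Vt[0].reshape(m2, n2)
    return B, C

rng = np.random.default_rng(0)
B0, C0 = rng.standard_normal((3, 3)), rng.standard_normal((4, 4))
A = np.kron(B0, C0)                      # an exactly-Kronecker matrix
B, C = nearest_kronecker(A, 3, 3, 4, 4)
err = np.linalg.norm(np.kron(B, C) - A)  # exact recovery (signs pair up)
print(err)
```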
On the application of the Williams-Weizsaecker-method to higher order S-matrix-approximations
International Nuclear Information System (INIS)
Ziegelbecker, R.C.
1983-05-01
In this paper the method of quasireal processes is investigated using a special example - pair production in the stationary field of a nucleus by an incident electron. As a result, the semi-classical version of the Williams-Weizsaecker method is confirmed on the basis of all 3rd-order Feynman diagrams. The spectra of quasireal processes, derived from quantum field theory, can also be applied simultaneously in several vertex points on one diagram and are valid for higher photon energies than the semiclassical spectrum; the restriction on β
Bomb blast imaging: bringing order to chaos.
Dick, E A; Ballard, M; Alwan-Walker, H; Kashef, E; Batrick, N; Hettiaratchy, S; Moran, C G
2018-06-01
Blast injuries are complex, severe, and outside of our everyday clinical practice, but every radiologist needs to understand them. By their nature, bomb blasts are unpredictable and affect multiple victims, yet require an immediate, coordinated, and whole-hearted response from all members of the clinical team, including all radiology staff. This article will help you gain the requisite expertise in blast imaging including recognising primary, secondary, and tertiary blast injuries. It will also help you understand the fundamental role that imaging plays during mass casualty attacks and how to avoid radiology becoming a bottleneck to the forward flow of severely injured patients as they are triaged and treated. Copyright © 2018. Published by Elsevier Ltd.
Rayleigh scatter in kilovoltage x-ray imaging: is the independent atom approximation good enough?
Poludniowski, G; Evans, PM; Webb, S
2009-01-01
Monte Carlo simulation is the gold standard method for modelling scattering processes in medical x-ray imaging. General-purpose Monte Carlo codes, however, typically use the independent atom approximation (IAA). This is known to be inaccurate for Rayleigh scattering, for many materials, in the forward direction. This work addresses whether the IAA is sufficient for the typical modelling tasks in medical kilovoltage x-ray imaging. As a means of comparison, we incorporate a more realistic 'inte...
A first order approximation of the tumor absorbed dose prior to treatment with Sr-89
Energy Technology Data Exchange (ETDEWEB)
Manetou, A [NIMITS Hospital, Medical Physics Unit, Athens (Greece); Toubanakis, N; Lyra, M; Lymouris, G [Areteion University Hospital, Radiology Department, Athens (Greece)
1994-12-31
A new technique developed for the estimation of the absorbed dose prior to treatment with Sr-89 is presented. This technique requires that the patient undergo bone scanning with Tc-99m-MDP two days before the administration of Sr-89. A number of sequential quantitative images are obtained over the first 8 hours after the Tc-99m-MDP injection, and the data are used to derive the Sr-89 time retention curve. For the development of this technique a simplified model for the kinetics of both Sr-89 and Tc-99m-MDP was assumed. Data on the time retention of the two radiopharmaceuticals for a compartment including bone surface and bone space of trabecular and cortical bone for normal adults were combined. A linear relationship was derived between the times required for the same percentage uptake of the two radiopharmaceuticals after a single injection. The absorbed doses in the principal metastases and in normal bone of the same type and volume as the metastases are reported for two patients who were treated with Sr-89 for metastasized prostatic carcinoma. (authors). 23 refs., 3 figs., 2 tabs.
International Nuclear Information System (INIS)
Vrscay, E.R.
1986-01-01
A simple power-series method is developed to calculate to large order the Rayleigh-Schroedinger perturbation expansions for energy levels of a hydrogen atom with a Yukawa-type screened Coulomb potential. Perturbation series for the 1s, 2s, and 2p levels, shown not to be of the Stieltjes type, are calculated to 100th order. Nevertheless, the poles of the Pade approximants to these series generally avoid the region of the positive real axis 0 < lambda < lambda*, where lambda* represents the coupling constant threshold. As a result, the Pade sums afford accurate approximations to E(lambda) in this domain. The continued-fraction representations of these perturbation series have been accurately calculated to large (100th) order and demonstrate a curious ''quasioscillatory,'' but non-Stieltjes, behavior. Accurate values of E(lambda) as well as lambda* for the 1s, 2s, and 2p levels are reported.
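The Padé-summation idea is easy to demonstrate on a series with known coefficients: the [2/2] approximant built from the first five Taylor coefficients of e^x already beats the order-4 Taylor partial sum at x = 1. A generic small solver using the standard linear-system construction of the Padé coefficients (not the authors' code):

```python
import math
import numpy as np

def pade(c, m, n):
    """[m/n] Pade approximant from Taylor coefficients c[0..m+n].
    Returns numerator coeffs a[0..m] and denominator coeffs b[0..n], b[0] = 1."""
    C = np.zeros((n, n))
    rhs = np.zeros(n)
    for k in range(1, n + 1):          # match orders m+1 .. m+n
        rhs[k - 1] = -c[m + k]
        for j in range(1, n + 1):
            C[k - 1, j - 1] = c[m + k - j] if m + k - j >= 0 else 0.0
    b = np.concatenate([[1.0], np.linalg.solve(C, rhs)])
    a = np.array([sum(b[j] * c[i - j] for j in range(min(i, n) + 1))
                  for i in range(m + 1)])
    return a, b

def polyval(coef, x):
    return sum(ci * x**i for i, ci in enumerate(coef))

c = [1 / math.factorial(k) for k in range(5)]  # Taylor coefficients of exp
a, b = pade(c, 2, 2)
x = 1.0
pade_val = polyval(a, x) / polyval(b, x)       # [2/2] Pade sum at x = 1
taylor_val = polyval(c, x)                     # order-4 partial sum at x = 1
print(pade_val, taylor_val, math.e)
```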
International Nuclear Information System (INIS)
Yasa, F.; Anli, F.; Guengoer, S.
2007-01-01
We present analytical calculations of spherically symmetric radiative transfer and neutron transport using a hypothesis of P1 and T1 low order polynomial approximation for the diffusion coefficient D. The transport equation in spherical geometry is considered as the pseudo slab equation. The validity of the polynomial expansion in transport theory is investigated through a comparison with classic diffusion theory. It is found that for cases where the fluctuation of the scattering cross section dominates, the quantitative difference between the polynomial approximation and diffusion results is physically acceptable in general.
Parallel magnetic resonance imaging as approximation in a reproducing kernel Hilbert space
International Nuclear Information System (INIS)
Athalye, Vivek; Lustig, Michael; Martin Uecker
2015-01-01
In magnetic resonance imaging data samples are collected in the spatial frequency domain (k-space), typically by time-consuming line-by-line scanning on a Cartesian grid. Scans can be accelerated by simultaneous acquisition of data using multiple receivers (parallel imaging), and by using more efficient non-Cartesian sampling schemes. To understand and design k-space sampling patterns, a theoretical framework is needed to analyze how well arbitrary sampling patterns reconstruct unsampled k-space using receive coil information. As shown here, reconstruction from samples at arbitrary locations can be understood as approximation of vector-valued functions from the acquired samples and formulated using a reproducing kernel Hilbert space with a matrix-valued kernel defined by the spatial sensitivities of the receive coils. This establishes a formal connection between approximation theory and parallel imaging. Theoretical tools from approximation theory can then be used to understand reconstruction in k-space and to extend the analysis of the effects of samples selection beyond the traditional image-domain g-factor noise analysis to both noise amplification and approximation errors in k-space. This is demonstrated with numerical examples. (paper)
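The scalar analogue of this RKHS view is easy to sketch: given samples y_j = f(x_j) at arbitrary locations, the minimum-norm interpolant in the RKHS of a kernel k is f̂(x) = Σ_j α_j k(x, x_j) with α = K⁻¹y. A minimal sketch with a Gaussian kernel (the paper's kernel is matrix-valued, built from coil sensitivities; the jitter term here is an added numerical-stability assumption):

```python
import numpy as np

rng = np.random.default_rng(0)

def k(x, y, sigma=0.1):
    """Gaussian reproducing kernel between two point sets."""
    return np.exp(-(x[:, None] - y[None, :])**2 / (2 * sigma**2))

f = lambda x: np.sin(2 * np.pi * x)            # signal to reconstruct
xs = np.sort(rng.uniform(0.0, 1.0, 40))        # arbitrary sample locations
ys = f(xs)

K = k(xs, xs) + 1e-8 * np.eye(len(xs))         # small jitter for conditioning
alpha = np.linalg.solve(K, ys)                 # RKHS interpolation coefficients

xt = np.linspace(0.1, 0.9, 200)                # held-out evaluation points
fhat = k(xt, xs) @ alpha                       # minimum-norm interpolant
max_err = np.max(np.abs(fhat - f(xt)))
print(max_err)
```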
Directory of Open Access Journals (Sweden)
Jiameng Wu
2018-01-01
Full Text Available The infinite depth free surface Green function (GF and its high order derivatives for diffraction and radiation of water waves are considered. Especially second order derivatives are essential requirements in high-order panel method. In this paper, concerning the classical representation, composed of a semi-infinite integral involving a Bessel function and a Cauchy singularity, not only the GF and its first order derivatives but also second order derivatives are derived from four kinds of analytical series expansion and refined division of whole calculation domain. The approximations of special functions, particularly the hypergeometric function and the algorithmic applicability with different subdomains are implemented. As a result, the computation accuracy can reach 10-9 in whole domain compared with conventional methods based on direct numerical integration. Furthermore, numerical efficiency is almost equivalent to that with the classical method.
Approximate fuzzy C-means (AFCM) cluster analysis of medical magnetic resonance image (MRI) data
International Nuclear Information System (INIS)
DelaPaz, R.L.; Chang, P.J.; Bernstein, R.; Dave, J.V.
1987-01-01
The authors describe the application of an approximate fuzzy C-means (AFCM) clustering algorithm as a data dimension reduction approach to medical magnetic resonance images (MRI). Image data consisted of one T1-weighted, two T2-weighted, and one T2*-weighted (magnetic susceptibility) image for each cranial study and a matrix of 10 images generated from 10 combinations of TE and TR for each body lymphoma study. All images were obtained with a 1.5 Tesla imaging system (GE Signa). Analyses were performed on over 100 MR image sets with a variety of pathologies. The cluster analysis was operated in an unsupervised mode and computational overhead was minimized by utilizing a table look-up approach without adversely affecting accuracy. Image data were first segmented into 2 coarse clusters, each of which was then subdivided into 16 fine clusters. The final tissue classifications were presented as color-coded anatomically-mapped images and as two and three dimensional displays of cluster center data in selected feature space (minimum spanning tree). Fuzzy cluster analysis appears to be a clinically useful dimension reduction technique which results in improved diagnostic specificity of medical magnetic resonance images
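For reference, the standard fuzzy C-means iteration that AFCM approximates (with fuzziness m = 2; the table look-up acceleration and the MRI feature vectors are not reproduced here) alternates membership and centre updates:

```python
import numpy as np

def fuzzy_c_means(X, n_clusters, m=2.0, iters=50, eps=1e-9, seed=0):
    """Standard fuzzy C-means on rows of X. Returns centres and memberships."""
    rng = np.random.default_rng(seed)
    n = len(X)
    U = rng.random((n_clusters, n))
    U /= U.sum(axis=0)                          # memberships sum to 1 per point
    for _ in range(iters):
        Um = U**m
        centres = (Um @ X) / Um.sum(axis=1, keepdims=True)
        # Distances of every point to every centre (eps guards divide-by-zero):
        d = np.linalg.norm(X[None, :, :] - centres[:, None, :], axis=2) + eps
        U = d ** (-2.0 / (m - 1.0))
        U /= U.sum(axis=0)
    return centres, U

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.5, (100, 2)),   # toy two-cluster data
               rng.normal(10.0, 0.5, (100, 2))])
centres, U = fuzzy_c_means(X, n_clusters=2)
print(np.sort(centres[:, 0]))   # centres land near 0 and 10
```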
Directory of Open Access Journals (Sweden)
Hongyang Lu
2016-06-01
Because of the contradiction between the spatial and temporal resolution of remote sensing images (RSI) and the quality loss in the process of acquisition, it is of great significance to reconstruct RSI in remote sensing applications. Recent studies have demonstrated that reference-image-based reconstruction methods have great potential for higher reconstruction performance, but still lack accuracy and quality of reconstruction. For this application, a new compressed sensing objective function incorporating a reference image as prior information is developed. We resort to the reference prior information inherent in interior and exterior data simultaneously to build a new generalized nonconvex low-rank approximation framework for RSI reconstruction. Specifically, the innovation of this paper consists of the following three respects: (1) we propose a nonconvex low-rank approximation for reconstructing RSI; (2) we inject reference prior information to overcome over-smoothed edges and texture detail losses; (3) on this basis, we combine conjugate gradient algorithms and singular value thresholding (SVT) to solve the proposed algorithm. The performance of the algorithm is evaluated both qualitatively and quantitatively. Experimental results demonstrate that the proposed algorithm improves peak signal-to-noise ratio (PSNR) by several dB and preserves image details significantly compared to most current approaches that do not use reference images as priors. In addition, the generalized nonconvex low-rank approximation of our approach is naturally robust to noise, and therefore the proposed algorithm can handle low-resolution, noisy inputs in a more unified framework.
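The SVT step mentioned above is the proximal operator of the nuclear norm; the paper replaces the convex nuclear norm with a nonconvex surrogate, but the plain convex version is easy to sketch (a minimal illustration, not the authors' algorithm):

```python
import numpy as np

def svt(A, tau):
    """Singular value thresholding: the proximal operator of the nuclear
    norm, D_tau(A) = U diag(max(s - tau, 0)) Vt. This is the inner step
    only; the paper's nonconvex surrogate replaces the plain
    soft-threshold shown here."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

# Thresholding above the smaller singular value reduces the rank.
A = np.diag([5.0, 1.0])
B = svt(A, 2.0)
print(np.round(B, 6))  # diag(3, 0): rank dropped from 2 to 1
```

In the full reconstruction loop, a step like this alternates with a data-fidelity update (here, the conjugate gradient step).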
Single image super-resolution based on approximated Heaviside functions and iterative refinement
Wang, Xin-Yu; Huang, Ting-Zhu; Deng, Liang-Jian
2018-01-01
One method of solving the single-image super-resolution problem is to use Heaviside functions. This has been done previously by making a binary classification of image components as “smooth” and “non-smooth”, describing these with approximated Heaviside functions (AHFs), and iterating with l1 regularization. We now introduce a new method in which the binary classification of image components is extended to different degrees of smoothness and non-smoothness, these components being represented by various classes of AHFs. Taking into account the sparsity of the non-smooth components, their coefficients are l1-regularized. In addition, to pick up more image details, the new method uses an iterative refinement for the residuals between the original low-resolution input and the downsampled resulting image. Experimental results showed that the new method is superior to the original AHF method and to four other published methods. PMID:29329298
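A common closed form for an approximated Heaviside function is phi_d(x) = 1/2 + (1/pi) arctan(x/d), where d controls the degree of smoothness. A small sketch of a multi-scale AHF "dictionary" in this spirit (the exact AHF family and parameters of the paper are not reproduced here):

```python
import numpy as np

def ahf(x, d):
    """Approximated Heaviside function phi_d(x) = 1/2 + arctan(x/d)/pi.
    Small d gives a sharp, edge-like step; large d gives a smooth ramp.
    The paper's exact AHF classes are assumptions here."""
    return 0.5 + np.arctan(x / d) / np.pi

x = np.linspace(-1.0, 1.0, 201)
# One AHF class per degree of (non-)smoothness, as in the extended
# classification idea of the abstract.
dictionary = np.stack([ahf(x, d) for d in (0.01, 0.1, 1.0)])
print(dictionary.shape)  # (3, 201)
```

Image components are then expressed as sparse combinations over such a dictionary, with the l1 penalty acting on the coefficients.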
Image fusion and denoising using fractional-order gradient information
DEFF Research Database (Denmark)
Mei, Jin-Jin; Dong, Yiqiu; Huang, Ting-Zhu
Image fusion and denoising are significant in image processing because of the availability of multi-sensor data and the presence of noise. First-order and second-order gradient information have been effectively applied to fusing noiseless source images. In this paper, due to the adv...... show that the proposed method outperforms conventional total-variation-based methods for simultaneously fusing and denoising....
International Nuclear Information System (INIS)
Hamman, E.; Zorgati, R.
1995-01-01
Eddy current non-destructive testing is used by EDF to detect flaws affecting conductive objects such as steam generator tubes. With a view to obtaining ever more accurate information on equipment integrity, thereby facilitating diagnosis, studies aimed at using measurements to reconstruct an image of the flaw have been proceeding for about ten years. In this context, our approach to eddy current imaging is based on inverse problem formalism. The direct problem, involving a mathematical model linking measurements provided by a probe with variables characterizing the defect, is dealt with elsewhere. Using the model results, we study the possibility of inverting it, i.e. of reconstructing an image of the flaw from the measurements. We first give an overview of the different inversion techniques, representative of the state of the art and all based on linearization of the inverse problem by means of the Born approximation. The model error resulting from an excessive Born approximation nevertheless severely limits the quality of the images which can be obtained. In order to counteract this often critical error and extend the eddy current imaging application field, we have to deal with the non-linear inverse problem. A method derived from recent research is proposed and implemented to ensure consistency with the exact model. Based on an 'optimization' type approach and provided with a convergence theorem, the method is highly efficient. (authors). 17 refs., 7 figs., 1 appendix.
A New Approach to Rational Discrete-Time Approximations to Continuous-Time Fractional-Order Systems
Matos , Carlos; Ortigueira , Manuel ,
2012-01-01
Part 10: Signal Processing; International audience; In this paper a new approach to rational discrete-time approximations of continuous-time fractional-order systems of the form 1/(s^α + p) is proposed. We will show that such a fractional-order LTI system can be decomposed into sub-systems: one has the classic behavior and the other is similar to a Finite Impulse Response (FIR) system. The conversion from continuous-time to discrete-time systems will be done using the Laplace transform inversion integr...
Rayleigh scatter in kilovoltage x-ray imaging: is the independent atom approximation good enough?
Poludniowski, G.; Evans, P. M.; Webb, S.
2009-11-01
Monte Carlo simulation is the gold standard method for modelling scattering processes in medical x-ray imaging. General-purpose Monte Carlo codes, however, typically use the independent atom approximation (IAA). This is known to be inaccurate for Rayleigh scattering, for many materials, in the forward direction. This work addresses whether the IAA is sufficient for the typical modelling tasks in medical kilovoltage x-ray imaging. As a means of comparison, we incorporate a more realistic 'interference function' model into a custom-written Monte Carlo code. First, we conduct simulations of scatter from isolated voxels of soft tissue, adipose, cortical bone and spongiosa. Then, we simulate scatter profiles from a cylinder of water and from phantoms of a patient's head, thorax and pelvis, constructed from diagnostic-quality CT data sets. Lastly, we reconstruct CT numbers from simulated sets of projection images and investigate the quantitative effects of the approximation. We show that the IAA can produce errors of several per cent of the total scatter, across a projection image, for typical x-ray beams and patients. The errors in reconstructed CT number, however, for the phantoms simulated, were small (typically < 10 HU). The IAA can therefore be considered sufficient for the modelling of scatter correction in CT imaging. Where accurate quantitative estimates of scatter in individual projection images are required, however, the appropriate interference functions should be included.
Detection accuracy of in vitro approximal caries by cone beam computed tomography images
Energy Technology Data Exchange (ETDEWEB)
Qu Xingmin, E-mail: quxingmin@bjmu.edu.cn [Department of Oral and Maxillofacial Radiology, Peking University School and Hospital of Stomatology, 22 Zhongguancun Nandajie, Hai Dian District, Beijing 100081 (China); Li Gang, E-mail: kqgang@bjmu.edu.cn [Department of Oral and Maxillofacial Radiology, Peking University School and Hospital of Stomatology, 22 Zhongguancun Nandajie, Hai Dian District, Beijing 100081 (China); Zhang Zuyan, E-mail: zhangzy-bj@vip.sina.com [Department of Oral and Maxillofacial Radiology, Peking University School and Hospital of Stomatology, 22 Zhongguancun Nandajie, Hai Dian District, Beijing 100081 (China); Ma Xuchen, E-mail: kqxcma@bjmu.edu.cn [Department of Oral and Maxillofacial Radiology, Peking University School and Hospital of Stomatology, 22 Zhongguancun Nandajie, Hai Dian District, Beijing 100081 (China)
2011-08-15
Aims: To evaluate the diagnostic accuracy of approximal carious lesions among five CBCT systems and to assess the effect of detector types employed by different CBCT systems on the accuracy of approximal caries diagnosis. Materials and methods: Thirty-nine extracted non-cavitated human permanent teeth were employed in the study. Seven observers evaluated 78 approximal surfaces of the teeth with respect to caries by the images from the following five CBCT systems: (1) NewTom 9000; (2) Accuitomo 3DX; (3) Kodak 9000 3D; (4) ProMax 3D; and (5) DCT PRO, respectively. The lesions were validated by histological examination. The area under the receiver operating characteristic (ROC) curve (A_z) was used to evaluate the diagnostic accuracy. Results: Microscopy of approximal surfaces found 47.4% sound, 39.8% enamel and 12.8% dentin lesions. The differences of A_z values among the five CBCT systems were not statistically significant (p = 0.348). No significant difference was found between the two detector types of CBCT systems (p = 0.47). Conclusions: The five CBCT systems employed in the study showed no significant difference in the in vitro approximal caries detection. Neither the detector nor the FOV employed by the CBCT systems has an impact on the detection accuracy of approximal caries.
A Comparison of Techniques for Approximating Full Image-Based Lighting
DEFF Research Database (Denmark)
Madsen, Claus B.; Laursen, Rune Elmgaard
2006-01-01
Light probes, or environment maps, are used extensively in computer graphics for visual effects involving rendering virtual objects into real scenes (Augmented Reality). A light probe is a High Dynamic Range omni-directional image covering all directions on a sphere at some location. Each pixel...... in the light probe image measures the incident radiance at the light probe acquisition point. The figure above shows an example of a light probe image in the longitude-latitude mapping (similar to an atlas mapping of the Earth). Using the light probe information a virtual object can be rendered with correct...... scene illumination and inserted into images of the scene with credible shading, reflections and shadows. Rendering virtual objects with light probe information is a very time-consuming process. Therefore several techniques exist which attempt to approximate the light probe with a set of directional...
A new approximation of Fermi-Dirac integrals of order 1/2 for degenerate semiconductor devices
AlQurashi, Ahmed; Selvakumar, C. R.
2018-06-01
There has been tremendous growth in the field of integrated circuits (ICs) in the past fifty years. Scaling laws have mandated that both lateral and vertical dimensions be reduced, together with a steady increase in doping densities. Most modern semiconductor devices therefore have heavily doped regions where Fermi-Dirac integrals are required. Several attempts have been made to develop analytical approximations for Fermi-Dirac integrals, since numerical computation of the integrals is difficult to use in semiconductor device work, even though highly accurate tabulated values are available. Most of these analytical expressions are not well suited to semiconductor device applications because of poor accuracy, complicated calculations, or difficulties in differentiation and integration. A new approximation for the Fermi-Dirac integral of order 1/2, developed using Prony's method, is discussed in this paper. The approximation is accurate enough (mean absolute error (MAE) = 0.38%) and simple enough to be used in semiconductor device equations. The new approximation is applied to a generalized Einstein relation, an important relation in semiconductor devices.
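For reference, the integral itself, F_{1/2}(eta) = (2/sqrt(pi)) * integral_0^inf sqrt(x)/(1 + e^(x - eta)) dx, can be evaluated by brute-force quadrature and checked against the non-degenerate limit F_{1/2}(eta) -> e^eta for eta << 0. This is a numerical baseline only, not the paper's Prony-based closed form:

```python
import numpy as np

def fermi_dirac_half(eta, x_max=60.0, n=200_000):
    """Fermi-Dirac integral of order 1/2,
    F_{1/2}(eta) = (2/sqrt(pi)) * int_0^inf sqrt(x)/(1 + exp(x - eta)) dx,
    by brute-force Riemann summation over [0, x_max]. A numerical
    reference only -- not the Prony-based approximation of the paper."""
    x = np.linspace(0.0, x_max, n)
    dx = x[1] - x[0]
    integrand = np.sqrt(x) / (1.0 + np.exp(x - eta))
    return 2.0 / np.sqrt(np.pi) * integrand.sum() * dx

# Non-degenerate (Boltzmann) limit: F_{1/2}(eta) -> exp(eta) for eta << 0.
print(fermi_dirac_half(-5.0) / np.exp(-5.0))  # close to 1
```

Closed-form approximations such as the paper's are then validated against exactly this kind of quadrature reference.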
Zeng, Cheng; Liang, Shan; Xiang, Shuwen
2017-05-01
Continuous-time systems are usually modelled by ordinary differential equations arising from physical laws. In practice, however, utilizing, analyzing or transmitting data from such systems requires the models first to be discretized; more importantly, digital control of a continuous-time nonlinear system requires a good sampled-data model. This paper investigates a new consistency condition which is weaker than previously presented similar results. Moreover, given the stability of the high-order approximate model with stable zero dynamics, the novel condition presented stabilizes the exact sampled-data model of the nonlinear system for sufficiently small sampling periods. An insightful interpretation of the results can be made in terms of the stable sampling zero dynamics, and the new consistency condition is, surprisingly, associated with the relative degree of the nonlinear continuous-time system. Our controller design, based on the higher-order approximate discretized model, extends existing methods, which mainly deal with the Euler approximation. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
Ullah, Asmat; Chen, Wen; Khan, Mushtaq Ahmad
2017-07-01
This paper introduces a fractional-order total variation (FOTV) based model with three different weights in the fractional-order derivative definition for multiplicative noise removal. The fractional-order Euler-Lagrange equation, a highly non-linear partial differential equation (PDE), is obtained by minimization of the energy functional for image restoration. Two numerical schemes are used: an iterative scheme based on the dual theory, and a majorization-minimization algorithm (MMA). To improve the restoration results, we adopt an adaptive parameter-selection procedure for the proposed model by trial and error. We report numerical simulations which show the validity and state-of-the-art performance of the fractional-order model in visual improvement as well as in increased peak signal-to-noise ratio compared to corresponding methods. Numerical experiments also demonstrate that the MMA-based methodology is slightly better than the iterative scheme.
Statistical trajectory of an approximate EM algorithm for probabilistic image processing
International Nuclear Information System (INIS)
Tanaka, Kazuyuki; Titterington, D M
2007-01-01
We calculate analytically a statistical average of trajectories of an approximate expectation-maximization (EM) algorithm with generalized belief propagation (GBP) and a Gaussian graphical model for the estimation of hyperparameters from observable data in probabilistic image processing. A statistical average with respect to observed data corresponds to a configuration average for the random-field Ising model in spin glass theory. In the present paper, hyperparameters which correspond to interactions and external fields of spin systems are estimated by an approximate EM algorithm. A practical algorithm is described for gray-level image restoration based on a Gaussian graphical model and GBP; the GBP approach corresponds to the cluster variation method in statistical mechanics. Our main result is to obtain the statistical average of the trajectory of the approximate EM algorithm by using loopy belief propagation and GBP with respect to degraded images generated from a probability density function with the true values of the hyperparameters. The statistical average of the trajectory can be expressed in terms of recursion formulas derived from analytical calculations.
A simple method to approximate liver size on cross-sectional images using living liver models
International Nuclear Information System (INIS)
Muggli, D.; Mueller, M.A.; Karlo, C.; Fornaro, J.; Marincek, B.; Frauenfelder, T.
2009-01-01
Aim: To assess whether a simple, diameter-based formula applicable to cross-sectional images can be used to calculate the total liver volume. Materials and methods: On 119 cross-sectional examinations (62 computed tomography and 57 magnetic resonance imaging) a simple, formula-based method to approximate the liver volume was evaluated. The total liver volume was approximated by two readers measuring the largest craniocaudal (cc), ventrodorsal (vd), and coronal (cor) diameters and applying the equation: Vol_estimated = cc × vd × cor × 0.31. Inter-rater reliability, agreement, and correlation between the liver volume calculation and virtual liver volumetry were analysed. Results: No significant disagreement between the two readers was found. The formula correlated significantly with the volumetric data (r > 0.85, p < 0.0001). In 81% of cases the error of the approximated volume was <10%, and in 92% of cases <15%, compared to the volumetric data. Conclusion: Total liver volume can be accurately estimated on cross-sectional images using a simple, diameter-based equation.
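The diameter-based equation is simple enough to state directly in code (the 0.31 factor is from the abstract; the example diameters are illustrative, not from the study):

```python
def estimate_liver_volume(cc, vd, cor):
    """Approximate total liver volume (ml) from the three largest
    orthogonal diameters (cm): Vol_estimated = cc * vd * cor * 0.31."""
    return cc * vd * cor * 0.31

# Illustrative diameters (cm); the result is an approximate volume in ml.
print(estimate_liver_volume(18, 15, 20))  # 1674.0
```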
Memon, Sajid; Nataraj, Neela; Pani, Amiya Kumar
2012-01-01
In this article, a posteriori error estimates are derived for mixed finite element Galerkin approximations to second order linear parabolic initial and boundary value problems. Using mixed elliptic reconstructions, a posteriori error estimates in L∞(L2)- and L2(L2)-norms for the solution as well as its flux are proved for the semidiscrete scheme. Finally, based on a backward Euler method, a completely discrete scheme is analyzed and a posteriori error bounds are derived, which improves upon earlier results on a posteriori estimates of mixed finite element approximations to parabolic problems. Results of numerical experiments verifying the efficiency of the estimators have also been provided. © 2012 Society for Industrial and Applied Mathematics.
Mauri, Francesco
Anharmonic effects can generally be treated within perturbation theory. Such an approach breaks down when the harmonic solution is dynamically unstable or when the anharmonic corrections of the phonon energies are larger than the harmonic frequencies themselves. This situation occurs near lattice-related second-order phase transitions such as charge-density-wave (CDW) or ferroelectric instabilities, or in H-containing materials, where the large zero-point motion of the protons results in a violation of the harmonic approximation. Interestingly, even in these cases, phonons can be observed, measured, and used to model transport properties. In order to treat such cases, we developed a stochastic implementation of the self-consistent harmonic approximation valid for treating anharmonicity in the nonperturbative regime and for obtaining, from first principles, the structural, thermodynamic and vibrational properties of strongly anharmonic systems. I will present applications to the ferroelectric transition in SnTe, to the CDW transitions in NbS2 and NbSe2 (in bulk and monolayer) and to the hydrogen-bond symmetrization transition in the superconducting hydrogen sulfide system, which exhibits the highest Tc reported for any superconductor so far. In all cases we are able to predict the transition temperature (pressure) and the evolution of phonons with temperature (pressure). This project has received funding from the European Union's Horizon 2020 research and innovation programme under Grant agreement No. 696656 GrapheneCore1.
Self-imaging in first-order optical systems
Alieva, T.; Bastiaans, M.J.; Nijhawan, O.P.; Guota, A.K.; Musla, A.K.; Singh, Kehar
1998-01-01
The structure and main properties of coherent and partially coherent optical fields that are self-reproducible under propagation through a first-order optical system are investigated. A phase space description of self-imaging in first-order optical systems is presented. The Wigner distribution
International Nuclear Information System (INIS)
Ishikawa, Nobuyuki; Suzuki, Katsuo
1999-01-01
Because it allows feedback characteristics, such as the disturbance-rejection specification, and the reference response characteristics to be set independently, two-degree-of-freedom (2DOF) control is widely utilized to improve control performance. Ordinary design methods such as model matching usually derive a high-order feedforward element for the 2DOF controller. In this paper, we propose a new design method for a low-order feedforward element based on Pade approximation of the denominator series expansion. The features of the proposed method are: (1) it is suited to realizing the reference response characteristics in the low-frequency region, and (2) the order of the feedforward element can be selected independently of the feedback element. Both are essential to 2DOF controller design. With this method, a 2DOF reactor power controller is designed and its control performance evaluated by numerical simulation with a reactor dynamics model. The evaluation confirms that the controller designed by the proposed method possesses control characteristics equivalent to those of a controller designed by the ordinary model-matching method. (author)
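The [m/n] Pade approximant at the heart of such an order reduction is obtained from the Taylor coefficients of a series by solving a small linear system; a generic sketch (not the authors' controller-design procedure), verified on the classic [1/1] approximant of e^s:

```python
import numpy as np

def pade(c, m, n):
    """[m/n] Pade approximant from Taylor coefficients c[0..m+n] of a
    series sum_k c_k s^k. Returns numerator/denominator coefficients in
    ascending powers with b[0] = 1. A generic sketch, not the authors'
    feedforward-element design."""
    c = np.asarray(c, dtype=float)
    # Denominator: sum_{j=0..n} b_j c_{m+i-j} = 0 for i = 1..n, b_0 = 1.
    A = np.array([[c[m + i - j] if m + i - j >= 0 else 0.0
                   for j in range(1, n + 1)] for i in range(1, n + 1)])
    b = np.concatenate(([1.0], np.linalg.solve(A, -c[m + 1:m + n + 1])))
    # Numerator: a_k = sum_{j=0..min(k,n)} b_j c_{k-j} for k = 0..m.
    a = np.array([sum(b[j] * c[k - j] for j in range(min(k, n) + 1))
                  for k in range(m + 1)])
    return a, b

# [1/1] Pade of e^s (Taylor coefficients 1, 1, 1/2) is (1 + s/2)/(1 - s/2).
a, b = pade([1.0, 1.0, 0.5], 1, 1)
print(a, b)  # numerator (1 + 0.5 s), denominator (1 - 0.5 s)
```

Choosing m and n independently of the feedback element is exactly the freedom the proposed method exploits.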
International Nuclear Information System (INIS)
Baer, M.; Nakamura, H.; Kouri, D.J.
1986-01-01
In this work the ion-molecule reaction He + H2+(v_i) → HeH+(v_f) + H (v_i = 0-7, v_f = 0-2) was studied quantum mechanically in the energy range 1.3 eV ≤ E_tot ≤ 1.8 eV. The calculations were carried out employing the Reactive Infinite Order Sudden Approximation (RIOSA). The two features characteristic of this system in this energy range, namely the strong enhancement of the reaction rate with the initial vibrational energy (at fixed total energy) and the relatively weak dependence of the cross sections on translational energy, were found to be well reproduced in the numerical treatment. The results also revealed two mechanisms for the exchange process: one is the ordinary mechanism and the other is probably related to the spectator stripping model.
International Nuclear Information System (INIS)
Li, G; Tyagi, N; Deasy, J; Wei, J; Hunt, M
2015-01-01
Purpose: Cine 2DMRI is useful in MR-guided radiotherapy but lacks volumetric information. We explore, through simulation, the feasibility of estimating time-resolved (TR) 4DMRI based on cine 2DMRI and respiratory-correlated (RC) 4DMRI. Methods: We hypothesize that a volumetric image during free breathing can be approximated by interpolation among 3DMRI image sets generated from an RC-4DMRI. Two patients' RC-4DMRI with 4 or 5 phases were used to generate additional 3DMRI by interpolation. For each patient, six libraries were created containing a total of 5 to 35 3DMRI image sets, built by 0-6 equi-spaced tri-linear interpolations between adjacent phases and between the full-inhalation and full-exhalation phases. Sagittal cine 2DMRI were generated from reference 3DMRIs created from separate, unique interpolations of the original RC-4DMRI. To test whether accurate 3DMRI could be generated through rigid registration of the cine 2DMRI to the 3DMRI libraries, each sagittal 2DMRI was registered to sagittal cuts at the same location in the 3DMRI within each library to identify the two best matches: one with greater lung volume and one with smaller. A final interpolation between the corresponding 3DMRI was then performed to produce the first-order-approximation (FOA) 3DMRI. The quality and performance of the FOA as a function of library size were assessed using both the difference in lung volume and the average voxel intensity difference between the FOA and the reference 3DMRI. Results: The discrepancy between the FOA and reference 3DMRI decreases as the library size increases. The 3D lung volume difference decreases from 5-15% to 1-2% as the library size increases from 5 to 35 image sets. The average difference in lung voxel intensity decreases from 7-8 to 5-6, with lung intensities ranging from 0 to 135. Conclusion: This study indicates that the quality of FOA 3DMRI improves with increasing 3DMRI library size. On-going investigations will test this approach using actual cine 2DMRI and introduce a higher order approximation for
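The final FOA step, interpolating between the two best-matching library volumes so that the result corresponds to a target lung volume, amounts to a weighted average; a toy sketch (function and variable names are illustrative, not from the paper's implementation):

```python
import numpy as np

def interpolate_volume(v0, v1, lung0, lung1, lung_target):
    """Linear interpolation between two RC-4DMRI phase volumes, weighted
    so the interpolated volume corresponds to a target lung volume.
    Names are illustrative; this is not the paper's implementation."""
    w = (lung_target - lung0) / (lung1 - lung0)
    return (1.0 - w) * v0 + w * v1

# Toy example: halfway between two 'phases' with lung volumes 4.0 and 5.0 L.
v0, v1 = np.zeros((2, 2, 2)), np.ones((2, 2, 2))
foa = interpolate_volume(v0, v1, 4.0, 5.0, 4.5)
print(foa.mean())  # 0.5
```

The same weighting generates the interpolated library volumes themselves when applied between adjacent RC-4DMRI phases.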
Quezada de Luna, M.; Farthing, M.; Guermond, J. L.; Kees, C. E.; Popov, B.
2017-12-01
The Shallow Water Equations (SWEs) are popular for modeling non-dispersive incompressible water waves when the horizontal wavelength is much larger than the vertical scales. They can be derived from the incompressible Navier-Stokes equations assuming a constant vertical velocity. The SWEs are important in Geophysical Fluid Dynamics for modeling surface gravity waves in shallow regimes, i.e., whenever the wavelength far exceeds the depth, as holds even for tsunamis in the deep ocean. Some common geophysical applications are the evolution of tsunamis, river flooding and dam breaks, storm surge simulations, atmospheric flows and others. This work is concerned with the approximation of the time-dependent Shallow Water Equations with friction using explicit time stepping and continuous finite elements. The objective is to construct a method that is at least second-order accurate in space and third- or higher-order accurate in time, positivity preserving, well-balanced with respect to rest states, well-balanced with respect to steady sliding solutions on inclined planes, and robust with respect to dry states. Methods fulfilling these goals are common in the finite volume literature; however, to the best of our knowledge, schemes with the above properties are not well developed in the context of continuous finite elements. We start from a finite element method that is second-order accurate in space, positivity preserving and well-balanced with respect to rest states. We extend it by: modifying the artificial viscosity (via the entropy viscosity method) to deal with loss of accuracy around local extrema; treating a singular Manning friction term via an explicit discretization under the usual CFL condition; introducing a water height regularization that depends on the mesh size and is consistent with the polynomial approximation; reducing dispersive errors introduced by lumping the mass matrix; and others. After presenting the details of the method we show numerical tests that demonstrate the well
DEFF Research Database (Denmark)
Enevoldsen, Thomas; Oddershede, Jens; Sauer, Stephan P. A.
1998-01-01
We present correlated calculations of the indirect nuclear spin-spin coupling constants of HD, HF, H2O, CH4, C2H2, BH, AlH, CO and N2 at the level of the second-order polarization propagator approximation (SOPPA) and the second-order polarization propagator approximation with coupled-cluster sing...
Navas, F J; Alcántara, R; Fernández-Lorenzo, C; Martín-Calleja, J
2010-03-01
A laser beam induced current (LBIC) map of a photoactive surface is a very useful tool for studying the spatial variability of properties such as photoconversion efficiency or factors connected with carrier recombination. Obtaining high spatial resolution LBIC maps involves irradiating the photoactive surface with a photonic beam with a Gaussian power distribution and a low dispersion coefficient. Laser emission fulfils these requirements, but it is highly monochromatic and therefore has a spectral distribution different from that of sunlight. This work presents an instrumental system and procedure to obtain high spatial resolution LBIC maps under conditions approximating solar irradiation. The methodology consists of a trichromatic irradiation system based on three laser excitation sources emitting in the red, green, and blue zones of the electromagnetic spectrum. The relative irradiation powers are determined either by the solar spectral distribution or by Planck's emission formula, which provides information approximating the behavior of the system under solar irradiation. In turn, an algorithm and a procedure have been developed to form images from the scans performed by the three lasers, providing information about the photoconversion efficiency of photovoltaic devices under the irradiation conditions used. The system has been checked with three photosensitive devices based on three different technologies: a commercial silicon photodiode, a commercial photoresistor, and a dye-sensitized solar cell. These devices make it possible to verify that the surface quantum efficiency has areas dependent upon the excitation wavelength, while global incident photon-to-current efficiency values have been measured that approximate those that would be obtained under sunlight.
Khaniani, Hassan
This thesis proposes a "standard strategy" for iterative inversion of elastic properties from seismic reflection data. The term "standard" refers to the current hands-on commercial techniques used for seismic imaging and the inverse problem. The method is established to reduce the computation time associated with elastic Full Waveform Inversion (FWI) methods. It makes use of AVO analysis, prestack time migration and corresponding forward modeling in an iterative scheme. The main objective is to describe the iterative inversion procedure applied to seismic reflection data using simplified mathematical expressions and their numerical applications. The framework of the inversion is similar to FWI but with lower computational costs. The reduction of computational costs depends on the data conditioning (with or without multiple data), the complexity of the geological model, and acquisition conditions such as the signal-to-noise ratio (SNR). Many processing methods consider multiple events as noise and remove them from the data. This is the motivation for reducing the computational cost associated with Finite Difference Time Domain (FDTD) forward modeling and Reverse Time Migration (RTM)-based techniques. Therefore, a one-way solution of the wave equation is implemented for inversion. While less computationally intensive depth imaging methods are available through iterative coupling of ray theory and the Born approximation, it is shown that we can further reduce the cost of inversion by dropping the cost of ray tracing for traveltime estimation, in a way similar to standard Prestack Time Migration (PSTM) and the corresponding forward modeling. This requires the model to have smooth lateral variations in elastic properties, so that the traveltimes of the scatterpoints can be approximated by a Double Square Root (DSR) equation. To represent a more realistic and stable solution of the inverse problem, while considering the phase of supercritical angles, the
International Nuclear Information System (INIS)
Green, Timothy F. G.; Yates, Jonathan R.
2014-01-01
We present a method for the first-principles calculation of nuclear magnetic resonance (NMR) J-coupling in extended systems using state-of-the-art ultrasoft pseudopotentials and including scalar-relativistic effects. The use of ultrasoft pseudopotentials is made possible by extending the projector augmented wave (PAW) method of Joyce et al. [J. Chem. Phys. 127, 204107 (2007)]. We benchmark it against existing local-orbital quantum chemical calculations and experiments for small molecules containing light elements, with good agreement. Scalar-relativistic effects are included at the zeroth-order regular approximation level of theory and benchmarked against existing local-orbital quantum chemical calculations and experiments for a number of small molecules containing the heavy row-six elements W, Pt, Hg, Tl, and Pb, with good agreement. Finally, ¹J(P-Ag) and ²J(P-Ag-P) couplings are calculated in some larger molecular crystals and compared against solid-state NMR experiments. Some remarks are also made as to improving the numerical stability of dipole perturbations using PAW.
Hu, Jinyan; Li, Li; Yang, Yunfeng
2017-06-01
A hierarchical, successive-approximation registration method for non-rigid medical images based on thin-plate splines is proposed in this paper. The method has two major novelties. First, hierarchical registration based on the wavelet transform is used: the approximation image of the wavelet transform is selected as the object to be registered. Second, a successive-approximation strategy accomplishes the non-rigid registration, i.e. local regions of the image pair are first registered coarsely using thin-plate splines, and the current coarse registration result then becomes the object to be registered in the following registration step. Experiments show that the proposed method is effective for the registration of non-rigid medical images.
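The "approximate image" used as the registration object is the low-pass (approximation) subband of the wavelet transform. A minimal sketch of one decomposition level, assuming a Haar wavelet (whose approximation subband is, up to a normalization factor, a 2×2 block average; the function name is hypothetical):

```python
def haar_approximation(image):
    """One level of the 2-D Haar approximation subband (up to normalization).

    image: 2-D list with even numbers of rows and columns.
    Each output pixel is the average of a 2x2 block; the true Haar
    approximation coefficient equals this average times 2.
    """
    rows, cols = len(image), len(image[0])
    return [
        [
            (image[r][c] + image[r][c + 1] +
             image[r + 1][c] + image[r + 1][c + 1]) / 4.0
            for c in range(0, cols, 2)
        ]
        for r in range(0, rows, 2)
    ]

flat = [[10.0] * 4 for _ in range(4)]
print(haar_approximation(flat))  # [[10.0, 10.0], [10.0, 10.0]]
```

Registering these half-resolution approximations first is what makes the hierarchy cheap and robust before the full-resolution refinement.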
Mixed Higher Order Variational Model for Image Recovery
Directory of Open Access Journals (Sweden)
Pengfei Liu
2014-01-01
A novel mixed higher order regularizer involving the first and second degree image derivatives is proposed in this paper. Using spectral decomposition, we reformulate the new regularizer as a weighted L1-L2 mixed norm of image derivatives. Owing to this equivalent formulation, an efficient fast projected gradient algorithm combined with monotone fast iterative shrinkage thresholding, called FPG-MFISTA, is designed to solve the resulting variational image recovery problems under the majorization-minimization framework. Finally, we demonstrate the effectiveness of the proposed regularization scheme through experimental comparisons with the total variation (TV) scheme, the nonlocal TV scheme, and current second degree methods. Specifically, the proposed approach achieves better results than related state-of-the-art methods in terms of peak signal-to-noise ratio (PSNR) and restoration quality.
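The building block of shrinkage-thresholding solvers of the FISTA family is the soft-threshold operator, which is the exact minimizer of the scalar problem ½(x − y)² + t·|x|. A minimal sketch of that operator alone (a generic ingredient, not the paper's full FPG-MFISTA algorithm):

```python
def soft_threshold(y, t):
    """Proximal operator of t*|x|: argmin_x 0.5*(x - y)**2 + t*abs(x)."""
    if y > t:
        return y - t
    if y < -t:
        return y + t
    return 0.0

# Shrink a few noisy samples with threshold 1.0: small values collapse to 0,
# large values move toward 0 by exactly the threshold.
print([soft_threshold(v, 1.0) for v in (3.0, 0.4, -2.5)])  # [2.0, 0.0, -1.5]
```

In an L1-L2 mixed-norm scheme such as the one above, this shrinkage is applied to the L1 part of the derivative coefficients at every iteration.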
The diurnal order of the image in Dracula
Directory of Open Access Journals (Sweden)
Claudio Vescia Zanini
2015-02-01
The article analyses images from Bram Stoker's novel Dracula, taking as its main theoretical frame the Diurnal Regime of the Image proposed by Gilbert Durand in The Anthropological Structures of the Imaginary and presented by Durand himself as the "order of antithesis". By presenting the main kinds of images proposed by Durand in binary pairs (theriomorphic and diæretic, nyctomorphic and spectacular, catamorphic and ascensional), the analysis aims to stay in tune with both the theoretical approach and the context of production of the novel. Victorian England at the end of the nineteenth century was a time of anxieties, fears and doubts, recurrent in Victorian cultural production as a whole and well depicted in Dracula, a work in which binary oppositions also seem recurrent: life and death, good and evil, moral and desire, among others. The focus is on how the main character is perceived by the other characters, which ultimately affects our perception as readers. Images related to animals, colors, weapons and movements are also included in the analysis. The conclusion points out that the Diurnal Order is a prolific and coherent approach towards an understanding of Bram Stoker's vampire novel.
Neural network approximation of tip-abrasion effects in AFM imaging
International Nuclear Information System (INIS)
Bakucz, Peter; Dziomba, Thorsten; Koenders, Ludger; Krüger-Sehm, Rolf; Yacoot, Andrew
2008-01-01
The abrasion (wear) of tips used in scanning force microscopy (SFM) directly influences SFM image quality and is therefore of great relevance to quantitative SFM measurements. The increasing implementation of automated SFM measurement schemes has become a strong driving force for increasing efforts towards the prediction of tip wear, as it needs to be ensured that the probe is exchanged before a level of tip wear is reached that adversely affects the measurement quality. In this paper, we describe the identification of tip abrasion in a system of SFM measurements. We attempt to model the tip-abrasion process as a concatenation of a mapping from the measured AFM data to a regression vector and a nonlinear mapping from the regressor space to the output space. The mapping is formed as a basis function expansion. Feedforward neural networks are used to approximate this mapping. The one-hidden layer network gave a good quality of fit for the training and test sets for the tip-abrasion system. We illustrate our method with AFM measurements of both fine periodic structures and randomly oriented sharp features and compare our neural network results with those obtained using other methods
International Nuclear Information System (INIS)
Martin, P.; Zamudio-Cristi, J.
1982-01-01
A method is described to obtain fractional approximations for linear first-order differential equations with polynomial coefficients. This approximation can give good accuracy over a large region of the complex-variable plane that may include the whole real axis. The parameters of the approximation are solutions of algebraic equations obtained from the coefficients of the highest and lowest powers of the variable after substitution of the fractional approximation into the differential equation. The method is more general than the asymptotic Padé method, and it does not require determining the power series or asymptotic expansion. A simple approximation for the exponential integral is found, which gives three exact digits for most real values of the variable. Approximations of higher accuracy and of the same degree as those of other authors are also obtained.
Czech Academy of Sciences Publication Activity Database
Gebresenbut, G.; Andersson, M. S.; Beran, Přemysl; Manuel, P.; Nordblad, P.; Sahlberg, M.; Gomez, C. P.
2014-01-01
Roč. 26, č. 32 (2014), s. 322202 ISSN 0953-8984 R&D Projects: GA MŠk(XE) LM2011019 EU Projects: European Commission(XE) 283883 - NMI3-II Institutional support: RVO:61389005 Keywords : magnetic property * magnetic structure refinement * approximants of quasicrystals Subject RIV: BM - Solid Matter Physics ; Magnetism Impact factor: 2.346, year: 2014
Energy Technology Data Exchange (ETDEWEB)
Garcia-Fernandez, P.; Velarde, M.G.
1988-05-01
To a first approximation, the effects of detuning and/or spatial inhomogeneity on the stability domain of a model for a laser with a saturable absorber are presented. It appears that the space dependence enlarges the domain of the emissionless state, thus delaying the onset of laser action.
DEFF Research Database (Denmark)
Abildskov, Jens; Constantinou, Leonidas; Gani, Rafiqul
1996-01-01
A simple modification of group contribution based models for estimation of liquid phase activity coefficients is proposed. The main feature of this modification is that contributions estimated from the present first-order groups in many instances are found insufficient since the first-order groups...... correlation/prediction capabilities, distinction between isomers and ability to overcome proximity effects....
Quasiparticle scattering image in hidden order phases and chiral superconductors
Energy Technology Data Exchange (ETDEWEB)
Thalmeier, Peter [Max Planck Institute for Chemical Physics of Solids, 01187 Dresden (Germany); Akbari, Alireza, E-mail: alireza@apctp.org [Asia Pacific Center for Theoretical Physics, Pohang, Gyeongbuk 790-784 (Korea, Republic of); Department of Physics, and Max Planck POSTECH Center for Complex Phase Materials, POSTECH, Pohang 790-784 (Korea, Republic of)
2016-02-15
The technique of Bogoliubov quasiparticle interference (QPI) has been successfully used to investigate the symmetry of unconventional superconducting gaps, also in heavy fermion compounds. It was demonstrated that QPI can distinguish between the d-wave singlet candidates in CeCoIn₅. In URu₂Si₂ presumably a chiral d-wave singlet superconducting (SC) state exists inside a multipolar hidden order (HO) phase. We show that hidden order leaves an imprint on the symmetry of the QPI pattern that may be used to settle the essential question whether HO in URu₂Si₂ breaks the in-plane rotational symmetry or not. We also demonstrate that the chiral d-wave SC gap leads to a crossover to a quasi-2D QPI spectrum below T_c which sharpens the HO features. Furthermore we investigate the QPI image of the chiral p-wave multigap superconductor Sr₂RuO₄. - Highlights: • The chiral multigap structure of Sr₂RuO₄ leads to rotation of the QPI spectrum with bias voltage. • 5f band reconstruction in the hidden order phase of URu₂Si₂ is obtained from a two-orbital model. • The chiral superconductivity in URu₂Si₂ leads to quasi-2D quasiparticle interference (QPI).
DEFF Research Database (Denmark)
Polat, Burak; Meincke, Peter
2004-01-01
A forward model for ground penetrating radar imaging of buried 3-D perfect electric conductors is addressed within the framework of diffraction tomography. The similarity of the present forward model derived within the physical optics approximation with that derived within the first Born...
Trattner, Sigal; Feigin, Micha; Greenspan, Hayit; Sochen, Nir
2008-03-01
The differential interference contrast (DIC) microscope is commonly used for the visualization of live biological specimens. Being a non-invasive modality, it enables viewing transparent specimens while preserving their viability. Fertility clinics often use the DIC microscope to evaluate the quality of human embryos. Towards quantification and reconstruction of the visualized specimens, an image formation model for DIC imaging is sought and the interaction of light waves with biological matter is examined. In many image formation models the light-matter interaction is expressed via the first Born approximation. The validity region of this approximation is defined by a theoretical bound which limits its use to very small specimens with low dielectric contrast. In this work the Born approximation is investigated via the Helmholtz equation, which describes the interaction between the specimen and light. A solution for the lens field is derived using a Gauss-Legendre quadrature formulation. This numerical scheme is considered both accurate and efficient, and it significantly shortened the computation time compared with integration methods that required a great amount of sampling to satisfy the Whittaker-Shannon sampling theorem. Comparing the numerical results with the theoretical values shows that the theoretical bound is not directly relevant to microscopic imaging and is far too limiting. Exhaustive numerical experiments show that the Born approximation is inappropriate for modeling the visualization of thick human embryos.
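Gauss-Legendre quadrature, the family of rules used above, evaluates an integral over [-1, 1] as a weighted sum at the roots of a Legendre polynomial; an n-point rule is exact for polynomials up to degree 2n-1, which is why so few samples suffice. A minimal 3-point illustration (generic, not the paper's lens-field computation):

```python
import math

def gauss_legendre_3(f):
    """3-point Gauss-Legendre rule on [-1, 1]; exact for degree <= 5."""
    nodes = (-math.sqrt(3.0 / 5.0), 0.0, math.sqrt(3.0 / 5.0))
    weights = (5.0 / 9.0, 8.0 / 9.0, 5.0 / 9.0)
    return sum(w * f(x) for w, x in zip(weights, nodes))

# Integral of x^4 over [-1, 1] is 2/5, recovered from only three samples.
print(gauss_legendre_3(lambda x: x ** 4))  # 0.4 (up to rounding)
```

This is the sense in which such a scheme can beat uniform sampling driven by the Whittaker-Shannon criterion: accuracy comes from the choice of nodes and weights, not from sample density.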
DEFF Research Database (Denmark)
Zhou, Bo; Ai, Xiaomeng; Fang, Jiakun
2017-01-01
With the rapid development and deployment of voltage source converter (VSC) based HVDC, the traditional power system is evolving into a hybrid AC-DC grid. New optimization methods are urgently needed for these hybrid AC-DC power systems. In this paper, mixed-integer second order cone programming...... (MISOCP) for the hybrid AC-DC power systems is proposed. The second order cone (SOC) relaxation is adopted to transform the AC and DC power flow constraints to MISOCP. Several IEEE test systems are used to validate the proposed MISOCP formulation of the optimal power flow (OPF) and unit commitment (UC
Approximated transport-of-intensity equation for coded-aperture x-ray phase-contrast imaging.
Das, Mini; Liang, Zhihua
2014-09-15
Transport-of-intensity equations (TIEs) allow better understanding of image formation and assist in simplifying the "phase problem" associated with phase-sensitive x-ray measurements. In this Letter, we present for the first time to our knowledge a simplified form of TIE that models x-ray differential phase-contrast (DPC) imaging with coded-aperture (CA) geometry. The validity of our approximation is demonstrated through comparison with an exact TIE in numerical simulations. The relative contributions of absorption, phase, and differential phase to the acquired phase-sensitive intensity images are made readily apparent with the approximate TIE, which may prove useful for solving the inverse phase-retrieval problem associated with these CA geometry based DPC.
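For reference, the standard (non-coded-aperture) transport-of-intensity equation from which such simplified forms are derived relates the longitudinal intensity derivative to the transverse phase gradient; the paper's CA-specific form is not reproduced here:

```latex
% Standard TIE for a paraxial beam of wavelength \lambda,
% with intensity I(x,y,z) and phase \phi(x,y):
\frac{\partial I}{\partial z}
  = -\frac{\lambda}{2\pi}\,
    \nabla_{\!\perp}\cdot\bigl(I\,\nabla_{\!\perp}\phi\bigr)
```

The "phase problem" is then the inverse task of recovering \phi from measured intensities at two or more nearby planes.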
Energy Technology Data Exchange (ETDEWEB)
Martini, Till; Uwer, Peter [Humboldt-Universität zu Berlin, Institut für Physik, Newtonstraße 15, 12489 Berlin (Germany)]
2015-09-14
In this article we illustrate how event weights for jet events can be calculated efficiently at next-to-leading order (NLO) accuracy in QCD. This is a crucial prerequisite for the application of the Matrix Element Method in NLO. We modify the recombination procedure used in jet algorithms, to allow a factorisation of the phase space for the real corrections into resolved and unresolved regions. Using an appropriate infrared regulator the latter can be integrated numerically. As illustration, we reproduce differential distributions at NLO for two sample processes. As further application and proof of concept, we apply the Matrix Element Method in NLO accuracy to the mass determination of top quarks produced in e⁺e⁻ annihilation. This analysis is relevant for a future Linear Collider. We observe a significant shift in the extracted mass depending on whether the Matrix Element Method is used in leading or next-to-leading order.
Order 1/N corrections to the time-dependent Hartree approximation for a system of N+1 oscillators
International Nuclear Information System (INIS)
Mihaila, B.; Dawson, J.F.; Cooper, F.
1997-01-01
We solve numerically to order 1/N the time evolution of a quantum dynamical system of N oscillators of mass m coupled quadratically to a massless dynamic variable. We use Schwinger's closed time path formalism to derive the equations. We compare two methods which differ by terms of order 1/N². The first method is a direct perturbation theory in 1/N using the path integral. The second solves exactly the theory defined by the effective action to order 1/N. We compare the results of both methods as a function of N. At N=1, where we expect the expansion to be quite inaccurate, we compare our results to an exact numerical solution of the Schroedinger equation. In this case we find that when the two methods disagree they also diverge from the exact answer. We also find at N=1 that the 1/N-corrected evolutions track the exact answer for the expectation values much longer than the mean field (N=∞) result.
Prestack wavefield approximations
Alkhalifah, Tariq
2013-01-01
The double-square-root (DSR) relation offers a platform to perform prestack imaging using an extended single wavefield that honors the geometrical configuration between sources, receivers, and the image point, or in other words, prestack wavefields. Extrapolating such wavefields, nevertheless, suffers from limitations. Chief among them is the singularity associated with horizontally propagating waves. I have devised approximations free of such singularities that remain highly accurate. Specifically, I use Padé expansions with denominators given by a power series that is an order lower than that of the numerator, and thus introduce a free variable to balance the series order and normalize the singularity. For the higher-order Padé approximation, the errors are negligible. Additional simplifications, like recasting the DSR formula as a function of scattering angle, allow for a singularity-free form that is useful for constant-angle-gather imaging. A dynamic form of this DSR formula can be supported by kinematic evaluations of the scattering angle to provide efficient prestack wavefield construction. Applying a similar approximation to the dip angle yields an efficient 1D wave equation with the scattering and dip angles extracted from, for example, DSR ray tracing. Application to the complex Marmousi data set demonstrates that these approximations, although they may provide less than optimal results, allow for efficient and flexible implementations. © 2013 Society of Exploration Geophysicists.
Digital approximation to extended depth of field in non-telecentric imaging systems
International Nuclear Information System (INIS)
Meneses, J E; Contreras, C R
2011-01-01
A method to digitally extend the depth of field of an imaging system consists of moving the object of study along the optical axis of the system, so that different images contain different areas that are sharp; those images are stored and processed digitally to obtain a fused image in which all regions of the object are sharp. The implementation of this method, although widely used, imposes certain experimental conditions that should be evaluated in order to study the degree of validity of the final image obtained. One experimental condition concerns the conservation of the geometric magnification factor when there is relative movement between the object and the observation system; this implies that the system must be telecentric, which leads to a reduction of the field of view and the use of expensive systems if the application includes microscopic observation. This paper presents a technique that makes it possible to extend the depth of field of a non-telecentric imaging system; the system is used for applications in optical metrology with systems that have a large field of view.
A universal approximation to grain size from images of non-cohesive sediment
Buscombe, D.; Rubin, D.M.; Warrick, J.A.
2010-01-01
The two-dimensional spectral decomposition of an image of sediment provides a direct statistical estimate, grid-by-number style, of the mean of all intermediate axes of all single particles within the image. We develop and test this new method which, unlike existing techniques, requires neither image processing algorithms for detection and measurement of individual grains, nor calibration. The only information required of the operator is the spatial resolution of the image. The method is tested with images of bed sediment from nine different sedimentary environments (five beaches, three rivers, and one continental shelf), across the range 0.1 mm to 150 mm, taken in air and underwater. Each population was photographed using a different camera and lighting conditions. We term it a “universal approximation” because it has produced accurate estimates for all populations we have tested it with, without calibration. We use three approaches (theory, computational experiments, and physical experiments) to both understand and explore the sensitivities and limits of this new method. Based on 443 samples, the root-mean-squared (RMS) error between size estimates from the new method and known mean grain size (obtained from point counts on the image) was found to be ±≈16%, with a 95% probability of estimates within ±31% of the true mean grain size (measured in a linear scale). The RMS error reduces to ≈11%, with a 95% probability of estimates within ±20% of the true mean grain size if point counts from a few images are used to correct bias for a specific population of sediment images. It thus appears it is transferable between sedimentary populations with different grain size, but factors such as particle shape and packing may introduce bias which may need to be calibrated for. For the first time, an attempt has been made to mathematically relate the spatial distribution of pixel intensity within the image of sediment to the grain size.
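The idea of reading a characteristic size directly from an image's spectrum can be illustrated in one dimension: the dominant wavelength of an intensity profile is the length scale at the peak of its discrete Fourier power spectrum. A toy sketch (1-D, synthetic, and O(n²), unlike the paper's calibrated 2-D method for real sediment images; the function name is hypothetical):

```python
import math

def dominant_wavelength(signal):
    """Length scale (in samples) at the peak of the DFT power spectrum."""
    n = len(signal)
    best_k, best_power = 1, -1.0
    for k in range(1, n // 2 + 1):  # skip the DC term
        re = sum(signal[i] * math.cos(2 * math.pi * k * i / n) for i in range(n))
        im = sum(signal[i] * math.sin(2 * math.pi * k * i / n) for i in range(n))
        power = re * re + im * im
        if power > best_power:
            best_k, best_power = k, power
    return n / best_k

# A profile repeating every 16 samples has its spectral peak at wavelength 16.
profile = [math.sin(2 * math.pi * i / 16) for i in range(64)]
print(dominant_wavelength(profile))  # 16.0
```

Multiplying the recovered length scale by the image's spatial resolution (the one input the method requires) converts it to a physical grain size.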
International Nuclear Information System (INIS)
Potluri, U S; Madanayake, A; Rajapaksha, N; Cintra, R J; Bayer, F M
2012-01-01
Multi-beamforming is an important requirement for broadband space imaging applications based on dense aperture arrays (AAs). Usually, the discrete Fourier transform is the transform of choice for AA electromagnetic imaging. Here, the discrete cosine transform (DCT) is proposed as an alternative, enabling the use of emerging fast algorithms that offer greatly reduced complexity in digital arithmetic circuits. We propose two novel high-speed digital architectures for recently proposed fast algorithms (Bouguezel, Ahmad and Swamy 2008 Electron. Lett. 44 1249–50) (BAS-2008) and (Cintra and Bayer 2011 IEEE Signal Process. Lett. 18 579–82) (CB-2011) that provide good approximations to the DCT at zero multiplicative complexity. Further, we propose a novel DCT approximation having zero multiplicative complexity that is shown to be better for multi-beamforming AAs when compared to BAS-2008 and CB-2011. The far-field array pattern of ideal DCT, BAS-2008, CB-2011 and proposed approximation are investigated with error analysis. Extensive hardware realizations, implementation details and performance metrics are provided for synchronous field programmable gate array (FPGA) technology from Xilinx. The resource consumption and speed metrics of BAS-2008, CB-2011 and the proposed approximation are investigated as functions of system word size. The 8-bit versions are mapped to emerging asynchronous FPGAs leading to significantly increased real-time throughput with clock rates at up to 925.6 MHz implying the fastest DCT approximations using reconfigurable logic devices in the literature. (paper)
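The flavour of a multiplierless DCT approximation can be conveyed by rounding each entry of the (unscaled) 8-point DCT-II cosine matrix to the nearest integer, leaving only {-1, 0, 1} so the transform needs additions and subtractions alone. This is an illustrative rounding, not the actual BAS-2008 or CB-2011 matrices:

```python
import math

# Entries of the unscaled 8-point DCT-II matrix, rounded to {-1, 0, 1}.
T = [[round(math.cos(math.pi * k * (2 * n + 1) / 16)) for n in range(8)]
     for k in range(8)]

def approx_dct(x):
    """Multiplication-free approximate DCT: only additions/subtractions."""
    return [sum(T[k][n] * x[n] for n in range(8)) for k in range(8)]

# A constant (DC) input is concentrated entirely in the first coefficient.
print(approx_dct([1.0] * 8))  # [8.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
```

Because every matrix entry is -1, 0, or 1, a hardware realization needs no multipliers at all, which is the property that makes such approximations attractive for FPGA beamformers.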
Deterministic simulation of first-order scattering in virtual X-ray imaging
Energy Technology Data Exchange (ETDEWEB)
Freud, N. E-mail: nicolas.freud@insa-lyon.fr; Duvauchelle, P.; Pistrui-Maximean, S.A.; Letang, J.-M.; Babot, D
2004-07-01
A deterministic algorithm is proposed to compute the contribution of first-order Compton- and Rayleigh-scattered radiation in X-ray imaging. This algorithm has been implemented in a simulation code named virtual X-ray imaging. The physical models chosen to account for photon scattering are the well-known form factor and incoherent scattering function approximations, which are recalled in this paper and whose limits of validity are briefly discussed. The proposed algorithm, based on a voxel discretization of the inspected object, is presented in detail, as well as its results in simple configurations, which are shown to converge when the sampling steps are chosen sufficiently small. Simple criteria for choosing correct sampling steps (voxel and pixel size) are established. The order of magnitude of the computation time necessary to simulate first-order scattering images amounts to hours with a PC architecture and can even be decreased down to minutes, if only a profile is computed (along a linear detector). Finally, the results obtained with the proposed algorithm are compared to the ones given by the Monte Carlo code Geant4 and found to be in excellent accordance, which constitutes a validation of our algorithm. The advantages and drawbacks of the proposed deterministic method versus the Monte Carlo method are briefly discussed.
Analytical approximations of diving-wave imaging in constant-gradient medium
Stovas, Alexey; Alkhalifah, Tariq Ali
2014-01-01
behavior and traveltime in a constant-gradient medium to develop insights into the traveltime moveout of diving waves and the image (model) point dispersal (residual) when the wrong velocity is used. The explicit formulations that describe these phenomena
GARAULET, MARTA; ORDOVÁS, JOSÉ M.; GÓMEZ-ABELLÁN, PURIFICACIÓN; MARTÍNEZ, JOSE A.; MADRID, JUAN A.
2015-01-01
Although it is well established that human adipose tissue (AT) shows circadian rhythmicity, published studies have been discussed as if tissues or systems showed only one or a few circadian rhythms at a time. Our aim was to provide an overall view of the internal temporal order of circadian rhythms in human AT, including genes implicated in metabolic processes such as energy intake and expenditure, insulin resistance, adipocyte differentiation, dyslipidemia, and body fat distribution. Visceral and subcutaneous abdominal AT biopsies (n = 6) were obtained from morbidly obese women (BMI ≥ 40 kg/m2). To investigate the rhythmic expression pattern, AT explants were cultured for 24 h and gene expression was analyzed at the following times: 08:00, 14:00, 20:00 and 02:00 h using quantitative real-time PCR. Clock genes, glucocorticoid metabolism-related genes, leptin, adiponectin and their receptors were studied. Significant differences were found both in acrophases and relative amplitude among genes (P 30%). When interpreting the phase map of gene expression in both depots, the data indicated that the circadian rhythmicity of the genes studied followed a predictable physiological pattern, particularly for subcutaneous AT. Of interest are the relationships between the circadian profiles of adiponectin, leptin, and glucocorticoid metabolism-related genes; their metabolic significance is discussed. Visceral AT behaved differently from subcutaneous AT for most of the genes studied. For every gene, protein mRNA levels fluctuated during the day in synchrony with its receptors. We have provided an overall view of the internal temporal order of circadian rhythms in human adipose tissue. PMID:21520059
Sergeev, A.; Alharbi, F. H.; Jovanovic, R.; Kais, S.
2016-04-01
The gradient expansion of the kinetic energy density functional, when applied to atoms or finite systems, usually grossly overestimates the energy in the fourth order and generally diverges in the sixth order. We avoid the divergence of the integral by replacing the asymptotic series including the sixth order term in the integrand by a rational function. Padé approximants show moderate improvements in accuracy in comparison with partial sums of the series. The results are discussed for atoms and Hooke’s law model for two-electron atoms.
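The replacement of a divergent series by a rational function can be seen in the simplest case: matching the first three Taylor coefficients c0 + c1·x + c2·x² with a [1/1] Padé approximant (a0 + a1·x)/(1 + b1·x). A minimal sketch (generic, not the density-functional series of the paper; the function name is hypothetical):

```python
def pade_1_1(c0, c1, c2):
    """[1/1] Pade approximant matching c0 + c1*x + c2*x**2 + O(x**3).

    Matching powers of x gives b1 = -c2/c1, a0 = c0, a1 = c1 + b1*c0.
    Returns a callable (a0 + a1*x) / (1 + b1*x).
    """
    b1 = -c2 / c1
    a0, a1 = c0, c1 + b1 * c0
    return lambda x: (a0 + a1 * x) / (1.0 + b1 * x)

# exp(x) ~ 1 + x + x^2/2 yields the classic (1 + x/2) / (1 - x/2).
approx_exp = pade_1_1(1.0, 1.0, 0.5)
print(approx_exp(0.1))  # ~1.10526, vs exp(0.1) = 1.10517...
```

Unlike the truncated series, the rational form stays finite as its argument grows (away from the single pole), which is the same mechanism that tames the divergent sixth-order integrand in the abstract above.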
Evaluation of image reconstruction methods for 123I-MIBG-SPECT. A rank-order study
International Nuclear Information System (INIS)
Soederberg, Marcus; Mattsson, Soeren; Oddstig, Jenny; Uusijaervi-Lizana, Helena; Leide-Svegborn, Sigrid; Valind, Sven; Thorsson, Ola; Garpered, Sabine; Prautzsch, Tilmann; Tischenko, Oleg
2012-01-01
Background: There is an opportunity to improve the image quality and lesion detectability in single photon emission computed tomography (SPECT) by choosing an appropriate reconstruction method and optimal parameters for the reconstruction. Purpose: To optimize the use of the Flash 3D reconstruction algorithm in terms of equivalent iteration (EI) number (number of subsets times the number of iterations) and to compare with two recently developed reconstruction algorithms, ReSPECT and orthogonal polynomial expansion on disc (OPED), for application to ¹²³I-metaiodobenzylguanidine (MIBG)-SPECT. Material and Methods: Eleven adult patients underwent SPECT 4 h and 14 patients 24 h after injection of approximately 200 MBq ¹²³I-MIBG using a Siemens Symbia T6 SPECT/CT. Images were reconstructed from raw data using the Flash 3D algorithm at eight different EI numbers. The images were ranked by three experienced nuclear medicine physicians according to their overall impression of the image quality. The obtained optimal images were then compared in one further visual comparison with images reconstructed using the ReSPECT and OPED algorithms. Results: The optimal EI number for Flash 3D was determined to be 32 for acquisition 4 h and 24 h after injection. The average rank order (best first) for the different reconstructions for acquisition after 4 h was: Flash 3D 32 > ReSPECT > Flash 3D 64 > OPED, and after 24 h: Flash 3D 16 > ReSPECT > Flash 3D 32 > OPED. A fair level of inter-observer agreement concerning optimal EI number and reconstruction algorithm was obtained, which may be explained by the different individual preferences of what is appropriate image quality. Conclusion: Using the Siemens Symbia T6 SPECT/CT and the specified acquisition parameters, Flash 3D 32 (4 h) and Flash 3D 16 (24 h), followed by ReSPECT, were assessed to be the preferable reconstruction algorithms in visual assessment of ¹²³I-MIBG images.
Seismic Imaging and Velocity Analysis Using a Pseudo Inverse to the Extended Born Approximation
Alali, Abdullah A.
2018-05-01
Prestack depth migration requires an accurate kinematic velocity model to image the subsurface correctly. Wave equation migration velocity analysis techniques aim to update the background velocity model by minimizing image residuals to achieve the correct model. The most commonly used technique is differential semblance optimization (DSO), which depends on applying an image extension and penalizing the energy in the non-physical extension. However, studies show that the conventional DSO gradient is contaminated with artifact noise and unwanted oscillations which might lead to local minima. To deal with this issue and improve the stability of DSO, recent studies proposed to use an inversion formula rather than migration to obtain the image. Migration is defined as the adjoint of Born modeling. Since the inversion is complicated and expensive, a pseudo inverse is used instead. A pseudo inverse formula has been developed recently for the horizontal space shift extended Born. This formula preserves the true amplitude and reduces the artifact noise even when an incorrect velocity is used. Although the theory for such an inverse is well developed, it has only been derived and tested on laterally homogeneous models. This is because the formula contains a derivative of the image with respect to a vertical extension evaluated at zero offset. Implementing the vertical extension is computationally expensive, which means this derivative needs to be computed without applying the additional extension. For laterally invariant models, the inverse is simplified and this derivative is eliminated. I implement the full asymptotic inverse to the extended Born to account for lateral heterogeneity. I compute the derivative of the image with respect to a vertical extension without performing any additional shift. This is accomplished by applying the derivative to the imaging condition and utilizing the chain rule. The fact that this derivative is evaluated at zero offset vertical
Vitality Detection in Face Images using Second Order Gradient
Aruni Singh
2012-01-01
Spoofing is a very big challenge in biometrics, especially for face images. Many artificial techniques are available to tamper with or hide the original face. This research contributes to ensuring the actual presence of a live face image, in contrast to a fake face image. The intended purpose of the proposed approach is also to endorse biometric authentication by joining liveness awareness with Facial Recognition Technology (FRT). In this research 200 dummy face images and 200 real face ima...
Approximate inverse for the common offset acquisition geometry in 2D seismic imaging
Grathwohl, Christine; Kunstmann, Peer; Quinto, Eric Todd; Rieder, Andreas
2018-01-01
We explore how the concept of approximate inverse can be used and implemented to recover singularities in the sound speed from common offset measurements in two space dimensions. Numerical experiments demonstrate the performance of the method. We gratefully acknowledge financial support by the Deutsche Forschungsgemeinschaft (DFG) through CRC 1173. Quinto additionally thanks the Otto Mønsteds Fond and U.S. National Science Foundation (under grants DMS 1311558 and DMS 1712207) for their support. He thanks colleagues at DTU and KIT for their warm hospitality while this research was being done.
Theoretical scheme of thermal-light many-ghost imaging by Nth-order intensity correlation
International Nuclear Information System (INIS)
Liu Yingchuan; Kuang Leman
2011-01-01
In this paper, we propose a theoretical scheme of many-ghost imaging in terms of Nth-order correlated thermal light. We obtain the Gaussian thin lens equations in the many-ghost imaging protocol. We show that it is possible to produce N-1 ghost images of an object at different places in a nonlocal fashion by means of a higher order correlated imaging process with an Nth-order correlated thermal source and correlation measurements. We investigate the visibility of the ghost images in the scheme and obtain the upper bounds of the visibility for the Nth-order correlated thermal-light ghost imaging. It is found that the visibility of the ghost images can be dramatically enhanced when the order of correlation becomes larger. It is pointed out that the many-ghost imaging phenomenon is an observable physical effect induced by higher order coherence or higher order correlations of optical fields.
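The statistical fact underlying the scheme, that the normalized Nth-order intensity correlation of thermal light at zero delay equals N!, is easy to verify numerically. The sketch below is illustrative only: it simulates single-detector classical intensity statistics, not the authors' nonlocal two-arm imaging protocol, and the function names are ours.

```python
import numpy as np

def thermal_intensity_samples(n_samples, mean=1.0, seed=None):
    # Chaotic (thermal) light has an exponentially distributed intensity.
    rng = np.random.default_rng(seed)
    return rng.exponential(mean, n_samples)

def nth_order_correlation(I, N):
    # Normalized equal-time Nth-order intensity correlation at one detector:
    # g^(N)(0) = <I**N> / <I>**N; for thermal light this equals N!.
    I = np.asarray(I, dtype=float)
    return (I ** N).mean() / I.mean() ** N
```

With a few hundred thousand samples, the estimate approaches 2! = 2 for N = 2 and 3! = 6 for N = 3, consistent with the visibility enhancement at higher correlation orders discussed in the abstract.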
Analytical approximations of diving-wave imaging in constant-gradient medium
Stovas, Alexey
2014-06-24
Full-waveform inversion (FWI) in practical applications is currently used to invert the direct arrivals (diving waves, no reflections) using relatively long offsets. This is driven mainly by the high nonlinearity introduced to the inversion problem when reflection data are included, which in some cases require extremely low frequency for convergence. However, analytical insights into diving waves have lagged behind this sudden interest. We use analytical formulas that describe the diving wave’s behavior and traveltime in a constant-gradient medium to develop insights into the traveltime moveout of diving waves and the image (model) point dispersal (residual) when the wrong velocity is used. The explicit formulations that describe these phenomena reveal the high dependence of diving-wave imaging on the gradient and the initial velocity. The analytical image point residual equation can be further used to scan for the best-fit linear velocity model, which is now becoming a common sight as an initial velocity model for FWI. We determined the accuracy and versatility of these analytical formulas through numerical tests.
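For a constant-gradient medium v(z) = v0 + g*z, diving rays travel on circular arcs, and their turning depth, offset, and traveltime have standard closed forms. The sketch below illustrates the kind of analytical relations the paper builds on; it is not the paper's image-point residual equation, and the function names are ours.

```python
import numpy as np

def turning_depth(p, v0, g):
    # A diving ray with ray parameter p turns where v(z) = 1/p.
    return (1.0 / p - v0) / g

def diving_wave_offset(p, v0, g):
    # Source-receiver offset of the circular ray arc in v(z) = v0 + g*z.
    return 2.0 * np.sqrt(1.0 - (p * v0) ** 2) / (p * g)

def diving_wave_traveltime(p, v0, g):
    # Two-way traveltime along the same arc.
    return (2.0 / g) * np.arccosh(1.0 / (p * v0))
```

For example, with v0 = 2000 m/s and g = 0.5 1/s, a ray turning at 3000 m/s (p = 1/3000) bottoms out at 2000 m depth and emerges at roughly 8.9 km offset.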
DEFF Research Database (Denmark)
Köylüoglu, H. U.; Nielsen, Søren R. K.; Cakmak, A. S.
Geometrically non-linear multi-degree-of-freedom (MDOF) systems subject to random excitation are considered. New semi-analytical approximate forward difference equations for the lower order non-stationary statistical moments of the response are derived from the stochastic differential equations of motion, and the accuracy of these equations is numerically investigated. For stationary excitations, the proposed method computes the stationary statistical moments of the response from the solution of non-linear algebraic equations.
Statistical image reconstruction for transmission tomography using relaxed ordered subset algorithms
International Nuclear Information System (INIS)
Kole, J S
2005-01-01
Statistical reconstruction methods offer possibilities for improving image quality as compared to analytical methods, but current reconstruction times prohibit routine clinical application in x-ray computed tomography (CT). To reduce reconstruction times, we have applied (under-)relaxation to ordered subset algorithms. This enables us to use subsets consisting of only a single projection angle, effectively increasing the number of image updates within an entire iteration. A second advantage of applying relaxation is that it can help improve convergence by removing the limit cycle behaviour of ordered subset algorithms, which normally do not converge to an optimal solution but rather to a suboptimal limit cycle consisting of as many points as there are subsets. Relaxation suppresses the limit cycle behaviour by decreasing the step size for approaching the solution. A simulation study for a 2D mathematical phantom and three different ordered subset algorithms shows that all three algorithms benefit from relaxation: an equal noise-to-resolution trade-off can be achieved using fewer iterations than with the conventional algorithms, while a lower minimal normalized mean square error (NMSE) clearly indicates better convergence. Two different schemes for setting the relaxation parameter are studied, and both schemes yield approximately the same minimal NMSE.
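The core mechanism, shrinking the update step with iteration number so that cyclic subset updates stop orbiting a limit cycle, can be sketched with a simple relaxed block-Kaczmarz iteration on a linear system. This is a linear-algebra stand-in for the statistical transmission algorithms in the paper, with a decay schedule and function names of our own choosing.

```python
import numpy as np

def relaxed_os_kaczmarz(A, b, subsets, n_iter=300, lam0=1.0, tau=100.0):
    """Relaxed ordered-subset iteration (block Kaczmarz) for A x = b.
    The relaxation factor lam_k = lam0 / (1 + k/tau) decays with the
    subiteration counter k, damping the limit cycle that a fixed-step
    ordered-subset scheme would settle into."""
    x = np.zeros(A.shape[1])
    k = 0
    for _ in range(n_iter):
        for idx in subsets:            # one pass over all subsets = one iteration
            Ai, bi = A[idx], b[idx]
            lam = lam0 / (1.0 + k / tau)
            # relaxed projection of x onto the solution set of this block
            x = x + lam * Ai.T @ np.linalg.solve(Ai @ Ai.T, bi - Ai @ x)
            k += 1
    return x
```

On a consistent system the iterate approaches the exact solution; with a fixed lam the final accuracy would instead be limited by a cycle with one point per subset.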
International Nuclear Information System (INIS)
Khwaja, F.A.
1980-08-01
The calculations for the temperature dependence of the first-shell short-range order (SRO) parameter for Ni3Fe using the cubic approximation of Tahir-Kheli, and the concentration dependence of the order-disorder temperature Tsub(c) for Ni-Fe and Ni-Pt systems using the linear approximation, have been carried out in the framework of pseudopotential theory. It is shown that the cubic approximation yields good agreement between the theoretical prediction of the α1 parameter and the experimental data. Results for the concentration dependence of Tsub(c) show that improvements in the statistical pseudopotential approach are essential to achieve good agreement with experiment. (author)
Ordering of diagnostic information in encoded medical images. Accuracy progression
Przelaskowski, A.; Jóźwiak, R.; Krzyżewski, T.; Wróblewska, A.
2008-03-01
A concept of diagnostic accuracy progression for embedded coding of medical images was presented. Implementation of JPEG2000 encoder with a modified PCRD optimization algorithm was realized and initially verified as a tool for accurate medical image streaming. Mean square error as a distortion measure was replaced by other numerical measures to revise quality progression according to diagnostic importance of successively encoded image information. A faster increment of image diagnostic importance during reconstruction of initial packets of code stream was reached. Modified Jasper code was initially tested on a set of mammograms containing clusters of microcalcifications and malignant masses, and other radiograms. Teleradiologic applications were considered as the first area of interests.
A new approximate algorithm for image reconstruction in cone-beam spiral CT at small cone-angles
International Nuclear Information System (INIS)
Schaller, S.; Flohr, T.; Steffen, P.
1996-01-01
This paper presents a new approximate algorithm for image reconstruction with cone-beam spiral CT data at relatively small cone angles. Based on the algorithm of Wang et al., our method combines a special complementary interpolation with filtered backprojection. The presented algorithm has three main advantages over Wang's algorithm: (1) it overcomes the pitch limitation of Wang's algorithm; (2) it significantly improves z-resolution when suitable sampling schemes are applied; (3) it avoids the waste of applied radiation dose inherent in Wang's algorithm. Usage of the total applied dose is an important requirement in medical imaging. Our method has been implemented on a standard workstation. Reconstructions of computer-simulated data of different phantoms, assuming sampling conditions and image quality requirements typical of medical CT, show encouraging results.
Ideologeme "Order" in Modern American Linguistic World Image
Ibatova, Aygul Z.; Vdovichenko, Larisa V.; Ilyashenko, Lubov K.
2016-01-01
The paper studies the topic of modern American linguistic world image. It is known that any language is the most important instrument of cognition of the world by a person but there is also no doubt that any language is the way of perception and conceptualization of this knowledge about the world. In modern linguistics linguistic world image is…
International Nuclear Information System (INIS)
Badano, Aldo; Freed, Melanie; Fang Yuan
2011-01-01
Purpose: The authors describe the modifications to a previously developed analytical model of indirect CsI:Tl-based detector response required for studying oblique x-ray incidence effects in direct semiconductor-based detectors. This first-order approximation analysis allows the authors to describe the associated degradation in resolution in direct detectors and compare the predictions to the published data for indirect detectors. Methods: The proposed model is based on a physics-based analytical description developed by Freed et al. [''A fast, angle-dependent, analytical model of CsI detector response for optimization of 3D x-ray breast imaging systems,'' Med. Phys. 37(6), 2593-2605 (2010)] that describes detector response functions for indirect detectors and oblique incident x rays. The model, modified in this work to address direct detector response, describes the dependence of the response with x-ray energy, thickness of the transducer layer, and the depth-dependent blur and collection efficiency. Results: The authors report the detector response functions for indirect and direct detector models for typical thicknesses utilized in clinical systems for full-field digital mammography (150 μm for indirect CsI:Tl and 200 μm for a-Se direct detectors). The results suggest that the oblique incidence effect in a semiconductor detector differs from that in indirect detectors in two ways: The direct detector model produces a sharper overall PRF compared to the response corresponding to the indirect detector model for normal x-ray incidence and a larger relative increase in blur along the x-ray incidence direction compared to that found in indirect detectors with respect to the response at normal incidence angles. Conclusions: Compared to the effect seen in indirect detectors, the direct detector model exhibits a sharper response at normal x-ray incidence and a larger relative increase in blur along the x-ray incidence direction with respect to the blur in the
Second order statistical analysis of US image texture
International Nuclear Information System (INIS)
Tanzi, F.; Novario, R.
1999-01-01
This study characterizes the sonographic image texture of the neonatal heart at different stages of development by calculating numerical parameters extracted from the gray-scale co-occurrence matrix. To bring out pixel-value differences and enhance texture structure, images were equalized and the gray-level range was then reduced to 16 to allow a sufficiently high occupancy frequency of the co-occurrence matrix. The observed differences are of such limited significance that they may be due to other factors affecting image texture and to the variability introduced by manual ROI positioning; therefore, no definitive conclusions can be drawn as to whether this kind of analysis can discriminate different stages of myocardial development.
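The analysis pipeline described here, requantize to 16 gray levels and extract scalar features from the co-occurrence matrix, can be sketched as below. The quantization rule and the two features shown (contrast and energy) are standard Haralick-style choices and are assumptions on our part, not the study's exact parameter set.

```python
import numpy as np

def glcm(img, levels=16, dx=1, dy=0, symmetric=True):
    """Normalized gray-level co-occurrence matrix for pixel pairs at
    offset (dy, dx), after requantizing the image to `levels` gray
    values so the matrix is densely occupied."""
    img = np.asarray(img)
    q = (img.astype(float) / (img.max() + 1) * levels).astype(int)
    h, w = q.shape
    a = q[max(0, -dy):h - max(0, dy), max(0, -dx):w - max(0, dx)]
    b = q[max(0, dy):h - max(0, -dy), max(0, dx):w - max(0, -dx)]
    P = np.zeros((levels, levels))
    np.add.at(P, (a.ravel(), b.ravel()), 1.0)   # count co-occurring pairs
    if symmetric:
        P = P + P.T
    return P / P.sum()

def glcm_features(P):
    i, j = np.indices(P.shape)
    return {"contrast": ((i - j) ** 2 * P).sum(),  # local gray-level variation
            "energy": (P ** 2).sum()}             # textural uniformity
```

For a small two-level step image the contrast works out to 1/3 and the energy to 5/18, which is easy to confirm by hand from the 2x2 matrix.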
International Nuclear Information System (INIS)
Kaoutar, M.
1986-09-01
After a survey of the main algorithms for piecewise linear approximation, a new method is suggested. It consists of two stages: a sequential detection stage and an optimization stage, which derives from the general dynamic clustering principle. It is applied to control-rod step counting in a nuclear reactor core and to the characterization of image contours. Another version of the method is also presented; its originality comes from letting the number of line segments vary during the iterations. A comparative study, in which the results of the proposed method are compared with those of existing methods, attests to its efficiency and reliability.
Directory of Open Access Journals (Sweden)
Wei Li
2012-01-01
An extended finite element method (XFEM) for the forward model of 3D optical molecular imaging is developed with the simplified spherical harmonics approximation (SPN). In the XFEM scheme of the SPN equations, the signed distance function is employed to accurately represent the internal tissue boundary, and it is then used to construct the enriched basis functions of the finite element scheme. The finite element calculation can therefore be carried out without time-consuming internal boundary mesh generation. Moreover, the overly fine mesh that would otherwise be required to conform to the complex tissue boundary, at excess time cost, can be avoided. XFEM thus eases application to tissues with complex internal structure and improves computational efficiency. Phantom and digital mouse experiments were carried out to validate the efficiency of the proposed method. Compared with the standard finite element method and the classical Monte Carlo (MC) method, the validation results show the merits and potential of XFEM for optical imaging.
International Nuclear Information System (INIS)
Cheng, J-C; Rahmim, Arman; Blinder, Stephan; Camborde, Marie-Laure; Raywood, Kelvin; Sossi, Vesna
2007-01-01
We describe an ordinary Poisson list-mode expectation maximization (OP-LMEM) algorithm with a sinogram-based scatter correction method based on the single scatter simulation (SSS) technique and a random correction method based on the variance-reduced delayed-coincidence technique. We also describe a practical approximate scatter and random-estimation approach for dynamic PET studies based on a time-averaged scatter and random estimate followed by scaling according to the global numbers of true coincidences and randoms for each temporal frame. The quantitative accuracy achieved using OP-LMEM was compared to that obtained using the histogram-mode 3D ordinary Poisson ordered subset expectation maximization (3D-OP) algorithm with similar scatter and random correction methods, and they showed excellent agreement. The accuracy of the approximated scatter and random estimates was tested by comparing time activity curves (TACs) as well as the spatial scatter distribution from dynamic non-human primate studies obtained from the conventional (frame-based) approach and those obtained from the approximate approach. An excellent agreement was found, and the time required for the calculation of scatter and random estimates in the dynamic studies became much less dependent on the number of frames (we achieved a nearly four times faster performance on the scatter and random estimates by applying the proposed method). The precision of the scatter fraction was also demonstrated for the conventional and the approximate approach using phantom studies
Moore, Christopher; Marchant, Thomas
2017-07-12
Reconstructive volumetric imaging permeates medical practice because of its apparently clear depiction of anatomy. However, the tell tale signs of abnormality and its delineation for treatment demand experts work at the threshold of visibility for hints of structure. Hitherto, a suitable assistive metric that chimes with clinical experience has been absent. This paper develops the complexity measure approximate entropy (ApEn) from its 1D physiological origin into a three-dimensional (3D) algorithm to fill this gap. The first 3D algorithm for this is presented in detail. Validation results for known test arrays are followed by a comparison of fan-beam and cone-beam x-ray computed tomography image volumes used in image guided radiotherapy for cancer. Results show the structural detail down to individual voxel level, the strength of which is calibrated by the ApEn process itself. The potential for application in machine assisted manual interaction and automated image processing and interrogation, including radiomics associated with predictive outcome modeling, is discussed.
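Pincus' original 1D approximate entropy, the quantity this paper generalizes to 3D image volumes, can be sketched as follows. The embedding dimension m and tolerance r used here are conventional illustrative choices, not the paper's 3D parameterization.

```python
import numpy as np

def apen(U, m=2, r=0.2):
    """Approximate entropy (ApEn) of a 1D series: the paper extends
    this construction from physiological time series to 3D volumes.
    Low values indicate regular, predictable structure."""
    U = np.asarray(U, dtype=float)
    N = len(U)

    def phi(m):
        # Embed the series as overlapping length-m templates.
        X = np.array([U[i:i + m] for i in range(N - m + 1)])
        # Chebyshev (max-abs) distance between every template pair.
        D = np.max(np.abs(X[:, None, :] - X[None, :, :]), axis=2)
        # Fraction of templates within tolerance r of each template
        # (self-matches included, as in the original definition).
        C = (D <= r).mean(axis=1)
        return np.log(C).mean()

    return phi(m) - phi(m + 1)
```

A perfectly periodic series scores near zero, while an irregular random series scores substantially higher, which is the contrast the 3D version exploits to grade structural detail voxel by voxel.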
Chan, Der-Sheng; Chau, Yuan-Fong
2013-01-01
An innovative fabrication process for a photonic crystal composed of an approximately ordered array of laurel-crown-like structures, employing an anodic aluminum oxide (AAO) template, is presented. We found that the intensity of the electric field is affected by the microstructure and surface morphology of the aluminum foil after etching the scalloped barrier oxide layer (BOL). In addition, the electric current is strongly dependent on the electric field distribution in the scalloped BOL at the pore bottoms. By using a different step potential (DSP) of 30-60 V in series, the proposed photonic crystal is fabricated and possesses a large complete photonic bandgap.
Izsák, Róbert; Neese, Frank
2013-07-01
The 'chain of spheres' approximation, developed earlier for the efficient evaluation of the self-consistent field exchange term, is introduced here into the evaluation of the external exchange term of higher order correlation methods. Its performance is studied in the specific case of the spin-component-scaled third-order Møller-Plesset perturbation (SCS-MP3) theory. The results indicate that the approximation performs excellently in terms of both computer time and achievable accuracy. Significant speedups over a conventional method are obtained for larger systems and basis sets. Owing to this development, SCS-MP3 calculations on molecules of the size of penicillin (42 atoms) with a polarised triple-zeta basis set can be performed in ∼3 hours using 16 cores of an Intel Xeon E7-8837 processor with a 2.67 GHz clock speed, which represents a speedup by a factor of 8-9 compared to the previously most efficient algorithm. Thus, the increased accuracy offered by SCS-MP3 can now be explored for at least medium-sized molecules.
Higher-order Spatial Accuracy in Diffeomorphic Image Registration
DEFF Research Database (Denmark)
Jacobs, Henry O.; Sommer, Stefan
We discretize a cost functional for image registration problems by deriving Taylor expansions for the matching term. Minima of the discretized cost functionals can be computed with no spatial discretization error, and the optimal solutions are equivalent to minimal energy curves in the space of k-jets. We show that the solutions converge to optimal solutions of the original cost functional as the number of particles increases, with a convergence rate of O(h^(d+k)), where h is a resolution parameter. The effect of this approach over traditional particle methods is illustrated on synthetic examples.
Chen, Weitian; Sica, Christopher T; Meyer, Craig H
2008-11-01
Off-resonance effects can cause image blurring in spiral scanning and various forms of image degradation in other MRI methods. Off-resonance effects can be caused by both B0 inhomogeneity and concomitant gradient fields. Previously developed off-resonance correction methods focus on the correction of a single source of off-resonance. This work introduces a computationally efficient method of correcting for B0 inhomogeneity and concomitant gradients simultaneously. The method is a fast alternative to conjugate phase reconstruction, with the off-resonance phase term approximated by Chebyshev polynomials. The proposed algorithm is well suited for semiautomatic off-resonance correction, which works well even with an inaccurate or low-resolution field map. The proposed algorithm is demonstrated using phantom and in vivo data sets acquired by spiral scanning. Semiautomatic off-resonance correction alone is shown to provide a moderate amount of correction for concomitant gradient field effects, in addition to B0 inhomogeneity effects. However, better correction is provided by the proposed combined method. The best results were produced using the semiautomatic version of the proposed combined method.
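The central numerical idea, replacing the off-resonance phase factor exp(i*w*t) by a short Chebyshev series so that conjugate-phase reconstruction reduces to a few fast passes, can be illustrated in isolation. The phase range and degree below are our own illustrative choices, and the real and imaginary parts are fitted separately for simplicity.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def phase_term_cheb(deg=10, phase_range=np.pi, n=1001):
    """Fit exp(i*x) over |x| <= phase_range (x = accrued phase w*t)
    with degree-`deg` Chebyshev series in the normalized variable
    t = x / phase_range. Returns the coefficient pair and the
    maximum absolute fit error over the sampled range."""
    t = np.linspace(-1.0, 1.0, n)
    x = phase_range * t
    cr = C.chebfit(t, np.cos(x), deg)   # real part
    ci = C.chebfit(t, np.sin(x), deg)   # imaginary part
    approx = C.chebval(t, cr) + 1j * C.chebval(t, ci)
    err = np.max(np.abs(approx - np.exp(1j * x)))
    return cr, ci, err
```

A degree-10 fit already approximates the phase factor to better than 1e-3 over a full +/- pi of accrued phase, which is why a handful of terms suffices in practice.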
A new color image encryption scheme using CML and a fractional-order chaotic system.
Directory of Open Access Journals (Sweden)
Xiangjun Wu
The chaos-based image cryptosystems have been widely investigated in recent years to provide real-time encryption and transmission. In this paper, a novel color image encryption algorithm by using coupled-map lattices (CML) and a fractional-order chaotic system is proposed to enhance the security and robustness of the encryption algorithms with a permutation-diffusion structure. To make the encryption procedure more confusing and complex, an image division-shuffling process is put forward, where the plain-image is first divided into four sub-images, and then the position of the pixels in the whole image is shuffled. In order to generate initial conditions and parameters of two chaotic systems, a 280-bit long external secret key is employed. The key space analysis, various statistical analysis, information entropy analysis, differential analysis and key sensitivity analysis are introduced to test the security of the new image encryption algorithm. The cryptosystem speed is analyzed and tested as well. Experimental results confirm that, in comparison to other image encryption schemes, the new algorithm has higher security and is fast for practical image encryption. Moreover, an extensive tolerance analysis of some common image processing operations such as noise adding, cropping, JPEG compression, rotation, brightening and darkening, has been performed on the proposed image encryption technique. Corresponding results reveal that the proposed image encryption method has good robustness against some image processing operations and geometric attacks.
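The permutation-diffusion structure such schemes share can be demonstrated with a deliberately minimal toy: a logistic map stands in for the paper's CML/fractional-order system, the chaotic sequence orders a pixel permutation, and an XOR mask plays the diffusion role. All names and parameters are ours; this sketch has none of the security properties claimed for the actual cryptosystem.

```python
import numpy as np

def logistic_stream(x0, n, mu=3.99):
    # Chaotic keystream from the logistic map (stand-in for CML +
    # fractional-order chaos; NOT cryptographically secure).
    xs = np.empty(n)
    x = x0
    for i in range(n):
        x = mu * x * (1.0 - x)
        xs[i] = x
    return xs

def encrypt(img, x0=0.3141592):
    flat = img.astype(np.uint8).ravel()
    ks = logistic_stream(x0, flat.size)
    perm = np.argsort(ks)                    # permutation (shuffling) stage
    shuffled = flat[perm]
    key = (ks * 256).astype(np.uint8)
    cipher = np.bitwise_xor(shuffled, key)   # XOR masking ("diffusion") stage
    return cipher.reshape(img.shape), perm

def decrypt(cipher, perm, x0=0.3141592):
    ks = logistic_stream(x0, cipher.size)
    key = (ks * 256).astype(np.uint8)
    shuffled = np.bitwise_xor(cipher.ravel(), key)
    flat = np.empty_like(shuffled)
    flat[perm] = shuffled                    # undo the permutation
    return flat.reshape(cipher.shape)
```

Round-tripping any image through encrypt and decrypt with the same x0 (the secret) recovers it exactly, while the ciphertext bears no resemblance to the plaintext.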
Davies, Patrick Laurie
2014-01-01
Contents: Introduction; Approximate Models; Notation; Two Modes of Statistical Analysis; Towards One Mode of Analysis; Approximation, Randomness, Chaos, Determinism; A Concept of Approximation; Approximating a Data Set by a Model; Approximation Regions; Functionals and Equivariance; Regularization and Optimality; Metrics and Discrepancies; Strong and Weak Topologies; On Being (almost) Honest; Simulations and Tables; Degree of Approximation and p-values; Scales; Stability of Analysis; The Choice of En(α, P); Independence; Procedures, Approximation and Vagueness; Discrete Models; The Empirical Density; Metrics and Discrepancies; The Total Variation Metric; The Kullback-Leibler and Chi-Squared Discrepancies; The Po(λ) Model; The b(k, p) and nb(k, p) Models; The Flying Bomb Data; The Student Study Times Data; Outliers; Outliers, Data Analysis and Models; Breakdown Points and Equivariance; Identifying Outliers and Breakdown; Outliers in Multivariate Data; Outliers in Linear Regression; Outliers in Structured Data; The Location...
Simon, P.; Semboloni, E.; van Waerbeke, L.; Hoekstra, H.; Erben, T.; Fu, L.; Harnois-Déraps, J.; Heymans, C.; Hildebrandt, H.; Kilbinger, M.; Kitching, T. D.; Miller, L.; Schrabback, T.
2015-05-01
We study the correlations of the shear signal between triplets of sources in the Canada-France-Hawaii Telescope Lensing Survey (CFHTLenS) to probe cosmological parameters via the matter bispectrum. In contrast to previous studies, we adopt a non-Gaussian model of the data likelihood which is supported by our simulations of the survey. We find that for state-of-the-art surveys, similar to CFHTLenS, a Gaussian likelihood analysis is a reasonable approximation, albeit small differences in the parameter constraints are already visible. For future surveys we expect that a Gaussian model becomes inaccurate. Our algorithm for a refined non-Gaussian analysis and data compression is then of great utility, especially because it is not much more elaborate if simulated data are available. Applying this algorithm to the third-order correlations of shear alone in a blind analysis, we find good agreement with the standard cosmological model: Σ_8 = σ_8(Ω_m/0.27)^0.64 = 0.79 (+0.08, -0.11) for a flat Λ cold dark matter cosmology with h = 0.7 ± 0.04 (68 per cent credible interval). Nevertheless, our models provide only moderately good fits, as indicated by χ²/dof = 2.9, including a 20 per cent rms uncertainty in the predicted signal amplitude. The models cannot explain a signal drop on scales around 15 arcmin, which may be caused by systematics. It is unclear whether the discrepancy can be fully explained by residual point spread function systematics, of which we find evidence at least on scales of a few arcmin. Therefore we need a better understanding of higher-order correlations of cosmic shear and their systematics to confidently apply them as cosmological probes.
Theoretical analysis of dynamic chemical imaging with lasers using high-order harmonic generation
International Nuclear Information System (INIS)
Van-Hoang Le; Anh-Thu Le; Xie Ruihua; Lin, C. D.
2007-01-01
We report theoretical investigations of the tomographic procedure suggested by Itatani et al. [Nature (London) 432, 867 (2004)] for reconstructing highest occupied molecular orbitals (HOMOs) using high-order harmonic generation (HHG). Due to the limited range of harmonics from the plateau region, we found that even under the most favorable assumptions, it is still very difficult to obtain accurate HOMO wave functions using the tomographic procedure, but the symmetry of the HOMOs and the internuclear separation between the atoms can be accurately extracted, especially when lasers of longer wavelengths are used to generate the HHG. Since the tomographic procedure relies on approximating the continuum wave functions in the recombination process by plane waves, the method can no longer be applied upon the improvement of the theory. For future chemical imaging with lasers, we suggest that one may want to focus on how to extract the positions of atoms in molecules instead, by developing an iterative method such that the theoretically calculated macroscopic HHG spectra can best fit the experimental HHG data
Short, Daniel J.
be applied to characterize the refractive effects. To help with the time-lapse image refraction analysis process, a second order ray trace scheme has been developed. The ray trace is based on existing lens system tracing procedures, but is adapted for use with the atmospheric refractivity profile. The standard practice of ray tracing uses linear approximations through each element to obtain a ray path; however, the method described in this dissertation uses a quadratic correction term in order to more accurately and efficiently simulate the curvature of rays as they propagate through a gradient refractive index medium such as the atmosphere. Although a variety of finite element solutions have been implemented to describe ray trajectories in nonlinear refractive mediums, the new ray tracer described here is much easier to implement and provides quick, intuitive results. The method is tested against exact analytical ray height solutions for known profiles and was found to give nearly identical results. The ray trace was then applied to real atmospheric data and was found to give plausible results. The ray trace gives a visual aid in understanding the physical path the light takes in traversing the potential field. This will be beneficial in linking optical data to weather model data in an effort to develop a forecasting model for refraction. By selecting the correct boundary and initial conditions, we are able to model rays through the profile. Understanding the system will ultimately help in later analysis. A primary objective of this dissertation is to expand on the work mentioned above on image dislocation and consider the effects of towering (stretching) and stooping (compression) in the imagery. These effects can be explained as a type of lensing by the atmosphere due to nonlinear gradients. To achieve towering and stooping, a curved vertical index profile is required. Where a positive lensing action by the medium causes some ray focusing, back projection from at
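A second-order marching step of the kind described, a straight-line segment plus a quadratic curvature correction per step, can be sketched for a near-horizontal ray in a vertically stratified index profile. The paraxial curvature c = (1/n) dn/dy and the stepping scheme below are a generic illustration of the idea, not the dissertation's exact tracer.

```python
import numpy as np

def trace_ray(y0, slope0, n_of_y, dn_dy, x_max, dx):
    """March a near-horizontal ray through a stratified refractive-index
    profile n(y). Each step advances the height by a linear term plus a
    quadratic correction 0.5*c*dx**2, where c = (1/n) dn/dy is the
    paraxial ray curvature; the slope is updated consistently."""
    ys = [y0]
    y, s = y0, slope0
    for _ in range(int(round(x_max / dx))):
        c = dn_dy(y) / n_of_y(y)       # local ray curvature
        y = y + s * dx + 0.5 * c * dx ** 2
        s = s + c * dx
        ys.append(y)
    return np.array(ys)
```

For a weak linear gradient n(y) = 1 + 1e-4*y the analytic paraxial path is the parabola y = 0.5e-4 * x^2, and the quadratic-step tracer reproduces it essentially exactly even with coarse steps.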
Fractional-Order Total Variation Image Restoration Based on Primal-Dual Algorithm
Chen, Dali; Chen, YangQuan; Xue, Dingyu
2013-01-01
This paper proposes a fractional-order total variation image denoising algorithm based on the primal-dual method, which provides a much more elegant and effective way of treating problems of the algorithm implementation, ill-posed inverse, convergence rate, and blocky effect. The fractional-order total variation model is introduced by generalizing the first-order model, and the corresponding saddle-point and dual formulation are constructed in theory. In order to guarantee $O(1/N^2)$ conv...
First- and Second-Order Full-Differential in Edge Analysis of Images
Directory of Open Access Journals (Sweden)
Dong-Mei Pu
2014-01-01
mathematics. We propose and reformulate them within a uniform definition framework. Based on our observation and analysis of the differences, we propose an algorithm for detecting edges in images. Experiments on Corel5K and PASCAL VOC 2007 are conducted to show the difference between the first order and the second order. After comparison with the Canny operator and the proposed first-order differential, the main result is that the second-order differential performs better in analysing changes in image context, given a good choice of the control parameter.
Akkerman, Erik M.
2010-01-01
Both in diffusion tensor imaging (DTI) and in generalized diffusion tensor imaging (GDTI), the relation between the diffusion tensor and the measured apparent diffusion coefficients is given by a tensorial equation, which needs to be inverted in order to solve for the diffusion tensor. The traditional
A new image encryption algorithm based on the fractional-order hyperchaotic Lorenz system
Wang, Zhen; Huang, Xia; Li, Yu-Xia; Song, Xiao-Na
2013-01-01
We propose a new image encryption algorithm based on the fractional-order hyperchaotic Lorenz system. In the process of generating a key stream, the system parameters and the derivative order are embedded in the proposed algorithm to enhance its security. The algorithm is analysed in detail in terms of security, including correlation analysis, information entropy analysis, run statistic analysis, mean-variance gray value analysis, and key sensitivity analysis. The experimental results demonstrate that the proposed image encryption scheme has the advantages of a large key space and high security for practical image encryption.
Multi-domain, higher order level set scheme for 3D image segmentation on the GPU
DEFF Research Database (Denmark)
Sharma, Ojaswa; Zhang, Qin; Anton, François
2010-01-01
to evaluate level set surfaces that are $C^2$ continuous, but are slow due to high computational burden. In this paper, we provide a higher order GPU based solver for fast and efficient segmentation of large volumetric images. We also extend the higher order method to multi-domain segmentation. Our streaming...
Target recognition of ladar range images using even-order Zernike moments.
Liu, Zheng-Jun; Li, Qi; Xia, Zhi-Wei; Wang, Qi
2012-11-01
Ladar range images have attracted considerable attention in automatic target recognition fields. In this paper, Zernike moments (ZMs) are applied to classify the target of the range image from an arbitrary azimuth angle. However, ZMs suffer from high computational costs. To improve the performance of target recognition based on small samples, even-order ZMs with serial-parallel backpropagation neural networks (BPNNs) are applied to recognize the target of the range image. It is found that the rotation invariance and classified performance of the even-order ZMs are both better than for odd-order moments and for moments compressed by principal component analysis. The experimental results demonstrate that combining the even-order ZMs with serial-parallel BPNNs can significantly improve the recognition rate for small samples.
HIDING TEXT IN DIGITAL IMAGES USING PERMUTATION ORDERING AND COMPACT KEY BASED DICTIONARY
Directory of Open Access Journals (Sweden)
Nagalinga Rajan
2017-05-01
Digital image steganography is an emerging technique in secure communication for the modern connected world. It protects the content of the message without arousing suspicion in a passive observer. A novel steganography method is presented to hide text in digital images. A compact dictionary is designed to efficiently communicate all types of secret messages. The sorting order of pixels in image blocks is chosen as the carrier of the embedded information. The high correlation among image pixel values means that reordering within image blocks does not cause high distortion. The image is divided into blocks and perturbed to create non-repeating sequences of intensity values. These values are then sorted according to the message. At the receiver end, the message is read from the sorting order of the pixels in the image blocks. Only those image blocks with standard deviation less than a given threshold are chosen for embedding, to alleviate visual distortion. Information security is provided by shuffling the dictionary according to a shared key. Experimental results and analysis show that the method is capable of hiding text with more than 4000 words in a 512×512 grayscale image with a peak signal-to-noise ratio above 40 decibels.
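The core idea of carrying information in the sorting order of a block can be illustrated with standard permutation ranking (the Lehmer code). This is a sketch of the general technique only, not the paper's dictionary-based scheme; the function names are invented for illustration, and the block is assumed to contain distinct values (which the paper's perturbation step guarantees).

```python
from math import factorial

def embed(block, m):
    """Reorder a block of k distinct pixel values so that the ordering
    encodes the integer m (0 <= m < k!), via the Lehmer code."""
    pool = sorted(block)               # canonical order shared by both sides
    out = []
    for i in range(len(block), 0, -1):
        digit, m = divmod(m, factorial(i - 1))
        out.append(pool.pop(digit))    # each Lehmer digit selects a value
    return out

def extract(block):
    """Recover the embedded integer from the block's sorting order."""
    pool = sorted(block)
    m = 0
    for i, v in enumerate(block):
        idx = pool.index(v)
        m = m * (len(block) - i) + idx
        pool.pop(idx)
    return m
```

A block of k distinct pixels can carry floor(log2 k!) bits this way, and the pixel *values* are untouched, only their order changes, which is why the distortion stays low.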
National Research Council Canada - National Science Library
Stanford, Derek C; Raftery, Adrian E
2001-01-01
.... This is motivated by medical and satellite image segmentation, and may also be useful for color and gray scale image quantization, the display and storage of computer-generated holograms, and the use...
The Jovian ring was imaged at 2.26+/-0.03 microns at approximately 7:00 UT on 12 July, 1994, from NA
2002-01-01
The Jovian ring was imaged at 2.26+/-0.03 microns at approximately 7:00 UT on 12 July, 1994, from NASA's Infrared Telescope Facility. The image was coadded from three 30-second exposures with sky subtracted. The resolution was 0.31 arcseconds per pixel. An S/N of 5 per pixel was obtained for the coadded images. Photometry on the ring image is pending. This is part of a program to monitor the effects of the dust from Comet Shoemaker-Levy 9 on the Jovian ring system. More images will be taken during and after the impacts of the fragments. The image was obtained by Philip Esterle (University of Maryland), Casey Lisse (NASA/Goddard Space Flight Center), and Mark Shure (University of Hawaii).
Directory of Open Access Journals (Sweden)
Qiang Yu
Texture enhancement is one of the most important techniques in digital image processing and plays an essential role in medical imaging, since textures carry discriminative information. Most image texture enhancement techniques use classical integral-order differential mask operators or fractional differential mask operators with a fixed fractional order. These masks can produce excessive enhancement of low spatial frequency content, insufficient enhancement of high spatial frequency content, and retention of high spatial frequency noise. To improve upon existing approaches to texture enhancement, we derive an improved Variable Order Fractional Centered Difference (VOFCD) scheme which dynamically adjusts the fractional differential order instead of fixing it. The new VOFCD technique is based on the second-order Riesz fractional differential operator with a Lagrange three-point interpolation formula, for both greyscale and colour image enhancement. We then use this method to enhance photographs and a set of medical images from patients with stroke and Parkinson's disease. The experiments show that our improved fractional differential mask has a higher signal-to-noise ratio than the other fractional differential mask operators. Based on the corresponding quantitative analysis, we conclude that the new method offers superior texture enhancement over existing methods.
Efficient nonlinear registration of 3D images using high order co-ordinate transfer functions.
Barber, D C
1999-01-01
There is an increasing interest in image registration for a variety of medical imaging applications. Image registration is achieved through the use of a co-ordinate transfer function (CTF) which maps voxels in one image to voxels in the other image, including in the general case changes in mapped voxel intensity. If images of the same subject are to be registered the co-ordinate transfer function needs to implement a spatial transformation consisting of a displacement and a rigid rotation. In order to achieve registration a common approach is to choose a suitable quality-of-registration measure and devise a method for the efficient generation of the parameters of the CTF which minimize this measure. For registration of images from different subjects more complex transforms are required. In general function minimization is too slow to allow the use of CTFs with more than a small number of parameters. However, provided the images are from the same modality and the CTF can be expanded in terms of an appropriate set of basis functions this paper will show how relatively complex CTFs can be used for registration. The use of increasingly complex CTFs to minimize the within group standard deviation of a set of normal single photon emission tomography brain images is used to demonstrate the improved registration of images from different subjects using CTFs of increasing complexity.
Fractional order integration and fuzzy logic based filter for denoising of echocardiographic image.
Saadia, Ayesha; Rashdi, Adnan
2016-12-01
Ultrasound is widely used for imaging due to its cost effectiveness and safety. However, ultrasound images are inherently corrupted with speckle noise, which severely affects the quality of these images and creates difficulty for physicians in diagnosis. To get maximum benefit from ultrasound imaging, image denoising is an essential requirement. To perform image denoising, a two stage methodology using a fuzzy weighted mean and a fractional integration filter has been proposed in this research work. In stage 1, image pixels are processed by applying a 3 × 3 window around each pixel; fuzzy logic is used to assign weights to the pixels in each window, and the central pixel of the window is replaced with the weighted mean of all neighboring pixels in the same window. Noise suppression is achieved by weighting the pixels while preserving edges and other important features of the image. In stage 2, the resultant image is further improved by a fractional order integration filter. The effectiveness of the proposed methodology has been analyzed for standard test images artificially corrupted with speckle noise and for real ultrasound B-mode images. Results of the proposed technique have been compared with different state-of-the-art techniques including Lsmv, Wiener, Geometric filter, Bilateral, Non-local means, Wavelet, Perona et al., Total variation (TV), Global Adaptive Fractional Integral Algorithm (GAFIA) and the Improved Fractional Order Differential (IFD) model. Comparison has been done on a quantitative and qualitative basis. For quantitative analysis, metrics such as Peak Signal to Noise Ratio (PSNR), Speckle Suppression Index (SSI), Structural Similarity (SSIM), Edge Preservation Index (β) and Correlation Coefficient (ρ) have been used. Simulations have been done using Matlab. Simulation results on artificially corrupted standard test images and two real echocardiographic images reveal that the proposed method outperforms existing image denoising techniques.
Calatroni, Luca; Düring, Bertram; Schönlieb, Carola-Bibiane
2013-08-01
We present directional operator splitting schemes for the numerical solution of a fourth-order, nonlinear partial differential evolution equation which arises in image processing. This equation constitutes the H -1-gradient flow of the total variation and represents a prototype of higher-order equations of similar type which are popular in imaging for denoising, deblurring and inpainting problems. The efficient numerical solution of this equation is very challenging due to the stiffness of most numerical schemes. We show that the combination of directional splitting schemes with implicit time-stepping provides a stable and computationally cheap numerical realisation of the equation.
A Novel Image Encryption Algorithm Based on a Fractional-Order Hyperchaotic System and DNA Computing
Directory of Open Access Journals (Sweden)
Taiyong Li
2017-01-01
In the era of the Internet, image encryption plays an important role in information security. Chaotic systems and DNA operations have been proven to be powerful for image encryption. To further enhance the security of images, in this paper we propose a novel algorithm that combines the fractional-order hyperchaotic Lorenz system and DNA computing (FOHCLDNA) for image encryption. Specifically, the algorithm consists of four parts: firstly, we use a fractional-order hyperchaotic Lorenz system to generate a pseudorandom sequence that will be utilized during the whole encryption process; secondly, a simple but effective diffusion scheme is performed to spread a small change in one pixel to all the other pixels; thirdly, the plain image is encoded by DNA rules and the corresponding DNA operations are performed; finally, global permutation and 2D and 3D permutation are performed on pixels, bits, and DNA bases. Extensive experimental results on eight publicly available test images demonstrate that the encryption algorithm achieves state-of-the-art performance in terms of security and robustness when compared with some existing methods, showing that FOHCLDNA is promising for image encryption.
Directory of Open Access Journals (Sweden)
Frank J Brooks
There is increasing interest in applying image texture quantifiers to assess the intra-tumor heterogeneity observed in FDG-PET images of various cancers. Use of these quantifiers as prognostic indicators of disease outcome and/or treatment response has yielded inconsistent results. We study the general applicability of some well-established texture quantifiers to the image data unique to FDG-PET. We first created computer-simulated test images with statistical properties consistent with clinical image data for cancers of the uterine cervix. We specifically isolated second-order statistical effects from low-order effects and analyzed the resulting variation in common texture quantifiers in response to contrived image variations. We then analyzed the quantifiers computed for FIGO IIb cervical cancers via receiver operating characteristic (ROC) curves and via contingency table analysis of detrended quantifier values. We found that image texture quantifiers depend strongly on low-order effects such as tumor volume and SUV distribution. When low-order effects were controlled, the texture quantifiers tested were unable to discern the second-order effects alone. Furthermore, the results of clinical tumor heterogeneity studies might be tunable via the choice of patient population analyzed. Some image texture quantifiers are strongly affected by factors distinct from the second-order effects researchers ostensibly seek to assess via those quantifiers.
Characterization of a new series of fluorescent probes for imaging membrane order.
Directory of Open Access Journals (Sweden)
Joanna M Kwiatek
Visualization and quantification of lipid order is an important tool in membrane biophysics and cell biology, but the availability of environmentally sensitive fluorescent membrane probes is limited. Here, we present the characterization of the novel fluorescent dyes PY3304, PY3174 and PY3184, whose fluorescence properties are sensitive to membrane lipid order. In artificial bilayers, the fluorescence emission spectra are red-shifted between the liquid-ordered and liquid-disordered phases. Using ratiometric imaging, we demonstrate that the degree of membrane order can be quantitatively determined in artificial liposomes as well as in live cells and intact, live zebrafish embryos. Finally, we show that the fluorescence lifetime of the dyes is also dependent on bilayer order. These probes expand the current palette of lipid order-sensing fluorophores, affording greater flexibility in the excitation/emission wavelengths and possibly new opportunities in membrane biology.
Wehde, M. E.
1995-01-01
The common method of digital image comparison by subtraction imposes various constraints on the image contents. Precise registration of images is required to assure proper evaluation of surface locations. The attribute being measured and the calibration and scaling of the sensor are also important to the validity and interpretability of the subtraction result. Influences of sensor gains and offsets complicate the subtraction process. The presence of any uniform systematic transformation component in one of two images to be compared distorts the subtraction results and requires analyst intervention to interpret or remove it. A new technique has been developed to overcome these constraints. Images to be compared are first transformed using the cumulative relative frequency as a transfer function. The transformed images represent the contextual relationship of each surface location with respect to all others within the image. The process of differentiating between the transformed images results in a percentile rank ordered difference. This process produces consistent terrain-change information even when the above requirements necessary for subtraction are relaxed. This technique may be valuable to an appropriately designed hierarchical terrain-monitoring methodology because it does not require human participation in the process.
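The transform-then-difference procedure described above can be sketched with the empirical cumulative relative frequency as the transfer function. The function names below are invented for illustration, and tie handling (counting pixels less than or equal to each value) is a simplifying assumption.

```python
import numpy as np

def rank_transform(img):
    """Map each pixel to its cumulative relative frequency within the
    image, i.e. the fraction of pixels with a value <= this pixel's."""
    vals, counts = np.unique(img, return_counts=True)   # sorted values
    cdf = np.cumsum(counts) / img.size                  # empirical CDF
    return cdf[np.searchsorted(vals, img)]

def rank_order_difference(img_a, img_b):
    """Percentile rank ordered difference of two co-registered images.
    Because each image is first mapped through its own empirical CDF,
    the result is unchanged by any monotonic (e.g. gain/offset)
    distortion of either input."""
    return rank_transform(img_a) - rank_transform(img_b)
```

This makes the key property of the technique concrete: a uniform systematic transformation of one image, such as `2 * img + 5`, yields a zero rank-order difference, so no analyst intervention is needed to remove sensor gain and offset effects.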
First-order Convex Optimization Methods for Signal and Image Processing
DEFF Research Database (Denmark)
Jensen, Tobias Lindstrøm
2012-01-01
In this thesis we investigate the use of first-order convex optimization methods applied to problems in signal and image processing. First we make a general introduction to convex optimization, first-order methods and their iteration complexity. Then we look at different techniques, which can be used with first-order methods such as smoothing, Lagrange multipliers and proximal gradient methods. We continue by presenting different applications of convex optimization and notable convex formulations with an emphasis on inverse problems and sparse signal processing. We also describe the multiple...
A mixed-order nonlinear diffusion compressed sensing MR image reconstruction.
Joy, Ajin; Paul, Joseph Suresh
2018-03-07
Avoid formation of staircase artifacts in nonlinear diffusion-based MR image reconstruction without compromising computational speed. Whereas second-order diffusion encourages the evolution of a pixel neighborhood with uniform intensities, fourth-order diffusion considers a smooth region to be not necessarily a uniform intensity region but also a planar region. Therefore, a controlled application of the fourth-order diffusivity function is used to encourage second-order diffusion to reconstruct the smooth regions of the image as a plane rather than a group of blocks, while not being strong enough to introduce the undesirable speckle effect. The proposed method is compared with second- and fourth-order nonlinear diffusion reconstruction, total variation (TV), total generalized variation, and higher degree TV using in vivo data sets for different undersampling levels with application to dictionary learning-based reconstruction. It is observed that the proposed technique preserves sharp boundaries in the image while preventing the formation of staircase artifacts in regions of smoothly varying pixel intensities. It also shows reduced error measures compared with second-order nonlinear diffusion reconstruction or TV, and converges faster than TV-based methods. Because nonlinear diffusion is known to be an effective alternative to TV for edge-preserving reconstruction, the crucial aspect of staircase artifact removal is addressed. Reconstruction is found to be stable for the experimentally determined range of the fourth-order regularization parameter, and therefore does not introduce a parameter search. Hence, the computational simplicity of second-order diffusion is retained. © 2018 International Society for Magnetic Resonance in Medicine.
Inochkin, F. M.; Kruglov, S. K.; Bronshtein, I. G.; Kompan, T. A.; Kondratjev, S. V.; Korenev, A. S.; Pukhov, N. F.
2017-06-01
A new method for precise subpixel edge estimation is presented. The principle of the method is iterative image approximation in 2D with subpixel accuracy until the simulated image matches the acquired image. A numerical image model is presented consisting of three parts: an edge model, an object and background brightness distribution model, and a lens aberration model including diffraction. The optimal values of the model parameters are determined by means of conjugate-gradient numerical optimization of a merit function corresponding to the L2 distance between the acquired and simulated images. A computationally effective procedure for the merit function calculation, along with a sufficient gradient approximation, is described. Subpixel-accuracy image simulation is performed in the Fourier domain with theoretically unlimited precision of edge point location. The method is capable of compensating lens aberrations and obtaining edge information with increased resolution. Experimental verification of the method with a digital micromirror device, used to physically simulate an object with known edge geometry, is shown. Experimental results for various high-temperature materials within the temperature range of 1000 °C to 2400 °C are presented.
Imaging of first-order surface-related multiples by reverse-time migration
Liu, Xuejian; Liu, Yike; Hu, Hao; Li, Peng; Khan, Majid
2017-02-01
Surface-related multiples have been utilized in the reverse-time migration (RTM) procedures, and additional illumination for subsurface can be provided. Meanwhile, many cross-talks are generated from undesired interactions between forward- and backward-propagated seismic waves. In this paper, subsequent to analysing and categorizing these cross-talks, we propose RTM of first-order multiples to avoid most undesired interactions in RTM of all-order multiples, where only primaries are forward-propagated and crosscorrelated with the backward-propagated first-order multiples. With primaries and multiples separated during regular seismic data processing as the input data, first-order multiples can be obtained by a two-step scheme: (1) the dual-prediction of higher-order multiples; and (2) the adaptive subtraction of predicted higher-order multiples from all-order multiples within local offset-time windows. In numerical experiments, two synthetic and a marine field data sets are used, where different cross-talks generated by RTM of all-order multiples can be identified and the proposed RTM of first-order multiples can provide a very interpretable image with a few cross-talks.
An Image Processing Approach to Pre-compensation for Higher-Order Aberrations in the Eye
Directory of Open Access Journals (Sweden)
Miguel Alonso Jr
2004-06-01
Human beings rely heavily on vision for almost all of the tasks that are required in daily life. Because of this dependence on vision, humans with visual limitations, caused by genetic inheritance, disease, or age, will have difficulty in completing many of the tasks required of them. Some individuals with severe visual impairments, known as high-order aberrations, may have difficulty in interacting with computers, even when using a traditional means of visual correction (e.g., spectacles, contact lenses). This is, in part, because these correction mechanisms can only compensate for the most regular (low-order) distortions or aberrations of the image in the eye. This paper presents an image processing approach that pre-compensates the images displayed on the computer screen, so as to counter the effect of the eye's aberrations on the image. The characterization of the eye required to perform this customized pre-compensation is the eye's Point Spread Function (PSF). Ophthalmic instruments generically called "Wavefront Analyzers" can now measure this description of the eye's optical properties. The characterization provided by these instruments also includes the higher-order aberration components and could, therefore, lead to a more comprehensive vision correction than traditional mechanisms. This paper explains the theoretical foundation of the methods proposed and illustrates them with experiments involving the emulation of a known and constant PSF by interposing a lens in the field of view of normally sighted test subjects.
Meng, Shukai; Mo, Yu L.
2001-09-01
Image segmentation is one of the most important operations in many image analysis problems; it is the process that subdivides an image into its constituents and extracts the parts of interest. In this paper, we present a new second-order difference gray-scale image segmentation algorithm based on cellular neural networks (CNNs). A 3x3 CNN cloning template is applied, which performs smooth processing and handles well the conflict between noise resistance and the detection of complex edge shapes. We use a second-order difference operator to calculate the coefficients of the control template, which are not constant but depend on the input gray-scale values. The construction is similar to the Contour Extraction CNN, but the algorithm differs in some respects. Experimental results show that the second-order difference CNN has a good capability for edge detection. It is better than the Contour Extraction CNN at detecting detail, and more effective than the Laplacian of Gaussian (LoG) algorithm.
A Combined First and Second Order Variational Approach for Image Reconstruction
Papafitsoros, K.
2013-05-10
In this paper we study a variational problem in the space of functions of bounded Hessian. Our model constitutes a straightforward higher-order extension of the well known ROF functional (total variation minimisation) to which we add a non-smooth second order regulariser. It combines convex functions of the total variation and the total variation of the first derivatives. In what follows, we prove existence and uniqueness of minimisers of the combined model and present the numerical solution of the corresponding discretised problem by employing the split Bregman method. The paper is furnished with applications of our model to image denoising, deblurring as well as image inpainting. The obtained numerical results are compared with results obtained from total generalised variation (TGV), infimal convolution and Euler's elastica, three other state of the art higher-order models. The numerical discussion confirms that the proposed higher-order model competes with models of its kind in avoiding the creation of undesirable artifacts and blocky-like structures in the reconstructed images, a known disadvantage of the ROF model, while being simple and efficiently numerically solvable.
Image encryption based on a delayed fractional-order chaotic logistic system
International Nuclear Information System (INIS)
Wang Zhen; Li Ning; Huang Xia; Song Xiao-Na
2012-01-01
A new image encryption scheme is proposed based on a delayed fractional-order chaotic logistic system. In the process of generating a key stream, the time-varying delay and fractional derivative are embedded in the proposed scheme to improve the security. Such a scheme is described in detail with security analyses including correlation analysis, information entropy analysis, run statistic analysis, mean-variance gray value analysis, and key sensitivity analysis. Experimental results show that the newly proposed image encryption scheme possesses high security. (general)
Removal of impulse noise clusters from color images with local order statistics
Ruchay, Alexey; Kober, Vitaly
2017-09-01
This paper proposes a novel algorithm for restoring images corrupted with clusters of impulse noise. The noise clusters often occur when the probability of impulse noise is very high. The proposed noise removal algorithm consists of detection of bulky impulse noise in three color channels with local order statistics followed by removal of the detected clusters by means of vector median filtering. With the help of computer simulation we show that the proposed algorithm is able to effectively remove clustered impulse noise. The performance of the proposed algorithm is compared in terms of image restoration metrics with that of common successful algorithms.
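The vector median filtering step used for cluster removal can be sketched as below. This is an illustrative simplification under stated assumptions: the paper filters only detected noise clusters, whereas this sketch applies the 3x3 vector median everywhere, and the function names are invented.

```python
import numpy as np

def vector_median(window):
    """The vector median of a set of color pixels: the pixel that
    minimizes the sum of L1 distances to all other pixels in the
    window (the standard VMF definition)."""
    pts = window.reshape(-1, window.shape[-1]).astype(float)
    dists = np.abs(pts[:, None, :] - pts[None, :, :]).sum(axis=(1, 2))
    return pts[np.argmin(dists)]

def vmf(img):
    """3x3 vector median filtering of an H x W x 3 image (border
    pixels are left unchanged). Unlike channel-wise median filtering,
    the output at each pixel is an actual pixel from the window, so
    no new colors are introduced."""
    out = img.astype(float).copy()
    h, w = img.shape[:2]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y, x] = vector_median(img[y - 1:y + 2, x - 1:x + 2])
    return out
```

On a flat patch containing an isolated impulse, the outlier pixel has the largest total distance to its neighbours and is therefore replaced by a background pixel, which is why the vector median is well suited to bulky impulse noise.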
DEFF Research Database (Denmark)
Micaletti, R. C.; Cakmak, A. S.; Nielsen, Søren R. K.
Differential equations are derived which exactly govern the evolution of the second-order response moments of a single-degree-of-freedom (SDOF) bilinear hysteretic oscillator subject to stationary Gaussian white noise excitation. Then, considering cases for which response stationarity...
Lu, Weihua; Chen, Xinjian; Zhu, Weifang; Yang, Lei; Cao, Zhaoyuan; Chen, Haoyu
2015-03-01
In this paper, we propose a method based on the Freeman chain code to segment and count rhesus choroid-retinal vascular endothelial cells (RF/6A) automatically in fluorescence microscopy images. The proposed method consists of four main steps. First, a threshold filter and a morphological transform are applied to reduce noise. Second, the boundary information is used to generate the Freeman chain codes. Third, concave points are found based on the relationship between the difference of the chain code and the curvature. Finally, cell segmentation and counting are completed based on the number of concave points and the area and shape of the cells. The proposed method was tested on 100 fluorescence microscopy cell images; the average true positive rate (TPR) was 98.13% and the average false positive rate (FPR) was 4.47%. The preliminary results show the feasibility and efficiency of the proposed method.
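The chain-code machinery behind steps two and three can be sketched as follows; the direction convention and the use of the first difference as a curvature proxy are standard for Freeman codes, but the function names and the concavity cue are illustrative assumptions rather than the paper's exact procedure.

```python
# 8-connected Freeman directions as (row, col) offsets; code 0 points
# east and codes increase counter-clockwise.
DIRS = [(0, 1), (-1, 1), (-1, 0), (-1, -1),
        (0, -1), (1, -1), (1, 0), (1, 1)]

def freeman_chain_code(boundary):
    """Chain code of an ordered list of adjacent boundary pixels."""
    return [DIRS.index((r1 - r0, c1 - c0))
            for (r0, c0), (r1, c1) in zip(boundary, boundary[1:])]

def code_difference(codes):
    """First difference of the chain code, wrapped to [-4, 3]. It acts
    as a discrete curvature: for a counter-clockwise boundary, negative
    values flag locally concave turns, the cue used to find the concave
    points where touching cells meet."""
    return [((b - a + 4) % 8) - 4 for a, b in zip(codes, codes[1:])]
```

A straight run of boundary pixels yields a constant code (zero difference), while a turn produces a nonzero difference whose sign distinguishes convex from concave corners.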
Evaluation of image reconstruction methods for {sup 123}I-MIBG-SPECT. A rank-order study
Energy Technology Data Exchange (ETDEWEB)
Söderberg, Marcus; Mattsson, Sören; Oddstig, Jenny; Uusijärvi-Lizana, Helena; Leide-Svegborn, Sigrid [Medical Radiation Physics, Dept. of Clinical Sciences Malmö, Lund Univ., Skåne Univ. Hospital, Malmö (Sweden)], e-mail: marcus.soderberg@med.lu.se; Valind, Sven; Thorsson, Ola; Garpered, Sabine [Dept. of Clinical Physiology, Skåne Univ. Hospital, Malmö (Sweden); Prautzsch, Tilmann [Scivis wissenschaftliche Bildverarbeitung GmbH, Göttingen (Germany); Tischenko, Oleg [Research Unit Medical Radiation Physics and Diagnostics (AMSD), Helmholtz Zentrum München (Germany); German Research Center for Environmental Health, Neuherberg (Germany)
2012-09-15
Background: There is an opportunity to improve the image quality and lesion detectability in single photon emission computed tomography (SPECT) by choosing an appropriate reconstruction method and optimal parameters for the reconstruction. Purpose: To optimize the use of the Flash 3D reconstruction algorithm in terms of equivalent iteration (EI) number (number of subsets times the number of iterations) and to compare with two recently developed reconstruction algorithms, ReSPECT and orthogonal polynomial expansion on disc (OPED), for application on {sup 123}I-metaiodobenzylguanidine (MIBG)-SPECT. Material and Methods: Eleven adult patients underwent SPECT 4 h and 14 patients 24 h after injection of approximately 200 MBq {sup 123}I-MIBG using a Siemens Symbia T6 SPECT/CT. Images were reconstructed from raw data using the Flash 3D algorithm at eight different EI numbers. The images were ranked by three experienced nuclear medicine physicians according to their overall impression of the image quality. The obtained optimal images were then compared in one further visual comparison with images reconstructed using the ReSPECT and OPED algorithms. Results: The optimal EI number for Flash 3D was determined to be 32 for acquisition 4 h and 24 h after injection. The average rank order (best first) for the different reconstructions for acquisition after 4 h was: Flash 3D{sub 32} > ReSPECT > Flash 3D{sub 64} > OPED, and after 24 h: Flash 3D{sub 16} > ReSPECT > Flash 3D{sub 32} > OPED. A fair level of inter-observer agreement concerning optimal EI number and reconstruction algorithm was obtained, which may be explained by differing individual preferences as to what constitutes appropriate image quality. Conclusion: Using the Siemens Symbia T6 SPECT/CT and the specified acquisition parameters, Flash 3D{sub 32} (4 h) and Flash 3D{sub 16} (24 h), followed by ReSPECT, were assessed to be the preferable reconstruction algorithms in visual assessment of {sup 123}I-MIBG images.
Partition calculation for zero-order and conjugate image removal in digital in-line holography.
Ma, Lihong; Wang, Hui; Li, Yong; Jin, Hongzhen
2012-01-16
Conventional digital in-line holography requires at least two phase-shifting holograms to reconstruct an original object without zero-order and conjugate image noise. We present a novel approach in which only one in-line hologram and two intensity values (namely the object wave intensity and the reference wave intensity) are required. First, by subtracting the two intensity values the zero-order diffraction can be completely eliminated. Then, an algorithm, called partition calculation, is proposed to numerically remove the conjugate image. A preliminary experimental result is given to confirm the proposed method. The method can simplify the procedure of phase-shifting digital holography and improve the practical feasibility for digital in-line holography.
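The intensity-subtraction step described above can be sketched in a few lines. This is a toy numpy illustration (the object and reference waves here are hypothetical values, not the authors' experimental data): since the recorded hologram intensity is |O + R|² = |O|² + |R|² + 2·Re(O·R*), subtracting the two separately measured intensities removes the zero-order term exactly and leaves only the interference terms.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical complex object wave and plane reference wave on a small grid.
O = rng.normal(size=(8, 8)) + 1j * rng.normal(size=(8, 8))
R = np.full((8, 8), 2.0 + 0.0j)

I_holo = np.abs(O + R) ** 2   # recorded in-line hologram
I_obj = np.abs(O) ** 2        # measured object-wave intensity
I_ref = np.abs(R) ** 2        # measured reference-wave intensity

# Subtracting the two intensities eliminates the zero-order diffraction,
# leaving the interference terms 2*Re(O * conj(R)) (object + conjugate image).
interference = I_holo - I_obj - I_ref
assert np.allclose(interference, 2 * np.real(O * np.conj(R)))
```

The remaining conjugate-image term is what the paper's partition calculation then removes numerically.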
How daylight influences high-order chromatic descriptors in natural images.
Ojeda, Juan; Nieves, Juan Luis; Romero, Javier
2017-07-01
Despite the global and local daylight changes naturally occurring in natural scenes, the human visual system usually adapts quite well to those changes, developing a stable color perception. Nevertheless, the influence of daylight in modeling natural image statistics is not fully understood and has received little attention. The aim of this work was to analyze the influence of daylight changes on different high-order chromatic descriptors (i.e., color volume, color gamut, and number of discernible colors) derived from 350 color images, which were rendered under 108 natural illuminants with Correlated Color Temperatures (CCT) from 2735 to 25,889 K. Results suggest that chromatic and luminance information is almost constant and does not depend on the CCT of the illuminant for values above 14,000 K. Nevertheless, differences between the red-green and blue-yellow image components were found below that CCT, with most of the statistical descriptors analyzed showing local extremes in the range 2950 K-6300 K. Uniform regions and areas of the images attracting observers' attention were also considered in this analysis and were characterized by their patchiness index and their saliency maps. While the results of the patchiness index do not show a clear dependence on CCT, it is remarkable that a significant reduction in the number of discernible colors (58% on average) was found when the images were masked with their corresponding saliency maps. Our results suggest that chromatic diversity, as defined in terms of the discernible colors, can be strongly reduced when an observer scans a natural scene. These findings support the idea that a reduction in the number of discernible colors will guide visual saliency and attention. Whatever model mediates the neural representation of natural images, it is clear that natural image statistics should take into account those local maxima and minima depending on the daylight illumination and
International Nuclear Information System (INIS)
Zhang Li-Min; Sun Ke-Hui; Liu Wen-Hao; He Shao-Bo
2017-01-01
In this paper, the Adomian decomposition method (ADM), with high accuracy and fast convergence, is introduced to solve the fractional-order piecewise-linear (PWL) hyperchaotic system. Based on the obtained hyperchaotic sequences, a novel color image encryption algorithm is proposed by employing a hybrid model of bidirectional circular permutation and DNA masking. In this scheme, the pixel positions of the image are scrambled by circular permutation, and the pixel values are substituted by DNA sequence operations. In the DNA sequence operations, addition and subtraction operations are performed according to traditional addition and subtraction in binary, and two rounds of addition rules are used to encrypt the pixel values. The simulation results and security analysis show that the hyperchaotic map is suitable for image encryption, and the proposed encryption algorithm has a good encryption effect and strong key sensitivity. It can resist brute-force attack, statistical attack, differential attack, known-plaintext, and chosen-plaintext attacks. (paper)
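The bidirectional circular permutation step can be illustrated with a small numpy sketch. Note that the shift values below are arbitrary stand-ins: in the paper they would be derived from the hyperchaotic sequences produced by the ADM solver, and the DNA masking stage is omitted entirely.

```python
import numpy as np

rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(4, 5))   # toy stand-in for an image channel

# Hypothetical shift amounts (in the paper: generated from hyperchaotic sequences).
row_shifts = [1, 3, 0, 2]
col_shifts = [2, 1, 4, 0, 3]

def scramble(a, row_s, col_s):
    out = a.copy()
    for i, s in enumerate(row_s):
        out[i] = np.roll(out[i], s)        # circularly permute each row
    for j, s in enumerate(col_s):
        out[:, j] = np.roll(out[:, j], s)  # then each column
    return out

def unscramble(a, row_s, col_s):
    out = a.copy()
    for j, s in enumerate(col_s):          # invert in reverse order
        out[:, j] = np.roll(out[:, j], -s)
    for i, s in enumerate(row_s):
        out[i] = np.roll(out[i], -s)
    return out

enc = scramble(img, row_shifts, col_shifts)
assert np.array_equal(unscramble(enc, row_shifts, col_shifts), img)
```

Because circular permutation is a bijection on pixel positions, decryption simply applies the inverse shifts in reverse order, as the `unscramble` helper shows.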
Directory of Open Access Journals (Sweden)
Yunjiao Bai
2015-01-01
The traditional fourth-order nonlinear diffusion denoising model suffers from isolated speckles and the loss of fine details in the processed image. For this reason, a new fourth-order partial differential equation based on the patch similarity modulus and the difference curvature is proposed for image denoising. First, based on the intensity similarity of neighboring pixels, this paper presents a new edge indicator called the patch similarity modulus, which is strongly robust to noise. Furthermore, the difference curvature, which can effectively distinguish between edges and noise, is incorporated into the denoising algorithm to determine the diffusion process by adaptively adjusting the size of the diffusion coefficient. The experimental results show that the proposed algorithm can not only preserve edges and texture details but also avoid isolated speckles and the staircase effect while filtering out noise. The proposed algorithm also performs better on images with abundant details. Additionally, the subjective visual quality and objective evaluation index of the denoised image obtained by the proposed algorithm are higher than those of related methods.
CERN. Geneva
2015-01-01
Most physics results at the LHC end in a likelihood ratio test. This includes discovery and exclusion for searches as well as mass, cross-section, and coupling measurements. The use of Machine Learning (multivariate) algorithms in HEP is mainly restricted to searches, which can be reduced to classification between two fixed distributions: signal vs. background. I will show how we can extend the use of ML classifiers to distributions parameterized by physical quantities like masses and couplings as well as nuisance parameters associated with systematic uncertainties. This allows one to approximate the likelihood ratio while still using a high-dimensional feature vector for the data. Both the MEM and ABC approaches mentioned above aim to provide inference on model parameters (like cross-sections, masses, couplings, etc.). ABC is fundamentally tied to Bayesian inference and focuses on the "likelihood free" setting where only a simulator is available and one cannot directly compute the likelihood for the dat...
Schmidt, Wolfgang M
1980-01-01
"In 1970, at the U. of Colorado, the author delivered a course of lectures on his famous generalization, then just established, relating to Roth's theorem on rational approximations to algebraic numbers. The present volume is an expanded and updated version of the original mimeographed notes on the course. As an introduction to the author's own remarkable achievements relating to the Thue-Siegel-Roth theory, the text can hardly be bettered and the tract can already be regarded as a classic in its field." (Bull. LMS) "Schmidt's work on approximations by algebraic numbers belongs to the deepest and most satisfactory parts of number theory. These notes give the best accessible way to learn the subject. ... this book is highly recommended." (Mededelingen van het Wiskundig Genootschap)
Real-space imaging of non-collinear antiferromagnetic order with a single-spin magnetometer
Gross, I.; Akhtar, W.; Garcia, V.; Martínez, L. J.; Chouaieb, S.; Garcia, K.; Carrétéro, C.; Barthélémy, A.; Appel, P.; Maletinsky, P.; Kim, J.-V.; Chauleau, J. Y.; Jaouen, N.; Viret, M.; Bibes, M.; Fusil, S.; Jacques, V.
2017-09-01
Although ferromagnets have many applications, their large magnetization and the resulting energy cost for switching magnetic moments bring into question their suitability for reliable low-power spintronic devices. Non-collinear antiferromagnetic systems do not suffer from this problem, and often have extra functionalities: non-collinear spin order may break space-inversion symmetry and thus allow electric-field control of magnetism, or may produce emergent spin-orbit effects that enable efficient spin-charge interconversion. To harness these traits for next-generation spintronics, the nanoscale control and imaging capabilities that are now routine for ferromagnets must be developed for antiferromagnetic systems. Here, using a non-invasive, scanning single-spin magnetometer based on a nitrogen-vacancy defect in diamond, we demonstrate real-space visualization of non-collinear antiferromagnetic order in a magnetic thin film at room temperature. We image the spin cycloid of a multiferroic bismuth ferrite (BiFeO3) thin film and extract a period of about 70 nanometres, consistent with values determined by macroscopic diffraction. In addition, we take advantage of the magnetoelectric coupling present in BiFeO3 to manipulate the cycloid propagation direction by an electric field. Besides highlighting the potential of nitrogen-vacancy magnetometry for imaging complex antiferromagnetic orders at the nanoscale, these results demonstrate how BiFeO3 can be used in the design of reconfigurable nanoscale spin textures.
Approximating distributions from moments
Pawula, R. F.
1987-11-01
A method based upon Pearson-type approximations from statistics is developed for approximating a symmetric probability density function from its moments. The extended Fokker-Planck equation for non-Markov processes is shown to be the underlying foundation for the approximations. The approximation is shown to be exact for the beta probability density function. The applicability of the general method is illustrated by numerous pithy examples from linear and nonlinear filtering of both Markov and non-Markov dichotomous noise. New approximations are given for the probability density function in two cases in which exact solutions are unavailable, those of (i) the filter-limiter-filter problem and (ii) second-order Butterworth filtering of the random telegraph signal. The approximate results are compared with previously published Monte Carlo simulations in these two cases.
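As a minimal illustration of the moment-matching idea (a sketch, not Pawula's Pearson-type derivation): a symmetric beta density Beta(a, a) rescaled to [-1, 1] is pinned down by its second moment alone, since its variance on that interval is 1/(2a + 1). The target moment below is an assumed example value.

```python
import numpy as np

m2_target = 0.2                     # assumed second moment of a symmetric density
a = (1.0 / m2_target - 1.0) / 2.0   # from Var[Beta(a, a) on [-1, 1]] = 1/(2a + 1)

# Check by numerically integrating the matched density w(x) ~ (1 - x^2)^(a - 1).
x = np.linspace(-1, 1, 200001)
w = (1.0 - x**2) ** (a - 1.0)
w /= w.sum()                        # discrete normalization on the grid
m2 = np.sum(x**2 * w)
assert abs(m2 - m2_target) < 1e-3
```

For m2_target = 0.2 this gives a = 2, i.e. the parabolic density proportional to (1 - x²), which reproduces the target moment exactly.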
Multi-linear sparse reconstruction for SAR imaging based on higher-order SVD
Gao, Yu-Fei; Gui, Guan; Cong, Xun-Chao; Yang, Yue; Zou, Yan-Bin; Wan, Qun
2017-12-01
This paper focuses on spotlight synthetic aperture radar (SAR) imaging for point-scattering targets based on tensor modeling. In a real-world scenario, scatterers usually distribute in a block sparse pattern. Such a distribution feature has been scarcely utilized by previous studies of SAR imaging. Our work takes advantage of this structural property of the target scene, constructing a multi-linear sparse reconstruction algorithm for SAR imaging. The multi-linear block sparsity is introduced into higher-order singular value decomposition (SVD) with a dictionary-constructing procedure in this research. The simulation experiments for ideal point targets show the robustness of the proposed algorithm to the noise and sidelobe disturbance that often degrade the imaging quality of conventional methods. The computational resource requirement is further investigated in this paper. The algorithm complexity analysis shows that the present method consumes fewer computational resources than the classic matching pursuit method. The imaging implementations for practical measured data also demonstrate the effectiveness of the algorithm developed in this paper.
International Nuclear Information System (INIS)
Gramm, E.
1982-01-01
This is a study of possibilities to improve X-ray pictures of the teeth with regard to detail sharpness in the interdental space in a closed row of lateral teeth. For this purpose, X-ray pictures were made of a phantom showing a closed row of lateral teeth, with two different films being used. The row of teeth was made to include two healthy teeth, one tooth with two spots of initial caries, and one tooth with a caries lesion already showing a cavity. The two films used were the usual one, SUPER DOZAHN, and a fine-grain, insensitive film usually chosen for materials testing (NDT 55). The loss in contrast with increasing kV was observed with all X-ray pictures; the insensitive film was in every case richer in contrast than the usual dental X-ray film. Use of a special paste on the teeth in the interdental space led to improved detail sharpness for visual detection. The spots of special interest, i.e. those with initial caries, could in no case be clearly defined as such, whereas the caries lesion became evident on all images. The radiation dose was 4.4 times higher when using the insensitive, fine-grain film, as compared to the dental film; use of the paste increased the radiation dose by a further factor of 1.6. The results show that the measures studied in this thesis are not suited to improving the diagnostic value of the X-ray pictures taken as described above. (orig./MG) [de
DEFF Research Database (Denmark)
Du, Yigang; Fan, Rui; Li, Yong
2016-01-01
An ultrasound imaging framework modeled with the first-order nonlinear pressure–velocity relations (NPVR) based simulation and implemented by a half-time staggered solution and pseudospectral method is presented in this paper. The framework is capable of simulating linear and nonlinear ultrasound propagation and reflections in a heterogeneous medium with different sound speeds and densities. It can be initialized with arbitrary focus, excitation and apodization for multiple individual channels in both 2D and 3D spatial fields. The simulated channel data can be generated using this framework, and ultrasound images can be obtained by beamforming the simulated channel data. Various results simulated by different algorithms are illustrated for comparison. The root mean square (RMS) errors for each compared pulse are calculated. The linear propagation is validated by an angular spectrum approach (ASA).
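A first-order pressure–velocity system of this kind can be sketched with a staggered-grid, half-step update in 1D. This is only a linear, lossless finite-difference toy (uniform medium, rigid boundaries) illustrating the staggered scheme, not the authors' nonlinear pseudospectral 3D framework:

```python
import numpy as np

# 1D staggered-grid update for the first-order pressure-velocity system:
#   dp/dt = -rho*c^2 * dv/dx,   dv/dt = -(1/rho) * dp/dx
nx, c, rho = 400, 1.0, 1.0
dx = 1.0 / nx
dt = 0.5 * dx / c                          # CFL-stable time step
x = np.arange(nx) * dx
p = np.exp(-((x - 0.3) / 0.02) ** 2)       # initial Gaussian pressure pulse
v = np.zeros(nx + 1)                       # velocity lives on the half-grid
p0_sum = p.sum()

for _ in range(200):
    v[1:-1] -= dt / (rho * dx) * np.diff(p)        # half-step shifted update
    p -= dt * rho * c * c / dx * np.diff(v)

# The pulse splits into left- and right-going halves of amplitude ~0.5,
# and with rigid ends the discrete "mass" of p is conserved exactly.
assert abs(p.sum() - p0_sum) < 1e-9
assert 0.35 < p.max() < 0.7
```

The staggering (velocity on cell edges, pressure on cell centers, updated alternately) is what makes the scheme second-order accurate in space and time despite using only first differences.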
Approximation by Cylinder Surfaces
DEFF Research Database (Denmark)
Randrup, Thomas
1997-01-01
We present a new method for approximation of a given surface by a cylinder surface. It is a constructive geometric method, leading to a monorail representation of the cylinder surface. By use of a weighted Gaussian image of the given surface, we determine a projection plane. In the orthogonal...
Potvin, Guy
2015-10-01
We examine how the Rytov approximation describing log-amplitude and phase fluctuations of a wave propagating through weak uniform turbulence can be generalized to the case of turbulence with a large-scale nonuniform component. We show how the large-scale refractive index field creates Fermat rays using the path integral formulation for paraxial propagation. We then show how the second-order derivatives of the Fermat ray action affect the Rytov approximation, and we discuss how a numerical algorithm would model the general Rytov approximation.
Robust rooftop extraction from visible band images using higher order CRF
Li, Er
2015-08-01
In this paper, we propose a robust framework for building extraction in visible band images. We first get an initial classification of the pixels based on an unsupervised presegmentation. Then, we develop a novel conditional random field (CRF) formulation to achieve accurate rooftops extraction, which incorporates pixel-level information and segment-level information for the identification of rooftops. Comparing with the commonly used CRF model, a higher order potential defined on segment is added in our model, by exploiting region consistency and shape feature at segment level. Our experiments show that the proposed higher order CRF model outperforms the state-of-the-art methods both at pixel and object levels on rooftops with complex structures and sizes in challenging environments. © 1980-2012 IEEE.
Spatio-chromatic adaptation via higher-order canonical correlation analysis of natural images.
Gutmann, Michael U; Laparra, Valero; Hyvärinen, Aapo; Malo, Jesús
2014-01-01
Independent component and canonical correlation analysis are two general-purpose statistical methods with wide applicability. In neuroscience, independent component analysis of chromatic natural images explains the spatio-chromatic structure of primary cortical receptive fields in terms of properties of the visual environment. Canonical correlation analysis explains similarly chromatic adaptation to different illuminations. But, as we show in this paper, neither of the two methods generalizes well to explain both spatio-chromatic processing and adaptation at the same time. We propose a statistical method which combines the desirable properties of independent component and canonical correlation analysis: It finds independent components in each data set which, across the two data sets, are related to each other via linear or higher-order correlations. The new method is as widely applicable as canonical correlation analysis, and also to more than two data sets. We call it higher-order canonical correlation analysis. When applied to chromatic natural images, we found that it provides a single (unified) statistical framework which accounts for both spatio-chromatic processing and adaptation. Filters with spatio-chromatic tuning properties as in the primary visual cortex emerged and corresponding-colors psychophysics was reproduced reasonably well. We used the new method to make a theory-driven testable prediction on how the neural response to colored patterns should change when the illumination changes. We predict shifts in the responses which are comparable to the shifts reported for chromatic contrast habituation.
Hotspot detection using image pattern recognition based on higher-order local auto-correlation
Maeda, Shimon; Matsunawa, Tetsuaki; Ogawa, Ryuji; Ichikawa, Hirotaka; Takahata, Kazuhiro; Miyairi, Masahiro; Kotani, Toshiya; Nojima, Shigeki; Tanaka, Satoshi; Nakagawa, Kei; Saito, Tamaki; Mimotogi, Shoji; Inoue, Soichi; Nosato, Hirokazu; Sakanashi, Hidenori; Kobayashi, Takumi; Murakawa, Masahiro; Higuchi, Tetsuya; Takahashi, Eiichi; Otsu, Nobuyuki
2011-04-01
Below the 40 nm design node, systematic variation due to lithography must be taken into consideration during the early stage of design. So far, litho-aware design using lithography simulation models has been widely applied to assure that designs are printed on silicon without any error. However, the lithography simulation approach is very time consuming, and under time-to-market pressure, repetitive redesign by this approach may result in missing the market window. This paper proposes a fast hotspot detection support method using flexible and intelligent image pattern recognition based on Higher-Order Local Autocorrelation (HLAC). Our method learns the geometrical properties of the given design data without any defects as normal patterns, and automatically detects the design patterns with hotspots from the test data as abnormal patterns. The Higher-Order Local Autocorrelation method can extract features from the graphic image of a design pattern, and the computational cost of the extraction is constant regardless of the number of design pattern polygons. This approach can reduce turnaround time (TAT) dramatically on only 1 CPU, compared with the conventional simulation-based approach, and with distributed processing it has proven to deliver linear scalability with each additional CPU.
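The feature-extraction idea behind HLAC can be sketched directly (a naive reference implementation for intuition, not the optimized mask-based extractor used in the paper): an HLAC feature is the sum, over all pixel positions, of the product of the image value with its values at a fixed set of small displacements. Order 0 uses no displacement, order 1 uses one, and so on.

```python
import numpy as np

img = np.array([[0, 1, 0],
                [1, 1, 1],
                [0, 1, 0]], dtype=float)   # toy binary layout image

def hlac(img, displacements):
    """Sum over r of img(r) * img(r + d1) * ... for the given displacement set."""
    h, w = img.shape
    total = 0.0
    for y in range(h):
        for x in range(w):
            prod = img[y, x]
            in_bounds = True
            for dy, dx in displacements:
                yy, xx = y + dy, x + dx
                if 0 <= yy < h and 0 <= xx < w:
                    prod *= img[yy, xx]
                else:
                    in_bounds = False      # skip positions whose mask leaves the image
                    break
            if in_bounds:
                total += prod
    return total

f0 = hlac(img, [])         # order 0: sum of pixel values
f1 = hlac(img, [(0, 1)])   # order 1: horizontal co-occurrence
assert f0 == 5.0
assert f1 == 2.0
```

Because each feature is a fixed sum of local products, the cost of extraction depends only on the image size and the (fixed) number of displacement masks, which is the constant-cost property the abstract refers to.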
Adaptive approximation of higher order posterior statistics
Lee, Wonjung
2014-01-01
Filtering is an approach for incorporating observed data into time-evolving systems. Instead of a family of Dirac delta masses that is widely used in Monte Carlo methods, we here use the Wiener chaos expansion for the parametrization
Gao, Bin; Li, Xiaoqing; Woo, Wai Lok; Tian, Gui Yun
2018-05-01
Thermographic inspection has been widely applied to non-destructive testing and evaluation with the capabilities of rapid, contactless, and large surface area detection. Image segmentation is considered essential for identifying and sizing defects. To attain a high-level performance, specific physics-based models that describe defects generation and enable the precise extraction of target region are of crucial importance. In this paper, an effective genetic first-order statistical image segmentation algorithm is proposed for quantitative crack detection. The proposed method automatically extracts valuable spatial-temporal patterns from unsupervised feature extraction algorithm and avoids a range of issues associated with human intervention in laborious manual selection of specific thermal video frames for processing. An internal genetic functionality is built into the proposed algorithm to automatically control the segmentation threshold to render enhanced accuracy in sizing the cracks. Eddy current pulsed thermography will be implemented as a platform to demonstrate surface crack detection. Experimental tests and comparisons have been conducted to verify the efficacy of the proposed method. In addition, a global quantitative assessment index F-score has been adopted to objectively evaluate the performance of different segmentation algorithms.
Collagen order of articular cartilage from clinical magnetic resonance images and its age dependency
Energy Technology Data Exchange (ETDEWEB)
Seidel, P.; Gruender, W. [Inst. of Medical Physics and Biophysics, Univ. of Leipzig (Germany)
2005-07-01
The present paper describes a novel method to obtain information on the degree of order of the collagen network of the knee meniscal cartilage by means of a single clinical MRI. Images were obtained from 34 healthy volunteers aged between 6 and 76 years as well as from one patient with clinically diagnosed arthrosis at the ages of 32 and 37 years. A Siemens Vision (1.5 T) MRI scanner with TR = 750 ms, TE = 50 ms, FoV = 160 mm, and a 512 x 512 matrix was used for this purpose. The MR signal intensities of the cartilage were read out along slices of constant height above the subchondral bone and plotted versus the actual angle to the external magnetic field. The obtained intensity curves were fitted by a model distribution, and the degree of order of the collagen fibers was calculated. For the knee meniscal cartilage, there was an age dependency of the degree of order and a significant deviation of the volunteer with arthrosis from the normal curve. The results are discussed in view of the arcade model and of a possible use of non-invasive clinical MRI for the detection of early arthrotic changes of cartilage. (orig.)
A Higher-Order Neural Network Design for Improving Segmentation Performance in Medical Image Series
International Nuclear Information System (INIS)
Selvi, Eşref; Selver, M Alper; Güzeliş, Cüneyt; Dicle, Oǧuz
2014-01-01
Segmentation of anatomical structures from medical image series is an ongoing field of research. Although organs of interest are three-dimensional in nature, slice-by-slice approaches are widely used in clinical applications because of their ease of integration with the current manual segmentation scheme. To use slice-by-slice techniques effectively, adjacent slice information, which represents the likelihood of a region being the structure of interest, plays a critical role. Recent studies focus on using the distance transform directly as a feature or to increase the feature values in the vicinity of the search area. This study presents a novel approach by constructing a higher-order neural network, the input layer of which receives features together with their multiplications with the distance transform. This allows higher-order interactions between features through the non-linearity introduced by the multiplication. The application of the proposed method to 9 CT datasets for segmentation of the liver shows higher performance than well-known higher-order classification neural networks
International Nuclear Information System (INIS)
Ma Qingyu; Zhang Dong; Gong Xiufen; Ma Yong
2007-01-01
Second or higher order harmonic imaging shows significant improvement in image clarity but is degraded by a low signal-to-noise ratio (SNR) compared with fundamental imaging. This paper presents a phase-coded multi-pulse technique to enhance the SNR for the desired high-order harmonic ultrasonic imaging. In this technique, with N phase-coded pulses as excitation, the received Nth harmonic signal is enhanced by 20·log10(N) dB compared with that in the single-pulse mode, whereas the fundamental and other order harmonic components are efficiently suppressed to reduce image confusion. The principle of this technique is theoretically discussed based on the theory of finite-amplitude sound waves, and examined by measurements of the axial and lateral beam profiles as well as the phase shift of the harmonics. In the experimental imaging of two biological tissue specimens, a plane piston source at 2 MHz is used to transmit a sequence of multiple pulses with equidistant phase shifts. The second to fifth harmonic images are obtained using this technique with N = 2 to 5, and compared with the images obtained at the fundamental frequency. Results demonstrate that this technique, relying on higher order harmonics, seems to provide better resolution and contrast in ultrasonic images
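The phase-coding principle can be verified with a short numerical sketch (a toy echo model, not the authors' acoustic simulation): a transmit phase shift phi appears as n·phi on the nth harmonic of the nonlinear echo, so summing N echoes excited with phases 2πk/N cancels every harmonic except multiples of N, which add coherently N-fold.

```python
import numpy as np

N = 4                                    # number of phase-coded transmit pulses
t = np.linspace(0, 1, 1000, endpoint=False)

def echo(phi):
    # Hypothetical received echo: harmonics n = 1..5, each carrying phase n*phi.
    return sum(np.cos(2 * np.pi * n * t + n * phi) for n in range(1, 6))

# Sum the N echoes excited with equidistant phase shifts 2*pi*k/N.
summed = sum(echo(2 * np.pi * k / N) for k in range(N))

# Only harmonics with n a multiple of N survive (here n = 4), amplified N-fold,
# which is the 20*log10(N) dB gain quoted in the abstract.
spec = np.abs(np.fft.rfft(summed)) / len(t) * 2
assert spec[4] > 3.9                     # 4th harmonic: amplitude ~ N = 4
assert all(spec[n] < 1e-9 for n in (1, 2, 3, 5))
```

The cancellation follows from the geometric sum of e^{i·2πnk/N} over k, which is N when n ≡ 0 (mod N) and zero otherwise.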
Decomposition of Polarimetric SAR Images Based on Second- and Third-order Statistics
Kojima, S.; Hensley, S.
2012-12-01
There are many papers concerning the decomposition of polarimetric SAR imagery. Most of them are based on the second-order statistics analysis that Freeman and Durden [1] suggested for the reflection symmetry condition, which implies that the co-polarization and cross-polarization correlations are close to zero. Since then a number of improvements and enhancements have been proposed to better understand the underlying backscattering mechanisms present in polarimetric SAR images. For example, Yamaguchi et al. [2] added the helix component into Freeman's model and developed a 4-component scattering model for the non-reflection symmetry condition. In addition, Arii et al. [3] developed an adaptive model-based decomposition method that could estimate both the mean orientation angle and a degree of randomness for the canopy scattering for each pixel in a SAR image without the reflection symmetry condition. The purpose of this research is to develop a new decomposition method based on second- and third-order statistics analysis to estimate the surface, dihedral, volume and helix scattering components from polarimetric SAR images without specific assumptions concerning the model for the volume scattering. In addition, we evaluate this method by using both simulation and real UAVSAR data and compare this method with other methods. We express the volume scattering component using the wire formula and formulate the relationship equation between the backscattering echo and each component such as the surface, dihedral, volume and helix via linearization based on second- and third-order statistics. In the third-order statistics, we calculate the correlation of the correlation coefficients for each polarimetric channel and obtain one new relationship equation to estimate each polarization component such as HH, VV and VH for the volume. As a result, the equation for the helix component in this method is the same formula as the one in Yamaguchi's method. However, the equation for the volume
International Nuclear Information System (INIS)
El Sawi, M.
1983-07-01
A simple approach employing properties of solutions of differential equations is adopted to derive an appropriate extension of the WKBJ method. Some of the earlier techniques that are commonly in use are unified, whereby the general approximate solution to a second-order homogeneous linear differential equation is presented in a standard form that is valid for all orders. In comparison to other methods, the present one is shown to be leading in the order of iteration, and thus possibly has the ability of accelerating the convergence of the solution. The method is also extended for the solution of inhomogeneous equations. (author)
Directory of Open Access Journals (Sweden)
Seung Oh Lee
2013-10-01
Collection and investigation of flood information are essential to understand the nature of floods, but this has proved difficult in data-poor environments, or in developing or under-developed countries, due to economic and technological limitations. The development of remote sensing data, GIS, and modeling techniques has, therefore, provided useful tools for analyzing the nature of floods. Accordingly, this study attempts to estimate flood discharge using the generalized likelihood uncertainty estimation (GLUE) methodology and a 1D hydraulic model, with remote sensing data and topographic data, under the assumed condition that there is no gauge station on the Missouri River, Nebraska, or the Wabash River, Indiana, in the United States. The results show that the use of Landsat leads to a better discharge approximation on a large-scale reach than on a small-scale one. Discharge approximation using GLUE depended on the selection of likelihood measures. Consideration of physical conditions in the study reaches could, therefore, contribute to an appropriate selection of informal likelihood measures. The river discharge assessed by using Landsat images and the GLUE methodology could be useful in supplementing flood information for flood risk management at the planning level in ungauged basins. However, it should be noted that applying this approach in real time might be difficult due to the GLUE procedure.
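The core GLUE loop can be sketched in a few lines. Everything here is a hypothetical stand-in: the `model` function replaces the 1D hydraulic model, the "observations" are invented, and the Nash-Sutcliffe efficiency is used as one possible informal likelihood measure. It shows the pattern of sampling parameters, keeping "behavioral" runs above a likelihood threshold, and likelihood-weighting the estimate.

```python
import numpy as np

rng = np.random.default_rng(2)
obs = np.array([1.0, 2.0, 3.0])             # hypothetical observed values

def model(theta):
    # Toy stand-in for the 1D hydraulic model: linear response to the parameter.
    return theta * np.array([0.5, 1.0, 1.5])

def nse(sim):
    # Informal likelihood: Nash-Sutcliffe efficiency against the observations.
    return 1 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

samples = rng.uniform(0.5, 4.0, size=2000)  # Monte Carlo parameter samples
L = np.array([nse(model(th)) for th in samples])

behavioral = L > 0.8                        # retain only 'behavioral' runs
w = L[behavioral] / L[behavioral].sum()     # normalized likelihood weights
estimate = np.sum(w * samples[behavioral])  # likelihood-weighted estimate
# True parameter is 2.0 here, since obs = model(2.0).
```

The behavioral threshold and the choice of likelihood measure are subjective, which is exactly the sensitivity to "the selection of likelihood measures" that the abstract reports.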
An improved saddlepoint approximation.
Gillespie, Colin S; Renshaw, Eric
2007-08-01
Given a set of third- or higher-order moments, not only is the saddlepoint approximation the only realistic 'family-free' technique available for constructing an associated probability distribution, but it is 'optimal' in the sense that it is based on the highly efficient numerical method of steepest descents. However, it suffers from the problem of not always yielding full support, and whilst Wang's neat scaling approach [S. Wang, General saddlepoint approximations in the bootstrap, Prob. Stat. Lett. 27 (1992) 61] provides a solution to this hurdle, it leads to potentially inaccurate and aberrant results. We therefore propose several new ways of surmounting such difficulties, including: extending the inversion of the cumulant generating function to second order; selecting an appropriate probability structure for higher-order cumulants (the standard moment closure procedure takes them to be zero); and making subtle changes to the target cumulants and then optimising via the simplex algorithm.
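The basic saddlepoint recipe can be sketched for the one case where it is exact, the Gaussian (a sanity check, not the paper's extensions): given the cumulant generating function K(s), solve K'(s*) = x for the saddlepoint s* and evaluate f(x) ≈ exp(K(s*) - s*·x) / sqrt(2π·K''(s*)).

```python
import math

# Gaussian cumulant generating function: K(s) = mu*s + var*s^2/2.
mu, var = 1.0, 2.0
K = lambda s: mu * s + 0.5 * var * s * s
K2 = lambda s: var                       # K''(s) is constant for the Gaussian

def saddlepoint_pdf(x):
    s = (x - mu) / var                   # solves K'(s) = x in closed form here
    return math.exp(K(s) - s * x) / math.sqrt(2 * math.pi * K2(s))

# For the Gaussian CGF the saddlepoint approximation recovers the exact density.
exact = lambda x: math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)
for x in (-1.0, 0.5, 3.0):
    assert abs(saddlepoint_pdf(x) - exact(x)) < 1e-12
```

For non-Gaussian cumulant sets the saddlepoint equation generally needs a numerical root-finder, and the support and renormalization issues discussed in the abstract arise.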
Self-similar factor approximants
International Nuclear Information System (INIS)
Gluzman, S.; Yukalov, V.I.; Sornette, D.
2003-01-01
The problem of reconstructing functions from their asymptotic expansions in powers of a small variable is addressed by deriving an improved type of approximants. The derivation is based on the self-similar approximation theory, which presents the passage from one approximant to another as the motion realized by a dynamical system with the property of group self-similarity. The derived approximants, because of their form, are called self-similar factor approximants. These complement the obtained earlier self-similar exponential approximants and self-similar root approximants. The specific feature of self-similar factor approximants is that their control functions, providing convergence of the computational algorithm, are completely defined from the accuracy-through-order conditions. These approximants contain the Pade approximants as a particular case, and in some limit they can be reduced to the self-similar exponential approximants previously introduced by two of us. It is proved that the self-similar factor approximants are able to reproduce exactly a wide class of functions, which include a variety of nonalgebraic functions. For other functions, not pertaining to this exactly reproducible class, the factor approximants provide very accurate approximations, whose accuracy surpasses significantly that of the most accurate Pade approximants. This is illustrated by a number of examples showing the generality and accuracy of the factor approximants even when conventional techniques meet serious difficulties
Energy Technology Data Exchange (ETDEWEB)
Sun, Hongwei; Pistorius, Stephen [Department of Physics and Astronomy, University of Manitoba, CancerCare, Manitoba (Canada)
2016-08-15
PET images are affected by the presence of scattered photons. Incorrect scatter correction may cause artifacts, particularly in 3D PET systems. Current scatter reconstruction methods do not distinguish between single and higher order scattered photons. A dual-scattered reconstruction method (GDS-MLEM) that is independent of the number of Compton scattering interactions and less sensitive to the need for high energy resolution detectors is proposed. To avoid overcorrecting for scattered coincidences, the attenuation coefficient was calculated by integrating the differential Klein-Nishina cross-section over a restricted energy range, accounting only for scattered photons that were not detected. The optimum image can be selected by choosing an energy threshold which is the upper energy limit for the calculation of the cross-section and the lower limit for scattered photons in the reconstruction. Data were simulated using the GATE platform. 500,000 multiple scattered photon coincidences with perfect energy resolution were reconstructed using various methods. The GDS-MLEM algorithm had the highest confidence (98%) in locating the annihilation position and was capable of reconstructing the two largest hot regions. 100,000 photon coincidences, with a scatter fraction of 40%, were used to test the energy resolution dependence of different algorithms. With a 350–650 keV energy window and the restricted attenuation correction model, the GDS-MLEM algorithm was able to improve contrast recovery and reduce the noise by 7.56%–13.24% and 12.4%–24.03%, respectively. This approach is less sensitive to the energy resolution and shows promise if detector energy resolutions of 12% can be achieved.
Bayesian image reconstruction in SPECT using higher order mechanical models as priors
International Nuclear Information System (INIS)
Lee, S.J.; Gindi, G.; Rangarajan, A.
1995-01-01
While the ML-EM (maximum-likelihood expectation-maximization) algorithm for image reconstruction in emission tomography is unstable due to the ill-posed nature of the problem, Bayesian reconstruction methods overcome this instability by introducing prior information, often in the form of a spatial smoothness regularizer. More elaborate forms of smoothness constraints may be used to extend the role of the prior beyond that of a stabilizer in order to capture actual spatial information about the object. Previously proposed forms of such prior distributions were based on the assumption of a piecewise constant source distribution. Here, the authors propose an extension to a piecewise linear model--the weak plate--which is more expressive than the piecewise constant model. The weak plate prior not only preserves edges but also allows for piecewise ramplike regions in the reconstruction. Indeed, for the application in SPECT, such ramplike regions are observed in ground-truth source distributions in the form of primate autoradiographs of rCBF radionuclides. To incorporate the weak plate prior in a MAP approach, the authors model the prior as a Gibbs distribution and use a GEM formulation for the optimization. They compare the quantitative performance of the ML-EM algorithm, a GEM algorithm with a prior favoring piecewise constant regions, and a GEM algorithm with the weak plate prior. Pointwise and regional bias and variance of ensemble image reconstructions are used as indications of image quality. The results show that the weak plate and membrane priors exhibit improved bias and variance relative to ML-EM techniques.
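For reference, the baseline ML-EM update that the Bayesian (MAP/GEM) variants above regularize can be sketched in a few lines. The toy system matrix and data here are illustrative assumptions, not from the paper:

```python
import numpy as np

# Toy emission system: y ~ Poisson(A @ x_true), A is the system matrix.
rng = np.random.default_rng(0)
A = rng.uniform(0.1, 1.0, size=(40, 8))       # detection probabilities
x_true = np.array([5., 1., 0.5, 3., 2., 4., 0.2, 1.5])
y = rng.poisson(A @ x_true).astype(float)

def mlem(A, y, n_iter=200):
    """Classic ML-EM multiplicative update:
       x <- x / (A^T 1) * A^T (y / (A x))."""
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)                      # sensitivity image A^T 1
    for _ in range(n_iter):
        proj = A @ x
        x = x / sens * (A.T @ (y / np.maximum(proj, 1e-12)))
    return x

x_hat = mlem(A, y)
# The iterates stay nonnegative and monotonically increase the Poisson
# log-likelihood; without a prior the images grow noisy with iteration,
# which is exactly the instability the Gibbs priors above address.
```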
International Conference Approximation Theory XIV
Schumaker, Larry
2014-01-01
This volume developed from papers presented at the international conference Approximation Theory XIV, held April 7–10, 2013 in San Antonio, Texas. The proceedings contain surveys by invited speakers, covering topics such as splines on non-tensor-product meshes, Wachspress and mean value coordinates, curvelets and shearlets, barycentric interpolation, and polynomial approximation on spheres and balls. Other contributed papers address a variety of current topics in approximation theory, including eigenvalue sequences of positive integral operators, image registration, and support vector machines. This book will be of interest to mathematicians, engineers, and computer scientists working in approximation theory, computer-aided geometric design, numerical analysis, and related approximation areas.
Directional x-ray dark-field imaging of strongly ordered systems
Jensen, Torben Haugaard; Bech, Martin; Zanette, Irene; Weitkamp, Timm; David, Christian; Deyhle, Hans; Rutishauser, Simon; Reznikova, Elena; Mohr, Jürgen; Feidenhans'L, Robert; Pfeiffer, Franz
2010-12-01
Recently a novel grating based x-ray imaging approach called directional x-ray dark-field imaging was introduced. Directional x-ray dark-field imaging yields information about the local texture of structures smaller than the pixel size of the imaging system. In this work we extend the theoretical description and data processing schemes for directional dark-field imaging to strongly scattering systems, which could not be described previously. We develop a simple scattering model to account for these recent observations and subsequently demonstrate the model using experimental data. The experimental data includes directional dark-field images of polypropylene fibers and a human tooth slice.
Astola, L.J.; Florack, L.M.J.
2011-01-01
We study 3D-multidirectional images, using Finsler geometry. The application considered here is in medical image analysis, specifically in High Angular Resolution Diffusion Imaging (HARDI) (Tuch et al. in Magn. Reson. Med. 48(6):1358–1372, 2004) of the brain. The goal is to reveal the architecture
Astola, L.J.; Florack, L.M.J.
2010-01-01
We study 3D-multidirectional images, using Finsler geometry. The application considered here is in medical image analysis, specifically in High Angular Resolution Diffusion Imaging (HARDI) [24] of the brain. The goal is to reveal the architecture of the neural fibers in brain white matter. To the
Diophantine approximation and badly approximable sets
DEFF Research Database (Denmark)
Kristensen, S.; Thorn, R.; Velani, S.
2006-01-01
The classical set Bad of `badly approximable' numbers in the theory of Diophantine approximation falls within our framework as do the sets Bad(i,j) of simultaneously badly approximable numbers. Under various natural conditions we prove that the badly approximable subsets of Omega have full Hausdorff dimension...
Claret, Pierre-Géraud; Bobbia, Xavier; Macri, Francesco; Stowell, Andrew; Motté, Antony; Landais, Paul; Beregi, Jean-Paul; de La Coussaye, Jean-Emmanuel
2016-06-01
The adoption of computerized physician order entry is an important cornerstone of using health information technology (HIT) in health care. The transition from paper to computer forms presents a change in physicians' practices. The main objective of this study was to investigate the impact of implementing a computer-based order entry (CPOE) system without clinical decision support on the number of radiographs ordered for patients admitted to the emergency department. This single-center pre-/post-intervention study was conducted in January 2013 (before-CPOE period) and January 2014 (after-CPOE period) at the emergency department of Nîmes University Hospital. All patients admitted to the emergency department who had undergone medical imaging were included in the study. Emergency department admissions increased after the implementation of CPOE (5388 patients in the period before CPOE implementation vs. 5808 patients after, p=.008). In the period before CPOE implementation, 2345 patients (44%) had undergone medical imaging; in the period after, 2306 patients (40%) had undergone medical imaging (p=.008). In the period before CPOE, 2916 medical imaging procedures were ordered; in the period after, 2876 were ordered (p=.006). In the period before CPOE, 1885 radiographs were ordered; in the period after, 1776 radiographs were ordered; the ordering of other medical imaging did not vary between the two periods. Our results show a decrease in the number of radiograph requests after a CPOE system without clinical decision support was implemented in our emergency department. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Liu, Chaocheng; Desai, Shashwat; Krebs, Lynette D; Kirkland, Scott W; Keto-Lambert, Diana; Rowe, Brian H
2018-01-08
Low back pain (LBP) is an extremely frequent reason for patients to present to an emergency department (ED). Despite evidence against the utility of imaging, simple and advanced imaging (i.e., computed tomography [CT], magnetic resonance imaging) for patients with LBP has become increasingly frequent in the ED. The objective of this review was to identify and examine the effectiveness of interventions aimed at reducing image ordering in the ED for LBP patients. A protocol was developed a priori, following the PRISMA guidelines, and registered with PROSPERO. Six bibliographic databases (including MEDLINE, EMBASE, EBM Reviews, SCOPUS, CINAHL, and Dissertation Abstracts) and the gray literature were searched. Comparative studies assessing interventions that targeted image ordering in the ED for adult patients with LBP were eligible for inclusion. Two reviewers independently screened study eligibility and completed data extraction. Study quality was completed independently by two reviewers using the before-after quality assessment checklist, with a third-party mediator resolving any differences. Due to a limited number of studies and significant heterogeneity, only a descriptive analysis was performed. The search yielded 603 unique citations of which a total of five before-after studies were included. Quality assessment identified potential biases relating to comparability between the pre- and postintervention groups, reliable assessment of outcomes, and an overall lack of information on the intervention (i.e., time point, description, intervention data collection). The type of interventions utilized included clinical decision support tools, clinical practice guidelines, a knowledge translation initiative, and multidisciplinary protocols. Overall, four studies reported a decrease in the relative percentage change in imaging in a specific image modality (22.7%-47.4%) following implementation of the interventions; however, one study reported a 35% increase in patient
Hewawasam, Kuravi; Mendillo, Christopher B.; Howe, Glenn A.; Martel, Jason; Finn, Susanna C.; Cook, Timothy A.; Chakrabarti, Supriya
2017-09-01
The Planetary Imaging Concept Testbed Using a Recoverable Experiment - Coronagraph (PICTURE-C) mission will directly image debris disks and exozodiacal dust around nearby stars from a high-altitude balloon using a vector vortex coronagraph. The PICTURE-C low-order wavefront control (LOWC) system will be used to correct time-varying low-order aberrations due to pointing jitter, gravity sag, thermal deformation, and the gondola pendulum motion. We present the hardware and software implementation of the low-order Shack-Hartmann and reflective Lyot stop sensors. Development of the high-speed image acquisition and processing system is discussed with emphasis on the reduction of hardware and computational latencies through the use of a real-time operating system and optimized data handling. By characterizing all of the LOWC latencies, we describe techniques to achieve a frame rate of 200 Hz with a mean latency of ~378 μs.
Multi-surface segmentation of OCT images with AMD using sparse high order potentials.
Oliveira, Jorge; Pereira, Sérgio; Gonçalves, Luís; Ferreira, Manuel; Silva, Carlos A
2017-01-01
In age-related macular degeneration (AMD), the quantification of drusen is important because it is correlated with the evolution of the disease to an advanced stage. Therefore, we propose an algorithm based on a multi-surface framework for the segmentation of the limiting boundaries of drusen: the inner boundary of the retinal pigment epithelium + drusen complex (IRPEDC) and the Bruch's membrane (BM). Several segmentation methods have been considerably successful in segmenting retinal layers of healthy retinas in optical coherence tomography (OCT) images. These methods are successful because they incorporate prior information and regularization. Nonetheless, these factors tend to hinder the segmentation for diseased retinas. The proposed algorithm takes into account the presence of drusen and geographic atrophy (GA) related to AMD by excluding prior information and regularization valid only for healthy regions. However, even with this algorithm, prior information and regularization still cause the oversmoothing of drusen in some locations. Thus, we propose the integration of a local shape prior in the form of sparse high order potentials (SHOPs) into the algorithm to reduce the oversmoothing of drusen. The proposed algorithm was evaluated on a public database. The mean unsigned errors, relative to the average of two experts, for the inner limiting membrane (ILM), IRPEDC and BM were 2.94±2.69, 5.53±5.66 and 4.00±4.00 µm, respectively. Drusen area measurements were evaluated, relative to the average of two expert graders, by the mean absolute area difference and overlap ratio, which were 1579.7±2106.8 µm² and 0.78±0.11, respectively.
International Nuclear Information System (INIS)
Yang Yi; Tang Xiangyang
2012-01-01
Purpose: The x-ray differential phase contrast imaging implemented with Talbot interferometry has recently been reported to be capable of providing tomographic images corresponding to attenuation contrast, phase contrast, and dark-field contrast simultaneously, from a single set of projection data. The authors believe that, along with small-angle x-ray scattering, the second-order phase derivative Φ″_s(x) plays a role in the generation of dark-field contrast. In this paper, the authors derive the analytic formulae to characterize the contribution made by the second-order phase derivative to the dark-field contrast (namely, second-order differential phase contrast) and validate them via a computer simulation study. By proposing a practical retrieval method, the authors investigate the potential of second-order differential phase contrast imaging for extensive applications. Methods: The theoretical derivation starts by assuming that the refractive index decrement of an object can be decomposed into δ = δ_s + δ_f, where δ_f corresponds to the object's fine structures and manifests itself in the dark-field contrast via small-angle scattering. Based on the paraxial Fresnel-Kirchhoff theory, the analytic formulae to characterize the contribution made by δ_s, which corresponds to the object's smooth structures, to the dark-field contrast are derived. Through computer simulation with specially designed numerical phantoms, an x-ray differential phase contrast imaging system implemented with Talbot interferometry is utilized to evaluate and validate the derived formulae. The same imaging system is also utilized to evaluate and verify the capability of the proposed method to retrieve the second-order differential phase contrast for imaging, as well as its robustness over the dimension of the detector cell and the number of steps in grating shifting. Results: Both analytic formulae and computer simulations show that, in addition to small-angle scattering, the
Energy Technology Data Exchange (ETDEWEB)
Wang, Tianhan; Zhu, Diling; Wu, Benny; Graves, Catherine; Schaffert, Stefan; Rander, Torbjorn; Muller, Leonard; Vodungbo, Boris; Baumier, Cedric; Bernstein, David P.; Brauer, Bjorn; Cros, Vincent; Jong, Sanne de; Delaunay, Renaud; Fognini, Andreas; Kukreja, Roopali; Lee, Sooheyong; Lopez-Flores, Victor; Mohanty, Jyoti; Pfau, Bastian; Popescu, Horia
2012-05-15
We present the first single-shot images of ferromagnetic, nanoscale spin order taken with femtosecond x-ray pulses. X-ray-induced electron and spin dynamics can be outrun with pulses shorter than 80 fs in the investigated fluence regime, and no permanent aftereffects in the samples are observed below a fluence of 25 mJ/cm². Employing resonant spatially multiplexed x-ray holography results in a low imaging threshold of 5 mJ/cm². Our results open new ways to combine ultrafast laser spectroscopy with sequential snapshot imaging on a single sample, generating a movie of excited state dynamics.
High-order noise analysis for low dose iterative image reconstruction methods: ASIR, IRIS, and MBAI
Do, Synho; Singh, Sarabjeet; Kalra, Mannudeep K.; Karl, W. Clem; Brady, Thomas J.; Pien, Homer
2011-03-01
Iterative reconstruction techniques (IRTs) have been shown to suppress noise significantly in low dose CT imaging. However, medical doctors hesitate to accept this new technology because the visual impression of IRT images differs from that of full-dose filtered back-projection (FBP) images. Common noise measurements, such as the mean and standard deviation of a homogeneous region in the image, do not provide a sufficient characterization of noise statistics when the probability density function becomes non-Gaussian. In this study, we measure L-moments of intensity values of images acquired at 10% of normal dose and reconstructed by the IRT methods of two state-of-the-art clinical scanners (i.e., GE HDCT and Siemens DSCT flash), keeping the dose level identical for both. The high- and low-dose scans (i.e., 10% of high dose) were acquired from each scanner and L-moments of noise patches were calculated for the comparison.
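The L-moments used above as noise descriptors can be computed directly from order statistics via probability-weighted moments. A minimal sketch of the standard estimator (not the authors' code; the five-point sample is illustrative):

```python
from math import comb

def l_moments(sample):
    """Sample L-moments l1..l4 from probability-weighted moments b_r,
    using the unbiased estimator on the sorted sample."""
    x = sorted(sample)
    n = len(x)
    b = [sum(comb(i, r) * x[i] for i in range(r, n)) / (n * comb(n - 1, r))
         for r in range(4)]
    l1 = b[0]                                  # location (the mean)
    l2 = 2 * b[1] - b[0]                       # scale (half Gini mean difference)
    l3 = 6 * b[2] - 6 * b[1] + b[0]            # unscaled L-skewness
    l4 = 20 * b[3] - 30 * b[2] + 12 * b[1] - b[0]   # unscaled L-kurtosis
    return l1, l2, l3, l4

# For a symmetric sample, l1 is the mean and l3 vanishes.
sample = [1.0, 2.0, 3.0, 4.0, 5.0]
l1, l2, l3, l4 = l_moments(sample)
```

Because each l_r is a linear combination of order statistics, these summaries remain well defined and stable for heavy-tailed, non-Gaussian noise where conventional higher moments are fragile, which is the point of using them here.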
Third order harmonic imaging for biological tissues using three phase-coded pulses.
Ma, Qingyu; Gong, Xiufen; Zhang, Dong
2006-12-22
Compared to the fundamental and the second harmonic imaging, the third harmonic imaging shows significant improvements in image quality due to the better resolution, but it is degraded by the lower sound pressure and signal-to-noise ratio (SNR). In this study, a phase-coded pulse technique is proposed to selectively enhance the sound pressure of the third harmonic by 9.5 dB whereas the fundamental and the second harmonic components are efficiently suppressed and SNR is also increased by 4.7 dB. Based on the solution of the KZK nonlinear equation, the axial and lateral beam profiles of harmonics radiated from a planar piston transducer were theoretically simulated and experimentally examined. Finally, the third harmonic images using this technique were performed for several biological tissues and compared with the images obtained by the fundamental and the second harmonic imaging. Results demonstrate that the phase-coded pulse technique yields a dramatically cleaner and sharper contrast image.
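The phase-coding idea above can be checked with a toy numerical model: if a transmit phase φ appears as nφ on the n-th harmonic after nonlinear propagation, then summing echoes from three pulses phased 0, 2π/3 and 4π/3 cancels the first and second harmonics (the three phasors are the cube roots of unity) while tripling the third, a 20·log10(3) ≈ 9.5 dB gain consistent with the figure quoted in the abstract. All amplitudes and frequencies below are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

t = np.linspace(0.0, 1e-5, 2000, endpoint=False)   # integer cycles in window
f0 = 2e6                                           # hypothetical 2 MHz fundamental
amps = {1: 1.0, 2: 0.3, 3: 0.1}                    # toy harmonic amplitudes

def echo(phi):
    """Received echo for transmit phase phi: n-th harmonic carries phase n*phi."""
    return sum(a * np.cos(2 * np.pi * n * f0 * t + n * phi)
               for n, a in amps.items())

total = echo(0.0) + echo(2 * np.pi / 3) + echo(4 * np.pi / 3)

def band_amp(sig, n):
    """Amplitude of the n-th harmonic via correlation with exp(-i*2*pi*n*f0*t)."""
    z = sig * np.exp(-2j * np.pi * n * f0 * t)
    return 2 * abs(z.mean())

# band_amp(total, 3) is 3x the single-pulse third-harmonic amplitude,
# while harmonics 1 and 2 are suppressed to numerical noise.
```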
Lima, C S; Barbosa, D; Ramos, J; Tavares, A; Monteiro, L; Carvalho, L
2008-01-01
This paper presents a system to support medical diagnosis and the detection of abnormal lesions by processing capsule endoscopic images. Endoscopic images possess rich information expressed by texture. Texture information can be efficiently extracted from medium scales of the wavelet transform. The set of features proposed in this paper to code textural information is named color wavelet covariance (CWC). CWC coefficients are based on the covariances of second-order textural measures; an optimum subset of these is proposed. Third- and fourth-order moments are added to cope with distributions that tend to become non-Gaussian, especially in some pathological cases. The proposed approach is supported by a classifier based on a radial basis function procedure for the characterization of the image regions along the video frames. The whole methodology has been applied to real data containing 6 full endoscopic exams and reached 95% specificity and 93% sensitivity.
Robust rooftop extraction from visible band images using higher order CRF
Li, Er; Femiani, John; Xu, Shibiao; Zhang, Xiaopeng; Wonka, Peter
2015-01-01
In this paper, we propose a robust framework for building extraction in visible band images. We first get an initial classification of the pixels based on an unsupervised presegmentation. Then, we develop a novel conditional random field (CRF
A Combined First and Second Order Variational Approach for Image Reconstruction
Papafitsoros, K.; Schö nlieb, C. B.
2013-01-01
the creation of undesirable artifacts and blocky-like structures in the reconstructed images-a known disadvantage of the ROF model-while being simple and efficiently numerically solvable. ©Springer Science+Business Media New York 2013.
Stoeck, Christian T; von Deuster, Constantin; Fleischmann, Thea; Lipiski, Miriam; Cesarovic, Nikola; Kozerke, Sebastian
2018-04-01
To directly compare in vivo versus postmortem second-order motion-compensated spin-echo diffusion tensor imaging of the porcine heart. Second-order motion-compensated spin-echo cardiac diffusion tensor imaging was performed during systolic contraction in vivo and repeated upon cardiac arrest by barium chloride without repositioning of the study animal or replanning of imaging slices. In vivo and postmortem reproducibility was assessed by repeat measurements. Comparison of helix, transverse, and sheet (E2A) angulation as well as mean diffusivity and fractional anisotropy was performed. Intraclass correlation coefficients for repeated measurements (postmortem/in vivo) were 0.95/0.96 for helix, 0.70/0.66 for transverse, and 0.79/0.72 for E2A angulation; 0.83/0.72 for mean diffusivity; and 0.78/0.76 for fractional anisotropy. The corresponding 95% levels of agreement across the left ventricle were: helix 14 to 18°/12 to 15°, transverse 9 to 10°/10 to 11°, E2A 15 to 20°/16 to 18°. The 95% levels of agreement across the left ventricle for the comparison of postmortem versus in vivo were 20 to 22° for helix, 13 to 19° for transverse, and 24 to 31° for E2A angulation. Parameters derived from in vivo second-order motion-compensated spin-echo diffusion tensor imaging agreed well with postmortem imaging, indicating sufficient suppression of motion-induced signal distortions in in vivo cardiac diffusion tensor imaging. Magn Reson Med 79:2265-2276, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
Rational approximations for tomographic reconstructions
International Nuclear Information System (INIS)
Reynolds, Matthew; Beylkin, Gregory; Monzón, Lucas
2013-01-01
We use optimal rational approximations of projection data collected in x-ray tomography to improve image resolution. Under the assumption that the object of interest is described by functions with jump discontinuities, for each projection we construct its rational approximation with a small (near optimal) number of terms for a given accuracy threshold. This allows us to augment the measured data, i.e., double the number of available samples in each projection or, equivalently, extend (double) the domain of their Fourier transform. We also develop a new, fast, polar coordinate Fourier domain algorithm which uses our nonlinear approximation of projection data in a natural way. Using augmented projections of the Shepp–Logan phantom, we provide a comparison between the new algorithm and the standard filtered back-projection algorithm. We demonstrate that the reconstructed image has improved resolution without additional artifacts near sharp transitions in the image. (paper)
X-ray holographic imaging of magnetic order in meander domain structures
Directory of Open Access Journals (Sweden)
Jaouen Nicolas
2013-01-01
We performed x-ray holography experiments using synchrotron radiation. By analyzing the scattering of coherent circularly polarized x-rays tuned at the Co-2p resonance, we imaged perpendicular magnetic domains in a Co/Pd multilayer. We compare results obtained for continuous and laterally confined films.
Vecellio, Elia; Georgiou, Andrew
2016-01-01
Repeat and redundant procedures in medical imaging are associated with increases in resource utilisation and labour costs. Unnecessary medical imaging in some modalities, such as X-Ray (XR) and Computed Tomography (CT) is an important safety issue because it exposes patients to ionising radiation which can be carcinogenic and is associated with higher rates of cancer. The aim of this study was to assess the impact of implementing an integrated Computerised Provider Order Entry (CPOE)/Radiology Information System (RIS)/Picture Archiving and Communications System (PACS) system on the number of XR and CT imaging procedures (including repeat imaging requests) for inpatients at a large metropolitan hospital. The study found that patients had an average 0.47 fewer XR procedures and 0.07 fewer CT procedures after the implementation of the integrated system. Part of this reduction was driven by a lower rate of repeat procedures: the average inpatient had 0.13 fewer repeat XR procedures within 24-hours of the previous identical XR procedure. A similar decrease was not evident for repeat CT procedures. Reduced utilisation of imaging procedures (especially those within very short intervals from the previous identical procedure, which are more likely to be redundant) has implications for the safety of patients and the cost of medical imaging services.
National Research Council Canada - National Science Library
Hathi, N. P; Jansen, R. A; Windhorst, R. A; Cohen, S. H; Keel, W. C; Corbin, M. R; Ryan, Jr, R. E
2007-01-01
The Hubble Ultra Deep Field (HUDF) contains a significant number of B-, V-, and i′-band dropout objects, many of which were recently confirmed to be young star-forming galaxies at z ≈ 4-6...
Directory of Open Access Journals (Sweden)
Jian-feng Zhao
2017-01-01
This paper presents a three-dimensional autonomous chaotic system with a high fractional dimension. It is noted that the nonlinear characteristic of this improper fractional-order chaos is interesting. Based on the continuous chaos and the discrete wavelet function map, an image encryption algorithm is put forward. The key space is formed by the initial state variables, parameters, and orders of the system. Every pixel value is included in the secret key, so as to improve the antiattack capability of the algorithm. The simulation results and extensive security analyses demonstrate the high level of security of the algorithm and show its robustness against various types of attacks.
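The keystream-XOR principle underlying such chaos-based ciphers can be sketched with the classic logistic map standing in for the paper's three-dimensional fractional-order system. The map, key values, and byte quantization below are illustrative assumptions, not the published scheme:

```python
def logistic_keystream(x0, r, n, burn=200):
    """Keystream bytes from the logistic map x <- r*x*(1-x).
    x0 in (0,1) and r act as the secret key; 'burn' discards transients."""
    x = x0
    for _ in range(burn):
        x = r * x * (1.0 - x)
    out = []
    for _ in range(n):
        x = r * x * (1.0 - x)
        out.append(int(x * 256) % 256)        # quantize chaotic state to a byte
    return bytes(out)

def xor_cipher(data, key=(0.3579, 3.99)):
    """Encrypt or decrypt: XOR with the chaotic keystream (its own inverse)."""
    ks = logistic_keystream(key[0], key[1], len(data))
    return bytes(d ^ k for d, k in zip(data, ks))

pixels = bytes(range(0, 250, 10))             # toy "image" row
cipher = xor_cipher(pixels)
plain  = xor_cipher(cipher)                   # same key recovers the plaintext
```

Sensitivity to x0 and r gives the large key space such schemes rely on; the published algorithm additionally folds pixel values into the key, so identical plaintexts produce different keystreams.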
International Nuclear Information System (INIS)
Cao, Xiaoqing; Xie, Qingguo; Xiao, Peng
2015-01-01
List mode format is commonly used in modern positron emission tomography (PET) for image reconstruction due to certain special advantages. In this work, we proposed a list mode based regularized relaxed ordered subset (LMROS) algorithm for static PET imaging. LMROS is able to work with regularization terms which can be formulated as twice differentiable convex functions. Such a versatility would make LMROS a convenient and general framework for fulfilling different regularized list mode reconstruction methods. LMROS was applied to two simulated undersampling PET imaging scenarios to verify its effectiveness. Convex quadratic function, total variation constraint, non-local means and dictionary learning based regularization methods were successfully realized for different cases. The results showed that the LMROS algorithm was effective and some regularization methods greatly reduced the distortions and artifacts caused by undersampling. (paper)
On Scientific Data and Image Compression Based on Adaptive Higher-Order FEM
Czech Academy of Sciences Publication Activity Database
Šolín, Pavel; Andrš, David
2009-01-01
Roč. 1, č. 1 (2009), s. 56-68 ISSN 2070-0733 R&D Projects: GA ČR(CZ) GA102/07/0496; GA AV ČR IAA100760702 Institutional research plan: CEZ:AV0Z20570509 Keywords: data compression * image compression * adaptive hp-FEM Subject RIV: JA - Electronics; Optoelectronics, Electrical Engineering http://www.global-sci.org/aamm
International Nuclear Information System (INIS)
Choi, Kihwan; Li, Ruijiang; Nam, Haewon; Xing, Lei
2014-01-01
As a solution to iterative CT image reconstruction, first-order methods are prominent for the large-scale capability and the fast convergence rate O(1/k²). In practice, the CT system matrix with a large condition number may lead to slow convergence speed despite the theoretically promising upper bound. The aim of this study is to develop a Fourier-based scaling technique to enhance the convergence speed of first-order methods applied to CT image reconstruction. Instead of working in the projection domain, we transform the projection data and construct a data fidelity model in Fourier space. Inspired by the filtered backprojection formalism, the data are appropriately weighted in Fourier space. We formulate an optimization problem based on weighted least-squares in the Fourier space and total-variation (TV) regularization in image space for parallel-beam, fan-beam and cone-beam CT geometry. To achieve the maximum computational speed, the optimization problem is solved using a fast iterative shrinkage-thresholding algorithm with backtracking line search and GPU implementation of projection/backprojection. The performance of the proposed algorithm is demonstrated through a series of digital simulation and experimental phantom studies. The results are compared with the existing TV regularized techniques based on statistics-based weighted least-squares as well as basic algebraic reconstruction technique. The proposed Fourier-based compressed sensing (CS) method significantly improves both the image quality and the convergence rate compared to the existing CS techniques. (paper)
Choi, Kihwan; Li, Ruijiang; Nam, Haewon; Xing, Lei
2014-06-21
As a solution to iterative CT image reconstruction, first-order methods are prominent for the large-scale capability and the fast convergence rate O(1/k²). In practice, the CT system matrix with a large condition number may lead to slow convergence speed despite the theoretically promising upper bound. The aim of this study is to develop a Fourier-based scaling technique to enhance the convergence speed of first-order methods applied to CT image reconstruction. Instead of working in the projection domain, we transform the projection data and construct a data fidelity model in Fourier space. Inspired by the filtered backprojection formalism, the data are appropriately weighted in Fourier space. We formulate an optimization problem based on weighted least-squares in the Fourier space and total-variation (TV) regularization in image space for parallel-beam, fan-beam and cone-beam CT geometry. To achieve the maximum computational speed, the optimization problem is solved using a fast iterative shrinkage-thresholding algorithm with backtracking line search and GPU implementation of projection/backprojection. The performance of the proposed algorithm is demonstrated through a series of digital simulation and experimental phantom studies. The results are compared with the existing TV regularized techniques based on statistics-based weighted least-squares as well as basic algebraic reconstruction technique. The proposed Fourier-based compressed sensing (CS) method significantly improves both the image quality and the convergence rate compared to the existing CS techniques.
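The first-order machinery with the O(1/k²) rate referred to in the records above is FISTA-type acceleration (gradient step plus Nesterov momentum). A minimal sketch for plain least squares, with identity prox (no TV term) and toy data whose dimensions are illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(30, 10))                 # stand-in for the system matrix
b = rng.normal(size=30)                       # stand-in for measured data

L = np.linalg.norm(A, 2) ** 2                 # Lipschitz constant of the gradient
grad = lambda x: A.T @ (A @ x - b)

def fista(n_iter=500):
    """FISTA for min (1/2)||Ax-b||^2: gradient step at the extrapolated
    point z, then Nesterov momentum update, giving the O(1/k^2) rate."""
    x = np.zeros(10)
    z = x.copy()
    t = 1.0
    for _ in range(n_iter):
        x_new = z - grad(z) / L               # prox would act here with TV
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        z = x_new + (t - 1) / t_new * (x_new - x)
        x, t = x_new, t_new
    return x

x_hat = fista()
x_star = np.linalg.lstsq(A, b, rcond=None)[0]
```

The paper's Fourier-space weighting is a preconditioner on top of exactly this loop: by rescaling the data fidelity term it shrinks the effective condition number that slows the iteration down.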
International Nuclear Information System (INIS)
Kajan, Zahra Dalili; Davalloo, Reza Tayefeh; Tavangar, Mayam; Valizade, Fatemeh
2015-01-01
Contrast, sharpness, enhancement, and density can all be changed in digital systems. The important question is to what extent changes in these variables affect the accuracy of caries detection. Forty-eight extracted human posterior teeth with sound or carious proximal surfaces were imaged using a photostimulable phosphor (PSP) sensor. All original images were processed using a six-step method: (1) applying the 'Sharpening 2' and 'Noise Reduction' processing options to the original images; (2) applying the 'Magnification 1:3' option to the images obtained in the first step; (3) enhancing the original images using the 'Diagonal/' option; (4) reviewing the changes brought about by the third step and then applying 'Magnification 1:3'; (5) applying 'Sharpening UM' to the original images; and (6) reviewing the changes brought about by the fifth step and finally applying 'Magnification 1:3.' Three observers evaluated the images. The tooth sections were evaluated histologically as the gold standard. The diagnostic accuracy of the observers was compared using a chi-squared test. Accuracy levels, irrespective of the image processing method, ranged from weak (18.8%) to intermediate (54.2%), but the highest accuracy was achieved at the sixth image processing step. The overall diagnostic accuracy level showed a statistically significant difference (p=0.0001). This study shows that applying 'Sharpening UM' along with the 'Magnification 1:3' processing option improved the diagnostic accuracy and the observer agreement more effectively than the other processing procedures.
Energy Technology Data Exchange (ETDEWEB)
Kajan, Zahra Dalili; Davalloo, Reza Tayefeh; Tavangar, Mayam; Valizade, Fatemeh [Faculty of Dentistry, Guilan University of Medical Sciences, Rasht (Iran, Islamic Republic of)
2015-06-15
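The diagnostic-accuracy comparison in the study above relies on a chi-squared test. A minimal SciPy sketch follows, using a hypothetical 2×2 table of correct vs. incorrect diagnoses; the counts are invented for illustration and are not taken from the study:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical contingency table: rows = processing conditions,
# columns = (correct diagnoses, incorrect diagnoses)
table = np.array([[26, 22],    # e.g. sixth processing step
                  [ 9, 39]])   # e.g. original images
chi2, p, dof, expected = chi2_contingency(table)
```

`chi2_contingency` applies Yates' continuity correction by default for 2×2 tables; `expected` holds the counts implied by the independence hypothesis.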
Intensity-based hierarchical elastic registration using approximating splines.
Serifovic-Trbalic, Amira; Demirovic, Damir; Cattin, Philippe C
2014-01-01
We introduce a new hierarchical approach for elastic medical image registration using approximating splines. To obtain the dense deformation field, we employ Gaussian elastic body splines (GEBS) that incorporate anisotropic landmark errors and rotation information. Since the GEBS approach is based on a physical model in the form of analytical solutions of the Navier equation, it copes very well with both the local and global deformations present in the images by varying the standard deviation of the Gaussian forces. The proposed approximating GEBS model is integrated into the hierarchical elastic image registration framework, which decomposes a nonrigid registration problem into numerous local rigid transformations. The approximating GEBS registration scheme incorporates anisotropic landmark errors as well as rotation information. The anisotropic landmark localization uncertainties can be estimated directly from the image data, in which case they represent the minimal stochastic localization error, i.e., the Cramér-Rao bound. The rotation information of each landmark obtained from the hierarchical procedure is transposed into an additional angular landmark, doubling the number of landmarks in the GEBS model. The modified hierarchical registration using the approximating GEBS model was applied to register 161 image pairs from a digital mammogram database. The results are very encouraging: the proposed approach significantly improved all registrations in terms of mean-square error relative to the approximating TPS with rotation information. On artificially deformed breast images, the newly proposed method performed better than the state-of-the-art registration algorithm introduced by Rueckert et al. (IEEE Trans Med Imaging 18:712-721, 1999): the average error per breast tissue pixel was less than 2.23 pixels, compared to 2.46 pixels for Rueckert's method. The proposed hierarchical elastic image registration approach incorporates the GEBS
Atomic Resolution Imaging of Nanoscale Structural Ordering in a Complex Metal Oxide Catalyst
Zhu, Yihan
2012-08-28
The determination of the atomic structure of a functional material is crucial to understanding its "structure-to-property" relationship (e.g., the active sites in a catalyst), which is however challenging if the structure possesses complex inhomogeneities. Here, we report an atomic structure study of an important MoVTeO complex metal oxide catalyst that is potentially useful for the industrially relevant propane-based BP/SOHIO process. We combined aberration-corrected scanning transmission electron microscopy with synchrotron powder X-ray crystallography to explore the structure at both nanoscopic and macroscopic scales. At the nanoscopic scale, this material exhibits structural and compositional order within nanosized "domains", while the domains show disordered distribution at the macroscopic scale. We proposed that the intradomain compositional ordering and the interdomain electric dipolar interaction synergistically induce the displacement of Te atoms in the Mo-V-O channels, which determines the geometry of the multifunctional metal oxo-active sites.
Study of three-dimensional PET and MR image registration based on higher-order mutual information
International Nuclear Information System (INIS)
Ren Haiping; Chen Shengzu; Wu Wenkai; Yang Hu
2002-01-01
Mutual information is currently one of the most intensively researched similarity measures and has proven to be an accurate and effective registration measure. Despite generally promising results, mutual information can sometimes lead to misregistration because it neglects spatial information and treats intensity variations with undue sensitivity. An extension of the mutual information framework is proposed in which higher-order spatial information about image structures is incorporated into the registration of PET and MR images. The second-order mutual information algorithm was applied to the registration of images from seven patients. Evaluation by Vanderbilt University and the authors' visual inspection showed that sub-voxel accuracy and robust results were achieved in all cases, with second-order mutual information as the similarity measure and Powell's multidimensional direction set method as the optimization strategy
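As background to the higher-order extension described above, standard (first-order) mutual information between two images can be estimated from their joint intensity histogram. A minimal NumPy sketch; the bin count and the random test images are arbitrary choices, not the paper's setup:

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """First-order mutual information (in nats) from the joint intensity histogram."""
    h, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = h / h.sum()                       # joint probability
    px = p.sum(axis=1, keepdims=True)     # marginal of a
    py = p.sum(axis=0, keepdims=True)     # marginal of b
    nz = p > 0                            # avoid log(0) on empty bins
    return float((p[nz] * np.log(p[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(1)
img = rng.random((64, 64))
mi_self = mutual_information(img, img)                    # identical images: large MI
mi_rand = mutual_information(img, rng.random((64, 64)))   # unrelated images: near zero
```

Registration maximizes this quantity over transformations; the second-order variant in the paper additionally conditions on neighboring-pixel intensities.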
Z-Contrast STEM Imaging of Long-Range Ordered Structures in Epitaxially Grown CoPt Nanoparticles
Directory of Open Access Journals (Sweden)
Kazuhisa Sato
2013-01-01
We report on atomic structure imaging of epitaxial L10 CoPt nanoparticles using chemically sensitive high-angle annular dark-field scanning transmission electron microscopy (HAADF-STEM). Highly ordered nanoparticles formed by annealing at 973 K show a single-variant structure with perpendicular c-axis orientation, while multivariant ordered domains are frequently observed for specimens annealed at 873 K. It was found that the (001) facets of the multivariant particles are terminated by Co atoms rather than by Pt, presumably due to the intermediate stage of atomic ordering. The coexistence of single-variant and multivariant particles in the same specimen film suggests that the interfacial energy between variant domains is small enough to form such structural domains in a nanoparticle as small as 4 nm in diameter.
Approximation of Surfaces by Cylinders
DEFF Research Database (Denmark)
Randrup, Thomas
1998-01-01
We present a new method for approximation of a given surface by a cylinder surface. It is a constructive geometric method, leading to a monorail representation of the cylinder surface. By use of a weighted Gaussian image of the given surface, we determine a projection plane. In the orthogonal...
Shearlets and Optimally Sparse Approximations
DEFF Research Database (Denmark)
Kutyniok, Gitta; Lemvig, Jakob; Lim, Wang-Q
2012-01-01
Multivariate functions are typically governed by anisotropic features such as edges in images or shock fronts in solutions of transport-dominated equations. One major goal both for the purpose of compression as well as for an efficient analysis is the provision of optimally sparse approximations...... optimally sparse approximations of this model class in 2D as well as 3D. Even more, in contrast to all other directional representation systems, a theory for compactly supported shearlet frames was derived which moreover also satisfy this optimality benchmark. This chapter shall serve as an introduction...... to and a survey about sparse approximations of cartoon-like images by band-limited and also compactly supported shearlet frames as well as a reference for the state-of-the-art of this research field....
Saranathan, Manojkumar; Rettmann, Dan W; Hargreaves, Brian A; Clarke, Sharon E; Vasanawala, Shreyas S
2012-06-01
To develop and evaluate a multiphasic contrast-enhanced MRI method called DIfferential Sub-sampling with Cartesian Ordering (DISCO) for abdominal imaging. A three-dimensional, variable-density pseudo-random k-space segmentation scheme was developed and combined with a Dixon-based fat-water separation algorithm to generate high temporal resolution images with robust fat suppression and without compromise in spatial resolution or coverage. With institutional review board approval and informed consent, 11 consecutive patients referred for abdominal MRI at 3 Tesla (T) were imaged with both DISCO and a routine clinical three-dimensional SPGR-Dixon (LAVA FLEX) sequence. All images were graded by two radiologists using quality of fat suppression, severity of artifacts, and overall image quality as scoring criteria. For assessment of arterial phase capture efficiency, the number of temporal phases with angiographic phase and hepatic arterial phase was recorded. There were no significant differences in quality of fat suppression, artifact severity, or overall image quality between DISCO and LAVA FLEX images (P > 0.05, Wilcoxon signed rank test). The angiographic and arterial phases were captured in all 11 patients scanned using the DISCO acquisition (the mean numbers of phases were two and three, respectively). DISCO effectively captures the fast dynamics of abdominal pathology, such as hyperenhancing hepatic lesions, with high spatio-temporal resolution. Typically, 1.1 × 1.5 × 3 mm spatial resolution over 60 slices was achieved with a temporal resolution of 4-5 s. Copyright © 2012 Wiley Periodicals, Inc.
Energy Technology Data Exchange (ETDEWEB)
Lin, Chii Dong [Kansas State Univ., Manhattan, KS (United States)
2016-03-21
Directly monitoring atomic motion during a molecular transformation with atomic-scale spatio-temporal resolution is a frontier of ultrafast optical science and physical chemistry. Here we provide the foundation for a new imaging method, fixed-angle broadband laser-induced electron scattering, based on structural retrieval by direct one-dimensional Fourier transform of a photoelectron energy distribution observed along the polarization direction of an intense ultrafast light pulse. The approach exploits the scattering of a broadband wave packet created by strong-field tunnel ionization to self-interrogate the molecular structure with picometre spatial resolution and bond specificity. With its inherent femtosecond resolution, combining our technique with molecular alignment can, in principle, provide the basis for time-resolved tomography for multi-dimensional transient structural determination.
Echelle grating multi-order imaging spectrometer utilizing a catadioptric lens
Chrisp, Michael P; Bowers, Joel M
2014-05-27
A cryogenically cooled imaging spectrometer that includes a spectrometer housing having a first side and a second side opposite the first side. An entrance slit on the first side of the spectrometer housing directs light to a cross-disperser grating. An echelle immersion grating and a catadioptric lens are positioned in the housing to receive the light. A cryogenically cooled detector is located in the housing on the second side of the spectrometer housing. Light from the entrance slit is directed to the cross-disperser grating, from the cross-disperser grating to the echelle immersion grating, and from the echelle immersion grating to the cryogenically cooled detector on the second side of the spectrometer housing.
The efficiency of Flory approximation
International Nuclear Information System (INIS)
Obukhov, S.P.
1984-01-01
The Flory approximation for the self-avoiding chain problem is compared with a conventional perturbation theory expansion. While in perturbation theory each term is averaged over the unperturbed set of configurations, the Flory approximation is equivalent to perturbation theory with averaging over the stretched set of configurations. This imposes restrictions on the integration domain in higher-order terms, which can then be treated self-consistently. The accuracy δν/ν of the Flory approximation for self-avoiding chain problems is estimated to be 2-5% for 1 < d < 4. (orig.)
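The Flory estimate itself has the well-known closed form ν = 3/(d+2), and the few-percent accuracy quoted above is easy to check against reference values:

```python
def flory_nu(d):
    """Flory estimate of the swelling exponent nu for a self-avoiding walk in d dimensions."""
    return 3.0 / (d + 2)

# The estimate is exact at d=1 (nu=1) and d=2 (nu=3/4), and at the upper
# critical dimension d=4 it reproduces the mean-field value nu=1/2.
# In d=3 it gives 0.6 versus the best numerical value of about 0.588,
# i.e., an error of roughly 2%, consistent with the 2-5% quoted above.
nu3 = flory_nu(3)
```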
Prestack traveltime approximations
Alkhalifah, Tariq Ali
2012-05-01
Many of the explicit prestack traveltime relations used in practice are based on homogeneous (or semi-homogeneous, possibly effective) media approximations. These include multifocusing, based on the double square-root (DSR) equation, and the common reflection stack (CRS) approaches. Using the DSR equation, I constructed the associated eikonal form in the general source-receiver domain. Like its wave-equation counterpart, it suffers from a critical singularity for horizontally traveling waves. As a result, I recast the eikonal in terms of the reflection angle and derived expansion-based solutions of this eikonal in terms of the difference between the source and receiver velocities in a generally inhomogeneous background medium. The zero-order term solution, corresponding to ignoring the lateral velocity variation in estimating the prestack part, is free of singularities and can be used to estimate traveltimes for small to moderate offsets (or reflection angles) in a generally inhomogeneous medium. The higher-order terms retain limitations for horizontally traveling waves; however, stability constraints can readily be enforced to avoid such singularities. In fact, another expansion over reflection angle can help avoid these singularities by requiring the source and receiver velocities to be different; expansions in terms of reflection angles result in singularity-free equations. For a homogeneous background medium, as a test, the solutions are reasonably accurate to large reflection and dip angles. A Marmousi example demonstrated the usefulness and versatility of the formulation. © 2012 Society of Exploration Geophysicists.
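For intuition, the double square-root (DSR) traveltime in its simplest form sums a source-side and a receiver-side leg. The sketch below is the constant-velocity point-diffractor special case, not the paper's angle-domain eikonal expansions:

```python
import numpy as np

def dsr_traveltime(xs, xr, z, v):
    """DSR traveltime (s) from source at lateral offset xs and receiver at xr
    to a point diffractor at lateral position 0 and depth z (m),
    in a homogeneous medium of velocity v (m/s)."""
    return (np.sqrt(z**2 + xs**2) + np.sqrt(z**2 + xr**2)) / v

# Zero offset directly above the diffractor: two-way vertical time
t0 = dsr_traveltime(0.0, 0.0, z=1000.0, v=2000.0)
t_off = dsr_traveltime(500.0, 500.0, z=1000.0, v=2000.0)  # longer at finite offset
```

The singularity the abstract refers to arises in the eikonal form of this relation as rays approach horizontal propagation; the constant-velocity expression itself is smooth.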
Directory of Open Access Journals (Sweden)
Sakae Meguro
2016-05-01
An observation system with a centimeter-order field of view for magnetic domains, including local magnetization directions, was developed by designing a finite-conjugate telecentric optical system that extends microscope technology. The field of view realized in the developed system was 1.40 × 1.05 cm, with defocus and distortion suppressed. Detection of the local magnetization direction became possible by longitudinal Kerr observation from two orthogonal directions. This system can be applied to domain observation of rough-surface samples and to time-resolved analysis of soft magnetic materials such as amorphous foil strips and soft magnetic thin films.
Yu, H.; Barriga, S.; Agurto, C.; Zamora, G.; Bauman, W.; Soliz, P.
2012-03-01
Retinal vasculature is one of the most important anatomical structures in digital retinal photographs, and accurate segmentation of retinal blood vessels is an essential task in the automated analysis of retinopathy. This paper presents a new and effective vessel segmentation algorithm that features computational simplicity and fast implementation. The method uses morphological pre-processing to decrease the disturbance of bright structures and lesions before vessel extraction. Next, a vessel probability map is generated by computing the eigenvalues of the second derivatives of the Gaussian-filtered image at multiple scales. Then, second-order local entropy thresholding is applied to segment the vessel map. Lastly, a rule-based decision step, which measures the geometric shape difference between vessels and lesions, is applied to reduce false positives. The algorithm is evaluated on the low-resolution DRIVE and STARE databases and on the publicly available high-resolution image database from Friedrich-Alexander University Erlangen-Nuremberg, Germany. The proposed method achieved performance comparable to state-of-the-art unsupervised vessel segmentation methods, at a competitively faster speed, on the DRIVE and STARE databases. For the high-resolution fundus image database, the proposed algorithm outperforms an existing approach in both performance and speed. This efficiency and robustness make the blood vessel segmentation method described here suitable for broad application in the automated analysis of retinal images.
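The eigenvalue-of-second-derivatives step can be sketched with SciPy's Gaussian derivative filters. This is a simplified Frangi-style vesselness map for dark vessels on a bright background, not the authors' exact pipeline; the scale set and the synthetic test image are arbitrary choices:

```python
import numpy as np
from scipy import ndimage

def vessel_probability(img, scales=(1.0, 2.0, 4.0)):
    """Multi-scale vessel map from eigenvalues of the Hessian of a
    Gaussian-smoothed image (simplified Frangi-style sketch)."""
    best = np.zeros_like(img, dtype=float)
    for s in scales:
        # Second derivatives via Gaussian derivative filters
        # (order is per-axis: axis 0 = rows/y, axis 1 = cols/x)
        hxx = ndimage.gaussian_filter(img, s, order=(0, 2))
        hyy = ndimage.gaussian_filter(img, s, order=(2, 0))
        hxy = ndimage.gaussian_filter(img, s, order=(1, 1))
        # Larger eigenvalue of the 2x2 Hessian at every pixel
        tmp = np.sqrt(((hxx - hyy) / 2) ** 2 + hxy ** 2)
        lam2 = (hxx + hyy) / 2 + tmp
        # A dark tube on a bright background gives a large positive lam2;
        # keep the strongest scale-normalized response across scales
        best = np.maximum(best, s**2 * np.maximum(lam2, 0.0))
    return best

# Synthetic test: a dark horizontal line on a bright background
img = np.ones((32, 32)); img[16, :] = 0.0
vmap = vessel_probability(img)
```

The entropy-thresholding and rule-based steps of the paper would then binarize `vmap` and prune lesion-shaped responses.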
Farshidfar, Z; Faeghi, F; Haghighatkhah, H R; Abdolmohammadi, J
2017-09-01
Magnetic resonance imaging (MRI) is the most sensitive technique for detecting multiple sclerosis (MS) plaques in the central nervous system. In some patients suspected of having MS, however, the MRI images appear normal; the question is whether these patients truly have no MS plaques, or whether the images are simply not optimized enough to show them. The aim of the current study was to evaluate the efficiency of different MRI sequences for better detection of MS plaques. This cross-sectional study, performed at Shohada-e Tajrish Hospital in Tehran, Iran between October 2011 and April 2012, included 20 patients suspected of MS who were selected by random sampling and underwent routine brain pulse sequences (axial T2w, axial T1w, coronal T2w, sagittal T1w, axial FLAIR) on a Siemens Avanto 1.5 Tesla system. If any lesion suspicious for MS was observed, additional sequences (sagittal FLAIR fat sat, sagittal PDw fat sat, sagittal PDw water sat) were also performed. The study covered about 52 lesions. For more than 19 lesions in the subcortical and infratentorial areas, the PDw sequence with fat suppression was the best choice, and for nearly 33 plaques located in the periventricular area, FLAIR fat sat was more effective than both the PDw fat- and water-suppressed pulse sequences. Although large plaques may be visible in all images, the important problem in patients with suspected MS is screening for tiny MS plaques. This study showed that for revealing MS plaques located in the subcortical and infratentorial areas, PDw fat sat is the most effective sequence, and for MS plaques in the periventricular area, FLAIR fat sat is the best choice.
International Nuclear Information System (INIS)
Ginsburg, C.A.
1980-01-01
In many problems, a desired property A of a function f(x) is determined by the behaviour of f(x) ≈ g(x, A) as x → x*. In this letter, a method for resumming the power series in x of f(x) and approximating A (the modulated Padé approximant) is presented. This new approximant is an extension of a resummation method for f(x) in terms of rational functions. (author)
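Ordinary (unmodulated) Padé resummation of a power series, which the modulated approximant extends, is available directly in SciPy. Here a [2/2] approximant of exp(x) is built from five Taylor coefficients; the choice of function and order is purely illustrative:

```python
import numpy as np
from scipy.interpolate import pade

# Taylor coefficients of exp(x) up to x^4
an = [1.0, 1.0, 1/2, 1/6, 1/24]
p, q = pade(an, 2)            # numerator and denominator poly1d of the [2/2] approximant
approx = p(1.0) / q(1.0)      # rational estimate of e = exp(1)
taylor = sum(an)              # plain truncated Taylor series at x = 1
```

At x = 1 the [2/2] approximant is already closer to e than the truncated series of the same input data, illustrating why resummation helps when only a few coefficients are known.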
Kimura, Michio; Kuranishi, Makoto; Sukenobu, Yoshiharu; Watanabe, Hiroki; Tani, Shigeki; Sakusabe, Takaya; Nakajima, Takashi; Morimura, Shinya; Kabata, Shun
2002-06-01
The digital imaging and communications in medicine (DICOM) standard includes parts regarding non-image data, such as image study ordering data and performed procedure data, and is used for sharing information between HIS/RIS and modality systems, which is essential for IHE. To bring these parts of the DICOM standard into force in Japan, a joint committee of JIRA and JAHIS established the JJ1017 management guideline, specifying, for example, which items are legally required in Japan while remaining optional in the DICOM standard. In Japan, orders from referring physicians for radiographic examinations include details of the examination. Such details are typically not supplied by referring physicians requesting radiographic examinations in the United States, because radiologists there often determine the examination protocol. The DICOM standard has code tables for examination type, region, and direction for image examination orders. However, this investigation found that, for the reason mentioned above, these tables are not sufficiently detailed for use in Japan. To overcome these drawbacks, we have generated the JJ1017 codes for these three code tables for use under the JJ1017 guideline. This report introduces the JJ1017 code. These codes (the study type codes in particular) must be expandable to keep up with technical advances in equipment. Expansion has two directions: width, for covering more categories, and depth, for specifying the information in more detail (finer categories). The JJ1017 code takes these requirements into consideration and clearly distinguishes between the stem part, as the common term, and the expansion. The stem part of the JJ1017 code partially utilizes DICOM codes to remain in line with the DICOM standard. This work is an example of how local requirements can be met by using the DICOM standard and extending it.
Face Recognition using Approximate Arithmetic
DEFF Research Database (Denmark)
Marso, Karol
Face recognition is an image processing technique which aims to identify human faces and has found use in various different fields, for example in security. Throughout the years this field has evolved, and there are many approaches and many different algorithms which aim to make face recognition as effective...... processing applications the results do not need to be completely precise, and use of approximate arithmetic can lead to reductions in terms of delay, space, and power consumption. In this paper we examine the possible use of approximate arithmetic in face recognition using the Eigenfaces algorithm.
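The Eigenfaces algorithm mentioned above reduces to PCA on vectorized face images. A minimal exact-arithmetic NumPy sketch follows (random arrays stand in for a face database; an approximate-arithmetic variant would replace the multiplications inside the decomposition and projection):

```python
import numpy as np

def eigenfaces(faces, k):
    """PCA on vectorized face images: returns the mean face and the top-k
    principal components ('eigenfaces')."""
    X = faces.reshape(len(faces), -1).astype(float)
    mean = X.mean(axis=0)
    # SVD of the centered data matrix; rows of Vt are the eigenfaces
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]

def project(face, mean, basis):
    """Coordinates of a face in the eigenface subspace, used for matching."""
    return basis @ (face.ravel() - mean)

rng = np.random.default_rng(2)
faces = rng.random((10, 8, 8))        # stand-in for a small face database
mean, basis = eigenfaces(faces, k=4)
coeffs = project(faces[0], mean, basis)
```

Recognition then compares `coeffs` vectors between a probe face and the gallery, e.g. by nearest neighbor in the subspace.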
Sparse approximation with bases
2015-01-01
This book systematically presents recent fundamental results on greedy approximation with respect to bases. Motivated by numerous applications, the last decade has seen great successes in studying nonlinear sparse approximation. Recent findings have established that greedy-type algorithms are suitable methods of nonlinear approximation in both sparse approximation with respect to bases and sparse approximation with respect to redundant systems. These insights, combined with some previous fundamental results, form the basis for constructing the theory of greedy approximation. Taking into account the theoretical and practical demand for this kind of theory, the book systematically elaborates a theoretical framework for greedy approximation and its applications. The book addresses the needs of researchers working in numerical mathematics, harmonic analysis, and functional analysis. It quickly takes the reader from classical results to the latest frontier, but is written at the level of a graduate course and do...
Approximate quantum Markov chains
Sutter, David
2018-01-01
This book is an introduction to quantum Markov chains and explains how this concept is connected to the question of how well a lost quantum mechanical system can be recovered from a correlated subsystem. To achieve this goal, we strengthen the data-processing inequality such that it reveals a statement about the reconstruction of lost information. The main difficulty in order to understand the behavior of quantum Markov chains arises from the fact that quantum mechanical operators do not commute in general. As a result we start by explaining two techniques of how to deal with non-commuting matrices: the spectral pinching method and complex interpolation theory. Once the reader is familiar with these techniques a novel inequality is presented that extends the celebrated Golden-Thompson inequality to arbitrarily many matrices. This inequality is the key ingredient in understanding approximate quantum Markov chains and it answers a question from matrix analysis that was open since 1973, i.e., if Lieb's triple ma...
Vilardy, Juan M.; Millán, María S.; Pérez-Cabré, Elisabet
2017-02-01
A novel nonlinear image encryption scheme based on a fully phase nonzero-order joint transform correlator (JTC) architecture in the Gyrator domain (GD) is proposed. In this encryption scheme, the two non-overlapping data distributions of the input plane of the JTC are fully encoded in phase, and this input plane is transformed using the Gyrator transform (GT); the intensity distribution captured in the GD represents a new definition of the joint Gyrator power distribution (JGPD). The JGPD is modified by two nonlinear operations with the purpose of retrieving the encrypted image, enhancing the quality of the decrypted signal and improving the overall security. Three keys are used in the encryption scheme, two random phase masks and the rotation angle of the GT, all of which are necessary for proper decryption. Decryption is highly sensitive to changes in the rotation angle of the GT as well as to small changes in the other parameters or keys. The proposed encryption scheme in the GD still preserves the shift-invariance properties that originate in JTC-based encryption in the Fourier domain. The proposed encryption scheme is more resistant to brute force attacks, chosen-plaintext attacks, known-plaintext attacks, and ciphertext-only attacks, as these have been introduced in the cryptanalysis of the JTC-based encryption system. Numerical results are presented and discussed in order to verify and analyze the feasibility and validity of the novel encryption-decryption scheme.
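For context, the classical double random phase encoding that JTC-style optical encryption builds on can be sketched in a few lines of NumPy. Note this is the basic Fourier-domain scheme, not the proposed nonzero-order JTC in the Gyrator domain, and the image is a random stand-in:

```python
import numpy as np

rng = np.random.default_rng(3)
img = rng.random((64, 64))                        # stand-in for the plaintext image

# Two random phase masks: the secret keys
m1 = np.exp(2j * np.pi * rng.random(img.shape))   # input-plane mask
m2 = np.exp(2j * np.pi * rng.random(img.shape))   # Fourier-plane mask

# Encrypt: mask the input, transform, mask in the Fourier plane, transform back
cipher = np.fft.ifft2(np.fft.fft2(img * m1) * m2)

# Decrypt with the conjugate Fourier-plane key; |img * m1| = img since |m1| = 1
recovered = np.abs(np.fft.ifft2(np.fft.fft2(cipher) * np.conj(m2)))
```

The Gyrator-domain scheme replaces the Fourier transforms with Gyrator transforms, adding the rotation angle as a third key.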
Approximate symmetries of Hamiltonians
Chubb, Christopher T.; Flammia, Steven T.
2017-08-01
We explore the relationship between approximate symmetries of a gapped Hamiltonian and the structure of its ground space. We start by considering approximate symmetry operators, defined as unitary operators whose commutators with the Hamiltonian have sufficiently small norms. We show that approximate symmetry operators can be restricted to the ground space while approximately preserving certain mutual commutation relations. We generalize the Stone-von Neumann theorem to matrices that approximately satisfy the canonical (Heisenberg-Weyl-type) commutation relations, and use this to show that approximate symmetry operators can certify the degeneracy of the ground space even though they only approximately form a group. Importantly, the notions of "approximate" and "small" are all independent of the dimension of the ambient Hilbert space and depend only on the degeneracy in the ground space. Our analysis additionally holds for any gapped band of sufficiently small width in the excited spectrum of the Hamiltonian, and we discuss applications of these ideas to topological quantum phases of matter and topological quantum error correcting codes. Finally, in our analysis we also provide an exponential improvement upon bounds concerning the existence of shared approximate eigenvectors of approximately commuting operators under an added normality constraint, which may be of independent interest.
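The defining quantity, the operator norm of the commutator [U, H], is easy to compute numerically. A toy 2×2 check, with the Hamiltonian and unitaries chosen arbitrarily for illustration:

```python
import numpy as np

def commutator_norm(U, H):
    """Operator (spectral) norm of [U, H]; small values mean U is an
    approximate symmetry of H."""
    return np.linalg.norm(U @ H - H @ U, 2)

# Exact symmetry: a rotation about Z commutes with H = Z
Z = np.diag([1.0, -1.0])
U_exact = np.diag(np.exp(1j * np.array([0.3, -0.3])))

# Approximate symmetry: the same rotation conjugated by a small tilt
theta = 0.01
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
U_approx = R @ U_exact @ R.T

n_exact = commutator_norm(U_exact, Z)    # zero up to roundoff
n_approx = commutator_norm(U_approx, Z)  # small but nonzero, O(theta)
```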
Approximations to camera sensor noise
Jin, Xiaodan; Hirakawa, Keigo
2013-02-01
Noise is present in all image sensor data. The Poisson distribution is said to model the stochastic nature of the photon arrival process, while it is common to approximate readout/thermal noise by additive white Gaussian noise (AWGN). Other sources of signal-dependent noise, such as Fano and quantization noise, also contribute to the overall noise profile. Questions remain, however, about how best to model the combined sensor noise. Though additive Gaussian noise with signal-dependent noise variance (SD-AWGN) and Poisson corruption are two widely used models to approximate the actual sensor noise distribution, the justification given for these types of models is based on limited evidence. The goal of this paper is to provide a more comprehensive characterization of random noise. We conclude by presenting concrete evidence that the Poisson model is a better approximation to the real camera noise than SD-AWGN. We suggest further modifications to the Poisson model that may improve the noise model.
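The two competing models can be compared in a few lines: both are calibrated so the variance equals the mean signal, but the Poisson model also carries the positive skew of real photon counts. A sketch with an arbitrary signal level:

```python
import numpy as np

rng = np.random.default_rng(4)
signal = np.full(100_000, 50.0)          # mean photon count per pixel (arbitrary)

# Poisson model: variance equals the mean, distribution is positively skewed
poisson = rng.poisson(signal).astype(float)

# SD-AWGN model: Gaussian with signal-dependent variance, symmetric
sd_awgn = signal + rng.normal(0.0, np.sqrt(signal))

def skewness(x):
    # Sample skewness: third central moment over variance^(3/2)
    return ((x - x.mean()) ** 3).mean() / x.var() ** 1.5

skew_poisson = skewness(poisson)   # about 1/sqrt(50), clearly positive
skew_gauss = skewness(sd_awgn)     # about zero
```

At low signal levels (few photons) the skew difference grows, which is one reason the paper finds Poisson a better fit than SD-AWGN.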
CONTRIBUTIONS TO RATIONAL APPROXIMATION,
Some of the key results of linear Chebyshev approximation theory are extended to generalized rational functions. Prominent among these is Haar's linear theorem, which yields necessary and sufficient conditions for uniqueness. Some new results in the classic field of rational function Chebyshev approximation are also obtained. Furthermore, a Weierstrass-type theorem is proven for rational Chebyshev approximation. A characterization theorem for rational trigonometric Chebyshev approximation in terms of sign alternation is developed. (Author)
Approximation techniques for engineers
Komzsik, Louis
2006-01-01
Presenting numerous examples, algorithms, and industrial applications, Approximation Techniques for Engineers is your complete guide to the major techniques used in modern engineering practice. Whether you need approximations for discrete data of continuous functions, or you're looking for approximate solutions to engineering problems, everything you need is nestled between the covers of this book. Now you can benefit from Louis Komzsik's years of industrial experience to gain a working knowledge of a vast array of approximation techniques through this complete and self-contained resource.
Mulkens, Jan; Kubis, Michael; Hinnen, Paul; de Graaf, Roelof; van der Laan, Hans; Padiy, Alexander; Menchtchikov, Boris
2013-04-01
Immersion lithography is being extended to the 20-nm and 14-nm nodes, and the lithography performance requirements need to be tightened further to enable this shrink. In this paper we present an integral method to enable high-order field-to-field corrections for both imaging and overlay, and we show that this method improves performance by 20%-50%. The lithography architecture we built for these higher-order corrections connects the dynamic scanner actuators with the angle-resolved scatterometer via a separate application server. Improvements in CD uniformity are based on enabling use of the freeform intra-field dose actuator and field-to-field control of focus. The feedback control loop uses CD and focus targets placed on the production mask. For overlay metrology we use small in-die diffraction-based overlay targets. Improvements in overlay are based on using the high-order intra-field correction actuators on a field-to-field basis. We use this to reduce machine matching error, extending heating control and extending the correction capability for process-induced errors.
Directory of Open Access Journals (Sweden)
Pamina Fernández Camacho
2016-12-01
Full Text Available Atlas is a very complex mythical figure. A rebel Titan in epic tradition, he adopts the roles of both antagonist and helper of Herakles as the hero attempts to fulfill his task of stealing the apples of the Hesperides. Philosophers, geographers and historians consider him a Euhemerized astronomer, a king in possession of fabulous riches, or identify him with a mountain. Often juggling positive and negative traits, Atlas' various incarnations are connected to different themes such as divine genealogies, spatial and cosmogonic landmarks, Eastern influences, abundance in cattle and usurpation myths, which are all part of the image of the regions north and south of the Strait of Gibraltar created and spread by Graeco-Roman civilization. In this article, we propose a study of the literature and iconography related to this mythical character, as a way to give a different perspective to research about the mythical and literary geography of that area.
Expectation Consistent Approximate Inference
DEFF Research Database (Denmark)
Opper, Manfred; Winther, Ole
2005-01-01
We propose a novel framework for approximations to intractable probabilistic models which is based on a free energy formulation. The approximation can be understood from replacing an average over the original intractable distribution with a tractable one. It requires two tractable probability dis...
Directory of Open Access Journals (Sweden)
Habibeh Farazdaghi
2017-02-01
Full Text Available Background and Aims: Gender determination is an important step in identification. For gender determination, anthropometric evaluation is one of the main forensic evaluations. The aim of this study was the assessment of sphenoid sinus volume in order to determine sexual identity, using multi-slice CT images. Materials and Methods: For volumetric analysis, axial paranasal sinus CT scans with 3-mm slice thickness were used. For this study, 80 images (40 women and 40 men older than 18 years) were selected. For the assessment of sphenoid sinus volume, Digimizer software was used. The volume of the sphenoid sinus was calculated using the following equation: v = ∑(area of each slice × thickness of each slice). Statistical analysis was performed by independent t-test. Results: The mean volume of the sphenoid sinus was significantly greater in males (P=0.01). The assessed cut-off point was 9.35 cm3: 63.4% of volume assessments greater than the cut-off point were male and 64.1% of volumetries lesser than the cut-off point were female. Conclusion: According to the area under the ROC curve (65.1%), sphenoid sinus volume is not an appropriate factor for differentiation of male and female from each other, which means the predictability of the cut-off point (9.35 cm3) is 65.1% close to reality.
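The volume equation in this abstract is a simple slice-wise Riemann sum; a minimal sketch (the helper name and slice areas below are illustrative, not data from the study):

```python
# Volume from serial CT slices: v = sum(area_of_slice * slice_thickness).
# The slice areas are hypothetical values in cm^2; the 0.3 cm thickness
# matches the 3-mm slices described in the abstract.

def sinus_volume(slice_areas_cm2, thickness_cm):
    """Approximate volume (cm^3) as a sum of slab volumes."""
    return sum(area * thickness_cm for area in slice_areas_cm2)

areas = [1.2, 2.8, 4.1, 4.6, 3.9, 2.2, 0.8]  # hypothetical segmented areas per slice
volume = sinus_volume(areas, 0.3)
print(round(volume, 2))  # 0.3 * 19.6 = 5.88 cm^3
```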
Directory of Open Access Journals (Sweden)
M. Amate
2007-01-01
Full Text Available An original algorithm for the detection of small objects in a noisy background is proposed. Its application to underwater object detection by sonar imaging is addressed. This new method is based on the use of higher-order statistics (HOS) that are locally estimated on the images. The proposed algorithm is divided into two steps. In a first step, HOS (skewness and kurtosis) are estimated locally using a square sliding computation window. Small deterministic objects have different statistical properties from the background; they are thus highlighted. The influence of the signal-to-noise ratio (SNR) on the results is studied in the case of Gaussian noise. Mathematical expressions of the estimators and of the expected performances are derived and are experimentally confirmed. In a second step, the results are focused by a matched filter using a theoretical model. This enables the precise localization of the regions of interest. The proposed method generalizes to other statistical distributions and we derive the theoretical expressions of the HOS estimators in the case of a Weibull distribution (both when only noise is present or when a small deterministic object is present within the filtering window). This enables the application of the proposed technique to the processing of synthetic aperture sonar data containing underwater mines whose echoes have to be detected and located. Results on real data sets are presented and quantitatively evaluated using receiver operating characteristic (ROC) curves.
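The first step of the scheme described above, local estimation of skewness and kurtosis in a sliding window, can be sketched as follows (a 1-D toy version; the function names and test signal are illustrative, not from the indexed work):

```python
import math

def moments(window):
    """Sample skewness and (non-excess) kurtosis of a 1-D window."""
    n = len(window)
    m = sum(window) / n
    var = sum((x - m) ** 2 for x in window) / n
    if var == 0:
        return 0.0, 0.0
    s = math.sqrt(var)
    skew = sum((x - m) ** 3 for x in window) / n / s ** 3
    kurt = sum((x - m) ** 4 for x in window) / n / s ** 4
    return skew, kurt

def sliding_hos(signal, width):
    """Step one of the detection scheme: estimate HOS locally."""
    return [moments(signal[i:i + width])
            for i in range(len(signal) - width + 1)]

# A flat background with one small deterministic 'object': kurtosis peaks there.
signal = [0.0] * 20 + [5.0] + [0.0] * 20
kurts = [k for _, k in sliding_hos(signal, 9)]
print(max(kurts) > 3.0)  # windows containing the object are strongly non-Gaussian
```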
Approximate and renormgroup symmetries
Energy Technology Data Exchange (ETDEWEB)
Ibragimov, Nail H. [Blekinge Institute of Technology, Karlskrona (Sweden). Dept. of Mathematics Science; Kovalev, Vladimir F. [Russian Academy of Sciences, Moscow (Russian Federation). Inst. of Mathematical Modeling
2009-07-01
"Approximate and Renormgroup Symmetries" deals with approximate transformation groups, symmetries of integro-differential equations and renormgroup symmetries. It includes a concise and self-contained introduction to basic concepts and methods of Lie group analysis, and provides an easy-to-follow introduction to the theory of approximate transformation groups and symmetries of integro-differential equations. The book is designed for specialists in nonlinear physics - mathematicians and non-mathematicians - interested in methods of applied group analysis for investigating nonlinear problems in physical science and engineering. (orig.)
Approximate and renormgroup symmetries
International Nuclear Information System (INIS)
Ibragimov, Nail H.; Kovalev, Vladimir F.
2009-01-01
"Approximate and Renormgroup Symmetries" deals with approximate transformation groups, symmetries of integro-differential equations and renormgroup symmetries. It includes a concise and self-contained introduction to basic concepts and methods of Lie group analysis, and provides an easy-to-follow introduction to the theory of approximate transformation groups and symmetries of integro-differential equations. The book is designed for specialists in nonlinear physics - mathematicians and non-mathematicians - interested in methods of applied group analysis for investigating nonlinear problems in physical science and engineering. (orig.)
Approximations of Fuzzy Systems
Directory of Open Access Journals (Sweden)
Vinai K. Singh
2013-03-01
Full Text Available A fuzzy system can uniformly approximate any real continuous function on a compact domain to any degree of accuracy. Such results can be viewed as an existence of optimal fuzzy systems. Li-Xin Wang discussed a similar problem using Gaussian membership functions and the Stone-Weierstrass Theorem. He established that fuzzy systems, with product inference, centroid defuzzification and Gaussian functions, are capable of approximating any real continuous function on a compact set to arbitrary accuracy. In this paper we study a similar approximation problem by using exponential membership functions.
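A one-dimensional sketch of the fuzzy systems discussed above: with Gaussian membership functions, product inference and centroid defuzzification, the system reduces to a normalized sum of Gaussians (the centers, width, and target function here are illustrative choices, not Wang's original construction):

```python
import math

def fuzzy_system(x, centers, sigma, heights):
    """Wang-type fuzzy system in one dimension: Gaussian membership,
    product inference and centroid defuzzification collapse to a
    normalized weighted sum of Gaussian firing strengths."""
    weights = [math.exp(-((x - c) ** 2) / (2 * sigma ** 2)) for c in centers]
    total = sum(weights)
    return sum(w * y for w, y in zip(weights, heights)) / total

# Approximate sin(x) on [0, pi]; rule centers extend past the interval
# to avoid boundary bias.
centers = [-0.5 + 0.1 * i for i in range(int((math.pi + 1.0) / 0.1) + 1)]
heights = [math.sin(c) for c in centers]

max_err = max(abs(fuzzy_system(0.01 * k, centers, 0.15, heights) - math.sin(0.01 * k))
              for k in range(int(math.pi / 0.01)))
print(max_err < 0.05)  # uniform error shrinks as the rule base is refined
```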
Geometric approximation algorithms
Har-Peled, Sariel
2011-01-01
Exact algorithms for dealing with geometric objects are complicated, hard to implement in practice, and slow. Over the last 20 years a theory of geometric approximation algorithms has emerged. These algorithms tend to be simple, fast, and more robust than their exact counterparts. This book is the first to cover geometric approximation algorithms in detail. In addition, more traditional computational geometry techniques that are widely used in developing such algorithms, like sampling, linear programming, etc., are also surveyed. Other topics covered include approximate nearest-neighbor search, shape approximation, coresets, dimension reduction, and embeddings. The topics covered are relatively independent and are supplemented by exercises. Close to 200 color figures are included in the text to illustrate proofs and ideas.
International Nuclear Information System (INIS)
Knobloch, A.F.
1980-01-01
A simplified cost approximation for INTOR parameter sets in a narrow parameter range is shown. Plausible constraints permit the evaluation of the consequences of parameter variations on overall cost. (orig.) [de
Gautschi, Walter; Rassias, Themistocles M
2011-01-01
Approximation theory and numerical analysis are central to the creation of accurate computer simulations and mathematical models. Research in these areas can influence the computational techniques used in a variety of mathematical and computational sciences. This collection of contributed chapters, dedicated to renowned mathematician Gradimir V. Milovanović, represents the recent work of experts in the fields of approximation theory and numerical analysis. These invited contributions describe new trends in these important areas of research including theoretic developments, new computational alg
Approximate kernel competitive learning.
Wu, Jian-Sheng; Zheng, Wei-Shi; Lai, Jian-Huang
2015-03-01
Kernel competitive learning has been successfully used to achieve robust clustering. However, kernel competitive learning (KCL) is not scalable for large scale data processing, because (1) it has to calculate and store the full kernel matrix, which is too large to be calculated and kept in memory, and (2) it cannot be computed in parallel. In this paper we develop a framework of approximate kernel competitive learning for processing large scale datasets. The proposed framework consists of two parts. First, it derives an approximate kernel competitive learning (AKCL), which learns kernel competitive learning in a subspace via sampling. We provide solid theoretical analysis on why the proposed approximation modelling would work for kernel competitive learning, and furthermore, we show that the computational complexity of AKCL is largely reduced. Second, we propose a pseudo-parallelled approximate kernel competitive learning (PAKCL) based on a set-based kernel competitive learning strategy, which overcomes the obstacle of using parallel programming in kernel competitive learning and significantly accelerates the approximate kernel competitive learning for large scale clustering. The empirical evaluation on publicly available datasets shows that the proposed AKCL and PAKCL can perform comparably to KCL, with a large reduction in computational cost. Also, the proposed methods achieve more effective clustering performance in terms of clustering precision against related approximate clustering approaches. Copyright © 2014 Elsevier Ltd. All rights reserved.
On Covering Approximation Subspaces
Directory of Open Access Journals (Sweden)
Xun Ge
2009-06-01
Full Text Available Let (U';C') be a subspace of a covering approximation space (U;C) and X⊂U'. In this paper, we show that B'(X)⊂B(X)∩U'; equality holds iff (U;C) has Property Multiplication. Furthermore, some connections between outer (resp. inner) definable subsets in (U;C) and outer (resp. inner) definable subsets in (U';C') are established. These results answer a question on covering approximation subspaces posed by J. Li, and are helpful for obtaining further applications of Pawlak rough set theory in pattern recognition and artificial intelligence.
On Convex Quadratic Approximation
den Hertog, D.; de Klerk, E.; Roos, J.
2000-01-01
In this paper we prove the counterintuitive result that the quadratic least squares approximation of a multivariate convex function in a finite set of points is not necessarily convex, even though it is convex for a univariate convex function. This result has many consequences both for the field of
DEFF Research Database (Denmark)
Madsen, Rasmus Elsborg
2005-01-01
The Dirichlet compound multinomial (DCM), which has recently been shown to be well suited for modeling for word burstiness in documents, is here investigated. A number of conceptual explanations that account for these recent results, are provided. An exponential family approximation of the DCM...
CSIR Research Space (South Africa)
Henriques, R
2010-08-31
Full Text Available -to-use reconstruction software coupled with image acquisition. Here, we present QuickPALM, an ImageJ plugin, enabling real-time reconstruction of 3D super-resolution images during acquisition and drift correction. We illustrate its application by reconstructing Cy5...
Approximate convex hull of affine iterated function system attractors
International Nuclear Information System (INIS)
Mishkinis, Anton; Gentil, Christian; Lanquetin, Sandrine; Sokolov, Dmitry
2012-01-01
Highlights: ► We present an iterative algorithm to approximate affine IFS attractor convex hull. ► Elimination of the interior points significantly reduces the complexity. ► To optimize calculations, we merge the convex hull images at each iteration. ► Approximation by ellipses increases speed of convergence to the exact convex hull. ► We present a method of the output convex hull simplification. - Abstract: In this paper, we present an algorithm to construct an approximate convex hull of the attractors of an affine iterated function system (IFS). We construct a sequence of convex hull approximations for any required precision using the self-similarity property of the attractor in order to optimize calculations. Due to the affine properties of IFS transformations, the number of points considered in the construction is reduced. The time complexity of our algorithm is a linear function of the number of iterations and the number of points in the output approximate convex hull. The number of iterations and the execution time increases logarithmically with increasing accuracy. In addition, we introduce a method to simplify the approximate convex hull without loss of accuracy.
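The iteration described in this abstract, taking the convex hull of the union of the affine images at each step, can be sketched for the Sierpinski-triangle IFS (a simplified version without the paper's ellipse-based acceleration or simplification step; all names are illustrative):

```python
def cross(o, a, b):
    """2-D cross product of vectors OA and OB."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in CCW order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def area(hull):
    """Shoelace area of a CCW polygon."""
    n = len(hull)
    return 0.5 * sum(hull[i][0] * hull[(i + 1) % n][1]
                     - hull[(i + 1) % n][0] * hull[i][1] for i in range(n))

# Sierpinski-triangle IFS: three contractions w_i(p) = (p + v_i) / 2.
verts = [(0.0, 0.0), (1.0, 0.0), (0.5, 1.0)]
maps = [lambda p, v=v: ((p[0] + v[0]) / 2, (p[1] + v[1]) / 2) for v in verts]

hull = [(0.3, 0.3)]                 # arbitrary seed point
for _ in range(25):                 # H_{n+1} = hull( union_i w_i(H_n) )
    hull = convex_hull([w(p) for p in hull for w in maps])

print(round(area(hull), 3))  # converges to 0.5, the area of the exact hull
```

Keeping only hull vertices between iterations is what makes the scheme cheap: interior points are eliminated before the maps are applied again, as the highlights note.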
Prestack traveltime approximations
Alkhalifah, Tariq Ali
2011-01-01
Most prestack traveltime relations we tend to work with are based on homogeneous (or semi-homogeneous, possibly effective) media approximations. This includes the multi-focusing or double square-root (DSR) and the common reflection stack (CRS) equations. Using the DSR equation, I analyze the associated eikonal form in the general source-receiver domain. Like its wave-equation counterpart, it suffers from a critical singularity for horizontally traveling waves. As a result, I derive expansion based solutions of this eikonal based on polynomial expansions in terms of the reflection and dip angles in a generally inhomogeneous background medium. These approximate solutions are free of singularities and can be used to estimate traveltimes for small to moderate offsets (or reflection angles) in a generally inhomogeneous medium. A Marmousi example demonstrates the usefulness of the approach. © 2011 Society of Exploration Geophysicists.
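As a concrete special case of the double-square-root relation mentioned above, in a constant-velocity medium the prestack traveltime from source s to receiver r via an image point (x, z) takes the textbook form (not the paper's general inhomogeneous version):

```latex
t(s, r) = \frac{1}{v}\left(\sqrt{z^2 + (x - s)^2} + \sqrt{z^2 + (r - x)^2}\right),
```

with one square root per leg of the ray path; the singularity for horizontally traveling waves discussed in the abstract appears in the eikonal form derived from this relation.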
Topology, calculus and approximation
Komornik, Vilmos
2017-01-01
Presenting basic results of topology, calculus of several variables, and approximation theory which are rarely treated in a single volume, this textbook includes several beautiful, but almost forgotten, classical theorems of Descartes, Erdős, Fejér, Stieltjes, and Turán. The exposition style of Topology, Calculus and Approximation follows the Hungarian mathematical tradition of Paul Erdős and others. In the first part, the classical results of Alexandroff, Cantor, Hausdorff, Helly, Peano, Radon, Tietze and Urysohn illustrate the theories of metric, topological and normed spaces. Following this, the general framework of normed spaces and Carathéodory's definition of the derivative are shown to simplify the statement and proof of various theorems in calculus and ordinary differential equations. The third and final part is devoted to interpolation, orthogonal polynomials, numerical integration, asymptotic expansions and the numerical solution of algebraic and differential equations. Students of both pure an...
Approximate Bayesian recursive estimation
Czech Academy of Sciences Publication Activity Database
Kárný, Miroslav
2014-01-01
Roč. 285, č. 1 (2014), s. 100-111 ISSN 0020-0255 R&D Projects: GA ČR GA13-13502S Institutional support: RVO:67985556 Keywords : Approximate parameter estimation * Bayesian recursive estimation * Kullback–Leibler divergence * Forgetting Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 4.038, year: 2014 http://library.utia.cas.cz/separaty/2014/AS/karny-0425539.pdf
Approximating Preemptive Stochastic Scheduling
Megow Nicole; Vredeveld Tjark
2009-01-01
We present constant approximative policies for preemptive stochastic scheduling. We derive policies with a guaranteed performance ratio of 2 for scheduling jobs with release dates on identical parallel machines subject to minimizing the sum of weighted completion times. Our policies as well as their analysis apply also to the recently introduced more general model of stochastic online scheduling. The performance guarantee we give matches the best result known for the corresponding determinist...
Optimization and approximation
Pedregal, Pablo
2017-01-01
This book provides a basic, initial resource, introducing science and engineering students to the field of optimization. It covers three main areas: mathematical programming, calculus of variations and optimal control, highlighting the ideas and concepts and offering insights into the importance of optimality conditions in each area. It also systematically presents affordable approximation methods. Exercises at various levels have been included to support the learning process.
Uniform analytic approximation of Wigner rotation matrices
Hoffmann, Scott E.
2018-02-01
We derive the leading asymptotic approximation, for low angle θ, of the Wigner rotation matrix elements, d^j_{m1,m2}(θ), uniform in j, m1, and m2. The result is in terms of a Bessel function of integer order. We numerically investigate the error for a variety of cases and find that the approximation can be useful over a significant range of angles. This approximation has application in the partial wave analysis of wavepacket scattering.
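The Bessel-function form referred to above is, schematically, of the type (the precise argument and the uniformity corrections are the paper's contribution; the expression below is the standard small-angle asymptotic and is stated here as an assumption):

```latex
d^{\,j}_{m_1 m_2}(\theta) \;\approx\; J_{m_1 - m_2}\!\left(\left(j + \tfrac{1}{2}\right)\theta\right),
```

with the Bessel order given by the integer difference of the magnetic quantum numbers.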
Simultaneous approximation in scales of Banach spaces
International Nuclear Information System (INIS)
Bramble, J.H.; Scott, R.
1978-01-01
The problem of verifying optimal approximation simultaneously in different norms in a Banach scale is reduced to verification of optimal approximation in the highest order norm. The basic tool used is the Banach space interpolation method developed by Lions and Peetre. Applications are given to several problems arising in the theory of finite element methods
Energy Technology Data Exchange (ETDEWEB)
Li, Xin; Miller, Eric L.; Rappaport, Carey; Silevich, Michael
2000-04-11
A common problem in signal processing is to estimate the structure of an object from noisy measurements linearly related to the desired image. These problems are broadly known as inverse problems. A key feature which complicates the solution to such problems is their ill-posedness. That is, small perturbations in the data arising e.g. from noise can and do lead to severe, non-physical artifacts in the recovered image. The process of stabilizing these problems is known as regularization, of which Tikhonov regularization is one of the most common. While this approach leads to a simple linear least squares problem to solve for generating the reconstruction, it has the unfortunate side effect of producing smooth images, thereby obscuring important features such as edges. Therefore, over the past decade there has been much work in the development of edge-preserving regularizers. This technique leads to image estimates in which the important features are retained, but computationally they require the solution of a nonlinear least squares problem, a daunting task in many practical multi-dimensional applications. In this thesis we explore low-order models for reducing the complexity of the reconstruction process. Specifically, B-Splines are used to approximate the object. If a 'proper' collection of B-Splines is chosen, such that the object can be efficiently represented using a few basis functions, the dimensionality of the underlying problem will be significantly decreased. Consequently, an optimum distribution of splines needs to be determined. Here, an adaptive refining and pruning algorithm is developed to solve the problem. The refining part is based on curvature information, in which the intuition is that a relatively dense set of fine scale basis elements should cluster near regions of high curvature while a sparse collection of basis vectors is required to adequately represent the object over spatially smooth areas. The pruning part is a greedy
Nonlinear Ritz approximation for Fredholm functionals
Directory of Open Access Journals (Sweden)
Mudhir A. Abdul Hussain
2015-11-01
Full Text Available In this article we use the modified Lyapunov-Schmidt reduction to find a nonlinear Ritz approximation for a Fredholm functional. This functional corresponds to a nonlinear Fredholm operator defined by a nonlinear fourth-order differential equation.
Polynomial approximation of functions in Sobolev spaces
International Nuclear Information System (INIS)
Dupont, T.; Scott, R.
1980-01-01
Constructive proofs and several generalizations of approximation results of J. H. Bramble and S. R. Hilbert are presented. Using an averaged Taylor series, we represent a function as a polynomial plus a remainder. The remainder can be manipulated in many ways to give different types of bounds. Approximation of functions in fractional order Sobolev spaces is treated as well as the usual integer order spaces and several nonstandard Sobolev-like spaces.
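A representative bound of the Bramble-Hilbert type underlying this record is (a standard textbook statement, not quoted from the indexed work):

```latex
\inf_{p \in \mathcal{P}_{k-1}} \lVert u - p \rVert_{H^m(\Omega)} \;\le\; C\, h^{k-m}\, \lvert u \rvert_{H^k(\Omega)}, \qquad 0 \le m \le k,
```

where h is the diameter of the domain Ω and C is independent of u; the averaged-Taylor-series construction makes such estimates constructive.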
Cyclic approximation to stasis
Directory of Open Access Journals (Sweden)
Stewart D. Johnson
2009-06-01
Full Text Available Neighborhoods of points in $\mathbb{R}^n$ where a positive linear combination of $C^1$ vector fields sums to zero contain, generically, cyclic trajectories that switch between the vector fields. Such points are called stasis points, and the approximating switching cycle can be chosen so that the timing of the switches exactly matches the positive linear weighting. In the case of two vector fields, the stasis points form one-dimensional $C^1$ manifolds containing nearby families of two-cycles. The generic case of two flows in $\mathbb{R}^3$ can be diffeomorphed to a standard form with cubic curves as trajectories.
The relaxation time approximation
International Nuclear Information System (INIS)
Gairola, R.P.; Indu, B.D.
1991-01-01
A plausible approximation has been made to estimate the relaxation time from a knowledge of the transition probability of phonons from one state (r vector, q vector) to another state (r' vector, q' vector) as a result of collision. The relaxation time, thus obtained, shows a strong dependence on temperature and a weak dependence on the wave vector. In view of this dependence, the relaxation time has been expressed in terms of a temperature Taylor's series in the first Brillouin zone. Consequently, a simple model for estimating the thermal conductivity is suggested. The calculations become much easier than in the Callaway model. (author). 14 refs
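The relaxation-time approximation named in this record replaces the full collision term of the Boltzmann-type transport equation by a single decay rate (standard form; the temperature and wave-vector dependence discussed above enters through τ):

```latex
\left(\frac{\partial f}{\partial t}\right)_{\mathrm{coll}} = -\,\frac{f - f_0}{\tau},
```

where f_0 is the equilibrium distribution toward which the phonon population relaxes.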
Polynomial approximation on polytopes
Totik, Vilmos
2014-01-01
Polynomial approximation on convex polytopes in \mathbf{R}^d is considered in uniform and L^p-norms. For an appropriate modulus of smoothness matching direct and converse estimates are proven. In the L^p-case so called strong direct and converse results are also verified. The equivalence of the moduli of smoothness with an appropriate K-functional follows as a consequence. The results solve a problem that was left open since the mid 1980s when some of the present findings were established for special, so-called simple polytopes.
Finite elements and approximation
Zienkiewicz, O C
2006-01-01
A powerful tool for the approximate solution of differential equations, the finite element is extensively used in industry and research. This book offers students of engineering and physics a comprehensive view of the principles involved, with numerous illustrative examples and exercises.Starting with continuum boundary value problems and the need for numerical discretization, the text examines finite difference methods, weighted residual methods in the context of continuous trial functions, and piecewise defined trial functions and the finite element method. Additional topics include higher o
Approximate Bayesian computation.
Directory of Open Access Journals (Sweden)
Mikael Sunnåker
Full Text Available Approximate Bayesian computation (ABC) constitutes a class of computational methods rooted in Bayesian statistics. In all model-based statistical inference, the likelihood function is of central importance, since it expresses the probability of the observed data under a particular statistical model, and thus quantifies the support data lend to particular values of parameters and to choices among different models. For simple models, an analytical formula for the likelihood function can typically be derived. However, for more complex models, an analytical formula might be elusive or the likelihood function might be computationally very costly to evaluate. ABC methods bypass the evaluation of the likelihood function. In this way, ABC methods widen the realm of models for which statistical inference can be considered. ABC methods are mathematically well-founded, but they inevitably make assumptions and approximations whose impact needs to be carefully assessed. Furthermore, the wider application domain of ABC exacerbates the challenges of parameter estimation and model selection. ABC has rapidly gained popularity over the last years and in particular for the analysis of complex problems arising in biological sciences (e.g., in population genetics, ecology, epidemiology, and systems biology).
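The simplest ABC variant, rejection ABC, can be sketched in a few lines: propose parameters from the prior, simulate data, and keep proposals whose summary statistic lands within a tolerance of the observed one (a toy Gaussian model with the sample mean as summary statistic; all parameter choices here are illustrative):

```python
import random
import statistics

random.seed(0)

def simulate(theta, n=50):
    """Toy forward model: n draws from N(theta, 1)."""
    return [random.gauss(theta, 1.0) for _ in range(n)]

# 'Observed' data generated from a ground truth of theta = 2.0 (synthetic).
observed = simulate(2.0, n=100)
s_obs = statistics.fmean(observed)

# Rejection ABC: no likelihood evaluation, only forward simulation.
accepted = []
for _ in range(20000):
    theta = random.uniform(-5.0, 5.0)          # draw from the prior
    s_sim = statistics.fmean(simulate(theta))  # summary statistic
    if abs(s_sim - s_obs) < 0.2:               # tolerance epsilon
        accepted.append(theta)

posterior_mean = statistics.fmean(accepted)
print(abs(posterior_mean - 2.0) < 0.5)  # accepted thetas approximate the posterior
```

Shrinking epsilon trades acceptance rate for approximation quality, which is exactly the assumption-versus-cost tension the abstract warns must be assessed.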
Asmar, Melissa; Wachtel, Heather; Yan, Yan; Fraker, Douglas L; Cohen, Debbie; Trerotola, Scott O
2015-08-01
Adrenal venous sampling (AVS) is the definitive evaluation for primary aldosteronism (PA). Pre-AVS cross-sectional imaging does not reduce the need for AVS. The goal of this study was to examine whether performing AVS prior to imaging could decrease the use of imaging in the evaluation of PA at a high volume, experienced center. We performed a retrospective analysis of all AVS procedures (n = 337) done for PA from 2001-2013. Patients whose cross-sectional imaging reports were unavailable (n = 90) or AVS was non-diagnostic (n = 12) were excluded. AVS was performed using the modified Mayo technique. Univariate analysis utilized the χ² test and Fisher's exact test. Of the 235 patients analyzed, 63% (n = 148) were male. The mean age was 55 ± 11 years. AVS was non-lateralizing in 43% (n = 101); these patients might have avoided imaging with an AVS-first approach. Imaging and AVS were concordant in 52% (n = 123). In patients ≤ 40 yo (n = 23), 35% (n = 8) had no lateralization on AVS, and might have avoided imaging in an AVS-first approach. Imaging and AVS were concordant in 52% (n = 12) of patients ≤ 40 yo, versus 52% (n = 111) of patients > 40 yo (P = 0.987). An AVS-first, imaging-second approach could have avoided CT/MRI in 43% of patients. At a high volume, experienced center, performing AVS first on patients with PA may reduce unnecessary cross-sectional imaging studies. © 2015 Wiley Periodicals, Inc.
The ordering principle in a fragment of approximate counting
Czech Academy of Sciences Publication Activity Database
Atserias, A.; Thapen, Neil
2014-01-01
Roč. 15, č. 4 (2014), s. 29 ISSN 1529-3785 R&D Projects: GA AV ČR IAA100190902; GA ČR GBP202/12/G061 Institutional support: RVO:67985840 Keywords : computational complexity * bounded arithmetic * propositional proof complexity Subject RIV: BA - General Mathematics Impact factor: 0.618, year: 2014 http://dl.acm.org/citation.cfm?doid=2656934.2629555
The random phase approximation
International Nuclear Information System (INIS)
Schuck, P.
1985-01-01
RPA is the adequate theory to describe vibrations of the nucleus of very small amplitude. These vibrations can either be forced by an external electromagnetic field or can be eigenmodes of the nucleus. In a one-dimensional analogue the potential corresponding to such eigenmodes of very small amplitude should be rather stiff, otherwise the motion risks becoming a large-amplitude one and entering a region where the approximation is not valid. This means that nuclei which are supposedly well described by RPA must have a very stable groundstate configuration (must e.g. be very stiff against deformation). This is usually the case for doubly magic nuclei or nuclei close to magic ones; nuclei in the middle of proton and neutron shells develop a very stable groundstate deformation. We take the deformation as an example, but there are many other possible degrees of freedom, such as compression modes, isovector degrees of freedom, spin degrees of freedom, and many more.
The quasilocalized charge approximation
International Nuclear Information System (INIS)
Kalman, G J; Golden, K I; Donko, Z; Hartmann, P
2005-01-01
The quasilocalized charge approximation (QLCA) has been used for some time as a formalism for the calculation of the dielectric response and for determining the collective mode dispersion in strongly coupled Coulomb and Yukawa liquids. The approach is based on a microscopic model in which the charges are quasilocalized on a short-time scale in local potential fluctuations. We review the conceptual basis and theoretical structure of the QLC approach, together with recent results from molecular dynamics simulations that corroborate and quantify the theoretical concepts. We also summarize the major applications of the QLCA to various physical systems, combined with the corresponding results of the molecular dynamics simulations, and point out the general agreement and instances of disagreement between the two.
Aron, Miles; Browning, Richard; Carugo, Dario; Sezgin, Erdinc; Bernardino de la Serna, Jorge; Eggeling, Christian; Stride, Eleanor
2017-05-12
Spectral imaging with polarity-sensitive fluorescent probes enables the quantification of cell and model membrane physical properties, including local hydration, fluidity, and lateral lipid packing, usually characterized by the generalized polarization (GP) parameter. With the development of commercial microscopes equipped with spectral detectors, spectral imaging has become a convenient and powerful technique for measuring GP and other membrane properties. The existing tools for spectral image processing, however, are insufficient for processing the large data sets afforded by this technological advancement, and are unsuitable for processing images acquired with rapidly internalized fluorescent probes. Here we present a MATLAB spectral imaging toolbox with the aim of overcoming these limitations. In addition to common operations, such as the calculation of distributions of GP values, generation of pseudo-colored GP maps, and spectral analysis, a key highlight of this tool is reliable membrane segmentation for probes that are rapidly internalized. Furthermore, handling for hyperstacks, 3D reconstruction and batch processing facilitates analysis of data sets generated by time series, z-stack, and area scan microscope operations. Finally, the object size distribution is determined, which can provide insight into the mechanisms underlying changes in membrane properties and is desirable for e.g. studies involving model membranes and surfactant coated particles. Analysis is demonstrated for cell membranes, cell-derived vesicles, model membranes, and microbubbles with environmentally-sensitive probes Laurdan, carboxyl-modified Laurdan (C-Laurdan), Di-4-ANEPPDHQ, and Di-4-AN(F)EPPTEA (FE), for quantification of the local lateral density of lipids or lipid packing. The Spectral Imaging Toolbox is a powerful tool for the segmentation and processing of large spectral imaging datasets, with a reliable method for membrane segmentation and no programming ability required. The
Directory of Open Access Journals (Sweden)
Hongjuan Liu
2014-01-01
Full Text Available A new general and systematic coupling scheme is developed to achieve the modified projective synchronization (MPS) of different fractional-order systems under parameter mismatch via the Open-Plus-Closed-Loop (OPCL) control. Based on the stability theorem of linear fractional-order systems, some sufficient conditions for MPS are proposed. Two groups of numerical simulations, on an incommensurate fractional-order system and a commensurate fractional-order system, are presented to justify the theoretical analysis. Due to the unpredictability of the scale factors and the use of fractional-order systems, the chaotic data from the MPS is selected to encrypt a plain image to obtain higher security. Simulation results show that our method is efficient, with a large key space, high sensitivity to encryption keys, and resistance to differential attacks and statistical analysis.
Cosmological applications of Padé approximant
International Nuclear Information System (INIS)
Wei, Hao; Yan, Xiao-Peng; Zhou, Ya-Nan
2014-01-01
As is well known, in mathematics, any function can be approximated by a Padé approximant. The Padé approximant is the best approximation of a function by a rational function of given order. In fact, the Padé approximant often gives a better approximation of the function than truncating its Taylor series, and it may still work where the Taylor series does not converge. In the present work, we consider the Padé approximant in two applications. First, we obtain the analytical approximation of the luminosity distance for the flat XCDM model, and find that the relative error is fairly small. Second, we propose several parameterizations for the equation-of-state parameter (EoS) of dark energy based on the Padé approximant. They are well motivated from the mathematical and physical points of view. We confront these EoS parameterizations with the latest observational data, and find that they work well. In these exercises, we show that the Padé approximant can be a useful tool in cosmology, and it deserves further investigation.
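The claim that a Padé approximant often beats the truncated Taylor series built from the same coefficients can be checked directly; a sketch using SciPy's `pade` helper on exp(x) (the test function is our choice, not the paper's):

```python
import numpy as np
from scipy.interpolate import pade

# Taylor coefficients of exp(x): 1 + x + x^2/2! + x^3/3! + x^4/4!
an = [1.0, 1.0, 1.0 / 2, 1.0 / 6, 1.0 / 24]
p, q = pade(an, 2)               # [2/2] Padé approximant; returns two poly1d

x = 1.0
pade_val = p(x) / q(x)
taylor_val = sum(c * x**k for k, c in enumerate(an))
# The [2/2] Padé value is closer to exp(1) than the 4th-order Taylor sum.
print(pade_val, taylor_val, np.exp(x))
```

At x = 1, the Padé error is roughly 0.004 versus about 0.010 for the truncated series, from the same five coefficients.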
Parks, Connie L; Monson, Keith L
2018-01-01
This research examined how accurately 2D images (i.e., photographs) of 3D clay facial approximations were matched to corresponding photographs of the approximated individuals using an objective automated facial recognition system. Irrespective of search filter (i.e., blind, sex, or ancestry) or rank class (R1, R10, R25, and R50) employed, few operationally informative results were observed. In only a single instance of 48 potential match opportunities was a clay approximation matched to a corresponding life photograph within the top 50 images (R50) of a candidate list, even with relatively small gallery sizes created from the application of search filters (e.g., sex or ancestry search restrictions). Increasing the candidate lists to include the top 100 images (R100) resulted in only two additional instances of correct match. Although other untested variables (e.g., approximation method, 2D photographic process, and practitioner skill level) may have impacted the observed results, this study suggests that 2D images of manually generated clay approximations are not readily matched to life photos by automated facial recognition systems. Further investigation is necessary in order to identify the underlying cause(s), if any, of the poor recognition results observed in this study (e.g., potential inferior facial feature detection and extraction). Additional inquiry exploring prospective remedial measures (e.g., stronger feature differentiation) is also warranted, particularly given the prominent use of clay approximations in unidentified persons casework. Copyright © 2017. Published by Elsevier B.V.
Kumar, Rajesh; Srivastava, Subodh; Srivastava, Rajeev
2017-07-01
For cancer detection from microscopic biopsy images, the image segmentation step used for segmenting cells and nuclei plays an important role; the accuracy of the segmentation approach dominates the final results. Microscopic biopsy images also carry intrinsic Poisson noise, and when it is present the segmentation results may not be accurate. The objective is to propose an efficient fuzzy c-means based segmentation approach that also handles the noise present in the image during the segmentation process itself, i.e., noise removal and segmentation are combined in one step. To address these issues, this paper proposes a fourth-order partial differential equation (FPDE) based nonlinear filter adapted to Poisson noise, combined with fuzzy c-means segmentation. This approach effectively handles the segmentation problem of blocky artifacts while achieving a good tradeoff between Poisson noise removal and edge preservation in the microscopic biopsy images during the segmentation process for cancer detection from cells. The proposed approach is tested on a breast cancer microscopic biopsy data set with region of interest (ROI) segmented ground truth images. The data set contains 31 benign and 27 malignant images of size 896 × 768; the ROI-selected ground truth for all 58 images is also available. Finally, the results obtained from the proposed approach are compared with those of popular segmentation algorithms: fuzzy c-means, color k-means, texture based segmentation, and total variation fuzzy c-means. The experimental results show that the proposed approach provides better results in terms of various performance measures, such as Jaccard coefficient, dice index, Tanimoto coefficient, area under curve, accuracy, true positive rate, true negative rate, false positive rate, false negative rate, random index, global consistency error, and variance of information, as compared to the other approaches.
Ichikawa, Shintaro; Motosugi, Utaroh; Oishi, Naoki; Shimizu, Tatsuya; Wakayama, Tetsuya; Enomoto, Nobuyuki; Matsuda, Masanori; Onishi, Hiroshi
2018-04-01
The aim of this study was to evaluate the efficacy of multiphasic hepatic arterial phase (HAP) imaging using DISCO (differential subsampling with Cartesian ordering) in increasing the confidence of diagnosis of hepatocellular carcinoma (HCC). This retrospective study was approved by the institutional review board, and the requirement for informed patient consent was waived. Consecutive patients (from 2 study periods) with malignant liver nodules were examined by gadoxetic acid-enhanced magnetic resonance imaging using either multiphasic (6 phases; n = 135) or single (n = 230) HAP imaging, which revealed 519 liver nodules other than benign ones (HCC, 497; cholangiocarcinoma, 11; metastases, 10; and malignant lymphoma, 1). All nodules were scored in accordance with the Liver Imaging Reporting and Data System (LI-RADS v2014), with or without consideration of ring-like enhancement in multiphasic HAP images as a major feature. In the multiphasic HAP group, 178 of 191 HCCs were scored as LR-3 to LR-5 (3 [1.69%], 85 [47.8%], and 90 [50.6%], respectively). Upon considering ring-like enhancement in multiphasic HAP images as a major feature, 5 more HCCs were scored as LR-5 (95 [53.4%]), which was a significantly more confident diagnosis than that with single HAP images (295 of 306 HCCs scored as LR-3 to LR-5: 13 [4.41%], 147 [49.8%], and 135 [45.8%], respectively; P = 0.0296). There was no significant difference in false-positive or false-negative diagnoses between the multiphasic and single HAP groups (P = 0.8400 and 0.1043, respectively). Multiphasic HAP imaging can improve the confidence of diagnosis of HCCs in gadoxetic acid-enhanced magnetic resonance imaging.
Finite approximations in fluid mechanics
International Nuclear Information System (INIS)
Hirschel, E.H.
1986-01-01
This book contains twenty papers on work conducted between 1983 and 1985 in the Priority Research Program "Finite Approximations in Fluid Mechanics" of the German Research Society (Deutsche Forschungsgemeinschaft). Scientists from numerical mathematics, fluid mechanics, and aerodynamics present their research on boundary-element methods, factorization methods, higher-order panel methods, multigrid methods for elliptic and parabolic problems, two-step schemes for the Euler equations, etc. Applications are made to channel flows, gas-dynamical problems, large eddy simulation of turbulence, non-Newtonian flow, turbomachine flow, and zonal solutions for viscous flow problems. The contents include: multigrid methods for problems from fluid dynamics; development of a 2D transonic potential flow solver; a boundary element spectral method for nonstationary viscous flows in three dimensions; Navier-Stokes computations of two-dimensional laminar flows in a channel with a backward-facing step; calculations and experimental investigations of the laminar unsteady flow in a pipe expansion; calculation of the flow field caused by shock wave and deflagration interaction; a multi-level discretization and solution method for potential flow problems in three dimensions; solutions of the conservation equations with the approximate factorization method; inviscid and viscous flow through rotating meridional contours; and zonal solutions for viscous flow problems.
Plasma Physics Approximations in Ares
International Nuclear Information System (INIS)
Managan, R. A.
2015-01-01
Lee & More derived analytic forms for the transport properties of a plasma. Many hydro-codes use their formulae for electrical and thermal conductivity. The coefficients are complex functions of Fermi-Dirac integrals F_n(μ/θ), the chemical potential μ or ζ = ln(1 + e^(μ/θ)), and the temperature θ = kT. Since these formulae are expensive to compute, rational-function approximations were fit to them. Approximations are also used to find the chemical potential, either μ or ζ. The fits use ζ as the independent variable instead of μ/θ. New fits are provided for A_α(ζ), A_β(ζ), ζ, f(ζ) = (1 + e^(−μ/θ)) F_{1/2}(μ/θ), F_{1/2}′/F_{1/2}, F_α^c, and F_β^c. In each case the relative error of the fit is minimized, since the functions can vary by many orders of magnitude. The new fits are designed to exactly preserve the limiting values in the non-degenerate and highly degenerate limits, i.e., as ζ → 0 or ∞. The original fits due to Lee & More and George Zimmerman are presented for comparison.
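The Fermi-Dirac integral F_{1/2} and the variable ζ can be evaluated directly by quadrature; a sketch checking the non-degenerate limit that such fits are designed to preserve (this replaces the rational fits with numerical integration, purely for illustration):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import expit

def fermi_dirac_half(eta):
    # F_{1/2}(eta) = ∫_0^∞ √t / (1 + e^{t - eta}) dt, computed numerically;
    # expit(eta - t) = 1/(1 + e^{t - eta}) avoids overflow at large t.
    val, _ = quad(lambda t: np.sqrt(t) * expit(eta - t), 0, np.inf)
    return val

eta = -5.0                                     # strongly non-degenerate case
zeta = np.log1p(np.exp(eta))                   # ζ = ln(1 + e^{μ/θ}); ζ ≈ e^η here
nondeg = 0.5 * np.sqrt(np.pi) * np.exp(eta)    # limit F_{1/2}(η) → (√π/2) e^η
print(fermi_dirac_half(eta), nondeg)           # agree to well under 1%
```

In the opposite, highly degenerate limit, F_{1/2}(η) → (2/3) η^{3/2}, the other endpoint the fits must reproduce.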
Evaluation of variational approximations
International Nuclear Information System (INIS)
Trevisan, L.A.
1991-01-01
In Feynman's approach to quantum statistical mechanics, the partition function can be represented as a path integral. A recently proposed variational method of Feynman and Kleinert transforms the path integral into an integral in phase space, in which the quantum fluctuations are taken care of by introducing an effective classical potential. This method has been tested successfully on smooth potentials and on the singular delta potential. Here the method is applied to strongly singular potentials: a quadratic potential and a linear potential, both with a rigid wall at the origin. By imposing the condition that the particle density vanish at the origin, an adapted Feynman-Kleinert method is introduced in order to improve the approximation. (author)
Hierarchical matrix approximation of large covariance matrices
Litvinenko, Alexander; Genton, Marc G.; Sun, Ying
2015-01-01
We approximate large non-structured Matérn covariance matrices of size n×n in the H-matrix format with a log-linear computational cost and storage O(kn log n), where rank k ≪ n is a small integer. Applications are: spatial statistics, machine learning and image analysis, kriging and optimal design.
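A global truncated SVD is the simplest rank-k surrogate for a covariance matrix; H-matrices apply such truncations blockwise to reach the quoted O(kn log n) cost. A sketch with the Matérn ν = 1/2 (exponential) kernel (the smoothness parameter and length scale are our illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
pts = rng.uniform(size=(200, 2))                 # spatial locations in the unit square
d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
C = np.exp(-d / 0.5)                             # Matérn ν = 1/2 (exponential) covariance

# Global rank-k SVD truncation: the optimal rank-k approximation (Eckart-Young).
U, s, Vt = np.linalg.svd(C)
k = 15
Ck = (U[:, :k] * s[:k]) @ Vt[:k, :]
rel_err = np.linalg.norm(C - Ck) / np.linalg.norm(C)
print(rel_err)  # shrinks as k grows; small when the spectrum decays fast
```

The H-matrix format exploits the same low-rank structure, but only on well-separated interaction blocks, which is what brings the cost down from O(n^3) for the dense SVD.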
Magnus approximation in the adiabatic picture
International Nuclear Information System (INIS)
Klarsfeld, S.; Oteo, J.A.
1991-01-01
A simple approximate nonperturbative method is described for treating time-dependent problems that works well in the intermediate regime far from both the sudden and the adiabatic limits. The method consists of applying the Magnus expansion after transforming to the adiabatic basis defined by the eigenstates of the instantaneous Hamiltonian. A few exactly soluble examples are considered in order to assess the domain of validity of the approximation. (author) 32 refs., 4 figs
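The first-order Magnus approximation replaces the time-ordered exponential with the exponential of the time-integrated Hamiltonian; a minimal sketch for a toy two-level system (the Hamiltonian and time grid are illustrative, and the paper's transformation to the adiabatic basis is omitted):

```python
import numpy as np
from scipy.linalg import expm

# First-order Magnus propagator U ≈ exp(-i ∫ H(t) dt), with ħ = 1.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def H(t):
    # Toy time-dependent two-level Hamiltonian (illustrative parameters).
    return 0.5 * t * sz + 0.3 * sx

ts = np.linspace(0.0, 1.0, 201)
Hbar = sum(H(t) for t in ts) * (ts[1] - ts[0])   # Riemann sum ≈ ∫_0^1 H(t) dt
U = expm(-1j * Hbar)
print(np.allclose(U.conj().T @ U, np.eye(2)))    # True: the propagator is unitary
```

Unitarity at every truncation order is the key structural advantage of the Magnus expansion over naive Dyson-series truncations.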
International Nuclear Information System (INIS)
Lemaitre, P.; Porcheron, E.; Marchand, D.; Nuboer, A.; Bouilloux, L.; Vendel, J.
2007-01-01
The aim of this paper is to present the capability of out-of-focus imaging to measure droplet size in the presence of heat and mass exchanges. The technique is supported by optical simulations, first based on geometrical optics and then on the Lorenz-Mie theory. Finally, it is applied in the presence of heat and mass transfers in the TOSQAN experiment. (authors)
Kolstein, M.; De Lorenzo, G.; Chmeissani, M.
2014-04-01
The Voxel Imaging PET (VIP) Pathfinder project intends to show the advantages of using pixelated solid-state technology for nuclear medicine applications. It proposes designs for Positron Emission Tomography (PET), Positron Emission Mammography (PEM) and Compton gamma camera detectors with a large number of signal channels (of the order of 10^6). For the Compton camera, especially with a large number of readout channels, image reconstruction presents a big challenge. In this work, results are presented for the List-Mode Ordered Subset Expectation Maximization (LM-OSEM) image reconstruction algorithm on simulated data with the VIP Compton camera design. For the simulation, all realistic contributions to the spatial resolution are taken into account, including the Doppler broadening effect. The results show that even with a straightforward implementation of LM-OSEM, good images can be obtained for the proposed Compton camera design. Results are shown for various phantoms, including extended sources and with a distance between the field of view and the first detector plane equal to 100 mm, which corresponds to a realistic nuclear medicine environment.
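LM-OSEM builds on the multiplicative MLEM update; a toy dense-matrix MLEM sketch (list-mode event handling, subset ordering, and the VIP detector geometry are all omitted here):

```python
import numpy as np

def mlem(A, y, n_iter=500):
    """Multiplicative MLEM update: x *= A^T (y / Ax) / A^T 1.

    LM-OSEM applies the same update per list-mode event subset,
    which accelerates early convergence.
    """
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)                     # sensitivity: back-projection of ones
    for _ in range(n_iter):
        yhat = A @ x
        ratio = np.where(yhat > 0, y / yhat, 0.0)
        x *= (A.T @ ratio) / sens
    return x

rng = np.random.default_rng(1)
A = rng.uniform(size=(30, 10))               # toy nonnegative system matrix
x_true = rng.uniform(0.5, 2.0, size=10)
y = A @ x_true                               # noiseless projection data
x = mlem(A, y)
print(np.linalg.norm(A @ x - y) / np.linalg.norm(y))  # small residual
```

The update preserves nonnegativity automatically, which is why EM-family algorithms dominate emission tomography.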
Ahmad, Moiz; Bazalova, Magdalena; Xiang, Liangzhong; Xing, Lei
2014-05-01
The purpose of this study was to increase the sensitivity of XFCT imaging by optimizing the data acquisition geometry for reduced scatter X-rays. The placement of detectors and the detector energy window were chosen to minimize scatter X-rays. We performed both theoretical calculations and Monte Carlo simulations of this optimized detector configuration on a mouse-sized phantom containing various gold concentrations. The sensitivity limits were determined for three different X-ray spectra: a monoenergetic source, a Gaussian source, and a conventional X-ray tube source. Scatter X-rays were minimized using a backscatter detector orientation (scatter direction > 110° to the primary X-ray beam). The optimized configuration simultaneously reduced the number of detectors and improved the image signal-to-noise ratio. The sensitivity of the optimized configuration was 10 μg/mL (10 pM) at 2 mGy dose with the monoenergetic source, which is an order of magnitude improvement over the unoptimized configuration (10² pM without the optimization). Similar improvements were seen with the Gaussian spectrum source and conventional X-ray tube source. The optimization improvements were predicted in the theoretical model and also demonstrated in simulations. The sensitivity of XFCT imaging can be enhanced by an order of magnitude with the data acquisition optimization, greatly enhancing the potential of this modality for future use in clinical molecular imaging.
Hardness and Approximation for Network Flow Interdiction
Chestnut, Stephen R.; Zenklusen, Rico
2015-01-01
In the Network Flow Interdiction problem an adversary attacks a network in order to minimize the maximum s-t-flow. Very little is known about the approximability of this problem despite decades of interest in it. We present the first approximation hardness, showing that Network Flow Interdiction and several of its variants cannot be much easier to approximate than Densest k-Subgraph. In particular, any $n^{o(1)}$-approximation algorithm for Network Flow Interdiction would imply an $n^{o(1)}...
International Conference Approximation Theory XV
Schumaker, Larry
2017-01-01
These proceedings are based on papers presented at the international conference Approximation Theory XV, which was held May 22–25, 2016 in San Antonio, Texas. The conference was the fifteenth in a series of meetings in Approximation Theory held at various locations in the United States, and was attended by 146 participants. The book contains longer survey papers by some of the invited speakers covering topics such as compressive sensing, isogeometric analysis, and scaling limits of polynomials and entire functions of exponential type. The book also includes papers on a variety of current topics in Approximation Theory drawn from areas such as advances in kernel approximation with applications, approximation theory and algebraic geometry, multivariate splines for applications, practical function approximation, approximation of PDEs, wavelets and framelets with applications, approximation theory in signal processing, compressive sensing, rational interpolation, spline approximation in isogeometric analysis, a...
Seismic wave extrapolation using lowrank symbol approximation
Fomel, Sergey
2012-04-30
We consider the problem of constructing a wave extrapolation operator in a variable and possibly anisotropic medium. Our construction involves Fourier transforms in space combined with a lowrank approximation of the space-wavenumber wave-propagator matrix. A lowrank approximation implies selecting a small set of representative spatial locations and a small set of representative wavenumbers. We present a mathematical derivation of this method, a description of the lowrank approximation algorithm and numerical examples that confirm the validity of the proposed approach. Wave extrapolation using lowrank approximation can be applied to seismic imaging by reverse-time migration in 3D heterogeneous isotropic or anisotropic media. © 2012 European Association of Geoscientists & Engineers.
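In a constant-velocity medium the space-wavenumber propagator separates exactly, so a single FFT pass extrapolates the wavefield; this is the degenerate rank-1 case of the lowrank construction (all model numbers below are illustrative):

```python
import numpy as np

# One depth step of phase-shift wave extrapolation in a homogeneous medium.
# In variable media the phase depends jointly on position and wavenumber,
# which is what the lowrank approximation then factorizes.
n, dx, dz, v, freq = 128, 10.0, 10.0, 2000.0, 15.0
k = 2 * np.pi * np.fft.fftfreq(n, d=dx)          # horizontal wavenumber (rad/m)
w = 2 * np.pi * freq
kz = np.sqrt(np.maximum((w / v) ** 2 - k ** 2, 0.0))
phase = np.exp(1j * kz * dz)                     # unit-modulus phase shift

u = np.zeros(n, dtype=complex)
u[n // 2] = 1.0                                  # point-source wavefield slice
u_ext = np.fft.ifft(np.fft.fft(u) * phase)       # extrapolate one depth step
print(np.abs(u_ext).max())
```

Since |phase| = 1 everywhere, the step conserves the L2 norm of the wavefield exactly.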
Energy Technology Data Exchange (ETDEWEB)
Tanzi, F.; Novario, R. [Varese Ospedale di Circolo, Varese (Italy). Servizio di fisica sanitaria; Conte, L. [Varese Univ., Varese (Italy). Cattedra di biofisica e tecnologie biomediche; Tosetto, C.; Dimichele, R. [Milan Clinica Mangiagalli, Milan (Italy). Istituti clinici di perfezionamento; Goddi, A. [SME, Studio medico di diagnostica per immagini, Varese (Italy)
1999-05-01
The study reports the sonographic image texture of the neonatal heart in different stages of development by calculating numerical parameters extracted from the gray-scale co-occurrence matrix. To show differences in pixel values and enhance texture structure, the images were equalized and the gray-level range was then reduced to 16 levels to allow a sufficiently high occupancy frequency of the co-occurrence matrix. The differences are so slight that they may be due to different factors affecting image texture and to the variability introduced by manual ROI positioning; therefore no definitive conclusions can be drawn as to whether this kind of analysis can discriminate different stages of myocardial development. [Translated from the Italian abstract] The aim of this work is to characterize the texture of echocardiographic images of neonates in different phases of growth by means of parameter values obtained with the co-occurrence matrix method. To highlight the differences between pixels and thus enhance the texture, the images were processed by equalization and subsequently reduced to 16 gray levels in order to obtain a high occupancy frequency of the co-occurrence matrix. Although some comparisons are significant at the 95% level, no definitive conclusions can be drawn on the possibility of using this method to discriminate different phases of myocardial growth, since the texture differences are so slight that they can be ascribed to the normal texture variability introduced by the image acquisition method and by the positioning of the regions of interest.
Hierarchical low-rank approximation for high dimensional approximation
Nouy, Anthony
2016-01-01
Tensor methods are among the most prominent tools for the numerical solution of high-dimensional problems where functions of multiple variables have to be approximated. Such high-dimensional approximation problems naturally arise in stochastic analysis and uncertainty quantification. In many practical situations, the approximation of high-dimensional functions is made computationally tractable by using rank-structured approximations. In this talk, we present algorithms for the approximation in hierarchical tensor format using statistical methods. Sparse representations in a given tensor format are obtained with adaptive or convex relaxation methods, with a selection of parameters using crossvalidation methods.
Forms of Approximate Radiation Transport
Brunner, G
2002-01-01
Photon radiation transport is described by the Boltzmann equation. Because this equation is difficult to solve, many different approximate forms have been implemented in computer codes. Several of the most common approximations are reviewed, and test problems illustrate the characteristics of each of the approximations. This document is designed as a tutorial so that code users can make an educated choice about which form of approximate radiation transport to use for their particular simulation.
Approximation by planar elastic curves
DEFF Research Database (Denmark)
Brander, David; Gravesen, Jens; Nørbjerg, Toke Bjerge
2016-01-01
We give an algorithm for approximating a given plane curve segment by a planar elastic curve. The method depends on an analytic representation of the space of elastic curve segments, together with a geometric method for obtaining a good initial guess for the approximating curve. A gradient-driven optimization is then used to find the approximating elastic curve.
International Nuclear Information System (INIS)
Riffel, Philipp; Haneder, Stefan; Attenberger, Ulrike I.; Brade, Joachim; Schoenberg, Stefan O.; Michaely, Henrik J.
2012-01-01
Purpose: Different approaches exist for hybrid MRA of the calf station. So far, the order of acquisition of the focused calf MRA and the large field-of-view MRA has not been scientifically evaluated. Therefore, the aim of this study was to evaluate whether the quality of the combined large field-of-view MRA (CTM MR angiography) and time-resolved MRA with stochastic interleaved trajectories (TWIST MRA) depends on the order of acquisition of the two contrast-enhanced studies. Methods: In this retrospective study, 40 consecutive patients (mean age 68.1 ± 8.7 years, 29 male/11 female) who had undergone an MR angiographic protocol consisting of CTM-MRA (TR/TE, 2.4/1.0 ms; 21° flip angle; isotropic resolution 1.2 mm; gadolinium dose, 0.07 mmol/kg) and TWIST-MRA (TR/TE, 2.8/1.1 ms; 20° flip angle; isotropic resolution 1.1 mm; temporal resolution 5.5 s; gadolinium dose, 0.03 mmol/kg) were included. In the first group (group 1), TWIST-MRA of the calf station was performed 1–2 min after CTM-MRA. In the second group (group 2), CTM-MRA was performed 1–2 min after TWIST-MRA of the calf station. The image quality of CTM-MRA and TWIST-MRA was evaluated by two independent radiologists in consensus according to a 4-point Likert-like rating scale assessing overall image quality on a segmental basis. Venous overlay was assessed per examination. Results: In the CTM-MRA, 1360 segments were included in the assessment of image quality. CTM-MRA was diagnostic in 95% (1289/1360) of segments. There was a significant difference (p < 0.0001) between the two groups with regard to the number of segments rated as excellent and moderate. The image quality was rated as excellent in 80% of segments (514/640) in group 1 and 67% (432/649) in group 2 (p < 0.0001). In contrast, the image quality was rated as moderate in 5% (33/640) in group 1 and 19% (121/649) in group 2 (p < 0.0001). The venous overlay was disturbing in 10% in group 1 and 20% in group 2.
Image compression of bone images
International Nuclear Information System (INIS)
Hayrapetian, A.; Kangarloo, H.; Chan, K.K.; Ho, B.; Huang, H.K.
1989-01-01
This paper reports a receiver operating characteristic (ROC) experiment conducted to compare the diagnostic performance of a compressed bone image with the original. The compression was done on custom hardware that implements an algorithm based on the full-frame cosine transform. The compression ratio in this study is approximately 10:1, which was decided after a pilot experiment. The image set consisted of 45 hand images, including normal images and images containing osteomalacia and osteitis fibrosa. Each image was digitized with a laser film scanner to 2,048 × 2,048 × 8 bits. Six observers, all board-certified radiologists, participated in the experiment. For each ROC session, an independent ROC curve was constructed and the area under that curve calculated. The image set was randomized for each session, as was the order for viewing the original and reconstructed images. Analysis of variance was used to analyze the data and derive statistically significant results. The preliminary results indicate that the diagnostic quality of the reconstructed image is comparable to that of the original image.
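Full-frame cosine-transform compression can be sketched by keeping only the largest transform coefficients; thresholding by coefficient count is a simplification of the custom-hardware algorithm, and the smooth synthetic test image stands in for a digitized radiograph:

```python
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(2)
img = rng.normal(size=(64, 64)).cumsum(0).cumsum(1)   # smooth synthetic "image"

c = dctn(img, norm='ortho')                # full-frame 2D cosine transform
keep = int(c.size * 0.1)                   # retain ~10% of coefficients (~10:1)
thresh = np.sort(np.abs(c).ravel())[-keep]
c_comp = np.where(np.abs(c) >= thresh, c, 0.0)
rec = idctn(c_comp, norm='ortho')          # reconstruct from kept coefficients
rel_err = np.linalg.norm(img - rec) / np.linalg.norm(img)
print(rel_err)                             # small: energy concentrates at low frequencies
```

A deployed codec would additionally quantize and entropy-code the kept coefficients; the ROC study asks whether the resulting loss is diagnostically visible.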
Exact constants in approximation theory
Korneichuk, N
1991-01-01
This book is intended as a self-contained introduction for non-specialists, or as a reference work for experts, to the particular area of approximation theory that is concerned with exact constants. The results apply mainly to extremal problems in approximation theory, which in turn are closely related to numerical analysis and optimization. The book encompasses a wide range of questions and problems: best approximation by polynomials and splines; linear approximation methods, such as spline approximation; optimal reconstruction of functions and linear functionals. Many of the results are based
Gertz, Zachary M; O'Donnell, William; Raina, Amresh; Balderston, Jessica R; Litwack, Andrew J; Goldberg, Lee R
2016-10-15
The rising use of imaging cardiac stress tests has led to potentially unnecessary testing. Interventions designed to reduce inappropriate stress testing have focused on the ambulatory setting. We developed a computerized order entry tool intended to reduce the use of imaging cardiac stress tests and improve appropriate use in hospitalized patients. The tool was evaluated using preimplementation and postimplementation cohorts at a single urban academic teaching hospital. All hospitalized patients referred for testing were included. The co-primary outcomes were the use of imaging stress tests as a percentage of all stress tests and the percentage of inappropriate tests, compared between the 2 cohorts. There were 478 patients in the precohort and 463 in the postcohort. The indication was chest pain in 66% and preoperative in 18% and was not significantly different between groups. The use of nonimaging stress tests increased from 4% in the pregroup to 15% in the postgroup (p nonimaging stress tests increased from 7% to 25% (p nonimaging cardiac stress tests and reduced the use of imaging tests yet was not able to reduce inappropriate use. Our study highlights the differences in cardiac stress testing between hospitalized and ambulatory patients. Copyright © 2016 Elsevier Inc. All rights reserved.
Ultra-low-dose CT imaging of the thorax: decreasing the radiation dose by one order of magnitude
International Nuclear Information System (INIS)
Lambert, Lukas; Banerjee, Rohan; Votruba, Jiri; El-Lababidi, Nabil; Zeman, Jiri
2016-01-01
Computed tomography (CT) is an indispensable tool for imaging of the thorax and there is virtually no alternative without associated radiation burden. The authors demonstrate ultra-low-dose CT of the thorax in three interesting cases. In an 18-y-old girl with rheumatoid arthritis, CT of the thorax identified alveolitis in the posterior costophrenic angles (radiation dose = 0.2 mSv). Its resolution was demonstrated on a follow-up scan (4.2 mSv) performed elsewhere. In an 11-y-old girl, CT (0.1 mSv) showed changes of the right collar bone consistent with chronic recurrent multifocal osteomyelitis. CT (0.1 mSv) of a 9-y-old girl with mucopolysaccharidosis revealed altogether three hamartomas, peribronchial infiltrate, and spine deformity. In some indications, the radiation dose from CT of the thorax can approach that of several plain radiographs. This may help the pediatrician in deciding whether 'gentle' ultra-low-dose CT instead of observation or follow-up radiographs will alleviate the uncertainty of the diagnosis with little harm to the child. (author)
Huang, Hsuan-Ming; Hsiao, Ing-Tsung
2016-01-01
In recent years, there has been increased interest in low-dose X-ray cone beam computed tomography (CBCT) in many fields, including dentistry, guided radiotherapy and small animal imaging. Despite reducing the radiation dose, low-dose CBCT has not gained widespread acceptance in routine clinical practice. In addition to performing more evaluation studies, developing a fast and high-quality reconstruction algorithm is required. In this work, we propose an iterative reconstruction method that accelerates ordered-subsets (OS) reconstruction using a power factor. Furthermore, we combine it with the total-variation (TV) minimization method. Both simulation and phantom studies were conducted to evaluate the performance of the proposed method. Results show that the proposed method can accelerate conventional OS methods, greatly increasing the convergence speed in early iterations. Moreover, applying the TV minimization to the power acceleration scheme can further improve the image quality while preserving the fast convergence rate.
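TV minimization can be sketched as gradient descent on a smoothed TV energy; this is a generic sketch, not the paper's power-accelerated ordered-subsets scheme, and the weights `lam`, `step`, `eps` are illustrative:

```python
import numpy as np

def grad(x):
    # Forward differences with Neumann boundaries.
    gx = np.zeros_like(x); gx[:-1, :] = x[1:, :] - x[:-1, :]
    gy = np.zeros_like(x); gy[:, :-1] = x[:, 1:] - x[:, :-1]
    return gx, gy

def div(px, py):
    # Negative adjoint of grad above: <grad(x), p> = -<x, div(p)>.
    dx = np.zeros_like(px)
    dx[0, :] = px[0, :]; dx[1:-1, :] = px[1:-1, :] - px[:-2, :]; dx[-1, :] = -px[-2, :]
    dy = np.zeros_like(py)
    dy[:, 0] = py[:, 0]; dy[:, 1:-1] = py[:, 1:-1] - py[:, :-2]; dy[:, -1] = -py[:, -2]
    return dx + dy

def tv_denoise(y, lam=0.15, step=0.05, n_iter=200, eps=0.05):
    # Gradient descent on 0.5 * ||x - y||^2 + lam * sum sqrt(|grad x|^2 + eps).
    x = y.copy()
    for _ in range(n_iter):
        gx, gy = grad(x)
        mag = np.sqrt(gx ** 2 + gy ** 2 + eps)
        x -= step * ((x - y) - lam * div(gx / mag, gy / mag))
    return x

rng = np.random.default_rng(4)
clean = np.zeros((32, 32)); clean[8:24, 8:24] = 1.0   # piecewise-constant phantom
noisy = clean + 0.2 * rng.normal(size=clean.shape)
den = tv_denoise(noisy)
print(np.mean((den - clean) ** 2), np.mean((noisy - clean) ** 2))
```

In the paper's setting the data-fidelity term is the OS-accelerated reconstruction update rather than a simple least-squares term, but the TV regularizer plays the same edge-preserving role.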
Traveltime approximations for transversely isotropic media with an inhomogeneous background
Alkhalifah, Tariq
2011-05-01
A transversely isotropic (TI) model with a tilted symmetry axis is regarded as one of the most effective approximations to the Earth subsurface, especially for imaging purposes. However, we commonly utilize this model by setting the axis of symmetry normal to the reflector. This assumption may be accurate in many places, but deviations from this assumption will cause errors in the wavefield description. Using perturbation theory and Taylor's series, I expand the solutions of the eikonal equation for 2D TI media with respect to the independent parameter θ, the angle the tilt of the axis of symmetry makes with the vertical, in a generally inhomogeneous TI background with a vertical axis of symmetry. I do an additional expansion in terms of the independent (anellipticity) parameter in a generally inhomogeneous elliptically anisotropic background medium. These new TI traveltime solutions are given by expansions in and θ with coefficients extracted from solving linear first-order partial differential equations. Padé approximations are used to enhance the accuracy of the representation by predicting the behavior of the higher-order terms of the expansion. A simplification of the expansion for homogeneous media provides nonhyperbolic moveout descriptions of the traveltime for TI models that are more accurate than other recently derived approximations. In addition, for 3D media, I develop traveltime approximations using Taylor's series type of expansions in the azimuth of the axis of symmetry. The coefficients of all these expansions can also provide us with the medium sensitivity gradients (Jacobian) for nonlinear tomographic-based inversion for the tilt in the symmetry axis. © 2011 Society of Exploration Geophysicists.
Mean-field approximation minimizes relative entropy
International Nuclear Information System (INIS)
Bilbro, G.L.; Snyder, W.E.; Mann, R.C.
1991-01-01
The authors derive the mean-field approximation from the information-theoretic principle of minimum relative entropy instead of by minimizing Peierls's inequality for the Weiss free energy of statistical physics theory. They show that information theory leads to the statistical mechanics procedure. As an example, they consider a problem in binary image restoration. They find that mean-field annealing compares favorably with the stochastic approach.
Hierarchical matrix approximation of large covariance matrices
Litvinenko, Alexander; Genton, Marc G.; Sun, Ying; Tempone, Raul
2015-01-01
We approximate large non-structured covariance matrices in the H-matrix format with a log-linear computational cost and storage O(n log n). We compute the inverse, Cholesky decomposition and determinant in H-format. As an example we consider the class of Matérn covariance functions, which are very popular in spatial statistics, geostatistics, machine learning and image analysis. Applications are: kriging and optimal design.
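The compressibility that H-matrix methods exploit can be seen in a small numpy sketch (the 1-D point set, the Matérn ν = 3/2 kernel, and the rank below are invented for illustration): covariances between two well-separated point clusters form an off-diagonal block whose singular values decay rapidly, so a truncated SVD reproduces it almost exactly at a fraction of the storage.

```python
import numpy as np

def matern_32(x, y, ell=0.3):
    """Matern covariance with smoothness nu = 3/2: (1 + a) * exp(-a), a = sqrt(3) r / ell."""
    r = np.abs(x[:, None] - y[None, :])
    a = np.sqrt(3.0) * r / ell
    return (1.0 + a) * np.exp(-a)

n = 200
pts = np.linspace(0.0, 1.0, n)
C = matern_32(pts, pts)

# Off-diagonal block: covariances between two well-separated clusters.
A = C[: n // 2, n // 2 :]

# Truncated SVD of the far-field block.  (In this 1-D example the block is
# exactly low rank; in general the singular values merely decay quickly.)
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 4  # illustrative rank
A_k = (U[:, :k] * s[:k]) @ Vt[:k, :]

rel_err = np.linalg.norm(A - A_k) / np.linalg.norm(A)
print(rel_err)  # tiny: rank-4 factors store ~2*k*(n/2) numbers instead of (n/2)^2
```

Storing all such admissible blocks in factored form is what brings the cost down to O(n log n).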
Some results in Diophantine approximation
DEFF Research Database (Denmark)
Pedersen, Steffen Højris
This thesis consists of three papers in Diophantine approximation, a subbranch of number theory. Preceding these papers is an introduction to various aspects of Diophantine approximation and formal Laurent series over Fq and a summary of each of the three papers. The introduction introduces the basic concepts on which the papers build. Among others, it introduces metric Diophantine approximation, Mahler's approach on algebraic approximation, the Hausdorff measure, and properties of the formal Laurent series over Fq. The introduction ends with a discussion on Mahler's problem when considered...
Limitations of shallow nets approximation.
Lin, Shao-Bo
2017-10-01
In this paper, we aim at analyzing the approximation abilities of shallow networks in reproducing kernel Hilbert spaces (RKHSs). We prove that there is a probability measure such that the achievable lower bound for approximation by shallow nets is realized for all functions in balls of the reproducing kernel Hilbert space with high probability, which differs from the classical minimax approximation error estimates. This result, together with the existing approximation results for deep nets, shows the limitations of shallow nets and provides a theoretical explanation of why deep nets perform better than shallow nets. Copyright © 2017 Elsevier Ltd. All rights reserved.
Multiple Scattering Model for Optical Coherence Tomography with Rytov Approximation
Li, Muxingzi
2017-04-24
Optical Coherence Tomography (OCT) is a coherence-gated, micrometer-resolution imaging technique that focuses a broadband near-infrared laser beam to penetrate into optical scattering media, e.g. biological tissues. The OCT resolution is split into two parts, with the axial resolution defined by half the coherence length, and the depth-dependent lateral resolution determined by the beam geometry, which is well described by a Gaussian beam model. The depth dependence of lateral resolution directly results in the defocusing effect outside the confocal region and restricts current OCT probes to small numerical aperture (NA) at the expense of lateral resolution near the focus. Another limitation on OCT development is the presence of a mixture of speckles due to multiple scatterers within the coherence length, and other random noise. Motivated by the above two challenges, a multiple scattering model based on Rytov approximation and Gaussian beam optics is proposed for the OCT setup. Some previous papers have adopted the first Born approximation with the assumption of small perturbation of the incident field in inhomogeneous media. The Rytov method of the same order with smooth phase perturbation assumption benefits from a wider spatial range of validity. A deconvolution method for solving the inverse problem associated with the first Rytov approximation is developed, significantly reducing the defocusing effect through depth and therefore extending the feasible range of NA.
SEM Image Processing Based on Third-Order B-Spline Function
Institute of Scientific and Technical Information of China (English)
张健
2011-01-01
SEM images, owing to their unique significance for practical metrology, require denoising that simultaneously highlights edges, together with accurate edge extraction and localization. This paper therefore adopts an edge-preserving partial differential equation method for denoising and the widely used multi-scale wavelet analysis for edge detection, both built on the third-order B-spline function as the core operator, to process SEM images used for line-width testing. The algorithm achieves good denoising while preserving edge features, as well as clear image edge detection results.
Spherical Approximation on Unit Sphere
Directory of Open Access Journals (Sweden)
Eman Samir Bhaya
2018-01-01
In this paper we introduce a Jackson-type theorem for functions in L^p spaces on the sphere and study the best approximation of functions in L^p spaces defined on the unit sphere. Our central problem is to describe the approximation behaviour of functions in L^p spaces by the modulus of smoothness of functions.
Approximate circuits for increased reliability
Hamlet, Jason R.; Mayo, Jackson R.
2015-08-18
Embodiments of the invention describe a Boolean circuit having a voter circuit and a plurality of approximate circuits each based, at least in part, on a reference circuit. The approximate circuits are each to generate one or more output signals based on values of received input signals. The voter circuit is to receive the one or more output signals generated by each of the approximate circuits, and is to output one or more signals corresponding to a majority value of the received signals. At least some of the approximate circuits are to generate an output value different than the reference circuit for one or more input signal values; however, for each possible input signal value, the majority values of the one or more output signals generated by the approximate circuits and received by the voter circuit correspond to output signal result values of the reference circuit.
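The voting scheme described above can be sketched in a few lines (a hypothetical illustration; the reference function and the per-circuit error patterns are invented): as long as at most one approximate circuit deviates from the reference on any given input, the majority vote always reproduces the reference output.

```python
from itertools import product

def reference(a, b, c):
    # Reference circuit: majority-of-three (e.g. a full-adder carry-out).
    return (a & b) | (a & c) | (b & c)

# Invented approximate circuits: each is allowed to deviate from the
# reference on one input pattern, a different pattern for each circuit.
def approx_0(a, b, c):
    return 1 if (a, b, c) == (0, 0, 1) else reference(a, b, c)

def approx_1(a, b, c):
    return 1 if (a, b, c) == (0, 1, 0) else reference(a, b, c)

def approx_2(a, b, c):
    return 0 if (a, b, c) == (1, 1, 1) else reference(a, b, c)

def voter(bits):
    # Majority vote over the one-bit outputs of the approximate circuits.
    return 1 if sum(bits) >= 2 else 0

# For every possible input at most one approximate circuit disagrees with
# the reference, so the voted output always matches the reference circuit.
ok = all(
    voter([approx_0(a, b, c), approx_1(a, b, c), approx_2(a, b, c)])
    == reference(a, b, c)
    for a, b, c in product((0, 1), repeat=3)
)
print(ok)  # True
```

The reliability gain comes from making the circuits' error sets disjoint, not from making any single circuit exact.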
Approximate Dynamic Programming: Combining Regional and Local State Following Approximations.
Deptula, Patryk; Rosenfeld, Joel A; Kamalapurkar, Rushikesh; Dixon, Warren E
2018-06-01
An infinite-horizon optimal regulation problem for a control-affine deterministic system is solved online using a local state following (StaF) kernel and a regional model-based reinforcement learning (R-MBRL) method to approximate the value function. Unlike traditional methods such as R-MBRL that aim to approximate the value function over a large compact set, the StaF kernel approach aims to approximate the value function in a local neighborhood of the state that travels within a compact set. In this paper, the value function is approximated using a state-dependent convex combination of the StaF-based and the R-MBRL-based approximations. As the state enters a neighborhood containing the origin, the value function transitions from being approximated by the StaF approach to the R-MBRL approach. Semiglobal uniformly ultimately bounded (SGUUB) convergence of the system states to the origin is established using a Lyapunov-based analysis. Simulation results are provided for two, three, six, and ten-state dynamical systems to demonstrate the scalability and performance of the developed method.
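The state-dependent convex combination can be illustrated with a toy sketch (all functions below are invented stand-ins, not the paper's StaF kernels or R-MBRL update): a smooth weight hands the value estimate over from the regional approximation near the origin to the local one far away.

```python
import math

def v_staf(x):
    # Stand-in for the local StaF-kernel value estimate (invented).
    return x * x + 0.1

def v_rmbrl(x):
    # Stand-in for the regional model-based value estimate (invented).
    return x * x

def weight(x, radius=1.0):
    """Smooth weight in [0, 1]: ~1 outside the ball of given radius, ~0 inside."""
    return 1.0 / (1.0 + math.exp(-10.0 * (abs(x) - radius)))

def v_blend(x):
    # State-dependent convex combination of the two approximations.
    w = weight(x)
    return w * v_staf(x) + (1.0 - w) * v_rmbrl(x)

# Near the origin the regional estimate dominates; far away the local one does.
print(abs(v_blend(0.0) - v_rmbrl(0.0)) < 1e-3)  # True
print(abs(v_blend(3.0) - v_staf(3.0)) < 1e-3)   # True
```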
Modification of linear response theory for mean-field approximations
Hütter, M.; Öttinger, H.C.
1996-01-01
In the framework of statistical descriptions of many particle systems, the influence of mean-field approximations on the linear response theory is studied. A procedure, analogous to one where no mean-field approximation is involved, is used in order to determine the first order response of the
Cram, Dawn; Roth, Christopher J; Towbin, Alexander J
2016-10-01
The decision to implement an orders-based versus an encounters-based imaging workflow poses various implications to image capture and storage. The impacts include workflows before and after an imaging procedure, electronic health record build, technical infrastructure, analytics, resulting, and revenue. Orders-based workflows tend to favor some imaging specialties while others require an encounters-based approach. The intent of this HIMSS-SIIM white paper is to offer lessons learned from early adopting institutions to physician champions and informatics leadership developing strategic planning and operational rollouts for specialties capturing clinical multimedia.
Approximate truncation robust computed tomography—ATRACT
International Nuclear Information System (INIS)
Dennerlein, Frank; Maier, Andreas
2013-01-01
We present an approximate truncation robust algorithm to compute tomographic images (ATRACT). This algorithm aims at reconstructing volumetric images from cone-beam projections in scenarios where these projections are highly truncated in each dimension. It thus facilitates reconstructions of small subvolumes of interest, without involving prior knowledge about the object. Our method is readily applicable to medical C-arm imaging, where it may contribute to new clinical workflows together with a considerable reduction of x-ray dose. We give a detailed derivation of ATRACT that starts from the conventional Feldkamp filtered-backprojection algorithm and that involves, as one component, a novel original formula for the inversion of the two-dimensional Radon transform. Discretization and numerical implementation are discussed and reconstruction results from both simulated projections and first clinical data sets are presented. (paper)
Pawlak algebra and approximate structure on fuzzy lattice.
Zhuang, Ying; Liu, Wenqi; Wu, Chin-Chia; Li, Jinhai
2014-01-01
The aim of this paper is to investigate the general approximation structure, weak approximation operators, and Pawlak algebra in the framework of fuzzy lattice, lattice topology, and auxiliary ordering. First, we prove that the weak approximation operator space forms a complete distributive lattice. Then we study the properties of transitive closure of approximation operators and apply them to rough set theory. We also investigate molecule Pawlak algebra and obtain some related properties.
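For the crisp special case of an equivalence relation (ordinary Pawlak rough sets, of which the paper's lattice-valued operators are a generalization), the lower and upper approximation operators can be computed directly; the universe and partition below are invented examples.

```python
# Pawlak lower/upper approximations of a set X with respect to an
# equivalence relation given by its partition into classes [x]_R.
U = {1, 2, 3, 4, 5, 6}
partition = [{1, 2}, {3, 4, 5}, {6}]   # equivalence classes [x]_R
X = {2, 3, 4, 5}                       # the set to approximate

# Lower approximation: union of classes entirely contained in X.
lower = set().union(*(b for b in partition if b <= X))
# Upper approximation: union of classes that meet X.
upper = set().union(*(b for b in partition if b & X))

print(lower)  # {3, 4, 5}
print(upper)  # {1, 2, 3, 4, 5}
```

The defining sandwich lower ⊆ X ⊆ upper holds by construction; X is "rough" exactly when the two approximations differ.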
Approximation theorems by Meyer-Koenig and Zeller type operators
International Nuclear Information System (INIS)
Ali Ozarslan, M.; Duman, Oktay
2009-01-01
This paper is mainly connected with the approximation properties of Meyer-Koenig and Zeller (MKZ) type operators. We first introduce a general sequence of MKZ operators based on q-integers and then obtain a Korovkin-type approximation theorem for these operators. We also compute their rates of convergence by means of modulus of continuity and the elements of Lipschitz class functionals. Furthermore, we give an rth order generalization of our operators in order to get some explicit approximation results.
The binary collision approximation: Background and introduction
International Nuclear Information System (INIS)
Robinson, M.T.
1992-08-01
The binary collision approximation (BCA) has long been used in computer simulations of the interactions of energetic atoms with solid targets, as well as being the basis of most analytical theory in this area. While mainly a high-energy approximation, the BCA retains qualitative significance at low energies and, with proper formulation, gives useful quantitative information as well. Moreover, computer simulations based on the BCA can achieve good statistics in many situations where those based on full classical dynamical models require the most advanced computer hardware or are even impracticable. The foundations of the BCA in classical scattering are reviewed, including methods of evaluating the scattering integrals, interaction potentials, and electron excitation effects. The explicit evaluation of time at significant points on particle trajectories is discussed, as are scheduling algorithms for ordering the collisions in a developing cascade. An approximate treatment of nearly simultaneous collisions is outlined and the searching algorithms used in MARLOWE are presented
Resummation of perturbative QCD by Padé approximants
International Nuclear Information System (INIS)
Gardi, E.
1997-01-01
In this lecture I present some of the new developments concerning the use of Padé Approximants (PA's) for resumming perturbative series in QCD. It is shown that PA's tend to reduce the renormalization scale and scheme dependence as compared to truncated series. In particular it is proven that in the limit where the β function is dominated by the 1-loop contribution, there is an exact symmetry that guarantees invariance of diagonal PA's under changing the renormalization scale. In addition it is shown that in the large β0 approximation diagonal PA's can be interpreted as a systematic method for approximating the flow of momentum in Feynman diagrams. This corresponds to a new multiple scale generalization of the Brodsky-Lepage-Mackenzie (BLM) method to higher orders. I illustrate the method with the Bjorken sum rule and the vacuum polarization function. (author)
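Constructing a diagonal PA from truncated series coefficients reduces to a small linear solve. The sketch below (the standard textbook recipe, not the lecture's own code) builds the [2/2] approximant of exp(x) and shows it beating the underlying fourth-order partial sum at x = 1.

```python
import math
import numpy as np

def pade(c, m):
    """[m/m] Pade approximant p/q (with q[0] = 1) from coefficients c[0..2m]."""
    # Denominator: match orders x^{m+1}..x^{2m} of f(x) q(x) - p(x) = O(x^{2m+1}).
    C = np.array([[c[m + i - j] for j in range(1, m + 1)]
                  for i in range(1, m + 1)], dtype=float)
    rhs = -np.array([c[m + i] for i in range(1, m + 1)], dtype=float)
    b = np.linalg.solve(C, rhs)
    q = np.concatenate(([1.0], b))
    # Numerator: Cauchy product of the series with q, truncated at degree m.
    p = np.array([sum(c[k - j] * q[j] for j in range(min(k, m) + 1))
                  for k in range(m + 1)])
    return p, q

c = [1.0 / math.factorial(k) for k in range(5)]   # Taylor coefficients of exp(x)
p, q = pade(c, 2)
approx = np.polyval(p[::-1], 1.0) / np.polyval(q[::-1], 1.0)
print(approx)  # ~2.7143, closer to e = 2.71828... than the partial sum 2.70833...
```

The resummation effect shows up because the rational form extrapolates the tail of the series rather than simply truncating it.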
Cyganek, Boguslaw; Smolka, Bogdan
2015-02-01
In this paper a system for real-time recognition of objects in multidimensional video signals is proposed. Object recognition is done by pattern projection into the tensor subspaces obtained from the factorization of the signal tensors representing the input signal. The novelty of this paper is, instead of taking only the intensity signal, first to build the Extended Structural Tensor representation from the intensity signal, which conveys information on signal intensities as well as on higher-order statistics of the input signals. In this way the higher-order input pattern tensors are built from the training samples. Then, the tensor subspaces are built based on the Higher-Order Singular Value Decomposition of the prototype pattern tensors. Finally, recognition relies on measuring the distance of a test pattern projected into the tensor subspaces obtained from the training tensors. Due to the high dimensionality of the input data, tensor-based methods require high memory and computational resources. However, recent achievements in the technology of multi-core microprocessors and graphics cards allow real-time operation of the multidimensional methods, as is shown and analyzed in this paper based on real examples of object detection in digital images.
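The subspace construction rests on the Higher-Order SVD, which can be sketched with mode-n unfoldings in numpy (a toy random tensor with invented sizes; full ranks are kept so the reconstruction is exact):

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: mode-n fibers become the columns of a matrix."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd_bases(T, ranks):
    """Leading left singular vectors of each mode-n unfolding."""
    return [np.linalg.svd(unfold(T, n), full_matrices=False)[0][:, :r]
            for n, r in enumerate(ranks)]

def mode_product(T, M, mode):
    """Mode-n product T x_n M, i.e. multiply every mode-n fiber by M."""
    return np.moveaxis(np.tensordot(M, T, axes=(1, mode)), 0, mode)

rng = np.random.default_rng(0)
T = rng.standard_normal((6, 5, 4))
U = hosvd_bases(T, (6, 5, 4))          # full ranks -> orthogonal square factors

# Core tensor S = T x_0 U0^T x_1 U1^T x_2 U2^T, then reconstruct from it.
S = T
for n, Un in enumerate(U):
    S = mode_product(S, Un.T, n)
T_rec = S
for n, Un in enumerate(U):
    T_rec = mode_product(T_rec, Un, n)

print(np.allclose(T, T_rec))  # True: full-rank HOSVD is an exact factorization
```

Truncating the columns of each factor yields the tensor subspaces onto which test patterns are projected for recognition.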
Approximate Implicitization Using Linear Algebra
Directory of Open Access Journals (Sweden)
Oliver J. D. Barrowclough
2012-01-01
We consider a family of algorithms for approximate implicitization of rational parametric curves and surfaces. The main approximation tool in all of the approaches is the singular value decomposition, and they are therefore well suited to floating-point implementation in computer-aided geometric design (CAGD) systems. We unify the approaches under the names of commonly known polynomial basis functions and consider various theoretical and practical aspects of the algorithms. We offer new methods for a least squares approach to approximate implicitization using orthogonal polynomials, which tend to be faster and more numerically stable than some existing algorithms. We propose several simple propositions relating the properties of the polynomial bases to their implicit approximation properties.
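A minimal version of the SVD-based approach can be sketched as follows (the circle example and the degree are invented for illustration): sample the parametric curve, assemble the matrix of monomial basis values at the samples, and read the implicit coefficients off the right singular vector belonging to the smallest singular value.

```python
import numpy as np

# Sample the parametric unit circle.
t = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
x, y = np.cos(t), np.sin(t)

# Collocation matrix of the monomial basis up to total degree 2:
# columns are 1, x, y, x^2, xy, y^2 evaluated at the samples.
D = np.column_stack([np.ones_like(x), x, y, x * x, x * y, y * y])

# The implicit polynomial vanishing on the curve lies (approximately) in the
# null space of D: take the right singular vector of the smallest singular value.
_, s, Vt = np.linalg.svd(D, full_matrices=False)
coeffs = Vt[-1]

# Normalize so the constant term is -1; expect x^2 + y^2 - 1 = 0.
coeffs = -coeffs / coeffs[0]
print(np.round(coeffs, 6))  # ~[-1, 0, 0, 1, 0, 1]
```

For inexact data the smallest singular value is no longer zero, and its size measures how well the curve can be captured at the chosen degree.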
Rollout sampling approximate policy iteration
Dimitrakakis, C.; Lagoudakis, M.G.
2008-01-01
Several researchers have recently investigated the connection between reinforcement learning and classification. We are motivated by proposals of approximate policy iteration schemes without value functions, which focus on policy representation using classifiers and address policy learning as a
Weighted approximation with varying weight
Totik, Vilmos
1994-01-01
A new construction is given for approximating a logarithmic potential by a discrete one. This yields a new approach to approximation with weighted polynomials of the form w^n P_n. The new technique settles several open problems, and it leads to a simple proof for the strong asymptotics on some L_p extremal problems on the real line with exponential weights, which, for the case p=2, are equivalent to power-type asymptotics for the leading coefficients of the corresponding orthogonal polynomials. The method is also modified to yield (in a sense) uniformly good approximation on the whole support. This allows one to deduce strong asymptotics in some L_p extremal problems with varying weights. Applications are given, relating to fast decreasing polynomials, asymptotic behavior of orthogonal polynomials and multipoint Padé approximation. The approach is potential-theoretic, but the text is self-contained.
Framework for sequential approximate optimization
Jacobs, J.H.; Etman, L.F.P.; Keulen, van F.; Rooda, J.E.
2004-01-01
An object-oriented framework for Sequential Approximate Optimization (SAO) is proposed. The framework aims to provide an open environment for the specification and implementation of SAO strategies. The framework is based on the Python programming language and contains a toolbox of Python
Nonresonant approximations to the optical potential
International Nuclear Information System (INIS)
Kowalski, K.L.
1982-01-01
A new class of approximations to the optical potential, which includes those of the multiple-scattering variety, is investigated. These approximations are constructed so that the optical potential maintains the correct unitarity properties along with a proper treatment of nucleon identity. The special case of nucleon-nucleus scattering with complete inclusion of Pauli effects is studied in detail. The treatment is such that the optical potential receives contributions only from subsystems embedded in their own physically correct antisymmetrized subspaces. It is found that a systematic development of even the lowest-order approximations requires the use of the off-shell extension due to Alt, Grassberger, and Sandhas along with a consistent set of dynamical equations for the optical potential. In nucleon-nucleus scattering a lowest-order optical potential is obtained as part of a systematic, exact, inclusive connectivity expansion which is expected to be useful at moderately high energies. This lowest-order potential consists of an energy-shifted (tρ)-type term with three-body kinematics plus a heavy-particle exchange or pickup term. The natural appearance of the exchange term additivity in the optical potential clarifies the role of the elastic distortion in connection with the treatment of these processes. The relationship of the relevant aspects of the present analysis of the optical potential to conventional multiple scattering methods is discussed
Briggs, Matt; Shanmugam, Mohan
2013-12-01
This case study describes how a 3D animation was created to approximate the depth and angle of a foreign object (metal bar) that had become embedded into a patient's head. A pre-operative CT scan was not available as the patient could not fit through the CT scanner, therefore a post surgical CT scan, x-ray and photographic images were used. A surface render was made of the skull and imported into Blender (a 3D animation application). The metal bar was not available, however images of a similar object that was retrieved from the scene by the ambulance crew were used to recreate a 3D model. The x-ray images were then imported into Blender and used as background images in order to align the skull reconstruction and metal bar at the correct depth/angle. A 3D animation was then created to fully illustrate the angle and depth of the metal bar in the skull.
Nuclear Hartree-Fock approximation testing and other related approximations
International Nuclear Information System (INIS)
Cohenca, J.M.
1970-01-01
Hartree-Fock and Tamm-Dancoff approximations are tested for angular momentum of even-even nuclei. Wave functions, energy levels and momenta are comparatively evaluated. Quadrupole interactions are studied following the Elliott model. Results are applied to ²⁰Ne.
Diophantine approximation and Dirichlet series
Queffélec, Hervé
2013-01-01
This self-contained book will benefit beginners as well as researchers. It is devoted to Diophantine approximation, the analytic theory of Dirichlet series, and some connections between these two domains, which often occur through the Kronecker approximation theorem. Accordingly, the book is divided into seven chapters, the first three of which present tools from commutative harmonic analysis, including a sharp form of the uncertainty principle, ergodic theory and Diophantine approximation to be used in the sequel. A presentation of continued fraction expansions, including the mixing property of the Gauss map, is given. Chapters four and five present the general theory of Dirichlet series, with classes of examples connected to continued fractions, the famous Bohr point of view, and then the use of random Dirichlet series to produce non-trivial extremal examples, including sharp forms of the Bohnenblust-Hille theorem. Chapter six deals with Hardy-Dirichlet spaces, which are new and useful Banach spaces of anal...
Approximation methods in probability theory
Čekanavičius, Vydas
2016-01-01
This book presents a wide range of well-known and less common methods used for estimating the accuracy of probabilistic approximations, including the Esseen type inversion formulas, the Stein method as well as the methods of convolutions and triangle function. Emphasising the correct usage of the methods presented, each step required for the proofs is examined in detail. As a result, this textbook provides valuable tools for proving approximation theorems. While Approximation Methods in Probability Theory will appeal to everyone interested in limit theorems of probability theory, the book is particularly aimed at graduate students who have completed a standard intermediate course in probability theory. Furthermore, experienced researchers wanting to enlarge their toolkit will also find this book useful.
Kim, SungKun; Lee, Hunpyo
2017-06-01
Via a dynamical cluster approximation with N c = 4 in combination with a semiclassical approximation (DCA+SCA), we study the doped two-dimensional Hubbard model. We obtain a plaquette antiferromagnetic (AF) Mott insulator, a plaquette AF ordered metal, a pseudogap (or d-wave superconductor) and a paramagnetic metal by tuning the doping concentration. These features are similar to the behaviors observed in copper-oxide superconductors and are in qualitative agreement with the results calculated by the cluster dynamical mean field theory with the continuous-time quantum Monte Carlo (CDMFT+CTQMC) approach. The results of our DCA+SCA differ from those of the CDMFT+CTQMC approach in that the d-wave superconducting order parameters are shown even in the high doped region, unlike the results of the CDMFT+CTQMC approach. We think that the strong plaquette AF orderings in the dynamical cluster approximation (DCA) with N c = 4 suppress superconducting states with increasing doping up to strongly doped region, because frozen dynamical fluctuations in a semiclassical approximation (SCA) approach are unable to destroy those orderings. Our calculation with short-range spatial fluctuations is initial research, because the SCA can manage long-range spatial fluctuations in feasible computational times beyond the CDMFT+CTQMC tool. We believe that our future DCA+SCA calculations should supply information on the fully momentum-resolved physical properties, which could be compared with the results measured by angle-resolved photoemission spectroscopy experiments.
Approximate reasoning in physical systems
International Nuclear Information System (INIS)
Mutihac, R.
1991-01-01
The theory of fuzzy sets provides excellent ground to deal with fuzzy observations (uncertain or imprecise signals, wavelengths, temperatures, etc.), fuzzy functions (spectra and depth profiles) and fuzzy logic and approximate reasoning. First, the basic ideas of fuzzy set theory are briefly presented. Secondly, stress is put on application of simple fuzzy set operations for matching candidate reference spectra of a spectral library to an unknown sample spectrum (e.g. IR spectroscopy). Thirdly, approximate reasoning is applied to infer an unknown property from information available in a database (e.g. crystal systems). Finally, multi-dimensional fuzzy reasoning techniques are suggested. (Author)
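The spectral-matching step can be illustrated with elementary fuzzy set operations (a hypothetical sketch; the Gaussian "spectra" and the min/max similarity index are invented stand-ins, not the chapter's actual data): normalized spectra are treated as membership functions, and a candidate is scored by the ratio of min- to max-memberships.

```python
import numpy as np

def fuzzy_similarity(sample, reference):
    """Ratio of fuzzy intersection (min) to fuzzy union (max) memberships."""
    return np.minimum(sample, reference).sum() / np.maximum(sample, reference).sum()

grid = np.linspace(0.0, 10.0, 500)
gauss = lambda mu, sig: np.exp(-0.5 * ((grid - mu) / sig) ** 2)

# Invented "spectra": membership functions over a wavelength grid.
sample = gauss(3.0, 0.3) + 0.5 * gauss(7.0, 0.4)
library = {
    "compound A": gauss(3.0, 0.3) + 0.5 * gauss(7.0, 0.4),
    "compound B": gauss(2.0, 0.3) + 0.8 * gauss(8.0, 0.4),
}

best = max(library, key=lambda k: fuzzy_similarity(sample, library[k]))
print(best)  # compound A (identical membership function, similarity 1.0)
```

The index equals 1 only for an exact match and degrades gracefully as peaks shift, which is what makes it usable for imprecise observations.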
Topics in multivariate approximation and interpolation
Jetter, Kurt
2005-01-01
This book is a collection of eleven articles, written by leading experts and dealing with special topics in Multivariate Approximation and Interpolation. The material discussed here has far-reaching applications in many areas of Applied Mathematics, such as in Computer Aided Geometric Design, in Mathematical Modelling, in Signal and Image Processing and in Machine Learning, to mention a few. The book aims at giving a comprehensive information leading the reader from the fundamental notions and results of each field to the forefront of research. It is an ideal and up-to-date introduction for gr
Approximate Reanalysis in Topology Optimization
DEFF Research Database (Denmark)
Amir, Oded; Bendsøe, Martin P.; Sigmund, Ole
2009-01-01
In the nested approach to structural optimization, most of the computational effort is invested in the solution of the finite element analysis equations. In this study, the integration of an approximate reanalysis procedure into the framework of topology optimization of continuum structures...
Approximate Matching of Hierarchical Data
DEFF Research Database (Denmark)
Augsten, Nikolaus
The pq-grams of a tree are all its subtrees of a particular shape. Intuitively, two trees are similar if they have many pq-grams in common. The pq-gram distance is an efficient and effective approximation of the tree edit distance. We analyze the properties of the pq-gram distance and compare it with the tree edit...
Approximation properties of haplotype tagging
Directory of Open Access Journals (Sweden)
Dreiseitl Stephan
2006-01-01
Background: Single nucleotide polymorphisms (SNPs) are locations at which the genomic sequences of population members differ. Since these differences are known to follow patterns, disease association studies are facilitated by identifying SNPs that allow the unique identification of such patterns. This process, known as haplotype tagging, is formulated as a combinatorial optimization problem and analyzed in terms of complexity and approximation properties. Results: It is shown that the tagging problem is NP-hard but approximable within 1 + ln((n² − n)/2) for n haplotypes, but not approximable within (1 − ε) ln(n/2) for any ε > 0 unless NP ⊂ DTIME(n^(log log n)). A simple, very easily implementable algorithm that exhibits the above upper bound on solution quality is presented. This algorithm has running time O((2m − p + 1)(n² − n)/2) ≤ O(m(n² − n)/2), where p ≤ min(n, m), for n haplotypes of size m. As we show that the approximation bound is asymptotically tight, the algorithm presented is optimal with respect to this asymptotic bound. Conclusion: The haplotype tagging problem is hard, but approachable with a fast, practical, and surprisingly simple algorithm that cannot be significantly improved upon on a single processor machine. Hence, significant improvement in computational effort expended can only be expected if the computational effort is distributed and done in parallel.
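A greedy variant of such a tagging algorithm is easy to sketch (a hypothetical illustration with an invented 0/1 haplotype matrix, not necessarily the paper's exact algorithm): repeatedly pick the SNP that separates the most currently indistinguishable haplotype pairs, until every pair is told apart.

```python
from itertools import combinations

# Invented example: four haplotypes over four SNP sites (rows x columns).
haplotypes = [
    (0, 0, 1, 0),
    (0, 1, 1, 0),
    (1, 0, 0, 1),
    (1, 1, 0, 0),
]

def unresolved(pairs, tags):
    """Haplotype pairs that agree on every tag SNP chosen so far."""
    return [(i, j) for i, j in pairs
            if all(haplotypes[i][s] == haplotypes[j][s] for s in tags)]

pairs = list(combinations(range(len(haplotypes)), 2))
m = len(haplotypes[0])
tags = []
while unresolved(pairs, tags):
    # Greedy choice: SNP separating the most still-unresolved pairs.
    best = max(range(m),
               key=lambda s: sum(haplotypes[i][s] != haplotypes[j][s]
                                 for i, j in unresolved(pairs, tags)))
    tags.append(best)

print(sorted(tags))  # a small tag set distinguishing all four haplotypes
```

This is the classic greedy set-cover pattern, which is what yields logarithmic approximation guarantees of the kind proved in the paper.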
All-Norm Approximation Algorithms
Azar, Yossi; Epstein, Leah; Richter, Yossi; Woeginger, Gerhard J.; Penttonen, Martti; Meineche Schmidt, Erik
2002-01-01
A major drawback in optimization problems and in particular in scheduling problems is that for every measure there may be a different optimal solution. In many cases the various measures are different ℓp norms. We address this problem by introducing the concept of an All-norm ρ-approximation
Truthful approximations to range voting
DEFF Research Database (Denmark)
Filos-Ratsika, Aris; Miltersen, Peter Bro
We consider the fundamental mechanism design problem of approximate social welfare maximization under general cardinal preferences on a finite number of alternatives and without money. The well-known range voting scheme can be thought of as a non-truthful mechanism for exact social welfare...
On badly approximable complex numbers
DEFF Research Database (Denmark)
Esdahl-Schou, Rune; Kristensen, S.
We show that the set of complex numbers which are badly approximable by ratios of elements of , where has maximal Hausdorff dimension. In addition, the intersection of these sets is shown to have maximal dimension. The results remain true when the sets in question are intersected with a suitably...
Approximate reasoning in decision analysis
Energy Technology Data Exchange (ETDEWEB)
Gupta, M M; Sanchez, E
1982-01-01
The volume aims to incorporate the recent advances in both theory and applications. It contains 44 articles by 74 contributors from 17 different countries. The topics considered include: membership functions; composite fuzzy relations; fuzzy logic and inference; classifications and similarity measures; expert systems and medical diagnosis; psychological measurements and human behaviour; approximate reasoning and decision analysis; and fuzzy clustering algorithms.
Rational approximation of vertical segments
Salazar Celis, Oliver; Cuyt, Annie; Verdonk, Brigitte
2007-08-01
In many applications, observations are prone to imprecise measurements. When constructing a model based on such data, an approximation rather than an interpolation approach is needed. Very often a least squares approximation is used. Here we follow a different approach. A natural way for dealing with uncertainty in the data is by means of an uncertainty interval. We assume that the uncertainty in the independent variables is negligible and that for each observation an uncertainty interval can be given which contains the (unknown) exact value. To approximate such data we look for functions which intersect all uncertainty intervals. In the past this problem has been studied for polynomials, or more generally for functions which are linear in the unknown coefficients. Here we study the problem for a particular class of functions which are nonlinear in the unknown coefficients, namely rational functions. We show how to reduce the problem to a quadratic programming problem with a strictly convex objective function, yielding a unique rational function which intersects all uncertainty intervals and satisfies some additional properties. Compared to rational least squares approximation which reduces to a nonlinear optimization problem where the objective function may have many local minima, this makes the new approach attractive.
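The core condition the paper's quadratic program enforces can be written linearly: a rational function p/q with q(x_i) > 0 intersects every uncertainty interval [l_i, u_i] exactly when l_i·q(x_i) ≤ p(x_i) ≤ u_i·q(x_i). A minimal sketch of this feasibility check (names and the candidate-checking framing are our own; the paper solves for p and q rather than checking a given pair):

```python
def poly_eval(coeffs, x):
    # Horner evaluation; coeffs[k] multiplies x**k
    r = 0.0
    for c in reversed(coeffs):
        r = r * x + c
    return r

def intersects_all(p, q, data):
    """Check the linearized interval conditions l*q(x) <= p(x) <= u*q(x),
    with q(x) > 0, for a candidate rational function p/q.
    `data` is a list of (x, lo, hi) uncertainty intervals."""
    for x, lo, hi in data:
        qx = poly_eval(q, x)
        if qx <= 0:
            return False
        px = poly_eval(p, x)
        if not (lo * qx <= px <= hi * qx):
            return False
    return True
```

For example, the rational function 1/(1 + x) passes through intervals centered on its values at x = 0 and x = 1, but fails an interval that excludes 0.5.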
Pythagorean Approximations and Continued Fractions
Peralta, Javier
2008-01-01
In this article, we will show that the Pythagorean approximations of [the square root of] 2 coincide with those achieved in the 16th century by means of continued fractions. Assuming this fact and the known relation that connects the Fibonacci sequence with the golden section, we shall establish a procedure to obtain sequences of rational numbers…
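The coincidence the article describes is easy to exhibit: the classical side/diagonal ("Pythagorean") numbers satisfy the Pell-type recurrence p' = p + 2q, q' = p + q, and the resulting ratios are exactly the continued-fraction convergents of √2. A short sketch (the function name is ours):

```python
import math

def sqrt2_convergents(n):
    """First n continued-fraction convergents of sqrt(2), generated by the
    Pell-type recurrence p' = p + 2q, q' = p + q starting from 1/1.
    These coincide with the classical side/diagonal number approximations."""
    p, q = 1, 1
    out = []
    for _ in range(n):
        out.append((p, q))
        p, q = p + 2 * q, p + q
    return out
```

The first five convergents are 1/1, 3/2, 7/5, 17/12, 41/29, with monotonically shrinking error against √2.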
Ultrafast Approximation for Phylogenetic Bootstrap
Bui Quang Minh; Nguyen, Thi; von Haeseler, Arndt
Nonparametric bootstrap has been a widely used tool in phylogenetic analysis to assess the clade support of phylogenetic trees. However, with the rapidly growing amount of data, this task remains a computational bottleneck. Recently, approximation methods such as the RAxML rapid bootstrap (RBS) and

Approximate Networking for Universal Internet Access
Directory of Open Access Journals (Sweden)
Junaid Qadir
2017-12-01
Despite the best efforts of networking researchers and practitioners, an ideal Internet experience is inaccessible to an overwhelming majority of people the world over, mainly due to the lack of cost-efficient ways of provisioning high-performance, global Internet. In this paper, we argue that instead of an exclusive focus on a utopian goal of universally accessible "ideal networking" (in which we have high throughput and quality of service as well as low latency and congestion), we should consider providing "approximate networking" through the adoption of context-appropriate trade-offs. In this regard, we propose to leverage the advances in the emerging trend of "approximate computing", which relies on relaxing the bounds of precise/exact computing to provide new opportunities for improving the area, power, and performance efficiency of systems by orders of magnitude by embracing output errors in resilient applications. Furthermore, we propose to extend the dimensions of approximate computing towards various knobs available at the network layers. Approximate networking can be used to provision "Global Access to the Internet for All" (GAIA) in a pragmatically tiered fashion, in which different users around the world are provided a different context-appropriate (but still contextually functional) Internet experience.
Lognormal Approximations of Fault Tree Uncertainty Distributions.
El-Shanawany, Ashraf Ben; Ardron, Keith H; Walker, Simon P
2018-01-26
Fault trees are used in reliability modeling to create logical models of fault combinations that can lead to undesirable events. The output of a fault tree analysis (the top event probability) is expressed in terms of the failure probabilities of basic events that are input to the model. Typically, the basic event probabilities are not known exactly, but are modeled as probability distributions: therefore, the top event probability is also represented as an uncertainty distribution. Monte Carlo methods are generally used for evaluating the uncertainty distribution, but such calculations are computationally intensive and do not readily reveal the dominant contributors to the uncertainty. In this article, a closed-form approximation for the fault tree top event uncertainty distribution is developed, which is applicable when the uncertainties in the basic events of the model are lognormally distributed. The results of the approximate method are compared with results from two sampling-based methods: namely, the Monte Carlo method and the Wilks method based on order statistics. It is shown that the closed-form expression can provide a reasonable approximation to results obtained by Monte Carlo sampling, without incurring the computational expense. The Wilks method is found to be a useful means of providing an upper bound for the percentiles of the uncertainty distribution while being computationally inexpensive compared with full Monte Carlo sampling. The lognormal approximation method and Wilks's method appear attractive, practical alternatives for the evaluation of uncertainty in the output of fault trees and similar multilinear models. © 2018 Society for Risk Analysis.
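One building block of such a closed-form approach is exact for AND gates: the product of independent lognormal basic-event probabilities is itself lognormal, with the log-scale means and variances simply adding. A hedged sketch of that multiplicative case, checked against Monte Carlo (OR gates require the paper's moment-matching approximation, which is not reproduced here; all names are ours):

```python
import math
import random

def and_gate_lognormal(params):
    """Exact lognormal parameters of a product of independent lognormal
    basic-event probabilities (an AND gate): mu and sigma^2 add."""
    mu = sum(m for m, s in params)
    var = sum(s * s for m, s in params)
    return mu, math.sqrt(var)

def mc_median(params, n=20000, seed=1):
    """Monte Carlo median of the product, for comparison."""
    random.seed(seed)
    samples = sorted(
        math.prod(random.lognormvariate(m, s) for m, s in params)
        for _ in range(n)
    )
    return samples[n // 2]
```

For two basic events with log-means −5 and −4, the analytic median of the product is exp(−9), and sampling agrees closely.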
Approximated solutions to Born-Infeld dynamics
Energy Technology Data Exchange (ETDEWEB)
Ferraro, Rafael [Instituto de Astronomía y Física del Espacio (IAFE, CONICET-UBA),Casilla de Correo 67, Sucursal 28, 1428 Buenos Aires (Argentina); Departamento de Física, Facultad de Ciencias Exactas y Naturales, Universidad de Buenos Aires,Ciudad Universitaria, Pabellón I, 1428 Buenos Aires (Argentina); Nigro, Mauro [Departamento de Física, Facultad de Ciencias Exactas y Naturales, Universidad de Buenos Aires,Ciudad Universitaria, Pabellón I, 1428 Buenos Aires (Argentina)
2016-02-01
The Born-Infeld equation in the plane is usefully captured in complex language. The general exact solution can be written as a combination of holomorphic and anti-holomorphic functions. However, this solution only expresses the potential in an implicit way. We rework the formulation to obtain the complex potential in an explicit way, by means of a perturbative procedure. We take care of the secular behavior common to this kind of approach, by resorting to a symmetry the equation has at the considered order of approximation. We apply the method to build approximated solutions to Born-Infeld electrodynamics. We solve for BI electromagnetic waves traveling in opposite directions. We study the propagation at interfaces, with the aim of searching for effects susceptible to experimental detection. In particular, we show that a reflected wave is produced when a wave is incident on a semi-space containing a magnetostatic field.
Approximated solutions to Born-Infeld dynamics
International Nuclear Information System (INIS)
Ferraro, Rafael; Nigro, Mauro
2016-01-01
The Born-Infeld equation in the plane is usefully captured in complex language. The general exact solution can be written as a combination of holomorphic and anti-holomorphic functions. However, this solution only expresses the potential in an implicit way. We rework the formulation to obtain the complex potential in an explicit way, by means of a perturbative procedure. We take care of the secular behavior common to this kind of approach, by resorting to a symmetry the equation has at the considered order of approximation. We apply the method to build approximated solutions to Born-Infeld electrodynamics. We solve for BI electromagnetic waves traveling in opposite directions. We study the propagation at interfaces, with the aim of searching for effects susceptible to experimental detection. In particular, we show that a reflected wave is produced when a wave is incident on a semi-space containing a magnetostatic field.
Beyond the random phase approximation
DEFF Research Database (Denmark)
Olsen, Thomas; Thygesen, Kristian S.
2013-01-01
We assess the performance of a recently proposed renormalized adiabatic local density approximation (rALDA) for ab initio calculations of electronic correlation energies in solids and molecules. The method is an extension of the random phase approximation (RPA) derived from time-dependent density functional theory and the adiabatic connection fluctuation-dissipation theorem and contains no fitted parameters. The new kernel is shown to preserve the accurate description of dispersive interactions from RPA while significantly improving the description of short-range correlation in molecules, insulators, and metals. For molecular atomization energies, the rALDA is a factor of 7 better than RPA and a factor of 4 better than the Perdew-Burke-Ernzerhof (PBE) functional when compared to experiments, and a factor of 3 (1.5) better than RPA (PBE) for cohesive energies of solids. For transition metals…
Hydrogen: Beyond the Classic Approximation
International Nuclear Information System (INIS)
Scivetti, Ivan
2003-01-01
The classical nucleus approximation is the most frequently used approach for the resolution of problems in condensed matter physics. However, there are systems in nature where it is necessary to introduce the nuclear degrees of freedom to obtain a correct description of the properties. Examples of this are systems containing hydrogen. In this work, we have studied the resolution of the quantum nuclear problem for the particular case of the water molecule. The Hartree approximation has been used, i.e. we have considered that the nuclei are distinguishable particles. In addition, we have proposed a model to solve the tunneling process, which involves the resolution of the nuclear problem for configurations of the system away from its equilibrium position.
Approximation errors during variance propagation
International Nuclear Information System (INIS)
Dinsmore, Stephen
1986-01-01
Risk and reliability analyses are often performed by constructing and quantifying large fault trees. The inputs to these models are component failure events whose probabilities of occurring are best represented as random variables. This paper examines the errors inherent in two approximation techniques used to calculate the top event's variance from the inputs' variances. Two sample fault trees are evaluated, and several three-dimensional plots illustrating the magnitude of the error over a wide range of input means and variances are given.
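A standard variance-propagation technique of the kind the paper examines is the first-order (delta-method) approximation: linearize the top-event function at the input means and sum the squared sensitivities times the input variances. A minimal sketch for a two-input OR gate, compared against Monte Carlo (the gate, distributions, and names are our illustrative assumptions, not the paper's test cases):

```python
import random

def or_top(p1, p2):
    # top event of a two-input OR gate
    return 1 - (1 - p1) * (1 - p2)

def delta_var(m1, v1, m2, v2):
    """First-order (delta-method) variance of the OR-gate top event."""
    d1 = 1 - m2   # df/dp1 evaluated at the means
    d2 = 1 - m1   # df/dp2 evaluated at the means
    return d1 * d1 * v1 + d2 * d2 * v2

def mc_var(m1, s1, m2, s2, n=50000, seed=0):
    """Monte Carlo variance with independent normal inputs, for comparison."""
    random.seed(seed)
    xs = [or_top(random.gauss(m1, s1), random.gauss(m2, s2)) for _ in range(n)]
    mean = sum(xs) / n
    return sum((x - mean) ** 2 for x in xs) / (n - 1)
```

For mildly uncertain inputs the first-order estimate is close to sampling; the paper's point is precisely how this error grows over wider ranges of means and variances.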
WKB approximation in atomic physics
International Nuclear Information System (INIS)
Karnakov, Boris Mikhailovich
2013-01-01
Provides extensive coverage of the Wentzel-Kramers-Brillouin approximation and its applications. Presented as a sequence of problems with highly detailed solutions. Gives a concise introduction for calculating Rydberg states, potential barriers and quasistationary systems. This book has evolved from lectures devoted to applications of the Wentzel-Kramers-Brillouin (WKB or quasi-classical) approximation and of the method of 1/N-expansion for solving various problems in atomic and nuclear physics. The intent of this book is to help students and investigators in this field to extend their knowledge of these important calculation methods in quantum mechanics. Much material is contained herein that is not to be found elsewhere. The WKB approximation, while constituting a fundamental area in atomic physics, has not been the focus of many books. A novel method has been adopted for the presentation of the subject matter: the material is presented as a succession of problems, each followed by a detailed solution. The methods introduced are then used to calculate Rydberg states in atomic systems and to evaluate potential barriers and quasistationary states. Finally, adiabatic transitions and ionization of quantum systems are covered.
Approximate labeling via graph cuts based on linear programming.
Komodakis, Nikos; Tziritas, Georgios
2007-08-01
A new framework is presented for both understanding and developing graph-cut-based combinatorial algorithms suitable for the approximate optimization of a very wide class of Markov Random Fields (MRFs) that are frequently encountered in computer vision. The proposed framework utilizes tools from the duality theory of linear programming in order to provide an alternative and more general view of state-of-the-art techniques like the α-expansion algorithm, which is included merely as a special case. Moreover, contrary to α-expansion, the derived algorithms generate solutions with guaranteed optimality properties for a much wider class of problems, for example, even for MRFs with nonmetric potentials. In addition, they are capable of providing per-instance suboptimality bounds in all occasions, including discrete MRFs with an arbitrary potential function. These bounds prove to be very tight in practice (that is, very close to 1), which means that the resulting solutions are almost optimal. Our algorithms' effectiveness is demonstrated by presenting experimental results on a variety of low-level vision tasks, such as stereo matching, image restoration, image completion, and optical flow estimation, as well as on synthetic problems.
The high intensity approximation applied to multiphoton ionization
International Nuclear Information System (INIS)
Brandi, H.S.; Davidovich, L.; Zagury, N.
1980-08-01
It is shown that the most commonly used high intensity approximations as applied to ionization by strong electromagnetic fields are related. The applicability of the steepest descent method in these approximations, and the relation between them and first-order perturbation theory, are also discussed. (Author)
The modified signed likelihood statistic and saddlepoint approximations
DEFF Research Database (Denmark)
Jensen, Jens Ledet
1992-01-01
SUMMARY: For a number of tests in exponential families we show that the use of a normal approximation to the modified signed likelihood ratio statistic r* is equivalent to the use of a saddlepoint approximation. This is also true in a large deviation region where the signed likelihood ratio statistic r is of order √n. © 1992 Biometrika Trust.
Static correlation beyond the random phase approximation
DEFF Research Database (Denmark)
Olsen, Thomas; Thygesen, Kristian Sommer
2014-01-01
…derived from Hedin's equations (Random Phase Approximation (RPA), Time-Dependent Hartree-Fock (TDHF), Bethe-Salpeter equation (BSE), and Time-Dependent GW) all reproduce the correct dissociation limit. We also show that the BSE improves the correlation energies obtained within RPA and TDHF significantly… and confirms that BSE greatly improves the RPA and TDHF results despite the fact that the BSE excitation spectrum breaks down in the dissociation limit. In contrast, second order screened exchange gives a poor description of the dissociation limit, which can be attributed to the fact that it cannot be derived…
Approximate direct georeferencing in national coordinates
Legat, Klaus
Direct georeferencing has gained an increasing importance in photogrammetry and remote sensing. Thereby, the parameters of exterior orientation (EO) of an image sensor are determined by GPS/INS, yielding results in a global geocentric reference frame. Photogrammetric products like digital terrain models or orthoimages, however, are often required in national geodetic datums and mapped by national map projections, i.e., in "national coordinates". As the fundamental mathematics of photogrammetry is based on Cartesian coordinates, the scene restitution is often performed in a Cartesian frame located at some central position of the image block. The subsequent transformation to national coordinates is a standard problem in geodesy and can be done in a rigorous manner, at least if the formulas of the map projection are rigorous. Drawbacks of this procedure include practical deficiencies related to the photogrammetric processing as well as the computational cost of transforming the whole scene. To avoid these problems, the paper pursues an alternative processing strategy where the EO parameters are transformed prior to the restitution. If only this transition were done, however, the scene would be systematically distorted. The reason is that the national coordinates are not Cartesian due to the earth curvature and the unavoidable length distortion of map projections. To remove these distortions, several corrections need to be applied. These are treated in detail for both passive and active imaging. Since all these corrections are approximations only, the resulting technique is termed "approximate direct georeferencing". Still, the residual distortions are usually very low as is demonstrated by simulations, rendering the technique an attractive approach to direct georeferencing.
Continuum approximation of the Fermi-Pasta-Ulam lattice
International Nuclear Information System (INIS)
Martina, L.
1979-01-01
A continuum approximation method is applied in order to discuss the connection between some properties of the infinite Fermi-Pasta-Ulam lattice and the ones displayed by the Korteweg-de Vries equation
The degenerate-internal-states approximation for cold collisions
Maan, A.C.; Tiesinga, E.; Stoof, H.T.C.; Verhaar, B.J.
1990-01-01
The Degenerate-Internal-States approximation as well as its first-order correction are shown to provide a convenient method for calculating elastic and inelastic collision amplitudes for low temperature atomic scattering.
The accuracy of time dependent transport equation ergodic approximation
International Nuclear Information System (INIS)
Stancic, V.
1995-01-01
In order to predict the accuracy of the ergodic approximation for solving the time dependent transport equation, a comparison with multiple collision and time finite difference methods has been considered. (author)
Energy Technology Data Exchange (ETDEWEB)
Garibotti, C R; Grinstein, F F [Rosario Univ. Nacional (Argentina). Facultad de Ciencias Exactas e Ingenieria
1976-05-08
The practical utility of the Born series for the calculation of atomic collision processes in the Born approximation is discussed. It is suggested to make use of Padé approximants, and it is shown that this approach provides very fast convergent sequences over the whole energy range studied. Yukawa and exponential potentials are explicitly considered and the results are compared with high-order Born approximations.
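The Padé construction itself is generic: given Taylor coefficients c_0..c_{L+M}, solve a small linear system for the denominator coefficients (with q_0 = 1) and read off the numerator. A self-contained sketch over exact rationals (our own illustrative implementation, not the paper's):

```python
from fractions import Fraction

def pade(c, L, M):
    """[L/M] Pade approximant from Taylor coefficients c[0..L+M].
    Returns (p, q) coefficient lists with q[0] = 1."""
    c = [Fraction(x) for x in c]
    get = lambda k: c[k] if k >= 0 else Fraction(0)
    # Solve sum_{j=1..M} q_j * c[L+k-j] = -c[L+k] for k = 1..M
    A = [[get(L + k - j) for j in range(1, M + 1)] for k in range(1, M + 1)]
    b = [-get(L + k) for k in range(1, M + 1)]
    # Gaussian elimination with pivoting over exact rationals
    # (raises StopIteration if the system is singular -- fine for a sketch)
    for col in range(M):
        piv = next(r for r in range(col, M) if A[r][col] != 0)
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, M):
            f = A[r][col] / A[col][col]
            for cc in range(col, M):
                A[r][cc] -= f * A[col][cc]
            b[r] -= f * b[col]
    q = [Fraction(0)] * M
    for r in range(M - 1, -1, -1):
        q[r] = (b[r] - sum(A[r][cc] * q[cc] for cc in range(r + 1, M))) / A[r][r]
    q = [Fraction(1)] + q
    p = [sum(q[j] * get(i - j) for j in range(min(i, M) + 1)) for i in range(L + 1)]
    return p, q
```

For the exponential series (c_k = 1/k!), the [1/1] approximant is the familiar (1 + x/2)/(1 − x/2).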
Operator ordering and causality
Plimak, L. I.; Stenholm, S. T.
2011-01-01
It is shown that causality violations [M. de Haan, Physica 132A, 375, 397 (1985)], emerging when the conventional definition of the time-normal operator ordering [P.L.Kelley and W.H.Kleiner, Phys.Rev. 136, A316 (1964)] is taken outside the rotating wave approximation, disappear when the amended definition [L.P. and S.S., Annals of Physics, 323, 1989 (2008)] of this ordering is used.
Discrete dipole approximation simulation of bead enhanced diffraction grating biosensor
International Nuclear Information System (INIS)
Arif, Khalid Mahmood
2016-01-01
We present the discrete dipole approximation simulation of light scattering from a bead-enhanced diffraction biosensor and report the effect of bead material, number of beads forming the grating, and spatial randomness on the diffraction intensities of the 1st and 0th orders. The dipole models of the gratings are formed by volume slicing and image processing, while the spatial locations of the beads on the substrate surface are randomly computed using a discrete probability distribution. The effect of bead reduction on far-field scattering of a 632.8 nm incident field, from fully occupied gratings to very coarse gratings, is studied for various bead materials. Our findings give insight into many difficult or experimentally impossible aspects of this genre of biosensors and establish that bead-enhanced gratings may be used for rapid and precise detection of small amounts of biomolecules. The results of the simulations also show excellent qualitative similarities with experimental observations.
Highlights:
• DDA was used to study the relationship between the number of beads forming gratings and the ratio of first and zeroth order diffraction intensities.
• A very flexible modeling program was developed to design complicated objects for DDA.
• Material and spatial effects of bead distribution on surfaces were studied.
• It has been shown that the bead-enhanced grating biosensor can be useful for fast detection of small amounts of biomolecules.
• Experimental results qualitatively support the simulations and thus open a way to optimize grating biosensors.
Approximate solutions to Mathieu's equation
Wilkinson, Samuel A.; Vogt, Nicolas; Golubev, Dmitry S.; Cole, Jared H.
2018-06-01
Mathieu's equation has many applications throughout theoretical physics. It is especially important to the theory of Josephson junctions, where it is equivalent to Schrödinger's equation. Mathieu's equation can be easily solved numerically; however, no closed-form analytic solution exists. Here we collect various approximations which appear throughout the physics and mathematics literature and examine their accuracy and regimes of applicability. Particular attention is paid to quantities relevant to the physics of Josephson junctions, but the arguments and notation are kept general so as to be of use to the broader physics community.
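The "easily solved numerically" claim can be made concrete: integrating y'' + (a − 2q cos 2t) y = 0 over one period gives the monodromy matrix, whose trace decides Floquet stability (|trace| < 2 is stable). A minimal RK4 sketch, a numerical companion to the closed-form approximations rather than one of them (names and step counts are our own):

```python
import math

def mathieu_monodromy_trace(a, q, steps=2000):
    """Trace of the monodromy matrix of Mathieu's equation
    y'' + (a - 2 q cos 2t) y = 0 over one period [0, pi], via RK4."""
    h = math.pi / steps

    def rhs(t, y, v):
        return v, -(a - 2 * q * math.cos(2 * t)) * y

    def integrate(y, v):
        t = 0.0
        for _ in range(steps):
            k1y, k1v = rhs(t, y, v)
            k2y, k2v = rhs(t + h / 2, y + h / 2 * k1y, v + h / 2 * k1v)
            k3y, k3v = rhs(t + h / 2, y + h / 2 * k2y, v + h / 2 * k2v)
            k4y, k4v = rhs(t + h, y + h * k3y, v + h * k3v)
            y += h / 6 * (k1y + 2 * k2y + 2 * k3y + k4y)
            v += h / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
            t += h
        return y, v

    y1, _ = integrate(1.0, 0.0)   # solution with y(0)=1, y'(0)=0
    _, v2 = integrate(0.0, 1.0)   # solution with y(0)=0, y'(0)=1
    return y1 + v2
```

At q = 0 the equation reduces to a harmonic oscillator and the trace is exactly 2 cos(π√a), which makes a convenient correctness check.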
Approximate Inference for Wireless Communications
DEFF Research Database (Denmark)
Hansen, Morten
This thesis investigates signal processing techniques for wireless communication receivers. The aim is to improve the performance or reduce the computational complexity of these, where the primary focus area is cellular systems such as the Global System for Mobile communications (GSM) (and extensions)… to the optimal one, which usually requires an unacceptably high complexity. Some of the treated approximate methods are based on QL-factorization of the channel matrix. In the work presented in this thesis it is proven how the QL-factorization of frequency-selective channels asymptotically provides the minimum…
Quantum tunneling beyond semiclassical approximation
International Nuclear Information System (INIS)
Banerjee, Rabin; Majhi, Bibhas Ranjan
2008-01-01
Hawking radiation as tunneling by Hamilton-Jacobi method beyond semiclassical approximation is analysed. We compute all quantum corrections in the single particle action revealing that these are proportional to the usual semiclassical contribution. We show that a simple choice of the proportionality constants reproduces the one loop back reaction effect in the spacetime, found by conformal field theory methods, which modifies the Hawking temperature of the black hole. Using the law of black hole mechanics we give the corrections to the Bekenstein-Hawking area law following from the modified Hawking temperature. Some examples are explicitly worked out.
Generalized Gradient Approximation Made Simple
International Nuclear Information System (INIS)
Perdew, J.P.; Burke, K.; Ernzerhof, M.
1996-01-01
Generalized gradient approximations (GGA's) for the exchange-correlation energy improve upon the local spin density (LSD) description of atoms, molecules, and solids. We present a simple derivation of a simple GGA, in which all parameters (other than those in LSD) are fundamental constants. Only general features of the detailed construction underlying the Perdew-Wang 1991 (PW91) GGA are invoked. Improvements over PW91 include an accurate description of the linear response of the uniform electron gas, correct behavior under uniform scaling, and a smoother potential. © 1996 The American Physical Society
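The "fundamental constants" of this GGA appear, for example, in its exchange enhancement factor F_x(s) = 1 + κ − κ/(1 + μs²/κ), with κ = 0.804 and μ ≈ 0.2195 (the value commonly tabulated for PBE). A one-line sketch for illustration:

```python
def pbe_fx(s, kappa=0.804, mu=0.2195149727645171):
    """PBE exchange enhancement factor
    F_x(s) = 1 + kappa - kappa / (1 + mu*s^2/kappa),
    where s is the reduced density gradient."""
    return 1 + kappa - kappa / (1 + mu * s * s / kappa)
```

F_x starts at 1 for a uniform density (s = 0), grows monotonically, and saturates at 1 + κ = 1.804, which enforces the Lieb-Oxford bound.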
Label inspection of approximate cylinder based on adverse cylinder panorama
Lin, Jianping; Liao, Qingmin; He, Bei; Shi, Chenbo
2013-12-01
This paper presents a machine vision system for automated label inspection, with the goal of reducing labor cost and ensuring consistent product quality. Firstly, the images captured by each single camera are distorted, since the inspection object is approximately cylindrical. Therefore, this paper proposes an algorithm based on adverse cylinder projection, where label images are rectified by distortion compensation. Secondly, to overcome the limited field of view of each single camera, our method combines the images of all cameras and builds a panorama for label inspection. Thirdly, considering the shake of production lines and the error of the electronic signal, we design a real-time image registration to calculate offsets between the template and inspected images. Experimental results demonstrate that our system is accurate, real-time, and applicable to numerous real-time inspections of approximate cylinders.
Impulse approximation in solid helium
International Nuclear Information System (INIS)
Glyde, H.R.
1985-01-01
The incoherent dynamic form factor S_i(Q, ω) is evaluated in solid helium for comparison with the impulse approximation (IA). The purpose is to determine the Q values for which the IA is valid for systems such as helium, where the atoms interact via a potential having a steeply repulsive but not infinite hard core. For ³He, S_i(Q, ω) is evaluated from first principles, beginning with the pair potential. The density of states g(ω) is evaluated using the self-consistent phonon theory and S_i(Q, ω) is expressed in terms of g(ω). For solid ⁴He, reasonable models of g(ω) using observed input parameters are used to evaluate S_i(Q, ω). In both cases S_i(Q, ω) is found to approach the impulse approximation S_IA(Q, ω) closely for wave vector transfers Q ≳ 20 Å⁻¹. The difference between S_i and S_IA, which is due to final state interactions of the scattering atom with the remainder of the atoms in the solid, is also predominantly antisymmetric in (ω − ω_R), where ω_R is the recoil frequency. This suggests that the symmetrization procedure proposed by Sears to eliminate final state contributions should work well in solid helium.
Efficient and Robust Signal Approximations
2009-05-01
…gains of MrICA over the non-adaptive wavelet method for these images are: 2.43 bpp, 0.62 bpp, 2.91 bpp, 2.78 bpp, 3.39 bpp, and 2.69 bpp. Figure 3.6 shows six examples of 64 × 64 images encoded at 20 dB. The coding gain values of the adaptive method are in this case 1.5 bpp, 1.48 bpp, 0.33 bpp, 0.23 bpp, 0.45 bpp, and 1.23 bpp. (For both figures, the colormaps are maximally stretched to enhance visibility.) As a general conclusion, MrICA obtains a…
Reliable Function Approximation and Estimation
2016-08-16
…compressed sensing results to a wide class of infinite-dimensional problems. We discuss four key application domains for the methods developed in this project, and four key findings arising from it, as related to uncertainty quantification, image processing, matrix…
International Nuclear Information System (INIS)
Buckley, L; Lambert, C; Nyiri, B; Gerig, L; Webb, R
2016-01-01
Purpose: To standardize the tube calibration for Elekta XVI cone beam CT (CBCT) systems in order to provide a meaningful estimate of the daily imaging dose and reduce the variation between units in a large centre with multiple treatment units. Methods: Initial measurements of the output from the CBCT systems were made using a Farmer chamber and standard CTDI phantom. The correlation between the measured CTDI and the tube current was confirmed using an Unfors Xi detector which was then used to perform a tube current calibration on each unit. Results: Initial measurements showed measured tube current variations of up to 25% between units for scans with the same image settings. In order to reasonably estimate the imaging dose, a systematic approach to x-ray generator calibration was adopted to ensure that the imaging dose was consistent across all units at the centre and was adopted as part of the routine quality assurance program. Subsequent measurements show that the variation in measured dose across nine units is on the order of 5%. Conclusion: Increasingly, patients receiving radiation therapy have extended life expectancies and therefore the cumulative dose from daily imaging should not be ignored. In theory, an estimate of imaging dose can be made from the imaging parameters. However, measurements have shown that there are large differences in the x-ray generator calibration as installed at the clinic. Current protocols recommend routine checks of dose to ensure constancy. The present study suggests that in addition to constancy checks on a single machine, a tube current calibration should be performed on every unit to ensure agreement across multiple machines. This is crucial at a large centre with multiple units in order to provide physicians with a meaningful estimate of the daily imaging dose.
Energy Technology Data Exchange (ETDEWEB)
Buckley, L; Lambert, C; Nyiri, B; Gerig, L [The Ottawa Hospital Cancer Ctr., Ottawa, ON (Canada); Webb, R [Elekta, Montreal, Quebec (Canada)
2016-06-15
Purpose: To standardize the tube calibration for Elekta XVI cone beam CT (CBCT) systems in order to provide a meaningful estimate of the daily imaging dose and reduce the variation between units in a large centre with multiple treatment units. Methods: Initial measurements of the output from the CBCT systems were made using a Farmer chamber and standard CTDI phantom. The correlation between the measured CTDI and the tube current was confirmed using an Unfors Xi detector which was then used to perform a tube current calibration on each unit. Results: Initial measurements showed measured tube current variations of up to 25% between units for scans with the same image settings. In order to reasonably estimate the imaging dose, a systematic approach to x-ray generator calibration was adopted to ensure that the imaging dose was consistent across all units at the centre and was adopted as part of the routine quality assurance program. Subsequent measurements show that the variation in measured dose across nine units is on the order of 5%. Conclusion: Increasingly, patients receiving radiation therapy have extended life expectancies and therefore the cumulative dose from daily imaging should not be ignored. In theory, an estimate of imaging dose can be made from the imaging parameters. However, measurements have shown that there are large differences in the x-ray generator calibration as installed at the clinic. Current protocols recommend routine checks of dose to ensure constancy. The present study suggests that in addition to constancy checks on a single machine, a tube current calibration should be performed on every unit to ensure agreement across multiple machines. This is crucial at a large centre with multiple units in order to provide physicians with a meaningful estimate of the daily imaging dose.
Approximating the minimum cycle mean
Directory of Open Access Journals (Sweden)
Krishnendu Chatterjee
2013-07-01
Full Text Available We consider directed graphs where each edge is labeled with an integer weight and study the fundamental algorithmic question of computing the value of a cycle with minimum mean weight. Our contributions are twofold: (1) First, we show that the problem is reducible in O(n^2) time to a logarithmic number of min-plus matrix multiplications of n-by-n matrices, where n is the number of vertices of the graph. (2) Second, when the weights are nonnegative, we present the first (1 + ε)-approximation algorithm for the problem; the running time of our algorithm is Õ(n^ω log^3(nW/ε) / ε), where O(n^ω) is the time required for the classic n-by-n matrix multiplication and W is the maximum value of the weights.
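The abstract's reduction targets fast min-plus matrix multiplication; as a self-contained baseline for the same quantity (not the paper's method), the classic exact algorithm is Karp's O(nm) dynamic program, sketched here:

```python
import math

def min_cycle_mean(n, edges):
    """Karp's algorithm: exact minimum mean-weight cycle in a directed graph.

    n     -- number of vertices, labelled 0..n-1
    edges -- list of (u, v, w) arcs with integer weights
    Assumes every vertex is reachable from vertex 0 and at least one cycle exists.
    """
    INF = math.inf
    # d[k][v] = minimum weight of a walk with exactly k edges from vertex 0 to v
    d = [[INF] * n for _ in range(n + 1)]
    d[0][0] = 0.0
    for k in range(1, n + 1):
        for u, v, w in edges:
            if d[k - 1][u] + w < d[k][v]:
                d[k][v] = d[k - 1][u] + w
    # Karp's formula: min over v of max over k of (d[n][v] - d[k][v]) / (n - k)
    best = INF
    for v in range(n):
        if d[n][v] < INF:
            best = min(best,
                       max((d[n][v] - d[k][v]) / (n - k)
                           for k in range(n) if d[k][v] < INF))
    return best
```

For a triangle of unit-weight edges the minimum cycle mean is 1; adding a second, heavier cycle does not change the answer.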
Denoising in Wavelet Packet Domain via Approximation Coefficients
Directory of Open Access Journals (Sweden)
Zahra Vahabi
2012-01-01
Full Text Available In this paper we propose a new approach to image denoising in the wavelet domain. In recent research the wavelet transform has been used as a time-frequency transform for computing wavelet coefficients and eliminating noise. Some coefficients are affected by noise less than others, so they can be used together with the other subbands to reconstruct the image. We develop the approximation image to obtain a better denoised estimate: the naturally less noisy subimage yields an image with lower noise. Besides denoising, we obtain a higher compression rate, and increased image contrast is another advantage of this method. Experimental results demonstrate that our approach compares favorably to more typical methods of denoising and compression in the wavelet domain. On 100 images of the LIVE dataset, comparing signal-to-noise ratios (SNR), soft thresholding was 1.12% better than hard thresholding, POAC was 1.94% better than soft thresholding, and POAC with wavelet packets was 1.48% better than POAC.
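The soft and hard thresholding rules compared in the abstract are standard coefficient shrinkage operators; a minimal NumPy sketch of the two rules (generic, not the paper's POAC method) is:

```python
import numpy as np

def soft_threshold(c, t):
    """Soft thresholding: shrink toward zero, sign(c) * max(|c| - t, 0)."""
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

def hard_threshold(c, t):
    """Hard thresholding: zero out coefficients below t, keep the rest intact."""
    return np.where(np.abs(c) >= t, c, 0.0)
```

Applied to detail coefficients [3, -2, 0.5] with t = 1, soft thresholding yields [2, -1, 0] while hard thresholding yields [3, -2, 0]: soft shrinks the survivors, hard preserves them.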
Nonlinear approximation with dictionaries I. Direct estimates
DEFF Research Database (Denmark)
Gribonval, Rémi; Nielsen, Morten
2004-01-01
We study various approximation classes associated with m-term approximation by elements from a (possibly) redundant dictionary in a Banach space. The standard approximation class associated with the best m-term approximation is compared to new classes defined by considering m-term approximation w...
Approximate cohomology in Banach algebras | Pourabbas ...
African Journals Online (AJOL)
We introduce the notions of approximate cohomology and approximate homotopy in Banach algebras and we study the relation between them. We show that the approximate homotopically equivalent cochain complexes give the same approximate cohomologies. As a special case, approximate Hochschild cohomology is ...
Approximate models of job shops
Diamantidis, Alexandros
1999-01-01
Scheduling can be described as “the allocation of scarce resources over time to perform a collection of tasks”. Scheduling problems arise in many practical applications in manufacturing, marketing, service industries and within the operating systems of computers, and are frequently encountered in various activities of everyday life. They exist whenever there is a choice of the order in which a number of tasks can be performed. Some examples are scheduling of classes in academic inst...
Fingerprint Image Enhancement Based on Second Directional Derivative of the Digital Image
Directory of Open Access Journals (Sweden)
Onnia Vesa
2002-01-01
Full Text Available This paper presents a novel approach of fingerprint image enhancement that relies on detecting the fingerprint ridges as image regions where the second directional derivative of the digital image is positive. A facet model is used in order to approximate the derivatives at each image pixel based on the intensity values of pixels located in a certain neighborhood. We note that the size of this neighborhood has a critical role in achieving accurate enhancement results. Using neighborhoods of various sizes, the proposed algorithm determines several candidate binary representations of the input fingerprint pattern. Subsequently, an output binary ridge-map image is created by selecting image zones, from the available binary image candidates, according to a MAP selection rule. Two public domain collections of fingerprint images are used in order to objectively assess the performance of the proposed fingerprint image enhancement approach.
Recognition of computerized facial approximations by familiar assessors.
Richard, Adam H; Monson, Keith L
2017-11-01
Studies testing the effectiveness of facial approximations typically involve groups of participants who are unfamiliar with the approximated individual(s). This limitation requires the use of photograph arrays including a picture of the subject for comparison to the facial approximation. While this practice is often necessary due to the difficulty in obtaining a group of assessors who are familiar with the approximated subject, it may not accurately simulate the thought process of the target audience (friends and family members) in comparing a mental image of the approximated subject to the facial approximation. As part of a larger process to evaluate the effectiveness and best implementation of the ReFace facial approximation software program, the rare opportunity arose to conduct a recognition study using assessors who were personally acquainted with the subjects of the approximations. ReFace facial approximations were generated based on preexisting medical scans, and co-workers of the scan donors were tested on whether they could accurately pick out the approximation of their colleague from arrays of facial approximations. Results from the study demonstrated an overall poor recognition performance (i.e., where a single choice within a pool is not enforced) for individuals who were familiar with the approximated subjects. Out of 220 recognition tests only 10.5% resulted in the assessor selecting the correct approximation (or correctly choosing not to make a selection when the array consisted only of foils), an outcome that was not significantly different from the 9% random chance rate. When allowed to select multiple approximations the assessors felt resembled the target individual, the overall sensitivity for ReFace approximations was 16.0% and the overall specificity was 81.8%. These results differ markedly from the results of a previous study using assessors who were unfamiliar with the approximated subjects. Some possible explanations for this disparity in
Approximal morphology as predictor of approximal caries in primary molar teeth
DEFF Research Database (Denmark)
Cortes, A; Martignon, S; Qvist, V
2018-01-01
consent was given, participated. Upper and lower molar teeth of one randomly selected side received a 2-day temporary separation. Bitewing radiographs and silicone impressions of the interproximal area (IPA) were obtained. The procedures were repeated at one year in 52 children (84%). The morphology of the distal...... surfaces of the first molar teeth and the mesial surfaces of the second molar teeth (n=208) was scored from the occlusal aspect on images from the baseline resin models, resulting in four IPA variants: concave-concave; concave-convex; convex-concave; and convex-convex. Approximal caries on the surface......
Magnus approximation in neutrino oscillations
International Nuclear Information System (INIS)
Acero, Mario A; Aguilar-Arevalo, Alexis A; D'Olivo, J C
2011-01-01
Oscillations between active and sterile neutrinos remain as an open possibility to explain some anomalous experimental observations. In a four-neutrino (three active plus one sterile) mixing scheme, we use the Magnus expansion of the evolution operator to study the evolution of neutrino flavor amplitudes within the Earth. We apply this formalism to calculate the transition probabilities from active to sterile neutrinos with energies of the order of a few GeV, taking into account the matter effect for a varying terrestrial density.
Approximate models for broken clouds in stochastic radiative transfer theory
International Nuclear Information System (INIS)
Doicu, Adrian; Efremenko, Dmitry S.; Loyola, Diego; Trautmann, Thomas
2014-01-01
This paper presents approximate models in stochastic radiative transfer theory. The independent column approximation and its modified version with a solar source computed in a full three-dimensional atmosphere are formulated in a stochastic framework and for arbitrary cloud statistics. The nth-order stochastic models describing the independent column approximations are equivalent to the nth-order stochastic models for the original radiance fields in which the gradient vectors are neglected. Fast approximate models are further derived on the basis of zeroth-order stochastic models and the independent column approximation. The so-called “internal mixing” models assume a combination of the optical properties of the cloud and the clear sky, while the “external mixing” models assume a combination of the radiances corresponding to completely overcast and clear skies. A consistent treatment of internal and external mixing models is provided, and a new parameterization of the closure coefficient in the effective thickness approximation is given. An efficient computation of the closure coefficient for internal mixing models, using a previously derived vector stochastic model as a reference, is also presented. Equipped with appropriate look-up tables for the closure coefficient, these models can easily be integrated into operational trace gas retrieval systems that exploit absorption features in the near-IR solar spectrum. - Highlights: • Independent column approximation in a stochastic setting. • Fast internal and external mixing models for total and diffuse radiances. • Efficient optimization of internal mixing models to match reference models
International Nuclear Information System (INIS)
Kellum, C.D.; Fisher, L.M.; Tegtmeyer, C.J.
1987-01-01
This paper examines the advantages of the use of excretory urography for diagnosis. According to the authors, excretory urography remains the basic radiologic examination of the urinary tract and is the foundation for the evaluation of suspected urologic disease. Despite the development of newer diagnostic modalities such as isotope scanning, ultrasonography, CT, and magnetic resonance imaging (MRI), excretory urography has maintained a prominent role in uroradiology. Some indications have been altered and will continue to change with the newer imaging modalities, but the initial evaluation of suspected urinary tract structural abnormalities, hematuria, pyuria, and calculus disease is best performed with excretory urography. The examination is relatively inexpensive and simple to perform, with few contraindications. Excretory urography, when properly performed, can provide valuable information about the renal parenchyma, pelvicalyceal system, ureters, and urinary bladder
Photoelectron spectroscopy and the dipole approximation
Energy Technology Data Exchange (ETDEWEB)
Hemmers, O.; Hansen, D.L.; Wang, H. [Univ. of Nevada, Las Vegas, NV (United States)] [and others]
1997-04-01
Photoelectron spectroscopy is a powerful technique because it directly probes, via the measurement of photoelectron kinetic energies, orbital and band structure in valence and core levels in a wide variety of samples. The technique becomes even more powerful when it is performed in an angle-resolved mode, where photoelectrons are distinguished not only by their kinetic energy, but by their direction of emission as well. Determining the probability of electron ejection as a function of angle probes the different quantum-mechanical channels available to a photoemission process, because it is sensitive to phase differences among the channels. As a result, angle-resolved photoemission has been used successfully for many years to provide stringent tests of the understanding of basic physical processes underlying gas-phase and solid-state interactions with radiation. One mainstay in the application of angle-resolved photoelectron spectroscopy is the well-known electric-dipole approximation for photon interactions. In this simplification, all higher-order terms, such as those due to electric-quadrupole and magnetic-dipole interactions, are neglected. As the photon energy increases, however, effects beyond the dipole approximation become important. To best determine the range of validity of the dipole approximation, photoemission measurements on a simple atomic system, neon, where extra-atomic effects cannot play a role, were performed at BL 8.0. The measurements show that deviations from "dipole" expectations in angle-resolved valence photoemission are observable for photon energies down to at least 0.25 keV, and are quite significant at energies around 1 keV. From these results, it is clear that non-dipole angular-distribution effects may need to be considered in any application of angle-resolved photoelectron spectroscopy that uses x-ray photons of energies as low as a few hundred eV.
APPROXIMATING INNOVATION POTENTIAL WITH NEUROFUZZY ROBUST MODEL
Directory of Open Access Journals (Sweden)
Kasa, Richard
2015-01-01
Full Text Available In a remarkably short time, economic globalisation has changed the world’s economic order, bringing new challenges and opportunities to SMEs. These processes pushed the need to measure innovation capability, which has become a crucial issue for today’s economic and political decision makers. Companies cannot compete in this new environment unless they become more innovative and respond more effectively to consumers’ needs and preferences, as mentioned in the EU’s innovation strategy. Decision makers cannot make accurate and efficient decisions without knowing the capability for innovation of companies in a sector or a region. This need is forcing economists to develop an integrated, unified and complete method of measuring, approximating and even forecasting innovation performance not only on a macro but also on a micro level. In this article a critical analysis of the literature on innovation potential approximation and prediction is given, showing its weaknesses and a possible alternative that eliminates the limitations and disadvantages of classical measuring and predictive methods.
TMB: Automatic Differentiation and Laplace Approximation
Directory of Open Access Journals (Sweden)
Kasper Kristensen
2016-04-01
Full Text Available TMB is an open source R package that enables quick implementation of complex nonlinear random effects (latent variable) models in a manner similar to the established AD Model Builder package (ADMB, http://admb-project.org/; Fournier et al. 2011). In addition, it offers easy access to parallel computations. The user defines the joint likelihood for the data and the random effects as a C++ template function, while all the other operations are done in R; e.g., reading in the data. The package evaluates and maximizes the Laplace approximation of the marginal likelihood where the random effects are automatically integrated out. This approximation, and its derivatives, are obtained using automatic differentiation (up to order three) of the joint likelihood. The computations are designed to be fast for problems with many random effects (≈ 10^6) and parameters (≈ 10^3). Computation times using ADMB and TMB are compared on a suite of examples ranging from simple models to large spatial models where the random effects are a Gaussian random field. Speedups ranging from 1.5 to about 100 are obtained with increasing gains for large problems. The package and examples are available at http://tmb-project.org/.
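The Laplace approximation TMB maximizes replaces the integrand with a Gaussian centred at its mode; a one-dimensional sketch of the idea (illustrative only, not TMB's implementation) is:

```python
import math

def laplace_integral(g, dg, d2g, u0=0.0, iters=50):
    """Laplace approximation to the integral of exp(-g(u)) over the real line.

    Finds the mode û of exp(-g) by Newton's method using the first and second
    derivatives dg, d2g, then replaces g by its second-order Taylor expansion
    at û, giving the Gaussian integral:
        ∫ exp(-g(u)) du ≈ exp(-g(û)) * sqrt(2π / g''(û))
    """
    u = u0
    for _ in range(iters):
        u -= dg(u) / d2g(u)          # Newton step toward the mode
    return math.exp(-g(u)) * math.sqrt(2.0 * math.pi / d2g(u))
```

For a quadratic g (a Gaussian integrand) the approximation is exact, which is why it works well for models whose random-effects posterior is nearly Gaussian.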
Detecting Change-Point via Saddlepoint Approximations
Institute of Scientific and Technical Information of China (English)
Zhaoyuan LI; Maozai TIAN
2017-01-01
It is well known that the change-point problem is an important part of statistical model analysis. Most of the existing methods are not robust to the criteria used to evaluate change-point problems. In this article, we consider the "mean-shift" problem in change-point studies. A quantile test for a single quantile is proposed based on the saddlepoint approximation method. In order to utilize the information at different quantiles of the sequence, we further construct a "composite quantile test" to calculate the probability of each location in the sequence being a change-point. The location of a change-point can thus be pinpointed rather than estimated within an interval. The proposed tests make no assumptions about the functional form of the sequence distribution and work sensitively on both large and small samples, on change-points in the tails, and in multiple change-point situations. The good performance of the tests is confirmed by simulations and real data analysis. The saddlepoint-approximation-based distribution of the test statistic developed in the paper may be of independent interest to readers in this research area.
Traveling cluster approximation for uncorrelated amorphous systems
International Nuclear Information System (INIS)
Kaplan, T.; Sen, A.K.; Gray, L.J.; Mills, R.
1985-01-01
In this paper, the authors apply the TCA concepts to spatially disordered, uncorrelated systems (e.g., fluids or amorphous metals without short-range order). This is the first approximation scheme for amorphous systems that takes cluster effects into account while preserving the Herglotz property for any amount of disorder. They have performed some computer calculations for the pair TCA, for the model case of delta-function potentials on a one-dimensional random chain. These results are compared with exact calculations (which, in principle, take into account all cluster effects) and with the CPA, which is the single-site TCA. The density of states for the pair TCA clearly shows some improvement over the CPA, and yet, apparently, the pair approximation distorts some of the features of the exact results. They conclude that the effects of large clusters are much more important in an uncorrelated liquid metal than in a substitutional alloy. As a result, the pair TCA, which does quite a nice job for alloys, is not adequate for the liquid. Larger clusters must be treated exactly, and therefore an n-TCA with n > 2 must be used
Matching rendered and real world images by digital image processing
Mitjà, Carles; Bover, Toni; Bigas, Miquel; Escofet, Jaume
2010-05-01
Recent advances in computer-generated imagery (CGI) have been used in commercial and industrial photography, providing a broad scope in product advertising. Mixing real-world images with those rendered from virtual-space software shows a more or less visible mismatch between the corresponding image quality. Rendered images are produced by software whose quality is limited only by the output resolution. Real-world images are taken with cameras subject to image degradation factors such as residual lens aberrations, diffraction, sensor low-pass anti-aliasing filters, color pattern demosaicing, etc. The effect of all these degradation factors can be characterized by the system Point Spread Function (PSF). Because the image is the convolution of the object with the system PSF, its characterization shows the amount of degradation added to any picture taken. This work explores the use of image processing to degrade the rendered images following the parameters indicated by the real system PSF, attempting to match the virtual and real-world image qualities. The system MTF is determined by the slanted-edge method both in laboratory conditions and in the real picture environment in order to compare the influence of working conditions on device performance; an approximation to the system PSF is derived from the two measurements. The rendered images are filtered through a Gaussian filter obtained from the taking system's PSF. Results with and without filtering are shown and compared by measuring the contrast achieved in different regions of the final image.
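The final filtering step, blurring the rendered image with a Gaussian whose width matches the measured PSF, can be sketched with a separable convolution; the sigma value here is a hypothetical stand-in for one derived from a real MTF measurement:

```python
import numpy as np

def gaussian_kernel(sigma, radius=None):
    """1-D Gaussian kernel, normalised to sum to 1."""
    if radius is None:
        radius = int(3 * sigma + 0.5)   # truncate at ~3 sigma
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    return k / k.sum()

def blur_to_match_psf(image, sigma):
    """Degrade a rendered image with a separable Gaussian blur whose width
    approximates the measured system PSF (a simplification: real PSFs are
    generally not exactly Gaussian)."""
    k = gaussian_kernel(sigma)
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode='same'), 1,
                              np.asarray(image, float))
    out = np.apply_along_axis(lambda c: np.convolve(c, k, mode='same'), 0, out)
    return out
```

Because the kernel is normalised, flat regions keep their level while edges are softened, mimicking the contrast loss of the real capture chain.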
Approximated solutions to the Schroedinger equation
International Nuclear Information System (INIS)
Rico, J.F.; Fernandez-Alonso, J.I.
1977-01-01
The authors are currently working on a couple of the well-known deficiencies of the variation method and present here some of the results that have been obtained so far. The variation method does not give information a priori on the trial functions best suited for a particular problem nor does it give information a posteriori on the degree of precision attained. In order to clarify the origin of both difficulties, a geometric interpretation of the variation method is presented. This geometric interpretation is the starting point for the exact formal solution to the fundamental state and for the step-by-step approximations to the exact solution which are also given. Some comments on these results are included. (Auth.)
Analytical approximations for wide and narrow resonances
International Nuclear Information System (INIS)
Suster, Luis Carlos; Martinez, Aquilino Senra; Silva, Fernando Carvalho da
2005-01-01
This paper aims at developing analytical expressions for the adjoint neutron spectrum in the resonance energy region, taking into account both narrow and wide resonance approximations, in order to reduce the numerical computations involved. These analytical expressions, besides reducing computing time, are very simple from a mathematical point of view. The results obtained with this analytical formulation were compared to a reference solution obtained with a numerical method previously developed to solve the neutron balance adjoint equations. Narrow and wide resonances of U-238 were treated and the analytical procedure gave satisfactory results as compared with the reference solution, for the resonance energy range. The adjoint neutron spectrum is useful to determine the neutron resonance absorption, so that multigroup adjoint cross sections used by the adjoint diffusion equation can be obtained. (author)
Analytical approximations for wide and narrow resonances
Energy Technology Data Exchange (ETDEWEB)
Suster, Luis Carlos; Martinez, Aquilino Senra; Silva, Fernando Carvalho da [Universidade Federal, Rio de Janeiro, RJ (Brazil). Coordenacao dos Programas de Pos-graduacao de Engenharia. Programa de Engenharia Nuclear]. E-mail: aquilino@lmp.ufrj.br
2005-07-01
This paper aims at developing analytical expressions for the adjoint neutron spectrum in the resonance energy region, taking into account both narrow and wide resonance approximations, in order to reduce the numerical computations involved. These analytical expressions, besides reducing computing time, are very simple from a mathematical point of view. The results obtained with this analytical formulation were compared to a reference solution obtained with a numerical method previously developed to solve the neutron balance adjoint equations. Narrow and wide resonances of U-238 were treated and the analytical procedure gave satisfactory results as compared with the reference solution, for the resonance energy range. The adjoint neutron spectrum is useful to determine the neutron resonance absorption, so that multigroup adjoint cross sections used by the adjoint diffusion equation can be obtained. (author)
Polarized constituent quarks in NLO approximation
International Nuclear Information System (INIS)
Khorramian, Ali N.; Tehrani, S. Atashbar; Mirjalili, A.
2006-01-01
The valon representation provides a basis between hadrons and quarks, in terms of which the bound-state and scattering properties of hadrons can be united and described. We studied polarized valon distributions, which have an important role in describing the spin dependence of parton distributions in leading and next-to-leading order approximation. The convolution integral in the framework of the valon model was used as a useful tool in the polarized case. To obtain the polarized parton distributions in a proton we need the polarized valon distribution in the proton and the polarized parton distributions inside the valon. We employed Bernstein polynomial averages to obtain the unknown parameters of the polarized valon distributions by fitting to the available experimental data
Radiographic display of carious lesions and cavitation in approximal surfaces
DEFF Research Database (Denmark)
Wenzel, Ann
2014-01-01
cavitation in approximal surfaces. Nonetheless, there are several drawbacks with CBCT, such as radiation dose, costs and imaging artefacts. Therefore, CBCT cannot at present be advocated as a primary radiographic examination with the aim of diagnosing cavitated carious lesions. Conclusions. Bitewing......
Properties of Brownian Image Models in Scale-Space
DEFF Research Database (Denmark)
Pedersen, Kim Steenstrup
2003-01-01
Brownian images) will be discussed in relation to linear scale-space theory, and it will be shown empirically that the second order statistics of natural images mapped into jet space may, within some scale interval, be modeled by the Brownian image model. This is consistent with the 1/f^2 power spectrum...... law that apparently governs natural images. Furthermore, the distribution of Brownian images mapped into jet space is Gaussian and an analytical expression can be derived for the covariance matrix of Brownian images in jet space. This matrix is also a good approximation of the covariance matrix......In this paper it is argued that the Brownian image model is the least committed, scale invariant, statistical image model which describes the second order statistics of natural images. Various properties of three different types of Gaussian image models (white noise, Brownian and fractional...
Common approximations for density operators may lead to imaginary entropy
International Nuclear Information System (INIS)
Lendi, K.; Amaral Junior, M.R. do
1983-01-01
The meaning and validity of usual second order approximations for density operators are illustrated with the help of a simple exactly soluble two-level model in which all relevant quantities can easily be controlled. This leads to exact upper bound error estimates which help to select more precisely permissible correlation times as frequently introduced if stochastic potentials are present. A final consideration of information entropy reveals clearly the limitations of this kind of approximation procedures. (Author) [pt
Fernandez, R.; Deveaux, V.
2010-01-01
We provide a formal definition and study the basic properties of partially ordered chains (POC). These systems were proposed to model textures in image processing and to represent independence relations between random variables in statistics (in the latter case they are known as Bayesian networks).
Approximate solution fuzzy pantograph equation by using homotopy perturbation method
Jameel, A. F.; Saaban, A.; Ahadkulov, H.; Alipiah, F. M.
2017-09-01
In this paper, the Homotopy Perturbation Method (HPM) is modified and formulated to find approximate solutions of fuzzy delay differential equations (FDDEs) involving a fuzzy pantograph equation. The solution obtained by HPM is in the form of an infinite series that converges to the actual solution of the FDDE, which is one of the benefits of this method. In addition, it can be used for solving high order fuzzy delay differential equations directly, without reduction to a first order system. Moreover, the accuracy of HPM can be assessed without needing the exact solution. The HPM is studied for fuzzy initial value problems involving the pantograph equation. Using the properties of fuzzy set theory, we reformulate the standard approximate method of HPM and obtain the approximate solutions. The effectiveness of the proposed method is demonstrated for a third order fuzzy pantograph equation.
Grimm, Uwe
2017-01-01
Quasicrystals are non-periodic solids that were discovered in 1982 by Dan Shechtman, Nobel Prize Laureate in Chemistry 2011. The mathematics that underlies this discovery or that proceeded from it, known as the theory of Aperiodic Order, is the subject of this comprehensive multi-volume series. This second volume begins to develop the theory in more depth. A collection of leading experts, among them Robert V. Moody, cover various aspects of crystallography, generalising appropriately from the classical case to the setting of aperiodically ordered structures. A strong focus is placed upon almost periodicity, a central concept of crystallography that captures the coherent repetition of local motifs or patterns, and its close links to Fourier analysis. The book opens with a foreword by Jeffrey C. Lagarias on the wider mathematical perspective and closes with an epilogue on the emergence of quasicrystals, written by Peter Kramer, one of the founders of the field.
Repfinder: Finding approximately repeated scene elements for image editing
Cheng, Ming-Ming; Zhang, Fanglue; Mitra, Niloy J.; Huang, Xiaolei; Hu, Shimin
2010-01-01
variation, etc. Manually enforcing such relations is laborious and error-prone. We propose a novel framework where user scribbles are used to guide detection and extraction of such repeated elements. Our detection process, which is based on a novel boundary
Reconstruction Algorithms in Undersampled AFM Imaging
DEFF Research Database (Denmark)
Arildsen, Thomas; Oxvig, Christian Schou; Pedersen, Patrick Steffen
2016-01-01
This paper provides a study of spatial undersampling in atomic force microscopy (AFM) imaging followed by different image reconstruction techniques based on sparse approximation as well as interpolation. The main reasons for using undersampling are that it reduces the path length and thereby...... the scanning time as well as the amount of interaction between the AFM probe and the specimen. It can easily be applied on conventional AFM hardware. Due to undersampling, it is then necessary to further process the acquired image in order to reconstruct an approximation of the image. Based on real AFM cell...... images, our simulations reveal that using a simple raster scanning pattern in combination with conventional image interpolation performs very well. Moreover, this combination enables a reduction by a factor 10 of the scanning time while retaining an average reconstruction quality around 36 dB PSNR...
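The reconstruction quality figure quoted above is measured in PSNR; for reference, the standard definition can be computed in a few lines (a generic metric, independent of the paper's reconstruction methods):

```python
import numpy as np

def psnr(reference, reconstruction, peak=255.0):
    """Peak signal-to-noise ratio in dB between a reference image and a
    reconstruction: 10 * log10(peak^2 / MSE)."""
    ref = np.asarray(reference, float)
    rec = np.asarray(reconstruction, float)
    mse = np.mean((ref - rec) ** 2)
    if mse == 0:
        return float('inf')    # identical images
    return 10.0 * np.log10(peak**2 / mse)
```

A maximally wrong 8-bit image scores 0 dB, identical images score infinity, and values in the mid-30s (as reported here) indicate visually close reconstructions.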
Reducing Approximation Error in the Fourier Flexible Functional Form
Directory of Open Access Journals (Sweden)
Tristan D. Skolrud
2017-12-01
Full Text Available The Fourier Flexible form provides a global approximation to an unknown data generating process. In terms of limiting function specification error, this form is preferable to functional forms based on second-order Taylor series expansions. The Fourier Flexible form is a truncated Fourier series expansion appended to a second-order expansion in logarithms. By replacing the logarithmic expansion with a Box-Cox transformation, we show that the Fourier Flexible form can reduce approximation error by 25% on average in the tails of the data distribution. The new functional form allows for nested testing of a larger set of commonly implemented functional forms.
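The substitution works because the Box-Cox transformation nests the logarithm as a limiting case; a minimal sketch of the transform (the standard definition, not the paper's full estimator) is:

```python
import math

def box_cox(y, lam):
    """Box-Cox transform of y > 0:
        (y**lam - 1) / lam   for lam != 0
        log(y)               for lam == 0
    As lam -> 0 the power branch tends to log(y), so the family nests the
    logarithmic expansion used by the standard Fourier Flexible form."""
    if lam == 0:
        return math.log(y)
    return (y**lam - 1.0) / lam
```

Estimating lam alongside the other parameters is what allows the nested testing of functional forms mentioned in the abstract: lam = 0 recovers the logarithmic specification, lam = 1 a linear one.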
Z-contrast imaging of ordered structures in Pb(Mg1/3Nb2/3)O3 and Ba(Mg1/3Nb2/3)O3
International Nuclear Information System (INIS)
Yan, Y.; Pennycook, S.J.; Xu, Z.; Viehland, D.
1998-02-01
Lead-based cubic perovskites such as Pb(B2+1/3B5+2/3)O3 (B2+ = Mg, Co, Ni, Zn; B5+ = Nb, Ta) are relaxor ferroelectrics. Localized order and disorder often occur in materials of this type. In the Pb(Mg1/3Nb2/3)O3 (PMN) family, previous studies have proposed two models: the space-charge and charge-balance models. In the first model, the ordered regions carry a net negative charge [Pb(Mg1/2Nb1/2)O3], while in the second model they do not carry a net charge [Pb((Mg2/3Nb1/3)1/2Nb1/2)O3]. However, no direct evidence for either model has appeared in the literature yet. In this paper the authors report the first direct observations of local ordering in undoped and La-doped Pb(Mg1/3Nb2/3)O3, using high-resolution Z-contrast imaging. Because the ordered structure in Ba(Mg1/3Nb2/3)O3 is well known, the Z-contrast image from an ordered domain is used as a reference for this study
Trajectory averaging for stochastic approximation MCMC algorithms
Liang, Faming
2010-01-01
to the stochastic approximation Monte Carlo algorithm [Liang, Liu and Carroll J. Amer. Statist. Assoc. 102 (2007) 305-320]. The application of the trajectory averaging estimator to other stochastic approximation MCMC algorithms, for example, a stochastic
Reduction of Linear Programming to Linear Approximation
Vaserstein, Leonid N.
2006-01-01
It is well known that every Chebyshev linear approximation problem can be reduced to a linear program. In this paper we show that conversely every linear program can be reduced to a Chebyshev linear approximation problem.
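The classical direction of this equivalence, reducing a Chebyshev (minimax) approximation problem to a linear program, can be sketched concretely. The sketch below is our own illustration using SciPy, not code from the paper: to minimize max_i |a_i . x - b_i| one introduces an extra variable t, minimizes t, and constrains every residual to lie in [-t, t]:

```python
import numpy as np
from scipy.optimize import linprog

# Chebyshev (minimax) fit of a line to noisy samples, posed as an LP:
#   minimize t  subject to  -t <= a_i . x - b_i <= t  for every sample i.
rng = np.random.default_rng(0)
xs = np.linspace(0.0, 1.0, 20)
b = 2.0 * xs + 1.0 + 0.1 * rng.standard_normal(xs.size)
A = np.column_stack([xs, np.ones_like(xs)])   # model: slope * x + intercept

n = A.shape[1]
c = np.zeros(n + 1)          # decision variables: [x_1, ..., x_n, t]
c[-1] = 1.0                  # objective picks out t
ones = np.ones((A.shape[0], 1))
A_ub = np.block([[A, -ones],     #  a_i . x - t <= b_i
                 [-A, -ones]])   # -a_i . x - t <= -b_i
b_ub = np.concatenate([b, -b])
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * (n + 1))  # coefficients unconstrained
coeffs, t = res.x[:n], res.x[-1]
# At the optimum, t equals the minimax residual max_i |a_i . x - b_i|.
assert res.success
assert np.isclose(t, np.max(np.abs(A @ coeffs - b)), atol=1e-7)
```

The paper's contribution is the converse: every linear program can itself be recast as a Chebyshev approximation problem of this kind.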
Ito, Kazufumi; Teglas, Russell
1987-01-01
The numerical scheme based on the Legendre-tau approximation is proposed to approximate the feedback solution to the linear quadratic optimal control problem for hereditary differential systems. The convergence property is established using Trotter ideas. The method yields very good approximations at low orders and provides an approximation technique for computing closed-loop eigenvalues of the feedback system. A comparison with existing methods (based on averaging and spline approximations) is made.
Some relations between entropy and approximation numbers
Institute of Scientific and Technical Information of China (English)
郑志明
1999-01-01
A general result is obtained which relates the entropy numbers of compact maps on Hilbert space to their approximation numbers. Compared with previous works in this area, it is particularly convenient for dealing with the cases where the approximation numbers decay rapidly. A sharp estimate relating entropy and approximation numbers for noncompact maps is also given.
Axiomatic Characterizations of IVF Rough Approximation Operators
Directory of Open Access Journals (Sweden)
Guangji Yu
2014-01-01
Full Text Available This paper is devoted to the study of axiomatic characterizations of IVF rough approximation operators. IVF approximation spaces are investigated. It is proved that different IVF operators satisfying certain axioms guarantee the existence of different types of IVF relations producing the same operators; IVF rough approximation operators are thereby characterized by axioms.
An approximation for kanban controlled assembly systems
Topan, E.; Avsar, Z.M.
2011-01-01
An approximation is proposed to evaluate the steady-state performance of kanban controlled two-stage assembly systems. The development of the approximation is as follows. The considered continuous-time Markov chain is aggregated keeping the model exact, and this aggregate model is approximated
Operator approximant problems arising from quantum theory
Maher, Philip J
2017-01-01
This book offers an account of a number of aspects of operator theory, mainly developed since the 1980s, whose problems have their roots in quantum theory. The research presented is in non-commutative operator approximation theory or, to use Halmos' terminology, in operator approximants. Focusing on the concept of approximants, this self-contained book is suitable for graduate courses.
Analysis of corrections to the eikonal approximation
Hebborn, C.; Capel, P.
2017-11-01
Various corrections to the eikonal approximation are studied for two- and three-body nuclear collisions with the goal of extending the range of validity of this approximation to beam energies of 10 MeV/nucleon. Wallace's correction does not significantly improve the elastic-scattering cross sections obtained with the usual eikonal approximation. On the contrary, a semiclassical approximation that substitutes the impact parameter with a complex distance of closest approach computed with the projectile-target optical potential efficiently corrects the eikonal approximation. This opens the possibility of analyzing data measured down to 10 MeV/nucleon within eikonal-like reaction models.
Mapping moveout approximations in TI media
Stovas, Alexey; Alkhalifah, Tariq Ali
2013-01-01
Moveout approximations play a very important role in seismic modeling, inversion, and scanning for parameters in complex media. We developed a scheme to map one-way moveout approximations for transversely isotropic media with a vertical axis of symmetry (VTI), which is widely available, to the tilted case (TTI) by introducing the effective tilt angle. As a result, we obtained highly accurate TTI moveout equations analogous with their VTI counterparts. Our analysis showed that the most accurate approximation is obtained from the mapping of generalized approximation. The new moveout approximations allow for, as the examples demonstrate, accurate description of moveout in the TTI case even for vertical heterogeneity. The proposed moveout approximations can be easily used for inversion in a layered TTI medium because the parameters of these approximations explicitly depend on corresponding effective parameters in a layered VTI medium.
Analytical approximation of neutron physics data
International Nuclear Information System (INIS)
Badikov, S.A.; Vinogradov, V.A.; Gaj, E.V.; Rabotnov, N.S.
1984-01-01
A method for the analytical approximation of experimental neutron-physics data by rational functions, based on the Padé approximation, is suggested. It is shown that the specific properties of the Padé approximation near poles are an extremely favourable analytical property, essentially extending the convergence range and increasing its rate as compared with polynomial approximation. The Padé approximation is a particularly natural instrument for resonance-curve processing, as the resonances correspond to the complex poles of the approximant. But even in the general case, analytical representation of the data in this form is convenient and compact. Thus, representation of the data on neutron threshold reaction cross sections (BOSPOR constant library) in the form of rational functions led to an approximately twentyfold reduction of the stored numerical information as compared with point-by-point tabulation at the same accuracy
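The advantage of a rational (Padé) approximant over a same-order polynomial can be seen in a minimal sketch. This is our own illustration using SciPy's `pade` helper with exp(x) as a stand-in function, not the BOSPOR data or the authors' fitting procedure:

```python
import math
import numpy as np
from scipy.interpolate import pade

# Taylor coefficients of exp(x) up to order 4: 1, 1, 1/2, 1/6, 1/24.
an = [1.0, 1.0, 1.0 / 2, 1.0 / 6, 1.0 / 24]

# [2/2] Pade approximant built from the same five coefficients;
# pade returns numerator and denominator as np.poly1d objects.
p, q = pade(an, 2)

x = 1.0
pade_val = p(x) / q(x)                    # rational approximation at x = 1
taylor_val = np.polyval(an[::-1], x)      # 4th-order Taylor polynomial at x = 1

# Using identical input information, the rational form is closer to e.
assert abs(pade_val - math.e) < abs(taylor_val - math.e)
```

This mirrors the mechanism the abstract describes: poles of the approximant can track resonance-like structure that polynomials of the same order cannot represent, so fewer stored coefficients achieve the same accuracy.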
A unified approach to the Darwin approximation
International Nuclear Information System (INIS)
Krause, Todd B.; Apte, A.; Morrison, P. J.
2007-01-01
There are two basic approaches to the Darwin approximation. The first involves solving the Maxwell equations in Coulomb gauge and then approximating the vector potential to remove retardation effects. The second approach approximates the Coulomb gauge equations themselves, then solves these exactly for the vector potential. There is no a priori reason that these should result in the same approximation. Here, the equivalence of these two approaches is investigated and a unified framework is provided in which to view the Darwin approximation. Darwin's original treatment is variational in nature, but subsequent applications of his ideas in the context of Vlasov's theory are not. We present here action principles for the Darwin approximation in the Vlasov context, and this serves as a consistency check on the use of the approximation in this setting
An Approximate Approach to Automatic Kernel Selection.
Ding, Lizhong; Liao, Shizhong
2016-02-02
Kernel selection is a fundamental problem of kernel-based learning algorithms. In this paper, we propose an approximate approach to automatic kernel selection for regression from the perspective of kernel matrix approximation. We first introduce multilevel circulant matrices into automatic kernel selection, and develop two approximate kernel selection algorithms by exploiting the computational virtues of multilevel circulant matrices. The complexity of the proposed algorithms is quasi-linear in the number of data points. Then, we prove an approximation error bound to measure the effect of the approximation in kernel matrices by multilevel circulant matrices on the hypothesis and further show that the approximate hypothesis produced with multilevel circulant matrices converges to the accurate hypothesis produced with kernel matrices. Experimental evaluations on benchmark datasets demonstrate the effectiveness of approximate kernel selection.
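The quasi-linear complexity claimed for the multilevel circulant approach rests on a standard fact: a circulant matrix is diagonalized by the discrete Fourier transform, so matrix-vector products cost O(n log n) via the FFT instead of O(n^2). A minimal single-level sketch of that fact (our illustration, not the paper's kernel-selection algorithm):

```python
import numpy as np
from scipy.linalg import circulant

# A circulant matrix is fully determined by its first column c, and
# C @ v equals the circular convolution of c with v, computable by FFT.
rng = np.random.default_rng(1)
c = rng.standard_normal(256)   # first column defining the circulant matrix
v = rng.standard_normal(256)

dense = circulant(c) @ v                                     # O(n^2) reference
fast = np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(v)))   # O(n log n)
assert np.allclose(dense, fast)
```

Approximating a kernel matrix by a (multilevel) circulant matrix therefore trades a small approximation error, which the paper bounds, for products and solves that scale quasi-linearly in the number of data points.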