WorldWideScience

Sample records for standard backprojection algorithm

  1. An implementation of a fast backprojection image formation algorithm for spotlight-mode SAR

    Science.gov (United States)

    Wahl, Daniel E.; Yocky, David A.; Jakowatz, Charles V., Jr.

    2008-04-01

    In this paper we describe an algorithm for fast spotlight-mode synthetic aperture radar (SAR) image formation that employs backprojection as the core, but is implemented such that its compute time is comparable to the often-used Polar Format Algorithm (PFA). (Standard backprojection is so much slower than PFA that it is impractical to use in many operational scenarios.) We demonstrate the feasibility of the algorithm on real SAR phase history data sets and show some advantages in the SAR image formed by this technique.
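
    For reference, the standard backprojection that this fast algorithm is measured against can be sketched compactly. The code below is a generic, naive time-domain backprojector for range-compressed spotlight-mode data, not the authors' fast variant; the array layout, the flat imaging plane at z = 0 and the far-field phase term exp(j4πR/λ) are assumptions of this illustration.

    ```python
    import numpy as np

    def standard_backprojection(range_profiles, platform_pos, range_axis,
                                grid_x, grid_y, wavelength):
        """Naive time-domain backprojection for spotlight-mode SAR (sketch).

        range_profiles : (num_pulses, num_bins) complex range-compressed pulses
        platform_pos   : (num_pulses, 3) antenna phase-center positions [m]
        range_axis     : (num_bins,) range of each bin [m]
        grid_x, grid_y : 2D pixel coordinates in an assumed flat imaging plane [m]
        """
        image = np.zeros(grid_x.shape, dtype=complex)
        for p in range(range_profiles.shape[0]):
            dx = grid_x - platform_pos[p, 0]
            dy = grid_y - platform_pos[p, 1]
            dz = platform_pos[p, 2]                    # pixels assumed at z = 0
            r = np.sqrt(dx**2 + dy**2 + dz**2)         # pixel-to-antenna range for this pulse
            # sample the range-compressed pulse at each pixel's range (1-D interpolation)
            samp = (np.interp(r, range_axis, range_profiles[p].real)
                    + 1j * np.interp(r, range_axis, range_profiles[p].imag))
            # restore the carrier phase removed during demodulation, then accumulate
            image += samp * np.exp(1j * 4.0 * np.pi * r / wavelength)
        return np.abs(image)
    ```

    The per-pulse loop is what makes standard backprojection so slow relative to PFA; fast variants such as the one described above reorganize exactly this accumulation.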

  2. An accelerated threshold-based back-projection algorithm for Compton camera image reconstruction

    International Nuclear Information System (INIS)

    Mundy, Daniel W.; Herman, Michael G.

    2011-01-01

    parallel to the image plane. This effect decreases the sum of the image, thereby also affecting the mean, standard deviation, and SNR of the image. All back-projected events associated with a simulated point source intersected the voxel containing the source and the FWHM of the back-projected image was similar to that obtained from the marching method. Conclusions: The slight deficit to image quality observed with the threshold-based back-projection algorithm described here is outweighed by the 75% reduction in computation time. The implementation of this method requires the development of an optimum threshold function, which determines the overall accuracy of the method. This makes the algorithm well-suited to applications involving the reconstruction of many large images, where the time invested in threshold development is offset by the decreased image reconstruction time. Implemented in a parallel-computing environment, the threshold-based algorithm has the potential to provide real-time dose verification for radiation therapy.

  3. A study of reconstruction artifacts in cone beam tomography using filtered backprojection and iterative EM algorithms

    International Nuclear Information System (INIS)

    Zeng, G.L.; Gullberg, G.T.

    1990-01-01

    Reconstruction artifacts in cone beam tomography are studied for filtered backprojection (Feldkamp) and iterative EM algorithms. The filtered backprojection algorithm uses a voxel-driven, interpolated backprojection to reconstruct the cone beam data; whereas, the iterative EM algorithm performs ray-driven projection and backprojection operations for each iteration. Two weighting schemes for the projection and backprojection operations in the EM algorithm are studied. One weights each voxel by the length of the ray through the voxel and the other equates the value of a voxel to the functional value of the midpoint of the line intersecting the voxel, which is obtained by interpolating between eight neighboring voxels. Cone beam reconstruction artifacts such as rings, bright vertical extremities, and slice-to-slice cross talk are not found with parallel beam and fan beam geometries.

  4. A reconstruction algorithm for coherent scatter computed tomography based on filtered back-projection

    International Nuclear Information System (INIS)

    Stevendaal, U. van; Schlomka, J.-P.; Harding, A.; Grass, M.

    2003-01-01

    Coherent scatter computed tomography (CSCT) is a reconstructive x-ray imaging technique that yields the spatially resolved coherent-scatter form factor of the investigated object. Reconstruction from coherently scattered x-rays is commonly done using algebraic reconstruction techniques (ART). In this paper, we propose an alternative approach based on filtered back-projection. For the first time, a three-dimensional (3D) filtered back-projection technique using curved 3D back-projection lines is applied to two-dimensional coherent scatter projection data. The proposed algorithm is tested with simulated projection data as well as with projection data acquired with a demonstrator setup similar to a multi-line CT scanner geometry. While yielding image quality comparable to that of ART reconstruction, the modified 3D filtered back-projection algorithm is about two orders of magnitude faster. In contrast to iterative reconstruction schemes, it has the advantage that subfield-of-view reconstruction becomes feasible. This allows a selective reconstruction of the coherent-scatter form factor for a region of interest. The proposed modified 3D filtered back-projection algorithm is a powerful reconstruction technique to be implemented in a CSCT scanning system. This method gives coherent scatter CT the potential of becoming a competitive modality for medical imaging or nondestructive testing.

  5. A fast beam hardening correction method incorporated in a filtered back-projection based MAP algorithm

    Science.gov (United States)

    Luo, Shouhua; Wu, Huazhen; Sun, Yi; Li, Jing; Li, Guang; Gu, Ning

    2017-03-01

    The beam hardening effect can induce strong artifacts in CT images, which result in severely deteriorated image quality with incorrect intensities (CT numbers). This paper develops an effective and efficient beam hardening correction algorithm incorporated in a filtered back-projection based maximum a posteriori algorithm (BHC-FMAP). In the proposed algorithm, the beam hardening effect is modeled and incorporated into the forward-projection of the MAP to suppress beam hardening induced artifacts, and the image update process is performed by Feldkamp-Davis-Kress method based back-projection to speed up the convergence. The proposed BHC-FMAP approach does not require information about the beam spectrum or the material properties, or any additional segmentation operation. The proposed method was qualitatively and quantitatively evaluated using both phantom and animal projection data. The experimental results demonstrate that the BHC-FMAP method can efficiently provide a good correction of beam hardening induced artifacts.

  6. Precise Aperture-Dependent Motion Compensation with Frequency Domain Fast Back-Projection Algorithm

    Directory of Open Access Journals (Sweden)

    Man Zhang

    2017-10-01

    Full Text Available Precise azimuth-variant motion compensation (MOCO) is an essential and difficult task for high-resolution synthetic aperture radar (SAR) imagery. In conventional post-filtering approaches, residual azimuth-variant motion errors are generally compensated through a set of spatial post-filters, where the coarse-focused image is segmented into overlapped blocks according to the azimuth-dependent residual errors. However, the robustness of image-domain post-filtering approaches, such as the precise topography- and aperture-dependent motion compensation algorithm (PTA), declines when strong motion errors are involved in the coarse-focused image. In this case, capturing the complete motion blurring function within each image block requires enlarging both the block size and the overlapped part, which inevitably degrades efficiency and robustness. Herein, a frequency domain fast back-projection algorithm (FDFBPA) is introduced to deal with strong azimuth-variant motion errors. FDFBPA compensates the azimuth-variant motion errors based on a precise azimuth spectrum expression in the azimuth wavenumber domain. First, a wavenumber domain sub-aperture processing strategy is introduced to accelerate computation. After that, the azimuth wavenumber spectrum is partitioned into a set of wavenumber blocks, and each block is formed into a sub-aperture coarse-resolution image via the back-projection integral. Then, the sub-aperture images are directly fused together in the azimuth wavenumber domain to obtain a full-resolution image. Moreover, the chirp-Z transform (CZT) is introduced to implement the sub-aperture back-projection integral, increasing the efficiency of the algorithm. By avoiding the image-domain post-filtering strategy, the robustness of the proposed algorithm is improved. Both simulation and real-measured data experiments demonstrate the effectiveness and superiority of the proposed algorithm.

  7. Comparison of 3D Maximum A Posteriori and Filtered Backprojection algorithms for high resolution animal imaging in microPET

    International Nuclear Information System (INIS)

    Chatziioannou, A.; Qi, J.; Moore, A.; Annala, A.; Nguyen, K.; Leahy, R.M.; Cherry, S.R.

    2000-01-01

    We have evaluated the performance of two three-dimensional reconstruction algorithms with data acquired from microPET, a high resolution tomograph dedicated to small animal imaging. The first was a linear filtered-backprojection algorithm (FBP) with reprojection of the missing data and the second was a statistical maximum a posteriori probability algorithm (MAP). The two algorithms were evaluated in terms of their resolution performance, both in phantoms and in vivo. Sixty independent realizations of a phantom simulating the brain of a baby monkey were acquired, each containing 3 million counts. Each of these realizations was reconstructed independently with both algorithms. The ensemble of the sixty reconstructed realizations was used to estimate the standard deviation as a measure of the noise for each reconstruction algorithm. More detail was recovered in the MAP reconstruction without an increase in noise relative to FBP. Studies in a simple cylindrical compartment phantom demonstrated improved recovery of known activity ratios with MAP. Finally, in vivo studies also demonstrated a clear improvement in spatial resolution using the MAP algorithm. The quantitative accuracy of the MAP reconstruction was also evaluated by comparison with autoradiography and direct well counting of tissue samples and was shown to be superior.
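
    The ensemble noise estimate described here is straightforward to reproduce: given the sixty independently reconstructed realizations stacked into one array, the voxel-wise standard deviation serves as the noise map. A minimal sketch (the array name and layout are assumptions):

    ```python
    import numpy as np

    def ensemble_noise(recons):
        """recons: shape (num_realizations, nz, ny, nx) -- independently
        reconstructed realizations for one algorithm (FBP or MAP)."""
        mean_map = recons.mean(axis=0)            # ensemble mean image
        noise_map = recons.std(axis=0, ddof=1)    # voxel-wise standard deviation (noise)
        return mean_map, noise_map
    ```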

  8. Theoretically exact backprojection filtration algorithm for multi-segment linear trajectory

    Science.gov (United States)

    Wu, Weiwen; Yu, Hengyong; Cong, Wenxiang; Liu, Fenglin

    2018-01-01

    A theoretically exact backprojection filtration algorithm is proved and implemented for image reconstruction from a multi-segment linear trajectory assuming fan-beam geometry. The reconstruction formula is based on a concept of linear PI-line (L-PI) proposed in our previous work. The proof is completed in two consecutive steps. In the first step, it is proved that theoretically exact image reconstruction can be obtained on an arbitrary L-PI line from an infinite straight-line trajectory. In the second step, it is shown that accurate image reconstruction can be achieved from a multi-segment line trajectory by introducing a weight function to deal with the data redundancy. Numerical implementation and simulation results validate the correctness of our theoretical results.

  9. Performance evaluation of the backprojection filtered (BPF) algorithm in circular fan-beam and cone-beam CT

    International Nuclear Information System (INIS)

    Li Liang; Chen Zhiqiang; Zhang Li; Kang Kejun

    2006-01-01

    In this article we introduce an exact backprojection filtered (BPF) type reconstruction algorithm for cone-beam scans based on Zou and Pan's work. The algorithm can reconstruct images using only the projection data passing through the parallel PI-line segments in reduced scans. Computer simulations and practical experiments are carried out to evaluate this algorithm. The BPF algorithm has a higher computational efficiency than the well-known FDK algorithm. The BPF algorithm is evaluated using practical CT projection data acquired on a 450 keV X-ray CT system with a flat-panel detector (FPD). From the practical experiments, we obtain the spatial resolution of this CT system. The algorithm achieves a spatial resolution of 2.4 lp/mm and satisfies the requirements of practical applications in industrial CT inspection. (authors)

  10. An improved cone-beam filtered backprojection reconstruction algorithm based on x-ray angular correction and multiresolution analysis

    International Nuclear Information System (INIS)

    Sun, Y.; Hou, Y.; Yan, Y.

    2004-01-01

    With the extensive application of industrial computed tomography in the field of non-destructive testing, improving the quality of the reconstructed image is receiving increasing attention. It is well known that in the existing cone-beam filtered backprojection reconstruction algorithms the cone angle is controlled within a narrow range. The reason for this limitation is the incompleteness of projection data when the cone angle increases. Thus the size of the tested workpiece is limited. Considering the characteristics of the X-ray cone angle, an improved cone-beam filtered back-projection reconstruction algorithm taking account of angular correction is proposed in this paper. The aim of our algorithm is to correct the cone-angle effect resulting from the incompleteness of projection data in the conventional algorithm. The basis of the correction is the angular relationship among the X-ray source, the tested workpiece and the detector. Thus the cone angle is not strictly limited and this algorithm may be used to inspect larger workpieces. Furthermore, an adaptive wavelet filter is used to perform multiresolution analysis, which can adaptively modify the wavelet decomposition level according to the resolution required in the local reconstructed area. Therefore the computation and the reconstruction time can be reduced, and the quality of the reconstructed image can also be improved. (author)

  11. Comparison of polar formatting and back-projection algorithms for spotlight-mode SAR image formation

    Science.gov (United States)

    Jakowatz, Charles V., Jr.; Doren, Neall

    2006-05-01

    The convolution/back-projection (CBP) algorithm has recently once again been touted as the "gold standard" for spotlight-mode SAR image formation, as it is proclaimed to achieve better image quality than the well-known and often employed polar formatting algorithm (PFA). In addition, it has been suggested that PFA is less flexible than CBP in that PFA can only compute the SAR image on one grid and PFA cannot add or subtract pulses from the imaging process. The argument for CBP acknowledges the computational burden of CBP compared to PFA, but asserts that the increased image accuracy and flexibility of the formation process is warranted, at least in some imaging scenarios. Because CBP can now be sped up by the proper algorithm design, it becomes, according to this line of analysis, the clear algorithm of choice for SAR image formation. In this paper we reject the above conclusion by showing that PFA and CBP achieve the same image quality, and that PFA has complete flexibility, including choice of imaging plane, size of illuminated beam area to be imaged, resolution of the image, and others. We demonstrate these claims via formation of both simulated and real SAR imagery using both algorithms.

  12. Potency backprojection

    Science.gov (United States)

    Okuwaki, R.; Kasahara, A.; Yagi, Y.

    2017-12-01

    The backprojection (BP) method has been one of the most powerful tools for tracking the seismic-wave sources of large and mega earthquakes. The BP method projects waveforms onto a possible source point by stacking them with the theoretical travel-time shifts between the source point and the stations. Following the BP method, the hybrid backprojection (HBP) method was developed to enhance the depth resolution of projected images and mitigate the spurious imaging of depth phases, which are shortcomings of the BP method, by stacking cross-correlation functions of the observed waveforms and theoretically calculated Green's functions (GFs). The signal intensity of the BP/HBP image at a source point is related to how much of the observed waveform was radiated from that point. Since the amplitude of the GF associated with the slip rate increases with depth as the rigidity increases with depth, the intensity of the BP/HBP image inherently has a depth dependence. To make a direct comparison of the BP/HBP image with the corresponding slip distribution inferred from a waveform inversion, and to discuss the rupture properties along the fault drawn from the waveforms at high and low frequencies with the BP/HBP methods and the waveform inversion, respectively, it is desirable to have variants of the BP/HBP methods that directly image the potency-rate-density distribution. Here we propose new formulations of the BP/HBP methods, which image the distribution of the potency-rate density by introducing alternative normalizing factors into the conventional formulations. For the BP method, the observed waveform is normalized with the maximum amplitude of the P phase of the corresponding GF. For the HBP method, we normalize the cross-correlation function with the squared sum of the GF. The normalized waveforms or cross-correlation functions are then stacked for all the stations to enhance the signal-to-noise ratio. We will present performance tests of the new formulations by using synthetic waveforms and the
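
    The two normalizations proposed here can be sketched schematically before the stacking step; the exact windowing of the P phase and the stacking weights are not specified in the abstract, so the helpers below are only an illustration.

    ```python
    import numpy as np

    def bp_term(obs, gf_p_phase):
        """BP variant: normalize the observed waveform by the peak P-phase
        amplitude of the corresponding Green's function (sketch)."""
        return obs / np.max(np.abs(gf_p_phase))

    def hbp_term(obs, gf):
        """HBP variant: cross-correlate the observed waveform with the Green's
        function and normalize by the squared sum of the Green's function (sketch)."""
        return np.correlate(obs, gf, mode="full") / np.sum(gf ** 2)

    # The normalized traces (or correlation functions) are then shifted by the
    # theoretical travel times and stacked over all stations for each source point.
    ```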

  13. A back-projection algorithm in the presence of an extra attenuating medium: towards EPID dosimetry for the MR-Linac

    Science.gov (United States)

    Torres-Xirau, I.; Olaciregui-Ruiz, I.; Rozendaal, R. A.; González, P.; Mijnheer, B. J.; Sonke, J.-J.; van der Heide, U. A.; Mans, A.

    2017-08-01

    In external beam radiotherapy, electronic portal imaging devices (EPIDs) are frequently used for pre-treatment and for in vivo dose verification. Currently, various MR-guided radiotherapy systems are being developed and clinically implemented. Independent dosimetric verification is highly desirable. For this purpose we adapted our EPID-based dose verification system for use with the MR-Linac combination developed by Elekta in cooperation with UMC Utrecht and Philips. In this study we extended our back-projection method to cope with the presence of an extra attenuating medium between the patient and the EPID. Experiments were performed at a conventional linac, using an aluminum mock-up of the MRI scanner housing between the phantom and the EPID. For a 10 cm square field, the attenuation by the mock-up was 72%, while 16% of the remaining EPID signal resulted from scattered radiation. 58 IMRT fields were delivered to a 20 cm slab phantom with and without the mock-up. EPID-reconstructed dose distributions were compared to planned dose distributions using the γ-evaluation method (global, 3%, 3 mm). With our adapted back-projection algorithm the average γ_mean was 0.27 ± 0.06, while with the conventional algorithm it was 0.28 ± 0.06. Dose profiles of several square fields reconstructed with our adapted algorithm showed excellent agreement when compared to the TPS.
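
    The γ-evaluation used for this comparison (global 3%/3 mm) can be reproduced with a brute-force search. The sketch below assumes a 2D dose plane, a uniform pixel spacing and a finite search window, none of which are details taken from the paper.

    ```python
    import numpy as np

    def gamma_index(ref, evl, spacing_mm, dose_crit=0.03, dta_mm=3.0):
        """Brute-force global 2D gamma evaluation (simplified illustrative sketch)."""
        dd = dose_crit * ref.max()                       # global dose criterion
        search = int(np.ceil(2 * dta_mm / spacing_mm))   # search radius in pixels
        ny, nx = ref.shape
        gamma = np.full(ref.shape, np.inf, dtype=float)
        for j in range(ny):
            for i in range(nx):
                j0, j1 = max(0, j - search), min(ny, j + search + 1)
                i0, i1 = max(0, i - search), min(nx, i + search + 1)
                jj, ii = np.mgrid[j0:j1, i0:i1]
                dist2 = ((jj - j) ** 2 + (ii - i) ** 2) * spacing_mm ** 2
                dose2 = (evl[j0:j1, i0:i1] - ref[j, i]) ** 2
                gamma[j, i] = np.sqrt(dist2 / dta_mm ** 2 + dose2 / dd ** 2).min()
        return gamma

    # usage: g = gamma_index(planned_dose, epid_dose, spacing_mm=1.0)
    #        gamma_mean, pass_rate = g.mean(), (g <= 1.0).mean()
    ```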

  14. Virtual patient 3D dose reconstruction using in air EPID measurements and a back-projection algorithm for IMRT and VMAT treatments.

    Science.gov (United States)

    Olaciregui-Ruiz, Igor; Rozendaal, Roel; van Oers, René F M; Mijnheer, Ben; Mans, Anton

    2017-05-01

    At our institute, a transit back-projection algorithm is used clinically to reconstruct in vivo patient and in phantom 3D dose distributions using EPID measurements behind a patient or a polystyrene slab phantom, respectively. In this study, an extension to this algorithm is presented whereby in air EPID measurements are used in combination with CT data to reconstruct 'virtual' 3D dose distributions. By combining virtual and in vivo patient verification data for the same treatment, patient-related errors can be separated from machine, planning and model errors. The virtual back-projection algorithm is described and verified against the transit algorithm with measurements made behind a slab phantom, against dose measurements made with an ionization chamber and with the OCTAVIUS 4D system, as well as against TPS patient data. Virtual and in vivo patient dose verification results are also compared. Virtual dose reconstructions agree within 1% with ionization chamber measurements. The average γ-pass rate values (3% global dose/3mm) in the 3D dose comparison with the OCTAVIUS 4D system and the TPS patient data are 98.5±1.9%(1SD) and 97.1±2.9%(1SD), respectively. For virtual patient dose reconstructions, the differences with the TPS in median dose to the PTV remain within 4%. Virtual patient dose reconstruction makes pre-treatment verification based on deviations of DVH parameters feasible and eliminates the need for phantom positioning and re-planning. Virtual patient dose reconstructions have additional value in the inspection of in vivo deviations, particularly in situations where CBCT data is not available (or not conclusive). Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  15. Data-parallel tomographic reconstruction : A comparison of filtered backprojection and direct Fourier reconstruction

    NARCIS (Netherlands)

    Roerdink, J.B.T.M.; Westenberg, M.A.

    1998-01-01

    We consider the parallelization of two standard 2D reconstruction algorithms, filtered backprojection and direct Fourier reconstruction, using the data-parallel programming style. The algorithms are implemented on a Connection Machine CM-5 with 16 processors and a peak performance of 2 Gflop/s.
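
    Both algorithms parallelize naturally over projection angles, since each view is filtered and backprojected independently before the results are summed. The code below is a minimal serial sketch of the filtered backprojection half (parallel-beam geometry, ramp filter, arbitrary global scale); the data-parallel mapping distributes the loop over angles across processors.

    ```python
    import numpy as np

    def fbp(sinogram, angles_deg):
        """Minimal parallel-beam filtered backprojection (sketch).
        sinogram: (num_angles, num_detectors); returns a num_detectors^2 image."""
        n_ang, n_det = sinogram.shape
        # 1) filter every projection with a ramp (|f|) filter -- independent per angle
        ramp = np.abs(np.fft.fftfreq(n_det))
        filtered = np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp, axis=1).real
        # 2) backproject each filtered view -- also independent per angle, then summed
        coords = np.arange(n_det) - n_det / 2.0
        X, Y = np.meshgrid(coords, coords)
        image = np.zeros((n_det, n_det))
        for proj, theta in zip(filtered, np.deg2rad(angles_deg)):
            t = X * np.cos(theta) + Y * np.sin(theta) + n_det / 2.0
            image += np.interp(t, np.arange(n_det), proj, left=0.0, right=0.0)
        return image * np.pi / (2.0 * n_ang)
    ```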

  16. A three-dimensional-weighted cone beam filtered backprojection (CB-FBP) algorithm for image reconstruction in volumetric CT-helical scanning

    International Nuclear Information System (INIS)

    Tang Xiangyang; Hsieh Jiang; Nilsen, Roy A; Dutta, Sandeep; Samsonov, Dmitry; Hagiwara, Akira

    2006-01-01

    Based on the structure of the original helical FDK algorithm, a three-dimensional (3D)-weighted cone beam filtered backprojection (CB-FBP) algorithm is proposed for image reconstruction in volumetric CT under helical source trajectory. In addition to its dependence on view and fan angles, the 3D weighting utilizes the cone angle dependency of a ray to improve reconstruction accuracy. The 3D weighting is ray-dependent and the underlying mechanism is to give a favourable weight to the ray with the smaller cone angle out of a pair of conjugate rays but an unfavourable weight to the ray with the larger cone angle out of the conjugate ray pair. The proposed 3D-weighted helical CB-FBP reconstruction algorithm is implemented in the cone-parallel geometry that can improve noise uniformity and image generation speed significantly. Under the cone-parallel geometry, the filtering is naturally carried out along the tangential direction of the helical source trajectory. By exploring the 3D weighting's dependence on cone angle, the proposed helical 3D-weighted CB-FBP reconstruction algorithm can provide significantly improved reconstruction accuracy at moderate cone angle and high helical pitches. The 3D-weighted CB-FBP algorithm is experimentally evaluated by computer-simulated phantoms and phantoms scanned by a diagnostic volumetric CT system with a detector dimension of 64 x 0.625 mm over various helical pitches. The computer simulation study shows that the 3D weighting enables the proposed algorithm to reach reconstruction accuracy comparable to that of exact CB reconstruction algorithms, such as the Katsevich algorithm, under a moderate cone angle (4°) and various helical pitches. Meanwhile, the experimental evaluation using the phantoms scanned by a volumetric CT system shows that the spatial resolution along the z-direction and noise characteristics of the proposed 3D-weighted helical CB-FBP reconstruction algorithm are maintained very well in comparison to the FDK

  17. Experimental validation of a novel reconstruction algorithm for electrical impedance tomography based on backprojection of Lagrange multipliers.

    Science.gov (United States)

    Bayford, R; Hanquan, Y; Boone, K; Holder, D S

    1995-08-01

    A novel approach to image reconstruction for electrical impedance tomography (EIT) has been developed. It is based on a constrained optimization technique for the reconstruction of difference resistivity images without finite-element modelling. It solves the inverse problem by optimizing a cost function under constraints, in the form of normalized boundary potentials. Its application to the neighboring data collection method is presented here. Mathematical models are developed according to specified criteria. These express the reconstructed image in terms of one-dimensional Lagrange multiplier functions. The reconstruction problem becomes one of estimating these functions from normalized boundary potentials. This model is based on a cost criterion of the minimization of the variance between the reconstructed and the true resistivity distributions. The algorithm was tested on data collected in a cylindrical saline-filled tank. A polyacrylamide rod was placed in various positions with or without a peripheral plaster of Paris ring in place. This was intended to resemble the conditions during EIT of epileptic seizures recorded with scalp or cortical electrodes in the human head. One advantage of this approach is that compensation for non-uniform initial conditions may be made, as this is a significant problem in imaging cerebral activity through the skull.

  18. Backprojection filtering for variable orbit fan-beam tomography

    International Nuclear Information System (INIS)

    Gullberg, G.T.; Zeng, G.L.

    1995-01-01

    Backprojection filtering algorithms are presented for three variable orbit fan-beam geometries. Expressions for the fan-beam projection and backprojection operators are given for a flat detector fan-beam geometry with fixed focal length, with variable focal length, and with fixed focal length and off-center focusing. Backprojection operators are derived for each geometry using a transformation of coordinates to transform from a parallel geometry backprojector to a fan-beam backprojector for the appropriate geometry. The backprojection operator includes a factor which is a function of the coordinates of the projection ray and the coordinates of the pixel in the backprojected image. The backprojection filtering algorithm first backprojects the variable orbit fan-beam projection data using the appropriately derived backprojector to obtain a 1/r-blurred version of the original image, then takes the two-dimensional (2D) fast Fourier transform (FFT) of the backprojected image, multiplies the transformed image by the 2D ramp filter function, and finally takes the inverse 2D FFT to obtain the reconstructed image. Computer simulations verify that backprojectors with appropriate weighting give artifact-free reconstructions of simulated line-integral projections. Also, it is shown that it is not necessary to assume a projection model of line integrals; rather, the projector and backprojector can be defined to model the physics of the imaging detection process. A backprojector for variable orbit fan-beam tomography with fixed focal length is derived which includes an additional factor which is a function of the flux density along the flat detector. It is shown that the impulse response for the composite of the projection and backprojection operations is equal to 1/r.
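
    The order of operations described here (backproject first, filter afterwards) is the reverse of conventional FBP. The sketch below shows that pipeline for the simpler parallel-beam case; the variable orbit fan-beam weighting factors discussed in the abstract are omitted.

    ```python
    import numpy as np

    def backprojection_filtering(sinogram, angles_deg):
        """Backproject first, filter afterwards: the unfiltered backprojection gives a
        1/r-blurred image, which is deblurred by a 2D ramp filter in the Fourier domain."""
        n_ang, n_det = sinogram.shape
        coords = np.arange(n_det) - n_det / 2.0
        X, Y = np.meshgrid(coords, coords)
        blurred = np.zeros((n_det, n_det))
        for proj, theta in zip(sinogram, np.deg2rad(angles_deg)):
            t = X * np.cos(theta) + Y * np.sin(theta) + n_det / 2.0
            blurred += np.interp(t, np.arange(n_det), proj, left=0.0, right=0.0)
        blurred *= np.pi / n_ang                   # approximate angular integration weight
        fx = np.fft.fftfreq(n_det)
        FX, FY = np.meshgrid(fx, fx)
        ramp_2d = np.sqrt(FX ** 2 + FY ** 2)       # 2D ramp filter |f|
        return np.fft.ifft2(np.fft.fft2(blurred) * ramp_2d).real
    ```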

  19. High performance cone-beam spiral backprojection with voxel-specific weighting

    International Nuclear Information System (INIS)

    Steckmann, Sven; Knaup, Michael; Kachelriess, Marc

    2009-01-01

    Cone-beam spiral backprojection is computationally highly demanding. At first sight, the backprojection requirements are similar to those of cone-beam backprojection from circular scans such as it is performed in the widely used Feldkamp algorithm. However, there is an additional complication: the illumination of each voxel, i.e. the range of angles the voxel is seen by the x-ray cone, is a complex function of the voxel position. In general, one needs to multiply a voxel-specific weight w(x, y, z, α) prior to adding a projection from angle α to a voxel at position x, y, z. Often, the weight function has no analytically closed form and must be numerically determined. Storage of the weights is prohibitive since the amount of memory required equals the number of voxels per spiral rotation times the number of projections from which a voxel receives contributions, and is therefore on the order of up to 10^12 floating point values for typical spiral scans. We propose a new algorithm that combines the spiral symmetry with the ability of today's 64 bit operating systems to store large amounts of precomputed weights, even above the 4 GB limit. Our trick is to backproject into slices that are rotated in the same manner as the spiral trajectory rotates. Using the spiral symmetry in this way allows one to exploit data-level parallelism and thereby to achieve a very high level of vectorization. An additional postprocessing step rotates these slices back to normal images. Our new backprojection algorithm achieves up to 17 giga voxel updates per second on our systems that are equipped with four standard Intel X7460 hexa-core CPUs (Intel Xeon 7300 platform, 2.66 GHz, Intel Corporation). This equals the reconstruction of 344 images per second assuming that each slice consists of 512 × 512 pixels and receives contributions from 512 projections. Thereby, it is an order of magnitude faster than a highly optimized code that does not make use of the spiral symmetry. In its present version, the

  20. High performance cone-beam spiral backprojection with voxel-specific weighting

    Science.gov (United States)

    Steckmann, Sven; Knaup, Michael; Kachelrieß, Marc

    2009-06-01

    Cone-beam spiral backprojection is computationally highly demanding. At first sight, the backprojection requirements are similar to those of cone-beam backprojection from circular scans such as it is performed in the widely used Feldkamp algorithm. However, there is an additional complication: the illumination of each voxel, i.e. the range of angles the voxel is seen by the x-ray cone, is a complex function of the voxel position. In general, one needs to multiply a voxel-specific weight w(x, y, z, α) prior to adding a projection from angle α to a voxel at position x, y, z. Often, the weight function has no analytically closed form and must be numerically determined. Storage of the weights is prohibitive since the amount of memory required equals the number of voxels per spiral rotation times the number of projections from which a voxel receives contributions, and is therefore on the order of up to 10^12 floating point values for typical spiral scans. We propose a new algorithm that combines the spiral symmetry with the ability of today's 64 bit operating systems to store large amounts of precomputed weights, even above the 4 GB limit. Our trick is to backproject into slices that are rotated in the same manner as the spiral trajectory rotates. Using the spiral symmetry in this way allows one to exploit data-level parallelism and thereby to achieve a very high level of vectorization. An additional postprocessing step rotates these slices back to normal images. Our new backprojection algorithm achieves up to 17 giga voxel updates per second on our systems that are equipped with four standard Intel X7460 hexa-core CPUs (Intel Xeon 7300 platform, 2.66 GHz, Intel Corporation). This equals the reconstruction of 344 images per second assuming that each slice consists of 512 × 512 pixels and receives contributions from 512 projections. Thereby, it is an order of magnitude faster than a highly optimized code that does not make use of the spiral symmetry. In its present version, the

  1. Convolution backprojection image reconstruction for spotlight mode synthetic aperture radar.

    Science.gov (United States)

    Desai, M D; Jenkins, W K

    1992-01-01

    Convolution backprojection (CBP) image reconstruction has been proposed as a means of producing high-resolution synthetic-aperture radar (SAR) images by processing data directly in the polar recording format which is the conventional recording format for spotlight mode SAR. The CBP algorithm filters each projection as it is recorded and then backprojects the ensemble of filtered projections to create the final image in a pixel-by-pixel format. CBP reconstruction produces high-quality images by handling the recorded data directly in polar format. The CBP algorithm requires only 1-D interpolation along the filtered projections to determine the precise values that must be contributed to the backprojection summation from each projection. The algorithm is thus able to produce higher quality images by eliminating the inaccuracies of 2-D interpolation, as well as using all the data recorded in the spectral domain annular sector more effectively. The computational complexity of the CBP algorithm is O(N^3).

  2. GPU-based Branchless Distance-Driven Projection and Backprojection

    Science.gov (United States)

    Liu, Rui; Fu, Lin; De Man, Bruno; Yu, Hengyong

    2017-01-01

    Projection and backprojection operations are essential in a variety of image reconstruction and physical correction algorithms in CT. The distance-driven (DD) projection and backprojection are widely used for their highly sequential memory access pattern and low arithmetic cost. However, a typical DD implementation has an inner loop that adjusts the calculation depending on the relative position between voxel and detector cell boundaries. The irregularity of the branch behavior makes it inefficient to implement on massively parallel computing devices such as graphics processing units (GPUs). Such irregular branch behaviors can be eliminated by factorizing the DD operation into three branchless steps: integration, linear interpolation, and differentiation, all of which are highly amenable to massive vectorization. In this paper, we implement and evaluate a highly parallel branchless DD algorithm for 3D cone beam CT. The algorithm utilizes the texture memory and hardware interpolation on GPUs to achieve fast computational speed. The developed branchless DD algorithm achieved 137-fold speedup for forward projection and 188-fold speedup for backprojection relative to a single-thread CPU implementation. Compared with a state-of-the-art 32-thread CPU implementation, the proposed branchless DD achieved 8-fold acceleration for forward projection and 10-fold acceleration for backprojection. The GPU-based branchless DD method was evaluated with iterative reconstruction algorithms on both simulated and real datasets. It produced images visually identical to those of the CPU reference algorithm. PMID:29333480
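
    The three branchless steps can be illustrated on a single detector row: accumulate, interpolate the running integral at the destination boundaries, then difference. The 1D resampling kernel below is a simplified sketch of that idea, not the full 3D cone-beam projector.

    ```python
    import numpy as np

    def branchless_resample(values, src_edges, dst_edges):
        """Branchless distance-driven resampling of a piecewise-constant 1D signal:
        (1) integrate (cumulative sum), (2) linearly interpolate the integral at the
        destination boundaries, (3) differentiate. No per-cell boundary branching."""
        # 1) integration: running integral of the signal over the source bins
        widths = np.diff(src_edges)
        integral = np.concatenate(([0.0], np.cumsum(values * widths)))
        # 2) linear interpolation of the integral at the destination boundaries
        interp = np.interp(dst_edges, src_edges, integral)
        # 3) differentiation: average value over each destination bin
        return np.diff(interp) / np.diff(dst_edges)
    ```

    Because each step is a plain vector operation, the kernel maps onto GPU threads without divergent branches, which is the property the abstract exploits.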

  3. Portable Wideband Microwave Imaging System for Intracranial Hemorrhage Detection Using Improved Back-projection Algorithm with Model of Effective Head Permittivity

    Science.gov (United States)

    Mobashsher, Ahmed Toaha; Mahmoud, A.; Abbosh, A. M.

    2016-02-01

    Intracranial hemorrhage is a medical emergency that requires rapid detection and medication to restrict any brain damage to minimal. Here, an effective wideband microwave head imaging system for on-the-spot detection of intracranial hemorrhage is presented. The operation of the system relies on the dielectric contrast between healthy brain tissues and a hemorrhage that causes a strong microwave scattering. The system uses a compact sensing antenna, which has an ultra-wideband operation with directional radiation, and a portable, compact microwave transceiver for signal transmission and data acquisition. The collected data is processed to create a clear image of the brain using an improved back projection algorithm, which is based on a novel effective head permittivity model. The system is verified in realistic simulation and experimental environments using anatomically and electrically realistic human head phantoms. Quantitative and qualitative comparisons between the images from the proposed and existing algorithms demonstrate significant improvements in detection and localization accuracy. The radiation and thermal safety of the system are examined and verified. Initial human tests are conducted on healthy subjects with different head sizes. The reconstructed images are statistically analyzed and absence of false positive results indicate the efficacy of the proposed system in future preclinical trials.
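
    At its core, this kind of back-projection imaging is a delay-and-sum over antennas, where the effective head permittivity sets the assumed propagation speed. The monostatic, scalar-permittivity sketch below illustrates only that principle; it is not the authors' improved algorithm or their permittivity model.

    ```python
    import numpy as np

    def confocal_image(signals, t_axis, antenna_pos, grid_xy, eps_eff):
        """Confocal delay-and-sum back-projection sketch for microwave head imaging.
        signals: (num_antennas, num_samples) time-domain scattered signals
        antenna_pos: (num_antennas, 2) positions [m]; grid_xy: (npix, 2) pixels [m]
        eps_eff: assumed scalar effective head permittivity."""
        c0 = 3e8
        v = c0 / np.sqrt(eps_eff)              # propagation speed in the effective medium
        image = np.zeros(grid_xy.shape[0])
        for sig, pos in zip(signals, antenna_pos):
            d = np.linalg.norm(grid_xy - pos, axis=1)
            delay = 2.0 * d / v                # round-trip delay (monostatic assumption)
            image += np.interp(delay, t_axis, sig, left=0.0, right=0.0)
        return image ** 2                      # intensity image
    ```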

  4. PDES, Fips Standard Data Encryption Algorithm

    International Nuclear Information System (INIS)

    Nessett, D.N.

    1991-01-01

    Description of program or function: PDES performs the National Bureau of Standards FIPS Pub. 46 data encryption/decryption algorithm used for the cryptographic protection of computer data. The DES algorithm is designed to encipher and decipher blocks of data consisting of 64 bits under control of a 64-bit key. The key is generated in such a way that each of the 56 bits used directly by the algorithm is random and the remaining 8 error-detecting bits are set to make the parity of each 8-bit byte of the key odd, i.e., there is an odd number of '1' bits in each 8-bit byte. Each member of a group of authorized users of encrypted computer data must have the key that was used to encipher the data in order to use it. Data can be recovered from cipher only by using exactly the same key used to encipher it, but with the schedule of addressing the key bits altered so that the deciphering process is the reverse of the enciphering process. A block of data to be enciphered is subjected to an initial permutation, then to a complex key-dependent computation, and finally to a permutation which is the inverse of the initial permutation. Two PDES routines are included; both perform the same calculation. One, identified as FDES.MAR, is designed to achieve speed in execution, while the other, identified as PDES.MAR, presents a clearer view of how the algorithm is executed.
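
    The key-parity rule described here (seven key bits per byte plus one parity bit making each byte's parity odd) is easy to enforce programmatically; a small sketch, independent of the FDES.MAR/PDES.MAR routines themselves:

    ```python
    def set_odd_parity(key: bytes) -> bytes:
        """Set the parity bit (LSB) of each byte of a 64-bit DES key so that every
        8-bit byte has an odd number of '1' bits, as FIPS 46 requires."""
        out = bytearray()
        for b in key:
            data = b & 0xFE                              # keep the 7 key bits
            ones = bin(data).count("1")
            out.append(data | (0 if ones % 2 else 1))    # set LSB to make parity odd
        return bytes(out)
    ```

    A key prepared this way can then be passed to any standard DES implementation for the encipher/decipher operations described above.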

  5. Filtered backprojection proton CT reconstruction along most likely paths

    Energy Technology Data Exchange (ETDEWEB)

    Rit, Simon; Dedes, George; Freud, Nicolas; Sarrut, David; Letang, Jean Michel [Universite de Lyon, CREATIS, CNRS UMR5220, Inserm U1044, INSA-Lyon, Universite Lyon 1, Centre Leon Berard, 69008 Lyon (France)

    2013-03-15

    Purpose: Proton CT (pCT) has the potential to accurately measure the electron density map of tissues at low doses, but the spatial resolution is prohibitive if the curved paths of protons in matter are not accounted for. The authors propose to account for an estimate of the most likely path of protons in a filtered backprojection (FBP) reconstruction algorithm. Methods: The energy loss of protons is first binned in several proton radiographs at different distances to the proton source to exploit the depth-dependency of the estimate of the most likely path. This process is named the distance-driven binning. A voxel-specific backprojection is then used to select the adequate radiograph in the distance-driven binning in order to propagate in the pCT image the best achievable spatial resolution in proton radiographs. The improvement in spatial resolution is demonstrated using Monte Carlo simulations of resolution phantoms. Results: The spatial resolution in the distance-driven binning depended on the distance of the objects from the source and was optimal in the binned radiograph corresponding to that distance. The spatial resolution in the reconstructed pCT images decreased with the depth in the scanned object but it was always better than previous FBP algorithms assuming straight line paths. In a water cylinder with 20 cm diameter, the observed range of spatial resolutions was 0.7-1.6 mm compared to 1.0-2.4 mm at best with a straight line path assumption. The improvement was strongly enhanced in shorter 200° scans. Conclusions: Improved spatial resolution was obtained in pCT images with filtered backprojection reconstruction using most likely path estimates of protons. The improvement in spatial resolution combined with the practicality of FBP algorithms compared to iterative reconstruction algorithms makes this new algorithm a candidate of choice for clinical pCT.

  6. Computed tomography of the cervical spine: comparison of image quality between a standard-dose and a low-dose protocol using filtered back-projection and iterative reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Becce, Fabio [University of Lausanne, Department of Diagnostic and Interventional Radiology, Centre Hospitalier Universitaire Vaudois, Lausanne (Switzerland); Universite Catholique Louvain, Department of Radiology, Cliniques Universitaires Saint-Luc, Brussels (Belgium); Ben Salah, Yosr; Berg, Bruno C. vande; Lecouvet, Frederic E.; Omoumi, Patrick [Universite Catholique Louvain, Department of Radiology, Cliniques Universitaires Saint-Luc, Brussels (Belgium); Verdun, Francis R. [University of Lausanne, Institute of Radiation Physics, Centre Hospitalier Universitaire Vaudois, Lausanne (Switzerland); Meuli, Reto [University of Lausanne, Department of Diagnostic and Interventional Radiology, Centre Hospitalier Universitaire Vaudois, Lausanne (Switzerland)

    2013-07-15

    To compare image quality of a standard-dose (SD) and a low-dose (LD) cervical spine CT protocol using filtered back-projection (FBP) and iterative reconstruction (IR). Forty patients investigated by cervical spine CT were prospectively randomised into two groups: SD (120 kVp, 275 mAs) and LD (120 kVp, 150 mAs), both applying automatic tube current modulation. Data were reconstructed using both FBP and sinogram-affirmed IR. Image noise, signal-to-noise (SNR) and contrast-to-noise (CNR) ratios were measured. Two radiologists independently and blindly assessed the following anatomical structures at C3-C4 and C6-C7 levels, using a four-point scale: intervertebral disc, content of neural foramina and dural sac, ligaments, soft tissues and vertebrae. They subsequently rated overall image quality using a ten-point scale. For both protocols and at each disc level, IR significantly decreased image noise and increased SNR and CNR, compared with FBP. SNR and CNR were statistically equivalent in LD-IR and SD-FBP protocols. Regardless of the dose and disc level, the qualitative scores with IR compared with FBP, and with LD-IR compared with SD-FBP, were significantly higher or not statistically different for intervertebral discs, neural foramina and ligaments, while significantly lower or not statistically different for soft tissues and vertebrae. The overall image quality scores were significantly higher with IR compared with FBP, and with LD-IR compared with SD-FBP. LD-IR cervical spine CT provides better image quality for intervertebral discs, neural foramina and ligaments, and worse image quality for soft tissues and vertebrae, compared with SD-FBP, while reducing radiation dose by approximately 40 %. (orig.)
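
    The quantitative metrics reported here follow standard region-of-interest definitions. The helper below is a sketch of one common convention (noise as the standard deviation inside a background ROI); the actual ROI placement used in the study is not given in the abstract.

    ```python
    import numpy as np

    def roi_image_quality(image, roi_signal, roi_background):
        """Image noise, SNR and CNR from two boolean ROI masks (illustrative sketch)."""
        noise = image[roi_background].std(ddof=1)        # noise: SD in the background ROI
        snr = image[roi_signal].mean() / noise
        cnr = (image[roi_signal].mean() - image[roi_background].mean()) / noise
        return noise, snr, cnr
    ```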

  7. Weighted filtered backprojection for quantitative fluorescence optical projection tomography

    Energy Technology Data Exchange (ETDEWEB)

    Darrell, A; Marias, K [BMI Laboratory, Institute of Computer Science, Foundation for Research and Technology-Hellas, Vassilika Vouton, PO Box 1385, GR 711 10 Heraklion (Greece); Meyer, H; Ripoll, J [Institute of Electronic Structure and Laser, Foundation for Research and Technology-Hellas, Vassilika Vouton, PO Box 1385, GR 711 10 Heraklion (Greece); Brady, M [Medical Vision Laboratory, Department of Engineering Science, Oxford University, Parks Road, Oxford OX1 3PJ (United Kingdom)

    2008-07-21

    Reconstructing images from a set of fluorescence optical projection tomography (OPT) projections is a relatively new problem. Several physical aspects of fluorescence OPT necessitate a different treatment of the inverse problem to that required for non-fluorescence tomography. Given a fluorophore within the depth of field of the imaging system, the power received by the optical system, and therefore the CCD detector, is related to the distance of the fluorophore from the objective entrance pupil. Additionally, due to the slight blurring of images of sources positioned off the focal plane, the CCD image of a fluorophore off the focal plane is lower in intensity than the CCD image of an identical fluorophore positioned on the focal plane. The filtered backprojection (FBP) algorithm does not take these effects into account and so cannot be expected to yield truly quantitative results. A full model of image formation is introduced which takes into account the effects of isotropic emission and defocus. The model is used to obtain a weighting function which is used in a variation of the FBP algorithm called weighted filtered backprojection (WFBP). This new algorithm is tested with simulated data and with experimental data from a phantom consisting of fluorescent microspheres embedded in an agarose gel.

  8. Parallel Backprojection: A Case Study in High-Performance Reconfigurable Computing

    Directory of Open Access Journals (Sweden)

    Cordes Ben

    2009-01-01

    Full Text Available High-performance reconfigurable computing (HPRC) is a novel approach to providing large-scale computing power to modern scientific applications. Using both general-purpose processors and FPGAs allows application designers to exploit fine-grained and coarse-grained parallelism, achieving high degrees of speedup. One scientific application that benefits from this technique is backprojection, an image formation algorithm that can be used as part of a synthetic aperture radar (SAR) processing system. We present an implementation of backprojection for SAR on an HPRC system. Using simulated data taken at a variety of ranges, our implementation runs over 200 times faster than a similar software program, with an overall application speedup better than 50x. The backprojection application is easily parallelizable, achieving near-linear speedup when run on multiple nodes of a clustered HPRC system. The results presented can be applied to other systems and other algorithms with similar characteristics.

  9. Parallel Backprojection: A Case Study in High-Performance Reconfigurable Computing

    Directory of Open Access Journals (Sweden)

    2009-03-01

    Full Text Available High-performance reconfigurable computing (HPRC) is a novel approach to providing large-scale computing power to modern scientific applications. Using both general-purpose processors and FPGAs allows application designers to exploit fine-grained and coarse-grained parallelism, achieving high degrees of speedup. One scientific application that benefits from this technique is backprojection, an image formation algorithm that can be used as part of a synthetic aperture radar (SAR) processing system. We present an implementation of backprojection for SAR on an HPRC system. Using simulated data taken at a variety of ranges, our implementation runs over 200 times faster than a similar software program, with an overall application speedup better than 50x. The backprojection application is easily parallelizable, achieving near-linear speedup when run on multiple nodes of a clustered HPRC system. The results presented can be applied to other systems and other algorithms with similar characteristics.

  10. Water cycle algorithm: A detailed standard code

    Science.gov (United States)

    Sadollah, Ali; Eskandar, Hadi; Lee, Ho Min; Yoo, Do Guen; Kim, Joong Hoon

    Inspired by the observation of the water cycle process and movements of rivers and streams toward the sea, a population-based metaheuristic algorithm, the water cycle algorithm (WCA) has recently been proposed. Lately, an increasing number of WCA applications have appeared and the WCA has been utilized in different optimization fields. This paper provides detailed open source code for the WCA, of which the performance and efficiency has been demonstrated for solving optimization problems. The WCA has an interesting and simple concept and this paper aims to use its source code to provide a step-by-step explanation of the process it follows.

  11. Efficient Backprojection-Based Synthetic Aperture Radar Computation with Many-Core Processors

    Directory of Open Access Journals (Sweden)

    Jongsoo Park

    2013-01-01

    Full Text Available Tackling computationally challenging problems with high efficiency often requires the combination of algorithmic innovation, advanced architecture, and thorough exploitation of parallelism. We demonstrate this synergy through synthetic aperture radar (SAR) via backprojection, an image reconstruction method that can require hundreds of TFLOPS. Computation cost is significantly reduced by our new algorithm of approximate strength reduction; data movement cost is economized by software locality optimizations facilitated by advanced architecture support; parallelism is fully harnessed in various patterns and granularities. We deliver over 35 billion backprojections per second throughput per compute node on an Intel® Xeon® processor E5-2670-based cluster, equipped with Intel® Xeon Phi™ coprocessors. This corresponds to processing a 3K×3K image within a second using a single node. Our study can be extended to other settings: backprojection is applicable elsewhere including medical imaging, approximate strength reduction is a general code transformation technique, and many-core processors are emerging as a solution to energy-efficient computing.

  12. Multi-GPU Acceleration of Branchless Distance Driven Projection and Backprojection for Clinical Helical CT.

    Science.gov (United States)

    Mitra, Ayan; Politte, David G; Whiting, Bruce R; Williamson, Jeffrey F; O'Sullivan, Joseph A

    2017-01-01

    Model-based image reconstruction (MBIR) techniques have the potential to generate high quality images from noisy measurements and a small number of projections, which can reduce the x-ray dose in patients. These MBIR techniques rely on projection and backprojection to refine an image estimate. One of the most widely used projectors for these modern MBIR-based techniques is the branchless distance-driven (DD) projection and backprojection. While this method produces superior quality images, the computational cost of iterative updates keeps it from being ubiquitous in clinical applications. In this paper, we provide several new parallelization ideas for concurrent execution of the DD projectors in multi-GPU systems using CUDA programming tools. We have introduced some novel schemes for dividing the projection data and image voxels over multiple GPUs to avoid runtime overhead and inter-device synchronization issues. We have also reduced the complexity of the overlap calculation of the algorithm by eliminating the common projection plane and directly projecting the detector boundaries onto image voxel boundaries. To reduce the time required for calculating the overlap between the detector edges and image voxel boundaries, we have proposed a pre-accumulation technique to accumulate image intensities in perpendicular 2D image slabs (from a 3D image) before projection and after backprojection to ensure our DD kernels run faster in parallel GPU threads. For the implementation of our iterative MBIR technique we use a parallel multi-GPU version of the alternating minimization (AM) algorithm with penalized likelihood update. The time performance using our proposed reconstruction method with Siemens Sensation 16 patient scan data shows an average of 24 times speedup using a single TITAN X GPU and 74 times speedup using 3 TITAN X GPUs in parallel for combined projection and backprojection.

  13. Filtered-backprojection reconstruction for a cone-beam computed tomography scanner with independent source and detector rotations

    Energy Technology Data Exchange (ETDEWEB)

    Rit, Simon, E-mail: simon.rit@creatis.insa-lyon.fr [Université de Lyon, CREATIS, CNRS UMR5220, Inserm U1044, INSA-Lyon, Université Lyon 1, Department of Radiation Oncology, Centre Léon Bérard, 28 rue Laennec, Lyon 69008 (France); Clackdoyle, Rolf [Laboratoire Hubert Curien, CNRS and Université Jean Monnet (UMR5516), 18 rue du Professeur Benoit Lauras, Saint Etienne F-42000 (France); Keuschnigg, Peter; Steininger, Philipp [Institute for Research and Development on Advanced Radiation Technologies (radART), Paracelsus Medical University, Strubergasse 16, Salzburg 5020, Austria and medPhoton GmbH, Strubergasse 16, Salzburg 5020 (Austria)

    2016-05-15

    Purpose: A new cone-beam CT scanner for image-guided radiotherapy (IGRT) can independently rotate the source and the detector along circular trajectories. Existing reconstruction algorithms are not suitable for this scanning geometry. The authors propose and evaluate a three-dimensional (3D) filtered-backprojection reconstruction for this situation. Methods: The source and the detector trajectories are tuned to image a field-of-view (FOV) that is offset with respect to the center-of-rotation. The new reconstruction formula is derived from the Feldkamp algorithm and results in a similar three-step algorithm: projection weighting, ramp filtering, and weighted backprojection. Simulations of a Shepp Logan digital phantom were used to evaluate the new algorithm with a 10 cm-offset FOV. A real cone-beam CT image with an 8.5 cm-offset FOV was also obtained from projections of an anthropomorphic head phantom. Results: The quality of the cone-beam CT images reconstructed using the new algorithm was similar to those using the Feldkamp algorithm which is used in conventional cone-beam CT. The real image of the head phantom exhibited comparable image quality to that of existing systems. Conclusions: The authors have proposed a 3D filtered-backprojection reconstruction for scanners with independent source and detector rotations that is practical and effective. This algorithm forms the basis for exploiting the scanner’s unique capabilities in IGRT protocols.

  14. Filtered-backprojection reconstruction for a cone-beam computed tomography scanner with independent source and detector rotations

    International Nuclear Information System (INIS)

    Rit, Simon; Clackdoyle, Rolf; Keuschnigg, Peter; Steininger, Philipp

    2016-01-01

    Purpose: A new cone-beam CT scanner for image-guided radiotherapy (IGRT) can independently rotate the source and the detector along circular trajectories. Existing reconstruction algorithms are not suitable for this scanning geometry. The authors propose and evaluate a three-dimensional (3D) filtered-backprojection reconstruction for this situation. Methods: The source and the detector trajectories are tuned to image a field-of-view (FOV) that is offset with respect to the center-of-rotation. The new reconstruction formula is derived from the Feldkamp algorithm and results in a similar three-step algorithm: projection weighting, ramp filtering, and weighted backprojection. Simulations of a Shepp Logan digital phantom were used to evaluate the new algorithm with a 10 cm-offset FOV. A real cone-beam CT image with an 8.5 cm-offset FOV was also obtained from projections of an anthropomorphic head phantom. Results: The quality of the cone-beam CT images reconstructed using the new algorithm was similar to those using the Feldkamp algorithm which is used in conventional cone-beam CT. The real image of the head phantom exhibited comparable image quality to that of existing systems. Conclusions: The authors have proposed a 3D filtered-backprojection reconstruction for scanners with independent source and detector rotations that is practical and effective. This algorithm forms the basis for exploiting the scanner’s unique capabilities in IGRT protocols.

  15. Filtered-backprojection reconstruction for a cone-beam computed tomography scanner with independent source and detector rotations.

    Science.gov (United States)

    Rit, Simon; Clackdoyle, Rolf; Keuschnigg, Peter; Steininger, Philipp

    2016-05-01

    A new cone-beam CT scanner for image-guided radiotherapy (IGRT) can independently rotate the source and the detector along circular trajectories. Existing reconstruction algorithms are not suitable for this scanning geometry. The authors propose and evaluate a three-dimensional (3D) filtered-backprojection reconstruction for this situation. The source and the detector trajectories are tuned to image a field-of-view (FOV) that is offset with respect to the center-of-rotation. The new reconstruction formula is derived from the Feldkamp algorithm and results in a similar three-step algorithm: projection weighting, ramp filtering, and weighted backprojection. Simulations of a Shepp Logan digital phantom were used to evaluate the new algorithm with a 10 cm-offset FOV. A real cone-beam CT image with an 8.5 cm-offset FOV was also obtained from projections of an anthropomorphic head phantom. The quality of the cone-beam CT images reconstructed using the new algorithm was similar to those using the Feldkamp algorithm which is used in conventional cone-beam CT. The real image of the head phantom exhibited comparable image quality to that of existing systems. The authors have proposed a 3D filtered-backprojection reconstruction for scanners with independent source and detector rotations that is practical and effective. This algorithm forms the basis for exploiting the scanner's unique capabilities in IGRT protocols.

  16. Alignment of Custom Standards by Machine Learning Algorithms

    Directory of Open Access Journals (Sweden)

    Adela Sirbu

    2010-09-01

    Full Text Available Building an efficient model for automatic alignment of terminologies would bring a significant improvement to the information retrieval process. We have developed and compared two machine-learning-based algorithms whose aim is to align two custom standards built on a three-level taxonomy, using kNN and SVM classifiers that work on a vector representation consisting of several similarity measures. The weights used by the kNN were optimized with an evolutionary algorithm, while the SVM classifier's hyper-parameters were optimized with a grid search. The training database was obtained semi-automatically using the Coma++ tool. The performance of our aligners is demonstrated by the results obtained on the test set.
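
    As a rough sketch of the classification setup described above (not the authors' code), the snippet below trains a kNN classifier on weighted similarity-feature vectors using scikit-learn; the three similarity features, the fixed weights standing in for the evolutionary-algorithm-tuned ones, and the training labels are all invented placeholders.

    ```python
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    rng = np.random.default_rng(0)
    # Each row: [string similarity, token overlap, synonym score] for one pair of terms.
    X_train = rng.random((200, 3))
    y_train = (X_train.mean(axis=1) > 0.5).astype(int)    # toy labels standing in for Coma++-derived ones

    weights = np.array([0.5, 0.3, 0.2])                   # the kind of weights an evolutionary algorithm could tune
    knn = KNeighborsClassifier(n_neighbors=5)
    knn.fit(X_train * weights, y_train)

    X_test = rng.random((10, 3))
    print(knn.predict(X_test * weights))                  # 1 = aligned pair, 0 = not aligned
    ```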

  17. [Standard algorithm of molecular typing of Yersinia pestis strains].

    Science.gov (United States)

    Eroshenko, G A; Odinokov, G N; Kukleva, L M; Pavlova, A I; Krasnov, Ia M; Shavina, N Iu; Guseva, N P; Vinogradova, N A; Kutyrev, V V

    2012-01-01

    The aim was to develop a standard algorithm for molecular typing of Yersinia pestis that establishes the subspecies, biovar and focus membership of a studied isolate, and to determine the characteristic genotypes of plague agent strains of the main and nonmain subspecies from various natural plague foci of the Russian Federation and the near abroad. Genotyping of 192 natural Y. pestis strains of the main and nonmain subspecies was performed using PCR, multilocus sequence typing and multilocus variable-number tandem-repeat analysis. A standard molecular typing algorithm for the plague agent was developed that differentiates Yersinia pestis in several stages: by main and nonmain subspecies, by biovar of the main subspecies, by specific subspecies, and by natural focus and geographic territory. The algorithm is based on three typing methods (PCR, multilocus sequence typing and multilocus variable-number tandem-repeat analysis) using standard DNA targets: the life-support genes terC, ilvN, inv, glpD, napA, rhaS and araC, and 7 variable tandem-repeat loci (ms01, ms04, ms06, ms07, ms46, ms62, ms70). The effectiveness of the developed algorithm is demonstrated on a large number of natural Y. pestis strains. Characteristic sequence types of Y. pestis strains of various subspecies and biovars, as well as MLVA7 genotypes of strains from natural plague foci of the Russian Federation and the near abroad, were established. Application of the algorithm will increase the effectiveness of epidemiological monitoring of the plague agent and of the analysis of plague epidemics and outbreaks, allowing the source of a strain and the routes of introduction of the infection to be established.

  18. Iterative total-variation reconstruction versus weighted filtered-backprojection reconstruction with edge-preserving filtering

    International Nuclear Information System (INIS)

    Zeng, Gengsheng L; Li Ya; Zamyatin, Alex

    2013-01-01

    Iterative image reconstruction with the total-variation (TV) constraint has become an active research area in recent years, especially in x-ray CT and MRI. Based on Green's one-step-late algorithm, this paper develops a transmission noise weighted iterative algorithm with a TV prior. This paper compares the reconstructions from this iterative TV algorithm with reconstructions from our previously developed non-iterative reconstruction method that consists of a noise-weighted filtered backprojection (FBP) reconstruction algorithm and a nonlinear edge-preserving post filtering algorithm. This paper gives a mathematical proof that the noise-weighted FBP provides an optimal solution. The results from both methods are compared using clinical data and computer simulation data. The two methods give comparable image quality, while the non-iterative method has the advantage of requiring much shorter computation times. (paper)
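
    The transmission-noise-weighted TV algorithm is not spelled out in the abstract, so the sketch below only illustrates the general structure of Green's one-step-late (OSL) update with a smoothed total-variation prior on a toy emission-type problem; the system matrix, data and penalty weight beta are invented, and this is not the authors' implementation.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_pix, n_rays = 64, 256
    A = rng.random((n_rays, n_pix))                           # toy system matrix standing in for a CT projector
    x_true = np.abs(np.sin(np.linspace(0, 3 * np.pi, n_pix))) + 0.1
    y = rng.poisson(A @ x_true).astype(float)                 # noisy projection data

    def tv_gradient(x, eps=1e-3):
        """Gradient of the smoothed 1-D TV penalty sum_i sqrt((x_{i+1} - x_i)^2 + eps)."""
        d = np.diff(x)
        g = d / np.sqrt(d ** 2 + eps)
        grad = np.zeros_like(x)
        grad[:-1] -= g
        grad[1:] += g
        return grad

    x, beta = np.ones(n_pix), 0.05
    for _ in range(100):
        ratio = y / np.maximum(A @ x, 1e-12)                  # data-fidelity ratio (EM step)
        denom = A.sum(axis=0) + beta * tv_gradient(x)         # one-step-late: prior gradient at the current estimate
        x = x * (A.T @ ratio) / np.maximum(denom, 1e-12)
    ```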

  19. A simple method to back-project isocenter dose of radiotherapy treatments using EPID transit dosimetry

    Energy Technology Data Exchange (ETDEWEB)

    Silveira, T.B.; Cerbaro, B.Q.; Rosa, L.A.R. da, E-mail: thiago.fisimed@gmail.com, E-mail: tbsilveira@inca.gov.br [Instituto de Radioproteção e Dosimetria (IRD/CNEN-RJ), Rio de Janeiro - RJ (Brazil)

    2017-07-01

    The aim of this work was to implement a simple algorithm to evaluate the isocenter dose in a phantom by back-projecting the transmitted dose acquired with an Electronic Portal Imaging Device (EPID) on a Varian Trilogy accelerator with nominal 6 and 10 MV photon beams. The algorithm was developed in MATLAB to calibrate the EPID-measured dose in absolute dose, using a deconvolution process, and to incorporate all scattering and attenuation contributions due to photon interactions with the phantom. The modeling process was simplified by using empirical curve fits to describe the contribution of scattering and attenuation effects. The implemented algorithm and method were validated using 19 patient treatment plans with 104 clinical irradiation fields projected onto the phantom. EPID absolute dose calibration by deconvolution showed percent deviations lower than 1%. The final method validation presented average percent deviations between isocenter doses calculated by back-projection and isocenter doses determined with an ionization chamber of 1.86% (SD of 1.00%) and -0.94% (SD of 0.61%) for 6 and 10 MV, respectively. Normalized field-by-field analysis showed deviations smaller than 2% for 89% of all data for 6 MV beams and 94% for 10 MV beams. It was concluded that the proposed algorithm has sufficient accuracy to be used for in vivo dosimetry, being sensitive enough to detect dose delivery errors larger than 3-4% for conformal and intensity-modulated radiation therapy techniques. (author)

  20. A review of lossless audio compression standards and algorithms

    Science.gov (United States)

    Muin, Fathiah Abdul; Gunawan, Teddy Surya; Kartiwi, Mira; Elsheikh, Elsheikh M. A.

    2017-09-01

    Over the years, lossless audio compression has gained popularity as researchers and businesses have become more aware of the need for better quality and higher storage demands. This paper analyses various lossless audio coding algorithms and standards that are used and available in the market, focusing on Linear Predictive Coding (LPC) due to its popularity and robustness in audio compression; other prediction methods are nevertheless compared to verify this. Advanced representations of LPC, such as LSP decomposition techniques, are also discussed within this paper.
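
    To make the LPC idea concrete, here is a small, hedged sketch (not from the paper) that estimates prediction coefficients with the Levinson-Durbin recursion and forms the prediction residual that a lossless coder would then entropy-code; the signal, sampling rate and prediction order are arbitrary choices.

    ```python
    import numpy as np

    def lpc(x, order):
        """Levinson-Durbin recursion: error-filter coefficients a (a[0] = 1) and final error power."""
        x = np.asarray(x, dtype=float)
        r = np.array([np.dot(x[:len(x) - k], x[k:]) for k in range(order + 1)])
        a = np.zeros(order + 1)
        a[0], err = 1.0, r[0]
        for i in range(1, order + 1):
            acc = r[i] + np.dot(a[1:i], r[i - 1:0:-1])
            k = -acc / err
            a_prev = a.copy()
            a[1:i] = a_prev[1:i] + k * a_prev[i - 1:0:-1]
            a[i] = k
            err *= 1.0 - k * k
        return a, err

    fs, order = 16000, 8
    t = np.arange(2048) / fs
    signal = np.sin(2 * np.pi * 440 * t) + 0.01 * np.random.default_rng(1).standard_normal(t.size)
    a, _ = lpc(signal, order)
    pred = np.array([-np.dot(a[1:], signal[n - order:n][::-1]) for n in range(order, signal.size)])
    residual = signal[order:] - pred      # the small residual is what a lossless coder entropy-codes
    ```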

  1. Optimization, evaluation, and comparison of standard algorithms for image reconstruction with the VIP-PET

    Science.gov (United States)

    Mikhaylova, E.; Kolstein, M.; De Lorenzo, G.; Chmeissani, M.

    2014-01-01

    A novel positron emission tomography (PET) scanner design based on a room-temperature pixelated CdTe solid-state detector is being developed within the framework of the Voxel Imaging PET (VIP) Pathfinder project [1]. The simulation results show the great potential of the VIP to produce high-resolution images even in extremely challenging conditions such as the screening of a human head [2]. With an unprecedentedly high channel density (450 channels/cm³), image reconstruction is a challenge. Optimization is therefore needed to find the best algorithm in order to correctly exploit the promising detector potential. The following reconstruction algorithms are evaluated: 2-D Filtered Backprojection (FBP), Ordered Subset Expectation Maximization (OSEM), List-Mode OSEM (LM-OSEM), and the Origin Ensemble (OE) algorithm. The evaluation is based on the comparison of a true image phantom with a set of reconstructed images obtained by each algorithm. This is achieved by calculation of image quality merit parameters such as the bias, the variance and the mean square error (MSE). A systematic optimization of each algorithm is performed by varying the reconstruction parameters, such as the cutoff frequency of the noise filters and the number of iterations. The region of interest (ROI) analysis of the reconstructed phantom is also performed for each algorithm and the results are compared. Additionally, the performance of the image reconstruction methods is compared by calculating the modulation transfer function (MTF). The reconstruction time is also taken into account to choose the optimal algorithm. The analysis is based on GAMOS [3] simulations including the expected CdTe and electronics specifics. PMID:25018777
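
    As a minimal illustration of the merit parameters mentioned above, the snippet below computes bias, variance and MSE of reconstructions against a known phantom; the arrays are invented stand-ins, and the statistics are computed over pixels rather than over noise realizations as a simplification.

    ```python
    import numpy as np

    def image_merits(true_img, recon):
        diff = recon - true_img
        return diff.mean(), diff.var(), np.mean(diff ** 2)    # bias, variance, MSE (= variance + bias^2)

    rng = np.random.default_rng(0)
    phantom = rng.random((64, 64))
    recons = {"FBP": phantom + rng.normal(0.00, 0.05, phantom.shape),
              "OSEM": phantom + rng.normal(0.01, 0.03, phantom.shape)}
    for name, img in recons.items():
        print(name, image_merits(phantom, img))
    ```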

  2. Metal artefact reduction for a dental cone beam CT image using image segmentation and backprojection filters

    International Nuclear Information System (INIS)

    Mohammadi, Mahdi; Khotanlou, Hassan; Mohammadi, Mohammad

    2011-01-01

    Full text: Owing to its low dose delivery and fast scanning, dental Cone Beam CT (CBCT) is the latest technology being implemented for a range of dental imaging tasks. The presence of metallic objects, including amalgam or gold fillings, in the mouth introduces artefacts into the reconstructed image of the human jaw. The feasibility of a fast and accurate approach for metal artefact reduction in dental CBCT is investigated. The current study investigates metal artefact reduction using image segmentation and modification of several sinograms. The application of several algorithms is evaluated in order to reduce metal effects such as beam hardening, streak artefacts and intense noise. The proposed method includes three stages: pre-processing, reconstruction and post-processing. In the pre-processing stage, several phase and frequency filters were applied to reduce the noise level. In the second stage, based on the specific sinogram obtained for each segment, spline interpolation and weighted backprojection filters were applied to reconstruct the original image. A three-dimensional filter was then applied to the reconstructed images to improve image quality. Results showed that, compared to other available filters, standard frequency filters have a significant influence in the pre-processing stage (ΔHU = 48 ± 6). In addition, with the streak artefact the probability of the beam hardening artefact increases. In the post-processing stage, the application of three-dimensional filters improves the quality of the reconstructed images (see Fig. 1). Conclusion: The proposed method reduces metal artefacts, especially where more than one metal object is implanted in the region of interest.

  3. Standard Sine Fitting Algorithms Applied To Blade Tip Timing Data

    Directory of Open Access Journals (Sweden)

    Kaźmierczak Krzysztof

    2014-12-01

    Full Text Available Blade Tip Timing (BTT) is a non-intrusive method to measure blade vibration in turbomachinery. Time of Arrival (TOA) is recorded when a blade is passing a stationary sensor. The measurement data, in the form of an undersampled (aliased) tip-deflection signal, are difficult to analyze with standard signal processing methods such as digital filters or the Fourier transform. Several indirect methods are applied to process TOA sequences, such as reconstruction of the aliased spectrum and least-squares fitting to a harmonic oscillator model. We used the standard sine fitting algorithms provided by IEEE-STD-1057 to estimate blade vibration parameters. Blade-tip displacement was simulated in the time domain using an SDOF model, sampled by stationary sensors and then processed with the sinefit.m toolkit. We evaluated several configurations of sensor placement, noise level and number of data points. Results of the linear sine fitting, performed with the frequency known a priori, were compared with the non-linear ones. Some of the non-linear iterations did not converge. The algorithms and test results are intended for use in the analysis of asynchronous blade vibration.
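
    A hedged sketch of the linear, known-frequency three-parameter sine fit in the spirit of IEEE-STD-1057 is given below; it is not the sinefit.m toolkit, and the simulated arrival times, frequency and noise level are arbitrary.

    ```python
    import numpy as np

    def sine_fit_3param(t, y, freq):
        """Least-squares fit of y ≈ A*cos(w t) + B*sin(w t) + C with the frequency known a priori."""
        w = 2 * np.pi * freq
        M = np.column_stack([np.cos(w * t), np.sin(w * t), np.ones_like(t)])
        (A, B, C), *_ = np.linalg.lstsq(M, y, rcond=None)
        return np.hypot(A, B), np.arctan2(B, A), C            # amplitude, phase (y ≈ R cos(w t - phase) + C), offset

    rng = np.random.default_rng(2)
    t = np.sort(rng.uniform(0, 0.5, 40))                      # irregular, aliased sampling as in blade tip timing
    y = 1.2 * np.cos(2 * np.pi * 95 * t - 0.7) + 0.1 + 0.05 * rng.standard_normal(t.size)
    print(sine_fit_3param(t, y, 95.0))                        # approximately (1.2, 0.7, 0.1)
    ```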

  4. Four (Algorithms) in One (Bag): An Integrative Framework of Knowledge for Teaching the Standard Algorithms of the Basic Arithmetic Operations

    Science.gov (United States)

    Raveh, Ira; Koichu, Boris; Peled, Irit; Zaslavsky, Orit

    2016-01-01

    In this article we present an integrative framework of knowledge for teaching the standard algorithms of the four basic arithmetic operations. The framework is based on a mathematical analysis of the algorithms, a connectionist perspective on teaching mathematics and an analogy with previous frameworks of knowledge for teaching arithmetic…

  5. Parallel Implementation of the Katsevich's FBP Algorithm

    Directory of Open Access Journals (Sweden)

    2006-01-01

    Full Text Available For spiral cone-beam CT, parallel computing is an effective approach to resolving the heavy computational burden. It is well known that the major computation time is spent in the backprojection step for either filtered-backprojection (FBP) or backprojected-filtration (BPF) algorithms. With the cone-beam cover method [1], the backprojection procedure is driven by cone-beam projections, and every cone-beam projection can be backprojected independently. Based on this fact, we develop a parallel implementation of Katsevich's FBP algorithm. All numerical experiments were carried out on a Linux cluster. In one typical experiment, the sequential reconstruction time is 781.3 seconds, while the parallel reconstruction time is 25.7 seconds with 32 processors.

  6. Mental Computation or Standard Algorithm? Children's Strategy Choices on Multi-Digit Subtractions

    Science.gov (United States)

    Torbeyns, Joke; Verschaffel, Lieven

    2016-01-01

    This study analyzed children's use of mental computation strategies and the standard algorithm on multi-digit subtractions. Fifty-eight Flemish 4th graders of varying mathematical achievement level were individually offered subtractions that either stimulated the use of mental computation strategies or the standard algorithm in one choice and two…

  7. Energy efficient data sorting using standard sorting algorithms

    KAUST Repository

    Bunse, Christian

    2011-01-01

    Protecting the environment by saving energy and thus reducing carbon dioxide emissions is one of today's hottest and most challenging topics. Although the case for reducing energy consumption is clear from ecological and business perspectives, from a technological point of view the realization, especially for mobile systems, still falls behind expectations. Novel strategies that allow (software) systems to adapt themselves dynamically at runtime can be used effectively to reduce energy consumption. This paper presents a case study that examines the impact of using an energy management component that dynamically selects and applies the "optimal" sorting algorithm, from an energy perspective, during multi-party mobile communication. Interestingly, the results indicate that algorithmic performance is not key and that dynamically switching algorithms at runtime does have a significant impact on energy consumption. © Springer-Verlag Berlin Heidelberg 2011.

  8. Technical note: RabbitCT--an open platform for benchmarking 3D cone-beam reconstruction algorithms.

    Science.gov (United States)

    Rohkohl, C; Keck, B; Hofmann, H G; Hornegger, J

    2009-09-01

    Fast 3D cone beam reconstruction is mandatory for many clinical workflows. For that reason, researchers and industry work hard on hardware-optimized 3D reconstruction. Backprojection is a major component of many reconstruction algorithms that require a projection of each voxel onto the projection data, including data interpolation, before updating the voxel value. This step is the bottleneck of most reconstruction algorithms and the focus of optimization in recent publications. A crucial limitation, however, of these publications is that the presented results are not comparable to each other. This is mainly due to variations in data acquisitions, preprocessing, and chosen geometries and the lack of a common publicly available test dataset. The authors provide such a standardized dataset that allows for substantial comparison of hardware accelerated backprojection methods. They developed an open platform RabbitCT (www.rabbitCT.com) for worldwide comparison in backprojection performance and ranking on different architectures using a specific high resolution C-arm CT dataset of a rabbit. This includes a sophisticated benchmark interface, a prototype implementation in C++, and image quality measures. At the time of writing, six backprojection implementations are already listed on the website. Optimizations include multithreading using Intel threading building blocks and OpenMP, vectorization using SSE, and computation on the GPU using CUDA 2.0. There is a need for objectively comparing backprojection implementations for reconstruction algorithms. RabbitCT aims to provide a solution to this problem by offering an open platform with fair chances for all participants. The authors are looking forward to a growing community and await feedback regarding future evaluations of novel software- and hardware-based acceleration schemes.
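
    As a rough illustration of the backprojection step that RabbitCT benchmarks (and not the RabbitCT reference implementation), the sketch below projects voxel centres onto the detector with a hypothetical 3x4 projection matrix, bilinearly interpolates the projection image and accumulates distance-weighted values.

    ```python
    import numpy as np

    def backproject_view(volume, xyz, proj, P):
        """volume: flat voxel array to update; xyz: (N, 3) voxel centres; proj: 2-D projection; P: 3x4 matrix."""
        h = np.c_[xyz, np.ones(len(xyz))] @ P.T               # homogeneous detector coordinates
        u, v, w = h[:, 0] / h[:, 2], h[:, 1] / h[:, 2], h[:, 2]
        u0, v0 = np.floor(u).astype(int), np.floor(v).astype(int)
        fu, fv = u - u0, v - v0
        nv, nu = proj.shape
        ok = (u0 >= 0) & (u0 < nu - 1) & (v0 >= 0) & (v0 < nv - 1)
        val = np.zeros(len(xyz))
        i, j, a, b = v0[ok], u0[ok], fu[ok], fv[ok]
        val[ok] = ((1 - a) * (1 - b) * proj[i, j] + a * (1 - b) * proj[i, j + 1]
                   + (1 - a) * b * proj[i + 1, j] + a * b * proj[i + 1, j + 1])
        volume += val / np.maximum(w, 1e-9) ** 2              # FDK-style distance weighting
        return volume
    ```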

  9. A parallel row-based algorithm for standard cell placement with integrated error control

    Science.gov (United States)

    Sargent, Jeff S.; Banerjee, Prith

    1989-01-01

    A new row-based parallel algorithm for standard-cell placement targeted for execution on a hypercube multiprocessor is presented. Key features of this implementation include a dynamic simulated-annealing schedule, row-partitioning of the VLSI chip image, and two novel approaches to control error in parallel cell-placement algorithms: (1) Heuristic Cell-Coloring; (2) Adaptive Sequence Length Control.

  10. ANALYSIS OF THE CHARACTERISTICS OF INTERNATIONAL STANDARD ALGORITHMS «LIGHTWEIGHT CRYPTOGRAPHY» – ISO/IEC 29192-3:2012

    Directory of Open Access Journals (Sweden)

    A. S. Poljakov

    2014-01-01

    Full Text Available Data on the characteristics of the international standard «lightweight cryptography» algorithms (ISO/IEC 29192-3:2012) in hardware implementations based on FPGA microchips are provided. A comparison of the characteristics of these algorithms with those of several widely used standard encryption algorithms is made, and the possibilities of lightweight cryptography algorithms are evaluated.

  11. Medical image registration algorithms assesment Bronze Standard application enactment on grids using the MOTEUR workflow engine

    CERN Document Server

    Glatard, T; Pennec, X

    2006-01-01

    Medical image registration is a pre-processing step needed for many medical image analysis procedures. A very large number of registration algorithms are available today, but their performance is often not known and very difficult to assess due to the lack of a gold standard. The Bronze Standard algorithm is a very data- and compute-intensive statistical approach for quantifying registration algorithm accuracy. In this paper, we describe the Bronze Standard application and discuss the need for grids to tackle such computations on medical image databases. We demonstrate MOTEUR, a service-based workflow engine optimized for data-intensive applications. MOTEUR eases the enactment of the Bronze Standard and similar applications on the EGEE production grid infrastructure. It is a generic workflow engine, based on current standards and freely available, that can be used to instrument legacy application code at low cost.

  12. Neural network Hilbert transform based filtered backprojection for fast inline x-ray inspection

    Science.gov (United States)

    Janssens, Eline; De Beenhouwer, Jan; Van Dael, Mattias; De Schryver, Thomas; Van Hoorebeke, Luc; Verboven, Pieter; Nicolai, Bart; Sijbers, Jan

    2018-03-01

    X-ray imaging is an important tool for quality control since it makes it possible to inspect the interior of products in a non-destructive way. Conventional x-ray imaging, however, is slow and expensive. Inline x-ray inspection, on the other hand, can pave the way towards fast and individual quality control, provided that a sufficiently high throughput can be achieved at minimal cost. To meet these criteria, an inline inspection acquisition geometry is proposed in which the object moves and rotates on a conveyor belt while it passes a fixed source and detector. Moreover, for this acquisition geometry, a new neural-network-based reconstruction algorithm is introduced: the neural network Hilbert transform based filtered backprojection. The proposed algorithm is evaluated on both simulated and real inline x-ray data and has been shown to generate high-quality reconstructions of 400 × 400 pixels within 200 ms, thereby meeting the high-throughput criteria.

  13. A Comparison of a Standard Genetic Algorithm with a Hybrid Genetic Algorithm Applied to Cell Formation Problem

    Directory of Open Access Journals (Sweden)

    Waqas Javaid

    2014-09-01

    Full Text Available Though there are a number of benefits associated with cellular manufacturing systems, their implementation (identification of part families and corresponding machine groups) for real-life problems is still a challenging task. To handle the complexity of optimizing multiple objectives and larger problem sizes, most researchers in the past two decades or so have focused on developing genetic algorithm (GA) based techniques. Recently this trend has shifted from standard GA to hybrid GA (HGA) based approaches in the quest for more effective convergence to the optimum solution. To demonstrate that HGAs possess better convergence abilities than standard GAs, a methodology initially based on a standard GA and later hybridized with a local search heuristic (LSH) was developed during this research. Computational experience shows that the HGA maintains its accuracy level as the problem size increases, whereas the standard GA loses its effectiveness as the problem size grows.

  14. Public Conceptions of Algorithms and Representations in the Common Core State Standards for Mathematics

    Science.gov (United States)

    Nanna, Robert J.

    2016-01-01

    Algorithms and representations have been an important aspect of the work of mathematics, especially for understanding concepts and communicating ideas about concepts and mathematical relationships. They have played a key role in various mathematics standards documents, including the Common Core State Standards for Mathematics. However, there have…

  15. A local region of interest image reconstruction via filtered backprojection for fan-beam differential phase-contrast computed tomography

    International Nuclear Information System (INIS)

    Qi Zhihua; Chen Guanghong

    2007-01-01

    Recently, x-ray differential phase contrast computed tomography (DPC-CT) has been experimentally implemented using a conventional source combined with several gratings. Images were reconstructed using a parallel-beam reconstruction formula. However, parallel-beam reconstruction formulae are not directly applicable for a large image object where the parallel-beam approximation fails. In this note, we present a new image reconstruction formula for fan-beam DPC-CT. There are two major features in this algorithm: (1) it enables the reconstruction of a local region of interest (ROI) using data acquired from an angular interval shorter than 180° + fan angle and (2) it still preserves the filtered backprojection structure. Numerical simulations have been conducted to validate the image reconstruction algorithm. (note)

  16. Relating backprojection images to kinematics and dynamic source models

    Science.gov (United States)

    Yin, J.; Denolle, M.

    2017-12-01

    Backprojection (BP) of teleseismic P waves is a method widely used to study the evolution of earthquake radiation and is particularly effective for large earthquakes. We can harness details on the spatiotemporal evolution of the rupture process from waveform similarity or coherency. A direct relation between these kinematic observations to earthquake physics is critical. Theoretical analysis indicates that high-frequency bursts can be related to abrupt changes in rupture velocity (e.g. stopping of the rupture or kinks on the fault, e.g. Madariaga, 1976; Madariaga et al., 2006). Moreover, the BP images are thought to be equivalent to either slip or slip rate on the fault, provided that the Green's functions from the sources to the receivers are incoherent delta functions (Fukuhata et al., 2014). Furthermore, recent studies propose that the frequency dependent features of BP results can reflect the stress status, frictional and/or geometrical heterogeneity on the fault surface (e.g. Huang et al., 2012; Lay et al., 2012; Yao et al., 2013; Yin et al., 2016, etc.). With this promising background, we attempt to relate the BP results and earthquake source process through kinematic and dynamic source models. We build synthetic seismic waveforms and trace them back to the fault surface using synthetic backprojection. We carry the 3D kinematic source models using Crempien and Archuleta (2014) and the 2D kinematic models using FDMap (Dunham et al., 2011). By varying the source models such as the friction laws and fault geometries, we directly compare the BP results with the ground truth earthquake sources and further explore the possible relation to the source properties. To simplify our problem and exclude the potential effects from complex earth structure, our tests are carried out in a purely elastic whole space, allowing us to solve analytically for the far-field body waves. From these systematical tests and comparisons, we aim at building a comprehensive relation between

  17. How Much Can We Hope to Resolve in Earthquake Rupture Processes with Back-projection

    Science.gov (United States)

    Fan, W.; Shearer, P. M.

    2015-12-01

    Back-projection has been proven to be reliable and effective for unraveling complicated earthquake rupture processes. It is very robust because the method makes few a priori assumptions about the fault geometry or rupture speed, and is relatively insensitive to 3D velocity variations. As most studies use array data at high frequencies for back-projection imaging, the results sometimes suffer from artifacts, limited resolution, and unclear physical explanations. We have found that improved back-projection results can be obtained utilizing global data. First, global data can often provide fairly uniform azimuthal coverage, which improves spatial resolution and reduces back-projection artifacts, permitting hidden features in the ruptures to be studied in detail. Second, the good azimuthal coverage also enables back-projection to be performed at relatively low frequencies (0.05 to 0.2 Hz), which can fill in the gap between moment tensor/finite-fault inversions (low frequency content) and high-frequency back-projection/beam-forming. Third, P-wave polarity differences among global stations will affect the maximum coherency of back-projected power as a function of source location, which can be used to resolve the spatially varying focal mechanisms of complicated earthquakes involving multiple fault segments. We plan to conduct synthetic tests to explore the resolution and uncertainty limits of global back-projection across multiple frequency bands. Ultimately, we hope to extend the potential of back-projection methods with global array data, while exploring the theoretical limits of the method.

  18. Standardized evaluation of algorithms for computer-aided diagnosis of dementia based on structural MRI

    DEFF Research Database (Denmark)

    Bron, Esther E.; Smits, Marion; van der Flier, Wiesje M.

    2015-01-01

    Abstract Algorithms for computer-aided diagnosis of dementia based on structural MRI have demonstrated high performance in the literature, but are difficult to compare as different data sets and methodology were used for evaluation. In addition, it is unclear how the algorithms would perform...... on previously unseen data, and thus, how they would perform in clinical practice when there is no real opportunity to adapt the algorithm to the data at hand. To address these comparability, generalizability and clinical applicability issues, we organized a grand challenge that aimed to objectively compare......, patients with mild cognitive impairment and healthy controls. The diagnosis based on clinical criteria was used as reference standard, as it was the best available reference despite its known limitations. For evaluation, a previously unseen test set was used consisting of 354 T1-weighted MRI scans...

  19. Algorithms

    Indian Academy of Sciences (India)

    have been found in Vedic Mathematics, which are dated much before Euclid's algorithm. A programming language is used to describe an algorithm for execution on a computer. An algorithm expressed using a programming language is called a program. From activities 1-3, we can observe that: • Each activity is a command.

  20. Performance evaluation of grid-enabled registration algorithms using bronze-standards

    CERN Document Server

    Glatard, T; Montagnat, J

    2006-01-01

    Evaluating registration algorithms is difficult due to the lack of a gold standard in most clinical procedures. The bronze standard is a real-data-based statistical method providing an alternative registration reference through a computationally intensive image database registration procedure. In this paper we propose an efficient implementation of this method through a grid-interfaced workflow enactor enabling the concurrent processing of hundreds of image registrations in only a couple of hours. The performance of two different grid infrastructures was compared. We computed the accuracy of 4 different rigid registration algorithms on longitudinal MRI images of brain tumors. Results showed an average subvoxel accuracy of 0.4 mm and 0.15 degrees in rotation.

  1. A beamforming algorithm for bistatic SAR image formation

    Science.gov (United States)

    Jakowatz, Charles V., Jr.; Wahl, Daniel E.; Yocky, David A.

    2010-04-01

    Beamforming is a methodology for collection-mode-independent SAR image formation. It is essentially equivalent to backprojection. The authors have in previous papers developed this idea and discussed the advantages and disadvantages of the approach to monostatic SAR image formation vis-à-vis the more standard and time-tested polar formatting algorithm (PFA). In this paper we show that beamforming for bistatic SAR imaging leads again to a very simple image formation algorithm that requires a minimal number of lines of code and that allows the image to be directly formed onto a three-dimensional surface model, thus automatically creating an orthorectified image. The same disadvantage of beamforming applied to monostatic SAR imaging applies to the bistatic case, however, in that the execution time for the beamforming algorithm is quite long compared to that of PFA. Fast versions of beamforming do exist to help alleviate this issue. Results of image reconstructions from phase history data are presented.
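
    The snippet below is a hedged sketch of the pixel-driven bistatic backprojection idea described above, not the authors' code; the geometry arrays, fast-time sampling and carrier frequency are placeholders supplied by the caller.

    ```python
    import numpy as np

    C = 299792458.0  # speed of light (m/s)

    def bistatic_backprojection(pixels, tx_pos, rx_pos, pulses, fast_time, f0):
        """pixels: (P, 3) positions; tx_pos/rx_pos: (N, 3) per-pulse antenna positions;
        pulses: (N, M) complex range-compressed data sampled at fast_time (s); f0: carrier (Hz)."""
        image = np.zeros(len(pixels), dtype=complex)
        for tx, rx, pulse in zip(tx_pos, rx_pos, pulses):
            delay = (np.linalg.norm(pixels - tx, axis=1) + np.linalg.norm(pixels - rx, axis=1)) / C
            sample = (np.interp(delay, fast_time, pulse.real)
                      + 1j * np.interp(delay, fast_time, pulse.imag))
            image += sample * np.exp(2j * np.pi * f0 * delay)  # undo the carrier phase at each pixel
        return image
    ```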

  2. Reproducibility of a Standardized Actigraphy Scoring Algorithm for Sleep in a US Hispanic/Latino Population.

    Science.gov (United States)

    Patel, Sanjay R; Weng, Jia; Rueschman, Michael; Dudley, Katherine A; Loredo, Jose S; Mossavar-Rahmani, Yasmin; Ramirez, Maricelle; Ramos, Alberto R; Reid, Kathryn; Seiger, Ashley N; Sotres-Alvarez, Daniela; Zee, Phyllis C; Wang, Rui

    2015-09-01

    While actigraphy is considered objective, the process of setting rest intervals to calculate sleep variables is subjective. We sought to evaluate the reproducibility of actigraphy-derived measures of sleep using a standardized algorithm for setting rest intervals. Observational study. Community-based. A random sample of 50 adults aged 18-64 years free of severe sleep apnea participating in the Sueño sleep ancillary study to the Hispanic Community Health Study/Study of Latinos. N/A. Participants underwent 7 days of continuous wrist actigraphy and completed daily sleep diaries. Studies were scored twice by each of two scorers. Rest intervals were set using a standardized hierarchical approach based on event marker, diary, light, and activity data. Sleep/wake status was then determined for each 30-sec epoch using a validated algorithm, and this was used to generate 11 variables: mean nightly sleep duration, nap duration, 24-h sleep duration, sleep latency, sleep maintenance efficiency, sleep fragmentation index, sleep onset time, sleep offset time, sleep midpoint time, standard deviation of sleep duration, and standard deviation of sleep midpoint. Intra-scorer intraclass correlation coefficients (ICCs) were high, ranging from 0.911 to 0.995 across all 11 variables. Similarly, inter-scorer ICCs were high, also ranging from 0.911 to 0.995, and mean inter-scorer differences were small. Bland-Altman plots did not reveal any systematic disagreement in scoring. With use of a standardized algorithm to set rest intervals, scoring of actigraphy for the purpose of generating a wide array of sleep variables is highly reproducible. © 2015 Associated Professional Sleep Societies, LLC.

  3. Algorithms

    Indian Academy of Sciences (India)

    algorithms such as synthetic (polynomial) division have been found in Vedic Mathematics, which are dated much before Euclid's algorithm. A programming language ... [Figure 2: Symbols used in the flowchart language to represent Assignment (e.g., x := sin(theta)), Read (e.g., Read A, B, C) and Print (e.g., Print x, y, z).]

  4. Algorithms

    Indian Academy of Sciences (India)

    In the previous articles, we have discussed various common data-structures such as arrays, lists, queues and trees and illustrated the widely used algorithm design paradigm referred to as 'divide-and-conquer'. Although there has been a large effort in realizing efficient algorithms, there are not many universally accepted ...

  5. Standardized diagnostic interviews, criteria, and algorithms for mental disorders: garbage in, garbage out.

    Science.gov (United States)

    Linden, Michael; Muschalla, Beate

    2012-09-01

    There is a general consensus that diagnoses for mental disorders should be based on criteria and algorithms as given in ICD or DSM. Standardized clinical interviews are recommended as diagnostic methods. In ICD and DSM, much emphasis is put on algorithms, while the underlying criteria get much less attention. The question is how valid are the criteria that are collected by structured diagnostic interviews. 209 patients from a cardiology inpatient unit were interviewed with the Mini International Neuropsychiatric Interview (MINI). 32 (15.3%) were diagnosed as suffering from a major depressive episode or dysthymia. Additionally, a thorough clinical examination was done by a psychiatric expert in 15 patients. The standardized diagnosis of present major depression was reaffirmed in one. In total, four patients were suffering from some kind of depressive disorder presently or life time. Two patients were suffering from anxiety disorders, two from adjustment disorders, and four from different types of organic brain disorders. Most important, there are 3 out of 15 who are not mentally ill. Our observations show that standardized diagnostic interviews cannot be used to make specific differential diagnoses, but rather catch unspecific syndromes. This is partly due to the fact that the wording, definition, and understanding of the underlying criteria is rather vague. This is an even greater problem if there is any somatic comorbidity. In the revision of ICD and DSM, a glossary of psychopathological terms and guidelines for the training of clinicians should be included.

  6. An adaptive filtered back-projection for photoacoustic image reconstruction

    International Nuclear Information System (INIS)

    Huang, He; Bustamante, Gilbert; Peterson, Ralph; Ye, Jing Yong

    2015-01-01

    Purpose: The purpose of this study is to develop an improved filtered-back-projection (FBP) algorithm for photoacoustic tomography (PAT), which allows image reconstruction with higher quality compared to images reconstructed through traditional algorithms. Methods: A rigorous expression of a weighting function has been derived directly from a photoacoustic wave equation and used as a ramp filter in Fourier domain. The authors’ new algorithm utilizes this weighting function to precisely calculate each photoacoustic signal’s contribution and then reconstructs the image based on the retarded potential generated from the photoacoustic sources. In addition, an adaptive criterion has been derived for selecting the cutoff frequency of a low pass filter. Two computational phantoms were created to test the algorithm. The first phantom contained five spheres with each sphere having different absorbances. The phantom was used to test the capability for correctly representing both the geometry and the relative absorbed energy in a planar measurement system. The authors also used another phantom containing absorbers of different sizes with overlapping geometry to evaluate the performance of the new method for complicated geometry. In addition, random noise background was added to the simulated data, which were obtained by using an arc-shaped array of 50 evenly distributed transducers that spanned 160° over a circle with a radius of 65 mm. A normalized factor between the neighbored transducers was applied for correcting measurement signals in PAT simulations. The authors assumed that the scanned object was mounted on a holder that rotated over the full 360° and the scans were set to a sampling rate of 20.48 MHz. Results: The authors have obtained reconstructed images of the computerized phantoms by utilizing the new FBP algorithm. From the reconstructed image of the first phantom, one can see that this new approach allows not only obtaining a sharp image but also showing

  7. SAR focusing of P-band ice sounding data using back-projection

    DEFF Research Database (Denmark)

    Kusk, Anders; Dall, Jørgen

    2010-01-01

    SAR processing can be applied to ice sounder data to improve along-track resolution and clutter suppression. This paper presents a time-domain back-projection technique for SAR focusing of ice sounder data. With this technique, variations in flight track and ice surface slope can be accurately ac...

  8. Task-based evaluation of segmentation algorithms for diffusion-weighted MRI without using a gold standard.

    Science.gov (United States)

    Jha, Abhinav K; Kupinski, Matthew A; Rodríguez, Jeffrey J; Stephen, Renu M; Stopeck, Alison T

    2012-07-07

    In many studies, the estimation of the apparent diffusion coefficient (ADC) of lesions in visceral organs in diffusion-weighted (DW) magnetic resonance images requires an accurate lesion-segmentation algorithm. To evaluate these lesion-segmentation algorithms, region-overlap measures are used currently. However, the end task from the DW images is accurate ADC estimation, and the region-overlap measures do not evaluate the segmentation algorithms on this task. Moreover, these measures rely on the existence of gold-standard segmentation of the lesion, which is typically unavailable. In this paper, we study the problem of task-based evaluation of segmentation algorithms in DW imaging in the absence of a gold standard. We first show that using manual segmentations instead of gold-standard segmentations for this task-based evaluation is unreliable. We then propose a method to compare the segmentation algorithms that does not require gold-standard or manual segmentation results. The no-gold-standard method estimates the bias and the variance of the error between the true ADC values and the ADC values estimated using the automated segmentation algorithm. The method can be used to rank the segmentation algorithms on the basis of both the ensemble mean square error and precision. We also propose consistency checks for this evaluation technique.

  9. A comparison of earthquake backprojection imaging methods for dense local arrays

    Science.gov (United States)

    Beskardes, G. D.; Hole, J. A.; Wang, K.; Michaelides, M.; Wu, Q.; Chapman, M. C.; Davenport, K. K.; Brown, L. D.; Quiros, D. A.

    2018-03-01

    Backprojection imaging has recently become a practical method for local earthquake detection and location due to the deployment of densely sampled, continuously recorded, local seismograph arrays. While backprojection sometimes utilizes the full seismic waveform, the waveforms are often pre-processed and simplified to overcome imaging challenges. Real data issues include aliased station spacing, inadequate array aperture, inaccurate velocity model, low signal-to-noise ratio, large noise bursts and varying waveform polarity. We compare the performance of backprojection with four previously used data pre-processing methods: raw waveform, envelope, short-term averaging/long-term averaging and kurtosis. Our primary goal is to detect and locate events smaller than noise by stacking prior to detection to improve the signal-to-noise ratio. The objective is to identify an optimized strategy for automated imaging that is robust in the presence of real-data issues, has the lowest signal-to-noise thresholds for detection and for location, has the best spatial resolution of the source images, preserves magnitude, and considers computational cost. Imaging method performance is assessed using a real aftershock data set recorded by the dense AIDA array following the 2011 Virginia earthquake. Our comparisons show that raw-waveform backprojection provides the best spatial resolution, preserves magnitude and boosts signal to detect events smaller than noise, but is most sensitive to velocity error, polarity error and noise bursts. On the other hand, the other methods avoid polarity error and reduce sensitivity to velocity error, but sacrifice spatial resolution and cannot effectively reduce noise by stacking. Of these, only kurtosis is insensitive to large noise bursts while being as efficient as the raw-waveform method to lower the detection threshold; however, it does not preserve the magnitude information. For automatic detection and location of events in a large data set, we
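
    Of the pre-processing options compared above, the STA/LTA characteristic function is simple enough to sketch here; the window lengths and the synthetic trace are illustrative assumptions rather than values from the study.

    ```python
    import numpy as np

    def sta_lta(trace, fs, sta_win=0.5, lta_win=10.0):
        """Ratio of short-term to long-term average power, with both windows ending on the same sample."""
        power = np.asarray(trace, dtype=float) ** 2
        def moving_average(x, n):
            c = np.cumsum(np.insert(x, 0, 0.0))
            return (c[n:] - c[:-n]) / n
        nsta, nlta = int(sta_win * fs), int(lta_win * fs)
        sta = moving_average(power, nsta)[nlta - nsta:]
        lta = moving_average(power, nlta)
        return sta / np.maximum(lta, 1e-20)

    fs = 100.0
    rng = np.random.default_rng(3)
    trace = rng.standard_normal(6000)
    trace[3000:3050] += 8.0                                   # a small "event" buried in noise
    cf = sta_lta(trace, fs)
    print(cf.max(), cf.argmax() + int(10.0 * fs) - 1)         # peak ratio and its sample index (near 3050)
    ```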

  10. Scatter-Reducing Sounding Filtration Using a Genetic Algorithm and Mean Monthly Standard Deviation

    Science.gov (United States)

    Mandrake, Lukas

    2013-01-01

    Retrieval algorithms like that used by the Orbiting Carbon Observatory (OCO)-2 mission generate massive quantities of data of varying quality and reliability. A computationally efficient, simple method of labeling problematic datapoints or predicting soundings that will fail is required for basic operation, given that only 6% of the retrieved data may be operationally processed. This method automatically obtains a filter designed to reduce scatter based on a small number of input features. Most machine-learning filter construction algorithms attempt to predict error in the CO2 value. By using a surrogate goal of Mean Monthly STDEV, the goal is to reduce the retrieved CO2 scatter rather than solving the harder problem of reducing CO2 error. This lends itself to improved interpretability and performance. This software reduces the scatter of retrieved CO2 values globally based on a minimum number of input features. It can be used as a prefilter to reduce the number of soundings requested, or as a post-filter to label data quality. The use of the MMS (Mean Monthly Standard deviation) provides a much cleaner, clearer filter than the standard ABS(CO2-truth) metrics previously employed by competitor methods. The software's main strength lies in a clearer (i.e., fewer features required) filter that more efficiently reduces scatter in retrieved CO2 rather than focusing on the more complex (and easily removed) bias issues.

  11. Fan-beam and cone-beam image reconstruction via filtering the backprojection image of differentiated projection data

    International Nuclear Information System (INIS)

    Zhuang Tingliang; Leng Shuai; Nett, Brian E; Chen Guanghong

    2004-01-01

    In this paper, a new image reconstruction scheme is presented based on Tuy's cone-beam inversion scheme and its fan-beam counterpart. It is demonstrated that Tuy's inversion scheme may be used to derive a new framework for fan-beam and cone-beam image reconstruction. In this new framework, images are reconstructed via filtering the backprojection image of differentiated projection data. The new framework is mathematically exact and is applicable to a general source trajectory provided the Tuy data sufficiency condition is satisfied. By choosing a piece-wise constant function for one of the components in the factorized weighting function, the filtering kernel is one dimensional, viz. the filtering process is along a straight line. Thus, the derived image reconstruction algorithm is mathematically exact and efficient. In the cone-beam case, the derived reconstruction algorithm is applicable to a large class of source trajectories where the pi-lines or the generalized pi-lines exist. In addition, the new reconstruction scheme survives the super-short scan mode in both the fan-beam and cone-beam cases provided the data are not transversely truncated. Numerical simulations were conducted to validate the new reconstruction scheme for the fan-beam case

  12. Exact fan-beam image reconstruction algorithm for truncated projection data acquired from an asymmetric half-size detector

    International Nuclear Information System (INIS)

    Leng Shuai; Zhuang Tingliang; Nett, Brian E; Chen Guanghong

    2005-01-01

    In this paper, we present a new algorithm designed for a specific data truncation problem in fan-beam CT. We consider a scanning configuration in which the fan-beam projection data are acquired from an asymmetrically positioned half-sized detector. Namely, the asymmetric detector only covers one half of the scanning field of view. Thus, the acquired fan-beam projection data are truncated at every view angle. If an explicit data rebinning process is not invoked, this data acquisition configuration will wreak havoc on many known fan-beam image reconstruction schemes including the standard filtered backprojection (FBP) algorithm and the super-short-scan FBP reconstruction algorithms. However, we demonstrate that a recently developed fan-beam image reconstruction algorithm which reconstructs an image via filtering a backprojection image of differentiated projection data (FBPD) survives the above fan-beam data truncation problem. Namely, we may exactly reconstruct the whole image object using the truncated data acquired in a full scan mode (2π angular range). We may also exactly reconstruct a small region of interest (ROI) using the truncated projection data acquired in a short-scan mode (less than 2π angular range). The most important characteristic of the proposed reconstruction scheme is that an explicit data rebinning process is not introduced. Numerical simulations were conducted to validate the new reconstruction algorithm

  13. Characterization and Comparison of the 10-2 SITA-Standard and Fast Algorithms

    Directory of Open Access Journals (Sweden)

    Yaniv Barkana

    2012-01-01

    Full Text Available Purpose: To compare the 10-2 SITA-standard and SITA-fast visual field programs in patients with glaucoma. Methods: We enrolled 26 patients with open angle glaucoma with involvement of at least one paracentral location on the 24-2 SITA-standard field test. Each subject performed 10-2 SITA-standard and SITA-fast tests. Within 2 months this sequence of tests was repeated. Results: SITA-fast was 30% shorter than SITA-standard (5.5±1.1 vs 7.9±1.1 minutes, p<0.001). Mean MD was statistically significantly higher for SITA-standard compared with SITA-fast at the first visit (Δ=0.3 dB, p=0.017) but not at the second visit. The inter-visit difference in MD or in the number of depressed points was not significant for either program. Bland-Altman analysis showed that clinically significant variations can exist in individual instances between the 2 programs and between repeat tests with the same program. Conclusions: The 10-2 SITA-fast algorithm is significantly shorter than SITA-standard. The two programs have similar long-term variability. Average same-visit between-program and same-program between-visit sensitivity results were similar for the study population, but clinically significant variability was observed for some individual test pairs. Group inter- and intra-program test results may be comparable, but in the management of the individual patient field change should be verified by repeat testing.

  14. Algorithms

    Indian Academy of Sciences (India)

    In the program shown in Figure 1, we have repeated the algorithm M times and we can make the following observations. Each block is essentially a different instance of "code"; that is, the objects differ by the value to which N is initialized before the execution of the "code" block. Thus, we can now avoid the repetition of the ...

  15. Algorithms

    Indian Academy of Sciences (India)

    algorithms built into the computer corresponding to the logic-circuit rules that are used to .... For the purpose of carrying out arithmetic or logical operations the memory is organized in terms .... In fixed point representation, one essentially uses integer arithmetic operators assuming the binary point to be at some point other ...

  16. Dual filtered backprojection for micro-rotation confocal microscopy

    International Nuclear Information System (INIS)

    Laksameethanasan, Danai; Brandt, Sami S; Renaud, Olivier; Shorte, Spencer L

    2009-01-01

    Micro-rotation confocal microscopy is a novel optical imaging technique which employs dielectric fields to trap and rotate individual cells to facilitate 3D fluorescence imaging using a confocal microscope. In contrast to computed tomography (CT) where an image can be modelled as parallel projection of an object, the ideal confocal image is recorded as a central slice of the object corresponding to the focal plane. In CT, the projection images and the 3D object are related by the Fourier slice theorem which states that the Fourier transform of a CT image is equal to the central slice of the Fourier transform of the 3D object. In the micro-rotation application, we have a dual form of this setting, i.e. the Fourier transform of the confocal image equals the parallel projection of the Fourier transform of the 3D object. Based on the observed duality, we present here the dual of the classical filtered back projection (FBP) algorithm and apply it in micro-rotation confocal imaging. Our experiments on real data demonstrate that the proposed method is a fast and reliable algorithm for the micro-rotation application, as FBP is for CT application
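
    In our own notation (not the paper's), the duality described above can be written down explicitly. The classical Fourier slice theorem for CT states that the Fourier transform of a parallel projection is a central slice of the object's Fourier transform,

        \widehat{P_\theta f}(\mathbf{u}) = \hat{f}(\mathbf{u}), \qquad \mathbf{u} \perp \boldsymbol{\theta},

    while for a confocal image g_\theta = f|_{\Pi_\theta} recorded on the focal plane \Pi_\theta with unit normal \mathbf{n}_\theta, the Fourier transform of the slice equals the parallel projection of \hat{f} along that normal,

        \hat{g}_\theta(\mathbf{u}) = \int_{\mathbb{R}} \hat{f}(\mathbf{u} + t\, \mathbf{n}_\theta)\, dt, \qquad \mathbf{u} \in \Pi_\theta,

    which is a minimal statement of the dual relation the abstract exploits.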

  17. Cloud Computing Security Model with Combination of Data Encryption Standard Algorithm (DES) and Least Significant Bit (LSB)

    Science.gov (United States)

    Basri, M.; Mawengkang, H.; Zamzami, E. M.

    2018-03-01

    Limited storage resources are one reason to switch to cloud storage. The confidentiality and security of data stored in the cloud are very important. One way to maintain the confidentiality and security of such data is to use cryptographic techniques. The Data Encryption Standard (DES) is one of the block cipher algorithms used as a standard symmetric encryption algorithm. DES produces 8 cipher blocks that are combined into one ciphertext, but this ciphertext is weak against brute-force attacks. Therefore, the 8 cipher blocks are embedded into 8 random images using the Least Significant Bit (LSB) algorithm, which hides the DES cipher output before the images are merged into one.
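
    To make the LSB step concrete, here is a small sketch (not the paper's implementation) that hides a byte string, standing in for the DES cipher blocks, in the least significant bits of a cover image and recovers it again; the cover image and the ciphertext are invented.

    ```python
    import numpy as np

    def embed_lsb(cover, payload):
        """cover: uint8 image; payload: bytes. Returns a stego image with the payload bits in the LSBs."""
        bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
        flat = cover.flatten()
        if bits.size > flat.size:
            raise ValueError("payload too large for cover image")
        flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits
        return flat.reshape(cover.shape)

    def extract_lsb(stego, n_bytes):
        bits = stego.flatten()[:n_bytes * 8] & 1
        return np.packbits(bits).tobytes()

    rng = np.random.default_rng(4)
    cover = rng.integers(0, 256, (64, 64), dtype=np.uint8)
    ciphertext = bytes.fromhex("0123456789abcdef") * 8        # stands in for a sequence of DES cipher blocks
    stego = embed_lsb(cover, ciphertext)
    assert extract_lsb(stego, len(ciphertext)) == ciphertext
    ```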

  18. A general exact method for synthesizing parallel-beam projections from cone-beam projections via filtered backprojection

    International Nuclear Information System (INIS)

    Li Liang; Chen Zhiqiang; Xing Yuxiang; Zhang Li; Kang Kejun; Wang Ge

    2006-01-01

    In recent years, image reconstruction methods for cone-beam computed tomography (CT) have been extensively studied. However, few of these studies discussed computing parallel-beam projections from cone-beam projections. In this paper, we focus on the exact synthesis of complete or incomplete parallel-beam projections from cone-beam projections. First, an extended central slice theorem is described to establish a relationship between the Radon space and the Fourier space. Then, data sufficiency conditions are proposed for computing parallel-beam projection data from cone-beam data. Using these results, a general filtered backprojection algorithm is formulated that can exactly synthesize parallel-beam projection data from cone-beam projection data. As an example, we prove that parallel-beam projections can be exactly synthesized in an angular range in the case of circular cone-beam scanning. Interestingly, this angular range is larger than that derived in the Feldkamp reconstruction framework. Numerical experiments are performed in the circular scanning case to verify our method

  19. Assessing operating characteristics of CAD algorithms in the absence of a gold standard

    International Nuclear Information System (INIS)

    Roy Choudhury, Kingshuk; Paik, David S.; Yi, Chin A.; Napel, Sandy; Roos, Justus; Rubin, Geoffrey D.

    2010-01-01

    Purpose: The authors examine potential bias when using a reference reader panel as "gold standard" for estimating operating characteristics of CAD algorithms for detecting lesions. As an alternative, the authors propose latent class analysis (LCA), which does not require an external gold standard to evaluate diagnostic accuracy. Methods: A binomial model for multiple reader detections using different diagnostic protocols was constructed, assuming conditional independence of readings given true lesion status. Operating characteristics of all protocols were estimated by maximum likelihood LCA. Reader-panel and LCA based estimates were compared using data simulated from the binomial model for a range of operating characteristics. LCA was applied to 36 thin-section thoracic computed tomography data sets from the Lung Image Database Consortium (LIDC): free-search markings of four radiologists were compared to markings from four different CAD-assisted radiologists. For real data, bootstrap-based resampling methods, which accommodate dependence in reader detections, are proposed to test hypotheses of differences between detection protocols. Results: In simulation studies, reader-panel-based sensitivity estimates had an average relative bias (ARB) of -23% to -27%, significantly higher (p-value <0.0001) than LCA (ARB -2% to -6%). Specificity was well estimated by both the reader panel (ARB -0.6% to -0.5%) and LCA (ARB 1.4%-0.5%). Among 1145 lesion candidates LIDC considered, the LCA-estimated sensitivity of reference readers (55%) was significantly lower (p-value 0.006) than that of CAD-assisted readers (68%). Average false positives per patient for reference readers (0.95) were not significantly lower (p-value 0.28) than for CAD-assisted readers (1.27). Conclusions: Whereas a gold standard based on a consensus of readers may substantially bias sensitivity estimates, LCA may be a significantly more accurate and consistent means for evaluating diagnostic accuracy.
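
    As a rough sketch of the latent class idea (two latent classes with conditionally independent binary detections), the EM iteration below estimates prevalence and per-protocol sensitivity and specificity without any gold standard; the simulated data, starting values and iteration count are assumptions, and this is not the authors' binomial-model code.

    ```python
    import numpy as np

    def lca_em(D, n_iter=200):
        """D: (N, R) 0/1 detections from R readers/protocols. Returns prevalence, sensitivities, specificities."""
        N, R = D.shape
        prev, sens, spec = 0.5, np.full(R, 0.8), np.full(R, 0.8)
        for _ in range(n_iter):
            like1 = prev * np.prod(sens ** D * (1 - sens) ** (1 - D), axis=1)          # P(data, lesion)
            like0 = (1 - prev) * np.prod((1 - spec) ** D * spec ** (1 - D), axis=1)    # P(data, no lesion)
            w = like1 / (like1 + like0)                       # E-step: posterior lesion probability
            prev = w.mean()                                   # M-step
            sens = (w[:, None] * D).sum(axis=0) / w.sum()
            spec = ((1 - w)[:, None] * (1 - D)).sum(axis=0) / (1 - w).sum()
        return prev, sens, spec

    rng = np.random.default_rng(5)
    truth = rng.random(2000) < 0.3
    true_sens, true_spec = np.array([0.9, 0.7, 0.8]), np.array([0.95, 0.85, 0.9])
    D = np.where(truth[:, None], rng.random((2000, 3)) < true_sens,
                 rng.random((2000, 3)) > true_spec).astype(int)
    print(lca_em(D))                                          # recovers roughly (0.3, true_sens, true_spec)
    ```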

  20. Comparison between Genetic Algorithms and Particle Swarm Optimization Methods on Standard Test Functions and Machine Design

    DEFF Research Database (Denmark)

    Nica, Florin Valentin Traian; Ritchie, Ewen; Leban, Krisztina Monika

    2013-01-01

    Genetic algorithm and particle swarm optimization are briefly presented in this paper. These two algorithms are tested to determine their performance on five different benchmark test functions. The algorithms are evaluated against three requirements: precision of the result, number of iterations and calculation time...

  1. Comparison of digital beamforming algorithms for 3-D terahertz imaging with sparse multistatic line arrays

    Directory of Open Access Journals (Sweden)

    B. Baccouche

    2017-12-01

    Full Text Available In this contribution we compare the back-projection algorithm with our recently developed modified range migration algorithm for 3-D terahertz imaging using sparse multistatic line arrays. A 2-D planar sampling scheme is generated using the array's aperture in combination with an orthogonal synthetic aperture obtained through linear movement of the object under test. A stepped frequency continuous wave signal modulation is used for range focusing. Comparisons of the focusing quality show that results using the modified range migration algorithm reflect those of the back-projection algorithm except for some degradation along the array's axis due to the operation in the array's near-field. Nevertheless the highest computational efficiency is obtained from the modified range migration algorithm, which is better than the numerically optimized version of the back-projection algorithm. Measurements have been performed by using an imaging system operating in the W frequency band to verify the theoretical results.

  2. Robustness and precision of an automatic marker detection algorithm for online prostate daily targeting using a standard V-EPID.

    Science.gov (United States)

    Aubin, S; Beaulieu, L; Pouliot, S; Pouliot, J; Roy, R; Girouard, L M; Martel-Brisson, N; Vigneault, E; Laverdière, J

    2003-07-01

    An algorithm for the daily localization of the prostate using implanted markers and a standard video-based electronic portal imaging device (V-EPID) has been tested. Prior to planning, three gold markers were implanted in the prostate of seven patients. The clinical images were acquired with a BeamViewPlus 2.1 V-EPID for each field during the normal course of radiotherapy treatment and are used off-line to determine the ability of the automatic marker detection algorithm to adequately and consistently detect the markers. Clinical images were obtained with various dose levels ranging from 2.5 to 75 MU. The algorithm is based on marker attenuation characterization in the portal image and spatial distribution. A total of 1182 clinical images were taken. The results show an average efficiency of 93% for the markers detected individually and 85% for the group of markers. This algorithm accomplishes the detection and validation in 0.20-0.40 s. When the center of mass of the group of implanted markers is used, then all displacements can be corrected to within 1.0 mm in 84% of the cases and within 1.5 mm in 97% of cases. The standard video-based EPID tested provides excellent marker detection capability even with low dose levels. The V-EPID can be used successfully with radiopaque markers and the automatic detection algorithm to track and correct the daily setup deviations due to organ motion.

  3. Development and validation of a computerized algorithm for International Standards for Neurological Classification of Spinal Cord Injury (ISNCSCI)

    DEFF Research Database (Denmark)

    Walden, K; Bélanger, L M; Biering-Sørensen, F

    2016-01-01

    STUDY DESIGN: Validation study. OBJECTIVES: To describe the development and validation of a computerized application of the international standards for neurological classification of spinal cord injury (ISNCSCI). SETTING: Data from acute and rehabilitation care. METHODS: The Rick Hansen Institute......-ISNCSCI Algorithm (RHI-ISNCSCI Algorithm) was developed based on the 2011 version of the ISNCSCI and the 2013 version of the worksheet. International experts developed the design and logic with a focus on usability and features to standardize the correct classification of challenging cases. A five-phased process...... a standardized method to accurately derive the level and severity of SCI from the raw data of the ISNCSCI examination. The web interface assists in maximizing usability while minimizing the impact of human error in classifying SCI. SPONSORSHIP: This study is sponsored by the Rick Hansen Institute and supported...

  4. Standardizing the protocol for hemispherical photographs: accuracy assessment of binarization algorithms.

    Science.gov (United States)

    Glatthorn, Jonas; Beckschäfer, Philip

    2014-01-01

    Hemispherical photography is a well-established method to optically assess ecological parameters related to plant canopies; e.g. ground-level light regimes and the distribution of foliage within the crown space. Interpreting hemispherical photographs involves classifying pixels as either sky or vegetation. A wide range of automatic thresholding or binarization algorithms exists to classify the photographs. The variety in methodology hampers the ability to compare results across studies. To identify an optimal threshold selection method, this study assessed the accuracy of seven binarization methods implemented in software currently available for the processing of hemispherical photographs. Therefore, binarizations obtained by the algorithms were compared to reference data generated through a manual binarization of a stratified random selection of pixels. This approach was adopted from the accuracy assessment of map classifications known from remote sensing studies. Percentage correct (Pc) and kappa-statistics (K) were calculated. The accuracy of the algorithms was assessed for photographs taken with automatic exposure settings (auto-exposure) and photographs taken with settings which avoid overexposure (histogram-exposure). In addition, gap fraction values derived from hemispherical photographs were compared with estimates derived from the manually classified reference pixels. All tested algorithms were shown to be sensitive to overexposure. Three of the algorithms showed an accuracy which was high enough to be recommended for the processing of histogram-exposed hemispherical photographs: "Minimum" (Pc 98.8%; K 0.952), "Edge Detection" (Pc 98.1%; K 0.950), and "Minimum Histogram" (Pc 98.1%; K 0.947). The Minimum algorithm overestimated gap fraction least of all (11%). The overestimations by the algorithms Edge Detection (63%) and Minimum Histogram (67%) were considerably larger. For the remaining four evaluated algorithms (IsoData, Maximum Entropy, MinError, and Otsu
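
    The accuracy assessment described above can be reproduced in a few lines: binarize the photograph with an automatic threshold and compare the result with manually classified reference pixels using percentage correct and Cohen's kappa. The sketch below uses Otsu's method from scikit-image as the example algorithm on a synthetic image; the image, the reference labels and the sample size are stand-in assumptions for the manually labelled pixels used in the study.

```python
import numpy as np
from skimage.filters import threshold_otsu

# Synthetic stand-in for a hemispherical photograph: bright sky, dark canopy, noise.
rng = np.random.default_rng(0)
img = np.where(rng.random((400, 400)) < 0.3, 0.9, 0.2) + 0.05 * rng.standard_normal((400, 400))

# Automatic binarization: pixels brighter than the threshold are classified as sky.
binary = img > threshold_otsu(img)

# Reference data: stratified random sample of "manually" labelled pixels
# (simulated here; in the study these were labelled by an analyst).
rows = rng.integers(0, img.shape[0], 500)
cols = rng.integers(0, img.shape[1], 500)
reference = img[rows, cols] > 0.55            # pretend this is the manual label
pred = binary[rows, cols]

pc = np.mean(pred == reference)               # percentage correct
pe = (np.mean(pred) * np.mean(reference)      # chance agreement for Cohen's kappa
      + np.mean(~pred) * np.mean(~reference))
kappa = (pc - pe) / (1 - pe)

gap_fraction = binary.mean()                  # fraction of sky pixels in the whole image
print(f"Pc = {pc:.3f}, kappa = {kappa:.3f}, gap fraction = {gap_fraction:.3f}")
```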

  5. A parallel simulated annealing algorithm for standard cell placement on a hypercube computer

    Science.gov (United States)

    Jones, Mark Howard

    1987-01-01

    A parallel version of a simulated annealing algorithm is presented which is targeted to run on a hypercube computer. A strategy for mapping the cells in a two dimensional area of a chip onto processors in an n-dimensional hypercube is proposed such that both small and large distance moves can be applied. Two types of moves are allowed: cell exchanges and cell displacements. The computation of the cost function in parallel among all the processors in the hypercube is described along with a distributed data structure that needs to be stored in the hypercube to support parallel cost evaluation. A novel tree broadcasting strategy is used extensively in the algorithm for updating cell locations in the parallel environment. Studies on the performance of the algorithm on example industrial circuits show that it is faster and gives better final placement results than the uniprocessor simulated annealing algorithms. An improved uniprocessor algorithm is proposed which is based on the improved results obtained from parallelization of the simulated annealing algorithm.
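
    The sequential core of such a placement algorithm is easy to sketch: cells on a grid are repeatedly exchanged, the change in total wirelength is evaluated, and uphill moves are accepted with a temperature-dependent probability. The toy netlist, cost function and cooling schedule below are illustrative assumptions, not the hypercube implementation described in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)
n_cells, grid = 36, 6                                   # 36 cells on a 6x6 slot grid
pos = list(rng.permutation(n_cells))                    # pos[c] = slot occupied by cell c
nets = [rng.choice(n_cells, size=3, replace=False) for _ in range(60)]  # random 3-pin nets

def slot_xy(slot):
    return slot % grid, slot // grid

def wirelength(pos):
    # Half-perimeter wirelength summed over all nets (the placement cost).
    total = 0
    for net in nets:
        xs, ys = zip(*(slot_xy(pos[c]) for c in net))
        total += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return total

cost, T = wirelength(pos), 5.0
while T > 0.05:
    for _ in range(200):
        a, b = rng.integers(n_cells, size=2)            # candidate move: exchange two cells
        pos[a], pos[b] = pos[b], pos[a]
        new_cost = wirelength(pos)
        if new_cost <= cost or rng.random() < np.exp((cost - new_cost) / T):
            cost = new_cost                             # accept (always downhill, sometimes uphill)
        else:
            pos[a], pos[b] = pos[b], pos[a]             # reject: undo the exchange
    T *= 0.9                                            # geometric cooling schedule
print("final wirelength:", cost)
```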

  6. External force back-projective composition and globally deformable optimization for 3-D coronary artery reconstruction

    International Nuclear Information System (INIS)

    Yang, Jian; Cong, Weijian; Fan, Jingfan; Liu, Yue; Wang, Yongtian; Chen, Yang

    2014-01-01

    The clinical value of the 3D reconstruction of a coronary artery is important for the diagnosis and intervention of cardiovascular diseases. This work proposes a method based on a deformable model for reconstructing coronary arteries from two monoplane angiographic images acquired from different angles. First, an external force back-projective composition model is developed to determine the external force, for which the force distributions in different views are back-projected to the 3D space and composited in the same coordinate system based on the perspective projection principle of x-ray imaging. The elasticity and bending forces are composited as an internal force to maintain the smoothness of the deformable curve. Second, the deformable curve evolves rapidly toward the true vascular centerlines in 3D space and angiographic images under the combination of internal and external forces. Third, densely matched correspondence among vessel centerlines is constructed using a curve alignment method. The bundle adjustment method is then utilized for the global optimization of the projection parameters and the 3D structures. The proposed method is validated on phantom data and routine angiographic images with consideration for space and re-projection image errors. Experimental results demonstrate the effectiveness and robustness of the proposed method for the reconstruction of coronary arteries from two monoplane angiographic images. The proposed method can achieve a mean space error of 0.564 mm and a mean re-projection error of 0.349 mm. (paper)

  7. An optimized routing algorithm for the automated assembly of standard multimode ribbon fibers in a full-mesh optical backplane

    Science.gov (United States)

    Basile, Vito; Guadagno, Gianluca; Ferrario, Maddalena; Fassi, Irene

    2018-03-01

    In this paper a parametric, modular and scalable algorithm allowing a fully automated assembly of a backplane fiber-optic interconnection circuit is presented. This approach guarantees the optimization of the optical fiber routing inside the backplane with respect to specific criteria (i.e. bending power losses), addressing both transmission performance and overall costs issues. Graph theory has been exploited to simplify the complexity of the NxN full-mesh backplane interconnection topology, firstly, into N independent sub-circuits and then, recursively, into a limited number of loops easier to be generated. Afterwards, the proposed algorithm selects a set of geometrical and architectural parameters whose optimization allows to identify the optimal fiber optic routing for each sub-circuit of the backplane. The topological and numerical information provided by the algorithm are then exploited to control a robot which performs the automated assembly of the backplane sub-circuits. The proposed routing algorithm can be extended to any array architecture and number of connections thanks to its modularity and scalability. Finally, the algorithm has been exploited for the automated assembly of an 8x8 optical backplane realized with standard multimode (MM) 12-fiber ribbons.

  8. Comparison of a Local Linearization Algorithm with Standard Numerical Integration Methods for Real-Time Simulation

    DEFF Research Database (Denmark)

    Cook, Gerald; Lin, Ching-Fang

    1980-01-01

    The local linearization algorithm is presented as a possible numerical integration scheme to be used in real-time simulation. A second-order nonlinear example problem is solved using different methods. The local linearization approach is shown to require less computing time and give significant...... improvement in accuracy over the classical second-order integration methods....
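
    For readers unfamiliar with the method, local linearization advances the state with the exact solution of the locally linearized system, x_{k+1} = x_k + J^{-1} (e^{J h} - I) f(x_k), where J is the Jacobian of f at x_k. The sketch below applies it to the Van der Pol oscillator and compares it with a classical second-order Runge-Kutta (midpoint) step; the test problem and step size are illustrative assumptions, and the Jacobian is assumed to be invertible along the trajectory.

```python
import numpy as np
from scipy.linalg import expm, solve

def f(x, mu=2.0):
    # Van der Pol oscillator, a standard nonlinear test problem.
    return np.array([x[1], mu * (1 - x[0]**2) * x[1] - x[0]])

def jac(x, mu=2.0):
    return np.array([[0.0, 1.0],
                     [-2 * mu * x[0] * x[1] - 1.0, mu * (1 - x[0]**2)]])

def ll_step(x, h):
    # Local linearization: exact solution of the locally linearized system.
    J = jac(x)
    return x + solve(J, (expm(J * h) - np.eye(2)) @ f(x))

def rk2_step(x, h):
    # Classical second-order Runge-Kutta (midpoint) step, for comparison.
    return x + h * f(x + 0.5 * h * f(x))

x_ll = x_rk = np.array([2.0, 0.0])
h, n = 0.05, 400
for _ in range(n):
    x_ll, x_rk = ll_step(x_ll, h), rk2_step(x_rk, h)
print("local linearization:", x_ll, " RK2:", x_rk)
```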

  9. A comparison of global optimization algorithms with standard benchmark functions and real-world applications using Energy Plus

    Energy Technology Data Exchange (ETDEWEB)

    Kamph, Jerome Henri; Robinson, Darren; Wetter, Michael

    2009-09-01

    There is an increasing interest in the use of computer algorithms to identify combinations of parameters which optimise the energy performance of buildings. For such problems, the objective function can be multi-modal and needs to be approximated numerically using building energy simulation programs. As these programs contain iterative solution algorithms, they introduce discontinuities in the numerical approximation to the objective function. Metaheuristics often work well for such problems, but their convergence to a global optimum cannot be established formally. Moreover, different algorithms tend to be suited to particular classes of optimization problems. To shed light on this issue we compared the performance of two metaheuristics, the hybrid CMA-ES/HDE and the hybrid PSO/HJ, in minimizing standard benchmark functions and real-world building energy optimization problems of varying complexity. From this we find that the CMA-ES/HDE performs well on more complex objective functions, but that the PSO/HJ more consistently identifies the global minimum for simpler objective functions. Both identified similar values in the objective functions arising from energy simulations, but with different combinations of model parameters. This may suggest that the objective function is multi-modal. The algorithms also correctly identified some non-intuitive parameter combinations that were caused by a simplified control sequence of the building energy system that does not represent actual practice, further reinforcing their utility.

  10. Preprocessing of backprojection images in the McClellan Nuclear Radiation Center tomography system

    Energy Technology Data Exchange (ETDEWEB)

    Gibbons, M. R., LLNL

    1998-02-19

    Neutron tomography is being investigated as a nondestructive technique for quantitative assessment of low atomic mass impurity concentration in metals. Neutrons maximize the sensitivity given their higher cross sections for low Z isotopes while tomography provides the three dimensional density information. The specific application is the detection of Hydrogen down to 200 ppm weight in aircraft engine compressor blades. A number of preprocessing corrections have been implemented for the backprojection images in order to achieve the detection requirements for a testing rate of three blades per hour. Among the procedures are corrections for neutron scattering and beam hardening. With these procedures the artifacts in tomographic reconstructions are shown to be less than the signal for 100 ppm hydrogen in titanium alloy samples.

  11. Fully three-dimensional defocus-gradient corrected backprojection in cryoelectron microscopy

    DEFF Research Database (Denmark)

    Kazantsev, Ivan G; Klukowska, J.; Herman, Gabor T.

    2010-01-01

    Recognizing that the microscope depth of field is a significant resolution-limiting factor in 3D cryoelectron microscopy, Jensen and Kornberg proposed a concept they called defocus-gradient corrected backprojection (DGCBP) and illustrated by computer simulations that DGCBP can effectively eliminate...... the depth of field limitation. They did not provide a mathematical justification for their concept. Our paper provides this, by showing (in the idealized case of noiseless data being available for all projection directions) that the reconstructions obtained based on DGCBP from data produced with distance...... of the DGCBP concept, one that closely follows the mathematics of its justifications, and illustrate it using mathematically described phantoms and their reconstructions from finitely many distance-dependently blurred projections....

  12. Stegano-Crypto Hiding Encrypted Data in Encrypted Image Using Advanced Encryption Standard and Lossy Algorithm

    Directory of Open Access Journals (Sweden)

    Ari Shawakat Tahir

    2015-12-01

    Full Text Available Steganography is the art and science of hiding information by embedding messages within other, seemingly harmless messages, and many researchers are working in this field. The proposed system uses the AES algorithm and a lossy technique to overcome the limitations of previous work and to increase processing speed. The sender uses the AES algorithm to encrypt the message and the image, and then uses the LSB technique to hide the encrypted data in the encrypted image. The receiver recovers the original data using the keys that were used in the encryption process. The proposed system has been implemented in NetBeans 7.3 and uses images and data of different sizes to measure the system's speed.
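
    The LSB step of such a scheme is straightforward: each bit of the (already encrypted) payload replaces the least significant bit of one image byte. The sketch below shows embedding and extraction with NumPy; the AES encryption stage is assumed to have been performed beforehand (the payload here is just example bytes) and the cover image is synthetic.

```python
import numpy as np

def embed_lsb(cover, payload):
    """Hide payload bytes in the least significant bits of a uint8 image."""
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    flat = cover.flatten()
    if bits.size > flat.size:
        raise ValueError("payload too large for cover image")
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits   # overwrite the LSBs
    return flat.reshape(cover.shape)

def extract_lsb(stego, n_bytes):
    """Recover n_bytes of payload from the least significant bits."""
    bits = stego.flatten()[:n_bytes * 8] & 1
    return np.packbits(bits).tobytes()

# Example: the payload would normally be AES ciphertext; plain bytes are used here.
cover = np.random.default_rng(0).integers(0, 256, (64, 64), dtype=np.uint8)
secret = b"ciphertext produced by AES"
stego = embed_lsb(cover.copy(), secret)
print(extract_lsb(stego, len(secret)))                 # recovers the payload
print(int(np.abs(stego.astype(int) - cover).max()))    # at most 1 grey level changed
```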

  13. Crystal structure prediction of flexible molecules using parallel genetic algorithms with a standard force field.

    Science.gov (United States)

    Kim, Seonah; Orendt, Anita M; Ferraro, Marta B; Facelli, Julio C

    2009-10-01

    This article describes the application of our distributed computing framework for crystal structure prediction (CSP), the modified genetic algorithms for crystal and cluster prediction (MGAC), to predict the crystal structure of flexible molecules using the general Amber force field (GAFF) and the CHARMM program. The MGAC distributed computing framework includes a series of tightly integrated computer programs for generating the molecule's force field, sampling crystal structures using a distributed parallel genetic algorithm and local energy minimization of the structures followed by the classifying, sorting, and archiving of the most relevant structures. Our results indicate that the method can consistently find the experimentally known crystal structures of flexible molecules, but the number of missing structures and poor ranking observed in some crystals show the need for further improvement of the potential. Copyright 2009 Wiley Periodicals, Inc.

  14. Regularized iterative weighted filtered backprojection for helical cone-beam CT

    International Nuclear Information System (INIS)

    Sunnegaardh, Johan; Danielsson, Per-Erik

    2008-01-01

    Contemporary reconstruction methods employed for clinical helical cone-beam computed tomography (CT) are analytical (noniterative) but mathematically nonexact, i.e., the reconstructed image contains so called cone-beam artifacts, especially for higher cone angles. Besides cone artifacts, these methods also suffer from windmill artifacts: alternating dark and bright regions creating spiral-like patterns occurring in the vicinity of high z-direction derivatives. In this article, the authors examine the possibility to suppress cone and windmill artifacts by means of iterative application of nonexact three-dimensional filtered backprojection, where the analytical part of the reconstruction brings about accelerated convergence. Specifically, they base their investigations on the weighted filtered backprojection method [Stierstorfer et al., Phys. Med. Biol. 49, 2209-2218 (2004)]. Enhancement of high frequencies and amplification of noise is a common but unwanted side effect in many acceleration attempts. They have employed linear regularization to avoid these effects and to improve the convergence properties of the iterative scheme. Artifacts and noise, as well as spatial resolution in terms of modulation transfer functions and slice sensitivity profiles have been measured. The results show that for cone angles up to ±2.78 deg., cone artifacts are suppressed and windmill artifacts are alleviated within three iterations. Furthermore, regularization parameters controlling spatial resolution can be tuned so that image quality in terms of spatial resolution and noise is preserved. Simulations with higher number of iterations and long objects (exceeding the measured region) verify that the size of the reconstructible region is not reduced, and that the regularization greatly improves the convergence properties of the iterative scheme. Taking these results into account, and the possibilities to extend the proposed method with more accurate modeling of the acquisition process
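
    The core idea, iteratively applying a non-exact filtered backprojection to the residual between measured and reprojected data with a linear regularization step to keep noise in check, can be illustrated in 2-D with the parallel-beam operators from scikit-image. The update below, f_{k+1} = f_k + FBP(p - R f_k) + lambda * Laplacian(f_k), is a simplified analogue of the helical cone-beam scheme; the phantom, lambda and iteration count are arbitrary assumptions, not the authors' settings.

```python
import numpy as np
from scipy.ndimage import laplace
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, rescale

# 2-D parallel-beam analogue of iterative filtered backprojection with a
# simple linear (smoothing) regularization step.
phantom = rescale(shepp_logan_phantom(), 0.5)             # 200x200 test image
theta = np.linspace(0.0, 180.0, 90, endpoint=False)
sino = radon(phantom, theta=theta)                        # "measured" projections

f = iradon(sino, theta=theta)                             # initial FBP reconstruction
lam = 0.1                                                 # regularization strength (arbitrary)
for k in range(3):
    residual = sino - radon(f, theta=theta)               # reproject and compare with the data
    f = f + iradon(residual, theta=theta) + lam * laplace(f)
    print(f"iteration {k + 1}: RMS data residual = {np.sqrt(np.mean(residual**2)):.4f}")
```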

  15. Analytical fan-beam and cone-beam reconstruction algorithms with uniform attenuation correction for SPECT

    International Nuclear Information System (INIS)

    Tang Qiulin; Zeng, Gengsheng L; Gullberg, Grant T

    2005-01-01

    In this paper, we developed an analytical fan-beam reconstruction algorithm that compensates for uniform attenuation in SPECT. The new fan-beam algorithm is in the form of backprojection first, then filtering, and is mathematically exact. The algorithm is based on three components. The first one is the established generalized central-slice theorem, which relates the 1D Fourier transform of a set of arbitrary data and the 2D Fourier transform of the backprojected image. The second one is the fact that the backprojection of the fan-beam measurements is identical to the backprojection of the parallel measurements of the same object with the same attenuator. The third one is the stable analytical reconstruction algorithm for uniformly attenuated Radon data, developed by Metz and Pan. The fan-beam algorithm is then extended into a cone-beam reconstruction algorithm, where the orbit of the focal point of the cone-beam imaging geometry is a circle. This orbit geometry does not satisfy Tuy's condition and the obtained cone-beam algorithm is an approximation. In the cone-beam algorithm, the cone-beam data are first backprojected into the 3D image volume; then a slice-by-slice filtering is performed. This slice-by-slice filtering procedure is identical to that of the fan-beam algorithm. Both the fan-beam and cone-beam algorithms are efficient, and computer simulations are presented. The new cone-beam algorithm is compared with Bronnikov's cone-beam algorithm, and it is shown to have better performance with noisy projections

  16. Comparison of forward- and back-projection in vivo EPID dosimetry for VMAT treatment of the prostate

    Science.gov (United States)

    Bedford, James L.; Hanson, Ian M.; Hansen, Vibeke N.

    2018-01-01

    In the forward-projection method of portal dosimetry for volumetric modulated arc therapy (VMAT), the integrated signal at the electronic portal imaging device (EPID) is predicted at the time of treatment planning, against which the measured integrated image is compared. In the back-projection method, the measured signal at each gantry angle is back-projected through the patient CT scan to give a measure of total dose to the patient. This study aims to investigate the practical agreement between the two types of EPID dosimetry for prostate radiotherapy. The AutoBeam treatment planning system produced VMAT plans together with corresponding predicted portal images, and a total of 46 sets of gantry-resolved portal images were acquired in 13 patients using an iViewGT portal imager. For the forward-projection method, each acquisition of gantry-resolved images was combined into a single integrated image and compared with the predicted image. For the back-projection method, iViewDose was used to calculate the dose distribution in the patient for comparison with the planned dose. A gamma index for 3% and 3 mm was used for both methods. The results were investigated by delivering the same plans to a phantom and repeating some of the deliveries with deliberately introduced errors. The strongest agreement between forward- and back-projection methods is seen in the isocentric intensity/dose difference, with moderate agreement in the mean gamma. The strongest correlation is observed within a given patient, with less correlation between patients, the latter representing the accuracy of prediction of the two methods. The error study shows that each of the two methods has its own distinct sensitivity to errors, but that overall the response is similar. The forward- and back-projection EPID dosimetry methods show moderate agreement in this series of prostate VMAT patients, indicating that both methods can contribute to the verification of dose delivered to the patient.
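
    The gamma index used for both comparisons combines a dose-difference criterion (3%) with a distance-to-agreement criterion (3 mm). A brute-force 2-D implementation is short enough to sketch; the dose grids below are synthetic and the 1 mm grid spacing and search-window cap are assumptions.

```python
import numpy as np

def gamma_index(ref, evl, spacing=1.0, dose_crit=0.03, dist_crit=3.0):
    """Brute-force 2-D global gamma (dose criterion relative to the reference maximum)."""
    dd = dose_crit * ref.max()
    ny, nx = ref.shape
    yy, xx = np.mgrid[0:ny, 0:nx] * spacing
    gamma = np.zeros_like(ref)
    search = int(np.ceil(3 * dist_crit / spacing))        # cap the local search window
    for i in range(ny):
        for j in range(nx):
            i0, i1 = max(0, i - search), min(ny, i + search + 1)
            j0, j1 = max(0, j - search), min(nx, j + search + 1)
            dist2 = (yy[i0:i1, j0:j1] - yy[i, j])**2 + (xx[i0:i1, j0:j1] - xx[i, j])**2
            dose2 = (evl[i0:i1, j0:j1] - ref[i, j])**2
            gamma[i, j] = np.sqrt(np.min(dist2 / dist_crit**2 + dose2 / dd**2))
    return gamma

# Synthetic example: a Gaussian "dose" and a slightly shifted, rescaled copy.
y, x = np.mgrid[0:80, 0:80]
ref = np.exp(-((x - 40)**2 + (y - 40)**2) / 400.0)
evl = 1.02 * np.exp(-((x - 41)**2 + (y - 40)**2) / 400.0)
g = gamma_index(ref, evl)
print(f"gamma pass rate (gamma <= 1): {np.mean(g <= 1.0):.3f}, mean gamma: {g.mean():.3f}")
```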

  17. A parallel row-based algorithm with error control for standard-cell placement on a hypercube multiprocessor

    Science.gov (United States)

    Sargent, Jeff Scott

    1988-01-01

    A new row-based parallel algorithm for standard-cell placement targeted for execution on a hypercube multiprocessor is presented. Key features of this implementation include a dynamic simulated-annealing schedule, row-partitioning of the VLSI chip image, and two novel approaches to controlling error in parallel cell-placement algorithms: Heuristic Cell-Coloring and Adaptive (Parallel Move) Sequence Control. Heuristic Cell-Coloring identifies sets of noninteracting cells that can be moved repeatedly, and in parallel, with no buildup of error in the placement cost. Adaptive Sequence Control allows multiple parallel cell moves to take place between global cell-position updates. This feedback mechanism is based on an error bound derived analytically from the traditional annealing move-acceptance profile. Placement results are presented for real industry circuits, and the performance of an implementation on the Intel iPSC/2 Hypercube is summarized. This algorithm runs 5 to 16 times faster than a previous program developed for the Hypercube, while producing placements of equivalent quality. An integrated place and route program for the Intel iPSC/2 Hypercube is currently being developed.

  18. Comparison of different types of commercial filtered backprojection and ordered-subset expectation maximization SPECT reconstruction software.

    Science.gov (United States)

    Seret, Alain; Forthomme, Julien

    2009-09-01

    The aim of this study was to compare the performance of filtered backprojection (FBP) and ordered-subset expectation maximization (OSEM) reconstruction algorithms available in several types of commercial SPECT software. Numeric simulations of SPECT acquisitions of 2 phantoms were used: the National Electrical Manufacturers Association line phantom used for the assessment of SPECT resolution and a phantom with uniform, hot-rod, and cold-rod compartments. For FBP, no filtering and filtering of the projections with either a Butterworth filter (order 3 or 6) or a Hanning filter at various cutoff frequencies were considered. For OSEM, the number of subsets was 1, 4, 8, or 16, and the number of iterations was chosen to obtain a product number of iterations times the number of subsets equal to 16, 32, 48, or 64. The line phantom enabled us to obtain the reconstructed central, radial, and tangential full width at half maximum. The uniform compartment of the second phantom delivered the reconstructed mean pixel counts and SDs from which the coefficients of variation were calculated. Hot contrast and cold contrast were obtained from its rod compartments. For FBP, the full width at half maximum, mean pixel count, coefficient of variation, and contrast were almost software independent. The only exceptions were a smaller (by 0.5 mm) full width at half maximum for one of the software types, higher mean pixel counts for 2 of the software types, and better contrast for 2 of the software types under some filtering conditions. For OSEM, the full width at half maximum differed by 0.1-2.5 mm with the different types of software but was almost independent of the number of subsets or iterations. There was a marked dependence of the mean pixel count on the type of software used, and there was a moderate dependence of the coefficient of variation. Contrast was almost software independent. The mean pixel count varied greatly with the number of iterations for 2 of the software types, and
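
    The figures of merit used in this comparison are simple to compute once regions of interest are defined. The sketch below shows one way to obtain the FWHM of a line-source profile, the coefficient of variation of a uniform region and hot/cold contrast; the synthetic profile and ROI values are illustrative assumptions, not data from any of the software packages tested.

```python
import numpy as np

def fwhm(profile, pixel_size=1.0):
    """Full width at half maximum of a 1-D peak, with linear interpolation."""
    half = profile.max() / 2.0
    above = np.where(profile >= half)[0]
    left, right = above[0], above[-1]
    # Interpolate the half-maximum crossings on both sides of the peak.
    l = left - (profile[left] - half) / (profile[left] - profile[left - 1])
    r = right + (profile[right] - half) / (profile[right] - profile[right + 1])
    return (r - l) * pixel_size

# Synthetic reconstructed line-source profile (Gaussian, sigma = 4 pixels).
xs = np.arange(64)
profile = np.exp(-((xs - 32) ** 2) / (2 * 4.0 ** 2))
print("FWHM [pixels]:", round(fwhm(profile), 2))          # ~9.4 = 2.355 * sigma

# Uniform region: coefficient of variation = SD / mean.
rng = np.random.default_rng(0)
uniform_roi = rng.normal(100.0, 8.0, size=(32, 32))
print("COV [%]:", round(100 * uniform_roi.std() / uniform_roi.mean(), 1))

# Hot and cold contrast relative to the background mean.
background, hot_roi, cold_roi = 100.0, 180.0, 35.0
print("hot contrast:", (hot_roi - background) / background,
      "cold contrast:", (background - cold_roi) / background)
```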

  19. Metal artifact reduction in x-ray computed tomography by using analytical DBP-type algorithm

    Science.gov (United States)

    Wang, Zhen; Kudo, Hiroyuki

    2012-03-01

    This paper investigates the common metal artifact problem in X-ray computed tomography (CT). Artifacts in the reconstructed image may render it non-diagnostic: beam-hardening corrections are inaccurate for highly attenuating objects, and a satisfactory image cannot be reconstructed from projections with missing or distorted data. In the traditional analytical metal artifact reduction (MAR) method, the metallic part of the projection data is first subtracted from the original projections, the subtracted region is then completed using one of various interpolation methods, and the image is finally reconstructed from the interpolated projections with the filtered back-projection (FBP) algorithm. The interpolation error introduced in the second step can impose unrealistic assumptions about the missing data, leading to DC-shift artifacts in the reconstructed images. We propose a differentiated back-projection (DBP) type MAR method in which the FBP algorithm of the third step is replaced by the DBP algorithm. In the FBP algorithm the interpolated projection is filtered at each view angle before back-projection, so the interpolation error propagates across the whole projection. The DBP algorithm, however, allows the Hilbert filtering to be applied after back-projection, so the influence of the interpolation error is reduced and the quality of the reconstructed images can be expected to improve. In other words, choosing the DBP algorithm instead of the FBP algorithm means that projection data less contaminated by interpolation error are used in the reconstruction. A simulation study was performed to evaluate the proposed method using a given phantom.
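
    The first two steps of this conventional MAR pipeline, removing the metal trace from the sinogram and bridging the gap by interpolation, are independent of whether FBP or DBP is used afterwards and can be sketched directly with scikit-image's parallel-beam operators. The phantom, the metal threshold and the use of simple linear interpolation are illustrative assumptions; the final reconstruction below uses FBP, whereas the paper substitutes DBP.

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, rescale

# Phantom with a small, highly attenuating "metal" insert.
img = rescale(shepp_logan_phantom(), 0.5)
img[95:105, 120:130] = 4.0                                  # metal region (arbitrary value)
theta = np.linspace(0.0, 180.0, 180, endpoint=False)
sino = radon(img, theta=theta)

# Identify the metal trace by forward-projecting a mask of the metal-only region.
metal_mask = img >= 4.0
trace = radon(metal_mask.astype(float), theta=theta) > 0.5

# Replace the metal trace by 1-D linear interpolation along each projection view.
sino_mar = sino.copy()
for view in range(sino.shape[1]):
    bad = trace[:, view]
    if bad.any():
        good = ~bad
        sino_mar[bad, view] = np.interp(np.flatnonzero(bad),
                                        np.flatnonzero(good), sino[good, view])

# Reconstruct the interpolated sinogram (here with FBP; the paper substitutes DBP).
recon = iradon(sino_mar, theta=theta)
print("reconstructed image range:", recon.min(), recon.max())
```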

  20. Image and video compression for multimedia engineering fundamentals, algorithms, and standards

    CERN Document Server

    Shi, Yun Q

    2008-01-01

    Part I: Fundamentals. Introduction; Quantization; Differential Coding; Transform Coding; Variable-Length Coding: Information Theory Results (II); Run-Length and Dictionary Coding: Information Theory Results (III).
    Part II: Still Image Compression. Still Image Coding: Standard JPEG; Wavelet Transform for Image Coding: JPEG2000; Nonstandard Still Image Coding.
    Part III: Motion Estimation and Compensation. Motion Analysis and Motion Compensation; Block Matching; Pel-Recursive Technique; Optical Flow; Further Discussion and Summary on 2-D Motion Estimation.
    Part IV: Video Compression Fundam

  1. Watermarking Techniques Using Least Significant Bit Algorithm for Digital Image Security Standard Solution- Based Android

    Directory of Open Access Journals (Sweden)

    Ari Muzakir

    2017-05-01

    Full Text Available The ease with which digital images can be distributed over the internet has positive and negative sides, especially for the owner of the original digital image. The positive side is that the owner can rapidly deploy image files to sites all over the world. The downside is that, without a copyright mark protecting the image, its ownership can very easily be claimed by other parties. Watermarking is one solution for protecting the copyright and establishing the origin of a digital image. With digital image watermarking, the copyright of the resulting digital image is protected through the insertion of additional information, such as owner information and proof of the authenticity of the digital image. The least significant bit (LSB) method is one of the simplest and easiest algorithms to understand. The results of simulations carried out on an Android smartphone show that the LSB watermark cannot be seen by the naked human eye, meaning there is no significant visible difference between the original image files and the images into which the watermark has been inserted. The resulting image has dimensions of 640x480 with a bit depth of 32 bits. In addition, black-box testing was used to determine the ability of the device (smartphone) to process the image with this application.

  2. Optimization of traceable coaxial RF reflection standards with 7-mm-N-connector using genetic algorithms

    Directory of Open Access Journals (Sweden)

    T. Schrader

    2003-01-01

    Full Text Available A new coaxial device with 7-mm-N-connector was developed providing calculable complex reflection coefficients for traceable calibration of vector network analyzers (VNA). It was specifically designed to fill the gap between 0 Hz (DC, direct current) and 250 MHz, though the device was tested up to 10 GHz. The frequency dependent reflection coefficient of this device can be described by a model, which is characterized by traceable measurements. It is therefore regarded as a “traceable model”. The new idea of using such models for traceability has been verified, found to be valid and was used for these investigations. The DC resistance value was extracted from RF measurements up to 10 GHz by means of Genetic Algorithms (GA). The GA was used to obtain the elements of the model describing the reflection coefficient Γ of a network of SMD resistors. The DC values determined with the GA from RF measurements match the traceable value at DC within 3·10⁻³, which is in good agreement with measurements using reference air lines at GHz frequencies.

  3. Guidelines and algorithms: strategies for standardization of referral criteria in diagnostic radiology

    International Nuclear Information System (INIS)

    Kainberger, Franz; Pokieser, Peter; Imhof, Herwig; Czembirek, Heinrich; Fruehwald, Franz

    2002-01-01

    Guidelines can be regarded as special forms of algorithms and have been shown to be useful tools for supporting medical decision making. With the Council Directive 97/43/Euratom recommendations concerning referral criteria for medical exposure have to be implemented into national law of all EU member states. The time- and cost-consuming efforts of developing, implementing, and updating such guidelines are balanced by the acceptance in clinical practice and eventual better health outcomes. Clearly defined objectives with special attention drawn on national and regional differences among potential users, support from organisations with expertise in evidence-based medicine, separated development of the evidence component and the recommendations component, and large-scale strategies for distribution and implementation are necessary. Editors as well as users of guidelines for referral criteria have to be aware which expectations can be met and which cannot be fulfilled with this instrument; thus, dealing with guidelines requires a new form of ''diagnostic reasoning'' based on medical ethics. (orig.)

  4. Correction of computed tomography motion artifacts using pixel-specific back-projection

    International Nuclear Information System (INIS)

    Ritchie, C.J.; Crawford, C.R.; Godwin, J.D.; Kim, Y.; King, K.F.

    1996-01-01

    Cardiac and respiratory motion can cause artifacts in computed tomography scans of the chest. The authors describe a new method for reducing these artifacts called pixel-specific back-projection (PSBP). PSBP reduces artifacts caused by in-plane motion by reconstructing each pixel in a frame of reference that moves with the in-plane motion in the volume being scanned. The motion of the frame of reference is specified by constructing maps that describe the motion of each pixel in the image at the time each projection was measured; these maps are based on measurements of the in-plane motion. PSBP has been tested in computer simulations and with volunteer data. In computer simulations, PSBP removed the structured artifacts caused by motion. In scans of two volunteers, PSBP reduced doubling and streaking in chest scans to a level that made the images clinically useful. PSBP corrections of liver scans were less satisfactory because the motion of the liver is predominantly superior-inferior (S-I). PSBP uses a unique set of motion parameters to describe the motion at each point in the chest as opposed to requiring that the motion be described by a single set of parameters. Therefore, PSBP may be more useful in correcting clinical scans than are other correction techniques previously described

  5. A standardized algorithm for determining the underlying cause of death in HIV infection as AIDS or non-AIDS related

    DEFF Research Database (Denmark)

    Kowalska, Justyna D; Mocroft, Amanda; Ledergerber, Bruno

    2011-01-01

    cohort classification (LCC) as reported by the site investigator, and 4 algorithms (ALG) created based on survival times after specific AIDS events. Results: A total of 2,783 deaths occurred, 540 CoDe forms were collected, and 488 were used to evaluate agreements. The agreement between CC and LCC...... are a natural consequence of an increased awareness and knowledge in the field. To monitor and analyze changes in mortality over time, we have explored this issue within the EuroSIDA study and propose a standardized protocol unifying data collected and allowing for classification of all deaths as AIDS or non-AIDS...... related, including events with missing cause of death. Methods: Several classifications of the underlying cause of death as AIDS or non-AIDS related within the EuroSIDA study were compared: central classification (CC-reference group) based on an externally standardised method (the CoDe procedures), local...

  6. [Ischemic heart disease prevalence estimated using a standard algorithm based on electronic health data in various areas of Italy].

    Science.gov (United States)

    Balzi, Daniela; Barchielli, Alessandro; Battistella, Giuseppe; Gnavi, Roberto; Inio, Andrea; Tessari, Roberta; Picariello, Roberta; Canova, Cristina; Simonato, Lorenzo

    2008-01-01

    To define an algorithm to estimate the prevalence of ischemic heart disease from health administrative datasets. Four Italian areas: Venezia, Treviso, Torino, Firenze. Resident population in the four areas in the period 2002-2004 (only 2003 for Firenze), for a total of 2,350,000 inhabitants in 2003. Annual crude and standardized prevalence rates (x100 inhabitants), 95% confidence intervals by area. Quality (comparability and coherence) indicators are also reported. The algorithm is based on record linkage of hospital discharges (SDO), pharmacological prescriptions (PF), health-tax exemptions (ET) and causes of mortality (CM). From SDO we extracted discharges for ICD9-CM codes 410*-414* in all diagnoses in the estimation year and during the four years immediately preceding. From PF we selected subjects with at least two prescriptions of organic nitrates (ATC = C01DA*) in the estimation year. From ET, subjects with a new exemption for ischemic heart disease (002.414), or who obtained the exemption in the three preceding years, were selected. We also considered all deaths in the year for ischemic heart disease (ICD9 CM 410-414). Cases were defined as ischemic heart disease prevalent cases if they were extracted at least once from one of the datasets and if they were alive on January 1 of the estimation year. Estimated crude prevalence ranges from 2.5 to 4%. The standardized prevalence led to a narrower range of values (2.8-3.3%). Venezia and Firenze show a higher standardized prevalence in both sexes (men 4.7% and 4.4%; women 2.3% and 2.2% respectively); Treviso and Torino present a lower standardized prevalence (men: 3.9%; women: 1.9%). The hospital discharges are the main source to identify prevalent subjects (34-48% of subjects are solely identified by SDO); pharmacological prescriptions are a relevant source in Firenze and Torino (27-28%), while they are less relevant in Venezia and Treviso (13-15%). ET shows a different contribution to prevalent case
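
    The case-finding logic, taking the union of subjects flagged in any of the four administrative sources and keeping those alive on January 1 of the estimation year, translates naturally into a few dataframe operations. The column names and toy data below are illustrative assumptions, not the actual regional datasets.

```python
import pandas as pd

# Toy extracts from the four sources; each holds one row per flagged subject.
sdo = pd.DataFrame({"id": [1, 2, 3], "source": "hospital_discharge"})      # ICD9 410-414
pf = pd.DataFrame({"id": [2, 4], "source": "nitrate_prescriptions"})       # >=2 ATC C01DA*
et = pd.DataFrame({"id": [5], "source": "tax_exemption"})                  # code 002.414
cm = pd.DataFrame({"id": [6], "source": "cause_of_death"})                 # died of IHD

registry = pd.DataFrame({"id": range(1, 11),
                         "alive_jan1": [True] * 5 + [False] + [True] * 4})

# Union of subjects identified by at least one source.
cases = pd.concat([sdo, pf, et, cm]).drop_duplicates("id")

# Prevalent cases: identified at least once and alive on January 1 of the estimation year.
prevalent = cases.merge(registry[registry.alive_jan1], on="id")
crude_prevalence = 100 * len(prevalent) / len(registry)
print(f"prevalent cases: {len(prevalent)}, crude prevalence: {crude_prevalence:.1f} per 100")
```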

  7. An Implementation and Parallelization of the Scale Space Meshing Algorithm

    Directory of Open Access Journals (Sweden)

    Julie Digne

    2015-11-01

    Full Text Available Creating an interpolating mesh from an unorganized set of oriented points is a difficult problem which is often overlooked. Most methods focus indeed on building a watertight smoothed mesh by defining some function whose zero level set is the surface of the object. However in some cases it is crucial to build a mesh that interpolates the points and does not fill the acquisition holes: either because the data are sparse and trying to fill the holes would create spurious artifacts or because the goal is to explore visually the data exactly as they were acquired without any smoothing process. In this paper we detail a parallel implementation of the Scale-Space Meshing algorithm, which builds on the scale-space framework for reconstructing a high precision mesh from an input oriented point set. This algorithm first smoothes the point set, producing a singularity free shape. It then uses a standard mesh reconstruction technique, the Ball Pivoting Algorithm, to build a mesh from the smoothed point set. The final step consists in back-projecting the mesh built on the smoothed positions onto the original point set. The result of this process is an interpolating, hole-preserving surface mesh reconstruction.

  8. A comparison of reconstruction algorithms for breast tomosynthesis

    International Nuclear Information System (INIS)

    Wu Tao; Moore, Richard H.; Rafferty, Elizabeth A.; Kopans, Daniel B.

    2004-01-01

    Three algorithms for breast tomosynthesis reconstruction were compared in this paper, including (1) a back-projection (BP) algorithm (equivalent to the shift-and-add algorithm), (2) a Feldkamp filtered back-projection (FBP) algorithm, and (3) an iterative Maximum Likelihood (ML) algorithm. Our breast tomosynthesis system acquires 11 low-dose projections over a 50 deg. angular range using an a-Si (CsI:Tl) flat-panel detector. The detector was stationary during the acquisition. Quality metrics such as signal difference to noise ratio (SDNR) and artifact spread function (ASF) were used for quantitative evaluation of tomosynthesis reconstructions. The results of the quantitative evaluation were in good agreement with the results of the qualitative assessment. In patient imaging, the superimposed breast tissues observed in two-dimensional (2D) mammograms were separated in tomosynthesis reconstructions by all three algorithms. It was shown in both phantom imaging and patient imaging that the BP algorithm provided the best SDNR for low-contrast masses but the conspicuity of the feature details was limited by interplane artifacts; the FBP algorithm provided the highest edge sharpness for microcalcifications but the quality of masses was poor; the information of both the masses and the microcalcifications were well restored with balanced quality by the ML algorithm, superior to the results from the other two algorithms
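
    The simplest of the three algorithms, the shift-and-add form of back-projection, reconstructs each plane by shifting the projections in proportion to the plane height and the projection angle and then averaging them. A minimal sketch for an idealized parallel-ray geometry is given below; the geometry, pixel size and synthetic point object are illustrative assumptions, not the 11-view flat-panel system used here.

```python
import numpy as np
from scipy.ndimage import shift

def shift_and_add(projections, angles_deg, height, pixel_size=1.0):
    """Back-project (shift-and-add) a set of 2-D projections onto one plane.

    Each projection is shifted laterally by height * tan(angle) before averaging,
    which brings structures at that height into registration.
    """
    recon = np.zeros_like(projections[0])
    for proj, ang in zip(projections, angles_deg):
        dx = height * np.tan(np.radians(ang)) / pixel_size    # lateral shift in pixels
        recon += shift(proj, (0.0, dx), order=1, mode="nearest")
    return recon / len(projections)

# Synthetic data: a point-like object 20 mm above the detector, 11 views over +/-25 deg.
angles = np.linspace(-25, 25, 11)
height_true = 20.0
projections = []
for ang in angles:
    p = np.zeros((64, 64))
    col = 32 + int(round(-height_true * np.tan(np.radians(ang))))
    p[32, col] = 1.0
    projections.append(p)

for z in (0.0, 10.0, 20.0):
    plane = shift_and_add(projections, angles, z)
    print(f"plane at {z:4.1f} mm: peak value = {plane.max():.2f}")   # sharpest at z = 20 mm
```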

  9. Mitigating artifacts in back-projection source imaging with implications for frequency-dependent properties of the Tohoku-Oki earthquake

    Science.gov (United States)

    Meng, Lingsen; Ampuero, Jean-Paul; Luo, Yingdi; Wu, Wenbo; Ni, Sidao

    2012-12-01

    Comparing teleseismic array back-projection source images of the 2011 Tohoku-Oki earthquake with results from static and kinematic finite source inversions has revealed little overlap between the regions of high- and low-frequency slip. Motivated by this interesting observation, back-projection studies extended to intermediate frequencies, down to about 0.1 Hz, have suggested that a progressive transition of rupture properties as a function of frequency is observable. Here, by adapting the concept of array response function to non-stationary signals, we demonstrate that the "swimming artifact", a systematic drift resulting from signal non-stationarity, induces significant bias on beamforming back-projection at low frequencies. We introduce a "reference window strategy" into the multitaper-MUSIC back-projection technique and significantly mitigate the "swimming artifact" at high frequencies (1 s to 4 s). At lower frequencies, this modification yields notable, but significantly smaller, artifacts than time-domain stacking. We perform extensive synthetic tests that include a 3D regional velocity model for Japan. We analyze the recordings of the Tohoku-Oki earthquake at the USArray and at the European array at periods from 1 s to 16 s. The migration of the source location as a function of period, regardless of the back-projection methods, has characteristics that are consistent with the expected effect of the "swimming artifact". In particular, the apparent up-dip migration as a function of frequency obtained with the USArray can be explained by the "swimming artifact". This indicates that the most substantial frequency-dependence of the Tohoku-Oki earthquake source occurs at periods longer than 16 s. Thus, low-frequency back-projection needs to be further tested and validated in order to contribute to the characterization of frequency-dependent rupture properties.

  10. Validation of the Welch Allyn SureBP (inflation) and StepBP (deflation) algorithms by AAMI standard testing and BHS data analysis.

    Science.gov (United States)

    Alpert, Bruce S

    2011-04-01

    We evaluated two new Welch Allyn automated blood pressure (BP) algorithms. The first, SureBP, estimates BP during cuff inflation; the second, StepBP, does so during deflation. We followed the American National Standards Institute/Association for the Advancement of Medical Instrumentation SP10:2006 standard for testing and data analysis. The data were also analyzed using the British Hypertension Society analysis strategy. We tested children, adolescents, and adults. The requirements of the American National Standards Institute/Association for the Advancement of Medical Instrumentation SP10:2006 standard were fulfilled with respect to BP levels, arm sizes, and ages. Association for the Advancement of Medical Instrumentation SP10 Method 1 data analysis was used. The mean±standard deviation for the device readings compared with auscultation by paired, trained, blinded observers in the SureBP mode were -2.14±7.44 mmHg for systolic BP (SBP) and -0.55±5.98 mmHg for diastolic BP (DBP). In the StepBP mode, the differences were -3.61±6.30 mmHg for SBP and -2.03±5.30 mmHg for DBP. Both algorithms achieved an A grade for both SBP and DBP by British Hypertension Society analysis. The SureBP inflation-based algorithm will be available in many new-generation Welch Allyn monitors. Its use will reduce the time it takes to estimate BP in critical patient care circumstances. The device will not need to inflate to excessive suprasystolic BPs to obtain the SBP values. Deflation is rapid once SBP has been determined, thus reducing the total time of cuff inflation and reducing patient discomfort. If the SureBP fails to obtain a BP value, the StepBP algorithm is activated to estimate BP by traditional deflation methodology.

  11. Evaluation of the Eclipse eMC algorithm for bolus electron conformal therapy using a standard verification dataset.

    Science.gov (United States)

    Carver, Robert L; Sprunger, Conrad P; Hogstrom, Kenneth R; Popple, Richard A; Antolak, John A

    2016-05-08

    The purpose of this study was to evaluate the accuracy and calculation speed of electron dose distributions calculated by the Eclipse electron Monte Carlo (eMC) algorithm for use with bolus electron conformal therapy (ECT). The recent commercial availability of bolus ECT technology requires further validation of the eMC dose calculation algorithm. eMC-calculated electron dose distributions for bolus ECT have been compared to previously measured TLD-dose points throughout patient-based cylindrical phantoms (retromolar trigone and nose), whose axial cross sections were based on the mid-PTV (planning treatment volume) CT anatomy. The phantoms consisted of SR4 muscle substitute, SR4 bone substitute, and air. The treatment plans were imported into the Eclipse treatment planning system, and electron dose distributions were calculated using the eMC algorithm (1%) and the pencil beam algorithm (PBA). The eMC has comparable accuracy to the pencil beam redefinition algorithm (PBRA) used for bolus ECT planning and has acceptably low dose calculation times. The eMC accuracy decreased when smoothing was used in high-gradient dose regions. The eMC accuracy was consistent with that previously reported for the eMC electron dose algorithm and shows that the algorithm is suitable for clinical implementation of bolus ECT.

  12. Deriving causes of child mortality by re–analyzing national verbal autopsy data applying a standardized computer algorithm in Uganda, Rwanda and Ghana

    Directory of Open Access Journals (Sweden)

    Li Liu

    2015-06-01

    Full Text Available Background To accelerate progress toward the Millennium Development Goal 4, reliable information on causes of child mortality is critical. With more national verbal autopsy (VA studies becoming available, how to improve consistency of national VA derived child causes of death should be considered for the purpose of global comparison. We aimed to adapt a standardized computer algorithm to re–analyze national child VA studies conducted in Uganda, Rwanda and Ghana recently, and compare our results with those derived from physician review to explore issues surrounding the application of the standardized algorithm in place of physician review. Methods and Findings We adapted the standardized computer algorithm considering the disease profile in Uganda, Rwanda and Ghana. We then derived cause–specific mortality fractions applying the adapted algorithm and compared the results with those ascertained by physician review by examining the individual– and population–level agreement. Our results showed that the leading causes of child mortality in Uganda, Rwanda and Ghana were pneumonia (16.5–21.1% and malaria (16.8–25.6% among children below five years and intrapartum–related complications (6.4–10.7% and preterm birth complications (4.5–6.3% among neonates. The individual level agreement was poor to substantial across causes (kappa statistics: –0.03 to 0.83, with moderate to substantial agreement observed for injury, congenital malformation, preterm birth complications, malaria and measles. At the population level, despite fairly different cause–specific mortality fractions, the ranking of the leading causes was largely similar. Conclusions The standardized computer algorithm produced internally consistent distribution of causes of child mortality. The results were also qualitatively comparable to those based on physician review from the perspective of public health policy. The standardized computer algorithm has the advantage of

  13. Accuracy of popular automatic QT Interval algorithms assessed by a 'Gold Standard' and comparison with a Novel method: computer simulation study

    Directory of Open Access Journals (Sweden)

    Hunt Anthony

    2005-09-01

    Full Text Available Abstract Background Accurate measurement of the QT interval is very important from a clinical and pharmaceutical drug safety screening perspective. Expert manual measurement is both imprecise and imperfectly reproducible, yet it is used as the reference standard to assess the accuracy of current automatic computer algorithms, which thus produce reproducible but incorrect measurements of the QT interval. There is a scientific imperative to evaluate the most commonly used algorithms with an accurate and objective 'gold standard' and investigate novel automatic algorithms if the commonly used algorithms are found to be deficient. Methods This study uses a validated computer simulation of 8 different noise contaminated ECG waveforms (with known QT intervals of 461 and 495 ms, generated from a cell array using Luo-Rudy membrane kinetics and the Crank-Nicolson method) as a reference standard to assess the accuracy of commonly used QT measurement algorithms. Each ECG contaminated with 39 mixtures of noise at 3 levels of intensity was first filtered then subjected to three threshold methods (T1, T2, T3), two T wave slope methods (S1, S2) and a Novel method. The reproducibility and accuracy of each algorithm were compared for each ECG. Results The coefficients of variation for methods T1, T2, T3, S1, S2 and Novel were 0.36, 0.23, 1.9, 0.93, 0.92 and 0.62 respectively. For ECGs of real QT interval 461 ms the methods T1, T2, T3, S1, S2 and Novel calculated the mean QT intervals (standard deviations) to be 379.4 (1.29), 368.5 (0.8), 401.3 (8.4), 358.9 (4.8), 381.5 (4.6) and 464 (4.9) ms respectively. For ECGs of real QT interval 495 ms the methods T1, T2, T3, S1, S2 and Novel calculated the mean QT intervals (standard deviations) to be 396.9 (1.7), 387.2 (0.97), 424.9 (8.7), 386.7 (2.2), 396.8 (2.8) and 493 (0.97) ms respectively. These results showed significant differences between means at >95% confidence level. Shifting ECG baselines caused large errors of QT interval with T1 and T2

  14. Fast parallel algorithm for three-dimensional distance-driven model in iterative computed tomography reconstruction

    International Nuclear Information System (INIS)

    Chen Jian-Lin; Li Lei; Wang Lin-Yuan; Cai Ai-Long; Xi Xiao-Qi; Zhang Han-Ming; Li Jian-Xin; Yan Bin

    2015-01-01

    The projection matrix model is used to describe the physical relationship between reconstructed object and projection. Such a model has a strong influence on projection and backprojection, two vital operations in iterative computed tomographic reconstruction. The distance-driven model (DDM) is a state-of-the-art technology that simulates forward and back projections. This model has a low computational complexity and a relatively high spatial resolution; however, it includes only a few methods in a parallel operation with a matched model scheme. This study introduces a fast and parallelizable algorithm to improve the traditional DDM for computing the parallel projection and backprojection operations. Our proposed model has been implemented on a GPU (graphic processing unit) platform and has achieved satisfactory computational efficiency with no approximation. The runtime for the projection and backprojection operations with our model is approximately 4.5 s and 10.5 s per loop, respectively, with an image size of 256×256×256 and 360 projections with a size of 512×512. We compare several general algorithms that have been proposed for maximizing GPU efficiency by using the unmatched projection/backprojection models in a parallel computation. The imaging resolution is not sacrificed and remains accurate during computed tomographic reconstruction. (paper)

  15. Acute appendicitis: prospective evaluation of a diagnostic algorithm integrating ultrasound and low-dose CT to reduce the need of standard CT.

    Science.gov (United States)

    Poletti, Pierre-Alexandre; Platon, Alexandra; De Perrot, Thomas; Sarasin, Francois; Andereggen, Elisabeth; Rutschmann, Olivier; Dupuis-Lozeron, Elise; Perneger, Thomas; Gervaz, Pascal; Becker, Christoph D

    2011-12-01

    To evaluate an algorithm integrating ultrasound and low-dose unenhanced CT with oral contrast medium (LDCT) in the assessment of acute appendicitis, to reduce the need of conventional CT. Ultrasound was performed upon admission in 183 consecutive adult patients (111 women, 72 men, mean age 32) with suspicion of acute appendicitis and a BMI between 18.5 and 30 (step 1). No further examination was recommended when ultrasound was positive for appendicitis, negative with low clinical suspicion, or demonstrated an alternative diagnosis. All other patients underwent LDCT (30 mAs) (step 2). Standard intravenously enhanced CT (180 mAs) was performed after indeterminate LDCT (step 3). No further imaging was recommended after ultrasound in 84 (46%) patients; LDCT was obtained in 99 (54%). LDCT was positive or negative for appendicitis in 81 (82%) of these 99 patients, indeterminate in 18 (18%) who underwent standard CT. Eighty-six (47%) of the 183 patients had a surgically proven appendicitis. The sensitivity and specificity of the algorithm were 98.8% and 96.9%. The proposed algorithm achieved high sensitivity and specificity for detection of acute appendicitis, while reducing the need for standard CT and thus limiting exposition to radiation and to intravenous contrast media.
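
    Written out as decision logic, the three-step imaging pathway is compact; the sketch below encodes it as a function that returns the recommended next step. The input flags are simplified stand-ins for the clinical and imaging findings described above, not a validated implementation of the protocol.

```python
def next_step(us_result, clinical_suspicion_low=False, ldct_result=None):
    """Return the next action in the ultrasound -> low-dose CT -> standard CT pathway.

    us_result, ldct_result: "positive", "negative", "alternative" or "indeterminate".
    """
    # Step 1: ultrasound on admission.
    if us_result == "positive":
        return "treat for appendicitis (no further imaging)"
    if us_result == "alternative":
        return "manage alternative diagnosis (no further imaging)"
    if us_result == "negative" and clinical_suspicion_low:
        return "no further imaging"
    # Step 2: low-dose unenhanced CT with oral contrast (30 mAs).
    if ldct_result is None:
        return "perform low-dose CT"
    if ldct_result in ("positive", "negative"):
        return f"LDCT {ldct_result}: no further imaging"
    # Step 3: standard intravenously enhanced CT (180 mAs) after indeterminate LDCT.
    return "perform standard contrast-enhanced CT"

print(next_step("negative", clinical_suspicion_low=False))          # -> perform low-dose CT
print(next_step("indeterminate", ldct_result="indeterminate"))      # -> standard CT
```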

  16. Acute appendicitis: prospective evaluation of a diagnostic algorithm integrating ultrasound and low-dose CT to reduce the need of standard CT

    Energy Technology Data Exchange (ETDEWEB)

    Poletti, Pierre-Alexandre; Platon, Alexandra [University Hospital of Geneva, Department of Radiology, Geneva (Switzerland); University Hospital of Geneva, Emergency Center, Geneva (Switzerland); Perrot, Thomas de; Becker, Christoph D. [University Hospital of Geneva, Department of Radiology, Geneva (Switzerland); Sarasin, Francois; Rutschmann, Olivier [University Hospital of Geneva, Emergency Center, Geneva (Switzerland); Andereggen, Elisabeth [University Hospital of Geneva, Emergency Center, Geneva (Switzerland); University Hospital of Geneva, Department of Surgery, Geneva (Switzerland); Dupuis-Lozeron, Elise; Perneger, Thomas [University Hospital of Geneva, Division of Clinical Epidemiology, Geneva (Switzerland); Gervaz, Pascal [University Hospital of Geneva, Department of Surgery, Geneva (Switzerland)

    2011-12-15

    To evaluate an algorithm integrating ultrasound and low-dose unenhanced CT with oral contrast medium (LDCT) in the assessment of acute appendicitis, to reduce the need of conventional CT. Ultrasound was performed upon admission in 183 consecutive adult patients (111 women, 72 men, mean age 32) with suspicion of acute appendicitis and a BMI between 18.5 and 30 (step 1). No further examination was recommended when ultrasound was positive for appendicitis, negative with low clinical suspicion, or demonstrated an alternative diagnosis. All other patients underwent LDCT (30 mAs) (step 2). Standard intravenously enhanced CT (180 mAs) was performed after indeterminate LDCT (step 3). No further imaging was recommended after ultrasound in 84 (46%) patients; LDCT was obtained in 99 (54%). LDCT was positive or negative for appendicitis in 81 (82%) of these 99 patients, indeterminate in 18 (18%) who underwent standard CT. Eighty-six (47%) of the 183 patients had a surgically proven appendicitis. The sensitivity and specificity of the algorithm were 98.8% and 96.9%. The proposed algorithm achieved high sensitivity and specificity for detection of acute appendicitis, while reducing the need for standard CT and thus limiting exposition to radiation and to intravenous contrast media. (orig.)

  17. Back-projection stacking of P- and S-waves to determine location and focal mechanism of microseismic events recorded by a surface array

    Czech Academy of Sciences Publication Activity Database

    Vlček, J.; Fischer, Tomáš; Vilhelm, J.

    2016-01-01

    Vol. 64, No. 6 (2016), pp. 1428-1440 ISSN 0016-8025 Institutional support: RVO:67985530 Keywords: microseismic monitoring * back-projection stacking * hypocenter location * focal mechanism inversion Subject RIV: DC - Seismology, Volcanology, Earth Structure Impact factor: 1.846, year: 2016

  18. Development of regularized expectation maximization algorithms for fan-beam SPECT data

    International Nuclear Information System (INIS)

    Kim, Soo Mee; Lee, Jae Sung; Lee, Dong Soo; Lee, Soo Jin; Kim, Kyeong Min

    2005-01-01

    SPECT using a fan-beam collimator improves spatial resolution and sensitivity. For the reconstruction from fan-beam projections, it is necessary to implement direct fan-beam reconstruction methods without transforming the data into the parallel geometry. In this study, various fan-beam reconstruction algorithms were implemented and their performances were compared. The projector for fan-beam SPECT was implemented using a ray-tracing method. The direct reconstruction algorithms implemented for fan-beam projection data were FBP (filtered backprojection), EM (expectation maximization), OS-EM (ordered subsets EM) and MAP-EM OSL (maximum a posteriori EM using the one-step late method) with membrane and thin-plate models as priors. For comparison, the fan-beam projection data were also rebinned into the parallel data using various interpolation methods, such as the nearest neighbor, bilinear and bicubic interpolations, and reconstructed using the conventional EM algorithm for parallel data. Noiseless and noisy projection data from the digital Hoffman brain and Shepp/Logan phantoms were reconstructed using the above algorithms. The reconstructed images were compared in terms of a percent error metric. For the fan-beam data with Poisson noise, the MAP-EM OSL algorithm with the thin-plate prior showed the best result in both percent error and stability. Bilinear interpolation was the most effective method for rebinning from the fan-beam to parallel geometry when the accuracy and computation load were considered. Direct fan-beam EM reconstructions were more accurate than the standard EM reconstructions obtained from rebinned parallel data. Direct fan-beam reconstruction algorithms were implemented, which provided significantly improved reconstructions
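
    The EM-type updates named above (EM, OS-EM) share a simple multiplicative form. The following is a minimal, geometry-agnostic sketch of MLEM and an ordered-subsets variant, assuming an explicit system matrix A is available; it illustrates the update rule only and is not the ray-tracing fan-beam projector used in the study.

```python
import numpy as np

def mlem(A, y, n_iter=50, eps=1e-12):
    """Minimal MLEM sketch for emission tomography.

    A : (n_bins, n_voxels) system matrix (fan-beam or parallel geometry).
    y : (n_bins,) measured projection counts.
    Returns the estimated activity image (flattened).
    """
    x = np.ones(A.shape[1])            # non-negative initial estimate
    sens = A.sum(axis=0) + eps         # sensitivity image (backprojection of ones)
    for _ in range(n_iter):
        yhat = A @ x + eps             # forward projection of current estimate
        x *= (A.T @ (y / yhat)) / sens # multiplicative EM update
    return x

def osem(A, y, n_subsets=4, n_iter=10, eps=1e-12):
    """Ordered-subsets variant: cycle the EM update over subsets of projection bins."""
    x = np.ones(A.shape[1])
    subsets = np.array_split(np.arange(A.shape[0]), n_subsets)
    for _ in range(n_iter):
        for s in subsets:
            As, ys = A[s], y[s]
            x *= (As.T @ (ys / (As @ x + eps))) / (As.sum(axis=0) + eps)
    return x

# Toy usage with a random non-negative system matrix (illustration only).
rng = np.random.default_rng(0)
A = rng.random((128, 64))
x_true = rng.random(64)
y = rng.poisson(A @ x_true * 100)
x_est = osem(A, y.astype(float))
```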

  19. Accordance System Of Distance Learning “Foundations Of Algorithmization And Programming” To International Standards Of Quality.

    Directory of Open Access Journals (Sweden)

    E. Bakumenko

    2009-06-01

    Full Text Available The article examines the compliance of the integrated environment of the distance-learning course «Foundations of Algorithmization and Programming» with the requirements of the international quality standards IMS and SCORM for distance learning systems.

  20. Evaluation of the OSC-TV iterative reconstruction algorithm for cone-beam optical CT

    Energy Technology Data Exchange (ETDEWEB)

    Matenine, Dmitri, E-mail: dmitri.matenine.1@ulaval.ca; Mascolo-Fortin, Julia, E-mail: julia.mascolo-fortin.1@ulaval.ca [Département de physique, de génie physique et d’optique, Université Laval, Québec, Québec G1V 0A6 (Canada); Goussard, Yves, E-mail: yves.goussard@polymtl.ca [Département de génie électrique/Institut de génie biomédical, École Polytechnique de Montréal, C.P. 6079, succ. Centre-ville, Montréal, Québec H3C 3A7 (Canada); Després, Philippe, E-mail: philippe.despres@phy.ulaval.ca [Département de physique, de génie physique et d’optique and Centre de recherche sur le cancer, Université Laval, Québec, Québec G1V 0A6, Canada and Département de radio-oncologie and Centre de recherche du CHU de Québec, Québec, Québec G1R 2J6 (Canada)

    2015-11-15

    Purpose: The present work evaluates an iterative reconstruction approach, namely, the ordered subsets convex (OSC) algorithm with regularization via total variation (TV) minimization in the field of cone-beam optical computed tomography (optical CT). One of the uses of optical CT is gel-based 3D dosimetry for radiation therapy, where it is employed to map dose distributions in radiosensitive gels. Model-based iterative reconstruction may improve optical CT image quality and contribute to a wider use of optical CT in clinical gel dosimetry. Methods: This algorithm was evaluated using experimental data acquired by a cone-beam optical CT system, as well as complementary numerical simulations. A fast GPU implementation of OSC-TV was used to achieve reconstruction times comparable to those of conventional filtered backprojection. Images obtained via OSC-TV were compared with the corresponding filtered backprojections. Spatial resolution and uniformity phantoms were scanned and respective reconstructions were subject to evaluation of the modulation transfer function, image uniformity, and accuracy. The artifacts due to refraction and total signal loss from opaque objects were also studied. Results: The cone-beam optical CT data reconstructions showed that OSC-TV outperforms filtered backprojection in terms of image quality, thanks to a model-based simulation of the photon attenuation process. It was shown to significantly improve the image spatial resolution and reduce image noise. The accuracy of the estimation of linear attenuation coefficients remained similar to that obtained via filtered backprojection. Certain image artifacts due to opaque objects were reduced. Nevertheless, the common artifact due to the gel container walls could not be eliminated. Conclusions: The use of iterative reconstruction improves cone-beam optical CT image quality in many ways. The comparisons between OSC-TV and filtered backprojection presented in this paper demonstrate that OSC-TV can

  1. Evaluation of the OSC-TV iterative reconstruction algorithm for cone-beam optical CT.

    Science.gov (United States)

    Matenine, Dmitri; Mascolo-Fortin, Julia; Goussard, Yves; Després, Philippe

    2015-11-01

    The present work evaluates an iterative reconstruction approach, namely, the ordered subsets convex (OSC) algorithm with regularization via total variation (TV) minimization in the field of cone-beam optical computed tomography (optical CT). One of the uses of optical CT is gel-based 3D dosimetry for radiation therapy, where it is employed to map dose distributions in radiosensitive gels. Model-based iterative reconstruction may improve optical CT image quality and contribute to a wider use of optical CT in clinical gel dosimetry. This algorithm was evaluated using experimental data acquired by a cone-beam optical CT system, as well as complementary numerical simulations. A fast GPU implementation of OSC-TV was used to achieve reconstruction times comparable to those of conventional filtered backprojection. Images obtained via OSC-TV were compared with the corresponding filtered backprojections. Spatial resolution and uniformity phantoms were scanned and respective reconstructions were subject to evaluation of the modulation transfer function, image uniformity, and accuracy. The artifacts due to refraction and total signal loss from opaque objects were also studied. The cone-beam optical CT data reconstructions showed that OSC-TV outperforms filtered backprojection in terms of image quality, thanks to a model-based simulation of the photon attenuation process. It was shown to significantly improve the image spatial resolution and reduce image noise. The accuracy of the estimation of linear attenuation coefficients remained similar to that obtained via filtered backprojection. Certain image artifacts due to opaque objects were reduced. Nevertheless, the common artifact due to the gel container walls could not be eliminated. The use of iterative reconstruction improves cone-beam optical CT image quality in many ways. The comparisons between OSC-TV and filtered backprojection presented in this paper demonstrate that OSC-TV can potentially improve the rendering of

  2. Evaluation of the OSC-TV iterative reconstruction algorithm for cone-beam optical CT

    International Nuclear Information System (INIS)

    Matenine, Dmitri; Mascolo-Fortin, Julia; Goussard, Yves; Després, Philippe

    2015-01-01

    Purpose: The present work evaluates an iterative reconstruction approach, namely, the ordered subsets convex (OSC) algorithm with regularization via total variation (TV) minimization in the field of cone-beam optical computed tomography (optical CT). One of the uses of optical CT is gel-based 3D dosimetry for radiation therapy, where it is employed to map dose distributions in radiosensitive gels. Model-based iterative reconstruction may improve optical CT image quality and contribute to a wider use of optical CT in clinical gel dosimetry. Methods: This algorithm was evaluated using experimental data acquired by a cone-beam optical CT system, as well as complementary numerical simulations. A fast GPU implementation of OSC-TV was used to achieve reconstruction times comparable to those of conventional filtered backprojection. Images obtained via OSC-TV were compared with the corresponding filtered backprojections. Spatial resolution and uniformity phantoms were scanned and respective reconstructions were subject to evaluation of the modulation transfer function, image uniformity, and accuracy. The artifacts due to refraction and total signal loss from opaque objects were also studied. Results: The cone-beam optical CT data reconstructions showed that OSC-TV outperforms filtered backprojection in terms of image quality, thanks to a model-based simulation of the photon attenuation process. It was shown to significantly improve the image spatial resolution and reduce image noise. The accuracy of the estimation of linear attenuation coefficients remained similar to that obtained via filtered backprojection. Certain image artifacts due to opaque objects were reduced. Nevertheless, the common artifact due to the gel container walls could not be eliminated. Conclusions: The use of iterative reconstruction improves cone-beam optical CT image quality in many ways. The comparisons between OSC-TV and filtered backprojection presented in this paper demonstrate that OSC-TV can

  3. Optimization algorithm that generates the lowest ΔEab values to a reference standard based on spectral measurements of solid inks in offset lithography

    DEFF Research Database (Denmark)

    Jensen, Søren Tapdrup

    2014-01-01

    ISO 12647-2 specifies CIELAB values for primary and secondary colors, but tolerances only for the primary solid colors. Press operators in lithography still favor density measurements for process control to assure quality and reproducibility during a production run. Since there is no direct...... that the algorithm has a high degree of accuracy in predicting the ink layer thickness that conforms to the ISO 12647-2 aim point, but errors in the prediction occur when the measured sum of the secondary colors has a low ΔEab to the standard.
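
    The tolerance check referred to above is a CIELAB colour difference against an aim point. A minimal sketch of the classic ΔE*ab (CIE76) formula follows; the aim-point numbers are placeholders, not the ISO 12647-2 values.

```python
import math

def delta_e_ab(lab1, lab2):
    """CIE76 colour difference between two CIELAB triples (L*, a*, b*)."""
    return math.sqrt(sum((c1 - c2) ** 2 for c1, c2 in zip(lab1, lab2)))

# Hypothetical aim point and press measurement (not the ISO 12647-2 values).
aim = (55.0, -37.0, -50.0)       # e.g. a cyan solid aim point (placeholder)
measured = (56.2, -35.8, -48.9)
print(f"dE*ab = {delta_e_ab(aim, measured):.2f}")
```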

  4. The use of a standardized PCT-algorithm reduces costs in intensive care in septic patients - a DRG-based simulation model

    Directory of Open Access Journals (Sweden)

    Wilke MH

    2011-12-01

    Full Text Available Abstract Introduction: The management of bloodstream infections, especially sepsis, is a difficult task. An optimal antibiotic therapy (ABX) is paramount for success. Procalcitonin (PCT) is a well-investigated biomarker that allows close monitoring of the infection and management of ABX. It has proven to be a cost-efficient diagnostic tool. In Diagnosis Related Groups (DRG) based reimbursement systems, hospitals receive only a fixed amount of money for certain treatments. It is therefore very important to obtain an optimal balance of clinical treatment and resource consumption, namely the length of stay in hospital and especially in the Intensive Care Unit (ICU). We investigated the economic effects an optimized PCT-based algorithm for antibiotic management could have. Materials and methods: We collected inpatient episode data from 16 hospitals. These data contain administrative and clinical information such as length of stay, days in the ICU, and diagnoses and procedures. Various RCTs and reviews have published different algorithms for the use of PCT to manage ABX. Moreover, RCTs and meta-analyses have demonstrated possible savings in days of ABX (ABD) and length of stay in the ICU (ICUD). As the meta-analyses use studies on different patient populations (pneumonia, sepsis, other bacterial infections), we undertook a short meta-analysis of 6 relevant studies investigating sepsis or ventilator-associated pneumonia (VAP). From this analysis we obtained savings in ABD and ICUD by calculating the weighted mean differences. We then designed a new PCT-based algorithm using results from two very recent reviews. The algorithm incorporates evidence from several studies. From the patient data we calculated cost estimates using German national standard costing information for the German G-DRG system. We developed a simulation model in which the possible savings and the extra costs for (on average) 8 PCT tests due to our algorithm were brought into equation. Results: We calculated ABD

  5. Comparison Between Manual Auditing and a Natural Language Process With Machine Learning Algorithm to Evaluate Faculty Use of Standardized Reports in Radiology.

    Science.gov (United States)

    Guimaraes, Carolina V; Grzeszczuk, Robert; Bisset, George S; Donnelly, Lane F

    2018-03-01

    When implementing or monitoring department-sanctioned standardized radiology reports, feedback about individual faculty performance has been shown to be a useful driver of faculty compliance. Most commonly, these data are derived from manual audit, which can be both time-consuming and subject to sampling error. The purpose of this study was to evaluate whether a software program using natural language processing and machine learning could audit radiologist compliance with the use of standardized reports as accurately as manually performed audits. Radiology reports from a 1-month period were loaded into such a software program, and faculty compliance with the use of standardized reports was calculated. For that same period, manual audits were performed (25 reports audited for each of 42 faculty members). The mean compliance rates calculated by automated auditing were then compared with the confidence interval of the mean rate by manual audit. The mean compliance rate for use of standardized reports as determined by manual audit was 91.2%, with a confidence interval between 89.3% and 92.8%. The mean compliance rate calculated by automated auditing was 92.0%, within that confidence interval. This study shows that, using natural language processing and machine learning algorithms, an automated analysis can accurately determine whether reports comply with standardized report templates and language, compared with manual audits. This may avoid significant labor costs related to conducting the manual auditing process. Copyright © 2017 American College of Radiology. Published by Elsevier Inc. All rights reserved.

  6. Interior tomography: theory, algorithms and applications

    Science.gov (United States)

    Yu, Hengyong; Ye, Yangbo; Wang, Ge

    2008-08-01

    The conventional wisdom states that the interior problem (reconstruction of an interior region from projection data along lines only through that region) is NOT uniquely solvable. While this remains correct, our recent theoretical and numerical results demonstrated that the interior problem CAN be solved in a theoretically exact and numerically stable fashion if a sub-region within the interior region is known. In contrast to the well-established lambda tomography, studies on this type of exact interior reconstruction are referred to as "interior tomography". In this paper, we overview the development of interior tomography, involving theory, algorithms and applications. The essence of interior tomography is to find the unique solution from highly truncated projection data via analytic continuation. Such an extension can be done in either the filtered backprojection or the backprojection filtration format. The key issue for exact interior reconstruction is how to invert the truncated Hilbert transform. We have developed a projection onto convex sets (POCS) algorithm and a singular value decomposition (SVD) method and produced excellent results in numerical simulations and practical applications.
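
    As a rough illustration of the POCS idea mentioned above (alternating projections onto convex constraint sets), the toy sketch below enforces two such sets: agreement with a known sub-region and consistency with a linear measurement. It is only a generic POCS loop, not the authors' truncated-Hilbert-transform inversion.

```python
import numpy as np

def pocs(x0, projections, n_iter=200):
    """Generic POCS: repeatedly apply the projection operator of each convex set."""
    x = x0.copy()
    for _ in range(n_iter):
        for P in projections:
            x = P(x)
    return x

# Toy example: recover a vector that (i) matches known values on a sub-region
# and (ii) satisfies a linear measurement A x = y (both are convex constraints).
rng = np.random.default_rng(1)
n = 20
x_true = rng.random(n)
known_idx = np.arange(5)                 # "known sub-region" (assumption of the toy)
A = rng.random((8, n))
y = A @ x_true

def project_known(x):                    # enforce the known sub-region values
    x = x.copy()
    x[known_idx] = x_true[known_idx]
    return x

def project_data(x):                     # least-norm correction onto {x : A x = y}
    return x + np.linalg.pinv(A) @ (y - A @ x)

x_rec = pocs(np.zeros(n), [project_known, project_data])
```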

  7. GPU-Accelerated Forward and Back-Projections with Spatially Varying Kernels for 3D DIRECT TOF PET Reconstruction.

    Science.gov (United States)

    Ha, S; Matej, S; Ispiryan, M; Mueller, K

    2013-02-01

    We describe a GPU-accelerated framework that efficiently models spatially (shift) variant system response kernels and performs forward- and back-projection operations with these kernels for the DIRECT (Direct Image Reconstruction for TOF) iterative reconstruction approach. Inherent challenges arise from the poor memory cache performance at non-axis aligned TOF directions. Focusing on the GPU memory access patterns, we utilize different kinds of GPU memory according to these patterns in order to maximize the memory cache performance. We also exploit the GPU instruction-level parallelism to efficiently hide long latencies from the memory operations. Our experiments indicate that our GPU implementation of the projection operators performs slightly faster than, or approximately comparably to, FFT-based approaches using state-of-the-art FFTW routines. However, most importantly, our GPU framework can also efficiently handle any generic system response kernels, such as spatially symmetric and shift-variant as well as spatially asymmetric and shift-variant, both of which an FFT-based approach cannot cope with.

  8. Revealing the Rupture Complexity of the 2016 Mw 7.8 Kaikoura, New Zealand Earthquake with the Slowness-calibrated Back-projection.

    Science.gov (United States)

    Meng, L.; Zhang, A.; Fang, L.

    2017-12-01

    Characterizing the geometrical complexity of earthquake rupture is of great importance for understanding the physical mechanisms of earthquakes. Back-projection stands out as a robust technique to capture spatiotemporal properties of the rupture, such as its length, direction, speed, and segmentation. Conventional back-projection utilizes "hypocenter correction" to mitigate 3D structural effects. However, due to 3D velocity variations, a static hypocenter correction can become improper as the rupture front moves away from the hypocenter. For the Mw 7.8 Kaikoura earthquake sequence, the apparent source locations inferred from aftershock back-projections indicate systematic westward biases away from the GNS catalogue locations. Here, we applied a physics-based aftershock calibration to account for the travel-time variation due to 3D structures. We implement a 2D slowness-vector correction to back-projection of the Kaikoura earthquake recorded by the China Array. The Kaikoura earthquake occurred in a complicated fault setting of a transitional plate boundary. The calibrated back-projection reveals that the earthquake initiated to the south of the Hope fault and propagated northeastward over 100 km through stepping and branching on at least 6 distinct fault planes with a slow overall rupture speed of 1.4 km/s. The high-frequency radiation occurs mainly on three shallow thrust faults located in the dilatational quadrants of the rupture on the Hope-Kekerengu fault, consistent with the unclamping effect predicted by the dynamic Coulomb stress. This study demonstrates the capability of the BP method, enhanced by aftershock calibrations, to describe earthquake rupture kinematics in regions of complex fault systems.
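
    Back-projection of this kind is, at its core, shift-and-stack: for each candidate source, station traces are aligned by predicted travel times and summed. The sketch below uses a constant-velocity travel-time model as a placeholder, not the slowness-calibrated correction described in the abstract.

```python
import numpy as np

def backproject_stack(waveforms, station_xy, grid_xy, dt, velocity=3.5):
    """Shift-and-stack back-projection over a grid of candidate sources.

    waveforms  : (n_sta, n_samples) array of (envelope or squared) traces.
    station_xy : (n_sta, 2) station coordinates in km.
    grid_xy    : (n_grid, 2) candidate source coordinates in km.
    dt         : sample interval in seconds.
    velocity   : constant apparent velocity in km/s (placeholder model).
    Returns (n_grid,) stack energy; the maximum indicates the likely source.
    """
    n_sta, n_samp = waveforms.shape
    energy = np.zeros(len(grid_xy))
    for g, src in enumerate(grid_xy):
        tt = np.linalg.norm(station_xy - src, axis=1) / velocity  # predicted travel times
        shifts = np.round(tt / dt).astype(int)
        stack = np.zeros(n_samp)
        for i in range(n_sta):
            stack += np.roll(waveforms[i], -shifts[i])            # align traces to origin time
        energy[g] = np.max(stack)
    return energy
```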

  9. Regional Implementation of a Pediatric Cardiology Syncope Algorithm Using Standardized Clinical Assessment and Management Plans (SCAMPS) Methodology

    OpenAIRE

    Paris, Yvonne; Toro-Salazar, Olga H.; Gauthier, Naomi S.; Rotondo, Kathleen M.; Arnold, Lucy; Hamershock, Rose; Saudek, David E.; Fulton, David R.; Renaud, Ashley; Alexander, Mark E.

    2016-01-01

    Background: Pediatric syncope is common. Cardiac causes are rarely found. We describe and assess a pragmatic approach to these patients first seen by a pediatric cardiologist in the New England region, using Standardized Clinical Assessment and Management Plans (SCAMPs). Methods and Results: Ambulatory patients aged 7 to 21 years initially seen for syncope at participating New England Congenital Cardiology Association practices over a 2.5‐year period were evaluated using a SCAMP. Findings wer...

  10. A novel standardized algorithm using SPECT/CT evaluating unhappy patients after unicondylar knee arthroplasty– a combined analysis of tracer uptake distribution and component position

    International Nuclear Information System (INIS)

    Suter, Basil; Testa, Enrique; Stämpfli, Patrick; Konala, Praveen; Rasch, Helmut; Friederich, Niklaus F; Hirschmann, Michael T

    2015-01-01

    The introduction of a standardized SPECT/CT algorithm including a localization scheme, which allows accurate identification of specific patterns and thresholds of SPECT/CT tracer uptake, could lead to a better understanding of the bone remodeling and specific failure modes of unicondylar knee arthroplasty (UKA). The purpose of the present study was to introduce a novel standardized SPECT/CT algorithm for patients after UKA and evaluate its clinical applicability, usefulness and inter- and intra-observer reliability. Tc-HDP-SPECT/CT images of consecutive patients (median age 65, range 48–84 years) with 21 knees after UKA were prospectively evaluated. The tracer activity on SPECT/CT was localized using a specific standardized UKA localization scheme. For tracer uptake analysis (intensity and anatomical distribution pattern) a 3D volumetric quantification method was used. The maximum intensity values were recorded for each anatomical area. In addition, ratios between the respective value in the measured area and the background tracer activity were calculated. The femoral and tibial component position (varus-valgus, flexion-extension, internal and external rotation) was determined in 3D-CT. The inter- and intraobserver reliability of the localization scheme, grading of the tracer activity and component measurements were determined by calculating the intraclass correlation coefficients (ICC). The localization scheme, grading of the tracer activity and component measurements showed high inter- and intra-observer reliabilities for all regions (tibia, femur and patella). For measurement of component position there was strong agreement between the readings of the two observers; the ICC for the orientation of the femoral component was 0.73-1.00 (intra-observer reliability) and 0.91-1.00 (inter-observer reliability). The ICC for the orientation of the tibial component was 0.75-1.00 (intra-observer reliability) and 0.77-1.00 (inter-observer reliability). The SPECT

  11. A novel standardized algorithm using SPECT/CT evaluating unhappy patients after unicondylar knee arthroplasty--a combined analysis of tracer uptake distribution and component position.

    Science.gov (United States)

    Suter, Basil; Testa, Enrique; Stämpfli, Patrick; Konala, Praveen; Rasch, Helmut; Friederich, Niklaus F; Hirschmann, Michael T

    2015-03-20

    The introduction of a standardized SPECT/CT algorithm including a localization scheme, which allows accurate identification of specific patterns and thresholds of SPECT/CT tracer uptake, could lead to a better understanding of the bone remodeling and specific failure modes of unicondylar knee arthroplasty (UKA). The purpose of the present study was to introduce a novel standardized SPECT/CT algorithm for patients after UKA and evaluate its clinical applicability, usefulness and inter- and intra-observer reliability. Tc-HDP-SPECT/CT images of consecutive patients (median age 65, range 48-84 years) with 21 knees after UKA were prospectively evaluated. The tracer activity on SPECT/CT was localized using a specific standardized UKA localization scheme. For tracer uptake analysis (intensity and anatomical distribution pattern) a 3D volumetric quantification method was used. The maximum intensity values were recorded for each anatomical area. In addition, ratios between the respective value in the measured area and the background tracer activity were calculated. The femoral and tibial component position (varus-valgus, flexion-extension, internal and external rotation) was determined in 3D-CT. The inter- and intraobserver reliability of the localization scheme, grading of the tracer activity and component measurements were determined by calculating the intraclass correlation coefficients (ICC). The localization scheme, grading of the tracer activity and component measurements showed high inter- and intra-observer reliabilities for all regions (tibia, femur and patella). For measurement of component position there was strong agreement between the readings of the two observers; the ICC for the orientation of the femoral component was 0.73-1.00 (intra-observer reliability) and 0.91-1.00 (inter-observer reliability). The ICC for the orientation of the tibial component was 0.75-1.00 (intra-observer reliability) and 0.77-1.00 (inter-observer reliability). The SPECT/CT algorithm

  12. Performing daily prostate targeting with a standard V-EPID and an automated radio-opaque marker detection algorithm

    International Nuclear Information System (INIS)

    Beaulieu, Luc; Girouard, Louis-Martin; Aubin, Sylviane; Aubry, Jean-Francois; Brouard, Lucie; Roy-Lacroix, Lise; Dumont, Jean; Tremblay, Daniel; Laverdiere, Jacques; Vigneault, Eric

    2004-01-01

    Online prostate positioning using gold markers and a standard video-based electronic portal imaging device is reported. The average systematic (random) errors were reduced from 2.1 mm (2.7 mm) to 0.5 mm (1.5 mm) in the AP direction, from 1.1 mm (1.7 mm) to 0.7 mm (1.2 mm) in the SI direction, and from 1.2 mm (1.7 mm) to 0.6 mm (1.3 mm) in the LR direction

  13. Cone-beam and fan-beam image reconstruction algorithms based on spherical and circular harmonics

    International Nuclear Information System (INIS)

    Zeng, Gengsheng L; Gullberg, Grant T

    2004-01-01

    A cone-beam image reconstruction algorithm using spherical harmonic expansions is proposed. The reconstruction algorithm is in the form of a summation of inner products of two discrete arrays of spherical harmonic expansion coefficients at each cone-beam point of acquisition. This form is different from the common filtered backprojection algorithm and the direct Fourier reconstruction algorithm. There is no re-sampling of the data, and spherical harmonic expansions are used instead of Fourier expansions. As a special case, a new fan-beam image reconstruction algorithm is also derived in terms of a circular harmonic expansion. Computer simulation results for both cone-beam and fan-beam algorithms are presented for circular planar orbit acquisitions. The algorithms give accurate reconstructions; however, the implementation of the cone-beam reconstruction algorithm is computationally intensive. A relatively efficient algorithm is proposed for reconstructing the central slice of the image when a circular scanning orbit is used

  14. Preliminary Study of Image Reconstruction Algorithm on a Digital Signal Processor

    Science.gov (United States)

    2014-03-01

    hardware description language (VHDL). The programming process is completely different for FPGAs since the development essentially involves ... 5.2 Comparison of CPU-GPU, CPU-FPGA, and CPU-DSP Designs. The work for implementing the VHDL description of the back-projection algorithm on a physical ... FPGA was not complete. Hence, the DSP implementation results are compared with the simulated results for the VHDL design. Simulating VHDL provides an

  15. An algorithm for three-dimensional imaging in the positron camera

    International Nuclear Information System (INIS)

    Chen Kun; Ma Mei; Xu Rongfen; Shen Miaohe

    1986-01-01

    A mathematical filtered back-projection algorithm for image reconstruction using two-dimensional signals detected by parallel multiwire proportional chambers is described. The approaches of pseudo three-dimensional and full three-dimensional image reconstruction are introduced, and the available point response functions are defined as well. The design parameters and computation procedure of the full three-dimensional method are presented
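
    For reference, the filter-then-backproject structure underlying such algorithms can be sketched in a few lines for the parallel-beam 2D case (ramp filter in the frequency domain, then accumulation over angles). This is a generic illustration with nearest-neighbour interpolation, not the pseudo- or full three-dimensional method of the paper.

```python
import numpy as np

def fbp(sinogram, angles_deg):
    """Parallel-beam filtered back-projection.

    sinogram   : (n_angles, n_det) projections.
    angles_deg : projection angles in degrees.
    Returns an (n_det, n_det) reconstruction.
    """
    n_angles, n_det = sinogram.shape

    # Ramp filter applied along the detector axis in the Fourier domain.
    ramp = np.abs(np.fft.fftfreq(n_det))
    filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp, axis=1))

    # Back-project the filtered projections over all angles.
    recon = np.zeros((n_det, n_det))
    center = n_det // 2
    xs = np.arange(n_det) - center
    X, Y = np.meshgrid(xs, xs, indexing="xy")
    for proj, ang in zip(filtered, np.deg2rad(angles_deg)):
        t = X * np.cos(ang) + Y * np.sin(ang)              # detector coordinate of each pixel
        idx = np.clip(np.round(t).astype(int) + center, 0, n_det - 1)
        recon += proj[idx]                                  # nearest-neighbour interpolation
    return recon * np.pi / (2 * n_angles)
```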

  16. Profiling and sorting Mangifera Indica morphology for quality attributes and grade standards using integrated image processing algorithms

    Science.gov (United States)

    Balbin, Jessie R.; Fausto, Janette C.; Janabajab, John Michael M.; Malicdem, Daryl James L.; Marcelo, Reginald N.; Santos, Jan Jeffrey Z.

    2017-06-01

    Mango production is highly vital in the Philippines. It is very essential to the food industry, as mangoes are used in markets and restaurants daily. The quality of mangoes can affect the income of a mango farmer; thus, an incorrect time of harvesting will result in the loss of quality mangoes and income. Scientific farming with new instruments is much needed nowadays, because mango wastage increases annually due to poor quality. This research paper focuses on profiling and sorting of Mangifera Indica using image processing techniques and pattern recognition. The image of a mango is captured on a weekly basis from its early stage. In this study, the researchers monitor the growth and color transition of a mango for profiling purposes. Actual dimensions of the mango are determined through image conversion and determination of pixel and RGB values computed in MATLAB. A program is developed to determine the range of the maximum size of a standard ripe mango. Hue, lightness, saturation (HSL) correction is used in the filtering process to assure the exactness of the RGB values of a mango subject. Using pattern recognition techniques, the program can determine whether a mango is standard and ready to be exported.
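
    The HSL-based filtering and RGB thresholding described above can be prototyped with the Python standard library's colorsys module (the study itself uses MATLAB). The hue and saturation thresholds below are illustrative placeholders, not the paper's calibrated values.

```python
import colorsys

def classify_pixel(r, g, b):
    """Rough ripeness check for one RGB pixel (values 0-255).

    Thresholds are illustrative placeholders, not the paper's calibrated values.
    """
    h, l, s = colorsys.rgb_to_hls(r / 255.0, g / 255.0, b / 255.0)
    hue_deg = h * 360.0
    if 40 <= hue_deg <= 70 and s > 0.4:      # yellowish and saturated -> "ripe"
        return "ripe"
    if 70 < hue_deg <= 160:                  # green hues -> "unripe"
        return "unripe"
    return "indeterminate"

print(classify_pixel(230, 200, 60))   # a yellow pixel
print(classify_pixel(90, 160, 70))    # a green pixel
```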

  17. A new simple h-mesh adaptation algorithm for standard Smagorinsky LES: a first step of Taylor scale as a refinement variable

    Directory of Open Access Journals (Sweden)

    S Kaennakham

    2016-09-01

    Full Text Available The interaction between discretization error and modeling error has led to some doubts about adopting Solution Adaptive Grid (SAG) strategies with LES. Existing SAG approaches have undesirable aspects that make them complicated and less convenient to apply to real engineering applications. In this work, a new refinement algorithm is proposed, aiming to enhance the efficiency of the SAG methodology in terms of simplicity of definition, reduced reliance on user judgment, suitability for standard Smagorinsky LES, and computational affordability. The construction of a new refinement variable as a function of the Taylor scale, corresponding to the kinetic energy balance requirement of the Smagorinsky SGS model, is presented. The method was tested numerically on a two-dimensional turbulent plane jet. It is found that result quality is effectively improved, with a significant reduction in CPU time compared to fixed-grid cases.
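
    For context, a common isotropic-turbulence estimate relates the Taylor microscale to the resolved kinetic energy and dissipation rate; a refinement criterion of the kind described could compare the local cell size against it. Whether the paper uses exactly this relation and criterion is not stated in the abstract.

```latex
% A common isotropic-turbulence estimate of the Taylor microscale \lambda from the
% turbulent kinetic energy k, its dissipation rate \varepsilon, and the kinematic
% viscosity \nu (a standard relation, not necessarily the paper's exact definition):
\lambda = \sqrt{\frac{10\,\nu\,k}{\varepsilon}},
\qquad \text{refine a cell when } \Delta > C\,\lambda \text{ (illustrative criterion).}
```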

  18. Regional Implementation of a Pediatric Cardiology Syncope Algorithm Using Standardized Clinical Assessment and Management Plans (SCAMPS) Methodology.

    Science.gov (United States)

    Paris, Yvonne; Toro-Salazar, Olga H; Gauthier, Naomi S; Rotondo, Kathleen M; Arnold, Lucy; Hamershock, Rose; Saudek, David E; Fulton, David R; Renaud, Ashley; Alexander, Mark E

    2016-02-19

    Pediatric syncope is common. Cardiac causes are rarely found. We describe and assess a pragmatic approach to these patients first seen by a pediatric cardiologist in the New England region, using Standardized Clinical Assessment and Management Plans (SCAMPs). Ambulatory patients aged 7 to 21 years initially seen for syncope at participating New England Congenital Cardiology Association practices over a 2.5-year period were evaluated using a SCAMP. Findings were iteratively analyzed and the care pathway was revised. The vast majority (85%) of the 1254 patients had typical syncope. A minority had exercise-related or more problematic symptoms. Guideline-defined testing identified one patient with cardiac syncope. Syncope Severity Scores correlated well between physician and patient perceived symptoms. Orthostatic vital signs were of limited use. Largely incidental findings were seen in 10% of ECGs and 11% of echocardiograms. The 10% returning for follow-up, by design, reported more significant symptoms, but did not have newly recognized cardiac disease. Iterative analysis helped refine the approach. SCAMP methodology confirmed that the vast majority of children referred to the outpatient pediatric cardiology setting had typical low-severity neurally mediated syncope that could be effectively evaluated in a single visit using minimal resources. A simple scoring system can help triage patients into treatment categories. Prespecified criteria permitted the effective diagnosis of the single patient with a clear cardiac etiology. Patients with higher syncope scores still have a very low risk of cardiac disease, but may warrant attention. © 2016 The Authors. Published on behalf of the American Heart Association, Inc., by Wiley Blackwell.

  19. High-resolution backprojection at regional distance: Application to the Haiti M7.0 earthquake and comparisons with finite source studies

    Science.gov (United States)

    Meng, L.; Ampuero, J.-P.; Sladen, A.; Rendon, H.

    2012-04-01

    A catastrophic Mw7 earthquake ruptured on 12 January 2010 on a complex fault system near Port-au-Prince, Haiti. Offshore rupture is suggested by aftershock locations and marine geophysics studies, but its extent remains difficult to define using geodetic and teleseismic observations. Here we perform the multitaper multiple signal classification (MUSIC) analysis, a high-resolution array technique, at regional distance with recordings from the Venezuela National Seismic Network to resolve high-frequency (about 0.4 Hz) aspects of the earthquake process. Our results indicate westward rupture with two subevents, roughly 35 km apart. In comparison, a lower-frequency finite source inversion with fault geometry based on new geologic and aftershock data shows two slip patches with centroids 21 km apart. Apparent source time functions from USArray further constrain the intersubevent time delay, implying a rupture speed of 3.3 km/s. The tips of the slip zones coincide with subevents imaged by backprojections. The different subevent locations found by backprojection and source inversion suggest spatial complementarity between high- and low-frequency source radiation consistent with high-frequency radiation originating from rupture arrest phases at the edges of main slip areas. The centroid moment tensor (CMT) solution and a geodetic-only inversion have similar moment, indicating most of the moment released is captured by geodetic observations and no additional rupture is required beyond where it is imaged in our preferred model. Our results demonstrate the contribution of backprojections of regional seismic array data for earthquakes down to M ≈ 7, especially when incomplete coverage of seismic and geodetic data implies large uncertainties in source inversions.

  20. Three dimensional computed tomography (CT) algorithms for a planar object

    International Nuclear Information System (INIS)

    Chi, Yong Ki

    2007-02-01

    Recently, modern X-ray computed tomography (CT) scanners have been rapidly moving towards cone-beam geometry. One of the important advantages of cone-beam CT is its fast volumetric scanning capability. It also provides the opportunity for tomographic image reconstruction with magnified resolution. This opportunity is applicable to emission CT (ECT) scanners with a convergent collimator, which functions as a cone-beam geometry. However, in cone-beam image reconstruction, existing reconstruction algorithms are limited by the long-object problem due to insufficient data or limited source scanning. Therefore, algorithms that are based on cone-beam geometry and free from limited source scanning are in high demand these days. In this study, for a planar object, we have developed full- and half-scan algorithms based on approximate cone-beam back-projection. For solving the long-object problem, many other reconstruction algorithms have been adopted by several helical CT scanners that are composed of a micro-focus X-ray tube and a flat-panel detector. Although these efforts solve the long-object problem, it remains for planar objects due to limited source scanning such as a non-isocentric circular orbit. Prior to the algorithmic development, we report on digital tomosynthesis (DT), called laminography, using geometric projection methods for reconstructing arbitrary cross-section images as well as three-dimensional laminography images for cone-beam CT. Digital laminography is advantageous in terms of temporal resolution and is widely used with only a small number of projections in cone-beam geometry. While existing laminography algorithms use geometric projection methods, in this dissertation we substitute a back-projection technique for the geometric projection. Both laminography approaches, without filtering and weighting steps, give similar results apart from the complexity of their algorithms, but this produces blurring and other severe artifacts in

  1. Algorithming the Algorithm

    DEFF Research Database (Denmark)

    Mahnke, Martina; Uprichard, Emma

    2014-01-01

    changes: it’s not the ocean, it’s the internet we’re talking about, and it’s not a TV show producer, but algorithms that constitute a sort of invisible wall. Building on this assumption, most research is trying to ‘tame the algorithmic tiger’. While this is a valuable and often inspiring approach, we...

  2. A New Modified Firefly Algorithm

    Directory of Open Access Journals (Sweden)

    Medha Gupta

    2016-07-01

    Full Text Available Nature-inspired meta-heuristic algorithms study the emergent collective intelligence of groups of simple agents. The Firefly Algorithm is one such new swarm-based metaheuristic algorithm, inspired by the flashing behavior of fireflies. The algorithm was first proposed in 2008 and has since been successfully used for solving various optimization problems. In this work, we propose a new modified version of the Firefly Algorithm (MoFA) and compare its performance with the standard Firefly Algorithm along with various other meta-heuristic algorithms. Numerical studies and results demonstrate that the proposed algorithm is superior to existing algorithms.
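
    For orientation, the standard firefly update (brightness-dependent attraction plus a random walk) is sketched below; the MoFA modification itself is not detailed in the abstract, so this is the baseline algorithm only.

```python
import numpy as np

def firefly_minimize(f, bounds, n_fireflies=20, n_iter=100,
                     beta0=1.0, gamma=1.0, alpha=0.2, seed=0):
    """Standard Firefly Algorithm for minimising f over a box (sketch)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    dim = len(lo)
    X = rng.uniform(lo, hi, size=(n_fireflies, dim))
    intensity = np.array([f(x) for x in X])            # lower value = brighter firefly
    for _ in range(n_iter):
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                if intensity[j] < intensity[i]:        # move i towards brighter j
                    r2 = np.sum((X[i] - X[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)
                    X[i] += beta * (X[j] - X[i]) + alpha * (rng.random(dim) - 0.5)
                    X[i] = np.clip(X[i], lo, hi)
                    intensity[i] = f(X[i])
        alpha *= 0.97                                   # slowly reduce randomisation
    best = np.argmin(intensity)
    return X[best], intensity[best]

# Example: minimise the sphere function in 2-D.
x_best, f_best = firefly_minimize(lambda x: np.sum(x ** 2), [(-5, 5), (-5, 5)])
```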

  3. Vascular diameter measurement in CT angiography: comparison of model-based iterative reconstruction and standard filtered back projection algorithms in vitro.

    Science.gov (United States)

    Suzuki, Shigeru; Machida, Haruhiko; Tanaka, Isao; Ueno, Eiko

    2013-03-01

    The purpose of this study was to evaluate the performance of model-based iterative reconstruction (MBIR) in measurement of the inner diameter of models of blood vessels and to compare performance between MBIR and a standard filtered back projection (FBP) algorithm. Vascular models with wall thicknesses of 0.5, 1.0, and 1.5 mm were scanned with a 64-MDCT unit and densities of contrast material yielding 275, 396, and 542 HU. Images were reconstructed by MBIR and FBP, and the mean diameter of each model vessel was measured by software automation. Twenty separate measurements were repeated for each vessel, and variance among the repeated measures was analyzed for determination of measurement error. For all nine model vessels, CT attenuation profiles were compared along a line passing through the luminal center on axial images reconstructed with FBP and MBIR, and the 10-90% edge rise distances at the boundary between the vascular wall and the lumen were evaluated. For images reconstructed with FBP, measurement errors were smallest for models with 1.5-mm wall thickness, except those filled with 275-HU contrast material, and errors grew as the density of the contrast material decreased. Measurement errors with MBIR were comparable to or less than those with FBP. In CT attenuation profiles of images reconstructed with MBIR, the 10-90% edge rise distances at the boundary between the lumen and vascular wall were relatively short for each vascular model compared with those of the profile curves of FBP images. MBIR is better than standard FBP for reducing reconstruction blur and improving the accuracy of diameter measurement at CT angiography.
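
    The 10-90% edge rise distance used above can be computed from a 1-D attenuation profile as sketched below; the interpolation choices are assumptions for illustration, not the authors' measurement software.

```python
import numpy as np

def edge_rise_distance(profile, spacing_mm):
    """10-90% rise distance of a monotonic edge in a 1-D CT attenuation profile.

    profile    : HU values sampled along a line crossing the wall/lumen boundary,
                 ordered so the values rise across the edge.
    spacing_mm : distance between samples in mm.
    """
    p = np.asarray(profile, dtype=float)
    lo, hi = p.min(), p.max()
    t10 = lo + 0.10 * (hi - lo)
    t90 = lo + 0.90 * (hi - lo)
    x = np.arange(p.size) * spacing_mm
    # Linear interpolation of the positions where the profile crosses the thresholds.
    x10 = np.interp(t10, p, x)
    x90 = np.interp(t90, p, x)
    return abs(x90 - x10)

# Toy edge: a blurred step from 0 HU (wall) to 400 HU (contrast-filled lumen).
xx = np.linspace(-2, 2, 81)
profile = 400 / (1 + np.exp(-xx / 0.3))
print(edge_rise_distance(profile, spacing_mm=0.05))
```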

  4. Algorithms for Computation of Fundamental Properties of Seawater. Endorsed by Unesco/SCOR/ICES/IAPSO Joint Panel on Oceanographic Tables and Standards and SCOR Working Group 51. Unesco Technical Papers in Marine Science, No. 44.

    Science.gov (United States)

    Fofonoff, N. P.; Millard, R. C., Jr.

    Algorithms for computation of fundamental properties of seawater, based on the practical salinity scale (PSS-78) and the international equation of state for seawater (EOS-80), are compiled in the present report for implementing and standardizing computer programs for oceanographic data processing. Sample FORTRAN subprograms and tables are given…

  5. Trends in causes of death among children under 5 in Bangladesh, 1993-2004: an exercise applying a standardized computer algorithm to assign causes of death using verbal autopsy data

    Directory of Open Access Journals (Sweden)

    Walker Neff

    2011-08-01

    Full Text Available Abstract Background: Trends in the causes of child mortality serve as important global health information to guide efforts to improve child survival. With child mortality declining in Bangladesh, the distribution of causes of death also changes. The three verbal autopsy (VA) studies conducted with the Bangladesh Demographic and Health Surveys provide a unique opportunity to study these changes in child causes of death. Methods: To ensure comparability of these trends, we developed a standardized algorithm to assign causes of death using symptoms collected through the VA studies. The original algorithms applied were systematically reviewed, and key differences in cause categorization, hierarchy, case definition, and the amount of data collected were compared to inform the development of the standardized algorithm. Based primarily on the 2004 cause categorization and hierarchy, the standardized algorithm guarantees comparability of the trends by only including symptom data commonly available across all three studies. Results: Between 1993 and 2004, pneumonia remained the leading cause of death in Bangladesh, contributing to 24% to 33% of deaths among children under 5. The proportion of neonatal mortality increased significantly from 36% (uncertainty range [UR]: 31%-41%) to 56% (49%-62%) during the same period. The cause-specific mortality fractions due to birth asphyxia/birth injury and prematurity/low birth weight (LBW) increased steadily, with both rising from 3% (2%-5%) to 13% (10%-17%) and 10% (7%-15%), respectively. The cause-specific mortality rates decreased significantly for neonatal tetanus and several postneonatal causes (tetanus: from 7 [4-11] to 2 [0.4-4] per 1,000 live births (LB); pneumonia: from 26 [20-33] to 15 [11-20] per 1,000 LB; diarrhea: from 12 [8-17] to 4 [2-7] per 1,000 LB; measles: from 5 [2-8] to 0.2 [0-0.7] per 1,000 LB; injury: from 11 [7-17] to 3 [1-5] per 1,000 LB; and malnutrition: from 9 [6-13] to 5 [2-7]). Conclusions

  6. Algorithmic cryptanalysis

    CERN Document Server

    Joux, Antoine

    2009-01-01

    Illustrating the power of algorithms, Algorithmic Cryptanalysis describes algorithmic methods with cryptographically relevant examples. Focusing on both private- and public-key cryptographic algorithms, it presents each algorithm either as a textual description, in pseudo-code, or in a C code program.Divided into three parts, the book begins with a short introduction to cryptography and a background chapter on elementary number theory and algebra. It then moves on to algorithms, with each chapter in this section dedicated to a single topic and often illustrated with simple cryptographic applic

  7. Algorithmic mathematics

    CERN Document Server

    Hougardy, Stefan

    2016-01-01

    Algorithms play an increasingly important role in nearly all fields of mathematics. This book allows readers to develop basic mathematical abilities, in particular those concerning the design and analysis of algorithms as well as their implementation. It presents not only fundamental algorithms like the sieve of Eratosthenes, the Euclidean algorithm, sorting algorithms, algorithms on graphs, and Gaussian elimination, but also discusses elementary data structures, basic graph theory, and numerical questions. In addition, it provides an introduction to programming and demonstrates in detail how to implement algorithms in C++. This textbook is suitable for students who are new to the subject and covers a basic mathematical lecture course, complementing traditional courses on analysis and linear algebra. Both authors have given this "Algorithmic Mathematics" course at the University of Bonn several times in recent years.

  8. Total algorithms

    NARCIS (Netherlands)

    Tel, G.

    We define the notion of total algorithms for networks of processes. A total algorithm enforces that a "decision" is taken by a subset of the processes, and that participation of all processes is required to reach this decision. Total algorithms are an important building block in the design of

  9. Group leaders optimization algorithm

    Science.gov (United States)

    Daskin, Anmer; Kais, Sabre

    2011-03-01

    We present a new global optimization algorithm in which the influence of the leaders in social groups is used as an inspiration for the evolutionary technique which is designed into a group architecture. To demonstrate the efficiency of the method, a standard suite of single and multi-dimensional optimization functions along with the energies and the geometric structures of Lennard-Jones clusters are given as well as the application of the algorithm on quantum circuit design problems. We show that as an improvement over previous methods, the algorithm scales as N^2.5 for the Lennard-Jones clusters of N particles. In addition, an efficient circuit design is shown for a two-qubit Grover search algorithm which is a quantum algorithm providing quadratic speedup over the classical counterpart.

  10. Suspected acute pulmonary emboli: cost-effectiveness of chest helical computed tomography versus a standard diagnostic algorithm incorporating ventilation-perfusion scintigraphy

    International Nuclear Information System (INIS)

    Larcos, G.; Chi, K.K.G.; Berry, G.; Westmead Hospital, Sydney, NSW; Shiell, A.

    2000-01-01

    There is controversy regarding the investigation of patients with suspected acute pulmonary embolism (PE). To compare the cost-effectiveness of alternative methods of diagnosing acute PE, chest helical computed tomography (CT), alone and in combination with venous ultrasound (US) of the legs and pulmonary angiography (PA), was compared with a conventional algorithm using ventilation-perfusion (V/Q) scintigraphy supplemented in selected cases by US and PA. A decision-analytical model was constructed to model the costs and effects of the three diagnostic strategies in a hypothetical cohort of 1000 patients each. Transition probabilities were based on published data. Life years gained by each strategy were estimated from published mortality rates. Schedule fees were used to estimate costs. The V/Q protocol is both more expensive and more effective than CT alone, resulting in 20.1 additional lives saved at a (discounted) cost of $940 per life year gained. An additional 2.5 lives can be saved if CT replaces V/Q scintigraphy in the diagnostic algorithm, but at a cost of $23,905 per life year saved. Thus, the more effective diagnostic strategies are also more expensive. In patients with suspected PE, the incremental cost-effectiveness of the V/Q-based strategy over CT alone is reasonable in comparison with other health interventions. The cost-effectiveness of the supplemented CT strategy is more questionable. Copyright (2000) The Australasian College of Physicians
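
    The dollar-per-life-year figures above are incremental cost-effectiveness ratios (extra cost divided by extra effect). A minimal sketch with placeholder numbers, not the study's cohort model:

```python
def icer(cost_a, effect_a, cost_b, effect_b):
    """Incremental cost-effectiveness ratio of strategy A over strategy B
    (extra cost per extra life-year gained)."""
    return (cost_a - cost_b) / (effect_a - effect_b)

# Placeholder numbers for illustration only (not the study's figures).
print(icer(cost_a=1_200_000, effect_a=520.0,   # e.g. V/Q-based strategy
           cost_b=1_150_000, effect_b=500.0))  # e.g. CT-alone strategy
```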

  11. Algorithms in Singular

    Directory of Open Access Journals (Sweden)

    Hans Schonemann

    1996-12-01

    Full Text Available Some algorithms for singularity theory and algebraic geometry. The use of Grobner basis computations for treating systems of polynomial equations has become an important tool in many areas. This paper introduces the concept of standard bases (a generalization of Grobner bases) and their application to some problems from algebraic geometry. The examples are presented as SINGULAR commands. A general introduction to Grobner bases can be found in the textbook [CLO], an introduction to syzygies in [E] and [St1]. SINGULAR is a computer algebra system for computing information about singularities, for use in algebraic geometry. The basic algorithms in SINGULAR are several variants of a general standard basis algorithm for general monomial orderings (see [GG]). This includes well-orderings (the Buchberger algorithm [B1], [B2]) and tangent cone orderings (the Mora algorithm [M1], [MPT]) as special cases: it is able to work with non-homogeneous and homogeneous input and also to compute in the localization of the polynomial ring at 0. Recent versions include algorithms to factorize polynomials and a factorizing Grobner basis algorithm. For a complete description of SINGULAR see [Si].
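
    Standard bases reduce to Grobner bases for well-orderings, which can be experimented with in Python via SymPy (the paper's own examples are SINGULAR commands); local orderings, as handled by Mora's algorithm, are outside this sketch.

```python
from sympy import groebner, symbols

x, y, z = symbols('x y z')
# Grobner basis of an ideal with respect to the degree-reverse-lexicographic order.
G = groebner([x**2 + y**2 + z**2 - 1, x*y - z], x, y, z, order='grevlex')
print(G)
```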

  12. Significant factors selection in the chemical and enzymatic hydrolysis of lignocellulosic residues by a genetic algorithm analysis and comparison with the standard Plackett-Burman methodology.

    Science.gov (United States)

    Giordano, Pablo C; Beccaria, Alejandro J; Goicoechea, Héctor C

    2011-11-01

    A comparison between the classic Plackett-Burman (PB) design ANOVA analysis and a genetic algorithm (GA) approach for identifying significant factors has been carried out. This comparison was made by applying both analyses to data obtained when optimizing both the chemical and enzymatic hydrolysis of three lignocellulosic feedstocks (corn and wheat bran, and pine sawdust) with a PB experimental design. Depending on the kind of biomass and the hydrolysis being considered, different results were obtained. Interestingly, some interactions were found to be significant by the GA approach, which allowed the identification of significant factors that otherwise, based only on the classic PB analysis, would not have been taken into account in a further optimization step. Improvements in the fitting of ca. 80% were obtained when comparing the coefficient of determination (R2) computed for both methods. Copyright © 2011 Elsevier Ltd. All rights reserved.

  13. Assessment of structural similarity in CT using filtered backprojection and iterative reconstruction: a phantom study with 3D printed lung vessels.

    Science.gov (United States)

    Joemai, Raoul M S; Geleijns, Jacob

    2017-11-01

    To compare the performance of three generations of CT reconstruction techniques using structural similarity (SSIM) as a measure of image quality for CT scans of a chest phantom with 3D printed lung vessels. CT images of the chest phantom were acquired at seven dose levels by changing the tube current while other acquisition parameters were kept constant. Three CT reconstruction techniques were applied to each acquisition. The first technique was filtered backprojection (FBP), the second was FBP with iterative filtering (adaptive iterative dose reduction in 3 dimensions (AIDR 3D)), and the third was model-based iterative reconstruction (Forward projected model-based Iterative Reconstruction SoluTion (FIRST)). Image quality of the CT data was quantified in terms of SSIM. The SSIM index was used for image quality comparison between the dose levels and different reconstruction techniques. The SSIM index gives a value between 0 and 1, with 0 as the lowest image quality and 1 as excellent image quality. The lowest SSIM index was observed for FBP at all dose levels. The reconstruction technique with the highest SSIM depends on the dose level. For tube currents higher than 80 mA, AIDR 3D showed the highest SSIM index, and for tube currents lower than or equal to 80 mA, FIRST showed the highest SSIM index. The SSIM index is a robust quantity and is correlated with image quality as perceived by humans. Advanced CT reconstruction techniques provide better image quality in all conditions compared to FBP. Advances in knowledge: SSIM is a robust measure to compare CT image quality for advanced reconstruction techniques relative to a reference. 3D print technology is a useful method for the development of dedicated phantoms for CT image quality evaluation.
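
    SSIM as used here can be computed with scikit-image; a minimal sketch for two same-size 2-D slices is shown below. The window and weighting settings are library defaults, which may differ from those used in the study.

```python
import numpy as np
from skimage.metrics import structural_similarity

def slice_ssim(test_img, ref_img):
    """SSIM index (0..1) of a reconstructed CT slice against a reference slice."""
    data_range = ref_img.max() - ref_img.min()
    return structural_similarity(test_img, ref_img, data_range=data_range)

# Toy usage: a noisy copy of a reference slice scores below 1.0.
rng = np.random.default_rng(0)
ref = rng.random((128, 128))
noisy = ref + 0.1 * rng.standard_normal(ref.shape)
print(slice_ssim(noisy, ref))
```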

  14. RFID Location Algorithm

    Directory of Open Access Journals (Sweden)

    Wang Zi Min

    2016-01-01

    Full Text Available With the development of social services and the further improvement of people's living standards, there is an urgent need for positioning technology that can adapt to complex new situations. In recent years, RFID technology has found a wide range of applications in all aspects of life and production, such as logistics tracking, car alarms, and the security of other items. Using RFID technology for localization is a new direction in the eyes of various research institutions and scholars. RFID positioning offers system stability, small error, and low cost, and its location algorithm is the focus of this study. This article analyzes RFID positioning methods and algorithms at several levels. First, several common basic RFID methods are introduced; secondly, a higher-accuracy network-based location method is discussed; finally, the LANDMARC algorithm is described. From this it can be seen that advanced and efficient algorithms play an important role in increasing RFID positioning accuracy. Finally, RFID location algorithms are summarized, their deficiencies are pointed out, and requirements for follow-up study are put forward, with a vision of better future RFID positioning technology.
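
    The LANDMARC idea mentioned above compares the tracked tag's RSSI vector with reference tags at known positions and averages the k nearest references with inverse-square weights. A minimal sketch (array shapes and parameter names are illustrative):

```python
import numpy as np

def landmarc_locate(tag_rssi, ref_rssi, ref_xy, k=4, eps=1e-9):
    """LANDMARC-style location estimate.

    tag_rssi : (n_readers,) RSSI of the tracked tag at each reader.
    ref_rssi : (n_refs, n_readers) RSSI of reference tags at known positions.
    ref_xy   : (n_refs, 2) coordinates of the reference tags.
    """
    # Euclidean distance in signal space between the tag and each reference tag.
    E = np.linalg.norm(ref_rssi - tag_rssi, axis=1)
    nearest = np.argsort(E)[:k]                      # k nearest reference tags
    w = 1.0 / (E[nearest] ** 2 + eps)                # inverse-square weighting
    w /= w.sum()
    return w @ ref_xy[nearest]                       # weighted centroid estimate
```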

  15. Developpement d'algorithmes de reconstruction statistique appliques en tomographie rayons-X assistee par ordinateur

    Science.gov (United States)

    Thibaudeau, Christian

    Computed tomography (CT) provides, non-invasively, a three-dimensional image of a subject's internal anatomy. It is the logical evolution of radiography and allows a volume to be viewed in different planes (sagittal, coronal, axial, or any other plane). CT can advantageously complement positron emission tomography (PET), a tool of choice in biomedical research and cancer diagnosis. PET provides functional, physiological, and metabolic information, allowing the localization and quantification of radiotracers inside the human body. The latter has unmatched sensitivity, but may nevertheless suffer from low spatial resolution and a lack of anatomical landmarks depending on the radiotracer used. The combination, or fusion, of PET and CT images provides this anatomical localization of the radiotracer distribution. The CT image represents a map of the attenuation undergone by the X-rays as they pass through the tissues. It therefore also improves the quantification of the PET image by offering the possibility of correcting for attenuation. The CT image is obtained by transforming attenuation profiles into a Cartesian image that can be interpreted by a human. While the quality of this image is strongly influenced by the performance of the scanner, it also depends greatly on the ability of the reconstruction algorithm to obtain a faithful representation of the imaged medium. Standard reconstruction techniques, based on filtered back-projection (FBP), rely on a mathematically perfect model of the acquisition geometry. An alternative to this reference method is called statistical, or iterative, reconstruction. It yields better results in the presence of noise or a limited amount of information and can adapt to virtually all forms

  16. Algorithms for limited-view computed tomography: an annotated bibliography and a challenge

    International Nuclear Information System (INIS)

    Rangayyan, R.; Dhawan, A.P.; Gordon, R.

    1985-01-01

    In many applications of computed tomography, it may not be possible to acquire projection data at all angles, as required by the most commonly used algorithm of convolution backprojection. In such a limited-data situation, we face an ill-posed problem in attempting to reconstruct an image from an incomplete set of projections. Many techniques have been proposed to tackle this situation, employing diverse theories such as signal recovery, image restoration, constrained deconvolution, and constrained optimization, as well as novel schemes such as iterative object-dependent algorithms incorporating a priori knowledge and use of multispectral radiation. The authors present an overview of such techniques and offer a challenge to all readers to reconstruct images from a set of limited-view data provided here

  17. Algorithmic complexity of quantum capacity

    Science.gov (United States)

    Oskouei, Samad Khabbazi; Mancini, Stefano

    2018-04-01

    We analyze the notion of quantum capacity from the perspective of algorithmic (descriptive) complexity. To this end, we resort to the concept of semi-computability in order to describe quantum states and quantum channel maps. We introduce algorithmic entropies (like algorithmic quantum coherent information) and derive relevant properties for them. Then we show that the quantum capacity based on the semi-computability concept equals the entropy rate of algorithmic coherent information, which in turn equals the standard quantum capacity. Thanks to this, we finally prove that the quantum capacity, for a given semi-computable channel, is limit computable.

  18. Combinatorial algorithms

    CERN Document Server

    Hu, T C

    2002-01-01

    Newly enlarged, updated second edition of a valuable, widely used text presents algorithms for shortest paths, maximum flows, dynamic programming and backtracking. Also discussed are binary trees, heuristic and near optimums, matrix multiplication, and NP-complete problems. 153 black-and-white illus. 23 tables. New to this edition: Chapter 9

  19. Iterative concurrent reconstruction algorithms for emission computed tomography

    International Nuclear Information System (INIS)

    Brown, J.K.; Hasegawa, B.H.; Lang, T.F.

    1994-01-01

    Direct reconstruction techniques, such as those based on filtered backprojection, are typically used for emission computed tomography (ECT), even though it has been argued that iterative reconstruction methods may produce better clinical images. The major disadvantage of iterative reconstruction algorithms, and a significant reason for their lack of clinical acceptance, is their computational burden. We outline a new class of "concurrent" iterative reconstruction techniques for ECT in which the reconstruction process is reorganized such that a significant fraction of the computational processing occurs concurrently with the acquisition of ECT projection data. These new algorithms use the 10-30 min required for acquisition of a typical SPECT scan to iteratively process the available projection data, significantly reducing the requirements for post-acquisition processing. These algorithms are tested on SPECT projection data from a Hoffman brain phantom acquired with 2 x 10^5 counts in 64 views each having 64 projections. The SPECT images are reconstructed as 64 x 64 tomograms, starting with six angular views. Other angular views are added to the reconstruction process sequentially, in a manner that reflects their availability for a typical acquisition protocol. The results suggest that if T s of concurrent processing are used, the reconstruction processing time required after completion of the data acquisition can be reduced by at least T/3 s. (Author)

  20. Hybrid Cryptosystem Using Tiny Encryption Algorithm and LUC Algorithm

    Science.gov (United States)

    Rachmawati, Dian; Sharif, Amer; Jaysilen; Andri Budiman, Mohammad

    2018-01-01

    Security is a very important issue in data transmission, and there are many methods for making files more secure. One of these methods is cryptography, which protects a file by transforming it into an unreadable code so that anyone who does not hold the key cannot recover the original file. Among the many approaches used in cryptography is the hybrid cryptosystem, a method that uses a symmetric algorithm to secure the file and an asymmetric algorithm to secure the symmetric algorithm's key. In this research, the TEA algorithm is used as the symmetric algorithm and the LUC algorithm as the asymmetric algorithm. The system is tested by encrypting and decrypting files with the TEA algorithm and by using the LUC algorithm to encrypt and decrypt the TEA key. The results show that when the TEA algorithm encrypts a file, the ciphertext consists of characters from the ASCII (American Standard Code for Information Interchange) table written as hexadecimal numbers, and the ciphertext size increases by sixteen bytes for every eight characters added to the plaintext.
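
    For reference, the symmetric half of the hybrid scheme described above can be sketched as the standard TEA block encryption (64-bit block, 128-bit key, 32 cycles with the usual constant 0x9E3779B9); this is a generic Python illustration, not the authors' implementation, and the LUC step that protects the key is omitted.

      def tea_encrypt_block(v0, v1, key):
          """Encrypt one 64-bit block (two 32-bit words) with a 128-bit key (four 32-bit words)."""
          delta, mask = 0x9E3779B9, 0xFFFFFFFF
          total = 0
          for _ in range(32):                          # 32 cycles of the TEA round function
              total = (total + delta) & mask
              v0 = (v0 + (((v1 << 4) + key[0]) ^ (v1 + total) ^ ((v1 >> 5) + key[1]))) & mask
              v1 = (v1 + (((v0 << 4) + key[2]) ^ (v0 + total) ^ ((v0 >> 5) + key[3]))) & mask
          return v0, v1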

  1. High-speed computation of the EM algorithm for PET image reconstruction

    International Nuclear Information System (INIS)

    Rajan, K.; Patnaik, L.M.; Ramakrishna, J.

    1994-01-01

    The PET image reconstruction based on the EM algorithm has several attractive advantages over the conventional convolution backprojection algorithms. However, two major drawbacks have impeded the routine use of the EM algorithm, namely, the long computational time due to slow convergence and the large memory required for the storage of the image, projection data and the probability matrix. In this study, the authors attempt to solve these two problems by parallelizing the EM algorithm on a multiprocessor system. The authors have implemented an extended hypercube (EH) architecture for the high-speed computation of the EM algorithm using commercially available fast floating-point digital signal processor (DSP) chips as the processing elements (PEs). The authors discuss and compare the performance of the EM algorithm on a 386/387 machine, a CD 4360 mainframe, and the EH system. The results show that the computational speed of an EH using DSP chips as PEs executing the EM image reconstruction algorithm is about 130 times better than that of the CD 4360 mainframe. The EH topology is expandable to a larger number of PEs
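
    The update that such parallel implementations accelerate is the standard multiplicative ML-EM step; the dense-matrix Python sketch below only illustrates that step (the system matrix A and data y are placeholders), and is not the authors' DSP-based code.

      import numpy as np

      def mlem(A, y, n_iter=50):
          """Basic ML-EM for emission tomography, assuming y ~ Poisson(A @ x)."""
          x = np.ones(A.shape[1])                      # uniform initial image
          sens = A.sum(axis=0)                         # sensitivity image (column sums), assumed > 0
          for _ in range(n_iter):
              proj = A @ x                             # forward projection
              ratio = y / np.maximum(proj, 1e-12)      # measured / estimated projections
              x *= (A.T @ ratio) / sens                # multiplicative update
          return x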

  2. Modular Regularization Algorithms

    DEFF Research Database (Denmark)

    Jacobsen, Michael

    2004-01-01

    The class of linear ill-posed problems is introduced along with a range of standard numerical tools and basic concepts from linear algebra, statistics and optimization. Known algorithms for solving linear inverse ill-posed problems are analyzed to determine how they can be decomposed into independent modules. These modules are then combined to form new regularization algorithms with other properties than those we started out with. Several variations are tested using the Matlab toolbox MOORe Tools created in connection with this thesis. Object oriented programming techniques are explained and used to set up the ill-posed problems in the toolbox. Hereby, we are able to write regularization algorithms that automatically exploit structure in the ill-posed problem without being rewritten explicitly. We explain how to implement a stopping criterion for a parameter choice method based upon...
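
    As one concrete example of the kind of regularization module such a toolbox composes, the sketch below shows plain Tikhonov regularization written via the SVD filter factors; it is a generic textbook formulation in Python, not code from the MOORe Tools toolbox.

      import numpy as np

      def tikhonov(A, b, lam):
          """Solve min ||A x - b||^2 + lam^2 ||x||^2 through the SVD of A."""
          U, s, Vt = np.linalg.svd(A, full_matrices=False)
          # Filtered inverse singular values sigma_i / (sigma_i^2 + lam^2) damp the noisy components
          f = s / (s ** 2 + lam ** 2)
          return Vt.T @ (f * (U.T @ b))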

  3. Algorithmic Self

    DEFF Research Database (Denmark)

    Markham, Annette

    layered set of accounts to help build our understanding of how individuals relate to their devices, search systems, and social network sites. This work extends critical analyses of the power of algorithms in implicating the social self by offering narrative accounts from multiple perspectives. It also...

  4. Autodriver algorithm

    Directory of Open Access Journals (Sweden)

    Anna Bourmistrova

    2011-02-01

    Full Text Available The autodriver algorithm is an intelligent method to eliminate the need for steering by a driver on a well-defined road. The proposed method performs best on a four-wheel steering (4WS) vehicle, though it is also applicable to two-wheel-steering (TWS) vehicles. The algorithm is based on making the actual vehicle center of rotation coincide with the road center of curvature, by adjusting the kinematic center of rotation. The road center of curvature is assumed to be prior information for a given road, while the dynamic center of rotation is the output of the dynamic equations of motion of the vehicle, using steering angle and velocity measurements as inputs. We use the kinematic condition of steering to set the steering angles in such a way that the kinematic center of rotation of the vehicle sits at a desired point. At low speeds the ideal and actual paths of the vehicle are very close. As the forward speed increases, the road and tire characteristics, along with the motion dynamics of the vehicle, cause the vehicle to turn about time-varying points. By adjusting the steering angles, our algorithm controls the dynamic turning center of the vehicle so that it coincides with the road curvature center, hence keeping the vehicle on a given road autonomously. The position and orientation errors are used as feedback signals in a closed-loop control to adjust the steering angles. The application of the presented autodriver algorithm demonstrates reliable performance under different driving conditions.

  5. Genetic algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Grefenstette, J.J.

    1994-12-31

    Genetic algorithms solve problems by using principles inspired by natural population genetics: They maintain a population of knowledge structures that represent candidate solutions, and then let that population evolve over time through competition and controlled variation. GAs are being applied to a wide range of optimization and learning problems in many domains.
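
    A minimal illustration of the ideas summarized here (a population of candidate solutions evolving through selection, crossover and mutation), written as a toy bit-string optimizer in Python; every parameter value is illustrative rather than taken from the record.

      import random

      def genetic_algorithm(fitness, n_bits=20, pop_size=30, generations=100,
                            p_cross=0.9, p_mut=0.01):
          pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
          for _ in range(generations):
              scored = sorted(pop, key=fitness, reverse=True)
              next_pop = scored[:2]                                # elitism: keep the two best
              while len(next_pop) < pop_size:
                  a, b = random.sample(scored[:pop_size // 2], 2)  # select parents from the better half
                  if random.random() < p_cross:                    # one-point crossover
                      cut = random.randrange(1, n_bits)
                      a, b = a[:cut] + b[cut:], b[:cut] + a[cut:]
                  next_pop += [[bit ^ (random.random() < p_mut)    # bit-flip mutation
                                for bit in child] for child in (a, b)]
              pop = next_pop[:pop_size]
          return max(pop, key=fitness)

      # Example: maximize the number of ones in the bit string
      best = genetic_algorithm(fitness=sum)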

  6. Optimal Fungal Space Searching Algorithms.

    Science.gov (United States)

    Asenova, Elitsa; Lin, Hsin-Yu; Fu, Eileen; Nicolau, Dan V; Nicolau, Dan V

    2016-10-01

    Previous experiments have shown that fungi use an efficient natural algorithm for searching the space available for their growth in micro-confined networks, e.g., mazes. This natural "master" algorithm, which comprises two "slave" sub-algorithms, i.e., collision-induced branching and directional memory, has been shown to be more efficient than alternatives with one, or the other, or both sub-algorithms turned off. In contrast, the present contribution compares the performance of the fungal natural algorithm against several standard artificial homologues. It was found that the space-searching fungal algorithm consistently outperforms uninformed algorithms, such as Depth-First-Search (DFS). Furthermore, while the natural algorithm is inferior to informed ones, such as A*, this under-performance does not increase significantly with the size of the maze. These findings suggest that a systematic effort of harvesting the natural space searching algorithms used by microorganisms is warranted and possibly overdue. These natural algorithms, if efficient, can be reverse-engineered for graph and tree search strategies.

  7. Engineering a cache-oblivious sorting algorithm

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting; Fagerberg, Rolf; Vinther, Kristoffer

    2007-01-01

    This paper is an algorithmic engineering study of cache-oblivious sorting. We investigate by empirical methods a number of implementation issues and parameter choices for the cache-oblivious sorting algorithm Lazy Funnelsort, and compare the final algorithm with Quicksort, the established standard...

  8. NEUTRON ALGORITHM VERIFICATION TESTING

    Energy Technology Data Exchange (ETDEWEB)

    COWGILL,M.; MOSBY,W.; ARGONNE NATIONAL LABORATORY-WEST

    2000-07-19

    Active well coincidence counter assays have been performed on uranium metal highly enriched in ²³⁵U. The data obtained in the present program, together with highly enriched uranium (HEU) metal data obtained in other programs, have been analyzed using two approaches, the standard approach and an alternative approach developed at BNL. Analysis of the data with the standard approach revealed that the form of the relationship between the measured reals and the ²³⁵U mass varied, being sometimes linear and sometimes a second-order polynomial. In contrast, application of the BNL algorithm, which takes into consideration the totals, consistently yielded linear relationships between the totals-corrected reals and the ²³⁵U mass. The constants in these linear relationships varied with geometric configuration and level of enrichment. This indicates that, when the BNL algorithm is used, calibration curves can be established with fewer data points and with more certainty than if a standard algorithm is used. However, this potential advantage has only been established for assays of HEU metal. In addition, the method is sensitive to the stability of natural background in the measurement facility.

  9. Derivation and implementation of a cone-beam reconstruction algorithm for nonplanar orbits

    International Nuclear Information System (INIS)

    Kudo, Hiroyuki; Saito, Tsuneo

    1994-01-01

    Smith and Grangeat derived a cone-beam inversion formula that can be applied when a nonplanar orbit satisfying the completeness condition is used. Although Grangeat's inversion formula is mathematically different from Smith's, they have similar overall structures to each other. The contribution of this paper is two-fold. First, based on the derivation of Smith, the authors point out that Grangeat's inversion formula and Smith's can be conveniently described using a single formula (the Smith-Grangeat inversion formula) that is in the form of space-variant filtering followed by cone-beam backprojection. Furthermore, the resulting formula is reformulated for data acquisition systems with a planar detector to obtain a new reconstruction algorithm. Second, the authors make two significant modifications to the new algorithm to reduce artifacts and numerical errors encountered in direct implementation of the new algorithm. As for exactness of the new algorithm, the following fact can be stated. The algorithm based on Grangeat's intermediate function is exact for any complete orbit, whereas that based on Smith's intermediate function should be considered as an approximate inverse excepting the special case where almost every plane in 3-D space meets the orbit. The validity of the new algorithm is demonstrated by simulation studies

  10. Opposition-Based Adaptive Fireworks Algorithm

    Directory of Open Access Journals (Sweden)

    Chibing Gong

    2016-07-01

    Full Text Available A fireworks algorithm (FWA) is a recent swarm intelligence algorithm that is inspired by observing fireworks explosions. An adaptive fireworks algorithm (AFWA) proposes additional adaptive amplitudes to improve the performance of the enhanced fireworks algorithm (EFWA). The purpose of this paper is to add opposition-based learning (OBL) to AFWA with the goal of further boosting performance and achieving global optimization. Twelve benchmark functions are tested using an opposition-based adaptive fireworks algorithm (OAFWA). The final results conclude that OAFWA significantly outperformed EFWA and AFWA in terms of solution accuracy. Additionally, OAFWA was compared with a bat algorithm (BA), differential evolution (DE), self-adapting control parameters in differential evolution (jDE), a firefly algorithm (FA), and a standard particle swarm optimization 2011 (SPSO2011) algorithm. The research results indicate that OAFWA ranks the highest of the six algorithms for both solution accuracy and runtime cost.
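
    The opposition-based learning step added here evaluates each candidate together with its "opposite" point and keeps the better half; a minimal generic sketch in Python (not tied to the fireworks explosion operators of this paper):

      import numpy as np

      def opposition_step(pop, lower, upper, objective):
          """Keep the best half of the candidates and their opposites x_opp = lower + upper - x."""
          both = np.vstack([pop, lower + upper - pop])
          scores = np.apply_along_axis(objective, 1, both)
          return both[np.argsort(scores)[:len(pop)]]    # assumes minimization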

  11. Parallel algorithms

    CERN Document Server

    Casanova, Henri; Robert, Yves

    2008-01-01

    ""…The authors of the present book, who have extensive credentials in both research and instruction in the area of parallelism, present a sound, principled treatment of parallel algorithms. … This book is very well written and extremely well designed from an instructional point of view. … The authors have created an instructive and fascinating text. The book will serve researchers as well as instructors who need a solid, readable text for a course on parallelism in computing. Indeed, for anyone who wants an understandable text from which to acquire a current, rigorous, and broad vi

  12. Development of computed tomography system and image reconstruction algorithm

    International Nuclear Information System (INIS)

    Khairiah Yazid; Mohd Ashhar Khalid; Azaman Ahmad; Khairul Anuar Mohd Salleh; Ab Razak Hamzah

    2006-01-01

    Computed tomography is one of the most advanced and powerful nondestructive inspection techniques and is currently used in many different industries. In several CT systems, detection has been performed by a combination of an X-ray image intensifier and a charge-coupled device (CCD) camera, or by using a line array detector. The recent development of the X-ray flat panel detector has made fast CT imaging feasible and practical. This paper therefore explains the arrangement of a new detection system, which uses the existing high-resolution (127 μm pixel size) flat panel detector at MINT, and the image reconstruction technique developed. The aim of the project is to develop a prototype flat-panel-detector-based CT imaging system for NDE. The prototype consists of an X-ray tube, a flat panel detector system, a rotation table and a computer system to control the sample motion and image acquisition. Hence this project is divided into two major tasks: firstly, to develop the image reconstruction algorithm and, secondly, to integrate the X-ray imaging components into one CT system. The image reconstruction algorithm, using the filtered back-projection method, is developed and compared to other techniques. MATLAB is the tool used for the simulations and computations in this project. (Author)
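
    For orientation, a minimal parallel-beam filtered back-projection sketch in Python (ramp filtering in the frequency domain followed by nearest-neighbour backprojection); it illustrates the general method named above and is not the MATLAB code developed in this project.

      import numpy as np

      def fbp(sinogram, angles_deg):
          """sinogram: (n_angles, n_det) parallel-beam projections; returns a square image."""
          n_det = sinogram.shape[1]
          ramp = np.abs(np.fft.fftfreq(n_det))                       # ideal ramp filter
          filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp, axis=1))

          size, c = n_det, n_det // 2
          ys, xs = np.mgrid[-c:size - c, -c:size - c]
          recon = np.zeros((size, size))
          for proj, theta in zip(filtered, np.deg2rad(angles_deg)):
              t = xs * np.cos(theta) + ys * np.sin(theta) + c        # detector coordinate of each pixel
              idx = np.clip(np.rint(t).astype(int), 0, n_det - 1)
              recon += proj[idx]                                     # nearest-neighbour backprojection
          return recon * np.pi / (2 * len(angles_deg))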

  13. Comparison study of reconstruction algorithms for prototype digital breast tomosynthesis using various breast phantoms.

    Science.gov (United States)

    Kim, Ye-seul; Park, Hye-suk; Lee, Haeng-Hwa; Choi, Young-Wook; Choi, Jae-Gu; Kim, Hak Hee; Kim, Hee-Joung

    2016-02-01

    Digital breast tomosynthesis (DBT) is a recently developed system for three-dimensional imaging that offers the potential to reduce the false positives of mammography by preventing tissue overlap. Many qualitative evaluations of digital breast tomosynthesis were previously performed by using a phantom with an unrealistic model and with heterogeneous background and noise, which is not representative of real breasts. The purpose of the present work was to compare reconstruction algorithms for DBT by using various breast phantoms; validation was also performed by using patient images. DBT was performed by using a prototype unit that was optimized for very low exposures and rapid readout. Three algorithms were compared: a back-projection (BP) algorithm, a filtered BP (FBP) algorithm, and an iterative expectation maximization (EM) algorithm. To compare the algorithms, three types of breast phantoms (homogeneous background phantom, heterogeneous background phantom, and anthropomorphic breast phantom) were evaluated, and clinical images were also reconstructed by using the different reconstruction algorithms. The in-plane image quality was evaluated based on the line profile and the contrast-to-noise ratio (CNR), and out-of-plane artifacts were evaluated by means of the artifact spread function (ASF). Parenchymal texture features of contrast and homogeneity were computed based on reconstructed images of an anthropomorphic breast phantom. The clinical images were studied to validate the effect of reconstruction algorithms. The results showed that the CNRs of masses reconstructed by using the EM algorithm were slightly higher than those obtained by using the BP algorithm, whereas the FBP algorithm yielded much lower CNR due to its high fluctuations of background noise. The FBP algorithm provides the best conspicuity for larger calcifications by enhancing their contrast and sharpness more than the other algorithms; however, in the case of small-size and low
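
    The in-plane figure of merit used in this comparison, the contrast-to-noise ratio, is commonly computed from a lesion region of interest and a background region as in the short Python sketch below (a generic formulation, not necessarily the authors' exact definition):

      import numpy as np

      def cnr(image, signal_mask, background_mask):
          """Contrast-to-noise ratio between a signal ROI and a background ROI (boolean masks)."""
          sig, bkg = image[signal_mask], image[background_mask]
          return abs(sig.mean() - bkg.mean()) / bkg.std()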

  14. Algorithms on ensemble quantum computers.

    Science.gov (United States)

    Boykin, P Oscar; Mor, Tal; Roychowdhury, Vwani; Vatan, Farrokh

    2010-06-01

    In ensemble (or bulk) quantum computation, all computations are performed on an ensemble of computers rather than on a single computer. Measurements of qubits in an individual computer cannot be performed; instead, only expectation values (over the complete ensemble of computers) can be measured. As a result of this limitation on the model of computation, many algorithms cannot be processed directly on such computers, and must be modified, as the common strategy of delaying the measurements usually does not resolve this ensemble-measurement problem. Here we present several new strategies for resolving this problem. Based on these strategies we provide new versions of some of the most important quantum algorithms, versions that are suitable for implementing on ensemble quantum computers, e.g., on liquid NMR quantum computers. These algorithms are Shor's factorization algorithm, Grover's search algorithm (with several marked items), and an algorithm for quantum fault-tolerant computation. The first two algorithms are simply modified using randomizing and sorting strategies. For the last algorithm, we develop a classical-quantum hybrid strategy for removing measurements. We use it to present a novel quantum fault-tolerant scheme. More explicitly, we present schemes for fault-tolerant measurement-free implementation of Toffoli and σ_z^(1/4) as these operations cannot be implemented "bitwise", and their standard fault-tolerant implementations require measurement.

  15. Analytic reconstruction algorithms for triple-source CT with horizontal data truncation.

    Science.gov (United States)

    Chen, Ming; Yu, Hengyong

    2015-10-01

    This paper explores a triple-source imaging method with horizontal data truncation to enlarge the field of view (FOV) for big objects. The study is conducted by using theoretical analysis, mathematical deduction, and numerical simulations. The proposed algorithms are implemented in C++ and MATLAB. While the basic platform is constructed in MATLAB, the computationally intensive segments are coded in C++, which are linked via a MEX interface. A triple-source circular scanning configuration with horizontal data truncation is developed, where three pairs of x-ray sources and detectors are unevenly distributed on the same circle to cover the whole imaging object. For this triple-source configuration, a fan-beam filtered backprojection-type algorithm is derived for truncated full-scan projections without data rebinning. The algorithm is also extended for horizontally truncated half-scan projections and cone-beam projections in a Feldkamp-type framework. Using their method, the FOV is enlarged twofold to threefold to scan bigger objects with high speed and quality. The numerical simulation results confirm the correctness and effectiveness of the developed algorithms. The triple-source scanning configuration with horizontal data truncation can not only keep most of the advantages of a traditional multisource system but also cover a larger FOV for big imaging objects. In addition, because the filtering is shift-invariant, the proposed algorithms are very fast and easily parallelized on graphic processing units.

  16. Spectral CT metal artifact reduction with an optimization-based reconstruction algorithm

    Science.gov (United States)

    Gilat Schmidt, Taly; Barber, Rina F.; Sidky, Emil Y.

    2017-03-01

    Metal objects cause artifacts in computed tomography (CT) images. This work investigated the feasibility of a spectral CT method to reduce metal artifacts. Spectral CT acquisition combined with optimization-based reconstruction is proposed to reduce artifacts by modeling the physical effects that cause metal artifacts and by providing the flexibility to selectively remove corrupted spectral measurements in the spectral-sinogram space. The proposed Constrained `One-Step' Spectral CT Image Reconstruction (cOSSCIR) algorithm directly estimates the basis material maps while enforcing convex constraints. The incorporation of constraints on the reconstructed basis material maps is expected to mitigate undersampling effects that occur when corrupted data is excluded from reconstruction. The feasibility of the cOSSCIR algorithm to reduce metal artifacts was investigated through simulations of a pelvis phantom. The cOSSCIR algorithm was investigated with and without the use of a third basis material representing metal. The effects of excluding data corrupted by metal were also investigated. The results demonstrated that the proposed cOSSCIR algorithm reduced metal artifacts and improved CT number accuracy. For example, CT number error in a bright shading artifact region was reduced from 403 HU in the reference filtered backprojection reconstruction to 33 HU using the proposed algorithm in simulation. In the dark shading regions, the error was reduced from 1141 HU to 25 HU. Of the investigated approaches, decomposing the data into three basis material maps and excluding the corrupted data demonstrated the greatest reduction in metal artifacts.

  17. A Study on GPU-based Iterative ML-EM Reconstruction Algorithm for Emission Computed Tomographic Imaging Systems

    International Nuclear Information System (INIS)

    Ha, Woo Seok; Kim, Soo Mee; Park, Min Jae; Lee, Dong Soo; Lee, Jae Sung

    2009-01-01

    The maximum likelihood-expectation maximization (ML-EM) is a statistical reconstruction algorithm derived from a probabilistic model of the emission and detection processes. Although the ML-EM has many advantages in accuracy and utility, its use is limited by the computational burden of iterative processing on a CPU (central processing unit). In this study, we developed a parallel computing technique on a GPU (graphics processing unit) for the ML-EM algorithm. Using a GeForce 9800 GTX+ graphics card and NVIDIA's CUDA (compute unified device architecture) technology, the projection and backprojection in the ML-EM algorithm were parallelized. The time delays of the computations for projection, for the errors between measured and estimated data, and for backprojection in one iteration were measured. The total time included the latency of data transmission between RAM and GPU memory. The total computation times of the CPU- and GPU-based ML-EM with 32 iterations were 3.83 and 0.26 sec, respectively. In this case, the computing speed was improved about 15 times on the GPU. When the number of iterations increased to 1024, the CPU- and GPU-based computing took 18 min and 8 sec in total, respectively. The improvement was about 135 times and was caused by additional delays in the CPU-based computation after a certain number of iterations. On the other hand, the GPU-based computation showed very little variation in the time per iteration, owing to the use of shared memory. The GPU-based parallel computation for ML-EM significantly improved the computing speed and stability. The developed GPU-based ML-EM algorithm could be easily modified for some other imaging geometries

  18. C-arm cone beam CT perfusion imaging using the SMART-RECON algorithm to improve temporal sampling density and temporal resolution

    Science.gov (United States)

    Li, Yinsheng; Niu, Kai; Li, Ke; Schafer, Sebastian; Royalty, Kevin; Strother, Charles; Chen, Guang-Hong

    2016-03-01

    In this work, a newly developed reconstruction algorithm, Synchronized MultiArtifact Reduction with Tomographic RECONstruction (SMART-RECON), was applied to C-arm cone beam CT perfusion (CBCTP) imaging. This algorithm contains a special rank regularizer, designed to reduce limited-view artifacts associated with super-short scan reconstructions. As a result, high temporal sampling and temporal resolution image reconstructions were achieved using an interventional C-arm x-ray system. The algorithm was evaluated in terms of the fidelity of the dynamic contrast update curves and the accuracy of perfusion parameters through numerical simulation studies. Results show that, not only were the dynamic curves accurately recovered (relative root mean square error ∈ [3%, 5%] compared with [13%, 22%] for FBP), but also the noise in the final perfusion maps was dramatically reduced. Compared with filtered backprojection, SMART-RECON generated CBCTP maps with much improved capability in differentiating lesions with perfusion deficits from the surrounding healthy brain tissues.

  19. New Algorithm For Calculating Wavelet Transforms

    Directory of Open Access Journals (Sweden)

    Piotr Lipinski

    2009-04-01

    Full Text Available In this article we introduce a new algorithm for computing Discrete Wavelet Transforms (DWT). The algorithm aims at reducing the number of multiplications required to compute a DWT. The algorithm is general and can be used to compute a variety of wavelet transforms (Daubechies and CDF). Here we focus on CDF 9/7 filters, which are used in the JPEG2000 compression standard. We show that the algorithm outperforms convolution-based and lifting-based algorithms in terms of the number of multiplications.

  20. A fast rebinning algorithm for 3D positron emission tomography using John's equation

    Science.gov (United States)

    Defrise, Michel; Liu, Xuan

    1999-08-01

    Volume imaging in positron emission tomography (PET) requires the inversion of the three-dimensional (3D) x-ray transform. The usual solution to this problem is based on 3D filtered-backprojection (FBP), but is slow. Alternative methods have been proposed which factor the 3D data into independent 2D data sets corresponding to the 2D Radon transforms of a stack of parallel slices. Each slice is then reconstructed using 2D FBP. These so-called rebinning methods are numerically efficient but are approximate. In this paper a new exact rebinning method is derived by exploiting the fact that the 3D x-ray transform of a function is the solution to the second-order partial differential equation first studied by John. The method is proposed for two sampling schemes, one corresponding to a pair of infinite plane detectors and another one corresponding to a cylindrical multi-ring PET scanner. The new FORE-J algorithm has been implemented for this latter geometry and was compared with the approximate Fourier rebinning algorithm FORE and with another exact rebinning algorithm, FOREX. Results with simulated data demonstrate a significant improvement in accuracy compared to FORE, while the reconstruction time is doubled. Compared to FOREX, the FORE-J algorithm is slightly less accurate but more than three times faster.

  1. Testing algorithms for critical slowing down

    Directory of Open Access Journals (Sweden)

    Cossu Guido

    2018-01-01

    Full Text Available We present preliminary tests of two modifications of the Hybrid Monte Carlo (HMC) algorithm. Both algorithms are designed to travel much farther in the Hamiltonian phase space for each trajectory and reduce the autocorrelations among physical observables, thus tackling the critical slowing down towards the continuum limit. We present a comparison of the costs of the new algorithms with the standard HMC evolution for pure gauge fields, studying the autocorrelation times for various quantities including the topological charge.
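
    The baseline both modifications start from is the standard HMC update: a leapfrog trajectory followed by a Metropolis accept/reject step. The Python sketch below is a generic version for an arbitrary differentiable log-density, not the pure-gauge lattice code used in these tests.

      import numpy as np

      def hmc_step(x, log_prob, grad_log_prob, eps=0.1, n_steps=20, rng=np.random.default_rng()):
          """One Hybrid Monte Carlo update for a target density proportional to exp(log_prob(x))."""
          p = rng.standard_normal(x.shape)              # refresh Gaussian momenta
          x_new, p_new = x.copy(), p.copy()
          p_new += 0.5 * eps * grad_log_prob(x_new)     # initial half step for the momenta
          for _ in range(n_steps):
              x_new += eps * p_new                      # full step for the positions
              p_new += eps * grad_log_prob(x_new)       # full step for the momenta
          p_new -= 0.5 * eps * grad_log_prob(x_new)     # take back half of the last momentum step
          h_old = -log_prob(x) + 0.5 * p @ p            # Hamiltonian before the trajectory
          h_new = -log_prob(x_new) + 0.5 * p_new @ p_new
          return x_new if rng.random() < np.exp(h_old - h_new) else x   # Metropolis test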

  2. FIREWORKS ALGORITHM FOR UNCONSTRAINED FUNCTION OPTIMIZATION PROBLEMS

    Directory of Open Access Journals (Sweden)

    Evans BAIDOO

    2017-03-01

    Full Text Available Modern real-world science and engineering problems can be classified as multi-objective optimisation problems, which demand expedient and efficient stochastic algorithms to respond to the optimization needs. This paper presents an object-oriented software application that implements a fireworks optimization algorithm for function optimization problems. The algorithm, a kind of parallel diffuse optimization algorithm, is based on the explosive phenomenon of fireworks. The algorithm produced promising results when compared to other population-based and iterative meta-heuristic algorithms in experiments on five standard benchmark problems. The software application was implemented in Java with an interactive interface which allows for easy modification and extended experimentation. Additionally, this paper validates the effect of runtime on the algorithm's performance.

  3. SU-E-I-33: Initial Evaluation of Model-Based Iterative CT Reconstruction Using Standard Image Quality Phantoms

    International Nuclear Information System (INIS)

    Gingold, E; Dave, J

    2014-01-01

    Purpose: The purpose of this study was to compare a new model-based iterative reconstruction with existing reconstruction methods (filtered backprojection and basic iterative reconstruction) using quantitative analysis of standard image quality phantom images. Methods: An ACR accreditation phantom (Gammex 464) and a CATPHAN600 phantom were scanned using 3 routine clinical acquisition protocols (adult axial brain, adult abdomen, and pediatric abdomen) on a Philips iCT system. Each scan was acquired using default conditions and 75%, 50% and 25% dose levels. Images were reconstructed using standard filtered backprojection (FBP), conventional iterative reconstruction (iDose4) and a prototype model-based iterative reconstruction (IMR). Phantom measurements included CT number accuracy, contrast to noise ratio (CNR), modulation transfer function (MTF), low contrast detectability (LCD), and noise power spectrum (NPS). Results: The choice of reconstruction method had no effect on CT number accuracy, or MTF (p<0.01). The CNR of a 6 HU contrast target was improved by 1–67% with iDose4 relative to FBP, while IMR improved CNR by 145–367% across all protocols and dose levels. Within each scan protocol, the CNR improvement from IMR vs FBP showed a general trend of greater improvement at lower dose levels. NPS magnitude was greatest for FBP and lowest for IMR. The NPS of the IMR reconstruction showed a pronounced decrease with increasing spatial frequency, consistent with the unusual noise texture seen in IMR images. Conclusion: Iterative Model Reconstruction reduces noise and improves contrast-to-noise ratio without sacrificing spatial resolution in CT phantom images. This offers the possibility of radiation dose reduction and improved low contrast detectability compared with filtered backprojection or conventional iterative reconstruction

  4. A three-dimensional reconstruction algorithm for an inverse-geometry volumetric CT system

    International Nuclear Information System (INIS)

    Schmidt, Taly Gilat; Fahrig, Rebecca; Pelc, Norbert J.

    2005-01-01

    An inverse-geometry volumetric computed tomography (IGCT) system has been proposed capable of rapidly acquiring sufficient data to reconstruct a thick volume in one circular scan. The system uses a large-area scanned source opposite a smaller detector. The source and detector have the same extent in the axial, or slice, direction, thus providing sufficient volumetric sampling and avoiding cone-beam artifacts. This paper describes a reconstruction algorithm for the IGCT system. The algorithm first rebins the acquired data into two-dimensional (2D) parallel-ray projections at multiple tilt and azimuthal angles, followed by a 3D filtered backprojection. The rebinning step is performed by gridding the data onto a Cartesian grid in a 4D projection space. We present a new method for correcting the gridding error caused by the finite and asymmetric sampling in the neighborhood of each output grid point in the projection space. The reconstruction algorithm was implemented and tested on simulated IGCT data. Results show that the gridding correction reduces the gridding errors to below one Hounsfield unit. With this correction, the reconstruction algorithm does not introduce significant artifacts or blurring when compared to images reconstructed from simulated 2D parallel-ray projections. We also present an investigation of the noise behavior of the method which verifies that the proposed reconstruction algorithm utilizes cross-plane rays as efficiently as in-plane rays and can provide noise comparable to an in-plane parallel-ray geometry for the same number of photons. Simulations of a resolution test pattern and the modulation transfer function demonstrate that the IGCT system, using the proposed algorithm, is capable of 0.4 mm isotropic resolution. The successful implementation of the reconstruction algorithm is an important step in establishing feasibility of the IGCT system

  5. AES ALGORITHM IMPLEMENTATION IN PROGRAMMING LANGUAGES

    Directory of Open Access Journals (Sweden)

    Luminiţa DEFTA

    2010-12-01

    Full Text Available Information encryption is the use of an algorithm to convert a readable message into an encrypted one. It is used to protect data against unauthorized access. Protected data can be stored on a media device or transmitted over a network. In this paper we describe a concrete implementation of the AES algorithm in the Java programming language (available from the Java Development Kit 6 libraries) and in C (using the OpenSSL library). AES (Advanced Encryption Standard) is a symmetric-key encryption algorithm formally adopted by the U.S. government and was selected after a long standardization process.

  6. The Copenhagen Triage Algorithm

    DEFF Research Database (Denmark)

    Hasselbalch, Rasmus Bo; Plesner, Louis Lind; Pries-Heje, Mia

    2016-01-01

    BACKGROUND: Crowding in the emergency department (ED) is a well-known problem resulting in an increased risk of adverse outcomes. Effective triage might counteract this problem by identifying the sickest patients and ensuring early treatment. In the last two decades, systematic triage has become the standard in EDs worldwide. However, triage models are also time-consuming, supported by limited evidence and could potentially be of more harm than benefit. The aim of this study is to develop a quicker triage model using data from a large cohort of unselected ED patients and evaluate if this new model is non-inferior to an existing triage model in a prospective randomized trial. METHODS: The Copenhagen Triage Algorithm (CTA) study is a prospective two-center, cluster-randomized, cross-over, non-inferiority trial comparing CTA to the Danish Emergency Process Triage (DEPT). We include patients ≥16 years...

  7. Some nonlinear space decomposition algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Tai, Xue-Cheng; Espedal, M. [Univ. of Bergen (Norway)

    1996-12-31

    Convergence of a space decomposition method is proved for a general convex programming problem. The space decomposition refers to methods that decompose a space into sums of subspaces, which could be a domain decomposition or a multigrid method for partial differential equations. Two algorithms are proposed. Both can be used for linear as well as nonlinear elliptic problems and they reduce to the standard additive and multiplicative Schwarz methods for linear elliptic problems. Two "hybrid" algorithms are also presented. They converge faster than the additive one and have better parallelism than the multiplicative method. Numerical tests with a two-level domain decomposition for linear, nonlinear and interface elliptic problems are presented for the proposed algorithms.

  8. The algorithm design manual

    CERN Document Server

    Skiena, Steven S

    2008-01-01

    Explaining how to design algorithms and analyze their efficacy and efficiency, this book covers combinatorial algorithms technology, stressing design over analysis. It presents instruction on methods for designing and analyzing computer algorithms. It contains a catalog of algorithmic resources, implementations and a bibliography

  9. The Algorithmic Imaginary

    DEFF Research Database (Denmark)

    Bucher, Taina

    2017-01-01

    of algorithms affect people's use of these platforms, if at all? To help answer these questions, this article examines people's personal stories about the Facebook algorithm through tweets and interviews with 25 ordinary users. To understand the spaces where people and algorithms meet, this article develops....... Examining how algorithms make people feel, then, seems crucial if we want to understand their social power....

  10. Algorithmically specialized parallel computers

    CERN Document Server

    Snyder, Lawrence; Gannon, Dennis B

    1985-01-01

    Algorithmically Specialized Parallel Computers focuses on the concept and characteristics of an algorithmically specialized computer.This book discusses the algorithmically specialized computers, algorithmic specialization using VLSI, and innovative architectures. The architectures and algorithms for digital signal, speech, and image processing and specialized architectures for numerical computations are also elaborated. Other topics include the model for analyzing generalized inter-processor, pipelined architecture for search tree maintenance, and specialized computer organization for raster

  11. A fast DFT algorithm using complex integer transforms

    Science.gov (United States)

    Reed, I. S.; Truong, T. K.

    1978-01-01

    Winograd's algorithm for computing the discrete Fourier transform is extended considerably for certain large transform lengths. This is accomplished by performing the cyclic convolution, required by Winograd's method, by a fast transform over certain complex integer fields. This algorithm requires fewer multiplications than either the standard fast Fourier transform or Winograd's more conventional algorithms.

  12. Fully Consistent SIMPLE-Like Algorithms on Collocated Grids

    DEFF Research Database (Denmark)

    Kolmogorov, Dmitry; Shen, Wen Zhong; Sørensen, Niels N.

    2015-01-01

    To increase the convergence rate of SIMPLE-like algorithms on collocated grids, a compatibility condition between mass flux interpolation methods and SIMPLE-like algorithms is presented. Results of unsteady flow computations show that the SIMPLEC algorithm, when obeying the compatibility condition, may obtain up to 35% higher convergence rate as compared to the standard SIMPLEC algorithm. Two new interpolation methods, fully compatible with the SIMPLEC algorithm, are presented and compared with some existing interpolation methods, including the standard methods of Choi [9] and Shen et al. [8...

  13. Improved Bat Algorithm Applied to Multilevel Image Thresholding

    Directory of Open Access Journals (Sweden)

    Adis Alihodzic

    2014-01-01

    Full Text Available Multilevel image thresholding is a very important image processing technique that is used as a basis for image segmentation and further higher-level processing. However, the required computational time for exhaustive search grows exponentially with the number of desired thresholds. Swarm intelligence metaheuristics are well known as successful and efficient optimization methods for intractable problems. In this paper, we adjusted one of the latest swarm intelligence algorithms, the bat algorithm, for the multilevel image thresholding problem. The results of testing on standard benchmark images show that the bat algorithm is comparable with other state-of-the-art algorithms. We also improved the standard bat algorithm, with modifications that add elements from differential evolution and from the artificial bee colony algorithm. The proposed improved bat algorithm proved to be better than five other state-of-the-art algorithms, improving the quality of the results in all cases and significantly improving the convergence speed.

  14. Fast volume reconstruction in positron emission tomography: Implementation of four algorithms on a high-performance scalable parallel platform

    International Nuclear Information System (INIS)

    Egger, M.L.; Scheurer, A.H.; Joseph, C.

    1996-01-01

    The issue of long reconstruction times in PET has been addressed from several points of view, resulting in an affordable dedicated system capable of handling routine 3D reconstruction in a few minutes per frame: on the hardware side using fast processors and a parallel architecture, and on the software side, using efficient implementations of computationally less intensive algorithms. Execution times obtained for the PRT-1 data set on a parallel system of five hybrid nodes, each combining an Alpha processor for computation and a transputer for communication, are the following (256 sinograms of 96 views by 128 radial samples): Ramp algorithm 56 s, Favor 81 s and reprojection algorithm of Kinahan and Rogers 187 s. The implementation of fast rebinning algorithms has shown our hardware platform to become communications-limited; they execute faster on a conventional single-processor Alpha workstation: single-slice rebinning 7 s, Fourier rebinning 22 s, 2D filtered backprojection 5 s. The scalability of the system has been demonstrated, and a saturation effect at network sizes above ten nodes has become visible; new T9000-based products lifting most of the constraints on network topology and link throughput are expected to result in improved parallel efficiency and scalability properties

  15. Semiconvergence and Relaxation Parameters for Projected SIRT Algorithms

    DEFF Research Database (Denmark)

    Elfving, Tommy; Hansen, Per Christian; Nikazad, Touraj

    2012-01-01

    We give a detailed study of the semiconvergence behavior of projected nonstationary simultaneous iterative reconstruction technique (SIRT) algorithms, including the projected Landweber algorithm. We also consider the use of a relaxation parameter strategy, proposed recently for the standard algorithms, for controlling the semiconvergence of the projected algorithms. We demonstrate the semiconvergence and the performance of our strategies by examples taken from tomographic imaging. © 2012 Society for Industrial and Applied Mathematics.
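
    For reference, a generic projected SIRT iteration of the kind studied here, with a relaxation parameter and a projection onto the nonnegative orthant; the SART-type diagonal weighting below assumes a nonnegative system matrix and is a textbook choice, not the authors' specific relaxation strategy.

      import numpy as np

      def projected_sirt(A, b, n_iter=100, relax=1.0):
          """SIRT-type iteration x <- P(x + relax * C A^T R (b - A x)) with P = max(., 0)."""
          R = 1.0 / np.maximum(A.sum(axis=1), 1e-12)    # inverse row sums
          C = 1.0 / np.maximum(A.sum(axis=0), 1e-12)    # inverse column sums
          x = np.zeros(A.shape[1])
          for _ in range(n_iter):
              x = x + relax * C * (A.T @ (R * (b - A @ x)))
              x = np.maximum(x, 0.0)                    # projection onto x >= 0
          return x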

  16. Evaluating the sensitivity of the optimization of acquisition geometry to the choice of reconstruction algorithm in digital breast tomosynthesis through a simulation study

    International Nuclear Information System (INIS)

    Zeng, Rongping; Park, Subok; Myers, Kyle J; Bakic, Predrag

    2015-01-01

    Due to the limited number of views and limited angular span in digital breast tomosynthesis (DBT), the acquisition geometry design is an important factor that affects the image quality. Therefore, intensive studies have been conducted regarding the optimization of the acquisition geometry. However, different reconstruction algorithms were used in most of the reported studies. Because each type of reconstruction algorithm can provide images with its own image resolution, noise properties and artifact appearance, it is unclear whether the optimal geometries concluded for the DBT system in one study can be generalized to the DBT systems with a reconstruction algorithm different to the one applied in that study. Hence, we investigated the effect of the reconstruction algorithm on the optimization of acquisition geometry parameters through carefully designed simulation studies. Our results show that using various reconstruction algorithms, including the filtered back-projection, the simultaneous algebraic reconstruction technique, the maximum-likelihood method and the total-variation regularized least-square method, gave similar performance trends for the acquisition parameters for detecting lesions. The consistency of system ranking indicates that the choice of the reconstruction algorithm may not be critical for DBT system geometry optimization. (paper)

  17. A unified analysis of FBP-based algorithms in helical cone-beam and circular cone- and fan-beam scans

    International Nuclear Information System (INIS)

    Pan Xiaochuan; Xia Dan; Zou Yu; Yu Lifeng

    2004-01-01

    A circular scanning trajectory is and will likely remain a popular choice of trajectory in computed tomography (CT) imaging because it is easy to implement and control. Filtered-backprojection (FBP)-based algorithms have been developed previously for approximate and exact reconstruction of the entire image or a region of interest within the image in circular cone-beam and fan-beam cases. Recently, we have developed a 3D FBP-based algorithm for image reconstruction on PI-line segments in a helical cone-beam scan. In this work, we demonstrated that the 3D FBP-based algorithm indeed provided a rather general formulation for image reconstruction from divergent projections (such as cone-beam and fan-beam projections). On the basis of this formulation we derived new approximate or exact algorithms for image reconstruction in circular cone-beam or fan-beam scans, which can be interpreted as special cases of the helical scan. Existing algorithms corresponding to the derived algorithms were identified. We also performed a preliminary numerical study to verify our theoretical results in each of the cases. The results in the work can readily be generalized to other non-circular trajectories

  18. Approximate iterative algorithms

    CERN Document Server

    Almudevar, Anthony Louis

    2014-01-01

    Iterative algorithms often rely on approximate evaluation techniques, which may include statistical estimation, computer simulation or functional approximation. This volume presents methods for the study of approximate iterative algorithms, providing tools for the derivation of error bounds and convergence rates, and for the optimal design of such algorithms. Techniques of functional analysis are used to derive analytical relationships between approximation methods and convergence properties for general classes of algorithms. This work provides the necessary background in functional analysis a

  19. Autonomous Star Tracker Algorithms

    DEFF Research Database (Denmark)

    Betto, Maurizio; Jørgensen, John Leif; Kilsgaard, Søren

    1998-01-01

    Proposal, in response to an ESA R.f.P., to design algorithms for autonomous star tracker operations. The proposal also included the development of a star tracker breadboard to test the algorithms' performance.

  20. A verified LLL algorithm

    NARCIS (Netherlands)

    Divasón, Jose; Joosten, Sebastiaan; Thiemann, René; Yamada, Akihisa

    2018-01-01

    The Lenstra-Lenstra-Lovász basis reduction algorithm, also known as LLL algorithm, is an algorithm to find a basis with short, nearly orthogonal vectors of an integer lattice. Thereby, it can also be seen as an approximation to solve the shortest vector problem (SVP), which is an NP-hard problem,

  1. A BPF-FBP tandem algorithm for image reconstruction in reverse helical cone-beam CT

    International Nuclear Information System (INIS)

    Cho, Seungryong; Xia, Dan; Pellizzari, Charles A.; Pan Xiaochuan

    2010-01-01

    Purpose: Reverse helical cone-beam computed tomography (CBCT) is a scanning configuration for potential applications in image-guided radiation therapy in which an accurate anatomic image of the patient is needed for image-guidance procedures. The authors previously developed an algorithm for image reconstruction from nontruncated data of an object that is completely within the reverse helix. The purpose of this work is to develop an image reconstruction approach for reverse helical CBCT of a long object that extends out of the reverse helix and therefore constitutes data truncation. Methods: The proposed approach comprises two reconstruction steps. In the first step, a chord-based backprojection-filtration (BPF) algorithm reconstructs a volumetric image of an object from the original cone-beam data. Because there exists a chordless region in the middle of the reverse helix, the image obtained in the first step contains an unreconstructed central-gap region. In the second step, the gap region is reconstructed by use of a Pack-Noo-formula-based filtered-backprojection (FBP) algorithm from the modified cone-beam data obtained by subtracting from the original cone-beam data the reprojection of the image reconstructed in the first step. Results: The authors have performed numerical studies to validate the proposed approach in image reconstruction from reverse helical cone-beam data. The results confirm that the proposed approach can reconstruct accurate images of a long object without suffering from data-truncation artifacts or cone-angle artifacts. Conclusions: They developed and validated a BPF-FBP tandem algorithm to reconstruct images of a long object from reverse helical cone-beam data. The chord-based BPF algorithm was utilized for converting the long-object problem into a short-object problem. The proposed approach is applicable to other scanning configurations such as reduced circular sinusoidal trajectories.

  2. Particle swarm genetic algorithm and its application

    International Nuclear Information System (INIS)

    Liu Chengxiang; Yan Changxiang; Wang Jianjun; Liu Zhenhai

    2012-01-01

    To address the slow convergence and the tendency of standard particle swarm optimization to fall into local optima when dealing with nonlinear constrained optimization problems, a particle swarm genetic algorithm is designed. The proposed algorithm adopts the feasibility principle to handle constraints, avoiding the difficulty of selecting a penalty factor in penalty-function methods; it generates a random initial feasible population, which accelerates convergence of the particle swarm; and it introduces the crossover and mutation strategies of genetic algorithms to keep the swarm from falling into local optima. Optimization of typical test functions shows that the particle swarm genetic algorithm has better optimization performance. The algorithm is applied to nuclear power plant optimization, and the optimization results are significant. (authors)
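
    The baseline that this hybrid modifies is the standard particle swarm update (inertia plus cognitive and social pulls); the Python sketch below is that generic unconstrained baseline with illustrative parameter values, not the constrained nuclear-plant application of the paper.

      import numpy as np

      def pso(objective, lower, upper, n_particles=30, n_iter=200,
              w=0.7, c1=1.5, c2=1.5, rng=np.random.default_rng()):
          """Standard particle swarm optimization; minimizes objective on the box [lower, upper]."""
          dim = lower.size                                             # lower, upper: numpy arrays
          x = lower + (upper - lower) * rng.random((n_particles, dim))
          v = np.zeros_like(x)
          pbest, pbest_val = x.copy(), np.apply_along_axis(objective, 1, x)
          gbest = pbest[pbest_val.argmin()].copy()
          for _ in range(n_iter):
              r1, r2 = rng.random((2, n_particles, dim))
              v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)   # velocity update
              x = np.clip(x + v, lower, upper)                            # position update
              val = np.apply_along_axis(objective, 1, x)
              better = val < pbest_val
              pbest[better], pbest_val[better] = x[better], val[better]
              gbest = pbest[pbest_val.argmin()].copy()
          return gbest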

  3. Nature-inspired optimization algorithms

    CERN Document Server

    Yang, Xin-She

    2014-01-01

    Nature-Inspired Optimization Algorithms provides a systematic introduction to all major nature-inspired algorithms for optimization. The book's unified approach, balancing algorithm introduction, theoretical background and practical implementation, complements extensive literature with well-chosen case studies to illustrate how these algorithms work. Topics include particle swarm optimization, ant and bee algorithms, simulated annealing, cuckoo search, firefly algorithm, bat algorithm, flower algorithm, harmony search, algorithm analysis, constraint handling, hybrid methods, parameter tuning

  4. Image Denoising Algorithm Combined with SGK Dictionary Learning and Principal Component Analysis Noise Estimation

    Directory of Open Access Journals (Sweden)

    Wenjing Zhao

    2018-01-01

    Full Text Available The SGK (sequential generalization of K-means) dictionary learning denoising algorithm has the characteristics of fast denoising speed and excellent denoising performance. However, the noise standard deviation must be known in advance when using the SGK algorithm to process an image. This paper presents a denoising algorithm that combines SGK dictionary learning with principal component analysis (PCA) noise estimation. First, the noise standard deviation of the image is estimated by using the PCA noise estimation algorithm; it is then used by the SGK dictionary learning algorithm. Experimental results show the following: (1) the SGK algorithm has the best denoising performance compared with the other three dictionary learning algorithms; (2) the SGK algorithm combined with PCA is superior to the SGK algorithm combined with other noise estimation algorithms; (3) compared with the original SGK algorithm, the proposed algorithm has higher PSNR and better denoising performance.

  5. Parallel sorting algorithms

    CERN Document Server

    Akl, Selim G

    1985-01-01

    Parallel Sorting Algorithms explains how to use parallel algorithms to sort a sequence of items on a variety of parallel computers. The book reviews the sorting problem, the parallel models of computation, parallel algorithms, and the lower bounds on the parallel sorting problems. The text also presents twenty different algorithms, such as linear arrays, mesh-connected computers, cube-connected computers. Another example where algorithm can be applied is on the shared-memory SIMD (single instruction stream multiple data stream) computers in which the whole sequence to be sorted can fit in the

  6. VISUALIZATION OF PAGERANK ALGORITHM

    OpenAIRE

    Perhaj, Ervin

    2013-01-01

    The goal of the thesis is to develop a web application that helps users understand the functioning of the PageRank algorithm. The thesis consists of two parts. First we develop an algorithm to calculate PageRank values of web pages. The input of the algorithm is a list of web pages and links between them. The user enters the list through the web interface. From these data the algorithm calculates a PageRank value for each page. The algorithm repeats the process, until the difference of PageRank va...
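
    A minimal sketch of the iterative PageRank computation described above, assuming the usual damping factor of 0.85 and a tiny hand-made link list; the thesis' web interface and its exact stopping threshold are not reproduced.

      # Minimal PageRank by power iteration; the graph and the damping factor are illustrative.
      def pagerank(links, damping=0.85, tol=1e-8):
          pages = sorted(links)
          pr = {p: 1.0 / len(pages) for p in pages}
          while True:
              new = {}
              for p in pages:
                  incoming = sum(pr[q] / len(links[q]) for q in pages if p in links[q])
                  new[p] = (1 - damping) / len(pages) + damping * incoming
              if max(abs(new[p] - pr[p]) for p in pages) < tol:  # repeat until values settle
                  return new
              pr = new

      links = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}
      print(pagerank(links))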

  7. Digital Arithmetic: Division Algorithms

    DEFF Research Database (Denmark)

    Montuschi, Paolo; Nannarelli, Alberto

    2017-01-01

    implement it in hardware to not compromise the overall computation performances. This entry explains the basic algorithms, suitable for hardware and software, to implement division in computer systems. Two classes of algorithms implement division or square root: digit-recurrence and multiplicative (e.......g., Newton–Raphson) algorithms. The first class of algorithms, the digit-recurrence type, is particularly suitable for hardware implementation as it requires modest resources and provides good performance on contemporary technology. The second class of algorithms, the multiplicative type, requires...
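
    As a software illustration of the digit-recurrence class mentioned above, here is a radix-2 restoring division that produces one quotient bit per iteration; real hardware designs use redundant digit sets and other refinements not shown, and the word length below is an arbitrary choice.

      def restoring_divide(dividend: int, divisor: int, bits: int = 16):
          """Radix-2 restoring division: one quotient bit per iteration.
          Assumes non-negative integers with the dividend representable in `bits` bits."""
          remainder, quotient = 0, 0
          for i in range(bits - 1, -1, -1):
              remainder = (remainder << 1) | ((dividend >> i) & 1)  # bring down the next bit
              if remainder >= divisor:
                  remainder -= divisor          # subtraction succeeded: quotient bit is 1
                  quotient |= (1 << i)
              # otherwise "restore" (here: simply keep the old remainder) and the bit is 0
          return quotient, remainder

      print(restoring_divide(100, 7))   # -> (14, 2)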

  8. Cache-Oblivious Algorithms and Data Structures

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting

    2004-01-01

    as standard RAM algorithms with only one memory level, i.e. without any knowledge about memory hierarchies, but are analyzed in the two-level I/O model of Aggarwal and Vitter for an arbitrary memory and block size and an optimal off-line cache replacement strategy. The result are algorithms that automatically...... apply to multi-level memory hierarchies. This paper gives an overview of the results achieved on cache-oblivious algorithms and data structures since the seminal paper by Frigo et al....

  9. Adaptive Algorithm for Chirp-Rate Estimation

    Directory of Open Access Journals (Sweden)

    Igor Djurović

    2009-01-01

    Full Text Available Chirp-rate, as a second derivative of signal phase, is an important feature of nonstationary signals in numerous applications such as radar, sonar, and communications. In this paper, an adaptive algorithm for the chirp-rate estimation is proposed. It is based on the confidence intervals rule and the cubic-phase function. The window width is adaptively selected to achieve good tradeoff between bias and variance of the chirp-rate estimate. The proposed algorithm is verified by simulations and the results show that it outperforms the standard algorithm with fixed window width.
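
    A minimal sketch of the cubic phase function (CPF) estimator on a synthetic linear-FM signal, evaluated only at the centre of the signal and with a fixed window; the adaptive, confidence-interval-based window selection of the paper is omitted, and the signal parameters are illustrative.

      import numpy as np

      # Synthetic linear-FM signal: phase = a*t + 0.5*c*t^2, so the chirp rate
      # (second derivative of the phase) is c.
      fs, T, c_true = 1000.0, 1.0, 200.0            # sample rate, duration, chirp rate (rad/s^2)
      t = np.arange(-T / 2, T / 2, 1 / fs)
      z = np.exp(1j * (50.0 * t + 0.5 * c_true * t ** 2))

      # Cubic phase function at the centre of the signal (t = 0):
      # CPF(0, Omega) = sum_m z(m) z(-m) exp(-j Omega m^2); |CPF| peaks near Omega = chirp rate.
      n0 = len(z) // 2
      m = np.arange(1, n0 - 1)
      pair = z[n0 + m] * z[n0 - m]
      tau = (m / fs) ** 2
      omegas = np.linspace(0, 400, 2001)
      cpf = np.abs(np.array([np.sum(pair * np.exp(-1j * w * tau)) for w in omegas]))
      print("estimated chirp rate:", omegas[np.argmax(cpf)])   # expected near c_true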

  10. Some multigrid algorithms for SIMD machines

    Energy Technology Data Exchange (ETDEWEB)

    Dendy, J.E. Jr. [Los Alamos National Lab., NM (United States)

    1996-12-31

    Previously a semicoarsening multigrid algorithm suitable for use on SIMD architectures was investigated. Through the use of new software tools, the performance of this algorithm has been considerably improved. The method has also been extended to three space dimensions. The method performs well for strongly anisotropic problems and for problems with coefficients jumping by orders of magnitude across internal interfaces. The parallel efficiency of this method is analyzed, and its actual performance on the CM-5 is compared with its performance on the CRAY-YMP. A standard coarsening multigrid algorithm is also considered, and we compare its performance on these two platforms as well.

  11. Modified Clipped LMS Algorithm

    Directory of Open Access Journals (Sweden)

    Lotfizad Mojtaba

    2005-01-01

    Full Text Available A new algorithm is proposed for updating the weights of an adaptive filter. The proposed algorithm is a modification of an existing method, namely, the clipped LMS, and uses a three-level quantization scheme that involves threshold clipping of the input signals in the filter weight update formula. Mathematical analysis shows the convergence of the filter weights to the optimum Wiener filter weights. Also, it can be proved that the proposed modified clipped LMS (MCLMS) algorithm has better tracking than the LMS algorithm. In addition, this algorithm has reduced computational complexity relative to the unmodified one. By using a suitable threshold, it is possible to increase the tracking capability of the MCLMS algorithm compared to the LMS algorithm, but this causes slower convergence. Computer simulations confirm the mathematical analysis presented.
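
    A minimal sketch of the three-level (clipped) input quantization inside an LMS-type update, used here for a toy system-identification problem; the threshold, step size, and test setup are illustrative assumptions rather than the paper's.

      import numpy as np

      def three_level(x, threshold):
          """Quantize the input to {-1, 0, +1}: zero inside the dead zone, sign outside."""
          return np.where(np.abs(x) < threshold, 0.0, np.sign(x))

      rng = np.random.default_rng(0)
      n, taps, mu, thr = 20000, 8, 0.005, 0.3
      w_true = rng.normal(size=taps)             # unknown system to identify
      x = rng.normal(size=n)
      d = np.convolve(x, w_true)[:n] + 0.01 * rng.normal(size=n)

      w = np.zeros(taps)
      for k in range(taps, n):
          u = x[k - taps + 1:k + 1][::-1]        # current input vector (most recent first)
          e = d[k] - w @ u                       # a-priori error against the desired signal
          w = w + mu * e * three_level(u, thr)   # MCLMS-style update: quantized input, full error
      print("final weight error norm:", np.linalg.norm(w - w_true))   # should be small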

  12. Algorithmic and user study of an autocompletion algorithm on a large medical vocabulary.

    Science.gov (United States)

    Sevenster, Merlijn; van Ommering, Rob; Qian, Yuechen

    2012-02-01

    Autocompletion supports human-computer interaction in software applications that let users enter textual data. We will be inspired by the use case in which medical professionals enter ontology concepts, catering to the ongoing demand for structured and standardized data in medicine. Our goal is to give an algorithmic analysis of one particular autocompletion algorithm, called the multi-prefix matching algorithm, which suggests terms whose words' prefixes contain all words in the string typed by the user; in this sense, opt ner me matches optic nerve meningioma. Second, we aim to investigate how well it supports users entering concepts from a large and comprehensive medical vocabulary (SNOMED CT). We give a concise description of the multi-prefix algorithm and sketch how it can be optimized to meet the required response time. Performance will be compared to a baseline algorithm, which gives suggestions that extend the string typed by the user to the right, e.g., optic nerve m gives optic nerve meningioma, but opt ner me does not. We conduct a user experiment in which 12 participants are invited to complete 40 SNOMED CT terms with the baseline algorithm and another set of 40 SNOMED CT terms with the multi-prefix algorithm. Our results show that users need significantly fewer keystrokes when supported by the multi-prefix algorithm than when supported by the baseline algorithm. The proposed algorithm is a competitive candidate for searching and retrieving terms from a large medical ontology. Copyright © 2011 Elsevier Inc. All rights reserved.
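
    A toy version of the multi-prefix matching idea, greedy and order-insensitive for simplicity; the paper's optimized implementation and its response-time tricks are not reproduced, and the small vocabulary is made up.

      def multi_prefix_match(query: str, term: str) -> bool:
          """True if every word typed by the user is a prefix of some distinct word
          in the candidate term (greedy matching, order-insensitive)."""
          words = term.lower().split()
          for token in query.lower().split():
              hit = next((w for w in words if w.startswith(token)), None)
              if hit is None:
                  return False
              words.remove(hit)          # each term word may satisfy only one query token
          return True

      vocabulary = ["optic nerve meningioma", "optic neuritis", "nerve sheath tumour"]
      print([t for t in vocabulary if multi_prefix_match("opt ner me", t)])
      # -> ['optic nerve meningioma']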

  13. Analysis and Improvement of Fireworks Algorithm

    Directory of Open Access Journals (Sweden)

    Xi-Guang Li

    2017-02-01

    Full Text Available The Fireworks Algorithm is a recently developed swarm intelligence algorithm that simulates the explosion process of fireworks. Based on an analysis of each operator of the Fireworks Algorithm (FWA), this paper improves the FWA and proves that the improved algorithm converges to the global optimal solution with probability 1. The proposed algorithm aims to further boost performance and achieve global optimization mainly through the following strategies. First, the population is initialized using opposition-based learning. Second, a new explosion amplitude mechanism for the optimal firework is proposed. In addition, adaptive t-distribution mutation for non-optimal individuals and elite opposition-based learning for the optimal individual are used. Finally, a new selection strategy, namely Disruptive Selection, is proposed to reduce the running time of the algorithm compared with FWA. In our simulation, we apply the CEC2013 standard functions and compare the proposed algorithm (IFWA) with SPSO2011, FWA, EFWA and dynFWA. The results show that the proposed algorithm has better overall performance on the test functions.

  14. ['Gold standard', not 'golden standard']

    NARCIS (Netherlands)

    Claassen, J.A.H.R.

    2005-01-01

    In medical literature, both 'gold standard' and 'golden standard' are employed to describe a reference test used for comparison with a novel method. The term 'gold standard' in its current sense in medical research was coined by Rudd in 1979, in reference to the monetary gold standard. In the same

  15. Sampling Within k-Means Algorithm to Cluster Large Datasets

    Energy Technology Data Exchange (ETDEWEB)

    Bejarano, Jeremy [Brigham Young University; Bose, Koushiki [Brown University; Brannan, Tyler [North Carolina State University; Thomas, Anita [Illinois Institute of Technology; Adragni, Kofi [University of Maryland; Neerchal, Nagaraj [University of Maryland; Ostrouchov, George [ORNL

    2011-08-01

    Due to current data collection technology, our ability to gather data has surpassed our ability to analyze it. In particular, k-means, one of the simplest and fastest clustering algorithms, is ill-equipped to handle extremely large datasets on even the most powerful machines. Our new algorithm uses a sample from a dataset to decrease runtime by reducing the amount of data analyzed. We perform a simulation study to compare our sampling-based k-means to the standard k-means algorithm by analyzing both the speed and accuracy of the two methods. Results show that our algorithm is significantly more efficient than the existing algorithm with comparable accuracy. Further work on this project might include a more comprehensive study on more varied test datasets as well as on real weather datasets, especially since this preliminary study was performed on rather tame datasets. Future studies should also analyze the performance of the algorithm for varied values of k. Lastly, this paper showed that the algorithm was accurate for relatively low sample sizes. We would like to analyze this further to see how accurate the algorithm is for even lower sample sizes: we could find the lowest sample sizes, by manipulating width and confidence level, for which the algorithm would be acceptably accurate. In order for our algorithm to be a success, it needs to meet two benchmarks: match the accuracy of the standard k-means algorithm and significantly reduce runtime. Both goals are accomplished for all six datasets analyzed. However, on datasets of three and four dimensions, as the data become more difficult to cluster, both algorithms fail to obtain the correct classifications on some trials. Nevertheless, our algorithm consistently matches the performance of the standard algorithm while becoming remarkably more efficient with time. Therefore, we conclude that analysts can use our algorithm, expecting accurate results in considerably less time.
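
    A minimal sketch of the sampling idea: run plain Lloyd's k-means on a random sample, then assign the full dataset in a single pass; the sample size, data, and k are illustrative assumptions, not the study's settings.

      import numpy as np

      def kmeans(data, k, iters=50, rng=None):
          """Plain Lloyd's algorithm: alternate point assignment and centroid update."""
          rng = rng or np.random.default_rng(0)
          centroids = data[rng.choice(len(data), size=k, replace=False)]
          for _ in range(iters):
              labels = np.argmin(((data[:, None, :] - centroids[None]) ** 2).sum(-1), axis=1)
              centroids = np.array([
                  data[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
                  for j in range(k)
              ])
          return centroids

      rng = np.random.default_rng(0)
      data = np.vstack([rng.normal(c, 0.3, size=(10000, 2)) for c in ((0, 0), (3, 3), (0, 4))])

      # Sampling-based variant: fit centroids on a small random sample, then make one
      # assignment pass over the full dataset (the sample size of 500 is an arbitrary choice).
      sample = data[rng.choice(len(data), size=500, replace=False)]
      centroids = kmeans(sample, k=3, rng=rng)
      labels = np.argmin(((data[:, None, :] - centroids[None]) ** 2).sum(-1), axis=1)
      print(np.round(centroids, 2))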

  16. Cloud Model Bat Algorithm

    OpenAIRE

    Yongquan Zhou; Jian Xie; Liangliang Li; Mingzhi Ma

    2014-01-01

    Bat algorithm (BA) is a novel stochastic global optimization algorithm. Cloud model is an effective tool in transforming between qualitative concepts and their quantitative representation. Based on the bat echolocation mechanism and excellent characteristics of cloud model on uncertainty knowledge representation, a new cloud model bat algorithm (CBA) is proposed. This paper focuses on remodeling echolocation model based on living and preying characteristics of bats, utilizing the transformati...

  17. Recursive forgetting algorithms

    DEFF Research Database (Denmark)

    Parkum, Jens; Poulsen, Niels Kjølstad; Holst, Jan

    1992-01-01

    In the first part of the paper, a general forgetting algorithm is formulated and analysed. It contains most existing forgetting schemes as special cases. Conditions are given ensuring that the basic convergence properties will hold. In the second part of the paper, the results are applied...... to a specific algorithm with selective forgetting. Here, the forgetting is non-uniform in time and space. The theoretical analysis is supported by a simulation example demonstrating the practical performance of this algorithm...
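
    The report treats a general forgetting framework; as a concrete, familiar special case (uniform exponential forgetting rather than the selective, non-uniform forgetting analysed in its second part), here is a recursive least squares sketch with a forgetting factor. The data and parameters are illustrative.

      import numpy as np

      def rls_forgetting(X, y, lam=0.98, delta=100.0):
          """Recursive least squares with exponential forgetting factor lam."""
          n, p = X.shape
          w = np.zeros(p)
          P = delta * np.eye(p)                      # large initial "covariance"
          for x, d in zip(X, y):
              k = P @ x / (lam + x @ P @ x)          # gain vector
              w = w + k * (d - x @ w)                # update estimate with the a-priori error
              P = (P - np.outer(k, x @ P)) / lam     # discount old information
          return w

      rng = np.random.default_rng(0)
      X = rng.normal(size=(2000, 3))
      w_true = np.array([1.0, -2.0, 0.5])
      y = X @ w_true + 0.05 * rng.normal(size=2000)
      print(np.round(rls_forgetting(X, y), 3))       # expected near [1, -2, 0.5]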

  18. Dynamic Inertia Weight Binary Bat Algorithm with Neighborhood Search

    Directory of Open Access Journals (Sweden)

    Xingwang Huang

    2017-01-01

    Full Text Available Binary bat algorithm (BBA) is a binary version of the bat algorithm (BA). It has been proven that BBA is competitive compared to other binary heuristic algorithms. Since the velocity update processes in the algorithm are consistent with BA, in some cases this algorithm also faces the premature convergence problem. This paper proposes an improved binary bat algorithm (IBBA) to solve this problem. To evaluate the performance of IBBA, standard benchmark functions and zero-one knapsack problems have been employed. The numerical results obtained in the benchmark function experiments show that the proposed approach greatly outperforms the original BBA and binary particle swarm optimization (BPSO). Comparisons with several other heuristic algorithms on zero-one knapsack problems also verify that the proposed algorithm is better able to avoid local minima.

  19. Wolf Search Algorithm for Solving Optimal Reactive Power Dispatch Problem

    Directory of Open Access Journals (Sweden)

    Kanagasabai Lenin

    2015-03-01

    Full Text Available This paper presents a new bio-inspired heuristic optimization algorithm called the Wolf Search Algorithm (WSA) for solving the multi-objective reactive power dispatch problem. The Wolf Search Algorithm is a new bio-inspired heuristic algorithm based on wolf preying behaviour. The way wolves search for food and survive by avoiding their enemies has been imitated to formulate the algorithm for solving the reactive power dispatch problem. The speciality of the wolf is that it possesses both individual local searching ability and autonomous flocking movement, and this special property has been utilized to formulate the search algorithm. The proposed WSA has been tested on the standard IEEE 30-bus test system, and the simulation results clearly show the good performance of the proposed algorithm.

  20. Explaining algorithms using metaphors

    CERN Document Server

    Forišek, Michal

    2013-01-01

    There is a significant difference between designing a new algorithm, proving its correctness, and teaching it to an audience. When teaching algorithms, the teacher's main goal should be to convey the underlying ideas and to help the students form correct mental models related to the algorithm. This process can often be facilitated by using suitable metaphors. This work provides a set of novel metaphors identified and developed as suitable tools for teaching many of the 'classic textbook' algorithms taught in undergraduate courses worldwide. Each chapter provides exercises and didactic notes fo

  1. Spectral Decomposition Algorithm (SDA)

    Data.gov (United States)

    National Aeronautics and Space Administration — Spectral Decomposition Algorithm (SDA) is an unsupervised feature extraction technique similar to PCA that was developed to better distinguish spectral features in...

  2. Algorithms in Algebraic Geometry

    CERN Document Server

    Dickenstein, Alicia; Sommese, Andrew J

    2008-01-01

    In the last decade, there has been a burgeoning of activity in the design and implementation of algorithms for algebraic geometric computation. Some of these algorithms were originally designed for abstract algebraic geometry, but now are of interest for use in applications and some of these algorithms were originally designed for applications, but now are of interest for use in abstract algebraic geometry. The workshop on Algorithms in Algebraic Geometry that was held in the framework of the IMA Annual Program Year in Applications of Algebraic Geometry by the Institute for Mathematics and Its

  3. Accounting standards

    NARCIS (Netherlands)

    Stellinga, B.; Mügge, D.

    2014-01-01

    The European and global regulation of accounting standards have witnessed remarkable changes over the past twenty years. In the early 1990s, EU accounting practices were fragmented along national lines and US accounting standards were the de facto global standards. Since 2005, all EU listed

  4. Network-Oblivious Algorithms

    DEFF Research Database (Denmark)

    Bilardi, Gianfranco; Pietracaprina, Andrea; Pucci, Geppino

    2016-01-01

    A framework is proposed for the design and analysis of network-oblivious algorithms, namely algorithms that can run unchanged, yet efficiently, on a variety of machines characterized by different degrees of parallelism and communication capabilities. The framework prescribes that a network-oblivi...

  5. Graph Colouring Algorithms

    DEFF Research Database (Denmark)

    Husfeldt, Thore

    2015-01-01

    This chapter presents an introduction to graph colouring algorithms. The focus is on vertex-colouring algorithms that work for general classes of graphs with worst-case performance guarantees in a sequential model of computation. The presentation aims to demonstrate the breadth of available...

  6. Efficient graph algorithms

    Indian Academy of Sciences (India)

    Computing connectivities between all pairs of vertices: a good algorithm with respect to both space and time to compute the exact solution. Computing all-pairs distances: a good algorithm with respect to both space and time, but only approximate solutions can be found. Optimal bipartite matchings: an optimal matching need not always exist.

  7. Algorithms and Their Explanations

    NARCIS (Netherlands)

    Benini, M.; Gobbo, F.; Beckmann, A.; Csuhaj-Varjú, E.; Meer, K.

    2014-01-01

    By analysing the explanation of the classical heapsort algorithm via the method of levels of abstraction mainly due to Floridi, we give a concrete and precise example of how to deal with algorithmic knowledge. To do so, we introduce a concept already implicit in the method, the ‘gradient of

  8. 8. Algorithm Design Techniques

    Indian Academy of Sciences (India)

    Algorithms - Algorithm Design Techniques. R K Shyamasundar. Series Article, Resonance – Journal of Science Education, Volume 2, Issue 8. Author affiliation: R K Shyamasundar, Computer Science Group, Tata Institute of Fundamental Research, Homi Bhabha Road, Mumbai 400 005, India ...

  9. 8. Algorithm Design Techniques

    Indian Academy of Sciences (India)

    Algorithms - Algorithm Design Techniques. R K Shyamasundar. Series Article, Resonance – Journal of Science Education, Volume 2, Issue 8, August 1997, pp 6-17. Permanent link: http://www.ias.ac.in/article/fulltext/reso/002/08/0006-0017 ...

  10. Introduction to Algorithms -14 ...

    Indian Academy of Sciences (India)

    As elaborated in the earlier articles, algorithms must be written in an unambiguous formal way. Algorithms intended for automatic execution by computers are called programs and the formal notations used to write programs are called programming languages. The concept of a programming language has been around ...

  11. Immersive Algorithms: Better Visualization with Less Information

    DEFF Research Database (Denmark)

    Bille, Philip; Gørtz, Inge Li

    2017-01-01

    Visualizing algorithms, such as drawings, slideshow presentations, animations, videos, and software tools, is a key concept to enhance and support student learning. A typical visualization of an algorithm shows the data and then performs computation on the data. For instance, a standard visualization......” the full sorted array, but only the single position that it accesses during each step of the computation. To fix this discrepancy we introduce the immersive principle that states that at any point in time, the displayed information should closely match the information accessed by the algorithm. We give...... several examples of immersive visualizations of basic algorithms and data structures, discuss methods for implementing it, and briefly evaluate it....

  12. The variance of two game tree algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Yanjun [Southern Methodist Univ., Dallas, TX (United States)

    1997-06-01

    This paper studies the variance of two game tree algorithms, α-β search and SCOUT, in the stochastic i.i.d. model. The problem of determining the variance of the classic α-β search algorithm in the i.i.d. model has been long open. This paper resolves this problem partially. It is shown, by the martingale method, that the standard deviation of the weaker α-β search without deep cutoffs is of the same order as the expected number of leaves evaluated. A nearly-optimal upper bound on the variance of the general α-β search is obtained, and this upper bound yields an optimal bound if the current upper bound on the expected number of leaves evaluated by α-β search can be improved. A thorough treatment of the two-pass SCOUT algorithm is presented. The variance of the SCOUT algorithm is determined.

  13. Governance by algorithms

    Directory of Open Access Journals (Sweden)

    Francesca Musiani

    2013-08-01

    Full Text Available Algorithms are increasingly often cited as one of the fundamental shaping devices of our daily, immersed-in-information existence. Their importance is acknowledged, their performance scrutinised in numerous contexts. Yet, a lot of what constitutes 'algorithms' beyond their broad definition as “encoded procedures for transforming input data into a desired output, based on specified calculations” (Gillespie, 2013) is often taken for granted. This article seeks to contribute to the discussion about 'what algorithms do' and in which ways they are artefacts of governance, providing two examples drawing from the internet and ICT realm: search engine queries and e-commerce websites’ recommendations to customers. The question of the relationship between algorithms and rules is likely to occupy an increasingly central role in the study and the practice of internet governance, in terms of both institutions’ regulation of algorithms, and algorithms’ regulation of our society.

  14. Totally parallel multilevel algorithms

    Science.gov (United States)

    Frederickson, Paul O.

    1988-01-01

    Four totally parallel algorithms for the solution of a sparse linear system have common characteristics which become quite apparent when they are implemented on a highly parallel hypercube such as the CM2. These four algorithms are Parallel Superconvergent Multigrid (PSMG) of Frederickson and McBryan, Robust Multigrid (RMG) of Hackbusch, the FFT based Spectral Algorithm, and Parallel Cyclic Reduction. In fact, all four can be formulated as particular cases of the same totally parallel multilevel algorithm, which is referred to as TPMA. In certain cases the spectral radius of TPMA is zero, and it is recognized to be a direct algorithm. In many other cases the spectral radius, although not zero, is small enough that a single iteration per timestep keeps the local error within the required tolerance.

  15. Calculation of Five Thermodynamic Molecular Descriptors by Means of a General Computer Algorithm Based on the Group-Additivity Method: Standard Enthalpies of Vaporization, Sublimation and Solvation, and Entropy of Fusion of Ordinary Organic Molecules and Total Phase-Change Entropy of Liquid Crystals.

    Science.gov (United States)

    Naef, Rudolf; Acree, William E

    2017-06-25

    The calculation of the standard enthalpies of vaporization, sublimation and solvation of organic molecules is presented using a common computer algorithm on the basis of a group-additivity method. The same algorithm is also shown to enable the calculation of their entropy of fusion as well as the total phase-change entropy of liquid crystals. The present method is based on the complete breakdown of the molecules into their constituting atoms and their immediate neighbourhood; the respective calculations of the contributions of the atomic groups by means of the Gauss-Seidel fitting method are based on experimental data collected from the literature. The feasibility of the calculations for each of the mentioned descriptors was verified by means of a 10-fold cross-validation procedure, proving the good to high quality of the predicted values for the three mentioned enthalpies and for the entropy of fusion, whereas the predictive quality for the total phase-change entropy of liquid crystals was poor. The goodness of fit (Q²) and the standard deviation (σ) of the cross-validation calculations for the five descriptors were as follows: 0.9641 and 4.56 kJ/mol (N = 3386 test molecules) for the enthalpy of vaporization, 0.8657 and 11.39 kJ/mol (N = 1791) for the enthalpy of sublimation, 0.9546 and 4.34 kJ/mol (N = 373) for the enthalpy of solvation, 0.8727 and 17.93 J/mol/K (N = 2637) for the entropy of fusion, and 0.5804 and 32.79 J/mol/K (N = 2643) for the total phase-change entropy of liquid crystals. The large discrepancy between the results of the two closely related entropies is discussed in detail. Molecules for which both the standard enthalpies of vaporization and sublimation were calculable enabled the estimation of their standard enthalpy of fusion by simple subtraction of the former from the latter enthalpy. For 990 of them the experimental enthalpy-of-fusion values are also known, allowing their comparison with predictions, yielding a correlation coefficient R

  16. Loop algorithms for quantum simulations of fermion models on lattices

    International Nuclear Information System (INIS)

    Kawashima, N.; Gubernatis, J.E.; Evertz, H.G.

    1994-01-01

    Two cluster algorithms, based on constructing and flipping loops, are presented for world-line quantum Monte Carlo simulations of fermions and are tested on the one-dimensional repulsive Hubbard model. We call these algorithms the loop-flip and loop-exchange algorithms. For these two algorithms and the standard world-line algorithm, we calculated the autocorrelation times for various physical quantities and found that the ordinary world-line algorithm, which uses only local moves, suffers from very long correlation times that make it difficult to estimate not only the error but also the average values themselves. These difficulties are especially severe in the low-temperature, large-U regime. In contrast, we find that the new algorithms, when used alone or in combination with themselves and the standard algorithm, can have significantly smaller autocorrelation times, in some cases smaller by three orders of magnitude. The new algorithms, which use nonlocal moves, are discussed from the point of view of a general prescription for developing cluster algorithms. The loop-flip algorithm is also shown to be ergodic and to belong to the grand canonical ensemble. Extensions to other models and higher dimensions are briefly discussed

  17. Genetic Algorithms to Optimizatize Lecturer Assessment's Criteria

    Science.gov (United States)

    Jollyta, Deny; Johan; Hajjah, Alyauma

    2017-12-01

    Lecturer assessment criteria are used as a measure of a lecturer's performance in a college environment. Determining the value of a criterion is complicated and often leads to doubt. The absence of a standard value for each assessment criterion will affect the final results of the assessment and yield less presentable data for the college leadership when making various policies related to reward and punishment. The Genetic Algorithm is an algorithm capable of solving non-linear problems. Using chromosomes in a random initial population, one possible representation being binary, it evaluates the fitness function and applies the crossover and mutation genetic operators to obtain the desired offspring. The aim is to obtain the most optimal criteria values in terms of the fitness function of each chromosome. The training results show that the Genetic Algorithm is able to produce optimal values of the lecturer assessment criteria, which can be used by the college as standard values for the lecturer assessment criteria.
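
    A generic binary-chromosome GA sketch in the spirit of the abstract; the encoding, the demo fitness (distance to a made-up reference profile), and all parameters are illustrative assumptions rather than the paper's actual criteria model.

      import random

      BITS, N_CRITERIA = 8, 4         # hypothetical: 4 criteria, each encoded in 8 bits
      TARGET = [0.9, 0.7, 0.8, 0.6]   # hypothetical reference profile used only for the demo fitness

      def decode(chrom):
          # Each 8-bit slice becomes a criterion value in [0, 1].
          return [int(chrom[i * BITS:(i + 1) * BITS], 2) / 255 for i in range(N_CRITERIA)]

      def fitness(chrom):
          vals = decode(chrom)
          return -sum((v - t) ** 2 for v, t in zip(vals, TARGET))   # higher is better

      def crossover(a, b):
          cut = random.randrange(1, len(a))
          return a[:cut] + b[cut:], b[:cut] + a[cut:]

      def mutate(chrom, rate=0.01):
          return "".join(bit if random.random() > rate else "10"[int(bit)] for bit in chrom)

      random.seed(0)
      pop = ["".join(random.choice("01") for _ in range(BITS * N_CRITERIA)) for _ in range(40)]
      for _ in range(200):
          pop.sort(key=fitness, reverse=True)
          parents = pop[:20]                       # simple truncation selection
          children = []
          while len(children) < 20:
              c1, c2 = crossover(*random.sample(parents, 2))
              children += [mutate(c1), mutate(c2)]
          pop = parents + children[:20]
      best = max(pop, key=fitness)
      print("optimised criterion values:", [round(v, 2) for v in decode(best)])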

  18. An Algorithm for Fault-Tree Construction

    DEFF Research Database (Denmark)

    Taylor, J. R.

    1982-01-01

    An algorithm for performing certain parts of the fault tree construction process is described. Its input is a flow sheet of the plant, a piping and instrumentation diagram, or a wiring diagram of the circuits, to be analysed, together with a standard library of component functional and failure mo...

  19. Reverse Universal Resolving Algorithm and inverse driving

    DEFF Research Database (Denmark)

    Pécseli, Thomas

    2012-01-01

    Inverse interpretation is a semantics based, non-standard interpretation of programs. Given a program and a value, an inverse interpreter finds all or one of the inputs, that would yield the given value as output with normal forward evaluation. The Reverse Universal Resolving Algorithm is a new v...

  20. SU-C-BRC-04: Efficient Dose Calculation Algorithm for FFF IMRT with a Simplified Bivariate Gaussian Source Model

    Energy Technology Data Exchange (ETDEWEB)

    Li, F; Park, J; Barraclough, B; Lu, B; Li, J; Liu, C; Yan, G [University Florida, Gainesville, FL (United States)

    2016-06-15

    Purpose: To develop an efficient and accurate independent dose calculation algorithm with a simplified analytical source model for the quality assurance and safe delivery of Flattening Filter Free (FFF)-IMRT on an Elekta Versa HD. Methods: The source model consisted of a point source and a 2D bivariate Gaussian source, respectively modeling the primary photons and the combined effect of head scatter, monitor chamber backscatter and collimator exchange effect. The in-air fluence was firstly calculated by back-projecting the edges of beam defining devices onto the source plane and integrating the visible source distribution. The effect of the rounded MLC leaf end, tongue-and-groove and interleaf transmission was taken into account in the back-projection. The in-air fluence was then modified with a fourth degree polynomial modeling the cone-shaped dose distribution of FFF beams. Planar dose distribution was obtained by convolving the in-air fluence with a dose deposition kernel (DDK) consisting of the sum of three 2D Gaussian functions. The parameters of the source model and the DDK were commissioned using measured in-air output factors (Sc) and cross beam profiles, respectively. A novel method was used to eliminate the volume averaging effect of ion chambers in determining the DDK. Planar dose distributions of five head-and-neck FFF-IMRT plans were calculated and compared against measurements performed with a 2D diode array (MapCHECK™) to validate the accuracy of the algorithm. Results: The proposed source model predicted Sc for both 6MV and 10MV with an accuracy better than 0.1%. With a stringent gamma criterion (2%/2mm/local difference), the passing rate of the FFF-IMRT dose calculation was 97.2±2.6%. Conclusion: The removal of the flattening filter represents a simplification of the head structure which allows the use of a simpler source model for very accurate dose calculation. The proposed algorithm offers an effective way to ensure the safe delivery of FFF-IMRT.

  1. Communications standards

    CERN Document Server

    Stokes, A V

    1986-01-01

    Communications Standards deals with the standardization of computer communication networks. This book examines the types of local area networks (LANs) that have been developed and looks at some of the relevant protocols in more detail. The work of Project 802 is briefly discussed, along with a protocol which has developed from one of the LAN standards and is now a de facto standard in one particular area, namely the Manufacturing Automation Protocol (MAP). Factors that affect the usage of networks, such as network management and security, are also considered. This book is divided into three se

  2. War-Algorithm Accountability

    OpenAIRE

    Lewis, Dustin A.; Blum, Gabriella; Modirzadeh, Naz K.

    2016-01-01

    In this briefing report, we introduce a new concept — war algorithms — that elevates algorithmically-derived “choices” and “decisions” to a, and perhaps the, central concern regarding technical autonomy in war. We thereby aim to shed light on and recast the discussion regarding “autonomous weapon systems.” We define “war algorithm” as any algorithm that is expressed in computer code, that is effectuated through a constructed system, and that is capable of operating in relation to armed co...

  3. Accuracy of 3-dimensional reconstruction algorithms for the high-resolution research tomograph.

    Science.gov (United States)

    van Velden, Floris H P; Kloet, Reina W; van Berckel, Bart N M; Lammertsma, Adriaan A; Boellaard, Ronald

    2009-01-01

    The high-resolution research tomograph (HRRT) is a dedicated human brain PET scanner. At present, iterative reconstruction methods are preferred for reconstructing HRRT studies. However, these iterative reconstruction algorithms show bias in short-duration frames. New algorithms such as the shifted Poisson ordered-subsets expectation maximization (SP-OSEM) and ordered-subsets weighted least squares (OSWLS) showed promising results in bias reduction, compared with the recommended ordinary Poisson OSEM (OP-OSEM). The goal of this study was to evaluate quantitative accuracy of these iterative reconstruction algorithms, compared with 3-dimensional filtered backprojection (3D-FBP). The 3 above-mentioned 3D iterative reconstruction methods were implemented for the HRRT. To evaluate the various 3D iterative reconstruction techniques quantitatively, several phantom studies and a human brain study (n=5) were performed. OSWLS showed a low and almost linearly increasing coefficient of variation (SD over average activity concentration), with decreasing noise-equivalent count rates. In decay studies, OSWLS showed good agreement with the 3D-FBP gray matter (GM)-to-white matter (WM) contrast ratio (noise-equivalent count rates; this variability was much higher for other iterative methods (>92%). 3D-FBP showed the least variability (34%). Visually, OSWLS hardly showed any artifacts in parametric images and showed good agreement with 3D-FBP data for parametric images, especially in the case of reference-tissue kinetic methods (slope, 1.02; Pearson correlation coefficient, 0.99). OP-OSEM, SP-OSEM, and OSWLS showed good performance for phantom studies. In addition, OSWLS showed better results for parametric analysis of clinical studies and is therefore recommended for quantitative HRRT brain PET studies.

  4. A new algorithm for 3D reconstruction from support functions

    DEFF Research Database (Denmark)

    Gardner, Richard; Kiderlen, Markus

    2009-01-01

    We introduce a new algorithm for reconstructing an unknown shape from a finite number of noisy measurements of its support function. The algorithm, based on a least squares procedure, is very easy to program in standard software such as Matlab and allows, for the first time, good 3D reconstructio...

  5. A Cache-Optimal Alternative to the Unidirectional Hierarchization Algorithm

    DEFF Research Database (Denmark)

    Hupp, Philipp; Jacob, Riko

    2016-01-01

    of the cache misses by a factor of d compared to the unidirectional algorithm which is the common standard up to now. The new algorithm is also optimal in the sense that the leading term of the cache misses is reduced to scanning complexity, i.e., every degree of freedom has to be touched once. We also present...

  6. Autofocus algorithm for synthetic aperture radar imaging with large curvilinear apertures

    Science.gov (United States)

    Bleszynski, E.; Bleszynski, M.; Jaroszewicz, T.

    2013-05-01

    An approach to autofocusing for large curved synthetic aperture radar (SAR) apertures is presented. Its essential feature is that phase corrections are extracted not directly from SAR images, but rather from reconstructed SAR phase-history data representing windowed patches of the scene, of sizes sufficiently small to allow the linearization of the forward- and back-projection formulae. The algorithm processes data associated with each patch independently and in two steps. The first step employs a phase-gradient-type method in which phase corrections compensating for (possibly rapid) trajectory perturbations are estimated from the reconstructed phase history for the dominant scattering point on the patch. The second step uses the phase-gradient-corrected data and extracts the absolute phase value, removing in this way phase ambiguities, reducing possible imperfections of the first stage, and providing the distances between the sensor and the scattering point with accuracy comparable to the wavelength. The features of the proposed autofocusing method are illustrated by applying it to intentionally corrupted small-scene 2006 Gotcha data. The examples include the extraction of absolute phases (ranges) for selected prominent point targets. These are then used to focus the scene and determine relative target-target distances.

  7. Achieving Standardization

    DEFF Research Database (Denmark)

    Henningsson, Stefan

    2016-01-01

    competitive, national customs and regional economic organizations are seeking to establish a standardized solution for digital reporting of customs data. However, standardization has proven hard to achieve in the socio-technical e-Customs solution. In this chapter, the authors identify and describe what has......International e-Customs is going through a standardization process. Driven by the need to increase control in the trade process to address security challenges stemming from threats of terrorists, diseases, and counterfeit products, and to lower the administrative burdens on traders to stay...... to be harmonized in order for a global company to perceive e-Customs as standardized. In doing so, they contribute an explanation of the challenges associated with using a standardization mechanism for harmonizing socio-technical information systems....

  8. Achieving Standardization

    DEFF Research Database (Denmark)

    Henningsson, Stefan

    2014-01-01

    competitive, national customs and regional economic organizations are seeking to establish a standardized solution for digital reporting of customs data. However, standardization has proven hard to achieve in the socio-technical e-Customs solution. In this chapter, the authors identify and describe what has......International e-Customs is going through a standardization process. Driven by the need to increase control in the trade process to address security challenges stemming from threats of terrorists, diseases, and counterfeit products, and to lower the administrative burdens on traders to stay...... to be harmonized in order for a global company to perceive e-Customs as standardized. In doing so, they contribute an explanation of the challenges associated with using a standardization mechanism for harmonizing socio-technical information systems....

  9. Training Standardization

    International Nuclear Information System (INIS)

    Agnihotri, Newal

    2003-01-01

    The article describes the benefits of and required process and recommendations for implementing the standardization of training in the nuclear power industry in the United States and abroad. Current Information and Communication Technologies (ICT) enable training standardization in the nuclear power industry. The delivery of training through the Internet, Intranet and video over IP will facilitate this standardization and bring multiple benefits to the nuclear power industry worldwide. As the amount of available qualified and experienced professionals decreases because of retirements and fewer nuclear engineering institutions, standardized training will help increase the number of available professionals in the industry. Technology will make it possible to use the experience of retired professionals who may be interested in working part-time from a remote location. Well-planned standardized training will prevent a fragmented approach among utilities, and it will save the industry considerable resources in the long run. It will also ensure cost-effective and safe nuclear power plant operation

  10. Validation for 2D/3D registration. II: The comparison of intensity- and gradient-based merit functions using a new gold standard data set.

    Science.gov (United States)

    Gendrin, Christelle; Markelj, Primoz; Pawiro, Supriyanto Ardjo; Spoerk, Jakob; Bloch, Christoph; Weber, Christoph; Figl, Michael; Bergmann, Helmar; Birkfellner, Wolfgang; Likar, Bostjan; Pernus, Franjo

    2011-03-01

    A new gold standard data set for validation of 2D/3D registration based on a porcine cadaver head with attached fiducial markers was presented in the first part of this article. The advantage of this new phantom is the large amount of soft tissue, which simulates realistic conditions for registration. This article tests the performance of intensity- and gradient-based algorithms for 2D/3D registration using the new phantom data set. Intensity-based methods with four merit functions, namely, cross correlation, rank correlation, correlation ratio, and mutual information (MI), and two gradient-based algorithms, the backprojection gradient-based (BGB) registration method and the reconstruction gradient-based (RGB) registration method, were compared. Four volumes consisting of CBCT with two fields of view, 64 slice multidetector CT, and magnetic resonance-T1 weighted images were registered to a pair of kV x-ray images and a pair of MV images. A standardized evaluation methodology was employed. Targets were evenly spread over the volumes and 250 starting positions of the 3D volumes with initial displacements of up to 25 mm from the gold standard position were calculated. After the registration, the displacement from the gold standard was retrieved and the root mean square (RMS), mean, and standard deviation mean target registration errors (mTREs) over 250 registrations were derived. Additionally, the following merit properties were computed: Accuracy, capture range, number of minima, risk of nonconvergence, and distinctiveness of optimum for better comparison of the robustness of each merit. Among the merit functions used for the intensity-based method, MI reached the best accuracy with an RMS mTRE down to 1.30 mm. Furthermore, it was the only merit function that could accurately register the CT to the kV x rays with the presence of tissue deformation. As for the gradient-based methods, BGB and RGB methods achieved subvoxel accuracy (RMS mTRE down to 0.56 and 0.70 mm

  11. Cloud model bat algorithm.

    Science.gov (United States)

    Zhou, Yongquan; Xie, Jian; Li, Liangliang; Ma, Mingzhi

    2014-01-01

    Bat algorithm (BA) is a novel stochastic global optimization algorithm. Cloud model is an effective tool in transforming between qualitative concepts and their quantitative representation. Based on the bat echolocation mechanism and excellent characteristics of cloud model on uncertainty knowledge representation, a new cloud model bat algorithm (CBA) is proposed. This paper focuses on remodeling echolocation model based on living and preying characteristics of bats, utilizing the transformation theory of cloud model to depict the qualitative concept: "bats approach their prey." Furthermore, Lévy flight mode and population information communication mechanism of bats are introduced to balance the advantage between exploration and exploitation. The simulation results show that the cloud model bat algorithm has good performance on functions optimization.

  12. Cloud Model Bat Algorithm

    Directory of Open Access Journals (Sweden)

    Yongquan Zhou

    2014-01-01

    Full Text Available Bat algorithm (BA) is a novel stochastic global optimization algorithm. Cloud model is an effective tool in transforming between qualitative concepts and their quantitative representation. Based on the bat echolocation mechanism and excellent characteristics of cloud model on uncertainty knowledge representation, a new cloud model bat algorithm (CBA) is proposed. This paper focuses on remodeling echolocation model based on living and preying characteristics of bats, utilizing the transformation theory of cloud model to depict the qualitative concept: “bats approach their prey.” Furthermore, Lévy flight mode and population information communication mechanism of bats are introduced to balance the advantage between exploration and exploitation. The simulation results show that the cloud model bat algorithm has good performance on functions optimization.

  13. Unsupervised learning algorithms

    CERN Document Server

    Aydin, Kemal

    2016-01-01

    This book summarizes the state-of-the-art in unsupervised learning. The contributors discuss how with the proliferation of massive amounts of unlabeled data, unsupervised learning algorithms, which can automatically discover interesting and useful patterns in such data, have gained popularity among researchers and practitioners. The authors outline how these algorithms have found numerous applications including pattern recognition, market basket analysis, web mining, social network analysis, information retrieval, recommender systems, market research, intrusion detection, and fraud detection. They present how the difficulty of developing theoretically sound approaches that are amenable to objective evaluation have resulted in the proposal of numerous unsupervised learning algorithms over the past half-century. The intended audience includes researchers and practitioners who are increasingly using unsupervised learning algorithms to analyze their data. Topics of interest include anomaly detection, clustering,...

  14. Algorithms for parallel computers

    International Nuclear Information System (INIS)

    Churchhouse, R.F.

    1985-01-01

    Until relatively recently almost all the algorithms for use on computers had been designed on the (usually unstated) assumption that they were to be run on single-processor, serial machines. With the introduction of vector processors, array processors and interconnected systems of mainframes, minis and micros, however, various forms of parallelism have become available. The advantage of parallelism is that it offers increased overall processing speed, but it also raises some fundamental questions, including: (i) Which, if any, of the existing 'serial' algorithms can be adapted for use in the parallel mode? (ii) How close to optimal can such adapted algorithms be and, where relevant, what are the convergence criteria? (iii) How can we design new algorithms specifically for parallel systems? (iv) For multi-processor systems, how can we handle the software aspects of the interprocessor communications? Aspects of these questions, illustrated by examples, are considered in these lectures. (orig.)

  15. An Objective Evaluation of Four SAR Image Segmentation Algorithms

    National Research Council Canada - National Science Library

    Gregga, Jason

    2001-01-01

    .... A key step towards automated SAR image analysis is image segmentation. There are many segmentation algorithms, but they have not been tested on a common set of images, and there are no standard test methods...

  16. Static Analysis Numerical Algorithms

    Science.gov (United States)

    2016-04-01

    Static Analysis of Numerical Algorithms. Kestrel Technology, LLC. Final technical report, April 2016; approved for public release, distribution... Dates covered: November 2013 – November 2015. Contract number FA8750-14-C... The effort, together with Honeywell Aerospace Advanced Technology, combined model-based development of complex avionics control software with static analysis of the

  17. Improved Chaff Solution Algorithm

    Science.gov (United States)

    2009-03-01

    Under the Technology Demonstration Program (TDP) on the integration of sensors and onboard weapon systems (SISWS), an algorithm was developed to automatically determine

  18. Optimization algorithms and applications

    CERN Document Server

    Arora, Rajesh Kumar

    2015-01-01

    Choose the Correct Solution Method for Your Optimization ProblemOptimization: Algorithms and Applications presents a variety of solution techniques for optimization problems, emphasizing concepts rather than rigorous mathematical details and proofs. The book covers both gradient and stochastic methods as solution techniques for unconstrained and constrained optimization problems. It discusses the conjugate gradient method, Broyden-Fletcher-Goldfarb-Shanno algorithm, Powell method, penalty function, augmented Lagrange multiplier method, sequential quadratic programming, method of feasible direc

  19. Image Segmentation Algorithms Overview

    OpenAIRE

    Yuheng, Song; Hao, Yan

    2017-01-01

    The technology of image segmentation is widely used in medical image processing, face recognition, pedestrian detection, etc. The current image segmentation techniques include region-based segmentation, edge detection segmentation, segmentation based on clustering, segmentation based on weakly-supervised learning in CNN, etc. This paper analyzes and summarizes these algorithms of image segmentation, and compares the advantages and disadvantages of different algorithms. Finally, we make a predi...

  20. Algorithmic Principles of Mathematical Programming

    NARCIS (Netherlands)

    Faigle, Ulrich; Kern, Walter; Still, Georg

    2002-01-01

    Algorithmic Principles of Mathematical Programming investigates the mathematical structures and principles underlying the design of efficient algorithms for optimization problems. Recent advances in algorithmic theory have shown that the traditionally separate areas of discrete optimization, linear

  1. Applications of nonlocal means algorithm in low-dose X-ray CT image processing and reconstruction: a review

    Science.gov (United States)

    Zhang, Hao; Zeng, Dong; Zhang, Hua; Wang, Jing; Liang, Zhengrong

    2017-01-01

    Low-dose X-ray computed tomography (LDCT) imaging is highly recommended for use in the clinic because of growing concerns over excessive radiation exposure. However, the CT images reconstructed by the conventional filtered back-projection (FBP) method from low-dose acquisitions may be severely degraded with noise and streak artifacts due to excessive X-ray quantum noise, or with view-aliasing artifacts due to insufficient angular sampling. In 2005, the nonlocal means (NLM) algorithm was introduced as a non-iterative edge-preserving filter to denoise natural images corrupted by additive Gaussian noise, and showed superior performance. It has since been adapted and applied to many other image types and various inverse problems. This paper specifically reviews the applications of the NLM algorithm in LDCT image processing and reconstruction, and explicitly demonstrates its improving effects on the reconstructed CT image quality from low-dose acquisitions. The effectiveness of these applications on LDCT and their relative performance are described in detail. PMID:28303644
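
    A brute-force nonlocal means filter for additive Gaussian noise, restricted to a small search window for tractability; the LDCT-specific weight adaptations surveyed in the paper are not shown, and the patch size, search window, filtering parameter and test image are illustrative assumptions.

      import numpy as np

      def nlm_denoise(img, patch=3, search=7, h=10.0):
          """Brute-force nonlocal means: each pixel becomes a weighted average of pixels
          whose surrounding patches look similar; weights decay with patch distance."""
          pr, sr = patch // 2, search // 2
          padded = np.pad(img, pr + sr, mode="reflect")
          out = np.zeros_like(img, dtype=float)
          H, W = img.shape
          for i in range(H):
              for j in range(W):
                  ci, cj = i + pr + sr, j + pr + sr
                  ref = padded[ci - pr:ci + pr + 1, cj - pr:cj + pr + 1]
                  weights, values = [], []
                  for di in range(-sr, sr + 1):
                      for dj in range(-sr, sr + 1):
                          ni, nj = ci + di, cj + dj
                          cand = padded[ni - pr:ni + pr + 1, nj - pr:nj + pr + 1]
                          d2 = np.mean((ref - cand) ** 2)        # patch dissimilarity
                          weights.append(np.exp(-d2 / (h * h)))
                          values.append(padded[ni, nj])
                  weights = np.array(weights)
                  out[i, j] = np.dot(weights, values) / weights.sum()
          return out

      rng = np.random.default_rng(0)
      clean = np.zeros((64, 64)); clean[16:48, 16:48] = 100.0
      noisy = clean + rng.normal(0, 15, clean.shape)
      den = nlm_denoise(noisy, h=15.0)
      print("RMSE noisy:", np.sqrt(np.mean((noisy - clean) ** 2)),
            "RMSE denoised:", np.sqrt(np.mean((den - clean) ** 2)))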

  2. Information filtering via weighted heat conduction algorithm

    Science.gov (United States)

    Liu, Jian-Guo; Guo, Qiang; Zhang, Yi-Cheng

    2011-06-01

    In this paper, by taking into account the effects of user and object correlations on a heat conduction (HC) algorithm, a weighted heat conduction (WHC) algorithm is presented. We argue that the edge weight of the user-object bipartite network should be embedded into the HC algorithm to measure the object similarity. The numerical results indicate that both the accuracy and diversity could be improved greatly compared with the standard HC algorithm, with the optimal values reached simultaneously. On the Movielens and Netflix datasets, the algorithmic accuracy, measured by the average ranking score, can be improved by 39.7% and 56.1% in the optimal case, respectively, and the diversity could reach 0.9587 and 0.9317 when the length of the recommendation list equals 5. Further statistical analysis indicates that, in the optimal case, the distributions of the edge weight are changed to the Poisson form, which may be the reason why the HC algorithm performance could be improved. This work highlights the effect of edge weight on personalized recommendation, which may be an important factor affecting personalized recommendation performance.
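
    For reference, a sketch of the standard (unweighted) heat conduction scoring step on a toy user-object bipartite network; the paper's weighted variant embeds edge weights into these sums, which is not reproduced here, and the adjacency matrix is made up.

      import numpy as np

      # Toy user-object adjacency matrix A (rows: users, cols: objects); 1 = user collected object.
      A = np.array([[1, 1, 0, 0, 1],
                    [1, 0, 1, 0, 0],
                    [0, 1, 1, 1, 0],
                    [0, 0, 1, 1, 1]], dtype=float)

      def heat_conduction_scores(A, user):
          k_user = A.sum(axis=1)           # user degrees
          k_obj = A.sum(axis=0)            # object degrees
          # HC transition matrix: W[a, b] = (1 / k_obj[a]) * sum_u A[u, a] * A[u, b] / k_user[u]
          W = (A.T / k_user) @ A / k_obj[:, None]
          f0 = A[user]                     # initial resource: objects the target user collected
          scores = W @ f0
          scores[A[user] > 0] = -np.inf    # do not re-recommend already-collected objects
          return scores

      # Objects ranked for user 0 (already-collected objects pushed to the end).
      print(np.argsort(heat_conduction_scores(A, user=0))[::-1])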

  3. Analysis of a parallel multigrid algorithm

    Science.gov (United States)

    Chan, Tony F.; Tuminaro, Ray S.

    1989-01-01

    The parallel multigrid algorithm of Frederickson and McBryan (1987) is considered. This algorithm uses multiple coarse-grid problems (instead of one problem) in the hope of accelerating convergence and is found to have a close relationship to traditional multigrid methods. Specifically, the parallel coarse-grid correction operator is identical to a traditional multigrid coarse-grid correction operator, except that the mixing of high and low frequencies caused by aliasing error is removed. Appropriate relaxation operators can be chosen to take advantage of this property. Comparisons between the standard multigrid and the new method are made.

  4. Structural health monitoring algorithm comparisons using standard data sets

    Energy Technology Data Exchange (ETDEWEB)

    Figueiredo, Eloi; Park, Gyuhae; Figueiras, Joaquim; Farrar, Charles; Worden, Keith

    2009-03-01

    The real-world structures are subjected to operational and environmental condition changes that impose difficulties in detecting and identifying structural damage. The aim of this report is to detect damage with the presence of such operational and environmental condition changes through the application of the Los Alamos National Laboratory’s statistical pattern recognition paradigm for structural health monitoring (SHM). The test structure is a laboratory three-story building, and the damage is simulated through nonlinear effects introduced by a bumper mechanism that simulates a repetitive impact-type nonlinearity. The report reviews and illustrates various statistical principles that have had wide application in many engineering fields. The intent is to provide the reader with an introduction to feature extraction and statistical modelling for feature classification in the context of SHM. In this process, the strengths and limitations of some actual statistical techniques used to detect damage in the structures are discussed. In the hierarchical structure of damage detection, this report is only concerned with the first step of the damage detection strategy, which is the evaluation of the existence of damage in the structure. The data from this study and a detailed description of the test structure are available for download at: http://institute.lanl.gov/ei/software-and-data/.

  5. A Parallel Butterfly Algorithm

    KAUST Repository

    Poulson, Jack

    2014-02-04

    The butterfly algorithm is a fast algorithm which approximately evaluates a discrete analogue of the integral transform (Equation Presented.) at large numbers of target points when the kernel, K(x, y), is approximately low-rank when restricted to subdomains satisfying a certain simple geometric condition. In d dimensions with O(Nd) quasi-uniformly distributed source and target points, when each appropriate submatrix of K is approximately rank-r, the running time of the algorithm is at most O(r2Nd logN). A parallelization of the butterfly algorithm is introduced which, assuming a message latency of α and per-process inverse bandwidth of β, executes in at most (Equation Presented.) time using p processes. This parallel algorithm was then instantiated in the form of the open-source DistButterfly library for the special case where K(x, y) = exp(iΦ(x, y)), where Φ(x, y) is a black-box, sufficiently smooth, real-valued phase function. Experiments on Blue Gene/Q demonstrate impressive strong-scaling results for important classes of phase functions. Using quasi-uniform sources, hyperbolic Radon transforms, and an analogue of a three-dimensional generalized Radon transform were, respectively, observed to strong-scale from 1-node/16-cores up to 1024-nodes/16,384-cores with greater than 90% and 82% efficiency, respectively. © 2014 Society for Industrial and Applied Mathematics.

  6. Agency and Algorithms

    Directory of Open Access Journals (Sweden)

    Hanns Holger Rutz

    2016-11-01

    Full Text Available Although the concept of algorithms was established long ago, their current topicality indicates a shift in the discourse. Classical definitions based on logic seem inadequate to describe their aesthetic capabilities. New approaches stress their involvement in material practices as well as their incompleteness. Algorithmic aesthetics can no longer be tied to the static analysis of programs, but must take into account the dynamic and experimental nature of coding practices. It is suggested that the aesthetic objects thus produced articulate something that could be called algorithmicity, or the space of algorithmic agency. This is the space or the medium – following Luhmann’s form/medium distinction – where human and machine undergo mutual incursions. In the resulting coupled “extimate” writing process, human initiative and algorithmic speculation can no longer be clearly divided. Defining aspects of such a medium are observed by drawing a trajectory across a number of sound pieces; the operation of exchange between form and medium, which I call reconfiguration, is indicated by this trajectory.

  7. Evaluation of a Cross Layer Scheduling Algorithm for LTE Downlink

    Directory of Open Access Journals (Sweden)

    A. Popovska Avramova

    2013-06-01

    Full Text Available The LTE standard is a leading standard in the wireless broadband market. The Radio Resource Management at the base station plays a major role in satisfying users' demand for high data rates and quality of service. This paper evaluates a cross layer scheduling algorithm that aims at minimizing the resource utilization. The algorithm makes decisions based on channel conditions, the size of transmission buffers and different quality of service demands. Simulation results show that the new algorithm improves the resource utilization and provides better guarantees for service quality.

  8. Improved Heat-Stress Algorithm

    Science.gov (United States)

    Teets, Edward H., Jr.; Fehn, Steven

    2007-01-01

    NASA Dryden presents an improved and automated site-specific algorithm for heat-stress approximation using standard atmospheric measurements routinely obtained from the Edwards Air Force Base weather detachment. Heat stress, which is the net heat load a worker may be exposed to, is officially measured using a thermal-environment monitoring system to calculate the wet-bulb globe temperature (WBGT). This instrument uses three independent thermometers to measure wet-bulb, dry-bulb, and the black-globe temperatures. By using these improvements, a more realistic WBGT estimation value can now be produced. This is extremely useful for researchers and other employees who are working on outdoor projects that are distant from the areas that the Web system monitors. Most importantly, the improved WBGT estimations will make outdoor work sites safer by reducing the likelihood of heat stress.
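
    For reference, the outdoor WBGT index itself is a fixed weighting of the three thermometer readings mentioned above (0.7 natural wet-bulb, 0.2 black-globe, 0.1 dry-bulb). The snippet below computes only that standard combination; it does not reproduce NASA Dryden's site-specific estimation algorithm, which approximates these inputs from routine weather measurements.

```python
def wbgt_outdoor(t_wet_bulb_c, t_globe_c, t_dry_bulb_c):
    """Standard outdoor wet-bulb globe temperature, in degrees Celsius.

    Uses the conventional 0.7 / 0.2 / 0.1 weighting of the natural
    wet-bulb, black-globe and dry-bulb readings.
    """
    return 0.7 * t_wet_bulb_c + 0.2 * t_globe_c + 0.1 * t_dry_bulb_c

# Example with illustrative values for a hot desert afternoon.
print(f"WBGT = {wbgt_outdoor(22.0, 45.0, 38.0):.1f} C")
```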

  9. Hierarchical matrices algorithms and analysis

    CERN Document Server

    Hackbusch, Wolfgang

    2015-01-01

    This self-contained monograph presents matrix algorithms and their analysis. The new technique enables not only the solution of linear systems but also the approximation of matrix functions, e.g., the matrix exponential. Other applications include the solution of matrix equations, e.g., the Lyapunov or Riccati equation. The required mathematical background can be found in the appendix. The numerical treatment of fully populated large-scale matrices is usually rather costly. However, the technique of hierarchical matrices makes it possible to store matrices and to perform matrix operations approximately with almost linear cost and a controllable degree of approximation error. For important classes of matrices, the computational cost increases only logarithmically with the approximation error. The operations provided include the matrix inversion and LU decomposition. Since large-scale linear algebra problems are standard in scientific computing, the subject of hierarchical matrices is of interest to scientists ...

  10. Bayesian integration of networks without gold standards.

    Science.gov (United States)

    Weile, Jochen; James, Katherine; Hallinan, Jennifer; Cockell, Simon J; Lord, Phillip; Wipat, Anil; Wilkinson, Darren J

    2012-06-01

    Biological experiments give insight into networks of processes inside a cell, but are subject to error and uncertainty. However, due to the overlap between the large number of experiments reported in public databases it is possible to assess the chances of individual observations being correct. In order to do so, existing methods rely on high-quality 'gold standard' reference networks, but such reference networks are not always available. We present a novel algorithm for computing the probability of network interactions that operates without gold standard reference data. We show that our algorithm outperforms existing gold standard-based methods. Finally, we apply the new algorithm to a large collection of genetic interaction and protein-protein interaction experiments. The integrated dataset and a reference implementation of the algorithm as a plug-in for the Ondex data integration framework are available for download at http://bio-nexus.ncl.ac.uk/projects/nogold/

  11. Algorithms in invariant theory

    CERN Document Server

    Sturmfels, Bernd

    2008-01-01

    J. Kung and G.-C. Rota, in their 1984 paper, write: "Like the Arabian phoenix rising out of its ashes, the theory of invariants, pronounced dead at the turn of the century, is once again at the forefront of mathematics". The book of Sturmfels is both an easy-to-read textbook for invariant theory and a challenging research monograph that introduces a new approach to the algorithmic side of invariant theory. The Groebner bases method is the main tool by which the central problems in invariant theory become amenable to algorithmic solutions. Students will find the book an easy introduction to this "classical and new" area of mathematics. Researchers in mathematics, symbolic computation, and computer science will get access to a wealth of research ideas, hints for applications, outlines and details of algorithms, worked out examples, and research problems.

  12. Detection of algorithmic trading

    Science.gov (United States)

    Bogoev, Dimitar; Karam, Arzé

    2017-10-01

    We develop a new approach to reflect the behavior of algorithmic traders. Specifically, we provide an analytical and tractable way to infer patterns of quote volatility and price momentum consistent with different types of strategies employed by algorithmic traders, and we propose two ratios to quantify these patterns. Quote volatility ratio is based on the rate of oscillation of the best ask and best bid quotes over an extremely short period of time; whereas price momentum ratio is based on identifying patterns of rapid upward or downward movement in prices. The two ratios are evaluated across several asset classes. We further run a two-stage Artificial Neural Network experiment on the quote volatility ratio; the first stage is used to detect the quote volatility patterns resulting from algorithmic activity, while the second is used to validate the quality of signal detection provided by our measure.

  13. The Retina Algorithm

    CERN Multimedia

    CERN. Geneva; PUNZI, Giovanni

    2015-01-01

    Charge particle reconstruction is one of the most demanding computational tasks found in HEP, and it becomes increasingly important to perform it in real time. We envision that HEP would greatly benefit from achieving a long-term goal of making track reconstruction happen transparently as part of the detector readout ("detector-embedded tracking"). We describe here a track-reconstruction approach based on a massively parallel pattern-recognition algorithm, inspired by studies of the processing of visual images by the brain as it happens in nature ('RETINA algorithm'). It turns out that high-quality tracking in large HEP detectors is possible with very small latencies, when this algorithm is implemented in specialized processors, based on current state-of-the-art, high-speed/high-bandwidth digital devices.

  14. Handbook of Memetic Algorithms

    CERN Document Server

    Cotta, Carlos; Moscato, Pablo

    2012-01-01

    Memetic Algorithms (MAs) are computational intelligence structures combining multiple and various operators in order to address optimization problems. The combination and interaction amongst operators evolves and promotes the diffusion of the most successful units and generates an algorithmic behavior which can handle complex objective functions and hard fitness landscapes. “Handbook of Memetic Algorithms” organizes, in a structured way, all the most important results in the field of MAs since their earliest definition until now. A broad review including various algorithmic solutions as well as successful applications is included in this book. Each class of optimization problems, such as constrained optimization, multi-objective optimization, continuous vs combinatorial problems, and uncertainties, is analysed separately and, for each problem, memetic recipes for tackling the difficulties are given with some successful examples. Although this book contains chapters written by multiple authors, ...

  15. Named Entity Linking Algorithm

    Directory of Open Access Journals (Sweden)

    M. F. Panteleev

    2017-01-01

    Full Text Available In natural language processing, Named Entity Linking (NEL) is the task of identifying an entity mentioned in a text and linking it to an entity in a knowledge base (for example, DBpedia). Currently there is a diversity of approaches to this problem, but two main classes can be identified: graph-based approaches and machine-learning-based ones. An algorithm combining graph-based and machine-learning approaches is proposed, in accordance with the stated assumptions about the interrelations of named entities within a sentence and in general. For graph-based approaches, it is necessary to identify an optimal set of related entities according to some metric that characterizes the distance between these entities in a graph built on a knowledge base. Due to limitations in processing power, solving this task directly is impossible, so a modification is proposed. Machine-learning algorithms alone cannot provide an independent solution because of the small volume of training datasets relevant to the NEL task, but they can help improve the quality of the algorithm. An adaptation of the Latent Dirichlet Allocation model is proposed in order to obtain a measure of the compatibility of attributes of various entities encountered in the same context. The efficiency of the proposed algorithm was tested experimentally on an independently generated test dataset, comparing a mock-up based on the proposed algorithm with the open-source product DBpedia Spotlight, which solves the NEL problem. The mock-up was slower than DBpedia Spotlight but showed higher accuracy, which motivates further work in this direction. The main directions for development are proposed in order to increase the accuracy and performance of the system.

  16. A Validation Process for Underwater Localization Algorithms

    Directory of Open Access Journals (Sweden)

    Marc Hildebrandt

    2014-09-01

    Full Text Available This paper describes the validation process of a localization algorithm for underwater vehicles. In order to develop new localization algorithms, it is essential to characterize them with regard to their accuracy, long-term stability and robustness to external sources of noise. This is only possible if a gold-standard reference localization (GSRL) is available against which any new localization algorithm (NLA) can be tested. This process requires a vehicle which carries all the required sensor and processing systems for both the GSRL and the NLA. This paper will show the necessity of such a validation process, briefly sketch the test vehicle and its capabilities, describe the challenges in computing the localizations of both the GSRL and the NLA simultaneously for comparison, and conclude with experimental data from real-world trials.

  17. Detection of Cheating by Decimation Algorithm

    Science.gov (United States)

    Yamanaka, Shogo; Ohzeki, Masayuki; Decelle, Aurélien

    2015-02-01

    We expand item response theory to study the case of "cheating students" in a set of exams, trying to detect them by applying a greedy inference algorithm. This extended model is closely related to Boltzmann machine learning. In this paper we aim to infer the correct biases and interactions of our model from a relatively small number of sets of training data. Nevertheless, the greedy algorithm employed in the present study exhibits good performance even with few training data. The key point is the sparseness of the interactions in our problem in the context of Boltzmann machine learning: the existence of cheating students is expected to be very rare (possibly even in the real world). We compare a standard approach for inferring sparse interactions in Boltzmann machine learning to our greedy algorithm and find the latter to be superior in several aspects.

  18. Parallel GPU implementation of iterative PCA algorithms.

    Science.gov (United States)

    Andrecut, M

    2009-11-01

    Principal component analysis (PCA) is a key statistical technique for multivariate data analysis. For large data sets, the common approach to PCA computation is based on the standard NIPALS-PCA algorithm, which unfortunately suffers from loss of orthogonality, and therefore its applicability is usually limited to the estimation of the first few components. Here we present an algorithm based on Gram-Schmidt orthogonalization (called GS-PCA), which eliminates this shortcoming of NIPALS-PCA. Also, we discuss the GPU (Graphics Processing Unit) parallel implementation of both NIPALS-PCA and GS-PCA algorithms. The numerical results show that the GPU parallel optimized versions, based on CUBLAS (NVIDIA), are substantially faster (up to 12 times) than the CPU optimized versions based on CBLAS (GNU Scientific Library).
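
    The key difference from NIPALS is that each newly extracted score/loading pair is re-orthogonalized against the pairs already found, rather than relying on deflation alone. The NumPy sketch below illustrates that idea on the CPU; it follows the general GS-PCA structure described above but is not the paper's CUBLAS/CBLAS implementation.

```python
import numpy as np

def gs_pca(X, n_components, n_iter=200, tol=1e-10):
    """NIPALS-style PCA with Gram-Schmidt re-orthogonalization (CPU sketch).

    X is assumed column-centred. Returns unit-norm scores T, loadings P and
    component scales s such that X is approximated by T @ diag(s) @ P.T.
    """
    X = X.copy()
    n, m = X.shape
    T, P = np.zeros((n, n_components)), np.zeros((m, n_components))
    s = np.zeros(n_components)
    for k in range(n_components):
        t = X[:, np.argmax(X.var(axis=0))].copy()       # start from highest-variance column
        t /= np.linalg.norm(t)
        for _ in range(n_iter):
            p = X.T @ t
            p -= P[:, :k] @ (P[:, :k].T @ p)            # Gram-Schmidt vs. previous loadings
            p /= np.linalg.norm(p)
            t_new = X @ p
            t_new -= T[:, :k] @ (T[:, :k].T @ t_new)    # Gram-Schmidt vs. previous scores
            t_new /= np.linalg.norm(t_new)
            done = np.linalg.norm(t_new - t) < tol
            t = t_new
            if done:
                break
        s[k] = t @ (X @ p)                              # component scale (singular value)
        T[:, k], P[:, k] = t, p
        X -= s[k] * np.outer(t, p)                      # deflation
    return T, P, s

# Sanity check: compare component scales with the leading singular values.
rng = np.random.default_rng(1)
X = rng.normal(size=(60, 8))
X -= X.mean(axis=0)
T, P, s = gs_pca(X, 3)
print(np.round(s, 3), np.round(np.linalg.svd(X, compute_uv=False)[:3], 3))
```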

  19. Analysis of ANSI N13.11: the performance algorithm

    International Nuclear Information System (INIS)

    Roberson, P.L.; Hadley, R.T.; Thorson, M.R.

    1982-06-01

    The method of performance testing for personnel dosimeters specified in draft ANSI N13.11, Criteria for Testing Personnel Dosimetry Performance, is evaluated. Points addressed are: (1) operational behavior of the performance algorithm; (2) dependence on the number of test dosimeters; (3) basis for choosing an algorithm; and (4) other possible algorithms. The performance algorithm evaluated for each test category is formed by adding the calibration bias and its standard deviation. This algorithm is not optimal due to a high dependence on the standard deviation. The dependence of the calibration bias on the standard deviation is significant because of the low number of dosimeters (15) evaluated per category. For categories with large standard deviations, the uncertainty in determining the performance criterion is large. To have a reasonable chance of passing all categories in one test, we required a 95% probability of passing each category. Then, the maximum permissible standard deviation is 30% even with a zero bias. For test categories with standard deviations <10%, the bias can be as high as 35%. For intermediate standard deviations, the chance of passing a category is improved by using a 5 to 10% negative bias. Most multipurpose personnel dosimetry systems will probably require detailed calibration adjustments to pass all categories within two rounds of testing.
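
    The performance quantity described above, for one test category, is the calibration bias of the reported doses plus the standard deviation of their relative errors. A minimal sketch of that computation (the tolerance level against which it is compared is set by the standard and is treated here as a user-supplied, hypothetical value):

```python
import statistics

def performance_quantity(reported, delivered):
    """Bias, standard deviation and their sum for one test category."""
    rel_errors = [(r - d) / d for r, d in zip(reported, delivered)]
    bias = statistics.mean(rel_errors)
    spread = statistics.stdev(rel_errors)
    return bias, spread, abs(bias) + spread

def passes_category(reported, delivered, tolerance):
    """True if |bias| + standard deviation stays within the tolerance level."""
    return performance_quantity(reported, delivered)[2] <= tolerance

# Illustrative 15-dosimeter category with a hypothetical tolerance of 0.3.
reported = [1.02, 0.95, 1.10, 0.98, 1.05, 0.97, 1.01, 1.08, 0.94, 1.03,
            0.99, 1.06, 0.96, 1.04, 1.00]
delivered = [1.0] * 15
print(passes_category(reported, delivered, tolerance=0.3))
```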

  20. A cluster algorithm for graphs

    NARCIS (Netherlands)

    S. van Dongen

    2000-01-01

    A cluster algorithm for graphs called the Markov Cluster algorithm (MCL algorithm) is introduced. The algorithm provides basically an interface to an algebraic process defined on stochastic matrices, called the MCL process. The graphs may be both weighted (with nonnegative weight)
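
    For orientation, the MCL process alternates two operations on a column-stochastic matrix built from the graph: expansion (taking matrix powers, which spreads random-walk flow) and inflation (taking entrywise powers and renormalizing columns, which strengthens strong flows and prunes weak ones). A minimal NumPy sketch of that process, not the reference implementation:

```python
import numpy as np

def mcl(adjacency, inflation=2.0, expansion=2, n_iter=50, self_loops=1.0):
    """Tiny sketch of the Markov Cluster (MCL) process."""
    A = adjacency.astype(float) + self_loops * np.eye(len(adjacency))
    M = A / A.sum(axis=0)                            # column-stochastic matrix
    for _ in range(n_iter):
        M = np.linalg.matrix_power(M, expansion)     # expansion: spread flow
        M = M ** inflation                           # inflation: entrywise power ...
        M /= M.sum(axis=0)                           # ... then renormalize columns
    # Nodes whose rows retain mass act as attractors; their support gives the clusters.
    clusters = {tuple(np.nonzero(row > 1e-6)[0]) for row in M if row.sum() > 1e-6}
    return sorted(clusters)

# Two obvious clusters: a triangle {0, 1, 2} and a pair {3, 4}.
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 0, 0],
              [0, 0, 0, 0, 1],
              [0, 0, 0, 1, 0]])
print(mcl(A))   # expected: [(0, 1, 2), (3, 4)]
```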

  1. Law and Order in Algorithmics

    NARCIS (Netherlands)

    Fokkinga, M.M.

    1992-01-01

    An algorithm is the input-output effect of a computer program; mathematically, the notion of algorithm comes close to the notion of function. Just as arithmetic is the theory and practice of calculating with numbers, so is ALGORITHMICS the theory and practice of calculating with algorithms. Just as

  2. Parallel Algorithms and Patterns

    Energy Technology Data Exchange (ETDEWEB)

    Robey, Robert W. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-06-16

    This is a powerpoint presentation on parallel algorithms and patterns. A parallel algorithm is a well-defined, step-by-step computational procedure that emphasizes concurrency to solve a problem. Examples of problems include: Sorting, searching, optimization, matrix operations. A parallel pattern is a computational step in a sequence of independent, potentially concurrent operations that occurs in diverse scenarios with some frequency. Examples are: Reductions, prefix scans, ghost cell updates. We only touch on parallel patterns in this presentation. It really deserves its own detailed discussion which Gabe Rockefeller would like to develop.
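
    Of the patterns named above, the prefix scan is the least self-explanatory, so a small illustration may help: an inclusive scan turns [a0, a1, a2, ...] into the running combination [a0, a0+a1, a0+a1+a2, ...], and the classic data-parallel formulation does this in O(log n) sweeps of independent pairwise combinations. The sketch below simulates the Hillis-Steele step structure sequentially; on parallel hardware each sweep's element updates would run concurrently.

```python
from operator import add

def inclusive_scan(values, op=add):
    """Hillis-Steele style inclusive prefix scan, simulated sequentially.

    After the sweep with offset `step`, element i combines up to 2 * step
    inputs ending at i, so log2(n) sweeps suffice.
    """
    out = list(values)
    step = 1
    while step < len(out):
        # Every element update in this sweep is independent -> parallelizable.
        out = [op(out[i - step], out[i]) if i >= step else out[i]
               for i in range(len(out))]
        step *= 2
    return out

print(inclusive_scan([3, 1, 4, 1, 5, 9, 2, 6]))   # [3, 4, 8, 9, 14, 23, 25, 31]
```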

  3. Wireless communications algorithmic techniques

    CERN Document Server

    Vitetta, Giorgio; Colavolpe, Giulio; Pancaldi, Fabrizio; Martin, Philippa A

    2013-01-01

    This book introduces the theoretical elements at the basis of various classes of algorithms commonly employed in the physical layer (and, in part, in the MAC layer) of wireless communications systems. It focuses on single-user systems, ignoring multiple-access techniques. Moreover, emphasis is put on single-input single-output (SISO) systems, although some relevant topics about multiple-input multiple-output (MIMO) systems are also illustrated. Comprehensive wireless-specific guide to algorithmic techniques; provides a detailed analysis of channel equalization and channel coding for wi

  4. Algorithms for Reinforcement Learning

    CERN Document Server

    Szepesvari, Csaba

    2010-01-01

    Reinforcement learning is a learning paradigm concerned with learning to control a system so as to maximize a numerical performance measure that expresses a long-term objective. What distinguishes reinforcement learning from supervised learning is that only partial feedback is given to the learner about the learner's predictions. Further, the predictions may have long term effects through influencing the future state of the controlled system. Thus, time plays a special role. The goal in reinforcement learning is to develop efficient learning algorithms, as well as to understand the algorithms'

  5. Neural-Network-Biased Genetic Algorithms for Materials Design: Evolutionary Algorithms That Learn.

    Science.gov (United States)

    Patra, Tarak K; Meenakshisundaram, Venkatesh; Hung, Jui-Hsiang; Simmons, David S

    2017-02-13

    Machine learning has the potential to dramatically accelerate high-throughput approaches to materials design, as demonstrated by successes in biomolecular design and hard materials design. However, in the search for new soft materials exhibiting properties and performance beyond those previously achieved, machine learning approaches are frequently limited by two shortcomings. First, because they are intrinsically interpolative, they are better suited to the optimization of properties within the known range of accessible behavior than to the discovery of new materials with extremal behavior. Second, they require large pre-existing data sets, which are frequently unavailable and prohibitively expensive to produce. Here we describe a new strategy, the neural-network-biased genetic algorithm (NBGA), for combining genetic algorithms, machine learning, and high-throughput computation or experiment to discover materials with extremal properties in the absence of pre-existing data. Within this strategy, predictions from a progressively constructed artificial neural network are employed to bias the evolution of a genetic algorithm, with fitness evaluations performed via direct simulation or experiment. In effect, this strategy gives the evolutionary algorithm the ability to "learn" and draw inferences from its experience to accelerate the evolutionary process. We test this algorithm against several standard optimization problems and polymer design problems and demonstrate that it matches and typically exceeds the efficiency and reproducibility of standard approaches including a direct-evaluation genetic algorithm and a neural-network-evaluated genetic algorithm. The success of this algorithm in a range of test problems indicates that the NBGA provides a robust strategy for employing informatics-accelerated high-throughput methods to accelerate materials design in the absence of pre-existing data.
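
    As a hedged sketch of the strategy described (not the authors' code), the loop below evolves a population with a genetic algorithm but lets a progressively retrained surrogate model rank candidate offspring, so that only the most promising few receive expensive "true" fitness evaluations (represented here by a stand-in function; in practice a simulation or experiment). To keep the sketch dependency-free, a quadratic-feature ridge regression stands in for the artificial neural network.

```python
import numpy as np

rng = np.random.default_rng(42)

def true_fitness(x):
    """Stand-in for an expensive simulation or experiment (to be minimized)."""
    return float(np.sum((x - 0.7) ** 2))

def fit_surrogate(X, y, ridge=1e-3):
    """Cheap quadratic-feature ridge regression standing in for the neural net."""
    F = np.hstack([X, X ** 2, np.ones((len(X), 1))])
    w = np.linalg.solve(F.T @ F + ridge * np.eye(F.shape[1]), F.T @ y)
    return lambda Z: np.hstack([Z, Z ** 2, np.ones((len(Z), 1))]) @ w

dim, pop_size, n_true_evals = 5, 20, 6
pop = rng.uniform(0, 1, size=(pop_size, dim))
archive_X = pop.copy()
archive_y = np.array([true_fitness(x) for x in pop])

for generation in range(30):
    surrogate = fit_surrogate(archive_X, archive_y)      # retrain on all true evaluations
    parents = pop[rng.integers(0, pop_size, size=(3 * pop_size, 2))]
    alpha = rng.uniform(size=(3 * pop_size, 1))
    offspring = alpha * parents[:, 0] + (1 - alpha) * parents[:, 1]   # blend crossover
    offspring += rng.normal(scale=0.05, size=offspring.shape)         # mutation
    # The surrogate biases the search: only the best-predicted few are truly evaluated.
    promising = offspring[np.argsort(surrogate(offspring))[:n_true_evals]]
    archive_X = np.vstack([archive_X, promising])
    archive_y = np.concatenate([archive_y, [true_fitness(x) for x in promising]])
    pop = archive_X[np.argsort(archive_y)[:pop_size]]    # survivors: best truly-evaluated

print("best true fitness found:", archive_y.min())
```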

  6. Frequency standards

    CERN Document Server

    Riehle, Fritz

    2006-01-01

    Of all measurement units, frequency is the one that may be determined with the highest degree of accuracy. It equally allows precise measurements of other physical and technical quantities, whenever they can be measured in terms of frequency.This volume covers the central methods and techniques relevant for frequency standards developed in physics, electronics, quantum electronics, and statistics. After a review of the basic principles, the book looks at the realisation of commonly used components. It then continues with the description and characterisation of important frequency standards

  7. A novel algorithm for segmentation of brain MR images

    International Nuclear Information System (INIS)

    Sial, M.Y.; Yu, L.; Chowdhry, B.S.; Rajput, A.Q.K.; Bhatti, M.I.

    2006-01-01

    Accurate and fully automatic segmentation of the brain from magnetic resonance (MR) scans is a challenging problem that has received an enormous amount of attention lately. Many researchers have applied various techniques; however, the standard fuzzy c-means algorithm has produced better results than other methods. In this paper, we present a modified fuzzy c-means (FCM) based algorithm for segmentation of brain MR images. Our algorithm is formulated by modifying the objective function of the standard FCM and uses a special spread method to obtain a smooth and slowly varying bias field. This method has the advantage that it can be applied at an early stage in an automated data analysis, before a tissue model is available. The results on MRI images show that this method provides better results compared to standard FCM algorithms. (author)
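
    For context, the standard FCM scheme that the paper modifies alternates two updates until convergence: cluster centres are recomputed as membership-weighted means, and memberships are recomputed from inverse distances to the centres. A plain-vanilla sketch, without the bias-field modification described in the abstract:

```python
import numpy as np

def fuzzy_c_means(X, n_clusters, m=2.0, n_iter=100, tol=1e-5, seed=0):
    """Standard FCM (no bias-field correction) for feature vectors X of shape (n, d)."""
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(n_clusters), size=len(X))   # fuzzy memberships, rows sum to 1
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]    # membership-weighted centroids
        dist = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
        U_new = dist ** (-2.0 / (m - 1))
        U_new /= U_new.sum(axis=1, keepdims=True)         # renormalize memberships
        if np.abs(U_new - U).max() < tol:
            return U_new, centers
        U = U_new
    return U, centers

# Toy 1D "intensity" data with three tissue-like clusters.
rng = np.random.default_rng(1)
X = np.concatenate([rng.normal(0.2, 0.03, 200),
                    rng.normal(0.5, 0.03, 200),
                    rng.normal(0.8, 0.03, 200)])[:, None]
U, centers = fuzzy_c_means(X, n_clusters=3)
print("estimated cluster centres:", np.sort(centers.ravel()).round(2))
```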

  8. Algorithms for optimizing drug therapy

    Directory of Open Access Journals (Sweden)

    Martin Lene

    2004-07-01

    based on one of these three models could be constructed regarding all but one of the studied disorders. The single exception was depression, where reliable relationships between patient characteristics, drug classes and outcome of therapy remain to be defined. Conclusion Algorithms for optimizing drug therapy can, with presumably rare exceptions, be developed for any disorder, using standard Internet programming methods.

  9. New Optimization Algorithms in Physics

    CERN Document Server

    Hartmann, Alexander K

    2004-01-01

    Many physicists are not aware of the fact that they can solve their problems by applying optimization algorithms. Since the number of such algorithms is steadily increasing, many new algorithms have not been presented comprehensively until now. This presentation of recently developed algorithms applied in physics, including demonstrations of how they work and related results, aims to encourage their application, and as such the algorithms selected cover concepts and methods from statistical physics to optimization problems emerging in theoretical computer science.

  10. Relevant Standards

    Indian Academy of Sciences (India)

    .86: Ethernet over LAPS. Standard in China and India. G.7041: Generic Framing Procedure (GFP). Supports Ethernet as well as other data formats (e.g., Fibre Channel); Protocol of ... IEEE 802.3x for flow control of incoming Ethernet data ...

  11. From Story to Algorithm.

    Science.gov (United States)

    Ball, Stanley

    1986-01-01

    Presents a developmental taxonomy which promotes sequencing activities to enhance the potential of matching these activities with learner needs and readiness, suggesting that the order commonly found in the classroom needs to be inverted. The proposed taxonomy (story, skill, and algorithm) involves problem-solving emphasis in the classroom. (JN)

  12. The Design of Algorithms.

    Science.gov (United States)

    Ferguson, David L.; Henderson, Peter B.

    1987-01-01

    Designed initially for use in college computer science courses, the model and computer-aided instructional environment (CAIE) described helps students develop algorithmic problem solving skills. Cognitive skills required are discussed, and implications for developing computer-based design environments in other disciplines are suggested by…

  13. Improved Approximation Algorithm for

    NARCIS (Netherlands)

    Byrka, Jaroslaw; Li, S.; Rybicki, Bartosz

    2014-01-01

    We study the k-level uncapacitated facility location problem (k-level UFL) in which clients need to be connected with paths crossing open facilities of k types (levels). In this paper we first propose an approximation algorithm that for any constant k, in polynomial time, delivers solutions of

  14. Enhanced sampling algorithms.

    Science.gov (United States)

    Mitsutake, Ayori; Mori, Yoshiharu; Okamoto, Yuko

    2013-01-01

    In biomolecular systems (especially all-atom models) with many degrees of freedom such as proteins and nucleic acids, there exist an astronomically large number of local-minimum-energy states. Conventional simulations in the canonical ensemble are of little use, because they tend to get trapped in states of these energy local minima. Enhanced conformational sampling techniques are thus in great demand. A simulation in generalized ensemble performs a random walk in potential energy space and can overcome this difficulty. From only one simulation run, one can obtain canonical-ensemble averages of physical quantities as functions of temperature by the single-histogram and/or multiple-histogram reweighting techniques. In this article we review uses of the generalized-ensemble algorithms in biomolecular systems. Three well-known methods, namely, multicanonical algorithm, simulated tempering, and replica-exchange method, are described first. Both Monte Carlo and molecular dynamics versions of the algorithms are given. We then present various extensions of these three generalized-ensemble algorithms. The effectiveness of the methods is tested with short peptide and protein systems.
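
    Of the three methods listed, the replica-exchange step is the easiest to show compactly: two replicas at inverse temperatures beta_i and beta_j swap configurations with probability min(1, exp[(beta_i - beta_j)(E_i - E_j)]). The minimal sketch below implements only that exchange criterion; the per-replica sampler (Monte Carlo or molecular dynamics) is abstracted away.

```python
import math
import random

def attempt_swap(energies, betas, i, j, rng=random):
    """Metropolis exchange criterion for replicas i and j.

    Accepts with probability min(1, exp((beta_i - beta_j) * (E_i - E_j))).
    Swapping configurations is represented here by swapping their energies.
    """
    delta = (betas[i] - betas[j]) * (energies[i] - energies[j])
    if delta >= 0 or rng.random() < math.exp(delta):
        energies[i], energies[j] = energies[j], energies[i]
        return True
    return False

# Toy demonstration: four replicas ordered from high to low temperature.
random.seed(0)
betas = [0.2, 0.5, 1.0, 2.0]
energies = [5.0, 3.5, 2.0, 2.5]
for i in range(len(betas) - 1):
    accepted = attempt_swap(energies, betas, i, i + 1)
    print(f"swap {i}<->{i + 1}: {'accepted' if accepted else 'rejected'}")
```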

  15. Algorithm Theory - SWAT 2006

    DEFF Research Database (Denmark)

    This book constitutes the refereed proceedings of the 10th Scandinavian Workshop on Algorithm Theory, SWAT 2006, held in Riga, Latvia, in July 2006. The 36 revised full papers presented together with 3 invited papers were carefully reviewed and selected from 154 submissions. The papers address all...

  16. Algorithmic information theory

    NARCIS (Netherlands)

    Grünwald, P.D.; Vitányi, P.M.B.; Adriaans, P.; van Benthem, J.

    2008-01-01

    We introduce algorithmic information theory, also known as the theory of Kolmogorov complexity. We explain the main concepts of this quantitative approach to defining 'information'. We discuss the extent to which Kolmogorov's and Shannon's information theory have a common purpose, and where they are

  17. Algorithmic information theory

    NARCIS (Netherlands)

    Grünwald, P.D.; Vitányi, P.M.B.

    2008-01-01

    We introduce algorithmic information theory, also known as the theory of Kolmogorov complexity. We explain the main concepts of this quantitative approach to defining 'information'. We discuss the extent to which Kolmogorov's and Shannon's information theory have a common purpose, and where they are

  18. Introduction to Algorithms

    Indian Academy of Sciences (India)

    Introduction to Algorithms: Turtle Graphics. R K Shyamasundar, Computer Science Group, Tata Institute of Fundamental Research, Homi Bhabha Road, Mumbai 400 005, India. Series article in Resonance – Journal of Science Education, Volume 1, Issue 9.

  19. Algorithms for SCC Decomposition

    NARCIS (Netherlands)

    J. Barnat; J. Chaloupka (Jakub); J.C. van de Pol (Jaco)

    2008-01-01

    We study and improve the OBF technique [Barnat, J. and P. Moravec, Parallel algorithms for finding SCCs in implicitly given graphs, in: Proceedings of the 5th International Workshop on Parallel and Distributed Methods in Verification (PDMC 2006), LNCS (2007)], which was used in

  20. Median prior constrained TV algorithm for sparse view low-dose CT reconstruction.

    Science.gov (United States)

    Liu, Yi; Shangguan, Hong; Zhang, Quan; Zhu, Hongqing; Shu, Huazhong; Gui, Zhiguo

    2015-05-01

    It is known that lowering the X-ray tube current (mAs) or tube voltage (kVp) and simultaneously reducing the total number of X-ray views (sparse view) is an effective means to achieve a low dose in computed tomography (CT) scans. However, the associated image quality from conventional filtered back-projection (FBP) usually degrades due to excessive quantum noise. Although sparse-view CT reconstruction via total variation (TV), under the scanning protocol of reduced X-ray tube current, has been demonstrated to achieve significant radiation dose reduction while maintaining image quality, noticeable patchy artifacts still exist in reconstructed images. In this study, to address the problem of patchy artifacts, we proposed a median prior constrained TV regularization to retain image quality by introducing an auxiliary vector m in register with the object. Specifically, the approximate action of m is to draw, in each iteration, an object voxel toward its own local median, aiming to improve low-dose image quality with sparse-view projection measurements. Subsequently, an alternating optimization algorithm is adopted to optimize the associated objective function. We refer to the median prior constrained TV regularization as "TV_MP" for simplicity. Experimental results on digital phantoms and a clinical phantom demonstrated that the proposed TV_MP with appropriate control parameters can not only ensure a higher signal-to-noise ratio (SNR) of the reconstructed image but also better preserve its resolution compared with the original TV method. Copyright © 2015 Elsevier Ltd. All rights reserved.

  1. Structure-Based Algorithms for Microvessel Classification

    KAUST Repository

    Smith, Amy F.

    2015-02-01

    © 2014 The Authors. Microcirculation published by John Wiley & Sons Ltd. Objective: Recent developments in high-resolution imaging techniques have enabled digital reconstruction of three-dimensional sections of microvascular networks down to the capillary scale. To better interpret these large data sets, our goal is to distinguish branching trees of arterioles and venules from capillaries. Methods: Two novel algorithms are presented for classifying vessels in microvascular anatomical data sets without requiring flow information. The algorithms are compared with a classification based on observed flow directions (considered the gold standard), and with an existing resistance-based method that relies only on structural data. Results: The first algorithm, developed for networks with one arteriolar and one venular tree, performs well in identifying arterioles and venules and is robust to parameter changes, but incorrectly labels a significant number of capillaries as arterioles or venules. The second algorithm, developed for networks with multiple inlets and outlets, correctly identifies more arterioles and venules, but is more sensitive to parameter changes. Conclusions: The algorithms presented here can be used to classify microvessels in large microvascular data sets lacking flow information. This provides a basis for analyzing the distinct geometrical properties and modelling the functional behavior of arterioles, capillaries, and venules.

  2. A New Optimized GA-RBF Neural Network Algorithm

    Directory of Open Access Journals (Sweden)

    Weikuan Jia

    2014-01-01

    Full Text Available When confronting complex problems, the radial basis function (RBF) neural network has the advantages of adaptivity and self-learning ability, but it is difficult to determine the number of hidden-layer neurons, and its ability to learn the weights from the hidden layer to the output layer is limited; these deficiencies easily lead to decreased learning ability and recognition precision. Aiming at this problem, we propose a new optimized RBF neural network algorithm based on a genetic algorithm (the GA-RBF algorithm), which uses a genetic algorithm to optimize the weights and structure of the RBF neural network; it adopts a new hybrid encoding and optimizes both parts simultaneously. Binary encoding is used for the number of hidden-layer neurons and real-valued encoding for the connection weights, so that the number of hidden-layer neurons and the connection weights are optimized simultaneously in the new algorithm. However, the connection-weight optimization is not complete; we then use the least mean square (LMS) algorithm for further learning, and finally obtain a new algorithm model. Testing the new algorithm on two UCI standard data sets shows that it improves operating efficiency in dealing with complex problems and also improves recognition precision, which proves that the new algorithm is valid.

  3. Evaluation of hybrid SART + OS + TV iterative reconstruction algorithm for optical-CT gel dosimeter imaging

    Science.gov (United States)

    Du, Yi; Wang, Xiangang; Xiang, Xincheng; Wei, Zhouping

    2016-12-01

    Optical computed tomography (optical-CT) is a high-resolution, fast, and easily accessible readout modality for gel dosimeters. This paper evaluates a hybrid iterative image reconstruction algorithm for optical-CT gel dosimeter imaging, namely, the simultaneous algebraic reconstruction technique (SART) integrated with ordered subsets (OS) iteration and total variation (TV) minimization regularization. The mathematical theory and implementation workflow of the algorithm are detailed. Experiments on two different optical-CT scanners were performed for cross-platform validation. For algorithm evaluation, the iterative convergence is first shown, and peak-to-noise-ratio (PNR) and contrast-to-noise ratio (CNR) results are given with the cone-beam filtered backprojection (FDK) algorithm and the FDK results followed by median filtering (mFDK) as reference. The effect on spatial gradients and reconstruction artefacts is also investigated. The PNR curve illustrates that the results of SART + OS + TV finally converge to those of FDK but with less noise, which implies that the dose-OD calibration method for FDK is also applicable to the proposed algorithm. The CNR in selected regions-of-interest (ROIs) of the SART + OS + TV results is almost double that of FDK and 50% higher than that of mFDK. The artefacts in the SART + OS + TV results are still visible, but have been much suppressed with little spatial gradient loss. Based on the assessment, we can conclude that this hybrid SART + OS + TV algorithm outperforms both FDK and mFDK in denoising, preserving spatial dose gradients and reducing artefacts, and its effectiveness and efficiency are platform independent.
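
    The general shape of such a hybrid scheme is: for each ordered subset of projections apply a SART-type algebraic update with a positivity constraint, then take a few descent steps on the total-variation term. The sketch below illustrates that structure on a toy 2D problem with a dense, randomly generated system matrix; it is a schematic of the SART + OS + TV idea under those assumptions, not the authors' implementation or a cone-beam geometry.

```python
import numpy as np

def tv_gradient(img, eps=1e-8):
    """Gradient of a smoothed isotropic total-variation term for a 2D image."""
    dx = np.diff(img, axis=0, append=img[-1:, :])
    dy = np.diff(img, axis=1, append=img[:, -1:])
    mag = np.sqrt(dx ** 2 + dy ** 2 + eps)
    gx, gy = dx / mag, dy / mag
    div = (gx - np.roll(gx, 1, axis=0)) + (gy - np.roll(gy, 1, axis=1))
    return -div

def sart_os_tv(A, b, shape, n_subsets=4, n_iter=20, lam=0.8, tv_steps=5, tv_alpha=0.02):
    """Schematic SART + ordered-subsets + TV loop for a dense system matrix A."""
    x = np.zeros(A.shape[1])
    subsets = np.array_split(np.arange(A.shape[0]), n_subsets)
    for _ in range(n_iter):
        for rows in subsets:                                  # ordered-subsets pass
            As, bs = A[rows], b[rows]
            row_sum = As.sum(axis=1) + 1e-12
            col_sum = As.sum(axis=0) + 1e-12
            x += lam * (As.T @ ((bs - As @ x) / row_sum)) / col_sum   # SART update
            x = np.clip(x, 0, None)                           # positivity constraint
        img = x.reshape(shape)
        for _ in range(tv_steps):                             # TV minimization steps
            img = img - tv_alpha * tv_gradient(img)
        x = np.clip(img.ravel(), 0, None)
    return x.reshape(shape)

# Tiny toy problem: random "projection" matrix of a piecewise-constant phantom.
rng = np.random.default_rng(0)
shape = (16, 16)
phantom = np.zeros(shape)
phantom[4:12, 4:12] = 1.0
A = rng.uniform(size=(200, phantom.size))
b = A @ phantom.ravel()
recon = sart_os_tv(A, b, shape)
print("relative reconstruction error:", np.linalg.norm(recon - phantom) / np.linalg.norm(phantom))
```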

  4. Python algorithms mastering basic algorithms in the Python language

    CERN Document Server

    Hetland, Magnus Lie

    2014-01-01

    Python Algorithms, Second Edition explains the Python approach to algorithm analysis and design. Written by Magnus Lie Hetland, author of Beginning Python, this book is sharply focused on classical algorithms, but it also gives a solid understanding of fundamental algorithmic problem-solving techniques. The book deals with some of the most important and challenging areas of programming and computer science in a highly readable manner. It covers both algorithmic theory and programming practice, demonstrating how theory is reflected in real Python programs. Well-known algorithms and data struc

  5. Evaluation of a Cross Layer Scheduling Algorithm for LTE Downlink

    DEFF Research Database (Denmark)

    Popovska Avramova, Andrijana; Yan, Ying; Dittmann, Lars

    2013-01-01

    The LTE standard is a leading standard in the wireless broadband market. The Radio Resource Management at the base station plays a major role in satisfying users demand for high data rates and quality of service. This paper evaluates a cross layer scheduling algorithm that aims at minimizing...

  6. Fast autodidactic adaptive equalization algorithms

    Science.gov (United States)

    Hilal, Katia

    Autodidactic (blind) equalization by adaptive filtering is addressed in a mobile radio communication context. A general method, based on an adaptive stochastic-gradient Bussgang-type algorithm, is given to derive two low-computation-cost algorithms: one equivalent to the initial algorithm and the other having improved convergence properties thanks to a block criterion minimization. Two starting algorithms are reworked: the Godard algorithm and the decision-controlled algorithm. Using a normalization procedure, and block normalization, their performance is improved and their common points are evaluated. These common points are used to propose an algorithm retaining the advantages of the two initial algorithms: it inherits the robustness of the Godard algorithm and the precision and phase correction of the decision-controlled algorithm. The work is completed by a study of the stable states of Bussgang-type algorithms and of the stability of the Godard algorithms, initial and normalized. Simulation of these algorithms, carried out in a mobile radio communications context under severe propagation-channel conditions, gave a 75% reduction in the number of samples required for processing compared with the initial algorithms. The improvement in residual error was much smaller. These performances bring the use of autodidactic equalization in mobile radio systems close to feasibility.
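
    Of the two starting points, the Godard update is easy to state compactly: the equalizer taps are adapted by a stochastic gradient that penalizes the dispersion of the squared output modulus around a constant R2 fixed by the source constellation (the constant-modulus criterion). A minimal complex-baseband sketch of that standard update, not the thesis's normalized or block variants:

```python
import numpy as np

def godard_cma(received, n_taps=11, mu=1e-3, R2=1.0):
    """Godard / constant-modulus blind equalizer (p = 2), stochastic-gradient form."""
    w = np.zeros(n_taps, dtype=complex)
    w[n_taps // 2] = 1.0                         # centre-spike initialization
    out = np.zeros(len(received), dtype=complex)
    for n in range(n_taps, len(received)):
        x = received[n - n_taps:n][::-1]         # regressor, most recent sample first
        y = np.conj(w) @ x                       # equalizer output
        e = y * (abs(y) ** 2 - R2)               # Godard error term
        w -= mu * np.conj(e) * x                 # stochastic-gradient tap update
        out[n] = y
    return w, out

# Toy QPSK stream through a mild two-tap channel plus noise.
rng = np.random.default_rng(0)
symbols = (rng.choice([-1, 1], 4000) + 1j * rng.choice([-1, 1], 4000)) / np.sqrt(2)
received = np.convolve(symbols, [1.0, 0.4 + 0.3j], mode="same")
received += 0.01 * (rng.normal(size=4000) + 1j * rng.normal(size=4000))
w, equalized = godard_cma(received, R2=1.0)
print("residual modulus error:", np.mean(np.abs(np.abs(equalized[2000:]) ** 2 - 1.0)))
```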

  7. Application of the Levenberg-Marquardt Scheme to the MUSIC Algorithm for AOA Estimation

    Directory of Open Access Journals (Sweden)

    Joon-Ho Lee

    2013-01-01

    can be expressed in a least squares form. Based on this observation, we present a rigorous Levenberg-Marquardt (LM) formulation of the MUSIC algorithm for simultaneous estimation of an azimuth and an elevation. We show a convergence property and compare the performance of the LM-based MUSIC algorithm with that of the standard MUSIC algorithm via Monte-Carlo simulation. We also compare the performance of the MUSIC algorithm with that of the Capon algorithm, both for the standard implementation and for the LM-based implementation.
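
    For reference, the damped least-squares step that a Levenberg-Marquardt formulation iterates is: given the residual vector r(θ) and its Jacobian J, solve (JᵀJ + λ diag(JᵀJ)) Δ = Jᵀ r, set θ ← θ − Δ, and raise or lower λ according to whether the cost decreased. The generic sketch below applies this to a simple 1D curve-fitting problem, not to the MUSIC cost function itself.

```python
import numpy as np

def levenberg_marquardt(residual, jacobian, theta0, n_iter=50, lam=1e-2):
    """Generic Levenberg-Marquardt loop for minimizing ||residual(theta)||^2."""
    theta = np.asarray(theta0, dtype=float)
    cost = np.sum(residual(theta) ** 2)
    for _ in range(n_iter):
        r, J = residual(theta), jacobian(theta)
        H = J.T @ J
        step = np.linalg.solve(H + lam * np.diag(np.diag(H)), J.T @ r)
        candidate = theta - step
        new_cost = np.sum(residual(candidate) ** 2)
        if new_cost < cost:                 # accept: behave more like Gauss-Newton
            theta, cost, lam = candidate, new_cost, lam * 0.5
        else:                               # reject: increase damping (gradient-like step)
            lam *= 2.0
    return theta

# Example: fit y = a * exp(b * t) to noisy data.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 40)
y = 2.0 * np.exp(-1.5 * t) + 0.01 * rng.normal(size=t.size)
residual = lambda p: p[0] * np.exp(p[1] * t) - y
jacobian = lambda p: np.column_stack([np.exp(p[1] * t), p[0] * t * np.exp(p[1] * t)])
print(levenberg_marquardt(residual, jacobian, [1.0, 0.0]).round(2))   # approximately [2.0, -1.5]
```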

  8. Comparison Of Hybrid Sorting Algorithms Implemented On Different Parallel Hardware Platforms

    Directory of Open Access Journals (Sweden)

    Dominik Zurek

    2013-01-01

    Full Text Available Sorting is a common problem in computer science. There are many well-known sorting algorithms created for sequential execution on a single processor. Recently, hardware platforms have made it possible to create widely parallel algorithms: standard processors consist of multiple cores, and hardware accelerators such as the GPU are available. Graphics cards, with their parallel architecture, give new possibilities for speeding up many algorithms. In this paper we describe the results of implementing a few different sorting algorithms on GPU cards and multicore processors. A hybrid algorithm is then presented which consists of parts executed on both platforms, the standard CPU and the GPU.

  9. A MEDLINE categorization algorithm

    Directory of Open Access Journals (Sweden)

    Gehanno Jean-Francois

    2006-02-01

    Full Text Available Abstract Background Categorization is designed to enhance resource description by organizing content description so as to enable the reader to grasp quickly and easily what are the main topics discussed in it. The objective of this work is to propose a categorization algorithm to classify a set of scientific articles indexed with the MeSH thesaurus, and in particular those of the MEDLINE bibliographic database. In a large bibliographic database such as MEDLINE, finding materials of particular interest to a specialty group, or relevant to a particular audience, can be difficult. The categorization refines the retrieval of indexed material. In the CISMeF terminology, metaterms can be considered as super-concepts. They were primarily conceived to improve recall in the CISMeF quality-controlled health gateway. Methods The MEDLINE categorization algorithm (MCA is based on semantic links existing between MeSH terms and metaterms on the one hand and between MeSH subheadings and metaterms on the other hand. These links are used to automatically infer a list of metaterms from any MeSH term/subheading indexing. Medical librarians manually select the semantic links. Results The MEDLINE categorization algorithm lists the medical specialties relevant to a MEDLINE file by decreasing order of their importance. The MEDLINE categorization algorithm is available on a Web site. It can run on any MEDLINE file in a batch mode. As an example, the top 3 medical specialties for the set of 60 articles published in BioMed Central Medical Informatics & Decision Making, which are currently indexed in MEDLINE are: information science, organization and administration and medical informatics. Conclusion We have presented a MEDLINE categorization algorithm in order to classify the medical specialties addressed in any MEDLINE file in the form of a ranked list of relevant specialties. The categorization method introduced in this paper is based on the manual indexing of resources

  10. Reactive Collision Avoidance Algorithm

    Science.gov (United States)

    Scharf, Daniel; Acikmese, Behcet; Ploen, Scott; Hadaegh, Fred

    2010-01-01

    The reactive collision avoidance (RCA) algorithm allows a spacecraft to find a fuel-optimal trajectory for avoiding an arbitrary number of colliding spacecraft in real time while accounting for acceleration limits. In addition to spacecraft, the technology can be used for vehicles that can accelerate in any direction, such as helicopters and submersibles. In contrast to existing, passive algorithms that simultaneously design trajectories for a cluster of vehicles working to achieve a common goal, RCA is implemented onboard spacecraft only when an imminent collision is detected, and then plans a collision avoidance maneuver for only that host vehicle, thus preventing a collision in an off-nominal situation for which passive algorithms cannot. An example scenario for such a situation might be when a spacecraft in the cluster is approaching another one, but enters safe mode and begins to drift. Functionally, the RCA detects colliding spacecraft, plans an evasion trajectory by solving the Evasion Trajectory Problem (ETP), and then recovers after the collision is avoided. A direct optimization approach was used to develop the algorithm so it can run in real time. In this innovation, a parameterized class of avoidance trajectories is specified, and then the optimal trajectory is found by searching over the parameters. The class of trajectories is selected as bang-off-bang as motivated by optimal control theory. That is, an avoiding spacecraft first applies full acceleration in a constant direction, then coasts, and finally applies full acceleration to stop. The parameter optimization problem can be solved offline and stored as a look-up table of values. Using a look-up table allows the algorithm to run in real time. Given a colliding spacecraft, the properties of the collision geometry serve as indices of the look-up table that gives the optimal trajectory. For multiple colliding spacecraft, the set of trajectories that avoid all spacecraft is rapidly searched on

  11. A MEDLINE categorization algorithm

    Science.gov (United States)

    Darmoni, Stefan J; Névéol, Aurelie; Renard, Jean-Marie; Gehanno, Jean-Francois; Soualmia, Lina F; Dahamna, Badisse; Thirion, Benoit

    2006-01-01

    Background Categorization is designed to enhance resource description by organizing content description so as to enable the reader to grasp quickly and easily what are the main topics discussed in it. The objective of this work is to propose a categorization algorithm to classify a set of scientific articles indexed with the MeSH thesaurus, and in particular those of the MEDLINE bibliographic database. In a large bibliographic database such as MEDLINE, finding materials of particular interest to a specialty group, or relevant to a particular audience, can be difficult. The categorization refines the retrieval of indexed material. In the CISMeF terminology, metaterms can be considered as super-concepts. They were primarily conceived to improve recall in the CISMeF quality-controlled health gateway. Methods The MEDLINE categorization algorithm (MCA) is based on semantic links existing between MeSH terms and metaterms on the one hand and between MeSH subheadings and metaterms on the other hand. These links are used to automatically infer a list of metaterms from any MeSH term/subheading indexing. Medical librarians manually select the semantic links. Results The MEDLINE categorization algorithm lists the medical specialties relevant to a MEDLINE file by decreasing order of their importance. The MEDLINE categorization algorithm is available on a Web site. It can run on any MEDLINE file in a batch mode. As an example, the top 3 medical specialties for the set of 60 articles published in BioMed Central Medical Informatics & Decision Making, which are currently indexed in MEDLINE are: information science, organization and administration and medical informatics. Conclusion We have presented a MEDLINE categorization algorithm in order to classify the medical specialties addressed in any MEDLINE file in the form of a ranked list of relevant specialties. The categorization method introduced in this paper is based on the manual indexing of resources with MeSH (terms

  12. Genetic Algorithms and Local Search

    Science.gov (United States)

    Whitley, Darrell

    1996-01-01

    The first part of this presentation is a tutorial level introduction to the principles of genetic search and models of simple genetic algorithms. The second half covers the combination of genetic algorithms with local search methods to produce hybrid genetic algorithms. Hybrid algorithms can be modeled within the existing theoretical framework developed for simple genetic algorithms. An application of a hybrid to geometric model matching is given. The hybrid algorithm yields results that improve on the current state-of-the-art for this problem.
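
    As a hedged sketch of the general hybrid pattern (the GA explores globally, a local search refines each offspring before it re-enters the population), the snippet below applies a simple bit-flip hill climber inside a plain generational GA on a toy OneMax objective; it is illustrative only, not the geometric model-matching application mentioned above.

```python
import random

def fitness(bits):
    """Toy objective: maximize the number of 1s (OneMax)."""
    return sum(bits)

def hill_climb(bits, n_flips=10):
    """Local search: keep single-bit flips that do not worsen fitness."""
    best = list(bits)
    for _ in range(n_flips):
        i = random.randrange(len(best))
        candidate = best[:]
        candidate[i] ^= 1
        if fitness(candidate) >= fitness(best):
            best = candidate
    return best

def hybrid_ga(n_bits=40, pop_size=20, generations=30, p_mut=0.02):
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]                        # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_bits)
            child = a[:cut] + b[cut:]                        # one-point crossover
            child = [bit ^ (random.random() < p_mut) for bit in child]   # mutation
            children.append(hill_climb(child))               # local refinement step
        pop = parents + children
    return max(pop, key=fitness)

random.seed(0)
print("best fitness:", fitness(hybrid_ga()))
```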

  13. Genetic Algorithm for Optimization: Preprocessor and Algorithm

    Science.gov (United States)

    Sen, S. K.; Shaykhian, Gholam A.

    2006-01-01

    A genetic algorithm (GA), inspired by Darwin's theory of evolution and employed to solve optimization problems - unconstrained or constrained - uses an evolutionary process. A GA has several parameters such as the population size, search space, crossover and mutation probabilities, and fitness criterion. These parameters are not universally known/determined a priori for all problems. Depending on the problem at hand, these parameters need to be decided such that the resulting GA performs the best. We present here a preprocessor that achieves just that, i.e., it determines, for a specified problem, the foregoing parameters so that the consequent GA is best for the problem. We stress also the need for such a preprocessor both for quality (error) and for cost (complexity) to produce the solution. The preprocessor includes, as its first step, making use of all the information such as that of the nature/character of the function/system, the search space, physical/laboratory experimentation (if already done/available), and the physical environment. It also includes the information that can be generated through any means - deterministic/nondeterministic/graphics. Instead of attempting a solution of the problem straightway through a GA without having/using the information/knowledge of the character of the system, we would consciously do a much better job of producing a solution by using the information generated/created in the very first step of the preprocessor. We, therefore, unstintingly advocate the use of a preprocessor to solve a real-world optimization problem, including NP-complete ones, before using the statistically most appropriate GA. We also include such a GA for unconstrained function optimization problems.

  14. Algorithms for Global Positioning

    DEFF Research Database (Denmark)

    Borre, Kai; Strang, Gilbert

    The emergence of satellite technology has changed the lives of millions of people. In particular, GPS has brought an unprecedented level of accuracy to the field of geodesy. This text is a guide to the algorithms and mathematical principles that account for the success of GPS technology, and replaces the authors' previous work, Linear Algebra, Geodesy, and GPS (1997). An initial discussion of the basic concepts, characteristics and technical aspects of different satellite systems is followed by the necessary mathematical content, which is presented in a detailed and self-contained fashion. At the heart of the matter are the positioning algorithms on which GPS technology relies, the discussion of which will affirm the mathematical contents of the previous chapters. Numerous ready-to-use MATLAB codes are included for the reader. This comprehensive guide will be invaluable for engineers

  15. Genetic algorithm essentials

    CERN Document Server

    Kramer, Oliver

    2017-01-01

    This book introduces readers to genetic algorithms (GAs) with an emphasis on making the concepts, algorithms, and applications discussed as easy to understand as possible. Further, it avoids a great deal of formalisms and thus opens the subject to a broader audience in comparison to manuscripts overloaded by notations and equations. The book is divided into three parts, the first of which provides an introduction to GAs, starting with basic concepts like evolutionary operators and continuing with an overview of strategies for tuning and controlling parameters. In turn, the second part focuses on solution space variants like multimodal, constrained, and multi-objective solution spaces. Lastly, the third part briefly introduces theoretical tools for GAs, the intersections and hybridizations with machine learning, and highlights selected promising applications.

  16. DAL Algorithms and Python

    CERN Document Server

    Aydemir, Bahar

    2017-01-01

    The Trigger and Data Acquisition (TDAQ) system of the ATLAS detector at the Large Hadron Collider (LHC) at CERN is composed of a large number of distributed hardware and software components. The TDAQ system consists of about 3000 computers and more than 25000 applications which, in a coordinated manner, provide the data-taking functionality of the overall system. A number of online services are required to configure, monitor and control ATLAS data taking. In particular, the configuration service is used to provide the configuration of the above components. The configuration of the ATLAS data acquisition system is stored in an XML-based object database named OKS, with DAL (Data Access Library) allowing its information to be accessed by C++, Java and Python clients in a distributed environment. Some of the information has quite a complicated structure, so its extraction requires writing special algorithms. The algorithms are available in the C++ programming language and have been partially reimplemented in the Java programming language. The goal of the projec...

  17. Partitional clustering algorithms

    CERN Document Server

    2015-01-01

    This book summarizes the state-of-the-art in partitional clustering. Clustering, the unsupervised classification of patterns into groups, is one of the most important tasks in exploratory data analysis. Primary goals of clustering include gaining insight into, classifying, and compressing data. Clustering has a long and rich history that spans a variety of scientific disciplines including anthropology, biology, medicine, psychology, statistics, mathematics, engineering, and computer science. As a result, numerous clustering algorithms have been proposed since the early 1950s. Among these algorithms, partitional (nonhierarchical) ones have found many applications, especially in engineering and computer science. This book provides coverage of consensus clustering, constrained clustering, large scale and/or high dimensional clustering, cluster validity, cluster visualization, and applications of clustering. Examines clustering as it applies to large and/or high-dimensional data sets commonly encountered in reali...

  18. Fatigue Evaluation Algorithms: Review

    DEFF Research Database (Denmark)

    Passipoularidis, Vaggelis; Brøndsted, Povl

    A progressive damage fatigue simulator for variable amplitude loads named FADAS is discussed in this work. FADAS (Fatigue Damage Simulator) performs ply by ply stress analysis using classical lamination theory and implements adequate stiffness discount tactics based on the failure criterion of Puck...... series can be simulated. The predictions are validated against fatigue life data both from repeated block tests at a single stress ratio as well as against spectral fatigue using the WISPER, WISPERX and NEW WISPER load sequences on a Glass/Epoxy multidirectional laminate typical of a wind turbine rotor...... blade construction. Two versions of the algorithm, the one using single-step and the other using incremental application of each load cycle (in case of ply failure) are implemented and compared. Simulation results confirm the ability of the algorithm to take into account load sequence effects...

  19. Boosting foundations and algorithms

    CERN Document Server

    Schapire, Robert E

    2012-01-01

    Boosting is an approach to machine learning based on the idea of creating a highly accurate predictor by combining many weak and inaccurate "rules of thumb." A remarkably rich theory has evolved around boosting, with connections to a range of topics, including statistics, game theory, convex optimization, and information geometry. Boosting algorithms have also enjoyed practical success in such fields as biology, vision, and speech processing. At various times in its history, boosting has been perceived as mysterious, controversial, even paradoxical.

  20. Likelihood Inflating Sampling Algorithm

    OpenAIRE

    Entezari, Reihaneh; Craiu, Radu V.; Rosenthal, Jeffrey S.

    2016-01-01

    Markov Chain Monte Carlo (MCMC) sampling from a posterior distribution corresponding to a massive data set can be computationally prohibitive since producing one sample requires a number of operations that is linear in the data size. In this paper, we introduce a new communication-free parallel method, the Likelihood Inflating Sampling Algorithm (LISA), that significantly reduces computational costs by randomly splitting the dataset into smaller subsets and running MCMC methods independently ...

  1. Constrained Minimization Algorithms

    Science.gov (United States)

    Lantéri, H.; Theys, C.; Richard, C.

    2013-03-01

    In this paper, we consider the inverse problem of restoring an unknown signal or image, knowing the transformation suffered by the unknowns. More specifically we deal with transformations described by a linear model linking the unknown signal to a noise-free version of the data. The measured data are generally corrupted by noise. This aspect of the problem is presented in the introduction for general models. In Section 2, we introduce the linear models, and some examples of linear inverse problems are presented. The specificities of inverse problems are briefly mentioned and shown on a simple example. In Section 3, we give some information on classical distances or divergences. Indeed, an inverse problem is generally solved by minimizing a discrepancy function (divergence or distance) between the measured data and the (here linear) model of such data. Section 4 deals with likelihood maximization and with its links with divergence minimization. The physical constraints on the solution are indicated and the Split Gradient Method (SGM) is detailed in Section 5. A constraint on the lower bound of the solution is introduced first; the positivity constraint is a particular case of such a constraint. We show how to obtain, in a strict sense, the multiplicative form of the algorithms. In a second step, the so-called flux constraint is introduced, and a complete algorithmic form is given. In Section 6 we give some brief information on acceleration methods for such algorithms. A conclusion is given in Section 7.

  2. ALGORITHM OF OBJECT RECOGNITION

    Directory of Open Access Journals (Sweden)

    Loktev Alexey Alexeevich

    2012-10-01

    Full Text Available The second important problem to be resolved by the algorithm and its software, which together support the automatic design of a complex closed-circuit television system, is object recognition in the images transmitted by the video camera. Since the imaging of almost any object depends on many factors, including its orientation with respect to the camera, lighting conditions, parameters of the registering system, and the static and dynamic parameters of the object itself, it is quite difficult to formalize the image and represent it in the form of a certain mathematical model. Therefore, methods of computer-aided visualization depend substantially on the problems to be solved and can rarely be generalized. The majority of these methods are non-linear; therefore, there is a need to increase the computing power and the complexity of the algorithms to be able to process the image. This paper covers research into visual object recognition and the implementation of the algorithm as a software application that operates in real-time mode

  3. Large scale tracking algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Hansen, Ross L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Love, Joshua Alan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Melgaard, David Kennett [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Karelitz, David B. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Pitts, Todd Alan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Zollweg, Joshua David [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Anderson, Dylan Z. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Nandy, Prabal [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Whitlow, Gary L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bender, Daniel A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Byrne, Raymond Harry [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-01-01

    Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but unfortunately, will also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied towards detection algorithms also changes. For low resolution sensors, "blob" tracking is the norm. For higher resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.

  4. The Hip Restoration Algorithm

    Science.gov (United States)

    Stubbs, Allston Julius; Atilla, Halis Atil

    2016-01-01

    Summary Background Despite the rapid advancement of imaging and arthroscopic techniques around the hip joint, missed diagnoses are still common. Because the hip is a deep joint, localization of hip symptoms is more difficult than for the shoulder and knee joints. Hip pathology is not easily isolated and is often related to intra- and extra-articular abnormalities. In light of these diagnostic challenges, we recommend an algorithmic approach to effectively diagnose and treat hip pain. Methods In this review, hip pain is evaluated from diagnosis to treatment in a clear decision model. First we discuss emergency hip situations, followed by the differentiation of intra- and extra-articular causes of the hip pain. We differentiate intra-articular hip pain as arthritic or non-arthritic, and extra-articular pain as generated by surrounding or remote tissue. Further, extra-articular hip pain is evaluated according to pain location. Finally we summarize the surgical treatment approach with an algorithmic diagram. Conclusion Diagnosis of hip pathology is difficult because the possible etiologies of pain are varied. An algorithmic approach to hip restoration from diagnosis to rehabilitation is crucial to successfully identify and manage hip pathologies. Level of evidence: V. PMID:28066734

  5. An efficient algorithm for function optimization: modified stem cells algorithm

    Science.gov (United States)

    Taherdangkoo, Mohammad; Paziresh, Mahsa; Yazdi, Mehran; Bagheri, Mohammad

    2013-03-01

    In this paper, we propose an optimization algorithm based on the intelligent behavior of stem cell swarms in reproduction and self-organization. Optimization algorithms, such as the Genetic Algorithm (GA), Particle Swarm Optimization (PSO) algorithm, Ant Colony Optimization (ACO) algorithm and Artificial Bee Colony (ABC) algorithm, can give solutions to linear and non-linear problems near to the optimum for many applications; however, in some cases, they can suffer from becoming trapped in local optima. The Stem Cells Algorithm (SCA) is an optimization algorithm inspired by the natural behavior of stem cells in evolving themselves into new and improved cells. The SCA avoids the local optima problem successfully. In this paper, we have made small changes in the implementation of this algorithm to obtain improved performance over previous versions. Using a series of benchmark functions, we assess the performance of the proposed algorithm and compare it with that of the other aforementioned optimization algorithms. The obtained results prove the superiority of the Modified Stem Cells Algorithm (MSCA).

  6. Convex hull ranking algorithm for multi-objective evolutionary algorithms

    NARCIS (Netherlands)

    Davoodi Monfrared, M.; Mohades, A.; Rezaei, J.

    2012-01-01

    Due to many applications of multi-objective evolutionary algorithms in real world optimization problems, several studies have been done to improve these algorithms in recent years. Since most multi-objective evolutionary algorithms are based on the non-dominated principle, and their complexity

  7. Standardization of SPECT imaging

    International Nuclear Information System (INIS)

    Mishio, Kouji

    1989-01-01

    Though the use of instruments for SPECT imaging is now widespread, the SPECT images produced by different instruments show many differences in quality. To study the causes of these differences in image quality, SPECT images of the same phantom were acquired and processed using 6 instruments at 5 institutes and compared. Until now the quality of SPECT images has been regarded as fundamentally dependent on the hardware, but software factors, such as the reconstruction algorithm and the determination of several parameters, appear to have a more important effect on image quality. The adoption of an appropriate processing method, after minimizing the image deterioration due to the hardware, would minimize the differences in image quality and could make the standardization of SPECT imaging possible. (author)

  8. Applicability of a set of tomographic reconstruction algorithms for quantitative SPECT on irradiated nuclear fuel assemblies

    Energy Technology Data Exchange (ETDEWEB)

    Jacobsson Svärd, Staffan, E-mail: staffan.jacobsson_svard@physics.uu.se; Holcombe, Scott; Grape, Sophie

    2015-05-21

    assessment, which may be particularly useful in the latter application. Two main classes of algorithms are covered; (1) analytic filtered back-projection algorithms, and (2) a group of model-based or algebraic algorithms. For the former class, a basic algorithm has been implemented, which does not take attenuation in the materials of the fuel assemblies into account and which assumes an idealized imaging geometry. In addition, a novel methodology has been presented for introducing a first-order correction to the obtained images for these deficits; in particular, the effects of attenuation are taken into account by modelling the response for an object with a homogeneous mix of fuel materials in the image area. Neither the basic algorithm, nor the correction method requires prior knowledge of the fuel geometry, but they result in images of the assembly's internal activity distribution. Image analysis is then applied to deduce quantitative information. Two algebraic algorithms are also presented, which model attenuation in the fuel assemblies to different degrees; either assuming a homogenous mix of materials in the image area without a priori information or utilizing known information of the assembly geometry and of its position in the measuring setup for modelling the gamma-ray attenuation in detail. Both algorithms model the detection system in detail. The former algorithm returns an image of the cross-section of the object, from which quantitative information is extracted, whereas the latter returns conclusive relative rod-by-rod data. Here, all reconstruction methods are demonstrated on simulated data of a 96-rod fuel assembly in a tomographic measurement setup. The assembly was simulated with the same activity content in all rods for evaluation purposes. Based on the results, it is argued that the choice of algorithm to a large degree depends on application, and also that a combination of reconstruction methods may be useful. A discussion on alternative analysis

  9. Iterative Algorithms for Nonexpansive Mappings

    Directory of Open Access Journals (Sweden)

    Yao Yonghong

    2008-01-01

    Full Text Available Abstract We suggest and analyze two new iterative algorithms for a nonexpansive mapping in Banach spaces. We prove that the proposed iterative algorithms converge strongly to some fixed point of the mapping.

  10. An Enhanced Genetic Algorithm for the Generalized Traveling Salesman Problem

    Directory of Open Access Journals (Sweden)

    H. Jafarzadeh

    2017-12-01

    Full Text Available The generalized traveling salesman problem (GTSP) deals with finding the minimum-cost tour in a clustered set of cities. In this problem, the traveler is interested in finding the best path that goes through all clusters. As this problem is NP-hard, implementing a metaheuristic algorithm to solve large-scale instances is inevitable. The performance of these algorithms can be substantially enhanced by other heuristic algorithms. In this study, a search method is developed that improves the quality of the solutions and the computation time considerably in comparison with the Genetic Algorithm. In the proposed algorithm, the genetic algorithm is combined with the Nearest Neighbor Search (NNS) and a heuristic mutation operator is applied. According to the experimental results on a set of standard test problems with symmetric distances, the proposed algorithm finds the best solutions in most cases with the least computational time. The proposed algorithm is highly competitive with previously published algorithms in both solution quality and running time.

  11. An Enhanced Jaya Algorithm with a Two Group Adaption

    Directory of Open Access Journals (Sweden)

    Chibing Gong

    2017-01-01

    Full Text Available This paper proposes a novel performance-enhanced Jaya algorithm with a two-group adaptation (E-Jaya). Two improvements are presented in E-Jaya. First, instead of using the best and the worst values as in the Jaya algorithm, E-Jaya separates all candidates into two groups, the better and the worse group, based on their fitness values, and the mean of the better group and the mean of the worse group are then used. Second, in order to avoid adding algorithm-specific parameters to E-Jaya, a novel adaptive method of dividing the two groups has been developed. Finally, twelve benchmark functions with different dimensionality, such as 40, 60, and 100, were evaluated using the proposed E-Jaya algorithm. The results show that E-Jaya significantly outperformed the Jaya algorithm in terms of solution accuracy. Additionally, E-Jaya was also compared with differential evolution (DE), differential evolution with self-adapting control parameters (jDE), a firefly algorithm (FA), and the standard particle swarm optimization 2011 (SPSO2011) algorithm. E-Jaya outperformed all of these algorithms.
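
    As a rough illustration of the two-group idea described above, the sketch below modifies the standard Jaya update so that each candidate moves towards the mean of the better half of the population and away from the mean of the worse half. The fixed 50/50 split, the sphere test function and all parameter values are assumptions for illustration only; the paper's adaptive grouping rule is not reproduced here.

      import numpy as np

      def sphere(x):                              # placeholder benchmark (minimisation)
          return np.sum(x ** 2)

      def e_jaya(obj, dim=30, pop=40, iters=500, lb=-100.0, ub=100.0, seed=0):
          rng = np.random.default_rng(seed)
          X = rng.uniform(lb, ub, (pop, dim))
          fit = np.array([obj(x) for x in X])
          for _ in range(iters):
              order = np.argsort(fit)             # sort candidates by fitness
              half = pop // 2                     # assumed fixed 50/50 split
              better_mean = X[order[:half]].mean(axis=0)
              worse_mean = X[order[half:]].mean(axis=0)
              r1 = rng.random((pop, dim))
              r2 = rng.random((pop, dim))
              # Jaya-style move, but towards/away from the two group means
              Xnew = X + r1 * (better_mean - np.abs(X)) - r2 * (worse_mean - np.abs(X))
              Xnew = np.clip(Xnew, lb, ub)
              fnew = np.array([obj(x) for x in Xnew])
              improved = fnew < fit               # greedy replacement
              X[improved], fit[improved] = Xnew[improved], fnew[improved]
          best = np.argmin(fit)
          return X[best], fit[best]

      x_best, f_best = e_jaya(sphere)
      print(f_best)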

  12. Foundations of genetic algorithms 1991

    CERN Document Server

    1991-01-01

    Foundations of Genetic Algorithms 1991 (FOGA 1) discusses the theoretical foundations of genetic algorithms (GA) and classifier systems.This book compiles research papers on selection and convergence, coding and representation, problem hardness, deception, classifier system design, variation and recombination, parallelization, and population divergence. Other topics include the non-uniform Walsh-schema transform; spurious correlations and premature convergence in genetic algorithms; and variable default hierarchy separation in a classifier system. The grammar-based genetic algorithm; condition

  13. Parallel Architectures and Bioinspired Algorithms

    CERN Document Server

    Pérez, José; Lanchares, Juan

    2012-01-01

    This monograph presents examples of best practices when combining bioinspired algorithms with parallel architectures. The book includes recent work by leading researchers in the field and offers a map with the main paths already explored and new ways towards the future. Parallel Architectures and Bioinspired Algorithms will be of value to both specialists in Bioinspired Algorithms, Parallel and Distributed Computing, as well as computer science students trying to understand the present and the future of Parallel Architectures and Bioinspired Algorithms.

  14. Essential algorithms a practical approach to computer algorithms

    CERN Document Server

    Stephens, Rod

    2013-01-01

    A friendly and accessible introduction to the most useful algorithms Computer algorithms are the basic recipes for programming. Professional programmers need to know how to use algorithms to solve difficult programming problems. Written in simple, intuitive English, this book describes how and when to use the most practical classic algorithms, and even how to create new algorithms to meet future needs. The book also includes a collection of questions that can help readers prepare for a programming job interview. Reveals methods for manipulating common data structures s

  15. Efficient GPS Position Determination Algorithms

    National Research Council Canada - National Science Library

    Nguyen, Thao Q

    2007-01-01

    ... differential GPS algorithm for a network of users. The stand-alone user GPS algorithm is a direct, closed-form, and efficient new position determination algorithm that exploits the closed-form solution of the GPS trilateration equations and works...

  16. Recent results on howard's algorithm

    DEFF Research Database (Denmark)

    Miltersen, P.B.

    2012-01-01

    Howard’s algorithm is a fifty-year old generally applicable algorithm for sequential decision making in face of uncertainty. It is routinely used in practice in numerous application areas that are so important that they usually go by their acronyms, e.g., OR, AI, and CAV. While Howard’s algorithm...

  17. Multisensor estimation: New distributed algorithms

    Directory of Open Access Journals (Sweden)

    Plataniotis K. N.

    1997-01-01

    Full Text Available The multisensor estimation problem is considered in this paper. New distributed algorithms, which are able to locally process the information and which deliver identical results to those generated by their centralized counterparts are presented. The algorithms can be used to provide robust and computationally efficient solutions to the multisensor estimation problem. The proposed distributed algorithms are theoretically interesting and computationally attractive.

  18. A rotating and warping projector/backprojector for fan-beam and cone-beam iterative algorithm

    International Nuclear Information System (INIS)

    Zeng, G.L.; Hsieh, Y.L.; Gullberg, G.T.

    1994-01-01

    A rotating-and-warping projector/backprojector is proposed for iterative algorithms used to reconstruct fan-beam and cone-beam single photon emission computed tomography (SPECT) data. The development of a new projector/backprojector for implementing attenuation, geometric point response, and scatter models is motivated by the need to reduce the computation time while preserving the fidelity of the corrected reconstruction. At each projection angle, the projector/backprojector first rotates the image volume so that the pixelized cube remains parallel to the detector, and then warps the image volume so that the fan-beam and cone-beam rays are converted into parallel rays. In the authors' implementation, these two steps are combined so that the interpolation of voxel values is performed only once. The projection operation is achieved by a simple weighted summation, and the backprojection operation is achieved by copying weighted projection array values to the image volume. An advantage of this projector/backprojector is that the system point response function can be deconvolved via the Fast Fourier Transform using the shift-invariant property of the point response when the voxel-to-detector distance is constant. The fan-beam and cone-beam rotating-and-warping projector/backprojector is applied to SPECT data showing improved resolution
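
    For concreteness, here is a much-simplified 2-D fan-beam sketch of the rotate-then-warp idea: the image is first rotated so the detector stays parallel to the pixel rows, and each row is then resampled by its magnification factor so the diverging rays line up with parallel detector bins before being summed. The geometry parameters are hypothetical, ray path-length weighting is ignored, and the single combined interpolation step of the actual implementation is not reproduced.

      import numpy as np
      from scipy.ndimage import rotate

      def fanbeam_project(image, angle_deg, src_dist, det_dist, n_det, det_spacing=1.0):
          """Rotate-and-warp fan-beam projector (2-D sketch).

          image    : 2-D array, rows indexed by y (axis 0), columns by x (axis 1)
          src_dist : distance from the focal point (source) to the rotation centre;
                     must be larger than half the image height so the source lies
                     outside the object
          det_dist : distance from the rotation centre to the detector
          """
          # 1) rotate the image so the detector stays parallel to the pixel rows
          rot = rotate(image, angle_deg, reshape=False, order=1)
          ny, nx = rot.shape
          x = np.arange(nx) - nx / 2 + 0.5          # pixel x-coordinates
          y = np.arange(ny) - ny / 2 + 0.5          # pixel y-coordinates (towards source)
          u = (np.arange(n_det) - n_det / 2 + 0.5) * det_spacing   # detector bins
          sdd = src_dist + det_dist                 # source-to-detector distance
          proj = np.zeros(n_det)
          # 2) warp each row so the diverging rays become parallel, then accumulate
          for i, yi in enumerate(y):
              mag = sdd / (src_dist - yi)           # magnification of this row
              proj += np.interp(u / mag, x, rot[i], left=0.0, right=0.0)
          return proj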

  19. A modified genetic algorithm with fuzzy roulette wheel selection for job-shop scheduling problems

    Science.gov (United States)

    Thammano, Arit; Teekeng, Wannaporn

    2015-05-01

    The job-shop scheduling problem is one of the most difficult production planning problems. Since it is in the NP-hard class, a recent trend in solving the job-shop scheduling problem is shifting towards the use of heuristic and metaheuristic algorithms. This paper proposes a novel metaheuristic algorithm, which is a modification of the genetic algorithm. This proposed algorithm introduces two new concepts to the standard genetic algorithm: (1) fuzzy roulette wheel selection and (2) the mutation operation with tabu list. The proposed algorithm has been evaluated and compared with several state-of-the-art algorithms in the literature. The experimental results on 53 JSSPs show that the proposed algorithm is very effective in solving the combinatorial optimization problems. It outperforms all state-of-the-art algorithms on all benchmark problems in terms of the ability to achieve the optimal solution and the computational time.

  20. Selfish Gene Algorithm Vs Genetic Algorithm: A Review

    Science.gov (United States)

    Ariff, Norharyati Md; Khalid, Noor Elaiza Abdul; Hashim, Rathiah; Noor, Noorhayati Mohamed

    2016-11-01

    Evolutionary algorithms are among the algorithms inspired by nature, and within little more than a decade hundreds of papers have reported their successful application. This paper considers the Selfish Gene Algorithm (SFGA), one of the latest evolutionary algorithms (EAs), inspired by the Selfish Gene Theory, an interpretation of Darwinian ideas proposed by the biologist Richard Dawkins in 1989. Following a brief introduction to the SFGA, the chronology of its evolution is presented. The purpose of this paper is to present an overview of the concepts of the SFGA as well as its opportunities and challenges. Accordingly, the history of and the steps involved in the algorithm are discussed, and its different applications, together with an analysis of these applications, are evaluated.

  1. Using edge-preserving algorithm with non-local mean for significantly improved image-domain material decomposition in dual-energy CT.

    Science.gov (United States)

    Zhao, Wei; Niu, Tianye; Xing, Lei; Xie, Yaoqin; Xiong, Guanglei; Elmore, Kimberly; Zhu, Jun; Wang, Luyao; Min, James K

    2016-02-07

    Increased noise is a general concern for dual-energy material decomposition. Here, we develop an image-domain material decomposition algorithm for dual-energy CT (DECT) by incorporating an edge-preserving filter into the Local HighlY constrained backPRojection reconstruction (HYPR-LR) framework. With effective use of the non-local mean, the proposed algorithm, which is referred to as HYPR-NLM, reduces the noise in dual-energy decomposition while preserving the accuracy of quantitative measurement and spatial resolution of the material-specific dual-energy images. We demonstrate the noise reduction and resolution preservation of the algorithm with an iodine concentrate numerical phantom by comparing the HYPR-NLM algorithm to the direct matrix inversion, HYPR-LR and iterative image-domain material decomposition (Iter-DECT). We also show the superior performance of the HYPR-NLM over the existing methods by using two sets of cardiac perfusion imaging data. The DECT material decomposition comparison study shows that all four algorithms yield acceptable quantitative measurements of iodine concentrate. Direct matrix inversion yields the highest noise level, followed by HYPR-LR and Iter-DECT. HYPR-NLM in an iterative formulation significantly reduces image noise and the image noise is comparable to or even lower than that generated using Iter-DECT. For the HYPR-NLM method, there are marginal edge effects in the difference image, suggesting the high-frequency details are well preserved. In addition, when the search window size increases from to , there are no significant changes or marginal edge effects in the HYPR-NLM difference images. The conclusions drawn from the comparison study are: (1) HYPR-NLM significantly reduces the DECT material decomposition noise while preserving quantitative measurements and high-frequency edge information, and (2) HYPR-NLM is robust with respect to parameter selection.
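
    To make the baseline concrete, the snippet below shows the plain image-domain two-material decomposition by direct matrix inversion that the abstract uses as its noisiest reference method; it is not the HYPR-NLM algorithm itself. The 2x2 calibration matrix and the toy images are hypothetical.

      import numpy as np

      def direct_decompose(img_low, img_high, A):
          """Per-pixel two-material decomposition by direct matrix inversion.

          img_low, img_high : 2-D CT images at the low / high tube energies
          A                 : 2x2 matrix; A[i, j] = contribution of material j
                              to the measured value at energy i (from calibration)
          Returns the two material-specific images.
          """
          Ainv = np.linalg.inv(A)
          stacked = np.stack([img_low.ravel(), img_high.ravel()])   # shape (2, Npix)
          m = Ainv @ stacked
          return m[0].reshape(img_low.shape), m[1].reshape(img_low.shape)

      # toy usage with a hypothetical calibration matrix and noisy test images
      rng = np.random.default_rng(0)
      low = rng.normal(1.0, 0.05, (64, 64))
      high = rng.normal(0.8, 0.05, (64, 64))
      iodine_img, water_img = direct_decompose(low, high, np.array([[0.7, 1.0],
                                                                    [0.3, 1.0]]))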

  2. The SRT reconstruction algorithm for semiquantification in PET imaging

    International Nuclear Information System (INIS)

    Kastis, George A.; Gaitanis, Anastasios; Samartzis, Alexandros P.; Fokas, Athanasios S.

    2015-01-01

    Purpose: The spline reconstruction technique (SRT) is a new, fast algorithm based on a novel numerical implementation of an analytic representation of the inverse Radon transform. The mathematical details of this algorithm and comparisons with filtered backprojection were presented earlier in the literature. In this study, the authors present a comparison between SRT and the ordered-subsets expectation-maximization (OSEM) algorithm for determining contrast and semiquantitative indices of 18F-FDG uptake. Methods: The authors implemented SRT in the software for tomographic image reconstruction (STIR) open-source platform and evaluated this technique using simulated and real sinograms obtained from the GE Discovery ST positron emission tomography/computed tomography scanner. All simulations and reconstructions were performed in STIR. For OSEM, the authors used the clinical protocol of their scanner, namely, 21 subsets and two iterations. The authors also examined images at one, four, six, and ten iterations. For the simulation studies, the authors analyzed an image-quality phantom with cold and hot lesions. Two different versions of the phantom were employed at two different hot-sphere lesion-to-background ratios (LBRs), namely, 2:1 and 4:1. For each noiseless sinogram, 20 Poisson realizations were created at five different noise levels. In addition to making visual comparisons of the reconstructed images, the authors determined contrast and bias as a function of the background image roughness (IR). For the real-data studies, sinograms of an image-quality phantom simulating the human torso were employed. The authors determined contrast and LBR as a function of the background IR. Finally, the authors present plots of contrast as a function of IR after smoothing each reconstructed image with Gaussian filters of six different sizes. Statistical significance was determined by employing the Wilcoxon rank-sum test. Results: In both simulated and real studies, SRT

  3. An innovative localisation algorithm for railway vehicles

    Science.gov (United States)

    Allotta, B.; D'Adamio, P.; Malvezzi, M.; Pugi, L.; Ridolfi, A.; Rindi, A.; Vettori, G.

    2014-11-01

    In modern railway automatic train protection and automatic train control systems, odometry is a safety relevant on-board subsystem which estimates the instantaneous speed and the travelled distance of the train; a high reliability of the odometry estimate is fundamental, since an error on the train position may lead to a potentially dangerous overestimation of the distance available for braking. To improve the odometry estimate accuracy, data fusion of different inputs coming from a redundant sensor layout may be used. The aim of this work has been to develop an innovative localisation algorithm for railway vehicles able to enhance the performances, in terms of speed and position estimation accuracy, of the classical odometry algorithms, such as the Italian Sistema Controllo Marcia Treno (SCMT). The proposed strategy consists of a sensor fusion between the information coming from a tachometer and an Inertial Measurements Unit (IMU). The sensor outputs have been simulated through a 3D multibody model of a railway vehicle. The work has provided the development of a custom IMU, designed by ECM S.p.a, in order to meet their industrial and business requirements. The industrial requirements have to be compliant with the European Train Control System (ETCS) standards: the European Rail Traffic Management System (ERTMS), a project developed by the European Union to improve the interoperability among different countries, in particular as regards the train control and command systems, fixes some standard values for the odometric (ODO) performance, in terms of speed and travelled distance estimation. The reliability of the ODO estimation has to be taken into account based on the allowed speed profiles. The results of the currently used ODO algorithms can be improved, especially in case of degraded adhesion conditions; it has been verified in the simulation environment that the results of the proposed localisation algorithm are always compliant with the ERTMS requirements

  4. Transition Matrix Cluster Algorithms

    OpenAIRE

    Yevick, David; Lee, Yong Hwan

    2018-01-01

    We demonstrate that a series of simple procedures for increasing the efficiency of transition matrix calculations can be realized by integrating the standard single-spin reversal transition matrix method with global cluster inversion techniques.

  5. Methodology and basic algorithms of the Livermore Economic Modeling System

    Energy Technology Data Exchange (ETDEWEB)

    Bell, R.B.

    1981-03-17

    The methodology and the basic pricing algorithms used in the Livermore Economic Modeling System (EMS) are described. The report explains the derivations of the EMS equations in detail; however, it could also serve as a general introduction to the modeling system. A brief but comprehensive explanation of what EMS is and does, and how it does it is presented. The second part examines the basic pricing algorithms currently implemented in EMS. Each algorithm's function is analyzed and a detailed derivation of the actual mathematical expressions used to implement the algorithm is presented. EMS is an evolving modeling system; improvements in existing algorithms are constantly under development and new submodels are being introduced. A snapshot of the standard version of EMS is provided and areas currently under study and development are considered briefly.

  6. A Simple Linear Ranking Algorithm Using Query Dependent Intercept Variables

    OpenAIRE

    Ailon, Nir

    2008-01-01

    The LETOR website contains three information retrieval datasets used as a benchmark for testing machine learning ideas for ranking. Algorithms participating in the challenge are required to assign score values to search results for a collection of queries, and are measured using standard IR ranking measures (NDCG, precision, MAP) that depend only the relative score-induced order of the results. Similarly to many of the ideas proposed in the participating algorithms, we train a linear classifi...

  7. On the runtime analysis of the Simple Genetic Algorithm

    DEFF Research Database (Denmark)

    Oliveto, Pietro S.; Witt, Carsten

    2014-01-01

    For many years it has been a challenge to analyze the time complexity of Genetic Algorithms (GAs) using stochastic selection together with crossover and mutation. This paper presents a rigorous runtime analysis of the well-known Simple Genetic Algorithm (SGA) for OneMax. It is proved that the SGA...... for a standard benchmark function. The presented techniques might serve as a first basis towards systematic runtime analyses of GAs....

  8. An investigation of messy genetic algorithms

    Science.gov (United States)

    Goldberg, David E.; Deb, Kalyanmoy; Korb, Bradley

    1990-01-01

    Genetic algorithms (GAs) are search procedures based on the mechanics of natural selection and natural genetics. They combine the use of string codings or artificial chromosomes and populations with the selective and juxtapositional power of reproduction and recombination to motivate a surprisingly powerful search heuristic in many problems. Despite their empirical success, there has been a long standing objection to the use of GAs in arbitrarily difficult problems. A new approach was launched. Results to a 30-bit, order-three-deception problem were obtained using a new type of genetic algorithm called a messy genetic algorithm (mGAs). Messy genetic algorithms combine the use of variable-length strings, a two-phase selection scheme, and messy genetic operators to effect a solution to the fixed-coding problem of standard simple GAs. The results of the study of mGAs in problems with nonuniform subfunction scale and size are presented. The mGA approach is summarized, both its operation and the theory of its use. Experiments on problems of varying scale, varying building-block size, and combined varying scale and size are presented.

  9. Efficiency of the Sophisticated DSMC Algorithm

    Science.gov (United States)

    Gallis, M. A.; Torczynski, J. R.

    2008-11-01

    Bird's sophisticated 2007 algorithm (DSMC07) is implemented in a two-dimensional Direct Simulation Monte Carlo (DSMC) code and compared to the standard 1994 algorithm (DSMC94) for multi-dimensional real-world rarefied-gas problems. Two test cases are examined. The first test case involves a typical DSMC problem, hypersonic flow over a wedge. The goal of this test case is to compare the algorithms when the same simulation parameters are used. The second test case involves a systematic analysis of the relative performance of the two algorithms for a real-world microsystem application that is out of reach for most DSMC codes. These comparisons confirm that when the discretization error tends to zero, both DSMC94 and DSMC07 produce results of the same accuracy. However, the two methods have a marked difference in their run times. For these cases, DSMC07 simulates 2-3 times as much physical time per processor-hour as DSMC94 at the same accuracy. Sandia is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  10. An Algorithmic Diversity Diet?

    DEFF Research Database (Denmark)

    Sørensen, Jannick Kirk; Schmidt, Jan-Hinrik

    2016-01-01

    diet system however triggers not only the classic discussion of the reach – distinctiveness balance for PSM, but also shows that ‘diversity’ is understood very differently in algorithmic recommender system communities than it is editorially and politically in the context of PSM. The design...... of a diversity diet system generates questions not just about editorial power, personal freedom and techno-paternalism, but also about the embedded politics of recommender systems as well as the human skills affiliated with PSM editorial work and the nature of PSM content....

  11. Randomized Filtering Algorithms

    DEFF Research Database (Denmark)

    Katriel, Irit; Van Hentenryck, Pascal

    2008-01-01

    of AllDifferent and its generalization, the Global Cardinality Constraint. The first delayed filtering scheme is a Monte Carlo algorithm: its running time is superior, in the worst case, to that of enforcing arc consistency after every domain event, while its filtering effectiveness is analyzed......Filtering every global constraint of a CSP to arc consistency at every search step can be costly and solvers often compromise on either the level of consistency or the frequency at which arc consistency is enforced. In this paper we propose two randomized filtering schemes for dense instances

  12. Evaluation of the image quality in digital breast tomosynthesis (DBT) employed with a compressed-sensing (CS)-based reconstruction algorithm by using the mammographic accreditation phantom

    International Nuclear Information System (INIS)

    Park, Yeonok; Cho, Heemoon; Je, Uikyu; Cho, Hyosung; Park, Chulkyu; Lim, Hyunwoo; Kim, Kyuseok; Kim, Guna; Park, Soyoung; Woo, Taeho; Choi, Sungil

    2015-01-01

    In this work, we have developed a prototype digital breast tomosynthesis (DBT) system which mainly consists of an x-ray generator (28 kVp, 7 mA s), a CMOS-type flat-panel detector (70-μm pixel size, 230.5×339 mm² active area), and a rotational arm to move the x-ray generator in an arc. We employed a compressed-sensing (CS)-based reconstruction algorithm, rather than a common filtered-backprojection (FBP) one, for more accurate DBT reconstruction. Here the CS is a state-of-the-art mathematical theory for solving the inverse problems, which exploits the sparsity of the image with substantially high accuracy. We evaluated the reconstruction quality in terms of the detectability, the contrast-to-noise ratio (CNR), and the slice-sensitive profile (SSP) by using the mammographic accreditation phantom (Model 015, CIRS Inc.) and compared it to the FBP-based quality. The CS-based algorithm yielded much better image quality, preserving superior image homogeneity, edge sharpening, and cross-plane resolution, compared to the FBP-based one. - Highlights: • A prototype digital breast tomosynthesis (DBT) system is developed. • Compressed-sensing (CS) based reconstruction framework is employed. • We reconstructed high-quality DBT images by using the proposed reconstruction framework.

  13. Evaluation of the image quality in digital breast tomosynthesis (DBT) employed with a compressed-sensing (CS)-based reconstruction algorithm by using the mammographic accreditation phantom

    Energy Technology Data Exchange (ETDEWEB)

    Park, Yeonok; Cho, Heemoon; Je, Uikyu; Cho, Hyosung, E-mail: hscho1@yonsei.ac.kr; Park, Chulkyu; Lim, Hyunwoo; Kim, Kyuseok; Kim, Guna; Park, Soyoung; Woo, Taeho; Choi, Sungil

    2015-12-21

    In this work, we have developed a prototype digital breast tomosynthesis (DBT) system which mainly consists of an x-ray generator (28 kVp, 7 mA s), a CMOS-type flat-panel detector (70-μm pixel size, 230.5×339 mm² active area), and a rotational arm to move the x-ray generator in an arc. We employed a compressed-sensing (CS)-based reconstruction algorithm, rather than a common filtered-backprojection (FBP) one, for more accurate DBT reconstruction. Here the CS is a state-of-the-art mathematical theory for solving the inverse problems, which exploits the sparsity of the image with substantially high accuracy. We evaluated the reconstruction quality in terms of the detectability, the contrast-to-noise ratio (CNR), and the slice-sensitive profile (SSP) by using the mammographic accreditation phantom (Model 015, CIRS Inc.) and compared it to the FBP-based quality. The CS-based algorithm yielded much better image quality, preserving superior image homogeneity, edge sharpening, and cross-plane resolution, compared to the FBP-based one. - Highlights: • A prototype digital breast tomosynthesis (DBT) system is developed. • Compressed-sensing (CS) based reconstruction framework is employed. • We reconstructed high-quality DBT images by using the proposed reconstruction framework.

  14. An efficient cuckoo search algorithm for numerical function optimization

    Science.gov (United States)

    Ong, Pauline; Zainuddin, Zarita

    2013-04-01

    The cuckoo search algorithm, which reproduces the breeding strategy of the best-known brood parasitic bird, the cuckoo, has demonstrated its superiority in obtaining the global solution for numerical optimization problems. However, the fixed-step approach used in its exploration and exploitation behavior might slow down the search process considerably. In this regard, an improved cuckoo search algorithm with adaptive step size adjustment is introduced and its feasibility on a variety of benchmarks is validated. The obtained results show that the proposed scheme outperforms the standard cuckoo search algorithm in terms of convergence characteristics while preserving the fascinating features of the original method.
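
    A minimal sketch of the idea, assuming a simple geometric decay of the step-size parameter (the paper's exact adaptive rule is not reproduced): standard cuckoo search with Mantegna-style Levy flights, where the scale factor alpha shrinks as the iterations proceed. All parameter values and the sphere objective are placeholders.

      import numpy as np
      from math import gamma, sin, pi

      def levy_step(beta, size, rng):
          # Mantegna's algorithm for a Levy-stable step
          sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
                   (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
          u = rng.normal(0, sigma, size)
          v = rng.normal(0, 1, size)
          return u / np.abs(v) ** (1 / beta)

      def cuckoo_search(obj, dim=10, n_nests=25, iters=500, pa=0.25,
                        lb=-5.0, ub=5.0, beta=1.5, a0=1.0, a_end=0.01, seed=0):
          rng = np.random.default_rng(seed)
          nests = rng.uniform(lb, ub, (n_nests, dim))
          fit = np.array([obj(x) for x in nests])
          best = nests[np.argmin(fit)].copy()
          for t in range(iters):
              # assumed adaptive schedule: the step size shrinks as the search converges
              alpha = a0 * (a_end / a0) ** (t / iters)
              for i in range(n_nests):
                  step = alpha * levy_step(beta, dim, rng) * (nests[i] - best)
                  trial = np.clip(nests[i] + step, lb, ub)
                  j = rng.integers(n_nests)       # compare with a randomly chosen nest
                  if obj(trial) < fit[j]:
                      nests[j], fit[j] = trial, obj(trial)
              # abandon a fraction pa of the worst nests and rebuild them at random
              n_drop = int(pa * n_nests)
              worst = np.argsort(fit)[n_nests - n_drop:]
              nests[worst] = rng.uniform(lb, ub, (n_drop, dim))
              fit[worst] = [obj(x) for x in nests[worst]]
              best = nests[np.argmin(fit)].copy()
          return best, fit.min()

      best, f = cuckoo_search(lambda x: np.sum(x ** 2))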

  15. Modification of MSDR algorithm and its implementation on graph clustering

    Science.gov (United States)

    Prastiwi, D.; Sugeng, K. A.; Siswantining, T.

    2017-07-01

    Maximum Standard Deviation Reduction (MSDR) is a graph clustering algorithm to minimize the distance variation within a cluster. In this paper we propose a modified MSDR by replacing one technical step in MSDR which uses polynomial regression, with a new and simpler step. This leads to our new algorithm called Modified MSDR (MMSDR). We implement the new algorithm to separate a domestic flight network of an Indonesian airline into two large clusters. Further analysis allows us to discover a weak link in the network, which should be improved by adding more flights.

  16. A parallel algorithm for the non-symmetric eigenvalue problem

    International Nuclear Information System (INIS)

    Sidani, M.M.

    1991-01-01

    An algorithm is presented for the solution of the non-symmetric eigenvalue problem. The algorithm is based on a divide-and-conquer procedure that provides initial approximations to the eigenpairs, which are then refined using Newton iterations. Since the smaller subproblems can be solved independently, and since Newton iterations with different initial guesses can be started simultaneously, the algorithm - unlike the standard QR method - is ideal for parallel computers. The author also reports on his investigation of deflation methods designed to obtain further eigenpairs if needed. Numerical results from implementations on a host of parallel machines (distributed and shared-memory) are presented
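
    The Newton refinement step mentioned above can be sketched compactly: each approximate eigenpair is polished by Newton's method on the bordered system [A v - lambda v; c^T v - 1] = 0. This is a standard refinement scheme assuming a simple eigenvalue; the divide-and-conquer partitioning and the parallel aspects of the paper are not reproduced, and the normalisation vector and test matrix below are arbitrary choices for illustration.

      import numpy as np

      def refine_eigenpair(A, v0, lam0, iters=10, tol=1e-12):
          """Newton refinement of an approximate eigenpair (v0, lam0) of A.

          Solves F(v, lam) = [A v - lam v ; c^T v - 1] = 0 with a fixed
          normalisation vector c (here c is chosen so that c^T v0 = 1)."""
          n = A.shape[0]
          c = v0 / (v0 @ v0)
          v, lam = v0.copy(), lam0
          for _ in range(iters):
              F = np.concatenate([A @ v - lam * v, [c @ v - 1.0]])
              if np.linalg.norm(F) < tol:
                  break
              J = np.zeros((n + 1, n + 1))
              J[:n, :n] = A - lam * np.eye(n)     # d(Av - lam v)/dv
              J[:n, n] = -v                       # d(Av - lam v)/dlam
              J[n, :n] = c                        # d(c^T v - 1)/dv
              delta = np.linalg.solve(J, -F)
              v += delta[:n]
              lam += delta[n]
          return v, lam

      # usage: perturb an exact eigenpair of a small test matrix and recover it
      A = np.array([[4.0, 1.0, 0.0], [0.0, 3.0, 1.0], [0.0, 0.0, 2.0]])
      w, V = np.linalg.eig(A)
      v_ref, lam_ref = refine_eigenpair(A, V[:, 0] + 1e-3, w[0] + 1e-3)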

  17. Recognition algorithms in knot theory

    International Nuclear Information System (INIS)

    Dynnikov, I A

    2003-01-01

    In this paper the problem of constructing algorithms for comparing knots and links is discussed. A survey of existing approaches and basic results in this area is given. In particular, diverse combinatorial methods for representing links are discussed, the Haken algorithm for recognizing a trivial knot (the unknot) and a scheme for constructing a general algorithm (using Haken's ideas) for comparing links are presented, an approach based on representing links by closed braids is described, the known algorithms for solving the word problem and the conjugacy problem for braid groups are described, and the complexity of the algorithms under consideration is discussed. A new method of combinatorial description of knots is given together with a new algorithm (based on this description) for recognizing the unknot by using a procedure for monotone simplification. In the conclusion of the paper several problems are formulated whose solution could help to advance towards the 'algorithmization' of knot theory

  18. Fast algorithm for Morphological Filters

    International Nuclear Information System (INIS)

    Lou Shan; Jiang Xiangqian; Scott, Paul J

    2011-01-01

    In surface metrology, morphological filters, which evolved from the envelope filtering system (E-system), work well for functional prediction of surface finish in the analysis of surfaces in contact. The naive algorithms are time-consuming, especially for areal data, and not generally adopted in real practice. A fast algorithm is proposed based on the alpha shape. The hull obtained by rolling the alpha ball is equivalent to the morphological opening/closing in theory. The algorithm depends on Delaunay triangulation with time complexity O(n log n). In comparison to the naive algorithms it generates the opening and closing envelope without combining dilation and erosion. Edge distortion is corrected by reflective padding for open profiles/surfaces. Spikes in the sample data are detected and points interpolated to prevent singularities. The proposed algorithm works well both for morphological profile and area filters. Examples are presented to demonstrate the validity and superiority in efficiency of this algorithm over the naive algorithm.
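
    For reference, the envelope that both the naive and the alpha-shape methods compute can be written directly with grey-scale morphology: closing a profile with a ball-shaped structuring function gives the upper (rolling-ball) envelope, opening gives the lower one. This is a sketch of the filter's definition using SciPy's grey morphology, not of the fast Delaunay-based algorithm proposed in the paper; the test profile, ball radius and equal x/z scaling are assumptions.

      import numpy as np
      from scipy.ndimage import grey_closing, grey_opening

      def ball_element(radius, dx):
          """Heights of a disk (rolling ball) sampled on the profile spacing dx."""
          n = int(radius / dx)
          t = np.arange(-n, n + 1) * dx
          return np.sqrt(np.clip(radius ** 2 - t ** 2, 0.0, None))

      def closing_envelope(z, radius, dx):
          # ball rolling on top of the profile -> upper (closing) envelope
          return grey_closing(z, structure=ball_element(radius, dx))

      def opening_envelope(z, radius, dx):
          # ball rolling underneath the profile -> lower (opening) envelope
          return grey_opening(z, structure=ball_element(radius, dx))

      x = np.linspace(0.0, 10.0, 2001)
      rng = np.random.default_rng(0)
      z = 0.05 * np.sin(2 * np.pi * x) + 0.005 * rng.normal(size=x.size)
      upper = closing_envelope(z, radius=0.5, dx=x[1] - x[0])
      lower = opening_envelope(z, radius=0.5, dx=x[1] - x[0])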

  19. A flexible fuzzy regression algorithm for forecasting oil consumption estimation

    International Nuclear Information System (INIS)

    Azadeh, A.; Khakestani, M.; Saberi, M.

    2009-01-01

    Oil consumption plays a vital role in the socio-economic development of most countries. This study presents a flexible fuzzy regression algorithm for forecasting oil consumption based on standard economic indicators. The standard indicators are annual population, cost of crude oil import, gross domestic product (GDP) and annual oil production in the last period. The proposed algorithm uses analysis of variance (ANOVA) to select either fuzzy regression or conventional regression for future demand estimation. The significance of the proposed algorithm is threefold. First, it is flexible and identifies the best model based on the results of ANOVA and minimum absolute percentage error (MAPE), whereas previous studies consider the best fitted fuzzy regression model based on MAPE or other relative error results. Second, the proposed model may identify conventional regression as the best model for future oil consumption forecasting because of its dynamic structure, whereas previous studies assume that fuzzy regression always provides the best solutions and estimation. Third, it utilizes the most standard independent variables for the regression models. To show the applicability and superiority of the proposed flexible fuzzy regression algorithm, the data for oil consumption in Canada, United States, Japan and Australia from 1990 to 2005 are used. The results show that the flexible algorithm provides an accurate solution to the oil consumption estimation problem. The algorithm may be used by policy makers to accurately foresee the behavior of oil consumption in various regions.

  20. To develop a universal gamut mapping algorithm

    International Nuclear Information System (INIS)

    Morovic, J.

    1998-10-01

    using various algorithms was also looked at and a strong and positive correlation was found between these two properties. It was also shown that the reproductions made with GCUSP were pleasant in isolation, which makes it a very good candidate for a standard universal gamut mapping algorithm. (author)

  1. Contour Error Map Algorithm

    Science.gov (United States)

    Merceret, Francis; Lane, John; Immer, Christopher; Case, Jonathan; Manobianco, John

    2005-01-01

    The contour error map (CEM) algorithm and the software that implements the algorithm are means of quantifying correlations between sets of time-varying data that are binarized and registered on spatial grids. The present version of the software is intended for use in evaluating numerical weather forecasts against observational sea-breeze data. In cases in which observational data come from off-grid stations, it is necessary to preprocess the observational data to transform them into gridded data. First, the wind direction is gridded and binarized so that D(i,j;n) is the input to CEM based on forecast data and d(i,j;n) is the input to CEM based on gridded observational data. Here, i and j are spatial indices representing 1.25-km intervals along the west-to-east and south-to-north directions, respectively; and n is a time index representing 5-minute intervals. A binary value of D or d = 0 corresponds to an offshore wind, whereas a value of D or d = 1 corresponds to an onshore wind. CEM includes two notable subalgorithms: One identifies and verifies sea-breeze boundaries; the other, which can be invoked optionally, performs an image-erosion function for the purpose of attempting to eliminate river-breeze contributions in the wind fields.
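
    The gridding and binarization step described above is simple to reproduce; the sketch below builds D(i,j;n) and d(i,j;n) from wind directions and computes a plain cell-by-cell agreement score. The onshore sector, grid sizes and the agreement metric are placeholders; CEM itself scores errors along the detected sea-breeze contours rather than per cell.

      import numpy as np

      def binarize_wind(direction_deg, onshore_min=0.0, onshore_max=180.0):
          """Map wind direction to 1 (onshore) or 0 (offshore).

          The onshore sector [onshore_min, onshore_max) is a placeholder; the real
          sector depends on the local coastline orientation."""
          d = np.mod(direction_deg, 360.0)
          return ((d >= onshore_min) & (d < onshore_max)).astype(np.uint8)

      def agreement_score(D, d):
          """Fraction of grid cells and time steps where forecast and observation agree."""
          return np.mean(D == d)

      # D(i, j; n) from the forecast, d(i, j; n) gridded from the off-grid stations
      rng = np.random.default_rng(1)
      fc_dir = rng.uniform(0, 360, (80, 60, 48))      # toy 1.25-km grid, 5-min steps
      ob_dir = fc_dir + rng.normal(0, 20, fc_dir.shape)
      D = binarize_wind(fc_dir)
      d = binarize_wind(ob_dir)
      print(agreement_score(D, d))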

  2. Algorithmic Relative Complexity

    Directory of Open Access Journals (Sweden)

    Daniele Cerra

    2011-04-01

    Full Text Available Information content and compression are tightly related concepts that can be addressed through both classical and algorithmic information theories, on the basis of Shannon entropy and Kolmogorov complexity, respectively. The definition of several entities in Kolmogorov's framework relies upon ideas from classical information theory, and these two approaches share many common traits. In this work, we expand the relations between these two frameworks by introducing algorithmic cross-complexity and relative complexity, counterparts of the cross-entropy and relative entropy (or Kullback-Leibler divergence) found in Shannon's framework. We define the cross-complexity of an object x with respect to another object y as the amount of computational resources needed to specify x in terms of y, and the complexity of x related to y as the compression power which is lost when adopting such a description for x, compared to the shortest representation of x. Properties of analogous quantities in classical information theory hold for these new concepts. As these notions are incomputable, a suitable approximation based upon data compression is derived to enable the application to real data, yielding a divergence measure applicable to any pair of strings. Example applications are outlined, involving authorship attribution and satellite image classification, as well as a comparison to similar established techniques.
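
    Because the quantities are incomputable, the paper approximates them with a data compressor. A minimal compressor-based stand-in for that idea (in the same spirit, but not the authors' exact estimator) can be written with zlib:

      import zlib

      def C(s: bytes) -> int:
          """Compressed length as a crude upper bound on Kolmogorov complexity."""
          return len(zlib.compress(s, 9))

      def cross_complexity(x: bytes, y: bytes) -> int:
          # C(x | y) approximated by how much describing x adds on top of y
          return C(y + x) - C(y)

      def relative_complexity(x: bytes, y: bytes) -> int:
          # extra bits paid for describing x through y instead of by itself
          return cross_complexity(x, y) - C(x)

      a = ("the quick brown fox jumps over the lazy dog " * 50).encode()
      b = ("the quick brown fox " * 120).encode()
      c = ("lorem ipsum dolor sit amet " * 90).encode()
      print(relative_complexity(a, b), relative_complexity(a, c))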

  3. Fatigue evaluation algorithms: Review

    Energy Technology Data Exchange (ETDEWEB)

    Passipoularidis, V.A.; Broendsted, P.

    2009-11-15

    A progressive damage fatigue simulator for variable amplitude loads named FADAS is discussed in this work. FADAS (Fatigue Damage Simulator) performs ply-by-ply stress analysis using classical lamination theory and implements adequate stiffness discount tactics based on the failure criterion of Puck, to model the degradation caused by failure events at ply level. Residual strength is incorporated as the fatigue damage accumulation metric. Once the typical fatigue and static properties of the constitutive ply are determined, the performance of an arbitrary lay-up under uniaxial and/or multiaxial load time series can be simulated. The predictions are validated against fatigue life data both from repeated block tests at a single stress ratio as well as against spectral fatigue using the WISPER, WISPERX and NEW WISPER load sequences on a Glass/Epoxy multidirectional laminate typical of a wind turbine rotor blade construction. Two versions of the algorithm, the one using single-step and the other using incremental application of each load cycle (in case of ply failure) are implemented and compared. Simulation results confirm the ability of the algorithm to take into account load sequence effects. In general, FADAS performs well in predicting life under both spectral and block loading fatigue. (author)

  4. Online Planning Algorithm

    Science.gov (United States)

    Rabideau, Gregg R.; Chien, Steve A.

    2010-01-01

    AVA v2 software selects goals for execution from a set of goals that oversubscribe shared resources. The term goal refers to a science or engineering request to execute a possibly complex command sequence, such as image targets or ground-station downlinks. Developed as an extension to the Virtual Machine Language (VML) execution system, the software enables onboard and remote goal triggering through the use of an embedded, dynamic goal set that can oversubscribe resources. From the set of conflicting goals, a subset must be chosen that maximizes a given quality metric, which in this case is strict priority selection. A goal can never be pre-empted by a lower priority goal, and high-level goals can be added, removed, or updated at any time, and the "best" goals will be selected for execution. The software addresses the issue of re-planning that must be performed in a short time frame by the embedded system where computational resources are constrained. In particular, the algorithm addresses problems with well-defined goal requests without temporal flexibility that oversubscribes available resources. By using a fast, incremental algorithm, goal selection can be postponed in a "just-in-time" fashion allowing requests to be changed or added at the last minute. Thereby enabling shorter response times and greater autonomy for the system under control.

  5. Implementation and Evaluation of Pinhole SPECT

    International Nuclear Information System (INIS)

    MacArtain Anne Marie

    2002-08-01

    The aim of this work was to implement Pinhole SPECT into a working Nuclear Medicine department. It has been reported that pinhole SPECT has been successfully performed to visualise pathology in ankle bones using a gamma camera and that the images were reconstructed using a standard filtered back-projection algorithm (Bahk YW, 1998). The objective of this study was to produce and evaluate this technique with the equipment available in the nuclear medicine department. The system performance was assessed using both the low-energy high resolution and the pinhole collimators. Phantoms constructed using capillary tubes, filled with technetium 99m (pertechnetate) were imaged in different arrays to identify possible limitations in the reconstruction software. A thyroid phantom with hot and cold inserts was also imaged. Data were acquired in 'step-and-shoot' mode as the camera was rotated 180 degrees or 360 degrees around the phantom. Images were reconstructed using a standard parallel back-projection algorithm and a weighted backprojection algorithm (Nowak). An attempt was made to process images of the phantom in Matlab using the Iradon function modified by application of a cone-beam type algorithm (Feldkamp L, 1984). Visual comparison of static images between the pinhole and the LEHR collimators showed the expected improved spatial resolution of the pinhole images. Pinhole SPECT images should be reconstructed using the appropriate cone beam algorithm. However, it was established that reconstructing pinhole SPECT images using a standard parallel backprojection algorithm yielded results which were deemed to be clinically useful. The Nowak algorithm results were a distinct improvement on those achieved with the parallel backprojection algorithm. Likewise, the results from the cone beam algorithm were better than the former but not as good as those obtained from the Nowak algorithm. This was due to the fact that the cone beam algorithm did not include a weighting factor. Implementation
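
    For orientation, a minimal parallel-geometry filtered back-projection of the kind used for the standard reconstructions mentioned above can be sketched as follows; pinhole (cone-beam-like) data would additionally need the depth-dependent weighting discussed in the abstract. The sinogram orientation convention, the ideal ramp filter and the overall scale factor are assumptions.

      import numpy as np
      from scipy.ndimage import rotate

      def ramp_filter(sinogram):
          """Apply an ideal ramp filter to each projection (rows = detector bins)."""
          n_det, n_ang = sinogram.shape
          n_fft = 2 * int(2 ** np.ceil(np.log2(n_det)))
          filt = 2.0 * np.abs(np.fft.fftfreq(n_fft))
          F = np.fft.fft(sinogram, n=n_fft, axis=0) * filt[:, None]
          return np.real(np.fft.ifft(F, axis=0))[:n_det, :]

      def fbp(sinogram, angles_deg):
          """Parallel-beam FBP: filter, then smear each projection back at its angle."""
          filtered = ramp_filter(sinogram)
          n_det = sinogram.shape[0]
          recon = np.zeros((n_det, n_det))
          for k, ang in enumerate(angles_deg):
              # backproject: replicate the 1-D projection along one axis ...
              smear = np.tile(filtered[:, k], (n_det, 1))
              # ... and rotate it into the orientation at which it was acquired
              recon += rotate(smear, ang, reshape=False, order=1)
          # normalise by the number of views (up to an overall scale factor)
          return recon * np.pi / (2 * len(angles_deg))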

  6. A Cooperative Framework for Fireworks Algorithm.

    Science.gov (United States)

    Zheng, Shaoqiu; Li, Junzhi; Janecek, Andreas; Tan, Ying

    2017-01-01

    This paper presents a cooperative framework for fireworks algorithm (CoFFWA). A detailed analysis of existing fireworks algorithm (FWA) and its recently developed variants has revealed that (i) the current selection strategy has the drawback that the contribution of the firework with the best fitness (denoted as core firework) overwhelms the contributions of all other fireworks (non-core fireworks) in the explosion operator, and (ii) the Gaussian mutation operator is not as effective as it is designed to be. To overcome these limitations, the CoFFWA is proposed, which significantly improves the exploitation capability by using an independent selection method and also increases the exploration capability by incorporating a crowdness-avoiding cooperative strategy among the fireworks. Experimental results on the CEC2013 benchmark functions indicate that CoFFWA outperforms the state-of-the-art FWA variants, artificial bee colony, differential evolution, and the standard particle swarm optimization SPSO2007/SPSO2011 in terms of convergence performance.

  7. Automated Spectroscopic Analysis Using the Particle Swarm Optimization Algorithm: Implementing a Guided Search Algorithm to Autofit

    Science.gov (United States)

    Ervin, Katherine; Shipman, Steven

    2017-06-01

    While rotational spectra can be rapidly collected, their analysis (especially for complex systems) is seldom straightforward, leading to a bottleneck. The AUTOFIT program was designed to serve that need by quickly matching rotational constants to spectra with little user input and supervision. This program can potentially be improved by incorporating an optimization algorithm in the search for a solution. The Particle Swarm Optimization Algorithm (PSO) was chosen for implementation. PSO is part of a family of optimization algorithms called heuristic algorithms, which seek approximate best answers. This is ideal for rotational spectra, where an exact match will not be found without incorporating distortion constants, etc., which would otherwise greatly increase the size of the search space. PSO was tested for robustness against five standard fitness functions and then applied to a custom fitness function created for rotational spectra. This talk will explain the Particle Swarm Optimization algorithm and how it works, describe how Autofit was modified to use PSO, discuss the fitness function developed to work with spectroscopic data, and show our current results. Seifert, N.A., Finneran, I.A., Perez, C., Zaleski, D.P., Neill, J.L., Steber, A.L., Suenram, R.D., Lesarri, A., Shipman, S.T., Pate, B.H., J. Mol. Spec. 312, 13-21 (2015)
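
    For readers unfamiliar with PSO, the core update is compact; a standard global-best variant with constriction-style coefficients is sketched below. The spectroscopic fitness function used by the modified AUTOFIT (scoring predicted line positions against the measured peak list) is not reproduced; the sphere objective and all settings are placeholders.

      import numpy as np

      def pso(obj, dim=10, n_particles=40, iters=300, lb=-5.0, ub=5.0,
              w=0.7298, c1=1.4962, c2=1.4962, seed=0):
          """Global-best particle swarm optimisation (minimisation)."""
          rng = np.random.default_rng(seed)
          x = rng.uniform(lb, ub, (n_particles, dim))
          v = np.zeros_like(x)
          pbest, pbest_f = x.copy(), np.array([obj(p) for p in x])
          g = pbest[np.argmin(pbest_f)].copy()
          for _ in range(iters):
              r1, r2 = rng.random((2, n_particles, dim))
              # velocity pulled towards each particle's best and the swarm's best
              v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
              x = np.clip(x + v, lb, ub)
              f = np.array([obj(p) for p in x])
              better = f < pbest_f
              pbest[better], pbest_f[better] = x[better], f[better]
              g = pbest[np.argmin(pbest_f)].copy()
          return g, pbest_f.min()

      # a spectroscopy-style fitness would replace this stand-in objective
      best, f = pso(lambda p: np.sum(p ** 2), dim=3)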

  8. RoPEUS: A New Robust Algorithm for Static Positioning in Ultrasonic Systems

    Directory of Open Access Journals (Sweden)

    Christophe Croux

    2009-06-01

    Full Text Available A well-known problem for precise positioning in real environments is the presence of outliers in the measurement sample. Its importance is even greater in ultrasound-based systems since this technology needs a direct line of sight between emitters and receivers. Standard techniques for outlier detection in range based systems do not usually employ robust algorithms, failing when multiple outliers are present. The direct application of standard robust regression algorithms fails in static positioning (where only the current measurement sample is considered) in real ultrasound-based systems, mainly due to the limited number of measurements and the geometry effects. This paper presents a new robust algorithm, called RoPEUS, based on MM estimation, that follows a typical two-step strategy: (1) a high breakdown point algorithm to obtain a clean sample, and (2) a refinement algorithm to increase the accuracy of the solution. The main modifications proposed to the standard MM robust algorithm are a built-in check of partial solutions in the first step (rejecting bad geometries) and the off-line calculation of the scale of the measurements. The algorithm is tested with real samples obtained with the 3D-LOCUS ultrasound localization system in an ideal environment without obstacles. These measurements are corrupted with typical outlying patterns to numerically evaluate the algorithm performance with respect to the standard parity space algorithm. The algorithm proves to be robust under single or multiple outliers, providing similar accuracy figures in all cases.

  9. Applications of algorithmic differentiation to phase retrieval algorithms.

    Science.gov (United States)

    Jurling, Alden S; Fienup, James R

    2014-07-01

    In this paper, we generalize the techniques of reverse-mode algorithmic differentiation to include elementary operations on multidimensional arrays of complex numbers. We explore the application of algorithmic differentiation to phase retrieval error metrics and show that reverse-mode algorithmic differentiation provides a framework for straightforward calculation of gradients of complicated error metrics without resorting to finite differences or laborious symbolic differentiation.

  10. An accurate projection algorithm for array processor based SPECT systems

    International Nuclear Information System (INIS)

    King, M.A.; Schwinger, R.B.; Cool, S.L.

    1985-01-01

    A data re-projection algorithm has been developed for use in single photon emission computed tomography (SPECT) on an array processor based computer system. The algorithm makes use of an accurate representation of pixel activity (uniform square pixel model of intensity distribution), and is rapidly performed due to the efficient handling of an array based algorithm and the Fast Fourier Transform (FFT) on parallel processing hardware. The algorithm consists of using a pixel driven nearest neighbour projection operation to an array of subdivided projection bins. This result is then convolved with the projected uniform square pixel distribution before being compressed to original bin size. This distribution varies with projection angle and is explicitly calculated. The FFT combined with a frequency space multiplication is used instead of a spatial convolution for more rapid execution. The new algorithm was tested against other commonly used projection algorithms by comparing the accuracy of projections of a simulated transverse section of the abdomen against analytically determined projections of that transverse section. The new algorithm was found to yield comparable or better standard error and yet result in easier and more efficient implementation on parallel hardware. Applications of the algorithm include iterative reconstruction and attenuation correction schemes and evaluation of regions of interest in dynamic and gated SPECT

  11. A Hybrid Evolutionary Algorithm for Wheat Blending Problem

    Directory of Open Access Journals (Sweden)

    Xiang Li

    2014-01-01

    Full Text Available This paper presents a hybrid evolutionary algorithm to deal with the wheat blending problem. The unique constraints of this problem make many existing algorithms fail: either they do not generate acceptable results or they are not able to complete optimization within the required time. The proposed algorithm starts with a filtering process that follows predefined rules to reduce the search space. Then the linear-relaxed version of the problem is solved using a standard linear programming algorithm. The result is used in conjunction with a solution generated by a heuristic method to generate an initial solution. After that, a hybrid of an evolutionary algorithm, a heuristic method, and a linear programming solver is used to improve the quality of the solution. A local search based posttuning method is also incorporated into the algorithm. The proposed algorithm has been tested on artificial test cases and also real data from past years. Results show that the algorithm is able to find quality results in all cases and outperforms the existing method in terms of both quality and speed.

  12. Mapping robust parallel multigrid algorithms to scalable memory architectures

    Science.gov (United States)

    Overman, Andrea; Vanrosendale, John

    1993-01-01

    The convergence rate of standard multigrid algorithms degenerates on problems with stretched grids or anisotropic operators. The usual cure for this is the use of line or plane relaxation. However, multigrid algorithms based on line and plane relaxation have limited and awkward parallelism and are quite difficult to map effectively to highly parallel architectures. Newer multigrid algorithms that overcome anisotropy through the use of multiple coarse grids rather than relaxation are better suited to massively parallel architectures because they require only simple point-relaxation smoothers. In this paper, we look at the parallel implementation of a V-cycle multiple semicoarsened grid (MSG) algorithm on distributed-memory architectures such as the Intel iPSC/860 and Paragon computers. The MSG algorithms provide two levels of parallelism: parallelism within the relaxation or interpolation on each grid and across the grids on each multigrid level. Both levels of parallelism must be exploited to map these algorithms effectively to parallel architectures. This paper describes a mapping of an MSG algorithm to distributed-memory architectures that demonstrates how both levels of parallelism can be exploited. The result is a robust and effective multigrid algorithm for distributed-memory machines.

  13. A Hybrid Evolutionary Algorithm for Wheat Blending Problem

    Science.gov (United States)

    Bonyadi, Mohammad Reza; Michalewicz, Zbigniew; Barone, Luigi

    2014-01-01

    This paper presents a hybrid evolutionary algorithm to deal with the wheat blending problem. The unique constraints of this problem make many existing algorithms fail: either they do not generate acceptable results or they are not able to complete optimization within the required time. The proposed algorithm starts with a filtering process that follows predefined rules to reduce the search space. Then the linear-relaxed version of the problem is solved using a standard linear programming algorithm. The result is used in conjunction with a solution generated by a heuristic method to generate an initial solution. After that, a hybrid of an evolutionary algorithm, a heuristic method, and a linear programming solver is used to improve the quality of the solution. A local search based posttuning method is also incorporated into the algorithm. The proposed algorithm has been tested on artificial test cases and also real data from past years. Results show that the algorithm is able to find quality results in all cases and outperforms the existing method in terms of both quality and speed. PMID:24707222

  14. Algorithms and their others: Algorithmic culture in context

    Directory of Open Access Journals (Sweden)

    Paul Dourish

    2016-08-01

    Full Text Available Algorithms, once obscure objects of technical art, have lately been subject to considerable popular and scholarly scrutiny. What does it mean to adopt the algorithm as an object of analytic attention? What is in view, and out of view, when we focus on the algorithm? Using Niklaus Wirth's 1975 formulation that “algorithms + data structures = programs” as a launching-off point, this paper examines how an algorithmic lens shapes the way in which we might inquire into contemporary digital culture.

  15. Fighting Censorship with Algorithms

    Science.gov (United States)

    Mahdian, Mohammad

    In countries such as China or Iran, where Internet censorship is prevalent, users usually rely on proxies or anonymizers to freely access the web. The obvious difficulty with this approach is that once the address of a proxy or an anonymizer is announced for use to the public, the authorities can easily filter all traffic to that address. This poses a challenge as to how proxy addresses can be announced to users without leaking too much information to the censorship authorities. In this paper, we formulate this question as an interesting algorithmic problem. We study this problem in a static and a dynamic model, and give almost tight bounds on the number of proxy servers required to give access to n people, k of whom are adversaries. We will also discuss how trust networks can be used in this context.

  16. Algorithmic Reflections on Choreography

    Directory of Open Access Journals (Sweden)

    Pablo Ventura

    2016-11-01

    Full Text Available In 1996, Pablo Ventura turned his attention to the choreography software Life Forms to find out whether the then-revolutionary new tool could lead to new possibilities of expression in contemporary dance. During the next 2 decades, he devised choreographic techniques and custom software to create dance works that highlight the operational logic of computers, accompanied by computer-generated dance and media elements. This article provides a firsthand account of how Ventura’s engagement with algorithmic concepts guided and transformed his choreographic practice. The text describes the methods that were developed to create computer-aided dance choreographies. Furthermore, the text illustrates how choreography techniques can be applied to correlate formal and aesthetic aspects of movement, music, and video. Finally, the text emphasizes how Ventura’s interest in the wider conceptual context has led him to explore with choreographic means fundamental issues concerning the characteristics of humans and machines and their increasingly profound interdependencies.

  17. World Competitive Contests (WCC algorithm: A novel intelligent optimization algorithm for biological and non-biological problems

    Directory of Open Access Journals (Sweden)

    Yosef Masoudi-Sobhanzadeh

    Full Text Available Since many sciences face problems that cannot be solved exactly in reasonable time, new methods and algorithms are needed to obtain acceptable answers within practical time limits. In the present study, a novel intelligent optimization algorithm, known as WCC (World Competitive Contests), is proposed and applied to the discovery of transcriptional factor binding sites (TFBS) and to eight benchmark functions. An intelligent optimization algorithm is needed because TFBS discovery is a biological, NP-hard problem. Although some intelligent algorithms exist for solving such problems, an optimization algorithm with good, acceptable performance that operates on real-valued parameters is still essential. Like other optimization algorithms, the proposed algorithm starts with an initial population of teams. After the teams are put into different groups, they begin competing against their rival teams. The highly qualified teams ascend to the elimination stage and play each other in the next rounds, while the other teams wait for a new season to start. In this paper, the proposed algorithm is implemented and compared with five well-known optimization algorithms in terms of the obtained results, stability, convergence, standard deviation and elapsed time, on real and randomly created datasets with different motif sizes. According to the obtained results, in many cases the performance of WCC is better than that of the other algorithms. Keywords: Motif discovery, Transcriptional factor binding sites, Optimization algorithms, World Competitive Contests

  18. An overview of smart grid routing algorithms

    Science.gov (United States)

    Wang, Junsheng; OU, Qinghai; Shen, Haijuan

    2017-08-01

    This paper summarizes typical routing algorithms for the smart grid by analyzing the communication services and communication requirements of the intelligent grid. Two kinds of typical routing algorithm are analyzed, namely clustering routing algorithms and non-clustering routing algorithms, and the advantages, disadvantages and applicability of each kind are discussed.

  19. Genetic Algorithms in Noisy Environments

    OpenAIRE

    THEN, T. W.; CHONG, EDWIN K. P.

    1993-01-01

    Genetic Algorithms (GA) have been widely used in the areas of searching, function optimization, and machine learning. In many of these applications, the effect of noise is a critical factor in the performance of the genetic algorithms. While it has been shown in previous studies that genetic algorithms are still able to perform effectively in the presence of noise, the problem of locating the global optimal solution at the end of the search has never been effectively addressed. Furthermore,...

  20. Mao-Gilles Stabilization Algorithm

    OpenAIRE

    Jérôme Gilles

    2013-01-01

    Originally, the Mao-Gilles stabilization algorithm was designed to compensate the non-rigid deformations due to atmospheric turbulence. Given a sequence of frames affected by atmospheric turbulence, the algorithm uses a variational model combining optical flow and regularization to characterize the static observed scene. The optimization problem is solved by Bregman Iteration and the operator splitting method. The algorithm is simple, efficient, and can be easily generalized for different sce...

  1. Mao-Gilles Stabilization Algorithm

    Directory of Open Access Journals (Sweden)

    Jérôme Gilles

    2013-07-01

    Full Text Available Originally, the Mao-Gilles stabilization algorithm was designed to compensate the non-rigid deformations due to atmospheric turbulence. Given a sequence of frames affected by atmospheric turbulence, the algorithm uses a variational model combining optical flow and regularization to characterize the static observed scene. The optimization problem is solved by Bregman Iteration and the operator splitting method. The algorithm is simple, efficient, and can be easily generalized for different scenarios involving non-rigid deformations.

  2. Unsupervised Classification Using Immune Algorithm

    OpenAIRE

    Al-Muallim, M. T.; El-Kouatly, R.

    2012-01-01

    Unsupervised classification algorithm based on clonal selection principle named Unsupervised Clonal Selection Classification (UCSC) is proposed in this paper. The new proposed algorithm is data driven and self-adaptive, it adjusts its parameters to the data to make the classification operation as fast as possible. The performance of UCSC is evaluated by comparing it with the well known K-means algorithm using several artificial and real-life data sets. The experiments show that the proposed U...

  3. Fuzzy HRRN CPU Scheduling Algorithm

    OpenAIRE

    Bashir Alam; R. Biswas; M. Alam

    2011-01-01

    There are several scheduling algorithms such as FCFS, SRTN, RR, priority scheduling, etc. Scheduling decisions of these algorithms are based on parameters which are assumed to be crisp. However, in many circumstances these parameters are vague. The vagueness of these parameters suggests that the scheduler should use a fuzzy technique in scheduling the jobs. In this paper we have proposed a novel CPU scheduling algorithm, Fuzzy HRRN, that incorporates fuzziness into basic HRRN using the fuzzy technique FIS.
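
    For context, the crisp HRRN rule that the fuzzy variant builds on selects the ready job with the highest response ratio, (waiting time + service time) / service time. A minimal sketch, with an assumed job representation:

      def hrrn_select(ready_jobs, current_time):
          """Classic (crisp) HRRN: pick the job with the highest response ratio.
          Each job is assumed to be a dict with 'arrival' and 'burst' times."""
          def response_ratio(job):
              waiting = current_time - job['arrival']
              return (waiting + job['burst']) / job['burst']
          return max(ready_jobs, key=response_ratio)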

  4. Implementation and evaluation of an ordered subsets reconstruction algorithm for transmission PET studies using median root prior and inter-update median filtering.

    Science.gov (United States)

    Bettinardi, V; Alenius, S; Numminen, P; Teräs, M; Gilardi, M C; Fazio, F; Ruotsalainen, U

    2003-02-01

    An ordered subsets (OS) reconstruction algorithm based on the median root prior (MRP) and inter-update median filtering was implemented for the reconstruction of low count statistics transmission (TR) scans. The OS-MRP-TR algorithm was evaluated using an experimental phantom, simulating positron emission tomography (PET) whole-body (WB) studies, as well as patient data. Various experimental conditions, in terms of TR scan time (from 1 h to 1 min), covering a wide range of TR count statistics were evaluated. The performance of the algorithm was assessed by comparing the mean value of the attenuation coefficient (MVAC) of known tissue types and the coefficient of variation (CV) for low-count TR images, reconstructed with the OS-MRP-TR algorithm, with reference values obtained from high-count TR images reconstructed with a filtered back-projection (FBP) algorithm. The reconstructed OS-MRP-TR images were then used for attenuation correction of the corresponding emission (EM) data. EM images reconstructed with attenuation correction generated by OS-MRP-TR images, of low count statistics, were compared with the EM images corrected for attenuation using reference (high statistics) TR data. In all the experimental situations considered, the OS-MRP-TR algorithm showed: (1) a tendency towards a stable solution in terms of MVAC; (2) a difference in the MVAC of within 5% for a TR scan of 1 min reconstructed with the OS-MRP-TR and a TR scan of 1 h reconstructed with the FBP algorithm; (3) effectiveness in noise reduction, particularly for low count statistics data [using a specific parameter configuration the TR images reconstructed with OS-MRP-TR(1 min) had a lower CV than the corresponding TR images of a 1-h scan reconstructed with the FBP algorithm]; (4) a difference of within 3% between the mean counts in the EM images attenuation corrected using the OS-MRP-TR images of 1 min and the mean counts in the EM images attenuation corrected using the OS-MRP-TR images of 1 h; (5

  5. Implementation and evaluation of an ordered subsets reconstruction algorithm for transmission PET studies using median root prior and inter-update median filtering

    International Nuclear Information System (INIS)

    Bettinardi, V.; Gilardi, M.C.; Fazio, F.; Alenius, S.; Ruotsalainen, U.; Numminen, P.; Teraes, M.

    2003-01-01

    An ordered subsets (OS) reconstruction algorithm based on the median root prior (MRP) and inter-update median filtering was implemented for the reconstruction of low count statistics transmission (TR) scans. The OS-MRP-TR algorithm was evaluated using an experimental phantom, simulating positron emission tomography (PET) whole-body (WB) studies, as well as patient data. Various experimental conditions, in terms of TR scan time (from 1 h to 1 min), covering a wide range of TR count statistics were evaluated. The performance of the algorithm was assessed by comparing the mean value of the attenuation coefficient (MVAC) of known tissue types and the coefficient of variation (CV) for low-count TR images, reconstructed with the OS-MRP-TR algorithm, with reference values obtained from high-count TR images reconstructed with a filtered back-projection (FBP) algorithm. The reconstructed OS-MRP-TR images were then used for attenuation correction of the corresponding emission (EM) data. EM images reconstructed with attenuation correction generated by OS-MRP-TR images, of low count statistics, were compared with the EM images corrected for attenuation using reference (high statistics) TR data. In all the experimental situations considered, the OS-MRP-TR algorithm showed: (1) a tendency towards a stable solution in terms of MVAC; (2) a difference in the MVAC of within 5% for a TR scan of 1 min reconstructed with the OS-MRP-TR and a TR scan of 1 h reconstructed with the FBP algorithm; (3) effectiveness in noise reduction, particularly for low count statistics data [using a specific parameter configuration the TR images reconstructed with OS-MRP-TR(1 min) had a lower CV than the corresponding TR images of a 1-h scan reconstructed with the FBP algorithm]; (4) a difference of within 3% between the mean counts in the EM images attenuation corrected using the OS-MRP-TR images of 1 min and the mean counts in the EM images attenuation corrected using the OS-MRP-TR images of 1 h; (5

  6. Machine Learning an algorithmic perspective

    CERN Document Server

    Marsland, Stephen

    2009-01-01

    Traditional books on machine learning can be divided into two groups - those aimed at advanced undergraduates or early postgraduates with reasonable mathematical knowledge and those that are primers on how to code algorithms. The field is ready for a text that not only demonstrates how to use the algorithms that make up machine learning methods, but also provides the background needed to understand how and why these algorithms work. Machine Learning: An Algorithmic Perspective is that text.Theory Backed up by Practical ExamplesThe book covers neural networks, graphical models, reinforcement le

  7. Diversity-Guided Evolutionary Algorithms

    DEFF Research Database (Denmark)

    Ursem, Rasmus Kjær

    2002-01-01

    Population diversity is undoubtedly a key issue in the performance of evolutionary algorithms. A common hypothesis is that high diversity is important to avoid premature convergence and to escape local optima. Various diversity measures have been used to analyze algorithms, but so far few algorithms have used a measure to guide the search. The diversity-guided evolutionary algorithm (DGEA) uses the well-known distance-to-average-point measure to alternate between phases of exploration (mutation) and phases of exploitation (recombination and selection). The DGEA showed remarkable results...
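
    The distance-to-average-point measure mentioned above can be sketched as the mean Euclidean distance from each individual to the population centroid, normalized by the diagonal of the search box; the normalization choice below is an assumption for illustration.

      import math

      def distance_to_average_point(population, lower, upper):
          """Mean distance of individuals to the population centroid,
          normalized by the diagonal length of the search box."""
          dims = len(population[0])
          centroid = [sum(ind[d] for ind in population) / len(population)
                      for d in range(dims)]
          diagonal = math.sqrt(sum((upper[d] - lower[d]) ** 2 for d in range(dims)))
          mean_dist = sum(math.sqrt(sum((ind[d] - centroid[d]) ** 2
                                        for d in range(dims)))
                          for ind in population) / len(population)
          return mean_dist / (diagonal or 1.0)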

  8. The Dynamics of Standardization

    DEFF Research Database (Denmark)

    Brunsson, Nils; Rasche, Andreas; Seidl, David

    2012-01-01

    This paper suggests that when the phenomenon of standards and standardization is examined from the perspective of organization studies, three aspects stand out: the standardization of organizations, standardization by organizations and standardization as (a form of) organization. Following a comp...

  9. An attenuated projector-backprojector for iterative SPECT reconstruction

    International Nuclear Information System (INIS)

    Gullberg, G.T.; Pelc, N.J.; Huesman, R.H.; Budinger, T.F.; Malko, J.A.

    1985-01-01

    A new ray-driven projector-backprojector which can easily be adapted for hardware implementation is described and simulated in software. The projector-backprojector discretely models the attenuated Radon transform of a source distributed within an attenuating medium as line integrals of discrete pixels, obtained using the standard sampling technique of averaging the emission source or attenuation distribution over small square regions. Attenuation factors are calculated for each pixel during the projection and backprojection operations instead of using precalculated values. The calculation of the factors requires a specification of the attenuation distribution, estimated either from an assumed constant distribution and an approximate body outline or from transmission measurements. The distribution of attenuation coefficients is stored in memory for efficient access during the projection and backprojection operations. The reconstruction of the source distribution is obtained by using a conjugate gradient or SIRT type iterative algorithm which requires one projection and one backprojection operation for each iteration. (author)
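
    A much-simplified, illustrative version of an attenuated ray sum in the spirit described above (a single 1D row of pixels with the detector at the right-hand end; pixel size, interpolation and geometry handling are omitted and all names are assumptions):

      import math

      def attenuated_ray_sum(emission_row, mu_row, pixel_size=1.0):
          """Sum the emission from each pixel, attenuated by the material
          lying between that pixel and the detector at the end of the row."""
          total = 0.0
          for j, activity in enumerate(emission_row):
              # attenuation path: pixels between pixel j and the detector
              path = sum(mu_row[j + 1:]) * pixel_size
              total += activity * math.exp(-path)
          return total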

  10. Mean field theory of the swap Monte Carlo algorithm.

    Science.gov (United States)

    Ikeda, Harukuni; Zamponi, Francesco; Ikeda, Atsushi

    2017-12-21

    The swap Monte Carlo algorithm combines the translational motion with the exchange of particle species and is unprecedentedly efficient for some models of glass formers. In order to clarify the physics underlying this acceleration, we study the problem within the mean field replica liquid theory. We extend the Gaussian Ansatz so as to take into account the exchange of particles of different species, and we calculate analytically the dynamical glass transition points corresponding to the swap and standard Monte Carlo algorithms. We show that the system evolved with the standard Monte Carlo algorithm exhibits the dynamical transition before that of the swap Monte Carlo algorithm. We also test the result by performing computer simulations of a binary mixture of the Mari-Kurchan model, both with standard and swap Monte Carlo. This scenario provides a possible explanation for the efficiency of the swap Monte Carlo algorithm. Finally, we discuss how the thermodynamic theory of the glass transition should be modified based on our results.
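
    A schematic of the swap move that distinguishes swap Monte Carlo from the standard algorithm (the energy function and particle representation are placeholders): alongside ordinary displacement moves, one occasionally proposes exchanging the diameters (species) of two randomly chosen particles and accepts with the Metropolis rule.

      import math
      import random

      def swap_move(positions, diameters, energy, beta):
          """Propose exchanging the diameters of two random particles and
          accept or reject with the Metropolis criterion."""
          i, j = random.sample(range(len(diameters)), 2)
          e_old = energy(positions, diameters)
          diameters[i], diameters[j] = diameters[j], diameters[i]
          e_new = energy(positions, diameters)
          if e_new > e_old and random.random() >= math.exp(-beta * (e_new - e_old)):
              diameters[i], diameters[j] = diameters[j], diameters[i]  # reject
              return False
          return True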

  11. Backtrack Orbit Search Algorithm

    Science.gov (United States)

    Knowles, K.; Swick, R.

    2002-12-01

    A Mathematical Solution to a Mathematical Problem. With the dramatic increase in satellite-borne sensor resolution, traditional methods of spatially searching for orbital data have become inadequate. As data volumes increase, end-users of the data have become increasingly intolerant of false positives. And, as computing power rapidly increases, end-users have come to expect equally rapid search speeds. Meanwhile data archives have an interest in delivering the minimum amount of data that meets users' needs. This keeps their costs down and allows them to serve more users in a more timely manner. Many methods of spatial search for orbital data have been tried in the past and found wanting. The ever popular lat/lon bounding box on a flat Earth is highly inaccurate. Spatial search based on nominal "orbits" is somewhat more accurate at much higher implementation cost and slower performance. Spatial search of orbital data based on predict orbit models is very accurate at a much higher maintenance cost and slower performance. This poster describes the Backtrack Orbit Search Algorithm--an alternative spatial search method for orbital data. Backtrack has a degree of accuracy that rivals predict methods while being faster, less costly to implement, and less costly to maintain than other methods.

  12. Diagnostic algorithm for syncope.

    Science.gov (United States)

    Mereu, Roberto; Sau, Arunashis; Lim, Phang Boon

    2014-09-01

    Syncope is a common symptom with many causes. Affecting a large proportion of the population, both young and old, it represents a significant healthcare burden. The diagnostic approach to syncope should be focused on the initial evaluation, which includes a detailed clinical history, physical examination and 12-lead electrocardiogram. Following the initial evaluation, patients should be risk-stratified into high or low-risk groups in order to guide further investigations and management. Patients with high-risk features should be investigated further to exclude significant structural heart disease or arrhythmia. The ideal currently-available investigation should allow ECG recording during a spontaneous episode of syncope, and when this is not possible, an implantable loop recorder may be considered. In the emergency room setting, acute causes of syncope must also be considered including severe cardiovascular compromise due to pulmonary, cardiac or vascular pathology. While not all patients will receive a conclusive diagnosis, risk-stratification in patients to guide appropriate investigations in the context of a diagnostic algorithm should allow a benign prognosis to be maintained. Copyright © 2014 Elsevier B.V. All rights reserved.

  13. Toward an Algorithmic Pedagogy

    Directory of Open Access Journals (Sweden)

    Holly Willis

    2007-01-01

    Full Text Available The demand for an expanded definition of literacy to accommodate visual and aural media is not particularly new, but it gains urgency as college students transform, becoming producers of media in many of their everyday social activities. The response among those who grapple with these issues as instructors has been to advocate for new definitions of literacy and particularly, an understanding of visual literacy. These efforts are exemplary, and promote a much needed rethinking of literacy and models of pedagogy. However, in what is more akin to a manifesto than a polished argument, this essay argues that we need to push farther: What if we moved beyond visual rhetoric, as well as a game-based pedagogy and the adoption of a broad range of media tools on campus, toward a pedagogy grounded fundamentally in a media ecology? Framing this investigation in terms of a media ecology allows us to take account of the multiply determining relationships wrought not just by individual media, but by the interrelationships, dependencies and symbioses that take place within the dynamic system that is today’s high-tech university. An ecological approach allows us to examine what happens when new media practices collide with computational models, providing a glimpse of possible transformations not only ways of being but ways of teaching and learning. How, then, may pedagogical practices be transformed computationally or algorithmically and to what ends?

  14. Parallel algorithms for placement and routing in VLSI design. Ph.D. Thesis

    Science.gov (United States)

    Brouwer, Randall Jay

    1991-01-01

    The computational requirements for high quality synthesis, analysis, and verification of very large scale integration (VLSI) designs have rapidly increased with the fast growing complexity of these designs. Research in the past has focused on the development of heuristic algorithms, special purpose hardware accelerators, or parallel algorithms for the numerous design tasks to decrease the time required for solution. Two new parallel algorithms are proposed for two VLSI synthesis tasks, standard cell placement and global routing. The first algorithm, a parallel algorithm for global routing, uses hierarchical techniques to decompose the routing problem into independent routing subproblems that are solved in parallel. Results are then presented which compare the routing quality to the results of other published global routers and which evaluate the speedups attained. The second algorithm, a parallel algorithm for cell placement and global routing, hierarchically integrates a quadrisection placement algorithm, a bisection placement algorithm, and the previous global routing algorithm. Unique partitioning techniques are used to decompose the various stages of the algorithm into independent tasks which can be evaluated in parallel. Finally, results are presented which evaluate the various algorithm alternatives and compare the algorithm performance to other placement programs. Measurements are presented on the parallel speedups available.

  15. THE QUASIPERIODIC AUTOMATED TRANSIT SEARCH ALGORITHM

    International Nuclear Information System (INIS)

    Carter, Joshua A.; Agol, Eric

    2013-01-01

    We present a new algorithm for detecting transiting extrasolar planets in time-series photometry. The Quasiperiodic Automated Transit Search (QATS) algorithm relaxes the usual assumption of strictly periodic transits by permitting a variable, but bounded, interval between successive transits. We show that this method is capable of detecting transiting planets with significant transit timing variations without any loss of significance ("smearing"), as would be incurred with traditional algorithms; however, this is at the cost of a slightly increased stochastic background. The approximate times of transit are standard products of the QATS search. Despite the increased flexibility, we show that QATS has a run-time complexity that is comparable to traditional search codes and is comparably easy to implement. QATS is applicable to data having a nearly uninterrupted, uniform cadence and is therefore well suited to the modern class of space-based transit searches (e.g., Kepler, CoRoT). Applications of QATS include transiting planets in dynamically active multi-planet systems and transiting planets in stellar binary systems.

  16. Streaming Algorithms for Line Simplification

    DEFF Research Database (Denmark)

    Abam, Mohammad; de Berg, Mark; Hachenberger, Peter

    2010-01-01

    this problem in a streaming setting, where we only have a limited amount of storage, so that we cannot store all the points. We analyze the competitive ratio of our algorithms, allowing resource augmentation: we let our algorithm maintain a simplification with 2k (internal) points and compare the error of our...

  17. Echo Cancellation I: Algorithms Simulation

    Directory of Open Access Journals (Sweden)

    P. Sovka

    2000-04-01

    Full Text Available An echo cancellation system used in mobile communications is analyzed. Convergence behavior and misadjustment of several LMS algorithms are compared. The misadjustment means errors in filter weight estimation. The resulting echo suppression for the discussed algorithms with simulated as well as real speech signals is evaluated. The optimal echo cancellation configuration is suggested.
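
    A minimal LMS adaptive filter of the kind being compared, as a sketch only (the filter length and step size are illustrative; x is the far-end reference signal driving the filter and d is the microphone signal containing the echo):

      def lms_echo_canceller(x, d, num_taps=64, mu=0.01):
          """Basic LMS adaptive filter: estimate the echo of x contained in d
          and return the error (echo-cancelled) signal and final weights."""
          w = [0.0] * num_taps
          buf = [0.0] * num_taps
          errors = []
          for n in range(len(x)):
              buf = [x[n]] + buf[:-1]                            # shift in newest sample
              y = sum(wi * xi for wi, xi in zip(w, buf))         # echo estimate
              e = d[n] - y                                       # residual after cancellation
              w = [wi + mu * e * xi for wi, xi in zip(w, buf)]   # LMS weight update
              errors.append(e)
          return errors, w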

  18. Look-ahead fermion algorithm

    International Nuclear Information System (INIS)

    Grady, M.

    1986-01-01

    I describe a fast fermion algorithm which utilizes pseudofermion fields but appears to have little or no systematic error. Test simulations on two-dimensional gauge theories are described. A possible justification for the algorithm being exact is discussed. 8 refs

  19. Global alignment algorithms implementations | Fatumo ...

    African Journals Online (AJOL)

    In this paper, we implemented the two routes for sequence comparison, that is, the dotplot and the Needleman-Wunsch algorithm for global sequence alignment. Our algorithms were implemented in the Python programming language and were tested on a Linux platform (1.60 GHz, 512 MB of RAM, SUSE versions 9.2 and 10.1).
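
    For reference, a compact Needleman-Wunsch scoring routine in Python might look as follows; the match, mismatch and gap scores are illustrative and not necessarily those used by the authors.

      def needleman_wunsch_score(a, b, match=1, mismatch=-1, gap=-2):
          """Fill the global-alignment dynamic-programming matrix and
          return the optimal alignment score of sequences a and b."""
          rows, cols = len(a) + 1, len(b) + 1
          score = [[0] * cols for _ in range(rows)]
          for i in range(1, rows):
              score[i][0] = i * gap
          for j in range(1, cols):
              score[0][j] = j * gap
          for i in range(1, rows):
              for j in range(1, cols):
                  diag = score[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch)
                  score[i][j] = max(diag, score[i-1][j] + gap, score[i][j-1] + gap)
          return score[rows-1][cols-1]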

  20. Recovery Rate of Clustering Algorithms

    NARCIS (Netherlands)

    Li, Fajie; Klette, Reinhard; Wada, T; Huang, F; Lin, S

    2009-01-01

    This article provides a simple and general way for defining the recovery rate of clustering algorithms using a given family of old clusters for evaluating the performance of the algorithm when calculating a family of new clusters. Under the assumption of dealing with simulated data (i.e., known old

  1. Diversity-Guided Evolutionary Algorithms

    DEFF Research Database (Denmark)

    Ursem, Rasmus Kjær

    2002-01-01

    Population diversity is undoubtedly a key issue in the performance of evolutionary algorithms. A common hypothesis is that high diversity is important to avoid premature convergence and to escape local optima. Various diversity measures have been used to analyze algorithms, but so far few algorit...

  2. Quantum algorithms and learning theory

    NARCIS (Netherlands)

    Arunachalam, S.

    2018-01-01

    This thesis studies strengths and weaknesses of quantum computers. In the first part we present three contributions to quantum algorithms. 1) consider a search space of N elements. One of these elements is "marked" and our goal is to find this. We describe a quantum algorithm to solve this problem

  3. Where are the parallel algorithms?

    Science.gov (United States)

    Voigt, R. G.

    1985-01-01

    Four paradigms that can be useful in developing parallel algorithms are discussed. These include computational complexity analysis, changing the order of computation, asynchronous computation, and divide and conquer. Each is illustrated with an example from scientific computation, and it is shown that computational complexity must be used with great care or an inefficient algorithm may be selected.

  4. Online co-regularized algorithms

    NARCIS (Netherlands)

    Ruijter, T. de; Tsivtsivadze, E.; Heskes, T.

    2012-01-01

    We propose an online co-regularized learning algorithm for classification and regression tasks. We demonstrate that by sequentially co-regularizing prediction functions on unlabeled data points, our algorithm provides improved performance in comparison to supervised methods on several UCI benchmarks

  5. Algorithms in combinatorial design theory

    CERN Document Server

    Colbourn, CJ

    1985-01-01

    The scope of the volume includes all algorithmic and computational aspects of research on combinatorial designs. Algorithmic aspects include generation, isomorphism and analysis techniques - both heuristic methods used in practice, and the computational complexity of these operations. The scope within design theory includes all aspects of block designs, Latin squares and their variants, pairwise balanced designs and projective planes and related geometries.

  6. Executable Pseudocode for Graph Algorithms

    NARCIS (Netherlands)

    B. Ó Nualláin (Breanndán)

    2015-01-01

    Algorithms are written in pseudocode. However, the implementation of an algorithm in a conventional, imperative programming language can often be scattered over hundreds of lines of code, thus obscuring its essence. This can lead to difficulties in understanding or verifying the

  7. On exact algorithms for treewidth

    NARCIS (Netherlands)

    Bodlaender, H.L.; Fomin, F.V.; Koster, A.M.C.A.; Kratsch, D.; Thilikos, D.M.

    2006-01-01

    We give experimental and theoretical results on the problem of computing the treewidth of a graph by exact exponential time algorithms using exponential space or using only polynomial space. We first report on an implementation of a dynamic programming algorithm for computing the treewidth of a

  8. Cascade Error Projection Learning Algorithm

    Science.gov (United States)

    Duong, T. A.; Stubberud, A. R.; Daud, T.

    1995-01-01

    A detailed mathematical analysis is presented for a new learning algorithm termed cascade error projection (CEP) and a general learning framework. This framework can be used to obtain the cascade correlation learning algorithm by choosing a particular set of parameters.

  9. Error Estimation for the Linearized Auto-Localization Algorithm

    Directory of Open Access Journals (Sweden)

    Fernando Seco

    2012-02-01

    Full Text Available The Linearized Auto-Localization (LAL) algorithm estimates the position of beacon nodes in Local Positioning Systems (LPSs), using only the distance measurements to a mobile node whose position is also unknown. The LAL algorithm calculates the inter-beacon distances, used for the estimation of the beacons’ positions, from the linearized trilateration equations. In this paper we propose a method to estimate the propagation of the errors of the inter-beacon distances obtained with the LAL algorithm, based on a first order Taylor approximation of the equations. Since the method depends on such approximation, a confidence parameter τ is defined to measure the reliability of the estimated error. Field evaluations showed that by applying this information to an improved weighted-based auto-localization algorithm (WLAL), the standard deviation of the inter-beacon distances can be improved by more than 30% on average with respect to the original LAL method.

  10. Packing Boxes into Multiple Containers Using Genetic Algorithm

    Science.gov (United States)

    Menghani, Deepak; Guha, Anirban

    2016-07-01

    Container loading problems have been studied extensively in the literature and various analytical, heuristic and metaheuristic methods have been proposed. This paper presents two different variants of a genetic algorithm framework for the three-dimensional container loading problem for optimally loading boxes into multiple containers with constraints. The algorithms are designed so that it is easy to incorporate various constraints found in real life problems. The algorithms are tested on data of standard test cases from literature and are found to compare well with the benchmark algorithms in terms of utilization of containers. This, along with the ability to easily incorporate a wide range of practical constraints, makes them attractive for implementation in real life scenarios.

  11. Optimizing graph algorithms on pregel-like systems

    KAUST Repository

    Salihoglu, Semih

    2014-03-01

    We study the problem of implementing graph algorithms efficiently on Pregel-like systems, which can be surprisingly challenging. Standard graph algorithms in this setting can incur unnecessary inefficiencies such as slow convergence or high communication or computation cost, typically due to structural properties of the input graphs such as large diameters or skew in component sizes. We describe several optimization techniques to address these inefficiencies. Our most general technique is based on the idea of performing some serial computation on a tiny fraction of the input graph, complementing Pregel's vertex-centric parallelism. We base our study on thorough implementations of several fundamental graph algorithms, some of which have, to the best of our knowledge, not been implemented on Pregel-like systems before. The algorithms and optimizations we describe are fully implemented in our open-source Pregel implementation. We present detailed experiments showing that our optimization techniques improve runtime significantly on a variety of very large graph datasets.

  12. Novel medical image enhancement algorithms

    Science.gov (United States)

    Agaian, Sos; McClendon, Stephen A.

    2010-01-01

    In this paper, we present two novel medical image enhancement algorithms. The first, a global image enhancement algorithm, utilizes an alpha-trimmed mean filter as its backbone to sharpen images. The second algorithm uses a cascaded unsharp masking technique to separate the high frequency components of an image in order for them to be enhanced using a modified adaptive contrast enhancement algorithm. Experimental results from enhancing electron microscopy, radiological, CT scan and MRI scan images, using the MATLAB environment, are then compared to the original images as well as other enhancement methods, such as histogram equalization and two forms of adaptive contrast enhancement. An image processing scheme for electron microscopy images of Purkinje cells will also be implemented and utilized as a comparison tool to evaluate the performance of our algorithm.
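
    The unsharp-masking step described above can be sketched as follows; this is a generic single-pass version with an assumed 3x3 box blur, whereas the paper's cascaded, adaptive variant is more involved.

      def unsharp_mask(image, amount=1.0):
          """Sharpen a 2D grayscale image (list of lists) by adding back the
          difference between the image and a 3x3 box-blurred copy of it."""
          h, w = len(image), len(image[0])
          def blur(i, j):
              vals = [image[i + di][j + dj]
                      for di in (-1, 0, 1) for dj in (-1, 0, 1)
                      if 0 <= i + di < h and 0 <= j + dj < w]
              return sum(vals) / len(vals)
          return [[image[i][j] + amount * (image[i][j] - blur(i, j))
                   for j in range(w)] for i in range(h)]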

  13. Elementary functions algorithms and implementation

    CERN Document Server

    Muller, Jean-Michel

    2016-01-01

    This textbook presents the concepts and tools necessary to understand, build, and implement algorithms for computing elementary functions (e.g., logarithms, exponentials, and the trigonometric functions). Both hardware- and software-oriented algorithms are included, along with issues related to accurate floating-point implementation. This third edition has been updated and expanded to incorporate the most recent advances in the field, new elementary function algorithms, and function software. After a preliminary chapter that briefly introduces some fundamental concepts of computer arithmetic, such as floating-point arithmetic and redundant number systems, the text is divided into three main parts. Part I considers the computation of elementary functions using algorithms based on polynomial or rational approximations and using table-based methods; the final chapter in this section deals with basic principles of multiple-precision arithmetic. Part II is devoted to a presentation of “shift-and-add” algorithm...

  14. A Cooperative Harmony Search Algorithm for Function Optimization

    Directory of Open Access Journals (Sweden)

    Gang Li

    2014-01-01

    Full Text Available Harmony search algorithm (HS) is a new metaheuristic algorithm which is inspired by a process involving musical improvisation. HS is a stochastic optimization technique that is similar to genetic algorithms (GAs) and particle swarm optimizers (PSOs). It has been widely applied in order to solve many complex optimization problems, including continuous and discrete problems, such as structure design and function optimization. A cooperative harmony search algorithm (CHS) is developed in this paper, with cooperative behavior being employed as a significant improvement to the performance of the original algorithm. Standard HS just uses one harmony memory and all the variables of the objective function are improvised within the harmony memory, while the proposed algorithm CHS uses multiple harmony memories, so that each harmony memory can optimize different components of the solution vector. The CHS was then applied to function optimization problems. The results of the experiment show that CHS is capable of finding better solutions when compared to HS and a number of other algorithms, especially in high-dimensional problems.
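
    The core HS improvisation step referred to above can be sketched as follows (parameter values are illustrative): each new variable is either drawn from the harmony memory, possibly pitch-adjusted, or chosen at random from its range.

      import random

      def improvise(harmony_memory, lower, upper, hmcr=0.9, par=0.3, bw=0.05):
          """Create one new harmony from the harmony memory.
          hmcr: memory-considering rate, par: pitch-adjusting rate, bw: bandwidth."""
          dims = len(lower)
          new = []
          for d in range(dims):
              if random.random() < hmcr:                 # take value from memory
                  value = random.choice(harmony_memory)[d]
                  if random.random() < par:              # pitch adjustment
                      value += random.uniform(-bw, bw) * (upper[d] - lower[d])
              else:                                      # random re-initialization
                  value = random.uniform(lower[d], upper[d])
              new.append(min(max(value, lower[d]), upper[d]))
          return new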

  15. Swarm algorithms with chaotic jumps for optimization of multimodal functions

    Science.gov (United States)

    Krohling, Renato A.; Mendel, Eduardo; Campos, Mauro

    2011-11-01

    In this article, the use of some well-known versions of particle swarm optimization (PSO), namely the canonical PSO, the bare bones PSO (BBPSO) and the fully informed particle swarm (FIPS), is investigated on multimodal optimization problems. A hybrid approach which consists of swarm algorithms combined with a jump strategy in order to escape from local optima is developed and tested. The jump strategy is based on the chaotic logistic map. The hybrid algorithm was tested for all three versions of PSO and simulation results show that the addition of the jump strategy improves the performance of swarm algorithms for most of the investigated optimization problems. Comparison with the off-the-shelf PSO with local topology (lbest model) has also been performed and indicates the superior performance of the standard PSO with chaotic jump over the standard PSO without it, both using the local topology (lbest model).
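
    The chaotic jump can be sketched as perturbing a stagnating particle with values generated by the logistic map z(k+1) = 4 z(k) (1 - z(k)); the way z is mapped onto the search range below is an illustrative assumption, not the authors' exact scheme.

      def chaotic_jump(position, lower, upper, z=0.37):
          """Perturb a (possibly stagnating) particle using the logistic map."""
          new_position = []
          for d, x in enumerate(position):
              z = 4.0 * z * (1.0 - z)                # chaotic logistic map, r = 4
              # jump: offset the coordinate by a chaos-driven fraction of the range
              new_position.append(x + (z - 0.5) * (upper[d] - lower[d]) * 0.1)
          return new_position, z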

  16. Machine-Learning Algorithms to Code Public Health Spending Accounts.

    Science.gov (United States)

    Brady, Eoghan S; Leider, Jonathon P; Resnick, Beth A; Alfonso, Y Natalia; Bishai, David

    Government public health expenditure data sets require time- and labor-intensive manipulation to summarize results that public health policy makers can use. Our objective was to compare the performances of machine-learning algorithms with manual classification of public health expenditures to determine if machines could provide a faster, cheaper alternative to manual classification. We used machine-learning algorithms to replicate the process of manually classifying state public health expenditures, using the standardized public health spending categories from the Foundational Public Health Services model and a large data set from the US Census Bureau. We obtained a data set of 1.9 million individual expenditure items from 2000 to 2013. We collapsed these data into 147 280 summary expenditure records, and we followed a standardized method of manually classifying each expenditure record as public health, maybe public health, or not public health. We then trained 9 machine-learning algorithms to replicate the manual process. We calculated recall, precision, and coverage rates to measure the performance of individual and ensembled algorithms. Compared with manual classification, the machine-learning random forests algorithm produced 84% recall and 91% precision. With algorithm ensembling, we achieved our target criterion of 90% recall by using a consensus ensemble of ≥6 algorithms while still retaining 93% coverage, leaving only 7% of the summary expenditure records unclassified. Machine learning can be a time- and cost-saving tool for estimating public health spending in the United States. It can be used with standardized public health spending categories based on the Foundational Public Health Services model to help parse public health expenditure information from other types of health-related spending, provide data that are more comparable across public health organizations, and evaluate the impact of evidence-based public health resource allocation.
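
    The consensus-ensemble rule described above (accept a label only when enough of the individual classifiers agree, otherwise leave the record unclassified) could be expressed roughly as follows; the record representation and the default threshold of six are assumptions for illustration.

      from collections import Counter

      def consensus_label(predictions, min_agreement=6):
          """predictions: labels from the individual classifiers for one record,
          e.g. ['public health', 'not public health', ...].
          Return the majority label if it reaches the agreement threshold,
          otherwise None (record left unclassified)."""
          label, count = Counter(predictions).most_common(1)[0]
          return label if count >= min_agreement else None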

  17. A kind of iteration algorithm for fast wave heating

    International Nuclear Information System (INIS)

    Zhu Xueguang; Kuang Guangli; Zhao Yanping; Li Youyi; Xie Jikang

    1998-03-01

    The standard normal distribution for particles in Tokamak geometry is usually assumed in fast wave heating. In fact, due to the quasi-linear diffusion effect, the parallel and perpendicular temperatures of resonant particles are not equal, so this assumption introduces some error. For this case, the Fokker-Planck equation is introduced, and an iteration algorithm is adopted to solve the problem.

  18. An efficient modified Elliptic Curve Digital Signature Algorithm | Kiros ...

    African Journals Online (AJOL)

    Many digital signature schemes based on Elliptic Curve Cryptography (ECC) have been proposed. Among these digital signatures, the Elliptic Curve Digital Signature Algorithm (ECDSA) is the widely standardized one. However, the verification process of ECDSA is slower than the signature generation process. Hence ...
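
    For orientation, standard ECDSA signing and verification (not the modified algorithm proposed in the paper) might look roughly like this with the Python cryptography package; the curve and hash choices are illustrative, and exact call signatures can differ between library versions.

      from cryptography.hazmat.primitives import hashes
      from cryptography.hazmat.primitives.asymmetric import ec

      private_key = ec.generate_private_key(ec.SECP256R1())   # NIST P-256 curve
      public_key = private_key.public_key()

      message = b"message to be signed"
      signature = private_key.sign(message, ec.ECDSA(hashes.SHA256()))

      # verification raises InvalidSignature if the signature does not match
      public_key.verify(signature, message, ec.ECDSA(hashes.SHA256()))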

  19. An enhanced fractal image denoising algorithm

    International Nuclear Information System (INIS)

    Lu Jian; Ye Zhongxing; Zou Yuru; Ye Ruisong

    2008-01-01

    In recent years, there has been significant development in image denoising using fractal-based methods. This paper presents an enhanced fractal predictive denoising algorithm for denoising images corrupted by additive white Gaussian noise (AWGN) by using a quadratic gray-level function. Meanwhile, a quantization method for the fractal gray-level coefficients of the quadratic function is proposed to strictly guarantee the contractivity requirement of the enhanced fractal coding, and in terms of the quality of the fractal representation measured by PSNR, the enhanced fractal image coding using the quadratic gray-level function generally performs better than the standard fractal coding using a linear gray-level function. Based on this enhanced fractal coding, the enhanced fractal image denoising is implemented by estimating the fractal gray-level coefficients of the quadratic function of the noiseless image from its noisy observation. Experimental results show that, compared with other standard fractal-based image denoising schemes using a linear gray-level function, the enhanced fractal denoising algorithm can improve the quality of the restored image efficiently.

  20. Periprosthetic joint infections: a clinical practice algorithm.

    Science.gov (United States)

    Volpe, Luigi; Indelli, Pier Francesco; Latella, Leonardo; Poli, Paolo; Yakupoglu, Jale; Marcucci, Massimiliano

    2014-01-01

    Periprosthetic joint infection (PJI) accounts for 25% of failed total knee arthroplasties (TKAs) and 15% of failed total hip arthroplasties (THAs). The purpose of the present study was to design a multidisciplinary diagnostic algorithm to detect a PJI as cause of a painful TKA or THA. From April 2010 to October 2012, 111 patients with suspected PJI were evaluated. The study group comprised 75 females and 36 males with an average age of 71 years (range, 48 to 94 years). Eighty-four patients had a painful THA, while 27 reported a painful TKA. The stepwise diagnostic algorithm, applied in all the patients, included: measurement of serum C-reactive protein (CRP) and erythrocyte sedimentation rate (ESR) levels; imaging studies, including standard radiological examination, standard technetium-99m-methylene diphosphonate (MDP) bone scan (if positive, confirmation by LeukoScan was obtained); and joint aspiration with analysis of synovial fluid. Following application of the stepwise diagnostic algorithm, 24 out of our 111 screened patients were classified as having a suspected PJI (21.7%). CRP and ESR levels were negative in 84 and positive in 17 cases; 93.7% of the patients had a positive technetium-labeled bone scan, and 23% a positive LeukoScan. Preoperative synovial fluid analysis was positive in 13.5%; analysis of synovial fluid obtained by preoperative aspiration showed a leucocyte count of > 3000 cells μ/l in 52% of the patients. The present study showed that the diagnosis of PJI requires the application of a multimodal diagnostic protocol in order to avoid complications related to surgical revision of a misdiagnosed "silent" PJI. Level IV, therapeutic case series.

  1. Electronic circuits, systems and standards the best of EDN

    CERN Document Server

    Hickman, Ian

    2013-01-01

    Electronic Circuits, Systems and Standards: The Best of EDN is a collection of 66 EDN articles. The topics covered in this collection are diverse but all are relevant to controlled circulation electronics. The coverage of the text includes topics about software and algorithms, such as simple random number algorithm; simple log algorithm; and efficient algorithm for repeated FFTs. The book also tackles measurement related topics, including test for identifying a Gaussian noise source; enhancing product reliability; and amplitude-locked loop speeds filter test. The text will be useful to student

  2. Portable Health Algorithms Test System

    Science.gov (United States)

    Melcher, Kevin J.; Wong, Edmond; Fulton, Christopher E.; Sowers, Thomas S.; Maul, William A.

    2010-01-01

    A document discusses the Portable Health Algorithms Test (PHALT) System, which has been designed as a means for evolving the maturity and credibility of algorithms developed to assess the health of aerospace systems. Comprising an integrated hardware-software environment, the PHALT system allows systems health management algorithms to be developed in a graphical programming environment, to be tested and refined using system simulation or test data playback, and to be evaluated in a real-time hardware-in-the-loop mode with a live test article. The integrated hardware and software development environment provides a seamless transition from algorithm development to real-time implementation. The portability of the hardware makes it quick and easy to transport between test facilities. This hardware/software architecture is flexible enough to support a variety of diagnostic applications and test hardware, and the GUI-based rapid prototyping capability is sufficient to support development, execution, and testing of custom diagnostic algorithms. The PHALT operating system supports execution of diagnostic algorithms under real-time constraints. PHALT can perform real-time capture and playback of test rig data with the ability to augment/modify the data stream (e.g. inject simulated faults). It performs algorithm testing using a variety of data input sources, including real-time data acquisition, test data playback, and system simulations, and also provides system feedback to evaluate closed-loop diagnostic response and mitigation control.

  3. A parallel version of a multigrid algorithm for isotropic transport equations

    International Nuclear Information System (INIS)

    Manteuffel, T.; McCormick, S.; Yang, G.; Morel, J.; Oliveira, S.

    1994-01-01

    The focus of this paper is on a parallel algorithm for solving the transport equations in a slab geometry using multigrid. The spatial discretization scheme used is a finite element method called the modified linear discontinuous (MLD) scheme. The MLD scheme represents a lumped version of the standard linear discontinuous (LD) scheme. The parallel algorithm was implemented on the Connection Machine 2 (CM2). Convergence rates and timings for this algorithm on the CM2 and Cray-YMP are shown

  4. Learning from nature: Nature-inspired algorithms

    DEFF Research Database (Denmark)

    Albeanu, Grigore; Madsen, Henrik; Popentiu-Vladicescu, Florin

    2016-01-01

    During last decade, the nature has inspired researchers to develop new algorithms. The largest collection of nature-inspired algorithms is biology-inspired: swarm intelligence (particle swarm optimization, ant colony optimization, cuckoo search, bees' algorithm, bat algorithm, firefly algorithm etc...

  5. Complex networks an algorithmic perspective

    CERN Document Server

    Erciyes, Kayhan

    2014-01-01

    Network science is a rapidly emerging field of study that encompasses mathematics, computer science, physics, and engineering. A key issue in the study of complex networks is to understand the collective behavior of the various elements of these networks.Although the results from graph theory have proven to be powerful in investigating the structures of complex networks, few books focus on the algorithmic aspects of complex network analysis. Filling this need, Complex Networks: An Algorithmic Perspective supplies the basic theoretical algorithmic and graph theoretic knowledge needed by every r

  6. An investigation of genetic algorithms

    International Nuclear Information System (INIS)

    Douglas, S.R.

    1995-04-01

    Genetic algorithms mimic biological evolution by natural selection in their search for better individuals within a changing population. They can be used as efficient optimizers. This report discusses the developing field of genetic algorithms. It gives a simple example of the search process and introduces the concept of schema. It also discusses modifications to the basic genetic algorithm that result in species and niche formation, in machine learning and artificial evolution of computer programs, and in the streamlining of human-computer interaction. (author). 3 refs., 1 tab., 2 figs

  7. Instance-specific algorithm configuration

    CERN Document Server

    Malitsky, Yuri

    2014-01-01

    This book presents a modular and expandable technique in the rapidly emerging research area of automatic configuration and selection of the best algorithm for the instance at hand. The author presents the basic model behind ISAC and then details a number of modifications and practical applications. In particular, he addresses automated feature generation, offline algorithm configuration for portfolio generation, algorithm selection, adaptive solvers, online tuning, and parallelization.    The author's related thesis was honorably mentioned (runner-up) for the ACP Dissertation Award in 2014,

  8. Quantum Computations: Fundamentals and Algorithms

    International Nuclear Information System (INIS)

    Duplij, S.A.; Shapoval, I.I.

    2007-01-01

    Basic concepts of quantum information theory, the principles of quantum computation, and the possibility of building on this basis a device unique in computational power and operating principle, named the quantum computer, are discussed. The main building blocks of quantum logic and schemes for implementing quantum computations are presented, as well as some effective quantum algorithms known today that are intended to realize the advantages of quantum computation over classical computation. Among them, a special place is taken by Shor's algorithm for number factorization and Grover's algorithm for unsorted database search. The phenomenon of decoherence, its influence on the stability of a quantum computer, and methods of quantum error correction are described

  9. Algorithms Design Techniques and Analysis

    CERN Document Server

    Alsuwaiyel, M H

    1999-01-01

    Problem solving is an essential part of every scientific discipline. It has two components: (1) problem identification and formulation, and (2) solution of the formulated problem. One can solve a problem on its own using ad hoc techniques or follow those techniques that have produced efficient solutions to similar problems. This requires the understanding of various algorithm design techniques, how and when to use them to formulate solutions and the context appropriate for each of them. This book advocates the study of algorithm design techniques by presenting most of the useful algorithm desi

  10. Subcubic Control Flow Analysis Algorithms

    DEFF Research Database (Denmark)

    Midtgaard, Jan; Van Horn, David

    We give the first direct subcubic algorithm for performing control flow analysis of higher-order functional programs. Despite the long-held belief that inclusion-based flow analysis could not surpass the "cubic bottleneck," we apply known set compression techniques to obtain an algorithm that runs in time O(n^3/log n) on a unit-cost random-access memory model machine. Moreover, we refine the initial flow analysis into two more precise analyses incorporating notions of reachability. We give subcubic algorithms for these more precise analyses and relate them to an existing analysis from...

  11. A voting-based star identification algorithm utilizing local and global distribution

    Science.gov (United States)

    Fan, Qiaoyun; Zhong, Xuyang; Sun, Junhua

    2018-03-01

    A novel star identification algorithm based on a voting scheme is presented in this paper. In the proposed algorithm, the global distribution and local distribution of sensor stars are fully utilized, and a stratified voting scheme is adopted to obtain the candidates for sensor stars. Database optimization is employed to reduce the memory requirement and improve the robustness of the proposed algorithm. The simulation shows that the proposed algorithm exhibits a 99.81% identification rate with 2-pixel standard deviations of positional noise and 0.322-Mv magnitude noise. Compared with two similar algorithms, the proposed algorithm is more robust to noise, and its average identification time and required memory are lower. Furthermore, the real sky test shows that the proposed algorithm performs well on real star images.

  12. An Improved SPEA2 Algorithm with Adaptive Selection of Evolutionary Operators Scheme for Multiobjective Optimization Problems

    Directory of Open Access Journals (Sweden)

    Fuqing Zhao

    2016-01-01

    Full Text Available A fixed evolutionary mechanism is usually adopted in multiobjective evolutionary algorithms, and their operators are static during the evolutionary process, which prevents the algorithm from fully exploiting the search space and makes it easy to trap in local optima. In this paper, a SPEA2 algorithm based on adaptive selection of evolution operators (AOSPEA) is proposed. The proposed algorithm can adaptively select simulated binary crossover, polynomial mutation, and the differential evolution operator during the evolutionary process according to their contribution to the external archive. Meanwhile, the convergence performance of the proposed algorithm is analyzed with a Markov chain. Simulation results on the standard benchmark functions reveal that the proposed algorithm outperforms the other classical multiobjective evolutionary algorithms.
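
    A minimal sketch of the adaptive operator-selection idea described above: each operator accumulates credit when its offspring enter the external archive, and operators are then drawn with probability proportional to their credit. The operator names, the credit decay, and the floor value are illustrative assumptions, not the paper's exact scheme.

        import random

        # Sketch of adaptive operator selection driven by archive contributions.
        # Credit update rule and parameters are illustrative assumptions.

        class OperatorSelector:
            def __init__(self, operators, floor=0.1):
                self.credit = {op: 1.0 for op in operators}  # start with equal credit
                self.floor = floor                           # keeps every operator selectable

            def pick(self):
                total = sum(self.credit.values())
                r, acc = random.uniform(0, total), 0.0
                for op, c in self.credit.items():
                    acc += c
                    if r <= acc:
                        return op
                return op

            def reward(self, op, entered_archive):
                # Raise credit when the offspring produced by `op` enters the archive.
                self.credit[op] = max(self.floor,
                                      0.9 * self.credit[op] + (1.0 if entered_archive else 0.0))

        selector = OperatorSelector(["sbx_crossover", "polynomial_mutation", "differential_evolution"])
        op = selector.pick()          # choose an operator for this reproduction step
        selector.reward(op, True)     # report whether its offspring survived into the archive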

  13. Active Semisupervised Clustering Algorithm with Label Propagation for Imbalanced and Multidensity Datasets

    Directory of Open Access Journals (Sweden)

    Mingwei Leng

    2013-01-01

    Full Text Available The accuracy of most of the existing semisupervised clustering algorithms based on a small amount of labeled data is low when dealing with multidensity and imbalanced datasets, and labeling data is quite expensive and time consuming in many real-world applications. This paper focuses on active data selection and semisupervised clustering in multidensity and imbalanced datasets and proposes an active semisupervised clustering algorithm. The proposed algorithm uses an active mechanism for data selection to minimize the amount of labeled data, and it utilizes a multithreshold scheme to expand the labeled datasets on multidensity and imbalanced datasets. Three standard datasets and one synthetic dataset are used to demonstrate the proposed algorithm, and the experimental results show that the proposed semisupervised clustering algorithm has higher accuracy and more stable performance in comparison to other clustering and semisupervised clustering algorithms, especially when the datasets are multidensity and imbalanced.

  14. Hybrid fuzzy charged system search algorithm based state estimation in distribution networks

    Directory of Open Access Journals (Sweden)

    Sachidananda Prasad

    2017-06-01

    Full Text Available This paper proposes a new hybrid charged system search (CSS) algorithm based state estimation in radial distribution networks in a fuzzy framework. The objective of the optimization problem is to minimize the weighted square of the difference between the measured and the estimated quantities. The proposed method of state estimation considers bus voltage magnitude and phase angle as state variables, along with some equality and inequality constraints, for state estimation in distribution networks. A rule-based fuzzy inference system has been designed to control the parameters of the CSS algorithm to achieve a better balance between the exploration and exploitation capabilities of the algorithm. The efficiency of the proposed fuzzy adaptive charged system search (FACSS) algorithm has been tested on the standard IEEE 33-bus system and an Indian 85-bus practical radial distribution system. The obtained results have been compared with the conventional CSS algorithm, the weighted least squares (WLS) algorithm, and particle swarm optimization (PSO) to demonstrate the feasibility of the algorithm.

  15. A Branch-and-bound Algorithm for the Network Diversion Problem

    National Research Council Canada - National Science Library

    Erken, Ozgur

    2002-01-01

    ...). We develop and test a specialized branch-and-bound algorithm for this NP-complete problem. The algorithm is based on partitioning the solution space with respect to edges in certain s-t cuts and yields a non-standard, non-binary enumeration tree...

  16. Algorithm for output of floating-point numbers in fixed-point form ...

    African Journals Online (AJOL)

    Presented in this paper is an algorithm with which floating-point numbers can be converted to their American Standard Code for Information Interchange (ASCII) equivalents in fixed-point form ready for output. The algorithm is so written that it can be implemented easily and requires just the address of the buffer to contain ...

  17. Determination of the Three-Dimensional Rate of Cancer Cell Rotation in an Optically-Induced Electrokinetics Chip Using an Optical Flow Algorithm

    Directory of Open Access Journals (Sweden)

    Yuliang Zhao

    2018-03-01

    Full Text Available Our group has reported that Melan-A cells and lymphocytes undergo self-rotation in a homogeneous AC electric field, and found that the rotation velocity of these cells is a key indicator to characterize their physical properties. However, determining the rotation properties of a cell by human eyes is both tedious and time consuming, and not always accurate. In this paper, a method is presented to more accurately determine the 3D cell rotation velocity and axis from a 2D image sequence captured by a single camera. Using the optical flow method, we obtained the 2D motion field data from the image sequence and back-projected it onto a 3D sphere model, and then the rotation axis and velocity of the cell were calculated. After testing the algorithm on animated image sequences, experiments were also performed on image sequences of real rotating cells. All of these results indicate that this method is accurate, practical, and useful. Furthermore, the method presented here can also be used to determine the 3D rotation velocity of other types of spherical objects that are commonly used in microfluidic applications, such as beads and microparticles.
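
    The final step of the pipeline, recovering a rotation axis and speed from back-projected velocities on a sphere, can be sketched as the least-squares problem v_i = w x r_i solved for the rotation vector w. This is an illustrative reconstruction using NumPy, not the authors' implementation; the optical-flow and back-projection stages are assumed to have already produced the points and velocities.

        import numpy as np

        # Sketch: recover the rotation vector w (axis * angular speed) of a rigid sphere
        # from 3D surface points r_i and their velocities v_i, using v_i = w x r_i.

        def skew(r):
            x, y, z = r
            return np.array([[0.0, -z,  y],
                             [z,  0.0, -x],
                             [-y,  x, 0.0]])

        def rotation_from_flow(points, velocities):
            # v = w x r  =>  v = -[r]_x w ; stack all points, solve in least squares
            A = np.vstack([-skew(r) for r in points])
            b = np.concatenate(velocities)
            w, *_ = np.linalg.lstsq(A, b, rcond=None)
            speed = np.linalg.norm(w)              # angular speed, rad per frame
            axis = w / speed if speed > 0 else w   # unit rotation axis
            return axis, speed

        # Toy check: points on a unit sphere rotating about the z-axis at 0.1 rad/frame
        rng = np.random.default_rng(0)
        pts = rng.normal(size=(50, 3))
        pts /= np.linalg.norm(pts, axis=1, keepdims=True)
        true_w = np.array([0.0, 0.0, 0.1])
        vels = [np.cross(true_w, p) for p in pts]
        print(rotation_from_flow(pts, vels))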

  18. Adaptive Maneuvering Target Tracking Algorithm

    Directory of Open Access Journals (Sweden)

    Chunling Wu

    2014-07-01

    Full Text Available Based on the current statistical model, a new adaptive maneuvering target tracking algorithm, CS-MSTF, is presented. The new algorithm keeps the merits of high tracking precision that the current statistical model and the strong tracking filter (STF) have in tracking maneuvering targets, and makes the following modifications: First, STF has the defect that it achieves its excellent performance in the maneuvering segment at the cost of precision in the non-maneuvering segment, so the new algorithm modifies the prediction error covariance matrix and the fading factor to improve the tracking precision in both the maneuvering and non-maneuvering segments. Second, the estimation error covariance matrix is calculated using the Joseph form, which is more numerically stable and robust. The Monte Carlo simulation shows that the CS-MSTF algorithm has more excellent performance than CS-STF and can estimate efficiently.

  19. Recursive Algorithm For Linear Regression

    Science.gov (United States)

    Varanasi, S. V.

    1988-01-01

    Order of model determined easily. Linear-regression algorithm includes recursive equations for coefficients of model of increased order. Algorithm eliminates duplicative calculations, facilitates search for minimum order of linear-regression model fitting set of data satisfactorily.

  20. Designing algorithms using CAD technologies

    Directory of Open Access Journals (Sweden)

    Alin IORDACHE

    2008-01-01

    Full Text Available A representative example of an eLearning-platform modular application, ‘Logical diagrams’, is intended to be a useful learning and testing tool for the beginner programmer, but also for the more experienced one. The problem this application is trying to solve concerns young programmers who forget about the fundamentals of this domain, algorithmics. Logical diagrams are a graphic representation of an algorithm, which uses different geometrical figures (parallelograms, rectangles, rhombuses, circles) with particular meanings, called blocks, connected to one another to reveal the flow of the algorithm. The role of this application is to help the user build the diagram for the algorithm and then automatically generate the C code and test it.

  1. A quantum causal discovery algorithm

    Science.gov (United States)

    Giarmatzi, Christina; Costa, Fabio

    2018-03-01

    Finding a causal model for a set of classical variables is now a well-established task—but what about the quantum equivalent? Even the notion of a quantum causal model is controversial. Here, we present a causal discovery algorithm for quantum systems. The input to the algorithm is a process matrix describing correlations between quantum events. Its output consists of different levels of information about the underlying causal model. Our algorithm determines whether the process is causally ordered by grouping the events into causally ordered non-signaling sets. It detects if all relevant common causes are included in the process, which we label Markovian, or alternatively if some causal relations are mediated through some external memory. For a Markovian process, it outputs a causal model, namely the causal relations and the corresponding mechanisms, represented as quantum states and channels. Our algorithm opens the route to more general quantum causal discovery methods.

  2. Multiagent scheduling models and algorithms

    CERN Document Server

    Agnetis, Alessandro; Gawiejnowicz, Stanisław; Pacciarelli, Dario; Soukhal, Ameur

    2014-01-01

    This book presents multi-agent scheduling models in which subsets of jobs sharing the same resources are evaluated by different criteria. It discusses complexity results, approximation schemes, heuristics and exact algorithms.

  3. Efficient Algorithms for Subgraph Listing

    Directory of Open Access Journals (Sweden)

    Niklas Zechner

    2014-05-01

    Full Text Available Subgraph isomorphism is a fundamental problem in graph theory. In this paper we focus on listing subgraphs isomorphic to a given pattern graph. First, we look at the algorithm due to Chiba and Nishizeki for listing complete subgraphs of fixed size, and show that it cannot be extended to general subgraphs of fixed size. Then, we consider the algorithm due to Ga̧sieniec et al. for finding multiple witnesses of a Boolean matrix product, and use it to design a new output-sensitive algorithm for listing all triangles in a graph. As a corollary, we obtain an output-sensitive algorithm for listing subgraphs and induced subgraphs isomorphic to an arbitrary fixed pattern graph.

  4. Predicting a single HIV drug resistance measure from three international interpretation gold standards.

    Science.gov (United States)

    Yashik, Singh; Maurice, Mars

    2012-07-01

    To investigate the possibility of combining the interpretations of three gold standard interpretation algorithms using weighted heuristics in order to produce a single resistance measure. The outputs of HIVdb, Rega, and ANRS were combined to obtain a single resistance profile using the equally weighted voting algorithm, the accuracy-based weighted voting algorithm, and the Bayesian-based weighted voting algorithm. The Bayesian-based voting combination increased the accuracy of the resistance profile prediction compared to phenotype, from 58% to 69%. The equally weighted voting algorithm and the accuracy-based algorithm both increased the prediction accuracy to 60%. From the results obtained it is evident that combining the gold standard interpretation algorithms may increase the predictive ability of the individual interpretation algorithms. Copyright © 2012 Hainan Medical College. Published by Elsevier B.V. All rights reserved.
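
    A minimal sketch of the voting combination: each interpretation algorithm casts its call, and the calls are tallied with either equal or accuracy-based weights. The labels and weight values below are illustrative; the Bayesian-weighted variant evaluated in the paper is not shown.

        # Combine three resistance calls (e.g. from HIVdb, Rega, ANRS) by weighted voting.
        # Weights are illustrative, not the accuracies reported in the paper.

        def weighted_vote(calls, weights=None):
            """calls: dict algorithm -> label; weights: optional dict algorithm -> weight."""
            if weights is None:                      # equally weighted voting
                weights = {name: 1.0 for name in calls}
            tally = {}
            for name, label in calls.items():
                tally[label] = tally.get(label, 0.0) + weights[name]
            return max(tally, key=tally.get)

        calls = {"HIVdb": "resistant", "Rega": "resistant", "ANRS": "susceptible"}
        print(weighted_vote(calls))                                               # equal weights
        print(weighted_vote(calls, {"HIVdb": 0.62, "Rega": 0.58, "ANRS": 0.55}))  # accuracy-based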

  5. A retrodictive stochastic simulation algorithm

    International Nuclear Information System (INIS)

    Vaughan, T.G.; Drummond, P.D.; Drummond, A.J.

    2010-01-01

    In this paper we describe a simple method for inferring the initial states of systems evolving stochastically according to master equations, given knowledge of the final states. This is achieved through the use of a retrodictive stochastic simulation algorithm which complements the usual predictive stochastic simulation approach. We demonstrate the utility of this new algorithm by applying it to example problems, including the derivation of likely ancestral states of a gene sequence given a Markovian model of genetic mutation.

  6. Autonomous algorithms for image restoration

    OpenAIRE

    Griniasty, Meir

    1994-01-01

    We describe a general theoretical framework for algorithms that adaptively tune all their parameters during the restoration of a noisy image. The adaptation procedure is based on a mean field approach which is known as "Deterministic Annealing", and is reminiscent of the "Deterministic Boltzmann Machine". The algorithm is less time consuming in comparison with its simulated annealing alternative. We apply the theory to several architectures and compare their performances.

  7. New algorithms for parallel MRI

    International Nuclear Information System (INIS)

    Anzengruber, S; Ramlau, R; Bauer, F; Leitao, A

    2008-01-01

    Magnetic Resonance Imaging with parallel data acquisition requires algorithms for reconstructing the patient's image from a small number of measured lines of the Fourier domain (k-space). In contrast to well-known algorithms like SENSE and GRAPPA and their flavors, we consider the problem as a non-linear inverse problem. However, in order to avoid cost-intensive derivatives we will use Landweber-Kaczmarz iteration and, in order to improve the overall results, some additional sparsity constraints.

  8. When the greedy algorithm fails

    OpenAIRE

    Bang-Jensen, Jørgen; Gutin, Gregory; Yeo, Anders

    2004-01-01

    We provide a characterization of the cases when the greedy algorithm may produce the unique worst possible solution for the problem of finding a minimum weight base in an independence system when the weights are taken from a finite range. We apply this theorem to TSP and the minimum bisection problem. The practical message of this paper is that the greedy algorithm should be used with great care, since for many optimization problems its usage seems impractical even for generating a starting s...

  9. A* Algorithm for Graphics Processors

    OpenAIRE

    Inam, Rafia; Cederman, Daniel; Tsigas, Philippas

    2010-01-01

    Today's computer games have thousands of agents moving at the same time in areas inhabited by a large number of obstacles. In such an environment it is important to be able to calculate multiple shortest paths concurrently in an efficient manner. The highly parallel nature of the graphics processor suits this scenario perfectly. We have implemented a graphics processor based version of the A* path finding algorithm together with three algorithmic improvements that allow it to work faster and ...

  10. Algorithm for programming function generators

    International Nuclear Information System (INIS)

    Bozoki, E.

    1981-01-01

    The present paper deals with a mathematical problem, encountered when driving a fully programmable μ-processor controlled function generator. An algorithm is presented to approximate a desired function by a set of straight segments in such a way that additional restrictions (hardware imposed) are also satisfied. A computer program which incorporates this algorithm and automatically generates the necessary input for the function generator for a broad class of desired functions is also described
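
    A hedged sketch of the segment-fitting idea: approximate a desired function by straight segments, greedily extending each segment while the chord stays within a tolerance of the function. The hardware-imposed restrictions handled by the paper's algorithm are not modeled here; the tolerance, step size, and probe count are illustrative.

        import math

        # Sketch: approximate f on [a, b] by straight segments whose chords stay
        # within a tolerance eps of f. Greedy segment extension.

        def segment_function(f, a, b, eps=1e-2, step=None, probe=20):
            step = step or (b - a) / 1000.0
            breakpoints = [a]
            x0 = a
            while x0 < b:
                x1 = min(x0 + step, b)
                while x1 < b:
                    x2 = min(x1 + step, b)
                    # check the chord from x0 to x2 against f at a few probe points
                    chord_ok = all(
                        abs(f(x0 + t * (x2 - x0)) - (f(x0) + t * (f(x2) - f(x0)))) <= eps
                        for t in (i / probe for i in range(1, probe)))
                    if not chord_ok:
                        break
                    x1 = x2
                breakpoints.append(x1)
                x0 = x1
            return breakpoints

        print(segment_function(math.sin, 0.0, math.pi, eps=0.01))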

  11. Cascade Error Projection: A New Learning Algorithm

    Science.gov (United States)

    Duong, T. A.; Stubberud, A. R.; Daud, T.; Thakoor, A. P.

    1995-01-01

    A new neural network architecture and a hardware implementable learning algorithm is proposed. The algorithm, called cascade error projection (CEP), handles lack of precision and circuit noise better than existing algorithms.

  12. Standard random number generation for MBASIC

    Science.gov (United States)

    Tausworthe, R. C.

    1976-01-01

    A machine-independent algorithm is presented and analyzed for generating pseudorandom numbers suitable for the standard MBASIC system. The algorithm used is the polynomial congruential or linear recurrence modulo 2 method. Numbers, formed as nonoverlapping adjacent 28-bit words taken from the bit stream produced by the formula a(m+532) = a(m+37) + a(m) (modulo 2), do not repeat within the projected age of the solar system, show no ensemble correlation, exhibit uniform distribution of adjacent numbers up to 19 dimensions, and do not deviate from random runs-up and runs-down behavior.
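
    A sketch of the recurrence and the 28-bit word packing, assuming an arbitrary random seeding (the MBASIC initialization procedure is not reproduced here):

        from collections import deque
        import random

        # Linear recurrence mod 2: a(m+532) = a(m+37) + a(m) (mod 2), with the output
        # stream packed into nonoverlapping 28-bit words. Seeding is arbitrary here.

        class Tausworthe:
            def __init__(self, seed_bits=None):
                rng = random.Random(12345)
                self.state = deque(seed_bits or [rng.randint(0, 1) for _ in range(532)],
                                   maxlen=532)

            def next_bit(self):
                bit = (self.state[37] + self.state[0]) % 2   # a(m+37) + a(m) mod 2
                self.state.popleft()                          # advance the sequence
                self.state.append(bit)
                return bit

            def next_word(self, width=28):
                # take 28 adjacent, nonoverlapping bits as one pseudorandom integer
                word = 0
                for _ in range(width):
                    word = (word << 1) | self.next_bit()
                return word

        gen = Tausworthe()
        print([gen.next_word() for _ in range(4)])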

  13. Rotational Invariant Dimensionality Reduction Algorithms.

    Science.gov (United States)

    Lai, Zhihui; Xu, Yong; Yang, Jian; Shen, Linlin; Zhang, David

    2017-11-01

    A common intrinsic limitation of the traditional subspace learning methods is the sensitivity to the outliers and the image variations of the object since they use the norm as the metric. In this paper, a series of methods based on the -norm are proposed for linear dimensionality reduction. Since the -norm based objective function is robust to the image variations, the proposed algorithms can perform robust image feature extraction for classification. We use different ideas to design different algorithms and obtain a unified rotational invariant (RI) dimensionality reduction framework, which extends the well-known graph embedding algorithm framework to a more generalized form. We provide the comprehensive analyses to show the essential properties of the proposed algorithm framework. This paper indicates that the optimization problems have global optimal solutions when all the orthogonal projections of the data space are computed and used. Experimental results on popular image datasets indicate that the proposed RI dimensionality reduction algorithms can obtain competitive performance compared with the previous norm based subspace learning algorithms.

  14. Artificial Flora (AF) Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Long Cheng

    2018-02-01

    Full Text Available Inspired by the process of migration and reproduction of flora, this paper proposes a novel artificial flora (AF) algorithm. This algorithm can be used to solve some complex, non-linear, discrete optimization problems. Although a plant cannot move, it can spread seeds within a certain range to let its offspring find the most suitable environment. The stochastic process is easy to copy, and the spreading space is vast; therefore, it is suitable for use in intelligent optimization algorithms. First, the algorithm randomly generates the original plant, including its position and the propagation distance. Then, the position and the propagation distance of the original plant are substituted as parameters into the propagation function to generate offspring plants. Finally, the optimal offspring is selected as a new original plant through the selection function. The previous original plant becomes the former plant. The iteration continues until the optimal solution is found. In this paper, six classical evaluation functions are used as the benchmark functions. The simulation results show that the proposed algorithm has high accuracy and stability compared with the classical particle swarm optimization and artificial bee colony algorithms.

  15. Standards and Customer Service: Employees Behavior towards Customers

    Directory of Open Access Journals (Sweden)

    Venelin Terziev

    2017-09-01

    Full Text Available Ensuring effective customer service requires targeted efforts in a number of areas, one of which is to develop service standards for each market segment. The development and implementation of standards requires the organization to accurately determine customer service types, the cost of providing alternative services, and measures for measuring and controlling the services provided. At the core of the developed and implemented standards is the development and establishment of the customer service policy, which should start with a consumer demand analysis. The definition of the customer service level should allow for quantitative measurement, because a vague and unquantifiable policy does not provide opportunities for evaluation and control of the activities and expenses of customer service. When developing service standards, it is appropriate to apply an algorithm that focuses primarily on standards related to employee behavior towards customers. This paper explores the need and capability to develop customer service standards and provides an algorithm for developing standards for employee behavior toward customers.

  16. Standards for Standardized Logistic Regression Coefficients

    Science.gov (United States)

    Menard, Scott

    2011-01-01

    Standardized coefficients in logistic regression analysis have the same utility as standardized coefficients in linear regression analysis. Although there has been no consensus on the best way to construct standardized logistic regression coefficients, there is now sufficient evidence to suggest a single best approach to the construction of a…

  17. Belief Bisimulation for Hidden Markov Models Logical Characterisation and Decision Algorithm

    DEFF Research Database (Denmark)

    Jansen, David N.; Nielson, Flemming; Zhang, Lijun

    2012-01-01

    This paper establishes connections between logical equivalences and bisimulation relations for hidden Markov models (HMM). Both standard and belief state bisimulations are considered. We also present decision algorithms for the bisimilarities. For standard bisimilarity, an extension of the usual partition refinement algorithm is enough. Belief bisimilarity, being a relation on the continuous space of belief states, cannot be described directly. Instead, we show how to generate a linear equation system in time cubic in the number of states.

  18. Compressive Sensing Image Fusion Based on Particle Swarm Optimization Algorithm

    Science.gov (United States)

    Li, X.; Lv, J.; Jiang, S.; Zhou, H.

    2017-09-01

    In order to solve the problems that spatial matching is difficult and spectral distortion is large in traditional pixel-level image fusion algorithms, we propose a new method of image fusion that utilizes the HIS transformation and the recently developed theory of compressive sensing, called HIS-CS image fusion. In this algorithm, the particle swarm optimization algorithm is used to select the fusion coefficient ω. In the iterative process, the image fusion coefficient ω is taken as a particle, and the optimal value is obtained by combining it with the optimal objective function. Then we use the compression-aware weighted fusion algorithm for remote sensing image fusion, taking the coefficient ω as the weight value. The algorithm ensures the optimal selection of the fusion effect with a certain degree of self-adaptability. To evaluate the fused images, this paper uses five index parameters: Entropy, Standard Deviation, Average Gradient, Degree of Distortion, and Peak Signal-to-Noise Ratio. The experimental results show that the image fusion effect of the algorithm in this paper is better than that of traditional methods.

  19. Algebraic Algorithm Design and Local Search

    National Research Council Canada - National Science Library

    Graham, Robert

    1996-01-01

    .... Algebraic techniques have been applied successfully to algorithm synthesis by the use of algorithm theories and design tactics, an approach pioneered in the Kestrel Interactive Development System (KIDS...

  20. Golden Sine Algorithm: A Novel Math-Inspired Algorithm

    Directory of Open Access Journals (Sweden)

    TANYILDIZI, E.

    2017-05-01

    Full Text Available In this study, the Golden Sine Algorithm (Gold-SA) is presented as a new metaheuristic method for solving optimization problems. Gold-SA has been developed as a new population-based search algorithm. This math-based algorithm is inspired by the sine, a trigonometric function. In the algorithm, as many random individuals as the number of search agents are created with a uniform distribution for each dimension. The Gold-SA operator searches for a better solution in each iteration by trying to bring the current position closer to the target value. The solution space is narrowed by the golden section, so that only the areas that are expected to give good results are scanned instead of the whole solution space. In the tests performed, it is seen that Gold-SA achieves better results than other population-based methods. In addition, Gold-SA has fewer algorithm-dependent parameters and operators than other metaheuristic methods, which increases the importance of this method by providing faster convergence.

  1. Linking mothers and infants within electronic health records: a comparison of deterministic and probabilistic algorithms.

    Science.gov (United States)

    Baldwin, Eric; Johnson, Karin; Berthoud, Heidi; Dublin, Sascha

    2015-01-01

    To compare probabilistic and deterministic algorithms for linking mothers and infants within electronic health records (EHRs) to support pregnancy outcomes research. The study population was women enrolled in Group Health (Washington State, USA) delivering a liveborn infant from 2001 through 2008 (N = 33,093 deliveries) and infant members born in these years. We linked women to infants by surname, address, and dates of birth and delivery using deterministic and probabilistic algorithms. In a subset previously linked using "gold standard" identifiers (N = 14,449), we assessed each approach's sensitivity and positive predictive value (PPV). For deliveries with no "gold standard" linkage (N = 18,644), we compared the algorithms' linkage proportions. We repeated our analyses in an independent test set of deliveries from 2009 through 2013. We reviewed medical records to validate a sample of pairs apparently linked by one algorithm but not the other (N = 51 or 1.4% of discordant pairs). In the 2001-2008 "gold standard" population, the probabilistic algorithm's sensitivity was 84.1% (95% CI, 83.5-84.7) and PPV 99.3% (99.1-99.4), while the deterministic algorithm had sensitivity 74.5% (73.8-75.2) and PPV 95.7% (95.4-96.0). In the test set, the probabilistic algorithm again had higher sensitivity and PPV. For deliveries in 2001-2008 with no "gold standard" linkage, the probabilistic algorithm found matched infants for 58.3% and the deterministic algorithm, 52.8%. On medical record review, 100% of linked pairs appeared valid. A probabilistic algorithm improved linkage proportion and accuracy compared to a deterministic algorithm. Better linkage methods can increase the value of EHRs for pregnancy outcomes research. Copyright © 2014 John Wiley & Sons, Ltd.
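
    The contrast between the two approaches can be sketched as follows: a deterministic rule requires exact agreement on every field, while a probabilistic rule scores partial agreement and accepts a pair above a threshold. The field weights and threshold below are illustrative assumptions, not the fitted parameters of the study.

        # Deterministic vs. probabilistic linkage of a mother and an infant record
        # on surname, address, and dates. Weights and threshold are illustrative.

        def deterministic_match(mother, infant):
            # all fields must agree exactly
            return (mother["surname"] == infant["surname"]
                    and mother["address"] == infant["address"]
                    and mother["delivery_date"] == infant["birth_date"])

        def probabilistic_score(mother, infant, weights=None):
            weights = weights or {"surname": 4.0, "address": 3.0, "date": 5.0}
            score = 0.0
            score += weights["surname"] if mother["surname"] == infant["surname"] else -1.0
            score += weights["address"] if mother["address"] == infant["address"] else -1.0
            score += weights["date"] if mother["delivery_date"] == infant["birth_date"] else -2.0
            return score

        mother = {"surname": "Lee", "address": "12 Oak St", "delivery_date": "2008-03-14"}
        infant = {"surname": "Lee", "address": "12 Oak Street", "birth_date": "2008-03-14"}
        print(deterministic_match(mother, infant))          # False: address differs slightly
        print(probabilistic_score(mother, infant) > 6.0)    # True: enough agreement elsewhere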

  2. A computational fluid dynamics algorithm on a massively parallel computer

    International Nuclear Information System (INIS)

    Jespersen, D.C.; Levit, C.

    1989-01-01

    The implementation and performance of a finite-difference algorithm for the compressible Navier-Stokes equations in two or three dimensions on the Connection Machine are described. This machine is a single-instruction multiple-data machine with up to 65536 physical processors. The implicit portion of the algorithm is of particular interest. Running times and megaflop rates are given for two- and three-dimensional problems. Included are comparisons with the standard codes on a Cray X-MP/48. 15 refs

  3. A Modified Heat Stress Algorithm for Partially Enclosed Structures

    International Nuclear Information System (INIS)

    Hunter, C.H.

    2002-01-01

    Historical data for wet bulb globe temperature (WBGT) were requested by WSRC Systems Engineering as part of an assessment of climate on loading operations to be conducted at the proposed Low Enriched Uranium (LEU) Loading Station in H-Area. This facility will have an insulated roof and partially enclosed sides to allow cooling inside the facility by natural convection. In 1996, the SRTC Atmospheric Technologies Group developed a computer algorithm to estimate WBGT using standard meteorological measurements. This algorithm assumes exposure to the ambient environment; consequently, modifications were necessary to the simulation
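
    For reference, the quantity such an algorithm ultimately evaluates is the standard outdoor WBGT combination of the natural wet bulb, globe, and dry bulb temperatures. Estimating those component temperatures from routine meteorological measurements, which is the substance of the SRTC algorithm and its modification, is not shown in this sketch.

        # Standard outdoor WBGT combination (with solar load). The component
        # temperatures are assumed given; values below are illustrative.

        def wbgt_outdoor(t_natural_wet_bulb, t_globe, t_dry_bulb):
            return 0.7 * t_natural_wet_bulb + 0.2 * t_globe + 0.1 * t_dry_bulb

        print(wbgt_outdoor(24.0, 45.0, 33.0))   # degrees C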

  4. Gravity Aided Navigation Precise Algorithm with Gauss Spline Interpolation

    Directory of Open Access Journals (Sweden)

    WEN Chaobin

    2015-01-01

    Full Text Available The gravity compensation term of the error equation should be thoroughly resolved before studying gravity-aided navigation with high precision. A gravity-aided navigation model construction algorithm is proposed, based on approximating the local grid gravity anomaly field with 2D Gauss spline interpolation. The gravity disturbance vector, standard gravity value error, and Eotvos effect are all compensated in this precision model. The experiment result shows that the positioning accuracy is raised by 1 time, the attitude and velocity accuracy is raised by 1~2 times, and the positional error is maintained within 100~200 m.

  5. Adaptive radiation image enhancement based on different image quality evaluation standards

    International Nuclear Information System (INIS)

    Guo Xiaojing; Wu Zhifang

    2012-01-01

    A genetic algorithm based on the incomplete Beta function was realized, and an adaptive gray transform based on the said genetic algorithm was implemented. On this basis, three image quality evaluation standards were applied to the adaptive gray transform of radiation images, and the effects of the three standards on processing time, stability, number of generations, and so on were compared. The better algorithm scheme was applied in the image processing module of a container DR/CT inspection system to obtain effective adaptive image enhancement. (authors)

  6. EDITORIAL: Special issue on time scale algorithms

    Science.gov (United States)

    Matsakis, Demetrios; Tavella, Patrizia

    2008-12-01

    This special issue of Metrologia presents selected papers from the Fifth International Time Scale Algorithm Symposium (VITSAS), including some of the tutorials presented on the first day. The symposium was attended by 76 persons, from every continent except Antarctica, by students as well as senior scientists, and hosted by the Real Instituto y Observatorio de la Armada (ROA) in San Fernando, Spain, whose staff further enhanced their nation's high reputation for hospitality. Although a timescale can be simply defined as a weighted average of clocks, whose purpose is to measure time better than any individual clock, timescale theory has long been and continues to be a vibrant field of research that has both followed and helped to create advances in the art of timekeeping. There is no perfect timescale algorithm, because every one embodies a compromise involving user needs. Some users wish to generate a constant frequency, perhaps not necessarily one that is well-defined with respect to the definition of a second. Other users might want a clock which is as close to UTC or a particular reference clock as possible, or perhaps wish to minimize the maximum variation from that standard. In contrast to the steered timescales that would be required by those users, other users may need free-running timescales, which are independent of external information. While no algorithm can meet all these needs, every algorithm can benefit from some form of tuning. The optimal tuning, and even the optimal algorithm, can depend on the noise characteristics of the frequency standards, or of their comparison systems, the most precise and accurate of which are currently Two Way Satellite Time and Frequency Transfer (TWSTFT) and GPS carrier phase time transfer. The interest in time scale algorithms and its associated statistical methodology began around 40 years ago when the Allan variance appeared and when the metrological institutions started realizing ensemble atomic time using more than

  7. Diagnostic accuracy of administrative data algorithms in the diagnosis of osteoarthritis: a systematic review.

    Science.gov (United States)

    Shrestha, Swastina; Dave, Amish J; Losina, Elena; Katz, Jeffrey N

    2016-07-07

    Administrative health care data are frequently used to study disease burden and treatment outcomes in many conditions, including osteoarthritis (OA). OA is a chronic condition with significant disease burden, affecting over 27 million adults in the US. There are few studies examining the performance of administrative data algorithms to diagnose OA. The purpose of this study is to perform a systematic review of administrative data algorithms for OA diagnosis and to evaluate the diagnostic characteristics of algorithms based on restrictiveness and reference standards. Two reviewers independently screened English-language articles published in Medline, Embase, PubMed, and Cochrane databases that used administrative data to identify OA cases. Each algorithm was classified as restrictive or less restrictive based on the number and type of administrative codes required to satisfy the case definition. We recorded sensitivity and specificity of algorithms and calculated the positive likelihood ratio (LR+) and positive predictive value (PPV) based on an assumed OA prevalence of 0.1, 0.25, and 0.50. The search identified 7 studies that used 13 algorithms. Of these 13 algorithms, 5 were classified as restrictive and 8 as less restrictive. Restrictive algorithms had lower median sensitivity and higher median specificity compared to less restrictive algorithms when the reference standards were self-report and American College of Rheumatology (ACR) criteria. The algorithms compared to the reference standard of physician diagnosis had higher sensitivity and specificity than those compared to self-reported diagnosis or ACR criteria. Restrictive algorithms are more specific for OA diagnosis and can be used to identify cases when false positives have higher costs, e.g. interventional studies. Less restrictive algorithms are more sensitive and suited for studies that attempt to identify all cases, e.g. screening programs.
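
    The derived quantities reported in the review follow from the standard definitions; a short sketch of the calculation, with illustrative sensitivity and specificity values rather than figures from the included studies:

        # Positive likelihood ratio and positive predictive value from sensitivity,
        # specificity, and an assumed prevalence. Example numbers are illustrative.

        def lr_positive(sensitivity, specificity):
            return sensitivity / (1.0 - specificity)

        def ppv(sensitivity, specificity, prevalence):
            true_pos = sensitivity * prevalence
            false_pos = (1.0 - specificity) * (1.0 - prevalence)
            return true_pos / (true_pos + false_pos)

        sens, spec = 0.75, 0.95
        for prev in (0.10, 0.25, 0.50):
            print(prev, round(lr_positive(sens, spec), 1), round(ppv(sens, spec, prev), 3))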

  8. Mathematical algorithms for approximate reasoning

    Science.gov (United States)

    Murphy, John H.; Chay, Seung C.; Downs, Mary M.

    1988-01-01

    Most state of the art expert system environments contain a single and often ad hoc strategy for approximate reasoning. Some environments provide facilities to program the approximate reasoning algorithms. However, the next generation of expert systems should have an environment which contains a choice of several mathematical algorithms for approximate reasoning. To meet the need for validatable and verifiable coding, the expert system environment must no longer depend upon ad hoc reasoning techniques but instead must include mathematically rigorous techniques for approximate reasoning. Popular approximate reasoning techniques are reviewed, including: certainty factors, belief measures, Bayesian probabilities, fuzzy logic, and Shafer-Dempster techniques for reasoning. The focus is on a group of mathematically rigorous algorithms for approximate reasoning that could form the basis of a next generation expert system environment. These algorithms are based upon the axioms of set theory and probability theory. To separate these algorithms for approximate reasoning, various conditions of mutual exclusivity and independence are imposed upon the assertions. Approximate reasoning algorithms presented include: reasoning with statistically independent assertions, reasoning with mutually exclusive assertions, reasoning with assertions that exhibit minimum overlay within the state space, reasoning with assertions that exhibit maximum overlay within the state space (i.e. fuzzy logic), pessimistic reasoning (i.e. worst case analysis), optimistic reasoning (i.e. best case analysis), and reasoning with assertions with absolutely no knowledge of the possible dependency among the assertions. A robust environment for expert system construction should include the two modes of inference: modus ponens and modus tollens. Modus ponens inference is based upon reasoning towards the conclusion in a statement of logical implication, whereas modus tollens inference is based upon reasoning away

  9. Malaysian NDT standards

    International Nuclear Information System (INIS)

    Khazali Mohd Zin

    2001-01-01

    In order to become a developed country, Malaysia needs to develop her own national standards. It has been projected that by the year 2020, Malaysia will require about 8,000 standards (Department of Standards Malaysia). Currently more than 2,000 Malaysian Standards have been gazetted by the government, which is considerably too low ahead of the year 2020. NDT standards have been identified by the standards working group as one of the areas in which to promote our national standards. In this paper the author describes the steps taken to establish Malaysia's very own NDT standards. The project starts with the establishment of radiographic standards. (Author)

  10. A review on quantum search algorithms

    Science.gov (United States)

    Giri, Pulak Ranjan; Korepin, Vladimir E.

    2017-12-01

    The use of superposition of states in quantum computation, known as quantum parallelism, has a significant advantage in terms of speed over classical computation. It is evident from the early quantum algorithms such as Deutsch's algorithm, the Deutsch-Jozsa algorithm and its variation the Bernstein-Vazirani algorithm, Simon's algorithm, Shor's algorithms, etc. Quantum parallelism also significantly speeds up the database search algorithm, which is important in computer science because it comes as a subroutine in many important algorithms. Grover's quantum database search achieves the task of finding the target element in an unsorted database in a time quadratically faster than the classical computer. We review Grover's quantum search algorithms for single and multiple target elements in a database. The partial search algorithm of Grover and Radhakrishnan and its optimization by Korepin, called the GRK algorithm, are also discussed.

  11. Semi-flocking algorithm for motion control of mobile sensors in large-scale surveillance systems.

    Science.gov (United States)

    Semnani, Samaneh Hosseini; Basir, Otman A

    2015-01-01

    The ability of sensors to self-organize is an important asset in surveillance sensor networks. Self-organization implies self-control at the sensor level and coordination at the network level. Biologically inspired approaches have recently gained significant attention as a tool to address the issue of sensor control and coordination in sensor networks. These approaches are exemplified by two well-known algorithms, namely, the Flocking algorithm and the Anti-Flocking algorithm. Generally speaking, although these two biologically inspired algorithms have demonstrated promising performance, they expose deficiencies when it comes to their ability to maintain simultaneous robust dynamic area coverage and target coverage. These two coverage performance objectives are inherently conflicting. This paper presents Semi-Flocking, a biologically inspired algorithm that benefits from key characteristics of both the Flocking and Anti-Flocking algorithms. The Semi-Flocking algorithm approaches the problem by assigning a small flock of sensors to each target, while at the same time leaving some sensors free to explore the environment. This allows the algorithm to strike a balance between robust area coverage and target coverage. Such balance is facilitated via flock-sensor coordination. The performance of the proposed Semi-Flocking algorithm is examined and compared with the other two flocking-based algorithms, once using randomly moving targets and once using a standard walking pedestrian dataset. The results of both experiments show that the Semi-Flocking algorithm outperforms both the Flocking algorithm and the Anti-Flocking algorithm with respect to the area coverage and target coverage objectives. Furthermore, the results show that the proposed algorithm demonstrates shorter target detection time and fewer undetected targets than the other two flocking-based algorithms.

  12. A Novel Real-coded Quantum-inspired Genetic Algorithm and Its Application in Data Reconciliation

    Directory of Open Access Journals (Sweden)

    Gao Lin

    2012-06-01

    Full Text Available The traditional quantum-inspired genetic algorithm (QGA) has drawbacks such as premature convergence, heavy computational cost, a complicated coding and decoding process, etc. In this paper, a novel real-coded quantum-inspired genetic algorithm is proposed based on the idea of interval division. Detailed comparisons with some similar approaches on some standard benchmark functions confirm the validity of the proposed algorithm. Besides, the proposed algorithm is used in two typical nonlinear data reconciliation problems (a distilling process and an extraction process) and simulation results show its efficiency in nonlinear data reconciliation problems.

  13. Modified Monkey Optimization Algorithm for Solving Optimal Reactive Power Dispatch Problem

    Directory of Open Access Journals (Sweden)

    Kanagasabai Lenin

    2015-04-01

    Full Text Available In this paper, a novel Modified Monkey Optimization (MMO) algorithm for solving the optimal reactive power dispatch problem is presented. MMO is a population-based stochastic meta-heuristic algorithm inspired by the intelligent foraging behaviour of monkeys. This paper improves both the local leader and global leader phases. The proposed MMO algorithm has been tested on the standard IEEE 30-bus test system, and simulation results show the worthy performance of the proposed algorithm in reducing the real power loss.

  14. Analysis of the MPEG-1 Layer III (MP3) Algorithm using MATLAB

    CERN Document Server

    Thiagarajan, Jayaraman

    2011-01-01

    The MPEG-1 Layer III (MP3) algorithm is one of the most successful audio formats for consumer audio storage and for transfer and playback of music on digital audio players. The MP3 compression standard along with the AAC (Advanced Audio Coding) algorithm are associated with the most successful music players of the last decade. This book describes the fundamentals and the MATLAB implementation details of the MP3 algorithm. Several of the tedious processes in MP3 are supported by demonstrations using MATLAB software. The book presents the theoretical concepts and algorithms used in the MP3 stand

  15. Atmosphere Clouds Model Algorithm for Solving Optimal Reactive Power Dispatch Problem

    Directory of Open Access Journals (Sweden)

    Lenin Kanagasabai

    2014-04-01

    Full Text Available In this paper, a new method, called the Atmosphere Clouds Model (ACM) algorithm, is used for solving the optimal reactive power dispatch problem. ACM is a stochastic optimization algorithm inspired by the behavior of clouds in the natural world. ACM replicates the generation behavior, shift behavior, and extend behavior of clouds. The proposed ACM algorithm has been tested on the standard IEEE 30-bus test system, and simulation results clearly show the superior performance of the proposed algorithm in reducing the real power loss.

  16. Algorithms, complexity, and the sciences.

    Science.gov (United States)

    Papadimitriou, Christos

    2014-11-11

    Algorithms, perhaps together with Moore's law, compose the engine of the information technology revolution, whereas complexity--the antithesis of algorithms--is one of the deepest realms of mathematical investigation. After introducing the basic concepts of algorithms and complexity, and the fundamental complexity classes P (polynomial time) and NP (nondeterministic polynomial time, or search problems), we discuss briefly the P vs. NP problem. We then focus on certain classes between P and NP which capture important phenomena in the social and life sciences, namely the Nash equilibrium and other equilibria in economics and game theory, and certain processes in population genetics and evolution. Finally, an algorithm known as multiplicative weights update (MWU) provides an algorithmic interpretation of the evolution of allele frequencies in a population under sex and weak selection. All three of these equivalences are rife with domain-specific implications: The concept of Nash equilibrium may be less universal--and therefore less compelling--than has been presumed; selection on gene interactions may entail the maintenance of genetic variation for longer periods than selection on single alleles predicts; whereas MWU can be shown to maximize, for each gene, a convex combination of the gene's cumulative fitness in the population and the entropy of the allele distribution, an insight that may be pertinent to the maintenance of variation in evolution.
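
    The MWU rule referred to above, in its generic textbook form (not the paper's population-genetics derivation): each option's weight is multiplied by (1 + eps * gain) and the weights are renormalized.

        # Multiplicative weights update: scale each weight by (1 + eps * gain),
        # then renormalize so the weights remain a probability distribution.

        def mwu_step(weights, gains, eps=0.1):
            updated = [w * (1.0 + eps * g) for w, g in zip(weights, gains)]
            total = sum(updated)
            return [w / total for w in updated]

        weights = [0.25, 0.25, 0.25, 0.25]
        for gains in ([1, 0, 0, 0], [1, 1, 0, 0], [1, 0, 0, 0]):
            weights = mwu_step(weights, gains)
        print([round(w, 3) for w in weights])   # mass shifts toward the first option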

  17. SDR Input Power Estimation Algorithms

    Science.gov (United States)

    Nappier, Jennifer M.; Briones, Janette C.

    2013-01-01

    The General Dynamics (GD) S-Band software defined radio (SDR) in the Space Communications and Navigation (SCAN) Testbed on the International Space Station (ISS) provides experimenters an opportunity to develop and demonstrate experimental waveforms in space. The SDR has an analog and a digital automatic gain control (AGC), and the response of the AGCs to changes in SDR input power and temperature was characterized prior to the launch and installation of the SCAN Testbed on the ISS. The AGCs were used to estimate the SDR input power and SNR of the received signal, and the characterization results showed a nonlinear response to SDR input power and temperature. In order to estimate the SDR input power from the AGCs, three algorithms were developed and implemented on the ground software of the SCAN Testbed. The algorithms include a linear straight-line estimator, which uses the digital AGC and the temperature to estimate the SDR input power over a narrower section of the SDR input power range. There is a linear adaptive filter algorithm that uses both AGCs and the temperature to estimate the SDR input power over a wide input power range. Finally, an algorithm that uses neural networks was designed to estimate the input power over a wide range. This paper describes the algorithms in detail and their associated performance in estimating the SDR input power.
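
    The first of the three algorithms can be sketched as an ordinary least-squares fit of input power against the digital AGC reading and temperature over characterization data. The data and coefficients below are invented for illustration; the adaptive-filter and neural-network estimators are not shown.

        import numpy as np

        # Fit SDR input power as a linear function of digital AGC reading and
        # temperature from a (fake) characterization table, then apply the fit.

        def fit_linear_estimator(agc, temp, power):
            X = np.column_stack([agc, temp, np.ones_like(agc)])  # [AGC, temperature, bias]
            coeffs, *_ = np.linalg.lstsq(X, power, rcond=None)
            return coeffs

        def estimate_power(coeffs, agc, temp):
            return coeffs[0] * agc + coeffs[1] * temp + coeffs[2]

        # Fake characterization data: power (dBm) roughly linear in AGC and temperature
        agc = np.array([10.0, 20.0, 30.0, 40.0, 50.0])
        temp = np.array([20.0, 22.0, 25.0, 27.0, 30.0])
        power = -90.0 + 0.5 * agc + 0.1 * temp
        coeffs = fit_linear_estimator(agc, temp, power)
        print(estimate_power(coeffs, 35.0, 26.0))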

  18. Computational geometry algorithms and applications

    CERN Document Server

    de Berg, Mark; Overmars, Mark; Schwarzkopf, Otfried

    1997-01-01

    Computational geometry emerged from the field of algorithms design and analysis in the late 1970s. It has grown into a recognized discipline with its own journals, conferences, and a large community of active researchers. The success of the field as a research discipline can on the one hand be explained from the beauty of the problems studied and the solutions obtained, and, on the other hand, by the many application domains--computer graphics, geographic information systems (GIS), robotics, and others--in which geometric algorithms play a fundamental role. For many geometric problems the early algorithmic solutions were either slow or difficult to understand and implement. In recent years a number of new algorithmic techniques have been developed that improved and simplified many of the previous approaches. In this textbook we have tried to make these modern algorithmic solutions accessible to a large audience. The book has been written as a textbook for a course in computational geometry, but it can ...

  19. Research on Modified Root-MUSIC Algorithm of DOA Estimation Based on Covariance Matrix Reconstruction

    Directory of Open Access Journals (Sweden)

    Changgan SHU

    2014-09-01

    Full Text Available In the standard root multiple signal classification (root-MUSIC) algorithm, the performance of direction of arrival estimation degrades and can even fail under conditions of low signal-to-noise ratio and small angular separation between signals. By reconstructing and weighting the covariance matrix of the received signal, the modified algorithm can provide more accurate estimation results. The computer simulation and performance analysis are given next, which show that under conditions of lower signal-to-noise ratio and stronger correlation between signals, the proposed modified algorithm provides better azimuth estimation performance than the standard method.

  20. RS slope detection algorithm for extraction of heart rate from noisy, multimodal recordings.

    Science.gov (United States)

    Gierałtowski, Jan; Ciuchciński, Kamil; Grzegorczyk, Iga; Kośna, Katarzyna; Soliński, Mateusz; Podziemski, Piotr

    2015-08-01

    Current gold-standard algorithms for heart beat detection do not work properly in the case of high noise levels and do not make use of multichannel data collected by modern patient monitors. The main idea behind the method presented in this paper is to detect the most prominent part of the QRS complex, i.e. the RS slope. We localize the RS slope based on the consistency of its characteristics, i.e. adequate, automatically determined amplitude and duration. It is a very simple and non-standard, yet very effective, solution. Minor data pre-processing and parameter adaptations make our algorithm fast and noise-resistant. As one of a few algorithms in the PhysioNet/Computing in Cardiology Challenge 2014, our algorithm uses more than two channels (i.e. ECG, BP, EEG, EOG and EMG). Simple fundamental working rules make the algorithm universal: it is able to work on all of these channels with no or only little changes. The final result of our algorithm in phase III of the Challenge was 86.38 (88.07 for a 200 record test set), which gave us fourth place. Our algorithm shows that current standards for heart beat detection could be improved significantly by taking a multichannel approach. This is an open-source algorithm available through the PhysioNet library.
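
    A simplified sketch of the core idea: scan the signal for steep descending runs whose amplitude drop and duration fall within set bounds, and treat the start of each such run as a beat candidate. The thresholds below are illustrative, not the automatically determined values used by the algorithm, and the multichannel fusion is omitted.

        # Detect RS-slope-like descents: a drop of at least `min_drop` within at
        # most `max_dur` seconds, followed by a refractory period. Single channel.

        def detect_rs_slopes(signal, fs, min_drop=0.5, max_dur=0.06, refractory=0.25):
            max_len = int(max_dur * fs)          # longest allowed RS slope, in samples
            dead_time = int(refractory * fs)     # skip this long after each detection
            beats, i = [], 1
            while i < len(signal):
                j = i
                while j < len(signal) and signal[j] < signal[j - 1] and j - i < max_len:
                    j += 1                        # extend the descending run
                if signal[i - 1] - signal[j - 1] >= min_drop:
                    beats.append(i - 1)           # sample where the descent starts (R peak)
                    i = j + dead_time
                else:
                    i = j if j > i else i + 1
            return beats

        # Toy waveform: two sharp drops standing in for RS slopes, sampled at 250 Hz
        sig = [0.0] * 100 + [1.2, 0.2, -0.3] + [0.0] * 200 + [1.1, 0.1, -0.4] + [0.0] * 100
        print(detect_rs_slopes(sig, fs=250))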

  1. An algorithm for compression of bilevel images.

    Science.gov (United States)

    Reavy, M D; Boncelet, C G

    2001-01-01

    This paper presents the block arithmetic coding for image compression (BACIC) algorithm: a new method for lossless bilevel image compression which can replace JBIG, the current standard for bilevel image compression. BACIC uses the block arithmetic coder (BAC): a simple, efficient, easy-to-implement, variable-to-fixed arithmetic coder, to encode images. BACIC models its probability estimates adaptively based on a 12-bit context of previous pixel values; the 12-bit context serves as an index into a probability table whose entries are used to compute p(1) (the probability of a bit equaling one), the probability measure BAC needs to compute a codeword. In contrast, the Joint Bilevel Image Experts Group (JBIG) uses a patented arithmetic coder, the IBM QM-coder, to compress image data and a predetermined probability table to estimate its probability measures. JBIG, though, has not yet been commercially implemented; instead, JBIG's predecessor, the Group 3 fax (G3), continues to be used. BACIC achieves compression ratios comparable to JBIG's and is introduced as an alternative to the JBIG and G3 algorithms. BACIC's overall compression ratio is 19.0 for the eight CCITT test images (compared to JBIG's 19.6 and G3's 7.7), is 16.0 for 20 additional business-type documents (compared to JBIG's 16.0 and G3's 6.74), and is 3.07 for halftone images (compared to JBIG's 2.75 and G3's 0.50).
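
    The context-modeling step can be sketched as follows: a 12-bit index is formed from previously coded neighbor pixels and used to look up adaptive counts that estimate p(1). The neighborhood template below is an assumption (the abstract does not give the exact template), and the BAC coder itself is omitted.

        # Adaptive 12-bit context model for a bilevel image: each pixel's context
        # indexes a table of counts used to estimate p(1) before coding the pixel.

        NEIGHBORS = [(-1, -2), (-1, -1), (-1, 0), (-1, 1), (-1, 2), (-1, 3),
                     (-2, -1), (-2, 0), (-2, 1), (0, -1), (0, -2), (0, -3)]  # 12 causal pixels

        def context(image, r, c):
            ctx = 0
            for dr, dc in NEIGHBORS:
                rr, cc = r + dr, c + dc
                bit = image[rr][cc] if 0 <= rr < len(image) and 0 <= cc < len(image[0]) else 0
                ctx = (ctx << 1) | bit
            return ctx                      # value in [0, 4095], indexes the probability table

        def model_image(image):
            ones = [1] * 4096               # adaptive counts, initialized uniformly
            total = [2] * 4096
            probabilities = []
            for r in range(len(image)):
                for c in range(len(image[0])):
                    ctx = context(image, r, c)
                    probabilities.append(ones[ctx] / total[ctx])   # p(1) handed to the coder
                    ones[ctx] += image[r][c]                        # update after coding the pixel
                    total[ctx] += 1
            return probabilities

        img = [[0, 0, 1, 1], [0, 1, 1, 0], [1, 1, 0, 0]]
        print([round(p, 2) for p in model_image(img)])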

  2. Algorithm for optimisation of paediatric chest radiography

    International Nuclear Information System (INIS)

    Kostova-Lefterova, D.

    2016-01-01

    The purpose of this work is to assess the current practice and patient doses in paediatric chest radiography in a large university hospital. The X-ray unit is used in the paediatric department for respiratory diseases. Another purpose was to recommend and apply optimized protocols to reduce patient dose while maintaining the diagnostic image quality of the x-ray images. The practice of two different radiographers was studied. The results were compared with the existing practice in paediatric chest radiography, and the opportunities for optimization were identified in order to reduce patient doses. A methodology was developed for optimization of the x-ray examinations by grouping children into age groups, or according to other appropriate indications, and creating an algorithm for proper selection of the exposure parameters for each group. The algorithm for the optimisation of paediatric chest radiography reduced patient doses (PKA, organ dose, effective dose) between 1.5 and 6 times for the different age groups, the average glandular dose up to 10 times, and the dose to the lung between 2 and 5 times. The resulting X-ray images were of good diagnostic quality. The subjectivity in the choice of exposure parameters was reduced, and standardization has been achieved in the work of the radiographers. The role of the radiologist, the medical physicist and the radiographer in the process of optimization was shown, and the effect of teamwork in reducing patient doses while keeping adequate image quality was demonstrated. Key words: Chest Radiography. Paediatric Radiography. Optimization. Radiation Exposure. Radiation Protection

  3. An Algorithm for Building an Electronic Database.

    Science.gov (United States)

    Cohen, Wess A; Gayle, Lloyd B; Patel, Nima P

    2016-01-01

    We propose an algorithm for creating a prospectively maintained database, which can then be used to analyze prospective data in a retrospective fashion. Our algorithm provides future researchers a road map on how to set up, maintain, and use an electronic database to improve evidence-based care and future clinical outcomes. The database was created using Microsoft Access and included demographic information, socioeconomic information, and intraoperative and postoperative details via standardized drop-down menus. A printed form from the Microsoft Access template was given to each surgeon to complete after each case, and a member of the health care team then entered the case information into the database. By utilizing straightforward, HIPAA-compliant data input fields, we made data collection and transcription easy and efficient. Collecting a wide variety of data allowed us the freedom to evolve our clinical interests, while the platform also permitted new categories to be added at will. We have proposed a reproducible method for institutions to create a database, which will then allow senior and junior surgeons to analyze their outcomes and compare them with others in an effort to improve patient care and outcomes. This is a cost-efficient way to create and maintain a database without additional software.
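
    The same idea can be sketched with any relational back end; the table and category names below are hypothetical stand-ins for the Access template described above, with CHECK constraints playing the role of the standardized drop-down menus.

```python
import sqlite3

# A minimal, hypothetical relational sketch of a prospectively maintained case
# database (the authors used Microsoft Access; fields here are illustrative).
conn = sqlite3.connect("cases.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS cases (
    case_id        INTEGER PRIMARY KEY,
    surgeon        TEXT NOT NULL,
    procedure_date TEXT NOT NULL,                      -- ISO yyyy-mm-dd
    age_years      INTEGER CHECK (age_years BETWEEN 0 AND 120),
    insurance      TEXT CHECK (insurance IN ('private','public','uninsured')),
    asa_class      INTEGER CHECK (asa_class BETWEEN 1 AND 5),
    complication   TEXT CHECK (complication IN ('none','minor','major'))
);
""")
# Constrained categories mimic the drop-down menus: free text is rejected.
conn.execute(
    "INSERT INTO cases (surgeon, procedure_date, age_years, insurance, asa_class, complication) "
    "VALUES (?, ?, ?, ?, ?, ?)",
    ("Surgeon A", "2016-01-15", 54, "private", 2, "none"),
)
conn.commit()
```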

  4. PDE Nozzle Optimization Using a Genetic Algorithm

    Science.gov (United States)

    Billings, Dana; Turner, James E. (Technical Monitor)

    2000-01-01

    Genetic algorithms, which simulate evolution in natural systems, have been used to find solutions to optimization problems that seem intractable to standard approaches. In this study, the feasibility of using a genetic algorithm (GA) to find an optimum, fixed-profile nozzle for a pulse detonation engine (PDE) is demonstrated. The objective was to maximize impulse during the detonation wave passage and blow-down phases of operation. The impulse of each profile variant was obtained by using the CFD code Mozart/2.0 to simulate the transient flow. After seven generations, the method identified a nozzle profile that is a strong candidate for the optimum solution. The constraints on the generality of this possible solution remain to be clarified.
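
    The outer loop of such a search can be sketched as below; the profile parameterization, the selection scheme and the dummy objective are illustrative assumptions, since in the study the impulse of each candidate came from a Mozart/2.0 CFD run.

```python
import random

def impulse(profile):
    """Placeholder objective: in the study this value came from a transient CFD
    simulation; a smooth dummy function stands in here so the GA runs."""
    return -sum((r - 1.0 - 0.1 * i) ** 2 for i, r in enumerate(profile))

def genetic_search(n_ctrl=8, pop_size=30, generations=7, p_mut=0.2):
    # Each individual is a fixed nozzle profile: radii at n_ctrl axial stations.
    pop = [[random.uniform(0.5, 3.0) for _ in range(n_ctrl)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=impulse, reverse=True)
        parents = pop[: pop_size // 2]                  # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_ctrl)           # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < p_mut:                 # small Gaussian mutation
                j = random.randrange(n_ctrl)
                child[j] += random.gauss(0.0, 0.1)
            children.append(child)
        pop = parents + children
    return max(pop, key=impulse)

best_profile = genetic_search()
```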

  5. Analysis of Population Diversity of Dynamic Probabilistic Particle Swarm Optimization Algorithms

    Directory of Open Access Journals (Sweden)

    Qingjian Ni

    2014-01-01

    Full Text Available In evolutionary algorithms, population diversity is an important factor in solving performance. In this paper, drawing on population diversity analysis methods from other evolutionary algorithms, three indicators are introduced as measures of population diversity in PSO algorithms: the standard deviation of population fitness values, the population entropy, and the Manhattan norm of the standard deviation of population positions. The three measures are used to analyze population diversity in a relatively new PSO variant, Dynamic Probabilistic Particle Swarm Optimization (DPPSO). The results show that the three measures reflect the evolution of population diversity in DPPSO algorithms from different angles, and the impact of population diversity on the DPPSO variants is also discussed. The conclusions about population diversity in DPPSO can be used to analyze, design, and improve DPPSO algorithms, thus improving optimization performance, and may also help in understanding the working mechanism of DPPSO theoretically.
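
    The three indicators can be computed directly from the swarm state; the sketch below assumes one common reading of each definition (in particular the histogram binning behind the entropy), which may differ in detail from the paper.

```python
import numpy as np

def diversity_measures(positions, fitness, n_bins=10):
    """positions: (particles, dimensions); fitness: (particles,)."""
    # 1. Standard deviation of the population fitness values.
    fitness_std = np.std(fitness)
    # 2. Population entropy: Shannon entropy of fitness values grouped into bins
    #    (the binning rule is an assumption made for this sketch).
    counts, _ = np.histogram(fitness, bins=n_bins)
    p = counts[counts > 0] / counts.sum()
    entropy = -np.sum(p * np.log(p))
    # 3. Manhattan (L1) norm of the per-dimension standard deviation of positions.
    position_spread = np.sum(np.std(positions, axis=0))
    return fitness_std, entropy, position_spread

# Example with a random 30-particle, 10-dimensional swarm on the sphere benchmark.
rng = np.random.default_rng(0)
pos = rng.normal(size=(30, 10))
fit = np.sum(pos ** 2, axis=1)
print(diversity_measures(pos, fit))
```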

  6. A Novel Preferential Diffusion Recommendation Algorithm Based on User’s Nearest Neighbors

    Directory of Open Access Journals (Sweden)

    Fuguo Zhang

    2017-01-01

    Full Text Available Recommender systems are a very efficient way to deal with the problem of information overload for online users. In recent years, network-based recommendation algorithms have demonstrated much better performance than standard collaborative filtering methods. However, most network-based algorithms do not give a high enough weight to the influence of the target user’s nearest neighbors in the resource diffusion process, while a user or an object with high degree obtains larger influence in the standard mass diffusion algorithm. In this paper, we propose a novel preferential diffusion recommendation algorithm that accounts for the significance of the target user’s nearest neighbors and evaluate it on three real-world data sets: MovieLens 100k, MovieLens 1M, and Epinions. Experimental results demonstrate that the novel preferential diffusion recommendation algorithm based on the user’s nearest neighbors can significantly improve recommendation accuracy and diversity.
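
    A sketch of the underlying two-step mass diffusion, with an illustrative extra weight on the target user's nearest neighbours, is given below; the paper's exact preferential weighting rule is not reproduced.

```python
import numpy as np

def diffusion_scores(A, user, k=10, alpha=1.0):
    """Mass-diffusion recommendation with a boost for the target user's k nearest
    neighbours. A is the binary user-object matrix, shape (n_users, n_objects);
    the cosine neighbourhood and the additive weight alpha are assumptions."""
    ku = A.sum(axis=1)                                   # user degrees
    ko = A.sum(axis=0)                                   # object degrees
    # Cosine similarity picks the target user's nearest neighbours.
    sim = (A @ A[user]) / (np.sqrt(ku * ku[user]) + 1e-12)
    sim[user] = -np.inf
    neighbours = np.argsort(sim)[-k:]
    weight = np.ones(A.shape[0])
    weight[neighbours] += alpha                          # neighbours carry more resource
    # Step 1: objects collected by the target user spread resource to users.
    resource_users = (A * A[user]) @ (1.0 / np.maximum(ko, 1)) * weight
    # Step 2: users spread the (weighted) resource back to their objects.
    scores = A.T @ (resource_users / np.maximum(ku, 1))
    scores[A[user] > 0] = -np.inf                        # do not re-recommend known objects
    return scores
```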

  7. DATA SECURITY IN LOCAL AREA NETWORK BASED ON FAST ENCRYPTION ALGORITHM

    Directory of Open Access Journals (Sweden)

    G. Ramesh

    2010-06-01

    Full Text Available Hacking is one of the greatest problems in wireless local area networks. Many algorithms have been used to prevent outside attackers from eavesdropping and to ensure that data are transferred to the end user safely and correctly. In this paper, a new symmetric encryption algorithm is proposed that prevents outside attacks. The new algorithm avoids key exchange between users and reduces the time taken for encryption and decryption. It operates at a high data rate in comparison with the Data Encryption Standard (DES), Triple DES (TDES), Advanced Encryption Standard (AES-256), and RC6 algorithms. The new algorithm has been applied successfully to both text files and voice messages.

  8. Universal algorithm of time sharing

    International Nuclear Information System (INIS)

    Silin, I.N.; Fedyun'kin, E.D.

    1979-01-01

    A timesharing algorithm is proposed for a wide class of single- and multiprocessor computer configurations. The dynamic priority is a piecewise-constant function of the channel characteristic and the system time quantum. The interactive job quantum has variable length. A recurrence formula for the characteristic is derived. The concept of a background job is introduced: a background job occupies the processor when higher-priority jobs are inactive. A background quality function is defined on the basis of statistical data gathered during timesharing. The algorithm includes an optimal procedure for swapping jobs out of memory. System time is shared in proportion to the external priorities for all sufficiently active computing channels (background included). Fast response is guaranteed for interactive jobs that use little time and memory. External priority control is reserved for the high-level scheduler. The experience of implementing the algorithm on the BESM-6 computer at JINR is discussed.
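
    One possible reading of the scheme is sketched below; the priority update rule, the quantum handling and the background policy are all assumptions made for illustration and are not the BESM-6 implementation.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Job:
    priority: float                                        # dynamic priority; lower runs first
    name: str = field(compare=False, default="")
    ext_priority: float = field(compare=False, default=1.0)
    remaining: float = field(compare=False, default=1.0)   # CPU seconds still needed
    background: bool = field(compare=False, default=False)

def run(jobs, quantum=0.1):
    """Toy timesharing loop: foreground jobs are served by a dynamic priority that
    worsens with CPU already consumed, scaled by the inverse external priority;
    background jobs run only on an otherwise idle processor."""
    fg = [j for j in jobs if not j.background]
    bg = [j for j in jobs if j.background]
    heapq.heapify(fg)
    while fg or bg:
        job = heapq.heappop(fg) if fg else bg.pop(0)
        used = min(quantum, job.remaining)
        job.remaining -= used
        if job.remaining > 1e-9:                           # finished jobs drop out
            job.priority += used / job.ext_priority        # stepwise penalty per quantum used
            if job.background:
                bg.append(job)
            else:
                heapq.heappush(fg, job)

run([Job(0.0, "editor", ext_priority=2.0, remaining=0.5),
     Job(0.0, "compile", ext_priority=1.0, remaining=1.0),
     Job(0.0, "stats", background=True, remaining=2.0)])
```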

  9. Scalable algorithms for contact problems

    CERN Document Server

    Dostál, Zdeněk; Sadowská, Marie; Vondrák, Vít

    2016-01-01

    This book presents a comprehensive and self-contained treatment of the authors’ newly developed scalable algorithms for the solutions of multibody contact problems of linear elasticity. The brand new feature of these algorithms is theoretically supported numerical scalability and parallel scalability demonstrated on problems discretized by billions of degrees of freedom. The theory supports solving multibody frictionless contact problems, contact problems with possibly orthotropic Tresca’s friction, and transient contact problems. It covers BEM discretization, jumping coefficients, floating bodies, mortar non-penetration conditions, etc. The exposition is divided into four parts, the first of which reviews appropriate facets of linear algebra, optimization, and analysis. The most important algorithms and optimality results are presented in the third part of the volume. The presentation is complete, including continuous formulation, discretization, decomposition, optimality results, and numerical experimen...

  10. Algorithms and Public Service Media

    DEFF Research Database (Denmark)

    Sørensen, Jannick Kirk; Hutchinson, Jonathon

    2018-01-01

    When Public Service Media (PSM) organisations introduce algorithmic recommender systems to suggest media content to users, fundamental values of PSM are challenged. Beyond being confronted with the ubiquitous computer-ethics problems of causality and transparency, the identity of PSM as curator and agenda-setter is also challenged. The algorithms represent rules for which content to present to whom, and in this sense they may discriminate and bias the exposure of diversity. Furthermore, on a practical level, the introduction of the systems shifts power within the organisations and changes the regulatory conditions. In this chapter we analyse two cases - the EBU members' introduction of recommender systems and the Australian broadcaster ABC's experiences with the use of chatbots. We use these cases to exemplify the challenges that algorithmic systems pose to PSM organisations.

  11. Quantum walks and search algorithms

    CERN Document Server

    Portugal, Renato

    2013-01-01

    This book addresses an interesting area of quantum computation called quantum walks, which play an important role in building quantum algorithms, in particular search algorithms. Quantum walks are the quantum analogue of classical random walks. It is known that quantum computers have great power for searching unsorted databases. This power extends to many kinds of searches, particularly to the problem of finding a specific location in a spatial layout, which can be modeled by a graph. The goal is to find a specific node knowing that the particle uses the edges to jump from one node to the next. This book is self-contained, with main topics that include: Grover's algorithm, describing its geometrical interpretation and evolution by means of the spectral decomposition of the evolution operator; analytical solutions of quantum walks on important graphs such as lines, cycles, two-dimensional lattices, and hypercubes using Fourier transforms; and quantum walks on generic graphs, describing methods to calculate the limiting d...
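
    As a flavour of the material, a coined quantum walk on the line with the Hadamard coin can be simulated in a few lines; the symmetric initial coin state below is a common convention and an assumption of this sketch, not something prescribed by the book.

```python
import numpy as np

def hadamard_walk(steps=100):
    """Coined quantum walk on the line with the Hadamard coin.
    Returns the position probability distribution after `steps` steps."""
    n = 2 * steps + 1                                  # positions -steps .. +steps
    # State amplitudes psi[c, x] with coin c in {0: move left, 1: move right}.
    psi = np.zeros((2, n), dtype=complex)
    psi[:, steps] = np.array([1, 1j]) / np.sqrt(2)     # symmetric initial coin state
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)       # Hadamard coin operator
    for _ in range(steps):
        psi = H @ psi                                  # toss the coin on every site
        psi[0] = np.roll(psi[0], -1)                   # coin 0 shifts one site left
        psi[1] = np.roll(psi[1], +1)                   # coin 1 shifts one site right
    return (np.abs(psi) ** 2).sum(axis=0)              # measure the position

prob = hadamard_walk(100)                              # shows the characteristic two-peaked spread
```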

  12. Algorithms for Decision Tree Construction

    KAUST Repository

    Chikalov, Igor

    2011-01-01

    The study of algorithms for decision tree construction was initiated in the 1960s. The first algorithms are based on the separation heuristic [13, 31], which at each step tries to divide the set of objects as evenly as possible. Later, Garey and Graham [28] showed that such algorithms may construct decision trees whose average depth is arbitrarily far from the minimum. Hyafil and Rivest [35] proved the NP-hardness of the DT problem, i.e., constructing a tree with minimum average depth for a diagnostic problem over a 2-valued information system and a uniform probability distribution. Cox et al. [22] showed that for a two-class problem over an information system, even finding the root node attribute for an optimal tree is an NP-hard problem. © Springer-Verlag Berlin Heidelberg 2011.
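
    The separation heuristic itself is easy to state; the sketch below applies it to objects with 2-valued attributes, with the data representation and stopping rules chosen here purely for illustration.

```python
def most_even_attribute(objects, attributes):
    """Separation heuristic: pick the binary attribute whose split of the
    current object set is as balanced as possible."""
    def imbalance(attr):
        ones = sum(1 for obj in objects if obj[attr] == 1)
        return abs(2 * ones - len(objects))            # |#ones - #zeros|
    return min(attributes, key=imbalance)

def build_tree(objects, attributes, label="label"):
    """Grow a tree by repeatedly applying the heuristic; as Garey and Graham
    showed, the result can be far from the minimum average depth."""
    labels = [obj[label] for obj in objects]
    if len(set(labels)) == 1 or not attributes:
        return max(set(labels), key=labels.count)      # leaf: the (majority) decision
    attr = most_even_attribute(objects, attributes)
    rest = [a for a in attributes if a != attr]
    left = [o for o in objects if o[attr] == 0]
    right = [o for o in objects if o[attr] == 1]
    if not left or not right:                          # attribute separates nothing here
        return build_tree(objects, rest, label)
    return {attr: {0: build_tree(left, rest, label),
                   1: build_tree(right, rest, label)}}
```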

  13. Next Generation Suspension Dynamics Algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Schunk, Peter Randall [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Higdon, Jonathon [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Chen, Steven [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2014-12-01

    This research project has the objective of extending the range of application of, improving the efficiency of, and conducting simulations with the Fast Lubrication Dynamics (FLD) algorithm for concentrated particle suspensions in a Newtonian fluid solvent. The research involves a combination of mathematical development, new computational algorithms, and application to processing flows of relevance in materials processing. The mathematical developments clarify the underlying theory, facilitate verification against classic monographs in the field, and provide the framework for a novel parallel implementation optimized for an OpenMP shared-memory environment. The project considered application to consolidation flows of major interest in high-throughput materials processing and identified hitherto unforeseen challenges in the use of FLD in these applications. Extensions to the algorithm have been developed to improve its accuracy in these applications.

  14. Fault Tolerant External Memory Algorithms

    DEFF Research Database (Denmark)

    Jørgensen, Allan Grønlund; Brodal, Gerth Stølting; Mølhave, Thomas

    2009-01-01

    Algorithms dealing with massive data sets are usually designed for I/O-efficiency, often captured by the I/O model by Aggarwal and Vitter. Another aspect of dealing with massive data is how to deal with memory faults, e.g. captured by the adversary-based faulty-memory RAM by Finocchi and Italiano. However, current fault tolerant algorithms do not scale beyond the internal memory. In this paper we investigate for the first time the connection between I/O-efficiency in the I/O model and fault tolerance in the faulty-memory RAM, and we assume that both memory and disk are unreliable. We show a lower bound on the number of I/Os required for any deterministic dictionary that is resilient to memory faults. We design a static and a dynamic deterministic dictionary with optimal query performance as well as an optimal sorting algorithm and an optimal priority queue. Finally, we consider scenarios where...

  15. The International Standards Organisation offshore structures standard

    International Nuclear Information System (INIS)

    Snell, R.O.

    1994-01-01

    The International Standards Organisation has initiated a program to develop a suite of ISO Codes and Standards for the Oil Industry. The Offshore Structures Standard is one of seven topics being addressed. The scope of the standard will encompass fixed steel and concrete structures, floating structures, Arctic structures and the site specific assessment of mobile drilling and accommodation units. The standard will use as base documents the existing recommended practices and standards most frequently used for each type of structure, and will develop them to incorporate best published and recognized practice and knowledge where it provides a significant improvement on the base document. Work on the Code has commenced under the direction of an internationally constituted sub-committee comprising representatives from most of the countries with a substantial offshore oil and gas industry. This paper outlines the background to the code and the format, content and work program

  16. What is "Standard" About the Standard Deviation

    OpenAIRE

    Newberger, Florence; Safer, Alan M.; Watson, Saleem

    2010-01-01

    The choice of the formula for standard deviation is explained in elementary statistics textbooks in various ways. We give an explanation for this formula by representing the data as a vector in $\mathbb{R}^n$ and considering its distance from a central tendency vector. In this setting the "standard" formula represents a shortest distance in the standard metric. We also show that different metrics lead to different measures of central tendency.
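
    The geometric reading can be written out compactly; the following lines are a sketch of the argument in common notation, not the paper's exact derivation.

```latex
% Data as a vector x = (x_1, ..., x_n) in R^n; central tendency vectors are the
% constant vectors c*1. In the Euclidean metric the closest constant vector is
% the one with c equal to the mean, and the (population) standard deviation is
% that shortest distance rescaled by 1/sqrt(n):
\[
  \min_{c \in \mathbb{R}} \lVert x - c\,\mathbf{1} \rVert_2
  = \lVert x - \bar{x}\,\mathbf{1} \rVert_2 ,
  \qquad
  \sigma
  = \frac{1}{\sqrt{n}}\,\lVert x - \bar{x}\,\mathbf{1} \rVert_2
  = \sqrt{\frac{1}{n}\sum_{i=1}^{n} (x_i - \bar{x})^2}.
\]
```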

  17. An Elite Decision Making Harmony Search Algorithm for Optimization Problem

    Directory of Open Access Journals (Sweden)

    Lipu Zhang

    2012-01-01

    Full Text Available This paper describes a new variant of the harmony search algorithm inspired by the well-known notion of “elite decision making.” In the new algorithm, the information captured in the current global best and second-best solutions is exploited to generate new solutions, following a probability rule. The generated new solution vector replaces the worst solution in the solution set only if its fitness is better than that of the worst solution. The generating and updating steps are repeated until a near-optimal solution vector is obtained. Extensive computational comparisons are carried out on various standard benchmark optimization problems, including continuous-variable and integer-variable minimization problems from the literature. The computational results show that the proposed algorithm is competitive in finding solutions with the state-of-the-art harmony search variants.
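
    A generic reading of the scheme is sketched below; the elite-selection probability rule is an assumption made for illustration and does not reproduce the paper's exact rule.

```python
import random

def elite_harmony_search(objective, bounds, hms=20, hmcr=0.9, par=0.3,
                         p_elite=0.5, iters=5000):
    """Harmony search where, with probability p_elite, a component is drawn from
    the best or second-best harmony instead of a random memory member."""
    dim = len(bounds)
    memory = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    fitness = [objective(h) for h in memory]
    for _ in range(iters):
        order = sorted(range(hms), key=lambda i: fitness[i])
        best, second = memory[order[0]], memory[order[1]]
        new = []
        for d, (lo, hi) in enumerate(bounds):
            if random.random() < hmcr:                       # harmony memory consideration
                if random.random() < p_elite:                # elite decision making step
                    src = best if random.random() < 0.5 else second
                else:
                    src = random.choice(memory)
                x = src[d]
                if random.random() < par:                    # pitch adjustment
                    x += random.uniform(-1, 1) * 0.01 * (hi - lo)
            else:                                            # random selection
                x = random.uniform(lo, hi)
            new.append(min(max(x, lo), hi))
        worst = order[-1]
        f_new = objective(new)
        if f_new < fitness[worst]:                           # replace the worst harmony
            memory[worst], fitness[worst] = new, f_new
    i_best = min(range(hms), key=lambda i: fitness[i])
    return memory[i_best], fitness[i_best]

# Example: minimize the 10-dimensional sphere function.
sol, val = elite_harmony_search(lambda x: sum(v * v for v in x), [(-5, 5)] * 10)
```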

  18. The Algorithm of Habitat Discovery in Bird Migration

    Directory of Open Access Journals (Sweden)

    Wei Zhengzheng

    2017-01-01

    Full Text Available Bird migration has attracted increasing attention, and the study of habitats plays a vital role in understanding it. Previous research, however, has encountered many problems, such as severe limitations on research methods, low data utilization, and statistics-focused but ineffective data processing and analysis. In this paper, an algorithm for habitat discovery is put forward using data-mining technology based on the spatio-temporal characteristics of bird-watching data. First, the algorithm detects and eliminates duplicate records to guarantee data standardization. Then, density-based clustering algorithms are used to identify habitats where birds gather. Finally, the habitats along the migration routes are discovered.
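
    A compact version of this pipeline can be written with pandas and scikit-learn; the column names, distance threshold and minimum cluster size below are assumptions for illustration, not the values used in the paper.

```python
import numpy as np
import pandas as pd
from sklearn.cluster import DBSCAN

def find_habitats(sightings: pd.DataFrame, eps_km=5.0, min_samples=20):
    """Deduplicate bird-watching records, then cluster sighting coordinates with
    DBSCAN; each dense cluster is treated as a candidate habitat."""
    # 1. Data cleaning: identical (species, place, time) records count once.
    clean = sightings.drop_duplicates(subset=["species", "lat", "lon", "observed_at"])
    # 2. Density-based clustering on the sphere (haversine metric expects radians).
    coords = np.radians(clean[["lat", "lon"]].to_numpy())
    eps = eps_km / 6371.0                      # km converted to radians via Earth's radius
    labels = DBSCAN(eps=eps, min_samples=min_samples, metric="haversine").fit_predict(coords)
    clean = clean.assign(habitat=labels)
    # 3. Habitats are the non-noise clusters (label -1 marks noise); report centroids.
    return clean[clean["habitat"] >= 0].groupby("habitat")[["lat", "lon"]].mean()
```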

  19. Two-phase hybrid cryptography algorithm for wireless sensor networks

    Directory of Open Access Journals (Sweden)

    Rawya Rizk

    2015-12-01

    Full Text Available For achieving security in wireless sensor networks (WSNs), cryptography plays an important role. In this paper, a new security algorithm using a combination of both symmetric and asymmetric cryptographic techniques is proposed to provide high security with minimized key maintenance. It guarantees three cryptographic primitives: integrity, confidentiality and authentication. Elliptic Curve Cryptography (ECC) and the Advanced Encryption Standard (AES) are combined to provide encryption. The XOR-DUAL RSA algorithm is considered for authentication and Message Digest-5 (MD5) for integrity. The results show that the proposed hybrid algorithm gives better performance in terms of computation time, the size of the cipher text, and the energy consumption in the WSN. It is also robust against different types of attacks in the case of image encryption.
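
    The two-phase structure can be illustrated with standard primitives from Python's cryptography library; plain RSA-OAEP stands in for the paper's XOR-DUAL RSA, the ECC step is omitted, and MD5 is kept only to mirror the paper (it is not a recommended integrity mechanism).

```python
import hashlib, os
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def hybrid_encrypt(plaintext: bytes, receiver_public_key):
    """Two-phase sketch: a fresh AES session key encrypts the data, an asymmetric
    scheme protects the session key, and a digest travels along for an integrity
    check.  This is a generic hybrid construction, not the paper's algorithm."""
    session_key = AESGCM.generate_key(bit_length=256)
    nonce = os.urandom(12)
    ciphertext = AESGCM(session_key).encrypt(nonce, plaintext, None)    # symmetric phase
    wrapped_key = receiver_public_key.encrypt(                          # asymmetric phase
        session_key,
        padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                     algorithm=hashes.SHA256(), label=None))
    digest = hashlib.md5(plaintext).hexdigest()                         # integrity check (weak)
    return wrapped_key, nonce, ciphertext, digest

# Example key pair standing in for the receiving node / base station.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
blob = hybrid_encrypt(b"sensor reading 42", private_key.public_key())
```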

  20. An Improved FCM Medical Image Segmentation Algorithm Based on MMTD

    Directory of Open Access Journals (Sweden)

    Ningning Zhou

    2014-01-01

    Full Text Available Image segmentation plays an important role in medical image processing. Fuzzy c-means (FCM) is one of the popular clustering algorithms for medical image segmentation, but it is highly vulnerable to noise because it does not take spatial information into account. This paper introduces the medium mathematics system, which is employed to process fuzzy information for image segmentation. It establishes a medium similarity measure based on the measure of medium truth degree (MMTD) and uses the correlation between a pixel and its neighbors to define the medium membership function. An improved FCM medical image segmentation algorithm based on MMTD, which takes some spatial features into account, is proposed in this paper. The experimental results show that the proposed algorithm is more noise-resistant than the standard FCM, with more certainty and less fuzziness, making it practical and effective for medical image segmentation.
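
    For reference, standard FCM on pixel feature vectors looks as follows; the MMTD-based medium membership and the spatial neighbourhood term that the paper adds are not reproduced in this sketch.

```python
import numpy as np

def fuzzy_c_means(X, c=3, m=2.0, iters=100, tol=1e-5, seed=0):
    """Standard FCM. X: (n_pixels, n_features); returns cluster centres and the
    fuzzy membership matrix U of shape (n_pixels, c)."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)            # memberships sum to 1 per pixel
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]            # weighted cluster centres
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-10
        U_new = 1.0 / (d ** (2 / (m - 1)) *
                       np.sum(d ** (-2 / (m - 1)), axis=1, keepdims=True))
        if np.max(np.abs(U_new - U)) < tol:
            U = U_new
            break
        U = U_new
    return centers, U

# Usage on a grayscale image (hypothetical variable `img`):
# X = img.reshape(-1, 1).astype(float)
# centers, U = fuzzy_c_means(X, c=3)
# labels = U.argmax(axis=1).reshape(img.shape)   # hard segmentation from fuzzy memberships
```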