WorldWideScience

Sample records for level set algorithm

  1. Single-step reinitialization and extending algorithms for level-set based multi-phase flow simulations

    Science.gov (United States)

    Fu, Lin; Hu, Xiangyu Y.; Adams, Nikolaus A.

    2017-12-01

    We propose efficient single-step formulations for reinitialization and extending algorithms, which are critical components of level-set based interface-tracking methods. The level-set field is reinitialized with a single-step (non-iterative) "forward tracing" algorithm. A minimum set of cells is defined that describes the interface, and reinitialization employs only data from these cells. Fluid states are extrapolated or extended across the interface by a single-step "backward tracing" algorithm. Both algorithms, which are motivated by analogy to ray-tracing, avoid multiple block-boundary data exchanges that are inevitable for iterative reinitialization and extending approaches within a parallel-computing environment. The single-step algorithms are combined with a multi-resolution conservative sharp-interface method and validated by a wide range of benchmark test cases. We demonstrate that the proposed reinitialization method achieves second-order accuracy in conserving the volume of each phase. The interface location is invariant to reapplication of the single-step reinitialization. Generally, we observe smaller absolute errors than for standard iterative reinitialization on the same grid. The computational efficiency is higher than for the standard and typical high-order iterative reinitialization methods. We observe a 2- to 6-times efficiency improvement over the standard method for serial execution. The proposed single-step extending algorithm, which is commonly employed for assigning data to ghost cells with ghost-fluid or conservative interface interaction methods, shows about 10-times efficiency improvement over the standard method while maintaining the same accuracy. Despite their simplicity, the proposed algorithms offer an efficient and robust alternative to iterative reinitialization and extending methods for level-set based multi-phase simulations.
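
    The following is a minimal, hedged sketch of a non-iterative reinitialization in the spirit of the abstract above, not the authors' forward-tracing algorithm: zero crossings of the level-set field are located by linear interpolation between interface cells, and every narrow-band cell is rebuilt in a single pass as the signed distance to the nearest crossing point, using interface-cell data only. The grid, band width and test field are illustrative assumptions.

        import numpy as np

        def reinitialize_single_step(phi, h, band=5.0):
            """Rebuild phi as a signed distance in a narrow band, in one pass.

            Brute-force illustration: zero crossings are found by linear
            interpolation between neighbouring cells of opposite sign, and every
            band cell takes the distance to the nearest crossing point.
            """
            ny, nx = phi.shape
            pts = []                                     # approximate interface points (x, y) in grid units
            for j in range(ny):
                for i in range(nx - 1):
                    a, b = phi[j, i], phi[j, i + 1]
                    if a * b < 0.0:                      # sign change along x
                        t = a / (a - b)
                        pts.append((i + t, j))
            for j in range(ny - 1):
                for i in range(nx):
                    a, b = phi[j, i], phi[j + 1, i]
                    if a * b < 0.0:                      # sign change along y
                        t = a / (a - b)
                        pts.append((i, j + t))
            if not pts:
                return phi.copy()
            pts = np.asarray(pts)
            out = phi.copy()
            for j in range(ny):
                for i in range(nx):
                    if abs(phi[j, i]) <= band * h:       # narrow band only
                        d = np.sqrt((pts[:, 0] - i) ** 2 + (pts[:, 1] - j) ** 2).min()
                        out[j, i] = np.sign(phi[j, i]) * d * h
            return out

        # usage: a circle of radius 0.3 represented by a deliberately non-distance phi
        n, h = 64, 1.0 / 63
        x, y = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n))
        phi0 = (x - 0.5) ** 2 + (y - 0.5) ** 2 - 0.3 ** 2
        phi_sd = reinitialize_single_step(phi0, h)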

  2. Level-set-based reconstruction algorithm for EIT lung images: first clinical results.

    Science.gov (United States)

    Rahmati, Peyman; Soleimani, Manuchehr; Pulletz, Sven; Frerichs, Inéz; Adler, Andy

    2012-05-01

    We show the first clinical results using the level-set-based reconstruction algorithm for electrical impedance tomography (EIT) data. The level-set-based reconstruction method (LSRM) allows the reconstruction of non-smooth interfaces between image regions, which are typically smoothed by traditional voxel-based reconstruction methods (VBRMs). We develop a time difference formulation of the LSRM for 2D images. The proposed reconstruction method is applied to reconstruct clinical EIT data of a slow flow inflation pressure-volume manoeuvre in lung-healthy and adult lung-injury patients. Images from the LSRM and the VBRM are compared. The results show comparable reconstructed images, but with an improved ability to reconstruct sharp conductivity changes in the distribution of lung ventilation using the LSRM.

  3. Level-set-based reconstruction algorithm for EIT lung images: first clinical results

    International Nuclear Information System (INIS)

    Rahmati, Peyman; Adler, Andy; Soleimani, Manuchehr; Pulletz, Sven; Frerichs, Inéz

    2012-01-01

    We show the first clinical results using the level-set-based reconstruction algorithm for electrical impedance tomography (EIT) data. The level-set-based reconstruction method (LSRM) allows the reconstruction of non-smooth interfaces between image regions, which are typically smoothed by traditional voxel-based reconstruction methods (VBRMs). We develop a time difference formulation of the LSRM for 2D images. The proposed reconstruction method is applied to reconstruct clinical EIT data of a slow flow inflation pressure–volume manoeuvre in lung-healthy and adult lung-injury patients. Images from the LSRM and the VBRM are compared. The results show comparable reconstructed images, but with an improved ability to reconstruct sharp conductivity changes in the distribution of lung ventilation using the LSRM. (paper)

  4. Algorithms for detecting and analysing autocatalytic sets.

    Science.gov (United States)

    Hordijk, Wim; Smith, Joshua I; Steel, Mike

    2015-01-01

    Autocatalytic sets are considered to be fundamental to the origin of life. Prior theoretical and computational work on the existence and properties of these sets has relied on a fast algorithm for detecting self-sustaining autocatalytic sets in chemical reaction systems. Here, we introduce and apply a modified version and several extensions of the basic algorithm: (i) a modification aimed at reducing the number of calls to the computationally most expensive part of the algorithm, (ii) the application of a previously introduced extension of the basic algorithm to sample the smallest possible autocatalytic sets within a reaction network, and the application of a statistical test which provides a probable lower bound on the number of such smallest sets, (iii) the introduction and application of another extension of the basic algorithm to detect autocatalytic sets in a reaction system where molecules can also inhibit (as well as catalyse) reactions, (iv) a further, more abstract, extension of the theory behind searching for autocatalytic sets. (i) The modified algorithm outperforms the original one in the number of calls to the computationally most expensive procedure, which, in some cases also leads to a significant improvement in overall running time, (ii) our statistical test provides strong support for the existence of very large numbers (even millions) of minimal autocatalytic sets in a well-studied polymer model, where these minimal sets share about half of their reactions on average, (iii) "uninhibited" autocatalytic sets can be found in reaction systems that allow inhibition, but their number and sizes depend on the level of inhibition relative to the level of catalysis. (i) Improvements in the overall running time when searching for autocatalytic sets can potentially be obtained by using a modified version of the algorithm, (ii) the existence of large numbers of minimal autocatalytic sets can have important consequences for the possible evolvability of

  5. An Accurate Fire-Spread Algorithm in the Weather Research and Forecasting Model Using the Level-Set Method

    Science.gov (United States)

    Muñoz-Esparza, Domingo; Kosović, Branko; Jiménez, Pedro A.; Coen, Janice L.

    2018-04-01

    The level-set method is typically used to track and propagate the fire perimeter in wildland fire models. Herein, a high-order level-set method using a fifth-order WENO scheme for the discretization of spatial derivatives and third-order explicit Runge-Kutta temporal integration is implemented within the Weather Research and Forecasting model wildland fire physics package, WRF-Fire. The algorithm includes solution of an additional partial differential equation for level-set reinitialization. The accuracy of the fire-front shape and rate of spread in uncoupled simulations is systematically analyzed. It is demonstrated that the common implementation used by level-set-based wildfire models yields rate-of-spread errors in the range 10-35% for typical grid sizes (Δ = 12.5-100 m) and considerably underestimates fire area. Moreover, the amplitude of fire-front gradients in the presence of explicitly resolved turbulence features is systematically underestimated. In contrast, the new WRF-Fire algorithm results in rate-of-spread errors that are lower than 1% and that become nearly grid independent. Also, the underestimation of fire area at the sharp transition between the fire front and the lateral flanks is found to be reduced by a factor of ≈7. A hybrid-order level-set method with locally reduced artificial viscosity is proposed, which substantially alleviates the computational cost associated with high-order discretizations while preserving accuracy. Simulations of the Last Chance wildfire demonstrate additional benefits of high-order accurate level-set algorithms when dealing with complex fuel heterogeneities, enabling propagation across narrow fuel gaps and more accurate fire backing over the lee side of no-fuel clusters.
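
    For orientation, the sketch below integrates the underlying level-set equation phi_t + R |grad phi| = 0 with a first-order Godunov upwind Hamiltonian and explicit Euler stepping, i.e. the low-order class of scheme the paper compares against; the WRF-Fire implementation described above uses fifth-order WENO and third-order Runge-Kutta instead. Domain size, spread rate and ignition shape are made-up test values.

        import numpy as np

        def propagate_front(phi, rate, dx, dt, steps):
            """Advance phi_t + R |grad phi| = 0 with first-order Godunov upwinding.

            rate is the (non-negative) local rate of spread R; the burnt region is
            phi < 0 and the fire perimeter is the zero level set.
            """
            phi = phi.copy()
            for _ in range(steps):
                dxm = (phi - np.roll(phi, 1, axis=1)) / dx    # backward differences
                dxp = (np.roll(phi, -1, axis=1) - phi) / dx   # forward differences
                dym = (phi - np.roll(phi, 1, axis=0)) / dx
                dyp = (np.roll(phi, -1, axis=0) - phi) / dx
                # Godunov Hamiltonian for an outward-moving front (R >= 0)
                grad = np.sqrt(np.maximum(dxm, 0.0) ** 2 + np.minimum(dxp, 0.0) ** 2 +
                               np.maximum(dym, 0.0) ** 2 + np.minimum(dyp, 0.0) ** 2)
                phi -= dt * rate * grad
            return phi

        # usage: circular ignition spreading at 0.5 m/s on a 100 m x 100 m domain
        n, dx = 200, 0.5
        x, y = np.meshgrid(np.arange(n) * dx, np.arange(n) * dx)
        phi = np.sqrt((x - 50.0) ** 2 + (y - 50.0) ** 2) - 5.0   # signed distance to the ignition circle
        phi = propagate_front(phi, rate=0.5, dx=dx, dt=0.4, steps=100)
        fire_area = (phi < 0).sum() * dx ** 2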

  6. Level-set reconstruction algorithm for ultrafast limited-angle X-ray computed tomography of two-phase flows.

    Science.gov (United States)

    Bieberle, M; Hampel, U

    2015-06-13

    Tomographic image reconstruction is based on recovering an object distribution from its projections, which have been acquired from all angular views around the object. If the angular range is limited to less than 180° of parallel projections, typical reconstruction artefacts arise when using standard algorithms. To compensate for this, specialized algorithms using a priori information about the object need to be applied. The application behind this work is ultrafast limited-angle X-ray computed tomography of two-phase flows. Here, only a binary distribution of the two phases needs to be reconstructed, which reduces the complexity of the inverse problem. To solve it, a new reconstruction algorithm (LSR) based on the level-set method is proposed. It includes one force function term accounting for matching the projection data and one incorporating a curvature-dependent smoothing of the phase boundary. The algorithm has been validated using simulated as well as measured projections of known structures, and its performance has been compared to the algebraic reconstruction technique and a binary derivative of it. The validation as well as the application of the level-set reconstruction on a dynamic two-phase flow demonstrated its applicability and its advantages over other reconstruction algorithms. © 2015 The Author(s) Published by the Royal Society. All rights reserved.

  7. Microwave imaging of dielectric cylinder using level set method and conjugate gradient algorithm

    International Nuclear Information System (INIS)

    Grayaa, K.; Bouzidi, A.; Aguili, T.

    2011-01-01

    In this paper, we propose a computational method for microwave imaging of a dielectric cylinder, based on combining the level set technique and the conjugate gradient algorithm. By measuring the scattered field, we try to retrieve the shape, location and permittivity of the object. The forward problem is solved by the moment method, while the inverse problem is reformulated as an optimization problem and solved by the proposed scheme. It is found that the proposed method is able to give good reconstruction quality in terms of the reconstructed shape and permittivity.

  8. Algorithms over partially ordered sets

    DEFF Research Database (Denmark)

    Baer, Robert M.; Østerby, Ole

    1969-01-01

    in partially ordered sets, answer the combinatorial question of how many maximal chains might exist in a partially ordered set with n elements, and we give an algorithm for enumerating all maximal chains. We give (in § 3) algorithms which decide whether a partially ordered set is a (lower or upper) semi-lattice, and whether a lattice has distributive, modular, and Boolean properties. Finally (in § 4) we give Algol realizations of the various algorithms....

  9. An integrated extended Kalman filter–implicit level set algorithm for monitoring planar hydraulic fractures

    International Nuclear Information System (INIS)

    Peirce, A; Rochinha, F

    2012-01-01

    We describe a novel approach to the inversion of elasto-static tiltmeter measurements to monitor planar hydraulic fractures propagating within three-dimensional elastic media. The technique combines the extended Kalman filter (EKF), which predicts and updates state estimates using tiltmeter measurement time-series, with a novel implicit level set algorithm (ILSA), which solves the coupled elasto-hydrodynamic equations. The EKF and ILSA are integrated to produce an algorithm to locate the unknown fracture-free boundary. A scaling argument is used to derive a strategy to tune the algorithm parameters to enable measurement information to compensate for unmodeled dynamics. Synthetic tiltmeter data for three numerical experiments are generated by introducing significant changes to the fracture geometry by altering the confining geological stress field. Even though there is no confining stress field in the dynamic model used by the new EKF-ILSA scheme, it is able to use synthetic data to arrive at remarkably accurate predictions of the fracture widths and footprints. These experiments also explore the robustness of the algorithm to noise and to placement of tiltmeter arrays operating in the near-field and far-field regimes. In these experiments, the appropriate parameter choices and strategies to improve the robustness of the algorithm to significant measurement noise are explored. (paper)

  10. Identification of Arbitrary Zonation in Groundwater Parameters using the Level Set Method and a Parallel Genetic Algorithm

    Science.gov (United States)

    Lei, H.; Lu, Z.; Vesselinov, V. V.; Ye, M.

    2017-12-01

    Simultaneous identification of both the zonation structure of aquifer heterogeneity and the hydrogeological parameters associated with these zones is challenging, especially for complex subsurface heterogeneity fields. In this study, a new approach, based on the combination of the level set method and a parallel genetic algorithm is proposed. Starting with an initial guess for the zonation field (including both zonation structure and the hydraulic properties of each zone), the level set method ensures that material interfaces are evolved through the inverse process such that the total residual between the simulated and observed state variables (hydraulic head) always decreases, which means that the inversion result depends on the initial guess field and the minimization process might fail if it encounters a local minimum. To find the global minimum, the genetic algorithm (GA) is utilized to explore the parameters that define initial guess fields, and the minimal total residual corresponding to each initial guess field is considered as the fitness function value in the GA. Due to the expensive evaluation of the fitness function, a parallel GA is adapted in combination with a simulated annealing algorithm. The new approach has been applied to several synthetic cases in both steady-state and transient flow fields, including a case with real flow conditions at the chromium contaminant site at the Los Alamos National Laboratory. The results show that this approach is capable of identifying the arbitrary zonation structures of aquifer heterogeneity and the hydrogeological parameters associated with these zones effectively.

  11. Analysing Music with Point-Set Compression Algorithms

    DEFF Research Database (Denmark)

    Meredith, David

    2016-01-01

    Several point-set pattern-discovery and compression algorithms designed for analysing music are reviewed and evaluated. Each algorithm takes as input a point-set representation of a score in which each note is represented as a point in pitch-time space. Each algorithm computes the maximal...... and sections in pieces of classical music. On the first task, the best-performing algorithms achieved success rates of around 84%. In the second task, the best algorithms achieved mean F1 scores of around 0.49, with scores for individual pieces rising as high as 0.71....

  12. Algorithm for finding minimal cut sets in a fault tree

    International Nuclear Information System (INIS)

    Rosenberg, Ladislav

    1996-01-01

    This paper presents several algorithms that have been used in a computer code for fault-tree analysis by the minimal cut sets method. The main algorithm is a more efficient version of the new CARA algorithm, which finds minimal cut sets using an auxiliary dynamic structure. The presented algorithm finds the minimal cut sets according to defined requirements - on the order of the minimal cut sets, on their number, or both. This algorithm is from three to six times faster when compared with the primary version of the CARA algorithm.
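
    The CARA algorithm itself is not specified in enough detail here to reproduce; as a generic illustration of the task, the sketch below expands a small fault tree top-down into its minimal cut sets, with an optional cap on the cut set order as an example of a user-defined requirement. The tree encoding (a dict of AND/OR gates) is an assumption made for the example.

        from itertools import product

        def minimal_cut_sets(gates, top, max_order=None):
            """Top-down expansion of a fault tree into minimal cut sets.

            gates maps a gate name to ('AND'|'OR', [children]); anything that is
            not a key of gates is treated as a basic event.
            """
            def expand(node):
                if node not in gates:
                    return [frozenset([node])]
                kind, children = gates[node]
                child_sets = [expand(c) for c in children]
                if kind == 'OR':
                    return [cs for group in child_sets for cs in group]
                # AND gate: cartesian product of the children's cut sets
                return [frozenset().union(*combo) for combo in product(*child_sets)]

            candidates = expand(top)
            if max_order is not None:                      # requirement on the cut set order
                candidates = [c for c in candidates if len(c) <= max_order]
            # keep only minimal sets (no proper subset is also a cut set)
            candidates = sorted(set(candidates), key=len)
            minimal = []
            for c in candidates:
                if not any(m < c for m in minimal):
                    minimal.append(c)
            return minimal

        # usage: TOP = (A AND B) OR (A AND C) OR D
        tree = {'TOP': ('OR', ['G1', 'G2', 'D']),
                'G1': ('AND', ['A', 'B']),
                'G2': ('AND', ['A', 'C'])}
        print(minimal_cut_sets(tree, 'TOP'))   # minimal cut sets: {D}, {A, B}, {A, C}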

  13. Identifying Heterogeneities in Subsurface Environment using the Level Set Method

    Energy Technology Data Exchange (ETDEWEB)

    Lei, Hongzhuan [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Lu, Zhiming [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Vesselinov, Velimir Valentinov [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-08-25

    These are slides from a presentation on identifying heterogeneities in subsurface environment using the level set method. The slides start with the motivation, then explain Level Set Method (LSM), the algorithms, some examples are given, and finally future work is explained.

  14. A new level set model for multimaterial flows

    Energy Technology Data Exchange (ETDEWEB)

    Starinshak, David P. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Karni, Smadar [Univ. of Michigan, Ann Arbor, MI (United States). Dept. of Mathematics; Roe, Philip L. [Univ. of Michigan, Ann Arbor, MI (United States). Dept. of AerospaceEngineering

    2014-01-08

    We present a new level set model for representing multimaterial flows in multiple space dimensions. Instead of associating a level set function with a specific fluid material, the function is associated with a pair of materials and the interface that separates them. A voting algorithm collects sign information from all level sets and determines material designations. M(M−1)/2 level set functions might be needed to represent a general M-material configuration; problems of practical interest use far fewer functions, since not all pairs of materials share an interface. The new model is less prone to producing indeterminate material states, i.e. regions claimed by more than one material (overlaps) or no material at all (vacuums). It outperforms existing material-based level set models without the need for reinitialization schemes, thereby avoiding additional computational costs and preventing excessive numerical diffusion.
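
    A minimal illustration of the voting idea described above, not the paper's full model: each pairwise level-set function casts a sign vote for one of its two materials, and every cell is assigned the material with the most votes (ties broken here by lowest material index, an assumption of this sketch). The 1D three-material configuration is invented for the example.

        import numpy as np

        def assign_materials(pair_phis, n_materials):
            """Vote on material designations from pairwise level-set functions.

            pair_phis maps a material pair (a, b) to a level-set array that is
            negative inside material a and positive inside material b.
            """
            shape = next(iter(pair_phis.values())).shape
            votes = np.zeros((n_materials,) + shape)
            for (a, b), phi in pair_phis.items():
                votes[a] += (phi < 0)          # sign votes for material a
                votes[b] += (phi >= 0)         # and for material b
            return votes.argmax(axis=0)        # ties broken by lowest material index

        # usage: three materials on [0, 1] with interfaces at x = 0.4 and x = 0.7;
        # all M(M-1)/2 = 3 pairwise functions are carried in this tiny example
        x = np.linspace(0.0, 1.0, 101)
        pair_phis = {(0, 1): x - 0.4,
                     (1, 2): x - 0.7,
                     (0, 2): x - 0.55}          # any surface separating 0 from 2
        ids = assign_materials(pair_phis, 3)    # 0 for x < 0.4, 1 for 0.4 < x < 0.7, else 2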

  15. Hybrid approach for detection of dental caries based on the methods FCM and level sets

    Science.gov (United States)

    Chaabene, Marwa; Ben Ali, Ramzi; Ejbali, Ridha; Zaied, Mourad

    2017-03-01

    This paper presents a new technique for detection of dental caries, a bacterial disease that destroys the tooth structure. In our approach, we have developed a new segmentation method that combines the advantages of the fuzzy C-means (FCM) algorithm and the level set method. The results obtained by the FCM algorithm are used by the level set algorithm to reduce the influence of noise on each of these algorithms, to facilitate level set manipulation and to lead to more robust segmentation. The sensitivity and specificity confirm the effectiveness of the proposed method for caries detection.

  16. CT liver volumetry using geodesic active contour segmentation with a level-set algorithm

    Science.gov (United States)

    Suzuki, Kenji; Epstein, Mark L.; Kohlbrenner, Ryan; Obajuluwa, Ademola; Xu, Jianwu; Hori, Masatoshi; Baron, Richard

    2010-03-01

    Automatic liver segmentation on CT images is challenging because the liver often abuts other organs of a similar density. Our purpose was to develop an accurate automated liver segmentation scheme for measuring liver volumes. We developed an automated volumetry scheme for the liver in CT based on a five-step scheme. First, an anisotropic smoothing filter was applied to portal-venous phase CT images to remove noise while preserving the liver structure, followed by an edge enhancer to enhance the liver boundary. By using the boundary-enhanced image as a speed function, a fast-marching algorithm generated an initial surface that roughly estimated the liver shape. A geodesic-active-contour segmentation algorithm coupled with level-set contour-evolution refined the initial surface so as to more precisely fit the liver boundary. The liver volume was calculated based on the refined liver surface. Hepatic CT scans of eighteen prospective liver donors were obtained under a liver transplant protocol with a multi-detector CT system. Automated liver volumes obtained were compared with those manually traced by a radiologist, used as "gold standard." The mean liver volume obtained with our scheme was 1,520 cc, whereas the mean manual volume was 1,486 cc, with a mean absolute difference of 104 cc (7.0%). CT liver volumetrics based on an automated scheme agreed excellently with "gold-standard" manual volumetrics (intra-class correlation coefficient was 0.95) with no statistically significant difference (p(F≤f)=0.32), and required substantially less completion time. Our automated scheme provides an efficient and accurate way of measuring liver volumes.
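
    The full five-step pipeline is beyond a short sketch, but the final volumetry step reduces to counting the voxels enclosed by the refined zero level set and multiplying by the voxel volume. The sketch below assumes a synthetic ellipsoidal "liver" and made-up voxel spacing purely for illustration.

        import numpy as np

        def liver_volume_cc(phi, spacing_mm):
            """Volume enclosed by the zero level set (phi < 0 means inside), in cc."""
            voxel_mm3 = float(np.prod(spacing_mm))     # 1 cc = 1000 mm^3
            return (phi < 0).sum() * voxel_mm3 / 1000.0

        # usage with a synthetic ellipsoidal "liver" (semi-axes 20, 35, 45 voxels)
        z, y, x = np.mgrid[0:64, 0:128, 0:128].astype(float)
        phi = ((x - 64) / 45.0) ** 2 + ((y - 64) / 35.0) ** 2 + ((z - 32) / 20.0) ** 2 - 1.0
        print(liver_volume_cc(phi, spacing_mm=(5.0, 1.4, 1.4)))   # roughly 1300 cc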

  17. Set-Membership Proportionate Affine Projection Algorithms

    Directory of Open Access Journals (Sweden)

    Stefan Werner

    2007-01-01

    Full Text Available Proportionate adaptive filters can improve the convergence speed for the identification of sparse systems as compared to their conventional counterparts. In this paper, the idea of proportionate adaptation is combined with the framework of set-membership filtering (SMF) in an attempt to derive novel computationally efficient algorithms. The resulting algorithms attain attractive faster convergence for both sparse and dispersive channels while decreasing the average computational complexity due to the data-discerning feature of the SMF approach. In addition, we propose a rule that allows us to automatically adjust the number of past data pairs employed in the update. This leads to a set-membership proportionate affine projection algorithm (SM-PAPA) having a variable data-reuse factor, allowing a significant reduction in the overall complexity when compared with a fixed data-reuse factor. Reduced-complexity implementations of the proposed algorithms are also considered that reduce the dimensions of the matrix inversions involved in the update. Simulations show good results in terms of reduced number of updates, speed of convergence, and final mean-squared error.

  18. Transport and diffusion of material quantities on propagating interfaces via level set methods

    CERN Document Server

    Adalsteinsson, D

    2003-01-01

    We develop theory and numerical algorithms to apply level set methods to problems involving the transport and diffusion of material quantities in a level set framework. Level set methods are computational techniques for tracking moving interfaces; they work by embedding the propagating interface as the zero level set of a higher dimensional function, and then approximate the solution of the resulting initial value partial differential equation using upwind finite difference schemes. The traditional level set method works in the trace space of the evolving interface, and hence disregards any parameterization in the interface description. Consequently, material quantities on the interface which themselves are transported under the interface motion are not easily handled in this framework. We develop model equations and algorithmic techniques to extend the level set method to include these problems. We demonstrate the accuracy of our approach through a series of test examples and convergence studies.

  19. Transport and diffusion of material quantities on propagating interfaces via level set methods

    International Nuclear Information System (INIS)

    Adalsteinsson, David; Sethian, J.A.

    2003-01-01

    We develop theory and numerical algorithms to apply level set methods to problems involving the transport and diffusion of material quantities in a level set framework. Level set methods are computational techniques for tracking moving interfaces; they work by embedding the propagating interface as the zero level set of a higher dimensional function, and then approximate the solution of the resulting initial value partial differential equation using upwind finite difference schemes. The traditional level set method works in the trace space of the evolving interface, and hence disregards any parameterization in the interface description. Consequently, material quantities on the interface which themselves are transported under the interface motion are not easily handled in this framework. We develop model equations and algorithmic techniques to extend the level set method to include these problems. We demonstrate the accuracy of our approach through a series of test examples and convergence studies

  20. Parallel clustering algorithm for large-scale biological data sets.

    Science.gov (United States)

    Wang, Minchao; Zhang, Wu; Ding, Wang; Dai, Dongbo; Zhang, Huiran; Xie, Hao; Chen, Luonan; Guo, Yike; Xie, Jiang

    2014-01-01

    The recent explosion of biological data brings a great challenge for traditional clustering algorithms. With increasing scale of data sets, much larger memory and longer runtime are required for the cluster identification problems. The affinity propagation algorithm outperforms many other classical clustering algorithms and is widely applied in biological research. However, the time and space complexity become a great bottleneck when handling large-scale data sets. Moreover, the similarity matrix, whose constructing procedure takes long runtime, is required before running the affinity propagation algorithm, since the algorithm clusters data sets based on the similarities between data pairs. Two types of parallel architectures are proposed in this paper to accelerate the similarity matrix constructing procedure and the affinity propagation algorithm. The shared-memory architecture is used to construct the similarity matrix, and the distributed system is taken for the affinity propagation algorithm, because of its large memory size and great computing capacity. An appropriate way of data partition and reduction is designed in our method, in order to minimize the global communication cost among processes. A speedup of 100 is gained with 128 cores. The runtime is reduced from several hours to a few seconds, which indicates that the parallel algorithm is capable of handling large-scale data sets effectively. The parallel affinity propagation also achieves a good performance when clustering large-scale gene data (microarray) and detecting families in large protein superfamilies.

  1. On multiple level-set regularization methods for inverse problems

    International Nuclear Information System (INIS)

    DeCezaro, A; Leitão, A; Tai, X-C

    2009-01-01

    We analyze a multiple level-set method for solving inverse problems with piecewise constant solutions. This method corresponds to an iterated Tikhonov method for a particular Tikhonov functional G_α based on TV–H^1 penalization. We define generalized minimizers for our Tikhonov functional and establish an existence result. Moreover, we prove convergence and stability results for the proposed Tikhonov method. A multiple level-set algorithm is derived from the first-order optimality conditions for the Tikhonov functional G_α, in analogy to the iterated Tikhonov method. The proposed multiple level-set method is tested on an inverse potential problem. Numerical experiments show that the method is able to recover multiple objects as well as multiple contrast levels.

  2. ANALYSIS OF PARAMETERIZATION VALUE REDUCTION OF SOFT SETS AND ITS ALGORITHM

    Directory of Open Access Journals (Sweden)

    Mohammed Adam Taheir Mohammed

    2016-02-01

    Full Text Available In this paper, the parameterization value reduction of soft sets and its algorithm in decision making are studied and described. It is based on the parameterization reduction of soft sets. The purpose of this study is to investigate the inherited disadvantages of parameterization reduction of soft sets and its algorithm. The algorithms presented in this study attempt to remove the least significant parameter values from the soft set. Through the analysis, two techniques have been described. Through this study, it is found that parameterization reduction of soft sets and its algorithm yields different and inconsistent suboptimal results.

  3. A local level set method based on a finite element method for unstructured meshes

    International Nuclear Information System (INIS)

    Ngo, Long Cu; Choi, Hyoung Gwon

    2016-01-01

    A local level set method for unstructured meshes has been implemented by using a finite element method. A least-square weighted residual method was employed for implicit discretization to solve the level set advection equation. By contrast, a direct re-initialization method, which is directly applicable to the local level set method for unstructured meshes, was adopted to re-correct the level set function to become a signed distance function after advection. The proposed algorithm was constructed such that the advection and direct reinitialization steps were conducted only for nodes inside the narrow band around the interface. Therefore, in the advection step, the Gauss–Seidel method was used to update the level set function using a node-by-node solution method. Some benchmark problems were solved by using the present local level set method. Numerical results have shown that the proposed algorithm is accurate and efficient in terms of computational time
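
    The paper's least-squares finite element discretization on unstructured meshes is not reproduced here; the sketch below only illustrates the two ingredients named in the abstract on a 1D structured grid: restricting work to a narrow band around the interface and relaxing an implicit (first-order upwind) advection step with node-by-node Gauss-Seidel sweeps. Grid size, velocity and band width are assumptions of the example.

        import numpy as np

        def advect_narrow_band(phi, u, dx, dt, band, sweeps=2):
            """One implicit upwind step of phi_t + u phi_x = 0 (u > 0), band nodes only.

            The implicit system (1 + c) phi_i - c phi_{i-1} = phi_i^n with c = u dt / dx
            is relaxed by node-by-node Gauss-Seidel sweeps; for this 1D system one
            ascending sweep is already exact, extra sweeps are kept for generality.
            Nodes outside the narrow band |phi| < band are left untouched.
            """
            c = u * dt / dx
            rhs = phi.copy()                              # phi at the previous time level
            new = phi.copy()
            active = np.abs(phi) < band
            for _ in range(sweeps):
                for i in range(1, len(phi)):              # ascending order follows the flow
                    if active[i]:
                        new[i] = (rhs[i] + c * new[i - 1]) / (1.0 + c)
            return new

        # usage: an interface at x = 0.3 advected to the right with u = 1
        n, dx, dt = 200, 1.0 / 199, 0.002
        x = np.linspace(0.0, 1.0, n)
        phi = x - 0.3                                     # signed distance to the interface
        for _ in range(50):
            phi = advect_narrow_band(phi, u=1.0, dx=dx, dt=dt, band=10 * dx)
        # the zero crossing has moved to roughly x = 0.3 + 50 * dt = 0.4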

  4. A local level set method based on a finite element method for unstructured meshes

    Energy Technology Data Exchange (ETDEWEB)

    Ngo, Long Cu; Choi, Hyoung Gwon [School of Mechanical Engineering, Seoul National University of Science and Technology, Seoul (Korea, Republic of)

    2016-12-15

    A local level set method for unstructured meshes has been implemented by using a finite element method. A least-square weighted residual method was employed for implicit discretization to solve the level set advection equation. By contrast, a direct re-initialization method, which is directly applicable to the local level set method for unstructured meshes, was adopted to re-correct the level set function to become a signed distance function after advection. The proposed algorithm was constructed such that the advection and direct reinitialization steps were conducted only for nodes inside the narrow band around the interface. Therefore, in the advection step, the Gauss–Seidel method was used to update the level set function using a node-by-node solution method. Some benchmark problems were solved by using the present local level set method. Numerical results have shown that the proposed algorithm is accurate and efficient in terms of computational time.

  5. Computer-aided measurement of liver volumes in CT by means of geodesic active contour segmentation coupled with level-set algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Suzuki, Kenji; Kohlbrenner, Ryan; Epstein, Mark L.; Obajuluwa, Ademola M.; Xu Jianwu; Hori, Masatoshi [Department of Radiology, University of Chicago, 5841 South Maryland Avenue, Chicago, Illinois 60637 (United States)

    2010-05-15

    Purpose: Computerized liver extraction from hepatic CT images is challenging because the liver often abuts other organs of a similar density. The purpose of this study was to develop a computer-aided measurement of liver volumes in hepatic CT. Methods: The authors developed a computerized liver extraction scheme based on geodesic active contour segmentation coupled with level-set contour evolution. First, an anisotropic diffusion filter was applied to portal-venous-phase CT images for noise reduction while preserving the liver structure, followed by a scale-specific gradient magnitude filter to enhance the liver boundaries. Then, a nonlinear grayscale converter enhanced the contrast of the liver parenchyma. By using the liver-parenchyma-enhanced image as a speed function, a fast-marching level-set algorithm generated an initial contour that roughly estimated the liver shape. A geodesic active contour segmentation algorithm coupled with level-set contour evolution refined the initial contour to define the liver boundaries more precisely. The liver volume was then calculated using these refined boundaries. Hepatic CT scans of 15 prospective liver donors were obtained under a liver transplant protocol with a multidetector CT system. The liver volumes extracted by the computerized scheme were compared to those traced manually by a radiologist, used as "gold standard." Results: The mean liver volume obtained with our scheme was 1504 cc, whereas the mean gold standard manual volume was 1457 cc, resulting in a mean absolute difference of 105 cc (7.2%). The computer-estimated liver volumetrics agreed excellently with the gold-standard manual volumetrics (intraclass correlation coefficient was 0.95) with no statistically significant difference (F=0.77; p(F≤f)=0.32). The average accuracy, sensitivity, specificity, and percent volume error were 98.4%, 91.1%, 99.1%, and 7.2%, respectively. Computerized CT liver volumetry would require substantially less

  6. Computer-aided measurement of liver volumes in CT by means of geodesic active contour segmentation coupled with level-set algorithms

    International Nuclear Information System (INIS)

    Suzuki, Kenji; Kohlbrenner, Ryan; Epstein, Mark L.; Obajuluwa, Ademola M.; Xu Jianwu; Hori, Masatoshi

    2010-01-01

    Purpose: Computerized liver extraction from hepatic CT images is challenging because the liver often abuts other organs of a similar density. The purpose of this study was to develop a computer-aided measurement of liver volumes in hepatic CT. Methods: The authors developed a computerized liver extraction scheme based on geodesic active contour segmentation coupled with level-set contour evolution. First, an anisotropic diffusion filter was applied to portal-venous-phase CT images for noise reduction while preserving the liver structure, followed by a scale-specific gradient magnitude filter to enhance the liver boundaries. Then, a nonlinear grayscale converter enhanced the contrast of the liver parenchyma. By using the liver-parenchyma-enhanced image as a speed function, a fast-marching level-set algorithm generated an initial contour that roughly estimated the liver shape. A geodesic active contour segmentation algorithm coupled with level-set contour evolution refined the initial contour to define the liver boundaries more precisely. The liver volume was then calculated using these refined boundaries. Hepatic CT scans of 15 prospective liver donors were obtained under a liver transplant protocol with a multidetector CT system. The liver volumes extracted by the computerized scheme were compared to those traced manually by a radiologist, used as "gold standard." Results: The mean liver volume obtained with our scheme was 1504 cc, whereas the mean gold standard manual volume was 1457 cc, resulting in a mean absolute difference of 105 cc (7.2%). The computer-estimated liver volumetrics agreed excellently with the gold-standard manual volumetrics (intraclass correlation coefficient was 0.95) with no statistically significant difference (F=0.77; p(F≤f)=0.32). The average accuracy, sensitivity, specificity, and percent volume error were 98.4%, 91.1%, 99.1%, and 7.2%, respectively. Computerized CT liver volumetry would require substantially less completion time

  7. A Memory and Computation Efficient Sparse Level-Set Method

    NARCIS (Netherlands)

    Laan, Wladimir J. van der; Jalba, Andrei C.; Roerdink, Jos B.T.M.

    Since its introduction, the level set method has become the favorite technique for capturing and tracking moving interfaces, and found applications in a wide variety of scientific fields. In this paper we present efficient data structures and algorithms for tracking dynamic interfaces through the

  8. Semiautomated hybrid algorithm for estimation of three-dimensional liver surface in CT using dynamic cellular automata and level-sets.

    Science.gov (United States)

    Dakua, Sarada Prasad; Abinahed, Julien; Al-Ansari, Abdulla

    2015-04-01

    Liver segmentation remains a major challenge, largely due to its intense complexity with surrounding anatomical structures (stomach, kidney, and heart), high noise level and lack of contrast in pathological computed tomography (CT) data. We present an approach to reconstructing the liver surface in low contrast CT. The main contributions are: (1) a stochastic resonance-based methodology in the discrete cosine transform domain is developed to enhance the contrast of pathological liver images, (2) a new formulation is proposed to prevent the object boundary, resulting from the cellular automata method, from leaking into the surrounding areas of similar intensity, and (3) a level-set method is suggested to generate intermediate segmentation contours from two segmented slices distantly located in a subject sequence. We have tested the algorithm on real datasets obtained from two sources, Hamad General Hospital and the Medical Image Computing and Computer-Assisted Interventions grand challenge workshop. Various parameters in the algorithm play imperative roles, and their values are therefore precisely selected. Both qualitative and quantitative evaluation performed on liver data show promising segmentation accuracy when compared with ground truth data, reflecting the potential of the proposed method.

  9. Scalable Algorithms for Clustering Large Geospatiotemporal Data Sets on Manycore Architectures

    Science.gov (United States)

    Mills, R. T.; Hoffman, F. M.; Kumar, J.; Sreepathi, S.; Sripathi, V.

    2016-12-01

    The increasing availability of high-resolution geospatiotemporal data sets from sources such as observatory networks, remote sensing platforms, and computational Earth system models has opened new possibilities for knowledge discovery using data sets fused from disparate sources. Traditional algorithms and computing platforms are impractical for the analysis and synthesis of data sets of this size; however, new algorithmic approaches that can effectively utilize the complex memory hierarchies and the extremely high levels of available parallelism in state-of-the-art high-performance computing platforms can enable such analysis. We describe a massively parallel implementation of accelerated k-means clustering and some optimizations to boost computational intensity and utilization of wide SIMD lanes on state-of-the-art multi- and manycore processors, including the second-generation Intel Xeon Phi ("Knights Landing") processor based on the Intel Many Integrated Core (MIC) architecture, which includes several new features, including an on-package high-bandwidth memory. We also analyze the code in the context of a few practical applications to the analysis of climatic and remotely-sensed vegetation phenology data sets, and speculate on some of the new applications that such scalable analysis methods may enable.

  10. Considerations and Algorithms for Compression of Sets

    DEFF Research Database (Denmark)

    Larsson, Jesper

    We consider compression of unordered sets of distinct elements. After a discussion of the general problem, we focus on compressing sets of fixed-length bitstrings in the presence of statistical information. We survey techniques from previous work, suggesting some adjustments, and propose a novel...... compression algorithm that allows transparent incorporation of various estimates for probability distribution. Our experimental results allow the conclusion that set compression can benefit from incorporating statistics, using our method or variants of previously known techniques....

  11. Solution Algorithm for a New Bi-Level Discrete Network Design Problem

    Directory of Open Access Journals (Sweden)

    Qun Chen

    2013-12-01

    Full Text Available A new discrete network design problem (DNDP) was proposed in this paper, where the variables can be a series of integers rather than just 0-1. The new DNDP can determine both capacity improvement grades of reconstruction roads and locations and capacity grades of newly added roads, and thus complies with practical projects where road capacity can only take some discrete levels corresponding to the number of lanes of roads. This paper designed a solution algorithm combining branch-and-bound with the Hooke-Jeeves algorithm, where feasible integer solutions are recorded during the search process of the Hooke-Jeeves algorithm, lending itself to determining the upper bound of the upper-level problem. The thresholds for branch cutting and ending were set for earlier convergence. Numerical examples are given to demonstrate the efficiency of the proposed algorithm.
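
    The bi-level DNDP model is not reproduced here; as a sketch of the continuous search embedded in the branch-and-bound scheme, the code below implements a plain Hooke-Jeeves pattern search (exploratory moves plus pattern moves with step shrinking) and applies it to the Rosenbrock function as a stand-in objective.

        import numpy as np

        def hooke_jeeves(f, x0, step=1.0, shrink=0.5, tol=1e-6, max_iter=10000):
            """Basic Hooke-Jeeves pattern search (derivative-free minimization)."""
            def explore(base, fbase, s):
                x, fx = base.copy(), fbase
                for i in range(len(x)):
                    for d in (+s, -s):
                        trial = x.copy()
                        trial[i] += d
                        ft = f(trial)
                        if ft < fx:
                            x, fx = trial, ft
                            break
                return x, fx

            base = np.asarray(x0, dtype=float)
            fbase = f(base)
            s = step
            for _ in range(max_iter):
                x, fx = explore(base, fbase, s)
                if fx < fbase:
                    # pattern move: jump along the improving direction and explore again
                    while True:
                        candidate = x + (x - base)
                        base, fbase = x, fx
                        x, fx = explore(candidate, f(candidate), s)
                        if fx >= fbase:
                            break
                else:
                    s *= shrink                   # no improvement: refine the step
                    if s < tol:
                        break
            return base, fbase

        # usage on the Rosenbrock test function
        rosen = lambda v: (1 - v[0]) ** 2 + 100 * (v[1] - v[0] ** 2) ** 2
        x_best, f_best = hooke_jeeves(rosen, [-1.2, 1.0], step=0.5)
        print(x_best, f_best)                     # moves toward the minimum at (1, 1)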

  12. Structural level set inversion for microwave breast screening

    International Nuclear Information System (INIS)

    Irishina, Natalia; Álvarez, Diego; Dorn, Oliver; Moscoso, Miguel

    2010-01-01

    We present a new inversion strategy for the early detection of breast cancer from microwave data which is based on a new multiphase level set technique. This novel structural inversion method uses a modification of the color level set technique adapted to the specific situation of structural breast imaging, taking into account the high complexity of the breast tissue. We only use data at a few microwave frequencies for detecting the tumors hidden in this complex structure. Three level set functions are employed for describing four different types of breast tissue, where each of these four regions is allowed to have a complicated topology and to have an interior structure which needs to be estimated from the data simultaneously with the region interfaces. The algorithm consists of several stages of increasing complexity. In each stage more details about the anatomical structure of the breast interior are incorporated into the inversion model. The synthetic breast models which are used for creating simulated data are based on real MRI images of the breast and are therefore quite realistic. Our results demonstrate the potential and feasibility of the proposed level set technique for detecting, locating and characterizing a small tumor in its early stage of development embedded in such a realistic breast model. Both the data acquisition simulation and the inversion are carried out in 2D

  13. Application of the artificial bee colony algorithm for solving the set covering problem.

    Science.gov (United States)

    Crawford, Broderick; Soto, Ricardo; Cuesta, Rodrigo; Paredes, Fernando

    2014-01-01

    The set covering problem is a formal model for many practical optimization problems. In the set covering problem the goal is to choose a subset of the columns of minimal cost that covers every row. Here, we present a novel application of the artificial bee colony algorithm to solve the non-unicost set covering problem. The artificial bee colony algorithm is a recent swarm metaheuristic technique based on the intelligent foraging behavior of honey bees. Experimental results show that our artificial bee colony algorithm is competitive in terms of solution quality with other recent metaheuristic approaches for the set covering problem.
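
    A hedged sketch of the problem setup rather than the bee colony itself: it encodes the non-unicost set covering objective (column costs, a feasibility check) and a greedy cost-per-newly-covered-row repair step, which is a common building block inside such metaheuristics. The incidence matrix and costs are toy values.

        import numpy as np

        def cost(solution, costs):
            """Total cost of the selected columns (solution is a 0/1 vector)."""
            return float(np.dot(solution, costs))

        def is_cover(solution, A):
            """True if every row is covered by at least one selected column."""
            return bool(np.all(A[:, solution.astype(bool)].sum(axis=1) >= 1))

        def greedy_repair(solution, A, costs):
            """Add the cheapest column per newly covered row until feasible."""
            sol = solution.copy()
            covered = A[:, sol.astype(bool)].sum(axis=1) >= 1
            while not covered.all():
                gain = A[~covered].sum(axis=0)                 # rows each column would newly cover
                gain[sol.astype(bool)] = 0                     # already-selected columns add nothing
                ratio = np.where(gain > 0, costs / np.maximum(gain, 1), np.inf)
                j = int(np.argmin(ratio))
                sol[j] = 1
                covered = covered | (A[:, j] >= 1)
            return sol

        # usage: 4 rows, 5 columns; A[i, j] = 1 if column j covers row i
        A = np.array([[1, 0, 1, 0, 0],
                      [1, 1, 0, 0, 0],
                      [0, 1, 0, 1, 0],
                      [0, 0, 1, 1, 1]])
        costs = np.array([3.0, 2.0, 2.0, 4.0, 1.0])
        sol = greedy_repair(np.zeros(5, dtype=int), A, costs)
        print(sol, cost(sol, costs), is_cover(sol, A))         # [0 1 1 0 0] 4.0 True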

  14. Comparison of Co-Temporal Modeling Algorithms on Sparse Experimental Time Series Data Sets.

    Science.gov (United States)

    Allen, Edward E; Norris, James L; John, David J; Thomas, Stan J; Turkett, William H; Fetrow, Jacquelyn S

    2010-01-01

    Multiple approaches for reverse-engineering biological networks from time-series data have been proposed in the computational biology literature. These approaches can be classified by their underlying mathematical algorithms, such as Bayesian or algebraic techniques, as well as by their time paradigm, which includes next-state and co-temporal modeling. The types of biological relationships, such as parent-child or siblings, discovered by these algorithms are quite varied. It is important to understand the strengths and weaknesses of the various algorithms and time paradigms on actual experimental data. We assess how well the co-temporal implementations of three algorithms, continuous Bayesian, discrete Bayesian, and computational algebraic, can 1) identify two types of entity relationships, parent and sibling, between biological entities, 2) deal with experimental sparse time course data, and 3) handle experimental noise seen in replicate data sets. These algorithms are evaluated, using the shuffle index metric, for how well the resulting models match literature models in terms of siblings and parent relationships. Results indicate that all three co-temporal algorithms perform well, at a statistically significant level, at finding sibling relationships, but perform relatively poorly in finding parent relationships.

  15. An efficient, scalable, and adaptable framework for solving generic systems of level-set PDEs

    Directory of Open Access Journals (Sweden)

    Kishore R. Mosaliganti

    2013-12-01

    Full Text Available In the last decade, level-set methods have been actively developed for applications in image registration, segmentation, tracking, and reconstruction. However, the development of a wide variety of level-set PDEs and their numerical discretization schemes, coupled with hybrid combinations of PDE terms, stopping criteria, and reinitialization strategies, has created a software logistics problem. In the absence of an integrative design, current toolkits support only specific types of level-set implementations which restrict future algorithm development since extensions require significant code duplication and effort. In the new NIH/NLM Insight Toolkit (ITK v4 architecture, we implemented a level-set software design that is flexible to different numerical (continuous, discrete, and sparse and grid representations (point, mesh, and image-based. Given that a generic PDE is a summation of different terms, we used a set of linked containers to which level-set terms can be added or deleted at any point in the evolution process. This container-based approach allows the user to explore and customize terms in the level-set equation at compile-time in a flexible manner. The framework is optimized so that repeated computations of common intensity functions (e.g. gradient and Hessians across multiple terms is eliminated. The framework further enables the evolution of multiple level-sets for multi-object segmentation and processing of large datasets. For doing so, we restrict level-set domains to subsets of the image domain and use multithreading strategies to process groups of subdomains or level-set functions. Users can also select from a variety of reinitialization policies and stopping criteria. Finally, we developed a visualization framework that shows the evolution of a level-set in real-time to help guide algorithm development and parameter optimization. We demonstrate the power of our new framework using confocal microscopy images of cells in a

  16. An application of the maximal independent set algorithm to course ...

    African Journals Online (AJOL)

    In this paper, we demonstrated one of the many applications of the Maximal Independent Set Algorithm in the area of course allocation. A program was developed in Pascal and used in implementing a modified version of the algorithm to assign teaching courses to available lecturers in any academic environment and it ...

  17. Examination of a recommended algorithm for eliminating nonsystematic delay discounting response sets.

    Science.gov (United States)

    White, Thomas J; Redner, Ryan; Skelly, Joan M; Higgins, Stephen T

    2015-09-01

    To examine (1) whether use of a recommended algorithm (Johnson and Bickel, 2008) improves upon conventional statistical model fit (R²) for identifying nonsystematic response sets in delay discounting (DD) data, (2) whether removing such data meaningfully affects research outcomes, and (3) to identify participant characteristics associated with nonsystematic response sets. Discounting of hypothetical monetary rewards was assessed among 349 pregnant women (231 smokers and 118 recent quitters) via a computerized task comparing $1000 at seven future time points with smaller values available immediately. Nonsystematic response sets were identified using the algorithm and conventional statistical model fit (R²). The association between DD and quitting was analyzed with and without nonsystematic response sets to examine whether their inclusion or exclusion impacts this relationship. Logistic regression was used to examine whether participant sociodemographics were associated with nonsystematic response sets. The algorithm excluded fewer cases than the R² method (14% vs. 16%), and, unlike R², was not correlated with logk. The relationship between logk and the clinical outcome (spontaneous quitting) was unaffected by exclusion methods; however, other variables in the model were affected. Lower educational attainment and younger age were associated with nonsystematic response sets. The algorithm eliminated data that were inconsistent with the nature of discounting and retained data that were orderly. Neither method impacted the smoking/DD relationship in this data set. Nonsystematic response sets are more likely among younger and less educated participants, who may need extra training or support in DD studies. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
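
    As a rough sketch of the kind of rule involved; the exact thresholds used below (a rise of more than 20% of the larger reward between adjacent delays for criterion 1, and a first-to-last drop of less than 10% for criterion 2) are my reading of Johnson and Bickel (2008) and should be checked against the original before use.

        def is_nonsystematic(indiff_points, larger_reward=1000.0,
                             rise_tol=0.20, fall_tol=0.10):
            """Flag a delay-discounting response set as nonsystematic (assumed thresholds).

            indiff_points: indifference points ordered from the shortest to the
            longest delay. Criterion 1: some point exceeds the preceding one by
            more than rise_tol * larger_reward. Criterion 2: the last point is not
            below the first by at least fall_tol * larger_reward.
            """
            rises = any(b - a > rise_tol * larger_reward
                        for a, b in zip(indiff_points, indiff_points[1:]))
            no_fall = (indiff_points[0] - indiff_points[-1]) < fall_tol * larger_reward
            return rises or no_fall

        # usage: seven indifference points for increasing delays
        orderly = [950, 800, 600, 400, 250, 120, 60]
        erratic = [950, 300, 700, 400, 900, 120, 60]
        print(is_nonsystematic(orderly), is_nonsystematic(erratic))   # False True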

  18. Efficient Algorithms for Segmentation of Item-Set Time Series

    Science.gov (United States)

    Chundi, Parvathi; Rosenkrantz, Daniel J.

    We propose a special type of time series, which we call an item-set time series, to facilitate the temporal analysis of software version histories, email logs, stock market data, etc. In an item-set time series, each observed data value is a set of discrete items. We formalize the concept of an item-set time series and present efficient algorithms for segmenting a given item-set time series. Segmentation of a time series partitions the time series into a sequence of segments where each segment is constructed by combining consecutive time points of the time series. Each segment is associated with an item set that is computed from the item sets of the time points in that segment, using a function which we call a measure function. We then define a concept called the segment difference, which measures the difference between the item set of a segment and the item sets of the time points in that segment. The segment difference values are required to construct an optimal segmentation of the time series. We describe novel and efficient algorithms to compute segment difference values for each of the measure functions described in the paper. We outline a dynamic programming based scheme to construct an optimal segmentation of the given item-set time series. We use the item-set time series segmentation techniques to analyze the temporal content of three different data sets—Enron email, stock market data, and a synthetic data set. The experimental results show that an optimal segmentation of item-set time series data captures much more temporal content than a segmentation constructed based on the number of time points in each segment, without examining the item set data at the time points, and can be used to analyze different types of temporal data.
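
    A compact sketch of the dynamic-programming segmentation idea, assuming the intersection of the time points' item sets as the measure function and the total symmetric difference as the segment difference; the paper defines several measure functions and much faster ways of computing these differences than this brute-force version.

        def segment_cost(items, i, j):
            """Segment difference for time points i..j-1 (intersection as measure)."""
            seg_set = set.intersection(*items[i:j])
            return sum(len(seg_set ^ s) for s in items[i:j])

        def optimal_segmentation(items, k):
            """Split an item-set time series into k segments of minimal total cost."""
            n = len(items)
            INF = float('inf')
            best = [[INF] * (k + 1) for _ in range(n + 1)]   # best[j][m]: first j points, m segments
            back = [[0] * (k + 1) for _ in range(n + 1)]
            best[0][0] = 0.0
            for j in range(1, n + 1):
                for m in range(1, min(k, j) + 1):
                    for i in range(m - 1, j):                # last segment covers points i..j-1
                        c = best[i][m - 1] + segment_cost(items, i, j)
                        if c < best[j][m]:
                            best[j][m], back[j][m] = c, i
            cuts, j, m = [], n, k
            while m > 0:                                      # recover the segment boundaries
                i = back[j][m]
                cuts.append((i, j))
                j, m = i, m - 1
            return list(reversed(cuts)), best[n][k]

        # usage: e.g. sets of ticker symbols observed at six time points
        series = [{'A', 'B'}, {'A', 'B', 'C'}, {'A', 'B'},
                  {'X', 'Y'}, {'X'}, {'X', 'Y', 'Z'}]
        print(optimal_segmentation(series, 2))   # splits between the 3rd and 4th time points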

  19. Fuzzy algorithms to generate level controllers for nuclear power plant steam generators

    International Nuclear Information System (INIS)

    Moon, Byung Soo; Park, Jae Chang; Kim, Dong Hwa; Kim, Byung Koo

    1993-01-01

    In this paper, we present two sets of fuzzy algorithms for steam generator level control; one for high power operations where the flow error is available and the other for low power operations where the flow error is not available. These are converted to a PID type controller for the high power case and to a quadratic function form of a controller for the low power case. These controllers are implemented on the Compact Nuclear Simulator at the Korea Atomic Energy Research Institute and tested by a set of four simulation experiments for each. For both cases, the results show that the total variations of the level error and of the flow error are about 50% of those obtained with the PI controllers, with about one half of the control action. For the high power case, this is mainly due to the fact that a combination of two PD type controllers in the velocity algorithm form, rather than a combination of two PI type controllers in the position algorithm form, is used. For the low power case, the controller is essentially a PID type with a very small integral component, where the average values of the derivative component input and of the controller output are used. (Author)

  20. Generalization of some hidden subgroup algorithms for input sets of arbitrary size

    Science.gov (United States)

    Poslu, Damla; Say, A. C. Cem

    2006-05-01

    We consider the problem of generalizing some quantum algorithms so that they will work on input domains whose cardinalities are not necessarily powers of two. When analyzing the algorithms we assume that generating superpositions of arbitrary subsets of basis states whose cardinalities are not necessarily powers of two perfectly is possible. We have taken Ballhysa's model as a template and have extended it to Chi, Kim and Lee's generalizations of the Deutsch-Jozsa algorithm and to Simon's algorithm. With perfectly equal superpositions of input sets of arbitrary size, Chi, Kim and Lee's generalized Deutsch-Jozsa algorithms, both for evenly-distributed and evenly-balanced functions, worked with one-sided error property. For Simon's algorithm the success probability of the generalized algorithm is the same as that of the original for input sets of arbitrary cardinalities with equiprobable superpositions, since the property that the measured strings are all those which have dot product zero with the string we search, for the case where the function is 2-to-1, is not lost.

  1. Level-0 trigger algorithms for the ALICE PHOS detector

    CERN Document Server

    Wang, D; Wang, Y P; Huang, G M; Kral, J; Yin, Z B; Zhou, D C; Zhang, F; Ullaland, K; Muller, H; Liu, L J

    2011-01-01

    The PHOS level-0 trigger provides a minimum bias trigger for p-p collisions and information for a level-1 trigger in both p-p and Pb-Pb collisions. There are two level-0 trigger generating algorithms under consideration: the Direct Comparison algorithm and the Weighted Sum algorithm. In order to study the trigger algorithms via simulation, a simplified equivalent model is extracted from the trigger electronics to derive the waveform function of the Analog-or signal as input to the trigger algorithms. Simulations show that the Weighted Sum algorithm can achieve higher trigger efficiency and provide more precise single channel energy information than the Direct Comparison algorithm. An energy resolution of 9.75 MeV can be achieved with the Weighted Sum algorithm at a sampling rate of 40 Msps (mega samples per second) at 1 GeV. The timing performance at a sampling rate of 40 Msps with the Weighted Sum algorithm is better than that at a sampling rate of 20 Msps with both algorithms. The level-0 trigger can be delivered...
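
    A toy illustration of the two ideas named above, on a synthetic sampled pulse: the Direct Comparison algorithm fires when any single sample crosses a threshold, while the Weighted Sum algorithm compares a sliding weighted sum of consecutive samples. The pulse shape, noise level and weights are invented for illustration and are not the PHOS electronics model.

        import numpy as np

        def direct_comparison(samples, threshold):
            """Fire the trigger if any raw sample crosses the threshold."""
            return bool(np.any(samples > threshold))

        def weighted_sum(samples, weights, threshold):
            """Fire the trigger on a sliding weighted sum of consecutive samples."""
            energy = np.convolve(samples, weights[::-1], mode='valid')
            return bool(np.any(energy > threshold)), float(energy.max())

        # usage: a noisy shaped pulse sampled at 40 Msps (purely synthetic numbers)
        rng = np.random.default_rng(0)
        t = np.arange(64) * 25e-9                           # 25 ns sampling period
        pulse = 80.0 * (t / 1e-7) * np.exp(1.0 - t / 1e-7)  # toy shaper response, peak near 80 ADC counts
        samples = pulse + rng.normal(0.0, 3.0, t.size)      # add electronics noise
        weights = np.array([0.25, 0.5, 0.25])               # assumed smoothing weights
        print(direct_comparison(samples, threshold=75.0))
        print(weighted_sum(samples, weights, threshold=75.0))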

  2. The Sum-Product Algorithm for Degree-2 Check Nodes and Trapping Sets

    OpenAIRE

    Brevik, John O.; O'Sullivan, Michael E.

    2014-01-01

    The sum-product algorithm for decoding of binary codes is analyzed for bipartite graphs in which the check nodes all have degree 2. The algorithm simplifies dramatically and may be expressed using linear algebra. Exact results about the convergence of the algorithm are derived and applied to trapping sets.

  3. A highly efficient 3D level-set grain growth algorithm tailored for ccNUMA architecture

    Science.gov (United States)

    Mießen, C.; Velinov, N.; Gottstein, G.; Barrales-Mora, L. A.

    2017-12-01

    A highly efficient simulation model for 2D and 3D grain growth was developed based on the level-set method. The model introduces modern computational concepts to achieve excellent performance on parallel computer architectures. Strong scalability was measured on cache-coherent non-uniform memory access (ccNUMA) architectures. To achieve this, the proposed approach considers the application of local level-set functions at the grain level. Ideal and non-ideal grain growth was simulated in 3D with the objective to study the evolution of statistical representative volume elements in polycrystals. In addition, microstructure evolution in an anisotropic magnetic material affected by an external magnetic field was simulated.

  4. Aeon: Synthesizing Scheduling Algorithms from High-Level Models

    Science.gov (United States)

    Monette, Jean-Noël; Deville, Yves; van Hentenryck, Pascal

    This paper describes the aeon system whose aim is to synthesize scheduling algorithms from high-level models. Aeon, which is entirely written in Comet, receives as input a high-level model for a scheduling application, which is then analyzed to generate a dedicated scheduling algorithm exploiting the structure of the model. Aeon provides a variety of synthesizers for generating complete or heuristic algorithms. Moreover, synthesizers are compositional, making it possible to generate complex hybrid algorithms naturally. Preliminary experimental results indicate that this approach may be competitive with state-of-the-art search algorithms.

  5. Mapping topographic structure in white matter pathways with level set trees.

    Directory of Open Access Journals (Sweden)

    Brian P Kent

    Full Text Available Fiber tractography on diffusion imaging data offers rich potential for describing white matter pathways in the human brain, but characterizing the spatial organization in these large and complex data sets remains a challenge. We show that level set trees--which provide a concise representation of the hierarchical mode structure of probability density functions--offer a statistically-principled framework for visualizing and analyzing topography in fiber streamlines. Using diffusion spectrum imaging data collected on neurologically healthy controls (N = 30), we mapped white matter pathways from the cortex into the striatum using a deterministic tractography algorithm that estimates fiber bundles as dimensionless streamlines. Level set trees were used for interactive exploration of patterns in the endpoint distributions of the mapped fiber pathways and an efficient segmentation of the pathways that had empirical accuracy comparable to standard nonparametric clustering techniques. We show that level set trees can also be generalized to model pseudo-density functions in order to analyze a broader array of data types, including entire fiber streamlines. Finally, resampling methods show the reliability of the level set tree as a descriptive measure of topographic structure, illustrating its potential as a statistical descriptor in brain imaging analysis. These results highlight the broad applicability of level set trees for visualizing and analyzing high-dimensional data like fiber tractography output.

  6. A new level set model for cell image segmentation

    Science.gov (United States)

    Ma, Jing-Feng; Hou, Kai; Bao, Shang-Lian; Chen, Chun

    2011-02-01

    In this paper we first determine three phases of cell images: background, cytoplasm and nucleolus, according to the general physical characteristics of cell images, and then develop a variational model, based on these characteristics, to segment nucleolus and cytoplasm from their relatively complicated backgrounds. Meanwhile, information obtained by preprocessing the cell images with the OTSU algorithm is used to initialize the level set function in the model, which speeds up the segmentation and gives satisfactory results in cell image processing.
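
    A minimal sketch of the kind of initialization described above: an Otsu threshold computed from the image histogram supplies an initial foreground/background split, and the level set function is seeded as a signed map that is positive inside the thresholded region and negative outside. The sign convention and the constant c are assumptions for illustration, not the authors' exact model.

```python
import numpy as np

def otsu_threshold(image, nbins=256):
    """Classic Otsu threshold: maximize between-class variance."""
    hist, edges = np.histogram(image.ravel(), bins=nbins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    w = hist.astype(float) / hist.sum()
    omega = np.cumsum(w)                      # class-0 probability
    mu = np.cumsum(w * centers)               # class-0 cumulative mean
    mu_t = mu[-1]
    # Between-class variance for every candidate split point.
    sigma_b2 = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega) + 1e-12)
    return centers[np.argmax(sigma_b2)]

def initialize_level_set(image, c=2.0):
    """phi > 0 inside the Otsu foreground, phi < 0 outside."""
    t = otsu_threshold(image)
    return np.where(image > t, c, -c)

rng = np.random.default_rng(0)
img = rng.normal(0.2, 0.05, (64, 64))
img[20:40, 20:40] += 0.6                      # bright "nucleolus-like" blob
phi0 = initialize_level_set(img)
print(phi0[32, 32], phi0[5, 5])               # 2.0 inside, -2.0 outside
```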

  7. A level-set method for two-phase flows with soluble surfactant

    Science.gov (United States)

    Xu, Jian-Jun; Shi, Weidong; Lai, Ming-Chih

    2018-01-01

    A level-set method is presented for solving two-phase flows with soluble surfactant. The Navier-Stokes equations are solved along with the bulk surfactant and the interfacial surfactant equations. In particular, the convection-diffusion equation for the bulk surfactant on the irregular moving domain is solved by using a level-set based diffusive-domain method. A conservation law for the total surfactant mass is derived, and a re-scaling procedure for the surfactant concentrations is proposed to compensate for the surfactant mass loss due to numerical diffusion. The whole numerical algorithm is easy to implement. Several numerical simulations in 2D and 3D show the effects of surfactant solubility on drop dynamics under shear flow.
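
    A toy sketch of the mass-conservation idea described above (a re-scaling that compensates for numerical-diffusion loss), with all quantities discretized crudely; the exact discrete surface measure and weighting used in the paper are not reproduced here.

```python
import numpy as np

def total_surfactant_mass(c_bulk, c_surf, dV, dA):
    """Total mass = bulk integral + interfacial integral (crude quadrature)."""
    return np.sum(c_bulk) * dV + np.sum(c_surf) * dA

def rescale_concentrations(c_bulk, c_surf, dV, dA, target_mass):
    """Multiply both concentration fields by a common factor so that the
    discrete total mass matches the conserved target mass."""
    current = total_surfactant_mass(c_bulk, c_surf, dV, dA)
    factor = target_mass / current
    return c_bulk * factor, c_surf * factor

# Hypothetical fields after a step that lost ~2% of the surfactant mass.
m0 = total_surfactant_mass(np.ones((32, 32)), np.full(100, 0.5), dV=1.0, dA=0.1)
c_bulk = np.full((32, 32), 0.98)
c_surf = np.full(100, 0.49)
c_bulk, c_surf = rescale_concentrations(c_bulk, c_surf, 1.0, 0.1, m0)
print(np.isclose(total_surfactant_mass(c_bulk, c_surf, 1.0, 0.1), m0))  # True
```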

  8. Validating hierarchical verbal autopsy expert algorithms in a large data set with known causes of death.

    Science.gov (United States)

    Kalter, Henry D; Perin, Jamie; Black, Robert E

    2016-06-01

    Physician assessment historically has been the most common method of analyzing verbal autopsy (VA) data. Recently, the World Health Organization endorsed two automated methods, Tariff 2.0 and InterVA-4, which promise greater objectivity and lower cost. A disadvantage of the Tariff method is that it requires a training data set from a prior validation study, while InterVA relies on clinically specified conditional probabilities. We undertook to validate the hierarchical expert algorithm analysis of VA data, an automated, intuitive, deterministic method that does not require a training data set. Using Population Health Metrics Research Consortium study hospital source data, we compared the primary causes of 1629 neonatal and 1456 1-59 month-old child deaths from VA expert algorithms arranged in a hierarchy to their reference standard causes. The expert algorithms were held constant, while five prior and one new "compromise" neonatal hierarchy, and three former child hierarchies were tested. For each comparison, the reference standard data were resampled 1000 times within the range of cause-specific mortality fractions (CSMF) for one of three approximated community scenarios in the 2013 WHO global causes of death, plus one random mortality cause proportions scenario. We utilized CSMF accuracy to assess overall population-level validity, and the absolute difference between VA and reference standard CSMFs to examine particular causes. Chance-corrected concordance (CCC) and Cohen's kappa were used to evaluate individual-level cause assignment. Overall CSMF accuracy for the best-performing expert algorithm hierarchy was 0.80 (range 0.57-0.96) for neonatal deaths and 0.76 (0.50-0.97) for child deaths. Performance for particular causes of death varied, with fairly flat estimated CSMF over a range of reference values for several causes. Performance at the individual diagnosis level was also less favorable than that for overall CSMF (neonatal: best CCC = 0.23, range 0
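
    For readers unfamiliar with the metric, here is a sketch of the usual CSMF-accuracy computation as commonly defined in the verbal autopsy validation literature (the study's resampling and chance-corrected concordance machinery are not reproduced): accuracy is one minus the summed absolute CSMF error, normalized by the worst possible error given the true fractions.

```python
import numpy as np

def csmf_accuracy(true_csmf, predicted_csmf):
    """CSMF accuracy = 1 - sum|true - pred| / (2 * (1 - min(true)))."""
    true_csmf = np.asarray(true_csmf, dtype=float)
    predicted_csmf = np.asarray(predicted_csmf, dtype=float)
    worst_case = 2.0 * (1.0 - true_csmf.min())
    return 1.0 - np.abs(true_csmf - predicted_csmf).sum() / worst_case

# Hypothetical three-cause example.
true_fracs = [0.5, 0.3, 0.2]
va_fracs   = [0.4, 0.35, 0.25]
print(round(csmf_accuracy(true_fracs, va_fracs), 3))   # 0.875
```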

  9. A lossless one-pass sorting algorithm for symmetric three-dimensional gamma-ray data sets

    International Nuclear Information System (INIS)

    Brinkman, M.J.; Manatt, D.R.; Becker, J.A.; Henry, E.A.

    1992-01-01

    An algorithm for three-dimensional sorting and storing of the large data sets expected from the next generation of large gamma-ray detector arrays (i.e., EUROGAM, GAMMASPHERE) is presented. The algorithm allows the storage of realistic data sets on standard mass storage media. A discussion of an efficient implementation of the algorithm is provided with a proposed technique for exploiting its inherently parallel nature. (author). 5 refs., 2 figs

  10. A lossless one-pass sorting algorithm for symmetric three-dimensional gamma-ray data sets

    Energy Technology Data Exchange (ETDEWEB)

    Brinkman, M J; Manatt, D R; Becker, J A; Henry, E A [Lawrence Livermore National Lab., CA (United States)

    1992-08-01

    An algorithm for three-dimensional sorting and storing of the large data sets expected from the next generation of large gamma-ray detector arrays (i.e., EUROGAM, GAMMASPHERE) is presented. The algorithm allows the storage of realistic data sets on standard mass storage media. A discussion of an efficient implementation of the algorithm is provided with a proposed technique for exploiting its inherently parallel nature. (author). 5 refs., 2 figs.

  11. Surface-to-surface registration using level sets

    DEFF Research Database (Denmark)

    Hansen, Mads Fogtmann; Erbou, Søren G.; Vester-Christensen, Martin

    2007-01-01

    This paper presents a general approach for surface-to-surface registration (S2SR) with the Euclidean metric using signed distance maps. In addition, the method is symmetric such that the registration of a shape A to a shape B is identical to the registration of the shape B to the shape A. The S2SR problem can be approximated by the image registration (IR) problem of the signed distance maps (SDMs) of the surfaces confined to some narrow band. By shrinking the narrow bands around the zero level sets the solution to the IR problem converges towards the S2SR problem. It is our hypothesis that this approach is more robust and less prone to fall into local minima than ordinary surface-to-surface registration. The IR problem is solved using the inverse compositional algorithm. In this paper, a set of 40 pelvic bones of Duroc pigs are registered to each other w.r.t. the Euclidean transformation...

  12. Upper-Lower Bounds Candidate Sets Searching Algorithm for Bayesian Network Structure Learning

    Directory of Open Access Journals (Sweden)

    Guangyi Liu

    2014-01-01

    Full Text Available Bayesian network is an important theoretical model in the artificial intelligence field and also a powerful tool for processing uncertainty issues. Considering the slow convergence speed of current Bayesian network structure learning algorithms, a fast hybrid learning method is proposed in this paper. We start with further analysis of the information provided by low-order conditional independence testing, and then two methods are given for constructing the graph model of the network, which are theoretically proved to be upper and lower bounds of the structure space of the target network, so that candidate sets are obtained as a result; after that, a search-and-score algorithm is applied to the candidate sets to find the final structure of the network. Simulation results show that the algorithm proposed in this paper is more efficient than similar algorithms with the same learning precision.

  13. Fuzzy Sets-based Control Rules for Terminating Algorithms

    Directory of Open Access Journals (Sweden)

    Jose L. VERDEGAY

    2002-01-01

    Full Text Available In this paper some problems arising at the interface between two different areas, Decision Support Systems and Fuzzy Sets and Systems, are considered. The Model-Base Management System of a Decision Support System which involves some fuzziness is considered, and in that context the questions of managing the fuzziness in some optimisation models, and then of using fuzzy rules for terminating conventional algorithms, are presented, discussed and analyzed. Finally, for the concrete case of the Travelling Salesman Problem, and as an illustration of how the fuzzy rules are determined, managed and used, a new algorithm that is easy to implement in the Model-Base Management System of any oriented Decision Support System is shown.

  14. A variational approach to multi-phase motion of gas, liquid and solid based on the level set method

    Science.gov (United States)

    Yokoi, Kensuke

    2009-07-01

    We propose a simple and robust numerical algorithm to deal with multi-phase motion of gas, liquid and solid based on the level set method [S. Osher, J.A. Sethian, Front propagating with curvature-dependent speed: Algorithms based on Hamilton-Jacobi formulation, J. Comput. Phys. 79 (1988) 12; M. Sussman, P. Smereka, S. Osher, A level set approach for capturing solution to incompressible two-phase flow, J. Comput. Phys. 114 (1994) 146; J.A. Sethian, Level Set Methods and Fast Marching Methods, Cambridge University Press, 1999; S. Osher, R. Fedkiw, Level Set Methods and Dynamics Implicit Surface, Applied Mathematical Sciences, vol. 153, Springer, 2003]. In Eulerian framework, to simulate interaction between a moving solid object and an interfacial flow, we need to define at least two functions (level set functions) to distinguish three materials. In such simulations, in general two functions overlap and/or disagree due to numerical errors such as numerical diffusion. In this paper, we resolved the problem using the idea of the active contour model [M. Kass, A. Witkin, D. Terzopoulos, Snakes: active contour models, International Journal of Computer Vision 1 (1988) 321; V. Caselles, R. Kimmel, G. Sapiro, Geodesic active contours, International Journal of Computer Vision 22 (1997) 61; G. Sapiro, Geometric Partial Differential Equations and Image Analysis, Cambridge University Press, 2001; R. Kimmel, Numerical Geometry of Images: Theory, Algorithms, and Applications, Springer-Verlag, 2003] introduced in the field of image processing.

  15. Classification of Normal and Apoptotic Cells from Fluorescence Microscopy Images Using Generalized Polynomial Chaos and Level Set Function.

    Science.gov (United States)

    Du, Yuncheng; Budman, Hector M; Duever, Thomas A

    2016-06-01

    Accurate automated quantitative analysis of living cells based on fluorescence microscopy images can be very useful for fast evaluation of experimental outcomes and cell culture protocols. In this work, an algorithm is developed for fast differentiation of normal and apoptotic viable Chinese hamster ovary (CHO) cells. For effective segmentation of cell images, a stochastic segmentation algorithm is developed by combining a generalized polynomial chaos expansion with a level set function-based segmentation algorithm. This approach provides a probabilistic description of the segmented cellular regions along the boundary, from which it is possible to calculate morphological changes related to apoptosis, i.e., the curvature and length of a cell's boundary. These features are then used as inputs to a support vector machine (SVM) classifier that is trained to distinguish between normal and apoptotic viable states of CHO cell images. The use of morphological features obtained from the stochastic level set segmentation of cell images in combination with the trained SVM classifier is more efficient in terms of differentiation accuracy as compared with the original deterministic level set method.

  16. General Algorithm (High level)

    Indian Academy of Sciences (India)

    General Algorithm (High level). Iteratively. Use Tightness Property to remove points of P1,..,Pi. Use random sampling to get a Random Sample (of enough points) from the next largest cluster, Pi+1. Use the Random Sampling Procedure to approximate ci+1 using the ...

  17. Wavelet compression of multichannel ECG data by enhanced set partitioning in hierarchical trees algorithm.

    Science.gov (United States)

    Sharifahmadian, Ershad

    2006-01-01

    The set partitioning in hierarchical trees (SPIHT) algorithm is a very effective and computationally simple technique for image and signal compression. Here the author modifies the algorithm to provide even better performance than the SPIHT algorithm. The enhanced set partitioning in hierarchical trees (ESPIHT) algorithm is faster than the SPIHT algorithm. In addition, the proposed algorithm reduces the number of bits in the bit stream which is stored or transmitted. It is applied to the compression of multichannel ECG data. A specific procedure based on the modified algorithm is also presented for more efficient compression of multichannel ECG data. The method was employed on selected records from the MIT-BIH arrhythmia database. According to the experiments, the proposed method attained significant results regarding compression of multichannel ECG data. Furthermore, in order to compress one signal which is stored for a long time, the proposed multichannel compression method can be utilized efficiently.

  18. A new level set model for cell image segmentation

    International Nuclear Information System (INIS)

    Ma Jing-Feng; Chen Chun; Hou Kai; Bao Shang-Lian

    2011-01-01

    In this paper we first determine three phases of cell images: background, cytoplasm and nucleolus, according to the general physical characteristics of cell images, and then develop a variational model, based on these characteristics, to segment nucleolus and cytoplasm from their relatively complicated backgrounds. Meanwhile, information obtained by preprocessing the cell images with the OTSU algorithm is used to initialize the level set function in the model, which speeds up the segmentation and gives satisfactory results in cell image processing. (cross-disciplinary physics and related areas of science and technology)

  19. A Negative Selection Algorithm Based on Hierarchical Clustering of Self Set and its Application in Anomaly Detection

    Directory of Open Access Journals (Sweden)

    Wen Chen

    2011-08-01

    Full Text Available A negative selection algorithm based on the hierarchical clustering of the self set (HC-RNSA) is introduced in this paper. Several strategies are applied to improve the algorithm performance. First, the self data set is replaced by the self cluster centers for comparison with the detector candidates at each cluster level. As the number of self clusters is much smaller than the self set size, the detector generation efficiency is improved. Second, during the detector generation process, the detector candidates are restricted to the lower coverage space to reduce detector redundancy. The article analyzes the problem that the distances between antigens converge to a constant value in high dimensional space; accordingly, the Principal Component Analysis (PCA) method is used to reduce the data dimension, and a fractional distance function is employed to enhance the distinctiveness between self and non-self antigens. The detector generation procedure is terminated when the expected non-self coverage is reached. The theoretical analysis and experimental results demonstrate that the detection rate of HC-RNSA is higher than that of traditional negative selection algorithms while the false alarm rate and time cost are reduced.
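
    A short sketch of the fractional (p < 1) Minkowski-style distance mentioned above, which is often used to keep pairwise distances from concentrating in high dimensions; the exponent value and the way it is used inside HC-RNSA are assumptions here.

```python
import numpy as np

def fractional_distance(x, y, p=0.5):
    """d_p(x, y) = (sum_i |x_i - y_i|^p)^(1/p) with 0 < p < 1."""
    diff = np.abs(np.asarray(x, dtype=float) - np.asarray(y, dtype=float))
    return np.power(np.sum(np.power(diff, p)), 1.0 / p)

a = np.zeros(50)
b = np.ones(50) * 0.1
print(fractional_distance(a, b, p=0.5))   # spreads values out more strongly
print(np.linalg.norm(a - b))              # Euclidean distance for comparison
```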

  20. A hospital-level cost-effectiveness analysis model for toxigenic Clostridium difficile detection algorithms.

    Science.gov (United States)

    Verhoye, E; Vandecandelaere, P; De Beenhouwer, H; Coppens, G; Cartuyvels, R; Van den Abeele, A; Frans, J; Laffut, W

    2015-10-01

    Despite thorough analyses of the analytical performance of Clostridium difficile tests and test algorithms, the financial impact at hospital level has not been well described. Such a model should take institution-specific variables into account, such as incidence, request behaviour and infection control policies. The aim was to calculate the total hospital costs of different test algorithms, accounting for days on which infected patients with toxigenic strains were not isolated and therefore posed an infectious risk for new/secondary nosocomial infections. A mathematical algorithm was developed to gather the above parameters using data from seven Flemish hospital laboratories (Bilulu Microbiology Study Group) (number of tests, local prevalence and hospital hygiene measures). Measures of sensitivity and specificity for the evaluated tests were taken from the literature. List prices and costs of assays were provided by the manufacturer or the institutions. The calculated cost included reagent costs, personnel costs and the financial burden following due and undue isolations and antibiotic therapies. Five different test algorithms were compared. A dynamic calculation model was constructed to evaluate the cost:benefit ratio of each algorithm for a set of institution- and time-dependent input variables (prevalence, cost fluctuations and test performances), making it possible to choose the most advantageous algorithm for each setting. A two-step test algorithm with concomitant glutamate dehydrogenase and toxin testing, followed by a rapid molecular assay, was found to be the most cost-effective algorithm. This enabled resolution of almost all cases on the day of arrival, minimizing the number of unnecessary or missing isolations. Copyright © 2015 The Healthcare Infection Society. Published by Elsevier Ltd. All rights reserved.
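
    To make the structure of such a model concrete, here is a deliberately simplified sketch with made-up numbers (every price, rate and probability below is a placeholder, not a value from the study): the expected cost per tested patient combines reagent and personnel costs with the downstream cost of false positives (undue isolation/therapy) and false negatives (missed isolation and potential secondary infections).

```python
def expected_cost_per_patient(prevalence, sensitivity, specificity,
                              reagent_cost, personnel_cost,
                              undue_isolation_cost, missed_isolation_cost):
    """Expected cost of one test algorithm per tested patient.
    All monetary inputs are hypothetical placeholders."""
    p_false_pos = (1.0 - specificity) * (1.0 - prevalence)
    p_false_neg = (1.0 - sensitivity) * prevalence
    return (reagent_cost + personnel_cost
            + p_false_pos * undue_isolation_cost
            + p_false_neg * missed_isolation_cost)

# Compare two hypothetical algorithms at 5% prevalence.
two_step = expected_cost_per_patient(0.05, 0.95, 0.99, 18.0, 6.0, 250.0, 900.0)
single_naat = expected_cost_per_patient(0.05, 0.97, 0.96, 35.0, 4.0, 250.0, 900.0)
print(round(two_step, 2), round(single_naat, 2))
```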

  1. A linear-time algorithm for Euclidean feature transform sets

    NARCIS (Netherlands)

    Hesselink, Wim H.

    2007-01-01

    The Euclidean distance transform of a binary image is the function that assigns to every pixel the Euclidean distance to the background. The Euclidean feature transform is the function that assigns to every pixel the set of background pixels with this distance. We present an algorithm to compute the

  2. Quality-aware features-based noise level estimator for block matching and three-dimensional filtering algorithm

    Science.gov (United States)

    Xu, Shaoping; Hu, Lingyan; Yang, Xiaohui

    2016-01-01

    The performance of conventional denoising algorithms is usually controlled by one or several parameters whose optimal settings depend on the contents of the processed images and the characteristics of the noises. Among these parameters, noise level is a fundamental parameter that is always assumed to be known by most of the existing denoising algorithms (so-called nonblind denoising algorithms), which largely limits the applicability of these nonblind denoising algorithms in many applications. Moreover, these nonblind algorithms do not always achieve the best denoised images in visual quality even when fed with the actual noise level parameter. To address these shortcomings, in this paper we propose a new quality-aware features-based noise level estimator (NLE), which consists of quality-aware features extraction and optimal noise level parameter prediction. First, considering that image local contrast features convey important structural information that is closely related to image perceptual quality, we utilize the marginal statistics of two local contrast operators, i.e., the gradient magnitude and the Laplacian of Gaussian (LOG), to extract quality-aware features. The proposed quality-aware features have very low computational complexity, making them well suited for time-constrained applications. Then we propose a learning-based framework where the noise level parameter is estimated based on the quality-aware features. Based on the proposed NLE, we develop a blind block matching and three-dimensional filtering (BBM3D) denoising algorithm which is capable of effectively removing additive white Gaussian noise, even coupled with impulse noise. The noise level parameter of the BBM3D algorithm is automatically tuned according to the quality-aware features, guaranteeing the best performance. As such, the classical block matching and three-dimensional algorithm can be transformed into a blind one in an unsupervised manner. Experimental results demonstrate that the
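
    A rough sketch of the kind of quality-aware features described above: gradient-magnitude and Laplacian-of-Gaussian response maps are computed, and a few marginal statistics of each are collected as a low-cost feature vector that a learned regressor could map to a noise-level estimate. The particular statistics, the sigma, and the omitted regression step are assumptions for illustration only.

```python
import numpy as np
from scipy import ndimage

def quality_aware_features(image, sigma=0.5):
    """Marginal statistics of gradient magnitude (GM) and LoG responses."""
    img = np.asarray(image, dtype=float)
    gx = ndimage.sobel(img, axis=0)
    gy = ndimage.sobel(img, axis=1)
    gm = np.hypot(gx, gy)
    log = ndimage.gaussian_laplace(img, sigma=sigma)
    feats = []
    for m in (gm, log):
        feats.extend([m.mean(), m.std(), np.mean(np.abs(m)),
                      np.percentile(np.abs(m), 90)])
    return np.array(feats)

rng = np.random.default_rng(1)
clean = np.tile(np.linspace(0, 1, 64), (64, 1))
noisy = clean + rng.normal(0, 0.1, clean.shape)
print(quality_aware_features(clean))
print(quality_aware_features(noisy))    # the statistics shift with the noise level
```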

  3. Variational Level Set Method for Two-Stage Image Segmentation Based on Morphological Gradients

    Directory of Open Access Journals (Sweden)

    Zemin Ren

    2014-01-01

    Full Text Available We use the variational level set method and transition region extraction techniques to achieve the image segmentation task. The proposed scheme has two steps. We first develop a novel algorithm to extract the transition region based on the morphological gradient. After this, we integrate the transition region into a variational level set framework and develop a novel geometric active contour model, which includes an external energy based on the transition region and a fractional order edge indicator function. The external energy is used to drive the zero level set toward the desired image features, such as object boundaries. Due to this external energy, the proposed model allows for more flexible initialization. The fractional order edge indicator function is incorporated into the length regularization term to diminish the influence of noise. Moreover, an internal energy is added into the proposed model to penalize the deviation of the level set function from a signed distance function. The resulting evolution of the level set function is the gradient flow that minimizes the overall energy functional. The proposed model has been applied to both synthetic and real images with promising results.
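
    A brief sketch of the morphological gradient used in the first stage (the transition-region extraction criterion and the fractional-order edge indicator of the paper are not reproduced): the gradient is the pixel-wise difference between a grey-scale dilation and erosion, and thresholding it gives a crude transition region. The structuring-element size and quantile below are placeholders.

```python
import numpy as np
from scipy import ndimage

def morphological_gradient(image, size=3):
    """Grey-scale dilation minus erosion with a size x size structuring element."""
    img = np.asarray(image, dtype=float)
    return (ndimage.grey_dilation(img, size=(size, size))
            - ndimage.grey_erosion(img, size=(size, size)))

def transition_region(image, size=3, quantile=0.9):
    """Crude transition region: pixels whose morphological gradient is large."""
    g = morphological_gradient(image, size)
    return g >= np.quantile(g, quantile)

img = np.zeros((64, 64))
img[16:48, 16:48] = 1.0                      # a square object
mask = transition_region(img)
print(mask.sum(), "transition pixels around the square's boundary")
```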

  4. Application of Multiple-Population Genetic Algorithm in Optimizing the Train-Set Circulation Plan Problem

    Directory of Open Access Journals (Sweden)

    Yu Zhou

    2017-01-01

    Full Text Available The train-set circulation plan problem (TCPP) belongs to the rolling stock scheduling (RSS) problem and is similar to the aircraft routing problem (ARP) in airline operations and the vehicle routing problem (VRP) in the logistics field. However, TCPP involves additional complexity due to the maintenance constraint of train-sets: train-sets must conduct maintenance tasks after running for a certain time and distance. The TCPP is nondeterministic polynomial hard (NP-hard). There is no available algorithm that can obtain the optimal global solution, and many factors such as the utilization mode and the maintenance mode impact the solution of the TCPP. This paper proposes a train-set circulation optimization model to minimize the total connection time and maintenance costs and describes the design of an efficient multiple-population genetic algorithm (MPGA) to solve this model. A realistic high-speed railway (HSR) case is selected to verify our model and algorithm, and, then, a comparison of different algorithms is carried out. Furthermore, a new maintenance mode is proposed, and related implementation requirements are discussed.

  5. A deterministic algorithm for fitting a step function to a weighted point-set

    KAUST Repository

    Fournier, Hervé

    2013-02-01

    Given a set of n points in the plane, each point having a positive weight, and an integer k>0, we present an optimal O(nlogn)-time deterministic algorithm to compute a step function with k steps that minimizes the maximum weighted vertical distance to the input points. It matches the expected time bound of the best known randomized algorithm for this problem. Our approach relies on Cole's improved parametric searching technique. As a direct application, our result yields the first O(nlogn)-time algorithm for computing a k-center of a set of n weighted points on the real line. © 2012 Elsevier B.V.
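
    The optimization behind this result can be pictured with a simple decision procedure: for a trial error eps, each point i constrains the step value covering it to the interval [y_i - eps/w_i, y_i + eps/w_i], and a greedy left-to-right scan that starts a new step whenever the running intersection of intervals becomes empty uses the fewest possible steps. The sketch below pairs that check with a plain numeric binary search on eps; the paper's O(n log n) bound instead comes from Cole's parametric search, which is not reproduced here.

```python
def steps_needed(points, eps):
    """Fewest steps so that every weighted vertical distance is <= eps.
    points: list of (x, y, w) sorted by x, with w > 0."""
    steps, lo, hi = 0, float("-inf"), float("inf")
    for _, y, w in points:
        lo_i, hi_i = y - eps / w, y + eps / w
        new_lo, new_hi = max(lo, lo_i), min(hi, hi_i)
        if new_lo > new_hi:                 # current step cannot cover this point
            steps += 1
            new_lo, new_hi = lo_i, hi_i
        lo, hi = new_lo, new_hi
    return steps + 1

def fit_step_function(points, k, iters=60):
    """Binary search on eps for the smallest max weighted distance with <= k steps."""
    points = sorted(points)
    lo, hi = 0.0, max(p[2] * abs(p[1]) for p in points) + 1.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if steps_needed(points, mid) <= k:
            hi = mid
        else:
            lo = mid
    return hi

pts = [(1, 0.0, 1.0), (2, 1.0, 2.0), (3, 0.9, 1.0), (4, 5.0, 1.0)]
print(round(fit_step_function(pts, k=2), 3))   # about 0.667 for this toy input
```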

  6. Chaotic logic gate: A new approach in set and design by genetic algorithm

    International Nuclear Information System (INIS)

    Beyki, Mahmood; Yaghoobi, Mahdi

    2015-01-01

    How to reconfigure a logic gate is an attractive subject for different applications. Chaotic systems can yield a wide variety of patterns, and here we use this feature to produce a logic gate. This feature forms the basis for designing a dynamical computing device that can be rapidly reconfigured to become any wanted logical operator. A logic gate that can be reconfigured into any logical operator when placed in its chaotic state is called a chaotic logic gate. The reconfiguration is realized by setting the parameter values of the chaotic logic gate. In this paper we present mechanisms for producing a logic gate based on the logistic map in its chaotic state, and a genetic algorithm is used to set the parameter values. We use three well-known selection methods from genetic algorithms: tournament selection, roulette wheel selection and random selection. The results show that tournament selection is the best method for setting the parameter values. Furthermore, the genetic algorithm is a powerful tool for setting the parameter values of the chaotic logic gate.

  7. Pattern-set generation algorithm for the one-dimensional multiple stock sizes cutting stock problem

    Science.gov (United States)

    Cui, Yaodong; Cui, Yi-Ping; Zhao, Zhigang

    2015-09-01

    A pattern-set generation algorithm (PSG) for the one-dimensional multiple stock sizes cutting stock problem (1DMSSCSP) is presented. The solution process contains two stages. In the first stage, the PSG solves the residual problems repeatedly to generate the patterns in the pattern set, where each residual problem is solved by the column-generation approach, and each pattern is generated by solving a single large object placement problem. In the second stage, the integer linear programming model of the 1DMSSCSP is solved using a commercial solver, where only the patterns in the pattern set are considered. The computational results of benchmark instances indicate that the PSG outperforms existing heuristic algorithms and rivals the exact algorithm in solution quality.

  8. A Variational Level Set Model Combined with FCMS for Image Clustering Segmentation

    Directory of Open Access Journals (Sweden)

    Liming Tang

    2014-01-01

    Full Text Available The fuzzy C means clustering algorithm with spatial constraint (FCMS) is effective for image segmentation. However, it lacks essential smoothing constraints on the cluster boundaries and enough robustness to noise. Samson et al. proposed a variational level set model for image clustering segmentation, which can produce smooth cluster boundaries and closed cluster regions due to the use of the level set scheme. However, it is very sensitive to noise since it is actually a hard C means clustering model. In this paper, based on Samson's work, we propose a new variational level set model combined with FCMS for image clustering segmentation. Compared with FCMS clustering, the proposed model can obtain smooth cluster boundaries and closed cluster regions due to the use of the level set scheme. In addition, a block-based energy is incorporated into the energy functional, which enables the proposed model to be more robust to noise than FCMS clustering and Samson's model. Some experiments on synthetic and real images are performed to assess the performance of the proposed model. Compared with some classical image segmentation models, the proposed model has a better performance for images contaminated by different noise levels.

  9. A robust algorithm to solve the signal setting problem considering different traffic assignment approaches

    Directory of Open Access Journals (Sweden)

    Adacher Ludovica

    2017-12-01

    Full Text Available In this paper we extend a stochastic discrete optimization algorithm so as to tackle the signal setting problem. Signalized junctions represent critical points of an urban transportation network, and the efficiency of their traffic signal setting influences the overall network performance. Since road congestion usually takes place at or close to junction areas, an improvement in signal settings contributes to improving travel times, drivers' comfort, fuel consumption efficiency, pollution and safety. In a traffic network, the signal control strategy affects the travel time on the roads and influences drivers' route choice behavior. The paper presents an algorithm for signal setting optimization of signalized junctions in a congested road network. The objective function used in this work is a weighted sum of delays caused by the signalized intersections. We propose an iterative procedure to solve the problem by alternately updating signal settings based on fixed flows and traffic assignment based on fixed signal settings. To show the robustness of our method, we consider two different assignment methods: one based on user equilibrium assignment, well established in the literature as well as in practice, and the other based on a platoon simulation model with vehicular flow propagation and spill-back. Our optimization algorithm is also compared with others well known in the literature for this problem. The surrogate method (SM), particle swarm optimization (PSO) and the genetic algorithm (GA) are compared for a combined problem of global optimization of signal settings and traffic assignment (GOSSTA). Numerical experiments on a real test network are reported.

  10. A level set method for cupping artifact correction in cone-beam CT

    International Nuclear Information System (INIS)

    Xie, Shipeng; Li, Haibo; Ge, Qi; Li, Chunming

    2015-01-01

    Purpose: To reduce cupping artifacts and improve the contrast-to-noise ratio in cone-beam computed tomography (CBCT). Methods: A level set method is proposed to reduce cupping artifacts in the reconstructed image of CBCT. The authors derive a local intensity clustering property of the CBCT image and define a local clustering criterion function of the image intensities in a neighborhood of each point. This criterion function defines an energy in terms of the level set functions, which represent a segmentation result and the cupping artifacts. The cupping artifacts are estimated as a result of minimizing this energy. Results: The cupping artifacts in CBCT are reduced by an average of 90%. The results indicate that the level set-based algorithm is practical and effective for reducing the cupping artifacts and preserving the quality of the reconstructed image. Conclusions: The proposed method focuses on the reconstructed image without requiring any additional physical equipment, is easily implemented, and provides cupping correction through a single-scan acquisition. The experimental results demonstrate that the proposed method successfully reduces the cupping artifacts

  11. GPU accelerated edge-region based level set evolution constrained by 2D gray-scale histogram.

    Science.gov (United States)

    Balla-Arabé, Souleymane; Gao, Xinbo; Wang, Bin

    2013-07-01

    Due to its intrinsic nature, which allows it to easily handle complex shapes and topological changes, the level set method (LSM) has been widely used in image segmentation. Nevertheless, LSM is computationally expensive, which limits its applications in real-time systems. For this purpose, we propose a new level set algorithm, which uses edge, region, and 2D histogram information simultaneously in order to efficiently segment objects of interest in a given scene. The computational complexity of the proposed LSM is greatly reduced by using the highly parallelizable lattice Boltzmann method (LBM) with a body force to solve the level set equation (LSE). The body force is the link with the image data and is defined from the proposed LSE. The proposed LSM is then implemented using NVIDIA graphics processing units to fully take advantage of the LBM's local nature. The new algorithm is effective, robust against noise, independent of the initial contour, fast, and highly parallelizable. The edge and region information enable the detection of objects with and without edges, and the 2D histogram information ensures the effectiveness of the method in a noisy environment. Experimental results on synthetic and real images demonstrate subjectively and objectively the performance of the proposed method.

  12. Application of Fuzzy Sets for the Improvement of Routing Optimization Heuristic Algorithms

    Directory of Open Access Journals (Sweden)

    Mattas Konstantinos

    2016-12-01

    Full Text Available The determination of the optimal circular path has become widely known for its difficulty in producing a solution and for the numerous applications in the scope of organization and management of passenger and freight transport. It is a mathematical combinatorial optimization problem for which several deterministic and heuristic models have been developed in recent years, applicable to route organization issues, passenger and freight transport, storage and distribution of goods, waste collection, supply and control of terminals, as well as human resource management. The scope of the present paper is the development, with the use of fuzzy sets, of a practical, comprehensible and speedy heuristic algorithm for improving the ability of the classical deterministic algorithms to identify the optimal, symmetrical or non-symmetrical, circular route. The proposed fuzzy heuristic algorithm is compared to the corresponding deterministic ones with regard to the deviation of the proposed solution from the best known solution and the complexity of the calculations needed to obtain this solution. It is shown that the use of fuzzy sets reduced by up to 35% the deviation of the solution identified by the classical deterministic algorithms from the best known solution.

  13. An algorithmic decomposition of claw-free graphs leading to an O(n^3) algorithm for the weighted stable set problem

    OpenAIRE

    Faenza, Y.; Oriolo, G.; Stauffer, G.

    2011-01-01

    We propose an algorithm for solving the maximum weighted stable set problem on claw-free graphs that runs in O(n^3)-time, drastically improving the previous best known complexity bound. This algorithm is based on a novel decomposition theorem for claw-free graphs, which is also introduced in the present paper. Despite being weaker than the well-known structure result for claw-free graphs given by Chudnovsky and Seymour, our decomposition theorem is, on the other hand, algorithmic, i.e. it is ...

  14. An Adaptive Algorithm for Finding Frequent Sets in Landmark Windows

    DEFF Research Database (Denmark)

    Dang, Xuan-Hong; Ong, Kok-Leong; Lee, Vincent

    2012-01-01

    We consider a CPU constrained environment for finding approximation of frequent sets in data streams using the landmark window. Our algorithm can detect overload situations, i.e., breaching the CPU capacity, and sheds data in the stream to “keep up”. This is done within a controlled error threshold...

  15. The algorithmic level is the bridge between computation and brain.

    Science.gov (United States)

    Love, Bradley C

    2015-04-01

    Every scientist chooses a preferred level of analysis and this choice shapes the research program, even determining what counts as evidence. This contribution revisits Marr's (1982) three levels of analysis (implementation, algorithmic, and computational) and evaluates the prospect of making progress at each individual level. After reviewing limitations of theorizing within a level, two strategies for integration across levels are considered. One is top-down in that it attempts to build a bridge from the computational to algorithmic level. Limitations of this approach include insufficient theoretical constraint at the computation level to provide a foundation for integration, and that people are suboptimal for reasons other than capacity limitations. Instead, an inside-out approach is forwarded in which all three levels of analysis are integrated via the algorithmic level. This approach maximally leverages mutual data constraints at all levels. For example, algorithmic models can be used to interpret brain imaging data, and brain imaging data can be used to select among competing models. Examples of this approach to integration are provided. This merging of levels raises questions about the relevance of Marr's tripartite view. Copyright © 2015 Cognitive Science Society, Inc.

  16. Fast parallel DNA-based algorithms for molecular computation: the set-partition problem.

    Science.gov (United States)

    Chang, Weng-Long

    2007-12-01

    This paper demonstrates that basic biological operations can be used to solve the set-partition problem. In order to achieve this, we propose three DNA-based algorithms, a signed parallel adder, a signed parallel subtractor and a signed parallel comparator, that formally verify our designed molecular solutions for solving the set-partition problem.

  17. Measurement of thermally ablated lesions in sonoelastographic images using level set methods

    Science.gov (United States)

    Castaneda, Benjamin; Tamez-Pena, Jose Gerardo; Zhang, Man; Hoyt, Kenneth; Bylund, Kevin; Christensen, Jared; Saad, Wael; Strang, John; Rubens, Deborah J.; Parker, Kevin J.

    2008-03-01

    The capability of sonoelastography to detect lesions based on elasticity contrast can be applied to monitor the creation of thermally ablated lesions. Currently, segmentation of lesions depicted in sonoelastographic images is performed manually, which can be a time consuming process and prone to significant intra- and inter-observer variability. This work presents a semi-automated segmentation algorithm for sonoelastographic data. The user starts by planting a seed in the perceived center of the lesion. Fast marching methods use this information to create an initial estimate of the lesion. Subsequently, level set methods refine its final shape by attaching the segmented contour to edges in the image while maintaining smoothness. The algorithm is applied to in vivo sonoelastographic images from twenty five thermally ablated lesions created in porcine livers. The estimated area is compared to results from manual segmentation and gross pathology images. Results show that the algorithm outperforms manual segmentation in accuracy, inter- and intra-observer variability. The processing time per image is significantly reduced.

  18. The gradient boosting algorithm and random boosting for genome-assisted evaluation in large data sets.

    Science.gov (United States)

    González-Recio, O; Jiménez-Montero, J A; Alenda, R

    2013-01-01

    In the next few years, with the advent of high-density single nucleotide polymorphism (SNP) arrays and genome sequencing, genomic evaluation methods will need to deal with a large number of genetic variants and an increasing sample size. The boosting algorithm is a machine-learning technique that may alleviate the drawbacks of dealing with such large data sets. This algorithm combines different predictors in a sequential manner with some shrinkage on them; each predictor is applied consecutively to the residuals from the committee formed by the previous ones to form a final prediction based on a subset of covariates. Here, a detailed description is provided and examples using a toy data set are included. A modification of the algorithm called "random boosting" was proposed to increase predictive ability and decrease computation time of genome-assisted evaluation in large data sets. Random boosting uses a random selection of markers to add a subsequent weak learner to the predictive model. These modifications were applied to a real data set composed of 1,797 bulls genotyped for 39,714 SNP. Deregressed proofs of 4 yield traits and 1 type trait from January 2009 routine evaluations were used as dependent variables. A 2-fold cross-validation scenario was implemented. Sires born before 2005 were used as a training sample (1,576 and 1,562 for production and type traits, respectively), whereas younger sires were used as a testing sample to evaluate predictive ability of the algorithm on yet-to-be-observed phenotypes. Comparison with the original algorithm was provided. The predictive ability of the algorithm was measured as Pearson correlations between observed and predicted responses. Further, estimated bias was computed as the average difference between observed and predicted phenotypes. The results showed that the modification of the original boosting algorithm could be run in 1% of the time used with the original algorithm and with negligible differences in accuracy
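
    A compact sketch of the "random boosting" idea as described above (each boosting iteration fits its weak learner using only a random subset of the markers); the weak learner here is a trivial single-SNP regression and all tuning constants are placeholders, so this is a schematic of the procedure rather than the authors' implementation.

```python
import numpy as np

def random_boosting(X, y, n_iter=200, shrinkage=0.1, subset_frac=0.05, seed=0):
    """L2 boosting where each iteration sees only a random subset of columns."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    pred = np.zeros(n)
    model = []                                   # list of (snp_index, coefficient)
    m = max(1, int(subset_frac * p))
    for _ in range(n_iter):
        residual = y - pred
        cols = rng.choice(p, size=m, replace=False)
        # Pick the subset column that best fits the current residual.
        scores = [np.dot(X[:, j], residual) ** 2 / np.dot(X[:, j], X[:, j])
                  for j in cols]
        j = cols[int(np.argmax(scores))]
        beta = np.dot(X[:, j], residual) / np.dot(X[:, j], X[:, j])
        model.append((j, shrinkage * beta))
        pred += shrinkage * beta * X[:, j]
    return model, pred

rng = np.random.default_rng(1)
X = rng.integers(0, 3, size=(300, 1000)).astype(float)   # toy SNP matrix (0/1/2)
true_beta = np.zeros(1000); true_beta[[10, 50, 200]] = [1.0, -0.8, 0.5]
y = X @ true_beta + rng.normal(0, 0.5, 300)
model, pred = random_boosting(X, y)
print(np.corrcoef(pred, y)[0, 1])                         # in-sample correlation
```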

  19. PENGGUNAAN HIBRIDISASI GENETICS ALGORITHMS DAN FUZZY SETS UNTUK MEMPRODUKSI PAKET SOAL

    Directory of Open Access Journals (Sweden)

    Rolly Intan

    2005-01-01

    Full Text Available At least two important factors, discrimination and difficulty, should be considered in determining whether a problem should be included in a packet of problems produced for a university entrance examination. The higher the discrimination degree of a problem, the better the problem is for selecting participants based on their intellectual capability. How to provide a packet of entrance examination problems satisfying a predetermined pattern of discrimination and difficulty is the major problem addressed in this paper, for which an algorithm is introduced. It can be shown that the benefit of applying fuzzy sets and fuzzy relations in determining the first chromosome in the GA process is that the process can reach tolerable solutions faster. A maximum number of generations is still needed as a threshold to prevent run-time system overflow. Generally, problems in the form of passages tend to have lower fitness cost. Abstract (translated from Bahasa Indonesia): The process of composing a packet of problems (for example, problems for a university entrance selection test drawn from a question bank) must consider at least two important aspects: the difficulty level and the discrimination level of the problems. The higher the discrimination level of a problem, the better that problem is for screening the ability of test participants. The challenge is how the problem setter can choose and determine the right (optimal) combination of problems so that the desired difficulty and discrimination levels are met. To solve this problem, an algorithm built as a hybrid of the Genetic Algorithm method and fuzzy sets is introduced. Testing showed that the use of fuzzy sets and fuzzy relations in selecting the initial chromosome speeds up the attainment of tolerable solutions. A maximum threshold on the number of generations is still required to prevent run-time system overflow. Reading-passage problems tend to have lower fitness

  20. Strong convergence of an extragradient-type algorithm for the multiple-sets split equality problem.

    Science.gov (United States)

    Zhao, Ying; Shi, Luoyi

    2017-01-01

    This paper introduces a new extragradient-type method to solve the multiple-sets split equality problem (MSSEP). Under some suitable conditions, the strong convergence of an algorithm can be verified in the infinite-dimensional Hilbert spaces. Moreover, several numerical results are given to show the effectiveness of our algorithm.

  1. Transport equations, Level Set and Eulerian mechanics. Application to fluid-structure coupling

    International Nuclear Information System (INIS)

    Maitre, E.

    2008-11-01

    My work was devoted to the numerical analysis of non-linear elliptic-parabolic equations, to the neutron transport equation and to the simulation of fabric draping. More recently I developed an Eulerian method based on a level set formulation of the immersed boundary method to deal with fluid-structure coupling problems arising in bio-mechanics. Some of the more efficient algorithms for solving the neutron transport equation make use of the splitting of the transport operator, taking into account its characteristics. In the present work we introduce a new algorithm based on this splitting and an adaptation of minimal residual methods to the infinite dimensional case. We present the cases where the velocity space is of dimension 1 (slab geometry) and 2 (plane geometry), because the splitting is simpler in the former

  2. Research and Setting the Modified Algorithm "Predator-Prey" in the Problem of the Multi-Objective Optimization

    Directory of Open Access Journals (Sweden)

    A. P. Karpenko

    2016-01-01

    Full Text Available We consider a class of algorithms for multi-objective optimization - Pareto-approximation algorithms, which suppose a preliminary building of a finite-dimensional approximation of the Pareto set, and thereby also of the Pareto front of the problem. The article gives an overview of population and non-population Pareto-approximation algorithms, identifies their strengths and weaknesses, and presents the canonical "predator-prey" algorithm, showing its shortcomings. We offer a number of modifications of the canonical "predator-prey" algorithm with the aim of overcoming its drawbacks, and present the results of a broad study of the efficiency of these modifications. The peculiarity of the study is the use of quality indicators of the Pareto-approximation that previous publications have not employed. In addition, we present the results of the meta-optimization of the modified algorithm, i.e. determining the optimal values of some free parameters of the algorithm. The study of the efficiency of the modified "predator-prey" algorithm has shown that the proposed modifications improve the following indicators of the basic algorithm: cardinality of the set of archive solutions, uniformity of archive solutions, and computation time. By and large, the research results have shown that the modified and meta-optimized algorithm achieves the same approximation as the basic algorithm, but with an order of magnitude fewer prey. Computational costs are proportionally reduced.

  3. Improved Genetic Algorithm with Two-Level Approximation for Truss Optimization by Using Discrete Shape Variables

    Directory of Open Access Journals (Sweden)

    Shen-yan Chen

    2015-01-01

    Full Text Available This paper presents an Improved Genetic Algorithm with Two-Level Approximation (IGATA) to minimize truss weight by simultaneously optimizing size, shape, and topology variables. On the basis of a previously presented truss sizing/topology optimization method based on two-level approximation and a genetic algorithm (GA), a new method for adding shape variables is presented, in which the nodal positions correspond to a set of coordinate lists. A uniform optimization model including size/shape/topology variables is established. First, a first-level approximate problem is constructed to transform the original implicit problem into an explicit problem. To solve this explicit problem, which involves size/shape/topology variables, GA is used to optimize individuals which include discrete topology variables and shape variables. When calculating the fitness value of each member in the current generation, a second-level approximation method is used to optimize the continuous size variables. With the introduction of shape variables, the original optimization algorithm was improved in its individual coding strategy as well as its GA execution techniques. Meanwhile, the update strategy of the first-level approximation problem was also improved. The results of numerical examples show that the proposed method is effective in dealing with the three kinds of design variables simultaneously, and the required computational cost for structural analysis is quite small.

  4. Adaptable Value-Set Analysis for Low-Level Code

    OpenAIRE

    Brauer, Jörg; Hansen, René Rydhof; Kowalewski, Stefan; Larsen, Kim G.; Olesen, Mads Chr.

    2012-01-01

    This paper presents a framework for binary code analysis that uses only SAT-based algorithms. Within the framework, incremental SAT solving is used to perform a form of weakly relational value-set analysis in a novel way, connecting the expressiveness of the value sets to computational complexity. Another key feature of our framework is that it translates the semantics of binary code into an intermediate representation. This allows for a straightforward translation of the program semantics in...

  5. A Fast Logdet Divergence Based Metric Learning Algorithm for Large Data Sets Classification

    Directory of Open Access Journals (Sweden)

    Jiangyuan Mei

    2014-01-01

    the basis of classifiers, for example, the k-nearest neighbors classifier. Experiments on benchmark data sets demonstrate that the proposed algorithm compares favorably with the state-of-the-art methods.
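
    Since the abstract above is truncated, a brief reminder of the quantity in the record's title may help: the LogDet (Burg) matrix divergence between two positive-definite matrices, D_ld(A, A0) = tr(A A0^{-1}) - log det(A A0^{-1}) - n, is the regularizer commonly minimized in this family of metric-learning methods. The sketch below only evaluates the divergence; it does not reproduce the paper's learning algorithm.

```python
import numpy as np

def logdet_divergence(A, A0):
    """Burg / LogDet divergence between positive-definite matrices A and A0."""
    n = A.shape[0]
    M = A @ np.linalg.inv(A0)
    _, logabsdet = np.linalg.slogdet(M)
    return float(np.trace(M) - logabsdet - n)

A0 = np.eye(3)
A = np.diag([1.0, 2.0, 0.5])
print(logdet_divergence(A, A0))    # > 0, and 0 only when A == A0
print(logdet_divergence(A0, A0))   # 0.0
```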

  6. Pediatric chest HRCT using the iDose4 Hybrid Iterative Reconstruction Algorithm: Which iDose level to choose?

    International Nuclear Information System (INIS)

    Smarda, M; Alexopoulou, E; Mazioti, A; Kordolaimi, S; Ploussi, A; Efstathopoulos, E; Priftis, K

    2015-01-01

    The purpose of the study is to determine the appropriate iterative reconstruction (IR) algorithm level that combines image quality and diagnostic confidence for pediatric patients undergoing high-resolution computed tomography (HRCT). During the last 2 years, a total of 20 children up to 10 years old with a clinical presentation of chronic bronchitis underwent HRCT in our department's 64-detector row CT scanner using the iDose IR algorithm, with similar image settings (80 kVp, 40-50 mAs). CT images were reconstructed with all iDose levels (level 1 to 7) as well as with the filtered-back projection (FBP) algorithm. Subjective image quality was evaluated by 2 experienced radiologists in terms of image noise, sharpness, contrast and diagnostic acceptability using a 5-point scale (1 = excellent image, 5 = non-acceptable image). The existence of artifacts was also noted. All mean scores from both radiologists corresponded to satisfactory image quality (score ≤3), even with the FBP algorithm. Almost excellent (score <2) overall image quality was achieved with iDose levels 5 to 7, but oversmoothing artifacts appearing with iDose levels 6 and 7 affected diagnostic confidence. In conclusion, the use of iDose level 5 enables almost excellent image quality without considerable artifacts affecting the diagnosis. Further evaluation is needed in order to draw more precise conclusions. (paper)

  7. An evolutionary algorithm for tomographic reconstructions in limited data sets problems

    International Nuclear Information System (INIS)

    Turcanu, Catrinel; Craciunescu, Teddy

    2000-01-01

    The paper proposes a new method for tomographic reconstructions. Unlike nuclear medicine applications, physical science problems are often confronted with limited data sets: constraints in the number of projections or limited angle views. The problem of image reconstruction from projections may be considered as a problem of finding an image (solution) having projections that match the experimental ones. In our approach, we choose a statistical correlation coefficient to evaluate the fitness of any potential solution. The optimization process is carried out by an evolutionary algorithm. Our algorithm has some problem-oriented characteristics. One of them is that a chromosome, representing a potential solution, is not linear but coded as a matrix of pixels corresponding to a two-dimensional image. This internal representation mirrors the underlying object, so that slight differences between two points in the original problem space give rise to similar differences once they are encoded. Another particular feature is a newly built crossover operator, the grid-based crossover, suitable for high dimension two-dimensional chromosomes. Except for the population size and the dimension of the cutting grid for the grid-based crossover, all the other parameters of the algorithm are independent of the geometry of the tomographic reconstruction. The performance of the method is evaluated in comparison with a traditional tomographic method, based on the maximization of the entropy of the image, that has proved to work well with limited data sets. The test phantom is typical of an application with limited data sets: the determination of neutron energy spectra with time resolution in the case of short-pulsed neutron emission. Both the qualitative judgement and the quantitative one, based on some figures of merit, point out that the proposed method ensures an improved reconstruction of shapes, sizes and resolution in the image, even in the presence of noise
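
    A small sketch of the fitness evaluation implied above: a candidate reconstruction (a 2D pixel matrix, i.e. the chromosome) is forward-projected at the few available views and scored by the statistical correlation between its projections and the measured ones. The projector here is a crude axis-aligned sum with no rotation, and the GA operators are omitted, so this only illustrates the fitness idea, not the full method.

```python
import numpy as np

def project(image):
    """Very crude forward projector: sums along rows and along columns."""
    return np.concatenate([image.sum(axis=1), image.sum(axis=0)])

def fitness(candidate, measured_projections):
    """Correlation between candidate projections and measured projections."""
    p = project(candidate)
    return float(np.corrcoef(p, measured_projections)[0, 1])

phantom = np.zeros((32, 32)); phantom[8:24, 12:20] = 1.0
measured = project(phantom)
guess = np.zeros((32, 32)); guess[10:22, 10:22] = 1.0
print(fitness(guess, measured), fitness(phantom, measured))   # second is 1.0
```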

  8. SMOS/SMAP Synergy for SMAP Level 2 Soil Moisture Algorithm Evaluation

    Science.gov (United States)

    Bindlish, Rajat; Jackson, Thomas J.; Zhao, Tianjie; Cosh, Michael; Chan, Steven; O'Neill, Peggy; Njoku, Eni; Colliander, Andreas; Kerr, Yann

    2011-01-01

    ancillary data) were used to correct for surface temperature effects and to derive microwave emissivity. ECMWF data were also used for precipitation forecasts, presence of snow, and frozen ground. Vegetation options are described below. One year of soil moisture observations from a set of four watersheds in the U.S. were used to evaluate four different retrieval methodologies: (1) SMOS soil moisture estimates (version 400), (2) SeA soil moisture estimates using the SMOS/SMAP data with SMOS estimated vegetation optical depth, which is part of the SMOS level 2 product, (3) SeA soil moisture estimates using the SMOS/SMAP data and the MODIS-based vegetation climatology data, and (4) SeA soil moisture estimates using the SMOS/SMAP data and actual MODIS observations. The use of SMOS real-world global microwave observations and the analyses described here will help in the development and selection of different land surface parameters and ancillary observations needed for the SMAP soil moisture algorithms. These investigations will greatly improve the quality and reliability of this SMAP product at launch.

  9. The experimental results on the quality of clustering diverse set of data using a modified algorithm chameleon

    Directory of Open Access Journals (Sweden)

    Татьяна Борисовна Шатовская

    2015-03-01

    Full Text Available In this work, results of a modified Chameleon algorithm are discussed. Hierarchical multilevel algorithms consist of several stages: building the graph, coarsening, partitioning, and recovering. The main aim of the article is to explore the clustering quality for different data sets with different combinations of algorithms at the different stages of the algorithm. A further aim is to improve the construction phase by optimizing the choice of k when building the k-nearest-neighbour graph

  10. Efficient frequent pattern mining algorithm based on node sets in cloud computing environment

    Science.gov (United States)

    Billa, V. N. Vinay Kumar; Lakshmanna, K.; Rajesh, K.; Reddy, M. Praveen Kumar; Nagaraja, G.; Sudheer, K.

    2017-11-01

    The ultimate goal of data mining is to determine the hidden information that is useful for making decisions from the large databases collected by an organization. This process involves many tasks, and mining frequent itemsets is one of the most important tasks for transactional databases. These databases contain data at a very large scale, so mining them consumes physical memory and time in proportion to the size of the database. A frequent pattern mining algorithm is considered efficient only if it consumes little memory and time to mine the frequent itemsets from a given large database. With these points in mind, in this thesis we propose a system that mines frequent itemsets in a way that is optimized in terms of memory and time, using cloud computing to parallelize the process and providing the application as a service. The system is a complete framework that uses a proven efficient algorithm, the FIN algorithm, which works on Nodesets and a POC (pre-order coding) tree. To evaluate the performance of the system, we conduct experiments comparing the efficiency of the same algorithm applied in a standalone manner and in a cloud computing environment on a real data set of traffic accidents. The results show that the memory consumption and execution time of the proposed system are much lower than those of the standalone system.
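
    The Nodeset and POC-tree structures of FIN are not reproduced here; purely as a generic illustration of frequent-itemset mining over a transactional database, the sketch below uses a simple level-wise (Apriori-style) search with an absolute minimum-support threshold.

        def frequent_itemsets(transactions, min_support):
            # Level-wise frequent-itemset mining (Apriori-style sketch).
            # transactions: list of sets of items; min_support: absolute count threshold.
            items = {frozenset([x]) for t in transactions for x in t}
            frequent = {}
            level = {i for i in items
                     if sum(i <= t for t in transactions) >= min_support}
            k = 1
            while level:
                for itemset in level:
                    frequent[itemset] = sum(itemset <= t for t in transactions)
                # Generate candidates of size k+1 by joining frequent k-itemsets.
                candidates = {a | b for a in level for b in level if len(a | b) == k + 1}
                level = {c for c in candidates
                         if sum(c <= t for t in transactions) >= min_support}
                k += 1
            return frequent

        # Example: itemsets appearing in at least 2 of 4 transactions.
        db = [{"a", "b", "c"}, {"a", "b"}, {"a", "c"}, {"b", "c"}]
        print(frequent_itemsets(db, min_support=2))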

  11. Setting value optimization method in integration for relay protection based on improved quantum particle swarm optimization algorithm

    Science.gov (United States)

    Yang, Guo Sheng; Wang, Xiao Yang; Li, Xue Dong

    2018-03-01

    With the establishment of the integrated model of relay protection and the expanding scale of the power system, the global setting and optimization of relay protection is an extremely difficult task. This paper presents an application of an improved quantum particle swarm optimization algorithm to the global optimization of relay protection, taking inverse-time overcurrent protection as an example. Reliability, selectivity, speed of operation and flexibility of the relay protection are selected as the four requirements used to establish the optimization targets, and the protection setting values of the whole system are optimized. Finally, for an actual power system, the optimized setting values obtained by the proposed method are compared with those of the standard particle swarm algorithm. The results show that the improved quantum particle swarm optimization algorithm has strong search ability and good robustness, and it is suitable for optimizing setting values in the relay protection of the whole power system.
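
    The improved quantum PSO itself is not specified in enough detail in the record to reproduce; as a baseline illustration, the sketch below implements a plain global-best particle swarm that minimizes a generic objective over bounded setting values. The objective (for example, a weighted penalty combining reliability, selectivity, speed and flexibility) is an assumed placeholder.

        import numpy as np

        def pso_minimize(objective, bounds, n_particles=40, iters=200,
                         w=0.7, c1=1.5, c2=1.5, seed=0):
            # Plain global-best PSO over a box-constrained search space.
            rng = np.random.default_rng(seed)
            lo, hi = np.array(bounds, dtype=float).T
            dim = len(bounds)
            x = rng.uniform(lo, hi, size=(n_particles, dim))          # positions
            v = np.zeros_like(x)                                      # velocities
            pbest, pbest_val = x.copy(), np.array([objective(p) for p in x])
            g = pbest[np.argmin(pbest_val)].copy()                    # global best
            for _ in range(iters):
                r1, r2 = rng.random((2, n_particles, dim))
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
                x = np.clip(x + v, lo, hi)
                vals = np.array([objective(p) for p in x])
                improved = vals < pbest_val
                pbest[improved], pbest_val[improved] = x[improved], vals[improved]
                g = pbest[np.argmin(pbest_val)].copy()
            return g, pbest_val.min()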

  12. Design Approach and Implementation of Application Specific Instruction Set Processor for SHA-3 BLAKE Algorithm

    Science.gov (United States)

    Zhang, Yuli; Han, Jun; Weng, Xinqian; He, Zhongzhu; Zeng, Xiaoyang

    This paper presents an Application Specific Instruction-set Processor (ASIP) for the SHA-3 BLAKE algorithm family, built by instruction set extensions (ISE) of a RISC (reduced instruction set computer) processor. Through a design space exploration of this ASIP to increase performance and reduce area cost, we accomplish an efficient hardware and software implementation of the BLAKE algorithm. The special instructions and their well-matched hardware function unit improve the calculation of the key section of the algorithm, namely the G-functions. Also, relaxing the time constraint of the special function unit decreases its hardware cost while keeping the high data throughput of the processor. Evaluation results show that the ASIP achieves 335 Mbps and 176 Mbps for BLAKE-256 and BLAKE-512, respectively. The extra area cost is only 8.06k equivalent gates. The proposed ASIP outperforms several software approaches on various platforms in cycles per byte. In fact, both the high throughput and the low hardware cost achieved by this programmable processor are comparable to those of ASIC implementations.
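
    For readers unfamiliar with the G-function that the special instructions accelerate, the following sketch reproduces the round structure of the BLAKE-256 G-function as given in the public BLAKE specification; the rotation constants (16, 12, 8, 7), the permutation table sigma and the word constants u are assumed to be those of that specification and should be verified against it before any real use.

        MASK32 = 0xFFFFFFFF

        def rotr32(x, n):
            return ((x >> n) | (x << (32 - n))) & MASK32

        def G(v, a, b, c, d, m, u, sigma_r, i):
            # One BLAKE-256 G-function: mixes four state words v[a], v[b], v[c], v[d]
            # with two message words selected by the round permutation sigma_r.
            # m: 16 message words, u: 16 word constants, sigma_r: permutation for this round.
            v[a] = (v[a] + v[b] + (m[sigma_r[2 * i]] ^ u[sigma_r[2 * i + 1]])) & MASK32
            v[d] = rotr32(v[d] ^ v[a], 16)
            v[c] = (v[c] + v[d]) & MASK32
            v[b] = rotr32(v[b] ^ v[c], 12)
            v[a] = (v[a] + v[b] + (m[sigma_r[2 * i + 1]] ^ u[sigma_r[2 * i]])) & MASK32
            v[d] = rotr32(v[d] ^ v[a], 8)
            v[c] = (v[c] + v[d]) & MASK32
            v[b] = rotr32(v[b] ^ v[c], 7)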

  13. Demons versus level-set motion registration for coronary 18F-sodium fluoride PET

    Science.gov (United States)

    Rubeaux, Mathieu; Joshi, Nikhil; Dweck, Marc R.; Fletcher, Alison; Motwani, Manish; Thomson, Louise E.; Germano, Guido; Dey, Damini; Berman, Daniel S.; Newby, David E.; Slomka, Piotr J.

    2016-03-01

    Ruptured coronary atherosclerotic plaques commonly cause acute myocardial infarction. It has recently been shown that active microcalcification in the coronary arteries, one of the features that characterizes vulnerable plaques at risk of rupture, can be imaged using cardiac-gated 18F-sodium fluoride (18F-NaF) PET. We have shown in previous work that a motion correction technique applied to cardiac-gated 18F-NaF PET images can enhance image quality and improve uptake estimates. In this study, we further investigated the applicability of different algorithms for registration of the coronary artery PET images. In particular, we aimed to compare demons versus level-set nonlinear registration techniques applied for the correction of cardiac motion in coronary 18F-NaF PET. To this end, fifteen patients underwent 18F-NaF PET and prospective coronary CT angiography (CCTA). PET data were reconstructed in 10 ECG-gated bins; subsequently these gated bins were registered using demons and level-set methods guided by the coronary arteries extracted from CCTA, to eliminate the effect of cardiac motion on PET images. Noise levels, target-to-background ratios (TBR) and global motion were compared to assess image quality. Compared to the reference standard of using only the diastolic PET image (25% of the counts from the PET acquisition), cardiac motion registration using either level-set or demons techniques almost halved image noise due to the use of counts from the full PET acquisition and increased the TBR difference between 18F-NaF positive and negative lesions. The demons method produces smoother deformation fields, exhibiting no singularities (which reflects how physically plausible the registration deformation is), compared to the level-set method, which presents between 4% and 8% singularities, depending on the coronary artery considered. In conclusion, the demons method produces smoother motion fields than the level-set method, with a motion that is physiologically more plausible.

  14. constNJ: an algorithm to reconstruct sets of phylogenetic trees satisfying pairwise topological constraints.

    Science.gov (United States)

    Matsen, Frederick A

    2010-06-01

    This article introduces constNJ (constrained neighbor-joining), an algorithm for phylogenetic reconstruction of sets of trees with constrained pairwise rooted subtree-prune-regraft (rSPR) distance. We are motivated by the problem of constructing sets of trees that must fit into a recombination, hybridization, or similar network. Rather than first finding a set of trees that are optimal according to a phylogenetic criterion (e.g., likelihood or parsimony) and then attempting to fit them into a network, constNJ estimates the trees while enforcing specified rSPR distance constraints. The primary input for constNJ is a collection of distance matrices derived from sequence blocks which are assumed to have evolved in a tree-like manner, such as blocks of an alignment which do not contain any recombination breakpoints. The other input is a set of rSPR constraint inequalities for any set of pairs of trees. constNJ is consistent and a strict generalization of the neighbor-joining algorithm; it uses the new notion of maximum agreement partitions (MAPs) to assure that the resulting trees satisfy the given rSPR distance constraints.
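
    constNJ strictly generalizes neighbor joining; purely as background, the sketch below implements the classical, unconstrained neighbor-joining procedure on a distance matrix, returning the sequence of joins and branch lengths. It is not the constrained algorithm of the record.

        import numpy as np

        def neighbor_joining(D, names):
            # Classical neighbor joining on a symmetric distance matrix D.
            D = np.array(D, dtype=float)
            nodes = list(names)
            joins = []
            while len(nodes) > 2:
                n = len(nodes)
                r = D.sum(axis=1)
                Q = (n - 2) * D - r[:, None] - r[None, :]
                np.fill_diagonal(Q, np.inf)
                i, j = np.unravel_index(np.argmin(Q), Q.shape)
                li = 0.5 * D[i, j] + (r[i] - r[j]) / (2 * (n - 2))   # branch length to i
                lj = D[i, j] - li                                    # branch length to j
                new = f"({nodes[i]},{nodes[j]})"
                joins.append((nodes[i], li, nodes[j], lj, new))
                # Distances from the new internal node to the remaining taxa.
                dnew = 0.5 * (D[i, :] + D[j, :] - D[i, j])
                keep = [k for k in range(n) if k not in (i, j)]
                D = np.vstack([np.hstack([D[np.ix_(keep, keep)], dnew[keep, None]]),
                               np.hstack([dnew[keep], [0.0]])])
                nodes = [nodes[k] for k in keep] + [new]
            # Final edge joining the last two nodes.
            joins.append((nodes[0], D[0, 1], nodes[1], 0.0, f"({nodes[0]},{nodes[1]})"))
            return joins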

  15. An ILP based memetic algorithm for finding minimum positive influence dominating sets in social networks

    Science.gov (United States)

    Lin, Geng; Guan, Jian; Feng, Huibin

    2018-06-01

    The positive influence dominating set problem is a variant of the minimum dominating set problem and has many applications in social networks. It is NP-hard and has received increasing attention. Various methods have been proposed to solve the positive influence dominating set problem; however, most existing work has focused on greedy algorithms, and the solution quality needs to be improved. In this paper, we formulate the minimum positive influence dominating set problem as an integer linear program (ILP) and propose an ILP-based memetic algorithm (ILPMA) for solving it. The ILPMA integrates a greedy randomized adaptive construction procedure, a crossover operator, a repair operator, and a tabu search procedure. The performance of ILPMA is validated on nine real-world social networks with up to 36,692 nodes. The results show that ILPMA significantly improves the solution quality and is robust.

  16. Comparison of different reconstruction algorithms for three-dimensional ultrasound imaging in a neurosurgical setting.

    Science.gov (United States)

    Miller, D; Lippert, C; Vollmer, F; Bozinov, O; Benes, L; Schulte, D M; Sure, U

    2012-09-01

    Freehand three-dimensional ultrasound imaging (3D-US) is increasingly used in image-guided surgery. During image acquisition, a set of B-scans is acquired that is distributed in a non-parallel manner over the area of interest. Reconstructing these images into a regular array allows 3D visualization. However, the reconstruction process may introduce artefacts and may therefore reduce image quality. The aim of the study is to compare different algorithms with respect to image quality and diagnostic value for image guidance in neurosurgery. 3D-US data sets were acquired during surgery of various intracerebral lesions using an integrated ultrasound-navigation device. They were stored for post-hoc evaluation. Five different reconstruction algorithms, a standard multiplanar reconstruction with interpolation (MPR), a pixel nearest neighbour method (PNN), a voxel nearest neighbour method (VNN) and two voxel based distance-weighted algorithms (VNN2 and DW) were tested with respect to image quality and artefact formation. The capability of the algorithm to fill gaps within the sample volume was investigated and a clinical evaluation with respect to the diagnostic value of the reconstructed images was performed. MPR was significantly worse than the other algorithms in filling gaps. In an image subtraction test, VNN2 and DW reliably reconstructed images even if large amounts of data were missing. However, the quality of the reconstruction improved, if data acquisition was performed in a structured manner. When evaluating the diagnostic value of reconstructed axial, sagittal and coronal views, VNN2 and DW were judged to be significantly better than MPR and VNN. VNN2 and DW could be identified as robust algorithms that generate reconstructed US images with a high diagnostic value. These algorithms improve the utility and reliability of 3D-US imaging during intraoperative navigation. Copyright © 2012 John Wiley & Sons, Ltd.

  17. Some free boundary problems in potential flow regime using a level set based method

    Energy Technology Data Exchange (ETDEWEB)

    Garzon, M.; Bobillo-Ares, N.; Sethian, J.A.

    2008-12-09

    Recent advances in the field of fluid mechanics with moving fronts are linked to the use of Level Set Methods, a versatile mathematical technique to follow free boundaries which undergo topological changes. A challenging class of problems in this context is that related to the solution of a partial differential equation posed on a moving domain, in which the boundary condition for the PDE solver has to be obtained from a partial differential equation defined on the front. This is the case of potential flow models with moving boundaries. Moreover, the fluid front will possibly be carrying some material substance which will diffuse in the front and be advected by the front velocity, as for example when surfactants are used to lower surface tension. We present a Level Set based methodology to embed these partial differential equations defined on the front in a complete Eulerian framework, fully avoiding the tracking of fluid particles and its known limitations. To show the advantages of this approach in the field of fluid mechanics, we present in this work one particular application: the numerical approximation of a potential flow model to simulate the evolution and breaking of a solitary wave propagating over a sloping bottom, and we compare the level set based algorithm with previous front tracking models.
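
    The record describes the methodology in prose; as a minimal illustration of the Eulerian building block involved, the sketch below advances a level-set function with a first-order Godunov upwind step for a non-negative normal speed F. It omits the surface PDE coupling, reinitialization and the potential-flow solver.

        import numpy as np

        def advance_level_set(phi, F, dt, h):
            # One explicit step of phi_t + F * |grad(phi)| = 0 (Godunov upwinding, F >= 0).
            # Periodic boundaries via np.roll are used purely for brevity.
            dmx = (phi - np.roll(phi, 1, axis=0)) / h   # backward differences
            dpx = (np.roll(phi, -1, axis=0) - phi) / h  # forward differences
            dmy = (phi - np.roll(phi, 1, axis=1)) / h
            dpy = (np.roll(phi, -1, axis=1) - phi) / h
            grad = np.sqrt(np.maximum(dmx, 0.0) ** 2 + np.minimum(dpx, 0.0) ** 2 +
                           np.maximum(dmy, 0.0) ** 2 + np.minimum(dpy, 0.0) ** 2)
            return phi - dt * F * grad

        # Example: expand a circle of radius 0.3 outward at unit normal speed.
        n, h = 128, 1.0 / 128
        y, x = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n), indexing="ij")
        phi = np.sqrt((x - 0.5) ** 2 + (y - 0.5) ** 2) - 0.3
        for _ in range(100):
            phi = advance_level_set(phi, F=1.0, dt=0.5 * h, h=h)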

  18. Validation of Correction Algorithms for Near-IR Analysis of Human Milk in an Independent Sample Set-Effect of Pasteurization.

    Science.gov (United States)

    Kotrri, Gynter; Fusch, Gerhard; Kwan, Celia; Choi, Dasol; Choi, Arum; Al Kafi, Nisreen; Rochow, Niels; Fusch, Christoph

    2016-02-26

    Commercial infrared (IR) milk analyzers are being increasingly used in research settings for the macronutrient measurement of breast milk (BM) prior to its target fortification. These devices, however, may not provide reliable measurement if not properly calibrated. In the current study, we tested a correction algorithm for a Near-IR milk analyzer (Unity SpectraStar, Brookfield, CT, USA) for fat and protein measurements, and examined the effect of pasteurization on the IR matrix and the stability of fat, protein, and lactose. Measurement values generated through Near-IR analysis were compared against those obtained through chemical reference methods to test the correction algorithm for the Near-IR milk analyzer. Macronutrient levels were compared between unpasteurized and pasteurized milk samples to determine the effect of pasteurization on macronutrient stability. The correction algorithm generated for our device was found to be valid for unpasteurized and pasteurized BM. Pasteurization had no effect on the macronutrient levels and the IR matrix of BM. These results show that fat and protein content can be accurately measured and monitored for unpasteurized and pasteurized BM. Of additional importance is the implication that donated human milk, generally low in protein content, has the potential to be target fortified.

  19. Segmentation of teeth in CT volumetric dataset by panoramic projection and variational level set

    Energy Technology Data Exchange (ETDEWEB)

    Hosntalab, Mohammad [Islamic Azad University, Faculty of Engineering, Science and Research Branch, Tehran (Iran); Aghaeizadeh Zoroofi, Reza [University of Tehran, Control and Intelligent Processing Center of Excellence, School of Electrical and Computer Engineering, College of Engineering, Tehran (Iran); Abbaspour Tehrani-Fard, Ali [Islamic Azad University, Faculty of Engineering, Science and Research Branch, Tehran (Iran); Sharif University of Technology, Department of Electrical Engineering, Tehran (Iran); Shirani, Gholamreza [Faculty of Dentistry Medical Science of Tehran University, Oral and Maxillofacial Surgery Department, Tehran (Iran)

    2008-09-15

    Quantification of teeth is of clinical importance for various computer assisted procedures such as dental implants, orthodontic planning, and face, jaw and cosmetic surgeries. In this regard, segmentation is a major step. In this paper, we propose a method for segmentation of teeth in volumetric computed tomography (CT) data using panoramic re-sampling of the dataset in the coronal view and a variational level set. The proposed method consists of five steps as follows: first, we extract a mask from the CT images using Otsu thresholding. Second, the teeth are segmented from other bony tissues by utilizing anatomical knowledge of the teeth in the jaws. Third, the arcs of the upper and lower jaws are estimated and the dataset is re-sampled panoramically. Separation of the upper and lower jaws and initial segmentation of the teeth are then performed by employing the horizontal and vertical projections of the panoramic dataset, respectively. Based on the above procedures, an initial mask for each tooth is obtained. Finally, we utilize the initial mask of the teeth and apply a variational level set to refine the initial teeth boundaries to final contours. The proposed algorithm was evaluated on 30 multi-slice CT datasets comprising 3,600 images. Experimental results reveal the effectiveness of the proposed method. In the proposed algorithm, the variational level set technique is utilized to trace the contour of the teeth. Because this technique is based on the characteristics of the overall region of the tooth image, it is possible to extract a very smooth and accurate tooth contour. On the available datasets, the proposed technique was successful in teeth segmentation compared to previous techniques. (orig.)

  1. SeaWiFS technical report series. Volume 32: Level-3 SeaWiFS data products. Spatial and temporal binning algorithms

    Science.gov (United States)

    Hooker, Stanford B. (Editor); Firestone, Elaine R. (Editor); Acker, James G. (Editor); Campbell, Janet W.; Blaisdell, John M.; Darzi, Michael

    1995-01-01

    The level-3 data products from the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) are statistical data sets derived from level-2 data. Each data set will be based on a fixed global grid of equal-area bins that are approximately 9 x 9 sq km. Statistics available for each bin include the sum and sum of squares of the natural logarithm of derived level-2 geophysical variables where sums are accumulated over a binning period. Operationally, products with binning periods of 1 day, 8 days, 1 month, and 1 year will be produced and archived. From these accumulated values and for each bin, estimates of the mean, standard deviation, median, and mode may be derived for each geophysical variable. This report contains two major parts: the first (Section 2) is intended as a users' guide for level-3 SeaWiFS data products. It contains an overview of level-0 to level-3 data processing, a discussion of important statistical considerations when using level-3 data, and details of how to use the level-3 data. The second part (Section 3) presents a comparative statistical study of several binning algorithms based on CZCS and moored fluorometer data. The operational binning algorithms were selected based on the results of this study.
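
    As a small worked example of how such bin statistics are used, the sketch below accumulates the sum and sum of squares of the log-transformed values for one bin and then recovers the mean and standard deviation of the log variable; the numbers are made up for illustration.

        import math

        def accumulate(values):
            # Accumulate level-3 style statistics for one bin over a binning period.
            logs = [math.log(v) for v in values]
            return {"n": len(logs), "sum": sum(logs), "sumsq": sum(x * x for x in logs)}

        def mean_and_std(bin_stats):
            # Recover mean and standard deviation of the log-transformed variable.
            n, s, ss = bin_stats["n"], bin_stats["sum"], bin_stats["sumsq"]
            mean = s / n
            var = max(ss / n - mean * mean, 0.0)
            return mean, math.sqrt(var)

        stats = accumulate([0.12, 0.15, 0.11, 0.18])   # e.g. chlorophyll values in one bin
        print(mean_and_std(stats))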

  2. DATA-ORIENTED ALGORITHM FOR ROUTE CHOICE SET GENERATION IN A METROPOLITAN AREA WITH MOBILE PHONE GPS DATA

    Directory of Open Access Journals (Sweden)

    T. Nakamura

    2012-07-01

    Full Text Available Nowadays, for the estimation of traffic demand or people flow, modelling route choice activity in road networks is an important task, and many algorithms have been developed to generate route choice sets. However, it is difficult to develop an algorithm based on a small amount of data that can be applied generally within a metropolitan area, because the characteristics of road networks vary widely. On the other hand, the collection of people-movement data has recently become much easier, especially through mobile phones, most of which now include GPS functionality. Given this background, we propose a data-oriented algorithm to generate route choice sets using mobile phone GPS data. GPS data contain a number of measurement errors; hence, they must be adjusted to account for these errors before use in advanced people-movement analysis. However, this is time-consuming and expensive, because an enormous amount of daily data can be obtained. Hence, the objective of this study is to develop an algorithm that can easily manage GPS data. Specifically, movement data are first selected from all GPS data by calculating the speed. Next, the nearest roads in the road network are selected from the GPS locations, and such data are counted for each road. Then an algorithm based on the GSP (Gateway Shortest Path) algorithm is proposed, which searches for the shortest path through a given gateway. In the proposed algorithm, the road for which the utilization volume calculated from the GPS data is large is selected as the gateway. Thus, route choice sets that are based on trends in real GPS data are generated. To evaluate the proposed method, GPS data from 0.7 million people a year in Japan are used, with the DRM (Digital Road Map) as the road network; the DRM is one of the most detailed road networks in Japan. Route choice sets are generated using the proposed algorithm and the cover rate of the utilization volume of each road under evaluation is calculated. As a
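
    To illustrate the gateway-shortest-path idea the record builds on, the sketch below computes the cheapest route forced to pass through a chosen gateway node as the concatenation of two Dijkstra searches; the graph format and the gateway choice (a high-utilization road, per the record) are illustrative assumptions.

        import heapq

        def dijkstra(graph, source):
            # graph: {node: [(neighbour, cost), ...]}; returns shortest distances from source.
            dist = {source: 0.0}
            heap = [(0.0, source)]
            while heap:
                d, u = heapq.heappop(heap)
                if d > dist.get(u, float("inf")):
                    continue
                for v, w in graph.get(u, []):
                    nd = d + w
                    if nd < dist.get(v, float("inf")):
                        dist[v] = nd
                        heapq.heappush(heap, (nd, v))
            return dist

        def gateway_shortest_cost(graph, source, target, gateway):
            # Cost of the cheapest source -> gateway -> target route.
            from_source = dijkstra(graph, source)
            from_gateway = dijkstra(graph, gateway)   # distances from the gateway onward
            return from_source.get(gateway, float("inf")) + from_gateway.get(target, float("inf"))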

  3. Rational use of cognitive resources: levels of analysis between the computational and the algorithmic.

    Science.gov (United States)

    Griffiths, Thomas L; Lieder, Falk; Goodman, Noah D

    2015-04-01

    Marr's levels of analysis (computational, algorithmic, and implementation) have served cognitive science well over the last 30 years. But the recent increase in the popularity of the computational level raises a new challenge: How do we begin to relate models at different levels of analysis? We propose that it is possible to define levels of analysis that lie between the computational and the algorithmic, providing a way to build a bridge between computational- and algorithmic-level models. The key idea is to push the notion of rationality, often used in defining computational-level models, deeper toward the algorithmic level. We offer a simple recipe for reverse-engineering the mind's cognitive strategies by deriving optimal algorithms for a series of increasingly more realistic abstract computational architectures, which we call "resource-rational analysis." Copyright © 2015 Cognitive Science Society, Inc.

  4. Novel gene sets improve set-level classification of prokaryotic gene expression data.

    Science.gov (United States)

    Holec, Matěj; Kuželka, Ondřej; Železný, Filip

    2015-10-28

    Set-level classification of gene expression data has received significant attention recently. In this setting, high-dimensional vectors of features corresponding to genes are converted into lower-dimensional vectors of features corresponding to biologically interpretable gene sets. The dimensionality reduction brings the promise of a decreased risk of overfitting, potentially resulting in improved accuracy of the learned classifiers. However, recent empirical research has not confirmed this expectation. Here we hypothesize that the reported unfavorable classification results in the set-level framework were due to the adoption of unsuitable gene sets defined typically on the basis of the Gene ontology and the KEGG database of metabolic networks. We explore an alternative approach to defining gene sets, based on regulatory interactions, which we expect to collect genes with more correlated expression. We hypothesize that such more correlated gene sets will enable to learn more accurate classifiers. We define two families of gene sets using information on regulatory interactions, and evaluate them on phenotype-classification tasks using public prokaryotic gene expression data sets. From each of the two gene-set families, we first select the best-performing subtype. The two selected subtypes are then evaluated on independent (testing) data sets against state-of-the-art gene sets and against the conventional gene-level approach. The novel gene sets are indeed more correlated than the conventional ones, and lead to significantly more accurate classifiers. Novel gene sets defined on the basis of regulatory interactions improve set-level classification of gene expression data. The experimental scripts and other material needed to reproduce the experiments are available at http://ida.felk.cvut.cz/novelgenesets.tar.gz.

  5. Identifying elementary iterated systems through algorithmic inference: The Cantor set example

    Energy Technology Data Exchange (ETDEWEB)

    Apolloni, Bruno [Dipartimento di Scienze dell' Informazione, Universita degli Studi di Milano, Via Comelico 39/41, 20135 Milan (Italy)]. E-mail: apolloni@dsi.unimi.it; Bassis, Simone [Dipartimento di Scienze dell' Informazione, Universita degli Studi di Milano, Via Comelico 39/41, 20135 Milan (Italy)]. E-mail: bassis@dsi.unimi.it

    2006-10-15

    We come back to the old problem of fractal identification within the new framework of algorithmic inference. The key points are: (i) to identify sufficient statistics to be put in connection with the unknown values of the fractal parameters, and (ii) to manage the timing of the iterated process through spatial statistics. We accomplish these tasks successfully for the Cantor sets. We are able to compute confidence intervals for both the scaling parameter theta and the iteration number n at which we are observing a set. We both check numerically the coverage of these intervals and delineate a general strategy for treating more complex iterated systems.
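
    For concreteness, the sketch below generates the intervals of a Cantor-type set after n iterations with scaling parameter theta (theta = 1/3 gives the classical middle-thirds set); the confidence-interval machinery of the record is not reproduced.

        def cantor_intervals(theta, n):
            # Intervals remaining after n iterations of the Cantor construction on [0, 1]:
            # each interval [a, b] is replaced by its two outer pieces of relative length theta.
            intervals = [(0.0, 1.0)]
            for _ in range(n):
                nxt = []
                for a, b in intervals:
                    length = b - a
                    nxt.append((a, a + theta * length))
                    nxt.append((b - theta * length, b))
                intervals = nxt
            return intervals

        # Two iterations of the middle-thirds construction yield four intervals.
        print(cantor_intervals(1.0 / 3.0, 2))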

  6. LiveWire interactive boundary extraction algorithm based on Haar wavelet transform and control point set direction search

    Science.gov (United States)

    Cheng, Jun; Zhang, Jun; Tian, Jinwen

    2015-12-01

    Based on a deep analysis of the LiveWire interactive boundary extraction algorithm, a new algorithm focusing on improving the speed of the LiveWire algorithm is proposed in this paper. Firstly, the Haar wavelet transform is applied to the input image, and the boundary is extracted on the low-resolution image obtained by the wavelet transform. Secondly, the LiveWire shortest path is calculated using a direction search over the control point set, utilizing the spatial relationship between the two control points the user provides in real time. Thirdly, the search order of the points adjacent to the starting node is set in advance, and an ordinary queue instead of a priority queue is used as the storage pool of the points when optimizing their shortest-path values, thus reducing the complexity of the algorithm from O(n^2) to O(n). Finally, a region iterative backward projection method based on neighborhood pixel polling is used to convert the dual-pixel boundary of the reconstructed image to a single-pixel boundary after the inverse Haar wavelet transform. The algorithm proposed in this paper combines the advantages of the Haar wavelet transform, which offers fast image decomposition and reconstruction and is more consistent with the texture features of the image, with those of the optimal path search based on the control point set direction search, which reduces the time complexity of the original algorithm. As a result, the algorithm improves the speed of interactive boundary extraction while reflecting the boundary information of the image more comprehensively. All of the methods mentioned above play a large role in improving the execution efficiency and the robustness of the algorithm.

  7. A deterministic algorithm for fitting a step function to a weighted point-set

    KAUST Repository

    Fournier, Hervé ; Vigneron, Antoine E.

    2013-01-01

    Given a set of n points in the plane, each point having a positive weight, and an integer k>0, we present an optimal O(nlogn)-time deterministic algorithm to compute a step function with k steps that minimizes the maximum weighted vertical distance

  8. A Binary Cat Swarm Optimization Algorithm for the Non-Unicost Set Covering Problem

    Directory of Open Access Journals (Sweden)

    Broderick Crawford

    2015-01-01

    Full Text Available The Set Covering Problem consists in finding a subset of columns in a zero-one matrix such that they cover all the rows of the matrix at minimum cost. To solve the Set Covering Problem we use a metaheuristic called Binary Cat Swarm Optimization, a recent swarm technique based on cat behavior. Domestic cats show the ability to hunt and are curious about moving objects; based on this, the cats have two modes of behavior: seeking mode and tracing mode. We are the first to use this metaheuristic to solve this problem; our algorithm solves a set of 65 Set Covering Problem instances from OR-Library.
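
    The Binary Cat Swarm metaheuristic itself is not reproduced here; as a point of reference for the problem being solved, the sketch below is the classical greedy heuristic for the non-unicost Set Covering Problem, which picks at each step the column with the best cost per newly covered row.

        def greedy_set_cover(rows, columns, costs):
            # rows: set of row indices to cover.
            # columns: {col_id: set of rows covered}; costs: {col_id: positive cost}.
            uncovered = set(rows)
            chosen, total = [], 0.0
            while uncovered:
                # Pick the column with minimum cost per newly covered row.
                best = min((c for c in columns if columns[c] & uncovered),
                           key=lambda c: costs[c] / len(columns[c] & uncovered))
                chosen.append(best)
                total += costs[best]
                uncovered -= columns[best]
            return chosen, total

        cols = {"A": {1, 2, 3}, "B": {2, 4}, "C": {3, 4, 5}, "D": {5}}
        print(greedy_set_cover({1, 2, 3, 4, 5}, cols,
                               {"A": 3.0, "B": 1.0, "C": 2.5, "D": 1.0}))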

  9. Applications and Benefits for Big Data Sets Using Tree Distances and The T-SNE Algorithm

    Science.gov (United States)

    2016-03-01

    Master's thesis "Applications and Benefits for Big Data Sets Using Tree Distances and the T-SNE Algorithm" by Suyoung Lee, March 2016; thesis advisor: Samuel E. Buttrey. Approved for public release; distribution is unlimited. Abstract (truncated): Modern data sets often consist of unstructured data

  10. Adaptive discrete-ordinates algorithms and strategies

    International Nuclear Information System (INIS)

    Stone, J.C.; Adams, M.L.

    2005-01-01

    We present our latest algorithms and strategies for adaptively refined discrete-ordinates quadrature sets. In our basic strategy, which we apply here in two-dimensional Cartesian geometry, the spatial domain is divided into regions. Each region has its own quadrature set, which is adapted to the region's angular flux. Our algorithms add a 'test' direction to the quadrature set if the angular flux calculated at that direction differs by more than a user-specified tolerance from the angular flux interpolated from other directions. Different algorithms have different prescriptions for the method of interpolation and/or choice of test directions and/or prescriptions for quadrature weights. We discuss three different algorithms of different interpolation orders. We demonstrate through numerical results that each algorithm is capable of generating solutions with negligible angular discretization error. This includes elimination of ray effects. We demonstrate that all of our algorithms achieve a given level of error with far fewer unknowns than does a standard quadrature set applied to an entire problem. To address a potential issue with other algorithms, we present one algorithm that retains exact integration of high-order spherical-harmonics functions, no matter how much local refinement takes place. To address another potential issue, we demonstrate that all of our methods conserve partial currents across interfaces where quadrature sets change. We conclude that our approach is extremely promising for solving the long-standing problem of angular discretization error in multidimensional transport problems. (authors)

  11. Volume Sculpting Using the Level-Set Method

    DEFF Research Database (Denmark)

    Bærentzen, Jakob Andreas; Christensen, Niels Jørgen

    2002-01-01

    In this paper, we propose the use of the Level-Set Method as the underlying technology of a volume sculpting system. The main motivation is that this leads to a very generic technique for deformation of volumetric solids. In addition, our method preserves a distance field volume representation. A scaling window is used to adapt the Level-Set Method to local deformations and to allow the user to control the intensity of the tool. Level-Set based tools have been implemented in an interactive sculpting system, and we show sculptures created using the system.

  12. Optimisation of groundwater level monitoring networks using geostatistical modelling based on the Spartan family variogram and a genetic algorithm method

    Science.gov (United States)

    Parasyris, Antonios E.; Spanoudaki, Katerina; Kampanis, Nikolaos A.

    2016-04-01

    Groundwater level monitoring networks provide essential information for water resources management, especially in areas with significant groundwater exploitation for agricultural and domestic use. Given the high maintenance costs of these networks, the development of tools which regulators can use for efficient network design is essential. In this work, a monitoring network optimisation tool is presented. The tool couples geostatistical modelling based on the Spartan family variogram with a genetic algorithm method and is applied to the Mires basin in Crete, Greece, an area of high socioeconomic and agricultural interest which suffers from groundwater overexploitation leading to a dramatic decrease of groundwater levels. The purpose of the optimisation tool is to determine which wells to exclude from the monitoring network because they add little or no beneficial information to groundwater level mapping of the area. Unlike previous relevant investigations, the tool presented here uses Ordinary Kriging with the recently established, non-differentiable Spartan variogram, which, based on a previous geostatistical study in the area, leads to optimal groundwater level mapping. Seventy boreholes operate in the area for groundwater abstraction and water level monitoring. The Spartan variogram gives overall the most accurate groundwater level estimates, followed closely by the power-law model. The geostatistical model is coupled to an integer genetic algorithm programmed in MATLAB 2015a. The algorithm is used to find the set of wells whose removal leads to the minimum error between the original water level mapping using all the available wells in the network and the mapping using the reduced well network (the error is defined as the 2-norm of the difference between the original mapping matrix with 70 wells and the mapping matrix of the reduced well network). The solution to the
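
    The sketch below illustrates the kind of binary genetic algorithm the record couples to the kriging model: each chromosome marks which wells are kept, and the fitness penalizes the mapping error of the reduced network. The function mapping_error (kriging with the reduced network compared against the full 70-well mapping) is an assumed placeholder, and all GA parameters are illustrative.

        import numpy as np

        def ga_select_wells(mapping_error, n_wells, n_keep,
                            pop_size=50, generations=100, seed=0):
            # Binary GA: chromosome[i] == 1 keeps well i; exactly n_keep wells are retained.
            rng = np.random.default_rng(seed)

            def random_mask():
                mask = np.zeros(n_wells, dtype=int)
                mask[rng.choice(n_wells, n_keep, replace=False)] = 1
                return mask

            def repair(mask):
                # Enforce exactly n_keep ones by randomly flipping surplus/missing bits.
                while mask.sum() > n_keep:
                    mask[rng.choice(np.flatnonzero(mask))] = 0
                while mask.sum() < n_keep:
                    mask[rng.choice(np.flatnonzero(mask == 0))] = 1
                return mask

            pop = [random_mask() for _ in range(pop_size)]
            for _ in range(generations):
                scores = np.array([mapping_error(m) for m in pop])
                order = np.argsort(scores)
                parents = [pop[i] for i in order[: pop_size // 2]]     # truncation selection
                children = []
                while len(parents) + len(children) < pop_size:
                    a, b = rng.choice(len(parents), 2, replace=False)
                    cut = rng.integers(1, n_wells)
                    child = np.concatenate([parents[a][:cut], parents[b][cut:]])
                    if rng.random() < 0.2:                             # bit-flip mutation
                        child[rng.integers(n_wells)] ^= 1
                    children.append(repair(child))
                pop = parents + children
            best = min(pop, key=mapping_error)
            return np.flatnonzero(best)   # indices of the wells to keep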

  13. An Algorithm for Glaucoma Screening in Clinical Settings and Its Preliminary Performance Profile

    Directory of Open Access Journals (Sweden)

    S-Farzad Mohammadi

    2013-01-01

    Full Text Available Purpose: To devise and evaluate a screening algorithm for glaucoma in clinical settings. Methods: Screening included examination of the optic disc for vertical cupping (≥0.4) and asymmetry (≥0.15), Goldmann applanation tonometry (≥21 mmHg, adjusted or unadjusted for central corneal thickness), and automated perimetry. In the diagnostic step, retinal nerve fiber layer imaging was performed using scanning laser polarimetry. Performance of the screening protocol was assessed in an eye hospital-based program in which 124 non-physician personnel aged 40 years or above were examined. A single ophthalmologist carried out the examinations and in equivocal cases, a glaucoma subspecialist's opinion was sought. Results: Glaucoma was diagnosed in six cases (prevalence 4.8%; 95% confidence interval, 0.01-0.09), of whom five were new. The likelihood of making a definite diagnosis of glaucoma for those who were screened positively was 8.5 times higher than the estimated baseline risk for the reference population; the positive predictive value of the screening protocol was 30%. Screening excluded 80% of the initial population. Conclusion: Application of a formal screening protocol (such as our algorithm or its equivalent) in clinical settings can be helpful in detecting new cases of glaucoma. Preliminary performance assessment of the algorithm showed its applicability and effectiveness in detecting glaucoma among subjects without any visual complaint.

  14. An efficient connected dominating set algorithm in WSNs based on the induced tree of the crossed cube

    Directory of Open Access Journals (Sweden)

    Zhang Jing

    2015-06-01

    Full Text Available The connected dominating set (CDS) has become a well-known approach for constructing a virtual backbone in wireless sensor networks. Traffic can then be forwarded by the virtual backbone while other nodes turn off their radios to save energy. Furthermore, a smaller CDS incurs fewer interference problems. However, constructing a minimum CDS is an NP-hard problem, and thus most researchers concentrate on deriving approximate algorithms. In this paper, a novel algorithm based on the induced tree of the crossed cube (ITCC) is presented. The ITCC first finds a maximal independent set (MIS), based on building an induced tree of the crossed cube network, and then connects the MIS nodes to form a CDS. The priority of an induced tree is determined according to a new parameter, the degree of the node in the square of a graph. This paper presents a proof that the ITCC generates a CDS with a lower approximation ratio. Furthermore, it is proved that the cardinality of the induced trees is a Fibonacci sequence, and an upper bound on the size of the dominating set is established. The simulations show that the algorithm provides the smallest CDS size compared with some other traditional algorithms.

  15. PixTrig: a Level 2 track finding algorithm based on pixel detector

    CERN Document Server

    Baratella, A; Morettini, P; Parodi, F

    2000-01-01

    This note describes an algorithm for track search at Level 2 based on pixel detector. Using three pixel clusters we can produce a reconstruction of the track parameter in both z and R-phi plane. These track segments can be used as seed for more sophisticated track finding algorithms or used directly, especially when impact parameter resolution is crucial. The algorithm efficiency is close to 90% for pt > 1 GeV/c and the processing time is small enough to allow a complete detector reconstruction (non RoI guided) within the Level 2 processing.

  16. Fast Sparse Level Sets on Graphics Hardware

    NARCIS (Netherlands)

    Jalba, Andrei C.; Laan, Wladimir J. van der; Roerdink, Jos B.T.M.

    The level-set method is one of the most popular techniques for capturing and tracking deformable interfaces. Although level sets have demonstrated great potential in visualization and computer graphics applications, such as surface editing and physically based modeling, their use for interactive

  17. Levels of abstraction in students' understanding of the concept of algorithm : the qualitative perspective

    NARCIS (Netherlands)

    Perrenet, J.C.; Kaasenbrood, E.J.S.

    2006-01-01

    In a former, mainly quantitative, study we defined four levels of abstraction in Computer Science students' thinking about the concept of algorithm. We constructed a list of questions about algorithms to measure the answering level as an indication for the thinking level. The answering level

  18. Automatic segmentation of Leishmania parasite in microscopic images using a modified CV level set method

    Science.gov (United States)

    Farahi, Maria; Rabbani, Hossein; Talebi, Ardeshir; Sarrafzadeh, Omid; Ensafi, Shahab

    2015-12-01

    Visceral leishmaniasis is a parasitic disease that affects the liver, spleen and bone marrow. According to the World Health Organization report, definitive diagnosis is possible only by direct observation of the Leishman body in microscopic images taken from bone marrow samples. We utilize morphological operations and the CV level set method to segment Leishman bodies in digital color microscopic images captured from bone marrow samples. A linear contrast stretching method is used for image enhancement, and the morphological method is applied to determine the parasite regions and remove unwanted objects. Modified global and local CV level set methods are proposed for segmentation, and a shape-based stopping factor is used to speed up the algorithm. Manual segmentation is considered as the ground truth to evaluate the proposed method. The method is tested on 28 samples and achieves a mean segmentation error of 10.90% for the global model and 9.76% for the local model.

  19. Automated Algorithms for Quantum-Level Accuracy in Atomistic Simulations: LDRD Final Report.

    Energy Technology Data Exchange (ETDEWEB)

    Thompson, Aidan Patrick; Schultz, Peter Andrew; Crozier, Paul; Moore, Stan Gerald; Swiler, Laura Painton; Stephens, John Adam; Trott, Christian Robert; Foiles, Stephen Martin; Tucker, Garritt J. (Drexel University)

    2014-09-01

    This report summarizes the results of LDRD project 12-0395, titled "Automated Algorithms for Quantum-level Accuracy in Atomistic Simulations." During the course of this LDRD, we have developed an interatomic potential for solids and liquids called the Spectral Neighbor Analysis Potential (SNAP). The SNAP potential has a very general form and uses machine-learning techniques to reproduce the energies, forces, and stress tensors of a large set of small configurations of atoms, which are obtained using high-accuracy quantum electronic structure (QM) calculations. The local environment of each atom is characterized by a set of bispectrum components of the local neighbor density projected onto a basis of hyperspherical harmonics in four dimensions. The SNAP coefficients are determined using weighted least-squares linear regression against the full QM training set. This allows the SNAP potential to be fit in a robust, automated manner to large QM data sets using many bispectrum components. The calculation of the bispectrum components and the SNAP potential are implemented in the LAMMPS parallel molecular dynamics code. Global optimization methods in the DAKOTA software package are used to seek out good choices of hyperparameters that define the overall structure of the SNAP potential. FitSnap.py, a Python-based software package interfacing to both LAMMPS and DAKOTA, is used to formulate the linear regression problem, solve it, and analyze the accuracy of the resultant SNAP potential. We describe a SNAP potential for tantalum that accurately reproduces a variety of solid and liquid properties. Most significantly, in contrast to existing tantalum potentials, SNAP correctly predicts the Peierls barrier for screw dislocation motion. We also present results from SNAP potentials generated for indium phosphide (InP) and silica (SiO2). We describe efficient algorithms for calculating SNAP forces and energies in molecular dynamics simulations using massively parallel computers.
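
    At its core, the fit described above is a weighted linear least-squares problem in which bispectrum descriptors form the design matrix and QM energies and forces the targets. The sketch below shows only that regression step, with made-up array sizes; it is not the FitSnap.py/LAMMPS/DAKOTA tool chain.

        import numpy as np

        def fit_snap_coefficients(descriptors, targets, weights):
            # Solve min_c || W (A c - y) ||_2 for the potential coefficients c.
            # descriptors: (n_data, n_coeff) bispectrum design matrix A
            # targets:     (n_data,) QM energies/forces y
            # weights:     (n_data,) per-row weights (e.g. emphasising forces vs energies)
            w = np.sqrt(np.asarray(weights))
            A = np.asarray(descriptors) * w[:, None]
            y = np.asarray(targets) * w
            coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
            return coeffs

        # Tiny synthetic example with 200 configurations and 30 bispectrum components.
        rng = np.random.default_rng(1)
        A = rng.normal(size=(200, 30))
        true_c = rng.normal(size=30)
        y = A @ true_c + 0.01 * rng.normal(size=200)
        print(np.allclose(fit_snap_coefficients(A, y, np.ones(200)), true_c, atol=0.05))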

  20. A Comprehensive Training Data Set for the Development of Satellite-Based Volcanic Ash Detection Algorithms

    Science.gov (United States)

    Schmidl, Marius

    2017-04-01

    We present a comprehensive training data set covering a large range of atmospheric conditions, including disperse volcanic ash and desert dust layers. These data sets contain all information required for the development of volcanic ash detection algorithms based on artificial neural networks, urgently needed since volcanic ash in the airspace is a major concern of aviation safety authorities. Selected parts of the data are used to train the volcanic ash detection algorithm VADUGS. They contain atmospheric and surface-related quantities as well as the corresponding simulated satellite data for the channels in the infrared spectral range of the SEVIRI instrument on board MSG-2. To get realistic results, ECMWF, IASI-based, and GEOS-Chem data are used to calculate all parameters describing the environment, whereas the software package libRadtran is used to perform radiative transfer simulations returning the brightness temperatures for each atmospheric state. As optical properties are a prerequisite for radiative simulations accounting for aerosol layers, the development also included the computation of optical properties for a set of different aerosol types from different sources. A description of the developed software and the used methods is given, besides an overview of the resulting data sets.

  1. A learning algorithm for adaptive canonical correlation analysis of several data sets.

    Science.gov (United States)

    Vía, Javier; Santamaría, Ignacio; Pérez, Jesús

    2007-01-01

    Canonical correlation analysis (CCA) is a classical tool in statistical analysis to find the projections that maximize the correlation between two data sets. In this work we propose a generalization of CCA to several data sets, which is shown to be equivalent to the classical maximum variance (MAXVAR) generalization proposed by Kettenring. The reformulation of this generalization as a set of coupled least squares regression problems is exploited to develop a neural structure for CCA. In particular, the proposed CCA model is a two layer feedforward neural network with lateral connections in the output layer to achieve the simultaneous extraction of all the CCA eigenvectors through deflation. The CCA neural model is trained using a recursive least squares (RLS) algorithm. Finally, the convergence of the proposed learning rule is proved by means of stochastic approximation techniques and their performance is analyzed through simulations.
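
    As background for the adaptive neural generalization described above, the sketch below computes classical two-set CCA via whitened cross-covariance and an SVD; the RLS-trained network and the MAXVAR multi-set extension of the record are not shown.

        import numpy as np

        def cca(X, Y, reg=1e-8):
            # Classical CCA for two data sets X (n x p) and Y (n x q).
            # Returns canonical correlations and the projection matrices Wx, Wy.
            Xc, Yc = X - X.mean(0), Y - Y.mean(0)
            n = X.shape[0]
            Sxx = Xc.T @ Xc / (n - 1) + reg * np.eye(X.shape[1])
            Syy = Yc.T @ Yc / (n - 1) + reg * np.eye(Y.shape[1])
            Sxy = Xc.T @ Yc / (n - 1)

            def inv_sqrt(S):
                # Inverse matrix square root via the eigendecomposition of S.
                vals, vecs = np.linalg.eigh(S)
                return vecs @ np.diag(1.0 / np.sqrt(vals)) @ vecs.T

            # Whiten both blocks, then take the SVD of the whitened cross-covariance.
            K = inv_sqrt(Sxx) @ Sxy @ inv_sqrt(Syy)
            U, s, Vt = np.linalg.svd(K)
            Wx = inv_sqrt(Sxx) @ U
            Wy = inv_sqrt(Syy) @ Vt.T
            return s, Wx, Wy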

  2. Using MaxCompiler for High Level Synthesis of Trigger Algorithms

    CERN Document Server

    Summers, Sioni Paris; Sanders, P.

    2017-01-01

    Firmware for FPGA trigger applications at the CMS experiment is conventionally written using hardware description languages such as Verilog and VHDL. MaxCompiler is an alternative, Java based, tool for developing FPGA applications which uses a higher level of abstraction from the hardware than a hardware description language. An implementation of the jet and energy sum algorithms for the CMS Level-1 calorimeter trigger has been written using MaxCompiler to benchmark against the VHDL implementation in terms of accuracy, latency, resource usage, and code size. A Kalman Filter track fitting algorithm has been developed using MaxCompiler for a proposed CMS Level-1 track trigger for the High-Luminosity LHC upgrade. The design achieves a low resource usage, and has a latency of 187.5 ns per iteration.

  3. Identification of immiscible NAPL contaminant sources in aquifers by a modified two-level saturation based imperialist competitive algorithm

    Science.gov (United States)

    Ghafouri, H. R.; Mosharaf-Dehkordi, M.; Afzalan, B.

    2017-07-01

    A simulation-optimization model is proposed for identifying the characteristics of local immiscible NAPL contaminant sources inside aquifers. The model employs the UTCHEM 9.0 software as its simulator for solving the governing equations associated with multi-phase flow in porous media. As the optimization model, a novel two-level saturation-based Imperialist Competitive Algorithm (ICA) is proposed to estimate the parameters of the contaminant sources. The first level consists of three parallel independent ICAs and acts as a pre-conditioner for the second level, which is a single modified ICA. The ICA in the second level is modified by dividing each country into a number of provinces (smaller parts). Similar to countries in the classical ICA, these provinces are optimized by the assimilation, competition, and revolution steps of the ICA. To increase the diversity of the populations, a new approach named the knock-the-base method is proposed. The performance and accuracy of the simulation-optimization model are assessed by solving a set of two- and three-dimensional problems considering the effects of different parameters such as the grid size, rock heterogeneity and the designated monitoring networks. The obtained numerical results indicate that this simulation-optimization model provides accurate results in fewer iterations than a model employing the classical one-level ICA. Highlights: A model is proposed to identify the characteristics of immiscible NAPL contaminant sources. The contaminant is immiscible in water and multi-phase flow is simulated. The model is a multi-level saturation-based optimization algorithm based on ICA. Each answer string in the second level is divided into a set of provinces. Each ICA is modified by incorporating the new knock-the-base method.

  4. Constructing a graph of connections in clustering algorithm of complex objects

    Directory of Open Access Journals (Sweden)

    Татьяна Шатовская

    2015-05-01

    Full Text Available The article describes the results of modifying the Chameleon algorithm. The hierarchical multi-level algorithm consists of several phases: construction of the graph, coarsening, partitioning, and recovery. At each phase, various approaches and algorithms can be used. The main aim of the work is to study the clustering quality obtained for different data sets using combinations of algorithms at the different stages of the algorithm, and to improve the construction stage by optimizing the choice of k in the construction of the graph of k nearest neighbors.

  5. Exploring the level sets of quantum control landscapes

    International Nuclear Information System (INIS)

    Rothman, Adam; Ho, Tak-San; Rabitz, Herschel

    2006-01-01

    A quantum control landscape is defined by the value of a physical observable as a functional of the time-dependent control field E(t) for a given quantum-mechanical system. Level sets through this landscape are prescribed by a particular value of the target observable at the final dynamical time T, regardless of the intervening dynamics. We present a technique for exploring a landscape level set, where a scalar variable s is introduced to characterize trajectories along these level sets. The control fields E(s,t) accomplishing this exploration (i.e., that produce the same value of the target observable for a given system) are determined by solving a differential equation over s in conjunction with the time-dependent Schroedinger equation. There is full freedom to traverse a level set, and a particular trajectory is realized by making an a priori choice for a continuous function f(s,t) that appears in the differential equation for the control field. The continuous function f(s,t) can assume an arbitrary form, and thus a level set generally contains a family of controls, where each control takes the quantum system to the same final target value, but produces a distinct control mechanism. In addition, although the observable value remains invariant over the level set, other dynamical properties (e.g., the degree of robustness to control noise) are not specifically preserved and can vary greatly. Examples are presented to illustrate the continuous nature of level-set controls and their associated induced dynamical features, including continuously morphing mechanisms for population control in model quantum systems

  6. On the multi-level solution algorithm for Markov chains

    Energy Technology Data Exchange (ETDEWEB)

    Horton, G. [Univ. of Erlangen, Nuernberg (Germany)

    1996-12-31

    We discuss the recently introduced multi-level algorithm for the steady-state solution of Markov chains. The method is based on the aggregation principle, which is well established in the literature. Recursive application of the aggregation yields a multi-level method which has been shown experimentally to give results significantly faster than the methods currently in use. The algorithm can be reformulated as an algebraic multigrid scheme of Galerkin-full approximation type. The uniqueness of the scheme stems from its solution-dependent prolongation operator which permits significant computational savings in the evaluation of certain terms. This paper describes the modeling of computer systems to derive information on performance, measured typically as job throughput or component utilization, and availability, defined as the proportion of time a system is able to perform a certain function in the presence of component failures and possibly also repairs.

  7. A quantum causal discovery algorithm

    Science.gov (United States)

    Giarmatzi, Christina; Costa, Fabio

    2018-03-01

    Finding a causal model for a set of classical variables is now a well-established task—but what about the quantum equivalent? Even the notion of a quantum causal model is controversial. Here, we present a causal discovery algorithm for quantum systems. The input to the algorithm is a process matrix describing correlations between quantum events. Its output consists of different levels of information about the underlying causal model. Our algorithm determines whether the process is causally ordered by grouping the events into causally ordered non-signaling sets. It detects if all relevant common causes are included in the process, which we label Markovian, or alternatively if some causal relations are mediated through some external memory. For a Markovian process, it outputs a causal model, namely the causal relations and the corresponding mechanisms, represented as quantum states and channels. Our algorithm opens the route to more general quantum causal discovery methods.

  8. A Practical Algorithm for Reconstructing Level-1 Phylogenetic Networks

    NARCIS (Netherlands)

    K.T. Huber; L.J.J. van Iersel (Leo); S.M. Kelk (Steven); R. Suchecki

    2010-01-01

    Recently much attention has been devoted to the construction of phylogenetic networks which generalize phylogenetic trees in order to accommodate complex evolutionary processes. Here we present an efficient, practical algorithm for reconstructing level-1 phylogenetic networks - a type of

  9. A practical algorithm for reconstructing level-1 phylogenetic networks

    NARCIS (Netherlands)

    Huber, K.T.; Iersel, van L.J.J.; Kelk, S.M.; Suchecki, R.

    2011-01-01

    Recently, much attention has been devoted to the construction of phylogenetic networks which generalize phylogenetic trees in order to accommodate complex evolutionary processes. Here, we present an efficient, practical algorithm for reconstructing level-1 phylogenetic networks-a type of network

  10. A Novel Approach for Bi-Level Segmentation of Tuberculosis Bacilli Based on Meta-Heuristic Algorithms

    Directory of Open Access Journals (Sweden)

    AYAS, S.

    2018-02-01

    Full Text Available Image thresholding is the most crucial step in microscopic image analysis to distinguish the bacilli objects that cause tuberculosis disease. Therefore, several bi-level thresholding algorithms are widely used to increase the bacilli segmentation accuracy. However, the bi-level microscopic image thresholding problem has not been solved using optimization algorithms. This paper introduces a novel approach to the segmentation problem using heuristic algorithms, and presents visual and quantitative comparisons of heuristic and state-of-the-art thresholding algorithms. In this study, well-known heuristic algorithms such as the Firefly Algorithm, Particle Swarm Optimization, Cuckoo Search, and Flower Pollination are used to solve the bi-level microscopic image thresholding problem, and the results are compared with state-of-the-art thresholding algorithms such as K-Means, Fuzzy C-Means, and Fast Marching. Kapur's entropy is chosen as the entropy measure to be maximized. Experiments are performed to make comparisons in terms of evaluation metrics and execution time. The quantitative results are calculated based on ground truth segmentation. According to the visual results, the heuristic algorithms have better performance, and the quantitative results are in accord with the visual results. Furthermore, experimental time comparisons show the superiority and effectiveness of the heuristic algorithms over traditional thresholding algorithms.
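
    All of the heuristic searches in the record maximize the same objective; for reference, the sketch below evaluates Kapur's entropy for every candidate bi-level threshold of an 8-bit histogram by exhaustive search, which can serve as a correctness baseline for the metaheuristics.

        import numpy as np

        def kapur_threshold(image):
            # Exhaustive bi-level thresholding by maximising Kapur's entropy (8-bit image).
            hist = np.bincount(image.ravel(), minlength=256).astype(float)
            p = hist / hist.sum()
            best_t, best_h = 0, -np.inf
            for t in range(1, 255):
                w0, w1 = p[: t + 1].sum(), p[t + 1 :].sum()
                if w0 <= 0 or w1 <= 0:
                    continue
                p0, p1 = p[: t + 1] / w0, p[t + 1 :] / w1
                h0 = -np.sum(p0[p0 > 0] * np.log(p0[p0 > 0]))   # entropy of background class
                h1 = -np.sum(p1[p1 > 0] * np.log(p1[p1 > 0]))   # entropy of foreground class
                if h0 + h1 > best_h:
                    best_t, best_h = t, h0 + h1
            return best_t   # pixels above best_t are labelled as candidate bacilli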

  11. Level-Set Topology Optimization with Aeroelastic Constraints

    Science.gov (United States)

    Dunning, Peter D.; Stanford, Bret K.; Kim, H. Alicia

    2015-01-01

    Level-set topology optimization is used to design a wing considering skin buckling under static aeroelastic trim loading, as well as dynamic aeroelastic stability (flutter). The level-set function is defined over the entire 3D volume of a transport aircraft wing box. Therefore, the approach is not limited by any predefined structure and can explore novel configurations. The Sequential Linear Programming (SLP) level-set method is used to solve the constrained optimization problems. The proposed method is demonstrated using three problems with mass, linear buckling and flutter objective and/or constraints. A constraint aggregation method is used to handle multiple buckling constraints in the wing skins. A continuous flutter constraint formulation is used to handle difficulties arising from discontinuities in the design space caused by a switching of the critical flutter mode.

  12. Cheap contouring of costly functions: the Pilot Approximation Trajectory algorithm

    International Nuclear Information System (INIS)

    Huttunen, Janne M J; Stark, Philip B

    2012-01-01

    The Pilot Approximation Trajectory (PAT) contour algorithm can find the contour of a function accurately when it is not practical to evaluate the function on a grid dense enough to use a standard contour algorithm, for instance, when evaluating the function involves conducting a physical experiment or a computationally intensive simulation. PAT relies on an inexpensive pilot approximation to the function, such as interpolating from a sparse grid of inexact values, or solving a partial differential equation (PDE) numerically using a coarse discretization. For each level of interest, the location and ‘trajectory’ of an approximate contour of this pilot function are used to decide where to evaluate the original function to find points on its contour. Those points are joined by line segments to form the PAT approximation of the contour of the original function. Approximating a contour numerically amounts to estimating a lower level set of the function, the set of points on which the function does not exceed the contour level. The area of the symmetric difference between the true lower level set and the estimated lower level set measures the accuracy of the contour. PAT measures its own accuracy by finding an upper confidence bound for this area. In examples, PAT can estimate a contour more accurately than standard algorithms, using far fewer function evaluations than standard algorithms require. We illustrate PAT by constructing a confidence set for viscosity and thermal conductivity of a flowing gas from simulated noisy temperature measurements, a problem in which each evaluation of the function to be contoured requires solving a different set of coupled nonlinear PDEs. (paper)
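
    The core idea is easy to prototype: contour a cheap pilot approximation everywhere, then spend expensive evaluations only where the pilot indicates the contour lies. The sketch below is a loose illustration of that strategy under simplifying assumptions (1D bisection along grid rows, no confidence bound); `expensive_f` and `pilot_f` are placeholder callables, not the functions used in the paper.

```python
import numpy as np

def pat_contour_points(expensive_f, pilot_f, level, xs, ys, tol=1e-3):
    """Rough PAT-style sketch: locate the contour with a cheap pilot, then
    evaluate the costly function only where the pilot changes sign."""
    X, Y = np.meshgrid(xs, ys)
    pilot = pilot_f(X, Y) - level                    # cheap evaluation on the full grid
    points = []
    for i in range(len(ys)):
        for j in range(len(xs) - 1):
            if pilot[i, j] * pilot[i, j + 1] < 0:    # pilot contour crosses this cell edge
                a, b = xs[j], xs[j + 1]
                fa = expensive_f(a, ys[i]) - level   # costly evaluations only near the contour
                while b - a > tol:
                    m = 0.5 * (a + b)
                    fm = expensive_f(m, ys[i]) - level
                    if fa * fm <= 0:
                        b = m
                    else:
                        a, fa = m, fm
                points.append((0.5 * (a + b), ys[i]))
    return points
```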

  13. Three-Dimensional Simulation of DRIE Process Based on the Narrow Band Level Set and Monte Carlo Method

    Directory of Open Access Journals (Sweden)

    Jia-Cheng Yu

    2018-02-01

    Full Text Available A three-dimensional topography simulation of deep reactive ion etching (DRIE) is developed based on the narrow band level set method for surface evolution and the Monte Carlo method for flux distribution. The advanced level set method is implemented to simulate the time-dependent movement of the etched surface. Meanwhile, accelerated by a ray tracing algorithm, the Monte Carlo method incorporates all dominant physical and chemical mechanisms such as ion-enhanced etching, ballistic transport, ion scattering, and sidewall passivation. Modified models of charged and neutral particles are used to determine their contributions to the etching rate. Effects such as the scalloping effect and the lag effect are investigated in simulations and experiments, and quantitative analyses are conducted to measure the simulation error. Finally, this simulator can serve as an accurate prediction tool for MEMS fabrication.

  14. Quantitative characterization of metastatic disease in the spine. Part I. Semiautomated segmentation using atlas-based deformable registration and the level set method

    International Nuclear Information System (INIS)

    Hardisty, M.; Gordon, L.; Agarwal, P.; Skrinskas, T.; Whyne, C.

    2007-01-01

    Quantitative assessment of metastatic disease in bone is often considered immeasurable and, as such, patients with skeletal metastases are often excluded from clinical trials. In order to effectively quantify the impact of metastatic tumor involvement in the spine, accurate segmentation of the vertebra is required. Manual segmentation can be accurate but involves extensive and time-consuming user interaction. Potential solutions to automating segmentation of metastatically involved vertebrae are demons deformable image registration and level set methods. The purpose of this study was to develop a semiautomated method to accurately segment tumor-bearing vertebrae using the aforementioned techniques. By maintaining morphology of an atlas, the demons-level set composite algorithm was able to accurately differentiate between trans-cortical tumors and surrounding soft tissue of identical intensity. The algorithm successfully segmented both the vertebral body and trabecular centrum of tumor-involved and healthy vertebrae. This work validates our approach as equivalent in accuracy to an experienced user

  15. A synthesis/design optimization algorithm for Rankine cycle based energy systems

    International Nuclear Information System (INIS)

    Toffolo, Andrea

    2014-01-01

    The algorithm presented in this work has been developed to search for the optimal topology and design parameters of a set of Rankine cycles forming an energy system that absorbs/releases heat at different temperature levels and converts part of the absorbed heat into electricity. This algorithm can deal with several applications in the field of energy engineering: e.g., steam cycles or bottoming cycles in combined/cogenerative plants, steam networks, low temperature organic Rankine cycles. The main purpose of this algorithm is to overcome the limitations of the search space introduced by the traditional mixed-integer programming techniques, which assume that possible solutions are derived from a single superstructure embedding them all. The algorithm presented in this work is a hybrid evolutionary/traditional optimization algorithm organized in two levels. A complex original codification of the topology and the intensive design parameters of the system is managed by the upper level evolutionary algorithm according to the criteria set by the HEATSEP method, which are used for the first time to automatically synthesize a “basic” system configuration from a set of elementary thermodynamic cycles. The lower SQP (sequential quadratic programming) algorithm optimizes the objective function(s) with respect to cycle mass flow rates only, taking into account the heat transfer feasibility constraint within the undefined heat transfer section. A challenging example of application is also presented to show the capabilities of the algorithm. - Highlights: • Energy systems based on Rankine cycles are used in many applications. • A hybrid algorithm is proposed to optimize the synthesis/design of such systems. • The topology of the candidate solutions is not limited by a superstructure. • Topology is managed by the genetic operators of the upper level algorithm. • The effectiveness of the algorithm is proved in a complex test case
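
    The two-level structure can be illustrated with a toy sketch: an upper-level search proposes a discrete choice of active elementary cycles, and a lower-level SQP solve (SciPy's SLSQP here) optimizes the continuous mass flow rates for that choice. The objective, the single inequality constraint, and the random upper-level search are placeholders standing in for the paper's HEATSEP-based codification and genetic operators.

```python
import numpy as np
from scipy.optimize import minimize

def inner_sqp(config):
    """Lower level: optimize continuous mass flow rates for a fixed topology."""
    n = len(config)
    def neg_power(m):                                  # placeholder objective: maximize 'power'
        return -np.sum(config * m)
    cons = ({'type': 'ineq', 'fun': lambda m: 10.0 - m.sum()},)  # placeholder feasibility constraint
    res = minimize(neg_power, x0=np.ones(n), bounds=[(0, 5)] * n,
                   constraints=cons, method='SLSQP')
    return -res.fun, res.x

def outer_search(n_cycles=4, iters=50, seed=0):
    """Upper level: crude random search over which elementary cycles are active
    (a genetic algorithm plays this role in the actual method)."""
    rng = np.random.default_rng(seed)
    best = (-np.inf, None, None)
    for _ in range(iters):
        config = rng.integers(0, 2, n_cycles)          # 0/1 topology choice per cycle
        if config.sum() == 0:
            continue
        profit, flows = inner_sqp(config.astype(float))
        if profit > best[0]:
            best = (profit, config, flows)
    return best

print(outer_search())
```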

  16. Modified CC-LR algorithm with three diverse feature sets for motor imagery tasks classification in EEG based brain-computer interface.

    Science.gov (United States)

    Siuly; Li, Yan; Paul Wen, Peng

    2014-03-01

    Motor imagery (MI) task classification provides an important basis for designing brain-computer interface (BCI) systems. If the MI tasks are reliably distinguished through identifying typical patterns in electroencephalography (EEG) data, a motor-disabled person could communicate with a device by composing sequences of these mental states. In our earlier study, we developed a cross-correlation based logistic regression (CC-LR) algorithm for the classification of MI tasks for BCI applications, but its performance was not satisfactory. This study develops a modified version of the CC-LR algorithm exploring a suitable feature set that can improve the performance. The modified CC-LR algorithm uses the C3 electrode channel (in the international 10-20 system) as a reference channel for the cross-correlation (CC) technique and applies three diverse feature sets separately, as the input to the logistic regression (LR) classifier. The present algorithm investigates which feature set best characterizes the distribution of MI task-based EEG data. This study also provides an insight into how to select a reference channel for the CC technique with EEG signals considering the anatomical structure of the human brain. The proposed algorithm is compared with eight of the most recently reported well-known methods including the BCI III Winner algorithm. The findings of this study indicate that the modified CC-LR algorithm has potential to improve the identification performance of MI tasks in BCI systems. The results demonstrate that the proposed technique provides a classification improvement over the existing methods tested. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
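
    A schematic reading of the CC-LR pipeline is: cross-correlate each trial against a reference-channel signal, summarize each cross-correlogram with a handful of statistics, and feed those features to logistic regression. The sketch below follows that reading; the six summary statistics and the reference-channel index are illustrative assumptions, not the paper's exact feature sets.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def cc_features(trial, reference):
    """Cross-correlate one trial (channels x samples) with a reference-channel
    signal and summarize each cross-correlogram with simple statistics."""
    feats = []
    for ch in trial:
        cc = np.correlate(ch, reference, mode='full')
        feats += [cc.mean(), cc.std(), cc.max(), cc.min(),
                  np.median(cc), np.mean(np.abs(cc))]
    return np.array(feats)

def fit_cc_lr(trials, labels, ref_channel=0):
    """trials: (n_trials, n_channels, n_samples); ref_channel plays the role of C3."""
    X = np.array([cc_features(t, t[ref_channel]) for t in trials])
    return LogisticRegression(max_iter=1000).fit(X, labels)
```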

  17. Level set segmentation of bovine corpora lutea in ex situ ovarian ultrasound images

    Directory of Open Access Journals (Sweden)

    Adams Gregg P

    2008-08-01

    Full Text Available Abstract Background The objective of this study was to investigate the viability of level set image segmentation methods for the detection of corpora lutea (corpus luteum, CL) boundaries in ultrasonographic ovarian images. It was hypothesized that bovine CL boundaries could be located within 1–2 mm by a level set image segmentation methodology. Methods Level set methods embed a 2D contour in a 3D surface and evolve that surface over time according to an image-dependent speed function. A speed function suitable for segmentation of CLs in ovarian ultrasound images was developed. An initial contour was manually placed and contour evolution was allowed to proceed until the rate of change of the area was sufficiently small. The method was tested on ovarian ultrasonographic images (n = 8) obtained ex situ. An expert in ovarian ultrasound interpretation delineated CL boundaries manually to serve as a "ground truth". Accuracy of the level set segmentation algorithm was determined by comparing semi-automatically determined contours with ground truth contours using the mean absolute difference (MAD), root mean squared difference (RMSD), Hausdorff distance (HD), sensitivity, and specificity metrics. Results and discussion The mean MAD was 0.87 mm (sigma = 0.36 mm), RMSD was 1.1 mm (sigma = 0.47 mm), and HD was 3.4 mm (sigma = 2.0 mm), indicating that, on average, boundaries were accurate within 1–2 mm; however, deviations in excess of 3 mm from the ground truth were observed, indicating under- or over-expansion of the contour. Mean sensitivity and specificity were 0.814 (sigma = 0.171) and 0.990 (sigma = 0.00786), respectively, indicating that CLs were consistently undersegmented but rarely did the contour interior include pixels that were judged by the human expert not to be part of the CL. It was observed that in localities where gradient magnitudes within the CL were strong due to high contrast speckle, contour expansion stopped too early. Conclusion The
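
    The boundary-accuracy metrics quoted above (MAD, RMSD, HD) can be computed from two sampled contours with directed nearest-neighbour distances, as in the minimal sketch below; the point arrays are assumed to be contour coordinates already converted to millimetres.

```python
import numpy as np
from scipy.spatial.distance import cdist

def contour_metrics(auto_pts, truth_pts):
    """auto_pts, truth_pts: (N, 2) arrays of contour coordinates in mm."""
    d = cdist(auto_pts, truth_pts)          # pairwise Euclidean distances
    a2t = d.min(axis=1)                     # each auto point to nearest truth point
    t2a = d.min(axis=0)                     # each truth point to nearest auto point
    both = np.concatenate([a2t, t2a])
    mad = both.mean()                       # mean absolute difference
    rmsd = np.sqrt((both ** 2).mean())      # root mean squared difference
    hd = max(a2t.max(), t2a.max())          # symmetric Hausdorff distance
    return mad, rmsd, hd
```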

  18. Fast noise level estimation algorithm based on principal component analysis transform and nonlinear rectification

    Science.gov (United States)

    Xu, Shaoping; Zeng, Xiaoxia; Jiang, Yinnan; Tang, Yiling

    2018-01-01

    We proposed a noniterative principal component analysis (PCA)-based noise level estimation (NLE) algorithm that addresses the problem of estimating the noise level with a two-step scheme. First, we randomly extracted a number of raw patches from a given noisy image and took the smallest eigenvalue of the covariance matrix of the raw patches as the preliminary estimation of the noise level. Next, the final estimation was directly obtained with a nonlinear mapping (rectification) function that was trained on some representative noisy images corrupted with different known noise levels. Compared with the state-of-the-art NLE algorithms, the experimental results show that the proposed NLE algorithm can reliably infer the noise level and has robust performance over a wide range of image contents and noise levels, showing a good compromise between speed and accuracy in general.
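
    The first (PCA) step of such an estimator is compact enough to sketch: draw random patches, form the covariance of the vectorized patches, and take the square root of its smallest eigenvalue as the preliminary noise standard deviation. The trained nonlinear rectification stage is omitted here, and the patch size and sample count are arbitrary choices.

```python
import numpy as np

def pca_noise_estimate(img, patch=7, n_patches=5000, seed=0):
    """Preliminary noise sigma from the smallest eigenvalue of the patch covariance."""
    rng = np.random.default_rng(seed)
    h, w = img.shape
    ys = rng.integers(0, h - patch, n_patches)
    xs = rng.integers(0, w - patch, n_patches)
    P = np.stack([img[y:y + patch, x:x + patch].ravel() for y, x in zip(ys, xs)])
    cov = np.cov(P, rowvar=False)            # covariance of vectorized raw patches
    smallest = np.linalg.eigvalsh(cov)[0]    # eigvalsh returns eigenvalues in ascending order
    return float(np.sqrt(max(smallest, 0.0)))
```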

  19. A cluster analysis on road traffic accidents using genetic algorithms

    Science.gov (United States)

    Saharan, Sabariah; Baragona, Roberto

    2017-04-01

    The analysis of road traffic accidents is increasingly important because of the cost of accidents and public road safety. The availability of large data sets makes the study of factors that affect the frequency and severity of accidents viable. However, the data are often highly unbalanced and overlapped. We deal with the data set of road traffic accidents recorded in Christchurch, New Zealand, from 2000-2009, with a total of 26440 accidents. The data are binary, with 50 factors describing road traffic accidents at four levels of severity. We used a genetic algorithm for the analysis because, in the presence of a large unbalanced data set, standard clustering such as the k-means algorithm may not be suitable for the task. The genetic algorithm based on clustering for unknown K (GCUK) has been used to identify the factors associated with accidents of different levels of severity. The results provided us with an interesting insight into the relationship between factors and accident severity level and suggest that the two main factors that contribute to fatal accidents are "Speed greater than 60 km/h" and "Did not see other people until it was too late". A comparison with the k-means algorithm and independent component analysis is performed to validate the results.

  20. Using MaxCompiler for the high level synthesis of trigger algorithms

    International Nuclear Information System (INIS)

    Summers, S.; Rose, A.; Sanders, P.

    2017-01-01

    Firmware for FPGA trigger applications at the CMS experiment is conventionally written using hardware description languages such as Verilog and VHDL. MaxCompiler is an alternative, Java based, tool for developing FPGA applications which uses a higher level of abstraction from the hardware than a hardware description language. An implementation of the jet and energy sum algorithms for the CMS Level-1 calorimeter trigger has been written using MaxCompiler to benchmark against the VHDL implementation in terms of accuracy, latency, resource usage, and code size. A Kalman Filter track fitting algorithm has been developed using MaxCompiler for a proposed CMS Level-1 track trigger for the High-Luminosity LHC upgrade. The design achieves a low resource usage, and has a latency of 187.5 ns per iteration.

  1. Using MaxCompiler for the high level synthesis of trigger algorithms

    Science.gov (United States)

    Summers, S.; Rose, A.; Sanders, P.

    2017-02-01

    Firmware for FPGA trigger applications at the CMS experiment is conventionally written using hardware description languages such as Verilog and VHDL. MaxCompiler is an alternative, Java based, tool for developing FPGA applications which uses a higher level of abstraction from the hardware than a hardware description language. An implementation of the jet and energy sum algorithms for the CMS Level-1 calorimeter trigger has been written using MaxCompiler to benchmark against the VHDL implementation in terms of accuracy, latency, resource usage, and code size. A Kalman Filter track fitting algorithm has been developed using MaxCompiler for a proposed CMS Level-1 track trigger for the High-Luminosity LHC upgrade. The design achieves a low resource usage, and has a latency of 187.5 ns per iteration.

  2. Efficient algorithms for collaborative decision making for large scale settings

    DEFF Research Database (Denmark)

    Assent, Ira

    2011-01-01

    Collaborative decision making is a successful approach in settings where data analysis and querying can be done interactively. In large scale systems with huge data volumes or many users, collaboration is often hindered by impractical runtimes. Existing work on improving collaboration focuses on avoiding redundancy for users working on the same task. While this improves the effectiveness of the user work process, the underlying query processing engine is typically considered a "black box" and left unchanged. Research in multiple query processing, on the other hand, ignores the application ... to bring about more effective and more efficient retrieval systems that support the users' decision making process. We sketch promising research directions for more efficient algorithms for collaborative decision making, especially for large scale systems.

  3. An algorithm for computing the hull of the solution set of interval linear equations

    Czech Academy of Sciences Publication Activity Database

    Rohn, Jiří

    2011-01-01

    Roč. 435, č. 2 (2011), s. 193-201 ISSN 0024-3795 R&D Projects: GA ČR GA201/09/1957; GA ČR GC201/08/J020 Institutional research plan: CEZ:AV0Z10300504 Keywords : interval linear equations * solution set * interval hull * algorithm * absolute value inequality Subject RIV: BA - General Mathematics Impact factor: 0.974, year: 2011

  4. A comparative study of image low level feature extraction algorithms

    Directory of Open Access Journals (Sweden)

    M.M. El-gayar

    2013-07-01

    Full Text Available Feature extraction and matching is at the base of many computer vision problems, such as object recognition or structure from motion. Current methods for assessing the performance of popular image matching algorithms are presented and rely on costly descriptors for detection and matching. Specifically, the method assesses the type of images under which each of the algorithms reviewed herein performs to its maximum or highest efficiency. The efficiency is measured in terms of the number of matches found by the algorithm and the number of type I and type II errors encountered when the algorithm is tested against a specific pair of images. Current comparative studies assess the performance of the algorithms based on the results obtained in different criteria such as speed, sensitivity, occlusion, and others. This study addresses the limitations of the existing comparative tools and delivers a generalized criterion to determine beforehand the level of efficiency expected from a matching algorithm given the type of images evaluated. The algorithms and the respective images used within this work are divided into two groups: feature-based and texture-based. From this broad classification, six of the most widely used algorithms are assessed: color histogram, FAST (Features from Accelerated Segment Test), SIFT (Scale Invariant Feature Transform), PCA-SIFT (Principal Component Analysis-SIFT), F-SIFT (fast-SIFT), and SURF (Speeded Up Robust Features). The performance of the fast-SIFT (F-SIFT) feature detection method is compared for scale changes, rotation, blur, illumination changes, and affine transformations. All experiments use the repeatability measure and the number of correct matches as evaluation measurements. SIFT proves stable in most situations, although it is slow. F-SIFT is the fastest with good performance comparable to SURF, while SIFT and PCA-SIFT show their advantages under rotation and illumination changes.
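
    A matching experiment of the kind described above can be reproduced with OpenCV. The sketch below counts putative matches surviving Lowe's ratio test for a single detector; ORB is used only because it ships with the base OpenCV package, and the SIFT/SURF/F-SIFT variants compared in the paper would be swapped in at the same place. The image file names are hypothetical.

```python
import cv2

def count_good_matches(img1, img2, ratio=0.75):
    """Detect/describe with ORB and count matches passing Lowe's ratio test."""
    orb = cv2.ORB_create(nfeatures=2000)
    k1, d1 = orb.detectAndCompute(img1, None)
    k2, d2 = orb.detectAndCompute(img2, None)
    if d1 is None or d2 is None:
        return 0
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    matches = matcher.knnMatch(d1, d2, k=2)
    good = [p[0] for p in matches
            if len(p) == 2 and p[0].distance < ratio * p[1].distance]
    return len(good)

# img1 = cv2.imread('scene.png', cv2.IMREAD_GRAYSCALE)          # hypothetical test pair
# img2 = cv2.imread('scene_rotated.png', cv2.IMREAD_GRAYSCALE)
# print(count_good_matches(img1, img2))
```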

  5. A quantitative evaluation of pleural effusion on computed tomography scans using B-spline and local clustering level set.

    Science.gov (United States)

    Song, Lei; Gao, Jungang; Wang, Sheng; Hu, Huasi; Guo, Youmin

    2017-01-01

    Estimation of the pleural effusion's volume is an important clinical issue. The existing methods cannot assess it accurately when there is a large volume of liquid in the pleural cavity and/or the patient has some other disease (e.g. pneumonia). In order to help solve this issue, the objective of this study is to develop and test a novel algorithm using the B-spline and local clustering level set methods jointly, namely BLL. The BLL algorithm was applied to a dataset involving 27 pleural effusions detected on chest CT examination of 18 adult patients with the presence of free pleural effusion. Study results showed that the average volumes of pleural effusion computed using the BLL algorithm and assessed manually by the physicians were 586±339 ml and 604±352 ml, respectively. For the same patient, the volume of the pleural effusion segmented semi-automatically was 101.8%±4.6% of that segmented manually. Dice similarity was found to be 0.917±0.031. The study demonstrated the feasibility of applying the new BLL algorithm to accurately measure the volume of pleural effusion.

  6. An approach for maximizing the smallest eigenfrequency of structure vibration based on piecewise constant level set method

    Science.gov (United States)

    Zhang, Zhengfang; Chen, Weifeng

    2018-05-01

    Maximization of the smallest eigenfrequency of the linearized elasticity system with area constraint is investigated. The elasticity system is extended into a large background domain, but the void is vacuum and not filled with ersatz material. The piecewise constant level set (PCLS) method is applied to present two regions, the original material region and the void region. A quadratic PCLS function is proposed to represent the characteristic function. Consequently, the functional derivative of the smallest eigenfrequency with respect to PCLS function takes nonzero value in the original material region and zero in the void region. A penalty gradient algorithm is proposed, which initializes the whole background domain with the original material and decreases the area of original material region till the area constraint is satisfied. 2D and 3D numerical examples are presented, illustrating the validity of the proposed algorithm.

  7. Multiatlas segmentation of thoracic and abdominal anatomy with level set-based local search.

    Science.gov (United States)

    Schreibmann, Eduard; Marcus, David M; Fox, Tim

    2014-07-08

    Segmentation of organs at risk (OARs) remains one of the most time-consuming tasks in radiotherapy treatment planning. Atlas-based segmentation methods using single templates have emerged as a practical approach to automate the process for brain or head and neck anatomy, but pose significant challenges in regions where large interpatient variations are present. We show that significant changes are needed to autosegment thoracic and abdominal datasets by combining multi-atlas deformable registration with a level set-based local search. Segmentation is hierarchical, with a first stage detecting bulk organ location, and a second step adapting the segmentation to fine details present in the patient scan. The first stage is based on warping multiple presegmented templates to the new patient anatomy using a multimodality deformable registration algorithm able to cope with changes in scanning conditions and artifacts. These segmentations are compacted in a probabilistic map of organ shape using the STAPLE algorithm. Final segmentation is obtained by adjusting the probability map for each organ type, using customized combinations of delineation filters exploiting prior knowledge of organ characteristics. Validation is performed by comparing automated and manual segmentation using the Dice coefficient, measured at an average of 0.971 for the aorta, 0.869 for the trachea, 0.958 for the lungs, 0.788 for the heart, 0.912 for the liver, 0.884 for the kidneys, 0.888 for the vertebrae, 0.863 for the spleen, and 0.740 for the spinal cord. Accurate atlas segmentation for abdominal and thoracic regions can be achieved with the use of a multi-atlas and per-structure refinement strategy. To improve clinical workflow and efficiency, the algorithm was embedded in a software service, applying the algorithm automatically on acquired scans without any user interaction.
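
    The Dice coefficients reported above compare binary masks; for reference, a minimal implementation over NumPy label volumes is sketched below. The label constant and array names are hypothetical.

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice similarity coefficient between two boolean masks of equal shape."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    inter = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    return 2.0 * inter / denom if denom else 1.0

# auto_seg, manual_seg = ...  # hypothetical label volumes
# print(dice(auto_seg == LIVER_LABEL, manual_seg == LIVER_LABEL))  # LIVER_LABEL is a placeholder
```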

  8. A Level-2 trigger algorithm for the identification of muons in the ATLAS Muon Spectrometer

    CERN Document Server

    Di Mattia, A; Dos Anjos, A; Baines, J T M; Bee, C P; Biglietti, M; Bogaerts, J A C; Boisvert, V; Bosman, M; Caron, B; Casado, M P; Cataldi, G; Cavalli, D; Cervetto, M; Comune, G; Conde-Muíño, P; De Santo, A; Díaz-Gómez, M; Dosil, M; Ellis, Nick; Emeliyanov, D; Epp, B; Falciano, S; Farilla, A; George, S; Ghete, V M; González, S; Grothe, M; Kabana, S; Khomich, A; Kilvington, G; Konstantinidis, N P; Kootz, A; Lowe, A; Luminari, L; Maeno, T; Masik, J; Meessen, C; Mello, A G; Merino, G; Moore, R; Morettini, P; Negri, A; Nikitin, N V; Nisati, A; Padilla, C; Panikashvili, N; Parodi, F; Pasqualucci, E; Pérez-Réale, V; Pinfold, J L; Pinto, P; Qian, Z; Resconi, S; Rosati, S; Sánchez, C; Santamarina-Rios, C; Scannicchio, D A; Schiavi, C; Segura, E; De Seixas, J M; Sivoklokov, S Yu; Soluk, R A; Stefanidis, E; Sushkov, S S; Sutton, M; Tapprogge, Stefan; Thomas, E; Touchard, F; Venda-Pinto, B; Vercesi, V; Werner, P; Wheeler, S; Wickens, F J; Wiedenmann, W; Wielers, M; Zobernig, G; Computing In High Energy Physics

    2005-01-01

    The ATLAS Level-2 trigger provides a software-based event selection after the initial Level-1 hardware trigger. For the muon events, the selection is decomposed into a number of broad steps: first, the Muon Spectrometer data are processed to give physics quantities associated with the muon track (standalone feature extraction); then, other detector data are used to refine the extracted features. The “µFast” algorithm performs the standalone feature extraction, providing a first reduction of the muon event rate from Level-1. It confirms muon track candidates with a precise measurement of the muon momentum. The algorithm is designed to be both conceptually simple and fast so as to be readily implemented in the demanding online environment in which the Level-2 selection code will run. Nevertheless, its physics performance approaches, in some cases, that of the offline reconstruction algorithms. This paper describes the implemented algorithm together with the software techniques employed to increase its timing p...

  9. The Algorithm Theoretical Basis Document for Level 1A Processing

    Science.gov (United States)

    Jester, Peggy L.; Hancock, David W., III

    2012-01-01

    The first process of the Geoscience Laser Altimeter System (GLAS) Science Algorithm Software converts the Level 0 data into the Level 1A Data Products. The Level 1A Data Products are the time ordered instrument data converted from counts to engineering units. This document defines the equations that convert the raw instrument data into engineering units. Required scale factors, bias values, and coefficients are defined in this document. Additionally, required quality assurance and browse products are defined in this document.
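
    At its core, the count-to-engineering-unit conversion described here is an affine (or low-order polynomial) transform with instrument-specific coefficients. The generic sketch below illustrates the form of such a conversion; the scale factor, bias, and polynomial coefficients are placeholders, not actual GLAS calibration constants.

```python
import numpy as np

def counts_to_engineering(counts, scale=1.0, bias=0.0, coeffs=None):
    """Convert raw instrument counts to engineering units.

    scale/bias give the simple affine case; coeffs (lowest order first)
    optionally applies a polynomial calibration instead.
    """
    counts = np.asarray(counts, dtype=float)
    if coeffs is not None:
        return np.polynomial.polynomial.polyval(counts, coeffs)
    return scale * counts + bias

# placeholder values, not real GLAS calibration constants
print(counts_to_engineering([1023, 2047], scale=0.0625, bias=-12.5))
```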

  10. Effect of a uniform magnetic field on dielectric two-phase bubbly flows using the level set method

    International Nuclear Information System (INIS)

    Ansari, M.R.; Hadidi, A.; Nimvari, M.E.

    2012-01-01

    In this study, the behavior of a single bubble in a dielectric viscous fluid under a uniform magnetic field has been simulated numerically using the Level Set method in two-phase bubbly flow. The two-phase bubbly flow was considered to be laminar and homogeneous. Deformation of the bubble was considered to be due to buoyancy and magnetic forces induced by the externally applied magnetic field. A computer code was developed to solve the problem using the flow field, the interface of two phases, and the magnetic field. The Finite Volume method was applied using the SIMPLE algorithm to discretize the governing equations. Using this algorithm enables us to calculate the pressure parameter, which has been eliminated by previous researchers because of the complexity of the two-phase flow. The finite difference method was used to solve the magnetic field equation. The results outlined in the present study agree well with the existing experimental data and numerical results. These results show that the magnetic field affects and controls the shape, size, velocity, and location of the bubble. - Highlights: ► A bubble behavior was simulated numerically. ► A single bubble behavior was considered in a dielectric viscous fluid. ► A uniform magnetic field is used to study a bubble behavior. ► Deformation of the bubble was considered using the Level Set method. ► The magnetic field affects the shape, size, velocity, and location of the bubble.

  11. A computerized traffic control algorithm to determine optimal traffic signal settings. Ph.D. Thesis - Toledo Univ.

    Science.gov (United States)

    Seldner, K.

    1977-01-01

    An algorithm was developed to optimally control the traffic signals at each intersection using a discrete time traffic model applicable to heavy or peak traffic. Off line optimization procedures were applied to compute the cycle splits required to minimize the lengths of the vehicle queues and delay at each intersection. The method was applied to an extensive traffic network in Toledo, Ohio. Results obtained with the derived optimal settings are compared with the control settings presently in use.

  12. Utility of an Algorithm to Increase the Accuracy of Medication History in an Obstetrical Setting.

    Science.gov (United States)

    Corbel, Aline; Baud, David; Chaouch, Aziz; Beney, Johnny; Csajka, Chantal; Panchaud, Alice

    2016-01-01

    In an obstetrical setting, inaccurate medication histories at hospital admission may result in failure to identify potentially harmful treatments for patients and/or their fetus(es). This prospective study was conducted to assess average concordance rates between (1) a medication list obtained with a one-page structured medication history algorithm developed for the obstetrical setting and (2) the medication list reported in medical records and obtained by open-ended questions based on standard procedures. Both lists were converted into concordance rates using a best possible medication history approach as the reference (information obtained by patients, prescribers and community pharmacists' interviews). The algorithm-based method obtained a higher average concordance rate than the standard method, with respectively 90.2% [CI95% 85.8-94.3] versus 24.6% [CI95% 15.3-34.4] concordance rates. The algorithm-based method thus yielded a markedly more accurate medication history in our obstetric population, without using substantial resources. Its implementation is an effective first step to the medication reconciliation process, which has been recognized as a very important component of patients' drug safety.

  13. Development of fuzzy algorithm with learning function for nuclear steam generator level control

    International Nuclear Information System (INIS)

    Park, Gee Yong; Seong, Poong Hyun

    1993-01-01

    A fuzzy algorithm with learning function is applied to the steam generator level control of nuclear power plant. This algorithm can make its rule base and membership functions suited for steam generator level control by use of the data obtained from the control actions of a skilled operator or of other controllers (i.e., PID controller). The rule base of fuzzy controller with learning function is divided into two parts. One part of the rule base is provided to level control of steam generator at low power level (0 % - 30 % of full power) and the other to level control at high power level (30 % - 100 % of full power). Response time of steam generator level control at low power range with this rule base is shown to be shorter than that of fuzzy controller with direct inference. (Author)

  14. Planar graphs theory and algorithms

    CERN Document Server

    Nishizeki, T

    1988-01-01

    Collected in this volume are most of the important theorems and algorithms currently known for planar graphs, together with constructive proofs for the theorems. Many of the algorithms are written in Pidgin PASCAL, and are the best-known ones; the complexities are linear or O(n log n). The first two chapters provide the foundations of graph theoretic notions and algorithmic techniques. The remaining chapters discuss the topics of planarity testing, embedding, drawing, vertex- or edge-coloring, maximum independent set, subgraph listing, planar separator theorem, Hamiltonian cycles, and single- or multicommodity flows. Suitable for a course on algorithms, graph theory, or planar graphs, the volume will also be useful for computer scientists and graph theorists at the research level. An extensive reference section is included.

  15. A level set method for multiple sclerosis lesion segmentation.

    Science.gov (United States)

    Zhao, Yue; Guo, Shuxu; Luo, Min; Shi, Xue; Bilello, Michel; Zhang, Shaoxiang; Li, Chunming

    2018-06-01

    In this paper, we present a level set method for multiple sclerosis (MS) lesion segmentation from FLAIR images in the presence of intensity inhomogeneities. We use a three-phase level set formulation of segmentation and bias field estimation to segment MS lesions and normal tissue region (including GM and WM) and CSF and the background from FLAIR images. To save computational load, we derive a two-phase formulation from the original multi-phase level set formulation to segment the MS lesions and normal tissue regions. The derived method inherits the desirable ability to precisely locate object boundaries of the original level set method, which simultaneously performs segmentation and estimation of the bias field to deal with intensity inhomogeneity. Experimental results demonstrate the advantages of our method over other state-of-the-art methods in terms of segmentation accuracy. Copyright © 2017 Elsevier Inc. All rights reserved.

  16. A parametric level-set method for partially discrete tomography

    NARCIS (Netherlands)

    A. Kadu (Ajinkya); T. van Leeuwen (Tristan); K.J. Batenburg (Joost)

    2017-01-01

    This paper introduces a parametric level-set method for tomographic reconstruction of partially discrete images. Such images consist of a continuously varying background and an anomaly with a constant (known) grey-value. We express the geometry of the anomaly using a level-set function,

  17. Optimization-Based Image Segmentation by Genetic Algorithms

    Directory of Open Access Journals (Sweden)

    Rosenberger C

    2008-01-01

    Full Text Available Many works in the literature focus on the definition of evaluation metrics and criteria that make it possible to quantify the performance of an image processing algorithm. These evaluation criteria can be used to define new image processing algorithms by optimizing them. In this paper, we propose a general scheme to segment images by a genetic algorithm. The developed method uses an evaluation criterion which quantifies the quality of an image segmentation result. The proposed segmentation method can integrate a local ground truth when it is available in order to set the desired level of precision of the final result. A genetic algorithm is then used in order to determine the best combination of information extracted by the selected criterion. We then show that this approach can be applied either to gray-level or multicomponent images, in a supervised or an unsupervised context. Lastly, we show the efficiency of the proposed method through experimental results on several gray-level and multicomponent images.

  18. Optimization-Based Image Segmentation by Genetic Algorithms

    Directory of Open Access Journals (Sweden)

    H. Laurent

    2008-05-01

    Full Text Available Many works in the literature focus on the definition of evaluation metrics and criteria that make it possible to quantify the performance of an image processing algorithm. These evaluation criteria can be used to define new image processing algorithms by optimizing them. In this paper, we propose a general scheme to segment images by a genetic algorithm. The developed method uses an evaluation criterion which quantifies the quality of an image segmentation result. The proposed segmentation method can integrate a local ground truth when it is available in order to set the desired level of precision of the final result. A genetic algorithm is then used in order to determine the best combination of information extracted by the selected criterion. We then show that this approach can be applied either to gray-level or multicomponent images, in a supervised or an unsupervised context. Lastly, we show the efficiency of the proposed method through experimental results on several gray-level and multicomponent images.

  19. A Novel Control Algorithm Expressions Set for not Negligible Resistive Parameters PM Brushless AC Motors

    Directory of Open Access Journals (Sweden)

    Renato RIZZO

    2012-08-01

    Full Text Available This paper deals with Permanent Magnet Brushless Motors. In particular, a new set of control algorithm expressions is proposed that takes into account the resistive parameters of the motor, unlike simplified models of this type of motor in which these parameters are usually neglected. The control is set up and an analysis of its performance is reported; the new expressions are validated with reference to a particularly compact motor prototype intended for application in tram propulsion drives. The results are presented in the last part of the paper.

  20. Multi-Threaded Algorithms for GPGPU in the ATLAS High Level Trigger

    Science.gov (United States)

    Conde Muíño, P.; ATLAS Collaboration

    2017-10-01

    General purpose Graphics Processor Units (GPGPU) are being evaluated for possible future inclusion in an upgraded ATLAS High Level Trigger farm. We have developed a demonstrator including GPGPU implementations of Inner Detector and Muon tracking and Calorimeter clustering within the ATLAS software framework. ATLAS is a general purpose particle physics experiment located on the LHC collider at CERN. The ATLAS Trigger system consists of two levels, with Level-1 implemented in hardware and the High Level Trigger implemented in software running on a farm of commodity CPU. The High Level Trigger reduces the trigger rate from the 100 kHz Level-1 acceptance rate to 1.5 kHz for recording, requiring an average per-event processing time of ∼ 250 ms for this task. The selection in the high level trigger is based on reconstructing tracks in the Inner Detector and Muon Spectrometer and clusters of energy deposited in the Calorimeter. Performing this reconstruction within the available farm resources presents a significant challenge that will increase significantly with future LHC upgrades. During the LHC data taking period starting in 2021, luminosity will reach up to three times the original design value. Luminosity will increase further to 7.5 times the design value in 2026 following LHC and ATLAS upgrades. Corresponding improvements in the speed of the reconstruction code will be needed to provide the required trigger selection power within affordable computing resources. Key factors determining the potential benefit of including GPGPU as part of the HLT processor farm are: the relative speed of the CPU and GPGPU algorithm implementations; the relative execution times of the GPGPU algorithms and serial code remaining on the CPU; the number of GPGPU required, and the relative financial cost of the selected GPGPU. We give a brief overview of the algorithms implemented and present new measurements that compare the performance of various configurations exploiting GPGPU cards.

  1. A Multiagent Evolutionary Algorithm for the Resource-Constrained Project Portfolio Selection and Scheduling Problem

    Directory of Open Access Journals (Sweden)

    Yongyi Shou

    2014-01-01

    Full Text Available A multiagent evolutionary algorithm is proposed to solve the resource-constrained project portfolio selection and scheduling problem. The proposed algorithm has a dual level structure. In the upper level a set of agents make decisions to select appropriate project portfolios. Each agent selects its project portfolio independently. The neighborhood competition operator and self-learning operator are designed to improve the agent’s energy, that is, the portfolio profit. In the lower level the selected projects are scheduled simultaneously and completion times are computed to estimate the expected portfolio profit. A priority rule-based heuristic is used by each agent to solve the multiproject scheduling problem. A set of instances were generated systematically from the widely used Patterson set. Computational experiments confirmed that the proposed evolutionary algorithm is effective for the resource-constrained project portfolio selection and scheduling problem.

  2. Procedural Generation of Levels with Controllable Difficulty for a Platform Game Using a Genetic Algorithm

    OpenAIRE

    Classon, Johan; Andersson, Viktor

    2016-01-01

    This thesis describes the implementation and evaluation of a genetic algorithm (GA) for procedurally generating levels with controllable difficulty for a motion-based 2D platform game. Manually creating content can be time-consuming, and it may be desirable to automate this process with an algorithm, using Procedural Content Generation (PCG). An algorithm was implemented and then refined with an iterative method by conducting user tests. The resulting algorithm is considered a success and sho...

  3. Conserved PCR primer set designing for closely-related species to complete mitochondrial genome sequencing using a sliding window-based PSO algorithm.

    Directory of Open Access Journals (Sweden)

    Cheng-Hong Yang

    Full Text Available BACKGROUND: Complete mitochondrial (mt) genome sequencing is becoming increasingly common for phylogenetic reconstruction and as a model for genome evolution. For long-template sequencing, i.e., of the entire mtDNA, it is essential to design primers for Polymerase Chain Reaction (PCR) amplicons that partly overlap each other. The chromosome walking strategy presented here provides this overlapping design, compensates for unreliable sequencing data at the 5' end, and enables effective sequencing. However, current algorithms and tools are mostly focused on primer design for a local region of the genomic sequence. Accordingly, it is still challenging to provide primer sets for the entire mtDNA. METHODOLOGY/PRINCIPAL FINDINGS: The purpose of this study is to develop an integrated primer design algorithm for the entire mt genome in general, and for common primer sets for closely-related species in particular. We introduce ClustalW to generate the multiple sequence alignment needed to find the conserved sequences in closely-related species. These conserved sequences are suitable for designing the common primers for the entire mtDNA. Using the heuristic particle swarm optimization (PSO) algorithm, all the designed primers were computationally validated to fit the common primer design constraints, such as melting temperature, primer length and GC content, PCR product length, secondary structure, specificity, and terminal limitation. The overlap requirement for PCR amplicons across the entire mtDNA is satisfied by defining the overlapping region with a sliding window technique. Finally, primer sets were designed within the overlapping region. Primer sets for the entire mtDNA were successfully demonstrated in the example of two closely-related fish species. The pseudo code for the primer design algorithm is provided. CONCLUSIONS/SIGNIFICANCE: In conclusion, it can be said that our proposed sliding window-based PSO
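
    The per-primer constraints mentioned above (length, GC content, melting temperature) are straightforward to check in isolation. The sketch below uses the basic Wallace rule for Tm and illustrative thresholds; it is not the scoring actually used inside the sliding window-based PSO, and the example primer is hypothetical.

```python
def primer_ok(primer, len_range=(18, 26), gc_range=(0.40, 0.60), tm_range=(50.0, 62.0)):
    """Check simple primer-design constraints: length, GC content and Wallace-rule Tm."""
    primer = primer.upper()
    n = len(primer)
    gc = (primer.count('G') + primer.count('C')) / n
    tm = 2 * (primer.count('A') + primer.count('T')) + 4 * (primer.count('G') + primer.count('C'))
    return (len_range[0] <= n <= len_range[1]
            and gc_range[0] <= gc <= gc_range[1]
            and tm_range[0] <= tm <= tm_range[1])

print(primer_ok("ATGCGTACGTTAGCCTAGGA"))   # hypothetical candidate primer
```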

  4. The Aquarius Level 2 Algorithm

    Science.gov (United States)

    Meissner, T.; Wentz, F. J.; Hilburn, K. A.; Lagerloef, G. S.; Le Vine, D. M.

    2012-12-01

    The Aquarius L-band radiometer/scatterometer system is designed to provide monthly salinity maps at 150 km spatial scale to an accuracy of 0.2 psu. The sensor was launched on June 10, 2011, aboard the Argentine CONAE SAC-D spacecraft. The L-band radiometers and the scatterometer have been taking science data observations since August 25, 2011. This presentation discusses the current state of the Aquarius Level processing algorithm, which transforms radiometer counts ultimately into sea surface salinity (SSS). We focus on several topics that we have investigated since launch: 1. Updated Pointing A detailed check of the Aquarius pointing angles was performed, which consists in making adjustments of the two pointing angles, azimuth angle and off-nadir angle, for each horn. It has been found that the necessary adjustments for all 3 horns can be explained by a single offset for the antenna pointing if we introduce a constant offset in the roll angle by - 0.51 deg and the pitch angle by + 0.16 deg. 2. Antenna Patterns and Instrument Calibration In March 2012 JPL has produced a set of new antenna patterns using the GRASP software. Compared with the various pre-launch patterns those new patterns lead to an increase in the spillover coefficient by about 1%. We discuss its impact on several components of the Level 2 processing: the antenna pattern correction (APC), the correction for intrusion of galactic and solar radiation that is reflected from the ocean surface into the Aquarius field of view, and the correction of contamination from land surface radiation entering into the sidelobes. We show that the new antenna patterns result in a consistent calibration of all 3 Stokes parameters, which can be best demonstrated during spacecraft pitch maneuvers. 3. Cross Polarization Couplings of the 3rd Stokes Parameter Using the APC values for the cross polarization coupling of the 3rd Stokes parameter into the 1st and 2nd Stokes parameter lead to a spurious image of the 3rd Stokes

  5. Algorithmic Approach to Abstracting Linear Systems by Timed Automata

    DEFF Research Database (Denmark)

    Sloth, Christoffer; Wisniewski, Rafael

    2011-01-01

    This paper proposes an LMI-based algorithm for abstracting dynamical systems by timed automata, which enables automatic formal verification of linear systems. The proposed abstraction is based on partitioning the state space of the system using positive invariant sets, generated by Lyapunov functions. This partitioning ensures that the vector field of the dynamical system is transversal to all facets of the cells, which induces some desirable properties of the abstraction. The algorithm is based on identifying intersections of level sets of quadratic Lyapunov functions, and determining

  6. Trilateral market coupling. Algorithm appendix

    International Nuclear Information System (INIS)

    2006-03-01

    each local market. The Market Coupling algorithm provides as an output for each market: The set of accepted Block Orders; The Net Position for each Settlement Period of the following day; and The price (MCP) for each Settlement Period of the following day. The results of the Market Coupling algorithm are consistent with a number of 'High Level Properties'. The High Level Properties can be divided into two subsets: Market Coupling High Level Properties (constraints that the Market Results fulfill for each Settlement Period), and Exchanges High Level Properties (constraints that the Market Results must fulfill for each Settlement Period. They reflect the requirements of individual participants trading on the exchanges). Using the ATCs and NECs, the Market Coupling algorithm can determine for each Settlement Period the Price and Net Position of each market. A NEC is built for a given set of accepted Block Orders (Winning Subset). When a set of NECs is used to determine the prices and Net Positions of each market, the set of prices returned for each market may very well not be compatible with this assumed Winning Subset. The Winning Subset needs to be updated and the calculations run again with the derived new NEC. This procedure must be repeated until a stable solution is found. As a consequence, the Market Coupling algorithm involves iterations between two modules: The Coordination Module which is in charge of the centralized computations; The Block Selector of each exchange which performs the decentralized computations. The iterative nature of the algorithm derives from the treatment of Block Orders. The data flows and calculations of the iterative algorithm are described in the rest of the document

  7. Trilateral market coupling. Algorithm appendix

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2006-03-15

    participants in each local market. The Market Coupling algorithm provides as an output for each market: The set of accepted Block Orders; The Net Position for each Settlement Period of the following day; and The price (MCP) for each Settlement Period of the following day. The results of the Market Coupling algorithm are consistent with a number of 'High Level Properties'. The High Level Properties can be divided into two subsets: Market Coupling High Level Properties (constraints that the Market Results fulfill for each Settlement Period), and Exchanges High Level Properties (constraints that the Market Results must fulfill for each Settlement Period. They reflect the requirements of individual participants trading on the exchanges). Using the ATCs and NECs, the Market Coupling algorithm can determine for each Settlement Period the Price and Net Position of each market. A NEC is built for a given set of accepted Block Orders (Winning Subset). When a set of NECs is used to determine the prices and Net Positions of each market, the set of prices returned for each market may very well not be compatible with this assumed Winning Subset. The Winning Subset needs to be updated and the calculations run again with the derived new NEC. This procedure must be repeated until a stable solution is found. As a consequence, the Market Coupling algorithm involves iterations between two modules: The Coordination Module which is in charge of the centralized computations; The Block Selector of each exchange which performs the decentralized computations. The iterative nature of the algorithm derives from the treatment of Block Orders. The data flows and calculations of the iterative algorithm are described in the rest of the document.

  8. Application of a fuzzy control algorithm with improved learning speed to nuclear steam generator level control

    International Nuclear Information System (INIS)

    Park, Gee Yong; Seong, Poong Hyun

    1994-01-01

    In order to reduce the burden of trial-and-error tuning required to obtain the best control performance from a conventional fuzzy control algorithm, a fuzzy control algorithm with a learning function is investigated in this work. This fuzzy control algorithm can build its rule base and tune the membership functions automatically by means of a learning function that uses data from the control actions of the plant operator or of other controllers. The learning process in the fuzzy control algorithm finds the optimal values of the parameters, which consist of the membership functions and the rule base, by the gradient descent method. The learning speed of gradient descent is significantly improved in this work with the addition of a modified momentum term. This control algorithm is applied to steam generator level control in computer simulations. The simulation results confirm the good performance of this control algorithm for level control and show that the fuzzy learning algorithm generalizes the relation between inputs and outputs and also has excellent disturbance rejection capability.
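
    The learning mechanism described here, gradient descent accelerated by a momentum term for tuning the membership-function and rule-base parameters, reduces to the familiar update rule sketched below. The gradient callable is a placeholder for the derivative of whatever error signal the fuzzy controller's output is compared against.

```python
import numpy as np

def momentum_descent(grad_fn, theta0, lr=0.05, beta=0.9, steps=200):
    """Gradient descent with momentum: v <- beta*v - lr*grad; theta <- theta + v."""
    theta = np.asarray(theta0, dtype=float)
    v = np.zeros_like(theta)
    for _ in range(steps):
        v = beta * v - lr * grad_fn(theta)
        theta = theta + v
    return theta

# Toy usage: tune two parameters to minimize a quadratic error surface.
grad = lambda th: 2 * (th - np.array([1.5, -0.5]))   # placeholder gradient of the controller error
print(momentum_descent(grad, theta0=[0.0, 0.0]))
```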

  9. Ant-Based Phylogenetic Reconstruction (ABPR: A new distance algorithm for phylogenetic estimation based on ant colony optimization

    Directory of Open Access Journals (Sweden)

    Karla Vittori

    2008-12-01

    Full Text Available We propose a new distance algorithm for phylogenetic estimation based on Ant Colony Optimization (ACO), named Ant-Based Phylogenetic Reconstruction (ABPR). ABPR joins two taxa iteratively based on evolutionary distance among sequences, while also accounting for the quality of the phylogenetic tree built according to the total length of the tree. Similar to optimization algorithms for phylogenetic estimation, the algorithm allows exploration of a larger set of nearly optimal solutions. We applied the algorithm to four empirical data sets of mitochondrial DNA ranging from 12 to 186 sequences, and from 898 to 16,608 base pairs, and covering taxonomic levels from populations to orders. We show that ABPR performs better than the commonly used Neighbor-Joining algorithm, except when sequences are too closely related (e.g., population-level sequences). The phylogenetic relationships recovered at and above species level by ABPR agree with conventional views. However, like other algorithms of phylogenetic estimation, the proposed algorithm failed to recover expected relationships when distances are too similar or when rates of evolution are very variable, leading to the problem of long-branch attraction. ABPR, as well as other ACO-based algorithms, is emerging as a fast and accurate alternative method of phylogenetic estimation for large data sets.

  10. Fast geometric algorithms

    International Nuclear Information System (INIS)

    Noga, M.T.

    1984-01-01

    This thesis addresses a number of important problems that fall within the framework of the new discipline of Computational Geometry. The list of topics covered includes sorting and selection, convex hull algorithms, the L1 hull, determination of the minimum encasing rectangle of a set of points, the Euclidean and L1 diameter of a set of points, the metric traveling salesman problem, and finding the superrange of star-shaped and monotone polygons. The main theme of all the work was to develop a set of very fast state-of-the-art algorithms that supersede any rivals in terms of speed and ease of implementation. In some cases existing algorithms were refined; for others new techniques were developed that add to the present database of fast adaptive geometric algorithms. What emerges is a collection of techniques that is successful at merging modern tools developed in analysis of algorithms with those of classical geometry.

  11. Classification of Non-Small Cell Lung Cancer Using Significance Analysis of Microarray-Gene Set Reduction Algorithm

    Directory of Open Access Journals (Sweden)

    Lei Zhang

    2016-01-01

    Full Text Available Among non-small cell lung cancer (NSCLC), adenocarcinoma (AC) and squamous cell carcinoma (SCC) are two major histology subtypes, accounting for roughly 40% and 30% of all lung cancer cases, respectively. Since AC and SCC differ in their cell of origin, location within the lung, and growth pattern, they are considered as distinct diseases. Gene expression signatures have been demonstrated to be an effective tool for distinguishing AC and SCC. Gene set analysis is regarded as irrelevant to the identification of gene expression signatures. Nevertheless, we found that one specific gene set analysis method, significance analysis of microarray-gene set reduction (SAMGSR), can be adopted directly to select relevant features and to construct gene expression signatures. In this study, we applied SAMGSR to a NSCLC gene expression dataset. When compared with several novel feature selection algorithms, for example, LASSO, SAMGSR has equivalent or better performance in terms of predictive ability and model parsimony. Therefore, SAMGSR is a feature selection algorithm, indeed. Additionally, we applied SAMGSR to AC and SCC subtypes separately to discriminate their respective stages, that is, stage II versus stage I. Few overlaps between these two resulting gene signatures illustrate that AC and SCC are technically distinct diseases. Therefore, stratified analyses on subtypes are recommended when diagnostic or prognostic signatures of these two NSCLC subtypes are constructed.

  12. Discretisation Schemes for Level Sets of Planar Gaussian Fields

    Science.gov (United States)

    Beliaev, D.; Muirhead, S.

    2018-01-01

    Smooth random Gaussian functions play an important role in mathematical physics, a main example being the random plane wave model conjectured by Berry to give a universal description of high-energy eigenfunctions of the Laplacian on generic compact manifolds. Our work is motivated by questions about the geometry of such random functions, in particular relating to the structure of their nodal and level sets. We study four discretisation schemes that extract information about level sets of planar Gaussian fields. Each scheme recovers information up to a different level of precision, and each requires a maximum mesh-size in order to be valid with high probability. The first two schemes are generalisations and enhancements of similar schemes that have appeared in the literature (Beffara and Gayet in Publ Math IHES, 2017. https://doi.org/10.1007/s10240-017-0093-0; Mischaikow and Wanner in Ann Appl Probab 17:980-1018, 2007); these give complete topological information about the level sets on either a local or global scale. As an application, we improve the results in Beffara and Gayet (2017) on Russo-Seymour-Welsh estimates for the nodal set of positively-correlated planar Gaussian fields. The third and fourth schemes are, to the best of our knowledge, completely new. The third scheme is specific to the nodal set of the random plane wave, and provides global topological information about the nodal set up to `visible ambiguities'. The fourth scheme gives a way to approximate the mean number of excursion domains of planar Gaussian fields.

  13. Automatic optimal filament segmentation with sub-pixel accuracy using generalized linear models and B-spline level-sets.

    Science.gov (United States)

    Xiao, Xun; Geyer, Veikko F; Bowne-Anderson, Hugo; Howard, Jonathon; Sbalzarini, Ivo F

    2016-08-01

    Biological filaments, such as actin filaments, microtubules, and cilia, are often imaged using different light-microscopy techniques. Reconstructing the filament curve from the acquired images constitutes the filament segmentation problem. Since filaments have lower dimensionality than the image itself, there is an inherent trade-off between tracing the filament with sub-pixel accuracy and avoiding noise artifacts. Here, we present a globally optimal filament segmentation method based on B-spline vector level-sets and a generalized linear model for the pixel intensity statistics. We show that the resulting optimization problem is convex and can hence be solved with global optimality. We introduce a simple and efficient algorithm to compute such optimal filament segmentations, and provide an open-source implementation as an ImageJ/Fiji plugin. We further derive an information-theoretic lower bound on the filament segmentation error, quantifying how well an algorithm could possibly do given the information in the image. We show that our algorithm asymptotically reaches this bound in the spline coefficients. We validate our method in comprehensive benchmarks, compare with other methods, and show applications from fluorescence, phase-contrast, and dark-field microscopy. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.

  14. Enhancements to the CALIOP Aerosol Subtyping and Lidar Ratio Selection Algorithms for Level II Version 4

    Science.gov (United States)

    Omar, A. H.; Tackett, J. L.; Vaughan, M. A.; Kar, J.; Trepte, C. R.; Winker, D. M.

    2016-12-01

    This presentation describes several enhancements planned for the version 4 aerosol subtyping and lidar ratio selection algorithms of the Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP) instrument. The CALIOP subtyping algorithm determines the most likely aerosol type from CALIOP measurements (attenuated backscatter, estimated particulate depolarization ratios δe, layer altitude) and surface type. The aerosol type, so determined, is associated with a lidar ratio (LR) from a discrete set of values. Some of these lidar ratios have been updated in the version 4 algorithms. In particular, the dust and polluted dust lidar ratios will be adjusted to reflect the latest measurements and model studies of these types. Version 4 eliminates the confusion between smoke and clean marine aerosols seen in version 3 by modifications to the elevated layer flag definitions used to identify smoke aerosols over the ocean. In the subtyping algorithms, pure dust is determined by high estimated particulate depolarization ratios [δe > 0.20]. Mixtures of dust and other aerosol types are determined by intermediate values of the estimated depolarization ratio [0.075 < δe < 0.20], previously limited to mixtures of dust and smoke, the so-called polluted dust aerosol type. To differentiate between mixtures of dust and smoke, and dust and marine aerosols, a new aerosol type will be added in the version 4 data products. In the revised classification algorithms, polluted dust will still be defined as dust + smoke/pollution, but in the marine boundary layer instances of moderate depolarization will be typed as dusty marine aerosols with a lower lidar ratio than polluted dust. The dusty marine type introduced in version 4 is modeled as a mixture of dust + marine aerosol. To account for fringes, the version 4 Level 2 algorithms implement the Subtype Coalescence Algorithm for AeRosol Fringes (SCAARF) routine to detect and classify fringes of aerosol plumes that are detected at 20 km or 80 km horizontal resolution at the plume base. These ...
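
    The threshold logic summarized above (δe > 0.20 for pure dust, 0.075 < δe < 0.20 for dust mixtures, with the marine boundary layer deciding between polluted dust and the new dusty marine type) can be sketched as a simple decision rule. The Python fragment below is an assumption-laden simplification for illustration only, not the operational CALIOP version 4 code; the function name and the boundary-layer flag are invented here:

```python
# Illustrative sketch of a depolarization-based aerosol typing rule.
# Thresholds follow the abstract (delta_e > 0.20 -> pure dust,
# 0.075 < delta_e <= 0.20 -> a dust mixture); the function name and the
# in_marine_boundary_layer flag are hypothetical simplifications.
def classify_aerosol(delta_e, in_marine_boundary_layer):
    if delta_e > 0.20:
        return "dust"
    if 0.075 < delta_e <= 0.20:
        # Version-4-style split of the moderate-depolarization mixtures.
        return "dusty marine" if in_marine_boundary_layer else "polluted dust"
    return "non-dust (further typing needed)"

print(classify_aerosol(0.25, False))   # dust
print(classify_aerosol(0.12, True))    # dusty marine
print(classify_aerosol(0.12, False))   # polluted dust
```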

  15. Setting-level influences on implementation of the responsive classroom approach.

    Science.gov (United States)

    Wanless, Shannon B; Patton, Christine L; Rimm-Kaufman, Sara E; Deutsch, Nancy L

    2013-02-01

    We used mixed methods to examine the association between setting-level factors and observed implementation of a social and emotional learning intervention (Responsive Classroom® approach; RC). In study 1 (N = 33 3rd grade teachers after the first year of RC implementation), we identified relevant setting-level factors and uncovered the mechanisms through which they related to implementation. In study 2 (N = 50 4th grade teachers after the second year of RC implementation), we validated our most salient Study 1 finding across multiple informants. Findings suggested that teachers perceived setting-level factors, particularly principal buy-in to the intervention and individualized coaching, as influential to their degree of implementation. Further, we found that intervention coaches' perspectives of principal buy-in were more related to implementation than principals' or teachers' perspectives. Findings extend the application of setting theory to the field of implementation science and suggest that interventionists may want to consider particular accounts of school setting factors before determining the likelihood of schools achieving high levels of implementation.

  16. Multi-phase flow monitoring with electrical impedance tomography using level set based method

    International Nuclear Information System (INIS)

    Liu, Dong; Khambampati, Anil Kumar; Kim, Sin; Kim, Kyung Youn

    2015-01-01

    Highlights: • LSM has been used for shape reconstruction to monitor multi-phase flow using EIT. • Multi-phase level set model for conductivity is represented by two level set functions. • LSM handles topological merging and breaking naturally during evolution process. • To reduce the computational time, a narrowband technique was applied. • Use of narrowband and optimization approach results in efficient and fast method. - Abstract: In this paper, a level set-based reconstruction scheme is applied to multi-phase flow monitoring using electrical impedance tomography (EIT). The proposed scheme involves applying a narrowband level set method to solve the inverse problem of finding the interface between the regions having different conductivity values. The multi-phase level set model for the conductivity distribution inside the domain is represented by two level set functions. The key principle of the level set-based method is to implicitly represent the shape of the interface as the zero level set of a higher dimensional function and then solve a set of partial differential equations. The level set-based scheme handles topological merging and breaking naturally during the evolution process. It also offers several advantages compared to the traditional pixel-based approach. The level set-based method for multi-phase flow is tested with numerical and experimental data. It is found that the level set-based method has better reconstruction performance when compared to the pixel-based method

  17. Electric vehicle charging algorithms for coordination of the grid and distribution transformer levels

    International Nuclear Information System (INIS)

    Ramos Muñoz, Edgar; Razeghi, Ghazal; Zhang, Li; Jabbari, Faryar

    2016-01-01

    The need to reduce greenhouse gas emissions and fossil fuel consumption has increased the popularity of plug-in electric vehicles. However, a large penetration of plug-in electric vehicles can pose challenges at the grid and local distribution levels. Various charging strategies have been proposed to address such challenges, often separately. In this paper, it is shown that, with uncoordinated charging, distribution transformers and the grid can operate under highly undesirable conditions. Next, several strategies that require modest communication efforts are proposed to mitigate the burden created by high concentrations of plug-in electric vehicles, at the grid and local levels. Existing transformer and battery electric vehicle characteristics are used along with the National Household Travel Survey to simulate various charging strategies. It is shown through the analysis of hot spot temperature and equivalent aging factor that the coordinated strategies proposed here reduce the chances of transformer failure with the addition of plug-in electric vehicle loads, even for an under-designed transformer while uncontrolled and uncoordinated plug-in electric vehicle charging results in increased risk of transformer failure. - Highlights: • Charging algorithm for battery electric vehicles, for high penetration levels. • Algorithm reduces transformer overloading, for grid level valley filling. • Computation and communication requirements are minimal. • The distributed algorithm is implemented without large scale iterations. • Hot spot temperature and loss of life for transformers are evaluated.

  18. Using SETS to find minimal cut sets in large fault trees

    International Nuclear Information System (INIS)

    Worrell, R.B.; Stack, D.W.

    1978-01-01

    An efficient algebraic algorithm for finding the minimal cut sets for a large fault tree was defined and a new procedure which implements the algorithm was added to the Set Equation Transformation System (SETS). The algorithm includes the identification and separate processing of independent subtrees, the coalescing of consecutive gates of the same kind, the creation of additional independent subtrees, and the derivation of the fault tree stem equation in stages. The computer time required to determine the minimal cut sets using these techniques is shown to be substantially less than the computer time required to determine the minimal cut sets when these techniques are not employed. It is shown for a given example that the execution time required to determine the minimal cut sets can be reduced from 7,686 seconds to 7 seconds when all of these techniques are employed
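
    For readers unfamiliar with the underlying Boolean manipulation, the sketch below computes minimal cut sets for a small, hypothetical fault tree by expanding OR/AND gates and applying absorption. It is only a didactic illustration; it does not reproduce SETS' independent-subtree processing, gate coalescing, or staged stem-equation derivation:

```python
# Didactic minimal-cut-set computation for a small fault tree.
# Gates are ("AND", [...]) or ("OR", [...]); leaves are basic-event names.
# The tree below is hypothetical, and SETS-specific optimizations
# (independent subtrees, gate coalescing, staged equations) are omitted.
def cut_sets(node, tree):
    if node not in tree:                          # basic event
        return [frozenset([node])]
    kind, inputs = tree[node]
    child_sets = [cut_sets(child, tree) for child in inputs]
    if kind == "OR":                              # union of the children's cut sets
        result = [cs for sets in child_sets for cs in sets]
    else:                                         # AND: pairwise unions (cross product)
        result = [frozenset()]
        for sets in child_sets:
            result = [a | b for a in result for b in sets]
    minimal = [s for s in result                  # absorption: keep only minimal sets
               if not any(other < s for other in result)]
    return list(dict.fromkeys(minimal))           # drop duplicates, keep order

tree = {
    "TOP": ("OR", ["G1", "C"]),
    "G1": ("AND", ["A", "G2"]),
    "G2": ("OR", ["B", "C"]),
}
print([sorted(s) for s in cut_sets("TOP", tree)])
# -> [['A', 'B'], ['C']]   ({'A', 'C'} is absorbed by the single-event cut set {'C'})
```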

  19. Level Set Structure of an Integrable Cellular Automaton

    Directory of Open Access Journals (Sweden)

    Taichiro Takagi

    2010-03-01

    Based on a group-theoretical setting, a sort of discrete dynamical system is constructed and applied to a combinatorial dynamical system defined on the set of certain Bethe ansatz related objects known as the rigged configurations. This system is then used to study a one-dimensional periodic cellular automaton related to the discrete Toda lattice. It is shown for the first time that the level set of this cellular automaton is decomposed into connected components and that every such component is a torus.

  20. ATLAS High-Level Trigger Performance for Calorimeter-Based Algorithms in LHC Run-I

    CERN Document Server

    Mann, A; The ATLAS collaboration

    2013-01-01

    The ATLAS detector operated during the three years of Run-I of the Large Hadron Collider, collecting information on a large number of proton-proton events. One of the most important results obtained so far is the discovery of one Higgs boson. More precise measurements of this particle must be performed, and there are other very important physics topics still to be explored. One of the key components of the ATLAS detector is its trigger system. It is composed of three levels: one (called Level 1 - L1) built on custom hardware and two others based on software algorithms - called Level 2 (L2) and Event Filter (EF) - altogether referred to as the ATLAS High Level Trigger. The ATLAS trigger is responsible for reducing almost 20 million collisions per second produced by the accelerator to less than 1000. The L2 operates only in the regions tagged by the first hardware level as containing possibly interesting physics, while the EF operates in the full detector, normally using offline-like algorithms to...

  1. Response-only modal identification using random decrement algorithm with time-varying threshold level

    International Nuclear Information System (INIS)

    Lin, Chang Sheng; Tseng, Tse Chuan

    2014-01-01

    Modal identification from response data only is studied for structural systems under nonstationary ambient vibration. The topic of this paper is the estimation of modal parameters from nonstationary ambient vibration data by applying the random decrement algorithm with a time-varying threshold level. In the conventional random decrement algorithm, the threshold level for evaluating random decrement signatures is defined as the standard deviation of the response data of the reference channel. In practice, however, the random decrement signatures may be distorted by the noise contained in the original response data. To improve the accuracy of identification, a modification of the sampling procedure in the random decrement algorithm is proposed for modal-parameter identification from nonstationary ambient response data. The time-varying threshold level is introduced for the acquisition of the sample time histories used in the averaging analysis, and is defined as the temporal root-mean-square function of the structural response, which can appropriately describe a wide variety of nonstationary behaviors in reality, such as the time-varying amplitude (variance) of a nonstationary process in a seismic record. Numerical simulations confirm the validity and robustness of the proposed modal-identification method from nonstationary ambient response data under noisy conditions.
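
    To make the modified sampling procedure concrete, the numpy sketch below extracts random decrement signatures using a moving root-mean-square threshold in place of a fixed standard-deviation level. The window length, segment length and up-crossing trigger rule are illustrative assumptions, not the authors' exact settings:

```python
import numpy as np

# Illustrative random decrement estimation with a time-varying (moving-RMS)
# threshold. Window length, segment length and the up-crossing trigger are
# assumptions for this sketch, not the paper's exact parameters.
def random_decrement(x, fs, seg_seconds=1.0, rms_seconds=2.0):
    n_seg = int(seg_seconds * fs)
    n_rms = int(rms_seconds * fs)
    # temporal root-mean-square of the response as the threshold level
    kernel = np.ones(n_rms) / n_rms
    rms = np.sqrt(np.convolve(x ** 2, kernel, mode="same"))
    # trigger points: up-crossings of the time-varying threshold
    above = x >= rms
    triggers = np.where(~above[:-1] & above[1:])[0] + 1
    triggers = triggers[triggers + n_seg <= len(x)]
    if len(triggers) == 0:
        raise ValueError("no threshold crossings found")
    segments = np.stack([x[t:t + n_seg] for t in triggers])
    return segments.mean(axis=0)           # averaged random decrement signature

if __name__ == "__main__":
    fs = 200.0
    t = np.arange(0, 60, 1 / fs)
    # nonstationary toy response: decaying-envelope sinusoid plus noise
    x = np.exp(-0.02 * t) * np.sin(2 * np.pi * 3.0 * t) + 0.3 * np.random.randn(t.size)
    sig = random_decrement(x, fs)
    print(sig.shape)                        # (200,)
```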

  2. Iterative algorithm of discrete Fourier transform for processing randomly sampled NMR data sets

    International Nuclear Information System (INIS)

    Stanek, Jan; Kozminski, Wiktor

    2010-01-01

    Spectra obtained by application of multidimensional Fourier Transformation (MFT) to sparsely sampled nD NMR signals are usually corrupted due to missing data. In the present paper this phenomenon is investigated on simulations and experiments. An effective iterative algorithm for artifact suppression for sparse on-grid NMR data sets is discussed in detail. It includes automated peak recognition based on statistical methods. The results enable one to study NMR spectra with a high dynamic range of peak intensities while preserving the benefits of random sampling, namely the superior resolution in indirectly measured dimensions. Experimental examples include 3D 15N- and 13C-edited NOESY-HSQC spectra of human ubiquitin.

  3. A parallel algorithm for switch-level timing simulation on a hypercube multiprocessor

    Science.gov (United States)

    Rao, Hariprasad Nannapaneni

    1989-01-01

    The parallel approach to speeding up simulation is studied, specifically the simulation of digital LSI MOS circuitry on the Intel iPSC/2 hypercube. The simulation algorithm is based on RSIM, an event driven switch-level simulator that incorporates a linear transistor model for simulating digital MOS circuits. Parallel processing techniques based on the concepts of Virtual Time and rollback are utilized so that portions of the circuit may be simulated on separate processors, in parallel for as large an increase in speed as possible. A partitioning algorithm is also developed in order to subdivide the circuit for parallel processing.

  4. Two-step digit-set-restricted modified signed-digit addition-subtraction algorithm and its optoelectronic implementation.

    Science.gov (United States)

    Qian, F; Li, G; Ruan, H; Jing, H; Liu, L

    1999-09-10

    A novel, to our knowledge, two-step digit-set-restricted modified signed-digit (MSD) addition-subtraction algorithm is proposed. With the introduction of the reference digits, the operand words are mapped into an intermediate carry word with all digits restricted to the set {-1, 0} and an intermediate sum word with all digits restricted to the set {0, 1}, which can be summed to form the final result without carry generation. The operation can be performed in parallel by use of binary logic. An optical system that utilizes an electron-trapping device is suggested for accomplishing the required binary logic operations. By programming the illumination of the data arrays, any complex logic operation of multiple variables can be realized without additional temporal latency of the intermediate results. This technique has a high space-bandwidth product and signal-to-noise ratio. The main structure can be stacked to construct a compact optoelectronic MSD adder-subtracter.

  5. Dilution testing using rapid diagnostic tests in a HIV diagnostic algorithm: a novel alternative for confirmation testing in resource limited settings.

    Science.gov (United States)

    Shanks, Leslie; Siddiqui, M Ruby; Abebe, Almaz; Piriou, Erwan; Pearce, Neil; Ariti, Cono; Masiga, Johnson; Muluneh, Libsework; Wazome, Joseph; Ritmeijer, Koert; Klarkowski, Derryck

    2015-05-14

    Current WHO testing guidelines for resource limited settings diagnose HIV on the basis of screening tests without a confirmation test due to cost constraints. This leads to a potential risk of false positive HIV diagnosis. In this paper, we evaluate the dilution test, a novel method for confirmation testing, which is simple, rapid, and low cost. The principle of the dilution test is to alter the sensitivity of a rapid diagnostic test (RDT) by dilution of the sample, in order to screen out the cross-reacting antibodies responsible for falsely positive RDT results. Participants were recruited from two testing centres in Ethiopia where a tiebreaker algorithm using 3 different RDTs in series is used to diagnose HIV. All samples positive on the initial screening RDT and every 10th negative sample underwent testing with the gold standard and the dilution test. Dilution testing was performed using the Determine™ rapid diagnostic test at 6 different dilutions. Results were compared to the gold standard of Western Blot; where Western Blot was indeterminate, PCR testing determined the final result. 2895 samples were recruited to the study. 247 were positive, for a prevalence of 8.5 % (247/2895). A total of 495 samples underwent dilution testing. The RDT diagnostic algorithm misclassified 18 samples as positive. Dilution at the level of 1/160 was able to correctly identify all these 18 false positives, but at the cost of a single false negative result (sensitivity 99.6 %, 95 % CI 97.8-100; specificity 100 %, 95 % CI 98.5-100). Concordance between the gold standard and the 1/160 dilution strength was 99.8 %. This study provides proof of concept for a new, low cost method of confirming HIV diagnosis in resource-limited settings. It has potential for use as a supplementary test in a confirmatory algorithm, whereby double positive RDT results undergo dilution testing, with positive results confirming HIV infection. Negative results require nucleic acid testing to rule out false negatives.

  6. A Novel 2D Image Compression Algorithm Based on Two Levels DWT and DCT Transforms with Enhanced Minimize-Matrix-Size Algorithm for High Resolution Structured Light 3D Surface Reconstruction

    Science.gov (United States)

    Siddeq, M. M.; Rodrigues, M. A.

    2015-09-01

    Image compression techniques are widely used on 2D images, 2D video, 3D images and 3D video. There are many types of compression techniques, and among the most popular are JPEG and JPEG2000. In this research, we introduce a new compression method based on applying a two-level discrete cosine transform (DCT) and a two-level discrete wavelet transform (DWT) in connection with novel compression steps for high-resolution images. The proposed image compression algorithm consists of four steps. (1) Transform an image by a two-level DWT followed by a DCT to produce two matrices: the DC- and AC-Matrix, or low- and high-frequency matrix, respectively; (2) apply a second-level DCT on the DC-Matrix to generate two arrays, namely the nonzero-array and the zero-array; (3) apply the Minimize-Matrix-Size algorithm to the AC-Matrix and to the other high frequencies generated by the second-level DWT; (4) apply arithmetic coding to the output of the previous steps. A novel decompression algorithm, the Fast-Match-Search algorithm (FMS), is used to reconstruct all high-frequency matrices. The FMS algorithm computes all compressed data probabilities by using a table of data, and then uses a binary search algorithm to find the decompressed data inside the table. Thereafter, all decoded DC-values are combined with the decoded AC-coefficients in one matrix, followed by an inverse two-level DCT with two-level DWT. The technique is tested by compression and reconstruction of 3D surface patches. Additionally, this technique is compared with the JPEG and JPEG2000 algorithms through the 2D and 3D root-mean-square error following reconstruction. The results demonstrate that the proposed compression method has better visual properties than JPEG and JPEG2000 and is able to more accurately reconstruct surface patches in 3D.
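
    As a loose illustration of the transform stage only (roughly steps 1 and 2), the sketch below applies a hand-written two-level Haar DWT followed by an orthonormal DCT on the coarsest approximation. The Haar filter choice and the omission of the Minimize-Matrix-Size, arithmetic-coding and FMS stages are simplifications of this illustration, not the authors' pipeline:

```python
import numpy as np
from scipy.fft import dctn

# One-level 2D Haar DWT: returns the approximation (LL) and the detail bands.
def haar_dwt2(img):
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, (lh, hl, hh)

# Transform-stage sketch: two DWT levels, then a DCT on the coarsest
# approximation (a stand-in for the "DC-Matrix"); coding steps are omitted.
def transform_stage(img):
    ll1, details1 = haar_dwt2(img)
    ll2, details2 = haar_dwt2(ll1)
    dc_matrix = dctn(ll2, norm="ortho")
    return dc_matrix, details1, details2

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    image = rng.random((64, 64))
    dc, d1, d2 = transform_stage(image)
    print(dc.shape, d1[0].shape, d2[0].shape)   # (16, 16) (32, 32) (16, 16)
```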

  7. Reevaluation of steam generator level trip set point

    Energy Technology Data Exchange (ETDEWEB)

    Shim, Yoon Sub; Soh, Dong Sub; Kim, Sung Oh; Jung, Se Won; Sung, Kang Sik; Lee, Joon [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)]

    1994-06-01

    The reactor trip on low steam generator water level accounts for a substantial portion of reactor scrams in a nuclear plant, and the feasibility of modifying the steam generator water level trip system of YGN 1/2 was evaluated in this study. The study revealed that removal of the reactor trip function from the SG water level trip system is not possible because of plant safety, but that relaxation of the trip set point by 9 % is feasible. The set point relaxation requires drilling new holes for level measurement in the operating steam generators. Characteristics of the negative neutron flux rate trip and reactor trip were also reviewed as additional work. Since the purpose of the trip system modification, the reduction of reactor scram frequency, is not to satisfy legal requirements but to improve plant performance, and since the modification has both positive and negative aspects, the decision on actual modification needs to be made based on the results of this study and also the policy of the plant owner. 37 figs, 6 tabs, 14 refs. (Author).

  8. Performance comparison of extracellular spike sorting algorithms for single-channel recordings.

    Science.gov (United States)

    Wild, Jiri; Prekopcsak, Zoltan; Sieger, Tomas; Novak, Daniel; Jech, Robert

    2012-01-30

    Proper classification of action potentials from extracellular recordings is essential for making an accurate study of neuronal behavior. Many spike sorting algorithms have been presented in the technical literature. However, no comparative analysis has hitherto been performed. In our study, three widely-used publicly-available spike sorting algorithms (WaveClus, KlustaKwik, OSort) were compared with regard to their parameter settings. The algorithms were evaluated using 112 artificial signals (publicly available online) with 2-9 different neurons and varying noise levels between 0.00 and 0.60. An optimization technique based on Adjusted Mutual Information was employed to find near-optimal parameter settings for a given artificial signal and algorithm. All three algorithms performed significantly better with optimized parameters than with their default settings. WaveClus was the most accurate spike sorting algorithm, receiving the best evaluation score for 60% of all signals. OSort operated at almost five times the speed of the other algorithms. In terms of accuracy, OSort performed significantly less well than the other two algorithms. None of the algorithms was optimal in general. The accuracy of the algorithms depended on proper choice of the algorithm parameters and also on specific properties of the examined signal. Copyright © 2011 Elsevier B.V. All rights reserved.
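
    The evaluation metric mentioned above, Adjusted Mutual Information between ground-truth spike labels and a sorter's output, is straightforward to reproduce. The sketch below scores a toy stand-in sorter (KMeans on synthetic features) over a small parameter grid with scikit-learn; the data and the parameter being tuned are hypothetical and only stand in for the paper's optimization procedure:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_mutual_info_score

# Toy stand-in for spike sorting: cluster 2-D "spike features" with KMeans
# and score each parameter setting (here, the number of clusters) by the
# Adjusted Mutual Information against the known labels. The feature data
# and the parameter grid are invented, not the paper's signals.
rng = np.random.default_rng(1)
true_labels = np.repeat([0, 1, 2], 100)
features = np.vstack([rng.normal(loc=c * 3.0, scale=0.7, size=(100, 2))
                      for c in range(3)])

best = None
for n_clusters in (2, 3, 4, 5):
    pred = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(features)
    score = adjusted_mutual_info_score(true_labels, pred)
    print(f"k={n_clusters}: AMI={score:.3f}")
    if best is None or score > best[1]:
        best = (n_clusters, score)

print("best parameter setting:", best)
```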

  9. A Greedy Algorithm for Neighborhood Overlap-Based Community Detection

    Directory of Open Access Journals (Sweden)

    Natarajan Meghanathan

    2016-01-01

    The neighborhood overlap (NOVER) of an edge u-v is defined as the ratio of the number of nodes who are neighbors for both u and v to the number of nodes who are neighbors of at least u or v. In this paper, we hypothesize that an edge u-v with a lower NOVER score bridges two or more sets of vertices, with very few edges (other than u-v) connecting vertices from one set to another set. Accordingly, we propose a greedy algorithm of iteratively removing the edges of a network in the increasing order of their neighborhood overlap and calculating the modularity score of the resulting network component(s) after the removal of each edge. The network component(s) that have the largest cumulative modularity score are identified as the different communities of the network. We evaluate the performance of the proposed NOVER-based community detection algorithm on nine real-world network graphs and compare the performance against the multi-level aggregation-based Louvain algorithm, as well as the original and time-efficient versions of the edge betweenness-based Girvan-Newman (GN) community detection algorithm.
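
    A compact networkx rendering of the greedy procedure (compute NOVER per edge, remove edges in increasing NOVER order, and keep the component partition with the highest modularity) is given below. It follows the description in the abstract rather than the authors' reference implementation, and the test graph is an arbitrary example:

```python
import networkx as nx
from networkx.algorithms.community import modularity

# Neighborhood overlap of edge (u, v), following the abstract's definition:
# shared neighbors divided by neighbors of at least one endpoint
# (the endpoints themselves excluded).
def nover(G, u, v):
    nu, nv = set(G[u]) - {v}, set(G[v]) - {u}
    union = nu | nv
    return len(nu & nv) / len(union) if union else 0.0

def nover_communities(G):
    H = G.copy()
    edges = sorted(G.edges(), key=lambda e: nover(G, *e))   # increasing NOVER
    best_score, best_parts = float("-inf"), [set(G.nodes())]
    for u, v in edges:
        H.remove_edge(u, v)
        parts = [set(c) for c in nx.connected_components(H)]
        score = modularity(G, parts)       # modularity w.r.t. the original graph
        if score > best_score:
            best_score, best_parts = score, parts
    return best_parts, best_score

if __name__ == "__main__":
    G = nx.barbell_graph(5, 0)             # two cliques joined by a single edge
    parts, score = nover_communities(G)
    print(parts, round(score, 3))
```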

  10. Quantum algorithm for linear regression

    Science.gov (United States)

    Wang, Guoming

    2017-07-01

    We present a quantum algorithm for fitting a linear regression model to a given data set using the least-squares approach. Differently from previous algorithms which yield a quantum state encoding the optimal parameters, our algorithm outputs these numbers in the classical form. So by running it once, one completely determines the fitted model and then can use it to make predictions on new data at little cost. Moreover, our algorithm works in the standard oracle model, and can handle data sets with nonsparse design matrices. It runs in time poly(log2(N), d, κ, 1/ε), where N is the size of the data set, d is the number of adjustable parameters, κ is the condition number of the design matrix, and ε is the desired precision in the output. We also show that the polynomial dependence on d and κ is necessary. Thus, our algorithm cannot be significantly improved. Furthermore, we also give a quantum algorithm that estimates the quality of the least-squares fit (without computing its parameters explicitly). This algorithm runs faster than the one for finding this fit, and can be used to check whether the given data set qualifies for linear regression in the first place.

  11. HPC in Basin Modeling: Simulating Mechanical Compaction through Vertical Effective Stress using Level Sets

    Science.gov (United States)

    McGovern, S.; Kollet, S. J.; Buerger, C. M.; Schwede, R. L.; Podlaha, O. G.

    2017-12-01

    References: [1] Novel basin modelling concept for simulating deformation from mechanical compaction using level sets. Computational Geosciences, SI:ECMOR XV, 1-14. [2] Bangerth et al. (2011). Algorithms and data structures for massively parallel generic adaptive finite element codes. ACM Transactions on Mathematical Software (TOMS), 38(2):14.

  12. Gradient augmented level set method for phase change simulations

    Science.gov (United States)

    Anumolu, Lakshman; Trujillo, Mario F.

    2018-01-01

    A numerical method for the simulation of two-phase flow with phase change based on the Gradient-Augmented-Level-set (GALS) strategy is presented. Sharp capturing of the vaporization process is enabled by: i) identification of the vapor-liquid interface, Γ(t), at the subgrid level, ii) discontinuous treatment of thermal physical properties (except for μ), and iii) enforcement of mass, momentum, and energy jump conditions, where the gradients of the dependent variables are obtained at Γ(t) and are consistent with their analytical expression, i.e. no local averaging is applied. Treatment of the jump in velocity and pressure at Γ(t) is achieved using the Ghost Fluid Method. The solution of the energy equation employs the sub-grid knowledge of Γ(t) to discretize the temperature Laplacian using second-order one-sided differences, i.e. the numerical stencil completely resides within each respective phase. To carefully evaluate the benefits or disadvantages of the GALS approach, the standard level set method is implemented and compared against the GALS predictions. The results show the expected trend that interface identification and transport are predicted noticeably better with GALS over the standard level set. This benefit carries over to the prediction of the Laplacian and temperature gradients in the neighborhood of the interface, which are directly linked to the calculation of the vaporization rate. However, when combining the calculation of interface transport and reinitialization with two-phase momentum and energy, the benefits of GALS are to some extent neutralized, and the causes for this behavior are identified and analyzed. Overall the additional computational costs associated with GALS are almost the same as those using the standard level set technique.

  13. A simple mass-conserved level set method for simulation of multiphase flows

    Science.gov (United States)

    Yuan, H.-Z.; Shu, C.; Wang, Y.; Shu, S.

    2018-04-01

    In this paper, a modified level set method is proposed for simulation of multiphase flows with large density ratio and high Reynolds number. The present method simply introduces a source or sink term into the level set equation to compensate for the mass loss or offset the mass increase. The source or sink term is derived analytically by applying the mass conservation principle with the level set equation and the continuity equation of the flow field. Since only a source term is introduced, the application of the present method is as simple as the original level set method, but it can guarantee the overall mass conservation. To validate the present method, the vortex flow problem is first considered. The simulation results are compared with those from the original level set method, which demonstrates that the modified level set method has the capability of accurately capturing the interface and keeping the mass conservation. Then, the proposed method is further validated by simulating the Laplace law, the merging of two bubbles, a bubble rising with high density ratio, and the Rayleigh-Taylor instability with high Reynolds number. Numerical results show that the mass is well conserved by the present method.
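
    The core idea, adding a source or sink term so that the area enclosed by the zero level set stays constant, can be sketched in one dimension with numpy. The smoothed Heaviside, the hat-shaped delta weighting and the first-order upwind advection below are simplifying assumptions of this illustration, not the analytically derived source term of the paper:

```python
import numpy as np

# 1-D toy: advect a level set function with a constant velocity and add a
# correction, weighted by a hat-shaped delta function near the interface,
# that restores the initial enclosed length ("mass") at every step.
# Conceptual sketch only; the paper derives its source term analytically
# from the level set and continuity equations.
n, L = 400, 1.0
dx = L / n
x = np.linspace(0.0, L, n, endpoint=False)
u = 0.5
dt = 0.5 * dx / u                        # CFL-limited time step
eps = 1.5 * dx

def heaviside(phi):                      # smoothed Heaviside function
    h = 0.5 * (1 + phi / eps + np.sin(np.pi * phi / eps) / np.pi)
    return np.where(phi < -eps, 0.0, np.where(phi > eps, 1.0, h))

def mass(phi):                           # length of the region where phi < 0
    return np.sum(1.0 - heaviside(phi)) * dx

phi = np.abs(x - 0.3) - 0.1              # the "liquid" occupies [0.2, 0.4]
m0 = mass(phi)
for _ in range(200):
    phi = phi - dt * u * (phi - np.roll(phi, 1)) / dx        # upwind advection (u > 0)
    delta = np.maximum(0.0, 1.0 - np.abs(phi) / eps) / eps   # hat-shaped delta
    deficit = m0 - mass(phi)
    if delta.sum() > 0.0:
        phi -= deficit * delta / (delta.sum() * dx)          # source/sink near interface
print(f"relative mass error: {abs(mass(phi) - m0) / m0:.2e}")
```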

  14. Online Planning Algorithm

    Science.gov (United States)

    Rabideau, Gregg R.; Chien, Steve A.

    2010-01-01

    AVA v2 software selects goals for execution from a set of goals that oversubscribe shared resources. The term goal refers to a science or engineering request to execute a possibly complex command sequence, such as image targets or ground-station downlinks. Developed as an extension to the Virtual Machine Language (VML) execution system, the software enables onboard and remote goal triggering through the use of an embedded, dynamic goal set that can oversubscribe resources. From the set of conflicting goals, a subset must be chosen that maximizes a given quality metric, which in this case is strict priority selection. A goal can never be pre-empted by a lower priority goal, and high-level goals can be added, removed, or updated at any time, and the "best" goals will be selected for execution. The software addresses the issue of re-planning that must be performed in a short time frame by the embedded system, where computational resources are constrained. In particular, the algorithm addresses problems with well-defined goal requests without temporal flexibility that oversubscribe available resources. By using a fast, incremental algorithm, goal selection can be postponed in a "just-in-time" fashion, allowing requests to be changed or added at the last minute, thereby enabling shorter response times and greater autonomy for the system under control.
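
    A minimal sketch of strict-priority goal selection under a single shared resource cap is shown below; the goal records and the scalar budget are hypothetical simplifications of the VML extension described above:

```python
# Strict-priority selection of goals that oversubscribe one shared resource.
# Goal records and the scalar resource budget are hypothetical; the real
# system handles complex command sequences and incremental re-planning.
def select_goals(goals, budget):
    selected, used = [], 0.0
    # Lower priority number = more important; a goal is never pre-empted
    # by a lower-priority goal.
    for goal in sorted(goals, key=lambda g: g["priority"]):
        if used + goal["cost"] <= budget:
            selected.append(goal["name"])
            used += goal["cost"]
    return selected, used

goals = [
    {"name": "image_target_A", "priority": 1, "cost": 40.0},
    {"name": "downlink_pass",  "priority": 2, "cost": 50.0},
    {"name": "image_target_B", "priority": 3, "cost": 30.0},
    {"name": "calibration",    "priority": 4, "cost": 15.0},
]
print(select_goals(goals, budget=100.0))
# -> (['image_target_A', 'downlink_pass'], 90.0)
```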

  15. Reconstruction of thin electromagnetic inclusions by a level-set method

    International Nuclear Information System (INIS)

    Park, Won-Kwang; Lesselier, Dominique

    2009-01-01

    In this contribution, we consider a technique of electromagnetic imaging (at a single, non-zero frequency) which uses the level-set evolution method for reconstructing a thin inclusion (possibly made of disconnected parts) with either dielectric or magnetic contrast with respect to the embedding homogeneous medium. Emphasis is on the proof of the concept, the scattering problem at hand being so far based on a two-dimensional scalar model. To do so, two level-set functions are employed; the first one describes location and shape, and the other one describes connectivity and length. Speeds of evolution of the level-set functions are calculated via the introduction of Fréchet derivatives of a least-square cost functional. Several numerical experiments, on noiseless as well as noisy data, illustrate how the proposed method behaves

  16. Level-Set Methodology on Adaptive Octree Grids

    Science.gov (United States)

    Gibou, Frederic; Guittet, Arthur; Mirzadeh, Mohammad; Theillard, Maxime

    2017-11-01

    Numerical simulations of interfacial problems in fluids require a methodology capable of tracking surfaces that can undergo changes in topology and capable of imposing jump boundary conditions in a sharp manner. In this talk, we will discuss recent advances in the level-set framework, in particular one that is based on adaptive grids.

  17. Fully Automated Segmentation of Fluid/Cyst Regions in Optical Coherence Tomography Images With Diabetic Macular Edema Using Neutrosophic Sets and Graph Algorithms.

    Science.gov (United States)

    Rashno, Abdolreza; Koozekanani, Dara D; Drayna, Paul M; Nazari, Behzad; Sadri, Saeed; Rabbani, Hossein; Parhi, Keshab K

    2018-05-01

    This paper presents a fully automated algorithm to segment fluid-associated (fluid-filled) and cyst regions in optical coherence tomography (OCT) retina images of subjects with diabetic macular edema. The OCT image is segmented using a novel neutrosophic transformation and a graph-based shortest path method. In the neutrosophic domain, an image is transformed into three sets: T (true), I (indeterminate), which represents noise, and F (false). This paper makes four key contributions. First, a new method is introduced to compute the indeterminacy set I, and a new correction operation is introduced to compute the set T in the neutrosophic domain. Second, a graph shortest-path method is applied in the neutrosophic domain to segment the inner limiting membrane and the retinal pigment epithelium as regions of interest (ROI), and the outer plexiform layer and inner segment myeloid as middle layers, using a novel definition of the edge weights. Third, a new cost function for cluster-based fluid/cyst segmentation in the ROI is presented, which also includes a novel approach to estimating the number of clusters in an automated manner. Fourth, the final fluid regions are obtained by ignoring very small regions and the regions between the middle layers. The proposed method is evaluated using two publicly available datasets, Duke and Optima, and a third local dataset from the UMN clinic which is available online. The proposed algorithm outperforms the previously proposed Duke algorithm by 8% with respect to the dice coefficient and by 5% with respect to precision on the Duke dataset, while achieving about the same sensitivity. Also, the proposed algorithm outperforms a prior method for the Optima dataset by 6%, 22%, and 23% with respect to the dice coefficient, sensitivity, and precision, respectively. Finally, the proposed algorithm achieves a sensitivity of 67.3%, 88.8%, and 76.7% for the Duke, Optima, and University of Minnesota (UMN) datasets, respectively.

  18. A clinical algorithm for triaging patients with significant lymphadenopathy in primary health care settings in Sudan

    Directory of Open Access Journals (Sweden)

    Eltahir A.G. Khalil

    2013-06-01

    Background: Tuberculosis is a major health problem in developing countries. The distinction between tuberculous lymphadenitis, non-specific lymphadenitis and malignant lymph node enlargement has to be made at primary health care levels using easy, simple and cheap methods. Objective: To develop a reliable clinical algorithm for primary care settings to triage cases of non-specific, tuberculous and malignant lymphadenopathies. Methods: Calculation of the odds ratios (OR) of the chosen predictor variables was carried out using logistic regression. The numerical score values of the predictor variables were weighed against their respective OR. The performance of the score was evaluated by the ROC (Receiver Operating Characteristic) curve. Results: Four predictor variables - Mantoux reading, erythrocyte sedimentation rate (ESR), nocturnal fever and discharging sinuses - correlated significantly with TB diagnosis and were included in the reduced model to establish score A. For score B, the reduced model included Mantoux reading, ESR, lymph-node size and lymph-node number as predictor variables for malignant lymph nodes. Score A ranged from 0 to 12, and a cut-off point of 6 gave a best sensitivity and specificity of 91% and 90% respectively, whilst score B ranged from -3 to 8, and a cut-off point of 3 gave a best sensitivity and specificity of 83% and 76% respectively. The calculated area under the ROC curve was 0.964 (95% CI, 0.949-0.980) and 0.856 (95% CI, 0.787-0.925) for scores A and B respectively, indicating good performance. Conclusion: The developed algorithm can efficiently triage cases with tuberculous and malignant lymphadenopathies for treatment or referral to specialised centres for further work-up.

  19. An accurate conservative level set/ghost fluid method for simulating turbulent atomization

    International Nuclear Information System (INIS)

    Desjardins, Olivier; Moureau, Vincent; Pitsch, Heinz

    2008-01-01

    This paper presents a novel methodology for simulating incompressible two-phase flows by combining an improved version of the conservative level set technique introduced in [E. Olsson, G. Kreiss, A conservative level set method for two phase flow, J. Comput. Phys. 210 (2005) 225-246] with a ghost fluid approach. By employing a hyperbolic tangent level set function that is transported and re-initialized using fully conservative numerical schemes, mass conservation issues that are known to affect level set methods are greatly reduced. In order to improve the accuracy of the conservative level set method, high order numerical schemes are used. The overall robustness of the numerical approach is increased by computing the interface normals from a signed distance function reconstructed from the hyperbolic tangent level set by a fast marching method. The convergence of the curvature calculation is ensured by using a least squares reconstruction. The ghost fluid technique provides a way of handling the interfacial forces and large density jumps associated with two-phase flows with good accuracy, while avoiding artificial spreading of the interface. Since the proposed approach relies on partial differential equations, its implementation is straightforward in all coordinate systems, and it benefits from high parallel efficiency. The robustness and efficiency of the approach is further improved by using implicit schemes for the interface transport and re-initialization equations, as well as for the momentum solver. The performance of the method is assessed through both classical level set transport tests and simple two-phase flow examples including topology changes. It is then applied to simulate turbulent atomization of a liquid Diesel jet at Re=3000. The conservation errors associated with the accurate conservative level set technique are shown to remain small even for this complex case

  20. Evaluation of a fever-management algorithm in a pediatric cancer center in a low-resource setting.

    Science.gov (United States)

    Mukkada, Sheena; Smith, Cristel Kate; Aguilar, Delta; Sykes, April; Tang, Li; Dolendo, Mae; Caniza, Miguela A

    2018-02-01

    In low- and middle-income countries (LMICs), inconsistent or delayed management of fever contributes to poor outcomes among pediatric patients with cancer. We hypothesized that standardizing practice with a clinical algorithm adapted to local resources would improve outcomes. Therefore, we developed a resource-specific algorithm for fever management in Davao City, Philippines. The primary objective of this study was to evaluate adherence to the algorithm. This was a prospective cohort study of algorithm adherence to assess the types of deviation, reasons for deviation, and pathogens isolated. All pediatric oncology patients who were admitted with fever (defined as an axillary temperature  >37.7°C on one occasion or ≥37.4°C on two occasions 1 hr apart) or who developed fever within 48 hr of admission were included. Univariate and multiple linear regression analyses were used to determine the relation between clinical predictors and length of hospitalization. During the study, 93 patients had 141 qualifying febrile episodes. Even though the algorithm was designed locally, deviations occurred in 70 (50%) of 141 febrile episodes on day 0, reflecting implementation barriers at the patient, provider, and institutional levels. There were 259 deviations during the first 7 days of admission in 92 (65%) of 141 patient episodes. Failure to identify high-risk patients, missed antimicrobial doses, and pathogen isolation were associated with prolonged hospitalization. Monitoring algorithm adherence helps in assessing the quality of pediatric oncology care in LMICs and identifying opportunities for improvement. Measures that decrease high-frequency/high-impact algorithm deviations may shorten hospitalizations and improve healthcare use in LMICs. © 2017 Wiley Periodicals, Inc.

  1. Access Selection Algorithm of Heterogeneous Wireless Networks for Smart Distribution Grid Based on Entropy-Weight and Rough Set

    Science.gov (United States)

    Xiang, Min; Qu, Qinqin; Chen, Cheng; Tian, Li; Zeng, Lingkang

    2017-11-01

    To improve the reliability of communication services in the smart distribution grid (SDG), an access selection algorithm for heterogeneous wireless networks based on dynamic network status and different service types is proposed. The network performance index values are obtained in real time by a multimode terminal, and the variation trend of the index values is analyzed by the growth matrix. The index weights are calculated by the entropy-weight method and then modified by rough set theory to obtain the final weights. Grey relational analysis is then used to rank the candidate networks, and the optimum communication network is selected. Simulation results show that the proposed algorithm can implement dynamic access selection in heterogeneous wireless networks of the SDG effectively and reduce the network blocking probability.
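
    For concreteness, the sketch below combines the entropy-weight method with grey relational analysis to rank candidate networks from a small decision matrix. The attribute values, the benefit/cost orientation of each column, and the omission of the rough-set weight correction and growth-matrix trend analysis are assumptions of this illustration:

```python
import numpy as np

# Candidate networks (rows) scored on attributes (columns):
# bandwidth (benefit), delay (cost), packet loss (cost), price (cost).
# The numbers are made up for illustration.
X = np.array([
    [10.0, 40.0, 0.02, 5.0],
    [ 2.0, 20.0, 0.01, 2.0],
    [54.0, 80.0, 0.05, 8.0],
])
benefit = np.array([True, False, False, False])

# Normalize so that "larger is better" for every column.
mins, maxs = X.min(axis=0), X.max(axis=0)
R = np.where(benefit, (X - mins) / (maxs - mins), (maxs - X) / (maxs - mins))

# Entropy weights: more discriminating (lower-entropy) attributes get more weight.
P = (R + 1e-12) / (R + 1e-12).sum(axis=0)
k = 1.0 / np.log(X.shape[0])
entropy = -k * (P * np.log(P)).sum(axis=0)
weights = (1 - entropy) / (1 - entropy).sum()

# Grey relational coefficients against the ideal (all-ones) reference sequence.
diff = np.abs(1.0 - R)
rho = 0.5
grc = (diff.min() + rho * diff.max()) / (diff + rho * diff.max())
grade = (grc * weights).sum(axis=1)
print("weights:", np.round(weights, 3))
print("grey relational grades:", np.round(grade, 3), "-> best:", int(grade.argmax()))
```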

  2. A level set approach for shock-induced α-γ phase transition of RDX

    Science.gov (United States)

    Josyula, Kartik; Rahul; De, Suvranu

    2018-02-01

    We present a thermodynamically consistent level set approach based on a regularization energy functional, which can be directly incorporated into a Galerkin finite element framework to model interface motion. The regularization energy leads to a diffusive form of flux that is embedded within the level set evolution equation, which maintains the signed distance property of the level set function. The scheme is shown to compare well with the velocity extension method in capturing the interface position. The proposed level set approach is employed to study the α-γ phase transformation in an RDX single crystal shocked along the (100) plane. Example problems in one and three dimensions are presented. We observe smooth evolution of the phase interface along the shock direction in both models. There is no diffusion of the interface during the zero level set evolution in the three-dimensional model. The level set approach is shown to capture the characteristics of the shock-induced α-γ phase transformation, such as stress relaxation behind the phase interface and the finite time required for the phase transformation to complete. The regularization energy based level set approach is efficient, robust, and easy to implement.

  3. Clustering for Binary Data Sets by Using Genetic Algorithm-Incremental K-means

    Science.gov (United States)

    Saharan, S.; Baragona, R.; Nor, M. E.; Salleh, R. M.; Asrah, N. M.

    2018-04-01

    This research was initially driven by the lack of clustering algorithms that specifically focus on binary data. To overcome this gap in knowledge, a promising technique for analysing this type of data became the main subject of this research, namely Genetic Algorithms (GA). For the purpose of this research, GA was combined with the Incremental K-means (IKM) algorithm to cluster binary data streams. In GAIKM, the objective function is based on a few sufficient statistics that may be easily and quickly calculated on binary numbers. The implementation of IKM gives an advantage in terms of fast convergence. The results show that GAIKM is an efficient and effective new clustering algorithm compared to existing clustering algorithms and to IKM itself. In conclusion, GAIKM outperformed other clustering algorithms such as GCUK, IKM, Scalable K-means (SKM) and K-means clustering, and paves the way for future research involving missing data and outliers.
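
    The sketch below illustrates only the k-means half of the hybrid on binary rows, with real-valued centroids updated as cluster means; the genetic-algorithm layer and the paper's sufficient-statistics objective are not reproduced, and the synthetic data are invented:

```python
import numpy as np

# Simple k-means-style clustering for binary (0/1) rows: distances are
# squared differences to real-valued centroids, and centroids are updated
# to the cluster means (thresholding them would give majority-vote modes).
# The GA layer that seeds and evolves centroids in GAIKM is omitted here.
def binary_kmeans(X, k, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    for _ in range(iters):
        d = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return labels, centroids

rng = np.random.default_rng(1)
block1 = (rng.random((50, 8)) < np.array([0.9] * 4 + [0.1] * 4)).astype(int)
block2 = (rng.random((50, 8)) < np.array([0.1] * 4 + [0.9] * 4)).astype(int)
X = np.vstack([block1, block2])
labels, centroids = binary_kmeans(X, k=2)
print(np.round(centroids, 2))
print("cluster sizes:", np.bincount(labels))
```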

  4. Level Set Approach to Anisotropic Wet Etching of Silicon

    Directory of Open Access Journals (Sweden)

    Branislav Radjenović

    2010-05-01

    In this paper a methodology for the three-dimensional (3D) modeling and simulation of the profile evolution during anisotropic wet etching of silicon based on the level set method is presented. Etching rate anisotropy in silicon is modeled taking into account full silicon symmetry properties, by means of an interpolation technique using experimentally obtained values for the etching rates along thirteen principal and high-index directions in KOH solutions. The resulting level set equations are solved using an open source implementation of the sparse field method (ITK library, developed in the medical image processing community), extended for the case of non-convex Hamiltonians. Simulation results for some interesting initial 3D shapes, as well as some more practical examples illustrating anisotropic etching simulation in the presence of masks (simple square aperture mask, convex corner undercutting and convex corner compensation, formation of suspended structures), are also shown. The obtained results show that the level set method can be used as an effective tool for wet etching process modeling, and that it is a viable alternative to the Cellular Automata method which now prevails in simulations of the wet etching process.

  5. Towards a computational- and algorithmic-level account of concept blending using analogies and amalgams

    Science.gov (United States)

    Besold, Tarek R.; Kühnberger, Kai-Uwe; Plaza, Enric

    2017-10-01

    Concept blending - a cognitive process which allows for the combination of certain elements (and their relations) from originally distinct conceptual spaces into a new unified space combining these previously separate elements, and enables reasoning and inference over the combination - is taken as a key element of creative thought and combinatorial creativity. In this article, we summarise our work towards the development of a computational-level and algorithmic-level account of concept blending, combining approaches from computational analogy-making and case-based reasoning (CBR). We present the theoretical background, as well as an algorithmic proposal integrating higher-order anti-unification matching and generalisation from analogy with amalgams from CBR. The feasibility of the approach is then exemplified in two case studies.

  6. AN IMPROVED FUZZY CLUSTERING ALGORITHM FOR MICROARRAY IMAGE SPOTS SEGMENTATION

    Directory of Open Access Journals (Sweden)

    V.G. Biju

    2015-11-01

    An automatic cDNA microarray image processing method using an improved fuzzy clustering algorithm is presented in this paper. The spot segmentation algorithm proposed uses the gridding technique developed by the authors earlier for finding the co-ordinates of each spot in an image. Automatic cropping of spots from the microarray image is done using these co-ordinates. The present paper proposes an improved fuzzy clustering algorithm, possibility fuzzy local information c-means (PFLICM), to segment the spot foreground (FG) from the background (BG). PFLICM improves the fuzzy local information c-means (FLICM) algorithm by incorporating the typicality of a pixel along with gray level information and local spatial information. The performance of the algorithm is validated using a set of simulated cDNA microarray images added with different levels of AWGN noise. The strength of the algorithm is tested by computing parameters such as the segmentation matching factor (SMF), probability of error (pe), discrepancy distance (D) and normal mean square error (NMSE). The SMF value obtained for the PFLICM algorithm shows an improvement of 0.9 % and 0.7 % for high noise and low noise microarray images respectively compared to the FLICM algorithm. The PFLICM algorithm is also applied to real microarray images and gene expression values are computed.

  7. A Robust Level-Set Algorithm for Centerline Extraction

    NARCIS (Netherlands)

    Telea, Alexandru; Vilanova, Anna

    2003-01-01

    We present a robust method for extracting 3D centerlines from volumetric datasets. We start from a 2D skeletonization method to locate voxels centered with respect to three orthogonal slicing directions. Next, we introduce a new detection criterion to extract the centerline voxels from the above

  8. Music analysis and point-set compression

    DEFF Research Database (Denmark)

    Meredith, David

    2015-01-01

    COSIATEC, SIATECCompress and Forth's algorithm are point-set compression algorithms developed for discovering repeated patterns in music, such as themes and motives that would be of interest to a music analyst. To investigate their effectiveness and versatility, these algorithms were evaluated on three analytical tasks that depend on the discovery of repeated patterns: classifying folk song melodies into tune families, discovering themes and sections in polyphonic music, and discovering subject and countersubject entries in fugues. Each algorithm computes a compressed encoding of a point-set representation of a musical object in the form of a list of compact patterns, each pattern being given with a set of vectors indicating its occurrences. However, the algorithms adopt different strategies in their attempts to discover encodings that maximize compression. The best-performing algorithm on the folk...

  9. Noise properties of the EM algorithm. Pt. 1

    International Nuclear Information System (INIS)

    Barrett, H.H.; Wilson, D.W.; Tsui, B.M.W.

    1994-01-01

    The expectation-maximisation (EM) algorithm is an important tool for maximum-likelihood (ML) estimation and image reconstruction, especially in medical imaging. It is a non-linear iterative algorithm that attempts to find the ML estimate of the object that produced a data set. The convergence of the algorithm and other deterministic properties are well established, but relatively little is known about how noise in the data influences noise in the final reconstructed image. In this paper we present a detailed treatment of these statistical properties. The specific application we have in mind is image reconstruction in emission tomography, but the results are valid for any application of the EM algorithm in which the data set can be described by Poisson statistics. We show that the probability density function for the grey level at a pixel in the image is well approximated by a log-normal law. An expression is derived for the variance of the grey level and for pixel-to-pixel covariance. The variance increases rapidly with iteration number at first, but eventually saturates as the ML estimate is approached. Moreover, the variance at any iteration number has a factor proportional to the square of the mean image (though other factors may also depend on the mean image), so a map of the standard deviation resembles the object itself. Thus low-intensity regions of the image tend to have low noise. (author)
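
    For reference, the EM (MLEM) update for Poisson data has the familiar multiplicative form x_{k+1} = (x_k / A^T 1) · A^T (y / (A x_k)). The small numpy sketch below runs this update on a made-up system matrix, so the iteration-dependent noise behaviour discussed above can be explored empirically (e.g., by repeating it over many noise realizations):

```python
import numpy as np

# MLEM iterations for a Poisson measurement model y ~ Poisson(A @ x_true).
# The system matrix and object are made up; only the update rule is the point.
rng = np.random.default_rng(0)
n_pix, n_det = 32, 64
A = rng.random((n_det, n_pix))
x_true = np.zeros(n_pix)
x_true[10:20] = 50.0                          # a bright bar on a dark background
y = rng.poisson(A @ x_true)

sens = A.T @ np.ones(n_det)                   # sensitivity image A^T 1
x = np.ones(n_pix)                            # positive initial estimate
for _ in range(50):
    ratio = y / np.maximum(A @ x, 1e-12)      # y / (A x_k)
    x = x / sens * (A.T @ ratio)              # multiplicative EM update

print("mean in bright region:", round(float(x[10:20].mean()), 2))
print("mean elsewhere:", round(float(np.delete(x, np.s_[10:20]).mean()), 2))
```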

  10. Two Surface-Tension Formulations For The Level Set Interface-Tracking Method

    International Nuclear Information System (INIS)

    Shepel, S.V.; Smith, B.L.

    2005-01-01

    The paper describes a comparative study of two surface-tension models for the Level Set interface tracking method. In both models, the surface tension is represented as a body force, concentrated near the interface, but the technical implementation of the two options is different. The first is based on a traditional Level Set approach, in which the surface tension is distributed over a narrow band around the interface using a smoothed Delta function. In the second model, which is based on the integral form of the fluid-flow equations, the force is imposed only in those computational cells through which the interface passes. Both models have been incorporated into the Finite-Element/Finite-Volume Level Set method, previously implemented into the commercial Computational Fluid Dynamics (CFD) code CFX-4. A critical evaluation of the two models, undertaken in the context of four standard Level Set benchmark problems, shows that the first model, based on the smoothed Delta function approach, is the more general, and more robust, of the two. (author)

  11. A deep level set method for image segmentation

    OpenAIRE

    Tang, Min; Valipour, Sepehr; Zhang, Zichen Vincent; Cobzas, Dana; Jagersand, Martin

    2017-01-01

    This paper proposes a novel image segmentation approach that integrates fully convolutional networks (FCNs) with a level set model. Compared with an FCN, the integrated method can incorporate smoothing and prior information to achieve an accurate segmentation. Furthermore, rather than using the level set model as a post-processing tool, we integrate it into the training phase to fine-tune the FCN. This allows the use of unlabeled data during training in a semi-supervised setting. Using two types o...

  12. A Harmony Search Algorithm approach for optimizing traffic signal timings

    Directory of Open Access Journals (Sweden)

    Mauro Dell'Orco

    2013-07-01

    In this study, a bi-level formulation is presented for solving the Equilibrium Network Design Problem (ENDP). The optimisation of the signal timing has been carried out at the upper level using the Harmony Search Algorithm (HSA), whilst the traffic assignment has been carried out through the Path Flow Estimator (PFE) at the lower level. The results of the HSA have first been compared with those obtained using the Genetic Algorithm and Hill Climbing on a two-junction network for a fixed set of link flows. Secondly, the HSA with the PFE has been applied to a medium-sized network to show the applicability of the proposed algorithm in solving the ENDP. Additionally, in order to test the sensitivity to perceived travel time error, we have used the HSA with the PFE with various levels of perceived travel time error. The results showed that the proposed method is quite simple and efficient in solving the ENDP.
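
    Readers unfamiliar with the Harmony Search Algorithm may find a stripped-down version useful. The sketch below minimizes a toy two-junction delay function over green splits; the harmony-memory size, HMCR, PAR and bandwidth values are chosen arbitrarily, and there is no traffic-assignment (PFE) step, so it is not the bi-level ENDP solver of the paper:

```python
import numpy as np

# Toy delay function of two green splits g1, g2 in [0.2, 0.8]; purely
# illustrative, not the ENDP objective or the PFE assignment.
def delay(g):
    return (g[0] - 0.55) ** 2 + (g[1] - 0.35) ** 2 + 0.1 * abs(g[0] + g[1] - 1.0)

rng = np.random.default_rng(0)
lo, hi = 0.2, 0.8
hms, hmcr, par, bw, iters = 10, 0.9, 0.3, 0.05, 2000

memory = rng.uniform(lo, hi, size=(hms, 2))          # harmony memory
costs = np.array([delay(h) for h in memory])

for _ in range(iters):
    new = np.empty(2)
    for j in range(2):
        if rng.random() < hmcr:                      # memory consideration
            new[j] = memory[rng.integers(hms), j]
            if rng.random() < par:                   # pitch adjustment
                new[j] += rng.uniform(-bw, bw)
        else:                                        # random selection
            new[j] = rng.uniform(lo, hi)
    new = np.clip(new, lo, hi)
    c = delay(new)
    worst = costs.argmax()
    if c < costs[worst]:                             # replace the worst harmony
        memory[worst], costs[worst] = new, c

best = memory[costs.argmin()]
print("best green splits:", np.round(best, 3), "delay:", round(float(costs.min()), 5))
```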

  13. A LEVEL SET BASED SHAPE OPTIMIZATION METHOD FOR AN ELLIPTIC OBSTACLE PROBLEM

    KAUST Repository

    Burger, Martin

    2011-04-01

    In this paper, we construct a level set method for an elliptic obstacle problem, which can be reformulated as a shape optimization problem. We provide a detailed shape sensitivity analysis for this reformulation and a stability result for the shape Hessian at the optimal shape. Using the shape sensitivities, we construct a geometric gradient flow, which can be realized in the context of level set methods. We prove the convergence of the gradient flow to an optimal shape and provide a complete analysis of the level set method in terms of viscosity solutions. To our knowledge this is the first complete analysis of a level set method for a nonlocal shape optimization problem. Finally, we discuss the implementation of the methods and illustrate its behavior through several computational experiments. © 2011 World Scientific Publishing Company.

  14. Development of Human-level Decision Making Algorithm for NPPs through Deep Neural Networks : Conceptual Approach

    International Nuclear Information System (INIS)

    Kim, Seung Geun; Seong, Poong Hyun

    2017-01-01

    Development of operation support systems and automation systems is closely related to the machine learning field. However, since it is hard to achieve human-level delicacy and flexibility for complex tasks with conventional machine learning technologies, only operation support systems with simple purposes have been developed, and high-level automation studies have not been actively conducted. As one of the efforts toward reducing human error in NPPs and as a technical advance toward automation, the ultimate goal of this research is to develop a human-level decision-making algorithm for NPPs during emergency situations. The concepts of SL, RL, policy networks, value networks, and MCTS, which have been applied to decision-making algorithms in other fields, are introduced and combined with nuclear-field specifications. Since the research is currently at the conceptual stage, more research is warranted.

  15. Automatic Derivation of Statistical Algorithms: The EM Family and Beyond

    OpenAIRE

    Gray, Alexander G.; Fischer, Bernd; Schumann, Johann; Buntine, Wray

    2003-01-01

    Machine learning has reached a point where many probabilistic methods can be understood as variations, extensions and combinations of a much smaller set of abstract themes, e.g., as different instances of the EM algorithm. This enables the systematic derivation of algorithms customized for different models. Here, we describe the AUTOBAYES system which takes a high-level statistical model specification, uses powerful symbolic techniques based on schema-based program synthesis and computer alge...

  16. A fuzzy logic algorithm to assign confidence levels to heart and respiratory rate time series

    International Nuclear Information System (INIS)

    Liu, J; McKenna, T M; Gribok, A; Reifman, J; Beidleman, B A; Tharion, W J

    2008-01-01

    We have developed a fuzzy logic-based algorithm to qualify the reliability of heart rate (HR) and respiratory rate (RR) vital-sign time-series data by assigning a confidence level to the data points while they are measured as a continuous data stream. The algorithm's membership functions are derived from physiology-based performance limits and mass-assignment-based data-driven characteristics of the signals. The assigned confidence levels are based on the reliability of each HR and RR measurement as well as the relationship between them. The algorithm was tested on HR and RR data collected from subjects undertaking a range of physical activities, and it showed acceptable performance in detecting four types of faults that result in low-confidence data points (receiver operating characteristic areas under the curve ranged from 0.67 (SD 0.04) to 0.83 (SD 0.03), mean and standard deviation (SD) over all faults). The algorithm is sensitive to noise in the raw HR and RR data and will flag many data points as low confidence if the data are noisy; prior processing of the data to reduce noise allows identification of only the most substantial faults. Depending on how HR and RR data are processed, the algorithm can be applied as a tool to evaluate sensor performance or to qualify HR and RR time-series data in terms of their reliability before use in automated decision-assist systems
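
    As a rough illustration of the general idea (not the authors' actual membership functions or limits), a single trapezoidal membership function can map each heart-rate sample to a confidence in [0, 1] using assumed physiological limits:

```python
import numpy as np

def trapezoid_membership(x, a, b, c, d):
    """Trapezoidal fuzzy membership: 0 below a and above d, 1 between b and c."""
    x = np.asarray(x, dtype=float)
    rising = np.clip((x - a) / (b - a), 0.0, 1.0)
    falling = np.clip((d - x) / (d - c), 0.0, 1.0)
    return np.minimum(rising, falling)

# Hypothetical physiology-based limits for heart rate in beats per minute.
def hr_confidence(hr):
    return float(trapezoid_membership(hr, a=25, b=40, c=180, d=220))

for hr in [72, 75, 190, 250, 0, 68]:   # simulated HR stream with two implausible values
    print(f"HR = {hr:3d} bpm -> confidence {hr_confidence(hr):.2f}")
```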

  17. Alignment of Custom Standards by Machine Learning Algorithms

    Directory of Open Access Journals (Sweden)

    Adela Sirbu

    2010-09-01

    Full Text Available Building an efficient model for automatic alignment of terminologies would bring a significant improvement to the information retrieval process. We have developed and compared two machine learning based algorithms whose aim is to align two custom standards built on a three-level taxonomy, using kNN and SVM classifiers that work on a vector representation consisting of several similarity measures. The weights utilized by the kNN were optimized with an evolutionary algorithm, while the SVM classifier's hyper-parameters were optimized with a grid search algorithm. The database used for training was semi-automatically obtained using the Coma++ tool. The performance of our aligners is shown by the results obtained on the test set.

  18. Level-set techniques for facies identification in reservoir modeling

    Science.gov (United States)

    Iglesias, Marco A.; McLaughlin, Dennis

    2011-03-01

    In this paper we investigate the application of level-set techniques for facies identification in reservoir models. The identification of facies is a geometrical inverse ill-posed problem that we formulate in terms of shape optimization. The goal is to find a region (a geologic facies) that minimizes the misfit between predicted and measured data from an oil-water reservoir. In order to address the shape optimization problem, we present a novel application of the level-set iterative framework developed by Burger in (2002 Interfaces Free Bound. 5 301-29 2004 Inverse Problems 20 259-82) for inverse obstacle problems. The optimization is constrained by (the reservoir model) a nonlinear large-scale system of PDEs that describes the reservoir dynamics. We reformulate this reservoir model in a weak (integral) form whose shape derivative can be formally computed from standard results of shape calculus. At each iteration of the scheme, the current estimate of the shape derivative is utilized to define a velocity in the level-set equation. The proper selection of this velocity ensures that the new shape decreases the cost functional. We present results of facies identification where the velocity is computed with the gradient-based (GB) approach of Burger (2002) and the Levenberg-Marquardt (LM) technique of Burger (2004). While an adjoint formulation allows the straightforward application of the GB approach, the LM technique requires the computation of the large-scale Karush-Kuhn-Tucker system that arises at each iteration of the scheme. We efficiently solve this system by means of the representer method. We present some synthetic experiments to show and compare the capabilities and limitations of the proposed implementations of level-set techniques for the identification of geologic facies.

  19. Level-set techniques for facies identification in reservoir modeling

    International Nuclear Information System (INIS)

    Iglesias, Marco A; McLaughlin, Dennis

    2011-01-01

    In this paper we investigate the application of level-set techniques for facies identification in reservoir models. The identification of facies is a geometrical inverse ill-posed problem that we formulate in terms of shape optimization. The goal is to find a region (a geologic facies) that minimizes the misfit between predicted and measured data from an oil–water reservoir. In order to address the shape optimization problem, we present a novel application of the level-set iterative framework developed by Burger in (2002 Interfaces Free Bound. 5 301–29; 2004 Inverse Problems 20 259–82) for inverse obstacle problems. The optimization is constrained by (the reservoir model) a nonlinear large-scale system of PDEs that describes the reservoir dynamics. We reformulate this reservoir model in a weak (integral) form whose shape derivative can be formally computed from standard results of shape calculus. At each iteration of the scheme, the current estimate of the shape derivative is utilized to define a velocity in the level-set equation. The proper selection of this velocity ensures that the new shape decreases the cost functional. We present results of facies identification where the velocity is computed with the gradient-based (GB) approach of Burger (2002) and the Levenberg–Marquardt (LM) technique of Burger (2004). While an adjoint formulation allows the straightforward application of the GB approach, the LM technique requires the computation of the large-scale Karush–Kuhn–Tucker system that arises at each iteration of the scheme. We efficiently solve this system by means of the representer method. We present some synthetic experiments to show and compare the capabilities and limitations of the proposed implementations of level-set techniques for the identification of geologic facies

  20. A hybrid interface tracking - level set technique for multiphase flow with soluble surfactant

    Science.gov (United States)

    Shin, Seungwon; Chergui, Jalel; Juric, Damir; Kahouadji, Lyes; Matar, Omar K.; Craster, Richard V.

    2018-04-01

    A formulation for soluble surfactant transport in multiphase flows recently presented by Muradoglu and Tryggvason (JCP 274 (2014) 737-757) [17] is adapted to the context of the Level Contour Reconstruction Method, LCRM, (Shin et al. IJNMF 60 (2009) 753-778, [8]) which is a hybrid method that combines the advantages of the Front-tracking and Level Set methods. Particularly close attention is paid to the formulation and numerical implementation of the surface gradients of surfactant concentration and surface tension. Various benchmark tests are performed to demonstrate the accuracy of different elements of the algorithm. To verify surfactant mass conservation, values for surfactant diffusion along the interface are compared with the exact solution for the problem of uniform expansion of a sphere. The numerical implementation of the discontinuous boundary condition for the source term in the bulk concentration is compared with the approximate solution. Surface tension forces are tested for Marangoni drop translation. Our numerical results for drop deformation in simple shear are compared with experiments and results from previous simulations. All benchmarking tests compare well with existing data thus providing confidence that the adapted LCRM formulation for surfactant advection and diffusion is accurate and effective in three-dimensional multiphase flows with a structured mesh. We also demonstrate that this approach applies easily to massively parallel simulations.

  1. PERFORMANCE ANALYSIS OF SET PARTITIONING IN HIERARCHICAL TREES (SPIHT ALGORITHM FOR A FAMILY OF WAVELETS USED IN COLOR IMAGE COMPRESSION

    Directory of Open Access Journals (Sweden)

    A. Sreenivasa Murthy

    2014-11-01

    Full Text Available With the spurt in the amount of data (image, video, audio, speech, and text) available on the net, there is a huge demand for memory and bandwidth savings. One has to achieve this while maintaining the quality and fidelity of the data at a level acceptable to the end user. The wavelet transform is an important and practical tool for data compression. Set partitioning in hierarchical trees (SPIHT) is a widely used compression algorithm for wavelet-transformed images. Among all wavelet transform and zero-tree quantization based image compression algorithms, SPIHT has become the benchmark state-of-the-art algorithm because it is simple to implement and yields good results. In this paper we present a comparative study of various wavelet families for image compression with the SPIHT algorithm. We have conducted experiments with Daubechies, Coiflet, Symlet, Bi-orthogonal, Reverse Bi-orthogonal and Demeyer wavelet types. The resulting image quality is measured objectively, using peak signal-to-noise ratio (PSNR), and subjectively, using perceived image quality (human visual perception, HVP for short). The resulting reduction in the image size is quantified by the compression ratio (CR).
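
    The two objective and size metrics quoted above are straightforward to reproduce; a minimal sketch with a synthetic 8-bit image follows (the image and the byte counts are placeholders, not the paper's data).

```python
import numpy as np

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB for 8-bit images."""
    mse = np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def compression_ratio(original_bytes, compressed_bytes):
    return original_bytes / compressed_bytes

rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)   # stand-in test image
decoded = np.clip(img.astype(int) + rng.integers(-4, 5, size=img.shape), 0, 255).astype(np.uint8)

print(f"PSNR = {psnr(img, decoded):.2f} dB")
print(f"CR   = {compression_ratio(64 * 64, 1024):.1f} : 1")   # e.g. 4096 bytes coded into 1024 bytes
```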

  2. A LEVEL SET BASED SHAPE OPTIMIZATION METHOD FOR AN ELLIPTIC OBSTACLE PROBLEM

    KAUST Repository

    Burger, Martin; Matevosyan, Norayr; Wolfram, Marie-Therese

    2011-01-01

    analysis of the level set method in terms of viscosity solutions. To our knowledge this is the first complete analysis of a level set method for a nonlocal shape optimization problem. Finally, we discuss the implementation of the methods and illustrate its

  3. Knowledge Reduction Based on Divide and Conquer Method in Rough Set Theory

    Directory of Open Access Journals (Sweden)

    Feng Hu

    2012-01-01

    Full Text Available The divide and conquer method is a typical granular computing method using multiple levels of abstraction and granulations. So far, although some achievements based on the divide and conquer method in rough set theory have been acquired, systematic methods for knowledge reduction based on the divide and conquer method are still absent. In this paper, knowledge reduction approaches based on the divide and conquer method, under the equivalence relation and under the tolerance relation, are presented, respectively. After that, a systematic approach, named the abstract process for knowledge reduction based on the divide and conquer method in rough set theory, is proposed. Based on the presented approach, two algorithms for knowledge reduction, including an algorithm for attribute reduction and an algorithm for attribute value reduction, are presented. Some experimental evaluations are done to test the methods on UCI data sets and KDDCUP99 data sets. The experimental results illustrate that the proposed approaches are efficient at processing large data sets with good recognition rates, compared with KNN, SVM, C4.5, Naive Bayes, and CART.

  4. Test of TEDA, Tsunami Early Detection Algorithm

    Science.gov (United States)

    Bressan, Lidia; Tinti, Stefano

    2010-05-01

    Tsunami detection in real time, both offshore and at the coastline, plays a key role in Tsunami Warning Systems, since it provides so far the only reliable and timely proof of tsunami generation, and is used to confirm or cancel tsunami warnings previously issued on the basis of seismic data alone. Moreover, in the case of submarine or coastal landslide generated tsunamis, which are not announced by clear seismic signals and are typically local, real-time detection at the coastline might be the fastest way to release a warning, even if the useful time for emergency operations might be limited. TEDA is an algorithm for real-time detection of tsunami signals in sea-level records, developed by the Tsunami Research Team of the University of Bologna. The development and testing of the algorithm has been accomplished within the framework of the Italian national project DPC-INGV S3 and the European project TRANSFER. The algorithm is to be implemented at station level, and it is therefore based only on sea-level data of a single station, either a coastal tide-gauge or an offshore buoy. TEDA's principle is to discriminate the first tsunami wave from the previous background signal, which implies the assumption that the tsunami waves introduce a difference in the previous sea-level signal. Therefore, in TEDA the instantaneous (most recent) and the previous background sea-level elevation gradients are characterized and compared by proper functions (IS and BS) that are updated at every new data acquisition. Detection is triggered when the instantaneous signal function passes a set threshold and is at the same time significantly bigger than the previous background signal. The functions IS and BS depend on temporal parameters that allow the algorithm to be adapted to different situations: in general, coastal tide-gauges have a typical background spectrum depending on the location where the instrument is installed, due to local topography and bathymetry, while offshore buoys are
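
    A toy sketch of the stated detection principle is given below, with simple stand-ins for the instantaneous and background functions (short- and long-window mean gradients) and arbitrary thresholds; the actual TEDA functions and parameters are not reproduced here.

```python
import numpy as np

def detect_tsunami_like(signal, short_win=5, long_win=60, threshold=0.08, ratio=3.0):
    """Flag samples where the recent sea-level gradient exceeds an absolute threshold
    and is also much larger than the preceding background gradient."""
    alarms = []
    grad = np.abs(np.diff(signal))
    for t in range(long_win + short_win, len(grad)):
        inst = grad[t - short_win:t].mean()                          # IS-like function
        back = grad[t - short_win - long_win:t - short_win].mean()   # BS-like function
        if inst > threshold and inst > ratio * max(back, 1e-9):
            alarms.append(t)
    return alarms

# Synthetic record: 30 s sampling, a tidal component, noise, and a ramp-like anomaly at sample 80.
rng = np.random.default_rng(2)
t = np.arange(0, 3600, 30.0)
sea_level = 0.5 * np.sin(2 * np.pi * t / 43200) + 0.02 * rng.standard_normal(t.size)
sea_level[80:] += 1.5 * (1.0 - np.exp(-(t[80:] - t[80]) / 300.0))

alarms = detect_tsunami_like(sea_level)
print("first alarm at sample index:", alarms[0] if alarms else None)
```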

  5. Improving probe set selection for microbial community analysis by leveraging taxonomic information of training sequences

    Directory of Open Access Journals (Sweden)

    Jiang Tao

    2011-10-01

    Full Text Available Abstract Background Population levels of microbial phylotypes can be examined using a hybridization-based method that utilizes a small set of computationally-designed DNA probes targeted to a gene common to all. Our previous algorithm attempts to select a set of probes such that each training sequence manifests a unique theoretical hybridization pattern (a binary fingerprint) to a probe set. It does so without taking into account similarity between training gene sequences or their putative taxonomic classifications, however. We present an improved algorithm for probe set selection that utilizes the available taxonomic information of training gene sequences and attempts to choose probes such that the resultant binary fingerprints cluster into real taxonomic groups. Results Gene sequences manifesting identical fingerprints with probes chosen by the new algorithm are more likely to be from the same taxonomic group than probes chosen by the previous algorithm. In cases where they are from different taxonomic groups, underlying DNA sequences of identical fingerprints are more similar to each other in probe sets made with the new versus the previous algorithm. Complete removal of large taxonomic groups from training data does not greatly decrease the ability of probe sets to distinguish those groups. Conclusions Probe sets made from the new algorithm create fingerprints that more reliably cluster into biologically meaningful groups. The method can readily distinguish microbial phylotypes that were excluded from the training sequences, suggesting novel microbes can also be detected.

  6. Improving probe set selection for microbial community analysis by leveraging taxonomic information of training sequences.

    Science.gov (United States)

    Ruegger, Paul M; Della Vedova, Gianluca; Jiang, Tao; Borneman, James

    2011-10-10

    Population levels of microbial phylotypes can be examined using a hybridization-based method that utilizes a small set of computationally-designed DNA probes targeted to a gene common to all. Our previous algorithm attempts to select a set of probes such that each training sequence manifests a unique theoretical hybridization pattern (a binary fingerprint) to a probe set. It does so without taking into account similarity between training gene sequences or their putative taxonomic classifications, however. We present an improved algorithm for probe set selection that utilizes the available taxonomic information of training gene sequences and attempts to choose probes such that the resultant binary fingerprints cluster into real taxonomic groups. Gene sequences manifesting identical fingerprints with probes chosen by the new algorithm are more likely to be from the same taxonomic group than probes chosen by the previous algorithm. In cases where they are from different taxonomic groups, underlying DNA sequences of identical fingerprints are more similar to each other in probe sets made with the new versus the previous algorithm. Complete removal of large taxonomic groups from training data does not greatly decrease the ability of probe sets to distinguish those groups. Probe sets made from the new algorithm create fingerprints that more reliably cluster into biologically meaningful groups. The method can readily distinguish microbial phylotypes that were excluded from the training sequences, suggesting novel microbes can also be detected.

  7. Evaluation of Parallel Level Sets and Bowsher's Method as Segmentation-Free Anatomical Priors for Time-of-Flight PET Reconstruction.

    Science.gov (United States)

    Schramm, Georg; Holler, Martin; Rezaei, Ahmadreza; Vunckx, Kathleen; Knoll, Florian; Bredies, Kristian; Boada, Fernando; Nuyts, Johan

    2018-02-01

    In this article, we evaluate Parallel Level Sets (PLS) and Bowsher's method as segmentation-free anatomical priors for regularized brain positron emission tomography (PET) reconstruction. We derive the proximity operators for two PLS priors and use the EM-TV algorithm in combination with the first order primal-dual algorithm by Chambolle and Pock to solve the non-smooth optimization problem for PET reconstruction with PLS regularization. In addition, we compare the performance of two PLS versions against the symmetric and asymmetric Bowsher priors with quadratic and relative difference penalty function. For this aim, we first evaluate reconstructions of 30 noise realizations of simulated PET data derived from a real hybrid positron emission tomography/magnetic resonance imaging (PET/MR) acquisition in terms of regional bias and noise. Second, we evaluate reconstructions of a real brain PET/MR data set acquired on a GE Signa time-of-flight PET/MR in a similar way. The reconstructions of simulated and real 3D PET/MR data show that all priors were superior to post-smoothed maximum likelihood expectation maximization with ordered subsets (OSEM) in terms of bias-noise characteristics in different regions of interest where the PET uptake follows anatomical boundaries. Our implementation of the asymmetric Bowsher prior showed slightly superior performance compared with the two versions of PLS and the symmetric Bowsher prior. At very high regularization weights, all investigated anatomical priors suffer from the transfer of non-shared gradients.

  8. Algorithms and programs for processing of satellite data on ozone layer and UV radiation levels

    International Nuclear Information System (INIS)

    Borkovskij, N.B.; Ivanyukovich, V.A.

    2012-01-01

    Some algorithms and programs for automatic retrieval and processing of ozone layer satellite data are discussed. These techniques are used for reliable short-term forecasting of UV radiation levels. (authors)

  9. AeroADL: applying the integration of the Suomi-NPP science algorithms with the Algorithm Development Library to the calibration and validation task

    Science.gov (United States)

    Houchin, J. S.

    2014-09-01

    A common problem for the off-line validation of the calibration algorithms and algorithm coefficients is being able to run science data through the exact same software used for on-line calibration of that data. The Joint Polar Satellite System (JPSS) program solved part of this problem by making the Algorithm Development Library (ADL) available, which allows the operational algorithm code to be compiled and run on a desktop Linux workstation using flat file input and output. However, this solved only part of the problem, as the toolkit and methods to initiate the processing of data through the algorithms were geared specifically toward the algorithm developer, not the calibration analyst. In algorithm development mode, a limited number of sets of test data are staged for the algorithm once, and then run through the algorithm over and over as the software is developed and debugged. In calibration analyst mode, we are continually running new data sets through the algorithm, which requires significant effort to stage each of those data sets for the algorithm without additional tools. AeroADL solves this second problem by providing a set of scripts that wrap the ADL tools, providing efficient means to stage and process an input data set, to override static calibration coefficient look-up tables (LUTs) with experimental versions of those tables, and to manage a library containing multiple versions of each of the static LUT files in such a way that the correct set of LUTs required for each algorithm is automatically provided to the algorithm without analyst effort. Using AeroADL, The Aerospace Corporation's analyst team has demonstrated the ability to quickly and efficiently perform analysis tasks for both the VIIRS and OMPS sensors with minimal training on the software tools.

  10. Automated Vectorization of Decision-Based Algorithms

    Science.gov (United States)

    James, Mark

    2006-01-01

    Virtually all existing vectorization algorithms are designed to only analyze the numeric properties of an algorithm and distribute those elements across multiple processors. This advances the state of the practice because it is the only known system, at the time of this reporting, that takes high-level statements and analyzes them for their decision properties and converts them to a form that allows them to automatically be executed in parallel. The software takes a high-level source program that describes a complex decision- based condition and rewrites it as a disjunctive set of component Boolean relations that can then be executed in parallel. This is important because parallel architectures are becoming more commonplace in conventional systems and they have always been present in NASA flight systems. This technology allows one to take existing condition-based code and automatically vectorize it so it naturally decomposes across parallel architectures.

  11. Improving Accuracy and Simplifying Training in Fingerprinting-Based Indoor Location Algorithms at Room Level

    Directory of Open Access Journals (Sweden)

    Mario Muñoz-Organero

    2016-01-01

    Full Text Available Fingerprinting-based algorithms are popular in indoor location systems based on mobile devices. By comparing the RSSI (Received Signal Strength Indicator) from different radio wave transmitters, such as Wi-Fi access points, with prerecorded fingerprints from located points (using different artificial intelligence algorithms), fingerprinting-based systems can locate unknown points with a resolution of a few meters. However, training the system with already located fingerprints tends to be an expensive task both in time and in resources, especially if large areas are to be considered. Moreover, the decision algorithms tend to be highly memory and CPU consuming in such cases, and so does the required time for obtaining the estimated location for a new fingerprint. In this paper, we study, propose, and validate a way to select the locations for the training fingerprints which reduces the amount of required points while improving the accuracy of the algorithms when locating points at room-level resolution. We present a comparison of different artificial intelligence decision algorithms and select those with better results. We make a comparison with other systems in the literature and draw conclusions about the improvements obtained in our proposal. Moreover, some techniques such as filtering nonstable access points for improving accuracy are introduced, studied, and validated.
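
    A compact sketch of room-level fingerprinting with a kNN decision rule follows, assuming a tiny hand-made training set of RSSI vectors (in dBm) from three access points; the values, room labels, and k are purely illustrative.

```python
import numpy as np
from collections import Counter

# Hypothetical training fingerprints: RSSI from 3 access points (dBm) -> room label.
train_X = np.array([
    [-40, -70, -80], [-42, -68, -82],   # room A
    [-75, -45, -78], [-73, -48, -76],   # room B
    [-80, -79, -42], [-82, -77, -45],   # room C
], dtype=float)
train_y = ["A", "A", "B", "B", "C", "C"]

def knn_room(rssi, k=3):
    """Classify a new RSSI fingerprint by majority vote among the k nearest training points."""
    dists = np.linalg.norm(train_X - np.asarray(rssi, dtype=float), axis=1)
    nearest = np.argsort(dists)[:k]
    votes = Counter(train_y[i] for i in nearest)
    return votes.most_common(1)[0][0]

print(knn_room([-41, -69, -81]))   # expected: "A"
print(knn_room([-78, -78, -44]))   # expected: "C"
```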

  12. A novel automated spike sorting algorithm with adaptable feature extraction.

    Science.gov (United States)

    Bestel, Robert; Daus, Andreas W; Thielemann, Christiane

    2012-10-15

    To study the electrophysiological properties of neuronal networks, in vitro studies based on microelectrode arrays have become a viable tool for analysis. Although in constant progress, a challenging task still remains in this area: the development of an efficient spike sorting algorithm that allows an accurate signal analysis at the single-cell level. Most sorting algorithms currently available only extract a specific feature type, such as the principal components or Wavelet coefficients of the measured spike signals in order to separate different spike shapes generated by different neurons. However, due to the great variety in the obtained spike shapes, the derivation of an optimal feature set is still a very complex issue that current algorithms struggle with. To address this problem, we propose a novel algorithm that (i) extracts a variety of geometric, Wavelet and principal component-based features and (ii) automatically derives a feature subset, most suitable for sorting an individual set of spike signals. Thus, there is a new approach that evaluates the probability distribution of the obtained spike features and consequently determines the candidates most suitable for the actual spike sorting. These candidates can be formed into an individually adjusted set of spike features, allowing a separation of the various shapes present in the obtained neuronal signal by a subsequent expectation maximisation clustering algorithm. Test results with simulated data files and data obtained from chick embryonic neurons cultured on microelectrode arrays showed an excellent classification result, indicating the superior performance of the described algorithm approach. Copyright © 2012 Elsevier B.V. All rights reserved.

  13. SU-D-202-04: Validation of Deformable Image Registration Algorithms for Head and Neck Adaptive Radiotherapy in Routine Clinical Setting

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, L; Pi, Y; Chen, Z; Xu, X [University of Science and Technology of China, Hefei, Anhui (China); Wang, Z [University of Science and Technology of China, Hefei, Anhui (China); The First Affiliated Hospital of Anhui Medical University, Hefei, Anhui (China); Shi, C [Saint Vincent Medical Center, Bridgeport, CT (United States); Long, T; Luo, W; Wang, F [The First Affiliated Hospital of Anhui Medical University, Hefei, Anhui (China)

    2016-06-15

    Purpose: To evaluate the ROI contour and accumulated dose differences obtained with different deformable image registration (DIR) algorithms for head and neck (H&N) adaptive radiotherapy. Methods: Eight H&N cancer patients were randomly selected from the affiliated hospital. During the treatment, patients were rescanned every week, with ROIs delineated by a radiation oncologist on each weekly CT. New weekly treatment plans were also re-designed with a consistent dose prescription on the rescanned CT and executed for one week on a Siemens CT-on-rails accelerator. In the end, we obtained six weekly CT scans from CT1 to CT6, including six weekly treatment plans, for each patient. The primary CT1 was set as the reference CT for DIR with the remaining five weekly CTs, using the ANACONDA and MORFEUS algorithms separately in RayStation, with the external skin ROI set as the controlling ROI in both. The calculated weekly doses were deformed and accumulated on the corresponding reference CT1 according to the deformation vector fields (DVFs) generated by the two different DIR algorithms, respectively. Thus we obtained both the ANACONDA-based and MORFEUS-based accumulated total dose on CT1 for each patient. At the same time, we mapped the ROIs on CT1 to generate the corresponding ROIs on CT6 using the ANACONDA and MORFEUS DIR algorithms. DICE coefficients between the DIR-deformed and radiation-oncologist-delineated ROIs on CT6 were calculated. Results: For the DIR accumulated dose, PTV D95 and Left-Eyeball Dmax show significant differences of 67.13 cGy and 109.29 cGy, respectively (Table 1). For the DIR-mapped ROIs, PTV, Spinal cord and Left-Optic nerve show differences of −0.025, −0.127 and −0.124 (Table 2). Conclusion: Even two excellent DIR algorithms can give divergent results for ROI deformation and dose accumulation. As more and more TPSs get a DIR module integrated, there is an urgent need to recognize the potential risks of using DIR in clinical practice.

  14. A robust firearm identification algorithm of forensic ballistics specimens

    Science.gov (United States)

    Chuan, Z. L.; Jemain, A. A.; Liong, C.-Y.; Ghani, N. A. M.; Tan, L. K.

    2017-09-01

    There are several inherent difficulties in the existing firearm identification algorithms, including the need for physical interpretation and their time-consuming nature. Therefore, the aim of this study is to propose a robust algorithm for firearm identification based on extracting a set of informative features from the segmented region of interest (ROI) of simulated noisy center-firing pin impression images. The proposed algorithm comprises a Laplacian sharpening filter, clustering-based threshold selection, an unweighted least square estimator, and segmentation of a square ROI from the noisy images. A total of 250 simulated noisy images collected from five different pistols of the same make, model and caliber are used to evaluate the robustness of the proposed algorithm. This study found that the proposed algorithm is able to perform the identification task on noisy images with noise levels as high as 70%, while maintaining a firearm identification accuracy rate of over 90%.
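
    As an illustration of the pre-processing steps named above (Laplacian sharpening followed by a clustering-style threshold), here is a small sketch using SciPy on a synthetic noisy image; the image and the simple two-means threshold are assumptions, not the authors' exact pipeline.

```python
import numpy as np
from scipy import ndimage

def laplacian_sharpen(img, weight=1.0):
    """Sharpen by subtracting a weighted Laplacian (standard unsharp-style operation)."""
    return img - weight * ndimage.laplace(img.astype(float))

def two_means_threshold(values, iters=20):
    """Simple 2-class (k-means style) threshold on pixel intensities."""
    t = float(values.mean())
    for _ in range(iters):
        lo, hi = values[values <= t], values[values > t]
        if lo.size == 0 or hi.size == 0:
            break
        t = 0.5 * (lo.mean() + hi.mean())
    return t

rng = np.random.default_rng(3)
img = rng.normal(100, 5, size=(64, 64))
img[24:40, 24:40] += 60                       # bright square standing in for a firing-pin mark
img += rng.normal(0, 20, size=img.shape)      # heavy additive noise

sharp = laplacian_sharpen(img, weight=0.5)
mask = sharp > two_means_threshold(sharp.ravel())
print("segmented pixels:", int(mask.sum()))
```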

  15. Application of smart transmitter technology in nuclear engineering measurements with level detection algorithm

    International Nuclear Information System (INIS)

    Kang, Hyun Gook; Seong, Poong Hyun

    1994-01-01

    In this study a programmable smart transmitter is designed and applied to the nuclear engineering measurements. In order to apply the smart transmitter technology to nuclear engineering measurements, the water level detection function is developed and applied in this work. In the real time system, the application of level detection algorithm can make the operator of the nuclear power plant sense the water level more rapidly. Furthermore this work can simplify the data communication between the level-sensing thermocouples and the main signal processor because the level signal is determined at field. The water level detection function reduces the detection time to about 8.3 seconds by processing the signal which has the time constant 250 seconds and the heavy noise signal

  16. A parametric level-set approach for topology optimization of flow domains

    DEFF Research Database (Denmark)

    Pingen, Georg; Waidmann, Matthias; Evgrafov, Anton

    2010-01-01

    of the design variables in the traditional approaches is seen as a possible cause for the slow convergence. Non-smooth material distributions are suspected to trigger premature onset of instationary flows which cannot be treated by steady-state flow models. In the present work, we study whether the convergence...... and the versatility of topology optimization methods for fluidic systems can be improved by employing a parametric level-set description. In general, level-set methods allow controlling the smoothness of boundaries, yield a non-local influence of design variables, and decouple the material description from the flow...... field discretization. The parametric level-set method used in this study utilizes a material distribution approach to represent flow boundaries, resulting in a non-trivial mapping between design variables and local material properties. Using a hydrodynamic lattice Boltzmann method, we study...

  17. GPS-Free Localization Algorithm for Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Lei Wang

    2010-06-01

    Full Text Available Localization is one of the most fundamental problems in wireless sensor networks, since the locations of the sensor nodes are critical to both network operations and most application level tasks. A GPS-free localization scheme for wireless sensor networks is presented in this paper. First, we develop a standardized clustering-based approach for the local coordinate system formation wherein a multiplication factor is introduced to regulate the number of master and slave nodes and the degree of connectivity among master nodes. Second, using homogeneous coordinates, we derive a transformation matrix between two Cartesian coordinate systems to efficiently merge them into a global coordinate system and effectively overcome the flip ambiguity problem. The algorithm operates asynchronously without a centralized controller and does not require that the location of the sensors be known a priori. A set of parameter-setting guidelines for the proposed algorithm is derived based on a probability model and the energy requirements are also investigated. A simulation analysis on a specific numerical example is conducted to validate the mathematical analytical results. We also compare the performance of the proposed algorithm under a variety of multiplication factor, node density, and node communication radius scenarios. Experiments show that our algorithm outperforms existing mechanisms in terms of accuracy and convergence time.
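
    The coordinate-merging step can be sketched as follows, assuming two 2D local coordinate systems that share a few common nodes: a rigid transform (rotation plus translation) is estimated in the least-squares sense and expressed as a 3x3 homogeneous matrix. The point sets are made up for illustration.

```python
import numpy as np

def rigid_transform_2d(P, Q):
    """Homogeneous 3x3 matrix T such that Q ≈ T @ [P; 1] (Kabsch/Procrustes estimate)."""
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                  # guard against a reflection (flip ambiguity)
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cq - R @ cp
    T = np.eye(3)
    T[:2, :2], T[:2, 2] = R, t
    return T

# Common nodes known in coordinate system 1, and the same nodes as seen in system 2.
theta, shift = np.deg2rad(30), np.array([5.0, -2.0])
R_true = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
P = np.array([[0, 0], [4, 0], [4, 3], [1, 2]], float)
Q = P @ R_true.T + shift

T = rigid_transform_2d(P, Q)
P_h = np.c_[P, np.ones(len(P))]               # homogeneous coordinates
print(np.allclose((T @ P_h.T).T[:, :2], Q))   # True: system 1 merged into system 2
```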

  18. Learning-based meta-algorithm for MRI brain extraction.

    Science.gov (United States)

    Shi, Feng; Wang, Li; Gilmore, John H; Lin, Weili; Shen, Dinggang

    2011-01-01

    The multiple-segmentation-and-fusion method has been widely used for brain extraction, tissue segmentation, and region of interest (ROI) localization. However, such studies are hindered in practice by their computational complexity, mainly coming from the steps of template selection and template-to-subject nonlinear registration. In this study, we address these two issues and propose a novel learning-based meta-algorithm for MRI brain extraction. Specifically, we first use exemplars to represent the entire template library, and assign the most similar exemplar to the test subject. Second, a meta-algorithm combining two existing brain extraction algorithms (BET and BSE) is proposed to conduct multiple extractions directly on the test subject. Effective parameter settings for the meta-algorithm are learned from the training data and propagated to the subject through exemplars. We further develop a level-set based fusion method to combine multiple candidate extractions together with a closed smooth surface, for obtaining the final result. Experimental results show that, with only a small portion of subjects for training, the proposed method is able to produce more accurate and robust brain extraction results, at a Jaccard index of 0.956 +/- 0.010 over a total of 340 subjects under 6-fold cross-validation, compared to those by the BET and BSE even using their best parameter combinations.
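
    The reported overlap measure is simple to reproduce; below is a minimal sketch of the Jaccard index between two binary masks, using random toy masks rather than the study's data.

```python
import numpy as np

def jaccard_index(mask_a, mask_b):
    """Intersection over union of two boolean masks."""
    a, b = np.asarray(mask_a, bool), np.asarray(mask_b, bool)
    union = np.logical_or(a, b).sum()
    return 1.0 if union == 0 else np.logical_and(a, b).sum() / union

rng = np.random.default_rng(4)
reference = rng.random((32, 32, 32)) > 0.5            # stand-in "ground truth" extraction
candidate = reference.copy()
flip = rng.random(reference.shape) < 0.02             # perturb 2% of the voxels
candidate[flip] = ~candidate[flip]
print(f"Jaccard index: {jaccard_index(reference, candidate):.3f}")
```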

  19. Shuffled Frog Leaping Algorithm for Preemptive Project Scheduling Problems with Resource Vacations Based on Patterson Set

    Directory of Open Access Journals (Sweden)

    Yi Han

    2013-01-01

    Full Text Available This paper presents a shuffled frog leaping algorithm (SFLA) for the single-mode resource-constrained project scheduling problem where activities can be divided into equal units and interrupted during processing. Each activity consumes 0–3 types of resources which are renewable and temporarily not available due to resource vacations in each period. The presence of scarce resources and precedence relations between activities makes project scheduling a difficult and important task in project management. A recently popular metaheuristic, the shuffled frog leaping algorithm, which is inspired by the predatory habits of a frog group in a small pond, is adopted to investigate project makespan improvement on the Patterson benchmark set, which is composed of different small and medium size projects. Computational results demonstrate the effectiveness and efficiency of the SFLA in reducing project makespan and minimizing the activity splitting number within an average CPU runtime of 0.521 seconds. This paper exposes all the scheduling sequences for each project and shows that some of the 23 best known solutions have been improved.

  20. Expediting Combinatorial Data Set Analysis by Combining Human and Algorithmic Analysis.

    Science.gov (United States)

    Stein, Helge Sören; Jiao, Sally; Ludwig, Alfred

    2017-01-09

    A challenge in combinatorial materials science remains the efficient analysis of X-ray diffraction (XRD) data and its correlation to functional properties. Rapid identification of phase-regions and proper assignment of corresponding crystal structures is necessary to keep pace with the improved methods for synthesizing and characterizing materials libraries. Therefore, a new modular software called htAx (high-throughput analysis of X-ray and functional properties data) is presented that couples human intelligence tasks used for "ground-truth" phase-region identification with subsequent unbiased verification by an algorithm to efficiently analyze which phases are present in a materials library. Identified phases and phase-regions may then be correlated to functional properties in an expedited manner. For the functionality of htAx to be proven, two previously published XRD benchmark data sets of the materials systems Al-Cr-Fe-O and Ni-Ti-Cu are analyzed by htAx. The analysis of ∼1000 XRD patterns takes less than 1 day with htAx. The proposed method reliably identifies phase-region boundaries and robustly identifies multiphase structures. The method also addresses the problem of identifying regions with previously unpublished crystal structures using a special daisy ternary plot.

  1. TES Level 1 Algorithms: Interferogram Processing, Geolocation, Radiometric, and Spectral Calibration

    Science.gov (United States)

    Worden, Helen; Beer, Reinhard; Bowman, Kevin W.; Fisher, Brendan; Luo, Mingzhao; Rider, David; Sarkissian, Edwin; Tremblay, Denis; Zong, Jia

    2006-01-01

    The Tropospheric Emission Spectrometer (TES) on the Earth Observing System (EOS) Aura satellite measures the infrared radiance emitted by the Earth's surface and atmosphere using Fourier transform spectrometry. The measured interferograms are converted into geolocated, calibrated radiance spectra by the L1 (Level 1) processing, and are the inputs to L2 (Level 2) retrievals of atmospheric parameters, such as vertical profiles of trace gas abundance. We describe the algorithmic components of TES Level 1 processing, giving examples of the intermediate results and diagnostics that are necessary for creating TES L1 products. An assessment of noise-equivalent spectral radiance levels and current systematic errors is provided. As an initial validation of our spectral radiances, TES data are compared to the Atmospheric Infrared Sounder (AIRS) (on EOS Aqua), after accounting for spectral resolution differences by applying the AIRS spectral response function to the TES spectra. For the TES L1 nadir data products currently available, the agreement with AIRS is 1 K or better.

  2. Hierarchical Artificial Bee Colony Algorithm for RFID Network Planning Optimization

    Directory of Open Access Journals (Sweden)

    Lianbo Ma

    2014-01-01

    Full Text Available This paper presents a novel optimization algorithm, namely, hierarchical artificial bee colony optimization, called HABC, to tackle the radio frequency identification network planning (RNP) problem. In the proposed multilevel model, the higher-level species can be aggregated by the subpopulations from lower level. In the bottom level, each subpopulation employing the canonical ABC method searches the part-dimensional optimum in parallel, which can be constructed into a complete solution for the upper level. At the same time, the comprehensive learning method with crossover and mutation operators is applied to enhance the global search ability between species. Experiments are conducted on a set of 10 benchmark optimization problems. The results demonstrate that the proposed HABC obtains remarkable performance on most chosen benchmark functions when compared to several successful swarm intelligence and evolutionary algorithms. Then HABC is used for solving the real-world RNP problem on two instances with different scales. Simulation results show that the proposed algorithm is superior for solving RNP, in terms of optimization accuracy and computation robustness.

  3. Hierarchical artificial bee colony algorithm for RFID network planning optimization.

    Science.gov (United States)

    Ma, Lianbo; Chen, Hanning; Hu, Kunyuan; Zhu, Yunlong

    2014-01-01

    This paper presents a novel optimization algorithm, namely, hierarchical artificial bee colony optimization, called HABC, to tackle the radio frequency identification network planning (RNP) problem. In the proposed multilevel model, the higher-level species can be aggregated by the subpopulations from lower level. In the bottom level, each subpopulation employing the canonical ABC method searches the part-dimensional optimum in parallel, which can be constructed into a complete solution for the upper level. At the same time, the comprehensive learning method with crossover and mutation operators is applied to enhance the global search ability between species. Experiments are conducted on a set of 10 benchmark optimization problems. The results demonstrate that the proposed HABC obtains remarkable performance on most chosen benchmark functions when compared to several successful swarm intelligence and evolutionary algorithms. Then HABC is used for solving the real-world RNP problem on two instances with different scales. Simulation results show that the proposed algorithm is superior for solving RNP, in terms of optimization accuracy and computation robustness.

  4. Evaluation of the interface-capturing algorithm of OpenFoam for the simulation of incompressible immiscible two-phase flow

    NARCIS (Netherlands)

    Raees, F.; Van der Heul, D.R.; Vuik, C.

    2011-01-01

    The Mass-Conserving Level-Set method combines the efficiency of a Level-Set algorithm with the mass conserving properties of the Volume Of Fluid method. It avoids the work intensive interface construction of the former method and imposes a mass-conserving correction to the distance function of the

  5. Semioptimal practicable algorithmic cooling

    International Nuclear Information System (INIS)

    Elias, Yuval; Mor, Tal; Weinstein, Yossi

    2011-01-01

    Algorithmic cooling (AC) of spins applies entropy manipulation algorithms in open spin systems in order to cool spins far beyond Shannon's entropy bound. Algorithmic cooling of nuclear spins was demonstrated experimentally and may contribute to nuclear magnetic resonance spectroscopy. Several cooling algorithms were suggested in recent years, including practicable algorithmic cooling (PAC) and exhaustive AC. Practicable algorithms have simple implementations, yet their level of cooling is far from optimal; exhaustive algorithms, on the other hand, cool much better, and some even reach (asymptotically) an optimal level of cooling, but they are not practicable. We introduce here semioptimal practicable AC (SOPAC), wherein a few cycles (typically two to six) are performed at each recursive level. Two classes of SOPAC algorithms are proposed and analyzed. Both attain cooling levels significantly better than PAC and are much more efficient than the exhaustive algorithms. These algorithms are shown to bridge the gap between PAC and exhaustive AC. In addition, we calculated the number of spins required by SOPAC in order to purify qubits for quantum computation. As few as 12 and 7 spins are required (in an ideal scenario) to yield a mildly pure spin (60% polarized) from initial polarizations of 1% and 10%, respectively. In the latter case, about five more spins are sufficient to produce a highly pure spin (99.99% polarized), which could be relevant for fault-tolerant quantum computing.

  6. Hierarchical Sets: Analyzing Pangenome Structure through Scalable Set Visualizations

    DEFF Research Database (Denmark)

    Pedersen, Thomas Lin

    2017-01-01

    of hierarchical sets by applying it to a pangenome based on 113 Escherichia and Shigella genomes and find it provides a powerful addition to pangenome analysis. The described clustering algorithm and visualizations are implemented in the hierarchicalSets R package available from CRAN (https...

  7. Multi person detection and tracking based on hierarchical level-set method

    Science.gov (United States)

    Khraief, Chadia; Benzarti, Faouzi; Amiri, Hamid

    2018-04-01

    In this paper, we propose an efficient unsupervised method for multi-person tracking based on a hierarchical level-set approach. The proposed method uses both edge and region information in order to effectively detect objects. The persons are tracked in each frame of the sequence by minimizing an energy functional that combines color, texture and shape information. These features are enrolled in a covariance matrix as the region descriptor. The present method is fully automated without the need to manually specify the initial contour of the level set. It is based on combined person detection and background subtraction methods. The edge-based term is employed to maintain a stable evolution, guide the segmentation towards apparent boundaries and inhibit region fusion. The computational cost of the level set is reduced by using the narrow band technique. Many experiments are performed on challenging video sequences and show the effectiveness of the proposed method.

  8. A Novel Algorithm for Feature Level Fusion Using SVM Classifier for Multibiometrics-Based Person Identification

    Directory of Open Access Journals (Sweden)

    Ujwalla Gawande

    2013-01-01

    Full Text Available Recent times have witnessed many advancements in the fields of biometrics and multimodal biometrics. This is typically observed in the areas of security, privacy, and forensics. Even for the best unimodal biometric systems, it is often not possible to achieve a higher recognition rate. Multimodal biometric systems overcome various limitations of unimodal biometric systems, such as nonuniversality, lower false acceptance, and higher genuine acceptance rates. More reliable recognition performance is achievable as multiple pieces of evidence of the same identity are available. The work presented in this paper is focused on a multimodal biometric system using fingerprint and iris. Distinct textural features of the iris and fingerprint are extracted using the Haar wavelet-based technique. A novel feature-level fusion algorithm is developed to combine these unimodal features using the Mahalanobis distance technique. A support-vector-machine-based learning algorithm is used to train the system using the extracted features. The performance of the proposed algorithms is validated and compared with other algorithms using the CASIA iris database and a real fingerprint database. From the simulation results, it is evident that our algorithm has a higher recognition rate and a very low false rejection rate compared to existing approaches.
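
    A compact sketch of the Mahalanobis-distance matching used after feature-level fusion is shown below, with small synthetic iris and fingerprint feature vectors; the concatenate-then-whiten scheme is one plausible reading of the abstract, not the authors' exact algorithm, and the SVM training stage is omitted.

```python
import numpy as np

def mahalanobis(x, y, cov_inv):
    """Mahalanobis distance between two fused feature vectors under a common covariance."""
    d = np.asarray(x, float) - np.asarray(y, float)
    return float(np.sqrt(d @ cov_inv @ d))

rng = np.random.default_rng(5)
# Hypothetical gallery: concatenated (iris ++ fingerprint) wavelet features for 50 enrolled users.
iris_feats = rng.normal(0.0, 1.0, size=(50, 8))
finger_feats = rng.normal(0.0, 1.0, size=(50, 6))
gallery = np.hstack([iris_feats, finger_feats])        # feature-level fusion by concatenation

# Metric learned from the fused gallery (regularized to keep the covariance invertible).
cov_inv = np.linalg.inv(np.cov(gallery, rowvar=False) + 1e-6 * np.eye(gallery.shape[1]))

probe = gallery[0] + rng.normal(0.0, 0.1, size=gallery.shape[1])   # noisy re-capture of user 0
scores = [mahalanobis(probe, template, cov_inv) for template in gallery]
print("closest gallery entry:", int(np.argmin(scores)))            # expected: 0
```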

  9. Research of Improved Apriori Algorithm Based on Itemset Array

    Directory of Open Access Journals (Sweden)

    Naili Liu

    2013-06-01

    Full Text Available Mining frequent item sets is a key process in data mining research. Apriori and many of its improved variants are inefficient because they need to scan the database many times and store transaction IDs in memory, so the time and space overhead is very high; they are especially inefficient when processing large-scale databases. The main task of the improved algorithm is to reduce the time and space overhead of mining frequent item sets. It scans the database only once to generate a binary item set array, it adopts binary flags instead of transaction IDs when storing transaction membership, and it uses logical AND operations to judge whether an item set is frequent. Moreover, the improved algorithm is more suitable for large-scale databases. Experimental results show that the improved algorithm is more efficient than the classic Apriori algorithm.
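
    A small sketch of the binary item-set-array idea follows: each item is stored as a boolean column over transactions, so the support of a candidate item set is obtained with a logical AND instead of rescanning the database. The transactions and minimum support are made up for illustration.

```python
import numpy as np
from itertools import combinations

transactions = [
    {"bread", "milk"},
    {"bread", "diaper", "beer", "eggs"},
    {"milk", "diaper", "beer", "cola"},
    {"bread", "milk", "diaper", "beer"},
    {"bread", "milk", "diaper", "cola"},
]
items = sorted(set().union(*transactions))

# One scan of the database builds the binary item set array (transactions x items).
bits = np.array([[item in t for item in items] for t in transactions], dtype=bool)
min_support = 3

def support(itemset):
    """Support counted with logical AND over precomputed boolean columns (no rescans)."""
    cols = [items.index(i) for i in itemset]
    return int(np.logical_and.reduce(bits[:, cols], axis=1).sum())

frequent_pairs = [pair for pair in combinations(items, 2) if support(pair) >= min_support]
print(frequent_pairs)   # pairs occurring in at least 3 of the 5 transactions
```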

  10. Level set methods for detonation shock dynamics using high-order finite elements

    Energy Technology Data Exchange (ETDEWEB)

    Dobrev, V. A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Grogan, F. C. [Univ. of California, San Diego, CA (United States); Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Kolev, T. V. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Rieben, R [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Tomov, V. Z. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-05-26

    Level set methods are a popular approach to modeling evolving interfaces. We present a level set advection solver in two and three dimensions using the discontinuous Galerkin method with high-order finite elements. During evolution, the level set function is reinitialized to a signed distance function to maintain accuracy. Our approach leads to stable front propagation and convergence on high-order, curved, unstructured meshes. The ability of the solver to implicitly track moving fronts lends itself to a number of applications; in particular, we highlight applications to high-explosive (HE) burn and detonation shock dynamics (DSD). We provide results for two- and three-dimensional benchmark problems as well as applications to DSD.
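
    The reinitialization step mentioned above can be sketched on a structured grid by rebuilding a signed distance function from the current zero level set with a Euclidean distance transform; this is a simple stand-in for illustration, not the paper's high-order discontinuous Galerkin approach.

```python
import numpy as np
from scipy import ndimage

def reinitialize_signed_distance(phi, dx=1.0):
    """Rebuild a signed distance function with (approximately) the same zero level set as phi."""
    inside = phi < 0
    d_out = ndimage.distance_transform_edt(~inside)   # distance to the interface from outside
    d_in = ndimage.distance_transform_edt(inside)     # distance to the interface from inside
    return (d_out - d_in) * dx

# A deliberately distorted level set whose zero contour is a circle of radius 0.3.
n = 129
x, y = np.meshgrid(np.linspace(-1, 1, n), np.linspace(-1, 1, n))
r = np.sqrt(x**2 + y**2)
phi_distorted = (r - 0.3) * (2.0 + np.sin(5 * x))     # same zero set, wrong gradient magnitude

h = 2.0 / (n - 1)
phi_sdf = reinitialize_signed_distance(phi_distorted, dx=h)
grad = np.gradient(phi_sdf, h)
print("mean |grad phi| after reinitialization:", float(np.mean(np.hypot(*grad))))   # roughly 1
```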

  11. An investigation of children's levels of inquiry in an informal science setting

    Science.gov (United States)

    Clark-Thomas, Beth Anne

    Elementary school students' understanding of both science content and processes is enhanced by the higher level thinking associated with inquiry-based science investigations. Informal science setting personnel, elementary school teachers, and curriculum specialists charged with designing inquiry-based investigations would be well served by an understanding of the varying influence of certain factors present upon the students' willingness and ability to delve into such higher level inquiries. This study examined young children's use of inquiry-based materials and factors which may influence the level of inquiry they engaged in during informal science activities. An informal science setting was selected as the context for the examination of student inquiry behaviors because of the rich inquiry-based environment present at the site and the benefits previously noted in the research regarding the impact of informal science settings upon the construction of knowledge in science. The study revealed several patterns of behavior among children when they are engaged in inquiry-based activities at informal science exhibits. These repeated behaviors varied in the children's apparent purposeful use of the materials at the exhibits. These levels of inquiry behavior were taxonomically defined as high/medium/low within this study utilizing a researcher-developed tool. Furthermore, in this study adult interventions, questions, or prompting were found to impact the level of inquiry engaged in by the children. This study revealed that higher levels of inquiry were preceded by task directed and physical feature prompts. Moreover, the levels of inquiry behaviors were halted, even lowered, when preceded by a prompt that focused on a science content or concept question. Results of this study have implications for the enhancement of inquiry-based science activities in elementary schools as well as in informal science settings. These findings have significance for all science educators

  12. Algorithm of Particle Data Association for SLAM Based on Improved Ant Algorithm

    Directory of Open Access Journals (Sweden)

    KeKe Gen

    2015-01-01

    Full Text Available The article considers the problem of data association for simultaneous localization and mapping in determining the route of unmanned aerial vehicles (UAVs). Currently, such vehicles are already widely used, but they are mainly controlled by a remote operator. An urgent task is to develop a control system that allows for autonomous flight. The SLAM (simultaneous localization and mapping) algorithm, which allows prediction of the location, speed, flight parameters, and the coordinates of landmarks and obstacles in an unknown environment, is one of the key technologies for achieving truly autonomous UAV flight. The aim of this work is to study the possibility of solving this problem by using an improved ant algorithm. The data association step of the SLAM algorithm is meant to establish a matching between observed landmarks and landmarks in the state vector. The ant algorithm is a widely used optimization algorithm with positive feedback and the ability to search in parallel, so it is suitable for solving the data association problem for SLAM. However, the traditional ant algorithm easily falls into local optima while searching for routes. Random perturbations are therefore added during the global pheromone update to avoid local optima, and setting limits on the pheromone along the route increases the search space with a reasonable amount of computation for finding the optimal route. The paper proposes a local data association algorithm for SLAM based on an improved ant algorithm. To increase the speed of calculation, local data association is used instead of global data association. The first stage of the algorithm defines targets in the matching space and the observed landmarks with the possibility of association by the criterion of individual compatibility (IC). The second stage defines the matched landmarks and their coordinates using the improved ant algorithm. Simulation results confirm the efficiency and

  13. SU-F-J-180: A Reference Data Set for Testing Two Dimension Registration Algorithms

    International Nuclear Information System (INIS)

    Dankwa, A; Castillo, E; Guerrero, T

    2016-01-01

    Purpose: To create and characterize a reference data set for testing image registration algorithms that transform a portal image (PI) to a digitally reconstructed radiograph (DRR). Methods: Anterior-posterior (AP) and lateral (LAT) projection and DRR image pairs from nine cases representing four different anatomical sites (head and neck, thoracic, abdominal, and pelvis) were selected for this study. Five experts will perform manual registration by placing landmark points (LMPs) on the DRR and finding their corresponding points on the PI using the computer-assisted manual point selection tool (CAMPST), a custom MATLAB software tool developed in house. The landmark selection process will be repeated on both the PI and the DRR in order to characterize the inter- and intra-observer variations associated with the point selection process. Inter- and intra-observer variation in LMPs was assessed using Bland-Altman (B&A) analysis and one-way analysis of variance. We set our limit such that the absolute value of the mean difference between the readings should not exceed 3 mm. Later in this project we will test different two-dimensional (2D) image registration algorithms and quantify the uncertainty associated with their registration. Results: One-way analysis of variance (ANOVA) showed no significant variation among the readers. With Bland-Altman analysis, the variation among the readers was acceptable; the variation was higher in the PI than in the DRR. Conclusion: The variation seen for the PI arises because, although the PI has much better spatial resolution, the poor resolution of the DRR makes it difficult to locate the actual corresponding anatomical feature on the PI. We hope this becomes more evident when all the readers complete the point selection. Quantifying inter- and intra-observer variation indicates the degree of accuracy with which a manual registration can be performed. Research supported by William Beaumont Hospital Research Start Up Fund.
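
    The Bland-Altman acceptance check used here is straightforward to reproduce. The following sketch, with made-up landmark coordinates, computes the mean difference and 95% limits of agreement between two readers and tests them against the stated 3 mm limit; it illustrates only the statistical check and is not the CAMPST tool.

```python
import numpy as np

def bland_altman(reader_a, reader_b, limit_mm=3.0):
    """Mean difference and 95% limits of agreement between two readers."""
    diff = np.asarray(reader_a, float) - np.asarray(reader_b, float)
    mean_diff = diff.mean()
    sd_diff = diff.std(ddof=1)
    limits = (mean_diff - 1.96 * sd_diff, mean_diff + 1.96 * sd_diff)
    acceptable = abs(mean_diff) <= limit_mm     # the 3 mm criterion used above
    return mean_diff, limits, acceptable

# hypothetical landmark x-coordinates (mm) selected by two readers
reader_a = [12.1, 45.3, 78.0, 101.2, 66.4]
reader_b = [12.8, 44.9, 77.1, 102.0, 67.0]
print(bland_altman(reader_a, reader_b))
```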

  14. SU-F-J-180: A Reference Data Set for Testing Two Dimension Registration Algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Dankwa, A; Castillo, E; Guerrero, T [William Beaumont Hospital, Rochester Hills, MI (United States)

    2016-06-15

    Purpose: To create and characterize a reference data set for testing image registration algorithms that transform a portal image (PI) to a digitally reconstructed radiograph (DRR). Methods: Anterior-posterior (AP) and lateral (LAT) projection and DRR image pairs from nine cases representing four different anatomical sites (head and neck, thoracic, abdominal, and pelvis) were selected for this study. Five experts will perform manual registration by placing landmark points (LMPs) on the DRR and finding their corresponding points on the PI using the computer-assisted manual point selection tool (CAMPST), a custom MATLAB software tool developed in house. The landmark selection process will be repeated on both the PI and the DRR in order to characterize the inter- and intra-observer variations associated with the point selection process. Inter- and intra-observer variation in LMPs was assessed using Bland-Altman (B&A) analysis and one-way analysis of variance. We set our limit such that the absolute value of the mean difference between the readings should not exceed 3 mm. Later in this project we will test different two-dimensional (2D) image registration algorithms and quantify the uncertainty associated with their registration. Results: One-way analysis of variance (ANOVA) showed no significant variation among the readers. With Bland-Altman analysis, the variation among the readers was acceptable; the variation was higher in the PI than in the DRR. Conclusion: The variation seen for the PI arises because, although the PI has much better spatial resolution, the poor resolution of the DRR makes it difficult to locate the actual corresponding anatomical feature on the PI. We hope this becomes more evident when all the readers complete the point selection. Quantifying inter- and intra-observer variation indicates the degree of accuracy with which a manual registration can be performed. Research supported by William Beaumont Hospital Research Start Up Fund.

  15. Shadow algorithms data miner

    CERN Document Server

    Woo, Andrew

    2012-01-01

    Digital shadow generation continues to be an important aspect of visualization and visual effects in film, games, simulations, and scientific applications. This resource offers a thorough picture of the motivations, complexities, and categorized algorithms available to generate digital shadows. From general fundamentals to specific applications, it addresses shadow algorithms and how to manage huge data sets from a shadow perspective. The book also examines the use of shadow algorithms in industrial applications, in terms of what algorithms are used and what software is applicable.

  16. Aerostructural Level Set Topology Optimization for a Common Research Model Wing

    Science.gov (United States)

    Dunning, Peter D.; Stanford, Bret K.; Kim, H. Alicia

    2014-01-01

    The purpose of this work is to use level set topology optimization to improve the design of a representative wing box structure for the NASA common research model. The objective is to minimize the total compliance of the structure under aerodynamic and body force loading, where the aerodynamic loading is coupled to the structural deformation. A taxi bump case was also considered, where only body force loads were applied. The trim condition that aerodynamic lift must balance the total weight of the aircraft is enforced by allowing the root angle of attack to change. The level set optimization method is implemented on an unstructured three-dimensional grid, so that the method can optimize a wing box with arbitrary geometry. Fast matching and upwind schemes are developed for an unstructured grid, which make the level set method robust and efficient. The adjoint method is used to obtain the coupled shape sensitivities required to perform aerostructural optimization of the wing box structure.
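
    As a rough illustration of the level set machinery involved, the sketch below advances a level set function with an upwind Hamilton-Jacobi step on a 2D structured grid, driven by a prescribed normal velocity. The paper itself works on an unstructured three-dimensional grid with sensitivity-derived velocities, so this is only a structured-grid analogue with assumed parameters.

```python
import numpy as np

def advect_level_set(phi, velocity, dx, dt):
    """One upwind step of phi_t + V |grad phi| = 0 (Osher-Sethian scheme).
    Positive V moves the interface in the outward normal direction, so the
    phi < 0 region grows. Periodic boundaries via np.roll for brevity."""
    dmx = (phi - np.roll(phi, 1, axis=0)) / dx   # backward difference in x
    dpx = (np.roll(phi, -1, axis=0) - phi) / dx  # forward difference in x
    dmy = (phi - np.roll(phi, 1, axis=1)) / dx
    dpy = (np.roll(phi, -1, axis=1) - phi) / dx
    grad_plus = np.sqrt(np.maximum(dmx, 0)**2 + np.minimum(dpx, 0)**2 +
                        np.maximum(dmy, 0)**2 + np.minimum(dpy, 0)**2)
    grad_minus = np.sqrt(np.minimum(dmx, 0)**2 + np.maximum(dpx, 0)**2 +
                         np.minimum(dmy, 0)**2 + np.maximum(dpy, 0)**2)
    return phi - dt * (np.maximum(velocity, 0) * grad_plus +
                       np.minimum(velocity, 0) * grad_minus)

# toy example: a circular interface expanding under a uniform unit speed
x, y = np.meshgrid(np.linspace(-1, 1, 64), np.linspace(-1, 1, 64), indexing="ij")
phi = np.sqrt(x**2 + y**2) - 0.5                 # signed distance to a circle
for _ in range(10):
    phi = advect_level_set(phi, np.ones_like(phi), dx=2 / 63, dt=0.01)
print("area fraction inside the interface:", (phi < 0).mean())
```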

  17. An adaptive control algorithm for optimization of intensity modulated radiotherapy considering uncertainties in beam profiles, patient set-up and internal organ motion

    International Nuclear Information System (INIS)

    Loef, Johan; Lind, Bengt K.; Brahme, Anders

    1998-01-01

    A new general beam optimization algorithm for inverse treatment planning is presented. It utilizes a new formulation of the probability to achieve complication-free tumour control. The new formulation explicitly describes the dependence of the treatment outcome on the incident fluence distribution, the patient geometry, the radiobiological properties of the patient and the fractionation schedule. In order to account for both measured and non-measured positioning uncertainties, the algorithm is based on a combination of dynamic and stochastic optimization techniques. Because of the difficulty in measuring all aspects of the intra- and interfractional variations in the patient geometry, such as internal organ displacements and deformations, these uncertainties are primarily accounted for in the treatment planning process by intensity modulation using stochastic optimization. The information about the deviations from the nominal fluence profiles and the nominal position of the patient relative to the beam that is obtained by portal imaging during treatment delivery, is used in a feedback loop to automatically adjust the profiles and the location of the patient for all subsequent treatments. Based on the treatment delivered in previous fractions, the algorithm furnishes optimal corrections for the remaining dose delivery both with regard to the fluence profile and its position relative to the patient. By dynamically refining the beam configuration from fraction to fraction, the algorithm generates an optimal sequence of treatments that very effectively reduces the influence of systematic and random set-up uncertainties to minimize and almost eliminate their overall effect on the treatment. Computer simulations have shown that the present algorithm leads to a significant increase in the probability of uncomplicated tumour control compared with the simple classical approach of adding fixed set-up margins to the internal target volume. (author)
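
    The feedback idea of correcting the remaining fractions from set-up errors observed so far can be illustrated with a trivial running-average correction. This is a didactic sketch with hypothetical offsets, not the authors' stochastic optimization of the treatment outcome.

```python
import numpy as np

class AdaptiveSetupCorrection:
    """Accumulate per-fraction set-up errors measured from portal images and
    propose a shift to apply to the remaining fractions."""

    def __init__(self):
        self.observed = []                      # measured offsets in mm

    def record(self, offset_mm):
        self.observed.append(float(offset_mm))

    def correction(self):
        # estimate the systematic component as the running mean of the
        # observed offsets and counteract it for the next fraction
        return -np.mean(self.observed) if self.observed else 0.0

ctrl = AdaptiveSetupCorrection()
for measured in [2.1, 1.7, 2.4]:                # hypothetical daily offsets (mm)
    ctrl.record(measured)
    print("shift to apply next fraction:", round(ctrl.correction(), 2), "mm")
```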

  18. A simple MC-based algorithm for evaluating reliability of stochastic-flow network with unreliable nodes

    International Nuclear Information System (INIS)

    Yeh, W.-C.

    2004-01-01

    A minimal path (MP)/minimal cut (MC) set is a path/cut set such that if any edge is removed from it, the remaining set is no longer a path/cut set. An intuitive method is proposed to evaluate the reliability in terms of MCs in a stochastic-flow network subject to both edge and node failures, under the condition that all of the MCs are given in advance. This extends the best-known algorithms for solving the d-MC problem (a special MC formatted as a system-state vector, where d is the lower bound of the system capacity level) from stochastic-flow networks without unreliable nodes to networks with unreliable nodes by introducing some simple concepts. These concepts, first developed in the literature, are used to implement the proposed algorithm and to reduce the number of d-MC candidates. This method is more efficient than the best-known existing algorithms regardless of whether or not the network has unreliable nodes. Two examples illustrate how the reliability is determined using the proposed algorithm in networks with and without unreliable nodes. The computational complexity of the proposed algorithm is analyzed and compared with the existing methods
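
    As a baseline for comparison, reliability in terms of given minimal cuts can be estimated by plain Monte Carlo sampling using the max-flow/min-cut characterization: the demand d is met exactly when every minimal cut has sampled capacity of at least d. The sketch below does this with hypothetical capacity distributions; it is not the author's exact d-MC algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)

def mc_reliability(min_cuts, capacity_pmf, demand, n_samples=20000):
    """Estimate P(max flow >= demand) by sampling component capacities.
    min_cuts: list of lists of component names (all minimal cuts of the network);
    capacity_pmf: component -> (capacity values, probabilities); an unreliable
    node can be modelled as an extra component whose capacity is 0 when it fails."""
    success = 0
    for _ in range(n_samples):
        cap = {c: rng.choice(v, p=p) for c, (v, p) in capacity_pmf.items()}
        # by max-flow/min-cut, the demand is met iff every minimal cut
        # has total sampled capacity >= demand
        if all(sum(cap[c] for c in cut) >= demand for cut in min_cuts):
            success += 1
    return success / n_samples

# toy network: two parallel edges s->m (e1, e2), then two parallel edges m->t (e3, e4),
# so the minimal cuts are {e1, e2} and {e3, e4}
pmf = {c: ([0, 1, 2], [0.1, 0.3, 0.6]) for c in ["e1", "e2", "e3", "e4"]}
print(mc_reliability([["e1", "e2"], ["e3", "e4"]], pmf, demand=2))
```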

  19. Knowledge-based low-level image analysis for computer vision systems

    Science.gov (United States)

    Dhawan, Atam P.; Baxi, Himanshu; Ranganath, M. V.

    1988-01-01

    Two algorithms for entry-level image analysis and preliminary segmentation are proposed which are flexible enough to incorporate local properties of the image. The first algorithm involves pyramid-based multiresolution processing and a strategy to define and use interlevel and intralevel link strengths. The second algorithm, which is designed for selected window processing, extracts regions adaptively using local histograms. The preliminary segmentation and a set of features are employed as the input to an efficient rule-based low-level analysis system, resulting in suboptimal meaningful segmentation.

  20. Multiparametric programming based algorithms for pure integer and mixed-integer bilevel programming problems

    KAUST Repository

    Domínguez, Luis F.

    2010-12-01

    This work introduces two algorithms for the solution of pure integer and mixed-integer bilevel programming problems by multiparametric programming techniques. The first algorithm addresses the integer case of the bilevel programming problem where integer variables of the outer optimization problem appear in linear or polynomial form in the inner problem. The algorithm employs global optimization techniques to convexify nonlinear terms generated by a reformulation linearization technique (RLT). A continuous multiparametric programming algorithm is then used to solve the reformulated convex inner problem. The second algorithm addresses the mixed-integer case of the bilevel programming problem where integer and continuous variables of the outer problem appear in linear or polynomial forms in the inner problem. The algorithm relies on the use of global multiparametric mixed-integer programming techniques at the inner optimization level. In both algorithms, the multiparametric solutions obtained are embedded in the outer problem to form a set of single-level (M)(I)(N)LP problems - which are then solved to global optimality using standard fixed-point (global) optimization methods. Numerical examples drawn from the open literature are presented to illustrate the proposed algorithms. © 2010 Elsevier Ltd.

  1. New Maximal Two-distance Sets

    DEFF Research Database (Denmark)

    Lisonek, Petr

    1996-01-01

    A two-distance set in E^d is a point set X in the d-dimensional Euclidean space such that the distances between distinct points in X assume only two different non-zero values. Based on results from classical distance geometry, we develop an algorithm to classify, for a given dimension, all maximal ... (largest possible) two-distance sets in E^d. Using this algorithm we have completed the full classification for all dimensions less than or equal to 7, and we have found one set in E^8 whose maximality follows from Blokhuis' upper bound on sizes of s-distance sets. While in the dimensions less than or equal to 6...

  2. An Implementation and Parallelization of the Scale Space Meshing Algorithm

    Directory of Open Access Journals (Sweden)

    Julie Digne

    2015-11-01

    Full Text Available Creating an interpolating mesh from an unorganized set of oriented points is a difficult problem which is often overlooked. Most methods focus indeed on building a watertight smoothed mesh by defining some function whose zero level set is the surface of the object. However in some cases it is crucial to build a mesh that interpolates the points and does not fill the acquisition holes: either because the data are sparse and trying to fill the holes would create spurious artifacts, or because the goal is to explore visually the data exactly as they were acquired without any smoothing process. In this paper we detail a parallel implementation of the Scale-Space Meshing algorithm, which builds on the scale-space framework for reconstructing a high precision mesh from an input oriented point set. This algorithm first smoothes the point set, producing a singularity free shape. It then uses a standard mesh reconstruction technique, the Ball Pivoting Algorithm, to build a mesh from the smoothed point set. The final step consists in back-projecting the mesh built on the smoothed positions onto the original point set. The result of this process is an interpolating, hole-preserving surface mesh reconstruction.

  3. Validation of a non-uniform meshing algorithm for the 3D-FDTD method by means of a two-wire crosstalk experimental set-up

    Directory of Open Access Journals (Sweden)

    Raúl Esteban Jiménez-Mejía

    2015-06-01

    Full Text Available This paper presents an algorithm used to automatically mesh a 3D computational domain in order to solve electromagnetic interaction scenarios by means of the Finite-Difference Time-Domain (FDTD) method. The proposed algorithm has been formulated in a general mathematical form, where convenient spacing functions can be defined for the problem space discretization, allowing the inclusion of small-sized objects in the FDTD method and the calculation of detailed variations of the electromagnetic field at specified regions of the computational domain. The results obtained by using the FDTD method with the proposed algorithm have been contrasted not only with a typical uniform mesh algorithm, but also with experimental measurements for a two-wire crosstalk set-up, leading to excellent agreement between theoretical and experimental waveforms. A discussion about the advantages of the non-uniform mesh over the uniform one is also presented.

  4. The development of an algebraic multigrid algorithm for symmetric positive definite linear systems

    Energy Technology Data Exchange (ETDEWEB)

    Vanek, P.; Mandel, J.; Brezina, M. [Univ. of Colorado, Denver, CO (United States)

    1996-12-31

    An algebraic multigrid algorithm for symmetric, positive definite linear systems is developed based on the concept of prolongation by smoothed aggregation. Coarse levels are generated automatically. We present a set of requirements motivated heuristically by a convergence theory. The algorithm then attempts to satisfy the requirements. Input to the method are the coefficient matrix and zero energy modes, which are determined from nodal coordinates and knowledge of the differential equation. Efficiency of the resulting algorithm is demonstrated by computational results on real world problems from solid elasticity, plate bending, and shells.
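
    The central construction, a tentative prolongator built from aggregates and then smoothed by one damped-Jacobi step, can be sketched in a few lines. The contiguous aggregation and the damping parameter below are naive illustrative choices, not the paper's heuristics.

```python
import numpy as np
import scipy.sparse as sp

def smoothed_aggregation_prolongator(A, aggregate_size=2, omega=2.0 / 3.0):
    """P = (I - omega * D^{-1} A) * P_tent for a naive contiguous aggregation."""
    n = A.shape[0]
    n_agg = (n + aggregate_size - 1) // aggregate_size
    rows = np.arange(n)
    cols = rows // aggregate_size               # contiguous aggregates of unknowns
    P_tent = sp.csr_matrix((np.ones(n), (rows, cols)), shape=(n, n_agg))
    Dinv = sp.diags(1.0 / A.diagonal())
    S = sp.identity(n) - omega * (Dinv @ A)     # one damped-Jacobi smoothing step
    return S @ P_tent

# toy SPD system: 1D Poisson matrix
n = 8
A = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csr")
P = smoothed_aggregation_prolongator(A)
A_coarse = P.T @ A @ P                          # Galerkin coarse-level operator
print(np.round(A_coarse.toarray(), 3))
```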

  5. Explaining algorithms using metaphors

    CERN Document Server

    Forišek, Michal

    2013-01-01

    There is a significant difference between designing a new algorithm, proving its correctness, and teaching it to an audience. When teaching algorithms, the teacher's main goal should be to convey the underlying ideas and to help the students form correct mental models related to the algorithm. This process can often be facilitated by using suitable metaphors. This work provides a set of novel metaphors identified and developed as suitable tools for teaching many of the 'classic textbook' algorithms taught in undergraduate courses worldwide. Each chapter provides exercises and didactic notes fo

  6. MOCUS, Minimal Cut Sets and Minimal Path Sets from Fault Tree Analysis

    International Nuclear Information System (INIS)

    Fussell, J.B.; Henry, E.B.; Marshall, N.H.

    1976-01-01

    1 - Description of problem or function: From a description of the Boolean failure logic of a system, called a fault tree, and control parameters specifying the minimal cut set length to be obtained, MOCUS determines the system failure modes, or minimal cut sets, and the system success modes, or minimal path sets. 2 - Method of solution: MOCUS uses direct resolution of the fault tree into the cut and path sets. The algorithm starts with the main failure of interest, the top event, and proceeds to the basic independent component failures, called primary events, to resolve the fault tree and obtain the minimal sets. A key point of the algorithm is that an AND gate alone always increases the size of cut sets and the number of path sets, while an OR gate alone always increases the number of cut sets and the size of path sets. Other types of logic gates must be described in terms of AND and OR logic gates. 3 - Restrictions on the complexity of the problem: Output from MOCUS can include minimal cut and path sets for up to 20 gates
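
    A toy version of this top-down resolution can be written as a small recursive expansion of AND/OR gates into cut sets followed by a minimality filter. The fault tree below is hypothetical, and the sketch omits MOCUS's control parameters such as the maximum cut set length.

```python
def cut_sets(gate, tree):
    """Expand a fault tree top-down into cut sets.
    tree maps a gate name to ('AND' | 'OR', [inputs]); other names are primary events."""
    if gate not in tree:                        # primary event
        return [frozenset([gate])]
    kind, inputs = tree[gate]
    child_sets = [cut_sets(g, tree) for g in inputs]
    if kind == "OR":                            # OR increases the number of cut sets
        return [cs for sets in child_sets for cs in sets]
    result = [frozenset()]                      # AND increases the size of cut sets
    for sets in child_sets:
        result = [a | b for a in result for b in sets]
    return result

def minimal(sets):
    """Keep only cut sets that are not supersets of another cut set."""
    unique = set(sets)
    return [s for s in unique if not any(t < s for t in unique)]

tree = {
    "TOP": ("OR", ["G1", "C"]),
    "G1": ("AND", ["A", "B"]),
}
print(minimal(cut_sets("TOP", tree)))           # minimal cut sets {A, B} and {C}
```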

  7. Pareto Optimization of a Half Car Passive Suspension Model Using a Novel Multiobjective Heat Transfer Search Algorithm

    Directory of Open Access Journals (Sweden)

    Vimal Savsani

    2017-01-01

    Full Text Available Most of the modern multiobjective optimization algorithms are based on the search technique of genetic algorithms; however, the search techniques of other recently developed metaheuristics are emerging topics among researchers. This paper proposes a novel multiobjective optimization algorithm named the multiobjective heat transfer search (MOHTS) algorithm, which is based on the search technique of the heat transfer search (HTS) algorithm. MOHTS employs the elitist nondominated sorting and crowding distance approach of the elitist-based nondominated sorting genetic algorithm-II (NSGA-II) for obtaining different nondomination levels and for preserving the diversity among the optimal set of solutions, respectively. The capability of MOHTS in yielding a Pareto front as close as possible to the true Pareto front has been tested on the multiobjective optimization problem of vehicle suspension design, which has a set of five second-order linear ordinary differential equations. A half car passive ride model with two different sets of five objectives is employed for optimizing the suspension parameters using MOHTS and NSGA-II. The optimization studies demonstrate that MOHTS achieves a better nondominated Pareto front with a widespread (diverse) set of optimal solutions as compared to NSGA-II, and further the comparison of the extreme points of the obtained Pareto fronts reveals the dominance of MOHTS over NSGA-II, the multiobjective uniform diversity genetic algorithm (MUGA), and a combined PSO-GA based MOEA.
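
    The two NSGA-II ingredients that MOHTS borrows, non-dominated sorting and the crowding distance, are compact to write down. The sketch below operates on an arbitrary objective matrix (rows are solutions, columns are objectives to be minimized) and is generic rather than tied to the suspension model.

```python
import numpy as np

def non_dominated_sort(F):
    """Indices of solutions grouped by non-domination front (minimization)."""
    dominates = lambda a, b: np.all(F[a] <= F[b]) and np.any(F[a] < F[b])
    remaining, fronts = set(range(len(F))), []
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(j, i) for j in remaining if j != i)]
        fronts.append(front)
        remaining -= set(front)
    return fronts

def crowding_distance(F, front):
    """Crowding distance of each solution in a front (larger = more isolated)."""
    d = {i: 0.0 for i in front}
    for m in range(F.shape[1]):
        order = sorted(front, key=lambda i: F[i, m])
        d[order[0]] = d[order[-1]] = np.inf
        span = float(F[order[-1], m] - F[order[0], m]) or 1.0
        for k in range(1, len(order) - 1):
            d[order[k]] += (F[order[k + 1], m] - F[order[k - 1], m]) / span
    return d

F = np.array([[1.0, 5.0], [2.0, 3.0], [3.0, 1.0], [4.0, 4.0]])
fronts = non_dominated_sort(F)
print(fronts, crowding_distance(F, fronts[0]))
```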

  8. Classification algorithms using adaptive partitioning

    KAUST Repository

    Binev, Peter; Cohen, Albert; Dahmen, Wolfgang; DeVore, Ronald

    2014-01-01

    © 2014 Institute of Mathematical Statistics. Algorithms for binary classification based on adaptive tree partitioning are formulated and analyzed for both their risk performance and their friendliness to numerical implementation. The algorithms can be viewed as generating a set approximation to the Bayes set and thus fall into the general category of set estimators. In contrast with the most studied tree-based algorithms, which utilize piecewise constant approximation on the generated partition [IEEE Trans. Inform. Theory 52 (2006) 1335-1353; Mach. Learn. 66 (2007) 209-242], we consider decorated trees, which allow us to derive higher order methods. Convergence rates for these methods are derived in terms of a parameter of the margin conditions and a rate s of best approximation of the Bayes set by decorated adaptive partitions. They can also be expressed in terms of the Besov smoothness β of the regression function that governs its approximability by piecewise polynomials on adaptive partitions. The execution of the algorithms does not require knowledge of the smoothness or margin conditions. Besov smoothness conditions are weaker than the commonly used Holder conditions, which govern approximation by nonadaptive partitions, and therefore for a given regression function can result in a higher rate of convergence. This in turn mitigates the compatibility conflict between smoothness and margin parameters.

  9. Classification algorithms using adaptive partitioning

    KAUST Repository

    Binev, Peter

    2014-12-01

    © 2014 Institute of Mathematical Statistics. Algorithms for binary classification based on adaptive tree partitioning are formulated and analyzed for both their risk performance and their friendliness to numerical implementation. The algorithms can be viewed as generating a set approximation to the Bayes set and thus fall into the general category of set estimators. In contrast with the most studied tree-based algorithms, which utilize piecewise constant approximation on the generated partition [IEEE Trans. Inform. Theory 52 (2006) 1335-1353; Mach. Learn. 66 (2007) 209-242], we consider decorated trees, which allow us to derive higher order methods. Convergence rates for these methods are derived in terms of a parameter of the margin conditions and a rate s of best approximation of the Bayes set by decorated adaptive partitions. They can also be expressed in terms of the Besov smoothness β of the regression function that governs its approximability by piecewise polynomials on adaptive partitions. The execution of the algorithms does not require knowledge of the smoothness or margin conditions. Besov smoothness conditions are weaker than the commonly used Holder conditions, which govern approximation by nonadaptive partitions, and therefore for a given regression function can result in a higher rate of convergence. This in turn mitigates the compatibility conflict between smoothness and margin parameters.

  10. A comparative analysis of biclustering algorithms for gene expression data

    Science.gov (United States)

    Eren, Kemal; Deveci, Mehmet; Küçüktunç, Onur; Çatalyürek, Ümit V.

    2013-01-01

    The need to analyze high-dimension biological data is driving the development of new data mining methods. Biclustering algorithms have been successfully applied to gene expression data to discover local patterns, in which a subset of genes exhibit similar expression levels over a subset of conditions. However, it is not clear which algorithms are best suited for this task. Many algorithms have been published in the past decade, most of which have been compared only to a small number of algorithms. Surveys and comparisons exist in the literature, but because of the large number and variety of biclustering algorithms, they are quickly outdated. In this article we partially address this problem of evaluating the strengths and weaknesses of existing biclustering methods. We used the BiBench package to compare 12 algorithms, many of which were recently published or have not been extensively studied. The algorithms were tested on a suite of synthetic data sets to measure their performance on data with varying conditions, such as different bicluster models, varying noise, varying numbers of biclusters and overlapping biclusters. The algorithms were also tested on eight large gene expression data sets obtained from the Gene Expression Omnibus. Gene Ontology enrichment analysis was performed on the resulting biclusters, and the best enrichment terms are reported. Our analyses show that the biclustering method and its parameters should be selected based on the desired model, whether that model allows overlapping biclusters, and its robustness to noise. In addition, we observe that the biclustering algorithms capable of finding more than one model are more successful at capturing biologically relevant clusters. PMID:22772837

  11. Use of the MULTINEST algorithm for gravitational wave data analysis

    International Nuclear Information System (INIS)

    Feroz, Farhan; Hobson, Michael P; Gair, Jonathan R; Porter, Edward K

    2009-01-01

    We describe an application of the MULTINEST algorithm to gravitational wave data analysis. MULTINEST is a multimodal nested sampling algorithm designed to efficiently evaluate the Bayesian evidence and return posterior probability densities for likelihood surfaces containing multiple secondary modes. The algorithm employs a set of 'live' points which are updated by partitioning the set into multiple overlapping ellipsoids and sampling uniformly from within them. This set of 'live' points climbs up the likelihood surface through nested iso-likelihood contours and the evidence and posterior distributions can be recovered from the point set evolution. The algorithm is model independent in the sense that the specific problem being tackled enters only through the likelihood computation, and does not change how the 'live' point set is updated. In this paper, we consider the use of the algorithm for gravitational wave data analysis by searching a simulated LISA data set containing two non-spinning supermassive black hole binary signals. The algorithm is able to rapidly identify all the modes of the solution and recover the true parameters of the sources to high precision.
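
    A stripped-down nested sampling loop conveys the 'live point' idea. In the sketch below the new point above the current likelihood threshold is found by naive rejection sampling from the prior, which is exactly the step MULTINEST replaces with sampling from overlapping ellipsoids fitted to the live set; the Gaussian likelihood and uniform prior are placeholders.

```python
import numpy as np
from scipy.special import logsumexp

rng = np.random.default_rng(2)

def log_likelihood(theta):
    return -0.5 * np.sum((theta - 0.5) ** 2) / 0.05   # toy Gaussian bump at 0.5

def nested_sampling(n_live=100, n_iter=600, dim=2):
    live = rng.random((n_live, dim))                  # uniform prior on [0, 1]^dim
    live_logL = np.array([log_likelihood(t) for t in live])
    log_Z = -np.inf
    log_shell = np.log(1.0 - np.exp(-1.0 / n_live))   # prior shell width factor
    for i in range(n_iter):
        worst = np.argmin(live_logL)
        threshold = live_logL[worst]
        log_Z = np.logaddexp(log_Z, threshold + log_shell - i / n_live)
        # naive replacement: rejection-sample a new prior point above the threshold
        # (MULTINEST instead samples from overlapping ellipsoids around the live set)
        while True:
            candidate = rng.random(dim)
            if log_likelihood(candidate) > threshold:
                break
        live[worst], live_logL[worst] = candidate, log_likelihood(candidate)
    # add the rough contribution of the remaining live points
    return np.logaddexp(log_Z, logsumexp(live_logL) - np.log(n_live) - n_iter / n_live)

print("log-evidence estimate:", nested_sampling())
```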

  12. Rate-control algorithms testing by using video source model

    DEFF Research Database (Denmark)

    Belyaev, Evgeny; Turlikov, Andrey; Ukhanova, Anna

    2008-01-01

    In this paper a method for testing rate-control algorithms by the use of a video source model is suggested. The proposed method allows a significant improvement in algorithm testing over a big test set.

  13. Scalable Nearest Neighbor Algorithms for High Dimensional Data.

    Science.gov (United States)

    Muja, Marius; Lowe, David G

    2014-11-01

    For many computer vision and machine learning problems, large training sets are key for good performance. However, the most computationally expensive part of many computer vision and machine learning algorithms consists of finding nearest neighbor matches to high dimensional vectors that represent the training data. We propose new algorithms for approximate nearest neighbor matching and evaluate and compare them with previous algorithms. For matching high dimensional features, we find two algorithms to be the most efficient: the randomized k-d forest and a new algorithm proposed in this paper, the priority search k-means tree. We also propose a new algorithm for matching binary features by searching multiple hierarchical clustering trees and show it outperforms methods typically used in the literature. We show that the optimal nearest neighbor algorithm and its parameters depend on the data set characteristics and describe an automated configuration procedure for finding the best algorithm to search a particular data set. In order to scale to very large data sets that would otherwise not fit in the memory of a single machine, we propose a distributed nearest neighbor matching framework that can be used with any of the algorithms described in the paper. All this research has been released as an open source library called fast library for approximate nearest neighbors (FLANN), which has been incorporated into OpenCV and is now one of the most popular libraries for nearest neighbor matching.
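
    For comparison with the approximate methods discussed above, an exact tree-based search is a one-liner with SciPy; note that exact k-d trees degrade in very high dimensions, which is precisely why randomized forests and priority search k-means trees are used. FLANN itself is exposed through OpenCV and its own bindings, which are not shown here.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(3)
train = rng.random((10000, 32))        # training descriptors, one per row
queries = rng.random((5, 32))

tree = cKDTree(train)
dist, idx = tree.query(queries, k=3)   # 3 exact nearest neighbours per query
print(idx)
```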

  14. Automated Detection of Cancer Associated Genes Using a Combined Fuzzy-Rough-Set-Based F-Information and Water Swirl Algorithm of Human Gene Expression Data.

    Directory of Open Access Journals (Sweden)

    Pugalendhi Ganesh Kumar

    Full Text Available This study describes a novel approach to reducing the challenges of highly nonlinear multiclass gene expression values for cancer diagnosis. To build a fruitful system for cancer diagnosis, in this study, we introduced two levels of gene selection such as filtering and embedding for selection of potential genes and the most relevant genes associated with cancer, respectively. The filter procedure was implemented by developing a fuzzy rough set (FR)-based method for redefining the criterion function of f-information (FI) to identify the potential genes without discretizing the continuous gene expression values. The embedded procedure is implemented by means of a water swirl algorithm (WSA), which attempts to optimize the rule set and membership function required to classify samples using a fuzzy-rule-based multiclassification system (FRBMS). Two novel update equations are proposed in WSA, which have better exploration and exploitation abilities while designing a self-learning FRBMS. The efficiency of our new approach was evaluated on 13 multicategory and 9 binary datasets of cancer gene expression. Additionally, the performance of the proposed FRFI-WSA method in designing an FRBMS was compared with existing methods for gene selection and optimization such as genetic algorithm (GA), particle swarm optimization (PSO), and artificial bee colony algorithm (ABC) on all the datasets. In the global cancer map with repeated measurements (GCM_RM) dataset, the FRFI-WSA showed the smallest number of 16 most relevant genes associated with cancer using a minimal number of 26 compact rules with the highest classification accuracy (96.45%). In addition, the statistical validation used in this study revealed that the biological relevance of the most relevant genes associated with cancer and their linguistics detected by the proposed FRFI-WSA approach are better than those in the other methods. The simple interpretable rules with most relevant genes and effectively

  15. Efficient triangulation of Poisson-disk sampled point sets

    KAUST Repository

    Guo, Jianwei

    2014-05-06

    In this paper, we present a simple yet efficient algorithm for triangulating a 2D input domain containing a Poisson-disk sampled point set. The proposed algorithm combines a regular grid and a discrete clustering approach to speedup the triangulation. Moreover, our triangulation algorithm is flexible and performs well on more general point sets such as adaptive, non-maximal Poisson-disk sets. The experimental results demonstrate that our algorithm is robust for a wide range of input domains and achieves significant performance improvement compared to the current state-of-the-art approaches. © 2014 Springer-Verlag Berlin Heidelberg.

  16. Automatable algorithms to identify nonmedical opioid use using electronic data: a systematic review.

    Science.gov (United States)

    Canan, Chelsea; Polinski, Jennifer M; Alexander, G Caleb; Kowal, Mary K; Brennan, Troyen A; Shrank, William H

    2017-11-01

    Improved methods to identify nonmedical opioid use can help direct health care resources to individuals who need them. Automated algorithms that use large databases of electronic health care claims or records for surveillance are a potential means to achieve this goal. In this systematic review, we reviewed the utility, attempts at validation, and application of such algorithms to detect nonmedical opioid use. We searched PubMed and Embase for articles describing automatable algorithms that used electronic health care claims or records to identify patients or prescribers with likely nonmedical opioid use. We assessed algorithm development, validation, and performance characteristics and the settings where they were applied. Study variability precluded a meta-analysis. Of 15 included algorithms, 10 targeted patients, 2 targeted providers, 2 targeted both, and 1 identified medications with high abuse potential. Most patient-focused algorithms (67%) used prescription drug claims and/or medical claims, with diagnosis codes of substance abuse and/or dependence as the reference standard. Eleven algorithms were developed via regression modeling. Four used natural language processing, data mining, audit analysis, or factor analysis. Automated algorithms can facilitate population-level surveillance. However, there is no true gold standard for determining nonmedical opioid use. Users must recognize the implications of identifying false positives and, conversely, false negatives. Few algorithms have been applied in real-world settings. Automated algorithms may facilitate identification of patients and/or providers most likely to need more intensive screening and/or intervention for nonmedical opioid use. Additional implementation research in real-world settings would clarify their utility. © The Author 2017. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com

  17. Applicability of a set of tomographic reconstruction algorithms for quantitative SPECT on irradiated nuclear fuel assemblies

    Energy Technology Data Exchange (ETDEWEB)

    Jacobsson Svärd, Staffan, E-mail: staffan.jacobsson_svard@physics.uu.se; Holcombe, Scott; Grape, Sophie

    2015-05-21

    A fuel assembly operated in a nuclear power plant typically contains 100–300 fuel rods, depending on fuel type, which become strongly radioactive during irradiation in the reactor core. For operational and security reasons, it is of interest to experimentally deduce rod-wise information from the fuel, preferably by means of non-destructive measurements. The tomographic SPECT technique offers such possibilities through its two-step application; (1) recording the gamma-ray flux distribution around the fuel assembly, and (2) reconstructing the assembly's internal source distribution, based on the recorded radiation field. In this paper, algorithms for performing the latter step and extracting quantitative relative rod-by-rod data are accounted for. As compared to application of SPECT in nuclear medicine, nuclear fuel assemblies present a much more heterogeneous distribution of internal attenuation to gamma radiation than the human body, typically with rods containing pellets of heavy uranium dioxide surrounded by cladding of a zirconium alloy placed in water or air. This inhomogeneity severely complicates the tomographic quantification of the rod-wise relative source content, and the deduction of conclusive data requires detailed modelling of the attenuation to be introduced in the reconstructions. However, as shown in this paper, simplified models may still produce valuable information about the fuel. Here, a set of reconstruction algorithms for SPECT on nuclear fuel assemblies are described and discussed in terms of their quantitative performance for two applications; verification of fuel assemblies' completeness in nuclear safeguards, and rod-wise fuel characterization. It is argued that a request not to base the former assessment on any a priori information brings constraints to which reconstruction methods that may be used in that case, whereas the use of a priori information on geometry and material content enables highly accurate quantitative

  18. Level sets and extrema of random processes and fields

    CERN Document Server

    Azais, Jean-Marc

    2009-01-01

    A timely and comprehensive treatment of random field theory with applications across diverse areas of study Level Sets and Extrema of Random Processes and Fields discusses how to understand the properties of the level sets of paths as well as how to compute the probability distribution of its extremal values, which are two general classes of problems that arise in the study of random processes and fields and in related applications. This book provides a unified and accessible approach to these two topics and their relationship to classical theory and Gaussian processes and fields, and the most modern research findings are also discussed. The authors begin with an introduction to the basic concepts of stochastic processes, including a modern review of Gaussian fields and their classical inequalities. Subsequent chapters are devoted to Rice formulas, regularity properties, and recent results on the tails of the distribution of the maximum. Finally, applications of random fields to various areas of mathematics a...

  19. Skull defect reconstruction based on a new hybrid level set.

    Science.gov (United States)

    Zhang, Ziqun; Zhang, Ran; Song, Zhijian

    2014-01-01

    Skull defect reconstruction is an important aspect of surgical repair. Historically, a skull defect prosthesis was created by the mirroring technique, surface fitting, or formed templates. These methods are not based on the anatomy of the individual patient's skull, and therefore, the prosthesis cannot precisely correct the defect. This study presented a new hybrid level set model, taking into account both the global optimization region information and the local accuracy edge information, while avoiding re-initialization during the evolution of the level set function. Based on the new method, a skull defect was reconstructed, and the skull prosthesis was produced by rapid prototyping technology. This resulted in a skull defect prosthesis that well matched the skull defect with excellent individual adaptation.

  20. 76 FR 9004 - Public Comment on Setting Achievement Levels in Writing

    Science.gov (United States)

    2011-02-16

    ... DEPARTMENT OF EDUCATION Public Comment on Setting Achievement Levels in Writing AGENCY: U.S... Achievement Levels in Writing. SUMMARY: The National Assessment Governing Board (Governing Board) is... for NAEP in writing. This notice provides opportunity for public comment and submitting...

  1. Appropriate criteria set for personnel promotion across organizational levels using analytic hierarchy process (AHP)

    Directory of Open Access Journals (Sweden)

    Charles Noven Castillo

    2017-01-01

    Full Text Available Currently, only a limited set of specific criteria has been established for personnel promotion to each level of the organization. This study is conducted in order to develop a personnel promotion strategy by identifying specific sets of criteria for each level of the organization. The complexity of identifying the criteria set, along with the subjectivity of these criteria, requires the use of a multi-criteria decision-making approach, particularly the analytic hierarchy process (AHP). Results show different sets of criteria for each management level, which are consistent with several frameworks in the literature. These criteria sets would help avoid a mismatch between employees' skills and competencies and their jobs, and at the same time eliminate issues in personnel promotion such as favouritism, the glass ceiling, and gender and physical attractiveness preference. This work also shows that personality and traits, job satisfaction, and experience and skills are more critical than social capital across different organizational levels. The contribution of this work is in identifying relevant criteria for developing a personnel promotion strategy across organizational levels.
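
    The AHP machinery behind such a criteria ranking is compact: a pairwise comparison matrix, its principal eigenvector as the weight vector, and a consistency ratio. The comparison matrix below is a hypothetical example, not the study's survey data; the random-index values are the standard Saaty table.

```python
import numpy as np

def ahp_weights(pairwise):
    """Priority weights and consistency ratio from a pairwise comparison matrix."""
    A = np.asarray(pairwise, float)
    n = A.shape[0]
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()                                    # principal eigenvector, normalized
    ci = (eigvals[k].real - n) / (n - 1)            # consistency index
    ri = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}[n]
    cr = ci / ri if ri else 0.0                     # consistency ratio (want < 0.1)
    return w, cr

# hypothetical pairwise comparison of three promotion criteria:
# personality/traits vs. job satisfaction vs. experience/skills
M = [[1, 3, 2],
     [1 / 3, 1, 1 / 2],
     [1 / 2, 2, 1]]
weights, cr = ahp_weights(M)
print(weights, cr)
```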

  2. Robust non-rigid point set registration using student's-t mixture model.

    Directory of Open Access Journals (Sweden)

    Zhiyong Zhou

    Full Text Available The Student's-t mixture model, which is heavily tailed and more robust than the Gaussian mixture model, has recently received great attention in image processing. In this paper, we propose a robust non-rigid point set registration algorithm using the Student's-t mixture model. Specifically, first, we consider the alignment of two point sets as a probability density estimation problem and treat one point set as Student's-t mixture model centroids. Then, we fit the Student's-t mixture model centroids to the other point set which is treated as data. Finally, we get the closed-form solutions of registration parameters, leading to a computationally efficient registration algorithm. The proposed algorithm is especially effective for addressing the non-rigid point set registration problem when significant amounts of noise and outliers are present. Moreover, fewer registration parameters have to be set manually for our algorithm compared to the popular coherent point drift (CPD) algorithm. We have compared our algorithm with other state-of-the-art registration algorithms on both 2D and 3D data with noise and outliers, where our non-rigid registration algorithm showed accurate results and outperformed the other algorithms.

  3. Level-set segmentation of pulmonary nodules in megavolt electronic portal images using a CT prior

    International Nuclear Information System (INIS)

    Schildkraut, J. S.; Prosser, N.; Savakis, A.; Gomez, J.; Nazareth, D.; Singh, A. K.; Malhotra, H. K.

    2010-01-01

    Purpose: Pulmonary nodules present unique problems during radiation treatment due to nodule position uncertainty that is caused by respiration. The radiation field has to be enlarged to account for nodule motion during treatment. The purpose of this work is to provide a method of locating a pulmonary nodule in a megavolt portal image that can be used to reduce the internal target volume (ITV) during radiation therapy. A reduction in the ITV would result in a decrease in radiation toxicity to healthy tissue. Methods: Eight patients with nonsmall cell lung cancer were used in this study. CT scans that include the pulmonary nodule were captured with a GE Healthcare LightSpeed RT 16 scanner. Megavolt portal images were acquired with a Varian Trilogy unit equipped with an AS1000 electronic portal imaging device. The nodule localization method uses grayscale morphological filtering and level-set segmentation with a prior. The treatment-time portion of the algorithm is implemented on a graphical processing unit. Results: The method was retrospectively tested on eight cases that include a total of 151 megavolt portal image frames. The method reduced the nodule position uncertainty by an average of 40% for seven out of the eight cases. The treatment phase portion of the method has a subsecond execution time that makes it suitable for near-real-time nodule localization. Conclusions: A method was developed to localize a pulmonary nodule in a megavolt portal image. The method uses the characteristics of the nodule in a prior CT scan to enhance the nodule in the portal image and to identify the nodule region by level-set segmentation. In a retrospective study, the method reduced the nodule position uncertainty by an average of 40% for seven out of the eight cases studied.
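
    A level-set segmentation of a nodule-like blob can be prototyped with an off-the-shelf morphological Chan-Vese solver, assuming scikit-image is available. The sketch below segments a synthetic 2D image and leaves out the paper's key ingredients: the CT prior, the grayscale morphological filtering, and the GPU implementation.

```python
import numpy as np
from skimage.segmentation import morphological_chan_vese

# synthetic "portal image": a faint bright blob on a noisy background
rng = np.random.default_rng(4)
y, x = np.mgrid[0:128, 0:128]
image = 0.3 * np.exp(-((x - 64) ** 2 + (y - 64) ** 2) / (2 * 12 ** 2))
image += 0.05 * rng.standard_normal(image.shape)

# evolve the level set for 80 iterations; the default 'checkerboard' initialization
# spreads seeds over the image, and smoothing regularizes the contour
segmentation = morphological_chan_vese(image, 80, smoothing=2)
print("segmented pixels:", int(segmentation.sum()))
```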

  4. Algorithms for Learning Preferences for Sets of Objects

    Science.gov (United States)

    Wagstaff, Kiri L.; desJardins, Marie; Eaton, Eric

    2010-01-01

    A method is being developed that provides for an artificial-intelligence system to learn a user's preferences for sets of objects and to thereafter automatically select subsets of objects according to those preferences. The method was originally intended to enable automated selection, from among large sets of images acquired by instruments aboard spacecraft, of image subsets considered to be scientifically valuable enough to justify use of limited communication resources for transmission to Earth. The method is also applicable to other sets of objects: examples of sets of objects considered in the development of the method include food menus, radio-station music playlists, and assortments of colored blocks for creating mosaics. The method does not require the user to perform the often-difficult task of quantitatively specifying preferences; instead, the user provides examples of preferred sets of objects. This method goes beyond related prior artificial-intelligence methods for learning which individual items are preferred by the user: this method supports a concept of setbased preferences, which include not only preferences for individual items but also preferences regarding types and degrees of diversity of items in a set. Consideration of diversity in this method involves recognition that members of a set may interact with each other in the sense that when considered together, they may be regarded as being complementary, redundant, or incompatible to various degrees. The effects of such interactions are loosely summarized in the term portfolio effect. The learning method relies on a preference representation language, denoted DD-PREF, to express set-based preferences. In DD-PREF, a preference is represented by a tuple that includes quality (depth) functions to estimate how desired a specific value is, weights for each feature preference, the desired diversity of feature values, and the relative importance of diversity versus depth. The system applies statistical

  5. Algorithmic requirements for swarm intelligence in differently coupled collective systems

    International Nuclear Information System (INIS)

    Stradner, Jürgen; Thenius, Ronald; Zahadat, Payam; Hamann, Heiko; Crailsheim, Karl; Schmickl, Thomas

    2013-01-01

    Swarm systems are based on intermediate connectivity between individuals and dynamic neighborhoods. In natural swarms, self-organizing principles bring their agents to that favorable level of connectivity. They serve as interesting sources of inspiration for control algorithms in swarm robotics on the one hand, and in modular robotics on the other hand. In this paper we demonstrate and compare a set of bio-inspired algorithms that are used to control the collective behavior of swarms and modular systems: BEECLUST, AHHS (hormone controllers), FGRN (fractal genetic regulatory networks), and VE (virtual embryogenesis). We demonstrate how such bio-inspired control paradigms bring their host systems to a level of intermediate connectivity, which delivers sufficient robustness to these systems for collective decentralized control. In parallel, these algorithms allow sufficient volatility of shared information within these systems to help prevent local optima and deadlock situations, thereby keeping those systems flexible and adaptive in dynamic non-deterministic environments

  6. Out-of-Core Computations of High-Resolution Level Sets by Means of Code Transformation

    DEFF Research Database (Denmark)

    Christensen, Brian Bunch; Nielsen, Michael Bang; Museth, Ken

    2012-01-01

    We propose a storage efficient, fast and parallelizable out-of-core framework for streaming computations of high resolution level sets. The fundamental techniques are skewing and tiling transformations of streamed level set computations which allow for the combination of interface propagation, re...... computations are now CPU bound and consequently the overall performance is unaffected by disk latency and bandwidth limitations. We demonstrate this with several benchmark tests that show sustained out-of-core throughputs close to that of in-core level set simulations....

  7. Accurate prediction of complex free surface flow around a high speed craft using a single-phase level set method

    Science.gov (United States)

    Broglia, Riccardo; Durante, Danilo

    2017-11-01

    This paper focuses on the analysis of a challenging free surface flow problem involving a surface vessel moving at high speeds, or planing. The investigation is performed using a general purpose high Reynolds free surface solver developed at CNR-INSEAN. The methodology is based on a second order finite volume discretization of the unsteady Reynolds-averaged Navier-Stokes equations (Di Mascio et al. in A second order Godunov—type scheme for naval hydrodynamics, Kluwer Academic/Plenum Publishers, Dordrecht, pp 253-261, 2001; Proceedings of 16th international offshore and polar engineering conference, San Francisco, CA, USA, 2006; J Mar Sci Technol 14:19-29, 2009); air/water interface dynamics is accurately modeled by a non standard level set approach (Di Mascio et al. in Comput Fluids 36(5):868-886, 2007a), known as the single-phase level set method. In this algorithm the governing equations are solved only in the water phase, whereas the numerical domain in the air phase is used for a suitable extension of the fluid dynamic variables. The level set function is used to track the free surface evolution; dynamic boundary conditions are enforced directly on the interface. This approach allows to accurately predict the evolution of the free surface even in the presence of violent breaking waves phenomena, maintaining the interface sharp, without any need to smear out the fluid properties across the two phases. This paper is aimed at the prediction of the complex free-surface flow field generated by a deep-V planing boat at medium and high Froude numbers (from 0.6 up to 1.2). In the present work, the planing hull is treated as a two-degree-of-freedom rigid object. Flow field is characterized by the presence of thin water sheets, several energetic breaking waves and plungings. The computational results include convergence of the trim angle, sinkage and resistance under grid refinement; high-quality experimental data are used for the purposes of validation, allowing to

  8. Algorithms for selecting informative marker panels for population assignment.

    Science.gov (United States)

    Rosenberg, Noah A

    2005-11-01

    Given a set of potential source populations, genotypes of an individual of unknown origin at a collection of markers can be used to predict the correct source population of the individual. For improved efficiency, informative markers can be chosen from a larger set of markers to maximize the accuracy of this prediction. However, selecting the loci that are individually most informative does not necessarily produce the optimal panel. Here, using genotypes from eight species--carp, cat, chicken, dog, fly, grayling, human, and maize--this univariate accumulation procedure is compared to new multivariate "greedy" and "maximin" algorithms for choosing marker panels. The procedures generally suggest similar panels, although the greedy method often recommends inclusion of loci that are not chosen by the other algorithms. In seven of the eight species, when applied to five or more markers, all methods achieve at least 94% assignment accuracy on simulated individuals, with one species--dog--producing this level of accuracy with only three markers, and the eighth species--human--requiring approximately 13-16 markers. The new algorithms produce substantial improvements over use of randomly selected markers; where differences among the methods are noticeable, the greedy algorithm leads to slightly higher probabilities of correct assignment. Although none of the approaches necessarily chooses the panel with optimal performance, the algorithms all likely select panels with performance near enough to the maximum that they all are suitable for practical use.
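
    A greedy forward-selection loop of the kind compared here is easy to reproduce in outline: repeatedly add the marker that most improves the assignment accuracy of a simple allele-frequency classifier. The genotypes below are simulated and the classifier is deliberately crude; this is not Rosenberg's implementation.

```python
import numpy as np

rng = np.random.default_rng(5)

def assignment_accuracy(geno, pops, markers):
    """Accuracy of a naive allele-frequency classifier restricted to a marker subset
    (training individuals are reused for scoring, as a rough screening criterion)."""
    freqs = {k: np.clip(geno[pops == k][:, markers].mean(axis=0) / 2, 0.01, 0.99)
             for k in np.unique(pops)}
    correct = 0
    for g, true_pop in zip(geno[:, markers], pops):
        loglik = {k: np.sum(g * np.log(f) + (2 - g) * np.log(1 - f))
                  for k, f in freqs.items()}
        correct += max(loglik, key=loglik.get) == true_pop
    return correct / len(pops)

def greedy_panel(geno, pops, panel_size=3):
    """Greedy forward selection: add the marker giving the largest accuracy gain."""
    chosen = []
    for _ in range(panel_size):
        remaining = [m for m in range(geno.shape[1]) if m not in chosen]
        best = max(remaining, key=lambda m: assignment_accuracy(geno, pops, chosen + [m]))
        chosen.append(best)
    return chosen

# simulated diploid genotypes (0/1/2 allele copies) for two populations, 20 markers
n, m = 60, 20
pops = np.repeat([0, 1], n // 2)
diff = rng.uniform(0.0, 0.4, size=m)                # per-marker informativeness
p = np.where(pops[:, None] == 0, 0.5 - diff / 2, 0.5 + diff / 2)
geno = rng.binomial(2, p)
print("selected panel:", greedy_panel(geno, pops))
```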

  9. Use of the MULTINEST algorithm for gravitational wave data analysis

    Energy Technology Data Exchange (ETDEWEB)

    Feroz, Farhan; Hobson, Michael P [Astrophysics Group, Cavendish Laboratory, JJ Thomson Avenue, Cambridge CB3 0HE (United Kingdom); Gair, Jonathan R [Institute of Astronomy, Madingley Road, Cambridge CB3 0HA (United Kingdom); Porter, Edward K [APC, UMR 7164, Universite Paris 7 Denis Diderot, 10, rue Alice Domon et Leonie Duquet, 75205 Paris Cedex 13 (France)

    2009-11-07

    We describe an application of the MULTINEST algorithm to gravitational wave data analysis. MULTINEST is a multimodal nested sampling algorithm designed to efficiently evaluate the Bayesian evidence and return posterior probability densities for likelihood surfaces containing multiple secondary modes. The algorithm employs a set of 'live' points which are updated by partitioning the set into multiple overlapping ellipsoids and sampling uniformly from within them. This set of 'live' points climbs up the likelihood surface through nested iso-likelihood contours and the evidence and posterior distributions can be recovered from the point set evolution. The algorithm is model independent in the sense that the specific problem being tackled enters only through the likelihood computation, and does not change how the 'live' point set is updated. In this paper, we consider the use of the algorithm for gravitational wave data analysis by searching a simulated LISA data set containing two non-spinning supermassive black hole binary signals. The algorithm is able to rapidly identify all the modes of the solution and recover the true parameters of the sources to high precision.

  10. Clinical Applications of a CT Window Blending Algorithm: RADIO (Relative Attenuation-Dependent Image Overlay).

    Science.gov (United States)

    Mandell, Jacob C; Khurana, Bharti; Folio, Les R; Hyun, Hyewon; Smith, Stacy E; Dunne, Ruth M; Andriole, Katherine P

    2017-06-01

    A methodology is described using Adobe Photoshop and Adobe Extendscript to process DICOM images with a Relative Attenuation-Dependent Image Overlay (RADIO) algorithm to visualize the full dynamic range of CT in one view, without requiring a change in window and level settings. The potential clinical uses for such an algorithm are described in a pictorial overview, including applications in emergency radiology, oncologic imaging, and nuclear medicine and molecular imaging.
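
    A window-blending transform of this general kind can be prototyped in a few lines of NumPy: apply several window/level mappings (for example lung, soft tissue, and bone windows) and combine them into one composite image. The window settings and the simple averaging rule below are illustrative guesses, not the published RADIO algorithm.

```python
import numpy as np

def apply_window(hu, level, width):
    """Map CT values (HU) through a window/level transform to [0, 1]."""
    lo, hi = level - width / 2, level + width / 2
    return np.clip((hu - lo) / (hi - lo), 0.0, 1.0)

def blend_windows(hu, windows):
    """Average several windowed renderings into one composite image."""
    return np.mean([apply_window(hu, lvl, wid) for lvl, wid in windows], axis=0)

# hypothetical CT slice and three common diagnostic windows (level, width in HU)
rng = np.random.default_rng(6)
ct_slice = rng.integers(-1000, 2000, size=(64, 64)).astype(float)
composite = blend_windows(ct_slice, [(-600, 1500),   # lung
                                     (50, 400),      # soft tissue
                                     (400, 1800)])   # bone
print(composite.min(), composite.max())
```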

  11. Selection of views to materialize using simulated annealing algorithms

    Science.gov (United States)

    Zhou, Lijuan; Liu, Chi; Wang, Hongfeng; Liu, Daixin

    2002-03-01

    A data warehouse contains many materialized views over the data provided by distributed heterogeneous databases for the purpose of efficiently implementing decision-support or OLAP queries. It is important to select the right views to materialize in order to answer a given set of queries. The goal is to minimize the combined cost of query evaluation and view maintenance. In this paper, we have designed algorithms for selecting a set of views to be materialized so that the sum of the cost of processing a set of queries and of maintaining the materialized views is minimized. We develop an approach using simulated annealing algorithms to solve it. First, we explore simulated annealing algorithms to optimize the selection of materialized views. Then we use experiments to demonstrate our approach. The results show that our algorithm works better. We implemented our algorithms, and a performance study shows that the proposed algorithm gives an optimal solution.
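
    The annealing search over candidate views can be sketched as a subset search that flips one view per step and accepts worse moves with a temperature-dependent probability. The cost model below, query cost reduced by materialized views plus a per-view maintenance cost, is a toy stand-in for the cost functions used in the paper.

```python
import math
import random

random.seed(0)

def total_cost(selected, query_cost, maintenance_cost):
    """Cost of answering every query from the cheapest available structure,
    plus the maintenance cost of the selected materialized views."""
    q = sum(min(costs[v] for v in selected | {"base"}) for costs in query_cost)
    return q + sum(maintenance_cost[v] for v in selected)

def anneal(views, query_cost, maintenance_cost, t0=100.0, cooling=0.95, steps=2000):
    current, best = set(), set()
    t = t0
    for _ in range(steps):
        candidate = set(current)
        candidate.symmetric_difference_update({random.choice(views)})   # flip one view
        delta = (total_cost(candidate, query_cost, maintenance_cost)
                 - total_cost(current, query_cost, maintenance_cost))
        if delta < 0 or random.random() < math.exp(-delta / t):
            current = candidate
            if (total_cost(current, query_cost, maintenance_cost)
                    < total_cost(best, query_cost, maintenance_cost)):
                best = set(current)
        t *= cooling
    return best

# toy instance: two candidate views, three queries; "base" = answer from base tables
views = ["v1", "v2"]
query_cost = [{"base": 100, "v1": 10, "v2": 100},
              {"base": 80, "v1": 80, "v2": 5},
              {"base": 60, "v1": 60, "v2": 60}]
maintenance_cost = {"v1": 20, "v2": 30}
print(anneal(views, query_cost, maintenance_cost))
```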

  12. Parameterized Analysis of Paging and List Update Algorithms

    DEFF Research Database (Denmark)

    Dorrigiv, Reza; Ehmsen, Martin R.; López-Ortiz, Alejandro

    2015-01-01

    ...set model and express the performance of well-known algorithms in terms of this parameter. This explicitly introduces parameterized-style analysis to online algorithms. The idea is that rather than normalizing the performance of an online algorithm by an (optimal) offline algorithm, we explicitly express the behavior of the algorithm in terms of two more natural parameters: the size of the cache and Denning's working set measure. This technique creates a performance hierarchy of paging algorithms which better reflects their experimentally observed relative strengths. It also reflects the intuition that a larger cache leads to better performance. We also apply the parameterized analysis framework to list update and show that certain randomized algorithms which are superior to MTF in the classical model are not so in the parameterized case, which matches experimental results...

  13. Automatic quality verification of the TV sets

    Science.gov (United States)

    Marijan, Dusica; Zlokolica, Vladimir; Teslic, Nikola; Pekovic, Vukota; Temerinac, Miodrag

    2010-01-01

    In this paper we propose a methodology for TV set verification, intended for detecting picture quality degradation and functional failures within a TV set. In the proposed approach we compare the TV picture captured from a TV set under investigation with the reference image for the corresponding TV set, in order to assess the captured picture quality and, in turn, the acceptability of the TV set. The methodology framework comprises a logic block for designing the verification process flow, a block for TV set quality estimation (based on image quality assessment) and a block for generating the defect tracking database. The quality assessment algorithm is a full-reference intra-frame approach which aims at detecting various TV-set-specific picture degradations caused by TV system hardware and software failures and by erroneous operational modes and settings. The proposed algorithm is a block-based scheme which combines the mean square error and the local variance between the reference and the tested image. The artifact detection algorithm is shown to be highly robust against brightness and contrast changes in TV sets. The algorithm is evaluated by comparing its performance with other state-of-the-art image quality assessment metrics in terms of detecting TV picture degradations, such as illumination and contrast change, compression artifacts, picture misalignment, aliasing, blurring and other degradations caused by defects within the TV set video chain.
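
    As an illustration of the block-based scheme described above (mean square error combined with a local-variance term), the following Python sketch computes a per-block degradation map for a reference/test image pair. The block size and the weighting between the two terms are assumptions, not the article's settings.

        import numpy as np

        def block_quality_metric(reference, test, block=8, alpha=0.5):
            """Block-based degradation score combining MSE and local-variance difference.

            reference, test: 2D grayscale images of equal shape (float arrays).
            Returns a per-block score map; larger values indicate stronger degradation.
            The block size and the weighting alpha are illustrative assumptions.
            """
            h, w = reference.shape
            h, w = h - h % block, w - w % block            # crop to a whole number of blocks
            ref = reference[:h, :w].reshape(h // block, block, w // block, block)
            tst = test[:h, :w].reshape(h // block, block, w // block, block)
            mse = ((ref - tst) ** 2).mean(axis=(1, 3))
            var_diff = np.abs(ref.var(axis=(1, 3)) - tst.var(axis=(1, 3)))
            return alpha * mse + (1.0 - alpha) * var_diff

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            ref = rng.random((64, 64))
            degraded = ref + 0.1 * rng.standard_normal((64, 64))
            print(block_quality_metric(ref, degraded).mean())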

  14. Optimizing survivability of multi-state systems with multi-level protection by multi-processor genetic algorithm

    International Nuclear Information System (INIS)

    Levitin, Gregory; Dai Yuanshun; Xie Min; Leng Poh, Kim

    2003-01-01

    In this paper we consider vulnerable systems which can have different states corresponding to different combinations of available elements composing the system. Each state can be characterized by a performance rate, which is the quantitative measure of a system's ability to perform its task. Both the impact of external factors (stress) and internal causes (failures) affect system survivability, which is defined as the probability of meeting a given demand. In order to increase the survivability of the system, multi-level protection is applied to its subsystems. This means that a subsystem and its inner level of protection are in turn protected by an outer level of protection; this doubly protected subsystem may have a further outer level of protection, and so forth. In such systems, the protected subsystems can be destroyed only if all levels of their protection are destroyed, and each level of protection can be destroyed only if all outer levels of protection are destroyed. We formulate the problem of finding the structure of a series-parallel multi-state system (including the choice of system elements, the structure of the multi-level protection and the protection methods) that achieves a desired level of system survivability at minimal cost. An algorithm based on the universal generating function method is used to determine system survivability. A multi-processor version of a genetic algorithm is used as the optimization tool for solving the structure optimization problem. An application example is presented to illustrate the proposed procedure.

  15. Algorithms for Decision Tree Construction

    KAUST Repository

    Chikalov, Igor

    2011-01-01

    The study of algorithms for decision tree construction was initiated in 1960s. The first algorithms are based on the separation heuristic [13, 31] that at each step tries dividing the set of objects as evenly as possible. Later Garey and Graham [28

  16. Cascade Error Projection Learning Algorithm

    Science.gov (United States)

    Duong, T. A.; Stubberud, A. R.; Daud, T.

    1995-01-01

    A detailed mathematical analysis is presented for a new learning algorithm termed cascade error projection (CEP) and a general learning framework. This framework can be used to obtain the cascade correlation learning algorithm by choosing a particular set of parameters.

  17. Cache-Oblivious Algorithms and Data Structures

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting

    2004-01-01

    Frigo, Leiserson, Prokop and Ramachandran in 1999 introduced the ideal-cache model as a formal model of computation for developing algorithms in environments with multiple levels of caching, and coined the terminology of cache-oblivious algorithms. Cache-oblivious algorithms are described as standard RAM algorithms with only one memory level, i.e. without any knowledge about memory hierarchies, but are analyzed in the two-level I/O model of Aggarwal and Vitter for an arbitrary memory and block size and an optimal off-line cache replacement strategy. The result is algorithms that automatically apply to multi-level memory hierarchies. This paper gives an overview of the results achieved on cache-oblivious algorithms and data structures since the seminal paper by Frigo et al.

  18. Stabilized Conservative Level Set Method with Adaptive Wavelet-based Mesh Refinement

    Science.gov (United States)

    Shervani-Tabar, Navid; Vasilyev, Oleg V.

    2016-11-01

    This paper addresses one of the main challenges of the conservative level set method, namely the ill-conditioned behavior of the normal vector away from the interface. An alternative formulation for reconstruction of the interface is proposed. Unlike the commonly used methods which rely on the unit normal vector, Stabilized Conservative Level Set (SCLS) uses a modified renormalization vector with diminishing magnitude away from the interface. With the new formulation, in the vicinity of the interface the reinitialization procedure utilizes compressive flux and diffusive terms only in the direction normal to the interface, thus preserving the conservative level set properties, while away from the interface the directional diffusion mechanism automatically switches to homogeneous diffusion. The proposed formulation is robust and general. It is especially well suited for use with adaptive mesh refinement (AMR) approaches due to the need for finer resolution in the vicinity of the interface in comparison with the rest of the domain. All of the results were obtained using the Adaptive Wavelet Collocation Method, a general AMR-type method, which utilizes wavelet decomposition to adapt on steep gradients in the solution while retaining a predetermined order of accuracy.

  19. Thermogram breast cancer prediction approach based on Neutrosophic sets and fuzzy c-means algorithm.

    Science.gov (United States)

    Gaber, Tarek; Ismail, Gehad; Anter, Ahmed; Soliman, Mona; Ali, Mona; Semary, Noura; Hassanien, Aboul Ella; Snasel, Vaclav

    2015-08-01

    Early detection of breast cancer saves many women's lives. In this paper, a CAD system that classifies breast thermograms as normal or abnormal is proposed. The approach consists of two main phases: automatic segmentation and classification. For the former phase, an improved segmentation approach based on both Neutrosophic sets (NS) and an optimized Fast Fuzzy c-means (F-FCM) algorithm was proposed. A post-segmentation process was also suggested to extract the breast parenchyma (i.e. the ROI) from the thermogram images. For the classification, different kernel functions of the Support Vector Machine (SVM) were used to classify the breast parenchyma into normal or abnormal cases. Using a benchmark database, the proposed CAD system was evaluated in terms of precision, recall, and accuracy, as well as through a comparison with related work. The experimental results showed that the system would be a very promising step toward automatic diagnosis of breast cancer using thermograms, as the accuracy reached 100%.

  20. Some numerical studies of interface advection properties of level set ...

    Indian Academy of Sciences (India)

    explicit computational elements moving through an Eulerian grid. ... location. The interface is implicitly defined (captured) as the location of the discontinuity in the ... This level set function is advected with the background flow field and thus ...

  1. Road detection in SAR images using a tensor voting algorithm

    Science.gov (United States)

    Shen, Dajiang; Hu, Chun; Yang, Bing; Tian, Jinwen; Liu, Jian

    2007-11-01

    In this paper, the problem of detecting road networks in Synthetic Aperture Radar (SAR) images is addressed. Most previous methods extract roads by detecting lines and then reconstructing the network. Traditional algorithms used in the reconstruction step, such as MRFs, GA, and level sets, are iterative. The tensor voting methodology we propose is non-iterative and insensitive to initialization. Furthermore, the only free parameter is the size of the neighborhood, which is related to the scale. The algorithm we present is verified to be effective when applied to road extraction from real Radarsat images.

  2. Development of a stereolithography (STL) input and computer numerical control (CNC) output algorithm for an entry-level 3-D printer

    Directory of Open Access Journals (Sweden)

    Brown, Andrew

    2014-08-01

    Full Text Available This paper presents a prototype Stereolithography (STL) file format slicing and tool-path generation algorithm, which serves as a data front-end for a Rapid Prototyping (RP) entry-level three-dimensional (3-D) printer. Used mainly in Additive Manufacturing (AM), 3-D printers are devices that apply plastic, ceramic, and metal, layer by layer, in all three dimensions on a flat surface (X, Y, and Z axes). 3-D printers, unfortunately, cannot print an object without a special algorithm that is required to create the Computer Numerical Control (CNC) instructions for printing. An STL algorithm therefore forms a critical component for Layered Manufacturing (LM), also referred to as RP. The purpose of this study was to develop an algorithm that is capable of processing and slicing an STL file or multiple files, resulting in a tool-path, and finally compiling a CNC file for an entry-level 3-D printer. The prototype algorithm was implemented for an entry-level 3-D printer that utilises the Fused Deposition Modelling (FDM) process or Solid Freeform Fabrication (SFF) process; an AM technology. Following an experimental method, the full data flow path for the prototype algorithm was developed, starting with STL data files, and then processing the STL data file into a G-code file format by slicing the model and creating a tool-path. This layering method is used by most 3-D printers to turn a 2-D object into a 3-D object. The STL algorithm developed in this study presents innovative opportunities for LM, since it allows engineers and architects to transform their ideas easily into a solid model in a fast, simple, and cheap way. This is accomplished by allowing STL models to be sliced rapidly, effectively, and without error, and finally to be processed and prepared into a G-code print file.
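
    The following Python sketch illustrates only the central slicing step described above: each triangle of an already-parsed STL mesh is intersected with a horizontal plane, and naive G-code moves are emitted for the resulting segments. Contour linking, infill, extrusion amounts and STL parsing, all required by a real entry-level printer front-end, are deliberately omitted.

        import numpy as np

        def slice_triangle(tri, z):
            """Intersect one triangle (3x3 array of vertices) with the plane Z=z.

            Returns a 2x2 array of (x, y) endpoints, or None if no proper intersection.
            """
            points = []
            for a, b in ((0, 1), (1, 2), (2, 0)):
                za, zb = tri[a, 2], tri[b, 2]
                if (za - z) * (zb - z) < 0:               # edge crosses the plane
                    t = (z - za) / (zb - za)
                    points.append(tri[a, :2] + t * (tri[b, :2] - tri[a, :2]))
            return np.array(points) if len(points) == 2 else None

        def slice_to_gcode(triangles, layer_height=0.2):
            """Emit naive G-code for each layer: one G0 travel + G1 move per segment."""
            zs = np.arange(triangles[:, :, 2].min() + layer_height / 2,
                           triangles[:, :, 2].max(), layer_height)
            lines = ["G21 ; millimetres", "G90 ; absolute positioning"]
            for z in zs:
                lines.append(f"; layer at Z={z:.3f}")
                lines.append(f"G0 Z{z:.3f}")
                for tri in triangles:
                    seg = slice_triangle(tri, z)
                    if seg is not None:
                        lines.append(f"G0 X{seg[0, 0]:.3f} Y{seg[0, 1]:.3f}")
                        lines.append(f"G1 X{seg[1, 0]:.3f} Y{seg[1, 1]:.3f}")
            return "\n".join(lines)

        if __name__ == "__main__":
            # A single tetrahedron stands in for a parsed STL mesh (4 triangular facets).
            v = np.array([[0, 0, 0], [10, 0, 0], [0, 10, 0], [0, 0, 10]], float)
            tris = np.array([v[[0, 1, 2]], v[[0, 1, 3]], v[[1, 2, 3]], v[[0, 2, 3]]])
            print(slice_to_gcode(tris)[:200])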

  3. A stationary wavelet transform and a time-frequency based spike detection algorithm for extracellular recorded data.

    Science.gov (United States)

    Lieb, Florian; Stark, Hans-Georg; Thielemann, Christiane

    2017-06-01

    Spike detection from extracellular recordings is a crucial preprocessing step when analyzing neuronal activity. The decision whether a specific part of the signal is a spike or not is important for any subsequent preprocessing step, like spike sorting or burst detection, in order to reduce the number of erroneously identified spikes. Many spike detection algorithms have already been suggested, all working reasonably well whenever the signal-to-noise ratio is large enough. When the noise level is high, however, these algorithms perform poorly. In this paper we present two new spike detection algorithms. The first is based on a stationary wavelet energy operator and the second is based on the time-frequency representation of spikes. Both algorithms are more reliable than the most commonly used methods. The performance of the algorithms is confirmed using simulated data resembling original data recorded from cortical neurons with multielectrode arrays. In order to demonstrate that the performance of the algorithms is not restricted to only one specific set of data, we also verify the performance using a simulated, publicly available data set. We show that both proposed algorithms have the best performance among all tested methods, regardless of the signal-to-noise ratio, in both data sets. This contribution will benefit electrophysiological investigations of human cells; in particular, the spatial and temporal analysis of neural network communication is improved by using the proposed spike detection algorithms.
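
    The paper's detectors are built on a stationary wavelet energy operator and a time-frequency representation; the Python sketch below is only a simplified stand-in that illustrates the general energy-thresholding idea, using a smoothed Teager energy operator with a robust median-based threshold. The scaling factor, smoothing window and refractory period are assumed values.

        import numpy as np

        def teager_energy(x):
            """Discrete Teager energy operator: psi[n] = x[n]^2 - x[n-1]*x[n+1]."""
            psi = np.zeros_like(x)
            psi[1:-1] = x[1:-1] ** 2 - x[:-2] * x[2:]
            return psi

        def detect_spikes(signal, fs, smooth_ms=1.0, k=8.0, refractory_ms=1.0):
            """Return sample indices of detected spikes.

            k scales a robust noise estimate (median absolute deviation) into a
            threshold; both k and the smoothing window are illustrative assumptions.
            """
            win = max(1, int(fs * smooth_ms / 1000.0))
            energy = np.convolve(teager_energy(signal), np.ones(win) / win, mode="same")
            thresh = k * np.median(np.abs(energy)) / 0.6745
            above = np.flatnonzero(energy > thresh)
            spikes, last, refractory = [], -np.inf, int(fs * refractory_ms / 1000.0)
            for idx in above:                      # enforce a refractory period between detections
                if idx - last > refractory:
                    spikes.append(idx)
                    last = idx
            return np.array(spikes)

        if __name__ == "__main__":
            fs = 24000
            rng = np.random.default_rng(1)
            sig = 0.05 * rng.standard_normal(fs)
            sig[[5000, 12000, 18000]] += 1.0       # three synthetic "spikes"
            print(detect_spikes(sig, fs))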

  4. Searching for the majority: algorithms of voluntary control.

    Directory of Open Access Journals (Sweden)

    Jin Fan

    Full Text Available Voluntary control of information processing is crucial to allocate resources and prioritize the processes that are most important under a given situation; the algorithms underlying such control, however, are often not clear. We investigated possible algorithms of control for the performance of the majority function, in which participants searched for and identified one of two alternative categories (left or right pointing arrows) as composing the majority in each stimulus set. We manipulated the amount (set size of 1, 3, and 5) and content (ratio of left and right pointing arrows within a set) of the inputs to test competing hypotheses regarding mental operations for information processing. Using a novel measure based on computational load, we found that reaction time was best predicted by a grouping search algorithm as compared to alternative algorithms (i.e., exhaustive or self-terminating search). The grouping search algorithm involves sampling and resampling of the inputs before a decision is reached. These findings highlight the importance of investigating the implications of voluntary control via algorithms of mental operations.

  5. Probabilistic Open Set Recognition

    Science.gov (United States)

    Jain, Lalit Prithviraj

    Real-world tasks in computer vision, pattern recognition and machine learning often touch upon the open set recognition problem: multi-class recognition with incomplete knowledge of the world and many unknown inputs. An obvious way to approach such problems is to develop a recognition system that thresholds probabilities to reject unknown classes. Traditional rejection techniques are not about the unknown; they are about the uncertain boundary and rejection around that boundary. Thus traditional techniques only represent the "known unknowns". However, a proper open set recognition algorithm is needed to reduce the risk from the "unknown unknowns". This dissertation examines this concept and finds that existing probabilistic multi-class recognition approaches are ineffective for true open set recognition. We hypothesize that the cause is weak ad hoc assumptions combined with closed-world assumptions made by existing calibration techniques. Intuitively, if we could accurately model just the positive data for any known class without overfitting, we could reject the large set of unknown classes even under this assumption of incomplete class knowledge. For this, we formulate the problem as one of modeling positive training data by invoking statistical extreme value theory (EVT) near the decision boundary of positive data with respect to negative data. We provide a new algorithm called the PI-SVM for estimating the unnormalized posterior probability of class inclusion. This dissertation also introduces a new open set recognition model called Compact Abating Probability (CAP), where the probability of class membership decreases in value (abates) as points move from known data toward open space. We show that CAP models improve open set recognition for multiple algorithms. Leveraging the CAP formulation, we go on to describe the novel Weibull-calibrated SVM (W-SVM) algorithm, which combines the useful properties of statistical EVT for score calibration with one-class and binary

  6. A Cartesian Adaptive Level Set Method for Two-Phase Flows

    Science.gov (United States)

    Ham, F.; Young, Y.-N.

    2003-01-01

    In the present contribution we develop a level set method based on local anisotropic Cartesian adaptation as described in Ham et al. (2002). Such an approach should allow for the smallest possible Cartesian grid capable of resolving a given flow. The remainder of the paper is organized as follows. In section 2 the level set formulation for free surface calculations is presented and its strengths and weaknesses relative to the other free surface methods reviewed. In section 3 the collocated numerical method is described. In section 4 the method is validated by solving the 2D and 3D drop oscillation problem. In section 5 we present some results from more complex cases including the 3D drop breakup in an impulsively accelerated free stream, and the 3D immiscible Rayleigh-Taylor instability. Conclusions are given in section 6.

  7. Setting Healthcare Priorities at the Macro and Meso Levels: A Framework for Evaluation.

    Science.gov (United States)

    Barasa, Edwine W; Molyneux, Sassy; English, Mike; Cleary, Susan

    2015-09-16

    Priority setting in healthcare is a key determinant of health system performance. However, there is no widely accepted priority setting evaluation framework. We reviewed literature with the aim of developing and proposing a framework for the evaluation of macro and meso level healthcare priority setting practices. We systematically searched Econlit, PubMed, CINAHL, and EBSCOhost databases and supplemented this with searches in Google Scholar, relevant websites and reference lists of relevant papers. A total of 31 papers on evaluation of priority setting were identified. These were supplemented by broader theoretical literature related to evaluation of priority setting. A conceptual review of selected papers was undertaken. Based on a synthesis of the selected literature, we propose an evaluative framework that requires that priority setting practices at the macro and meso levels of the health system meet the following conditions: (1) Priority setting decisions should incorporate both efficiency and equity considerations as well as the following outcomes; (a) Stakeholder satisfaction, (b) Stakeholder understanding, (c) Shifted priorities (reallocation of resources), and (d) Implementation of decisions. (2) Priority setting processes should also meet the procedural conditions of (a) Stakeholder engagement, (b) Stakeholder empowerment, (c) Transparency, (d) Use of evidence, (e) Revisions, (f) Enforcement, and (g) Being grounded on community values. Available frameworks for the evaluation of priority setting are mostly grounded on procedural requirements, while few have included outcome requirements. There is, however, increasing recognition of the need to incorporate both consequential and procedural considerations in priority setting practices. In this review, we adapt an integrative approach to develop and propose a framework for the evaluation of priority setting practices at the macro and meso levels that draws from these complementary schools of thought. © 2015

  8. Setting Healthcare Priorities at the Macro and Meso Levels: A Framework for Evaluation

    Science.gov (United States)

    Barasa, Edwine W.; Molyneux, Sassy; English, Mike; Cleary, Susan

    2015-01-01

    Background: Priority setting in healthcare is a key determinant of health system performance. However, there is no widely accepted priority setting evaluation framework. We reviewed literature with the aim of developing and proposing a framework for the evaluation of macro and meso level healthcare priority setting practices. Methods: We systematically searched Econlit, PubMed, CINAHL, and EBSCOhost databases and supplemented this with searches in Google Scholar, relevant websites and reference lists of relevant papers. A total of 31 papers on evaluation of priority setting were identified. These were supplemented by broader theoretical literature related to evaluation of priority setting. A conceptual review of selected papers was undertaken. Results: Based on a synthesis of the selected literature, we propose an evaluative framework that requires that priority setting practices at the macro and meso levels of the health system meet the following conditions: (1) Priority setting decisions should incorporate both efficiency and equity considerations as well as the following outcomes; (a) Stakeholder satisfaction, (b) Stakeholder understanding, (c) Shifted priorities (reallocation of resources), and (d) Implementation of decisions. (2) Priority setting processes should also meet the procedural conditions of (a) Stakeholder engagement, (b) Stakeholder empowerment, (c) Transparency, (d) Use of evidence, (e) Revisions, (f) Enforcement, and (g) Being grounded on community values. Conclusion: Available frameworks for the evaluation of priority setting are mostly grounded on procedural requirements, while few have included outcome requirements. There is, however, increasing recognition of the need to incorporate both consequential and procedural considerations in priority setting practices. In this review, we adapt an integrative approach to develop and propose a framework for the evaluation of priority setting practices at the macro and meso levels that draws from these

  9. Setting Healthcare Priorities at the Macro and Meso Levels: A Framework for Evaluation

    Directory of Open Access Journals (Sweden)

    Edwine W. Barasa

    2015-11-01

    Full Text Available Background Priority setting in healthcare is a key determinant of health system performance. However, there is no widely accepted priority setting evaluation framework. We reviewed literature with the aim of developing and proposing a framework for the evaluation of macro and meso level healthcare priority setting practices. Methods We systematically searched Econlit, PubMed, CINAHL, and EBSCOhost databases and supplemented this with searches in Google Scholar, relevant websites and reference lists of relevant papers. A total of 31 papers on evaluation of priority setting were identified. These were supplemented by broader theoretical literature related to evaluation of priority setting. A conceptual review of selected papers was undertaken. Results Based on a synthesis of the selected literature, we propose an evaluative framework that requires that priority setting practices at the macro and meso levels of the health system meet the following conditions: (1) Priority setting decisions should incorporate both efficiency and equity considerations as well as the following outcomes; (a) Stakeholder satisfaction, (b) Stakeholder understanding, (c) Shifted priorities (reallocation of resources), and (d) Implementation of decisions. (2) Priority setting processes should also meet the procedural conditions of (a) Stakeholder engagement, (b) Stakeholder empowerment, (c) Transparency, (d) Use of evidence, (e) Revisions, (f) Enforcement, and (g) Being grounded on community values. Conclusion Available frameworks for the evaluation of priority setting are mostly grounded on procedural requirements, while few have included outcome requirements. There is, however, increasing recognition of the need to incorporate both consequential and procedural considerations in priority setting practices. In this review, we adapt an integrative approach to develop and propose a framework for the evaluation of priority setting practices at the macro and meso levels that draws from

  10. Algorithmic Mechanism Design of Evolutionary Computation.

    Science.gov (United States)

    Pei, Yan

    2015-01-01

    We consider algorithmic design, enhancement, and improvement of evolutionary computation as a mechanism design problem. All individuals, or several groups of individuals, can be considered as self-interested agents. The individuals in evolutionary computation can manipulate parameter settings and operations by satisfying their own preferences, which are defined by the evolutionary computation algorithm designer, rather than by following a fixed algorithm rule. Evolutionary computation algorithm designers or self-adaptive methods should construct proper rules and mechanisms for all agents (individuals) so that they carry out their evolutionary behaviour correctly and the desired, preset objective(s) can be achieved. As a case study, we propose a formal framework for parameter setting, strategy selection, and algorithmic design of evolutionary computation by considering the Nash strategy equilibrium of a mechanism design in the search process. The evaluation results demonstrate the efficiency of the framework. This principle can be implemented in any evolutionary computation algorithm that needs to consider strategy selection issues in its optimization process. The final objective of our work is to treat evolutionary computation design as an algorithmic mechanism design problem and to establish its fundamental aspects from this perspective. This paper is a first step towards that objective, implementing a strategy equilibrium solution (such as a Nash equilibrium) in an evolutionary computation algorithm.

  11. Advancements to the planogram frequency–distance rebinning algorithm

    International Nuclear Information System (INIS)

    Champley, Kyle M; Kinahan, Paul E; Raylman, Raymond R

    2010-01-01

    In this paper we consider the task of image reconstruction in positron emission tomography (PET) with the planogram frequency–distance rebinning (PFDR) algorithm. The PFDR algorithm is a rebinning algorithm for PET systems with panel detectors. The algorithm is derived in the planogram coordinate system which is a native data format for PET systems with panel detectors. A rebinning algorithm averages over the redundant four-dimensional set of PET data to produce a three-dimensional set of data. Images can be reconstructed from this rebinned three-dimensional set of data. This process enables one to reconstruct PET images more quickly than reconstructing directly from the four-dimensional PET data. The PFDR algorithm is an approximate rebinning algorithm. We show that implementing the PFDR algorithm followed by the (ramp) filtered backprojection (FBP) algorithm in linogram coordinates from multiple views reconstructs a filtered version of our image. We develop an explicit formula for this filter which can be used to achieve exact reconstruction by means of a modified FBP algorithm applied to the stack of rebinned linograms and can also be used to quantify the errors introduced by the PFDR algorithm. This filter is similar to the filter in the planogram filtered backprojection algorithm derived by Brasse et al. The planogram filtered backprojection and exact reconstruction with the PFDR algorithm require complete projections which can be completed with a reprojection algorithm. The PFDR algorithm is similar to the rebinning algorithm developed by Kao et al. By expressing the PFDR algorithm in detector coordinates, we provide a comparative analysis between the two algorithms. Numerical experiments using both simulated data and measured data from a positron emission mammography/tomography (PEM/PET) system are performed. Images are reconstructed by PFDR+FBP (PFDR followed by 2D FBP reconstruction), PFDRX (PFDR followed by the modified FBP algorithm for exact

  12. Adaptive Filtering Algorithms and Practical Implementation

    CERN Document Server

    Diniz, Paulo S R

    2013-01-01

    In the fourth edition of Adaptive Filtering: Algorithms and Practical Implementation, author Paulo S.R. Diniz presents the basic concepts of adaptive signal processing and adaptive filtering in a concise and straightforward manner. The main classes of adaptive filtering algorithms are presented in a unified framework, using clear notations that facilitate actual implementation. The main algorithms are described in tables, which are detailed enough to allow the reader to verify the covered concepts. Many examples address problems drawn from actual applications. New material to this edition includes: Analytical and simulation examples in Chapters 4, 5, 6 and 10 Appendix E, which summarizes the analysis of set-membership algorithm Updated problems and references Providing a concise background on adaptive filtering, this book covers the family of LMS, affine projection, RLS and data-selective set-membership algorithms as well as nonlinear, sub-band, blind, IIR adaptive filtering, and more. Several problems are...
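
    As a minimal illustration of one member of the algorithm families covered by the book, the following Python sketch implements a normalized LMS filter identifying an unknown FIR system; the filter order, step size and plant coefficients are arbitrary choices for the example.

        import numpy as np

        def nlms(x, d, order=4, mu=0.5, eps=1e-6):
            """Normalized LMS: adapt 'order' filter weights so that w.u tracks d."""
            w = np.zeros(order)
            y = np.zeros_like(d)
            for n in range(order, len(x)):
                u = x[n - order + 1:n + 1][::-1]    # regressor: x[n], x[n-1], ..., x[n-order+1]
                y[n] = w @ u
                e = d[n] - y[n]
                w += mu * e * u / (eps + u @ u)     # normalized step in the direction of u
            return w, y

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            x = rng.standard_normal(5000)
            plant = np.array([0.8, -0.3, 0.2, 0.1])                  # unknown FIR system
            d = np.convolve(x, plant)[:len(x)] + 0.01 * rng.standard_normal(len(x))
            w, _ = nlms(x, d)
            print(np.round(w, 2))                                    # should approach the plant taps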

  13. Greedy Algorithms for Nonnegativity-Constrained Simultaneous Sparse Recovery

    Science.gov (United States)

    Kim, Daeun; Haldar, Justin P.

    2016-01-01

    This work proposes a family of greedy algorithms to jointly reconstruct a set of vectors that are (i) nonnegative and (ii) simultaneously sparse with a shared support set. The proposed algorithms generalize previous approaches that were designed to impose these constraints individually. Similar to previous greedy algorithms for sparse recovery, the proposed algorithms iteratively identify promising support indices. In contrast to previous approaches, the support index selection procedure has been adapted to prioritize indices that are consistent with both the nonnegativity and shared support constraints. Empirical results demonstrate for the first time that the combined use of simultaneous sparsity and nonnegativity constraints can substantially improve recovery performance relative to existing greedy algorithms that impose less signal structure. PMID:26973368
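
    The exact algorithms are given in the paper; the Python sketch below is a simplified variant in the same spirit (a nonnegative, shared-support version of simultaneous orthogonal matching pursuit) that shows how the support-selection step can favour indices consistent with both constraints. It relies on SciPy's nonnegative least squares for the restricted fits, and the problem sizes are invented.

        import numpy as np
        from scipy.optimize import nnls

        def nn_simultaneous_omp(A, B, k):
            """Greedy recovery of X >= 0 with at most k shared support rows, A @ X ~= B.

            A: (m, n) dictionary, B: (m, L) measurement vectors sharing a support.
            Returns X of shape (n, L).
            """
            m, n = A.shape
            L = B.shape[1]
            support, R = [], B.copy()
            for _ in range(k):
                corr = A.T @ R                                  # (n, L) correlations with residuals
                score = np.clip(corr, 0, None).sum(axis=1)      # only positive parts count
                score[support] = -np.inf                        # do not reselect indices
                support.append(int(np.argmax(score)))
                X = np.zeros((n, L))
                for j in range(L):                              # restricted nonnegative LS per column
                    X[np.array(support), j], _ = nnls(A[:, support], B[:, j])
                R = B - A @ X
            return X

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            A = rng.standard_normal((40, 100))
            X_true = np.zeros((100, 3))
            X_true[[7, 23, 61], :] = rng.uniform(0.5, 2.0, size=(3, 3))   # shared nonnegative support
            B = A @ X_true
            X_hat = nn_simultaneous_omp(A, B, k=3)
            print(sorted(np.flatnonzero(X_hat.sum(axis=1))))              # ideally [7, 23, 61]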

  14. Structure-Based Algorithms for Microvessel Classification

    KAUST Repository

    Smith, Amy F.

    2015-02-01

    Objective: Recent developments in high-resolution imaging techniques have enabled digital reconstruction of three-dimensional sections of microvascular networks down to the capillary scale. To better interpret these large data sets, our goal is to distinguish branching trees of arterioles and venules from capillaries. Methods: Two novel algorithms are presented for classifying vessels in microvascular anatomical data sets without requiring flow information. The algorithms are compared with a classification based on observed flow directions (considered the gold standard), and with an existing resistance-based method that relies only on structural data. Results: The first algorithm, developed for networks with one arteriolar and one venular tree, performs well in identifying arterioles and venules and is robust to parameter changes, but incorrectly labels a significant number of capillaries as arterioles or venules. The second algorithm, developed for networks with multiple inlets and outlets, correctly identifies more arterioles and venules, but is more sensitive to parameter changes. Conclusions: The algorithms presented here can be used to classify microvessels in large microvascular data sets lacking flow information. This provides a basis for analyzing the distinct geometrical properties and modelling the functional behavior of arterioles, capillaries, and venules.

  15. Reproducibility of F18-FDG PET radiomic features for different cervical tumor segmentation methods, gray-level discretization, and reconstruction algorithms.

    Science.gov (United States)

    Altazi, Baderaldeen A; Zhang, Geoffrey G; Fernandez, Daniel C; Montejo, Michael E; Hunt, Dylan; Werner, Joan; Biagioli, Matthew C; Moros, Eduardo G

    2017-11-01

    Site-specific investigations of the role of radiomics in cancer diagnosis and therapy are emerging. We evaluated the reproducibility of radiomic features extracted from 18-Fluorine-fluorodeoxyglucose (18F-FDG) PET images for three parameters: manual versus computer-aided segmentation methods, gray-level discretization, and PET image reconstruction algorithms. Our cohort consisted of pretreatment PET/CT scans from 88 cervical cancer patients. Two board-certified radiation oncologists manually segmented the metabolic tumor volume (MTV1 and MTV2) for each patient. For comparison, we used a graphical-based method to generate semiautomated segmented volumes (GBSV). To address any perturbations in radiomic feature values, we down-sampled the tumor volumes into three gray-levels: 32, 64, and 128 from the original gray-level of 256. Finally, we analyzed the effect on radiomic features, in PET images of eight patients, of four PET 3D-reconstruction algorithms: the maximum likelihood-ordered subset expectation maximization (OSEM) iterative reconstruction (IR) method, Fourier rebinning ML-OSEM (FOREIR), FORE-filtered back projection (FOREFBP), and the 3D-Reprojection (3DRP) analytical method. We extracted 79 features from all segmentation methods, gray-levels of down-sampled volumes, and PET reconstruction algorithms. The features were extracted using gray-level co-occurrence matrices (GLCM), gray-level size zone matrices (GLSZM), gray-level run-length matrices (GLRLM), neighborhood gray-tone difference matrices (NGTDM), shape-based features (SF), and intensity histogram features (IHF). We computed the Dice coefficient between each MTV and GBSV to measure segmentation accuracy. Coefficient values close to one indicate high agreement, and values close to zero indicate low agreement. We evaluated the effect on radiomic features by calculating the mean percentage differences (d¯) between feature values measured from each pair of parameter elements (i.e. segmentation methods: MTV
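
    The study quantifies segmentation agreement with the Dice coefficient; a minimal Python sketch of that measure for two binary masks is shown below (the toy masks are invented for the example).

        import numpy as np

        def dice_coefficient(mask_a, mask_b):
            """Dice = 2|A intersect B| / (|A| + |B|) for two boolean masks/volumes."""
            a, b = np.asarray(mask_a, bool), np.asarray(mask_b, bool)
            denom = a.sum() + b.sum()
            return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

        if __name__ == "__main__":
            mtv = np.zeros((50, 50), bool); mtv[10:30, 10:30] = True      # manual contour
            gbsv = np.zeros((50, 50), bool); gbsv[12:32, 12:32] = True    # semiautomated contour
            print(round(dice_coefficient(mtv, gbsv), 3))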

  16. Level Sets and Voronoi based Feature Extraction from any Imagery

    DEFF Research Database (Denmark)

    Sharma, O.; Anton, François; Mioc, Darka

    2012-01-01

    Polygon features are of interest in many GEOProcessing applications like shoreline mapping, boundary delineation, change detection, etc. This paper presents a unique new GPU-based methodology to automate feature extraction combining level sets, or mean shift based segmentation together with Voron...

  17. Evaluating healthcare priority setting at the meso level: A thematic review of empirical literature

    Science.gov (United States)

    Waithaka, Dennis; Tsofa, Benjamin; Barasa, Edwine

    2018-01-01

    Background: Decentralization of health systems has made sub-national/regional healthcare systems the backbone of healthcare delivery. These regions are tasked with the difficult responsibility of determining healthcare priorities and resource allocation amidst scarce resources. We aimed to review empirical literature that evaluated priority setting practice at the meso (sub-national) level of health systems. Methods: We systematically searched PubMed, ScienceDirect and Google scholar databases and supplemented these with manual searching for relevant studies, based on the reference list of selected papers. We only included empirical studies that described and evaluated, or those that only evaluated, priority setting practice at the meso-level. A total of 16 papers were identified from LMICs and HICs. We analyzed data from the selected papers by thematic review. Results: Few studies used systematic priority setting processes, and all but one were from HICs. Both formal and informal criteria are used in priority-setting; however, informal criteria appear to be more pervasive in LMICs compared to HICs. The priority setting process at the meso-level is a top-down approach with minimal involvement of the community. Accountability for reasonableness was the most common evaluative framework as it was used in 12 of the 16 studies. Efficiency, reallocation of resources and options for service delivery redesign were the most common outcome measures used to evaluate priority setting. Limitations: Our study was limited by the fact that there are very few empirical studies that have evaluated priority setting at the meso-level and there is likelihood that we did not capture all the studies. Conclusions: Improving priority setting practices at the meso level is crucial to strengthening health systems. This can be achieved through incorporating and adapting systematic priority setting processes and frameworks to the context where used, and making considerations of both process

  18. Effective traffic features selection algorithm for cyber-attacks samples

    Science.gov (United States)

    Li, Yihong; Liu, Fangzheng; Du, Zhenyu

    2018-05-01

    By studying defense schemes against network attacks, this paper proposes an effective traffic feature selection algorithm based on k-means++ clustering to deal with the high dimensionality of the traffic features extracted from cyber-attack samples. First, the algorithm divides the original feature set into an attack traffic feature set and a background traffic feature set by clustering. Then, it calculates the variation in clustering performance after removing a certain feature. Finally, the degree of distinctiveness of each feature is evaluated according to the result; the effective features are those whose degree of distinctiveness exceeds a set threshold. The purpose of this paper is to select the effective features from the extracted original feature set. In this way, the dimensionality of the features is reduced, which lowers the space-time overhead of subsequent detection. The experimental results show that the proposed algorithm is feasible and has some advantages over other selection algorithms.
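
    The abstract does not state which clustering-performance measure or threshold is used; the Python sketch below illustrates the leave-one-feature-out idea with scikit-learn's k-means++ initialization and the silhouette score as an assumed performance measure, on invented data.

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.metrics import silhouette_score

        def select_features(X, threshold=0.01, seed=0):
            """Score each traffic feature by the drop in clustering quality when it is removed.

            X: (n_samples, n_features) matrix of traffic features.
            Returns indices of features whose distinctiveness exceeds the threshold.
            """
            def quality(data):
                labels = KMeans(n_clusters=2, init="k-means++", n_init=10,
                                random_state=seed).fit_predict(data)
                return silhouette_score(data, labels)   # separation of attack vs. background clusters

            baseline = quality(X)
            selected = []
            for j in range(X.shape[1]):
                reduced = np.delete(X, j, axis=1)
                distinctiveness = baseline - quality(reduced)   # large drop => feature j matters
                if distinctiveness > threshold:
                    selected.append(j)
            return selected

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            informative = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(5, 1, (100, 2))])
            noise = rng.normal(0, 1, (200, 3))
            X = np.hstack([informative, noise])          # features 0-1 informative, 2-4 noise
            print(select_features(X))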

  19. Online monitoring of oil film using electrical capacitance tomography and level set method

    International Nuclear Information System (INIS)

    Xue, Q.; Ma, M.; Sun, B. Y.; Cui, Z. Q.; Wang, H. X.

    2015-01-01

    In the application of oil-air lubrication systems, electrical capacitance tomography (ECT) provides a promising way of monitoring the oil film in pipelines by reconstructing cross-sectional oil distributions in real time. In the case of a small-diameter pipe and thin oil film, however, the thickness of the oil film is hard to observe visually, since the interface of oil and air is not obvious in the reconstructed images, and artifacts in the reconstructions seriously reduce the effectiveness of image segmentation techniques such as the level set method. Moreover, the standard level set method is unsuitable for online monitoring due to its low computation speed. To address these problems, a modified level set method is developed: a distance regularized level set evolution formulation is extended to image two-phase flow online using an ECT system, a narrowband image filter is defined to eliminate the influence of artifacts, and, considering the continuity of the oil distribution variation, the detected oil-air interface of a former frame is used as the initial contour for the detection of the subsequent frame; thus, the propagation from the initial contour to the boundary can be greatly accelerated, making real-time tracking possible. To verify the feasibility of the proposed method, an oil-air lubrication facility with a 4 mm inner diameter pipe is measured in normal operation using an 8-electrode ECT system. Both simulation and experiment results indicate that the modified level set method is capable of visualizing the oil-air interface accurately online

  20. Empirical and Statistical Evaluation of the Effectiveness of Four Lossless Data Compression Algorithms

    Directory of Open Access Journals (Sweden)

    N. A. Azeez

    2017-04-01

    Full Text Available Data compression is the process of reducing the size of a file to effectively reduce storage space and communication cost. The evolution of technology and the digital age have led to an unparalleled usage of digital files in the current decade. This usage has resulted in an increase in the amount of data being transmitted via various channels of data communication, which has prompted the need to examine current lossless data compression algorithms and check their level of effectiveness, so as to maximally reduce the bandwidth requirement in communication and transfer of data. Four lossless data compression algorithms were selected for implementation: the Lempel-Ziv-Welch algorithm, the Shannon-Fano algorithm, the Adaptive Huffman algorithm and Run-Length encoding. The choice of these algorithms was based on their similarities, particularly in application areas. Their level of efficiency and effectiveness was evaluated using a set of predefined performance evaluation metrics, namely compression ratio, compression factor, compression time, saving percentage, entropy and code efficiency. The algorithms were implemented in the NetBeans Integrated Development Environment using Java as the programming language. Through the statistical analysis performed using Boxplot and ANOVA and comparison made on the four algo
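
    As a small illustration of the evaluation metrics named above, the Python sketch below computes compression ratio, compression factor and saving percentage for a toy run-length encoder; this encoder is a stand-in for the study's Java implementations, not a reproduction of them.

        def run_length_encode(data: bytes) -> bytes:
            """Toy byte-oriented RLE: each run becomes (count, value) with count <= 255."""
            out = bytearray()
            i = 0
            while i < len(data):
                run = 1
                while i + run < len(data) and data[i + run] == data[i] and run < 255:
                    run += 1
                out += bytes([run, data[i]])
                i += run
            return bytes(out)

        def compression_metrics(original: bytes, compressed: bytes) -> dict:
            """Metrics commonly used to compare lossless compressors."""
            return {
                "compression_ratio": len(compressed) / len(original),    # smaller is better
                "compression_factor": len(original) / len(compressed),   # larger is better
                "saving_percentage": 100.0 * (1 - len(compressed) / len(original)),
            }

        if __name__ == "__main__":
            text = (b"AAAA" * 50 + b"BBBB" * 50) * 10
            packed = run_length_encode(text)
            print(compression_metrics(text, packed))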

  1. Multi-level learning: improving the prediction of protein, domain and residue interactions by allowing information flow between levels

    Directory of Open Access Journals (Sweden)

    McDermott Drew

    2009-08-01

    Full Text Available Abstract Background Proteins interact through specific binding interfaces that contain many residues in domains. Protein interactions thus occur on three different levels of a concept hierarchy: whole-proteins, domains, and residues. Each level offers a distinct and complementary set of features for computationally predicting interactions, including functional genomic features of whole proteins, evolutionary features of domain families and physical-chemical features of individual residues. The predictions at each level could benefit from using the features at all three levels. However, it is not trivial as the features are provided at different granularity. Results To link up the predictions at the three levels, we propose a multi-level machine-learning framework that allows for explicit information flow between the levels. We demonstrate, using representative yeast interaction networks, that our algorithm is able to utilize complementary feature sets to make more accurate predictions at the three levels than when the three problems are approached independently. To facilitate application of our multi-level learning framework, we discuss three key aspects of multi-level learning and the corresponding design choices that we have made in the implementation of a concrete learning algorithm. (1) Architecture of information flow: we show the greater flexibility of bidirectional flow over independent levels and unidirectional flow; (2) Coupling mechanism of the different levels: we show how this can be accomplished via augmenting the training sets at each level, and discuss the prevention of error propagation between different levels by means of soft coupling; (3) Sparseness of data: we show that the multi-level framework compounds data sparsity issues, and discuss how this can be dealt with by building local models in information-rich parts of the data. Our proof-of-concept learning algorithm demonstrates the advantage of combining levels, and opens up

  2. A jazz-based approach for optimal setting of pressure reducing valves in water distribution networks

    Science.gov (United States)

    De Paola, Francesco; Galdiero, Enzo; Giugni, Maurizio

    2016-05-01

    This study presents a model for valve setting in water distribution networks (WDNs), with the aim of reducing the level of leakage. The approach is based on the harmony search (HS) optimization algorithm. The HS mimics a jazz improvisation process able to find the best solutions, in this case corresponding to valve settings in a WDN. The model also interfaces with the improved version of a popular hydraulic simulator, EPANET 2.0, to check the hydraulic constraints and to evaluate the performances of the solutions. Penalties are introduced in the objective function in case of violation of the hydraulic constraints. The model is applied to two case studies, and the obtained results in terms of pressure reductions are comparable with those of competitive metaheuristic algorithms (e.g. genetic algorithms). The results demonstrate the suitability of the HS algorithm for water network management and optimization.
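
    A minimal Python sketch of the harmony search loop is shown below; the leakage objective here is a stand-in function (in the study the objective is evaluated through EPANET 2.0 with hydraulic-constraint penalties), and the HMCR, PAR and bandwidth values are typical defaults rather than the paper's settings.

        import random

        def harmony_search(objective, bounds, hms=20, hmcr=0.9, par=0.3, bw=0.05,
                           iters=5000, seed=0):
            """Minimize objective(x) over the box 'bounds' with a basic harmony search."""
            rng = random.Random(seed)
            memory = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
            costs = [objective(h) for h in memory]
            for _ in range(iters):
                new = []
                for d, (lo, hi) in enumerate(bounds):
                    if rng.random() < hmcr:                        # pick from harmony memory
                        value = memory[rng.randrange(hms)][d]
                        if rng.random() < par:                     # pitch adjustment
                            value += bw * (hi - lo) * rng.uniform(-1, 1)
                    else:                                          # random improvisation
                        value = rng.uniform(lo, hi)
                    new.append(min(max(value, lo), hi))
                cost = objective(new)
                worst = max(range(hms), key=costs.__getitem__)
                if cost < costs[worst]:                            # replace the worst harmony
                    memory[worst], costs[worst] = new, cost
            best = min(range(hms), key=costs.__getitem__)
            return memory[best], costs[best]

        if __name__ == "__main__":
            # Stand-in for "network leakage as a function of two valve settings".
            leakage = lambda x: (x[0] - 3.2) ** 2 + (x[1] - 1.5) ** 2 + 0.1
            print(harmony_search(leakage, bounds=[(0, 10), (0, 10)]))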

  3. Robust MST-Based Clustering Algorithm.

    Science.gov (United States)

    Liu, Qidong; Zhang, Ruisheng; Zhao, Zhili; Wang, Zhenghai; Jiao, Mengyao; Wang, Guangjing

    2018-06-01

    Minimax similarity stresses the connectedness of points via mediating elements rather than favoring high mutual similarity. This grouping principle yields superior clustering results when mining arbitrarily-shaped clusters in data. However, it is not robust against noise and outliers in the data. There are two main problems with the grouping principle: first, a single object that is far away from all other objects defines a separate cluster, and second, two connected clusters can be regarded as two parts of one cluster. In order to solve these problems, we propose a robust minimum spanning tree (MST)-based clustering algorithm in this letter. First, we separate the connected objects by applying a density-based coarsening phase, resulting in a low-rank matrix in which each element denotes a supernode obtained by combining a set of nodes. Then a greedy method is presented to partition those supernodes by working on the low-rank matrix. Instead of removing the longest edges from the MST, our algorithm groups the data set based on the minimax similarity. Finally, the assignment of all data points can be achieved through their corresponding supernodes. Experimental results on many synthetic and real-world data sets show that our algorithm consistently outperforms the compared clustering algorithms.
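
    For context, the Python sketch below shows the classic MST-based clustering baseline that the letter improves upon (build an MST over pairwise distances and cut the longest edges); it does not implement the proposed density-based coarsening or minimax grouping.

        import numpy as np
        from scipy.sparse.csgraph import minimum_spanning_tree, connected_components
        from scipy.spatial.distance import pdist, squareform

        def mst_clustering(points, n_clusters):
            """Classic MST clustering: remove the (n_clusters - 1) longest MST edges."""
            dist = squareform(pdist(points))
            mst = minimum_spanning_tree(dist).toarray()             # dense (n, n) edge weights
            edges = np.argwhere(mst > 0)
            weights = mst[mst > 0]
            cuts = np.argsort(weights)[len(weights) - (n_clusters - 1):]
            for i, j in edges[cuts]:
                mst[i, j] = 0                                        # cut the longest edges
            _, labels = connected_components(mst, directed=False)
            return labels

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            blob_a = rng.normal(0, 0.3, (30, 2))
            blob_b = rng.normal(3, 0.3, (30, 2))
            data = np.vstack([blob_a, blob_b])
            print(np.bincount(mst_clustering(data, 2)))              # two groups of 30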

  4. Parameter Selection for Ant Colony Algorithm Based on Bacterial Foraging Algorithm

    Directory of Open Access Journals (Sweden)

    Peng Li

    2016-01-01

    Full Text Available The optimal performance of the ant colony algorithm (ACA) mainly depends on suitable parameters; therefore, parameter selection for ACA is important. We propose a parameter selection method for ACA based on the bacterial foraging algorithm (BFA), considering the effects of coupling between different parameters. Firstly, parameters for ACA are mapped into a multidimensional space, using a chemotactic operator to ensure that each parameter group approaches the optimal value, speeding up the convergence for each parameter set. Secondly, the operation speed for optimizing the entire parameter set is accelerated using a reproduction operator. Finally, the elimination-dispersal operator is used to strengthen the global optimization of the parameters, which avoids falling into a local optimal solution. In order to validate the effectiveness of this method, the results were compared with those using a genetic algorithm (GA) and a particle swarm optimization (PSO), and simulations were conducted using different grid maps for robot path planning. The results indicated that parameter selection for ACA based on BFA was the superior method, able to determine the best parameter combination rapidly, accurately, and effectively.

  5. Level set methods for inverse scattering—some recent developments

    International Nuclear Information System (INIS)

    Dorn, Oliver; Lesselier, Dominique

    2009-01-01

    We give an update on recent techniques which use a level set representation of shapes for solving inverse scattering problems, completing in that matter the exposition made in (Dorn and Lesselier 2006 Inverse Problems 22 R67) and (Dorn and Lesselier 2007 Deformable Models (New York: Springer) pp 61–90), and bringing it closer to the current state of the art

  6. Power Constrained High-Level Synthesis of Battery Powered Digital Systems

    DEFF Research Database (Denmark)

    Nielsen, Sune Fallgaard; Madsen, Jan

    2003-01-01

    We present a high-level synthesis algorithm solving the combined scheduling, allocation and binding problem minimizing area under both latency and maximum power per clock-cycle constraints. Our approach eliminates the large power spikes, resulting in an increased battery lifetime, a property of utmost importance for battery powered embedded systems. Our approach extends the partial-clique partitioning algorithm by introducing power awareness through a heuristic algorithm which bounds the design space to those of power feasible schedules. We have applied our algorithm on a set of dataflow graphs...

  7. Learning algorithms and automatic processing of languages; Algorithmes a apprentissage et traitement automatique des langues

    Energy Technology Data Exchange (ETDEWEB)

    Fluhr, Christian Yves Andre

    1977-06-15

    This research thesis concerns the field of artificial intelligence. It addresses learning algorithms applied to automatic processing of languages. The author first briefly describes some mechanisms of human intelligence in order to describe how these mechanisms are simulated on a computer. He outlines the specific role of learning in various manifestations of intelligence. Then, based on the Markov's algorithm theory, the author discusses the notion of learning algorithm. Two main types of learning algorithms are then addressed: firstly, an 'algorithm-teacher dialogue' type sanction-based algorithm which aims at learning how to solve grammatical ambiguities in submitted texts; secondly, an algorithm related to a document system which structures semantic data automatically obtained from a set of texts in order to be able to understand by references to any question on the content of these texts.

  8. Contact-impact algorithms on parallel computers

    International Nuclear Information System (INIS)

    Zhong Zhihua; Nilsson, Larsgunnar

    1994-01-01

    Contact-impact algorithms on parallel computers are discussed within the context of explicit finite element analysis. The algorithms concerned include a contact searching algorithm and an algorithm for contact force calculations. The contact searching algorithm is based on the territory concept of the general HITA algorithm. However, no distinction is made between different contact bodies, or between different contact surfaces. All contact segments from contact boundaries are taken as a single set. Hierarchy territories and contact territories are expanded. A three-dimensional bucket sort algorithm is used to sort contact nodes. The defence node algorithm is used in the calculation of contact forces. Both the contact searching algorithm and the defence node algorithm are implemented on the connection machine CM-200. The performance of the algorithms is examined under different circumstances, and numerical results are presented. ((orig.))

  9. Structure and performance of a real-time algorithm to detect tsunami or tsunami-like alert conditions based on sea-level records analysis

    Directory of Open Access Journals (Sweden)

    L. Bressan

    2011-05-01

    Full Text Available The goal of this paper is to present an original real-time algorithm devised for detection of tsunami or tsunami-like waves we call TEDA (Tsunami Early Detection Algorithm, and to introduce a methodology to evaluate its performance. TEDA works on the sea level records of a single station and implements two distinct modules running concurrently: one to assess the presence of tsunami waves ("tsunami detection" and the other to identify high-amplitude long waves ("secure detection". Both detection methods are based on continuously updated time functions depending on a number of parameters that can be varied according to the application. In order to select the most adequate parameter setting for a given station, a methodology to evaluate TEDA performance has been devised, that is based on a number of indicators and that is simple to use. In this paper an example of TEDA application is given by using data from a tide gauge located at the Adak Island in Alaska, USA, that resulted in being quite suitable since it recorded several tsunamis in the last years using the sampling rate of 1 min.

  10. Implementation of a partitioned algorithm for simulation of large CSI problems

    Science.gov (United States)

    Alvin, Kenneth F.; Park, K. C.

    1991-01-01

    The implementation of a partitioned numerical algorithm for determining the dynamic response of coupled structure/controller/estimator finite-dimensional systems is reviewed. The partitioned approach leads to a set of coupled first and second-order linear differential equations which are numerically integrated with extrapolation and implicit step methods. The present software implementation, ACSIS, utilizes parallel processing techniques at various levels to optimize performance on a shared-memory concurrent/vector processing system. A general procedure for the design of controller and filter gains is also implemented, which utilizes the vibration characteristics of the structure to be solved. Also presented are: example problems; a user's guide to the software; the procedures and algorithm scripts; a stability analysis for the algorithm; and the source code for the parallel implementation.

  11. Experimental Performance of a Genetic Algorithm for Airborne Strategic Conflict Resolution

    Science.gov (United States)

    Karr, David A.; Vivona, Robert A.; Roscoe, David A.; DePascale, Stephen M.; Consiglio, Maria

    2009-01-01

    The Autonomous Operations Planner, a research prototype flight-deck decision support tool to enable airborne self-separation, uses a pattern-based genetic algorithm to resolve predicted conflicts between the ownship and traffic aircraft. Conflicts are resolved by modifying the active route within the ownship's flight management system according to a predefined set of maneuver pattern templates. The performance of this pattern-based genetic algorithm was evaluated in the context of batch-mode Monte Carlo simulations running over 3600 flight hours of autonomous aircraft in en-route airspace under conditions ranging from typical current traffic densities to several times that level. Encountering over 8900 conflicts during two simulation experiments, the genetic algorithm was able to resolve all but three conflicts, while maintaining a required time of arrival constraint for most aircraft. Actual elapsed running time for the algorithm was consistent with conflict resolution in real time. The paper presents details of the genetic algorithm's design, along with mathematical models of the algorithm's performance and observations regarding the effectiveness of using complementary maneuver patterns when multiple resolutions by the same aircraft were required.

  12. PSCAD modeling of a two-level space vector pulse width modulation algorithm for power electronics education

    Directory of Open Access Journals (Sweden)

    Ahmet Mete Vural

    2016-09-01

    Full Text Available This paper presents the design details of a two-level space vector pulse width modulation algorithm in PSCAD that is able to generate pulses for three-phase two-level DC/AC converters with two different switching patterns. The presented FORTRAN code is generic and can be easily modified to implement many other kinds of space vector modulation strategies. The code is also editable for hardware programming. The new component is tested and verified by comparing its output, as six gating signals, with that of a similar component in the MATLAB library. Moreover, the component is used to generate digital signals for closed-loop control of a STATCOM for reactive power compensation in PSCAD. This add-on can be an effective tool to give students a better understanding of the space vector modulation algorithm for different control tasks in the power electronics area, and can motivate them to learn.
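
    The PSCAD component itself is written in FORTRAN; the Python sketch below reproduces only the conventional two-level SVPWM arithmetic (sector identification and dwell-time calculation for the two adjacent active vectors and the zero vectors), not the article's code or its switching-pattern generation.

        import math

        def svpwm_dwell_times(v_ref, theta, v_dc, t_s):
            """Two-level SVPWM: sector and dwell times for one switching period.

            v_ref: reference vector magnitude (V), theta: its angle in radians,
            v_dc: DC-link voltage (V), t_s: switching period (s).
            Returns (sector, t1, t2, t0) where t1/t2 are the adjacent active-vector
            times and t0 the total zero-vector time.
            """
            theta = theta % (2 * math.pi)
            sector = int(theta // (math.pi / 3)) + 1          # sectors 1..6
            local = theta - (sector - 1) * math.pi / 3        # angle inside the sector
            k = math.sqrt(3) * t_s * v_ref / v_dc
            t1 = k * math.sin(math.pi / 3 - local)
            t2 = k * math.sin(local)
            t0 = t_s - t1 - t2                                # split between V0 and V7 by the pattern
            return sector, t1, t2, t0

        if __name__ == "__main__":
            # 400 V DC link, roughly 0.8 modulation index, 10 kHz switching
            sector, t1, t2, t0 = svpwm_dwell_times(v_ref=0.8 * 400 / math.sqrt(3),
                                                   theta=math.radians(40), v_dc=400, t_s=1e-4)
            print(sector, round(t1 * 1e6, 2), round(t2 * 1e6, 2), round(t0 * 1e6, 2))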

  13. Novel multimodality segmentation using level sets and Jensen-Renyi divergence

    NARCIS (Netherlands)

    Markel, Daniel; Zaidi, Habib; El Naqa, Issam

    2013-01-01

    Purpose: Positron emission tomography (PET) is playing an increasing role in radiotherapy treatment planning. However, despite progress, robust algorithms for PET and multimodal image segmentation are still lacking, especially if the algorithm were extended to image-guided and adaptive radiotherapy

  14. Level-set simulations of buoyancy-driven motion of single and multiple bubbles

    International Nuclear Information System (INIS)

    Balcázar, Néstor; Lehmkuhl, Oriol; Jofre, Lluís; Oliva, Assensi

    2015-01-01

    Highlights: • A conservative level-set method is validated and verified. • An extensive study of buoyancy-driven motion of single bubbles is performed. • The interactions of two spherical and ellipsoidal bubbles are studied. • The interaction of multiple bubbles is simulated in a vertical channel. - Abstract: This paper presents a numerical study of buoyancy-driven motion of single and multiple bubbles by means of the conservative level-set method. First, an extensive study of the hydrodynamics of single bubbles rising in a quiescent liquid is performed, including their shape, terminal velocity, drag coefficients and wake patterns. These results are validated against experimental and numerical data well established in the scientific literature. Then, a further study on the interaction of two spherical and ellipsoidal bubbles is performed for different orientation angles. Finally, the interaction of multiple bubbles is explored in a periodic vertical channel. The results show that the conservative level-set approach can be used for accurate modelling of bubble dynamics. Moreover, it is demonstrated that the present method is numerically stable for a wide range of Morton and Reynolds numbers.

  15. Predicting Coastal Flood Severity using Random Forest Algorithm

    Science.gov (United States)

    Sadler, J. M.; Goodall, J. L.; Morsy, M. M.; Spencer, K.

    2017-12-01

    Coastal floods have become more common recently and are predicted to further increase in frequency and severity due to sea level rise. Predicting floods in coastal cities can be difficult due to the number of environmental and geographic factors which can influence flooding events. Built stormwater infrastructure and irregular urban landscapes add further complexity. This paper demonstrates the use of machine learning algorithms in predicting street flood occurrence in an urban coastal setting. The model is trained and evaluated using data from Norfolk, Virginia USA from September 2010 - October 2016. Rainfall, tide levels, water table levels, and wind conditions are used as input variables. Street flooding reports made by city workers after named and unnamed storm events, ranging from 1-159 reports per event, are the model output. Results show that Random Forest provides predictive power in estimating the number of flood occurrences given a set of environmental conditions with an out-of-bag root mean squared error of 4.3 flood reports and a mean absolute error of 0.82 flood reports. The Random Forest algorithm performed much better than Poisson regression. From the Random Forest model, total daily rainfall was by far the most important factor in flood occurrence prediction, followed by daily low tide and daily higher high tide. The model demonstrated here could be used to predict flood severity based on forecast rainfall and tide conditions and could be further enhanced using more complete street flooding data for model training.
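
    As a rough illustration of the modelling setup described above, the sketch below trains a random forest regressor with out-of-bag error estimation. The data are synthetic stand-ins; the real study used rainfall, tide levels, water table levels and wind as predictors and the number of street-flooding reports per event as the response.

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        # Synthetic stand-in for the event table (values and relationships are invented).
        rng = np.random.default_rng(0)
        n_events = 300
        rainfall = rng.gamma(2.0, 15.0, n_events)        # mm of rain per event
        low_tide = rng.normal(0.3, 0.2, n_events)        # m relative to datum
        high_tide = rng.normal(1.2, 0.3, n_events)       # m relative to datum
        water_table = rng.normal(1.0, 0.4, n_events)     # m below surface
        wind = rng.gamma(2.0, 3.0, n_events)             # m/s
        reports = np.round(0.15 * rainfall + 4.0 * np.clip(high_tide - 1.2, 0, None)
                           + rng.poisson(1.0, n_events)).astype(int)

        X = np.column_stack([rainfall, low_tide, high_tide, water_table, wind])
        names = ["rainfall", "low_tide", "higher_high_tide", "water_table", "wind"]

        rf = RandomForestRegressor(n_estimators=500, oob_score=True, random_state=0)
        rf.fit(X, reports)

        oob_rmse = np.sqrt(np.mean((rf.oob_prediction_ - reports) ** 2))  # out-of-bag RMSE
        print("OOB RMSE (flood reports):", round(oob_rmse, 2))
        print(dict(zip(names, rf.feature_importances_.round(3))))

    The feature importances printed at the end play the role of the rainfall/tide ranking reported in the abstract.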

  16. Transmission dose estimation algorithm for in vivo dosimetry

    International Nuclear Information System (INIS)

    Yun, Hyong Geun; Shin, Kyo Chul; Huh, Soon Nyung; Woo, Hong Gyun; Ha, Sung Whan; Lee, Hyoung Koo

    2002-01-01

    Measurement of transmission dose is useful for in vivo dosimetry for QA purposes. The objective of this study is to develop an algorithm for estimation of tumor dose using measured transmission dose for open radiation fields. Transmission dose was measured with various field sizes (FS), phantom thicknesses (Tp), and phantom-chamber distances (PCD) with an acrylic phantom for 6 MV and 10 MV X-rays. Source to chamber distance (SCD) was set to 150 cm. Measurement was conducted with a 0.6 cc Farmer type ion chamber. Using the measured data and regression analysis, an algorithm was developed for estimation of the expected transmission dose reading. Accuracy of the algorithm was tested with a flat solid phantom in various settings. The algorithm consists of a quadratic function of log(A/P) (where A/P is the area-perimeter ratio) and a cubic function of PCD. The algorithm could estimate dose with very high accuracy for open square fields, with errors within ±0.5%. For elongated radiation fields, the errors were limited to ±1.0%. The developed algorithm can accurately estimate the transmission dose in open radiation fields with various treatment settings.
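
    A least-squares fit of the functional form described above (quadratic in log(A/P), cubic in PCD) can be sketched as follows. The readings below are synthetic placeholders, not the chamber measurements or coefficients from the study.

        import numpy as np

        # Synthetic stand-in for the chamber readings over a grid of A/P and PCD values.
        rng = np.random.default_rng(0)
        ap = np.repeat([2.5, 3.75, 5.0, 6.25, 7.5], 5)        # area-perimeter ratio A/P, cm
        pcd = np.tile([10.0, 20.0, 30.0, 40.0, 50.0], 5)      # phantom-chamber distance, cm
        reading = 0.95 + 0.03 * np.log(ap) + 1e-3 * pcd + rng.normal(0, 0.002, ap.size)

        # Design matrix: quadratic in log(A/P) and cubic in PCD, fitted by ordinary least squares.
        X = np.column_stack([np.ones_like(ap), np.log(ap), np.log(ap) ** 2,
                             pcd, pcd ** 2, pcd ** 3])
        coeffs, *_ = np.linalg.lstsq(X, reading, rcond=None)

        def expected_reading(ap_value, pcd_value):
            """Predicted transmission reading for a given A/P and PCD."""
            x = np.array([1.0, np.log(ap_value), np.log(ap_value) ** 2,
                          pcd_value, pcd_value ** 2, pcd_value ** 3])
            return float(x @ coeffs)

        print(round(expected_reading(5.0, 30.0), 4))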

  17. Transmission dose estimation algorithm for in vivo dosimetry

    Energy Technology Data Exchange (ETDEWEB)

    Yun, Hyong Geun; Shin, Kyo Chul [Dankook Univ., Seoul (Korea, Republic of); Huh, Soon Nyung; Woo, Hong Gyun; Ha, Sung Whan [Seoul National Univ., Seoul (Korea, Republic of); Lee, Hyoung Koo [Catholic Univ., Seoul (Korea, Republic of)

    2002-07-01

    Measurement of transmission dose is useful for in vivo dosimetry for QA purposes. The objective of this study is to develop an algorithm for estimation of tumor dose using measured transmission dose for open radiation fields. Transmission dose was measured with various field sizes (FS), phantom thicknesses (Tp), and phantom-chamber distances (PCD) with an acrylic phantom for 6 MV and 10 MV X-rays. Source to chamber distance (SCD) was set to 150 cm. Measurement was conducted with a 0.6 cc Farmer type ion chamber. Using the measured data and regression analysis, an algorithm was developed for estimation of the expected transmission dose reading. Accuracy of the algorithm was tested with a flat solid phantom in various settings. The algorithm consists of a quadratic function of log(A/P) (where A/P is the area-perimeter ratio) and a cubic function of PCD. The algorithm could estimate dose with very high accuracy for open square fields, with errors within ±0.5%. For elongated radiation fields, the errors were limited to ±1.0%. The developed algorithm can accurately estimate the transmission dose in open radiation fields with various treatment settings.

  18. Unsupervised Classification Using Immune Algorithm

    OpenAIRE

    Al-Muallim, M. T.; El-Kouatly, R.

    2012-01-01

    An unsupervised classification algorithm based on the clonal selection principle, named Unsupervised Clonal Selection Classification (UCSC), is proposed in this paper. The proposed algorithm is data driven and self-adaptive; it adjusts its parameters to the data to make the classification operation as fast as possible. The performance of UCSC is evaluated by comparing it with the well-known k-means algorithm using several artificial and real-life data sets. The experiments show that the proposed U...

  19. Mouse obesity network reconstruction with a variational Bayes algorithm to employ aggressive false positive control

    Directory of Open Access Journals (Sweden)

    Logsdon Benjamin A

    2012-04-01

    Full Text Available Abstract Background We propose a novel variational Bayes network reconstruction algorithm to extract the most relevant disease factors from high-throughput genomic data-sets. Our algorithm is the only scalable method for regularized network recovery that employs Bayesian model averaging and that can internally estimate an appropriate level of sparsity to ensure few false positives enter the model without the need for cross-validation or a model selection criterion. We use our algorithm to characterize the effect of genetic markers and liver gene expression traits on mouse obesity related phenotypes, including weight, cholesterol, glucose, and free fatty acid levels, in an experiment previously used for discovery and validation of network connections: an F2 intercross between the C57BL/6 J and C3H/HeJ mouse strains, where apolipoprotein E is null on the background. Results We identified eleven genes, Gch1, Zfp69, Dlgap1, Gna14, Yy1, Gabarapl1, Folr2, Fdft1, Cnr2, Slc24a3, and Ccl19, and a quantitative trait locus directly connected to weight, glucose, cholesterol, or free fatty acid levels in our network. None of these genes were identified by other network analyses of this mouse intercross data-set, but all have been previously associated with obesity or related pathologies in independent studies. In addition, through both simulations and data analysis we demonstrate that our algorithm achieves superior performance in terms of power and type I error control than other network recovery algorithms that use the lasso and have bounds on type I error control. Conclusions Our final network contains 118 previously associated and novel genes affecting weight, cholesterol, glucose, and free fatty acid levels that are excellent obesity risk candidates.

  20. A stationary wavelet transform and a time-frequency based spike detection algorithm for extracellular recorded data

    Science.gov (United States)

    Lieb, Florian; Stark, Hans-Georg; Thielemann, Christiane

    2017-06-01

    Objective. Spike detection from extracellular recordings is a crucial preprocessing step when analyzing neuronal activity. Whether a specific part of the signal is classified as a spike or not matters for any subsequent processing step, such as spike sorting or burst detection, since erroneously identified spikes degrade those analyses. Many spike detection algorithms have already been suggested, all working reasonably well whenever the signal-to-noise ratio is large enough. When the noise level is high, however, these algorithms have poor performance. Approach. In this paper we present two new spike detection algorithms. The first is based on a stationary wavelet energy operator and the second is based on the time-frequency representation of spikes. Both algorithms are more reliable than the most commonly used methods. Main results. The performance of the algorithms is confirmed using simulated data resembling original data recorded from cortical neurons with multielectrode arrays. In order to demonstrate that the performance of the algorithms is not restricted to only one specific set of data, we also verify the performance using a simulated, publicly available data set. We show that both proposed algorithms have the best performance among all tested methods, regardless of the signal-to-noise ratio, in both data sets. Significance. This contribution will benefit electrophysiological investigations of human cells. In particular, the spatial and temporal analysis of neural network communication is improved by using the proposed spike detection algorithms.
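
    The authors' stationary-wavelet operator is not reproduced here; as a minimal illustration of the energy-operator-plus-threshold idea that such detectors build on, the following sketch applies a smoothed Teager energy operator and a robust noise threshold to a raw trace. All parameter values and the toy trace are illustrative.

        import numpy as np

        def smoothed_energy(x, win=7):
            """Teager energy operator psi[n] = x[n]^2 - x[n-1]*x[n+1], smoothed
            with a short Bartlett window to suppress isolated noise peaks."""
            psi = np.zeros_like(x)
            psi[1:-1] = x[1:-1] ** 2 - x[:-2] * x[2:]
            w = np.bartlett(win)
            return np.convolve(psi, w / w.sum(), mode="same")

        def detect_spikes(signal, fs, k=8.0, refractory_ms=1.0):
            """Return sample indices whose smoothed energy exceeds k times a robust
            scale estimate; crossings inside a refractory window are merged."""
            energy = smoothed_energy(signal)
            scale = np.median(np.abs(energy)) / 0.6745
            above = np.flatnonzero(energy > k * scale)
            refractory = int(refractory_ms * 1e-3 * fs)
            spikes = []
            for idx in above:
                if not spikes or idx - spikes[-1] > refractory:
                    spikes.append(idx)
            return np.array(spikes, dtype=int)

        # Toy trace: white noise with two injected spike-like transients
        fs = 25_000
        rng = np.random.default_rng(1)
        x = rng.normal(0.0, 0.1, fs)
        x[5000:5005] += 2.0
        x[12000:12005] += 2.0
        print(detect_spikes(x, fs))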

  1. A Level Set Discontinuous Galerkin Method for Free Surface Flows

    DEFF Research Database (Denmark)

    Grooss, Jesper; Hesthaven, Jan

    2006-01-01

    We present a discontinuous Galerkin method on a fully unstructured grid for the modeling of unsteady incompressible fluid flows with free surfaces. The surface is modeled by embedding and represented by a level set. We discuss the discretization of the flow equations and the level set equation...

  2. Priority setting at the micro-, meso- and macro-levels in Canada, Norway and Uganda.

    Science.gov (United States)

    Kapiriri, Lydia; Norheim, Ole Frithjof; Martin, Douglas K

    2007-06-01

    The objectives of this study were (1) to describe the process of healthcare priority setting in Ontario-Canada, Norway and Uganda at the three levels of decision-making; (2) to evaluate the description using the framework for fair priority setting, accountability for reasonableness; so as to identify lessons of good practices. We carried out case studies involving key informant interviews, with 184 health practitioners and health planners from the macro-level, meso-level and micro-level from Canada-Ontario, Norway and Uganda (selected by virtue of their varying experiences in priority setting). Interviews were audio-recorded, transcribed and analyzed using a modified thematic approach. The descriptions were evaluated against the four conditions of "accountability for reasonableness", relevance, publicity, revisions and enforcement. Areas of adherence to these conditions were identified as lessons of good practices; areas of non-adherence were identified as opportunities for improvement. (i) at the macro-level, in all three countries, cabinet makes most of the macro-level resource allocation decisions and they are influenced by politics, public pressure, and advocacy. Decisions within the ministries of health are based on objective formulae and evidence. International priorities influenced decisions in Uganda. Some priority-setting reasons are publicized through circulars, printed documents and the Internet in Canada and Norway. At the meso-level, hospital priority-setting decisions were made by the hospital managers and were based on national priorities, guidelines, and evidence. Hospital departments that handle emergencies, such as surgery, were prioritized. Some of the reasons are available on the hospital intranet or presented at meetings. Micro-level practitioners considered medical and social worth criteria. These reasons are not publicized. Many practitioners lacked knowledge of the macro- and meso-level priority-setting processes. (ii) Evaluation

  3. An extension of the directed search domain algorithm to bilevel optimization

    Science.gov (United States)

    Wang, Kaiqiang; Utyuzhnikov, Sergey V.

    2017-08-01

    A method is developed for generating a well-distributed Pareto set for the upper level in bilevel multiobjective optimization. The approach is based on the Directed Search Domain (DSD) algorithm, which is a classical approach for generation of a quasi-evenly distributed Pareto set in multiobjective optimization. The approach contains a double-layer optimizer designed in a specific way under the framework of the DSD method. The double-layer optimizer is based on bilevel single-objective optimization and aims to find a unique optimal Pareto solution rather than generate the whole Pareto frontier on the lower level in order to improve the optimization efficiency. The proposed bilevel DSD approach is verified on several test cases, and a relevant comparison against another classical approach is made. It is shown that the approach can generate a quasi-evenly distributed Pareto set for the upper level with relatively low time consumption.

  4. Embedded Real-Time Architecture for Level-Set-Based Active Contours

    Directory of Open Access Journals (Sweden)

    Dejnožková Eva

    2005-01-01

    Full Text Available Methods described by partial differential equations have gained considerable interest because of indisputable advantages such as an easy mathematical description of the underlying physics phenomena, subpixel precision, isotropy, and direct extension to higher dimensions. Though their implementation within the level set framework offers other interesting advantages, their wide industrial deployment on embedded systems is slowed down by their considerable computational effort. This paper exploits the high parallelization potential of the operators from the level set framework and proposes a scalable, asynchronous, multiprocessor platform suitable for system-on-chip solutions. We concentrate on obtaining real-time execution capabilities. The performance is evaluated on a continuous watershed and an object-tracking application based on a simple gradient-based attraction force driving the active contour. The proposed architecture can be realized on commercially available FPGAs. It is built around general-purpose processor cores, and can run code developed with usual tools.

  5. A Bi-Level Particle Swarm Optimization Algorithm for Solving Unit Commitment Problems with Wind-EVs Coordinated Dispatch

    Science.gov (United States)

    Song, Lei; Zhang, Bo

    2017-07-01

    Nowadays, the grid faces many more challenges caused by wind power and the integration of electric vehicles (EVs). Based on the potential of coordinated dispatch, a model of wind-EV coordinated dispatch was developed. Then a bi-level particle swarm optimization algorithm for solving the model is proposed in this paper. Application of this algorithm to a 10-unit test system showed that coordinated dispatch can benefit the power system in the following ways: (1) reducing operating costs; (2) improving the utilization of wind power; (3) stabilizing the peak-valley difference.

  6. Computing Convex Coverage Sets for Faster Multi-Objective Coordination

    NARCIS (Netherlands)

    Roijers, D.M.; Whiteson, S.; Oliehoek, F.A.

    2015-01-01

    In this article, we propose new algorithms for multi-objective coordination graphs (MO-CoGs). Key to the efficiency of these algorithms is that they compute a convex coverage set (CCS) instead of a Pareto coverage set (PCS). Not only is a CCS a sufficient solution set for a large class of problems,

  7. Community detection algorithm evaluation with ground-truth data

    Science.gov (United States)

    Jebabli, Malek; Cherifi, Hocine; Cherifi, Chantal; Hamouda, Atef

    2018-02-01

    Community structure is of paramount importance for the understanding of complex networks. Consequently, there is a tremendous effort in order to develop efficient community detection algorithms. Unfortunately, the issue of a fair assessment of these algorithms is a thriving open question. If the ground-truth community structure is available, various clustering-based metrics are used in order to compare it versus the one discovered by these algorithms. However, these metrics defined at the node level are fairly insensitive to the variation of the overall community structure. To overcome these limitations, we propose to exploit the topological features of the 'community graphs' (where the nodes are the communities and the links represent their interactions) in order to evaluate the algorithms. To illustrate our methodology, we conduct a comprehensive analysis of overlapping community detection algorithms using a set of real-world networks with known a priori community structure. Results provide a better perception of their relative performance as compared to classical metrics. Moreover, they show that more emphasis should be put on the topology of the community structure. We also investigate the relationship between the topological properties of the community structure and the alternative evaluation measures (quality metrics and clustering metrics). It appears clearly that they present different views of the community structure and that they must be combined in order to evaluate the effectiveness of community detection algorithms.

  8. Numerical Modelling of Three-Fluid Flow Using The Level-set Method

    Science.gov (United States)

    Li, Hongying; Lou, Jing; Shang, Zhi

    2014-11-01

    This work presents a numerical model for simulation of three-fluid flow involving two different moving interfaces. These interfaces are captured using the level-set method via two different level-set functions. A combined formulation with only one set of conservation equations for the whole physical domain, consisting of the three different immiscible fluids, is employed. Numerical solution is performed on a fixed mesh using the finite volume method. Surface tension effect is incorporated using the Continuum Surface Force model. Validation of the present model is made against available results for stratified flow and rising bubble in a container with a free surface. Applications of the present model are demonstrated by a variety of three-fluid flow systems including (1) three-fluid stratified flow, (2) two-fluid stratified flow carrying the third fluid in the form of drops and (3) simultaneous rising and settling of two drops in a stationary third fluid. The work is supported by a Thematic and Strategic Research from A*STAR, Singapore (Ref. #: 1021640075).

  9. Combinatorial Clustering Algorithm of Quantum-Behaved Particle Swarm Optimization and Cloud Model

    Directory of Open Access Journals (Sweden)

    Mi-Yuan Shan

    2013-01-01

    Full Text Available We propose a combinatorial clustering algorithm of cloud model and quantum-behaved particle swarm optimization (COCQPSO) to solve the stochastic problem. The algorithm employs a novel probability model as well as a permutation-based local search method. The parameters of COCQPSO are set based on a design of experiments. In a comprehensive computational study, we scrutinize the performance of COCQPSO on a set of widely used benchmark instances. By benchmarking the combinatorial clustering algorithm against state-of-the-art algorithms, we show that its performance compares very favorably. The fuzzy combinatorial optimization algorithm of cloud model and quantum-behaved particle swarm optimization (FCOCQPSO) in vague sets (IVSs) is more expressive than other fuzzy-set formulations. Finally, numerical examples show the remarkable clustering effectiveness of the COCQPSO and FCOCQPSO clustering algorithms.

  10. Acute stroke: automatic perfusion lesion outlining using level sets.

    Science.gov (United States)

    Mouridsen, Kim; Nagenthiraja, Kartheeban; Jónsdóttir, Kristjana Ýr; Ribe, Lars R; Neumann, Anders B; Hjort, Niels; Østergaard, Leif

    2013-11-01

    To develop a user-independent algorithm for the delineation of hypoperfused tissue on perfusion-weighted images and evaluate its performance relative to a standard threshold method in simulated data, as well as in acute stroke patients. The study was approved by the local ethics committee, and patients gave written informed consent prior to their inclusion in the study. The algorithm identifies hypoperfused tissue in mean transit time maps by simultaneously minimizing the mean square error between individual and mean perfusion values inside and outside a smooth boundary. In 14 acute stroke patients, volumetric agreement between automated outlines and manual outlines determined in consensus among four neuroradiologists was assessed with Bland-Altman analysis, while spatial agreement was quantified by using lesion overlap relative to mean lesion volume (Dice coefficient). Performance improvement relative to a standard threshold approach was tested with the Wilcoxon signed rank test. The mean difference in lesion volume between automated outlines and manual outlines was -9.0 mL ± 44.5 (standard deviation). The lowest mean volume difference for the threshold approach was -25.8 mL ± 88.2. A significantly higher Dice coefficient was observed with the algorithm (0.71; interquartile range [IQR], 0.42-0.75) compared with the threshold approach (0.50; IQR, 0.27-0.57; P < .001). The corresponding agreement among experts was 0.79 (IQR, 0.69-0.83). The perfusion lesions outlined by the automated algorithm agreed well with those defined manually in consensus by four experts and were superior to those obtained by using the standard threshold approach. This user-independent algorithm may improve the assessment of perfusion images as part of acute stroke treatment. http://radiology.rsna.org/lookup/suppl/doi:10.1148/radiol.13121622/-/DC1. RSNA, 2013

  11. Fast-Solving Quasi-Optimal LS-S3VM Based on an Extended Candidate Set.

    Science.gov (United States)

    Ma, Yuefeng; Liang, Xun; Kwok, James T; Li, Jianping; Zhou, Xiaoping; Zhang, Haiyan

    2018-04-01

    The semisupervised least squares support vector machine (LS-S3VM) is an important enhancement of least squares support vector machines in semisupervised learning. Given that most data collected from the real world are without labels, semisupervised approaches are more applicable than standard supervised approaches. Although a few training methods for LS-S3VM exist, the problem of deriving the optimal decision hyperplane efficiently and effectually has not been solved. In this paper, a fully weighted model of LS-S3VM is proposed, and a simple integer programming (IP) model is introduced through an equivalent transformation to solve the model. Based on the distances between the unlabeled data and the decision hyperplane, a new indicator is designed to represent the possibility that the label of an unlabeled datum should be reversed in each iteration during training. Using the indicator, we construct an extended candidate set consisting of the indices of unlabeled data with high possibilities, which integrates more information from unlabeled data. Our algorithm is degenerated into a special scenario of the previous algorithm when the extended candidate set is reduced into a set with only one element. Two strategies are utilized to determine the descent directions based on the extended candidate set. Furthermore, we developed a novel method for locating a good starting point based on the properties of the equivalent IP model. Combined with the extended candidate set and the carefully computed starting point, a fast algorithm to solve LS-S3VM quasi-optimally is proposed. The choice of quasi-optimal solutions results in low computational cost and avoidance of overfitting. Experiments show that our algorithm equipped with the two designed strategies is more effective than other algorithms in at least one of the following three aspects: 1) computational complexity; 2) generalization ability; and 3) flexibility. However, our algorithm and other algorithms have

  12. Sequential function approximation on arbitrarily distributed point sets

    Science.gov (United States)

    Wu, Kailiang; Xiu, Dongbin

    2018-02-01

    We present a randomized iterative method for approximating an unknown function sequentially on an arbitrary point set. The method is based on a recently developed sequential approximation (SA) method, which approximates a target function using one data point at each step and avoids matrix operations. The focus of this paper is on data sets with highly irregular distributions of points. We present a nearest neighbor replacement (NNR) algorithm, which allows one to sample irregular data sets in a near-optimal manner. We provide mathematical justification and error estimates for the NNR algorithm. Extensive numerical examples are also presented to demonstrate that the NNR algorithm can deliver satisfactory convergence for the SA method on data sets with high irregularity in their point distributions.

  13. An Adaptive Tradeoff Algorithm for Multi-issue SLA Negotiation

    Science.gov (United States)

    Son, Seokho; Sim, Kwang Mong

    Since participants in a Cloud may be independent bodies, mechanisms are necessary for resolving different preferences in leasing Cloud services. Whereas there are currently mechanisms that support service-level agreement negotiation, there is little or no negotiation support for concurrent price and timeslot negotiation for Cloud service reservations. Concurrent price and timeslot negotiation requires a tradeoff algorithm to generate and evaluate proposals that consist of a price and a timeslot. The contribution of this work is thus to design an adaptive tradeoff algorithm for a multi-issue negotiation mechanism. The tradeoff algorithm, referred to as "adaptive burst mode," is designed to increase negotiation speed and total utility and to reduce computational load by adaptively generating concurrent sets of proposals. The empirical results obtained from simulations carried out using a testbed suggest that, with the concurrent price and timeslot negotiation mechanism and the adaptive tradeoff algorithm: 1) both agents achieve the best performance in terms of negotiation speed and utility; and 2) the number of evaluations of each proposal is lower than in the previous scheme (burst-N).

  14. Trigger Algorithms for Alignment and Calibration at the CMS Experiment

    CERN Document Server

    Fernandez Perez Tomei, Thiago Rafael

    2017-01-01

    The data needs of the Alignment and Calibration group at the CMS experiment are reasonably different from those of the physics studies groups. Data are taken at CMS through the online event selection system, which is implemented in two steps. The Level-1 Trigger is implemented on custom-made electronics and dedicated to analyse the detector information at a coarse-grained scale, while the High Level Trigger (HLT) is implemented as a series of software algorithms, running in a computing farm, that have access to the full detector information. In this paper we describe the set of trigger algorithms that is deployed to address the needs of the Alignment and Calibration group, how it fits in the general infrastructure of the HLT, and how it feeds the Prompt Calibration Loop (PCL), allowing for a fast turnaround for the alignment and calibration constants.

  15. Optimizing graph algorithms on pregel-like systems

    KAUST Repository

    Salihoglu, Semih; Widom, Jennifer

    2014-01-01

    We study the problem of implementing graph algorithms efficiently on Pregel-like systems, which can be surprisingly challenging. Standard graph algorithms in this setting can incur unnecessary inefficiencies such as slow convergence or high

  16. AUTOMATIC RETINA EXUDATES SEGMENTATION WITHOUT A MANUALLY LABELLED TRAINING SET

    Energy Technology Data Exchange (ETDEWEB)

    Giancardo, Luca [ORNL; Meriaudeau, Fabrice [ORNL; Karnowski, Thomas Paul [ORNL; Li, Yaquin [University of Tennessee, Knoxville (UTK); Tobin Jr, Kenneth William [ORNL; Chaum, Edward [University of Tennessee, Knoxville (UTK)

    2011-01-01

    Diabetic macular edema (DME) is a common vision threatening complication of diabetic retinopathy which can be assessed by detecting exudates (a type of bright lesion) in fundus images. In this work, two new methods for the detection of exudates are presented which do not use a supervised learning step and therefore do not require ground-truthed lesion training sets which are time consuming to create, difficult to obtain, and prone to human error. We introduce a new dataset of fundus images from various ethnic groups and levels of DME which we have made publicly available. We evaluate our algorithm with this dataset and compare our results with two recent exudate segmentation algorithms. In all of our tests, our algorithms perform better or comparable with an order of magnitude reduction in computational time.

  17. Ensemble of hybrid genetic algorithm for two-dimensional phase unwrapping

    Science.gov (United States)

    Balakrishnan, D.; Quan, C.; Tay, C. J.

    2013-06-01

    Phase unwrapping is the final and trickiest step in any phase retrieval technique. Phase unwrapping by artificial intelligence methods (optimization algorithms) such as the hybrid genetic algorithm, reverse simulated annealing, particle swarm optimization, and minimum cost matching has shown better results than conventional phase unwrapping methods. In this paper, an ensemble of hybrid genetic algorithms with parallel populations is proposed to solve the branch-cut phase unwrapping problem. In a single-population hybrid genetic algorithm, the selection, cross-over and mutation operators are applied to obtain a new population in every generation. The parameters and the choice of operators affect the performance of the hybrid genetic algorithm. The ensemble of hybrid genetic algorithms makes it possible to use different parameter sets and different choices of operators simultaneously. Each population uses a different set of parameters, and the offspring of each population compete against the offspring of all other populations, which use different sets of parameters. The effectiveness of the proposed algorithm is demonstrated by phase unwrapping examples, and the advantages of the proposed method are discussed.

  18. Level set method for image segmentation based on moment competition

    Science.gov (United States)

    Min, Hai; Wang, Xiao-Feng; Huang, De-Shuang; Jin, Jing; Wang, Hong-Zhi; Li, Hai

    2015-05-01

    We propose a level set method for image segmentation which introduces the moment competition and weakly supervised information into the energy functional construction. Different from the region-based level set methods which use force competition, the moment competition is adopted to drive the contour evolution. Here, a so-called three-point labeling scheme is proposed to manually label three independent points (weakly supervised information) on the image. Then the intensity differences between the three points and the unlabeled pixels are used to construct the force arms for each image pixel. The corresponding force is generated from the global statistical information of a region-based method and weighted by the force arm. As a result, the moment can be constructed and incorporated into the energy functional to drive the evolving contour to approach the object boundary. In our method, the force arm can take full advantage of the three-point labeling scheme to constrain the moment competition. Additionally, the global statistical information and weakly supervised information are successfully integrated, which makes the proposed method more robust than traditional methods for initial contour placement and parameter setting. Experimental results with performance analysis also show the superiority of the proposed method on segmenting different types of complicated images, such as noisy images, three-phase images, images with intensity inhomogeneity, and texture images.

  19. Topological Hausdorff dimension and level sets of generic continuous functions on fractals

    International Nuclear Information System (INIS)

    Balka, Richárd; Buczolich, Zoltán; Elekes, Márton

    2012-01-01

    Highlights: ► We examine a new fractal dimension, the so-called topological Hausdorff dimension. ► The generic continuous function has a level set of maximal Hausdorff dimension. ► This maximal dimension is the topological Hausdorff dimension minus one. ► Homogeneity implies that “most” level sets are of this dimension. ► We calculate the various dimensions of the graph of the generic function. - Abstract: In an earlier paper we introduced a new concept of dimension for metric spaces, the so-called topological Hausdorff dimension. For a compact metric space K let dim_H K and dim_tH K denote its Hausdorff and topological Hausdorff dimension, respectively. We proved that this new dimension describes the Hausdorff dimension of the level sets of the generic continuous function on K, namely sup{dim_H f^{-1}(y) : y ∈ R} = dim_tH K − 1 for the generic f ∈ C(K), provided that K is not totally disconnected; otherwise every non-empty level set is a singleton. We also proved that if K is not totally disconnected and sufficiently homogeneous then dim_H f^{-1}(y) = dim_tH K − 1 for the generic f ∈ C(K) and the generic y ∈ f(K). The most important goal of this paper is to make these theorems more precise. As for the first result, we prove that the supremum is actually attained on the left hand side of the first equation above, and also show that there may only be a unique level set of maximal Hausdorff dimension. As for the second result, we characterize those compact metric spaces for which for the generic f ∈ C(K) and the generic y ∈ f(K) we have dim_H f^{-1}(y) = dim_tH K − 1. We also generalize a result of B. Kirchheim by showing that if K is self-similar then for the generic f ∈ C(K) for every y ∈ int f(K) we have dim_H f^{-1}(y) = dim_tH K − 1. Finally, we prove that the graph of the generic f ∈ C(K) has the same Hausdorff and topological Hausdorff dimension as K.

  20. The global Minmax k-means algorithm.

    Science.gov (United States)

    Wang, Xiaoyan; Bai, Yanping

    2016-01-01

    The global k-means algorithm is an incremental approach to clustering that dynamically adds one cluster center at a time through a deterministic global search procedure from suitable initial positions, and employs k-means to minimize the sum of the intra-cluster variances. However, the global k-means algorithm sometimes produces singleton clusters, and when the initial positions are bad, the k-means algorithm can easily end up in a poor local optimum. In this paper, we first modify the global k-means algorithm to eliminate singleton clusters, and then apply the MinMax k-means clustering error method to the global k-means algorithm to overcome the effect of bad initialization, yielding the proposed global MinMax k-means algorithm. The proposed clustering method is tested on several popular data sets and compared to the k-means algorithm, the global k-means algorithm and the MinMax k-means algorithm. The experimental results show that our proposed algorithm outperforms the other algorithms considered in the paper.
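
    The incremental backbone shared by the global and global MinMax variants can be sketched as follows; this is a minimal Python illustration using scikit-learn, without the singleton-elimination and MinMax weighting steps described above, and the data are synthetic.

        import numpy as np
        from sklearn.cluster import KMeans

        def global_kmeans(X, k_max, candidate_stride=1):
            """Incremental (global) k-means: grow from 1 to k_max centers, trying
            candidate starting positions for the new center and keeping the run
            with the lowest within-cluster sum of squares."""
            centers = X.mean(axis=0, keepdims=True)              # the 1-means solution
            solutions = {1: (centers, float(((X - centers) ** 2).sum()))}
            for k in range(2, k_max + 1):
                best = None
                for cand in X[::candidate_stride]:               # each point is a candidate new center
                    init = np.vstack([centers, cand])
                    km = KMeans(n_clusters=k, init=init, n_init=1).fit(X)
                    if best is None or km.inertia_ < best.inertia_:
                        best = km
                centers = best.cluster_centers_
                solutions[k] = (centers, best.inertia_)
            return solutions

        # Toy data: two well-separated blobs
        X = np.random.default_rng(0).normal(size=(200, 2))
        X[:100] += 4.0
        print({k: round(v[1], 1) for k, v in global_kmeans(X, 3, candidate_stride=10).items()})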

  1. Relationships between college settings and student alcohol use before, during and after events: a multi-level study.

    Science.gov (United States)

    Paschall, Mallie J; Saltz, Robert F

    2007-11-01

    We examined how alcohol risk is distributed based on college students' drinking before, during and after they go to certain settings. Students attending 14 California public universities (N=10,152) completed a web-based or mailed survey in the fall 2003 semester, which included questions about how many drinks they consumed before, during and after the last time they went to six settings/events: fraternity or sorority party, residence hall party, campus event (e.g. football game), off-campus party, bar/restaurant and outdoor setting (referent). Multi-level analyses were conducted in hierarchical linear modeling (HLM) to examine relationships between type of setting and level of alcohol use before, during and after going to the setting, and possible age and gender differences in these relationships. Drinking episodes (N=24,207) were level 1 units, students were level 2 units and colleges were level 3 units. The highest drinking levels were observed during all settings/events except campus events, with the highest number of drinks being consumed at off-campus parties, followed by residence hall and fraternity/sorority parties. The number of drinks consumed before a fraternity/sorority party was higher than other settings/events. Age group and gender differences in relationships between type of setting/event and 'before,''during' and 'after' drinking levels also were observed. For example, going to a bar/restaurant (relative to an outdoor setting) was positively associated with 'during' drinks among students of legal drinking age while no relationship was observed for underage students. Findings of this study indicate differences in the extent to which college settings are associated with student drinking levels before, during and after related events, and may have implications for intervention strategies targeting different types of settings.

  2. Individual- and Setting-Level Correlates of Secondary Traumatic Stress in Rape Crisis Center Staff.

    Science.gov (United States)

    Dworkin, Emily R; Sorell, Nicole R; Allen, Nicole E

    2016-02-01

    Secondary traumatic stress (STS) is an issue of significant concern among providers who work with survivors of sexual assault. Although STS has been studied in relation to individual-level characteristics of a variety of types of trauma responders, less research has focused specifically on rape crisis centers as environments that might convey risk of or protection from STS, and no research to our knowledge has modeled setting-level variation in correlates of STS. The current study uses a sample of 164 staff members representing 40 rape crisis centers across a single Midwestern state to investigate the staff member- and agency-level correlates of STS. Results suggest that correlates exist at both levels of analysis. Younger age and greater severity of sexual assault history were statistically significant individual-level predictors of increased STS. Greater frequency of supervision was more strongly related to secondary stress for non-advocates than for advocates. At the setting level, lower levels of supervision and higher client loads agency-wide accounted for unique variance in staff members' STS. These findings suggest that characteristics of both providers and their settings are important to consider when understanding their STS. © The Author(s) 2014.

  3. SparCLeS: dynamic l₁ sparse classifiers with level sets for robust beard/moustache detection and segmentation.

    Science.gov (United States)

    Le, T Hoang Ngan; Luu, Khoa; Savvides, Marios

    2013-08-01

    Robust facial hair detection and segmentation is a highly valued soft biometric attribute for carrying out forensic facial analysis. In this paper, we propose a novel and fully automatic system, called SparCLeS, for beard/moustache detection and segmentation in challenging facial images. SparCLeS uses the multiscale self-quotient (MSQ) algorithm to preprocess facial images and deal with illumination variation. Histogram of oriented gradients (HOG) features are extracted from the preprocessed images and a dynamic sparse classifier is built using these features to classify a facial region as either containing skin or facial hair. A level set based approach, which makes use of the advantages of both global and local information, is then used to segment the regions of a face containing facial hair. Experimental results demonstrate the effectiveness of our proposed system in detecting and segmenting facial hair regions in images drawn from three databases, i.e., the NIST Multiple Biometric Grand Challenge (MBGC) still face database, the NIST Color Facial Recognition Technology FERET database, and the Labeled Faces in the Wild (LFW) database.

  4. A Semisupervised Support Vector Machines Algorithm for BCI Systems

    Science.gov (United States)

    Qin, Jianzhao; Li, Yuanqing; Sun, Wei

    2007-01-01

    As an emerging technology, brain-computer interfaces (BCIs) bring us new communication interfaces which translate brain activities into control signals for devices like computers, robots, and so forth. In this study, we propose a semisupervised support vector machine (SVM) algorithm for brain-computer interface (BCI) systems, aiming at reducing the time-consuming training process. In this algorithm, we apply a semisupervised SVM for translating the features extracted from the electrical recordings of brain into control signals. This SVM classifier is built from a small labeled data set and a large unlabeled data set. Meanwhile, to reduce the time for training semisupervised SVM, we propose a batch-mode incremental learning method, which can also be easily applied to the online BCI systems. Additionally, it is suggested in many studies that common spatial pattern (CSP) is very effective in discriminating two different brain states. However, CSP needs a sufficient labeled data set. In order to overcome the drawback of CSP, we suggest a two-stage feature extraction method for the semisupervised learning algorithm. We apply our algorithm to two BCI experimental data sets. The offline data analysis results demonstrate the effectiveness of our algorithm. PMID:18368141
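
    The authors' batch-mode incremental scheme and CSP-based feature extraction are not reproduced here; the sketch below shows the generic self-training idea behind semisupervised SVMs — fit on the labeled set, then repeatedly absorb the most confidently classified unlabeled samples. All data, the confidence threshold and the kernel choice are illustrative.

        import numpy as np
        from sklearn.svm import SVC

        def self_training_svm(X_lab, y_lab, X_unlab, conf=0.9, max_rounds=10):
            """Iteratively add the most confidently classified unlabeled samples
            to the training set and refit the SVM (a simple stand-in for the
            batch-mode incremental scheme described in the paper)."""
            X_train, y_train = X_lab.copy(), y_lab.copy()
            pool = X_unlab.copy()
            clf = SVC(kernel="rbf", probability=True).fit(X_train, y_train)
            for _ in range(max_rounds):
                if len(pool) == 0:
                    break
                proba = clf.predict_proba(pool)
                confident = proba.max(axis=1) >= conf
                if not confident.any():
                    break
                X_train = np.vstack([X_train, pool[confident]])
                y_train = np.concatenate([y_train, clf.classes_[proba[confident].argmax(axis=1)]])
                pool = pool[~confident]
                clf = SVC(kernel="rbf", probability=True).fit(X_train, y_train)
            return clf

        # Toy usage with synthetic two-class "features"
        rng = np.random.default_rng(0)
        X_lab = np.vstack([rng.normal(0, 1, (10, 4)), rng.normal(3, 1, (10, 4))])
        y_lab = np.array([0] * 10 + [1] * 10)
        X_unlab = np.vstack([rng.normal(0, 1, (100, 4)), rng.normal(3, 1, (100, 4))])
        model = self_training_svm(X_lab, y_lab, X_unlab)
        print(model.predict(X_unlab[:5]))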

  5. Analysing the performance of dynamic multi-objective optimisation algorithms

    CSIR Research Space (South Africa)

    Helbig, M

    2013-06-01

    Full Text Available and the goal of the algorithm is to track a set of tradeoff solutions over time. Analysing the performance of a dynamic multi-objective optimisation algorithm (DMOA) is not a trivial task. For each environment (before a change occurs) the DMOA has to find a set...

  6. Genetic Optimization Algorithm for Metabolic Engineering Revisited

    Directory of Open Access Journals (Sweden)

    Tobias B. Alter

    2018-05-01

    Full Text Available To date, several independent methods and algorithms exist for exploiting constraint-based stoichiometric models to find metabolic engineering strategies that optimize microbial production performance. Optimization procedures based on metaheuristics facilitate a straightforward adaption and expansion of engineering objectives, as well as fitness functions, while being particularly suited for solving problems of high complexity. With the increasing interest in multi-scale models and a need for solving advanced engineering problems, we strive to advance genetic algorithms, which stand out due to their intuitive optimization principles and the proven usefulness in this field of research. A drawback of genetic algorithms is that premature convergence to sub-optimal solutions easily occurs if the optimization parameters are not adapted to the specific problem. Here, we conducted comprehensive parameter sensitivity analyses to study their impact on finding optimal strain designs. We further demonstrate the capability of genetic algorithms to simultaneously handle (i) multiple, non-linear engineering objectives; (ii) the identification of gene target-sets according to logical gene-protein-reaction associations; (iii) minimization of the number of network perturbations; and (iv) the insertion of non-native reactions, while employing genome-scale metabolic models.

  7. A Parametric k-Means Algorithm

    Science.gov (United States)

    Tarpey, Thaddeus

    2007-01-01

    Summary The k points that optimally represent a distribution (usually in terms of a squared error loss) are called the k principal points. This paper presents a computationally intensive method that automatically determines the principal points of a parametric distribution. Cluster means from the k-means algorithm are nonparametric estimators of principal points. A parametric k-means approach is introduced for estimating principal points by running the k-means algorithm on a very large simulated data set from a distribution whose parameters are estimated using maximum likelihood. Theoretical and simulation results are presented comparing the parametric k-means algorithm to the usual k-means algorithm and an example on determining sizes of gas masks is used to illustrate the parametric k-means algorithm. PMID:17917692
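
    A minimal sketch of the parametric k-means idea described above — estimate the distribution's parameters by maximum likelihood, simulate a large sample, then run k-means on the simulated data — assuming a univariate normal model for illustration:

        import numpy as np
        from sklearn.cluster import KMeans

        def parametric_kmeans(data, k, n_sim=200_000, random_state=0):
            """Estimate k principal points of a normal distribution fitted to `data`
            by running k-means on a large simulated sample (parametric k-means)."""
            mu, sigma = data.mean(), data.std(ddof=0)            # Gaussian MLEs
            sim = np.random.default_rng(random_state).normal(mu, sigma, n_sim)
            km = KMeans(n_clusters=k, n_init=10).fit(sim.reshape(-1, 1))
            return np.sort(km.cluster_centers_.ravel())

        # Example: the 2 principal points of N(0,1) are approximately +/- sqrt(2/pi) ~ +/- 0.80
        sample = np.random.default_rng(1).normal(0.0, 1.0, 500)
        print(parametric_kmeans(sample, k=2))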

  8. Automatic bounding estimation in modified NLMS algorithm

    International Nuclear Information System (INIS)

    Shahtalebi, K.; Doost-Hoseini, A.M.

    2002-01-01

    The Modified Normalized Least Mean Square algorithm, which is a sign form of NLMS based on set-membership (SM) theory in the class of optimal bounding ellipsoid (OBE) algorithms, requires a priori knowledge of error bounds, which is unknown in most applications. For a special but popular case of measurement noise, a simple algorithm has been proposed. With some simulation examples, the performance of the algorithm is compared with that of the Modified Normalized Least Mean Square algorithm.
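
    As background for the class of algorithms discussed, a set-membership NLMS update with a fixed, known error bound gamma can be sketched as follows; the bound-estimation scheme proposed in the paper is not reproduced, and the filter order, bound and test signal are illustrative.

        import numpy as np

        def sm_nlms(x, d, order=8, gamma=0.05, eps=1e-8):
            """Set-membership NLMS: update the filter only when |e| exceeds the
            error bound gamma, using step size mu = 1 - gamma/|e|."""
            w = np.zeros(order)
            y = np.zeros_like(d)
            for n in range(order - 1, len(x)):
                u = x[n - order + 1:n + 1][::-1]   # regressor: x[n], x[n-1], ..., x[n-order+1]
                y[n] = w @ u
                e = d[n] - y[n]
                if abs(e) > gamma:                 # adapt only when outside the error bound
                    mu = 1.0 - gamma / abs(e)
                    w += mu * e * u / (u @ u + eps)
            return w, y

        # Toy system identification: unknown FIR filter plus small measurement noise
        rng = np.random.default_rng(0)
        x = rng.normal(size=5000)
        h = np.array([0.6, -0.3, 0.1, 0.05, 0.0, 0.0, 0.0, 0.0])
        d = np.convolve(x, h, mode="full")[:len(x)] + rng.normal(0, 0.01, len(x))
        w_hat, _ = sm_nlms(x, d)
        print(np.round(w_hat, 3))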

  9. Case Study: A Bio-Inspired Control Algorithm for a Robotic Foot-Ankle Prosthesis Provides Adaptive Control of Level Walking and Stair Ascent

    Directory of Open Access Journals (Sweden)

    Uzma Tahir

    2018-04-01

    Full Text Available Powered ankle-foot prostheses assist users through plantarflexion during stance and dorsiflexion during swing. Provision of motor power permits faster preferred walking speeds than passive devices, but use of active motor power raises the issue of control. While several commercially available algorithms provide torque control for many intended activities and variations of terrain, control approaches typically exhibit no inherent adaptation. In contrast, muscles adapt instantaneously to changes in load without sensory feedback due to the intrinsic property that their stiffness changes with length and velocity. We previously developed a “winding filament” hypothesis (WFH) for muscle contraction that accounts for intrinsic muscle properties by incorporating the giant titin protein. The goals of this study were to develop a WFH-based control algorithm for a powered prosthesis and to test its robustness during level walking and stair ascent in a case study of two subjects with 4–5 years of experience using a powered prosthesis. In the WFH algorithm, ankle moments produced by virtual muscles are calculated based on muscle length and activation. Net ankle moment determines the current applied to the motor. Using this algorithm implemented in a BiOM T2 prosthesis, we tested subjects during level walking and stair ascent. During level walking at variable speeds, the WFH algorithm produced plantarflexion angles (range = −8 to −19°) and ankle moments (range = 1 to 1.5 Nm/kg) similar to those produced by the BiOM T2 stock controller and to people with no amputation. During stair ascent, the WFH algorithm produced plantarflexion angles (range = −15 to −19°) that were similar to persons with no amputation and were ~5 times larger on average at 80 steps/min than those produced by the stock controller. This case study provides proof-of-concept that, by emulating muscle properties, the WFH algorithm provides robust, adaptive control of level walking at

  10. The improved Apriori algorithm based on matrix pruning and weight analysis

    Science.gov (United States)

    Lang, Zhenhong

    2018-04-01

    Drawing on matrix compression and weight analysis, this paper proposes an improved Apriori algorithm based on matrix pruning and weight analysis. After the transactional database is scanned only once, the algorithm constructs a boolean transaction matrix. By counting the 1s in the rows and columns of the matrix, the infrequent item sets are pruned and a new candidate item set is formed. Then the item weights, the transaction weights and the weighted support of items are calculated, and the frequent item sets are obtained. The experimental results show that the improved Apriori algorithm not only reduces the number of repeated scans of the database, but also improves the efficiency of association rule mining.
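
    A minimal sketch of the boolean-matrix step described above: one database scan builds the matrix, column counts prune infrequent items, and a column-wise AND counts candidate supports without rescanning. The transactions, weighting step and threshold here are toy values, not the paper's.

        import numpy as np

        transactions = [
            {"bread", "milk"},
            {"bread", "diapers", "beer", "eggs"},
            {"milk", "diapers", "beer", "cola"},
            {"bread", "milk", "diapers", "beer"},
            {"bread", "milk", "diapers", "cola"},
        ]
        items = sorted(set().union(*transactions))

        # Single scan of the database: build the boolean transaction matrix.
        M = np.array([[item in t for item in items] for t in transactions], dtype=bool)

        min_support = 3
        col_counts = M.sum(axis=0)                      # number of 1s per item column
        keep = col_counts >= min_support                # prune infrequent 1-itemsets
        frequent_items = [i for i, k in zip(items, keep) if k]
        M = M[:, keep]                                  # pruned matrix used in later passes

        # Support of a candidate 2-itemset is an AND of its columns, with no database rescan.
        idx = {item: j for j, item in enumerate(frequent_items)}
        pair_support = (M[:, idx["bread"]] & M[:, idx["milk"]]).sum()
        print(frequent_items, pair_support)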

  11. Bi-objective branch-and-cut algorithms

    DEFF Research Database (Denmark)

    Gadegaard, Sune Lauth; Ehrgott, Matthias; Nielsen, Lars Relund

    Most real-world optimization problems are of a multi-objective nature, involving objectives which are conflicting and incomparable. Solving a multi-objective optimization problem requires a method which can generate the set of rational compromises between the objectives. In this paper, we propose...... are strengthened by cutting planes. In addition, we suggest an extension of the branching strategy "Pareto branching''. Extensive computational results obtained for the bi-objective single source capacitated facility location problem prove the effectiveness of the algorithms....... and compares it to an upper bound set. The implicit bound set based algorithm, on the other hand, fathoms branching nodes by generating a single point on the lower bound set for each local nadir point. We outline several approaches for fathoming branching nodes and we propose an updating scheme for the lower...

  12. A Novel Clustering Algorithm Inspired by Membrane Computing

    Directory of Open Access Journals (Sweden)

    Hong Peng

    2015-01-01

    Full Text Available P systems are a class of distributed parallel computing models. This paper presents a novel clustering algorithm, called the membrane clustering algorithm, which is inspired by the mechanism of a tissue-like P system with a loop structure of cells. The objects of the cells express the candidate centers of clusters and are evolved by the evolution rules. Based on the loop membrane structure, the communication rules realize a local neighborhood topology, which helps the coevolution of the objects and improves the diversity of objects in the system. The tissue-like P system can effectively search for the optimal partitioning with the help of its parallel computing advantage. The proposed clustering algorithm is evaluated on four artificial data sets and six real-life data sets. Experimental results show that the proposed clustering algorithm is superior or comparable to the k-means algorithm and several evolutionary clustering algorithms recently reported in the literature.

  13. Empirical validation of the S-Score algorithm in the analysis of gene expression data

    Directory of Open Access Journals (Sweden)

    Archer Kellie J

    2006-03-01

    Full Text Available Abstract Background Current methods of analyzing Affymetrix GeneChip® microarray data require the estimation of probe set expression summaries, followed by application of statistical tests to determine which genes are differentially expressed. The S-Score algorithm described by Zhang and colleagues is an alternative method that allows tests of hypotheses directly from probe level data. It is based on an error model in which the detected signal is proportional to the probe pair signal for highly expressed genes, but approaches a background level (rather than 0) for genes with low levels of expression. This model is used to calculate relative changes in probe pair intensities that convert probe signals into multiple measurements with equalized errors, which are summed over a probe set to form the S-Score. Assuming no expression differences between chips, the S-Score follows a standard normal distribution, allowing direct tests of hypotheses to be made. Using spike-in and dilution datasets, we validated the S-Score method against comparisons of gene expression utilizing the more recently developed methods RMA, dChip, and MAS5. Results The S-Score showed excellent sensitivity and specificity in detecting low-level gene expression changes. Rank ordering of S-Score values more accurately reflected known fold-change values compared to other algorithms. Conclusion The S-Score method, utilizing probe level data directly, offers significant advantages over comparisons using only probe set expression summaries.

  14. Maximum likelihood positioning algorithm for high-resolution PET scanners

    International Nuclear Information System (INIS)

    Gross-Weege, Nicolas; Schug, David; Hallen, Patrick; Schulz, Volkmar

    2016-01-01

    Purpose: In high-resolution positron emission tomography (PET), lightsharing elements are incorporated into typical detector stacks to read out scintillator arrays in which one scintillator element (crystal) is smaller than the size of the readout channel. In order to identify the hit crystal by means of the measured light distribution, a positioning algorithm is required. One commonly applied positioning algorithm uses the center of gravity (COG) of the measured light distribution. The COG algorithm is limited in spatial resolution by noise and intercrystal Compton scatter. The purpose of this work is to develop a positioning algorithm which overcomes this limitation. Methods: The authors present a maximum likelihood (ML) algorithm which compares a set of expected light distributions given by probability density functions (PDFs) with the measured light distribution. Instead of modeling the PDFs by using an analytical model, the PDFs of the proposed ML algorithm are generated assuming a single-gamma-interaction model from measured data. The algorithm was evaluated with a hot-rod phantom measurement acquired with the preclinical HYPERION II D PET scanner. In order to assess the performance with respect to sensitivity, energy resolution, and image quality, the ML algorithm was compared to a COG algorithm which calculates the COG from a restricted set of channels. The authors studied the energy resolution of the ML and the COG algorithm regarding incomplete light distributions (missing channel information caused by detector dead time). Furthermore, the authors investigated the effects of using a filter based on the likelihood values on sensitivity, energy resolution, and image quality. Results: A sensitivity gain of up to 19% was demonstrated in comparison to the COG algorithm for the selected operation parameters. Energy resolution and image quality were on a similar level for both algorithms. Additionally, the authors demonstrated that the performance of the ML
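
    The paper's PDFs are generated from measured data; as a rough, stand-alone illustration of the ML positioning step itself, the following sketch assigns an event to the crystal whose expected light distribution maximizes a Poisson log-likelihood over the readout channels. The templates and the event below are synthetic.

        import numpy as np

        def ml_position(measured, templates, eps=1e-9):
            """Pick the crystal index whose expected light distribution (template)
            maximizes the Poisson log-likelihood of the measured channel values.

            measured  : (n_channels,) measured light distribution of one event
            templates : (n_crystals, n_channels) expected distributions (the PDFs)
            """
            lam = templates + eps
            # Poisson log-likelihood up to terms independent of the crystal choice
            loglik = (measured * np.log(lam) - lam).sum(axis=1)
            return int(np.argmax(loglik))

        # Synthetic example: 4 crystals read out by 4 channels
        templates = np.array([
            [80.0, 15.0, 4.0, 1.0],
            [15.0, 80.0, 4.0, 1.0],
            [4.0, 1.0, 80.0, 15.0],
            [1.0, 4.0, 15.0, 80.0],
        ])
        event = np.array([70.0, 20.0, 5.0, 2.0])   # noisy measurement of crystal 0
        print(ml_position(event, templates))

    A likelihood-based filter of the kind mentioned above would simply reject events whose maximum log-likelihood falls below a chosen cutoff.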

  15. Vectorised Spreading Activation algorithm for centrality measurement

    Directory of Open Access Journals (Sweden)

    Alexander Troussov

    2011-01-01

    Full Text Available Spreading Activation is a family of graph-based algorithms widely used in areas such as information retrieval, epidemic models, and recommender systems. In this paper we introduce a novel Spreading Activation (SA) method that we call Vectorised Spreading Activation (VSA). VSA algorithms, like “traditional” SA algorithms, iteratively propagate the activation from the initially activated set of nodes to the other nodes in a network through outward links. The level of a node's activation can be used as a centrality measurement in accordance with the dynamic, model-based view of centrality that focuses on the outcomes for nodes in a network where something is flowing from node to node across the edges. Representing the activation by vectors allows the use of information about the various dimensionalities of the flow and the dynamics of the flow. In this capacity, VSA algorithms can model a multitude of complex multidimensional network flows. We present the results of numerical simulations on small synthetic social networks and multidimensional network models of folksonomies, which show that the results of VSA propagation are more sensitive to the positions of the initial seed and to the community structure of the network than the results produced by traditional SA algorithms. We tentatively conclude that VSA methods could be instrumental in developing scalable and computationally efficient algorithms that achieve synergy between the computation of centrality indexes and the detection of community structures in networks. Based on our preliminary results and on improvements made over previous studies, we foresee advances in the current state of the art of this family of algorithms and in their applications to centrality measurement.
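
    A rough sketch of the propagation step (not the authors' implementation; the graph, seed vectors and damping factor are illustrative): each node carries an activation vector that is repeatedly topped up by its own seed and by inflow from its neighbours, and the vector norms can serve as centrality scores.

        import numpy as np

        def vectorised_spreading_activation(adj, seed_vectors, decay=0.85, n_iter=20):
            """Propagate vector-valued activation over a directed graph.

            adj          : (n, n) adjacency matrix, adj[i, j] = weight of edge i -> j
            seed_vectors : (n, d) initial activation vectors (zero rows for non-seeds)
            Returns the final (n, d) activation and a scalar centrality per node.
            """
            out_deg = adj.sum(axis=1, keepdims=True)
            out_deg[out_deg == 0] = 1.0
            T = adj / out_deg                              # row-normalised transfer matrix
            act = seed_vectors.astype(float).copy()
            for _ in range(n_iter):
                act = seed_vectors + decay * (T.T @ act)   # inflow from neighbours plus the seed
            return act, np.linalg.norm(act, axis=1)

        # Tiny example: 4 nodes, 2-dimensional activation seeded at nodes 0 and 3
        adj = np.array([[0, 1, 1, 0],
                        [0, 0, 1, 0],
                        [0, 0, 0, 1],
                        [1, 0, 0, 0]], dtype=float)
        seeds = np.zeros((4, 2))
        seeds[0] = [1.0, 0.0]
        seeds[3] = [0.0, 1.0]
        activation, centrality = vectorised_spreading_activation(adj, seeds)
        print(np.round(centrality, 3))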

  16. Predictive Control of Hydronic Floor Heating Systems using Neural Networks and Genetic Algorithms

    DEFF Research Database (Denmark)

    Vinther, Kasper; Green, Torben; Østergaard, Søren

    2017-01-01

    This paper presents the use of a neural network and a micro genetic algorithm to optimize future set-points in existing hydronic floor heating systems for improved energy efficiency. The neural network can be trained to predict the impact of changes in set-points on future room temperatures. Additio...... space is not guaranteed. Evaluation of the performance of multiple neural networks is performed, using different levels of information, and optimization results are presented on a detailed house simulation model....

  17. Choosing algorithms for TB screening: a modelling study to compare yield, predictive value and diagnostic burden.

    Science.gov (United States)

    Van't Hoog, Anna H; Onozaki, Ikushi; Lonnroth, Knut

    2014-10-19

    To inform the choice of an appropriate screening and diagnostic algorithm for tuberculosis (TB) screening initiatives in different epidemiological settings, we compare algorithms composed of currently available methods. For twelve algorithms composed of screening for symptoms (prolonged cough or any TB symptom) and/or chest radiography (CXR) abnormalities, and either sputum-smear microscopy (SSM) or Xpert MTB/RIF (XP) as the confirmatory test, we model algorithm outcomes and summarize the yield, number needed to screen (NNS) and positive predictive value (PPV) for different levels of TB prevalence. Screening for prolonged cough has low yield, 22% if confirmatory testing is by SSM and 32% if XP, and a high NNS, exceeding 1000 if TB prevalence is ≤0.5%. Due to low specificity, the PPV of screening for any TB symptom followed by SSM is less than 50%, even if TB prevalence is 2%. CXR screening for TB abnormalities followed by XP has the highest case detection (87%) and lowest NNS, but is resource intensive. CXR as a second screen for symptom screen positives improves efficiency. The ideal algorithm does not exist. The choice will be setting-specific, for which this study provides guidance. Generally, an algorithm composed of CXR screening followed by confirmatory testing with XP can achieve the lowest NNS and highest PPV, and is the least amenable to setting-specific variation. However, resource requirements for tests and equipment may be prohibitive in some settings and a reason to opt for symptom screening and SSM. To better inform disease control programs, we need empirical data to confirm the modeled yield, cost-effectiveness studies, transmission models and a better screening test.
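
    As a rough illustration of the quantities this modelling study compares, the sketch below chains a screening step and a confirmatory test and reports yield, number needed to screen (NNS), and positive predictive value (PPV) for a given prevalence. The sensitivities and specificities are placeholder values, not the paper's inputs.

```python
# Hedged sketch of the bookkeeping behind two-stage screening outcomes.
def screening_outcomes(prevalence, screen_sens, screen_spec,
                       conf_sens, conf_spec, population=100_000):
    tb = population * prevalence
    no_tb = population - tb
    # stage 1: screening
    screen_pos_tb = tb * screen_sens
    screen_pos_no_tb = no_tb * (1 - screen_spec)
    # stage 2: confirmatory test applied to screen positives
    true_pos = screen_pos_tb * conf_sens
    false_pos = screen_pos_no_tb * (1 - conf_spec)
    yield_ = true_pos / tb                  # fraction of prevalent cases detected
    nns = population / true_pos             # people screened per case detected
    ppv = true_pos / (true_pos + false_pos)
    return yield_, nns, ppv

# Example: 0.5% prevalence, symptom screen followed by a confirmatory test
# (all accuracy figures below are illustrative placeholders).
print(screening_outcomes(0.005, 0.77, 0.68, 0.88, 0.99))
```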

  18. Set-Oriented Mining for Association Rules in Relational Databases

    NARCIS (Netherlands)

    Houtsma, M.A.W.; Houtsma, M.A.W.; Swami, A.

    1995-01-01

    We describe set-oriented algorithms for mining association rules. Such algorithms imply performing multiple joins and may appear to be inherently less efficient than special-purpose algorithms. We develop new algorithms that can be expressed as SQL queries, and discuss the optimization of these

  19. Hierarchical sets: analyzing pangenome structure through scalable set visualizations

    Science.gov (United States)

    2017-01-01

    Abstract Motivation: The increase in available microbial genome sequences has resulted in an increase in the size of the pangenomes being analyzed. Current pangenome visualizations are not intended for the pangenome sizes possible today, and new approaches are necessary in order to convert the increase in available information into an increase in knowledge. As the pangenome data structure is essentially a collection of sets, we explore the potential for scalable set visualization as a tool for pangenome analysis. Results: We present a new hierarchical clustering algorithm based on set arithmetic that optimizes the intersection sizes along the branches. The intersection and union sizes along the hierarchy are visualized using a composite dendrogram and icicle plot, which, in a pangenome context, shows the evolution of pangenome and core size along the evolutionary hierarchy. Outlying elements, i.e. elements whose presence patterns do not correspond with the hierarchy, can be visualized using hierarchical edge bundles. When applied to pangenome data, this plot shows putative horizontal gene transfers between the genomes and can highlight relationships between genomes that are not represented by the hierarchy. We illustrate the utility of hierarchical sets by applying it to a pangenome based on 113 Escherichia and Shigella genomes and find it provides a powerful addition to pangenome analysis. Availability and Implementation: The described clustering algorithm and visualizations are implemented in the hierarchicalSets R package available from CRAN (https://cran.r-project.org/web/packages/hierarchicalSets). Contact: thomasp85@gmail.com Supplementary information: Supplementary data are available at Bioinformatics online. PMID:28130242

  20. Two-pass imputation algorithm for missing value estimation in gene expression time series.

    Science.gov (United States)

    Tsiporkova, Elena; Boeva, Veselka

    2007-10-01

    Gene expression microarray experiments frequently generate datasets with multiple values missing. However, most of the analysis, mining, and classification methods for gene expression data require a complete matrix of gene array values. Therefore, the accurate estimation of missing values in such datasets has been recognized as an important issue, and several imputation algorithms have already been proposed to the biological community. Most of these approaches, however, are not particularly suitable for time series expression profiles. In view of this, we propose a novel imputation algorithm, which is specially suited for the estimation of missing values in gene expression time series data. The algorithm utilizes Dynamic Time Warping (DTW) distance in order to measure the similarity between time expression profiles, and subsequently selects for each gene expression profile with missing values a dedicated set of candidate profiles for estimation. Three different DTW-based imputation (DTWimpute) algorithms have been considered: position-wise, neighborhood-wise, and two-pass imputation. These have initially been prototyped in Perl, and their accuracy has been evaluated on yeast expression time series data using several different parameter settings. The experiments have shown that the two-pass algorithm consistently outperforms, in particular for datasets with a higher level of missing entries, the neighborhood-wise and the position-wise algorithms. The performance of the two-pass DTWimpute algorithm has further been benchmarked against the weighted K-Nearest Neighbors algorithm, which is widely used in the biological community; the former algorithm has appeared superior to the latter one. Motivated by these findings, indicating clearly the added value of the DTW techniques for missing value estimation in time series data, we have built an optimized C++ implementation of the two-pass DTWimpute algorithm. The software also provides for a choice between three different
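
    A plain dynamic-programming version of the DTW distance used above to rank candidate profiles is sketched below; it is a generic O(nm) implementation, not the authors' optimized Perl or C++ code.

```python
# Hedged sketch of the dynamic time warping (DTW) distance between two profiles.
import math

def dtw_distance(a, b):
    n, m = len(a), len(b)
    cost = [[math.inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]

# Candidate profiles with the smallest DTW distance to an incomplete profile
# (computed over its observed positions) would then drive the imputation.
print(dtw_distance([0.1, 0.5, 0.9, 0.4], [0.2, 0.4, 1.0, 0.5]))
```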

  1. Efficient motif finding algorithms for large-alphabet inputs

    Directory of Open Access Journals (Sweden)

    Pavlovic Vladimir

    2010-10-01

    Full Text Available Abstract Background We consider the problem of identifying motifs, recurring or conserved patterns, in biological sequence data sets. To solve this task, we present a new deterministic algorithm for finding patterns that are embedded as exact or inexact instances in all or most of the input strings. Results The proposed algorithm (1) improves search efficiency compared to existing algorithms, and (2) scales well with the size of the alphabet. On a synthetic planted DNA motif finding problem, our algorithm is over 10× more efficient than MITRA, PMSPrune, and RISOTTO for long motifs. Improvements are orders of magnitude higher in the same setting with large alphabets. On benchmark TF-binding site problems (FNP, CRP, LexA), we observed a reduction in running time of over 12×, with high detection accuracy. The algorithm was also successful in rapidly identifying protein motifs in Lipocalin, Zinc metallopeptidase, and supersecondary structure motifs for the Cadherin and Immunoglobin families. Conclusions Our algorithm reduces the computational complexity of current motif finding algorithms and demonstrates strong running-time improvements over existing exact algorithms, especially in the important and difficult case of large-alphabet sequences.

  2. Set Theory Correlation Free Algorithm for HRRR Target Tracking

    National Research Council Canada - National Science Library

    Blasch, Erik

    1999-01-01

    .... Recently, a few fusionists, including Mahler [1] and Mori [2], have been using a set theory approach for a unified data fusion theory, which is a correlation-free paradigm [3]. This paper uses the set theory approach...

  3. Wu’s algorithm and its possible application in cryptanalysis

    CSIR Research Space (South Africa)

    Grobler, TL

    2012-01-01

    Full Text Available now classic book Differential Algebra: Ritt (1950). Later it was independently rediscovered and improved by the Chinese mathematician Wu Wen-Tsün in the late seventies (Wen-Tsün, 1978, 1984, 1986). Other resources describing Wu's algorithm... set that is constructed from a polynomial set so that the characteristic set shares some of the properties of the original set (Gallo and Mishra, 1990b; Ritt, 1950). The algorithm has found wide usage in mechanical theorem proving (Wen-Tsün, 1986...

  4. Subcubic Control Flow Analysis Algorithms

    DEFF Research Database (Denmark)

    Midtgaard, Jan; Van Horn, David

    We give the first direct subcubic algorithm for performing control flow analysis of higher-order functional programs. Despite the long held belief that inclusion-based flow analysis could not surpass the ``cubic bottleneck, '' we apply known set compression techniques to obtain an algorithm...... that runs in time O(n^3/log n) on a unit cost random-access memory model machine. Moreover, we refine the initial flow analysis into two more precise analyses incorporating notions of reachability. We give subcubic algorithms for these more precise analyses and relate them to an existing analysis from...

  5. Final Report: Sampling-Based Algorithms for Estimating Structure in Big Data.

    Energy Technology Data Exchange (ETDEWEB)

    Matulef, Kevin Michael [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-02-01

    The purpose of this project was to develop sampling-based algorithms to discover hidden structure in massive data sets. Inferring structure in large data sets is an increasingly common task in many critical national security applications. These data sets come from myriad sources, such as network traffic, sensor data, and data generated by large-scale simulations. They are often so large that traditional data mining techniques are time consuming or even infeasible. To address this problem, we focus on a class of algorithms that do not compute an exact answer, but instead use sampling to compute an approximate answer using fewer resources. The particular class of algorithms that we focus on are streaming algorithms, so called because they are designed to handle high-throughput streams of data. Streaming algorithms have only a small amount of working storage - much less than the size of the full data stream - so they must necessarily use sampling to approximate the correct answer. We present two results: * A streaming algorithm called HyperHeadTail, which estimates the degree distribution of a graph (i.e., the distribution of the number of connections for each node in a network). The degree distribution is a fundamental graph property, but prior work on estimating the degree distribution in a streaming setting was impractical for many real-world applications. We improve upon prior work by developing an algorithm that can handle streams with repeated edges, and graph structures that evolve over time. * An algorithm for the task of maintaining a weighted subsample of items in a stream, when the items must be sampled according to their weight, and the weights are dynamically changing. To our knowledge, this is the first such algorithm designed for dynamically evolving weights. We expect it may be useful as a building block for other streaming algorithms on dynamic data sets.

  6. An Enhanced Genetic Algorithm for the Generalized Traveling Salesman Problem

    Directory of Open Access Journals (Sweden)

    H. Jafarzadeh

    2017-12-01

    Full Text Available The generalized traveling salesman problem (GTSP) deals with finding the minimum-cost tour in a clustered set of cities. In this problem, the traveler is interested in finding the best path that goes through all clusters. As this problem is NP-hard, implementing a metaheuristic algorithm to solve large-scale instances is inevitable. The performance of these algorithms can be substantially improved by combining them with other heuristic algorithms. In this study, a search method is developed that considerably improves solution quality and computation time in comparison with the genetic algorithm. In the proposed algorithm, the genetic algorithm is combined with the Nearest Neighbor Search (NNS) and a heuristic mutation operator is applied. According to the experimental results on a set of standard test problems with symmetric distances, the proposed algorithm finds the best solutions in most cases with the least computational time. The proposed algorithm is highly competitive with previously published algorithms in both solution quality and running time.

  7. Developing an Enhanced Lightning Jump Algorithm for Operational Use

    Science.gov (United States)

    Schultz, Christopher J.; Petersen, Walter A.; Carey, Lawrence D.

    2009-01-01

    Overall Goals: 1. Build on the lightning jump framework set through previous studies. 2. Understand what typically occurs in nonsevere convection with respect to increases in lightning. 3. Ultimately develop a lightning jump algorithm for use on the Geostationary Lightning Mapper (GLM). 4. Lightning jump algorithm configurations were developed (2(sigma), 3(sigma), Threshold 10 and Threshold 8). 5. Algorithms were tested on a population of 47 nonsevere and 38 severe thunderstorms. Results indicate that the 2(sigma) algorithm performed best over the entire thunderstorm sample set, with a POD of 87%, a FAR of 35%, a CSI of 59%, and an HSS of 75%.
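
    A toy sketch of the 2-sigma idea mentioned above: flag a lightning jump when the most recent increase in total flash rate exceeds twice the standard deviation of the recent rate changes. The window length and the flash-rate series are illustrative, not the study's configuration.

```python
# Hedged sketch of a 2-sigma style lightning jump test.
import statistics

def lightning_jump_2sigma(flash_rates, window=5):
    """flash_rates: total flash rate per analysis interval (e.g., per 2 minutes)."""
    if len(flash_rates) < window + 2:
        return False
    changes = [b - a for a, b in zip(flash_rates, flash_rates[1:])]
    history, current = changes[-(window + 1):-1], changes[-1]
    threshold = 2.0 * statistics.pstdev(history)
    return current > threshold

print(lightning_jump_2sigma([2, 3, 3, 4, 5, 4, 6, 14]))  # sudden increase -> True
```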

  8. Parallel Algorithm Solves Coupled Differential Equations

    Science.gov (United States)

    Hayashi, A.

    1987-01-01

    Numerical methods adapted to concurrent processing. Algorithm solves set of coupled partial differential equations by numerical integration. Adapted to run on hypercube computer, algorithm separates problem into smaller problems solved concurrently. Increase in computing speed with concurrent processing over that achievable with conventional sequential processing appreciable, especially for large problems.

  9. BIND – An algorithm for loss-less compression of nucleotide ...

    Indian Academy of Sciences (India)

    constituting the FNA data set. Supplementary table 2. Original and compressed file sizes (obtained using various compression algorithms) for 2679 files constituting the FFN data set. Supplementary table 3. Original and compressed file sizes (obtained using various compression algorithms) for 25 files constituting the ...

  10. A Modified MinMax k-Means Algorithm Based on PSO.

    Science.gov (United States)

    Wang, Xiaoyan; Bai, Yanping

    The MinMax k-means algorithm is widely used to tackle the effect of bad initialization by minimizing the maximum intraclustering errors. Two parameters, the exponent parameter and the memory parameter, are involved in the execution process. Since different parameters yield different clustering errors, it is crucial to choose appropriate parameters. In the original algorithm, a practical framework is given. Such a framework extends MinMax k-means to automatically adapt the exponent parameter to the data set. It has been believed that if the maximum exponent parameter has been set, the procedure can reach the lowest intraclustering errors. However, our experiments show that this is not always correct. In this paper, we modified the MinMax k-means algorithm with PSO to determine the parameter values that enable the algorithm to attain the lowest clustering errors. The proposed clustering method is tested on several popular data sets under different initial situations and is compared to the k-means algorithm and the original MinMax k-means algorithm. The experimental results indicate that our proposed algorithm can reach the lowest clustering errors automatically.

  11. Deconvolution algorithms applied in ultrasonics

    International Nuclear Information System (INIS)

    Perrot, P.

    1993-12-01

    In a complete system for the acquisition and processing of ultrasonic signals, it is often necessary at some stage to use processing tools to remove the influence of the different elements of that system. By that means, the final quality of the signals in terms of resolution is improved. There are two main characteristics of ultrasonic signals which make this task difficult. Firstly, the signals generated by transducers are very often non-minimum phase; classical deconvolution algorithms are unable to deal with this characteristic. Secondly, depending on the medium, the shape of the propagating pulse evolves; the spatial invariance assumption often used in classical deconvolution algorithms is therefore rarely valid. Many classical algorithms, parametric and non-parametric, have been investigated: Wiener-type methods, adaptive predictive techniques, the Oldenburg technique in the frequency domain, and minimum variance deconvolution. All the algorithms were first tested on simulated data. One specific experimental set-up has also been analysed, and simulated and real data have been produced. This set-up demonstrated the benefit of applying deconvolution in terms of the achieved resolution. (author). 32 figs., 29 refs

  12. Quadtree of TIN: a new algorithm of dynamic LOD

    Science.gov (United States)

    Zhang, Junfeng; Fei, Lifan; Chen, Zhen

    2009-10-01

    Currently, real-time visualization of large-scale digital elevation models mainly employs the regular GRID structure based on quadtrees and triangle simplification methods based on the triangulated irregular network (TIN). Compared with GRID, TIN is a more refined means of representing the terrain surface. However, the data structure of the TIN model is complex, and it is difficult to realize view-dependent level-of-detail (LOD) representation quickly. GRID is a simple method to realize terrain LOD, but it produces a higher triangle count. A new algorithm, which takes full advantage of the merits of the two methods, is presented in this paper. This algorithm combines TIN with a quadtree structure to realize view-dependent LOD control over irregular sampling point sets, and preserves detail according to the viewpoint distance and the geometric error of the terrain. Experiments indicate that this approach can generate an efficient quadtree triangulation hierarchy over any irregular sampling point set and achieve dynamic, visual multi-resolution rendering of large-scale terrain in real time.

  13. Modification of transmission dose algorithm for irregularly shaped radiation field and tissue deficit

    Energy Technology Data Exchange (ETDEWEB)

    Yun, Hyong Geon; Shin, Kyo Chul [Dankook Univ., College of Medicine, Seoul (Korea, Republic of); Huh, Soon Nyung; Woo, Hong Gyun; Ha, Sung Whan [Seoul National Univ., College of Medicine, Seoul (Korea, Republic of); Lee, Hyoung Koo [The Catholic Univ., College of Medicine, Seoul (Korea, Republic of)

    2002-07-01

    The algorithm for estimation of transmission dose was modified for use in partially blocked radiation fields and in cases with tissue deficit. The beam data were measured with a flat solid phantom in various conditions of beam block, and an algorithm for correction of the transmission dose in cases of partially blocked radiation fields was developed from the measured data. The algorithm was tested in some clinical settings with irregularly shaped fields. Also, another algorithm for correction of the transmission dose for tissue deficit was developed by physical reasoning. This algorithm was tested in experimental settings with irregular contours mimicking breast cancer patients by using multiple sheets of solid phantoms. The algorithm for correction of beam block could accurately reflect the effect of beam block, with errors within ±1.0%, both with square fields and with irregularly shaped fields. The correction algorithm for tissue deficit could accurately reflect the effect of tissue deficit, with errors within ±1.0% in most situations and within ±3.0% in experimental settings with irregular contours mimicking the breast cancer treatment set-up. The developed algorithms could accurately estimate the transmission dose in most radiation treatment settings, including irregularly shaped fields and irregularly shaped body contours with tissue deficit, in transmission dosimetry.

  14. Modification of transmission dose algorithm for irregularly shaped radiation field and tissue deficit

    International Nuclear Information System (INIS)

    Yun, Hyong Geon; Shin, Kyo Chul; Huh, Soon Nyung; Woo, Hong Gyun; Ha, Sung Whan; Lee, Hyoung Koo

    2002-01-01

    The algorithm for estimation of transmission dose was modified for use in partially blocked radiation fields and in cases with tissue deficit. The beam data were measured with a flat solid phantom in various conditions of beam block, and an algorithm for correction of the transmission dose in cases of partially blocked radiation fields was developed from the measured data. The algorithm was tested in some clinical settings with irregularly shaped fields. Also, another algorithm for correction of the transmission dose for tissue deficit was developed by physical reasoning. This algorithm was tested in experimental settings with irregular contours mimicking breast cancer patients by using multiple sheets of solid phantoms. The algorithm for correction of beam block could accurately reflect the effect of beam block, with errors within ±1.0%, both with square fields and with irregularly shaped fields. The correction algorithm for tissue deficit could accurately reflect the effect of tissue deficit, with errors within ±1.0% in most situations and within ±3.0% in experimental settings with irregular contours mimicking the breast cancer treatment set-up. The developed algorithms could accurately estimate the transmission dose in most radiation treatment settings, including irregularly shaped fields and irregularly shaped body contours with tissue deficit, in transmission dosimetry

  15. Joint level-set and spatio-temporal motion detection for cell segmentation.

    Science.gov (United States)

    Boukari, Fatima; Makrogiannis, Sokratis

    2016-08-10

    Cell segmentation is a critical step for quantification and monitoring of cell cycle progression, cell migration, and growth control to investigate cellular immune response, embryonic development, tumorigenesis, and drug effects on live cells in time-lapse microscopy images. In this study, we propose a joint spatio-temporal diffusion and region-based level-set optimization approach for moving cell segmentation. Moving regions are initially detected in each set of three consecutive sequence images by numerically solving a system of coupled spatio-temporal partial differential equations. In order to standardize intensities of each frame, we apply a histogram transformation approach to match the pixel intensities of each processed frame with an intensity distribution model learned from all frames of the sequence during the training stage. After the spatio-temporal diffusion stage is completed, we compute the edge map by nonparametric density estimation using Parzen kernels. This process is followed by watershed-based segmentation and moving cell detection. We use this result as an initial level-set function to evolve the cell boundaries, refine the delineation, and optimize the final segmentation result. We applied this method to several datasets of fluorescence microscopy images with varying levels of difficulty with respect to cell density, resolution, contrast, and signal-to-noise ratio. We compared the results with those produced by Chan and Vese segmentation, a temporally linked level-set technique, and nonlinear diffusion-based segmentation. We validated all segmentation techniques against reference masks provided by the international Cell Tracking Challenge consortium. The proposed approach delineated cells with an average Dice similarity coefficient of 89 % over a variety of simulated and real fluorescent image sequences. It yielded average improvements of 11 % in segmentation accuracy compared to both strictly spatial and temporally linked Chan

  16. Ultrafuzziness Optimization Based on Type II Fuzzy Sets for Image Thresholding

    Directory of Open Access Journals (Sweden)

    Hudan Studiawan

    2010-11-01

    Full Text Available Image thresholding is one of the processing techniques used to provide a high-quality preprocessed image. Image vagueness and bad illumination are common obstacles that yield poor thresholding output. By treating the image as fuzzy sets, several different fuzzy thresholding techniques have been proposed to remove these obstacles during threshold selection. In this paper, we propose an image thresholding algorithm based on ultrafuzziness optimization, which decreases the uncertainty that remains in the fuzzy system when common fuzzy sets are used, by employing type II fuzzy sets. Optimization was conducted by measuring the ultrafuzziness of the background and object fuzzy sets separately. Experimental results demonstrated that the proposed image thresholding method performs well for images with high vagueness, low contrast, and grayscale ambiguity.

  17. Towards heterogeneous robot team path planning: acquisition of multiple routes with a modified spline-based algorithm

    Directory of Open Access Journals (Sweden)

    Lavrenov Roman

    2017-01-01

    Full Text Available Our research focuses on the operation of a heterogeneous robotic group that carries out point-to-point navigation in a GPS-denied dynamic environment, applying a combined local and global planning approach. In this paper, we introduce a homotopy-based high-level planner, which uses a modified spline-based path-planning algorithm. The algorithm utilizes a Voronoi graph for global planning and a set of optimization criteria for local improvements of selected paths. The simulation was implemented in the MATLAB environment.

  18. Do country-specific preference weights matter in the choice of mapping algorithms? The case of mapping the Diabetes-39 onto eight country-specific EQ-5D-5L value sets.

    Science.gov (United States)

    Lamu, Admassu N; Chen, Gang; Gamst-Klaussen, Thor; Olsen, Jan Abel

    2018-03-22

    To develop mapping algorithms that transform Diabetes-39 (D-39) scores onto EQ-5D-5L utility values for each of eight recently published country-specific EQ-5D-5L value sets, and to compare mapping functions across the EQ-5D-5L value sets. Data include 924 individuals with self-reported diabetes from six countries. The D-39 dimensions, age and gender were used as potential predictors for EQ-5D-5L utilities, which were scored using value sets from eight countries (England, the Netherlands, Spain, Canada, Uruguay, China, Japan and Korea). Ordinary least squares, generalised linear models, beta binomial regression, fractional regression, MM estimation and censored least absolute deviation were used to estimate the mapping algorithms. The optimal algorithm for each country-specific value set was primarily selected based on normalised root mean square error (NRMSE), normalised mean absolute error (NMAE) and adjusted R². Cross-validation with a fivefold approach was conducted to test the generalizability of each model. The fractional regression model with loglog as the link function consistently performed best for all country-specific value sets. For instance, the NRMSE (0.1282) and NMAE (0.0914) were the lowest, while the adjusted R² was the highest (52.5%), when the English value set was considered. Among the D-39 dimensions, energy and mobility was the only one that was consistently significant in all models. The D-39 can be mapped onto EQ-5D-5L utilities with good predictive accuracy. The fractional regression model, which is appropriate for handling bounded outcomes, outperformed other candidate methods for all country-specific value sets. However, the regression coefficients differed, reflecting preference heterogeneity across countries.

  19. Automatic Algorithm Selection for Complex Simulation Problems

    CERN Document Server

    Ewald, Roland

    2012-01-01

    Selecting the most suitable simulation algorithm for a given task is often difficult. This is due to intricate interactions between model features, implementation details, and runtime environment, which may strongly affect the overall performance. An automated selection of simulation algorithms supports users in setting up simulation experiments without demanding expert knowledge on simulation. Roland Ewald analyzes and discusses existing approaches to solve the algorithm selection problem in the context of simulation. He introduces a framework for automatic simulation algorithm selection and

  20. Multi-level decision making models, methods and applications

    CERN Document Server

    Zhang, Guangquan; Gao, Ya

    2015-01-01

    This monograph presents new developments in multi-level decision-making theory, technique and method in both modeling and solution issues. It especially presents how a decision support system can support managers in reaching a solution to a multi-level decision problem in practice. This monograph combines decision theories, methods, algorithms and applications effectively. It discusses in detail the models and solution algorithms of each issue of bi-level and tri-level decision-making, such as multi-leaders, multi-followers, multi-objectives, rule-set-based, and fuzzy parameters. Potential readers include organizational managers and practicing professionals, who can use the methods and software provided to solve their real decision problems; PhD students and researchers in the areas of bi-level and multi-level decision-making and decision support systems; students at an advanced undergraduate, master’s level in information systems, business administration, or the application of computer science.  

  1. An improved multi-domain convolution tracking algorithm

    Science.gov (United States)

    Sun, Xin; Wang, Haiying; Zeng, Yingsen

    2018-04-01

    Along with the wide application of deep learning in the field of computer vision, deep learning has become a mainstream direction in the field of object tracking. The tracking algorithm in this paper is based on an improved multi-domain convolutional neural network, which is pre-trained on the VOT video set using a multi-domain training strategy. During online tracking, the network evaluates candidate targets sampled with a Gaussian distribution from the vicinity of the target predicted in the previous frame, and the candidate with the highest score is taken as the predicted target of the current frame. A bounding box regression model is introduced to bring the predicted target closer to the ground-truth target box of the test set. A grouping-update strategy is used to extract and select useful update samples in each frame, which effectively prevents overfitting and adapts to changes in both the target and the environment. To improve the speed of the algorithm while maintaining performance, the number of candidate targets is adjusted dynamically with a self-adaptive parameter strategy. Finally, the algorithm is tested on the OTB set and compared with other high-performance tracking algorithms, and the success-rate and accuracy plots are drawn, which illustrate the outstanding performance of the tracking algorithm in this paper.

  2. Quantum approximate optimization algorithm for MaxCut: A fermionic view

    Science.gov (United States)

    Wang, Zhihui; Hadfield, Stuart; Jiang, Zhang; Rieffel, Eleanor G.

    2018-02-01

    Farhi et al. recently proposed a class of quantum algorithms, the quantum approximate optimization algorithm (QAOA), for approximately solving combinatorial optimization problems (E. Farhi et al., arXiv:1411.4028; arXiv:1412.6062; arXiv:1602.07674). A level-p QAOA circuit consists of p steps; in each step a classical Hamiltonian, derived from the cost function, is applied followed by a mixing Hamiltonian. The 2p times for which these two Hamiltonians are applied are the parameters of the algorithm, which are to be optimized classically for the best performance. As p increases, parameter optimization becomes inefficient due to the curse of dimensionality. The success of the QAOA approach will depend, in part, on finding effective parameter-setting strategies. Here we analytically and numerically study parameter setting for the QAOA applied to MaxCut. For the level-1 QAOA, we derive an analytical expression for a general graph. In principle, expressions for higher p could be derived, but the number of terms quickly becomes prohibitive. For a special case of MaxCut, the "ring of disagrees," or the one-dimensional antiferromagnetic ring, we provide an analysis for an arbitrarily high level. Using a fermionic representation, the evolution of the system under the QAOA translates into quantum control of an ensemble of independent spins. This treatment enables us to obtain analytical expressions for the performance of the QAOA for any p. It also greatly simplifies the numerical search for the optimal values of the parameters. By exploring symmetries, we identify a lower-dimensional submanifold of interest; the search effort can be accordingly reduced. This analysis also explains an observed symmetry in the optimal parameter values. Further, we numerically investigate the parameter landscape and show that it is a simple one in the sense of having no local optima.

  3. Algorithmic Self

    DEFF Research Database (Denmark)

    Markham, Annette

    This paper takes an actor network theory approach to explore some of the ways that algorithms co-construct identity and relational meaning in contemporary use of social media. Based on intensive interviews with participants as well as activity logging and data tracking, the author presents a richly...... layered set of accounts to help build our understanding of how individuals relate to their devices, search systems, and social network sites. This work extends critical analyses of the power of algorithms in implicating the social self by offering narrative accounts from multiple perspectives. It also...... contributes an innovative method for blending actor network theory with symbolic interaction to grapple with the complexity of everyday sensemaking practices within networked global information flows....

  4. Quantum mechanics over sets

    Science.gov (United States)

    Ellerman, David

    2014-03-01

    In models of QM over finite fields (e.g., Schumacher's ``modal quantum theory'' MQT), one finite field stands out, Z2, since Z2 vectors represent sets. QM (finite-dimensional) mathematics can be transported to sets resulting in quantum mechanics over sets or QM/sets. This gives a full probability calculus (unlike MQT with only zero-one modalities) that leads to a fulsome theory of QM/sets including ``logical'' models of the double-slit experiment, Bell's Theorem, QIT, and QC. In QC over Z2 (where gates are non-singular matrices as in MQT), a simple quantum algorithm (one gate plus one function evaluation) solves the Parity SAT problem (finding the parity of the sum of all values of an n-ary Boolean function). Classically, the Parity SAT problem requires 2^n function evaluations in contrast to the one function evaluation required in the quantum algorithm. This is quantum speedup but with all the calculations over Z2 just like classical computing. This shows definitively that the source of quantum speedup is not in the greater power of computing over the complex numbers, and confirms the idea that the source is in superposition.

  5. Fast compact algorithms and software for spline smoothing

    CERN Document Server

    Weinert, Howard L

    2012-01-01

    Fast Compact Algorithms and Software for Spline Smoothing investigates algorithmic alternatives for computing cubic smoothing splines when the amount of smoothing is determined automatically by minimizing the generalized cross-validation score. These algorithms are based on Cholesky factorization, QR factorization, or the fast Fourier transform. All algorithms are implemented in MATLAB and are compared based on speed, memory use, and accuracy. An overall best algorithm is identified, which allows very large data sets to be processed quickly on a personal computer.

  6. Performance Analysis of Iterative Decoding Algorithms for PEG LDPC Codes in Nakagami Fading Channels

    Directory of Open Access Journals (Sweden)

    O. Al Rasheed

    2013-11-01

    Full Text Available In this paper we give a comparative analysis of decoding algorithms for Low Density Parity Check (LDPC) codes in a channel with the Nakagami distribution of the fading envelope. We consider the Progressive Edge-Growth (PEG) method and the Improved PEG method for parity check matrix construction, which can be used to avoid short girths, small trapping sets and a high error floor. A comparative analysis of several classes of LDPC codes in various propagation conditions, decoded using different decoding algorithms, is also presented.

  7. A Framework To Support Management Of HIV/AIDS Using K-Means And Random Forest Algorithm

    Directory of Open Access Journals (Sweden)

    Gladys Iseu

    2017-06-01

    Full Text Available The healthcare industry generates large amounts of complex data about patients, hospital resources, disease management, electronic patient records, and medical devices, among others. The availability of these huge amounts of medical data creates a need for powerful mining tools to support health care professionals in the diagnosis, treatment, and management of HIV/AIDS. Several data mining techniques have been used in the management of different data sets. Data mining techniques have been categorized into regression algorithms, segmentation algorithms, association algorithms, sequence analysis algorithms, and classification algorithms. In the medical field, there has not been a specific study that incorporates two or more data mining algorithms, which limits the level of decision support available to medical practitioners. This study identified the extent to which the K-means algorithm can cluster patient characteristics, evaluated the extent to which the random forest algorithm can classify the data for informed decision making, and designed a framework to support medical decision making in the treatment of HIV/AIDS-related diseases in Kenya. The paper further used the random forest classification algorithm to compute proximities between pairs of cases, which can be used in clustering, in locating outliers, or, by scaling, to give interesting views of the data.
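
    A minimal scikit-learn sketch of the two stages combined in this framework: cluster records with k-means, then feed the cluster label as an additional feature to a random forest classifier. The synthetic features and labels stand in for real patient data, and this is an illustration of the combination rather than the study's exact pipeline.

```python
# Hedged sketch: k-means clustering feeding a random forest classifier.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))            # placeholder patient features
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # placeholder outcome label

clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
X_aug = np.column_stack([X, clusters])   # cluster membership as an extra feature

forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_aug, y)
print(forest.score(X_aug, y))
```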

  8. Accelerated EM-based clustering of large data sets

    NARCIS (Netherlands)

    Verbeek, J.J.; Nunnink, J.R.J.; Vlassis, N.

    2006-01-01

    Motivated by the poor performance (linear complexity) of the EM algorithm in clustering large data sets, and inspired by the successful accelerated versions of related algorithms like k-means, we derive an accelerated variant of the EM algorithm for Gaussian mixtures that: (1) offers speedups that

  9. PROPOSAL OF ALGORITHM FOR ROUTE OPTIMIZATION

    Directory of Open Access Journals (Sweden)

    Robert Ramon de Carvalho Sousa

    2016-06-01

    Full Text Available This article uses the “Six Sigma” methodology to elaborate an algorithm for routing problems that is able to obtain more efficient results than Clarke and Wright's (CW) algorithm (1964) in situations of randomly increasing product delivery demands, where the service level cannot be increased. In some situations, the proposed algorithm obtained more efficient results than the CW algorithm. The key factor was a reduction in the number of mistakes (one-way routes) and in the level of result variation.
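
    For reference, the classical Clarke and Wright (1964) baseline against which the proposed algorithm is compared starts from the savings of joining two customers on one route instead of two separate depot round trips; a minimal sketch of that savings computation is given below (the greedy route-merging and capacity checks are omitted).

```python
# Hedged sketch of the Clarke-Wright savings computation.
def clarke_wright_savings(dist, depot=0):
    """dist: symmetric distance matrix whose row/column `depot` is the depot."""
    customers = [c for c in range(len(dist)) if c != depot]
    savings = []
    for idx, i in enumerate(customers):
        for j in customers[idx + 1:]:
            s = dist[depot][i] + dist[depot][j] - dist[i][j]
            savings.append((s, i, j))
    # Routes would then be merged greedily in decreasing order of savings,
    # subject to vehicle capacity constraints (not shown here).
    return sorted(savings, reverse=True)

dist = [[0, 4, 5, 6],
        [4, 0, 2, 7],
        [5, 2, 0, 3],
        [6, 7, 3, 0]]
print(clarke_wright_savings(dist))
```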

  10. Multisensor data fusion algorithm development

    Energy Technology Data Exchange (ETDEWEB)

    Yocky, D.A.; Chadwick, M.D.; Goudy, S.P.; Johnson, D.K.

    1995-12-01

    This report presents a two-year LDRD research effort into multisensor data fusion. We approached the problem by addressing the available types of data, preprocessing that data, and developing fusion algorithms using that data. The report reflects these three distinct areas. First, the possible data sets for fusion are identified. Second, automated registration techniques for imagery data are analyzed. Third, two fusion techniques are presented. The first fusion algorithm is based on the two-dimensional discrete wavelet transform. Using test images, the wavelet algorithm is compared against intensity modulation and intensity-hue-saturation image fusion algorithms that are available in commercial software. The wavelet approach outperforms the other two fusion techniques by preserving spectral/spatial information more precisely. The wavelet fusion algorithm was also applied to Landsat Thematic Mapper and SPOT panchromatic imagery data. The second algorithm is based on a linear-regression technique. We analyzed the technique using the same Landsat and SPOT data.

  11. Algorithmic and user study of an autocompletion algorithm on a large medical vocabulary.

    Science.gov (United States)

    Sevenster, Merlijn; van Ommering, Rob; Qian, Yuechen

    2012-02-01

    Autocompletion supports human-computer interaction in software applications that let users enter textual data. We will be inspired by the use case in which medical professionals enter ontology concepts, catering to the ongoing demand for structured and standardized data in medicine. Our goal is to give an algorithmic analysis of one particular autocompletion algorithm, called the multi-prefix matching algorithm, which suggests terms whose words' prefixes contain all words in the string typed by the user, e.g., in this sense, opt ner me matches optic nerve meningioma. Second, we aim to investigate how well it supports users entering concepts from a large and comprehensive medical vocabulary (SNOMED CT). We give a concise description of the multi-prefix algorithm, and sketch how it can be optimized to meet the required response time. Performance will be compared to a baseline algorithm, which gives suggestions that extend the string typed by the user to the right, e.g. optic nerve m gives optic nerve meningioma, but opt ner me does not. We conduct a user experiment in which 12 participants are invited to complete 40 SNOMED CT terms with the baseline algorithm and another set of 40 SNOMED CT terms with the multi-prefix algorithm. Our results show that users need significantly fewer keystrokes when supported by the multi-prefix algorithm than when supported by the baseline algorithm. The proposed algorithm is a competitive candidate for searching and retrieving terms from a large medical ontology. Copyright © 2011 Elsevier Inc. All rights reserved.
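
    The matching rule described above (every typed word must be a prefix of some distinct word of the term) is easy to sketch; the version below is a naive greedy check over a toy vocabulary, whereas a production implementation over all of SNOMED CT would need indexing to meet the response-time requirements discussed in the paper.

```python
# Hedged sketch of multi-prefix matching: "opt ner me" matches "optic nerve meningioma".
def multi_prefix_match(query, term):
    query_words = query.lower().split()
    term_words = term.lower().split()
    used = [False] * len(term_words)
    for q in query_words:
        for k, w in enumerate(term_words):
            if not used[k] and w.startswith(q):
                used[k] = True
                break
        else:
            return False            # some query word matches no remaining term word
    return True

vocabulary = ["optic nerve meningioma", "optic neuritis", "nerve sheath tumour"]
print([t for t in vocabulary if multi_prefix_match("opt ner me", t)])
```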

  12. The Top Ten Algorithms in Data Mining

    CERN Document Server

    Wu, Xindong

    2009-01-01

    From classification and clustering to statistical learning, association analysis, and link mining, this book covers the most important topics in data mining research. It presents the ten most influential algorithms used in the data mining community today. Each chapter provides a detailed description of the algorithm, a discussion of available software implementation, advanced topics, and exercises. With a simple data set, examples illustrate how each algorithm works and highlight the overall performance of each algorithm in a real-world application. Featuring contributions from leading researc

  13. Earthquake—explosion discrimination using genetic algorithm-based boosting approach

    Science.gov (United States)

    Orlic, Niksa; Loncaric, Sven

    2010-02-01

    An important and challenging problem in seismic data processing is to discriminate between natural seismic events such as earthquakes and artificial seismic events such as explosions. Many automatic techniques for seismogram classification have been proposed in the literature. Most of these methods have a similar approach to seismogram classification: a predefined set of features based on ad-hoc feature selection criteria is extracted from the seismogram waveform or spectral data and these features are used for signal classification. In this paper we propose a novel approach for seismogram classification. A specially formulated genetic algorithm has been employed to automatically search for a near-optimal seismogram feature set, instead of using ad-hoc feature selection criteria. A boosting method is added to the genetic algorithm when searching for multiple features in order to improve classification performance. A learning set of seismogram data is used by the genetic algorithm to discover a near-optimal feature set. The feature set identified by the genetic algorithm is then used for seismogram classification. The described method is developed to classify seismograms in two groups, whereas a brief overview of method extension for multiple group classification is given. For method verification, a learning set consisting of 40 local earthquake seismograms and 40 explosion seismograms was used. The method was validated on seismogram set consisting of 60 local earthquake seismograms and 60 explosion seismograms, with correct classification of 85%.

  14. Modified Clipped LMS Algorithm

    Directory of Open Access Journals (Sweden)

    Lotfizad Mojtaba

    2005-01-01

    Full Text Available Abstract A new algorithm is proposed for updating the weights of an adaptive filter. The proposed algorithm is a modification of an existing method, namely, the clipped LMS, and uses a three-level quantization scheme that involves the threshold clipping of the input signals in the filter weight update formula. Mathematical analysis shows the convergence of the filter weights to the optimum Wiener filter weights. Also, it can be proved that the proposed modified clipped LMS (MCLMS algorithm has better tracking than the LMS algorithm. In addition, this algorithm has reduced computational complexity relative to the unmodified one. By using a suitable threshold, it is possible to increase the tracking capability of the MCLMS algorithm compared to the LMS algorithm, but this causes slower convergence. Computer simulations confirm the mathematical analysis presented.
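
    A rough sketch of a clipped-LMS style update is shown below: the input regressor is passed through a three-level quantizer before entering the weight update. The threshold, step size, filter length and toy identification example are illustrative assumptions, not the paper's exact MCLMS recursion.

```python
# Hedged sketch of an LMS update with three-level (clipped) input quantization.
import numpy as np

def quantize3(x, threshold):
    """Map each sample to -1, 0, or +1 according to the threshold."""
    return np.where(x > threshold, 1.0, np.where(x < -threshold, -1.0, 0.0))

def clipped_lms_step(w, x_window, desired, mu=0.05, threshold=0.1):
    e = desired - w @ x_window                       # estimation error
    return w + mu * e * quantize3(x_window, threshold), e

# Toy adaptive identification of a 4-tap filter driven by white noise.
rng = np.random.default_rng(1)
true_w = np.array([0.5, -0.3, 0.2, 0.1])
w = np.zeros(4)
x = rng.normal(size=2000)
for n in range(4, len(x)):
    window = x[n - 4:n][::-1]
    w, _ = clipped_lms_step(w, window, true_w @ window)
print(np.round(w, 2))   # should approach the true weights
```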

  15. KM-FCM: A fuzzy clustering optimization algorithm based on Mahalanobis distance

    Directory of Open Access Journals (Sweden)

    Zhiwen ZU

    2018-04-01

    Full Text Available The traditional fuzzy clustering algorithm uses the Euclidean distance as the similarity criterion, which is disadvantageous for multidimensional data processing. To address this, the Mahalanobis distance is used instead of the traditional Euclidean distance, and the optimization of a fuzzy clustering algorithm based on the Mahalanobis distance is studied to enhance the clustering effect and ability. By initializing the cluster means with a heuristic search algorithm combined with the k-means algorithm, and by using a validity function that automatically adjusts the optimal number of clusters, an optimization algorithm, KM-FCM, is proposed. The new algorithm is compared with the FCM, FCM-M, and M-FCM algorithms on three standard data sets. The experimental results show that the KM-FCM algorithm is effective. It has higher clustering accuracy than FCM, FCM-M, and M-FCM, and handles high-dimensional data clustering well. It has a global optimization effect, and the number of clusters does not need to be set in advance. The new algorithm provides a reference for the optimization of fuzzy clustering algorithms based on the Mahalanobis distance.

  16. Level Set Projection Method for Incompressible Navier-Stokes on Arbitrary Boundaries

    KAUST Repository

    Williams-Rioux, Bertrand

    2012-01-12

    A second-order level set projection method for the incompressible Navier-Stokes equations is proposed to solve flow around arbitrary geometries. We used a rectilinear grid with collocated, cell-centered velocity and pressure. An explicit Godunov procedure is used to address the nonlinear advection terms, and an implicit Crank-Nicolson method to update viscous effects. An approximate pressure projection is implemented at the end of the time stepping, using multigrid as a conventional fast iterative method. The level set method developed by Osher and Sethian [17] is implemented to address real momentum and pressure boundary conditions by the advection of a distance function, as proposed by Aslam [3]. Numerical results for the Strouhal number and drag coefficients validated the model with good accuracy for flow over a cylinder in the parallel shedding regime (47 < Re < 180). Simulations for an array of cylinders and an oscillating cylinder were performed, with the latter demonstrating our method's ability to handle dynamic boundary conditions.

  17. Evidence-based algorithm for heparin dosing before cardiopulmonary bypass. Part 1: Development of the algorithm.

    Science.gov (United States)

    McKinney, Mark C; Riley, Jeffrey B

    2007-12-01

    The incidence of heparin resistance during adult cardiac surgery with cardiopulmonary bypass has been reported at 15%-20%. The consistent use of a clinical decision-making algorithm may increase the consistency of patient care and likely reduce the total required heparin dose and other problems associated with heparin dosing. After a directed survey of practicing perfusionists regarding treatment of heparin resistance and a literature search for high-level evidence regarding the diagnosis and treatment of heparin resistance, an evidence-based decision-making algorithm was constructed. The face validity of the algorithm's decisive steps and logic was confirmed by a second survey of practicing perfusionists. The algorithm begins with a review of the patient history to identify predictors of heparin resistance. The definition of heparin resistance contained in the algorithm is an activated clotting time below target despite a 450 IU/kg heparin loading dose. Based on the literature, the treatment for heparin resistance used in the algorithm is antithrombin III supplementation. The algorithm seems to be valid and is supported by high-level evidence and clinician opinion. The next step is a human randomized clinical trial to test the clinical procedure guideline algorithm vs. current standard clinical practice.

  18. Parallelization of TMVA Machine Learning Algorithms

    CERN Document Server

    Hajili, Mammad

    2017-01-01

    This report reflects my work on the parallelization of TMVA machine learning algorithms integrated into the ROOT Data Analysis Framework during a summer internship at CERN. The report consists of four important parts: the data set used in training and validation, the algorithms to which multiprocessing was applied, the parallelization techniques, and the resulting changes in execution time as the number of workers varies.

  19. List-Based Simulated Annealing Algorithm for Traveling Salesman Problem

    Directory of Open Access Journals (Sweden)

    Shi-hua Zhan

    2016-01-01

    Full Text Available The simulated annealing (SA) algorithm is a popular intelligent optimization algorithm which has been successfully applied in many fields. Parameter setting is a key factor for its performance, but it is also tedious work. To simplify parameter setting, we present a list-based simulated annealing (LBSA) algorithm to solve the traveling salesman problem (TSP). The LBSA algorithm uses a novel list-based cooling schedule to control the decrease of temperature. Specifically, a list of temperatures is created first, and then the maximum temperature in the list is used by the Metropolis acceptance criterion to decide whether to accept a candidate solution. The temperature list is adapted iteratively according to the topology of the solution space of the problem. The effectiveness and the parameter sensitivity of the list-based cooling schedule are illustrated through benchmark TSP problems. The LBSA algorithm, whose performance is robust over a wide range of parameter values, shows competitive performance compared with some other state-of-the-art algorithms.
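
    A compact sketch of the list-based cooling idea for the TSP is given below: the Metropolis test always uses the current maximum of a temperature list, and an accepted uphill move replaces that maximum with a temperature derived from the move itself. The 2-opt neighbourhood, list length and toy instance are illustrative choices, not the paper's exact scheme.

```python
# Hedged sketch of list-based simulated annealing for the TSP.
import math, random

def tour_length(tour, dist):
    return sum(dist[tour[i - 1]][tour[i]] for i in range(len(tour)))

def list_based_sa(dist, list_len=20, iters=5000, seed=0):
    random.seed(seed)
    n = len(dist)
    tour = list(range(n))
    random.shuffle(tour)
    best, best_len = tour[:], tour_length(tour, dist)
    temps = sorted(random.uniform(1.0, 10.0) for _ in range(list_len))
    for _ in range(iters):
        t_max = temps[-1]                      # always use the list maximum
        i, j = sorted(random.sample(range(n), 2))
        cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]   # 2-opt move
        delta = tour_length(cand, dist) - tour_length(tour, dist)
        if delta <= 0:
            tour = cand
        else:
            p = max(random.random(), 1e-12)
            if p < math.exp(-delta / t_max):   # Metropolis acceptance
                temps[-1] = -delta / math.log(p)   # accepted uphill move refreshes the list
                temps.sort()
                tour = cand
        if tour_length(tour, dist) < best_len:
            best, best_len = tour[:], tour_length(tour, dist)
    return best, best_len

# Toy instance on five points in the plane.
pts = [(0, 0), (0, 1), (1, 1), (1, 0), (2, 0.5)]
dist = [[math.dist(a, b) for b in pts] for a in pts]
print(list_based_sa(dist))
```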

  20. Stochastic level-set variational implicit-solvent approach to solute-solvent interfacial fluctuations

    Energy Technology Data Exchange (ETDEWEB)

    Zhou, Shenggao, E-mail: sgzhou@suda.edu.cn, E-mail: bli@math.ucsd.edu [Department of Mathematics and Mathematical Center for Interdiscipline Research, Soochow University, 1 Shizi Street, Jiangsu, Suzhou 215006 (China); Sun, Hui; Cheng, Li-Tien [Department of Mathematics, University of California, San Diego, La Jolla, California 92093-0112 (United States); Dzubiella, Joachim [Soft Matter and Functional Materials, Helmholtz-Zentrum Berlin, 14109 Berlin, Germany and Institut für Physik, Humboldt-Universität zu Berlin, 12489 Berlin (Germany); Li, Bo, E-mail: sgzhou@suda.edu.cn, E-mail: bli@math.ucsd.edu [Department of Mathematics and Quantitative Biology Graduate Program, University of California, San Diego, La Jolla, California 92093-0112 (United States); McCammon, J. Andrew [Department of Chemistry and Biochemistry, Department of Pharmacology, Howard Hughes Medical Institute, University of California, San Diego, La Jolla, California 92093-0365 (United States)

    2016-08-07

    Recent years have seen the initial success of a variational implicit-solvent model (VISM), implemented with a robust level-set method, in capturing efficiently different hydration states and providing quantitatively good estimation of solvation free energies of biomolecules. The level-set minimization of the VISM solvation free-energy functional of all possible solute-solvent interfaces or dielectric boundaries predicts an equilibrium biomolecular conformation that is often close to an initial guess. In this work, we develop a theory in the form of Langevin geometrical flow to incorporate solute-solvent interfacial fluctuations into the VISM. Such fluctuations are crucial to biomolecular conformational changes and binding process. We also develop a stochastic level-set method to numerically implement such a theory. We describe the interfacial fluctuation through the “normal velocity” that is the solute-solvent interfacial force, derive the corresponding stochastic level-set equation in the sense of Stratonovich so that the surface representation is independent of the choice of implicit function, and develop numerical techniques for solving such an equation and processing the numerical data. We apply our computational method to study the dewetting transition in the system of two hydrophobic plates and a hydrophobic cavity of a synthetic host molecule cucurbit[7]uril. Numerical simulations demonstrate that our approach can describe an underlying system jumping out of a local minimum of the free-energy functional and can capture dewetting transitions of hydrophobic systems. In the case of two hydrophobic plates, we find that the wavelength of interfacial fluctuations has a strong influence to the dewetting transition. In addition, we find that the estimated energy barrier of the dewetting transition scales quadratically with the inter-plate distance, agreeing well with existing studies of molecular dynamics simulations. Our work is a first step toward the

  1. Fast Dating Using Least-Squares Criteria and Algorithms.

    Science.gov (United States)

    To, Thu-Hien; Jung, Matthieu; Lycett, Samantha; Gascuel, Olivier

    2016-01-01

    Phylogenies provide a useful way to understand the evolutionary history of genetic samples, and data sets with more than a thousand taxa are becoming increasingly common, notably with viruses (e.g., human immunodeficiency virus (HIV)). Dating ancestral events is one of the first, essential goals with such data. However, current sophisticated probabilistic approaches struggle to handle data sets of this size. Here, we present very fast dating algorithms, based on a Gaussian model closely related to the Langley-Fitch molecular-clock model. We show that this model is robust to uncorrelated violations of the molecular clock. Our algorithms apply to serial data, where the tips of the tree have been sampled through time. They estimate the substitution rate and the dates of all ancestral nodes. When the input tree is unrooted, they can provide an estimate for the root position, thus representing a new, practical alternative to the standard rooting methods (e.g., midpoint). Our algorithms exploit the tree (recursive) structure of the problem at hand, and the close relationships between least-squares and linear algebra. We distinguish between an unconstrained setting and the case where the temporal precedence constraint (i.e., an ancestral node must be older than its daughter nodes) is accounted for. With rooted trees, the former is solved using linear algebra in linear computing time (i.e., proportional to the number of taxa), while the resolution of the latter, constrained setting, is based on an active-set method that runs in nearly linear time. With unrooted trees the computing time becomes (nearly) quadratic (i.e., proportional to the square of the number of taxa). In all cases, very large input trees (>10,000 taxa) can easily be processed and transformed into time-scaled trees. We compare these algorithms to standard methods (root-to-tip, r8s version of Langley-Fitch method, and BEAST). Using simulated data, we show that their estimation accuracy is similar to that

  2. Variation in efficiency of parallel algorithms. [for study of stiffness matrices in planar trusses

    Science.gov (United States)

    Hayashi, A.; Melosh, R. J.; Utku, S.; Salama, M.

    1985-01-01

    The objective of the present study is to investigate the efficiency of some iterative parallel-processor linear-equation-solving algorithms for the analysis of typical linear engineering systems. Attention is given to a set of n linear equations, Ku = p, where K = an n x n positive definite, sparsely populated, symmetric matrix, u = an n x 1 vector of unknown responses, and p = an n x 1 vector of prescribed constants. This study is concerned with a hybrid method in which iteration is used to solve the problem, while a direct method is used on the local processor level. Variations in the efficiency of parallel algorithms are explored. Measures of the efficiency are based on computer experiments regarding the algorithms. For all the algorithms, the wall-clock time is found to decrease as the number of processors increases.
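
    As a small illustration of the class of iterative solvers considered (a plain Jacobi iteration rather than the paper's hybrid parallel scheme, with an assumed 1D-Laplacian test matrix), one might write:

```python
# Minimal sketch of a stationary iterative solver for K u = p with K sparse, symmetric and
# positive definite. The test matrix is an assumption; the paper's hybrid iterative/direct
# parallel method is not reproduced here.
import numpy as np
import scipy.sparse as sp

def jacobi(K, p, tol=1e-8, max_iter=10_000):
    d = K.diagonal()                       # diagonal of K
    u = np.zeros_like(p)
    for _ in range(max_iter):
        r = p - K @ u                      # residual
        if np.linalg.norm(r) < tol * np.linalg.norm(p):
            break
        u = u + r / d                      # Jacobi update
    return u

n = 30
K = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")  # SPD test matrix
p = np.ones(n)
u = jacobi(K, p)
print(np.linalg.norm(K @ u - p))           # small residual after convergence
```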

  3. Algorithm Sorts Groups Of Data

    Science.gov (United States)

    Evans, J. D.

    1987-01-01

    For efficient sorting, algorithm finds set containing minimum or maximum most significant data. Sets of data sorted as desired. Sorting process simplified by reduction of each multielement set of data to single representative number. First, each set of data expressed as polynomial with suitably chosen base, using elements of set as coefficients. Most significant element placed in term containing largest exponent. Base selected by examining range in value of data elements. Resulting series summed to yield single representative number. Numbers easily sorted, and each such number converted back to original set of data by successive division. Program written in BASIC.
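
    A Python sketch of the encoding idea (the original program was written in BASIC; the example data and base selection here are assumptions):

```python
# Each set is collapsed to one number by treating its elements as coefficients of a
# polynomial in a base larger than any element (most significant element with the largest
# exponent); the numbers are sorted and the sets recovered by successive division.
def encode(elements, base):
    value = 0
    for e in elements:               # most significant element first
        value = value * base + e
    return value

def decode(value, base, length):
    out = []
    for _ in range(length):
        value, e = divmod(value, base)
        out.append(e)
    return out[::-1]

data = [[3, 7, 2], [3, 5, 9], [1, 8, 8]]
base = max(max(s) for s in data) + 1             # base chosen from the data's value range
keys = sorted(encode(s, base) for s in data)
print([decode(k, base, 3) for k in keys])        # sets ordered by most significant data
```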

  4. The Algorithm of Link Prediction on Social Network

    Directory of Open Access Journals (Sweden)

    Liyan Dong

    2013-01-01

    Full Text Available At present, most link prediction algorithms are based on the similarity between two entities. Social network topology information is one of the main sources used to design the similarity function between entities. But the existing link prediction algorithms do not make sufficient use of network topology information. To address this shortcoming of traditional link prediction algorithms, we propose two improved algorithms: the CNGF algorithm, based on local information, and the KatzGF algorithm, based on global network information. To address the drawback that these algorithms treat the social network as static, we also provide a link prediction algorithm based on multiple node attributes. Finally, we verified these algorithms on the DBLP data set, and the experimental results show that the performance of the improved algorithms is superior to that of traditional link prediction algorithms.
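
    A tiny sketch of local-information link prediction by common-neighbour scoring, the family CNGF belongs to (the abstract does not give the exact CNGF weighting, so plain common-neighbour counting is used, and the toy graph is an assumption):

```python
# Score every non-adjacent pair (u, v) by the number of common neighbours; a higher score
# suggests a more likely future link. This is the generic local-information baseline,
# not the CNGF or KatzGF algorithm itself.
def common_neighbor_scores(adj):
    nodes = list(adj)
    scores = {}
    for i, u in enumerate(nodes):
        for v in nodes[i + 1:]:
            if v not in adj[u]:
                scores[(u, v)] = len(adj[u] & adj[v])
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Toy undirected graph stored as adjacency sets (e.g. a small co-authorship network).
adj = {
    "a": {"b", "c"}, "b": {"a", "c", "d"},
    "c": {"a", "b", "d"}, "d": {"b", "c", "e"}, "e": {"d"},
}
print(common_neighbor_scores(adj)[:3])   # most promising candidate links first
```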

  5. Metal-induced streak artifact reduction using iterative reconstruction algorithms in x-ray computed tomography image of the dentoalveolar region.

    Science.gov (United States)

    Dong, Jian; Hayakawa, Yoshihiko; Kannenberg, Sven; Kober, Cornelia

    2013-02-01

    The objective of this study was to reduce metal-induced streak artifacts on oral and maxillofacial x-ray computed tomography (CT) images by developing a fast statistical image reconstruction system using iterative reconstruction algorithms. Adjacent CT images often depict similar anatomical structures in thin slices. So, first, images were reconstructed using the same projection data as an artifact-free image. Second, images were processed by the successive iterative restoration method, where projection data were generated from the reconstructed image in sequence. Besides the maximum likelihood-expectation maximization (ML-EM) algorithm, the ordered subset-expectation maximization (OS-EM) algorithm was examined. Also, small region of interest (ROI) setting and reverse processing were applied to improve performance. Both algorithms reduced artifacts instead of slightly decreasing gray levels. The OS-EM and small ROI reduced the processing duration without apparent detriments. Sequential and reverse processing did not show apparent effects. Two alternatives in iterative reconstruction methods were effective for artifact reduction. The OS-EM algorithm and small ROI setting improved the performance. Copyright © 2012 Elsevier Inc. All rights reserved.
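
    For context, a minimal sketch of the ML-EM update that OS-EM accelerates by cycling over subsets of the projections (this is the generic update with a toy system matrix, not the authors' dental-CT implementation):

```python
# Multiplicative ML-EM update: x <- x * A^T(b / (A x)) / (A^T 1). OS-EM applies the same
# update using only a subset of the rows of A at a time. A, x_true and b are toy assumptions.
import numpy as np

def mlem(A, b, n_iter=200):
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])          # sensitivity term A^T 1
    for _ in range(n_iter):
        ratio = b / (A @ x + 1e-12)           # measured / estimated projections
        x *= (A.T @ ratio) / (sens + 1e-12)
    return x

A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])               # 3 rays through 3 pixels
x_true = np.array([2.0, 1.0, 3.0])
b = A @ x_true                                 # noiseless measurements
print(mlem(A, b))                              # converges towards x_true
```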

  6. Diagnostic Assessment of Deep Learning Algorithms for Detection of Lymph Node Metastases in Women With Breast Cancer.

    Science.gov (United States)

    Ehteshami Bejnordi, Babak; Veta, Mitko; Johannes van Diest, Paul; van Ginneken, Bram; Karssemeijer, Nico; Litjens, Geert; van der Laak, Jeroen A W M; Hermsen, Meyke; Manson, Quirine F; Balkenhol, Maschenka; Geessink, Oscar; Stathonikos, Nikolaos; van Dijk, Marcory Crf; Bult, Peter; Beca, Francisco; Beck, Andrew H; Wang, Dayong; Khosla, Aditya; Gargeya, Rishab; Irshad, Humayun; Zhong, Aoxiao; Dou, Qi; Li, Quanzheng; Chen, Hao; Lin, Huang-Jing; Heng, Pheng-Ann; Haß, Christian; Bruni, Elia; Wong, Quincy; Halici, Ugur; Öner, Mustafa Ümit; Cetin-Atalay, Rengul; Berseth, Matt; Khvatkov, Vitali; Vylegzhanin, Alexei; Kraus, Oren; Shaban, Muhammad; Rajpoot, Nasir; Awan, Ruqayya; Sirinukunwattana, Korsuk; Qaiser, Talha; Tsang, Yee-Wah; Tellez, David; Annuscheit, Jonas; Hufnagl, Peter; Valkonen, Mira; Kartasalo, Kimmo; Latonen, Leena; Ruusuvuori, Pekka; Liimatainen, Kaisa; Albarqouni, Shadi; Mungal, Bharti; George, Ami; Demirci, Stefanie; Navab, Nassir; Watanabe, Seiryo; Seno, Shigeto; Takenaka, Yoichi; Matsuda, Hideo; Ahmady Phoulady, Hady; Kovalev, Vassili; Kalinovsky, Alexander; Liauchuk, Vitali; Bueno, Gloria; Fernandez-Carrobles, M Milagro; Serrano, Ismael; Deniz, Oscar; Racoceanu, Daniel; Venâncio, Rui

    2017-12-12

    Application of deep learning algorithms to whole-slide pathology images can potentially improve diagnostic accuracy and efficiency. Assess the performance of automated deep learning algorithms at detecting metastases in hematoxylin and eosin-stained tissue sections of lymph nodes of women with breast cancer and compare it with pathologists' diagnoses in a diagnostic setting. Researcher challenge competition (CAMELYON16) to develop automated solutions for detecting lymph node metastases (November 2015-November 2016). A training data set of whole-slide images from 2 centers in the Netherlands with (n = 110) and without (n = 160) nodal metastases verified by immunohistochemical staining were provided to challenge participants to build algorithms. Algorithm performance was evaluated in an independent test set of 129 whole-slide images (49 with and 80 without metastases). The same test set of corresponding glass slides was also evaluated by a panel of 11 pathologists with time constraint (WTC) from the Netherlands to ascertain likelihood of nodal metastases for each slide in a flexible 2-hour session, simulating routine pathology workflow, and by 1 pathologist without time constraint (WOTC). Deep learning algorithms submitted as part of a challenge competition or pathologist interpretation. The presence of specific metastatic foci and the absence vs presence of lymph node metastasis in a slide or image using receiver operating characteristic curve analysis. The 11 pathologists participating in the simulation exercise rated their diagnostic confidence as definitely normal, probably normal, equivocal, probably tumor, or definitely tumor. The area under the receiver operating characteristic curve (AUC) for the algorithms ranged from 0.556 to 0.994. The top-performing algorithm achieved a lesion-level, true-positive fraction comparable with that of the pathologist WOTC (72.4% [95% CI, 64.3%-80.4%]) at a mean of 0.0125 false-positives per normal whole-slide image

  7. FRAMEWORK FOR COMPARING SEGMENTATION ALGORITHMS

    Directory of Open Access Journals (Sweden)

    G. Sithole

    2015-05-01

    Full Text Available The notion of a ‘Best’ segmentation does not exist. A segmentation algorithm is chosen based on the features it yields, the properties of the segments (point sets) it generates, and the complexity of its algorithm. The segmentation is then assessed based on a variety of metrics such as homogeneity, heterogeneity, fragmentation, etc. Even after an algorithm is chosen its performance is still uncertain because the landscape/scenarios represented in a point cloud have a strong influence on the eventual segmentation. Thus selecting an appropriate segmentation algorithm is a process of trial and error. Automating the selection of segmentation algorithms and their parameters first requires methods to evaluate segmentations. Three common approaches for evaluating segmentation algorithms are ‘goodness methods’, ‘discrepancy methods’ and ‘benchmarks’. Benchmarks are considered the most comprehensive method of evaluation. In this paper, shortcomings in current benchmark methods are identified and a framework is proposed that permits both a visual and numerical evaluation of segmentations for different algorithms, algorithm parameters and evaluation metrics. The concept of the framework is demonstrated on a real point cloud. Current results are promising and suggest that it can be used to predict the performance of segmentation algorithms.

  8. The Parallel Algorithm Based on Genetic Algorithm for Improving the Performance of Cognitive Radio

    Directory of Open Access Journals (Sweden)

    Liu Miao

    2018-01-01

    Full Text Available The intercarrier interference (ICI problem of cognitive radio (CR is severe. In this paper, the machine learning algorithm is used to obtain the optimal interference subcarriers of an unlicensed user (un-LU. Masking the optimal interference subcarriers can suppress the ICI of CR. Moreover, the parallel ICI suppression algorithm is designed to improve the calculation speed and meet the practical requirement of CR. Simulation results show that the data transmission rate threshold of un-LU can be set, the data transmission quality of un-LU can be ensured, the ICI of a licensed user (LU is suppressed, and the bit error rate (BER performance of LU is improved by implementing the parallel suppression algorithm. The ICI problem of CR is solved well by the new machine learning algorithm. The computing performance of the algorithm is improved by designing a new parallel structure and the communication performance of CR is enhanced.

  9. A Fast Segmentation Algorithm for C-V Model Based on Exponential Image Sequence Generation

    Science.gov (United States)

    Hu, J.; Lu, L.; Xu, J.; Zhang, J.

    2017-09-01

    For island coastline segmentation, a fast segmentation algorithm for the C-V model, based on exponential image sequence generation, is proposed in this paper. The exponential multi-scale C-V model with level set inheritance and boundary inheritance is developed. The main research contributions are as follows: 1) the problems of "holes" and "gaps" in the extracted coastline are solved by small-scale shrinkage, low-pass filtering and region area sorting. 2) the initial values of the SDF (Signed Distance Function) and the level set are given by Otsu segmentation, based on the difference in SAR reflection between land and sea, and are already close to the coastline. 3) the computational complexity of the continuous transition between different scales is successfully reduced by SDF and level-set inheritance. Experimental results show that the method accelerates the formation of the initial level set, shortens the time needed to extract the coastline and, at the same time, removes non-coastline bodies and improves the identification precision of the main coastline, which automates the process of coastline segmentation.
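
    A small sketch of the initialisation step described in point 2), i.e. Otsu thresholding followed by a signed distance transform (using scikit-image and SciPy; the synthetic test image is an assumption, not SAR data):

```python
# Build an initial level-set function phi0 as the signed distance to the boundary of an
# Otsu-thresholded mask; negative outside, positive inside (sign convention is arbitrary).
import numpy as np
from scipy import ndimage
from skimage.filters import threshold_otsu

def otsu_initial_level_set(image):
    mask = image > threshold_otsu(image)                  # coarse two-class split
    inside = ndimage.distance_transform_edt(mask)         # distance to boundary, inside
    outside = ndimage.distance_transform_edt(~mask)       # distance to boundary, outside
    return inside - outside                               # signed distance function (SDF)

rng = np.random.default_rng(0)
image = rng.normal(size=(64, 64)) + 3.0 * (np.arange(64)[:, None] > 32)  # two regions
phi0 = otsu_initial_level_set(image)
print(phi0.min(), phi0.max())
```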

  10. Benchmarking motion planning algorithms for bin-picking applications

    DEFF Research Database (Denmark)

    Iversen, Thomas Fridolin; Ellekilde, Lars-Peter

    2017-01-01

    Purpose For robot motion planning there exists a large number of different algorithms, each appropriate for a certain domain, and the right choice of planner depends on the specific use case. The purpose of this paper is to consider the application of bin picking and benchmark a set of motion...... planning algorithms to identify which are most suited in the given context. Design/methodology/approach The paper presents a selection of motion planning algorithms and defines benchmarks based on three different bin-picking scenarios. The evaluation is done based on a fixed set of tasks, which are planned...... and executed on a real and a simulated robot. Findings The benchmarking shows a clear difference between the planners and generally indicates that algorithms integrating optimization, despite longer planning time, perform better due to a faster execution. Originality/value The originality of this work lies...

  11. Set of Frequent Word Item sets as Feature Representation for Text with Indonesian Slang

    Science.gov (United States)

    Sa'adillah Maylawati, Dian; Putri Saptawati, G. A.

    2017-01-01

    Indonesian slang is commonly used in social media. Due to its unstructured syntax, it is difficult to extract its features based on Indonesian grammar for text mining. To address this, we propose the Set of Frequent Word Item sets (SFWI) as a text representation that is considered a good match for Indonesian slang. Besides, SFWI is able to keep the meaning of Indonesian slang with regard to the order of appearance in a sentence. We use the FP-Growth algorithm, with an added sentence separation function, to extract the SFWI features. The experiments are done with text data from social media such as Facebook, Twitter, and personal websites. The results of the experiments show that Indonesian slang was more correctly interpreted based on SFWI.

  12. Improving Limit Surface Search Algorithms in RAVEN Using Acceleration Schemes: Level II Milestone

    Energy Technology Data Exchange (ETDEWEB)

    Alfonsi, Andrea [Idaho National Laboratory (INL), Idaho Falls, ID (United States); Rabiti, Cristian [Idaho National Laboratory (INL), Idaho Falls, ID (United States); Mandelli, Diego [Idaho National Laboratory (INL), Idaho Falls, ID (United States); Cogliati, Joshua Joseph [Idaho National Laboratory (INL), Idaho Falls, ID (United States); Sen, Ramazan Sonat [Idaho National Laboratory (INL), Idaho Falls, ID (United States); Smith, Curtis Lee [Idaho National Laboratory (INL), Idaho Falls, ID (United States)

    2015-07-01

    The RAVEN code is becoming a comprehensive tool to perform Probabilistic Risk Assessment (PRA); Uncertainty Quantification (UQ) and Propagation; and Verification and Validation (V&V). The RAVEN code is being developed to support the Risk-Informed Safety Margin Characterization (RISMC) pathway by developing an advanced set of methodologies and algorithms for use in advanced risk analysis. The RISMC approach uses system simulator codes applied to stochastic analysis tools. The fundamental idea behind this coupling approach is to perturb (by employing sampling strategies) the timing and sequencing of events, the internal parameters of the system codes (i.e., uncertain parameters of the physics model) and the initial conditions to estimate value ranges and associated probabilities of figures of merit of interest for engineering and safety (e.g. core damage probability, etc.). This approach applied to complex systems such as nuclear power plants requires performing a series of computationally expensive simulation runs. The large computational burden is caused by the large set of (uncertain) parameters characterizing those systems. Consequently, exploring the uncertain/parametric domain, with a good level of confidence, is generally not affordable, considering the limited computational resources that are currently available. In addition, the recent tendency to develop newer tools, characterized by higher accuracy and larger computational resources (if compared with the presently used legacy codes, that have been developed decades ago), has made this issue even more compelling. In order to overcome these limitations, the strategy for the exploration of the uncertain/parametric space needs to make the best use of the computational resources, focusing the computational effort on those regions of the uncertain/parametric space that are “interesting” (e.g., risk-significant regions of the input space) with respect to the targeted Figures Of Merit (FOM): for example, the failure of the system

  13. Streaming Algorithms for Line Simplification

    DEFF Research Database (Denmark)

    Abam, Mohammad; de Berg, Mark; Hachenberger, Peter

    2010-01-01

    this problem in a streaming setting, where we only have a limited amount of storage, so that we cannot store all the points. We analyze the competitive ratio of our algorithms, allowing resource augmentation: we let our algorithm maintain a simplification with 2k (internal) points and compare the error of our...... simplification to the error of the optimal simplification with k points. We obtain the algorithms with O(1) competitive ratio for three cases: convex paths, where the error is measured using the Hausdorff distance (or Fréchet distance), xy-monotone paths, where the error is measured using the Hausdorff distance...... (or Fréchet distance), and general paths, where the error is measured using the Fréchet distance. In the first case the algorithm needs O(k) additional storage, and in the latter two cases the algorithm needs O(k 2) additional storage....

  14. Learning algorithms and automatic processing of languages

    International Nuclear Information System (INIS)

    Fluhr, Christian Yves Andre

    1977-01-01

    This research thesis concerns the field of artificial intelligence. It addresses learning algorithms applied to automatic processing of languages. The author first briefly describes some mechanisms of human intelligence in order to describe how these mechanisms are simulated on a computer. He outlines the specific role of learning in various manifestations of intelligence. Then, based on the Markov's algorithm theory, the author discusses the notion of learning algorithm. Two main types of learning algorithms are then addressed: firstly, an 'algorithm-teacher dialogue' type sanction-based algorithm which aims at learning how to solve grammatical ambiguities in submitted texts; secondly, an algorithm related to a document system which structures semantic data automatically obtained from a set of texts in order to be able to understand by references to any question on the content of these texts

  15. Level set segmentation of medical images based on local region statistics and maximum a posteriori probability.

    Science.gov (United States)

    Cui, Wenchao; Wang, Yi; Lei, Tao; Fan, Yangyu; Feng, Yan

    2013-01-01

    This paper presents a variational level set method for simultaneous segmentation and bias field estimation of medical images with intensity inhomogeneity. In our model, the statistics of image intensities belonging to each different tissue in local regions are characterized by Gaussian distributions with different means and variances. According to maximum a posteriori probability (MAP) and Bayes' rule, we first derive a local objective function for image intensities in a neighborhood around each pixel. Then this local objective function is integrated with respect to the neighborhood center over the entire image domain to give a global criterion. In the level set framework, this global criterion defines an energy in terms of the level set functions that represent a partition of the image domain and a bias field that accounts for the intensity inhomogeneity of the image. Therefore, image segmentation and bias field estimation are simultaneously achieved via a level set evolution process. Experimental results for synthetic and real images show the desirable performance of our method.

  16. Optical flow optimization using parallel genetic algorithm

    Science.gov (United States)

    Zavala-Romero, Olmo; Botella, Guillermo; Meyer-Bäse, Anke; Meyer Base, Uwe

    2011-06-01

    A new approach to optimize the parameters of a gradient-based optical flow model using a parallel genetic algorithm (GA) is proposed. The main characteristics of the optical flow algorithm are its bio-inspiration and robustness against contrast, static patterns and noise, besides working consistently with several optical illusions where other algorithms fail. This model depends on many parameters, which determine the number of channels, the orientations required, and the length and shape of the kernel functions used in the convolution stage, among many more. The GA is used to find a set of parameters which improve the accuracy of the optical flow on inputs where ground-truth data are available. This set of parameters helps to understand which of them are better suited for each type of input and can be used to estimate the parameters of the optical flow algorithm when used with videos that share similar characteristics. The proposed implementation takes into account the embarrassingly parallel nature of the GA and uses the OpenMP Application Programming Interface (API) to speed up the process of estimating an optimal set of parameters. The information obtained in this work can be used to dynamically reconfigure systems, with potential applications in robotics, medical imaging and tracking.

  17. A new algorithm to construct phylogenetic networks from trees.

    Science.gov (United States)

    Wang, J

    2014-03-06

    Developing appropriate methods for constructing phylogenetic networks from tree sets is an important problem, and much research is currently being undertaken in this area. BIMLR is an algorithm that constructs phylogenetic networks from tree sets. The algorithm can construct a much simpler network than other available methods. Here, we introduce an improved version of the BIMLR algorithm, QuickCass. QuickCass changes the selection strategy of the labels of leaves below the reticulate nodes, i.e., the nodes with an indegree of at least 2 in BIMLR. We show that QuickCass can construct simpler phylogenetic networks than BIMLR. Furthermore, we show that QuickCass is a polynomial-time algorithm when the output network that is constructed by QuickCass is binary.

  18. ON A NUMERICAL ALGORITHM FOR UNCERTAIN SYSTEM ∫ Φ ...

    African Journals Online (AJOL)

    Administrator

    Science World Journal Vol 7 (No 1) 2012, www.scienceworldjournal.org, ISSN 1597-6343. On a Numerical Algorithm for Uncertain System: Newton's Algorithm. Step 1: Calculate F(x_k), g(x_k) and A(x_k). Step 2: Check whether ||g(x_k)|| < ε for a predetermined ε; if so, stop; else Step 3: Solve A(x_k) P_k = -g(x_k) for P_k. Step 4: Set x_{k+1} = x_k + P_k.
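
    A minimal sketch of the Newton iteration reconstructed above, applied to a small nonlinear system g(x) = 0 with Jacobian A(x) (the example system is an assumption):

```python
# Newton's algorithm: stop when ||g(x_k)|| < eps, otherwise solve A(x_k) P_k = -g(x_k)
# and set x_{k+1} = x_k + P_k.
import numpy as np

def newton(g, A, x0, eps=1e-10, max_iter=50):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        gx = g(x)
        if np.linalg.norm(gx) < eps:        # Step 2: convergence test
            break
        p = np.linalg.solve(A(x), -gx)      # Step 3: A(x_k) P_k = -g(x_k)
        x = x + p                           # Step 4: x_{k+1} = x_k + P_k
    return x

g = lambda x: np.array([x[0] ** 2 + x[1] ** 2 - 1.0, x[0] - x[1]])   # circle and line
A = lambda x: np.array([[2 * x[0], 2 * x[1]], [1.0, -1.0]])          # Jacobian of g
print(newton(g, A, [1.0, 0.0]))   # approx. [0.7071, 0.7071]
```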

  19. Quantum algorithms for topological and geometric analysis of data

    Science.gov (United States)

    Lloyd, Seth; Garnerone, Silvano; Zanardi, Paolo

    2016-01-01

    Extracting useful information from large data sets can be a daunting task. Topological methods for analysing data sets provide a powerful technique for extracting such information. Persistent homology is a sophisticated tool for identifying topological features and for determining how such features persist as the data is viewed at different scales. Here we present quantum machine learning algorithms for calculating Betti numbers—the numbers of connected components, holes and voids—in persistent homology, and for finding eigenvectors and eigenvalues of the combinatorial Laplacian. The algorithms provide an exponential speed-up over the best currently known classical algorithms for topological data analysis. PMID:26806491

  20. Large-scale sequential quadratic programming algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Eldersveld, S.K.

    1992-09-01

    The problem addressed is the general nonlinear programming problem: finding a local minimizer for a nonlinear function subject to a mixture of nonlinear equality and inequality constraints. The methods studied are in the class of sequential quadratic programming (SQP) algorithms, which have previously proved successful for problems of moderate size. Our goal is to devise an SQP algorithm that is applicable to large-scale optimization problems, using sparse data structures and storing less curvature information but maintaining the property of superlinear convergence. The main features are: 1. The use of a quasi-Newton approximation to the reduced Hessian of the Lagrangian function. Only an estimate of the reduced Hessian matrix is required by our algorithm. The impact of not having available the full Hessian approximation is studied and alternative estimates are constructed. 2. The use of a transformation matrix Q. This allows the QP gradient to be computed easily when only the reduced Hessian approximation is maintained. 3. The use of a reduced-gradient form of the basis for the null space of the working set. This choice of basis is more practical than an orthogonal null-space basis for large-scale problems. The continuity condition for this choice is proven. 4. The use of incomplete solutions of quadratic programming subproblems. Certain iterates generated by an active-set method for the QP subproblem are used in place of the QP minimizer to define the search direction for the nonlinear problem. An implementation of the new algorithm has been obtained by modifying the code MINOS. Results and comparisons with MINOS and NPSOL are given for the new algorithm on a set of 92 test problems.

  1. Algorithm for programming function generators

    International Nuclear Information System (INIS)

    Bozoki, E.

    1981-01-01

    The present paper deals with a mathematical problem, encountered when driving a fully programmable μ-processor controlled function generator. An algorithm is presented to approximate a desired function by a set of straight segments in such a way that additional restrictions (hardware imposed) are also satisfied. A computer program which incorporates this algorithm and automatically generates the necessary input for the function generator for a broad class of desired functions is also described
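
    A minimal sketch of approximating a function by straight segments within a tolerance, in the spirit of the algorithm described (the hardware-imposed restrictions of the actual function generator are not modelled; tolerance, sample grid and target function are assumptions):

```python
# Greedily extend each straight segment as far as the chord stays within tol of the
# sampled function, then start a new segment.
import numpy as np

def piecewise_linear_segments(f, xs, tol):
    segments, start = [], 0
    while start < len(xs) - 1:
        end = start + 1
        while end + 1 < len(xs):
            x0, x1 = xs[start], xs[end + 1]
            span = xs[start:end + 2]
            chord = f(x0) + (f(x1) - f(x0)) * (span - x0) / (x1 - x0)
            if np.max(np.abs(chord - f(span))) > tol:
                break
            end += 1
        segments.append((xs[start], xs[end]))   # breakpoints of one straight segment
        start = end
    return segments

xs = np.linspace(0.0, 2 * np.pi, 1000)
print(len(piecewise_linear_segments(np.sin, xs, tol=0.01)), "segments approximate sin(x)")
```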

  2. A physics-enabled flow restoration algorithm for sparse PIV and PTV measurements

    Science.gov (United States)

    Vlasenko, Andrey; Steele, Edward C. C.; Nimmo-Smith, W. Alex M.

    2015-06-01

    The gaps and noise present in particle image velocimetry (PIV) and particle tracking velocimetry (PTV) measurements affect the accuracy of the data collected. Existing algorithms developed for the restoration of such data are only applicable to experimental measurements collected under well-prepared laboratory conditions (i.e. where the pattern of the velocity flow field is known), and the distribution, size and type of gaps and noise may be controlled by the laboratory set-up. However, in many cases, such as PIV and PTV measurements of arbitrarily turbid coastal waters, the arrangement of such conditions is not possible. When the size of gaps or the level of noise in these experimental measurements becomes too large, their successful restoration with existing algorithms becomes questionable. Here, we outline a new physics-enabled flow restoration algorithm (PEFRA), specially designed for the restoration of such velocity data. Implemented as a ‘black box’ algorithm, where no user-background in fluid dynamics is necessary, the physical structure of the flow in gappy or noisy data is able to be restored in accordance with its hydrodynamical basis. The use of this is not dependent on types of flow, types of gaps or noise in measurements. The algorithm will operate on any data time-series containing a sequence of velocity flow fields recorded by PIV or PTV. Tests with numerical flow fields established that this method is able to successfully restore corrupted PIV and PTV measurements with different levels of sparsity and noise. This assessment of the algorithm performance is extended with an example application to in situ submersible 3D-PTV measurements collected in the bottom boundary layer of the coastal ocean, where the naturally-occurring plankton and suspended sediments used as tracers cause an increase in the noise level that, without such denoising, will contaminate the measurements.

  3. A physics-enabled flow restoration algorithm for sparse PIV and PTV measurements

    International Nuclear Information System (INIS)

    Vlasenko, Andrey; Steele, Edward C C; Nimmo-Smith, W Alex M

    2015-01-01

    The gaps and noise present in particle image velocimetry (PIV) and particle tracking velocimetry (PTV) measurements affect the accuracy of the data collected. Existing algorithms developed for the restoration of such data are only applicable to experimental measurements collected under well-prepared laboratory conditions (i.e. where the pattern of the velocity flow field is known), and the distribution, size and type of gaps and noise may be controlled by the laboratory set-up. However, in many cases, such as PIV and PTV measurements of arbitrarily turbid coastal waters, the arrangement of such conditions is not possible. When the size of gaps or the level of noise in these experimental measurements becomes too large, their successful restoration with existing algorithms becomes questionable. Here, we outline a new physics-enabled flow restoration algorithm (PEFRA), specially designed for the restoration of such velocity data. Implemented as a ‘black box’ algorithm, where no user-background in fluid dynamics is necessary, the physical structure of the flow in gappy or noisy data is able to be restored in accordance with its hydrodynamical basis. The use of this is not dependent on types of flow, types of gaps or noise in measurements. The algorithm will operate on any data time-series containing a sequence of velocity flow fields recorded by PIV or PTV. Tests with numerical flow fields established that this method is able to successfully restore corrupted PIV and PTV measurements with different levels of sparsity and noise. This assessment of the algorithm performance is extended with an example application to in situ submersible 3D-PTV measurements collected in the bottom boundary layer of the coastal ocean, where the naturally-occurring plankton and suspended sediments used as tracers cause an increase in the noise level that, without such denoising, will contaminate the measurements. (paper)

  4. Antibiotic prophylaxis in cataract surgery in the setting of penicillin allergy: A decision-making algorithm.

    Science.gov (United States)

    LaHood, Benjamin R; Andrew, Nicholas H; Goggin, Michael

    Cataract surgery is the most commonly performed surgical procedure in many developed countries. Postoperative endophthalmitis is a rare complication with potentially devastating visual outcomes. Currently, there is no global consensus regarding antibiotic prophylaxis in cataract surgery despite growing evidence of the benefits of prophylactic intracameral cefuroxime at the conclusion of surgery. The decision about which antibiotic regimen to use is further complicated in patients reporting penicillin allergy. Historic statistics suggesting crossreactivity of penicillins and cephalosporins have persisted into modern surgery. It is important for ophthalmologists to consider all available antibiotic options and have an up-to-date knowledge of antibiotic crossreactivity when faced with the dilemma of choosing appropriate antibiotic prophylaxis for patients undergoing cataract surgery with a history of penicillin allergy. Each option carries risks, and the choice may have medicolegal implications in the event of an adverse outcome. We assess the options for antibiotic prophylaxis in cataract surgery in the setting of penicillin allergy and provide an algorithm to assist decision-making for individual patients. Crown Copyright © 2017. Published by Elsevier Inc. All rights reserved.

  5. Overlap Algorithms in Flexible Job-shop Scheduling

    Directory of Open Access Journals (Sweden)

    Celia Gutierrez

    2014-06-01

    Full Text Available The flexible Job-shop Scheduling Problem (fJSP) considers the execution of jobs by a set of candidate resources while satisfying time and technological constraints. This work, which follows the hierarchical architecture, is based on an algorithm where each objective (resource allocation, start-time assignment) is solved by a genetic algorithm (GA) that optimizes a particular fitness function, and enhances the results by the execution of a set of heuristics that evaluate and repair each scheduling constraint on each operation. The aim of this work is to analyze the impact of some algorithmic features of the overlap constraint heuristics, in order to achieve the objectives to the highest degree. To demonstrate the efficiency of this approach, experimentation has been performed and compared with similar cases, tuning the GA parameters correctly.

  6. Chemical optimization algorithm for fuzzy controller design

    CERN Document Server

    Astudillo, Leslie; Castillo, Oscar

    2014-01-01

    In this book, a novel optimization method inspired by a paradigm from nature is introduced. The chemical reactions are used as a paradigm to propose an optimization method that simulates these natural processes. The proposed algorithm is described in detail and then a set of typical complex benchmark functions is used to evaluate the performance of the algorithm. Simulation results show that the proposed optimization algorithm can outperform other methods in a set of benchmark functions. This chemical reaction optimization paradigm is also applied to solve the tracking problem for the dynamic model of a unicycle mobile robot by integrating a kinematic and a torque controller based on fuzzy logic theory. Computer simulations are presented confirming that this optimization paradigm is able to outperform other optimization techniques applied to this particular robot application

  7. Algorithms for optimal dyadic decision trees

    Energy Technology Data Exchange (ETDEWEB)

    Hush, Don [Los Alamos National Laboratory; Porter, Reid [Los Alamos National Laboratory

    2009-01-01

    A new algorithm for constructing optimal dyadic decision trees was recently introduced, analyzed, and shown to be very effective for low dimensional data sets. This paper enhances and extends this algorithm by: introducing an adaptive grid search for the regularization parameter that guarantees optimal solutions for all relevant trees sizes, revising the core tree-building algorithm so that its run time is substantially smaller for most regularization parameter values on the grid, and incorporating new data structures and data pre-processing steps that provide significant run time enhancement in practice.

  8. Handling Imbalanced Data Sets in Multistage Classification

    Science.gov (United States)

    López, M.

    Multistage classification is a logical approach, based on a divide-and-conquer solution, for dealing with problems with a high number of classes. The classification problem is divided into several sequential steps, each one associated to a single classifier that works with subgroups of the original classes. In each level, the current set of classes is split into smaller subgroups of classes until they (the subgroups) are composed of only one class. The resulting chain of classifiers can be represented as a tree, which (1) simplifies the classification process by using fewer categories in each classifier and (2) makes it possible to combine several algorithms or use different attributes in each stage. Most of the classification algorithms can be biased in the sense of selecting the most populated class in overlapping areas of the input space. This can degrade a multistage classifier performance if the training set sample frequencies do not reflect the real prevalence in the population. Several techniques such as applying prior probabilities, assigning weights to the classes, or replicating instances have been developed to overcome this handicap. Most of them are designed for two-class (accept-reject) problems. In this article, we evaluate several of these techniques as applied to multistage classification and analyze how they can be useful for astronomy. We compare the results obtained by classifying a data set based on Hipparcos with and without these methods.

  9. Optimization to the Culture Conditions for Phellinus Production with Regression Analysis and Gene-Set Based Genetic Algorithm

    Science.gov (United States)

    Li, Zhongwei; Xin, Yuezhen; Wang, Xun; Sun, Beibei; Xia, Shengyu; Li, Hui

    2016-01-01

    Phellinus is a kind of fungus and is known as one of the elemental components in drugs used to prevent cancer. With the purpose of finding optimized culture conditions for Phellinus production in the laboratory, plenty of single-factor experiments were carried out and a large amount of experimental data was generated. In this work, we use the data collected from these experiments for regression analysis, and a mathematical model for predicting Phellinus production is thereby obtained. Subsequently, a gene-set based genetic algorithm is developed to optimize the values of the parameters involved in the culture conditions, including inoculum size, pH value, initial liquid volume, temperature, seed age, fermentation time, and rotation speed. The optimized values of these parameters are in accordance with biological experimental results, which indicates that our method has good predictive power for culture condition optimization. PMID:27610365
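
    A rough sketch of the overall workflow, i.e. fit a regression model to experimental data and then let a genetic algorithm search the culture-condition space for a high predicted yield (the quadratic stand-in model, parameter bounds and GA operators below are assumptions, not the paper's gene-set based algorithm):

```python
# Simple real-coded GA maximizing the prediction of a fitted model over bounded parameters.
import numpy as np

rng = np.random.default_rng(0)
bounds = np.array([[1.0, 10.0], [4.0, 8.0], [50.0, 200.0]])  # e.g. inoculum size, pH, volume

def predicted_yield(x):            # stand-in for the regression model fitted to the data
    return -((x[0] - 6.0) ** 2) - 2.0 * (x[1] - 6.5) ** 2 - 0.001 * (x[2] - 120.0) ** 2

def ga(fitness, bounds, pop_size=40, generations=100, mut=0.1):
    dim = len(bounds)
    pop = rng.uniform(bounds[:, 0], bounds[:, 1], size=(pop_size, dim))
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(scores)[-pop_size // 2:]]            # keep the best half
        pairs = rng.integers(0, len(parents), size=(pop_size, 2))     # random mating pairs
        alpha = rng.random((pop_size, dim))
        children = alpha * parents[pairs[:, 0]] + (1 - alpha) * parents[pairs[:, 1]]
        children += rng.normal(0.0, mut, children.shape) * (bounds[:, 1] - bounds[:, 0])
        pop = np.clip(children, bounds[:, 0], bounds[:, 1])           # mutate and clip
    return max(pop, key=fitness)

print(ga(predicted_yield, bounds))   # culture conditions with the highest predicted yield
```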

  10. Correction of Magnetic Optics and Beam Trajectory Using LOCO Based Algorithm with Expanded Experimental Data Sets

    Energy Technology Data Exchange (ETDEWEB)

    Romanov, A.; Edstrom, D.; Emanov, F. A.; Koop, I. A.; Perevedentsev, E. A.; Rogovsky, Yu. A.; Shwartz, D. B.; Valishev, A.

    2017-03-28

    Precise beam based measurement and correction of magnetic optics is essential for the successful operation of accelerators. The LOCO algorithm is a proven and reliable tool, which in some situations can be improved by using a broader class of experimental data. The standard data sets for LOCO include the closed orbit responses to dipole corrector variation, dispersion, and betatron tunes. This paper discusses the benefits from augmenting the data with four additional classes of experimental data: the beam shape measured with beam profile monitors; responses of closed orbit bumps to focusing field variations; betatron tune responses to focusing field variations; BPM-to-BPM betatron phase advances and beta functions in BPMs from turn-by-turn coordinates of kicked beam. All of the described features were implemented in the Sixdsimulation software that was used to correct the optics of the VEPP-2000 collider, the VEPP-5 injector booster ring, and the FAST linac.

  11. Optimization of Multipurpose Reservoir Operation with Application Particle Swarm Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Elahe Fallah Mehdipour

    2012-12-01

    Full Text Available Optimal operation of multipurpose reservoirs is one of the complex and sometimes nonlinear problems in the field of multi-objective optimization. Evolutionary algorithms are optimization tools that search the decision space using a simulation of natural biological evolution and present a set of points as the optimum solutions of the problem. In this research, the application of multi-objective particle swarm optimization (MOPSO) to the optimal operation of the Bazoft reservoir with different objectives, including generating hydropower energy, supplying downstream demands (drinking, industry and agriculture), recreation and flood control, has been considered. In this regard, the solution sets of the MOPSO algorithm for bi-combinations of objectives and of compromise programming (CP) using different weighting and power coefficients were first compared; the MOPSO algorithm in all combinations of objectives is more capable than CP of finding solutions with an appropriate distribution, and these solutions dominated the CP solutions. Then, the ending points of the solution set from the MOPSO algorithm and the nonlinear programming (NLP) results were compared. Results showed that the MOPSO algorithm, with a 0.3 percent difference from the NLP results, has more capability to present optimum solutions at the ending points of the solution set.

  12. Assessment of dedicated low-dose cardiac micro-CT reconstruction algorithms using the left ventricular volume of small rodents as a performance measure

    Energy Technology Data Exchange (ETDEWEB)

    Maier, Joscha, E-mail: joscha.maier@dkfz.de [Medical Physics in Radiology, German Cancer Research Center (DKFZ), Im Neuenheimer Feld 280, 69120 Heidelberg (Germany); Sawall, Stefan; Kachelrieß, Marc [Medical Physics in Radiology, German Cancer Research Center (DKFZ), Im Neuenheimer Feld 280, 69120 Heidelberg, Germany and Institute of Medical Physics, University of Erlangen–Nürnberg, 91052 Erlangen (Germany)

    2014-05-15

    Purpose: Phase-correlated microcomputed tomography (micro-CT) imaging plays an important role in the assessment of mouse models of cardiovascular diseases and the determination of functional parameters such as the left ventricular volume. As the current gold standard, the phase-correlated Feldkamp reconstruction (PCF), shows poor performance in the case of low-dose scans, more sophisticated reconstruction algorithms have been proposed to enable low-dose imaging. In this study, the authors focus on the McKinnon-Bates (MKB) algorithm, the low dose phase-correlated (LDPC) reconstruction, and the high-dimensional total variation minimization reconstruction (HDTV) and investigate their potential to accurately determine the left ventricular volume at different dose levels from 50 to 500 mGy. The results were verified in phantom studies of a five-dimensional (5D) mathematical mouse phantom. Methods: Micro-CT data of eight mice, each administered with an x-ray dose of 500 mGy, were acquired, retrospectively gated for cardiac and respiratory motion and reconstructed using PCF, MKB, LDPC, and HDTV. Dose levels down to 50 mGy were simulated by using only a fraction of the projections. Contrast-to-noise ratio (CNR) was evaluated as a measure of image quality. Left ventricular volume was determined using different segmentation algorithms (Otsu, level sets, region growing). Forward projections of the 5D mouse phantom were performed to simulate a micro-CT scan. The simulated data were processed the same way as the real mouse data sets. Results: Compared to the conventional PCF reconstruction, the MKB, LDPC, and HDTV algorithms yield images of increased quality in terms of CNR. While the MKB reconstruction only provides small improvements, a significant increase of the CNR is observed in LDPC and HDTV reconstructions. The phantom studies demonstrate that left ventricular volumes can be determined accurately at 500 mGy. For lower dose levels which were simulated for real mouse data sets, the

  13. Assessment of dedicated low-dose cardiac micro-CT reconstruction algorithms using the left ventricular volume of small rodents as a performance measure

    International Nuclear Information System (INIS)

    Maier, Joscha; Sawall, Stefan; Kachelrieß, Marc

    2014-01-01

    Purpose: Phase-correlated microcomputed tomography (micro-CT) imaging plays an important role in the assessment of mouse models of cardiovascular diseases and the determination of functional parameters such as the left ventricular volume. As the current gold standard, the phase-correlated Feldkamp reconstruction (PCF), shows poor performance in the case of low-dose scans, more sophisticated reconstruction algorithms have been proposed to enable low-dose imaging. In this study, the authors focus on the McKinnon-Bates (MKB) algorithm, the low dose phase-correlated (LDPC) reconstruction, and the high-dimensional total variation minimization reconstruction (HDTV) and investigate their potential to accurately determine the left ventricular volume at different dose levels from 50 to 500 mGy. The results were verified in phantom studies of a five-dimensional (5D) mathematical mouse phantom. Methods: Micro-CT data of eight mice, each administered with an x-ray dose of 500 mGy, were acquired, retrospectively gated for cardiac and respiratory motion and reconstructed using PCF, MKB, LDPC, and HDTV. Dose levels down to 50 mGy were simulated by using only a fraction of the projections. Contrast-to-noise ratio (CNR) was evaluated as a measure of image quality. Left ventricular volume was determined using different segmentation algorithms (Otsu, level sets, region growing). Forward projections of the 5D mouse phantom were performed to simulate a micro-CT scan. The simulated data were processed the same way as the real mouse data sets. Results: Compared to the conventional PCF reconstruction, the MKB, LDPC, and HDTV algorithms yield images of increased quality in terms of CNR. While the MKB reconstruction only provides small improvements, a significant increase of the CNR is observed in LDPC and HDTV reconstructions. The phantom studies demonstrate that left ventricular volumes can be determined accurately at 500 mGy. For lower dose levels which were simulated for real mouse data sets, the

  14. Assessment of dedicated low-dose cardiac micro-CT reconstruction algorithms using the left ventricular volume of small rodents as a performance measure.

    Science.gov (United States)

    Maier, Joscha; Sawall, Stefan; Kachelrieß, Marc

    2014-05-01

    Phase-correlated microcomputed tomography (micro-CT) imaging plays an important role in the assessment of mouse models of cardiovascular diseases and the determination of functional parameters such as the left ventricular volume. As the current gold standard, the phase-correlated Feldkamp reconstruction (PCF), shows poor performance in the case of low-dose scans, more sophisticated reconstruction algorithms have been proposed to enable low-dose imaging. In this study, the authors focus on the McKinnon-Bates (MKB) algorithm, the low dose phase-correlated (LDPC) reconstruction, and the high-dimensional total variation minimization reconstruction (HDTV) and investigate their potential to accurately determine the left ventricular volume at different dose levels from 50 to 500 mGy. The results were verified in phantom studies of a five-dimensional (5D) mathematical mouse phantom. Micro-CT data of eight mice, each administered with an x-ray dose of 500 mGy, were acquired, retrospectively gated for cardiac and respiratory motion and reconstructed using PCF, MKB, LDPC, and HDTV. Dose levels down to 50 mGy were simulated by using only a fraction of the projections. Contrast-to-noise ratio (CNR) was evaluated as a measure of image quality. Left ventricular volume was determined using different segmentation algorithms (Otsu, level sets, region growing). Forward projections of the 5D mouse phantom were performed to simulate a micro-CT scan. The simulated data were processed the same way as the real mouse data sets. Compared to the conventional PCF reconstruction, the MKB, LDPC, and HDTV algorithms yield images of increased quality in terms of CNR. While the MKB reconstruction only provides small improvements, a significant increase of the CNR is observed in LDPC and HDTV reconstructions. The phantom studies demonstrate that left ventricular volumes can be determined accurately at 500 mGy. For lower dose levels which were simulated for real mouse data sets, the HDTV algorithm shows the

  15. Phase recovering algorithms for extended objects encoded in digitally recorded holograms

    Directory of Open Access Journals (Sweden)

    Peng Z.

    2010-06-01

    Full Text Available The paper presents algorithms to recover the optical phase of digitally encoded holograms. The algorithms are based on the use of a numerical spherical reconstructing wave. Proof of the validity of the concept is provided through an experimental off-axis digital holographic set-up. Two-color digital holographic reconstruction is also investigated. The application of the color set-up and algorithms concerns the simultaneous two-dimensional deformation measurement of an object subjected to mechanical loading.

  16. Combinatorial optimization algorithms and complexity

    CERN Document Server

    Papadimitriou, Christos H

    1998-01-01

    This clearly written, mathematically rigorous text includes a novel algorithmic exposition of the simplex method and also discusses the Soviet ellipsoid algorithm for linear programming; efficient algorithms for network flow, matching, spanning trees, and matroids; the theory of NP-complete problems; approximation algorithms, local search heuristics for NP-complete problems, more. All chapters are supplemented by thought-provoking problems. A useful work for graduate-level students with backgrounds in computer science, operations research, and electrical engineering.

  17. An algorithm for ranking assignments using reoptimization

    DEFF Research Database (Denmark)

    Pedersen, Christian Roed; Nielsen, Lars Relund; Andersen, Kim Allan

    2008-01-01

    We consider the problem of ranking assignments according to cost in the classical linear assignment problem. An algorithm partitioning the set of possible assignments, as suggested by Murty, is presented where, for each partition, the optimal assignment is calculated using a new reoptimization...... technique. Computational results for the new algorithm are presented...

  18. A Constrained Algorithm Based NMFα for Image Representation

    Directory of Open Access Journals (Sweden)

    Chenxue Yang

    2014-01-01

    Full Text Available Nonnegative matrix factorization (NMF) is a useful tool in learning a basic representation of image data. However, its performance and applicability in real scenarios are limited because of the lack of image information. In this paper, we propose a constrained matrix decomposition algorithm for image representation which contains parameters associated with the characteristics of image data sets. In particular, we impose label information as additional hard constraints on the α-divergence-NMF unsupervised learning algorithm. The resulting algorithm is derived by using the Karush-Kuhn-Tucker (KKT) conditions as well as the projected gradient, and its monotonic local convergence is proved by using auxiliary functions. In addition, we provide a method to select the parameters of our semisupervised matrix decomposition algorithm in the experiments. Compared with the state-of-the-art approaches, our method with these parameters has the best classification accuracy on three image data sets.
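
    For reference, a minimal sketch of plain NMF with multiplicative updates under the Euclidean loss, the baseline that the constrained α-divergence variant described above extends (label constraints and the α-divergence are not included; the random data matrix is an assumption):

```python
# W, H >= 0 are updated multiplicatively so that W @ H approximates the nonnegative data V.
import numpy as np

def nmf(V, rank, n_iter=200, eps=1e-12):
    rng = np.random.default_rng(0)
    W = rng.random((V.shape[0], rank))
    H = rng.random((rank, V.shape[1]))
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)     # multiplicative update for H
        W *= (V @ H.T) / (W @ H @ H.T + eps)     # multiplicative update for W
    return W, H

V = np.random.default_rng(1).random((64, 40))          # nonnegative "image" data matrix
W, H = nmf(V, rank=5)
print(np.linalg.norm(V - W @ H) / np.linalg.norm(V))   # relative reconstruction error
```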

  19. Improved Collaborative Filtering Algorithm using Topic Model

    Directory of Open Access Journals (Sweden)

    Liu Na

    2016-01-01

    Full Text Available Collaborative filtering algorithms make use of interaction ratings between users and items for generating recommendations. Similarity among users or items is mostly calculated based on ratings, without considering explicit properties of the users or items involved. In this paper, we propose a collaborative filtering algorithm using a topic model. We describe the user-item matrix as a document-word matrix; users are represented as random mixtures over items, and each item is characterized by a distribution over users. The experiments showed that the proposed algorithm achieved better performance compared with the other state-of-the-art algorithms on the MovieLens data sets.
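
    A small sketch of the idea of treating the user-item matrix like a document-word matrix and fitting a topic model (here scikit-learn's LDA on a toy ratings matrix; this is an assumed illustration, not the authors' exact recommendation procedure):

```python
# Users become mixtures over latent topics and topics become distributions over items;
# a user's predicted affinity for an item is the product of the two.
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

rng = np.random.default_rng(0)
user_item = rng.integers(0, 6, size=(20, 15))          # 20 users x 15 items, ratings 0..5

lda = LatentDirichletAllocation(n_components=4, random_state=0)
user_topics = lda.fit_transform(user_item)             # per-user topic mixtures
topic_items = lda.components_ / lda.components_.sum(axis=1, keepdims=True)

scores = user_topics @ topic_items                      # predicted affinity for every item
scores[user_item > 0] = -np.inf                         # skip items the user already rated
print(np.argsort(-scores, axis=1)[:, :3])               # top-3 recommendations per user
```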

  20. Maximizing the Lifetime of Wireless Sensor Networks Using Multiple Sets of Rendezvous

    Directory of Open Access Journals (Sweden)

    Bo Li

    2015-01-01

    Full Text Available In wireless sensor networks (WSNs, there is a “crowded center effect” where the energy of nodes located near a data sink drains much faster than other nodes resulting in a short network lifetime. To mitigate the “crowded center effect,” rendezvous points (RPs are used to gather data from other nodes. In order to prolong the lifetime of WSN further, we propose using multiple sets of RPs in turn to average the energy consumption of the RPs. The problem is how to select the multiple sets of RPs and how long to use each set of RPs. An optimal algorithm and a heuristic algorithm are proposed to address this problem. The optimal algorithm is highly complex and only suitable for small scale WSN. The performance of the proposed algorithms is evaluated through simulations. The simulation results indicate that the heuristic algorithm approaches the optimal one and that using multiple RP sets can significantly prolong network lifetime.

  1. Implementation of perceptual aspects in a face recognition algorithm

    International Nuclear Information System (INIS)

    Crenna, F; Bovio, L; Rossi, G B; Zappa, E; Testa, R; Gasparetto, M

    2013-01-01

    Automatic face recognition is a biometric technique particularly appreciated in security applications. In fact face recognition presents the opportunity to operate at a low invasive level without the collaboration of the subjects under tests, with face images gathered either from surveillance systems or from specific cameras located in strategic points. The automatic recognition algorithms perform a measurement, on the face images, of a set of specific characteristics of the subject and provide a recognition decision based on the measurement results. Unfortunately several quantities may influence the measurement of the face geometry such as its orientation, the lighting conditions, the expression and so on, affecting the recognition rate. On the other hand human recognition of face is a very robust process far less influenced by the surrounding conditions. For this reason it may be interesting to insert perceptual aspects in an automatic facial-based recognition algorithm to improve its robustness. This paper presents a first study in this direction investigating the correlation between the results of a perception experiment and the facial geometry, estimated by means of the position of a set of repere points

  2. Adiabatic quantum search algorithm for structured problems

    International Nuclear Information System (INIS)

    Roland, Jeremie; Cerf, Nicolas J.

    2003-01-01

    The study of quantum computation has been motivated by the hope of finding efficient quantum algorithms for solving classically hard problems. In this context, quantum algorithms by local adiabatic evolution have been shown to solve an unstructured search problem with a quadratic speedup over a classical search, just as Grover's algorithm. In this paper, we study how the structure of the search problem may be exploited to further improve the efficiency of these quantum adiabatic algorithms. We show that by nesting a partial search over a reduced set of variables into a global search, it is possible to devise quantum adiabatic algorithms with a complexity that, although still exponential, grows with a reduced order in the problem size

  3. RELAXATION HEURISTICS FOR THE SET COVERING PROBLEM

    OpenAIRE

    Umetani, Shunji; Yagiura, Mutsunori; 柳浦, 睦憲

    2007-01-01

    The set covering problem (SCP) is one of the representative combinatorial optimization problems and has many practical applications. The continuous development of mathematical programming has produced a number of impressive heuristic algorithms as well as exact branch-and-bound algorithms, which can solve huge SCP instances of bus, railway and airline crew scheduling problems. We survey heuristic algorithms for SCP, focusing mainly on the contributions of mathematical programming techniques to heuri...
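
    A minimal sketch of the classic greedy heuristic for SCP, the simplest member of the heuristic family surveyed (costs and subsets below are assumptions):

```python
# Repeatedly pick the subset with the best cost per newly covered element until the
# universe is covered; returns the indices of the chosen subsets.
def greedy_set_cover(universe, subsets, costs):
    uncovered, chosen = set(universe), []
    while uncovered:
        best = min(
            (i for i, s in enumerate(subsets) if s & uncovered),
            key=lambda i: costs[i] / len(subsets[i] & uncovered),
        )
        chosen.append(best)
        uncovered -= subsets[best]
    return chosen

universe = range(1, 8)
subsets = [{1, 2, 3}, {2, 4}, {3, 4, 5}, {5, 6, 7}, {1, 6, 7}]
costs = [3.0, 1.0, 2.5, 3.0, 2.0]
print(greedy_set_cover(universe, subsets, costs))
```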

  4. The application of quadtree algorithm for information integration in the high-level radioactive waste geological disposal

    International Nuclear Information System (INIS)

    Gao Min; Zhong Xia; Huang Shutao

    2008-01-01

    A multi-source database for high-level radioactive waste geological disposal aims to promote the informatization of the geological disposal of HLW. Because the work involves multi-dimensional, multi-source data and the integration of information and applications, as well as computer software and hardware, the paper first gives a preliminary analysis of the data resources of the Beishan area, Gansu Province. It then introduces an approach based on GIS technology and methods and on the open-source GDAL library, and at the same time discusses the technical methods by which the quadtree algorithm can be applied to the information resource management system for full sharing, rapid retrieval and so on. A more detailed description of the characteristics of the existing data resources, the theory of spatial data retrieval algorithms, and the program design and implementation ideas is given in the paper. (authors)
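
    As a generic illustration of the data structure named in the abstract (a point quadtree with insertion and rectangular range queries; coordinates, capacity and test points are assumptions, and this is not the Beishan information system itself):

```python
# A region of space is recursively split into four quadrants once it holds more than
# `capacity` points; range queries only descend into quadrants that intersect the query box.
class QuadTree:
    def __init__(self, x0, y0, x1, y1, capacity=4):
        self.box = (x0, y0, x1, y1)
        self.capacity = capacity
        self.points = []
        self.children = None

    def insert(self, x, y):
        x0, y0, x1, y1 = self.box
        if not (x0 <= x <= x1 and y0 <= y <= y1):
            return False
        if self.children is None:
            if len(self.points) < self.capacity:
                self.points.append((x, y))
                return True
            self._split()
        return any(c.insert(x, y) for c in self.children)

    def _split(self):
        x0, y0, x1, y1 = self.box
        mx, my = (x0 + x1) / 2, (y0 + y1) / 2
        self.children = [QuadTree(x0, y0, mx, my), QuadTree(mx, y0, x1, my),
                         QuadTree(x0, my, mx, y1), QuadTree(mx, my, x1, y1)]
        for p in self.points:                    # push existing points down one level
            any(c.insert(*p) for c in self.children)
        self.points = []

    def query(self, qx0, qy0, qx1, qy1):
        x0, y0, x1, y1 = self.box
        if qx1 < x0 or qx0 > x1 or qy1 < y0 or qy0 > y1:
            return []
        hits = [p for p in self.points if qx0 <= p[0] <= qx1 and qy0 <= p[1] <= qy1]
        if self.children:
            for c in self.children:
                hits.extend(c.query(qx0, qy0, qx1, qy1))
        return hits

tree = QuadTree(0, 0, 100, 100)
for x, y in [(10, 10), (20, 80), (55, 40), (90, 90), (60, 62), (61, 63)]:
    tree.insert(x, y)
print(tree.query(50, 30, 70, 70))   # points falling inside the query rectangle
```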

  5. Automatic macroscopic characterization of diesel sprays by means of a new image processing algorithm

    Science.gov (United States)

    Rubio-Gómez, Guillermo; Martínez-Martínez, S.; Rua-Mojica, Luis F.; Gómez-Gordo, Pablo; de la Garza, Oscar A.

    2018-05-01

    A novel algorithm is proposed for the automatic segmentation of diesel spray images and the calculation of their macroscopic parameters. The algorithm automatically detects each spray present in an image, and therefore it is able to work with diesel injectors with a different number of nozzle holes without any modification. The main characteristic of the algorithm is that it splits each spray into three different regions and then segments each one with an individually calculated binarization threshold. Each threshold level is calculated from the analysis of a representative luminosity profile of each region. This approach makes it robust to irregular light distribution along a single spray and between different sprays of an image. Once the sprays are segmented, the macroscopic parameters of each one are calculated. The algorithm is tested with two sets of diesel spray images taken under normal and irregular illumination setups.
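    The paper's exact profile analysis is not reproduced here; the sketch below merely illustrates the general idea of splitting a spray image into three axial regions and binarizing each with its own threshold derived from a representative luminosity profile. The split along image rows, the mean-plus-k-standard-deviations rule and the helper names are assumptions made for the example.

```python
import numpy as np

def segment_spray(gray, n_regions=3, k=3.0):
    """Binarize a grayscale spray image region by region (illustrative sketch).

    gray      : 2-D numpy array, background-subtracted spray image
    n_regions : number of axial segments along the spray axis (image rows)
    k         : margin, in standard deviations, above each segment's
                background luminosity (an assumption, not the paper's rule)
    """
    gray = np.asarray(gray, dtype=float)
    mask = np.zeros(gray.shape, dtype=bool)
    bounds = np.linspace(0, gray.shape[0], n_regions + 1, dtype=int)
    for r0, r1 in zip(bounds[:-1], bounds[1:]):
        segment = gray[r0:r1]
        # Representative luminosity profile: mean intensity of each column.
        profile = segment.mean(axis=0)
        threshold = profile.mean() + k * profile.std()
        mask[r0:r1] = segment > threshold
    return mask

def macroscopic_parameters(mask):
    """Spray penetration (number of rows reached) and projected area in pixels."""
    rows = np.flatnonzero(mask.any(axis=1))
    penetration = rows.max() + 1 if rows.size else 0
    return penetration, int(mask.sum())
```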

  6. Level-1 pixel based tracking trigger algorithm for LHC upgrade

    CERN Document Server

    Moon, Chang-Seong

    2015-01-01

    The Pixel Detector is the innermost detector of the tracking system of the Compact Muon Solenoid (CMS) experiment at the CERN Large Hadron Collider (LHC). It precisely determines the interaction point (primary vertex) of the events and the possible secondary vertices due to heavy flavours ($b$ and $c$ quarks); it is part of the overall tracking system that allows the tracks of the charged particles in the events to be reconstructed and, combined with the magnetic field, their momenta to be measured. The pixel detector measures the tracks in the region closest to the interaction point. The Level-1 (real-time) pixel based tracking trigger is a novel trigger system that is currently being studied for the LHC upgrade. An important goal is developing real-time track reconstruction algorithms able to cope with very high rates and a high flux of data in a very harsh environment. The pixel detector has an especially crucial role in precisely identifying the primary vertex of the rare physics events from the large pile-up (P...

  7. A simultaneous navigation and radiation evasion algorithm (SNARE)

    Energy Technology Data Exchange (ETDEWEB)

    Khasawneh, Mohammed A., E-mail: mkha@ieee.org [Department of Electrical Engineering, Jordan University of Science and Technology, Irbid 221 10 (Jordan); Jaradat, Mohammad A., E-mail: majaradat@just.edu.jo [Department of Mechanical Engineering, Jordan University of Science and Technology, Irbid 221 10 (Jordan); Al-Shboul, Zeina Aman M., E-mail: xeinaaman@gmail.com [Department of Electrical Engineering, Jordan University of Science and Technology, Irbid 221 10 (Jordan)

    2013-12-15

    Highlights: • A new navigation algorithm for radiation evasion around nuclear facilities. • An optimization criterion minimized under algorithm operation. • A man-borne device guiding the occupational worker towards paths that warrant the least radiation × time products. • Benefits of using localized navigation as opposed to global navigation schemes. • A path discrimination function for finding the navigational paths exhibiting the least amounts of radiation. - Abstract: In this paper, we address the issue of localization as it pertains to indoor navigation in radiation-contaminated environments. In this context, navigation, in the absence of any GPS signals, is guided by the locations of the sensors that make up the wireless sensor network deployed within a given locality of a nuclear facility. It also draws on the radiation levels measured by the sensors around a given locale. Here, localization is inherently embedded in the algorithm presented in (Khasawneh et al., 2011a, 2011b), which was designed to provide navigational guidance optimizing either of two criteria: “Radiation Evasion” and “Nearest Exit”. As such, the algorithm can either be applied to setting a navigational “lowest” radiation exposure path from an initial point A to some other point B, a case typical of occupational workers performing maintenance operations around the facility, or to providing a radiation-safe passage from point A to the nearest exit. The algorithm's navigational performance is tested under statistical reference, wherein for a given number of runs (trials) the performance is evaluated as a function of the number of steps of look-ahead it uses to acquire navigational information, and is compared against the performance of the renowned Dijkstra global navigation algorithm. This is done with reference to the amount of (radiation × time) product and the time needed to reach an exit point, under the two optimization criteria. To evaluate algorithm
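    The SNARE algorithm itself (localized, look-ahead based) is not reproduced here; the sketch below only shows the Dijkstra baseline it is compared against, with each edge weighted by a (radiation level × traversal time) product on a hypothetical sensor graph.

```python
import heapq

def dijkstra_least_dose(graph, start, goal):
    """Dijkstra baseline for a radiation-evasion path (illustrative sketch).

    graph: dict node -> list of (neighbor, dose) pairs, where each dose is
           the (radiation level x traversal time) product for that edge,
           as estimated from nearby sensors (hypothetical data model).
    Returns (total_dose, path) or (inf, []) if the goal is unreachable.
    """
    best = {start: 0.0}
    previous = {}
    queue = [(0.0, start)]
    while queue:
        dose, node = heapq.heappop(queue)
        if node == goal:
            path = [node]
            while node in previous:
                node = previous[node]
                path.append(node)
            return dose, path[::-1]
        if dose > best.get(node, float("inf")):
            continue                      # stale queue entry
        for neighbor, edge_dose in graph.get(node, []):
            candidate = dose + edge_dose
            if candidate < best.get(neighbor, float("inf")):
                best[neighbor] = candidate
                previous[neighbor] = node
                heapq.heappush(queue, (candidate, neighbor))
    return float("inf"), []

# Tiny hypothetical sensor graph: two corridors from A to the exit.
graph = {"A": [("B", 2.0), ("C", 5.0)], "B": [("exit", 4.0)], "C": [("exit", 0.5)]}
print(dijkstra_least_dose(graph, "A", "exit"))   # (5.5, ['A', 'C', 'exit'])
```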

  8. A simultaneous navigation and radiation evasion algorithm (SNARE)

    International Nuclear Information System (INIS)

    Khasawneh, Mohammed A.; Jaradat, Mohammad A.; Al-Shboul, Zeina Aman M.

    2013-01-01

    Highlights: • A new navigation algorithm for radiation evasion around nuclear facilities. • An optimization criterion minimized under algorithm operation. • A man-borne device guiding the occupational worker towards paths that warrant the least radiation × time products. • Benefits of using localized navigation as opposed to global navigation schemes. • A path discrimination function for finding the navigational paths exhibiting the least amounts of radiation. - Abstract: In this paper, we address the issue of localization as it pertains to indoor navigation in radiation-contaminated environments. In this context, navigation, in the absence of any GPS signals, is guided by the locations of the sensors that make up the wireless sensor network deployed within a given locality of a nuclear facility. It also draws on the radiation levels measured by the sensors around a given locale. Here, localization is inherently embedded in the algorithm presented in (Khasawneh et al., 2011a, 2011b), which was designed to provide navigational guidance optimizing either of two criteria: “Radiation Evasion” and “Nearest Exit”. As such, the algorithm can either be applied to setting a navigational “lowest” radiation exposure path from an initial point A to some other point B, a case typical of occupational workers performing maintenance operations around the facility, or to providing a radiation-safe passage from point A to the nearest exit. The algorithm's navigational performance is tested under statistical reference, wherein for a given number of runs (trials) the performance is evaluated as a function of the number of steps of look-ahead it uses to acquire navigational information, and is compared against the performance of the renowned Dijkstra global navigation algorithm. This is done with reference to the amount of (radiation × time) product and the time needed to reach an exit point, under the two optimization criteria. To evaluate algorithm

  9. Detection of Cheating by Decimation Algorithm

    Science.gov (United States)

    Yamanaka, Shogo; Ohzeki, Masayuki; Decelle, Aurélien

    2015-02-01

    We expand the item response theory to study the case of "cheating students" in a set of exams, trying to detect them by applying a greedy inference algorithm. This extended model is closely related to Boltzmann machine learning. In this paper we aim to infer the correct biases and interactions of our model by considering a relatively small number of sets of training data. Nevertheless, the greedy algorithm that we employed in the present study exhibits good performance with a small number of training sets. The key point is the sparseness of the interactions in our problem in the context of Boltzmann machine learning: the existence of cheating students is expected to be very rare (possibly even in the real world). We compare a standard approach for inferring sparse interactions in Boltzmann machine learning with our greedy algorithm and find the latter to be superior in several aspects.

  10. Algorithm for the generation of nuclear spin species and nuclear spin statistical weights

    International Nuclear Information System (INIS)

    Balasubramanian, K.

    1982-01-01

    A set of algorithms for the computer generation of nuclear spin species and nuclear spin statistical weights potentially useful in molecular spectroscopy is developed. These algorithms generate the nuclear spin species from group structures known as generalized character cycle indices (GCCIs). Thus the required input for these algorithms is just the set of all GCCIs for the symmetry group of the molecule which can be computed easily from the character table. The algorithms are executed and illustrated with examples

  11. Comparison of algorithms to quantify muscle fatigue in upper limb muscles based on sEMG signals.

    Science.gov (United States)

    Kahl, Lorenz; Hofmann, Ulrich G

    2016-11-01

    This work compared the performance of six different fatigue detection algorithms quantifying muscle fatigue based on electromyographic signals. Surface electromyography (sEMG) was recorded in an experiment from upper arm contractions at three different load levels in twelve volunteers. The fatigue detection algorithms mean frequency (MNF), spectral moments ratio (SMR), the wavelet method WIRM1551, sample entropy (SampEn), fuzzy approximate entropy (fApEn) and recurrence quantification analysis (RQA%DET) were calculated. The resulting fatigue signals were compared with respect to the disturbances incurred in fatiguing situations as well as to their ability to differentiate the load levels. Furthermore, we investigated the influence of the electrode locations on the fatigue detection quality and whether an optimized channel set is reasonable. The results of the MNF, SMR, WIRM1551 and fApEn algorithms fell close together. Due to the small number of subjects in this study, significant differences could not be found. In terms of disturbances, the SMR algorithm showed a slight tendency to outperform the others.
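    As an illustration of the simplest of the compared indicators, the sketch below computes the mean frequency (MNF) of consecutive sEMG analysis windows; the sampling rate, window length and detrending choice are placeholders rather than the settings used in the study.

```python
import numpy as np

def mean_frequency(window, fs):
    """Mean frequency (MNF) of one sEMG analysis window.

    MNF = sum(f_i * P_i) / sum(P_i), with P the one-sided power spectrum.
    """
    window = np.asarray(window, dtype=float)
    spectrum = np.abs(np.fft.rfft(window - window.mean())) ** 2
    freqs = np.fft.rfftfreq(window.size, d=1.0 / fs)
    return np.sum(freqs * spectrum) / np.sum(spectrum)

def fatigue_signal(emg, fs=1000.0, window_s=1.0):
    """MNF computed over consecutive windows; a downward trend in this
    signal is the classical sign of developing muscle fatigue."""
    emg = np.asarray(emg, dtype=float)
    n = int(window_s * fs)
    windows = [emg[i:i + n] for i in range(0, len(emg) - n + 1, n)]
    return np.array([mean_frequency(w, fs) for w in windows])
```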

  12. Quantum Adiabatic Algorithms and Large Spin Tunnelling

    Science.gov (United States)

    Boulatov, A.; Smelyanskiy, V. N.

    2003-01-01

    We provide a theoretical study of the quantum adiabatic evolution algorithm with different evolution paths proposed in this paper. The algorithm is applied to a random binary optimization problem (a version of the 3-Satisfiability problem) where the n-bit cost function is symmetric with respect to the permutation of individual bits. The evolution paths are produced using the generic control Hamiltonians H(r) that preserve the bit symmetry of the underlying optimization problem. In the case where the ground state of H(0) coincides with the totally symmetric state of an n-qubit system, the algorithm dynamics is completely described in terms of the motion of a spin-n/2. We show that different control Hamiltonians can be parameterized by a set of independent parameters that are expansion coefficients of H(r) in a certain universal set of operators. Only one of these operators can be responsible for avoiding the tunnelling in the spin-n/2 system during the quantum adiabatic algorithm. We show that it is possible to select a coefficient for this operator that guarantees a polynomial complexity of the algorithm for all problem instances. We show that a successful evolution path of the algorithm always corresponds to the trajectory of a classical spin-n/2 and provide a complete characterization of such paths.

  13. Comparison of greedy algorithms for α-decision tree construction

    KAUST Repository

    Alkhalid, Abdulaziz; Chikalov, Igor; Moshkov, Mikhail

    2011-01-01

    A comparison among different heuristics that are used by greedy algorithms which construct approximate decision trees (α-decision trees) is presented. The comparison is conducted using decision tables based on 24 data sets from the UCI Machine Learning Repository [2]. Complexity of decision trees is estimated relative to several cost functions: depth, average depth, number of nodes, number of nonterminal nodes, and number of terminal nodes. Costs of trees built by greedy algorithms are compared with minimum costs calculated by an algorithm based on dynamic programming. The results of the experiments assign to each cost function a set of potentially good heuristics that minimize it.

  14. Multi-objective hierarchical genetic algorithms for multilevel redundancy allocation optimization

    Energy Technology Data Exchange (ETDEWEB)

    Kumar, Ranjan [Department of Aeronautics and Astronautics, Kyoto University, Yoshida-honmachi, Sakyo-ku, Kyoto 606-8501 (Japan)], E-mail: ranjan.k@ks3.ecs.kyoto-u.ac.jp; Izui, Kazuhiro [Department of Aeronautics and Astronautics, Kyoto University, Yoshida-honmachi, Sakyo-ku, Kyoto 606-8501 (Japan)], E-mail: izui@prec.kyoto-u.ac.jp; Yoshimura, Masataka [Department of Aeronautics and Astronautics, Kyoto University, Yoshida-honmachi, Sakyo-ku, Kyoto 606-8501 (Japan)], E-mail: yoshimura@prec.kyoto-u.ac.jp; Nishiwaki, Shinji [Department of Aeronautics and Astronautics, Kyoto University, Yoshida-honmachi, Sakyo-ku, Kyoto 606-8501 (Japan)], E-mail: shinji@prec.kyoto-u.ac.jp

    2009-04-15

    Multilevel redundancy allocation optimization problems (MRAOPs) occur frequently when attempting to maximize the system reliability of a hierarchical system, and almost all complex engineering systems are hierarchical. Despite their practical significance, limited research has been done on solving simple MRAOPs. These problems are not only NP-hard but also involve hierarchical design variables. Genetic algorithms (GAs) have been applied to solving MRAOPs, since they are computationally efficient for such problems, unlike exact methods, but their application has been confined to single-objective formulations of MRAOPs. This paper proposes a multi-objective formulation of MRAOPs and a methodology for solving such problems. In this methodology, a hierarchical GA framework for multi-objective optimization is proposed by introducing hierarchical genotype encoding for the design variables. In addition, we implement the proposed approach by integrating the hierarchical genotype encoding scheme with two popular multi-objective genetic algorithms (MOGAs): the strength Pareto evolutionary algorithm (SPEA2) and the non-dominated sorting genetic algorithm (NSGA-II). In the provided numerical examples, the proposed multi-objective hierarchical approach is applied to solve two hierarchical MRAOPs, a 4-level and a 3-level problem. The proposed method is compared with a single-objective optimization method that uses a hierarchical genetic algorithm (HGA), also applied to solve the 3- and 4-level problems. The results show that a multi-objective hierarchical GA (MOHGA) that includes elitism and a diversity-preserving mechanism performs better than a single-objective GA that uses only elitism when solving large-scale MRAOPs. Additionally, the experimental results show that the proposed method with NSGA-II outperforms the proposed method with SPEA2 in finding useful Pareto optimal solution sets.
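    The hierarchical genotype encoding is specific to the paper, but the multi-objective machinery it plugs into is standard; the sketch below shows the fast non-dominated sorting step at the core of NSGA-II, applied to hypothetical (reliability loss, cost) objective pairs.

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def fast_non_dominated_sort(objectives):
    """Return the indices of each Pareto front, as used in NSGA-II."""
    n = len(objectives)
    dominated_by = [[] for _ in range(n)]   # solutions that i dominates
    domination_count = [0] * n              # how many solutions dominate i
    fronts = [[]]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            if dominates(objectives[i], objectives[j]):
                dominated_by[i].append(j)
            elif dominates(objectives[j], objectives[i]):
                domination_count[i] += 1
        if domination_count[i] == 0:
            fronts[0].append(i)
    k = 0
    while fronts[k]:
        next_front = []
        for i in fronts[k]:
            for j in dominated_by[i]:
                domination_count[j] -= 1
                if domination_count[j] == 0:
                    next_front.append(j)
        k += 1
        fronts.append(next_front)
    return fronts[:-1]

# Hypothetical (reliability loss, cost) pairs for candidate allocations.
print(fast_non_dominated_sort([(0.1, 9.0), (0.2, 7.0), (0.3, 8.0), (0.05, 12.0)]))
# [[0, 1, 3], [2]]
```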

  15. Multi-objective hierarchical genetic algorithms for multilevel redundancy allocation optimization

    International Nuclear Information System (INIS)

    Kumar, Ranjan; Izui, Kazuhiro; Yoshimura, Masataka; Nishiwaki, Shinji

    2009-01-01

    Multilevel redundancy allocation optimization problems (MRAOPs) occur frequently when attempting to maximize the system reliability of a hierarchical system, and almost all complex engineering systems are hierarchical. Despite their practical significance, limited research has been done on solving simple MRAOPs. These problems are not only NP-hard but also involve hierarchical design variables. Genetic algorithms (GAs) have been applied to solving MRAOPs, since they are computationally efficient for such problems, unlike exact methods, but their application has been confined to single-objective formulations of MRAOPs. This paper proposes a multi-objective formulation of MRAOPs and a methodology for solving such problems. In this methodology, a hierarchical GA framework for multi-objective optimization is proposed by introducing hierarchical genotype encoding for the design variables. In addition, we implement the proposed approach by integrating the hierarchical genotype encoding scheme with two popular multi-objective genetic algorithms (MOGAs): the strength Pareto evolutionary algorithm (SPEA2) and the non-dominated sorting genetic algorithm (NSGA-II). In the provided numerical examples, the proposed multi-objective hierarchical approach is applied to solve two hierarchical MRAOPs, a 4-level and a 3-level problem. The proposed method is compared with a single-objective optimization method that uses a hierarchical genetic algorithm (HGA), also applied to solve the 3- and 4-level problems. The results show that a multi-objective hierarchical GA (MOHGA) that includes elitism and a diversity-preserving mechanism performs better than a single-objective GA that uses only elitism when solving large-scale MRAOPs. Additionally, the experimental results show that the proposed method with NSGA-II outperforms the proposed method with SPEA2 in finding useful Pareto optimal solution sets.

  16. Modified Projection Algorithms for Solving the Split Equality Problems

    Directory of Open Access Journals (Sweden)

    Qiao-Li Dong

    2014-01-01

    A CQ algorithm has previously been proposed for solving the split equality problem. In this paper, we propose a modification of the CQ algorithm, which computes the stepsize adaptively and performs an additional projection step onto two half-spaces in each iteration. We further propose a relaxation scheme for the self-adaptive projection algorithm by using projections onto half-spaces instead of those onto the original convex sets, which is much more practical. Weak convergence results for both algorithms are analyzed.
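    The practicality of the relaxed scheme comes from the fact that projection onto a half-space has a simple closed form, sketched below; the example half-space is arbitrary.

```python
import numpy as np

def project_onto_halfspace(x, a, b):
    """Projection of x onto the half-space {z : <a, z> <= b}.

    This closed form is what makes relaxed projection algorithms practical:
    P(x) = x - max(0, <a, x> - b) / ||a||^2 * a.
    """
    x = np.asarray(x, dtype=float)
    a = np.asarray(a, dtype=float)
    violation = np.dot(a, x) - b
    if violation <= 0:
        return x                         # already feasible
    return x - (violation / np.dot(a, a)) * a

print(project_onto_halfspace([2.0, 2.0], [1.0, 0.0], 1.0))   # [1. 2.]
```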

  17. Parameter Estimation for Traffic Noise Models Using a Harmony Search Algorithm

    Directory of Open Access Journals (Sweden)

    Deok-Soon An

    2013-01-01

    A technique has been developed for predicting road traffic noise for environmental assessment, taking into account traffic volume as well as road surface conditions. The ASJ model (ASJ Prediction Model for Road Traffic Noise, 1999), which is based on the sound power level of the noise emitted by the interaction between the road surface and tires, employs regression models for two road surface types: dense-graded asphalt (DGA) and permeable asphalt (PA). However, these models are not applicable to other types of road surfaces. Accordingly, this paper introduces a parameter estimation procedure for ASJ-based noise prediction models, utilizing a harmony search (HS) algorithm. Traffic noise measurement data for four different vehicle types were used in the algorithm to determine the regression parameters for several road surface types. The parameters of the traffic noise prediction models were evaluated using another measurement set, and good agreement was observed between the predicted and measured sound power levels.
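    A minimal continuous harmony search loop of the kind used for such parameter estimation is sketched below; the harmony-memory size, HMCR, PAR and bandwidth values are generic defaults, and the toy objective merely stands in for the error between measured and predicted sound power levels.

```python
import random

def harmony_search(objective, bounds, hms=20, hmcr=0.9, par=0.3,
                   bw=0.05, iterations=5000, seed=0):
    """Minimal harmony search for continuous parameter estimation (sketch).

    objective: maps a parameter vector to a scalar error, e.g. the sum of
               squared differences between measured and predicted sound
               power levels (placeholder here).
    bounds:    list of (low, high) pairs, one per regression parameter.
    """
    rng = random.Random(seed)
    memory = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    scores = [objective(h) for h in memory]
    for _ in range(iterations):
        new = []
        for d, (lo, hi) in enumerate(bounds):
            if rng.random() < hmcr:                    # memory consideration
                value = memory[rng.randrange(hms)][d]
                if rng.random() < par:                 # pitch adjustment
                    value += rng.uniform(-bw, bw) * (hi - lo)
            else:                                      # random selection
                value = rng.uniform(lo, hi)
            new.append(min(max(value, lo), hi))
        score = objective(new)
        worst = max(range(hms), key=scores.__getitem__)
        if score < scores[worst]:                      # replace worst harmony
            memory[worst], scores[worst] = new, score
    best = min(range(hms), key=scores.__getitem__)
    return memory[best], scores[best]

# Toy objective standing in for the noise-model fitting error.
print(harmony_search(lambda p: (p[0] - 1.5) ** 2 + (p[1] + 2.0) ** 2,
                     [(-5, 5), (-5, 5)]))
```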

  18. Reconstruction of incomplete cell paths through a 3D-2D level set segmentation

    Science.gov (United States)

    Hariri, Maia; Wan, Justin W. L.

    2012-02-01

    Segmentation of fluorescent cell images has been a popular technique for tracking live cells. One challenge of segmenting cells from fluorescence microscopy is that cells in fluorescent images frequently disappear. When the images are stacked together to form a 3D image volume, the disappearance of the cells leads to broken cell paths. In this paper, we present a segmentation method that can reconstruct incomplete cell paths. The key idea of this model is to perform 2D segmentation in a 3D framework. The 2D segmentation captures the cells that appear in the image slices while the 3D segmentation connects the broken cell paths. The formulation is similar to the Chan-Vese level set segmentation which detects edges by comparing the intensity value at each voxel with the mean intensity values inside and outside of the level set surface. Our model, however, performs the comparison on each 2D slice with the means calculated by the 2D projected contour. The resulting effect is to segment the cells on each image slice. Unlike segmentation on each image frame individually, these 2D contours together form the 3D level set function. By enforcing minimum mean curvature on the level set surface, our segmentation model is able to extend the cell contours right before (and after) the cell disappears (and reappears) into the gaps, eventually connecting the broken paths. We will present segmentation results of C2C12 cells in fluorescent images to illustrate the effectiveness of our model qualitatively and quantitatively by different numerical examples.
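    The coupled 3D-2D formulation of the paper is not reproduced here; the sketch below only shows the per-slice baseline it improves upon, i.e. an independent 2D Chan-Vese segmentation of each fluorescence slice (via scikit-image) stacked into a volume, which is exactly the setting in which broken cell paths appear.

```python
import numpy as np
from skimage.segmentation import chan_vese

def segment_stack_2d(volume):
    """Baseline: 2-D Chan-Vese segmentation applied slice by slice.

    volume: 3-D array (z, y, x) of fluorescence image slices, float in [0, 1].
    Returns a boolean mask of the same shape.  Because each slice is handled
    independently, slices where a cell disappears leave gaps in the cell
    path; the 3D-2D model in the record closes these gaps by evolving a
    single level-set surface over the whole volume.
    """
    return np.stack([chan_vese(slice_, mu=0.25) for slice_ in volume])
```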

  19. Algorithms and Their Explanations

    NARCIS (Netherlands)

    Benini, M.; Gobbo, F.; Beckmann, A.; Csuhaj-Varjú, E.; Meer, K.

    2014-01-01

    By analysing the explanation of the classical heapsort algorithm via the method of levels of abstraction mainly due to Floridi, we give a concrete and precise example of how to deal with algorithmic knowledge. To do so, we introduce a concept already implicit in the method, the ‘gradient of

  20. Bernstein Algorithm for Vertical Normalization to 3NF Using Synthesis

    Directory of Open Access Journals (Sweden)

    Matija Varga

    2013-07-01

    This paper demonstrates the use of the Bernstein algorithm for vertical normalization to 3NF using synthesis. The aim of the paper is to provide an algorithm for database normalization and present a set of steps which minimize redundancy in order to increase database management efficiency, and to specify tests and algorithms for testing and proving the reversibility of the decomposition (i.e., proving that the normalization did not cause loss of information). Using the steps of the Bernstein algorithm, the paper gives examples of vertical normalization to 3NF through synthesis and proposes a test and an algorithm to demonstrate decomposition reversibility. The paper also explains that the reasons for generating normal forms are to facilitate data search and to eliminate data redundancy as well as delete, insert and update anomalies, and it explains how anomalies develop, using examples.

  1. Implications of sea-level rise in a modern carbonate ramp setting

    Science.gov (United States)

    Lokier, Stephen W.; Court, Wesley M.; Onuma, Takumi; Paul, Andreas

    2018-03-01

    This study addresses a gap in our understanding of the effects of sea-level rise on the sedimentary systems and morphological development of recent and ancient carbonate ramp settings. Many ancient carbonate sequences are interpreted as having been deposited in carbonate ramp settings. These settings are poorly-represented in the Recent. The study documents the present-day transgressive flooding of the Abu Dhabi coastline at the southern shoreline of the Arabian/Persian Gulf, a carbonate ramp depositional system that is widely employed as a Recent analogue for numerous ancient carbonate systems. Fourteen years of field-based observations are integrated with historical and recent high-resolution satellite imagery in order to document and assess the onset of flooding. Predicted rates of transgression (i.e. landward movement of the shoreline) of 2.5 m yr⁻¹ (± 0.2 m yr⁻¹) based on global sea-level rise alone were far exceeded by the flooding rate calculated from the back-stepping of coastal features (10-29 m yr⁻¹). This discrepancy results from the dynamic nature of the flooding with increased water depth exposing the coastline to increased erosion and, thereby, enhancing back-stepping. A non-accretionary transgressive shoreline trajectory results from relatively rapid sea-level rise coupled with a low-angle ramp geometry and a paucity of sediments. The flooding is represented by the landward migration of facies belts, a range of erosive features and the onset of bioturbation. Employing Intergovernmental Panel on Climate Change (Church et al., 2013) predictions for 21st century sea-level rise, and allowing for the post-flooding lag time that is typical for the start-up of carbonate factories, it is calculated that the coastline will continue to retrograde for the foreseeable future. Total passive flooding (without considering feedback in the modification of the shoreline) by the year 2100 is calculated to likely be between 340 and 571 m with a flooding rate of 3

  2. Anomaly Detection and Diagnosis Algorithms for Discrete Symbols

    Data.gov (United States)

    National Aeronautics and Space Administration — We present a set of novel algorithms which we call sequenceMiner that detect and characterize anomalies in large sets of high-dimensional symbol sequences that arise...

  3. Training nuclei detection algorithms with simple annotations

    Directory of Open Access Journals (Sweden)

    Henning Kost

    2017-01-01

    Background: Generating good training datasets is essential for machine learning-based nuclei detection methods. However, creating exhaustive nuclei contour annotations, to derive optimal training data from, is often infeasible. Methods: We compared different approaches for training nuclei detection methods solely based on nucleus center markers. Such markers contain less accurate information, especially with regard to nuclear boundaries, but can be produced much easier and in greater quantities. The approaches use different automated sample extraction methods to derive image positions and class labels from nucleus center markers. In addition, the approaches use different automated sample selection methods to improve the detection quality of the classification algorithm and reduce the run time of the training process. We evaluated the approaches based on a previously published generic nuclei detection algorithm and a set of Ki-67-stained breast cancer images. Results: A Voronoi tessellation-based sample extraction method produced the best performing training sets. However, subsampling of the extracted training samples was crucial. Even simple class balancing improved the detection quality considerably. The incorporation of active learning led to a further increase in detection quality. Conclusions: With appropriate sample extraction and selection methods, nuclei detection algorithms trained on the basis of simple center marker annotations can produce comparable quality to algorithms trained on conventionally created training sets.
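    As a simplified stand-in for the sample extraction step discussed above (the study's Voronoi tessellation-based variant is not reproduced), the sketch below derives positive and negative training positions from nucleus center markers using only the distance to the nearest marker; the radii and sample count are arbitrary.

```python
import numpy as np
from scipy.spatial import cKDTree

def samples_from_center_markers(image_shape, centers, r_pos=5, r_neg=15,
                                n_samples=2000, seed=0):
    """Derive (position, label) training samples from nucleus center markers.

    Pixels within r_pos of the nearest marker become positive samples,
    pixels farther than r_neg become negatives; the band in between is
    skipped as ambiguous.  Radii and sample count are arbitrary choices.
    """
    rng = np.random.default_rng(seed)
    tree = cKDTree(np.asarray(centers, dtype=float))
    ys = rng.integers(0, image_shape[0], n_samples)
    xs = rng.integers(0, image_shape[1], n_samples)
    positions = np.column_stack([ys, xs])
    dists, _ = tree.query(positions)
    labels = np.where(dists <= r_pos, 1, np.where(dists >= r_neg, 0, -1))
    keep = labels >= 0                   # drop the ambiguous band
    return positions[keep], labels[keep]
```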

  4. Parametric optimization of ultrasonic machining process using gravitational search and fireworks algorithms

    Directory of Open Access Journals (Sweden)

    Debkalpa Goswami

    2015-03-01

    Ultrasonic machining (USM) is a mechanical material removal process used to erode holes and cavities in hard or brittle workpieces by using shaped tools, high-frequency mechanical motion and an abrasive slurry. Unlike other non-traditional machining processes, such as laser beam and electrical discharge machining, the USM process does not thermally damage the workpiece or introduce significant levels of residual stress, which is important for the survival of materials in service. To achieve enhanced machining performance and better machined job characteristics, it is often necessary to determine the optimal control parameter settings of a USM process. Earlier mathematical approaches for parametric optimization of USM processes have mostly yielded near-optimal or sub-optimal solutions. In this paper, two almost unexplored non-conventional optimization techniques, i.e. the gravitational search algorithm (GSA) and the fireworks algorithm (FWA), are applied for parametric optimization of USM processes. The optimization performance of these two algorithms is compared with that of other popular population-based algorithms, and the effects of their algorithm parameters on the derived optimal solutions and computational speed are also investigated. It is observed that FWA provides the best optimal results for the considered USM processes.
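    A minimal gravitational search algorithm loop is sketched below to make the mechanics concrete (fitness-derived masses, a decaying gravitational constant, and attraction towards the current elite agents); the constants g0 and alpha, the Kbest schedule and the placeholder objective are generic choices, not the settings used in the paper.

```python
import numpy as np

def gravitational_search(objective, bounds, agents=30, iterations=200,
                         g0=100.0, alpha=20.0, seed=0):
    """Minimal gravitational search algorithm (GSA) for minimization.

    objective: maps a parameter vector (e.g. candidate USM control settings)
               to a scalar cost; a placeholder for the real process model.
    bounds:    sequence of (low, high) pairs, one per process parameter.
    """
    rng = np.random.default_rng(seed)
    bounds = np.asarray(bounds, dtype=float)
    lo, hi = bounds[:, 0], bounds[:, 1]
    x = rng.uniform(lo, hi, size=(agents, len(bounds)))
    v = np.zeros_like(x)
    best_x, best_f = None, np.inf
    eps = 1e-12
    for t in range(iterations):
        fit = np.apply_along_axis(objective, 1, x)
        if fit.min() < best_f:
            best_f, best_x = fit.min(), x[fit.argmin()].copy()
        # Masses: best agent gets mass ~1, worst gets mass ~0 (minimization).
        spread = fit.min() - fit.max()
        m = (fit - fit.max()) / spread if abs(spread) > eps else np.ones(agents)
        mass = m / m.sum()
        g = g0 * np.exp(-alpha * t / iterations)       # decaying gravity
        k_best = max(1, int(round(agents * (1 - t / iterations))))
        elite = np.argsort(fit)[:k_best]               # only Kbest attract
        acc = np.zeros_like(x)
        for i in range(agents):
            for j in elite:
                if i == j:
                    continue
                r = np.linalg.norm(x[j] - x[i])
                acc[i] += rng.random() * g * mass[j] * (x[j] - x[i]) / (r + eps)
        v = rng.random(size=x.shape) * v + acc
        x = np.clip(x + v, lo, hi)
    return best_x, best_f
```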

  5. Prostate cancer prediction using the random forest algorithm that takes into account transrectal ultrasound findings, age, and serum levels of prostate-specific antigen.

    Science.gov (United States)

    Xiao, Li-Hong; Chen, Pei-Ran; Gou, Zhong-Ping; Li, Yong-Zhong; Li, Mei; Xiang, Liang-Cheng; Feng, Ping

    2017-01-01

    The aim of this study is to evaluate the ability of the random forest algorithm, combining data on transrectal ultrasound findings, age and serum levels of prostate-specific antigen, to predict prostate carcinoma. Clinico-demographic data were analyzed for 941 patients with prostate diseases treated at our hospital, including age, serum prostate-specific antigen levels, transrectal ultrasound findings, and pathology diagnosis based on ultrasound-guided needle biopsy of the prostate. These data were compared between patients with and without prostate cancer using the Chi-square test, and then entered into the random forest model to predict diagnosis. Patients with and without prostate cancer differed significantly in age and serum prostate-specific antigen levels. The random forest model combining age, prostate-specific antigen and ultrasound findings predicted prostate cancer with an accuracy of 83.10%, sensitivity of 65.64%, and specificity of 93.83%. Positive predictive value was 86.72%, and negative predictive value was 81.64%. By integrating age, prostate-specific antigen levels and transrectal ultrasound findings, the random forest algorithm shows better diagnostic performance for prostate cancer than either diagnostic indicator on its own. This algorithm may help improve diagnosis of the disease by identifying patients at high risk for biopsy.
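    A minimal scikit-learn sketch of this kind of model is shown below; the feature table (age, PSA, a few binary TRUS findings) and labels are synthetic placeholders, not the study's cohort, so the printed metrics demonstrate only the workflow.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, confusion_matrix

# Hypothetical feature table: one row per patient with age, serum PSA level
# and a few encoded transrectal-ultrasound findings; labels mimic biopsy results.
rng = np.random.default_rng(0)
X = np.column_stack([
    rng.integers(50, 85, 500),          # age (years), synthetic
    rng.gamma(2.0, 4.0, 500),           # PSA (ng/mL), synthetic
    rng.integers(0, 2, (500, 3)),       # binary TRUS findings, synthetic
])
y = rng.integers(0, 2, 500)             # 1 = prostate cancer on biopsy

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)
model = RandomForestClassifier(n_estimators=500, random_state=0)
model.fit(X_train, y_train)
pred = model.predict(X_test)
print("accuracy:", accuracy_score(y_test, pred))
print(confusion_matrix(y_test, pred))   # sensitivity/specificity follow from this
```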

  6. SPECIAL LIBRARIES OF FRAGMENTS OF ALGORITHMIC NETWORKS TO AUTOMATE THE DEVELOPMENT OF ALGORITHMIC MODELS

    Directory of Open Access Journals (Sweden)

    V. E. Marley

    2015-01-01

    The concept of algorithmic models arose from the algorithmic approach, in which the simulated object or phenomenon is represented as a process governed by the strict rules of an algorithm describing the operation of that object. An algorithmic model is understood as a formalized description of a subject specialist's scenario for the simulated process, whose structure corresponds to the structure of the causal and temporal relationships between the events of the modeled process, together with all information necessary for its software implementation. Algorithmic networks are used to represent the structure of algorithmic models. They are normally defined as labeled finite directed graphs whose vertices are mapped to operators and whose arcs are variables bound by the operators. The language of algorithmic networks is highly expressive: in principle, an arbitrary algorithm can be represented. Existing modeling automation systems based on algorithmic networks mainly use operators that work with real numbers. Although this restricts their expressiveness, it is sufficient for modeling a wide class of problems related to the economy, the environment, transport and technical processes. The task of modeling the execution of schedules and network diagrams is relevant and useful. There are many systems for computing network graphs; however, monitoring is limited to the analysis of gaps and deadlines in the graphs, with no analysis or prediction of schedule execution. The library described here is designed to build such predictive models: from the specified source data a set of projections is obtained, from which one is chosen and adopted as the new plan.

  7. Robust boundary detection of left ventricles on ultrasound images using ASM-level set method.

    Science.gov (United States)

    Zhang, Yaonan; Gao, Yuan; Li, Hong; Teng, Yueyang; Kang, Yan

    2015-01-01

    The level set method has been widely used in medical image analysis, but it has difficulties when used for the segmentation of left ventricular (LV) boundaries on echocardiography images, because the boundaries are not very distinct and the signal-to-noise ratio of echocardiography images is not very high. In this paper, we introduce the Active Shape Model (ASM) into the traditional level set method to enforce shape constraints. This improves the accuracy of boundary detection and makes the evolution more efficient. Experiments conducted on real cardiac ultrasound image sequences show positive and promising results.

  8. State-set branching

    DEFF Research Database (Denmark)

    Jensen, Rune Møller; Veloso, Manuela M.; Bryant, Randal E.

    2008-01-01

    In this article, we present a framework called state-set branching that combines symbolic search based on reduced ordered Binary Decision Diagrams (BDDs) with best-first search, such as A* and greedy best-first search. The framework relies on an extension of these algorithms from expanding a sing...

  9. How Do Severe Constraints Affect the Search Ability of Multiobjective Evolutionary Algorithms in Water Resources?

    Science.gov (United States)

    Clarkin, T. J.; Kasprzyk, J. R.; Raseman, W. J.; Herman, J. D.

    2015-12-01

    This study contributes a diagnostic assessment of multiobjective evolutionary algorithm (MOEA) search on a set of water resources problem formulations with different configurations of constraints. Unlike constraints in classical optimization modeling, constraints within MOEA simulation-optimization represent limits on acceptable performance that delineate whether solutions within the search problem are feasible. Constraints are relevant because of the emergent pressures on water resources systems: increasing public awareness of their sustainability, coupled with regulatory pressures on water management agencies. In this study, we test several state-of-the-art MOEAs that utilize restricted tournament selection for constraint handling on varying configurations of water resources planning problems. For example, a problem that has no constraints on performance levels will be compared with a problem with several severe constraints, and a problem with constraints that have less severe values on the constraint thresholds. One such problem, Lower Rio Grande Valley (LRGV) portfolio planning, has been solved with a suite of constraints that ensure high reliability, low cost variability, and acceptable performance in a single year severe drought. But to date, it is unclear whether or not the constraints are negatively affecting MOEAs' ability to solve the problem effectively. Two categories of results are explored. The first category uses control maps of algorithm performance to determine if the algorithm's performance is sensitive to user-defined parameters. The second category uses run-time performance metrics to determine the time required for the algorithm to reach sufficient levels of convergence and diversity on the solution sets. Our work exploring the effect of constraints will better enable practitioners to define MOEA problem formulations for real-world systems, especially when stakeholders are concerned with achieving fixed levels of performance according to one or

  10. Algorithms of maximum likelihood data clustering with applications

    Science.gov (United States)

    Giada, Lorenzo; Marsili, Matteo

    2002-12-01

    We address the problem of data clustering by introducing an unsupervised, parameter-free approach based on maximum likelihood principle. Starting from the observation that data sets belonging to the same cluster share a common information, we construct an expression for the likelihood of any possible cluster structure. The likelihood in turn depends only on the Pearson's coefficient of the data. We discuss clustering algorithms that provide a fast and reliable approximation to maximum likelihood configurations. Compared to standard clustering methods, our approach has the advantages that (i) it is parameter free, (ii) the number of clusters need not be fixed in advance and (iii) the interpretation of the results is transparent. In order to test our approach and compare it with standard clustering algorithms, we analyze two very different data sets: time series of financial market returns and gene expression data. We find that different maximization algorithms produce similar cluster structures whereas the outcome of standard algorithms has a much wider variability.
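    The exact likelihood maximized in the paper is not reproduced here; the sketch below only shows the generic greedy agglomerative scheme such maximization algorithms use, with a size-weighted average intra-cluster Pearson correlation standing in for the likelihood gain.

```python
import numpy as np

def greedy_correlation_clustering(data, min_gain=0.0):
    """Greedy agglomerative clustering driven by a correlation objective.

    data: array of shape (n_series, n_observations), e.g. asset returns or
          gene expression profiles.  At each step the pair of clusters whose
          merge most increases the size-weighted average intra-cluster
          Pearson correlation is merged; merging stops when no merge improves
          the objective by more than min_gain.  (This objective is a simple
          stand-in, not the exact likelihood maximized in the record.)
    """
    corr = np.corrcoef(data)
    clusters = [[i] for i in range(len(data))]

    def score(cluster):
        if len(cluster) < 2:
            return 0.0
        block = corr[np.ix_(cluster, cluster)]
        n = len(cluster)
        return (block.sum() - n) / (n * (n - 1))   # mean off-diagonal correlation

    while len(clusters) > 1:
        best = None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                merged = clusters[a] + clusters[b]
                gain = (len(merged) * score(merged)
                        - len(clusters[a]) * score(clusters[a])
                        - len(clusters[b]) * score(clusters[b]))
                if best is None or gain > best[0]:
                    best = (gain, a, b)
        if best[0] <= min_gain:
            break
        _, a, b = best
        clusters[a] = clusters[a] + clusters[b]
        del clusters[b]
    return clusters
```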

  11. Parallel Evolutionary Optimization Algorithms for Peptide-Protein Docking

    Science.gov (United States)

    Poluyan, Sergey; Ershov, Nikolay

    2018-02-01

    In this study we examine the possibility of using evolutionary optimization algorithms in protein-peptide docking. We present the main assumptions that reduce the docking problem to a continuous global optimization problem and provide a way of using evolutionary optimization algorithms. The Rosetta all-atom force field was used for structural representation and energy scoring. We describe the parallelization scheme and the MPI/OpenMP realization of the considered algorithms. We demonstrate the efficiency and performance of several algorithms applied to a set of benchmark tests.

  12. Assimilation of low-level wind in a high-resolution mesoscale model using the back and forth nudging algorithm

    Directory of Open Access Journals (Sweden)

    Jean-François Mahfouf

    2012-06-01

    The performance of a new data assimilation algorithm called back and forth nudging (BFN) is evaluated using a high-resolution numerical mesoscale model and simulated wind observations in the boundary layer. This new algorithm, of interest for the assimilation of high-frequency observations provided by ground-based active remote-sensing instruments, is straightforward to implement in a realistic atmospheric model. The convergence towards a steady-state profile can be achieved after five iterations of the BFN algorithm, and the algorithm provides an improved solution with respect to direct nudging. It is shown that the contribution of the nudging term does not dominate over other model physical and dynamical tendencies. Moreover, by running backward integrations with an adiabatic version of the model, the nudging coefficients do not need to be increased in order to stabilise the numerical equations. The ability of BFN to produce model changes upstream from the observations, in a similar way to 4-D-Var assimilation systems, is demonstrated. The capacity of the model to adjust to rapid changes in wind direction with the BFN is a first encouraging step, for example, to improve the detection and prediction of low-level wind shear phenomena through high-resolution mesoscale modelling over airports.
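    A toy ODE version of back and forth nudging is sketched below to make the idea concrete: forward integration nudged towards the observations, followed by a backward integration whose nudging term both stabilizes the reverse-time integration and carries information back to the initial state. The harmonic-oscillator model, observation spacing and nudging coefficients are arbitrary stand-ins for the mesoscale setting.

```python
import numpy as np

def back_and_forth_nudging(model, x_first_guess, obs, times,
                           k_f=10.0, k_b=10.0, dt=0.01, sweeps=10):
    """Toy back-and-forth nudging (BFN) assimilation on a small ODE.

    model: function f(x) giving the model tendency dx/dt (stand-in for the
           mesoscale model; here any small ODE system).
    obs:   dict mapping time-step index -> observed state vector.
    times: number of time steps in the assimilation window.
    """
    x0 = np.array(x_first_guess, dtype=float)
    for _ in range(sweeps):
        # Forward sweep: relax the state towards the observations.
        x = x0.copy()
        for n in range(times):
            nudge = k_f * (obs[n] - x) if n in obs else 0.0
            x = x + dt * (model(x) + nudge)
        # Backward sweep from the end of the window back to t = 0; the
        # nudging keeps the reverse integration stable and updates x(0).
        for n in range(times, 0, -1):
            nudge = k_b * (obs[n] - x) if n in obs else 0.0
            x = x - dt * model(x) + dt * nudge
        x0 = x                      # updated estimate of the initial state
    return x0

# Example: harmonic oscillator observed every 20 steps (synthetic data).
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
rng = np.random.default_rng(1)
obs, x = {}, np.array([1.0, 0.0])      # true trajectory starts at (1, 0)
for n in range(201):
    if n % 20 == 0:
        obs[n] = x + 0.01 * rng.normal(size=2)
    x = x + 0.01 * (A @ x)
# Starting from (0, 0), the BFN sweeps recover an estimate close to (1, 0).
print(back_and_forth_nudging(lambda s: A @ s, [0.0, 0.0], obs, 200))
```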

  13. Recursive Algorithm For Linear Regression

    Science.gov (United States)

    Varanasi, S. V.

    1988-01-01

    Order of model determined easily. Linear-regression algorithm includes recursive equations for coefficients of model of increased order. Algorithm eliminates duplicative calculations, facilitates search for minimum order of linear-regression model fitting set of data satisfactorily.
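    The brief's exact recursion is not given in the record; the sketch below captures the same idea of increasing the model order while reusing the lower-order work (here via Gram-Schmidt orthogonalization of each newly added column) and stopping at the minimum order whose residual is satisfactory. The polynomial basis and tolerance are assumptions for the example.

```python
import numpy as np

def minimum_order_fit(x, y, max_order=10, tol=1e-3):
    """Search for the minimum polynomial order that fits the data satisfactorily.

    Columns of the design matrix are added one order at a time and
    orthogonalized against the previous ones, so each step reuses the work
    of the lower-order fit instead of refitting from scratch.  Returns
    (order, fitted_values) for the first order whose RMS residual falls
    below tol, or the highest order tried.
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    q_columns = []                      # orthonormal basis built so far
    fit = np.zeros_like(y)
    for order in range(max_order + 1):
        column = x ** order
        for q in q_columns:             # remove components already modeled
            column = column - (q @ column) * q
        norm = np.linalg.norm(column)
        if norm > 1e-12:                # skip numerically dependent columns
            q = column / norm
            q_columns.append(q)
            fit = fit + (q @ y) * q     # incremental update of the fit
        rms = np.sqrt(np.mean((y - fit) ** 2))
        if rms < tol:
            return order, fit
    return max_order, fit
```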

  14. Approach to estimation of level of information security at enterprise based on genetic algorithm

    Science.gov (United States)

    Stepanov, L. V.; Parinov, A. V.; Korotkikh, L. P.; Koltsov, A. S.

    2018-05-01

    The article considers a way to formalize the different types of information security threats and the vulnerabilities of an enterprise or institution information system. Given the complexity of ensuring the information security of any newly organized system, well-founded concepts and decisions in the sphere of information security are needed; one such approach is the genetic algorithm method. For enterprises in any field of activity, a comprehensive estimation of the information system security level, taking into account the quantitative and qualitative factors characterizing the components of information security, is a relevant problem.

  15. An automated land-use mapping comparison of the Bayesian maximum likelihood and linear discriminant analysis algorithms

    Science.gov (United States)

    Tom, C. H.; Miller, L. D.

    1984-01-01

    The Bayesian maximum likelihood parametric classifier has been tested against the data-based formulation designated 'linear discriminant analysis', using the 'GLIKE' decision and 'CLASSIFY' classification algorithms in the Landsat Mapping System. Identical supervised training sets, USGS land use/land cover classes, and various combinations of Landsat image and ancillary geodata variables were used to compare the algorithms' thematic mapping accuracy on a single-date summer subscene, with a cellularized USGS land use map of the same time frame furnishing the ground truth reference. CLASSIFY, which accepts a priori class probabilities, is found to be more accurate than GLIKE, which assumes equal class occurrences, for all three mapping variable sets and both levels of detail. These results may be generalized to direct accuracy, time, cost, and flexibility advantages of linear discriminant analysis over Bayesian methods.
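    The sketch below contrasts the two decision rules described: Gaussian maximum-likelihood with equal class priors (GLIKE-style) versus the same rule with a priori class probabilities supplied (CLASSIFY-style). The class statistics would normally come from supervised training sites; everything here is a placeholder.

```python
import numpy as np

def gaussian_ml_classify(pixels, means, covs, priors=None):
    """Gaussian maximum-likelihood classification of multispectral pixels.

    pixels: array (n_pixels, n_bands); means/covs: per-class statistics
    estimated from training sites; priors: per-class a priori probabilities.
    With priors=None every class is assumed equally likely (GLIKE-style);
    supplying priors reproduces the CLASSIFY-style rule described above.
    """
    pixels = np.asarray(pixels, dtype=float)
    n_classes = len(means)
    priors = (np.full(n_classes, 1.0 / n_classes)
              if priors is None else np.asarray(priors, dtype=float))
    scores = np.empty((pixels.shape[0], n_classes))
    for c in range(n_classes):
        diff = pixels - means[c]
        inv = np.linalg.inv(covs[c])
        mahalanobis = np.einsum("ij,jk,ik->i", diff, inv, diff)
        log_det = np.linalg.slogdet(covs[c])[1]
        # Log-discriminant: log prior minus half the Gaussian penalty terms.
        scores[:, c] = np.log(priors[c]) - 0.5 * (mahalanobis + log_det)
    return scores.argmax(axis=1)
```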

  16. Using Machine Learning Methods Jointly to Find Better Set of Rules in Data Mining

    Directory of Open Access Journals (Sweden)

    SUG Hyontai

    2017-01-01

    Rough set-based data mining algorithms are one of the widely accepted machine learning technologies because of their strong mathematical background and their capability of finding optimal rules based only on the given data sets, without room for prejudiced views to be imposed on the data. But because the algorithms find rules very precisely, we may confront the overfitting problem. On the other hand, association rule algorithms find rules of association, where the association resides between sets of items in a database. The algorithms find itemsets that occur more often than a given minimum support, so that they can find the itemsets in reasonable time even for very large databases by supplying the minimum support appropriately. In order to overcome the overfitting problem in rough set-based algorithms, we first find large itemsets and then select attributes that cover the large itemsets. By using only the selected attributes, we may find a better set of rules based on rough set theory. Results from experiments support our suggested method.

  17. Robust Semi-Supervised Manifold Learning Algorithm for Classification

    Directory of Open Access Journals (Sweden)

    Mingxia Chen

    2018-01-01

    In recent years, manifold learning methods have been widely used in data classification to tackle the curse of dimensionality, since they can discover the potential intrinsic low-dimensional structures of high-dimensional data. Given partially labeled data, semi-supervised manifold learning algorithms are proposed to predict the labels of the unlabeled points, taking label information into account. However, these semi-supervised manifold learning algorithms are not robust against noisy points, especially when the labeled data contain noise. In this paper, we propose a framework for robust semi-supervised manifold learning (RSSML) to address this problem. The noise levels of the labeled points are first predicted, and a regularization term is then constructed to reduce the impact of labeled points containing noise. A new robust semi-supervised optimization model is proposed by adding the regularization term to the traditional semi-supervised optimization model. Numerical experiments are given to show the improvement and efficiency of RSSML on noisy data sets.

  18. Two parameter-tuned metaheuristic algorithms for the multi-level lot sizing and scheduling problem

    Directory of Open Access Journals (Sweden)

    S.M.T. Fatemi Ghomi

    2012-10-01

    This paper addresses the lot sizing and scheduling problem for n products and m machines in a flow shop environment where setups among machines are sequence-dependent and can be carried over. Multiple products must be produced under capacity constraints, and backorders are allowed. Since lot sizing and scheduling problems are well known to be strongly NP-hard, much attention has been given to heuristic and metaheuristic methods. This paper presents two metaheuristic algorithms, namely a Genetic Algorithm (GA) and an Imperialist Competitive Algorithm (ICA). Moreover, the Taguchi robust design methodology is employed to calibrate the parameters of the algorithms for problems of different sizes. In addition, the parameter-tuned algorithms are compared against a presented lower bound on randomly generated problems. Finally, comprehensive numerical examples are presented to demonstrate the effectiveness of the proposed algorithms. The results show that the performance of both GA and ICA is very promising and that ICA statistically outperforms GA.

  19. Multi-Level Sensor Fusion Algorithm Approach for BMD Interceptor Applications

    National Research Council Canada - National Science Library

    Allen, Doug

    1998-01-01

    ... through fabrication and testing of advanced sensor hardware concepts and advanced sensor fusion algorithms. Advanced sensor concepts include onboard LADAR in conjunction with a multi-color passive IR sensor...

  20. Topology optimization in acoustics and elasto-acoustics via a level-set method

    Science.gov (United States)

    Desai, J.; Faure, A.; Michailidis, G.; Parry, G.; Estevez, R.

    2018-04-01

    Optimizing the shape and topology (S&T) of structures to improve their acoustic performance is quite challenging. The exact position of the structural boundary is usually of critical importance, which dictates the use of geometric methods for topology optimization instead of standard density approaches. The goal of the present work is to investigate different possibilities for handling topology optimization problems in acoustics and elasto-acoustics via a level-set method. From a theoretical point of view, we detail two equivalent ways to perform the derivation of surface-dependent terms and propose a smoothing technique for treating problems of boundary conditions optimization. In the numerical part, we examine the importance of the surface-dependent term in the shape derivative, neglected in previous studies found in the literature, on the optimal designs. Moreover, we test different mesh adaptation choices, as well as technical details related to the implicit surface definition in the level-set approach. We present results in two and three-space dimensions.