WorldWideScience

Sample records for proximal point algorithm

  1. A proximal point algorithm with generalized proximal distances to BEPs

    OpenAIRE

    Bento, G. C.; Neto, J. X. Cruz; Lopes, J. O.; Soares Jr, P. A.; Soubeyran, A.

    2014-01-01

    We consider a bilevel problem involving two monotone equilibrium bifunctions and we show that this problem can be solved by a proximal point method with generalized proximal distances. We propose a framework for the convergence analysis of the sequences generated by the algorithm. This class of problems is very interesting because it covers mathematical programs and optimization problems under equilibrium constraints. As an application, we consider the problem of the stability and change dyna...

  2. Super-Relaxed (η)-Proximal Point Algorithms, Relaxed (η)-Proximal Point Algorithms, Linear Convergence Analysis, and Nonlinear Variational Inclusions

    Directory of Open Access Journals (Sweden)

    Agarwal, Ravi P.

    2009-01-01

    We glance at recent advances in the general theory of maximal (set-valued) monotone mappings and their role in examining convex programming and the closely related field of nonlinear variational inequalities. We focus mostly on applications of the super-relaxed (η)-proximal point algorithm to solving a class of nonlinear variational inclusion problems, based on the notion of maximal (η)-monotonicity. The investigations highlighted in this communication are greatly influenced by the celebrated work of Rockafellar (1976), while others have played a significant part as well in generalizing the proximal point algorithm considered by Rockafellar (1976) to the relaxed proximal point algorithm of Eckstein and Bertsekas (1992). Even for the linear convergence analysis of the overrelaxed (or super-relaxed) (η)-proximal point algorithm, the fundamental model for Rockafellar's case does the job. Furthermore, we explore possibilities of generalizing the Yosida regularization/approximation in light of maximal (η)-monotonicity, with applications to first-order evolution equations/inclusions.
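    The relaxed proximal point iteration mentioned above can be sketched in a few lines. This is a minimal illustration, not the paper's (η)-monotone setting: we take the simple monotone map A(x) = x - x_star, whose resolvent has a closed form, and the step and relaxation parameters are made up for the demo.

```python
import numpy as np

def resolvent(x, c, x_star):
    # (I + cA)^{-1} for the toy monotone map A(x) = x - x_star.
    return (x + c * x_star) / (1.0 + c)

def relaxed_ppa(x0, x_star, c=1.0, rho=1.5, iters=100):
    # rho in (0, 2) is the relaxation parameter of Eckstein-Bertsekas;
    # rho = 1 recovers Rockafellar's classical proximal point algorithm.
    x = x0
    for _ in range(iters):
        x = (1.0 - rho) * x + rho * resolvent(x, c, x_star)
    return x

x = relaxed_ppa(x0=10.0, x_star=3.0)
```

    With these parameters each iteration is a contraction, so the iterates converge to the zero of A.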

  3. Adaptive Proximal Point Algorithms for Total Variation Image Restoration

    Directory of Open Access Journals (Sweden)

    Ying Chen

    2015-02-01

    Image restoration is a fundamental problem in various areas of imaging science. This paper presents a class of adaptive proximal point algorithms (APPA) with a contraction strategy for total variation image restoration. In each iteration, the proposed methods choose an adaptive proximal parameter matrix which is not necessarily symmetric. There is an inner extrapolation in the prediction step, implemented by an adaptive scheme and followed by a correction step for contraction. Using the framework of contraction methods, a global convergence result and a convergence rate of O(1/N) are established for the proposed methods. Numerical results are reported to illustrate the efficiency of the APPA methods for solving total variation image restoration problems. Comparisons with state-of-the-art algorithms demonstrate that the proposed methods are competitive and promising.

  4. Relatively Inexact Proximal Point Algorithm and Linear Convergence Analysis

    Directory of Open Access Journals (Sweden)

    Ram U. Verma

    2009-01-01

    Based on the notion of relatively maximal (m)-relaxed monotonicity, the approximation solvability of a general class of inclusion problems is discussed, generalizing Rockafellar's theorem (1976) on linear convergence using the proximal point algorithm in a real Hilbert space setting. The convergence analysis based on this new model is simpler and more compact than the celebrated technique of Rockafellar, in which the Lipschitz continuity at 0 of the inverse of the set-valued mapping is applied. Furthermore, it can be used to generalize the Yosida approximation, which in turn can be applied to first-order evolution equations as well as evolution inclusions.

  5. ProxImaL: efficient image optimization using proximal algorithms

    KAUST Repository

    Heide, Felix; Diamond, Steven; Nießner, Matthias; Ragan-Kelley, Jonathan; Heidrich, Wolfgang; Wetzstein, Gordon

    2016-01-01

    ProxImaL is a domain-specific language and compiler for image optimization problems that makes it easy to experiment with different problem formulations and algorithm choices. The language uses proximal operators as the fundamental building blocks of a variety ...

  6. ProxImaL: efficient image optimization using proximal algorithms

    KAUST Repository

    Heide, Felix

    2016-07-11

    Computational photography systems are becoming increasingly diverse, while computational resources, for example on mobile platforms, are rapidly increasing. As diverse as these camera systems may be, slightly different variants of the underlying image processing tasks, such as demosaicking, deconvolution, denoising, inpainting, image fusion, and alignment, are shared between all of these systems. Formal optimization methods have recently been demonstrated to achieve state-of-the-art quality for many of these applications. Unfortunately, different combinations of natural image priors and optimization algorithms may be optimal for different problems, and implementing and testing each combination is currently a time-consuming and error-prone process. ProxImaL is a domain-specific language and compiler for image optimization problems that makes it easy to experiment with different problem formulations and algorithm choices. The language uses proximal operators as the fundamental building blocks of a variety of linear and nonlinear image formation models and cost functions, advanced image priors, and noise models. The compiler intelligently chooses the best way to translate a problem formulation and choice of optimization algorithm into an efficient solver implementation. In applications to the image processing pipeline, deconvolution in the presence of Poisson-distributed shot noise, and burst denoising, we show that a few lines of ProxImaL code can generate highly efficient solvers that achieve state-of-the-art results. We also show applications to the nonlinear and nonconvex problem of phase retrieval.
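    To illustrate how proximal operators act as building blocks, here is a hand-written NumPy sketch (not the ProxImaL DSL): proximal gradient descent for a lasso-type problem, where the L1 prior enters only through its proximal operator, soft-thresholding. The problem data are made up.

```python
import numpy as np

def prox_l1(v, t):
    # Proximal operator of t*||.||_1: componentwise soft-thresholding.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_gradient(A, b, lam, iters=500):
    # Solves min_x 0.5*||Ax - b||^2 + lam*||x||_1 (ISTA).
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)
        x = prox_l1(x - step * grad, step * lam)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
x_true = np.zeros(10)
x_true[[1, 4]] = [2.0, -1.5]             # sparse ground truth
b = A @ x_true                           # noiseless measurements
x_hat = proximal_gradient(A, b, lam=0.05)
```

    Swapping the prior means swapping only the prox; this modularity is what a DSL like ProxImaL automates.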

  7. Industrial Computed Tomography using Proximal Algorithm

    KAUST Repository

    Zang, Guangming

    2016-04-14

    In this thesis, we present ProxiSART, a flexible proximal framework for robust 3D cone beam tomographic reconstruction based on the Simultaneous Algebraic Reconstruction Technique (SART). We derive the proximal operator for the SART algorithm and use it for minimizing the data term in a proximal algorithm. We show the flexibility of the framework by plugging in different powerful regularizers, and show its robustness in achieving better reconstruction results in the presence of noise and using fewer projections. We compare our framework to state-of-the-art methods and existing popular software tomography reconstruction packages, on both synthetic and real datasets, and show superior reconstruction quality, especially from noisy data and a small number of projections.
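    The classical SART update that such a proximal framework builds on can be sketched as follows. This is a toy illustration on a tiny made-up nonnegative system, not cone beam CT data, and it shows the plain SART iteration rather than the derived proximal operator.

```python
import numpy as np

def sart(A, b, iters=200, lam=1.0):
    # SART: residuals weighted by row sums, back-projection by column sums.
    row_sums = A.sum(axis=1)
    col_sums = A.sum(axis=0)
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        residual = (b - A @ x) / row_sums
        x = x + lam * (A.T @ residual) / col_sums   # lam in (0, 2)
    return x

# Tiny nonnegative matrix standing in for a projection operator.
A = np.array([[1.0, 2.0], [3.0, 1.0], [1.0, 1.0]])
x_true = np.array([0.5, 1.5])
b = A @ x_true        # consistent, noise-free "projections"
x_hat = sart(A, b)
```

    For a consistent system the iterates converge to a weighted least-squares solution, here the exact one.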

  8. Two-Step Proximal Gradient Algorithm for Low-Rank Matrix Completion

    Directory of Open Access Journals (Sweden)

    Qiuyu Wang

    2016-06-01

    In this paper, we propose a two-step proximal gradient algorithm to solve nuclear norm regularized least squares for the purpose of recovering a low-rank data matrix from a sampling of its entries. Each iteration generated by the proposed algorithm is a combination of the latest three points, namely, the previous point, the current iterate, and its proximal gradient point. This algorithm preserves the computational simplicity of the classical proximal gradient algorithm, where a singular value decomposition in the proximal operator is involved. Global convergence follows directly from results in the literature. Numerical results are reported to show the efficiency of the algorithm.
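    The classical one-step version of this scheme (not the paper's two-step variant) can be sketched directly: the prox of the nuclear norm is singular value soft-thresholding, applied after a gradient step on the observed entries. The rank-1 test matrix and parameters are made up.

```python
import numpy as np

def svt(M, t):
    # Prox of t*||.||_* : soft-threshold the singular values.
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - t, 0.0)) @ Vt

def complete(obs, mask, lam=0.01, iters=500):
    # Proximal gradient on 0.5*||mask*(X - obs)||^2 + lam*||X||_* ;
    # the gradient is 1-Lipschitz, so unit step size is valid.
    X = np.zeros_like(obs)
    for _ in range(iters):
        grad = mask * (X - obs)
        X = svt(X - grad, lam)
    return X

rng = np.random.default_rng(1)
M = np.outer(rng.standard_normal(5), rng.standard_normal(5))  # rank 1
mask = rng.random((5, 5)) < 0.8        # observe ~80% of entries
X_hat = complete(mask * M, mask)
```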

  9. A proximity algorithm accelerated by Gauss-Seidel iterations for L1/TV denoising models

    Science.gov (United States)

    Li, Qia; Micchelli, Charles A.; Shen, Lixin; Xu, Yuesheng

    2012-09-01

    Our goal in this paper is to improve the computational performance of the proximity algorithms for the L1/TV denoising model. This leads us to a new characterization of all solutions to the L1/TV model via fixed-point equations expressed in terms of the proximity operators. Based upon this observation we develop an algorithm for solving the model and establish its convergence. Furthermore, we demonstrate that the proposed algorithm can be accelerated through the use of the componentwise Gauss-Seidel iteration so that the CPU time consumed is significantly reduced. Numerical experiments using the proposed algorithm for impulsive noise removal are included, with a comparison to three recently developed algorithms. The numerical results show that while the proposed algorithm enjoys a high quality of the restored images, as the other three known algorithms do, it performs significantly better in terms of computational efficiency measured in the CPU time consumed.
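    The componentwise Gauss-Seidel idea used for acceleration, i.e. updating one component at a time with the freshest values of the others, can be illustrated in the textbook setting of a diagonally dominant linear system. This is a stand-in example, not the paper's L1/TV fixed-point equations.

```python
import numpy as np

def gauss_seidel(A, b, iters=50):
    n = len(b)
    x = np.zeros(n)
    for _ in range(iters):
        for i in range(n):
            # Use already-updated components immediately (vs. Jacobi).
            s = A[i] @ x - A[i, i] * x[i]
            x[i] = (b[i] - s) / A[i, i]
    return x

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 5.0, 2.0],
              [0.0, 2.0, 6.0]])   # diagonally dominant -> GS converges
b = np.array([1.0, 2.0, 3.0])
x = gauss_seidel(A, b)
```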

  10. A proximity algorithm accelerated by Gauss–Seidel iterations for L1/TV denoising models

    International Nuclear Information System (INIS)

    Li, Qia; Shen, Lixin; Xu, Yuesheng; Micchelli, Charles A

    2012-01-01

    Our goal in this paper is to improve the computational performance of the proximity algorithms for the L1/TV denoising model. This leads us to a new characterization of all solutions to the L1/TV model via fixed-point equations expressed in terms of the proximity operators. Based upon this observation we develop an algorithm for solving the model and establish its convergence. Furthermore, we demonstrate that the proposed algorithm can be accelerated through the use of the componentwise Gauss–Seidel iteration so that the CPU time consumed is significantly reduced. Numerical experiments using the proposed algorithm for impulsive noise removal are included, with a comparison to three recently developed algorithms. The numerical results show that while the proposed algorithm enjoys a high quality of the restored images, as the other three known algorithms do, it performs significantly better in terms of computational efficiency measured in the CPU time consumed. (paper)

  11. SIFT based algorithm for point feature tracking

    Directory of Open Access Journals (Sweden)

    Adrian BURLACU

    2007-12-01

    In this paper a tracking algorithm for SIFT features in image sequences is developed. For each point feature extracted using the SIFT algorithm, a descriptor is computed using information from its neighborhood. Point features are then tracked throughout the image sequence by minimizing the distance between their descriptors. Experimental results, obtained from image sequences that capture the scaling of objects of different geometrical types, reveal the performance of the tracking algorithm.
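    Descriptor matching by minimal distance can be sketched as below, here with Lowe's ratio test to reject ambiguous matches. The 4-D descriptors are made up; real SIFT descriptors (128-D) would come from an existing extractor.

```python
import numpy as np

def match(desc_a, desc_b, ratio=0.8):
    # Match each descriptor in desc_a to its nearest neighbour in desc_b,
    # keeping only matches that clearly beat the second-nearest candidate.
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        j, k = np.argsort(dists)[:2]
        if dists[j] < ratio * dists[k]:
            matches.append((i, j))
    return matches

desc_a = np.array([[1.0, 0.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0, 0.0]])
desc_b = np.array([[0.0, 0.98, 0.02, 0.0],
                   [0.99, 0.01, 0.0, 0.0],
                   [0.0, 0.0, 5.0, 5.0]])
pairs = match(desc_a, desc_b)
```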

  12. Tractable Algorithms for Proximity Search on Large Graphs

    Science.gov (United States)

    2010-07-01

    Education never ends, Watson. It is a series of lessons with the greatest for the last. — Sir Arthur Conan Doyle's Sherlock Holmes. In this thesis, our main goal is to design fast algorithms for proximity search in large graphs, with a focus on investigating useful random walk based proximity measures.

  13. A Regularized Algorithm for the Proximal Split Feasibility Problem

    Directory of Open Access Journals (Sweden)

    Zhangsong Yao

    2014-01-01

    The proximal split feasibility problem is studied. A regularized method is presented for solving it, and a strong convergence theorem is given.

  14. Interior point algorithms theory and analysis

    CERN Document Server

    Ye, Yinyu

    2011-01-01

    The first comprehensive review of the theory and practice of one of today's most powerful optimization techniques. The explosive growth of research into and development of interior point algorithms over the past two decades has significantly improved the complexity of linear programming and yielded some of today's most sophisticated computing techniques. This book offers a comprehensive and thorough treatment of the theory, analysis, and implementation of this powerful computational tool. Interior Point Algorithms provides detailed coverage of all basic and advanced aspects of the subject.

  15. Algorithms for solving common fixed point problems

    CERN Document Server

    Zaslavski, Alexander J

    2018-01-01

    This book details approximate solutions to common fixed point problems and convex feasibility problems in the presence of perturbations. Convex feasibility problems search for a common point of a finite collection of subsets in a Hilbert space; common fixed point problems pursue a common fixed point of a finite collection of self-mappings in a Hilbert space. A variety of algorithms are considered in this book for solving both types of problems, the study of which has fueled a rapidly growing area of research. This monograph is timely and highlights the numerous applications to engineering, computed tomography, and radiation therapy planning. Totaling eight chapters, this book begins with an introduction to foundational material and moves on to examine iterative methods in metric spaces. The dynamic string-averaging methods for common fixed point problems in normed space are analyzed in Chapter 3. Dynamic string methods for common fixed point problems in a metric space are introduced and discussed in Chapter ...

  16. Biomechanical evaluation of straight antegrade nailing in proximal humeral fractures: the rationale of the "proximal anchoring point".

    Science.gov (United States)

    Euler, Simon A; Petri, Maximilian; Venderley, Melanie B; Dornan, Grant J; Schmoelz, Werner; Turnbull, Travis Lee; Plecko, Michael; Kralinger, Franz S; Millett, Peter J

    2017-09-01

    Varus failure is one of the most common failure modes following surgical treatment of proximal humeral fractures. Straight antegrade nails (SAN) theoretically provide increased stability by anchoring the end of the nail to the densest zone of the proximal humerus (the subchondral zone). The aim of this study was to biomechanically investigate the characteristics of this "proximal anchoring point" (PAP). We hypothesized that the PAP would improve stability compared to the same construct without the PAP. Straight antegrade humeral nailing was performed in 20 matched pairs of human cadaveric humeri for a simulated unstable two-part fracture. Biomechanical testing, with stepwise increasing cyclic axial loading (50-N increments every 100 cycles) at an angle of 20° abduction, revealed significantly higher median loads to failure for SAN constructs with the PAP (median, 450 N; range, 200-1,000 N) compared to those without the PAP (median, 325 N; range, 100-500 N; p = 0.009). SAN constructs with press-fit proximal extensions (endcaps) showed similar median loads to failure (median, 400 N; range, 200-650 N) when compared to the undersized, commercially available SAN endcaps (median, 450 N; range, 200-600 N; p = 0.240). The PAP provided significantly increased stability in SAN constructs compared to the same setup without this additional proximal anchoring point, and varus-displacing forces on the humeral head were markedly reduced. This study provides biomechanical evidence for the rationale of the "proximal anchoring point". Straight antegrade humeral nailing may be beneficial for patients undergoing surgical treatment for unstable proximal humeral fractures, to decrease secondary varus displacement and thus potentially reduce revision rates.

  17. PHOTOJOURNALISM AND PROXIMITY IMAGES: two points of view, two professions?

    Directory of Open Access Journals (Sweden)

    Daniel Thierry

    2011-06-01

    For many decades, classic photojournalistic practice, firmly anchored in a creed established since Lewis Hine (1874-1940), has developed a praxis and a doxa that have barely been affected by the transformations in the various types of journalism. From the search for the "right image", which would be totally transparent by striving to efface its enunciative features in a quest for maximum objectivity, to the most seductive supermarket photography sold by photo agencies, the range of images seems decidedly framed. However, far from constituting the high-powered reporting or excellent photography that is rewarded with numerous international prizes and invitations into the media-artistic world, local press photography remains in the shadows. How does one offer a representation of one's self that can be shared in the local sphere? That is the first question which editors of the local daily and weekly press must grapple with. Using illustrations of the practices, this article examines the origins of these practices and offers an analysis grounded in the originality of the authors of these proximity photographs.

  18. Best Proximity Point Results in Complex Valued Metric Spaces

    Directory of Open Access Journals (Sweden)

    Binayak S. Choudhury

    2014-01-01

    complex valued metric spaces. We treat the problem as that of finding the global optimal solution of a fixed point equation although the exact solution does not in general exist. We also define and use the concept of P-property in such spaces. Our results are illustrated with examples.

  19. Best Proximity Points of Contractive-type and Nonexpansive-type Mappings

    Directory of Open Access Journals (Sweden)

    R. Kavitha

    2018-02-01

    The purpose of this paper is to obtain best proximity point theorems for multivalued nonexpansive-type and contractive-type mappings on complete metric spaces and on certain closed convex subsets of Banach spaces. We obtain a convergence result under some assumptions and we prove the existence of common best proximity points for a sequence of multivalued contractive-type mappings.

  20. Hybrid Proximal-Point Methods for Zeros of Maximal Monotone Operators, Variational Inequalities and Mixed Equilibrium Problems

    Directory of Open Access Journals (Sweden)

    Kriengsak Wattanawitoon

    2011-01-01

    We prove strong and weak convergence theorems for modified hybrid proximal-point algorithms for finding a common element of the zero set of a maximal monotone operator, the set of solutions of equilibrium problems, and the solution set of variational inequalities for an inverse strongly monotone operator in a Banach space, under different conditions. Moreover, applications to complementarity problems are given. Our results modify and improve the recently announced ones by Li and Song (2008) and many authors.

  1. Star point centroid algorithm based on background forecast

    Science.gov (United States)

    Wang, Jin; Zhao, Rujin; Zhu, Nan

    2014-09-01

    The calculation of the star point centroid is a key step in reducing star tracker measurement error. A star map captured by an APS detector contains several types of noise which have a great impact on the accuracy of the star point centroid calculation. Through analysis of the characteristics of star map noise, an algorithm for calculating the star point centroid based on background forecast is presented in this paper. Experiments prove the validity of the algorithm. Compared with the classic algorithm, this algorithm not only improves the accuracy of the star point centroid calculation, but also requires no stored calibration data. The algorithm has been applied successfully in a certain star tracker.
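    Background-subtracted centroiding can be sketched as below. The paper's specific background forecast model is not reproduced; as a hedged stand-in we estimate the background with a global median and then take the intensity-weighted centroid of the residual. The synthetic star image is made up.

```python
import numpy as np

def star_centroid(img):
    background = np.median(img)             # crude background estimate
    w = np.clip(img - background, 0.0, None)  # keep only above-background flux
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    total = w.sum()
    # Intensity-weighted centroid (row, col) of the residual signal.
    return (ys * w).sum() / total, (xs * w).sum() / total

img = np.full((9, 9), 10.0)                 # flat background, level 10
img[3:6, 4:7] += [[1.0, 2.0, 1.0],          # symmetric star spot
                  [2.0, 8.0, 2.0],          # centered at row 4, col 5
                  [1.0, 2.0, 1.0]]
cy, cx = star_centroid(img)
```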

  2. Analysing Music with Point-Set Compression Algorithms

    DEFF Research Database (Denmark)

    Meredith, David

    2016-01-01

    Several point-set pattern-discovery and compression algorithms designed for analysing music are reviewed and evaluated. Each algorithm takes as input a point-set representation of a score in which each note is represented as a point in pitch-time space. Each algorithm computes the maximal ... and sections in pieces of classical music. On the first task, the best-performing algorithms achieved success rates of around 84%. In the second task, the best algorithms achieved mean F1 scores of around 0.49, with scores for individual pieces rising as high as 0.71.

  3. Momentum and Stochastic Momentum for Stochastic Gradient, Newton, Proximal Point and Subspace Descent Methods

    KAUST Repository

    Loizou, Nicolas

    2017-12-27

    In this paper we study several classes of stochastic optimization algorithms enriched with heavy ball momentum. Among the methods studied are: stochastic gradient descent, stochastic Newton, stochastic proximal point and stochastic dual subspace ascent. This is the first time momentum variants of several of these methods are studied. We choose to perform our analysis in a setting in which all of the above methods are equivalent. We prove global nonasymptotic linear convergence rates for all methods and various measures of success, including primal function values, primal iterates (in L2 sense), and dual function values. We also show that the primal iterates converge at an accelerated linear rate in the L1 sense. This is the first time a linear rate is shown for the stochastic heavy ball method (i.e., stochastic gradient descent method with momentum). Under somewhat weaker conditions, we establish a sublinear convergence rate for Cesàro averages of primal iterates. Moreover, we propose a novel concept, which we call stochastic momentum, aimed at decreasing the cost of performing the momentum step. We prove linear convergence of several stochastic methods with stochastic momentum, and show that in some sparse data regimes and for sufficiently small momentum parameters, these methods enjoy better overall complexity than methods with deterministic momentum. Finally, we perform extensive numerical testing on artificial and real datasets, including data coming from average consensus problems.
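    The stochastic heavy ball method analyzed above can be sketched on a toy consistent least-squares problem (the interpolation setting, where stochastic gradient noise vanishes at the solution, so plain momentum SGD converges). The data and hyperparameters are made up for illustration.

```python
import numpy as np

def sgd_momentum(A, b, step=0.05, beta=0.9, epochs=200, seed=0):
    # SGD with heavy ball momentum: v tracks a decaying sum of
    # stochastic gradients; each step samples one row of (A, b).
    rng = np.random.default_rng(seed)
    x = np.zeros(A.shape[1])
    v = np.zeros_like(x)
    for _ in range(epochs):
        for i in rng.permutation(len(b)):
            grad = (A[i] @ x - b[i]) * A[i]   # gradient of one term
            v = beta * v + grad
            x = x - step * v
    return x

A = np.array([[1.0, 0.0],
              [0.0, 2.0],
              [1.0, 1.0]])
x_true = np.array([1.0, -1.0])
b = A @ x_true                  # consistent system: interpolation holds
x_hat = sgd_momentum(A, b)
```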

  4. Momentum and Stochastic Momentum for Stochastic Gradient, Newton, Proximal Point and Subspace Descent Methods

    KAUST Repository

    Loizou, Nicolas; Richtarik, Peter

    2017-01-01

    In this paper we study several classes of stochastic optimization algorithms enriched with heavy ball momentum. Among the methods studied are: stochastic gradient descent, stochastic Newton, stochastic proximal point and stochastic dual subspace ascent. This is the first time momentum variants of several of these methods are studied. We choose to perform our analysis in a setting in which all of the above methods are equivalent. We prove global nonasymptotic linear convergence rates for all methods and various measures of success, including primal function values, primal iterates (in L2 sense), and dual function values. We also show that the primal iterates converge at an accelerated linear rate in the L1 sense. This is the first time a linear rate is shown for the stochastic heavy ball method (i.e., stochastic gradient descent method with momentum). Under somewhat weaker conditions, we establish a sublinear convergence rate for Cesàro averages of primal iterates. Moreover, we propose a novel concept, which we call stochastic momentum, aimed at decreasing the cost of performing the momentum step. We prove linear convergence of several stochastic methods with stochastic momentum, and show that in some sparse data regimes and for sufficiently small momentum parameters, these methods enjoy better overall complexity than methods with deterministic momentum. Finally, we perform extensive numerical testing on artificial and real datasets, including data coming from average consensus problems.

  5. Nearest Neighbour Corner Points Matching Detection Algorithm

    Directory of Open Access Journals (Sweden)

    Zhang Changlong

    2015-01-01

    Accurate corner detection plays an important part in camera calibration. To deal with the instability and inaccuracy of existing corner detection algorithms, a nearest neighbour corner matching detection algorithm is put forward. First, it dilates the binary image of the photographed pictures, then searches for and retains the quadrilateral outlines in the image. Second, blocks that accord with chessboard corners are grouped into a class; classes with too many blocks are deleted and classes with too few are supplemented, and the midpoint of the two vertex coordinates is taken as the rough position of a corner. Finally, the positions of the corners are located precisely. Experimental results show that the algorithm has obvious advantages in accuracy and validity of corner detection, and it can provide a reliable basis for camera calibration in traffic accident measurement.

  6. Models and algorithms for midterm production planning under uncertainty: application of proximal decomposition methods

    International Nuclear Information System (INIS)

    Lenoir, A.

    2008-01-01

    We focus in this thesis on the optimization of large systems under uncertainty, and more specifically on solving the class of so-called deterministic equivalents with the help of splitting methods. The underlying application we have in mind is the electricity unit commitment problem under climate, market, and energy consumption randomness, arising at EDF. We set out the natural time-space-randomness couplings related to this application and propose two new discretization schemes to tackle the randomness, each of them based on non-parametric estimation of conditional expectations. This constitutes an alternative to the usual scenario tree construction. We use a mathematical model consisting of the sum of two convex functions, a separable one and a coupling one. On the one hand, this simplified model offers a general framework for studying decomposition-coordination algorithms without the technicality due to a particular choice of subsystems. On the other hand, the convexity assumption allows us to take advantage of monotone operator theory and to identify proximal methods as fixed point algorithms. We highlight the differential properties of the generalized reflections whose fixed points we seek, in order to derive bounds on the speed of convergence. We then examine two families of decomposition-coordination algorithms resulting from operator splitting methods, namely Forward-Backward and Rachford methods. We suggest some practical methods for accelerating the Rachford class of methods. To this end, we analyze the method from a theoretical point of view, furnishing as a byproduct explanations for some numerical observations, and then propose some improvements in response. Among them, an automatic updating strategy for scaling factors can correct a potentially bad initial choice. The convergence proof is made easier thanks to stability results, provided beforehand, on the composition of operators with respect to graphical convergence. We also submit the idea of introducing ...

  7. GASPACHO: a generic automatic solver using proximal algorithms for convex huge optimization problems

    Science.gov (United States)

    Goossens, Bart; Luong, Hiêp; Philips, Wilfried

    2017-08-01

    Many inverse problems (e.g., demosaicking, deblurring, denoising, image fusion, HDR synthesis) share various similarities: degradation operators are often modeled by a specific data fitting function while image prior knowledge (e.g., sparsity) is incorporated by additional regularization terms. In this paper, we investigate automatic algorithmic techniques for evaluating proximal operators. These algorithmic techniques also enable efficient calculation of adjoints from linear operators in a general matrix-free setting. In particular, we study the simultaneous-direction method of multipliers (SDMM) and the parallel proximal algorithm (PPXA) solvers and show that the automatically derived implementations are well suited for both single-GPU and multi-GPU processing. We demonstrate this approach for an Electron Microscopy (EM) deconvolution problem.
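    The SDMM and PPXA solvers named above are involved; as a hedged stand-in, here is the closely related Douglas-Rachford splitting, which likewise minimizes a sum f + g purely through the two proximal operators. The quadratic terms and parameters are made up for the demo.

```python
import numpy as np

def prox_quadratic(v, gamma, a):
    # Proximal operator of f(x) = 0.5*(x - a)^2, in closed form.
    return (v + gamma * a) / (1.0 + gamma)

def douglas_rachford(a, b, gamma=1.0, iters=100):
    # Minimize 0.5*(x-a)^2 + 0.5*(x-b)^2 using only the two proxes.
    z = 0.0
    for _ in range(iters):
        p = prox_quadratic(z, gamma, a)           # prox of f
        q = prox_quadratic(2 * p - z, gamma, b)   # prox of g at reflection
        z = z + q - p                             # fixed-point update
    return prox_quadratic(z, gamma, a)

# The minimizer of 0.5*(x-1)^2 + 0.5*(x-3)^2 is the midpoint x = 2.
x = douglas_rachford(1.0, 3.0)
```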

  8. Existence and Convergence of Best Proximity Points for Semi Cyclic Contraction Pairs

    Directory of Open Access Journals (Sweden)

    Balwant Singh Thakur

    2014-02-01

    In this article, we introduce the notion of a semi cyclic ϕ-contraction pair of mappings, which contains semi cyclic contraction pairs as a subclass. Existence and convergence results of best proximity points for semi cyclic ϕ-contraction pairs of mappings are obtained.

  9. Genetic Algorithm Based Economic Dispatch with Valve Point Effect

    Energy Technology Data Exchange (ETDEWEB)

    Park, Jong Nam; Park, Kyung Won; Kim, Ji Hong; Kim, Jin O [Hanyang University (Korea, Republic of)

    1999-03-01

    This paper presents a new genetic algorithm approach to the economic dispatch problem with valve point discontinuities. The proposed approach improves the performance of genetic algorithms on this problem through an improved death penalty method, generation-apart elitism, atavism, and sexual selection with sexual distinction. Numerical results on a test system consisting of 13 thermal units show that the proposed approach is faster, more robust and more powerful than conventional genetic algorithms. (author). 8 refs., 10 figs.
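    The valve-point cost model and a penalty-based genetic algorithm can be sketched as follows. This is a toy two-unit sketch with made-up coefficients; the paper's specific operators (generation-apart elitism, atavism, sexual selection) are not reproduced, and the penalty term only loosely echoes the death penalty idea.

```python
import numpy as np

def valve_point_cost(P, a, b, c, e, f, Pmin):
    # Fuel cost with the rectified-sine valve point loading term.
    return np.sum(a + b * P + c * P**2 + np.abs(e * np.sin(f * (Pmin - P))))

def ga_dispatch(demand, Pmin, Pmax, coeffs, pop=40, gens=100, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(Pmin, Pmax, (pop, len(Pmin)))
    X *= demand / X.sum(axis=1, keepdims=True)   # start near power balance
    X = np.clip(X, Pmin, Pmax)
    def fit(x):
        # Heavy penalty on power balance violation (death-penalty flavour).
        return valve_point_cost(x, *coeffs, Pmin) + 1e6 * abs(x.sum() - demand)
    for _ in range(gens):
        f_all = np.array([fit(x) for x in X])
        elite = X[np.argsort(f_all)[:pop // 2]]              # selection
        parents = elite[rng.integers(0, len(elite), (pop, 2))]
        X = parents.mean(axis=1)                             # crossover
        X += rng.normal(0.0, 0.5, X.shape)                   # mutation
        X = np.clip(X, Pmin, Pmax)
    f_all = np.array([fit(x) for x in X])
    return X[f_all.argmin()], f_all.min()

coeffs = (np.array([5.0, 5.0]), np.array([2.0, 3.0]),
          np.array([0.01, 0.02]), np.array([0.5, 0.4]),
          np.array([3.0, 2.0]))                  # a, b, c, e, f (made up)
Pmin, Pmax = np.array([10.0, 10.0]), np.array([60.0, 60.0])
best, best_cost = ga_dispatch(100.0, Pmin, Pmax, coeffs)
```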

  10. Automated detection and classification of the proximal humerus fracture by using deep learning algorithm.

    Science.gov (United States)

    Chung, Seok Won; Han, Seung Seog; Lee, Ji Whan; Oh, Kyung-Soo; Kim, Na Ra; Yoon, Jong Pil; Kim, Joon Yub; Moon, Sung Hoon; Kwon, Jieun; Lee, Hyo-Jin; Noh, Young-Min; Kim, Youngjun

    2018-03-26

    Background and purpose - We aimed to evaluate the ability of artificial intelligence (a deep learning algorithm) to detect and classify proximal humerus fractures using plain anteroposterior shoulder radiographs. Patients and methods - 1,891 images (1 image per person) of normal shoulders (n = 515) and 4 proximal humerus fracture types (greater tuberosity, 346; surgical neck, 514; 3-part, 269; 4-part, 247) classified by 3 specialists were evaluated. We trained a deep convolutional neural network (CNN) after augmentation of a training dataset. The ability of the CNN, as measured by top-1 accuracy, area under receiver operating characteristics curve (AUC), sensitivity/specificity, and Youden index, in comparison with humans (28 general physicians, 11 general orthopedists, and 19 orthopedists specialized in the shoulder) to detect and classify proximal humerus fractures was evaluated. Results - The CNN showed a high performance of 96% top-1 accuracy, 1.00 AUC, 0.99/0.97 sensitivity/specificity, and 0.97 Youden index for distinguishing normal shoulders from proximal humerus fractures. In addition, the CNN showed promising results with 65-86% top-1 accuracy, 0.90-0.98 AUC, 0.88/0.83-0.97/0.94 sensitivity/specificity, and 0.71-0.90 Youden index for classifying fracture type. When compared with the human groups, the CNN showed superior performance to that of general physicians and orthopedists, similar performance to orthopedists specialized in the shoulder, and the superior performance of the CNN was more marked in complex 3- and 4-part fractures. Interpretation - The use of artificial intelligence can accurately detect and classify proximal humerus fractures on plain shoulder AP radiographs. Further studies are necessary to determine the feasibility of applying artificial intelligence in the clinic and whether its use could improve care and outcomes compared with current orthopedic assessments.

  11. Document localization algorithms based on feature points and straight lines

    Science.gov (United States)

    Skoryukina, Natalya; Shemiakina, Julia; Arlazarov, Vladimir L.; Faradjev, Igor

    2018-04-01

    An important part of a system for planar rectangular object analysis is localization: the estimation of the projective transform from the template image of an object to its photograph. The system also includes subsystems such as the selection and recognition of text fields, the usage of context, etc. In this paper three localization algorithms are described. All algorithms use feature points, and two of them also analyze near-horizontal and near-vertical lines on the photograph. The algorithms and their combinations are tested on a dataset of real document photographs. A method of localization quality estimation is also proposed that allows configuring the localization subsystem independently of the quality of the other subsystems.

  12. A superlinear interior points algorithm for engineering design optimization

    Science.gov (United States)

    Herskovits, J.; Asquier, J.

    1990-01-01

    We present a quasi-Newton interior points algorithm for nonlinear constrained optimization. It is based on a general approach consisting of the iterative solution, in the primal and dual spaces, of the equalities in the Karush-Kuhn-Tucker optimality conditions. This is done in such a way as to have primal and dual feasibility at each iteration, which ensures satisfaction of those optimality conditions at the limit points. This approach is very strong and efficient, since at each iteration it only requires the solution of two linear systems with the same matrix, instead of quadratic programming subproblems. It is also particularly appropriate for engineering design optimization inasmuch as at each iteration a feasible design is obtained. The present algorithm uses a quasi-Newton approximation of the second derivative of the Lagrangian function in order to have superlinear asymptotic convergence. We discuss theoretical aspects of the algorithm and its computer implementation.

  13. Design of Automatic Extraction Algorithm of Knowledge Points for MOOCs

    Directory of Open Access Journals (Sweden)

    Haijian Chen

    2015-01-01

    Full Text Available In recent years, Massive Open Online Courses (MOOCs) have become very popular among college students and have a powerful impact on academic institutions. In the MOOCs environment, knowledge discovery and knowledge sharing are very important, and are currently often achieved by ontology techniques. In building ontologies, automatic extraction technology is crucial. Because general text mining algorithms do not perform well on online courses, we designed an automatic extraction of course knowledge points (AECKP) algorithm for online courses. It includes document classification, Chinese word segmentation, and POS tagging for each document. The Vector Space Model (VSM) is used to calculate similarity, and weights are designed to optimize the TF-IDF algorithm output values; the highest-scoring terms are selected as knowledge points. Course documents of “C programming language” were selected for the experiment in this study. The results show that the proposed approach can achieve a satisfactory accuracy rate and recall rate.
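
    The TF-IDF scoring at the core of the selection step weights a term by its frequency within a document against its rarity across the corpus. A minimal sketch of plain TF-IDF only (the paper's VSM similarity weighting and Chinese segmentation are omitted; the tokens are illustrative):

```python
import math
from collections import Counter

def tfidf_scores(docs):
    """Score each (term, document) pair; high scores mark knowledge-point candidates.
    docs: list of token lists."""
    n = len(docs)
    df = Counter()                       # document frequency of each term
    for doc in docs:
        df.update(set(doc))
    scored = []
    for doc in docs:
        tf = Counter(doc)
        scored.append({term: (count / len(doc)) * math.log(n / df[term])
                       for term, count in tf.items()})
    return scored

scores = tfidf_scores([["pointer", "array", "loop"], ["loop", "loop", "if"]])
```

Terms appearing in every document get an IDF of zero and are never selected, which matches the intuition that ubiquitous words carry no knowledge-point signal.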

  14. Arc-Search Infeasible Interior-Point Algorithm for Linear Programming

    OpenAIRE

    Yang, Yaguang

    2014-01-01

    Mehrotra's algorithm has been the most successful infeasible interior-point algorithm for linear programming since 1990. Most popular interior-point software packages for linear programming are based on Mehrotra's algorithm. This paper proposes an alternative algorithm, arc-search infeasible interior-point algorithm. We will demonstrate, by testing Netlib problems and comparing the test results obtained by arc-search infeasible interior-point algorithm and Mehrotra's algorithm, that the propo...

  15. The Research and Application of SURF Algorithm Based on Feature Point Selection Algorithm

    Directory of Open Access Journals (Sweden)

    Zhang Fang Hu

    2014-04-01

    Full Text Available As the pixel information of a depth image is derived from distance information, mismatched pairs can occur in the palm area when implementing the SURF algorithm with a KINECT sensor for static sign language recognition. This paper proposes a feature point selection algorithm that filters the SURF feature points step by step based on the number of feature points within an adaptive radius r and the distance between two points. It not only greatly improves the recognition rate, but also ensures robustness to environmental factors such as skin color, illumination intensity, complex background, and angle and scale changes. The experimental results show that the improved SURF algorithm can effectively improve the recognition rate and has good robustness.

  16. BPP: a sequence-based algorithm for branch point prediction.

    Science.gov (United States)

    Zhang, Qing; Fan, Xiaodan; Wang, Yejun; Sun, Ming-An; Shao, Jianlin; Guo, Dianjing

    2017-10-15

    Although high-throughput sequencing methods have been proposed to identify splicing branch points in the human genome, these methods can only detect a small fraction of the branch points, subject to the sequencing depth, experimental cost and the expression level of the mRNA. An accurate computational model for branch point prediction is therefore an ongoing objective in human genome research. We here propose a novel branch point prediction algorithm that utilizes information on the branch point sequence and the polypyrimidine tract. Using experimentally validated data, we demonstrate that our proposed method outperforms existing methods. Availability and implementation: https://github.com/zhqingit/BPP. Contact: djguo@cuhk.edu.hk. Supplementary data are available at Bioinformatics online.

  17. Fixed-point image orthorectification algorithms for reduced computational cost

    Science.gov (United States)

    French, Joseph Clinton

    Imaging systems have been applied to many new applications in recent years. With the advent of low-cost, low-power focal planes and more powerful, lower cost computers, remote sensing applications have become more widespread. Many of these applications require some form of geolocation, especially when relative distances are desired. However, when greater global positional accuracy is needed, orthorectification becomes necessary. Orthorectification is the process of projecting an image onto a Digital Elevation Map (DEM), which removes terrain distortions and corrects the perspective distortion by changing the viewing angle to be perpendicular to the projection plane. Orthorectification is used in disaster tracking, landscape management, wildlife monitoring and many other applications. However, orthorectification is a computationally expensive process due to floating point operations and divisions in the algorithm. To reduce the computational cost of on-board processing, two novel algorithm modifications are proposed. One modification is projection utilizing fixed-point arithmetic, which removes the floating point operations and reduces the processing time by operating only on integers. The second modification is replacement of the division inherent in projection with a multiplication by the inverse. The inverse must operate iteratively; therefore, the inverse is replaced with a linear approximation. As a result of these modifications, the processing time of projection is reduced by a factor of 1.3x with an average pixel position error of 0.2% of a pixel size for 128-bit integer processing, and by over 4x with an average pixel position error of less than 13% of a pixel size for 64-bit integer processing. A secondary inverse function approximation is also developed that replaces the linear approximation with a quadratic. The quadratic approximation produces a more accurate approximation of the inverse, allowing for an integer multiplication calculation
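
    The two modifications, fixed-point arithmetic and a linearly approximated reciprocal in place of division, can be sketched together. This is a minimal illustration assuming a Q16.16 format; the bit width and expansion point are illustrative, not taken from the dissertation:

```python
Q = 16                         # assumed Q16.16 fixed-point format
ONE = 1 << Q

def to_fixed(x):
    return int(round(x * ONE))

def from_fixed(x):
    return x / ONE

def fixed_mul(a, b):
    # integer multiply, then shift back down into Q16.16
    return (a * b) >> Q

def fixed_recip_linear(a, x0):
    """Division-free reciprocal: linearize 1/a around a known point x0,
    1/a ~ 2/x0 - a/x0**2, so the runtime cost is one multiply and one subtract."""
    r0 = to_fixed(1.0 / x0)              # precomputed offline
    r0sq = to_fixed(1.0 / (x0 * x0))     # precomputed offline
    return 2 * r0 - fixed_mul(a, r0sq)
```

Dividing by a value near 2.0 then becomes `fixed_mul(numerator, fixed_recip_linear(denominator, 2.0))`, trading a small approximation error for the removal of the divide.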

  18. Optimal Power Flow by Interior Point and Non Interior Point Modern Optimization Algorithms

    Directory of Open Access Journals (Sweden)

    Marcin Połomski

    2013-03-01

    Full Text Available The idea of optimal power flow (OPF) is to determine the optimal settings for control variables while respecting various constraints; in general it is related to power system operational and planning optimization problems. A vast number of optimization methods have been applied to solve the OPF problem, but their performance is highly dependent on the size of the power system being optimized. The development of OPF has recently tracked significant progress both in numerical optimization techniques and in their computer implementation. In recent years, the application of interior point methods to the OPF problem has received great attention. This is due to the fact that IP methods are among the fastest algorithms, well suited to solving large-scale nonlinear optimization problems. This paper presents a primal-dual interior point method based optimal power flow algorithm and a new variant of the non interior point method algorithm, with application to the optimal power flow problem. The described algorithms were implemented in custom software. The experiments show the usefulness of the computational software and the implemented algorithms for solving the optimal power flow problem, including system model sizes comparable to the size of the National Power System.

  19. Face pose tracking using the four-point algorithm

    Science.gov (United States)

    Fung, Ho Yin; Wong, Kin Hong; Yu, Ying Kin; Tsui, Kwan Pang; Kam, Ho Chuen

    2017-06-01

    In this paper, we have developed an algorithm to track the pose of a human face robustly and efficiently. Face pose estimation is very useful in many applications such as building virtual reality systems and creating an alternative input method for the disabled. Firstly, we have modified a face detection toolbox called DLib for the detection of a face in front of a camera. The detected face features are passed to a pose estimation method, known as the four-point algorithm, for pose computation. The theory applied and the technical problems encountered during system development are discussed in the paper. It is demonstrated that the system is able to track the pose of a face in real time using a consumer grade laptop computer.

  20. A maximum power point tracking algorithm for photovoltaic applications

    Science.gov (United States)

    Nelatury, Sudarshan R.; Gray, Robert

    2013-05-01

    The voltage-current characteristic of a photovoltaic (PV) cell is highly nonlinear, and operating a PV cell for maximum power transfer has been a challenge for a long time. Several techniques have been proposed to estimate and track the maximum power point (MPP) in order to improve the overall efficiency of a PV panel. A strategic use of the mean value theorem permits obtaining an analytical expression for a point that lies in a close neighborhood of the true MPP. But hitherto, an exact solution in closed form for the MPP has not been published. The problem can be formulated analytically as a constrained optimization, which can be solved using the Lagrange method. This method results in a system of simultaneous nonlinear equations. Solving them directly is quite difficult. However, we can employ a recursive algorithm to yield a reasonably good solution. In graphical terms, if the voltage-current characteristic and the constant-power contours are plotted on the same voltage-current plane, the point of tangency between the device characteristic and the constant-power contours is the sought-after MPP. It is subject to change with the incident irradiation and temperature, and hence an algorithm that attempts to maintain the MPP should be adaptive in nature, with fast convergence and the least misadjustment. There are two parts to its implementation. First, one needs to estimate the MPP. The second task is to have a DC-DC converter to match the given load to the MPP thus obtained. The availability of power electronics circuits has made it possible to design efficient converters. In this paper, although we do not show results from a real circuit, we use MATLAB to obtain the MPP and a buck-boost converter to match the load. Under varying conditions of load resistance and irradiance we demonstrate MPP tracking for a commercially available solar panel, the MSX-60. The power electronics circuit is simulated with PSIM software.
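
    The tangency condition described above is d(VI)/dV = 0, i.e. I + V·dI/dV = 0. A minimal numeric illustration, scanning a toy polynomial I-V curve (the curve and its parameters are made up for illustration and are not the MSX-60 model):

```python
def mpp_by_scan(i_of_v, v_oc, n=10000):
    """Brute-force the maximum power point of an I-V characteristic."""
    best_v, best_p = 0.0, 0.0
    for j in range(n + 1):
        v = v_oc * j / n
        p = v * i_of_v(v)                # power at this operating voltage
        if p > best_p:
            best_v, best_p = v, p
    return best_v, best_p

# Toy characteristic: current sags polynomially toward the open-circuit voltage
i_sc, v_oc = 3.8, 21.1
v_mpp, p_mpp = mpp_by_scan(lambda v: i_sc * (1.0 - (v / v_oc) ** 8), v_oc)
```

For this toy curve the tangency condition gives V* = Voc·(1/9)^(1/8) in closed form, which the scan reproduces to within its step size.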

  1. Retinal biometrics based on Iterative Closest Point algorithm.

    Science.gov (United States)

    Hatanaka, Yuji; Tajima, Mikiya; Kawasaki, Ryo; Saito, Koko; Ogohara, Kazunori; Muramatsu, Chisako; Sunayama, Wataru; Fujita, Hiroshi

    2017-07-01

    The pattern of blood vessels in the eye is unique to each person and rarely changes over time; it is therefore well known that retinal blood vessels are useful for biometrics. This paper describes a biometrics method using the Jaccard similarity coefficient (JSC) based on blood vessel regions in retinal image pairs. The retinal image pairs were roughly matched by the centers of their optic discs. The image pairs were then aligned using the Iterative Closest Point algorithm based on detailed blood vessel skeletons. For registration, a perspective transform was applied to the retinal images. Finally, the pairs were classified as either correct or incorrect using the JSC of the blood vessel regions in the image pairs. The proposed method was applied to temporal retinal images, which were obtained in 2009 (695 images) and 2013 (87 images). The 87 images acquired in 2013 were all from persons already examined in 2009. The accuracy of the proposed method reached 100%.
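
    The final classification step reduces to the Jaccard similarity coefficient of two registered binary vessel masks, |A∩B| / |A∪B|. A minimal sketch on flattened 0/1 masks (the toy masks are illustrative):

```python
def jaccard(mask_a, mask_b):
    """Jaccard similarity coefficient of two binary masks of equal length."""
    inter = sum(a & b for a, b in zip(mask_a, mask_b))   # |A intersect B|
    union = sum(a | b for a, b in zip(mask_a, mask_b))   # |A union B|
    return inter / union if union else 1.0

# Two registered vessel masks from the same eye overlap heavily
score = jaccard([1, 1, 0, 1, 0], [1, 1, 0, 0, 0])
```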

  2. An algorithm for leak point detection of underground pipelines

    International Nuclear Information System (INIS)

    Lee, Young Sup; Yoon, Dong Jin

    2004-01-01

    Leak noise is a good source for identifying the exact location of a leak point in underground water pipelines. A water leak generates broadband noise at the leak location, which can propagate in both directions along the pipe. However, the need for long-range detection of the leak location makes it necessary to work with low-frequency acoustic waves rather than high-frequency ones. Acoustic wave propagation coupled with surrounding boundaries, including cast iron pipes, was theoretically analyzed, and the wave velocity was confirmed by experiment. The leak locations were identified both by the acoustic emission (AE) method and by the cross-correlation method. Over a short-range distance, both the AE method and the cross-correlation method are effective for detecting the leak position. However, detection over a long-range distance required lower-frequency accelerometers, because higher-frequency waves are attenuated very quickly as the propagation path increases. Two algorithms for the cross-correlation function were suggested, and long-range detection has been achieved on real underground water pipelines longer than 300 m.
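
    The cross-correlation method locates the leak from the arrival-time difference between two sensors bracketing it: with sensors a distance L apart, wave speed v and delay τ, the leak sits at d₁ = (L − v·τ)/2 from sensor 1. A minimal sketch (the sampling rate, speed and geometry below are illustrative):

```python
def xcorr_delay(x, y):
    """Lag (in samples) by which y is delayed relative to x,
    found as the peak of the discrete cross-correlation."""
    n = len(x)
    best_lag, best_val = 0, float("-inf")
    for lag in range(-(n - 1), n):
        s = sum(x[i] * y[i + lag] for i in range(max(0, -lag), min(n, n - lag)))
        if s > best_val:
            best_val, best_lag = s, lag
    return best_lag

def leak_position(lag, fs, wave_speed, sensor_distance):
    """Distance of the leak from sensor 1."""
    tau = lag / fs                                 # delay in seconds
    return (sensor_distance - wave_speed * tau) / 2.0

# A pulse reaching sensor 2 two samples after sensor 1
lag = xcorr_delay([0, 0, 1, 0, 0, 0], [0, 0, 0, 0, 1, 0])
```

With real leak noise the peak is broad, which is why the abstract's low-frequency, long-range case needs careful sensor and bandwidth choices.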

  3. A dual exterior point simplex type algorithm for the minimum cost network flow problem

    Directory of Open Access Journals (Sweden)

    Geranis George

    2009-01-01

    Full Text Available A new dual simplex type algorithm for the Minimum Cost Network Flow Problem (MCNFP) is presented. The proposed algorithm belongs to a special 'exterior-point simplex type' category. Similarly to the classical network dual simplex algorithm (NDSA), this algorithm starts with a dual feasible tree-solution and reduces the primal infeasibility, iteration by iteration. However, contrary to the NDSA, the new algorithm does not always maintain a dual feasible solution. Instead, the new algorithm might reach a basic point (tree-solution) outside the dual feasible area (an exterior point - a dual infeasible tree).

  4. The implement of Talmud property allocation algorithm based on graphic point-segment way

    Science.gov (United States)

    Cen, Haifeng

    2017-04-01

    Under the guidance of the Talmud allocation scheme's theory, this paper analyzes the algorithm's implementation process from the perspective of a graphic point-segment representation, and designs a point-segment Talmud property allocation algorithm. It then uses the Java language to implement the core of the allocation algorithm, building a visual interface with Android programming.
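
    The paper's point-segment construction is not reproduced here, but the rule it implements rests on the classic two-claimant contested garment division underlying the Talmud allocation scheme: each claimant first receives whatever the other concedes, and the remainder is split equally. A minimal sketch of that two-claimant case only:

```python
def contested_garment(estate, claim1, claim2):
    """Two-claimant Talmud (contested garment) division of an estate."""
    conceded_to_1 = max(0.0, estate - claim2)   # part claimant 2 does not contest
    conceded_to_2 = max(0.0, estate - claim1)   # part claimant 1 does not contest
    remainder = estate - conceded_to_1 - conceded_to_2
    return conceded_to_1 + remainder / 2, conceded_to_2 + remainder / 2

# The Talmud's worked example: estate 100, claims 100 and 50
split = contested_garment(100.0, 100.0, 50.0)
```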

  5. Spline Trajectory Algorithm Development: Bezier Curve Control Point Generation for UAVs

    Science.gov (United States)

    Howell, Lauren R.; Allen, B. Danette

    2016-01-01

    A greater need for sophisticated autonomous piloting systems has arisen in direct correlation with the ubiquity of Unmanned Aerial Vehicle (UAV) technology. Whether surveying unknown or unexplored areas of the world, collecting scientific data from regions in which humans are typically incapable of entering, locating lost or wanted persons, or delivering emergency supplies, an unmanned vehicle moving in close proximity to people and other vehicles should fly smoothly and predictably. The mathematical application of spline interpolation can play an important role in autopilots' on-board trajectory planning. Spline interpolation allows for the connection of three-dimensional Euclidean space coordinates through a continuous set of smooth curves. This paper explores the motivation, application, and methodology used to compute the spline control points, which shape the curves in such a way that the autopilot trajectory is able to meet vehicle-dynamics limitations. The spline algorithms developed to generate these curves supply autopilots with the information necessary to compute vehicle paths through a set of coordinate waypoints.
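
    Once the control points are chosen, each path segment is evaluated from them with the standard cubic Bezier polynomial, B(t) = (1−t)³P₀ + 3(1−t)²tP₁ + 3(1−t)t²P₂ + t³P₃. A minimal sketch (the waypoints are illustrative):

```python
def cubic_bezier(p0, p1, p2, p3, t):
    """Point on a cubic Bezier curve at parameter t in [0, 1]."""
    u = 1.0 - t
    b0, b1, b2, b3 = u ** 3, 3 * u * u * t, 3 * u * t * t, t ** 3
    return tuple(b0 * a + b1 * b + b2 * c + b3 * d
                 for a, b, c, d in zip(p0, p1, p2, p3))

# Segment from waypoint (0,0,0) to (3,0,0), shaped by two interior control points
mid = cubic_bezier((0, 0, 0), (1, 2, 0), (2, 2, 0), (3, 0, 0), 0.5)
```

The curve interpolates the two end control points and stays inside the convex hull of all four, which is what lets control-point placement enforce dynamics limits.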

  6. An efficient algorithm to compute subsets of points in ℤ n

    OpenAIRE

    Pacheco Martínez, Ana María; Real Jurado, Pedro

    2012-01-01

    In this paper we show a more efficient algorithm than that in [8] to compute subsets of points non-congruent by isometries. This algorithm can be used to reconstruct the object from the digital image. Both algorithms are compared, highlighting the improvements obtained in terms of CPU time.

  7. The Homogeneous Interior-Point Algorithm: Nonsymmetric Cones, Warmstarting, and Applications

    DEFF Research Database (Denmark)

    Skajaa, Anders

    … algorithms for these problems is still limited. The goal of this thesis is to investigate and shed light on two computational aspects of homogeneous interior-point algorithms for convex conic optimization. The first part studies the possibility of devising a homogeneous interior-point method aimed at solving problems involving constraints that require nonsymmetric cones in their formulation. The second part studies the possibility of warmstarting the homogeneous interior-point algorithm for conic problems. The main outcome of the first part is the introduction of a completely new homogeneous interior-point algorithm designed to solve nonsymmetric convex conic optimization problems. The algorithm is presented in detail and then analyzed. We prove its convergence and complexity. From a theoretical viewpoint, it is fully competitive with other algorithms and from a practical viewpoint, we show that it holds lots …

  8. Error tolerance in an NMR implementation of Grover's fixed-point quantum search algorithm

    International Nuclear Information System (INIS)

    Xiao Li; Jones, Jonathan A.

    2005-01-01

    We describe an implementation of Grover's fixed-point quantum search algorithm on a nuclear magnetic resonance quantum computer, searching for either one or two matching items in an unsorted database of four items. In this algorithm the target state (an equally weighted superposition of the matching states) is a fixed point of the recursive search operator, so that the algorithm always moves towards the desired state. The effects of systematic errors in the implementation are briefly explored

  9. A curvature-based weighted fuzzy c-means algorithm for point clouds de-noising

    Science.gov (United States)

    Cui, Xin; Li, Shipeng; Yan, Xiutian; He, Xinhua

    2018-04-01

    In order to remove the noise of a three-dimensional scattered point cloud and smooth the data without damaging its sharp geometric features, a novel algorithm is proposed in this paper. A feature-preserving weight is added to the fuzzy c-means algorithm, yielding a curvature-weighted fuzzy c-means clustering algorithm. Firstly, large-scale outliers are removed using statistics of the points within a radius r neighborhood. Then, the algorithm estimates the curvature of the point cloud data using a conicoid parabolic fitting method and calculates the curvature feature value. Finally, the proposed clustering algorithm is used to calculate the weighted cluster centers, which are regarded as the new points. The experimental results show that this approach is efficient for different scales and intensities of noise in point clouds, with high precision, while preserving features at the same time. It is also robust to different noise models.
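
    One iteration of weighted fuzzy c-means, the core of the method, can be sketched as a standard FCM membership update followed by a center update in which each point's contribution is scaled by a per-point feature weight (curvature-based in the paper; here the weights and the 1-D data are illustrative):

```python
def weighted_fcm_step(points, weights, centers, m=2.0):
    """One weighted fuzzy c-means iteration on scalar data; returns new centers."""
    eps = 1e-12
    c = len(centers)
    # Membership of point j in cluster i (standard FCM update)
    u = []
    for x in points:
        d = [abs(x - ck) + eps for ck in centers]
        u.append([1.0 / sum((d[i] / d[k]) ** (2.0 / (m - 1.0)) for k in range(c))
                  for i in range(c)])
    # Center update, with each point scaled by its feature weight
    new_centers = []
    for i in range(c):
        num = sum(w * (u[j][i] ** m) * x
                  for j, (x, w) in enumerate(zip(points, weights)))
        den = sum(w * (u[j][i] ** m)
                  for j, (x, w) in enumerate(zip(points, weights)))
        new_centers.append(num / den)
    return new_centers

centers = weighted_fcm_step([0.0, 0.1, 9.9, 10.0], [1.0, 1.0, 1.0, 1.0], [1.0, 9.0])
```

Lowering a noisy point's weight shrinks its pull on the centers, which is how the curvature weighting protects sharp features from being smoothed away.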

  10. A deterministic algorithm for fitting a step function to a weighted point-set

    KAUST Repository

    Fournier, Hervé

    2013-02-01

    Given a set of n points in the plane, each point having a positive weight, and an integer k>0, we present an optimal O(nlogn)-time deterministic algorithm to compute a step function with k steps that minimizes the maximum weighted vertical distance to the input points. It matches the expected time bound of the best known randomized algorithm for this problem. Our approach relies on Cole's improved parametric searching technique. As a direct application, our result yields the first O(nlogn)-time algorithm for computing a k-center of a set of n weighted points on the real line. © 2012 Elsevier B.V.

  11. A Feedback Optimal Control Algorithm with Optimal Measurement Time Points

    Directory of Open Access Journals (Sweden)

    Felix Jost

    2017-02-01

    Full Text Available Nonlinear model predictive control has been established over the last decades as a powerful methodology for providing feedback for dynamic processes. In practice it is usually combined with parameter and state estimation techniques, which allows one to cope with uncertainty on many levels. To reduce the uncertainty, it has also been suggested to include optimal experimental design in the sequential process of estimation and control calculation. Most of the focus so far has been on dual control approaches, i.e., on using the controls to simultaneously excite the system dynamics (learning) as well as minimize a given objective (performing). We propose a new algorithm, which sequentially solves robust optimal control, optimal experimental design, and state and parameter estimation problems. Thus, we decouple the control and the experimental design problems. This has the advantages that we can analyze the impact of measurement timing (sampling) independently, and it is practically relevant for applications with either an ethical limitation on system excitation (e.g., chemotherapy treatment) or the need for fast feedback. The algorithm shows promising results, with a 36% reduction of parameter uncertainties for the Lotka-Volterra fishing benchmark example.

  12. Study on Huizhou architecture of point cloud registration based on optimized ICP algorithm

    Science.gov (United States)

    Zhang, Runmei; Wu, Yulu; Zhang, Guangbin; Zhou, Wei; Tao, Yuqian

    2018-03-01

    In view of the fact that current point cloud registration software has high hardware requirements and a heavy workload, requires multiple interactive definitions, and that the source of the software with better processing results is not open, a two-step registration method based on a normal vector distribution feature and a coarse-feature-based iterative closest point (ICP) algorithm is proposed in this paper. The method combines the fast point feature histogram (FPFH) algorithm with a calculation model of the adjacency region of the point cloud and the distribution of normal vectors, sets up a local coordinate system for each key point, and obtains the transformation matrix to finish rough registration; the rough registration results of the two stations are then accurately registered using the ICP algorithm. Experimental results show that, compared with the traditional ICP algorithm, the method used in this paper has obvious time and precision advantages for large point clouds.

  13. A Threshold-Free Filtering Algorithm for Airborne LIDAR Point Clouds Based on Expectation-Maximization

    Science.gov (United States)

    Hui, Z.; Cheng, P.; Ziggah, Y. Y.; Nie, Y.

    2018-04-01

    Filtering is a key step for most applications of airborne LiDAR point clouds. Although many filtering algorithms have been put forward in recent years, most of them suffer from parameter setting or threshold adjusting, which is time-consuming and reduces the degree of automation of the algorithm. To overcome this problem, this paper proposes a threshold-free filtering algorithm based on expectation-maximization. The proposed algorithm is developed under the assumption that the point cloud is a mixture of Gaussian models, so that the separation of ground points and non-ground points can be recast as the separation of a mixed Gaussian model. Expectation-maximization (EM) is applied to realize the separation: EM is used to calculate maximum likelihood estimates of the mixture parameters, and with the estimated parameters, the likelihood of each point belonging to ground or object can be computed. After several iterations, each point is labelled with the component of larger likelihood. Furthermore, intensity information is utilized to optimize the filtering results acquired by the EM method. The proposed algorithm was tested using two different datasets used in practice. Experimental results showed that the proposed method can filter non-ground points effectively. To quantitatively evaluate the proposed method, this paper adopted the dataset provided by the ISPRS for the test. The proposed algorithm obtains a 4.48% total error, which is much lower than most of the eight classical filtering algorithms reported by the ISPRS.
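
    The EM separation described above can be sketched on 1-D data (e.g. point heights) with a two-component Gaussian mixture; the toy heights below are illustrative and the intensity-based refinement is omitted:

```python
import math

def em_two_gaussians(x, iters=50):
    """EM for a two-component 1-D Gaussian mixture.
    Returns mixture weights, means, variances and per-point responsibilities."""
    mu = [min(x), max(x)]                    # crude initialization
    var = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        r = []
        for xi in x:
            p = [pi[k] / math.sqrt(2.0 * math.pi * var[k])
                 * math.exp(-(xi - mu[k]) ** 2 / (2.0 * var[k])) for k in range(2)]
            s = sum(p)
            r.append([pk / s for pk in p])
        # M-step: maximum-likelihood parameter updates
        for k in range(2):
            nk = sum(ri[k] for ri in r)
            pi[k] = nk / len(x)
            mu[k] = sum(ri[k] * xi for ri, xi in zip(r, x)) / nk
            var[k] = max(1e-6, sum(ri[k] * (xi - mu[k]) ** 2
                                   for ri, xi in zip(r, x)) / nk)
    return pi, mu, var, r

# Heights clustered near the ground (~0 m) and on objects (~5 m)
pi, mu, var, resp = em_two_gaussians([0.0, 0.1, -0.1, 0.05, 5.0, 5.1, 4.9])
```

Each point is then labelled ground or object according to the larger of its two responsibilities, which is the threshold-free labelling step the abstract describes.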

  14. Performance Comparison of Widely-Used Maximum Power Point Tracker Algorithms under Real Environmental Conditions

    Directory of Open Access Journals (Sweden)

    DURUSU, A.

    2014-08-01

    Full Text Available Maximum power point trackers (MPPTs) play an essential role in extracting power from photovoltaic (PV) panels, as they make the solar panels operate at the maximum power point (MPP) whatever the changes in environmental conditions are. For this reason, they take an important place in increasing PV system efficiency. MPPTs are driven by MPPT algorithms, and a number of MPPT algorithms have been proposed in the literature. Comparisons of MPPT algorithms in the literature are made with sun-simulator-based test systems under laboratory conditions for short durations. In this study, however, the performances of the four most commonly used MPPT algorithms are compared under real environmental conditions for longer periods. A dual identical experimental setup is designed to compare two of the considered MPPT algorithms synchronously. As a result of this study, the ranking among these algorithms is presented, and the results show that the Incremental Conductance (IC) algorithm gives the best performance.

  15. Geographical proximity on the valuations of unlisted agrarian companies: Does distance from company to company and to strategic points matter?

    Energy Technology Data Exchange (ETDEWEB)

    Occhino, P.; Maté, M.

    2017-07-01

    This paper is a first attempt to examine the role played by geography in agrarian firms' valuations. Geography was evaluated through the physical proximity from agrarian companies to other companies and to some strategic points which ease their accessibility to external economic agents. To this end, we developed an empirical application on a sample of non-listed agrarian Spanish companies located in the region of Murcia over the period 2010-2015. We applied Discounted Cash Flow methodology for non-listed companies to obtain their valuations. With this information, we used spatial econometric techniques to analyse the spatial distribution of agrarian firms' valuations and to model the behavior of this variable. Our results support the assertion that agrarian firms' valuations are conditioned by geography. We found that firms with similar valuations tend to be grouped together in the territory. In addition, we found significant effects on agrarian firms' valuations derived from the geographical proximity among closer agrarian companies and from them to external agents and transport facilities.

  17. A three-point Taylor algorithm for three-point boundary value problems

    NARCIS (Netherlands)

    J.L. López; E. Pérez Sinusía; N.M. Temme (Nico)

    2011-01-01

    We consider second-order linear differential equations $\varphi(x)y''+f(x)y'+g(x)y=h(x)$ in the interval $(-1,1)$ with Dirichlet, Neumann or mixed Dirichlet-Neumann boundary conditions given at three points of the interval: the two extreme points $x=\pm 1$ and an interior point

  18. Intensively exploited Mediterranean aquifers: resilience and proximity to critical points of seawater intrusion

    Science.gov (United States)

    Mazi, K.; Koussis, A. D.; Destouni, G.

    2013-11-01

    We investigate here seawater intrusion in three prominent Mediterranean aquifers that are subject to intensive exploitation and hydrologic regimes modified by human activities: the Nile Delta Aquifer, the Israel Coastal Aquifer and the Cyprus Akrotiri Aquifer. Using a generalized analytical sharp-interface model, we review the salinization history and current status of these aquifers, and quantify their resilience/vulnerability to current and future sea intrusion forcings. We identify two different critical limits of sea intrusion under groundwater exploitation and/or climatic stress: a limit of well intrusion, at which intruded seawater reaches key locations of groundwater pumping, and a tipping point of complete sea intrusion up to the prevailing groundwater divide of a coastal aquifer. Either limit can be reached, and ultimately crossed, under intensive aquifer exploitation and/or climate-driven change. We show that sea intrusion vulnerability for different aquifer cases can be directly compared in terms of normalized intrusion performance curves. The site-specific assessments show that the advance of seawater currently seriously threatens the Nile Delta Aquifer and the Israel Coastal Aquifer. The Cyprus Akrotiri Aquifer is currently somewhat less threatened by increased seawater intrusion.

  19. An Improvement of a Fuzzy Logic-Controlled Maximum Power Point Tracking Algorithm for Photovoltaic Applications

    Directory of Open Access Journals (Sweden)

    Woonki Na

    2017-03-01

    This paper presents an improved maximum power point tracking (MPPT) algorithm using a fuzzy logic controller (FLC) in order to extract the maximum potential power from photovoltaic cells. The objectives of the proposed algorithm are to improve the tracking speed and simultaneously remedy the inherent drawbacks, such as slow tracking, of the conventional perturb and observe (P&O) algorithm. The performance of the conventional P&O algorithm and the proposed algorithm is compared using MATLAB/Simulink in terms of tracking speed and steady-state oscillations. Additionally, both algorithms were experimentally validated through a digital signal processor (DSP)-based controlled boost DC-DC converter. The experimental results show that the proposed algorithm achieves a shorter tracking time, smaller output power oscillation, and higher efficiency than the conventional P&O algorithm.
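For context, the conventional fixed-step P&O baseline that such papers improve upon fits in a few lines: perturb the operating voltage, and reverse direction whenever the measured power drops. The PV power curve below is a toy stand-in, not a cell model from the paper.

```python
def perturb_and_observe(pv_power, v0=10.0, dv=0.1, steps=200):
    """Conventional fixed-step P&O: perturb the operating voltage and keep
    moving in the direction that last increased the measured power."""
    v, p_prev, direction = v0, pv_power(v0), 1.0
    for _ in range(steps):
        v += direction * dv
        p = pv_power(v)
        if p < p_prev:              # power dropped: reverse the perturbation
            direction = -direction
        p_prev = p
    return v

# Toy concave P-V curve (assumed for illustration) with its peak at 17 V
mpp_v = perturb_and_observe(lambda v: -(v - 17.0) ** 2 + 100.0)
```

The fixed step is exactly the trade-off the abstract describes: a large `dv` tracks quickly but oscillates around the peak; a small `dv` is slow under changing irradiance.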

  20. A primal-dual exterior point algorithm for linear programming problems

    Directory of Open Access Journals (Sweden)

    Samaras Nikolaos

    2009-01-01

    The aim of this paper is to present a new simplex-type algorithm for the Linear Programming Problem. The Primal-Dual method is a simplex-type pivoting algorithm that generates two paths in order to converge to the optimal solution. The first path is primal feasible, while the second is dual feasible for the original problem. Specifically, a three-phase implementation is used. The first two phases construct the required primal and dual feasible solutions using the Primal Simplex algorithm. Finally, in the third phase the Primal-Dual algorithm is applied. Moreover, a computational study has been carried out, using randomly generated sparse optimal linear problems, to compare the algorithm's computational efficiency with the Primal Simplex algorithm and with MATLAB's Interior Point Method (IPM) implementation. The algorithm appears very promising, since it clearly shows its superiority to the Primal Simplex algorithm as well as its robustness relative to the IPM algorithm.
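The problem class the algorithm targets is easy to exercise with an off-the-shelf solver. The sketch below solves a tiny textbook LP with SciPy's HiGHS backend — this is not the authors' exterior point method, only an illustration of the primal problem setup the paper's three phases operate on.

```python
from scipy.optimize import linprog
import numpy as np

# Maximize 3x + 2y subject to x + y <= 4, x + 3y <= 6, x, y >= 0.
# linprog minimizes, so the objective is negated.
c = [-3.0, -2.0]
A_ub = [[1.0, 1.0], [1.0, 3.0]]
b_ub = [4.0, 6.0]
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None), (0, None)], method="highs")
# Optimum is the vertex (4, 0) with objective value 12.
```

Enumerating the feasible vertices (0,0), (4,0), (0,2) and (3,1) confirms the maximum of 12 at (4,0), which any of the simplex-type or interior point methods discussed above must reach.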

  1. A maximum power point tracking algorithm for buoy-rope-drum wave energy converters

    Science.gov (United States)

    Wang, J. Q.; Zhang, X. C.; Zhou, Y.; Cui, Z. C.; Zhu, L. S.

    2016-08-01

    Maximum power point tracking control is the key link in improving the energy conversion efficiency of wave energy converters (WEC). This paper presents a novel variable-step-size Perturb and Observe maximum power point tracking algorithm with a power classification standard for control of a buoy-rope-drum WEC. The algorithm and the simulation model of the buoy-rope-drum WEC are presented in detail, along with simulation results. The results show that the algorithm tracks the maximum power point of the WEC quickly and accurately.
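A variable-step P&O scheme of the general kind described can be sketched as follows. The step-size law (perturbation proportional to the observed power slope, with a floor and a cap) and the toy power curve are illustrative assumptions, not the paper's power-classification standard.

```python
def variable_step_po(power, v0=5.0, base_step=0.5, gain=0.02,
                     min_step=1e-3, steps=300):
    """Variable-step P&O: scale the perturbation by the observed power
    slope, so tracking is fast far from the peak while the steady-state
    oscillation near the peak stays small."""
    v_prev, p_prev = v0, power(v0)
    v = v0 + base_step
    for _ in range(steps):
        p = power(v)
        dv, dp = v - v_prev, p - p_prev
        slope = dp / dv if dv != 0 else 0.0
        step = min(base_step, max(gain * abs(slope), min_step))
        direction = 1.0 if slope >= 0 else -1.0   # climb the P-V curve
        v_prev, p_prev = v, p
        v += direction * step
    return v

# Toy concave power curve (assumed for illustration) with its peak at 12 V
v_mpp = variable_step_po(lambda v: -(v - 12.0) ** 2 + 50.0)
```

Far from the peak the slope is large, so the step saturates at `base_step`; near the peak the slope vanishes and the perturbation shrinks toward `min_step`, which is the behavior the abstract claims for the variable-step controller.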

  2. The algorithm to generate color point-cloud with the registration between panoramic image and laser point-cloud

    International Nuclear Information System (INIS)

    Zeng, Fanyang; Zhong, Ruofei

    2014-01-01

    Laser point clouds contain only intensity information, so color information must be obtained from another sensor for visual interpretation. Cameras can provide texture, color, and other information about the corresponding object. Points colored by the corresponding pixels in digital images can be used to generate a color point-cloud, which aids the visualization, classification and modeling of the point-cloud. Different types of digital cameras are used in different Mobile Measurement Systems (MMS); the principles and processes for generating a color point-cloud therefore differ between systems. The most prominent feature of panoramic images is the 360-degree field of view in the horizontal direction, which captures as much of the image information around the camera as possible. In this paper, we introduce a method to generate a color point-cloud from a panoramic image and a laser point-cloud, and derive the equation of the correspondence between points in panoramic images and laser point-clouds. The fusion of the panoramic image and the laser point-cloud is based on the collinearity of three points: the center of the omnidirectional multi-camera system, the image point on the sphere, and the object point. The experimental results show that the proposed algorithm and formulae in this paper are correct.
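The point-to-pixel correspondence for an ideal equirectangular panorama can be sketched as below: the camera center, the point on the unit sphere and the object point are collinear, so the object direction alone fixes the pixel. The longitude/latitude convention and image layout are common assumptions and may differ from the authors' omnidirectional camera model.

```python
import numpy as np

def project_to_panorama(point, cam_center, width, height):
    """Map a 3D point to equirectangular panorama pixel coordinates using
    the collinearity of camera center, sphere point and object point
    (ideal spherical camera assumed, z-axis up)."""
    d = np.asarray(point, float) - np.asarray(cam_center, float)
    r = np.linalg.norm(d)
    lon = np.arctan2(d[1], d[0])              # azimuth in (-pi, pi]
    lat = np.arcsin(d[2] / r)                 # elevation in [-pi/2, pi/2]
    u = (lon + np.pi) / (2 * np.pi) * width   # column
    v = (np.pi / 2 - lat) / np.pi * height    # row, top row = zenith
    return u, v

# A point on the +y axis at the horizon lands 3/4 of the way across the image
u, v = project_to_panorama([0.0, 5.0, 0.0], [0.0, 0.0, 0.0], 4096, 2048)
```

Coloring a scan is then a lookup of each laser point's `(u, v)` pixel, after the point is transformed into the panoramic camera frame.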

  3. A density based algorithm to detect cavities and holes from planar points

    Science.gov (United States)

    Zhu, Jie; Sun, Yizhong; Pang, Yueyong

    2017-12-01

    Delaunay-based shape reconstruction algorithms are widely used to approximate the shape of a set of planar points. However, these algorithms cannot ensure the optimality of the varied reconstructed cavity and hole boundaries. This inadequate reconstruction can be attributed primarily to the lack of an efficient mathematical formulation for the two structures (hole and cavity). In this paper, we develop an efficient algorithm for generating cavities and holes from planar points. The algorithm yields the final boundary by iteratively removing triangles from the Delaunay triangulation. It is divided into two steps, namely rough and refined shape reconstruction. The rough shape reconstruction is controlled by a relative parameter. Based on the rough result, the refined shape reconstruction aims to detect holes and pure cavities. A cavity or hole is conceptualized as a low-density region surrounded by a high-density region, and is characterized by a mathematical measure called the compactness of a point, formed from the length variation of the edges incident to the point in the Delaunay triangulation. The boundaries of cavities and holes are then found by locating a sharp gradient change in the compactness of the point set. An experimental comparison with other shape reconstruction approaches shows that the proposed algorithm accurately yields the boundaries of cavities and holes over varying point-set densities and distributions.
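The idea of iteratively removing Delaunay triangles to expose holes can be illustrated with a simpler long-edge criterion (an alpha-shape-style rule, not the authors' compactness measure): triangles whose edges span a low-density region are discarded, and the hole emerges as the removed region.

```python
import numpy as np
from scipy.spatial import Delaunay

def filter_triangles(points, max_edge):
    """Keep only Delaunay triangles whose edges are all shorter than
    max_edge; the removed triangles expose cavities and holes."""
    tri = Delaunay(points)
    keep = []
    for simplex in tri.simplices:
        p = points[simplex]
        edges = [np.linalg.norm(p[i] - p[j])
                 for i, j in ((0, 1), (1, 2), (0, 2))]
        if max(edges) < max_edge:
            keep.append(simplex)
    return tri, np.array(keep)

# Dense annulus of points with an empty middle: the central hole is
# triangulated by long-edged triangles, which the filter removes.
theta = np.linspace(0, 2 * np.pi, 100, endpoint=False)
ring = np.c_[np.cos(theta), np.sin(theta)]
pts = np.vstack([ring, 1.3 * ring])
tri, kept = filter_triangles(pts, max_edge=0.5)
```

The compactness measure of the paper plays the same role as `max_edge` here, but adapts to local density instead of using one global threshold.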

  4. APPLICABILITY ANALYSIS OF CLOTH SIMULATION FILTERING ALGORITHM FOR MOBILE LIDAR POINT CLOUD

    Directory of Open Access Journals (Sweden)

    S. Cai

    2018-04-01

    Classifying the original point clouds into ground and non-ground points is a key step in LiDAR (light detection and ranging) data post-processing. The cloth simulation filtering (CSF) algorithm, which is based on a physical process, has been validated as an accurate, automatic and easy-to-use algorithm for airborne LiDAR point clouds. As a new three-dimensional data collection technique, mobile laser scanning (MLS) has gradually been applied in various fields, such as reconstruction of digital terrain models (DTM), 3D building modeling, and forest inventory and management. Compared with airborne LiDAR point clouds, mobile LiDAR point clouds differ in point density, distribution and complexity. Some filtering algorithms for airborne LiDAR data have been applied directly to mobile LiDAR point clouds, but they did not give satisfactory results. In this paper, we explore the ability of the CSF algorithm on mobile LiDAR point clouds. Three samples with different terrain shapes are selected to test the performance of the algorithm, which yields total errors of 0.44 %, 0.77 % and 1.20 %, respectively. Additionally, a large-area dataset is tested to further validate the effectiveness of the algorithm, and the results show that it can quickly and accurately separate point clouds into ground and non-ground points. In summary, this algorithm is efficient and reliable for mobile LiDAR point clouds.
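The error figures quoted above follow the standard ISPRS filter-evaluation convention, which is straightforward to compute: Type I counts ground points wrongly rejected, Type II counts non-ground points wrongly accepted. The tiny label vectors below are made up for illustration.

```python
import numpy as np

def filtering_errors(truth_ground, pred_ground):
    """ISPRS-style filter metrics. truth_ground / pred_ground are boolean
    masks marking which points are (classified as) ground."""
    truth = np.asarray(truth_ground, bool)
    pred = np.asarray(pred_ground, bool)
    t1 = np.sum(truth & ~pred) / max(np.sum(truth), 1)    # Type I error
    t2 = np.sum(~truth & pred) / max(np.sum(~truth), 1)   # Type II error
    total = np.sum(truth != pred) / truth.size            # total error
    return t1, t2, total

# 5 points: one ground point rejected, one non-ground point accepted
t1, t2, tot = filtering_errors([1, 1, 1, 0, 0], [1, 1, 0, 0, 1])
```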

  5. Artificial Neural Network In Maximum Power Point Tracking Algorithm Of Photovoltaic Systems

    Directory of Open Access Journals (Sweden)

    Modestas Pikutis

    2014-05-01

    Scientists are constantly looking for ways to improve the efficiency of solar cells. The efficiency of solar cells available to the general public is up to 20 %. Part of the solar energy is unused, and the capacity of a solar power plant is significantly reduced if a slow controller, or one that cannot stay at the maximum power point of the solar modules, is used. Various maximum power point tracking algorithms have been created, but most are slow or make mistakes. In the literature, artificial neural networks (ANN) are increasingly mentioned for the maximum power point tracking process in order to improve controller performance. A self-learning artificial neural network and the IncCond algorithm were used for maximum power point tracking in the created solar power plant model, and the control algorithm was designed. The solar power plant model is implemented in the Matlab/Simulink environment.
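The IncCond (incremental conductance) rule mentioned above follows from dP/dV = I + V·dI/dV = 0 at the maximum power point, i.e. dI/dV = −I/V. A minimal sketch of the resulting direction logic, with a toy linear I–V characteristic assumed for the example values:

```python
def inc_cond_step(v, i, dv, di, step=0.1):
    """Incremental conductance MPPT: compare the incremental conductance
    dI/dV with -I/V to decide which way to perturb the voltage."""
    if dv == 0:
        return step if di > 0 else (-step if di < 0 else 0.0)
    inc = di / dv                  # incremental conductance dI/dV
    inst = -i / v                  # negative instantaneous conductance -I/V
    if inc > inst:                 # left of the MPP: dP/dV > 0, raise V
        return step
    if inc < inst:                 # right of the MPP: dP/dV < 0, lower V
        return -step
    return 0.0                     # at the MPP: hold

# Toy characteristic I(V) = 5 - 0.2 V  =>  P = 5V - 0.2 V^2, MPP at 12.5 V
d_left = inc_cond_step(v=10.0, i=3.0, dv=0.1, di=-0.02)    # below the MPP
d_right = inc_cond_step(v=15.0, i=2.0, dv=0.1, di=-0.02)   # above the MPP
```

Unlike plain P&O, the rule can in principle detect the exact MPP (`inc == inst`) and stop perturbing, which is why it is a popular base for ANN-assisted trackers.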

  6. Finite Element Modelling of a Field-Sensed Magnetic Suspended System for Accurate Proximity Measurement Based on a Sensor Fusion Algorithm with Unscented Kalman Filter.

    Science.gov (United States)

    Chowdhury, Amor; Sarjaš, Andrej

    2016-09-15

    The presented paper describes accurate distance measurement for a field-sensed magnetic suspension system. The proximity measurement is based on a Hall effect sensor. The proximity sensor is installed directly on the lower surface of the electro-magnet, which means that it is very sensitive to external magnetic influences and disturbances. External disturbances interfere with the information signal and reduce the usability and reliability of the proximity measurements and, consequently, the whole application operation. A sensor fusion algorithm is deployed for the aforementioned reasons. The sensor fusion algorithm is based on the Unscented Kalman Filter, where a nonlinear dynamic model was derived with the Finite Element Modelling approach. The advantage of such modelling is a more accurate dynamic model parameter estimation, especially in the case when the real structure, materials and dimensions of the real-time application are known. The novelty of the paper is the design of a compact electro-magnetic actuator with a built-in low cost proximity sensor for accurate proximity measurement of the magnetic object. The paper successively presents a modelling procedure with the finite element method, design and parameter settings of a sensor fusion algorithm with Unscented Kalman Filter and, finally, the implementation procedure and results of real-time operation.
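The core of the Unscented Kalman Filter used above is the unscented transform: propagate a small set of deterministically chosen sigma points through the nonlinearity and re-estimate the mean and covariance from them. A minimal sketch, with common textbook scaling defaults rather than the paper's tuning:

```python
import numpy as np

def unscented_transform(mean, cov, f, alpha=1.0, beta=2.0, kappa=None):
    """Propagate a Gaussian (mean, cov) through a nonlinear map f using
    2n+1 sigma points (the prediction step of a UKF)."""
    n = len(mean)
    if kappa is None:
        kappa = 3.0 - n                    # a common heuristic choice
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * cov)
    sigma = np.vstack([mean, mean + S.T, mean - S.T])
    wm = np.full(2 * n + 1, 1.0 / (2.0 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = wm[0] + (1.0 - alpha**2 + beta)
    y = np.array([f(s) for s in sigma])    # transformed sigma points
    y_mean = wm @ y
    d = y - y_mean
    y_cov = (wc[:, None] * d).T @ d
    return y_mean, y_cov

# Sanity check: for a linear map y = A x the transform is exact,
# giving mean A m and covariance A P A^T.
A = np.array([[1.0, 0.5], [0.0, 1.0]])
m, P = np.array([1.0, 2.0]), np.eye(2)
ym, yP = unscented_transform(m, P, lambda x: A @ x)
```

In the sensor-fusion setting of the paper, `f` would be the nonlinear FEM-derived dynamic or measurement model rather than a linear map.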

  7. A homogeneous interior-point algorithm for nonsymmetric convex conic optimization

    DEFF Research Database (Denmark)

    Skajaa, Anders; Ye, Yinyu

    2014-01-01

    -centered primal–dual point. Features of the algorithm include that it makes use only of the primal barrier function, that it is able to detect infeasibilities in the problem and that no phase-I method is needed. We prove convergence to ε-accuracy in O(√ν log(1/ε)) iterations. To improve performance, the algorithm employs

  8. A New Numerical Algorithm for Two-Point Boundary Value Problems

    OpenAIRE

    Guo, Lihua; Wu, Boying; Zhang, Dazhi

    2014-01-01

    We present a new numerical algorithm for two-point boundary value problems. We first present the exact solution in the form of series and then prove that the n-term numerical solution converges uniformly to the exact solution. Furthermore, we establish the numerical stability and error analysis. The numerical results show the effectiveness of the proposed algorithm.

  9. A Primal-Dual Interior Point-Linear Programming Algorithm for MPC

    DEFF Research Database (Denmark)

    Edlund, Kristian; Sokoler, Leo Emil; Jørgensen, John Bagterp

    2009-01-01

    Constrained optimal control problems for linear systems with linear constraints and an objective function consisting of linear and l1-norm terms can be expressed as linear programs. We develop an efficient primal-dual interior point algorithm for solution of such linear programs. The algorithm...

  10. Parallel algorithm for dominant points correspondences in robot binocular stereo vision

    Science.gov (United States)

    Al-Tammami, A.; Singh, B.

    1993-01-01

    This paper presents an algorithm to find the correspondences of dominant feature points in robot stereo vision. The algorithm consists of two main steps: dominant point extraction and dominant point matching. In the feature extraction phase, the algorithm utilizes the widely used Moravec Interest Operator and two other operators: the Prewitt Operator and a new operator called the Gradient Angle Variance Operator. The Interest Operator in the Moravec algorithm was used to exclude featureless areas and simple edges oriented in the vertical, horizontal, and two diagonal directions. It incorrectly detected points on edges that are not in those four main directions. The new algorithm uses the Prewitt operator to exclude featureless areas, so that the Interest Operator is applied only on the edges to exclude simple edges and to leave interesting points. This modification speeds up the extraction process by approximately 5 times. The Gradient Angle Variance (GAV), an operator which calculates the variance of the gradient angle in a window around the point under concern, is then applied on the interesting points to exclude the redundant ones and leave the actual dominant ones. The matching phase is performed after the extraction of the dominant points in both stereo images. The matching starts with dominant points in the left image and does a local search, looking for corresponding dominant points in the right image. The search is geometrically constrained by the epipolar line of the parallel-axes stereo geometry and by the maximum disparity of the application environment. If one dominant point in the right image lies in the search area, then it is the corresponding point of the reference dominant point in the left image. A parameter provided by the GAV is thresholded and used as a rough similarity measure to select the corresponding dominant point if there is more than one point in the search area. 
The correlation is used as
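The Gradient Angle Variance operator described above — the variance of the gradient direction in a window around a candidate point — can be sketched as follows. The window size and the synthetic corner/edge patches are illustrative assumptions; a corner mixes several gradient directions (high variance), while a straight edge has a single direction (near-zero variance).

```python
import numpy as np

def gradient_angle_variance(img, y, x, half=2):
    """Variance of the gradient direction in a (2*half+1)^2 window around
    (y, x); the window is padded by one pixel so the inner gradients use
    central differences."""
    win = img[y - half - 1: y + half + 2,
              x - half - 1: x + half + 2].astype(float)
    gy, gx = np.gradient(win)
    ang = np.arctan2(gy, gx)[1:-1, 1:-1]    # angles on the inner window
    return np.var(ang)

# Synthetic 11x11 patches: a corner and a vertical step edge
corner = np.zeros((11, 11)); corner[5:, 5:] = 1.0
edge = np.zeros((11, 11)); edge[:, 5:] = 1.0
```

Thresholding this variance is what lets the matcher keep corner-like dominant points while discarding points that merely lie on an edge.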

  11. Design and Implementation of Numerical Linear Algebra Algorithms on Fixed Point DSPs

    Directory of Open Access Journals (Sweden)

    Gene Frantz

    2007-01-01

    Numerical linear algebra algorithms use the inherent elegance of matrix formulations and are usually implemented using C/C++ floating-point representation. The system implementation faces practical constraints, because these algorithms usually need to run in real time on fixed-point digital signal processors (DSPs) to reduce total hardware costs. Converting the simulation model to fixed-point arithmetic and then porting it to a target DSP device is a difficult and time-consuming process. In this paper, we analyze the conversion process. We transformed selected linear algebra algorithms from floating-point to fixed-point arithmetic, and compared real-time requirements and performance between the fixed-point DSP and floating-point DSP algorithm implementations. We also introduce advanced code optimization and an implementation via DSP-specific, fixed-point C code generation. By using the techniques described in the paper, speed can be increased by a factor of up to 10 compared to floating-point emulation on fixed-point hardware.
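The floating-to-fixed-point conversion at the heart of such a port can be sketched with the common Q15 format (16-bit words, 15 fraction bits). This is a generic illustration of the arithmetic, not the paper's code-generation flow.

```python
import numpy as np

def to_q15(x):
    """Quantize floats in [-1, 1) to Q15 fixed point (int16, 15 fraction bits)."""
    return np.clip(np.round(np.asarray(x) * 32768),
                   -32768, 32767).astype(np.int16)

def q15_mul(a, b):
    """Fractional multiply as on a fixed-point DSP: take the 32-bit
    product, then shift right by 15 to restore the Q15 scaling."""
    return ((a.astype(np.int32) * b.astype(np.int32)) >> 15).astype(np.int16)

a = to_q15([0.5, -0.25])
b = to_q15([0.5, 0.5])
prod = q15_mul(a, b)          # Q15 representations of 0.25 and -0.125
back = prod / 32768.0         # convert back to float for inspection
```

Every multiply-accumulate in a matrix kernel goes through this widen-multiply-shift pattern, which is why the conversion and its overflow analysis dominate the porting effort.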

  12. Symbiotic organisms search algorithm for dynamic economic dispatch with valve-point effects

    Science.gov (United States)

    Sonmez, Yusuf; Kahraman, H. Tolga; Dosoglu, M. Kenan; Guvenc, Ugur; Duman, Serhat

    2017-05-01

    In this study, the symbiotic organisms search (SOS) algorithm is proposed to solve the dynamic economic dispatch problem with valve-point effects, one of the most important problems of the modern power system. Practical constraints such as valve-point effects, ramp rate limits and prohibited operating zones have been considered in the solutions. The proposed algorithm was tested on five different test cases in 5-unit, 10-unit and 13-unit systems. The obtained results have been compared with other well-known metaheuristic methods reported previously. The results show that the proposed algorithm has good convergence and produces better results than the other methods.
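The valve-point effect enters the dispatch objective through a rectified-sine term added to the usual quadratic fuel cost, which is what makes the problem non-convex and motivates metaheuristics like SOS. A sketch of the standard cost model, with made-up coefficients:

```python
import numpy as np

def fuel_cost(p, a, b, c, e, f, p_min):
    """Generator fuel cost with valve-point effects: the rectified sine
    superimposes ripples on the smooth quadratic cost curve."""
    return a + b * p + c * p**2 + np.abs(e * np.sin(f * (p_min - p)))

# Illustrative coefficients (not taken from the paper)
cost = fuel_cost(np.array([100.0, 150.0]), a=500.0, b=5.3, c=0.004,
                 e=200.0, f=0.042, p_min=50.0)
```

Each ripple of the sine term creates a local minimum, so gradient-based dispatch methods stall while population-based searches can keep exploring.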

  13. Generation of a statistical shape model with probabilistic point correspondences and the expectation maximization-iterative closest point algorithm

    International Nuclear Information System (INIS)

    Hufnagel, Heike; Pennec, Xavier; Ayache, Nicholas; Ehrhardt, Jan; Handels, Heinz

    2008-01-01

    Identification of point correspondences between shapes is required for statistical analysis of organ shape differences. Since manual identification of landmarks is not a feasible option in 3D, several methods were developed to automatically find one-to-one correspondences on shape surfaces. For unstructured point sets, however, one-to-one correspondences do not exist, but correspondence probabilities can be determined. A method was developed to compute a statistical shape model based on shapes which are represented by unstructured point sets with arbitrary point numbers. A fundamental problem when computing statistical shape models is the determination of correspondences between the points of the shape observations of the training data set. In the absence of landmarks, exact correspondences can only be determined between continuous surfaces, not between unstructured point sets. To overcome this problem, we introduce correspondence probabilities instead of exact correspondences. The correspondence probabilities are found by aligning the observation shapes with the affine expectation maximization-iterative closest point (EM-ICP) registration algorithm. In a second step, the correspondence probabilities are used as input to compute a mean shape (represented once again by an unstructured point set). Both steps are unified in a single optimization criterion which depends on the two parameters 'registration transformation' and 'mean shape'. In a last step, a variability model which best represents the variability in the training data set is computed. Experiments on synthetic data sets and in vivo brain structure data sets (MRI) are then designed to evaluate the performance of our algorithm. The new method was applied to brain MRI data sets, and the estimated point correspondences were compared to a statistical shape model built on exact correspondences. 
    Based on established measures of "generalization ability" and "specificity", the estimates were very satisfactory.

  14. The Pointing Self-calibration Algorithm for Aperture Synthesis Radio Telescopes

    Energy Technology Data Exchange (ETDEWEB)

    Bhatnagar, S.; Cornwell, T. J., E-mail: sbhatnag@nrao.edu [National Radio Astronomy Observatory, 1003 Lopezville Road, Socorro, NM 87801 (United States)

    2017-11-01

    This paper is concerned with algorithms for calibration of direction-dependent effects (DDE) in aperture synthesis radio telescopes (ASRT). After correction of direction-independent effects (DIE) using self-calibration, imaging performance can be limited by the imprecise knowledge of the forward gain of the elements in the array. In general, the forward gain pattern is directionally dependent and varies with time due to a number of reasons. Some factors, such as rotation of the primary beam with Parallactic Angle for Azimuth–Elevation mount antennas are known a priori. Some, such as antenna pointing errors and structural deformation/projection effects for aperture-array elements cannot be measured a priori. Thus, in addition to algorithms to correct for DD effects known a priori, algorithms to solve for DD gains are required for high dynamic range imaging. Here, we discuss a mathematical framework for antenna-based DDE calibration algorithms and show that this framework leads to computationally efficient optimal algorithms that scale well in a parallel computing environment. As an example of an antenna-based DD calibration algorithm, we demonstrate the Pointing SelfCal (PSC) algorithm to solve for the antenna pointing errors. Our analysis shows that the sensitivity of modern ASRT is sufficient to solve for antenna pointing errors and other DD effects. We also discuss the use of the PSC algorithm in real-time calibration systems and extensions to the antenna Shape SelfCal algorithm for real-time tracking and corrections for pointing offsets and changes in antenna shape.

  16. A deterministic algorithm for fitting a step function to a weighted point-set

    KAUST Repository

    Fournier, Hervé ; Vigneron, Antoine E.

    2013-01-01

    Given a set of n points in the plane, each point having a positive weight, and an integer k>0, we present an optimal O(n log n)-time deterministic algorithm to compute a step function with k steps that minimizes the maximum weighted vertical distance

  17. Diagnostic accuracy of T stage of gastric cancer from the view point of application of laparoscopic proximal gastrectomy.

    Science.gov (United States)

    Kouzu, Keita; Tsujimoto, Hironori; Hiraki, Shuichi; Nomura, Shinsuke; Yamamoto, Junji; Ueno, Hideki

    2018-06-01

    The preoperative diagnosis of T stage is important in selecting limited treatments such as laparoscopic proximal gastrectomy (LPG), which lacks the ability to palpate the tumor. Therefore, the present study examined the accuracy of preoperative diagnosis of the depth of tumor invasion in early gastric cancer from the viewpoint of the indication for LPG. A total of 193 patients with cT1 gastric cancer underwent LPG with gastrointestinal endoscopic examinations and a series of upper gastrointestinal radiographs. The patients with pT1 were classified into the correctly diagnosed group (163 patients, 84.5%), and those with pT2 or deeper were classified into the underestimated group (30 patients, 15.5%). Factors associated with underestimation of tumor depth were analyzed. Tumors in the underestimated group were significantly larger; the lesions were more frequently located in the upper third of the stomach and were more often histologically diffuse and scirrhous, with infiltrative growth and more frequent lymphatic and venous invasion. For upper third lesions, in univariate analysis, histology (diffuse type) was associated with underestimation of tumor depth. Multivariate analysis found that tumor size (≥20 mm) and histology (diffuse type) were independently associated with underestimation of tumor depth. Gastric cancer in the upper third of the stomach that is of diffuse type histology and >20 mm needs particular attention when considering the application of LPG.

  18. Improvement of maximum power point tracking perturb and observe algorithm for a standalone solar photovoltaic system

    International Nuclear Information System (INIS)

    Awan, M.M.A.; Awan, F.G.

    2017-01-01

    Extraction of maximum power from a PV (photovoltaic) cell is necessary to make the PV system efficient. Maximum power can be achieved by operating the system at the MPP (maximum power point), i.e. taking the operating point of the PV panel to the MPP, and for this purpose MPPTs (maximum power point trackers) are used. There are many tracking algorithms/methods used by these trackers, including incremental conductance, the constant voltage method, the constant current method, the short circuit current method, PAO (perturb and observe), and the open circuit voltage method, but PAO is the most widely used algorithm because it is simple and easy to implement. The PAO algorithm has some drawbacks: one is low tracking speed under rapidly changing weather conditions, and the second is oscillation of the PV system's operating point around the MPP. Little improvement on these issues has been achieved in past papers. In this paper, a new method named the 'Decrease and Fix' method is successfully introduced as an improvement of the PAO algorithm to overcome these issues of tracking speed and oscillation. The Decrease and Fix method is the first successful attempt with the PAO algorithm to achieve stability and speed up the tracking process in a photovoltaic system. A complete standalone photovoltaic system model with the improved perturb and observe algorithm is simulated in MATLAB Simulink. (author)

  19. A fully automated algorithm of baseline correction based on wavelet feature points and segment interpolation

    Science.gov (United States)

    Qian, Fang; Wu, Yihui; Hao, Peng

    2017-11-01

    Baseline correction is a very important part of pre-processing. The baseline in a spectrum signal can induce uneven amplitude shifts across different wavenumbers and lead to bad results, so these shifts should be compensated before further analysis. Many algorithms are used to remove the baseline; however, fully automated baseline correction is more convenient in practical applications. A fully automated algorithm based on wavelet feature points and segment interpolation (AWFPSI) is proposed. This algorithm finds feature points through the continuous wavelet transform and estimates the baseline through segment interpolation. AWFPSI is compared with three commonly used fully automated and semi-automated algorithms, using a simulated spectrum signal, a visible spectrum signal and a Raman spectrum signal. The results show that AWFPSI gives better accuracy and has the advantage of easy use.

  20. Study on characteristic points of boiling curve by using wavelet analysis and genetic algorithm

    International Nuclear Information System (INIS)

    Wei Huiming; Su Guanghui; Qiu Suizheng; Yang Xingbo

    2009-01-01

    Based on the wavelet analysis theory of signal singularity detection, the critical heat flux (CHF) and the minimum film boiling starting point (q_min) of boiling curves can be detected and analyzed using wavelet multi-resolution analysis. To predict the CHF in engineering, empirical relations were obtained based on a genetic algorithm. The results of the wavelet detection and the genetic algorithm prediction agree very well with experimental data. (authors)

  1. A trust region interior point algorithm for optimal power flow problems

    Energy Technology Data Exchange (ETDEWEB)

    Wang Min [Hefei University of Technology (China). Dept. of Electrical Engineering and Automation; Liu Shengsong [Jiangsu Electric Power Dispatching and Telecommunication Company (China). Dept. of Automation

    2005-05-01

    This paper presents a new algorithm that uses the trust region interior point method to solve nonlinear optimal power flow (OPF) problems. The OPF problem is solved by a primal/dual interior point method with multiple centrality corrections as a sequence of linearized trust region sub-problems. It is the trust region that controls the linear step size and ensures the validity of the linear model. The convergence of the algorithm is improved through modification of the trust region sub-problem. Numerical results for standard IEEE systems and two realistic networks ranging in size from 14 to 662 buses are presented. The computational results show that the proposed algorithm is very effective for optimal power flow applications and compares favorably with the successive linear programming (SLP) method. Comparison with the predictor/corrector primal/dual interior point (PCPDIP) method is also made to demonstrate the superiority of the multiple centrality corrections technique. (author)

  2. Minimally invasive registration for computer-assisted orthopedic surgery: combining tracked ultrasound and bone surface points via the P-IMLOP algorithm.

    Science.gov (United States)

    Billings, Seth; Kang, Hyun Jae; Cheng, Alexis; Boctor, Emad; Kazanzides, Peter; Taylor, Russell

    2015-06-01

    We present a registration method for computer-assisted total hip replacement (THR) surgery, which we demonstrate to improve the state of the art by both reducing the invasiveness of current methods and increasing registration accuracy. A critical element of computer-guided procedures is the determination of the spatial correspondence between the patient and a computational model of patient anatomy. The current method for establishing this correspondence in robot-assisted THR is to register points intraoperatively sampled by a tracked pointer from the exposed proximal femur and, via auxiliary incisions, from the distal femur. In this paper, we demonstrate a noninvasive technique for sampling points on the distal femur using tracked B-mode ultrasound imaging and present a new algorithm for registering these data called Projected Iterative Most-Likely Oriented Point (P-IMLOP). Points and normal orientations of the distal bone surface are segmented from ultrasound images and registered to the patient model along with points sampled from the exposed proximal femur via a tracked pointer. The proposed approach is evaluated using a bone- and tissue-mimicking leg phantom constructed to enable accurate assessment of experimental registration accuracy with respect to a CT-image-based model of the phantom. These experiments demonstrate that localization of the femur shaft is greatly improved by tracked ultrasound. The experiments further demonstrate that, for ultrasound-based data, the P-IMLOP algorithm significantly improves registration accuracy compared to the standard ICP algorithm. Registration via tracked ultrasound and the P-IMLOP algorithm has high potential to reduce the invasiveness and improve the registration accuracy of computer-assisted orthopedic procedures.
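For reference, the standard point-to-point ICP baseline that P-IMLOP is compared against alternates nearest-neighbour matching with a closed-form (SVD/Kabsch) rigid alignment. A minimal sketch on synthetic data — this is the generic ICP, not the authors' probabilistic, orientation-aware variant:

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(src, dst, iters=20):
    """Minimal point-to-point ICP: alternate closest-point matching with
    the SVD (Kabsch) least-squares rigid alignment."""
    src = np.asarray(src, float).copy()
    dst = np.asarray(dst, float)
    tree = cKDTree(dst)
    R_total, t_total = np.eye(src.shape[1]), np.zeros(src.shape[1])
    for _ in range(iters):
        _, idx = tree.query(src)            # closest-point correspondences
        matched = dst[idx]
        mu_s, mu_d = src.mean(0), matched.mean(0)
        U, _, Vt = np.linalg.svd((src - mu_s).T @ (matched - mu_d))
        D = np.eye(src.shape[1])
        D[-1, -1] = np.sign(np.linalg.det(Vt.T @ U.T))   # avoid reflections
        R = Vt.T @ D @ U.T
        t = mu_d - R @ mu_s
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total, src

# Recover a known small rotation and translation of a random cloud
rng = np.random.default_rng(0)
dst = rng.normal(size=(100, 3))
ang = 0.1
Rz = np.array([[np.cos(ang), -np.sin(ang), 0.0],
               [np.sin(ang),  np.cos(ang), 0.0],
               [0.0, 0.0, 1.0]])
src = dst @ Rz.T + np.array([0.05, -0.02, 0.03])
R, t, aligned = icp(src, dst)
```

P-IMLOP extends this loop with per-point surface normals and measurement-noise models, which is what improves the femur registration over plain ICP in the paper's experiments.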

  3. Hierarchical Threshold Adaptive for Point Cloud Filter Algorithm of Moving Surface Fitting

    Directory of Open Access Journals (Sweden)

    ZHU Xiaoxiao

    2018-02-01

    In order to improve the accuracy, efficiency and adaptability of point cloud filtering algorithms, a hierarchical-threshold adaptive point cloud filter algorithm based on moving surface fitting is proposed. Firstly, noisy points are removed using a statistical histogram method. Secondly, a grid index is established by grid segmentation, and the surface equation is set up from the lowest points among the neighborhood grids. The fitted height is computed, and the difference between the measured elevation and the fitted height is compared against a threshold. Finally, in order to improve the filtering accuracy, hierarchical filtering is used to change the grid size and automatically set the neighborhood size and threshold until the filtering result reaches the accuracy requirement. Test data provided by the International Society for Photogrammetry and Remote Sensing (ISPRS) are used to verify the algorithm. The Type I, Type II and total errors are 7.33 %, 10.64 % and 6.34 %, respectively. The algorithm is compared with the eight classical filtering algorithms published by ISPRS. The experimental results show that the method is well adapted and gives highly accurate filtering results.

  4. New algorithm using only one variable measurement applied to a maximum power point tracker

    Energy Technology Data Exchange (ETDEWEB)

    Salas, V.; Olias, E.; Lazaro, A.; Barrado, A. [University Carlos III de Madrid (Spain). Dept. of Electronic Technology

    2005-05-01

    A novel algorithm for seeking the maximum power point of a photovoltaic (PV) array for any temperature and solar irradiation level, needing only the PV current value, is proposed. Satisfactory theoretical and experimental results are presented, obtained when the algorithm was included on a 100 W 24 V PV buck converter prototype using an inexpensive microcontroller. The load of the system was a battery and a resistance. The main advantage of this new maximum power point tracking (MPPT) method, compared with others, is that it only uses the measurement of the photovoltaic current, I{sub PV}. (author)

  5. On Implementing a Homogeneous Interior-Point Algorithm for Nonsymmetric Conic Optimization

    DEFF Research Database (Denmark)

    Skajaa, Anders; Jørgensen, John Bagterp; Hansen, Per Christian

    Based on earlier work by Nesterov, an implementation of a homogeneous infeasible-start interior-point algorithm for solving nonsymmetric conic optimization problems is presented. Starting each iteration from (the vicinity of) the central path, the method computes (nearly) primal-dual symmetric...... approximate tangent directions followed by a purely primal centering procedure to locate the next central primal-dual point. Features of the algorithm include that it makes use only of the primal barrier function, that it is able to detect infeasibilities in the problem and that no phase-I method is needed...

  6. A GLOBAL REGISTRATION ALGORITHM OF THE SINGLE-CLOSED RING MULTI-STATIONS POINT CLOUD

    Directory of Open Access Journals (Sweden)

    R. Yang

    2018-04-01

    Full Text Available Aimed at the global registration problem of the single-closed ring multi-station point cloud, a formula for calculating the error of the rotation matrix was constructed according to the definition of error. A global registration algorithm for multi-station point clouds was derived to minimize the error of the rotation matrix, and fast-computing formulas for the transformation matrix were given together with implementation steps and a simulation experiment scheme. Comparing three different processing schemes for multi-station point clouds, the experimental results verified the effectiveness of the new global registration method, which can effectively complete the global registration of point clouds.

  7. A 3D Printing Model Watermarking Algorithm Based on 3D Slicing and Feature Points

    Directory of Open Access Journals (Sweden)

    Giao N. Pham

    2018-02-01

    Full Text Available With the increase of three-dimensional (3D) printing applications in many areas of life, a large amount of 3D printing data is copied, shared, and used several times without any permission from the original providers. Therefore, copyright protection and ownership identification for 3D printing data in communications or commercial transactions are practical issues. This paper presents a novel watermarking algorithm for 3D printing models based on embedding watermark data into the feature points of a 3D printing model. Feature points are determined and computed by the 3D slicing process along the Z axis of the model. The watermark data is embedded into a feature point by changing the vector length of the feature point in OXY space based on a reference length. The x and y coordinates of the feature point are then changed according to the altered vector length in which the watermark has been embedded. Experimental results verified that the proposed watermark is invisible and robust to geometric attacks, such as rotation, scaling, and translation, and that the accuracy of the proposed algorithm is much higher than that of previous methods.
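The vector-length embedding idea can be illustrated with a quantisation-index-modulation sketch in Python. The paper's exact embedding rule is not reproduced here, so the parity scheme and the `ref` step size below are assumptions.

```python
import math

def embed_bit(x, y, bit, ref=0.01):
    """Embed one watermark bit into a point's XY vector length by snapping
    the length to a quantisation cell whose parity encodes the bit
    (an assumed QIM-style rule, not the paper's exact formula)."""
    r = math.hypot(x, y)
    q = math.floor(r / ref)
    if q % 2 != bit:
        q += 1                 # move to a cell with the right parity
    r_new = (q + 0.5) * ref    # cell centre: robust to small perturbations
    scale = r_new / r
    return x * scale, y * scale

def extract_bit(x, y, ref=0.01):
    r = math.hypot(x, y)
    return math.floor(r / ref) % 2

x1, y1 = embed_bit(0.1234, 0.2345, 1)
```

Because rotation about the Z axis preserves the XY vector length, the bit survives rotation, consistent with the robustness to geometric attacks claimed above.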

  8. Outdoor Illegal Construction Identification Algorithm Based on 3D Point Cloud Segmentation

    Science.gov (United States)

    An, Lu; Guo, Baolong

    2018-03-01

    Recently, illegal constructions have proliferated in our surroundings, seriously restricting the orderly development of urban modernization. 3D point cloud data technology can be used to identify illegal buildings and thus address this problem effectively. This paper proposes an outdoor illegal construction identification algorithm based on 3D point cloud segmentation. Initially, in order to save memory space and reduce processing time, a lossless point cloud compression method based on the minimum spanning tree is proposed. Then, a ground point removal method based on multi-scale filtering is introduced to increase accuracy. Finally, building clusters on the ground are obtained using a region growing method, and as a result, illegal constructions can be marked. The effectiveness of the proposed algorithm is verified using a public data set collected by the International Society for Photogrammetry and Remote Sensing (ISPRS).

  9. A Flexible VHDL Floating Point Module for Control Algorithm Implementation in Space Applications

    Science.gov (United States)

    Padierna, A.; Nicoleau, C.; Sanchez, J.; Hidalgo, I.; Elvira, S.

    2012-08-01

    The implementation of control loops for space applications is an area with great potential. However, the characteristics of this kind of system, such as its wide dynamic range of numeric values, make fixed-point algorithms inadequate. Because the generic chips available for processing floating-point data are, in general, not qualified to operate in space environments, and because using an IP module in an FPGA/ASIC qualified for space is not viable due to the low number of logic cells available in these types of devices, it is necessary to find an alternative. For these reasons, this paper presents a VHDL Floating Point Module. This proposal allows floating-point algorithms to be designed and executed with acceptable occupancy on FPGAs/ASICs qualified for space environments.

  10. Solving Singular Two-Point Boundary Value Problems Using Continuous Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Omar Abu Arqub

    2012-01-01

    Full Text Available In this paper, the continuous genetic algorithm is applied for the solution of singular two-point boundary value problems, where smooth solution curves are used throughout the evolution of the algorithm to obtain the required nodal values. The proposed technique might be considered as a variation of the finite difference method in the sense that each of the derivatives is replaced by an appropriate difference quotient approximation. This novel approach possesses several advantages: it can be applied without any limitation on the nature of the problem, the type of singularity, or the number of mesh points. Numerical examples are included to demonstrate the accuracy, applicability, and generality of the presented technique. The results reveal that the algorithm is very effective, straightforward, and simple.

  11. An Effective, Robust And Parallel Implementation Of An Interior Point Algorithm For Limit State Optimization

    DEFF Research Database (Denmark)

    Dollerup, Niels; Jepsen, Michael S.; Damkilde, Lars

    2013-01-01

    The article describes a robust and effective implementation of the interior point optimization algorithm. The adopted method includes a precalculation step, which reduces the number of variables by fulfilling the equilibrium equations a priori. This work presents an improved implementation of the ...

  12. Object tracking system using a VSW algorithm based on color and point features

    Directory of Open Access Journals (Sweden)

    Lim Hye-Youn

    2011-01-01

    Full Text Available An object tracking system using a variable search window (VSW) algorithm based on color and feature points is proposed. The meanshift algorithm is an object tracking technique that works according to color probability distributions. An advantage of this color-based algorithm is that it is robust for objects of a specific color; a disadvantage is that it is sensitive to non-specific color objects due to illumination and noise. To offset this weakness, the VSW algorithm based on robust feature points is presented for accurate tracking of moving objects. The proposed method extracts the feature points of a detected object, which is the region of interest (ROI), and generates a VSW using the positions of the extracted feature points. The goal of this paper is an efficient and effective object tracking system that achieves accurate tracking of moving objects. Experiments show that the implemented object tracking system performs more precisely than existing techniques.

  13. Computational Analysis of Distance Operators for the Iterative Closest Point Algorithm.

    Directory of Open Access Journals (Sweden)

    Higinio Mora

    Full Text Available The Iterative Closest Point (ICP) algorithm is currently one of the most popular methods for rigid registration, to the point that it has become the standard in the Robotics and Computer Vision communities. Many applications take advantage of it to align 2D/3D surfaces due to its popularity and simplicity. Nevertheless, some of its phases present a high computational cost, rendering some applications impossible. In this work, an efficient approach for the matching phase of the Iterative Closest Point algorithm is proposed. This stage is the main bottleneck of the method, so any efficiency improvement has a great positive impact on the performance of the algorithm. The proposal consists in using low-computational-cost point-to-point distance metrics instead of the classic Euclidean one. The candidates analysed are the Chebyshev and Manhattan distance metrics due to their simpler formulation. The experiments carried out have validated the performance, robustness and quality of the proposal. Different experimental cases and configurations have been set up, including a heterogeneous set of 3D figures and several scenarios with partial data and random noise. The results prove that an average speed-up of 14% can be obtained while preserving the convergence properties of the algorithm and the quality of the final results.
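The metric substitution at the heart of the proposal is easy to sketch: the matching phase only needs relative distances, so any of the three metrics below can drive the nearest-neighbour search. This is a plain-Python illustration, not the paper's implementation.

```python
def euclidean2(p, q):
    # squared Euclidean: the sqrt is unnecessary when only comparing distances
    return sum((a - b) ** 2 for a, b in zip(p, q))

def manhattan(p, q):
    # L1: additions only, cheaper than squaring
    return sum(abs(a - b) for a, b in zip(p, q))

def chebyshev(p, q):
    # L-infinity: a single max over coordinate differences
    return max(abs(a - b) for a, b in zip(p, q))

def closest(point, cloud, dist):
    """Nearest neighbour of `point` in `cloud` under the given metric."""
    return min(cloud, key=lambda c: dist(point, c))

cloud = [(0, 0, 0), (1, 1, 1), (2, 0, 1), (5, 5, 5)]
```

For well-separated correspondences the three metrics usually agree on the nearest neighbour, which is why the swap can preserve ICP's convergence while cutting per-distance cost.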

  14. Optimization Algorithms for Calculation of the Joint Design Point in Parallel Systems

    DEFF Research Database (Denmark)

    Enevoldsen, I.; Sørensen, John Dalsgaard

    1992-01-01

    In large structures it is often necessary to estimate the reliability of the system by use of parallel systems. Optimality criteria-based algorithms for calculation of the joint design point in a parallel system are described and efficient active set strategies are developed. Three possible...

  15. A Numerical Algorithm for Solving a Four-Point Nonlinear Fractional Integro-Differential Equations

    Directory of Open Access Journals (Sweden)

    Er Gao

    2012-01-01

    Full Text Available We provide a new algorithm for a four-point nonlocal boundary value problem of nonlinear integro-differential equations of fractional order q∈(1,2] based on the reproducing kernel space method. In our approach, the analytical solution of the equations is represented in a reproducing kernel space which we construct, and so is the n-term approximation. The n-term approximation is proved to converge to the analytical solution. An illustrative example is also presented, which shows that the new algorithm is efficient and accurate.

  16. MUSIC ALGORITHM FOR LOCATING POINT-LIKE SCATTERERS CONTAINED IN A SAMPLE ON FLAT SUBSTRATE

    Institute of Scientific and Technical Information of China (English)

    Dong Heping; Ma Fuming; Zhang Deyue

    2012-01-01

    In this paper, we consider a MUSIC algorithm for locating point-like scatterers contained in a sample on a flat substrate. Based on an asymptotic expansion of the scattering amplitude proposed by Ammari et al., the reconstruction problem can be reduced to a calculation of the Green function corresponding to the background medium. In addition, we use an explicit formulation of the Green function in the MUSIC algorithm to simplify the calculation when the cross-section of the sample is a half-disc. Numerical experiments are included to demonstrate the feasibility of this method.

  17. Using the Chandra Source-Finding Algorithm to Automatically Identify Solar X-ray Bright Points

    Science.gov (United States)

    Adams, Mitzi L.; Tennant, A.; Cirtain, J. M.

    2009-01-01

    This poster details a technique of bright point identification that is used to find sources in Chandra X-ray data. The algorithm, part of a program called LEXTRCT, searches for regions of a given size that are above a minimum signal to noise ratio. The algorithm allows selected pixels to be excluded from the source-finding, thus allowing exclusion of saturated pixels (from flares and/or active regions). For Chandra data the noise is determined by photon counting statistics, whereas solar telescopes typically integrate a flux. Thus the calculated signal-to-noise ratio is incorrect, but we find we can scale the number to get reasonable results. For example, Nakakubo and Hara (1998) find 297 bright points in a September 11, 1996 Yohkoh image; with judicious selection of signal-to-noise ratio, our algorithm finds 300 sources. To further assess the efficacy of the algorithm, we analyze a SOHO/EIT image (195 Angstroms) and compare results with those published in the literature (McIntosh and Gurman, 2005). Finally, we analyze three sets of data from Hinode, representing different parts of the decline to minimum of the solar cycle.
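A simplified version of such a signal-to-noise source finder can be sketched in Python. This is a stand-in for LEXTRCT, assuming Poisson background noise and a fixed box size; the injected source and the threshold are illustrative.

```python
import numpy as np

def find_bright_points(img, box=3, snr_min=6.0, exclude=None):
    """Flag pixels whose box-summed counts exceed snr_min times the
    Poisson noise of the background; `exclude` masks e.g. saturated pixels."""
    img = img.astype(float)
    if exclude is not None:
        img = np.where(exclude, np.nan, img)
    h, w = img.shape
    r = box // 2
    bg = np.nanmedian(img)               # robust background estimate
    hits = []
    for i in range(r, h - r):
        for j in range(r, w - r):
            win = img[i - r:i + r + 1, j - r:j + r + 1]
            if np.isnan(win).any():      # skip windows touching excluded pixels
                continue
            signal = win.sum() - bg * box * box
            noise = max(np.sqrt(bg * box * box), 1.0)  # Poisson background noise
            if signal / noise >= snr_min:
                hits.append((i, j))
    return hits

rng = np.random.default_rng(1)
image = rng.poisson(5.0, size=(32, 32)).astype(float)
image[10, 20] += 200.0                   # inject one bright point
hits = find_bright_points(image)
```

For solar images that record integrated flux rather than counts, the noise term would have to be rescaled, which is exactly the adjustment the abstract describes.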

  18. Characterization results and Markov chain Monte Carlo algorithms including exact simulation for some spatial point processes

    DEFF Research Database (Denmark)

    Häggström, Olle; Lieshout, Marie-Colette van; Møller, Jesper

    1999-01-01

    The area-interaction process and the continuum random-cluster model are characterized in terms of certain functional forms of their respective conditional intensities. In certain cases, these two point process models can be derived from a bivariate point process model which in many respects...... is simpler to analyse and simulate. Using this correspondence we devise a two-component Gibbs sampler, which can be used for fast and exact simulation by extending the recent ideas of Propp and Wilson. We further introduce a Swendsen-Wang type algorithm. The relevance of the results within spatial statistics...

  19. Determination of point of maximum likelihood in failure domain using genetic algorithms

    International Nuclear Information System (INIS)

    Obadage, A.S.; Harnpornchai, N.

    2006-01-01

    The point of maximum likelihood in a failure domain yields the highest value of the probability density function in the failure domain. The maximum-likelihood point thus represents the worst combination of random variables contributing to the failure event. In this work, Genetic Algorithms (GAs) with an adaptive penalty scheme are proposed as a tool for determining the maximum-likelihood point. Because the GA operations use only numerical values, the algorithms are applicable to cases of non-linear and implicit single and multiple limit state functions. The algorithmic simplicity readily extends its application to higher-dimensional problems. When combined with Monte Carlo simulation, the proposed methodology reduces the computational complexity and at the same time enhances the feasibility of rare-event analysis under limited computational resources. Since no approximation is made in the procedure, the solution obtained is considered accurate. Consequently, GAs can be used as a tool for increasing the computational efficiency of element and system reliability analyses.

  20. Weak and Strong Convergence of an Algorithm for the Split Common Fixed-Point of Asymptotically Quasi-Nonexpansive Operators

    Directory of Open Access Journals (Sweden)

    Yazheng Dang

    2013-01-01

    Full Text Available Inspired by Moudafi (2010), we propose an algorithm for solving the split common fixed-point problem for a wide class of asymptotically quasi-nonexpansive operators, and the weak and strong convergence of the algorithm is shown under suitable conditions in Hilbert spaces. The algorithm and its convergence results improve and develop previous results for split feasibility problems.

  1. Disentangling the roles of point-of-sale ban, tobacco retailer density and proximity on cessation and relapse among a cohort of smokers: findings from ITC Canada Survey.

    Science.gov (United States)

    Fleischer, Nancy L; Lozano, Paula; Wu, Yun-Hsuan; Hardin, James W; Meng, Gang; Liese, Angela D; Fong, Geoffrey T; Thrasher, James F

    2018-03-08

    To examine how point-of-sale (POS) display bans, tobacco retailer density and tobacco retailer proximity were associated with smoking cessation and relapse in a cohort of smokers in Canada, where provincial POS bans were implemented differentially over time from 2004 to 2010. Data from the 2005 to 2011 administrations of the International Tobacco Control (ITC) Canada Survey, a nationally representative cohort of adult smokers, were linked via residential geocoding with tobacco retailer data to derive for each smoker a measure of retailer density and proximity. An indicator variable identified whether the smoker's province banned POS displays at the time of the interview. Outcomes included cessation for at least 1 month at follow-up among smokers from the previous wave and relapse at follow-up among smokers who had quit at the previous wave. Logistic generalised estimating equation models were used to determine the relationship between living in a province with a POS display ban, tobacco retailer density and tobacco retailer proximity with cessation (n=4388) and relapse (n=866). Provincial POS display bans were not associated with cessation. In adjusted models, POS display bans were associated with lower odds of relapse which strengthened after adjusting for retailer density and proximity, although results were not statistically significant (OR 0.66, 95% CI 0.41 to 1.07, p=0.089). Neither tobacco retailer density nor proximity was associated with cessation or relapse. Banning POS retail displays shows promise as an additional tool to prevent relapse, although these results need to be confirmed in larger longitudinal studies.

  2. Quad-Rotor Helicopter Autonomous Navigation Based on Vanishing Point Algorithm

    Directory of Open Access Journals (Sweden)

    Jialiang Wang

    2014-01-01

    Full Text Available Quad-rotor helicopters are becoming increasingly popular as they can perform many flight missions in challenging environments, with lower risk of damaging themselves and their surroundings. They are employed in many applications, from military operations to civilian tasks. Quad-rotor helicopter autonomous navigation based on the vanishing point fast estimation (VPFE) algorithm using a clustering principle is implemented in this paper. For images collected by the camera of the quad-rotor helicopter, the system preprocesses each image, removes noise interference, extracts edges using the Canny operator, and extracts straight lines by the randomized Hough transform (RHT) method. The system then obtains the position of the vanishing point, regards it as the destination point, and finally controls the autonomous navigation of the quad-rotor helicopter by continuous correction according to the calculated navigation error. The experimental results show that the quad-rotor helicopter performs destination navigation well in an indoor environment.
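Once edge lines have been extracted, the vanishing point can be estimated as the least-squares intersection of the detected lines. The sketch below assumes lines are already available in `a*x + b*y = c` form (in the pipeline above they would come from Canny edges and the randomized Hough transform); the corridor lines are made up.

```python
import numpy as np

def vanishing_point(lines):
    """Least-squares intersection of lines given as (a, b, c) with
    a*x + b*y = c; with noisy lines this minimises the algebraic residual."""
    A = np.array([[a, b] for a, b, _ in lines], float)
    c = np.array([c for _, _, c in lines], float)
    vp, *_ = np.linalg.lstsq(A, c, rcond=None)
    return vp

# three corridor edge lines that meet at pixel (320, 240)
lines = [(1.0, -1.0, 80.0),     # x - y = 80
         (1.0, 1.0, 560.0),     # x + y = 560
         (0.5, -1.0, -80.0)]    # 0.5x - y = -80
vp = vanishing_point(lines)
```

The navigation error would then be the offset between `vp` and the image centre, which the controller drives to zero.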

  3. On a Hopping-Points SVD and Hough Transform-Based Line Detection Algorithm for Robot Localization and Mapping

    Directory of Open Access Journals (Sweden)

    Abhijeet Ravankar

    2016-05-01

    Full Text Available Line detection is an important problem in computer vision, graphics and autonomous robot navigation. Lines detected using a laser range sensor (LRS) mounted on a robot can be used as features to build a map of the environment, and later to localize the robot in the map, in a process known as Simultaneous Localization and Mapping (SLAM). We propose an efficient algorithm for line detection from LRS data using a novel hopping-points Singular Value Decomposition (SVD) and Hough transform-based algorithm, in which SVD is applied to intermittent LRS points to accelerate the algorithm. A reverse-hop mechanism ensures that the end points of the line segments are accurately extracted. Line segments extracted by the proposed algorithm are used to form a map and, subsequently, LRS data points are matched with the line segments to localize the robot. The proposed algorithm eliminates the drawbacks of point-based matching algorithms like the Iterative Closest Point (ICP) algorithm, whose performance degrades with an increasing number of points. We tested the proposed algorithm for mapping and localization in both simulated and real environments, and found it to detect lines accurately and build maps with good self-localization.
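The SVD building block that the hopping-points algorithm applies to intermittent scan points can be sketched as a plain line fit: the dominant right singular vector of the centred points gives the line direction. This is a minimal illustration of that step only, not the full hopping/reverse-hop mechanism.

```python
import numpy as np

def fit_line_svd(points):
    """Fit a 2D line to LRS points via SVD: the first right singular vector
    of the centred data is the line direction, the last is its normal."""
    pts = np.asarray(points, float)
    centroid = pts.mean(axis=0)
    _, _, Vt = np.linalg.svd(pts - centroid)
    direction = Vt[0]          # dominant spread of the points
    normal = Vt[-1]            # residual direction, perpendicular to the line
    # RMS perpendicular distance of the points from the fitted line
    rmse = np.sqrt((((pts - centroid) @ normal) ** 2).mean())
    return centroid, direction, rmse

# collinear scan points on y = 2x + 1
pts = [(x, 2.0 * x + 1.0) for x in np.linspace(0.0, 5.0, 11)]
centroid, direction, rmse = fit_line_svd(pts)
slope = direction[1] / direction[0]
```

A large `rmse` signals that the hopped points do not lie on one line, which is where a reverse hop would refine the segment end points.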

  4. A Review of Point-Wise Motion Tracking Algorithms for Fetal Magnetic Resonance Imaging.

    Science.gov (United States)

    Chikop, Shivaprasad; Koulagi, Girish; Kumbara, Ankita; Geethanath, Sairam

    2016-01-01

    We review recent feature-based tracking algorithms as applied to fetal magnetic resonance imaging (MRI). Motion in fetal MRI is an active and challenging area of research, but the challenge can be mitigated by strategies related to patient setup, acquisition, reconstruction, and image processing. We focus on fetal motion correction through methods based on tracking algorithms for registration of slices with similar anatomy in multiple volumes. We describe five motion detection algorithms based on corner detection and region-based methods through pseudocodes, illustrating the results of their application to fetal MRI. We compare the performance of these methods on the basis of error in registration and minimum number of feature points required for registration. Harris, a corner detection method, provides similar error when compared to the other methods and has the lowest number of feature points required at that error level. We do not discuss group-wise methods here. Finally, we attempt to communicate the application of available feature extraction methods to fetal MRI.

  5. A Kind of Single-frequency Precise Point Positioning Algorithm Based on the Raw Observations

    Directory of Open Access Journals (Sweden)

    WANG Li

    2015-01-01

    Full Text Available A single-frequency precise point positioning (PPP) algorithm based on raw observations is presented in this paper. In this algorithm, the ionospheric delays are corrected efficiently by adding prior ionospheric delay information and virtual observation equations with spatial and temporal constraints, and they are estimated as unknown parameters simultaneously with the other positioning parameters. A dataset of 178 International GNSS Service (IGS) stations from day 72 of 2012 was used to evaluate the convergence speed, the positioning accuracy and the accuracy of the retrieved ionospheric VTEC. The results show that the convergence speed and stability of the new algorithm are much better than those of the traditional PPP algorithm, and positioning accuracies of about 2-3 cm and 2-3 dm can be achieved for static and kinematic positioning respectively with single-frequency daily solutions. The average bias of the ionospheric total electron content retrieved by single-frequency PPP relative to dual-frequency PPP is less than 5 TECU, so the ionospheric total electron content can be used as an auxiliary product in GPS positioning.

  6. Interior point algorithms: guaranteed optimality for fluence map optimization in IMRT

    Energy Technology Data Exchange (ETDEWEB)

    Aleman, Dionne M [Department of Mechanical and Industrial Engineering, University of Toronto, 5 King' s College Road, Toronto, ON M5S 3G8 (Canada); Glaser, Daniel [Division of Optimization and Systems Theory, Department of Mathematics, Royal Institute of Technology, Stockholm (Sweden); Romeijn, H Edwin [Department of Industrial and Operations Engineering, University of Michigan, Ann Arbor, MI 48109-2117 (United States); Dempsey, James F, E-mail: aleman@mie.utoronto.c, E-mail: romeijn@umich.ed, E-mail: jfdempsey@viewray.co [ViewRay, Inc. 2 Thermo Fisher Way, Village of Oakwood, OH 44146 (United States)

    2010-09-21

    One of the most widely studied problems of the intensity-modulated radiation therapy (IMRT) treatment planning problem is the fluence map optimization (FMO) problem, the problem of determining the amount of radiation intensity, or fluence, of each beamlet in each beam. For a given set of beams, the fluences of the beamlets can drastically affect the quality of the treatment plan, and thus it is critical to obtain good fluence maps for radiation delivery. Although several approaches have been shown to yield good solutions to the FMO problem, these solutions are not guaranteed to be optimal. This shortcoming can be attributed to either optimization model complexity or properties of the algorithms used to solve the optimization model. We present a convex FMO formulation and an interior point algorithm that yields an optimal treatment plan in seconds, making it a viable option for clinical applications.

  7. Interior point algorithms: guaranteed optimality for fluence map optimization in IMRT

    International Nuclear Information System (INIS)

    Aleman, Dionne M; Glaser, Daniel; Romeijn, H Edwin; Dempsey, James F

    2010-01-01

    One of the most widely studied problems of the intensity-modulated radiation therapy (IMRT) treatment planning problem is the fluence map optimization (FMO) problem, the problem of determining the amount of radiation intensity, or fluence, of each beamlet in each beam. For a given set of beams, the fluences of the beamlets can drastically affect the quality of the treatment plan, and thus it is critical to obtain good fluence maps for radiation delivery. Although several approaches have been shown to yield good solutions to the FMO problem, these solutions are not guaranteed to be optimal. This shortcoming can be attributed to either optimization model complexity or properties of the algorithms used to solve the optimization model. We present a convex FMO formulation and an interior point algorithm that yields an optimal treatment plan in seconds, making it a viable option for clinical applications.

  8. Application of Deconvolution Algorithm of Point Spread Function in Improving Image Quality: An Observer Preference Study on Chest Radiography.

    Science.gov (United States)

    Chae, Kum Ju; Goo, Jin Mo; Ahn, Su Yeon; Yoo, Jin Young; Yoon, Soon Ho

    2018-01-01

    To evaluate the preference of observers for the image quality of chest radiography using the deconvolution algorithm of the point spread function (PSF) (TRUVIEW ART algorithm, DRTECH Corp.) compared with that of original chest radiography for visualization of anatomic regions of the chest. Fifty prospectively enrolled pairs of posteroanterior chest radiographs, collected with the standard protocol and with the additional TRUVIEW ART algorithm, were compared by four chest radiologists. This algorithm corrects scattered signals generated by a scintillator. Readers independently evaluated the visibility of 10 anatomical regions and overall image quality with a 5-point preference scale. The significance of the differences in readers' preference was tested with a Wilcoxon signed rank test. All four readers preferred the images with the algorithm applied to those without it for all 10 anatomical regions (mean, 3.6; range, 3.2-4.0). The visibility of chest anatomical structures with the deconvolution algorithm of the PSF applied was superior to that of the original chest radiography.

  9. Iterative algorithms for computing the feedback Nash equilibrium point for positive systems

    Science.gov (United States)

    Ivanov, I.; Imsland, Lars; Bogdanova, B.

    2017-03-01

    The paper studies N-player linear quadratic differential games on an infinite time horizon with deterministic feedback information structure. It introduces two iterative methods (the Newton method as well as its accelerated modification) in order to compute the stabilising solution of a set of generalised algebraic Riccati equations. The latter is related to the Nash equilibrium point of the considered game model. Moreover, we derive the sufficient conditions for convergence of the proposed methods. Finally, we discuss two numerical examples so as to illustrate the performance of both of the algorithms.

  10. Genetic algorithms optimized fuzzy logic control for the maximum power point tracking in photovoltaic system

    Energy Technology Data Exchange (ETDEWEB)

    Larbes, C.; Ait Cheikh, S.M.; Obeidi, T.; Zerguerras, A. [Laboratoire des Dispositifs de Communication et de Conversion Photovoltaique, Departement d' Electronique, Ecole Nationale Polytechnique, 10, Avenue Hassen Badi, El Harrach, Alger 16200 (Algeria)

    2009-10-15

    This paper presents an intelligent control method for the maximum power point tracking (MPPT) of a photovoltaic system under variable temperature and irradiance conditions. First, for the purpose of comparison and because of its proven good performance, the perturbation and observation (P and O) technique is briefly introduced. A fuzzy logic controller based MPPT (FLC) is then proposed, which shows better performance than the P and O based MPPT approach. The proposed FLC has also been improved using genetic algorithms (GA) for optimisation. The different development stages are presented, and the optimized fuzzy logic MPPT controller (OFLC) is then simulated and evaluated, showing better performance. (author)
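The P and O baseline mentioned above is simple enough to sketch: perturb the operating voltage and keep the perturbation direction whenever power increases, reversing it otherwise. The PV power curve below is a toy stand-in, not a model from the paper.

```python
def perturb_and_observe(power_of, v0=12.0, dv=0.1, steps=200):
    """Classic perturb-and-observe MPPT: nudge the operating voltage and
    keep moving in the direction that increased the measured power."""
    v, step = v0, dv
    p_prev = power_of(v)
    for _ in range(steps):
        v += step
        p = power_of(v)
        if p < p_prev:       # power dropped: reverse the perturbation
            step = -step
        p_prev = p
    return v

# toy PV power curve with its maximum power point at 17.5 V
pv_power = lambda v: -(v - 17.5) ** 2 + 100.0
v_mpp = perturb_and_observe(pv_power)
```

The steady-state oscillation around the maximum (here within one perturbation step of 17.5 V) is exactly the behaviour the fuzzy and GA-optimized controllers are designed to reduce.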

  11. A Numerical Algorithm for Solving a Four-Point Nonlinear Fractional Integro-Differential Equations

    OpenAIRE

    Gao, Er; Song, Songhe; Zhang, Xinjian

    2012-01-01

    We provide a new algorithm for a four-point nonlocal boundary value problem of nonlinear integro-differential equations of fractional order q∈(1,2] based on reproducing kernel space method. According to our work, the analytical solution of the equations is represented in the reproducing kernel space which we construct and so the n-term approximation. At the same time, the n-term approximation is proved to converge to the analytical solution. An illustrative example is also presented, which sh...

  12. A modified Symbiotic Organisms Search algorithm for large scale economic dispatch problem with valve-point effects

    International Nuclear Information System (INIS)

    Secui, Dinu Calin

    2016-01-01

    This paper proposes a new metaheuristic algorithm, called the Modified Symbiotic Organisms Search (MSOS) algorithm, to solve the economic dispatch problem considering valve-point effects, prohibited operating zones (POZ), transmission line losses, multi-fuel sources, and other operating constraints of the generating units and power system. The MSOS algorithm introduces, in all of its phases, new update relations that improve its ability to identify stable, high-quality solutions in a reasonable time. Furthermore, to increase the algorithm's capacity to explore the most promising zones, it is endowed with a chaotic component generated by the Logistic map. The performance of the modified algorithm and of the original Symbiotic Organisms Search (SOS) algorithm is tested on five systems of different characteristics, constraints and dimensions (13, 40, 80, 160 and 320 units). The results obtained by applying the proposed MSOS algorithm show that it performs better than other optimization techniques recently used in solving the economic dispatch problem with valve-point effects. - Highlights: • A new modified SOS algorithm (MSOS) is proposed to solve the EcD problem. • Valve-point effects, ramp-rate limits, POZ, multi-fuel sources and transmission losses are considered. • The algorithm is tested on five systems having 13, 40, 80, 160 and 320 thermal units. • The MSOS algorithm outperforms many other optimization techniques.
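The objective being minimised can be sketched directly: a quadratic fuel cost per unit plus the rectified-sine valve-point term, with a penalty on the power-balance mismatch. The random search below is only a stand-in for the MSOS metaheuristic, and the two-unit coefficients are illustrative, not a benchmark system.

```python
import math, random

def unit_cost(p, a, b, c, e, f, p_min):
    """Fuel cost with the rectified-sine valve-point term."""
    return a + b * p + c * p * p + abs(e * math.sin(f * (p_min - p)))

def dispatch_cost(powers, units, demand, penalty=1e3):
    """Total cost plus a penalty on the power-balance mismatch
    (transmission losses are ignored in this sketch)."""
    cost = sum(unit_cost(p, *u) for p, u in zip(powers, units))
    return cost + penalty * abs(sum(powers) - demand)

# two illustrative units: (a, b, c, e, f, p_min), each limited to 100-500 MW
units = [(550.0, 8.1, 0.00028, 300.0, 0.035, 100.0),
         (309.0, 8.1, 0.00056, 200.0, 0.042, 100.0)]

random.seed(0)
best, best_cost = None, float("inf")
for _ in range(20000):   # plain random search stand-in for MSOS
    x = [random.uniform(100.0, 500.0) for _ in units]
    cst = dispatch_cost(x, units, demand=600.0)
    if cst < best_cost:
        best, best_cost = x, cst
```

The valve-point sine term makes the landscape multimodal and non-smooth, which is why gradient-free metaheuristics such as SOS/MSOS are used for this problem.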

  13. Maximum power point tracking-based control algorithm for PMSG wind generation system without mechanical sensors

    International Nuclear Information System (INIS)

    Hong, Chih-Ming; Chen, Chiung-Hsing; Tu, Chia-Sheng

    2013-01-01

    Highlights: ► This paper presents MPPT-based control for optimal wind energy capture using an RBFN. ► MPSO is adopted to adjust the learning rates to improve the learning capability. ► This technique can maintain system stability and reach the desired performance. ► The EMF in the rotating reference frame is utilized to estimate speed. - Abstract: This paper presents maximum-power-point-tracking (MPPT) based control algorithms for optimal wind energy capture using a radial basis function network (RBFN) and a proposed torque-observer MPPT algorithm. A high-performance on-line training RBFN, using a back-propagation learning algorithm with a modified particle swarm optimization (MPSO) regulating controller, is designed for the sensorless control of a permanent magnet synchronous generator (PMSG). The MPSO is adopted in this study to adapt the learning rates in the back-propagation process of the RBFN to improve the learning capability. The PMSG is controlled by loss-minimization control with MPPT below the base speed, which corresponds to low and high wind speeds, so that the maximum energy can be captured from the wind. The observed disturbance torque is then fed forward to increase the robustness of the PMSG system.

  14. An automatic, stagnation point based algorithm for the delineation of Wellhead Protection Areas

    Science.gov (United States)

    Tosco, Tiziana; Sethi, Rajandrea; di Molfetta, Antonio

    2008-07-01

    Time-related capture areas are usually delineated using the backward particle tracking method, releasing circles of equally spaced particles around each well. In this way, an accurate delineation often requires both a very high number of particles and manual capture zone encirclement. The aim of this work was to propose an Automatic Protection Area (APA) delineation algorithm, which can be coupled with any flow and particle tracking model. The computational time is reduced thanks to the use of a limited number of non-equally spaced particles. The particle starting positions are determined by coupling forward particle tracking from the stagnation point and backward particle tracking from the pumping well. The pathlines are post-processed for a completely automatic delineation of closed perimeters of time-related capture zones. The APA algorithm was tested on a two-dimensional geometry, in homogeneous and nonhomogeneous aquifers, under steady-state flow conditions, with single and multiple wells. Results show that the APA algorithm is robust and able to automatically and accurately reconstruct protection areas with a very small number of particles, even in complex scenarios.

  15. Development of Quadratic Programming Algorithm Based on Interior Point Method with Estimation Mechanism of Active Constraints

    Science.gov (United States)

    Hashimoto, Hiroyuki; Takaguchi, Yusuke; Nakamura, Shizuka

    Instability of the calculation process and the increase in calculation time caused by the growing size of continuous optimization problems remain the major issues to be solved before the technique can be applied to practical industrial systems. This paper proposes an enhanced quadratic programming algorithm based on the interior point method, aimed mainly at improving calculation stability. The proposed method has a dynamic estimation mechanism for active constraints on variables, which fixes variables approaching their upper/lower limits and afterwards releases the fixed ones as needed during the optimization process. It can be considered an algorithm-level integration of the active-set method's solution strategy into the interior point method framework. We describe numerical results on the commonly used benchmark problems called “CUTEr” to show the effectiveness of the proposed method. Furthermore, test results on a large-sized ELD problem (an Economic Load Dispatching problem in electric power supply scheduling) are also described as a practical industrial application.

  16. Registration of TLS and MLS Point Cloud Combining Genetic Algorithm with ICP

    Directory of Open Access Journals (Sweden)

    YAN Li

    2018-04-01

    Full Text Available Large-scene point clouds can be quickly acquired by mobile laser scanning (MLS) technology, which needs to be supplemented by terrestrial laser scanning (TLS) point clouds because of its limited field of view and occlusions. MLS and TLS point clouds are located in a geodetic coordinate system and a local coordinate system, respectively. This paper proposes an automatic registration method combining a genetic algorithm (GA) and iterative closest point (ICP) to achieve a uniform coordinate reference frame. ICP uses a local optimizer; its efficiency is higher than that of GA registration, but it depends on an initial solution. GA is a global optimizer, but it is inefficient. The combining strategy is that ICP completes the registration once the GA tends toward local search. The rough position measured by the built-in GPS of the terrestrial laser scanner is used in the GA registration to limit its search space. To improve the GA registration accuracy, a maximum registration model called normalized sum of matching scores (NSMS) is presented. Results for measured data show that the NSMS model is effective: the root mean square error (RMSE) of GA registration is 1-5 cm, and the registration efficiency can be improved by about 50% by combining GA with ICP.
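The ICP half of such a pipeline can be sketched in a few lines. This is a generic point-to-point ICP with an SVD-based (Kabsch) rigid alignment, a minimal sketch for illustration rather than the authors' implementation:

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (Kabsch/SVD)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(src, dst, iters=20):
    """Plain point-to-point ICP: match each source point to its nearest target point,
    solve for the best rigid transform, apply it, and repeat."""
    cur = src.copy()
    for _ in range(iters):
        # brute-force nearest-neighbour correspondences, for clarity only
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[d2.argmin(axis=1)]
        R, t = best_rigid_transform(cur, matched)
        cur = cur @ R.T + t
    return cur
```

The sketch shows why ICP needs a good initial solution: the nearest-neighbour step only finds the right correspondences when the clouds already roughly overlap, which is exactly the gap the GA stage fills.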

  17. Optimization of dynamic economic dispatch with valve-point effect using chaotic sequence based differential evolution algorithms

    International Nuclear Information System (INIS)

    He Dakuo; Dong Gang; Wang Fuli; Mao Zhizhong

    2011-01-01

    A chaotic sequence based differential evolution (DE) approach for solving the dynamic economic dispatch problem (DEDP) with valve-point effects is presented in this paper. The proposed method combines the DE algorithm with a local search technique to improve its performance. DE is the main optimizer, while an approximated model for local search is applied to fine-tune the solution of the DE run. To accelerate convergence of DE, a series of constraint-handling rules is adopted. An initial population obtained using a chaotic sequence brings out the best performance of the proposed algorithm. The combined algorithm is validated on two test systems consisting of 10 and 13 thermal units whose incremental fuel cost functions take into account the valve-point loading effects. The proposed combined method outperforms other algorithms reported in the literature for the DEDP considering valve-point effects.
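The valve-point cost model referred to here is commonly written as a quadratic fuel cost plus a rectified-sine ripple. A minimal sketch of evaluating it, with coefficient names following the usual DEDP notation (the numbers in the test are made up for illustration):

```python
import math

def valve_point_cost(p, a, b, c, e, f, p_min):
    """Fuel cost of one unit at output p (MW): quadratic cost plus the
    |e * sin(f * (p_min - p))| ripple introduced by valve-point loading."""
    return a + b * p + c * p * p + abs(e * math.sin(f * (p_min - p)))

def total_cost(outputs, coeffs):
    """Total dispatch cost; coeffs is a list of (a, b, c, e, f, p_min) tuples."""
    return sum(valve_point_cost(p, *k) for p, k in zip(outputs, coeffs))
```

The absolute-value sine term is what makes the objective nonsmooth and multimodal, which is why gradient-free methods such as DE are used for this problem class.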

  18. Frequency and Proximity Clustering Analyses for Georeferencing Toponyms and Points-of-Interest Names from a Travel Journal

    Science.gov (United States)

    McDermott, Scott D.

    2017-01-01

    This research study uses geographic information retrieval (GIR) to georeference toponyms and points-of-interest (POI) names from a travel journal. Travel journals are an ideal data source with which to conduct this study because they are significant accounts specific to the author's experience, and contain geographic instances based on the…

  19. A Hybrid Maximum Power Point Tracking Approach for Photovoltaic Systems under Partial Shading Conditions Using a Modified Genetic Algorithm and the Firefly Algorithm

    Directory of Open Access Journals (Sweden)

    Yu-Pei Huang

    2018-01-01

    Full Text Available This paper proposes a modified maximum power point tracking (MPPT) algorithm for photovoltaic systems under rapidly changing partial shading conditions (PSCs). The proposed algorithm integrates a genetic algorithm (GA) and the firefly algorithm (FA) and further improves its calculation process via a differential evolution (DE) algorithm. The conventional GA is not advisable for MPPT because of its complicated calculations and low accuracy under PSCs. In this study, we simplified the GA calculations by integrating the DE mutation process and the FA attraction process. Results from both simulation and evaluation verify that the proposed algorithm provides rapid response time and high accuracy due to the simplified processing. For instance, evaluation results demonstrate that, compared to the conventional GA, the execution time and tracking accuracy of the proposed algorithm can be improved by around 69.4% and 4.16%, respectively. In comparison to FA, the tracking speed and tracking accuracy of the proposed algorithm can be improved by around 42.9% and 1.85%, respectively. Consequently, the major improvement of the proposed method over the conventional GA and FA is tracking speed. Moreover, this research provides a framework for integrating multiple nature-inspired algorithms for MPPT. The proposed method is also adaptable to different types of solar panels and different system formats with specifically designed equations, offering rapid tracking speed with high accuracy under PSCs.
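The FA attraction process that the hybrid borrows can be sketched as a single update step, the standard firefly movement after Yang. Parameter values here are illustrative defaults, not the paper's tuning:

```python
import math
import random

def firefly_move(xi, xj, beta0=1.0, gamma=1.0, alpha=0.05, rng=random.random):
    """One FA attraction step: xi drifts toward the brighter firefly xj.

    Attractiveness decays with squared distance as beta0 * exp(-gamma * r^2);
    alpha scales a small random kick that keeps the swarm exploring.
    """
    r2 = sum((a - b) ** 2 for a, b in zip(xi, xj))
    beta = beta0 * math.exp(-gamma * r2)
    return [a + beta * (b - a) + alpha * (rng() - 0.5) for a, b in zip(xi, xj)]
```

In an MPPT setting each "firefly" would be a candidate duty cycle (or vector of duty cycles) and brightness would be the measured PV power.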

  20. Photovoltaic System Modeling with Fuzzy Logic Based Maximum Power Point Tracking Algorithm

    Directory of Open Access Journals (Sweden)

    Hasan Mahamudul

    2013-01-01

    Full Text Available This paper presents a novel modeling technique for a PV module with a fuzzy logic based MPPT algorithm and boost converter in the Simulink environment. The prime contributions of this work are the simplification of the PV modeling technique and the implementation of a fuzzy based MPPT system to track maximum power efficiently. The main points of this paper are the precise control of the duty cycle with respect to various atmospheric conditions, illustration of the PV characteristic curves, and operation analysis of the converter. The proposed system has been applied to three different PV modules: SOLKAR 36 W, BP MSX 60 W, and KC85T 87 W. Finally, the resulting data have been compared with theoretical predictions and company-specified values to ensure the validity of the system.

  1. Hybrid SOA-SQP algorithm for dynamic economic dispatch with valve-point effects

    Energy Technology Data Exchange (ETDEWEB)

    Sivasubramani, S.; Swarup, K.S. [Department of Electrical Engineering, Indian Institute of Technology Madras, Chennai 600036 (India)

    2010-12-15

    This paper proposes a hybrid technique combining a new heuristic algorithm named the seeker optimization algorithm (SOA) and the sequential quadratic programming (SQP) method for solving the dynamic economic dispatch problem with valve-point effects. The SOA is based on the concept of simulating the act of human searching, where the search direction is based on the empirical gradient (EG) obtained by evaluating the response to position changes, and the step length is based on uncertainty reasoning using a simple fuzzy rule. In this paper, SOA is used as a base-level search, which can give a good direction toward the globally optimal region, and SQP as a local search to fine-tune the solution obtained from SOA. Thus SQP guides SOA to find an optimal or near-optimal solution in the complex search space. Two test systems, i.e., a 5-unit system with losses and a 10-unit system without losses, have been used to validate the efficiency of the proposed hybrid method. Simulation results clearly show that the proposed method outperforms existing methods in terms of solution quality. (author)

  2. A Compound Algorithm for Maximum Power Point Tracking Used in Laser Power Beaming

    Science.gov (United States)

    Chen, Cheng; Liu, Qiang; Gao, Shan; Teng, Yun; Cheng, Lin; Yu, Chengtao; Peng, Kai

    2018-03-01

    With high voltage intelligent substations developing at a rapid pace, more and more artificial intelligence techniques have been incorporated into power devices to meet automation needs. For the safety of line maintenance staff, the high voltage isolating switch draws great attention among the most important power devices because of its capability to connect and disconnect the high voltage circuit. However, due to the very high voltage of the isolating switch's working environment, the power supply system of the surveillance devices can suffer from great electromagnetic interference. Laser power beaming shows its merits in this situation because it can provide steady power from a distance, day or night. The energy conversion efficiency then arises as a new concern. To make as much use of the laser power as possible, our work focuses on extracting maximum power from the photovoltaic (PV) panel. In this paper, we propose a neural network based algorithm that relates both the intrinsic and the extrinsic features of the PV panel to the ratio of the voltage at the maximum power point (MPP) to the open circuit voltage of the PV panel. Simulations and experiments were carried out to verify the validity of our algorithm.

  3. Correlation Wave-Front Sensing Algorithms for Shack-Hartmann-Based Adaptive Optics using a Point Source

    International Nuclear Information System (INIS)

    Poynee, L A

    2003-01-01

    Shack-Hartmann based adaptive optics systems with a point-source reference normally use a wave-front sensing algorithm that estimates the centroid (center of mass) of the point-source image 'spot' to determine the wave-front slope. The centroiding algorithm suffers from several weaknesses. For a small number of pixels, the algorithm gain is dependent on spot size. The use of many pixels on the detector leads to significant propagation of read noise. Finally, background light or spot halo aberrations can skew results. In this paper an alternative algorithm that suffers from none of these problems is proposed: correlation of the spot with an ideal reference spot. The correlation method is derived and a theoretical analysis evaluates its performance in comparison with centroiding. Both simulation and data from real AO systems are used to illustrate the results. The correlation algorithm is more robust than centroiding, but requires more computation.
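The two spot-position estimators compared above can be sketched as follows. This is a minimal integer-pixel version for illustration; real AO pipelines refine the correlation peak to sub-pixel precision:

```python
import numpy as np

def centroid(spot):
    """Center of mass (x, y) of a spot image; its gain depends on spot size
    and it is skewed by background light."""
    ys, xs = np.indices(spot.shape)
    total = spot.sum()
    return (xs * spot).sum() / total, (ys * spot).sum() / total

def correlation_peak(spot, reference):
    """Integer-pixel (row, col) shift that best aligns spot with reference,
    found at the maximum of the FFT-based cross-correlation."""
    corr = np.fft.ifft2(np.fft.fft2(spot) * np.conj(np.fft.fft2(reference))).real
    peak = np.unravel_index(corr.argmax(), corr.shape)
    # wrap shifts larger than half the frame into negative offsets
    return tuple(int(p) if p <= s // 2 else int(p) - s
                 for p, s in zip(peak, corr.shape))
```

Correlating against a known reference spot ignores structure that does not match the template, which is the robustness-to-background advantage the abstract describes, at the price of the extra FFTs.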

  4. Research on Scheduling Algorithm for Multi-satellite and Point Target Task on Swinging Mode

    Science.gov (United States)

    Wang, M.; Dai, G.; Peng, L.; Song, Z.; Chen, G.

    2012-12-01

    and negative swinging angles and the computation of time windows are analyzed and discussed, and many strategies to improve the efficiency of this model are put forward. To solve the model, we introduce the concept of an activity sequence map, with which the activity choice and the start time of the activity can be separated. We also put forward three neighborhood operators to search the solution space. The front movement remaining time and the back movement remaining time are used to analyze the feasibility of generating solutions from the neighborhood operators. Finally, an algorithm to solve the problem and model is put forward based on a genetic algorithm. Population initialization, a crossover operator, a mutation operator, individual evaluation, a collision decrease operator, a selection operator and a collision elimination operator are designed in the paper. The scheduling result and a simulation of a practical example with 5 satellites and 100 point targets in swinging mode are given, and the scheduling performance is analyzed for swinging angles of 0, 5, 10, 15 and 25. The results show that the model and the algorithm are more effective than those without swinging mode.

  5. Algorithms

    Indian Academy of Sciences (India)

    polynomial) division have been found in Vedic Mathematics, which is dated much before Euclid's algorithm. A programming language is used to describe an algorithm for execution on a computer. An algorithm expressed using a programming.

  6. LiveWire interactive boundary extraction algorithm based on Haar wavelet transform and control point set direction search

    Science.gov (United States)

    Cheng, Jun; Zhang, Jun; Tian, Jinwen

    2015-12-01

    Based on a deep analysis of the LiveWire interactive boundary extraction algorithm, a new algorithm focused on improving the speed of the LiveWire algorithm is proposed in this paper. Firstly, the Haar wavelet transform is applied to the input image, and the boundary is extracted on the low-resolution image obtained by the wavelet transform. Secondly, the LiveWire shortest path is calculated based on a control-point-set direction search, utilizing the spatial relationship between the two control points that users provide in real time. Thirdly, the search order of the points adjacent to the starting node is set in advance, and an ordinary queue instead of a priority queue is used as the storage pool of the points when optimizing their shortest-path values, reducing the complexity of the algorithm from O(n²) to O(n). Finally, a region iterative backward projection method based on neighborhood pixel polling is used to convert the dual-pixel boundary of the reconstructed image to a single-pixel boundary after the inverse Haar wavelet transform. The algorithm proposed in this paper combines the advantages of the Haar wavelet transform, which offers fast image decomposition and reconstruction and is more consistent with the texture features of the image, with those of the optimal path search based on the control-point-set direction search, which reduces the time complexity of the original algorithm. The algorithm thus improves the speed of interactive boundary extraction while reflecting the boundary information of the image more comprehensively. All of the above contribute to the execution efficiency and robustness of the algorithm.
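The priority-queue-to-plain-queue replacement works because, with uniform step costs, a FIFO queue already pops nodes in nondecreasing path-length order (plain BFS). A minimal grid sketch under that unit-cost assumption, which simplifies LiveWire's weighted edge costs:

```python
from collections import deque

def bfs_path_cost(grid_w, grid_h, blocked, start, goal):
    """Shortest path length on a 4-connected grid using a plain FIFO queue.

    With equal step costs, BFS visits nodes in nondecreasing distance order,
    so no priority queue (and no log-factor overhead) is needed.
    """
    dist = {start: 0}
    q = deque([start])
    while q:
        x, y = q.popleft()
        if (x, y) == goal:
            return dist[(x, y)]
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nx < grid_w and 0 <= ny < grid_h
                    and (nx, ny) not in blocked and (nx, ny) not in dist):
                dist[(nx, ny)] = dist[(x, y)] + 1
                q.append((nx, ny))
    return None  # goal unreachable
```

When edge costs vary (as in the true LiveWire cost map), the FIFO ordering no longer guarantees optimality, which is the trade-off such a speed-up accepts.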

  7. Dynamic Harmony Search with Polynomial Mutation Algorithm for Valve-Point Economic Load Dispatch

    Directory of Open Access Journals (Sweden)

    M. Karthikeyan

    2015-01-01

    mutation (DHSPM) algorithm to solve the ORPD problem. In the DHSPM algorithm the key parameters of the HS algorithm, namely the harmony memory considering rate (HMCR) and pitch adjusting rate (PAR), are changed dynamically, and there is no need to predefine these parameters. Additionally, polynomial mutation is inserted into the updating step of the HS algorithm to favor exploration and exploitation of the search space. The DHSPM algorithm is tested on three power system cases consisting of 3, 13, and 40 thermal units. The computational results show that the DHSPM algorithm is more effective in finding better solutions than other computational-intelligence-based methods.

  8. Fine-scale estimation of carbon monoxide and fine particulate matter concentrations in proximity to a road intersection by using wavelet neural network with genetic algorithm

    Science.gov (United States)

    Wang, Zhanyong; Lu, Feng; He, Hong-di; Lu, Qing-Chang; Wang, Dongsheng; Peng, Zhong-Ren

    2015-03-01

    At road intersections, vehicles frequently stop with idling engines during the red-light period and speed up rapidly in the green-light period, which generates higher velocity fluctuations and thus higher emission rates. Frequent changes of wind direction further add to the highly variable dispersion of pollutants at the street scale. It is therefore very difficult to estimate the distribution of pollutant concentrations using conventional deterministic causal models. For this reason, a hybrid model combining a wavelet neural network and a genetic algorithm (GA-WNN) is proposed for predicting 5-min series of carbon monoxide (CO) and fine particulate matter (PM2.5) concentrations in proximity to an intersection. The proposed model is examined against measured data in two respects. As the measured pollutant concentrations are found to depend on the distance to the intersection, the model is evaluated at three locations: 110 m, 330 m and 500 m. Because pollutant concentrations vary over time, the model is also evaluated separately for peak and off-peak traffic periods. The proposed model, together with a back-propagation neural network (BPNN), is examined against the measured data in these situations. The proposed model is found to perform better in predictability and precision for both CO and PM2.5 than the BPNN, implying that the hybrid model can be an effective tool for improving the accuracy of estimated pollutant distribution patterns at intersections. These findings demonstrate the potential of the proposed model for real-time forecasting of air pollution distribution patterns in proximity to road intersections.

  9. An algorithm to locate optimal bond breaking points on a potential energy surface for applications in mechanochemistry and catalysis.

    Science.gov (United States)

    Bofill, Josep Maria; Ribas-Ariño, Jordi; García, Sergio Pablo; Quapp, Wolfgang

    2017-10-21

    The reaction path of a mechanically induced chemical transformation changes under stress. It is well established that the force-induced structural changes of minima and saddle points, i.e., the movement of the stationary points on the original or stress-free potential energy surface, can be described by a Newton Trajectory (NT). Given a reactive molecular system, a well-fitted pulling direction, and a sufficiently large value of the force, the minimum configuration of the reactant and the saddle point configuration of a transition state collapse at a point on the corresponding NT. This point is called the barrier breakdown point or bond breaking point (BBP). The Hessian matrix at the BBP has an eigenvector with zero eigenvalue which coincides with the gradient. It indicates which force (both in magnitude and direction) should be applied to the system to induce the reaction in a barrierless process. Within the manifold of BBPs, there exist optimal BBPs which indicate the optimal pulling direction and the minimal magnitude of the force to be applied for a given mechanochemical transformation. Since these special points are very important in the context of mechanochemistry and catalysis, it is crucial to develop efficient algorithms for their location. Here, we propose a Gauss-Newton algorithm that is based on the minimization of a positive definite function (the so-called σ-function). The behavior and efficiency of the new algorithm are shown for 2D test functions and for a real chemical example.

  10. A Homogeneous and Self-Dual Interior-Point Linear Programming Algorithm for Economic Model Predictive Control

    DEFF Research Database (Denmark)

    Sokoler, Leo Emil; Frison, Gianluca; Skajaa, Anders

    2015-01-01

    We develop an efficient homogeneous and self-dual interior-point method (IPM) for the linear programs arising in economic model predictive control of constrained linear systems with linear objective functions. The algorithm is based on a Riccati iteration procedure, which is adapted to the linear system of equations solved in homogeneous and self-dual IPMs. Fast convergence is further achieved using a warm-start strategy. We implement the algorithm in MATLAB and C. Its performance is tested using a conceptual power management case study. Closed-loop simulations show that 1) the proposed algorithm...

  11. Current review and a simplified "five-point management algorithm" for keratoconus

    Directory of Open Access Journals (Sweden)

    Rohit Shetty

    2015-01-01

    Full Text Available Keratoconus is a slowly progressive, noninflammatory ectatic corneal disease characterized by changes in corneal collagen structure and organization. Though the etiology remains unknown, novel techniques are continuously emerging for the diagnosis and management of the disease. Demographical parameters are known to affect the rate of progression of the disease. Common methods of vision correction for keratoconus range from spectacles and rigid gas-permeable contact lenses to other specialized lenses such as piggyback, Rose-K or Boston scleral lenses. Corneal collagen cross-linking is effective in stabilizing the progression of the disease. Intra-corneal ring segments can improve vision by flattening the cornea in patients with mild to moderate keratoconus. Topography-guided custom ablation treatment betters the quality of vision by correcting the refractive error and improving the contact lens fit. In advanced keratoconus with corneal scarring, lamellar or full thickness penetrating keratoplasty will be the treatment of choice. With such a wide spectrum of alternatives available, it is necessary to choose the best possible treatment option for each patient. Based on a brief review of the literature and our own studies we have designed a five-point management algorithm for the treatment of keratoconus.

  12. Accelerating simulation for the multiple-point statistics algorithm using vector quantization

    Science.gov (United States)

    Zuo, Chen; Pan, Zhibin; Liang, Hao

    2018-03-01

    Multiple-point statistics (MPS) is a prominent algorithm for simulating categorical variables via a sequential simulation procedure. Taking training images (TIs) as prior conceptual models, MPS extracts patterns from the TIs using a template and records their occurrences in a database. However, complex patterns increase the size of the database and require considerable time to retrieve the desired elements. In order to speed up simulation and improve simulation quality over state-of-the-art MPS methods, we propose an accelerated MPS simulation using vector quantization (VQ), called VQ-MPS. First, a variable representation is presented to make categorical variables applicable to vector quantization. Second, we adopt a tree-structured VQ to compress the database so that stationary simulations are realized. Finally, a transformed template and classified VQ are used to address nonstationarity. A two-dimensional (2D) stationary channelized reservoir image is used to validate the proposed VQ-MPS. In comparison with several existing MPS programs, our method exhibits significantly better performance in terms of computational time, pattern reproduction, and spatial uncertainty. Further demonstrations consist of a 2D four-facies simulation, two 2D nonstationary channel simulations, and a three-dimensional (3D) rock simulation. The results reveal that the proposed method is also capable of handling multifacies, nonstationary, and 3D simulations based on 2D TIs.
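The compression idea can be illustrated with a tiny k-means-style codebook, the basic flavor of VQ; this is a generic flat sketch, not the authors' tree-structured variant:

```python
import numpy as np

def build_codebook(patterns, k, iters=10, seed=0):
    """Tiny k-means codebook: represent many patterns by k codewords."""
    rng = np.random.default_rng(seed)
    codebook = patterns[rng.choice(len(patterns), k, replace=False)].astype(float)
    for _ in range(iters):
        # assign each pattern to its nearest codeword, then recenter codewords
        d2 = ((patterns[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
        labels = d2.argmin(axis=1)
        for j in range(k):
            members = patterns[labels == j]
            if len(members):
                codebook[j] = members.mean(axis=0)
    return codebook

def quantize(pattern, codebook):
    """Index of the nearest codeword (squared Euclidean distance)."""
    return int(((codebook - pattern) ** 2).sum(-1).argmin())
```

In an MPS database, looking up a pattern then reduces to finding its nearest codeword instead of scanning every stored pattern, which is where the reported speed-up comes from.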

  13. Verification of the Solar Dynamics Observatory High Gain Antenna Pointing Algorithm Using Flight Data

    Science.gov (United States)

    Bourkland, Kristin L.; Liu, Kuo-Chia

    2011-01-01

    presented that shows the readback delay does not have a negative impact on gimbal control. The decision was made to consider implementing two of the jitter mitigation techniques on board the spacecraft: stagger stepping and the NSR. Flight data from two sets of handovers, one set without jitter mitigation and the other with mitigation enabled, were examined. The trajectory of the predicted handover was compared with the measured trajectory for the two cases, showing that tracking was not negatively impacted by the addition of the jitter mitigation techniques. Additionally, the individual gimbal steps were examined, and it was confirmed that the stagger stepping and NSRs worked as designed. An Image Quality Test was performed to determine the amount of cumulative jitter from the reaction wheels, HGAs, and instruments during various combinations of typical operations. In this paper, the flight results are examined from a test where the HGAs follow the path of a nominal handover with stagger stepping on and HMI NSRs enabled. In this case, the reaction wheels are moving at low speed and the instruments are taking pictures in their standard sequence. The flight data show the level of jitter that the instruments see when their shutters are open. The HGA-induced jitter is well within the jitter requirement when the stagger step and NSR mitigation options are enabled. The SDO HGA pointing algorithm was designed to achieve nominal antenna pointing at the ground station, perform slews during handover season, and provide three HGA-induced jitter mitigation options without compromising pointing objectives. During the commissioning phase, flight data sets were collected to verify the HGA pointing algorithm and demonstrate its jitter mitigation capabilities.

  14. Application of point-to-point matching algorithms for background correction in on-line liquid chromatography-Fourier transform infrared spectrometry (LC-FTIR).

    Science.gov (United States)

    Kuligowski, J; Quintás, G; Garrigues, S; de la Guardia, M

    2010-03-15

    A new background correction method for the on-line coupling of gradient liquid chromatography and Fourier transform infrared spectrometry has been developed. It is based on the use of a point-to-point matching algorithm that compares the absorption spectra of the sample data set with those of a previously recorded reference data set in order to select an appropriate reference spectrum. The spectral range used for the point-to-point comparison is selected with minimal user interaction, thus considerably facilitating the application of the whole method. The background correction method has been successfully tested on a chromatographic separation of four nitrophenols running acetonitrile (0.08%, v/v TFA):water (0.08%, v/v TFA) gradients with compositions ranging from 35 to 85% (v/v) acetonitrile, giving accurate results for both baseline-resolved and overlapped peaks. Copyright (c) 2009 Elsevier B.V. All rights reserved.
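The core of the point-to-point matching step can be sketched as a nearest-spectrum search over a chosen spectral window. This is a schematic sketch; function names and the squared-difference score are assumptions for illustration:

```python
import numpy as np

def pick_reference(sample_spectrum, reference_set, window):
    """Point-to-point matching: return the index of the reference spectrum
    closest to the sample within the spectral window (a pair of indices),
    scored by the summed squared point-to-point differences."""
    lo, hi = window
    seg = sample_spectrum[lo:hi]
    scores = [((ref[lo:hi] - seg) ** 2).sum() for ref in reference_set]
    return int(np.argmin(scores))
```

The selected reference spectrum would then be subtracted from the sample spectrum to remove the gradient-dependent eluent background.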

  15. A Simple Checking Algorithm with Perturb and Observe Maximum Power Point Tracking for Partially Shaded Photovoltaic System

    Directory of Open Access Journals (Sweden)

    Rozana Alik

    2016-03-01

    Full Text Available This paper presents a simple checking algorithm for the maximum power point tracking (MPPT) technique for photovoltaic (PV) systems using the Perturb and Observe (P&O) algorithm. The main benefit of this checking algorithm is the simplicity and efficiency of the system: the duty cycle produced by the MPPT is smoother and changes faster according to the maximum power point (MPP). The checking algorithm determines the maximum power first, before the P&O algorithm takes place, to identify the voltage at the MPP (VMPP), which is needed to calculate the duty cycle for the boost converter. To test the effectiveness of the algorithm, a simulation model of the PV system has been built in MATLAB/Simulink under different levels of irradiation, in other words, partially shaded conditions of the PV array. The results from the system using the proposed approach show a faster response and low ripple. Besides, the results are close to the desired outputs and exhibit a system efficiency of approximately 98.25%. On the other hand, the system with conventional P&O MPPT appears unstable and has a higher percentage of error. In summary, the proposed method is useful under varying levels of irradiation, with higher system efficiency.
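The underlying P&O loop referred to above is a simple hill climb on the power-duty curve. A minimal sketch, where the `measure` callback stands in for a real converter/panel power measurement and is an assumption of this illustration:

```python
def perturb_and_observe(measure, duty=0.5, step=0.01, iters=100):
    """Classic P&O hill climbing: keep stepping the duty cycle in the
    direction that last increased the measured power.

    measure(duty) must return the PV output power at that duty cycle.
    """
    p_prev = measure(duty)
    direction = 1
    for _ in range(iters):
        duty = min(1.0, max(0.0, duty + direction * step))
        p = measure(duty)
        if p < p_prev:          # power dropped: reverse the perturbation
            direction = -direction
        p_prev = p
    return duty
```

The sketch also exposes the two classic P&O weaknesses the paper's checking step targets: steady-state oscillation of one step around the MPP, and confusion on multi-peak curves under partial shading.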

  16. Algorithms

    Indian Academy of Sciences (India)

    to as 'divide-and-conquer'. Although there has been a large effort in realizing efficient algorithms, there are not many universally accepted algorithm design paradigms. In this article, we illustrate algorithm design techniques such as balancing, greedy strategy, dynamic programming strategy, and backtracking or traversal of ...

  17. Evaluation of a photovoltaic energy mechatronics system with a built-in quadratic maximum power point tracking algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Chao, R.M.; Ko, S.H.; Lin, I.H. [Department of Systems and Naval Mechatronics Engineering, National Cheng Kung University, Tainan, Taiwan 701 (China); Pai, F.S. [Department of Electronic Engineering, National University of Tainan (China); Chang, C.C. [Department of Environment and Energy, National University of Tainan (China)

    2009-12-15

    The historically high price of crude oil is stimulating research into solar (green) energy as an alternative energy source. In general, applications with large solar energy output require a maximum power point tracking (MPPT) algorithm to optimize the power generated by the photovoltaic effect. This work aims to provide a stand-alone solution for solar energy applications by integrating a DC/DC buck converter with a newly developed quadratic MPPT algorithm, along with its appropriate software and hardware. The quadratic MPPT method utilizes three previously used duty cycles with their corresponding power outputs. It approaches the maximum value by using a second-order polynomial formula, which converges faster than existing MPPT algorithms. The hardware implementation takes advantage of a real-time controller system from National Instruments, USA. Experimental results have shown that the proposed solar mechatronics system can correctly and effectively track the maximum power point without any difficulties. (author)
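    The quadratic step described above, fitting a second-order polynomial through three (duty cycle, power) samples and jumping to its vertex, can be sketched as follows. The power model `power_at` and all constants are hypothetical, chosen only to make the sketch self-contained:

```python
# Sketch of the quadratic-MPPT idea: fit a parabola through three
# (duty cycle, power) samples and take its vertex as the next estimate.

def quadratic_vertex(x1, y1, x2, y2, x3, y3):
    # Vertex abscissa of the parabola through three points (standard formula).
    num = y1 * (x2**2 - x3**2) + y2 * (x3**2 - x1**2) + y3 * (x1**2 - x2**2)
    den = y1 * (x2 - x3) + y2 * (x3 - x1) + y3 * (x1 - x2)
    return 0.5 * num / den

def power_at(d, d_mpp=0.55, p_max=80.0):
    # Toy unimodal P(d) curve (hypothetical, not a converter model).
    return p_max - 300.0 * (d - d_mpp) ** 2

# Three previously used duty cycles and their measured powers:
d_est = quadratic_vertex(0.3, power_at(0.3),
                         0.5, power_at(0.5),
                         0.7, power_at(0.7))
```

    Because a single parabola is fitted per update, the estimate can jump directly toward the MPP instead of inching there one perturbation at a time, which is the claimed source of the faster convergence.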

  18. A Class of Large-Update and Small-Update Primal-Dual Interior-Point Algorithms for Linear Optimization

    NARCIS (Netherlands)

    Bai, Y.Q.; Lesaja, G.; Roos, C.; Wang, G.Q.; El Ghami, M.

    2008-01-01

    In this paper we present a class of polynomial primal-dual interior-point algorithms for linear optimization based on a new class of kernel functions. This class is fairly general and includes the classical logarithmic function, the prototype self-regular function, and non-self-regular kernel functions.

  19. Proximal Humerus

    NARCIS (Netherlands)

    Diercks, Ron L.; Bain, Gregory; Itoi, Eiji; Di Giacomo, Giovanni; Sugaya, Hiroyuki

    2015-01-01

    This chapter describes the bony structures of the proximal humerus. The proximal humerus is often regarded as consisting of four parts, which assists in understanding function and, more specially, describes the essential parts in reconstruction after fracture or in joint replacement. These are the

  20. An Optimized Structure on FPGA of Key Point Detection in SIFT Algorithm

    Directory of Open Access Journals (Sweden)

    Xu Chenyu

    2016-01-01

    Full Text Available The SIFT algorithm is one of the most efficient and powerful algorithms for describing image features and has been applied in many fields. In this paper, we propose an optimized method for the hardware implementation of the SIFT algorithm, focusing on the structure of the data generation stage. A pipeline architecture is introduced to accelerate the optimized system. Parameter settings and approximation control under different image qualities and hardware resource budgets are the focus of this paper. Experimental results show that this structure is real-time and effective, and they provide guidance for adapting the design to different situations.

  1. MODIS 250m burned area mapping based on an algorithm using change point detection and Markov random fields.

    Science.gov (United States)

    Mota, Bernardo; Pereira, Jose; Campagnolo, Manuel; Killick, Rebeca

    2013-04-01

    Area burned in the tropical savannas of Brazil was mapped using MODIS-AQUA daily 250 m resolution imagery by adapting one of the European Space Agency fire_CCI project burned area algorithms, based on change point detection and Markov random fields. The study area covers 1.44 Mkm2, and the analysis was performed with data from 2005. The daily 1000 m image quality layer was used for cloud and cloud shadow screening. The algorithm treats each pixel as a time series and detects changes in the statistical properties of NIR reflectance values to identify potential burning dates. The first step of the algorithm is robust filtering, to exclude outlier observations, followed by application of the Pruned Exact Linear Time (PELT) change point detection technique. Near-infrared (NIR) spectral reflectance changes between time segments and post-change NIR reflectance values are combined into a fire likelihood score. Change points corresponding to an increase in reflectance are dismissed as potential burn events, as are those occurring outside a pre-defined fire season. In the last step of the algorithm, monthly burned area probability maps and detection date maps are converted to dichotomous (burned/unburned) maps using Markov random fields, which take into account both spatial and temporal relations in the potential burned area maps. A preliminary assessment of our results is performed by comparison with data from the MODIS 1 km active fires and 500 m burned area products, taking into account differences in spatial resolution between the two sensors.
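    The PELT step used above can be illustrated with a minimal penalized-cost change point detector for mean shifts. This is a generic sketch in the spirit of PELT (dynamic programming with candidate pruning), not the fire_CCI implementation; the penalty value and test series are hypothetical:

```python
# Minimal PELT-style change point detection for shifts in the mean.
# Segment cost = within-segment sum of squared deviations from the mean.

def segment_cost(prefix, prefix_sq, s, t):
    # Cost of y[s:t], computed in O(1) from prefix sums.
    n = t - s
    total = prefix[t] - prefix[s]
    total_sq = prefix_sq[t] - prefix_sq[s]
    return total_sq - total * total / n

def pelt(y, penalty):
    n = len(y)
    prefix = [0.0] * (n + 1)
    prefix_sq = [0.0] * (n + 1)
    for i, v in enumerate(y):
        prefix[i + 1] = prefix[i] + v
        prefix_sq[i + 1] = prefix_sq[i] + v * v
    f = [0.0] * (n + 1)         # optimal penalized cost up to t
    f[0] = -penalty
    last = [0] * (n + 1)        # last change point before t
    candidates = [0]
    for t in range(1, n + 1):
        best_s, best = 0, float("inf")
        for s in candidates:
            c = f[s] + segment_cost(prefix, prefix_sq, s, t) + penalty
            if c < best:
                best_s, best = s, c
        f[t], last[t] = best, best_s
        # PELT pruning: drop candidates that can never become optimal again.
        candidates = [s for s in candidates
                      if f[s] + segment_cost(prefix, prefix_sq, s, t) <= f[t]]
        candidates.append(t)
    # Backtrack the change points.
    cps, t = [], n
    while t > 0:
        t = last[t]
        if t > 0:
            cps.append(t)
    return sorted(cps)

series = [0.0] * 30 + [5.0] * 30   # one clear mean shift at index 30
cps = pelt(series, penalty=10.0)
```

    The pruning step is what gives PELT its expected linear runtime while still returning the exact penalized optimum.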

  2. Building optimal regression tree by ant colony system-genetic algorithm: Application to modeling of melting points

    Energy Technology Data Exchange (ETDEWEB)

    Hemmateenejad, Bahram, E-mail: hemmatb@sums.ac.ir [Department of Chemistry, Shiraz University, Shiraz (Iran, Islamic Republic of); Medicinal and Natural Products Chemistry Research Center, Shiraz University of Medical Sciences, Shiraz (Iran, Islamic Republic of); Shamsipur, Mojtaba [Department of Chemistry, Razi University, Kermanshah (Iran, Islamic Republic of); Zare-Shahabadi, Vali [Young Researchers Club, Mahshahr Branch, Islamic Azad University, Mahshahr (Iran, Islamic Republic of); Akhond, Morteza [Department of Chemistry, Shiraz University, Shiraz (Iran, Islamic Republic of)

    2011-10-17

    Highlights: Ant colony systems help to build optimum classification and regression trees. Using genetic algorithm operators in ant colony systems resulted in more appropriate models. Variable selection in each terminal node of the tree gives promising results. CART-ACS-GA could model the melting points of organic materials with prediction errors lower than previous models. - Abstract: Classification and regression trees (CART) possess the advantage of being able to handle large data sets and yield readily interpretable models. A conventional method of building a regression tree is recursive partitioning, which results in a good but not optimal tree. Ant colony system (ACS), a meta-heuristic algorithm derived from the observation of real ants, can be used to overcome this problem. The purpose of this study was to explore the use of CART and its combination with ACS for modeling the melting points of a large variety of chemical compounds. Genetic algorithm (GA) operators (e.g., crossover and mutation) were combined with the ACS algorithm to select the best solution model. In addition, at each terminal node of the resulting tree, variable selection was done by the ACS-GA algorithm to build an appropriate partial least squares (PLS) model. To test the ability of the resulting tree, a set of 4173 structures and their melting points was used (3000 compounds as a training set and 1173 as a validation set). Further, an external test set containing 277 drugs was used to validate the prediction ability of the tree. Comparison of the results obtained from both trees showed that the tree constructed by the ACS-GA algorithm performs better than that produced by the recursive partitioning procedure.

  3. Evaluation of stochastic algorithms for financial mathematics problems from point of view of energy-efficiency

    Science.gov (United States)

    Atanassov, E.; Dimitrov, D.; Gurov, T.

    2015-10-01

    The recent developments in the area of high-performance computing are driven not only by the desire for ever higher performance but also by the rising cost of electricity. The use of various types of accelerators, such as GPUs and the Intel Xeon Phi, has become mainstream, and many algorithms and applications have been ported to make use of them where available. In Financial Mathematics, the question of optimal use of computational resources should also take into account limitations on space, because in many use cases the servers are deployed close to the exchanges. In this work we evaluate various algorithms for option pricing that we have implemented for different target architectures in terms of their energy and space efficiency. Since it has been established that low-discrepancy sequences may be better than pseudorandom numbers for these types of algorithms, we also test the Sobol and Halton sequences. We present the raw results, the computed metrics and the conclusions from our tests.
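    The use of a low-discrepancy sequence for option pricing can be sketched with a small quasi-Monte Carlo example. The Halton (here 1-D van der Corput) generator and the Black-Scholes parameters below are generic illustrations, not the authors' benchmark code:

```python
# Quasi-Monte Carlo pricing of a European call under Black-Scholes dynamics,
# using a base-2 Halton (van der Corput) sequence instead of pseudorandoms.
from math import exp, sqrt
from statistics import NormalDist

def halton(i, base=2):
    # i-th element (1-indexed) of the Halton sequence in the given base.
    f, r = 1.0, 0.0
    while i > 0:
        f /= base
        r += f * (i % base)
        i //= base
    return r

def qmc_call_price(s0=100.0, k=100.0, r=0.05, sigma=0.2, t=1.0, n=20000):
    inv = NormalDist().inv_cdf
    drift = (r - 0.5 * sigma * sigma) * t
    vol = sigma * sqrt(t)
    payoff = 0.0
    for i in range(1, n + 1):
        z = inv(halton(i))            # low-discrepancy normal draw
        st = s0 * exp(drift + vol * z)
        payoff += max(st - k, 0.0)
    return exp(-r * t) * payoff / n

price = qmc_call_price()
```

    For these parameters the Black-Scholes closed form gives roughly 10.45, and the quasi-Monte Carlo estimate converges toward it noticeably faster than a plain pseudorandom estimator with the same sample count.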

  4. Evaluation of stochastic algorithms for financial mathematics problems from point of view of energy-efficiency

    Energy Technology Data Exchange (ETDEWEB)

    Atanassov, E.; Dimitrov, D., E-mail: d.slavov@bas.bg, E-mail: emanouil@parallel.bas.bg, E-mail: gurov@bas.bg; Gurov, T. [Institute of Information and Communication Technologies, BAS, Acad. G. Bonchev str., bl. 25A, 1113 Sofia (Bulgaria)

    2015-10-28

    The recent developments in the area of high-performance computing are driven not only by the desire for ever higher performance but also by the rising cost of electricity. The use of various types of accelerators, such as GPUs and the Intel Xeon Phi, has become mainstream, and many algorithms and applications have been ported to make use of them where available. In Financial Mathematics, the question of optimal use of computational resources should also take into account limitations on space, because in many use cases the servers are deployed close to the exchanges. In this work we evaluate various algorithms for option pricing that we have implemented for different target architectures in terms of their energy and space efficiency. Since it has been established that low-discrepancy sequences may be better than pseudorandom numbers for these types of algorithms, we also test the Sobol and Halton sequences. We present the raw results, the computed metrics and the conclusions from our tests.

  5. An improved Pattern Search based algorithm to solve the Dynamic Economic Dispatch problem with valve-point effect

    International Nuclear Information System (INIS)

    Alsumait, J.S.; Qasem, M.; Sykulski, J.K.; Al-Othman, A.K.

    2010-01-01

    In this paper, an improved algorithm based on the Pattern Search (PS) method to solve the Dynamic Economic Dispatch (DED) problem is proposed. The algorithm maintains the essential unit ramp rate constraint, along with all other necessary constraints, not only for the time horizon of operation (24 h) but also through the transition period to the next time horizon (next day), in order to avoid discontinuity of power system operation. The Dynamic Economic and Emission Dispatch (DEED) problem is also considered. The load balance constraints, operating limits, valve-point loading and network losses are included in the models of both DED and DEED. The numerical results clarify the significance of the improved algorithm and verify its performance.

  6. The collapsed cone algorithm for (192)Ir dosimetry using phantom-size adaptive multiple-scatter point kernels.

    Science.gov (United States)

    Tedgren, Åsa Carlsson; Plamondon, Mathieu; Beaulieu, Luc

    2015-07-07

    The aim of this work was to investigate how dose distributions calculated with the collapsed cone (CC) algorithm depend on the size of the water phantom used in deriving the point kernel for multiple scatter. A research version of the CC algorithm, equipped with a set of selectable multiple-scatter point kernels initially derived in water phantoms of various dimensions, was used. The new point kernels were generated using EGSnrc in spherical water phantoms of radii 5 cm, 7.5 cm, 10 cm, 15 cm, 20 cm, 30 cm and 50 cm. Dose distributions derived with CC in water phantoms of different dimensions and in a CT-based clinical breast geometry were compared to Monte Carlo (MC) simulations using the Geant4-based brachytherapy-specific MC code Algebra. Agreement with MC within 1% was obtained when the dimensions of the phantom used to derive the multiple-scatter kernel were similar to those of the calculation phantom. Doses are overestimated at phantom edges when kernels are derived in larger phantoms and underestimated when derived in smaller phantoms (by around 2% to 7%, depending on distance from the source and phantom dimensions). CC agrees well with MC in the high-dose region of a breast implant and is superior to TG43 in determining skin doses for all multiple-scatter point kernel sizes. Increased agreement between CC and MC is achieved when the point kernel is comparable to the breast dimensions. The investigated approximation in multiple-scatter dose depends on the choice of point kernel in relation to phantom size and yields a significant fraction of the total dose only at distances of several centimeters from a source/implant, which correspond to volumes of low dose. The current implementation of the CC algorithm utilizes a point kernel derived in a comparatively large (radius 20 cm) water phantom. A fixed point kernel leads to predictable behaviour of the algorithm, with the worst case being a source/implant located well within a patient.

  7. Image-based point spread function implementation in a fully 3D OSEM reconstruction algorithm for PET.

    Science.gov (United States)

    Rapisarda, E; Bettinardi, V; Thielemans, K; Gilardi, M C

    2010-07-21

    The interest in positron emission tomography (PET), and particularly in hybrid integrated PET/CT systems, has significantly increased in the last few years due to the improved quality of the obtained images. Nevertheless, one of the most important limits of the PET imaging technique is still its poor spatial resolution, due to several physical factors originating both at the emission level (e.g. positron range, photon non-collinearity) and at the detection level (e.g. scatter inside the scintillating crystals, finite dimensions of the crystals and depth of interaction). One possible way to improve the spatial resolution of the images is to measure the point spread function (PSF) of the system and then account for it inside the reconstruction algorithm. In this work, the system response of the GE Discovery STE operating in 3D mode has been characterized by acquiring (22)Na point sources in different positions of the scanner field of view. An image-based model of the PSF was then obtained by fitting asymmetric two-dimensional Gaussians to the (22)Na images reconstructed with small pixel sizes. The PSF was then incorporated, at the image level, in a three-dimensional ordered-subset maximum likelihood expectation maximization (OS-MLEM) reconstruction algorithm. A qualitative and quantitative validation of the algorithm accounting for the PSF has been performed on phantom and clinical data, showing improved spatial resolution, higher contrast and lower noise compared with the corresponding images obtained using the standard OS-MLEM algorithm.
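    The idea of folding an image-based PSF into an ML-EM loop can be reduced to a 1-D Richardson-Lucy-style toy. Everything here is a hypothetical stand-in: the PET system response is collapsed to a small symmetric blur kernel, and the data are a single point source:

```python
# 1-D toy of PSF modelling inside an MLEM loop (Richardson-Lucy form).

def convolve(x, kernel):
    # Same-size convolution with a symmetric kernel, zero-padded at the edges.
    k = len(kernel) // 2
    out = []
    for i in range(len(x)):
        s = 0.0
        for j, w in enumerate(kernel):
            idx = i + j - k
            if 0 <= idx < len(x):
                s += w * x[idx]
        out.append(s)
    return out

def mlem_psf(measured, kernel, iters=50):
    est = [1.0] * len(measured)          # flat initial estimate
    for _ in range(iters):
        forward = convolve(est, kernel)  # forward-project through the PSF
        ratio = [m / f if f > 1e-12 else 0.0
                 for m, f in zip(measured, forward)]
        back = convolve(ratio, kernel)   # symmetric kernel: transpose = itself
        est = [e * b for e, b in zip(est, back)]
    return est

psf = [0.25, 0.5, 0.25]                   # hypothetical detector blur
truth = [0.0] * 10 + [10.0] + [0.0] * 10  # a point source at index 10
measured = convolve(truth, psf)
recon = mlem_psf(measured, psf)
```

    Because the blur is modelled inside the update, the iterations progressively sharpen the reconstructed point back toward the true source, which is the resolution-recovery effect the abstract describes.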

  8. An improved contour symmetry axes extraction algorithm and its application in the location of picking points of apples

    Energy Technology Data Exchange (ETDEWEB)

    Wang, D.; Song, H.; Yu, X.; Zhang, W.; Qu, W.; Xu, Y.

    2015-07-01

    The key problem for picking robots is to locate the picking points of fruit. A method based on the moment of inertia and the symmetry of apples is proposed in this paper to locate the picking points of apples. Image pre-processing procedures, which are crucial to improving the accuracy of the location, were carried out to remove noise and smooth the edges of apples. The moment of inertia method has the disadvantage of high computational complexity, so the convex hull was used to alleviate this problem. To verify the validity of the algorithm, a test was conducted using four types of apple images containing 107 apple targets: single unblocked apples, single blocked apples, images containing adjacent apples, and apples in panoramas. The root mean square error values for these four types of apple images were 6.3, 15.0, 21.6 and 18.4, respectively, and the average location errors were 4.9°, 10.2°, 16.3° and 13.8°, respectively. Furthermore, the improved algorithm was efficient in terms of average runtime, at 3.7 ms and 9.2 ms for single unblocked and single blocked apple images, respectively. For the other two types of apple images, the runtime was determined by the number of apples and blocked apples contained in the images. The results showed that the improved algorithm could extract symmetry axes and locate the picking points of apples more efficiently. In conclusion, the improved algorithm is feasible for extracting symmetry axes and locating the picking points of apples. (Author)

  9. A threshold auto-adjustment algorithm of feature points extraction based on grid

    Science.gov (United States)

    Yao, Zili; Li, Jun; Dong, Gaojie

    2018-02-01

    When dealing with high-resolution digital images, detection of feature points is usually the very first important step. Valid feature points depend on the threshold. If the threshold is too low, plenty of feature points will be detected, and they may aggregate in rich-texture regions, which not only slows feature description but also burdens subsequent processing; if the threshold is set too high, feature points will be lacking in poor-texture areas. To solve these problems, this paper proposes a grid-based threshold auto-adjustment method for feature extraction. The image is divided into a number of grid cells, and a threshold is set in every local cell for extracting feature points. When the number of feature points does not meet the requirement, the threshold is adjusted automatically to change the final number of feature points. The experimental results show that the feature points produced by our method are more uniform and representative, which avoids aggregation of feature points and greatly reduces the complexity of subsequent processing.
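    The per-cell threshold adjustment can be sketched as follows. The "scores" are random stand-ins for a real detector response (e.g. a corner measure), and the grid size, target count, and decay factor are all hypothetical parameters, not the paper's values:

```python
# Grid-based adaptive thresholding: each cell lowers its local threshold
# until it yields at least `target` feature points.
import random

def detect_with_grid(scores, grid=4, target=5, init_thresh=0.8, decay=0.9):
    h, w = len(scores), len(scores[0])
    ch, cw = h // grid, w // grid
    keypoints = []
    for gy in range(grid):
        for gx in range(grid):
            cell = [(y, x)
                    for y in range(gy * ch, (gy + 1) * ch)
                    for x in range(gx * cw, (gx + 1) * cw)]
            thresh = init_thresh
            picked = [p for p in cell if scores[p[0]][p[1]] > thresh]
            # Lower the local threshold until enough points survive.
            while len(picked) < target and thresh > 1e-3:
                thresh *= decay
                picked = [p for p in cell if scores[p[0]][p[1]] > thresh]
            keypoints.extend(picked)
    return keypoints

random.seed(0)
scores = [[random.random() for _ in range(64)] for _ in range(64)]
pts = detect_with_grid(scores)
```

    Because each cell tunes its own threshold, poor-texture cells still contribute points while rich-texture cells do not dominate the detection.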

  10. A Uniform Energy Consumption Algorithm for Wireless Sensor and Actuator Networks Based on Dynamic Polling Point Selection

    Science.gov (United States)

    Li, Shuo; Peng, Jun; Liu, Weirong; Zhu, Zhengfa; Lin, Kuo-Chi

    2014-01-01

    Recent research has indicated that using the mobility of the actuator in wireless sensor and actuator networks (WSANs) to achieve mobile data collection can greatly increase the sensor network lifetime. However, mobile data collection may result in unacceptable collection delays in the network if the path of the actuator is too long. Because real-time network applications require meeting data collection delay constraints, planning the path of the actuator is a very important issue to balance the prolongation of the network lifetime and the reduction of the data collection delay. In this paper, a multi-hop routing mobile data collection algorithm is proposed based on dynamic polling point selection with delay constraints to address this issue. The algorithm can actively update the selection of the actuator's polling points according to the sensor nodes' residual energies and their locations while also considering the collection delay constraint. It also dynamically constructs the multi-hop routing trees rooted by these polling points to balance the sensor node energy consumption and the extension of the network lifetime. The effectiveness of the algorithm is validated by simulation. PMID:24451455

  11. Algorithms

    Indian Academy of Sciences (India)

    ticians but also forms the foundation of computer science. Two ... with methods of developing algorithms for solving a variety of problems but ... applications of computers in science and engineer- ... numerical calculus are as important. We will ...

  12. A Doubly Stochastic Change Point Detection Algorithm for Noisy Biological Signals

    Directory of Open Access Journals (Sweden)

    Nathan Gold

    2018-01-01

    Full Text Available Experimentally and clinically collected time series data are often contaminated with significant confounding noise, creating short, noisy time series. This noise, due to natural variability and measurement error, poses a challenge to conventional change point detection methods. We propose a novel and robust statistical method for change point detection for noisy biological time sequences. Our method is a significant improvement over traditional change point detection methods, which only examine a potential anomaly at a single time point. In contrast, our method considers all suspected anomaly points and considers the joint probability distribution of the number of change points and the elapsed time between two consecutive anomalies. We validate our method with three simulated time series, a widely accepted benchmark data set, two geological time series, a data set of ECG recordings, and a physiological data set of heart rate variability measurements of fetal sheep model of human labor, comparing it to three existing methods. Our method demonstrates significantly improved performance over the existing point-wise detection methods.

  13. Strong Convergence Iterative Algorithms for Equilibrium Problems and Fixed Point Problems in Banach Spaces

    Directory of Open Access Journals (Sweden)

    Shenghua Wang

    2013-01-01

    Full Text Available We first introduce the concept of Bregman asymptotically quasinonexpansive mappings and prove that the fixed point set of this kind of mappings is closed and convex. Then we construct an iterative scheme to find a common element of the set of solutions of an equilibrium problem and the set of common fixed points of a countable family of Bregman asymptotically quasinonexpansive mappings in reflexive Banach spaces and prove strong convergence theorems. Our results extend the recent ones of some others.

  14. Interferometry with flexible point source array for measuring complex freeform surface and its design algorithm

    Science.gov (United States)

    Li, Jia; Shen, Hua; Zhu, Rihong; Gao, Jinming; Sun, Yue; Wang, Jinsong; Li, Bo

    2018-06-01

    The precision of measurements of aspheric and freeform surfaces remains the primary factor restricting their manufacture and application. One effective means of measuring such surfaces involves using reference or probe beams with angle modulation, as in the tilted-wave interferometer (TWI). It is necessary to improve measurement efficiency by obtaining the optimum point source array for each test piece before TWI measurements. To form a point source array based on the gradients of the different surfaces under test, we established a mathematical model describing the relationship between the point source array and the test surface. However, the optimal point sources are irregularly distributed. In order to achieve a flexible point source array matched to the gradient of the test surface, a novel interference setup using a fiber array is proposed in which every point source can be independently switched on and off. Simulations and actual measurement examples of two different surfaces are given in this paper to verify the mathematical model. Finally, we performed an experiment testing an off-axis ellipsoidal surface that proved the validity of the proposed interference system.

  15. Effects of Varying Epoch Lengths, Wear Time Algorithms, and Activity Cut-Points on Estimates of Child Sedentary Behavior and Physical Activity from Accelerometer Data.

    Science.gov (United States)

    Banda, Jorge A; Haydel, K Farish; Davila, Tania; Desai, Manisha; Bryson, Susan; Haskell, William L; Matheson, Donna; Robinson, Thomas N

    2016-01-01

    To examine the effects of accelerometer epoch lengths, wear time (WT) algorithms, and activity cut-points on estimates of WT, sedentary behavior (SB), and physical activity (PA), 268 7-11 year-olds with BMI ≥ 85th percentile for age and sex wore accelerometers on their right hips for 4-7 days. Data were processed and analyzed at epoch lengths of 1, 5, 10, 15, 30, and 60 seconds. For each epoch length, WT minutes/day was determined using three common WT algorithms, and minutes/day and percent time spent in SB, light (LPA), moderate (MPA), and vigorous (VPA) PA were determined using five common activity cut-points. ANOVA tested differences in WT, SB, LPA, MPA, VPA, and MVPA across the different epoch lengths, WT algorithms, and activity cut-points. WT minutes/day varied significantly by epoch length when using the NHANES WT algorithm, but not with the other WT algorithms. Minutes/day and percent time spent in SB, LPA, MPA, VPA, and MVPA varied significantly by epoch length for all sets of activity cut-points tested with all three WT algorithms, and also varied significantly across WT algorithms. Scaling WT algorithms and activity cut-point definitions to match different epoch lengths may introduce significant errors. Estimates of SB and PA from studies that process and analyze data using different epoch lengths, WT algorithms, and/or activity cut-points are not comparable, potentially leading to very different results, interpretations, and conclusions, misleading research and public policy.
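    Why epoch length interacts with cut-points can be seen in a small sketch: intermittent activity that averages out at 60-second epochs splits into sedentary and vigorous epochs at 15 seconds. The cut-point values below are hypothetical illustrations, not those used in the study:

```python
# Re-integrate 1-second accelerometer counts at different epoch lengths
# and classify each epoch with (hypothetical) per-minute cut-points.

def reintegrate(counts_1s, epoch_s):
    # Sum 1-second counts into non-overlapping epochs of `epoch_s` seconds.
    return [sum(counts_1s[i:i + epoch_s])
            for i in range(0, len(counts_1s) - epoch_s + 1, epoch_s)]

def classify(epochs, epoch_s, sb_cut=25, mvpa_cut=2000):
    # Scale per-minute cut-points to the epoch length (a common practice).
    scale = epoch_s / 60.0
    out = {"SB": 0, "LPA": 0, "MVPA": 0}
    for c in epochs:
        if c < sb_cut * scale:
            out["SB"] += 1
        elif c < mvpa_cut * scale:
            out["LPA"] += 1
        else:
            out["MVPA"] += 1
    return out

# Intermittent activity: 30 s of rest, then 30 s of movement, repeated.
counts = ([0] * 30 + [60] * 30) * 20      # 20 minutes of 1-second counts
by_60s = classify(reintegrate(counts, 60), 60)
by_15s = classify(reintegrate(counts, 15), 15)
```

    At 60-second epochs every minute lands in LPA, while at 15-second epochs the same data split evenly between SB and MVPA, illustrating how epoch length alone can flip the conclusions.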

  16. Computational proximity excursions in the topology of digital images

    CERN Document Server

    Peters, James F

    2016-01-01

    This book introduces computational proximity (CP) as an algorithmic approach to finding nonempty sets of points that are either close to each other or far apart. Typically in computational proximity, the book starts with some form of proximity space (a topological space equipped with a proximity relation) that has an inherent geometry. In CP, two types of near sets are considered, namely, spatially near sets and descriptively near sets. It is shown that connectedness, boundedness, mesh nerves, convexity, shapes and shape theory are principal topics in the study of nearness and separation of physical as well as abstract sets. CP has a hefty visual content. Applications of CP in computer vision, multimedia, brain activity, biology, social networks, and cosmology are included. The book has been derived from the lectures of the author in a graduate course on the topology of digital images taught over the past several years. Many of the students have provided important insights and valuable suggestions. The topics in ...

  17. Implementation of an algorithm for absorbed dose calculation in high energy photon beams at off axis points

    International Nuclear Information System (INIS)

    Matos, M.F.; Alvarez, G.D.; Sanz, D.E.

    2008-01-01

    Full text: A semiempirical algorithm for absorbed dose calculation at off-axis points in irregular beams was implemented. Semiempirical methods are very useful because of their easy implementation and their helpfulness in dose calculation in the clinic; they can be used as independent tools for dosimetric calculation in many applications of quality assurance. However, the applicability of such methods has some limitations, even in homogeneous media, especially at off-axis points, near beam fringes, or outside the beam. Only methods derived from the tissue-air ratio (TAR) or scatter-maximum ratio (SMR), devised many years ago, have been available for those situations. Although there have been improvements to these manual methods, such as the Sc-Sp ones, no attempt has been made to extend their usage to off-axis points. In this work, a semiempirical formalism was introduced, based on the works of Venselaar et al. (1999) and Sanz et al. (2004), aimed at Sc-Sp separation. This new formalism relies on the separation of the primary and secondary components of the beam, although in a relative way. The data required by the algorithm are reduced to a minimum, making experiments easy. Following modern recommendations, reference measurements in a water phantom are performed at 10 cm depth, avoiding electron contamination. In-air measurements are done using a mini phantom instead of the old equilibrium caps. Finally, calculations at off-axis points are done using data measured on the central beam axis, with the results corrected by a measured function that depends on the location of the off-axis point. The measurements for testing the algorithm were performed on our Siemens MXE linear accelerator. The algorithm was used to determine specific dose profiles for a large number of different beam configurations, and the results were compared with direct measurements to validate the accuracy of the algorithm.
    Additionally, the results were

  18. Validation of Point Clouds Segmentation Algorithms Through Their Application to Several Case Studies for Indoor Building Modelling

    Science.gov (United States)

    Macher, H.; Landes, T.; Grussenmeyer, P.

    2016-06-01

    Laser scanners are widely used for the modelling of existing buildings, particularly in the creation of as-built BIM (Building Information Modelling). However, the generation of as-built BIM from point clouds involves mainly manual steps, and it is consequently time consuming and error-prone. Along the path to automation, a three-step segmentation approach has been developed. This approach is composed of two phases: a segmentation into sub-spaces, namely floors and rooms, and a plane segmentation combined with the identification of building elements. In order to assess and validate the developed approach, different case studies are considered. Indeed, it is essential to apply algorithms to several datasets and not to develop them against a single dataset whose particularities could bias the development. Indoor point clouds of different types of buildings are used as input for the developed algorithms, ranging from an individual house of almost one hundred square meters to larger buildings of several thousand square meters. The datasets provide various space configurations and present numerous different occluding objects such as desks, computer equipment, home furnishings and even wine barrels. For each dataset, the results are illustrated. The analysis of the results provides an insight into the transferability of the developed approach for the indoor modelling of several types of buildings.

  19. A proposed adaptive step size perturbation and observation maximum power point tracking algorithm based on photovoltaic system modeling

    Science.gov (United States)

    Huang, Yu

    Solar energy has become one of the major renewable energy options owing to its abundance and accessibility. Because of the intermittent nature of sunlight, there is high demand for Maximum Power Point Tracking (MPPT) techniques when a photovoltaic (PV) system is used to extract energy from it. This thesis proposes an advanced Perturbation and Observation (P&O) algorithm aimed at relatively practical circumstances. First, a practical PV system model is studied, determining the series and shunt resistances that are neglected in some research. Moreover, in the proposed algorithm, the duty ratio of a boost DC-DC converter is the object of the perturbation, deploying input impedance conversion to achieve working-voltage adjustment. Based on this control strategy, an adaptive duty-ratio step size P&O algorithm is proposed, with major modifications made for sharp insolation changes as well as low-insolation scenarios. MATLAB/Simulink simulation of the PV model, the boost converter control strategy, and the various MPPT processes is conducted step by step. The proposed adaptive P&O algorithm is validated by the simulation results and detailed analysis of sharp insolation changes, low-insolation conditions, and continuous insolation variation.
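    The adaptive-step idea, scaling the duty-ratio perturbation with the observed power slope so the step shrinks near the MPP, can be sketched as follows. The gain, clamp, and PV curve are all hypothetical stand-ins, not the thesis model:

```python
# Adaptive-step P&O sketch: duty-ratio step follows k * dP/dD, clamped.

def pv_power(d, d_mpp=0.6, p_max=70.0):
    # Toy unimodal P(d) curve (hypothetical, not a converter model).
    return p_max - 250.0 * (d - d_mpp) ** 2

def adaptive_po(d=0.3, k=0.004, d_step=0.02, iters=60):
    p = pv_power(d)
    for _ in range(iters):
        d_new = d + d_step
        p_new = pv_power(d_new)
        slope = (p_new - p) / (d_new - d)
        # Step size scales with the power-curve slope; sign follows it.
        d_step = max(min(k * slope, 0.05), -0.05)
        if abs(d_step) < 1e-5:
            break
        d, p = d_new, p_new
    return d

d_final = adaptive_po()
```

    Far from the MPP the slope is large and the step saturates at the clamp, giving fast tracking; near the MPP the slope, and hence the step, shrinks, reducing the steady-state ripple relative to a fixed-step P&O.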

  20. Fixed-Point Algorithms for the Blind Separation of Arbitrary Complex-Valued Non-Gaussian Signal Mixtures

    Directory of Open Access Journals (Sweden)

    Douglas Scott C

    2007-01-01

    Full Text Available We derive new fixed-point algorithms for the blind separation of complex-valued mixtures of independent, noncircularly symmetric, and non-Gaussian source signals. Leveraging recently developed results on the separability of complex-valued signal mixtures, we systematically construct iterative procedures on a kurtosis-based contrast whose evolutionary characteristics are identical to those of the FastICA algorithm of Hyvarinen and Oja in the real-valued mixture case. Thus, our methods inherit the fast convergence properties, computational simplicity, and ease of use of the FastICA algorithm while at the same time extending this class of techniques to complex signal mixtures. For extracting multiple sources, symmetric and asymmetric signal deflation procedures can be employed. Simulations for both noiseless and noisy mixtures indicate that the proposed algorithms have superior finite-sample performance in data-starved scenarios as compared to existing complex ICA methods while performing about as well as the best of these techniques for larger data-record lengths.
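
    A one-unit, real-valued sketch of the kurtosis-based fixed-point iteration underlying FastICA-style methods (the paper's algorithms handle the noncircular complex case, which is not reproduced here):

```python
import numpy as np

def whiten(x):
    """Zero-mean and whiten the rows of x so that cov(z) = I."""
    x = x - x.mean(axis=1, keepdims=True)
    d, e = np.linalg.eigh(x @ x.T / x.shape[1])
    return (e / np.sqrt(d)) @ e.T @ x

def fastica_one_unit(z, iters=200, seed=0):
    """One-unit kurtosis-based fixed-point iteration (real-valued sketch).
    z must already be whitened, shape (dim, samples)."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(z.shape[0])
    w /= np.linalg.norm(w)
    for _ in range(iters):
        y = w @ z
        w_new = (z * y**3).mean(axis=1) - 3.0 * w   # E[z (w.z)^3] - 3w
        w_new /= np.linalg.norm(w_new)
        if abs(abs(w_new @ w) - 1.0) < 1e-12:       # converged up to sign
            return w_new
        w = w_new
    return w
```

    On whitened mixtures of independent non-Gaussian sources, the iteration converges (up to sign) to a direction that extracts one source; the cubic nonlinearity is what makes it a kurtosis contrast.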

  1. Algorithms

    Indian Academy of Sciences (India)

    algorithm design technique called 'divide-and-conquer'. One of ... Turtle graphics, September. 1996. 5. ... whole list named 'PO' is a pointer to the first element of the list; ..... Program for computing matrices X and Y and placing the result in C *).

  2. Algorithms

    Indian Academy of Sciences (India)

    algorithm that it is implicitly understood that we know how to generate the next natural ..... Explicit comparisons are made in line (1) where maximum and minimum is ... It can be shown that the function T(n) = 3/2n -2 is the solution to the above ...

  3. An effective, robust and parallel implementation of an interior point algorithm for limit state optimization

    DEFF Research Database (Denmark)

    Dollerup, Niels; Jepsen, Michael S.; Frier, Christian

    2014-01-01

    A robust and effective finite element based implementation of lower bound limit state analysis applying an interior point formulation is presented in this paper. The lower bound formulation results in a convex optimization problem consisting of a number of linear constraints from the equilibrium...

  4. Detection of uterine MMG contractions using a multiple change point estimator and the K-means cluster algorithm.

    Science.gov (United States)

    La Rosa, Patricio S; Nehorai, Arye; Eswaran, Hari; Lowery, Curtis L; Preissl, Hubert

    2008-02-01

    We propose a single channel two-stage time-segment discriminator of uterine magnetomyogram (MMG) contractions during pregnancy. We assume that the preprocessed signals are piecewise stationary having distribution in a common family with a fixed number of parameters. Therefore, at the first stage, we propose a model-based segmentation procedure, which detects multiple change-points in the parameters of a piecewise constant time-varying autoregressive model using a robust formulation of the Schwarz information criterion (SIC) and a binary search approach. In particular, we propose a test statistic that depends on the SIC, derive its asymptotic distribution, and obtain closed-form optimal detection thresholds in the sense of the Neyman-Pearson criterion; therefore, we control the probability of false alarm and maximize the probability of change-point detection in each stage of the binary search algorithm. We compute and evaluate the relative energy variation [root mean squares (RMS)] and the dominant frequency component [first order zero crossing (FOZC)] in discriminating between time segments with and without contractions. The former consistently detects a time segment with contractions. Thus, at the second stage, we apply a nonsupervised K-means cluster algorithm to classify the detected time segments using the RMS values. We apply our detection algorithm to real MMG records obtained from ten patients admitted to the hospital for contractions with gestational ages between 31 and 40 weeks. We evaluate the performance of our detection algorithm in computing the detection and false alarm rate, respectively, using as a reference the patients' feedback. We also analyze the fusion of the decision signals from all the sensors as in the parallel distributed detection approach.
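
    A minimal sketch of the two-stage idea: one SIC-checked change-point split on a piecewise-constant mean model (a simplification of the record's autoregressive formulation), followed by a two-cluster 1-D k-means for labeling segment statistics such as RMS values:

```python
import numpy as np

def best_split(x):
    """Best two-segment split under a piecewise-constant mean model:
    returns (split index, combined squared error)."""
    best_k, best_cost = None, np.inf
    for k in range(2, len(x) - 1):
        cost = ((x[:k] - x[:k].mean())**2).sum() + ((x[k:] - x[k:].mean())**2).sum()
        if cost < best_cost:
            best_k, best_cost = k, cost
    return best_k, best_cost

def sic_accepts(x, cost_split):
    """Schwarz information criterion test: does the split justify the
    two extra parameters (second mean and change location)?"""
    n = len(x)
    cost0 = ((x - x.mean())**2).sum()
    sic_no_change = n * np.log(cost0 / n) + 2 * np.log(n)
    sic_change = n * np.log(cost_split / n) + 4 * np.log(n)
    return sic_change < sic_no_change

def kmeans_1d(vals, iters=50):
    """Two-cluster 1-D k-means, e.g. for labeling segment RMS values."""
    vals = np.asarray(vals, dtype=float)
    centers = np.array([vals.min(), vals.max()])
    for _ in range(iters):
        labels = np.abs(vals[:, None] - centers[None, :]).argmin(axis=1)
        for j in (0, 1):
            if (labels == j).any():
                centers[j] = vals[labels == j].mean()
    return labels, centers
```

    The paper applies the SIC test recursively in a binary search to find multiple change points and derives optimal thresholds; the sketch only shows a single accept/reject decision.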

  5. a Robust Registration Algorithm for Point Clouds from Uav Images for Change Detection

    Science.gov (United States)

    Al-Rawabdeh, A.; Al-Gurrani, H.; Al-Durgham, K.; Detchev, I.; He, F.; El-Sheimy, N.; Habib, A.

    2016-06-01

    Landslides are among the major threats to urban landscape and manmade infrastructure. They often cause economic losses, property damages, and loss of lives. Temporal monitoring data of landslides from different epochs empowers the evaluation of landslide progression. Alignment of overlapping surfaces from two or more epochs is crucial for the proper analysis of landslide dynamics. The traditional methods for point-cloud-based landslide monitoring rely on using a variation of the Iterative Closest Point (ICP) registration procedure to align any reconstructed surfaces from different epochs to a common reference frame. However, sometimes the ICP-based registration can fail or may not provide sufficient accuracy. For example, point clouds from different epochs might fit to local minima due to lack of geometrical variability within the data. Also, manual interaction is required to exclude any non-stable areas from the registration process. In this paper, a robust image-based registration method is introduced for the simultaneous evaluation of all registration parameters. This includes the Interior Orientation Parameters (IOPs) of the camera and the Exterior Orientation Parameters (EOPs) of the involved images from all available observation epochs via a bundle block adjustment with self-calibration. Next, a semi-global dense matching technique is implemented to generate dense 3D point clouds for each epoch using the images captured in a particular epoch separately. The normal distances between any two consecutive point clouds can then be readily computed, because the point clouds are already effectively co-registered. A low-cost DJI Phantom II Unmanned Aerial Vehicle (UAV) was customised and used in this research for temporal data collection over an active soil creep area in Lethbridge, Alberta, Canada. The customisation included adding a GPS logger and a Large-Field-Of-View (LFOV) action camera which facilitated capturing high-resolution geo-tagged images in two epochs.
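
    Once the epochs are co-registered, change detection reduces to per-point distances between clouds; the sketch below uses the Euclidean nearest-neighbour distance as a simple stand-in for the normal distances computed in the paper:

```python
import numpy as np

def nn_distances(cloud_a, cloud_b):
    """Distance from each point in cloud_a to its nearest neighbour in
    cloud_b (brute force; use a KD-tree for large clouds). Clouds are
    (n, 3) arrays assumed to share one reference frame."""
    d2 = ((cloud_a[:, None, :] - cloud_b[None, :, :]) ** 2).sum(axis=2)
    return np.sqrt(d2.min(axis=1))
```

    Thresholding these distances flags regions of surface change between the two epochs.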

  6. Algorithms

    Indian Academy of Sciences (India)

    will become clear in the next article when we discuss a simple logo like programming language. ... Rod B may be used as an auxiliary store. The problem is to find an algorithm which performs this task. ... No disks are moved from A to Busing C as auxiliary rod. • move _disk (A, C);. (No + l)th disk is moved from A to C directly ...

  7. Structure analysis in algorithms and programs. Generator of coordinates of equivalent points. (Collected programs)

    International Nuclear Information System (INIS)

    Matyushenko, N.N.; Titov, Yu.G.

    1982-01-01

    Programs for the generation of atom coordinates and space symmetry groups in the form of equivalent point systems are presented. The programs for generation and coordinate output from on-line storage are written in the FORTRAN language for the ES computer. They may be used in laboratories specializing in the study of atomic structure and material properties, in colleges, and by specialists in other fields of physics and chemistry.

  8. Determining decoupling points in a supply chain networks using NSGA II algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Ebrahimiarjestan, M.; Wang, G.

    2017-07-01

    Purpose: In the model, we use the concepts of Lee and Amaral (2002) and Tang and Zhou (2009) and offer a multi-criteria decision-making model that identifies the decoupling points, aiming to minimize production costs, minimize the product delivery time to customers and maximize their satisfaction. Design/methodology/approach: We obtain a triple-objective model, and a meta-heuristic method (NSGA II) is used to solve it and to identify the Pareto-optimal points. The max (min) method was used. Findings: Our results of using NSGA II to find Pareto-optimal solutions demonstrate the good performance of NSGA II in extracting Pareto solutions in the proposed model, which considers the determination of decoupling points in a supply network. Originality/value: So far, several approaches to modelling this problem have been proposed, each of them covering only part of the concept. The concept is treated more generally in the model defined below, in which we face a multi-criteria decision problem that includes minimization of production costs and product delivery time to customers as well as maximization of customer satisfaction.
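
    The core of NSGA-II's ranking step is non-dominated sorting; a minimal sketch of extracting the first Pareto front (minimizing every objective) might look like:

```python
def pareto_front(points):
    """Indices of non-dominated points when minimizing all objectives:
    the first rank of NSGA-II's non-dominated sort."""
    front = []
    for i, p in enumerate(points):
        dominated = any(
            all(q[k] <= p[k] for k in range(len(p))) and
            any(q[k] < p[k] for k in range(len(p)))
            for j, q in enumerate(points) if j != i)
        if not dominated:
            front.append(i)
    return front
```

    In the triple-objective model above, each point would be a (cost, delivery time, negated satisfaction) tuple; NSGA-II additionally ranks the remaining points into further fronts and applies crowding-distance selection, which this sketch omits.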

  9. Determining decoupling points in a supply chain networks using NSGA II algorithm

    International Nuclear Information System (INIS)

    Ebrahimiarjestan, M.; Wang, G.

    2017-01-01

    Purpose: In the model, we use the concepts of Lee and Amaral (2002) and Tang and Zhou (2009) and offer a multi-criteria decision-making model that identifies the decoupling points, aiming to minimize production costs, minimize the product delivery time to customers and maximize their satisfaction. Design/methodology/approach: We obtain a triple-objective model, and a meta-heuristic method (NSGA II) is used to solve it and to identify the Pareto-optimal points. The max (min) method was used. Findings: Our results of using NSGA II to find Pareto-optimal solutions demonstrate the good performance of NSGA II in extracting Pareto solutions in the proposed model, which considers the determination of decoupling points in a supply network. Originality/value: So far, several approaches to modelling this problem have been proposed, each of them covering only part of the concept. The concept is treated more generally in the model defined below, in which we face a multi-criteria decision problem that includes minimization of production costs and product delivery time to customers as well as maximization of customer satisfaction.

  10. A ROBUST REGISTRATION ALGORITHM FOR POINT CLOUDS FROM UAV IMAGES FOR CHANGE DETECTION

    Directory of Open Access Journals (Sweden)

    A. Al-Rawabdeh

    2016-06-01

    Full Text Available Landslides are among the major threats to urban landscape and manmade infrastructure. They often cause economic losses, property damages, and loss of lives. Temporal monitoring data of landslides from different epochs empowers the evaluation of landslide progression. Alignment of overlapping surfaces from two or more epochs is crucial for the proper analysis of landslide dynamics. The traditional methods for point-cloud-based landslide monitoring rely on using a variation of the Iterative Closest Point (ICP) registration procedure to align any reconstructed surfaces from different epochs to a common reference frame. However, sometimes the ICP-based registration can fail or may not provide sufficient accuracy. For example, point clouds from different epochs might fit to local minima due to lack of geometrical variability within the data. Also, manual interaction is required to exclude any non-stable areas from the registration process. In this paper, a robust image-based registration method is introduced for the simultaneous evaluation of all registration parameters. This includes the Interior Orientation Parameters (IOPs) of the camera and the Exterior Orientation Parameters (EOPs) of the involved images from all available observation epochs via a bundle block adjustment with self-calibration. Next, a semi-global dense matching technique is implemented to generate dense 3D point clouds for each epoch using the images captured in a particular epoch separately. The normal distances between any two consecutive point clouds can then be readily computed, because the point clouds are already effectively co-registered. A low-cost DJI Phantom II Unmanned Aerial Vehicle (UAV) was customised and used in this research for temporal data collection over an active soil creep area in Lethbridge, Alberta, Canada. The customisation included adding a GPS logger and a Large-Field-Of-View (LFOV) action camera which facilitated capturing high-resolution geo-tagged images in two epochs.

  11. Singular point detection algorithm based on the transition line of the fingerprint orientation image

    CSIR Research Space (South Africa)

    Mathekga, ME

    2009-11-01

    Full Text Available there is another core immediately afterwards, as in fig- ures 1 and 2. Thus, the transition columns where this condition is satisfied is set as the location of the core singular point. a and c or b and f in figure 4 must be comparable as ridges are expected... of Singularities and Pseudo Ridges”, Patt. Recog., Vol. 37, 2004, p 2233– 2243. [9] Msiza, I.S., Leke-Betechuoh, B., Nelwamondo, F.V., and Msimang, N., “A Fingerprint Pattern Classification Ap- proach Based on the Coordinate Geometry of Singulari- ties”, In...

  12. A highly accurate algorithm for the solution of the point kinetics equations

    International Nuclear Information System (INIS)

    Ganapol, B.D.

    2013-01-01

    Highlights: • Point kinetics equations for nuclear reactor transient analysis are numerically solved to extreme accuracy. • Results for classic benchmarks found in the literature are given to 9-digit accuracy. • Recent results of claimed accuracy are shown to be less accurate than claimed. • Arguably brings a chapter of numerical evaluation of the PKEs to a close. - Abstract: Attempts to resolve the point kinetics equations (PKEs) describing nuclear reactor transients have been the subject of numerous articles and texts over the past 50 years. Some very innovative methods, such as the RTS (Reactor Transient Simulation) and CAC (Continuous Analytical Continuation) methods of G.R. Keepin and J. Vigil respectively, have been shown to be exceptionally useful. Recently, however, several authors have developed methods they consider accurate without a clear basis for their assertion. In response, this presentation will establish a definitive set of benchmarks to enable those developing PKE methods to truthfully assess the degree of accuracy of their methods. Then, with these benchmarks, two recently published methods found in this journal will be shown to be less accurate than claimed, and a legacy method from 1984 will be confirmed.

  13. An improved artificial physical optimization algorithm for dynamic dispatch of generators with valve-point effects and wind power

    International Nuclear Information System (INIS)

    Yuan, Xiaohui; Ji, Bin; Zhang, Shuangquan; Tian, Hao; Chen, Zhihuan

    2014-01-01

    Highlights: • Dynamic load economic dispatch with wind power (DLEDW) model is established. • Markov chains combined with scenario analysis method are used to predict wind power. • Chance constrained technique is used to simulate the impacts of wind forecast error. • Improved artificial physical optimization algorithm is proposed to solve DLEDW. • Heuristic search strategies are applied to handle the constraints of DLEDW. - Abstract: Wind power, a kind of promising renewable energy resource, has recently been getting more attractive because of various environmental and economic considerations. But the penetration of wind power with its fluctuation nature has made the operation of power system more intractable. To coordinate the reliability and operation cost, this paper established a stochastic model of dynamic load economic dispatch with wind integration (DLEDW). In this model, constraints such as ramping up/down capacity, prohibited operating zone are considered and effects of valve-point are taken into account. Markov chains combined with scenario analysis method is used to generate predictive values of wind power and chance constrained programming (CCP) is applied to simulate the impacts of wind power fluctuation on system operation. An improved artificial physical optimization algorithm is presented to solve the DLEDW problem. Heuristic strategies based on the priority list and stochastic simulation techniques are proposed to handle the constraints. In addition, a local chaotic mutation strategy is applied to overcome the disadvantage of premature convergence of artificial physical optimization algorithm. Two test systems with and without wind power integration are used to verify the feasibility and effectiveness of the proposed method and the results are compared with those of gravitational search algorithm, particle swarm optimization and standard artificial physical optimization. The simulation results demonstrate that the proposed method has a

  14. Present status on numerical algorithms and benchmark tests for point kinetics and quasi-static approximate kinetics

    International Nuclear Information System (INIS)

    Ise, Takeharu

    1976-12-01

    Review studies have been made of the numerical algorithms and benchmark tests for point kinetics and quasistatic approximate kinetics computer codes, in order to benchmark space-dependent neutron kinetics codes efficiently. Point kinetics methods have been improved since they can be directly applied to factorization procedures. Methods based on Padé rational functions give numerically stable solutions, and matrix-splitting methods are of interest because they are applicable to direct integration methods. The improved quasistatic (IQ) approximation is the best and most practical method; it is shown numerically that the IQ method has high stability and precision, with a computation time about one tenth of that of the direct method. The IQ method is applicable to thermal reactors as well as fast reactors, and it is especially suited to fast reactors, for which many time steps are necessary. Two-dimensional diffusion kinetics codes are the most practicable, though a three-dimensional diffusion kinetics code and a two-dimensional transport kinetics code also exist. In developing a space-dependent kinetics code, in any case, it is desirable to improve the method so as to achieve a high computing speed for solving the static diffusion and transport equations. (auth.)
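
    For a concrete sense of the point kinetics equations these codes solve, here is a one-delayed-group integration with implicit Euler and illustrative parameters (production codes and the benchmarks above typically use six delayed groups and more sophisticated integrators):

```python
import numpy as np

def point_kinetics(rho, t_end, dt=1e-4, beta=0.0065, lam=0.08, Lam=1e-4):
    """Implicit-Euler integration of one-delayed-group point kinetics:
        dn/dt = ((rho - beta)/Lam) n + lam C
        dC/dt = (beta/Lam) n - lam C
    started from the rho = 0 steady state; returns the neutron density.
    beta, lam, Lam are illustrative, not benchmark values."""
    A = np.array([[(rho - beta) / Lam, lam],
                  [beta / Lam, -lam]])
    M = np.eye(2) - dt * A                    # (I - dt*A) y_{k+1} = y_k
    y = np.array([1.0, beta / (Lam * lam)])   # [n, C] at equilibrium
    for _ in range(int(round(t_end / dt))):
        y = np.linalg.solve(M, y)
    return y[0]
```

    With zero reactivity the equilibrium is preserved exactly, while a small positive step reactivity produces the familiar prompt jump followed by slow delayed-neutron growth; the stiffness visible in the two very different time constants is why specialized methods are reviewed in this record.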

  15. The Prevalence and Marketing of Electronic Cigarettes in Proximity to At-Risk Youths: An Investigation of Point-of-Sale Practices near Alternative High Schools

    Science.gov (United States)

    Miller, Stephen; Pike, James; Chapman, Jared; Xie, Bin; Hilton, Brian N.; Ames, Susan L.; Stacy, Alan W.

    2017-01-01

    This study examines the point-of-sale marketing practices used to promote electronic cigarettes at stores near schools that serve at-risk youths. One hundred stores selling tobacco products within a half-mile of alternative high schools in Southern California were assessed for this study. Seventy percent of stores in the sample sold electronic…

  16. The Behavior of Counter-Current Packed Bed in the Proximity of the Flooding Point under Periodic Variations of Inlet Velocities

    Czech Academy of Sciences Publication Activity Database

    Ondráček, Jakub; Stavárek, Petr; Jiřičný, Vladimír; Staněk, Vladimír

    2006-01-01

    Roč. 20, č. 2 (2006), s. 147-155 ISSN 0352-9568 R&D Projects: GA ČR(CZ) GA104/03/1558 Institutional research plan: CEZ:AV0Z40720504 Keywords : counter-current flow * flooding point * axial dispersion Subject RIV: CI - Industrial Chemistry, Chemical Engineering Impact factor: 0.357, year: 2006

  17. Interior point algorithm-based power flow optimisation of a combined AC and DC multi-terminal grid

    Directory of Open Access Journals (Sweden)

    Farhan Beg

    2015-01-01

    Full Text Available The high cost of power electronic equipment, lower reliability and poor power handling capacity of semiconductor devices had stalled the deployment of systems based on multi-terminal direct current (MTDC) networks. The introduction of voltage source converters (VSCs) for transmission has renewed interest in the development of large interconnected grids based on both alternating current (AC) and DC transmission networks. Such a grid platform also realises the added advantage of integrating renewable energy sources into the grid. Thus a grid based on an MTDC network is a possible solution to improve energy security and check the increasing supply-demand gap. An optimal power flow solution for combined AC and DC grids, obtained by the solution of the interior point algorithm, is proposed in this study. Multi-terminal HVDC grids lie at the heart of various suggested transmission capacity increases. A significant difference is observed when MTDC grids are solved for power flows in place of conventional AC grids. This study deals with the power flow problem of a combined MTDC and AC grid. The AC side is modelled with the full power flow equations, and the VSCs are modelled using a connecting line, two generators and an AC node. The VSC and DC losses are also considered. The optimisation focuses on several different goals. Three different scenarios are presented in an arbitrary grid network with ten AC nodes and five converter stations.
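
    As a toy of the interior-point machinery, the sketch below minimizes a one-dimensional quadratic subject to an inequality constraint with a log-barrier and damped Newton steps; the problem and parameters are invented for illustration and are far simpler than the AC/DC power flow constraints above:

```python
def barrier_solve(t_max=1e6):
    """Minimize (x - 3)^2 subject to x <= 1 with a log-barrier interior
    method: Newton steps on f_t(x) = t*(x-3)^2 - ln(1-x) for growing t.
    The exact constrained minimizer is x = 1."""
    x, t = 0.0, 1.0
    while t < t_max:
        for _ in range(50):                 # damped Newton on f_t
            g = 2 * t * (x - 3) + 1.0 / (1 - x)       # gradient
            h = 2 * t + 1.0 / (1 - x) ** 2            # Hessian (positive)
            step = g / h
            while x - step >= 1:            # stay strictly interior
                step *= 0.5
            x -= step
            if abs(g) < 1e-12:
                break
        t *= 10                             # tighten the barrier
    return x
```

    Each outer iteration follows the central path closer to the constraint boundary, which is the same continuation idea that large-scale interior-point optimal power flow solvers use.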

  18. Distributed Autonomous Control of Multiple Spacecraft During Close Proximity Operations

    National Research Council Canada - National Science Library

    McCamish, Shawn B

    2007-01-01

    This research contributes to multiple spacecraft control by developing an autonomous distributed control algorithm for close proximity operations of multiple spacecraft systems, including rendezvous...

  19. Maximum Power Point tracking algorithm based on I-V characteristic of PV array under uniform and non-uniform conditions

    DEFF Research Database (Denmark)

    Kouchaki, Alireza; Iman-Eini, H.; Asaei, B.

    2012-01-01

    This paper presents a new algorithm based on characteristic equation of solar cells to determine the Maximum Power Point (MPP) of PV modules under partially shaded conditions (PSC). To achieve this goal, an analytic condition is introduced to determine uniform or non-uniform atmospheric condition...

  20. Simulation and analysis of an isolated full-bridge DC/DC boost converter operating with a modified perturb and observe maximum power point tracking algorithm

    Directory of Open Access Journals (Sweden)

    Calebe A. Matias

    2017-07-01

    Full Text Available The purpose of the present study is to simulate and analyze an isolated full-bridge DC/DC boost converter, for photovoltaic panels, running a modified perturb and observe maximum power point tracking method. The zero-voltage switching technique was used in order to minimize the losses of the converter over a wide range of solar operation. The efficiency of the power transfer is higher than 90% for a large range of solar operating points. The improvement in extracted panel power due to the maximum power point tracking algorithm is 5.06%.

  1. A Robust and Accurate Two-Step Auto-Labeling Conditional Iterative Closest Points (TACICP) Algorithm for Three-Dimensional Multi-Modal Carotid Image Registration.

    Directory of Open Access Journals (Sweden)

    Hengkai Guo

    Full Text Available Atherosclerosis is among the leading causes of death and disability. Combining information from multi-modal vascular images is an effective and efficient way to diagnose and monitor atherosclerosis, in which image registration is a key technique. In this paper a feature-based registration algorithm, the Two-step Auto-labeling Conditional Iterative Closest Points (TACICP) algorithm, is proposed to align three-dimensional carotid image datasets from ultrasound (US) and magnetic resonance (MR). Based on 2D segmented contours, a coarse-to-fine strategy is employed with two steps: a rigid initialization step and a non-rigid refinement step. The Conditional Iterative Closest Points (CICP) algorithm is applied in the rigid initialization step to obtain the robust rigid transformation and label configurations. Then the labels and the CICP algorithm with a non-rigid thin-plate-spline (TPS) transformation model are introduced to resolve non-rigid carotid deformation between different body positions. The results demonstrate that the proposed TACICP algorithm has achieved an average registration error of less than 0.2 mm with no failure case, which is superior to the state-of-the-art feature-based methods.

  2. A New Multi-Step Iterative Algorithm for Approximating Common Fixed Points of a Finite Family of Multi-Valued Bregman Relatively Nonexpansive Mappings

    Directory of Open Access Journals (Sweden)

    Wiyada Kumam

    2016-05-01

    Full Text Available In this article, we introduce a new multi-step iteration for approximating a common fixed point of a finite class of multi-valued Bregman relatively nonexpansive mappings in the setting of reflexive Banach spaces. We prove a strong convergence theorem for the proposed iterative algorithm under certain hypotheses. Additionally, we also use our results for the solution of variational inequality problems and to find the zero points of maximal monotone operators. The theorems furnished in this work are new and well-established and generalize many well-known recent research works in this field.
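
    A one-dimensional illustration of the averaged (Mann-type) fixed-point iterations that such multi-step schemes generalize; the map and the step weight below are arbitrary choices, not the article's iteration:

```python
import math

def mann_iteration(T, x0, alpha=0.5, iters=100):
    """Mann (averaged) iteration x_{k+1} = (1 - alpha) x_k + alpha T(x_k):
    a scalar sketch of the averaged fixed-point schemes that multi-step
    algorithms for nonexpansive mappings build on."""
    x = x0
    for _ in range(iters):
        x = (1 - alpha) * x + alpha * T(x)
    return x
```

    For T = cos the iterates converge to the unique fixed point x = cos(x); in the Banach-space setting of the article, the averaging plays the same stabilizing role for (multi-valued, Bregman relatively) nonexpansive mappings.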

  3. Simultaneous solution algorithms for Eulerian-Eulerian gas-solid flow models: Stability analysis and convergence behaviour of a point and a plane solver

    International Nuclear Information System (INIS)

    Wilde, Juray de; Vierendeels, Jan; Heynderickx, Geraldine J.; Marin, Guy B.

    2005-01-01

    Simultaneous solution algorithms for Eulerian-Eulerian gas-solid flow models are presented and their stability analyzed. The integration algorithms are based on dual-time stepping with fourth-order Runge-Kutta in pseudo-time. The domain is solved point-wise or plane-wise. The discretization of the inviscid terms is based on a low-Mach limit of the multi-phase preconditioned advection upstream splitting method (MP-AUSMP). The numerical stability of the simultaneous solution algorithms is analyzed in 2D with the Fourier method. Stability results are compared with the convergence behaviour of 3D riser simulations. The impact of the grid aspect ratio, preconditioning, artificial dissipation, and the treatment of the source terms is investigated. A particular advantage of the simultaneous solution algorithms is that they allow a fully implicit treatment of the source terms, which are of crucial importance for the Eulerian-Eulerian gas-solid flow models and their solution. The numerical stability of the optimal simultaneous solution algorithm is analyzed for different solids volume fractions and gas-solid slip velocities. Furthermore, the effect of the grid resolution on the convergence behaviour and the simulation results is investigated. Finally, simulations of the bottom zone of a pilot-scale riser with a side solids inlet are experimentally validated.

  4. Children's proximal societal conditions

    DEFF Research Database (Denmark)

    Stanek, Anja Hvidtfeldt

    2018-01-01

    that is above or outside the institutional setting or the children’s everyday life, but something that is represented through societal structures and actual persons participating (in political ways) within the institutional settings, in ways that has meaning to children’s possibilities to participate, learn...... and develop. Understanding school or kindergarten as (part of) the children’s proximal societal conditions for development and learning, means for instance that considerations about an inclusive agenda are no longer simply thoughts about the school – for economic reasons – having space for as many pupils...... as possible (schools for all). Such thoughts can be supplemented by reflections about which version of ‘the societal’ we wish to present our children with, and which version of ‘the societal’ we wish to set up as the condition for children’s participation and development. The point is to clarify or sharpen...

  5. Industrial Computed Tomography using Proximal Algorithm

    KAUST Repository

    Zang, Guangming

    2016-01-01

    fewer projections. We compare our framework to state-of-the-art methods and existing popular software tomography reconstruction packages, on both synthetic and real datasets, and show superior reconstruction quality, especially from noisy data and a

  6. Mobility Management Algorithms for the Client-Driven Mobility Frame System–Mobility from a Brand New Point of View

    Directory of Open Access Journals (Sweden)

    Péter Fülöp

    2009-01-01

    Full Text Available In this paper a new mobility management is introduced. The main idea in this approach is that the mobile node should manage mobility for itself, rather than the network. The network nodes provide only basic services for mobile entities: connectivity and administration. We construct a framework called the Client-based Mobility Frame System (CMFS) for this mobility environment. We developed the CMFS protocol as a solution over IPv4 and we show how to use Mobile IPv6 to realize our concept. We propose some basic mobility management solutions that can be implemented in the mobile clients and give details of a working simulation of a complete Mobility Management System. Example mobility management approaches, such as centralized, hierarchical or cellular-like ones, are also defined, and hints are given as to what kind of algorithms might be implemented upon the Client-based Mobility Frame System over IPv4 and IPv6 as well. We introduce some example algorithms that can work with the CMFS, making mobility management efficient by minimizing the signalling load on the network. In the present work, modeling and a detailed discussion of the parameters of the algorithms are given, and a comparison to existing mobility approaches and protocols is made. We prepared a simulation to test our protocol, and to back up the proposals we provide the reader with simulation results. We stress that one of the most important benefits of our findings is that all the MNs can run different management strategies and can optimize mobility for themselves.

  7. Algorithmic-Reducibility = Renormalization-Group Fixed-Points; ``Noise''-Induced Phase-Transitions (NITs) to Accelerate Algorithmics (``NIT-Picking'') Replacing CRUTCHES!!!: Gauss Modular/Clock-Arithmetic Congruences = Signal X Noise PRODUCTS..

    Science.gov (United States)

    Siegel, J.; Siegel, Edward Carl-Ludwig

    2011-03-01

    Cook-Levin computational-"complexity"(C-C) algorithmic-equivalence reduction-theorem reducibility equivalence to renormalization-(semi)-group phase-transitions critical-phenomena statistical-physics universality-classes fixed-points, is exploited with Gauss modular/clock-arithmetic/model congruences = signal X noise PRODUCT reinterpretation. Siegel-Baez FUZZYICS=CATEGORYICS(SON of ``TRIZ''): Category-Semantics(C-S) tabular list-format truth-table matrix analytics predicts and implements "noise"-induced phase-transitions (NITs) to accelerate versus to decelerate Harel [Algorithmics(1987)]-Sipser[Intro. Theory Computation(1997) algorithmic C-C: "NIT-picking" to optimize optimization-problems optimally(OOPO). Versus iso-"noise" power-spectrum quantitative-only amplitude/magnitude-only variation stochastic-resonance, this "NIT-picking" is "noise" power-spectrum QUALitative-type variation via quantitative critical-exponents variation. Computer-"science" algorithmic C-C models: Turing-machine, finite-state-models/automata, are identified as early-days once-workable but NOW ONLY LIMITING CRUTCHES IMPEDING latter-days new-insights!!!

  8. Application of the nonlinear time series prediction method of genetic algorithm for forecasting surface wind of point station in the South China Sea with scatterometer observations

    International Nuclear Information System (INIS)

    Zhong Jian; Dong Gang; Sun Yimei; Zhang Zhaoyang; Wu Yuqin

    2016-01-01

    The present work reports the development of a nonlinear time series prediction method combining a genetic algorithm (GA) with singular spectrum analysis (SSA) for forecasting the surface wind at point stations in the South China Sea (SCS) from scatterometer observations. Before the nonlinear GA technique is used to forecast the time series of surface wind, SSA is applied to reduce the noise. The surface wind speed and surface wind components from scatterometer observations at three locations in the SCS have been used to develop and test the technique. The predictions have been compared with persistence forecasts in terms of root mean square error. The surface wind predicted with GA and SSA up to four days in advance (longer for some point stations) has been found to be significantly superior to persistence-model forecasts. This method can serve as a cost-effective alternative prediction technique for forecasting the surface wind at point stations in the SCS basin. (paper)
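The SSA denoising step described in this record can be sketched independently of the GA forecaster. Below is a minimal illustration of the standard SSA pipeline: Hankel-matrix embedding, truncated SVD, and diagonal averaging. The window length, rank, and test signal are hypothetical choices, not the paper's settings.

```python
import numpy as np

def ssa_denoise(x, window, rank):
    """Singular spectrum analysis denoising: embed the series in a
    Hankel (trajectory) matrix, keep the leading SVD components, and
    diagonal-average back to a series."""
    n = len(x)
    k = n - window + 1
    # Trajectory matrix: columns are lagged windows of the series
    X = np.column_stack([x[i:i + window] for i in range(k)])
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Xr = (U[:, :rank] * s[:rank]) @ Vt[:rank]
    # Diagonal averaging (Hankelization) back to a 1-D series
    out = np.zeros(n)
    cnt = np.zeros(n)
    for i in range(window):
        for j in range(k):
            out[i + j] += Xr[i, j]
            cnt[i + j] += 1
    return out / cnt

# Hypothetical test signal: one oscillation (rank-2 in the embedding) plus noise
t = np.linspace(0, 8 * np.pi, 400)
rng = np.random.default_rng(0)
noisy = np.sin(t) + 0.3 * rng.normal(size=t.size)
clean = ssa_denoise(noisy, window=60, rank=2)
```

A pure sinusoid spans exactly two dimensions of the trajectory-matrix column space, which is why a rank-2 truncation recovers it while discarding most of the noise.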

  9. Parameter identification for continuous point emission source based on Tikhonov regularization method coupled with particle swarm optimization algorithm.

    Science.gov (United States)

    Ma, Denglong; Tan, Wei; Zhang, Zaoxiao; Hu, Jun

    2017-03-05

    In order to identify the parameters of a hazardous gas emission source in the atmosphere with little prior information and reliable probability estimation, a hybrid algorithm coupling Tikhonov regularization with particle swarm optimization (PSO) was proposed. When the source location is known, the source strength can be estimated successfully by the common Tikhonov regularization method, but this method fails when information about both source strength and location is absent. Therefore, a hybrid method combining linear Tikhonov regularization and the PSO algorithm was designed. With this method, the nonlinear inverse dispersion model was transformed to a linear form under some assumptions, and the source parameters, including source strength and location, were identified simultaneously by the linear Tikhonov-PSO regularization method. The regularization parameters were selected by the L-curve method. The estimation results with different regularization matrices showed that the confidence interval with a high-order regularization matrix is narrower than that with a zero-order regularization matrix, but the estimates of the source parameters are close to each other for the different regularization matrices. A nonlinear Tikhonov-PSO hybrid regularization was also designed with the primary nonlinear dispersion model to estimate the source parameters. Comparison results for simulated and experimental cases showed that the linear Tikhonov-PSO method with the transformed linear inverse model has higher computational efficiency than the nonlinear Tikhonov-PSO method, and its confidence intervals are more reasonable. The estimates from the linear Tikhonov-PSO method are similar to those from the single PSO algorithm, but the Tikhonov-PSO method can additionally give a reasonable confidence interval at given probability levels. Therefore, the presented linear Tikhonov-PSO regularization method is a good potential method for hazardous emission
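A minimal sketch of the nested structure this record describes: PSO searches over the source location, and for each candidate location an inner Tikhonov-regularized least-squares solve recovers the strength. The inverse-square dispersion kernel, constants, and sensor layout below are toy assumptions for illustration, not the paper's atmospheric dispersion model.

```python
import numpy as np

rng = np.random.default_rng(1)

def dispersion(sensors, src):
    """Toy (not the paper's) dispersion kernel: concentration per unit
    source strength at each sensor, decaying with squared distance."""
    d2 = np.sum((sensors - src) ** 2, axis=1)
    return 1.0 / (d2 + 1.0)

def tikhonov_strength(a, c, lam=1e-3):
    # Zero-order Tikhonov solution of the single-source linear model c = a*q
    return a @ c / (a @ a + lam)

def residual(sensors, c, src):
    a = dispersion(sensors, src)
    return np.sum((a * tikhonov_strength(a, c) - c) ** 2)

def pso_locate(sensors, c, n=40, iters=300, bounds=(0.0, 10.0)):
    """Minimal PSO over the 2-D source location; the strength comes from
    the inner Tikhonov solve at every candidate location."""
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n, 2))
    v = np.zeros_like(x)
    pbest = x.copy()
    pcost = np.array([residual(sensors, c, p) for p in x])
    g = pbest[pcost.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n, 1))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        cost = np.array([residual(sensors, c, p) for p in x])
        better = cost < pcost
        pbest[better], pcost[better] = x[better], cost[better]
        g = pbest[pcost.argmin()].copy()
    return g, tikhonov_strength(dispersion(sensors, g), c)

# Synthetic noiseless experiment with invented source parameters
sensors = rng.uniform(0, 10, (25, 2))
true_src, true_q = np.array([4.0, 6.0]), 5.0
obs = true_q * dispersion(sensors, true_src)
loc, q = pso_locate(sensors, obs)
```

The split mirrors the record's point: the strength enters linearly, so it can be solved in closed form inside each PSO evaluation, leaving only the nonlinear location search to the swarm.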

  10. Maximum power point tracking algorithm based on sliding mode and fuzzy logic for photovoltaic sources under variable environmental conditions

    Science.gov (United States)

    Atik, L.; Petit, P.; Sawicki, J. P.; Ternifi, Z. T.; Bachir, G.; Della, M.; Aillerie, M.

    2017-02-01

    Solar panels have a nonlinear voltage-current characteristic, with a distinct maximum power point (MPP), which depends on environmental factors such as temperature and irradiation. In order to continuously harvest maximum power from the solar panels, they have to operate at their MPP despite the inevitable changes in the environment. Various methods for maximum power point tracking (MPPT) have been developed and implemented in solar power electronic controllers to increase the efficiency of electricity production from renewables. In this paper we compare, using MATLAB/Simulink, two MPP tracking methods, fuzzy logic control (FL) and sliding mode control (SMC), in terms of their efficiency in solar energy production.

  11. Algorithms for Collision Detection Between a Point and a Moving Polygon, with Applications to Aircraft Weather Avoidance

    Science.gov (United States)

    Narkawicz, Anthony; Hagen, George

    2016-01-01

    This paper proposes mathematical definitions of functions that can be used to detect future collisions between a point and a moving polygon. The intended application is weather avoidance, where the given point represents an aircraft and bounding polygons are chosen to model regions with bad weather. Other applications could possibly include avoiding other moving obstacles. The motivation for the functions presented here is safety, and therefore they have been proved to be mathematically correct. The functions are being developed for inclusion in NASA's Stratway software tool, which allows low-fidelity air traffic management concepts to be easily prototyped and quickly tested.
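The record's detection functions themselves are not reproduced here; the following is an illustrative sketch under the assumption of piecewise-linear motion. Working in the polygon's frame, the point moves along a straight line, so its inside/outside state can only change where that line crosses an edge; testing one sample per interval between crossing times is therefore exact for linear motion. All geometry below is invented.

```python
import numpy as np

def point_in_polygon(pt, poly):
    """Even-odd (ray casting) inside test for a simple polygon."""
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y) and x < x1 + (y - y1) * (x2 - x1) / (y2 - y1):
            inside = not inside
    return inside

def first_conflict(p0, vp, poly, vpoly, T):
    """Earliest time in [0, T] at which the point is inside the translating
    polygon, or None.  Works in the polygon's frame: the point moves along
    r(t) = r0 + w*t while the polygon stays fixed."""
    poly = np.asarray(poly, float)
    r0 = np.asarray(p0, float)
    w = np.asarray(vp, float) - np.asarray(vpoly, float)
    times = [0.0, float(T)]
    for i in range(len(poly)):
        a, b = poly[i], poly[(i + 1) % len(poly)]
        e = b - a
        # Solve r0 + w*t = a + s*e for (t, s): a crossing of edge (a, b)
        M = np.column_stack([w, -e])
        if abs(np.linalg.det(M)) < 1e-12:
            continue
        t, s = np.linalg.solve(M, a - r0)
        if 0.0 <= t <= T and 0.0 <= s <= 1.0:
            times.append(float(t))
    times.sort()
    # Inside/outside can only flip at crossing times, so one midpoint
    # sample per interval decides the whole interval.
    for t0, t1 in zip(times, times[1:]):
        if point_in_polygon(r0 + w * (0.5 * (t0 + t1)), poly):
            return t0
    return None

# Invented example: a static square "weather cell" and an eastbound point
square = [(2.0, 0.0), (4.0, 0.0), (4.0, 2.0), (2.0, 2.0)]
t_hit = first_conflict((0.0, 1.0), (1.0, 0.0), square, (0.0, 0.0), 10.0)
t_miss = first_conflict((0.0, 5.0), (1.0, 0.0), square, (0.0, 0.0), 10.0)
```

Because every candidate time comes from an exact 2x2 solve, the result carries no sampling-resolution error, which matches the safety motivation stated in the abstract.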

  12. The Type-2 Fuzzy Logic Controller-Based Maximum Power Point Tracking Algorithm and the Quadratic Boost Converter for Pv System

    Science.gov (United States)

    Altin, Necmi

    2018-05-01

    An interval type-2 fuzzy logic controller-based maximum power point tracking algorithm and a direct current-direct current (DC-DC) converter topology are proposed for photovoltaic (PV) systems. The proposed maximum power point tracking algorithm is designed around an interval type-2 fuzzy logic controller, which can handle uncertainties. The change in PV power and the change in PV voltage are the inputs of the proposed controller, while the change in duty cycle is its output. Seven interval type-2 fuzzy sets are defined and used as membership functions for the input and output variables. The quadratic boost converter provides high voltage step-up ability without any reduction in the performance and stability of the system. The performance of the proposed system is validated through MATLAB/Simulink simulations. It is seen that the proposed system provides high maximum power point tracking speed and accuracy, even for fast-changing atmospheric conditions and high voltage step-up requirements.
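A full interval type-2 inference engine is beyond a short sketch, so the following illustrates only the controller's interface with a simplified type-1 (Sugeno-style) stand-in: the inputs are the changes in power and voltage, the output is the perturbation of the operating point, and fuzzy sets on the power change shrink the step near the MPP. The membership functions and the PV curve are invented, not the paper's.

```python
def tri(x, a, b, c):
    """Triangular membership function on [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_step(dP, dV, d_max=0.2):
    """Toy rule base: the sign of the P-V slope says which way to move;
    'small'/'large' fuzzy sets on |dP| scale the step so it shrinks as
    the operating point approaches the MPP."""
    if abs(dV) < 1e-9:
        return 0.5 * d_max          # perturb to obtain a fresh measurement
    small = tri(abs(dP), -1.0, 0.0, 1.0)
    large = 1.0 - small
    step = d_max * (0.1 * small + 1.0 * large)
    return step if dP / dV > 0 else -step

def pv_power(V):
    """Hypothetical PV curve P = V*I with I = 5 - 0.5*V; the MPP is at V = 5."""
    return V * (5.0 - 0.5 * V)

# Closed-loop hill climb toward the MPP of the toy curve
V, V_prev = 2.0, 1.9
P_prev = pv_power(V_prev)
for _ in range(300):
    P = pv_power(V)
    dP, dV = P - P_prev, V - V_prev
    V_prev, P_prev = V, P
    V += fuzzy_step(dP, dV)
```

The variable-step behavior is the point of fuzzy MPPT over fixed-step perturb-and-observe: large steps far from the peak, small oscillation around it.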

  13. An Implementable First-Order Primal-Dual Algorithm for Structured Convex Optimization

    Directory of Open Access Journals (Sweden)

    Feng Ma

    2014-01-01

    Full Text Available Many application problems of practical interest can be posed as structured convex optimization models. In this paper, we study a new first-order primal-dual algorithm. The method is easily implementable, provided that the resolvent operators of the component objective functions are simple to evaluate. We show that the proposed method can be interpreted as a proximal point algorithm with a customized metric proximal parameter. Convergence is established under the analytic contraction framework. Finally, we verify the efficiency of the algorithm by solving the stable principal component pursuit problem.
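The proximal point interpretation mentioned in this record can be illustrated on the simplest case where the resolvent is available in closed form: a strongly convex quadratic. The matrix, vector, and proximal parameter below are arbitrary choices for the sketch.

```python
import numpy as np

# Strongly convex quadratic f(x) = 0.5 x'Ax - b'x with A symmetric positive definite
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
lam = 0.5                                  # proximal parameter

def prox_f(x):
    """prox_{lam f}(x) = argmin_y f(y) + ||y - x||^2 / (2 lam).
    For this quadratic, the minimizer solves (I + lam A) y = x + lam b."""
    return np.linalg.solve(np.eye(2) + lam * A, x + lam * b)

# Proximal point iteration x_{k+1} = prox_{lam f}(x_k)
x = np.zeros(2)
for _ in range(100):
    x = prox_f(x)
# x approaches the minimizer A^{-1} b
```

For a strongly convex objective, each iteration contracts toward the minimizer with factor at most 1/(1 + lam*mu), where mu is the smallest eigenvalue of A, so the loop above converges linearly.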

  14. A new algorithm combining geostatistics with the surrogate data approach to increase the accuracy of comparisons of point radiation measurements with cloud measurements

    Science.gov (United States)

    Venema, V. K. C.; Lindau, R.; Varnai, T.; Simmer, C.

    2009-04-01

    Two main groups of statistical methods used in the Earth sciences are geostatistics and stochastic modelling. Geostatistical methods, such as various kriging algorithms, aim at estimating the mean value at every point as well as possible. In the case of sparse measurements, such fields have less variability at small scales and a narrower distribution than the true field. This can lead to biases if a nonlinear process is simulated on such a kriged field. Stochastic modelling aims at reproducing the structure of the data. One of the stochastic modelling methods, the so-called surrogate data approach, replicates the value distribution and power spectrum of a given data set. However, while stochastic methods reproduce the statistical properties of the data, the location of the measurement is not considered. Because radiative transfer through clouds is a highly nonlinear process, it is essential to model the distribution (e.g. of optical depth, extinction, liquid water content or liquid water path) accurately, as well as the correlations in the cloud field, because of horizontal photon transport. This explains the success of surrogate cloud fields in 3D radiative transfer studies. However, up to now we could only achieve good results for the radiative properties averaged over the field, but not for a radiation measurement located at a certain position. Therefore we have developed a new algorithm that combines the accuracy of stochastic (surrogate) modelling with the positioning capabilities of kriging. In this way, we can automatically profit from the large geostatistical literature and software. The algorithm is tested on cloud fields from large eddy simulations (LES). On these clouds a measurement is simulated. From the pseudo-measurement we estimate the distribution and power spectrum. Furthermore, the pseudo-measurement is kriged to a field the size of the final surrogate cloud. The distribution, spectrum and the kriged field are the inputs to the algorithm.
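The surrogate-data step named in this record — matching both the value distribution and the power spectrum — is commonly implemented with the iterative amplitude-adjusted Fourier transform (IAAFT) scheme. The sketch below is a 1-D illustration under that assumption (the paper's surrogates are 2-D cloud fields, and the kriging/positioning step is not shown); the test series is made up.

```python
import numpy as np

def iaaft(data, iters=100, seed=0):
    """Iterative amplitude-adjusted Fourier transform surrogate:
    alternately impose the original power spectrum (by replacing FFT
    amplitudes) and the original value distribution (by rank ordering)
    on a random shuffle of the data."""
    rng = np.random.default_rng(seed)
    sorted_vals = np.sort(data)
    target_amp = np.abs(np.fft.rfft(data))
    s = rng.permutation(data)
    for _ in range(iters):
        # Impose the target power spectrum, keeping the current phases
        spec = np.fft.rfft(s)
        s = np.fft.irfft(target_amp * np.exp(1j * np.angle(spec)), n=len(data))
        # Impose the target value distribution by rank ordering
        ranks = np.argsort(np.argsort(s))
        s = sorted_vals[ranks]
    return s

# Correlated (red-noise) test series standing in for a cloud transect
x = np.cumsum(np.random.default_rng(1).normal(size=512))
surr = iaaft(x)
```

Ending on the rank-ordering step makes the surrogate's value distribution exactly that of the data, while its power spectrum matches only approximately — the usual IAAFT trade-off.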

  15. Proximity credentials: A survey

    International Nuclear Information System (INIS)

    Wright, L.J.

    1987-04-01

    Credentials as a means of identifying individuals have traditionally been a photo badge and, more recently, the coded credential. Another type of badge, the proximity credential, is making inroads in the personnel identification field. This badge can be read from a distance instead of being viewed by a guard or inserted into a reading device. This report reviews proximity credentials, identifies the companies marketing or developing proximity credentials, and describes their respective credentials. 3 tabs

  16. Proximal Probes Facility

    Data.gov (United States)

    Federal Laboratory Consortium — The Proximal Probes Facility consists of laboratories for microscopy, spectroscopy, and probing of nanostructured materials and their functional properties. At the...

  17. An automated and robust image processing algorithm for glaucoma diagnosis from fundus images using novel blood vessel tracking and bend point detection.

    Science.gov (United States)

    M, Soorya; Issac, Ashish; Dutta, Malay Kishore

    2018-02-01

    Glaucoma is an ocular disease which can cause irreversible blindness. The disease is currently identified manually by optometrists using specialized equipment. The proposed work aims to provide an efficient imaging solution which can help automate Glaucoma diagnosis using computer vision techniques on digital fundus images. The proposed method segments the optic disc using a geometrical-feature-based strategic framework which improves detection accuracy and makes the algorithm invariant to illumination and noise. Novel methods based on corner thresholding and point contour joining are proposed to construct smooth contours of the optic disc. Following the clinical approach used by ophthalmologists, the proposed algorithm tracks blood vessels inside the disc region, identifies the points at which vessels first bend from the optic disc boundary, and connects them to obtain the contours of the optic cup. The proposed method has been compared with ground truth marked by medical experts, and the similarity parameters used to assess performance have yielded a high segmentation similarity. The proposed method has achieved a macro-averaged f-score of 0.9485 and an accuracy of 97.01% in correctly classifying fundus images. The proposed method is clinically significant and can be used for Glaucoma screening over a large population in real time. Copyright © 2017 Elsevier B.V. All rights reserved.

  18. OPTIMASI OLSR ROUTING PROTOCOL PADA JARINGAN WIRELESS MESH DENGAN ADAPTIVE REFRESHING TIME INTERVAL DAN ENHANCE MULTI POINT RELAY SELECTING ALGORITHM

    Directory of Open Access Journals (Sweden)

    Faosan Mapa

    2014-01-01

    Full Text Available A Wireless Mesh Network (WMN) is a self-organized, self-configured and multi-hop network. The goal of a WMN is to offer users a form of wireless network that can easily communicate with conventional networks at high speed, with wider coverage and minimal initial cost. An efficient routing protocol design is required for WMNs that can adaptively support mesh routers and mesh clients. In this paper, we propose an optimization of the OLSR protocol, a proactive routing protocol. We use heuristics that improve the OLSR protocol through an adaptive refreshing time interval and an enhanced MPR selecting algorithm. An analysis of improving the OLSR protocol through the adaptive refreshing time interval and the improved MPR selection algorithm shows a significant performance gain in terms of throughput compared to the original OLSR protocol, at the cost of an increase in delay. From the simulations carried out, it can be concluded that OLSR can be optimized by modifying the selection of MPR nodes based on cost effectiveness and by adjusting the refreshing interval of hello messages to network conditions.

  19. Real-time estimation of prostate tumor rotation and translation with a kV imaging system based on an iterative closest point algorithm

    International Nuclear Information System (INIS)

    Tehrani, Joubin Nasehi; O’Brien, Ricky T; Keall, Paul; Poulsen, Per Rugaard

    2013-01-01

    Previous studies have shown that during cancer radiotherapy a small translation or rotation of the tumor can lead to errors in dose delivery. Current best practice in radiotherapy accounts for tumor translations, but is unable to address rotation due to the lack of a reliable real-time estimate. We have developed a method based on the iterative closest point (ICP) algorithm that can compute rotation from kilovoltage x-ray images acquired during radiation treatment delivery. A total of 11 748 kilovoltage (kV) images acquired from ten patients (one fraction for each patient) were used to evaluate our tumor rotation algorithm. For each kV image, the three-dimensional coordinates of three fiducial markers inside the prostate were calculated and used as input to the ICP algorithm to calculate the real-time tumor rotation and translation around three axes. The results show that the root mean square error in real-time calculation of tumor displacement improved from a mean of 0.97 mm with stand-alone translation to a mean of 0.16 mm when real-time rotation and translation were estimated together with the ICP algorithm. The standard deviation (SD) of rotation for the ten patients was 2.3°, 0.89° and 0.72° around the right–left (RL), anterior–posterior (AP) and superior–inferior (SI) directions respectively. Among all six degrees of freedom, the highest correlation was between AP and SI translation, with a correlation of 0.67. The second highest correlation was between rotation around RL and rotation around AP, with a correlation of −0.33. Our real-time algorithm also confirms previous studies showing that the maximum SD belongs to AP translation and rotation around RL. ICP is a reliable and fast algorithm for estimating real-time tumor rotation which could create a pathway to investigational clinical treatment studies requiring
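With three labelled fiducial markers, the correspondence search inside ICP is trivial, so the core computation reduces to the least-squares rigid alignment (Kabsch/SVD) step sketched below. The marker coordinates, the 3-degree rotation, and the translation are invented for illustration.

```python
import numpy as np

def rigid_align(ref, obs):
    """Least-squares rigid transform (Kabsch/SVD) with obs ~= R @ ref + t.
    This is the alignment step inside ICP; with labelled fiducial markers
    the correspondences are known, so a single solve suffices."""
    cr, co = ref.mean(0), obs.mean(0)
    H = (ref - cr).T @ (obs - co)              # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = co - R @ cr
    return R, t

# Invented reference marker positions (mm) and a rotated + translated observation
ref = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0], [0.0, 15.0, 5.0]])
theta = np.radians(3.0)                        # 3 degrees about the SI (z) axis
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])
obs = ref @ Rz.T + np.array([1.0, -0.5, 0.2])

R, t = rigid_align(ref, obs)
angle = np.degrees(np.arctan2(R[1, 0], R[0, 0]))   # recovered rotation about z
```

Three markers are always coplanar, which makes the cross-covariance rank-deficient; the determinant correction above is what keeps the recovered transform a proper rotation in that degenerate case.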

  20. Maximum Power Point Tracking for Brushless DC Motor-Driven Photovoltaic Pumping Systems Using a Hybrid ANFIS-FLOWER Pollination Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Neeraj Priyadarshi

    2018-04-01

    Full Text Available In this research paper, a hybrid Artificial Neural Network (ANN)-Fuzzy Logic Control (FLC) tuned Flower Pollination Algorithm (FPA) is employed as a Maximum Power Point Tracker (MPPT) to reduce the root mean square error (RMSE) of photovoltaic (PV) modeling. Gaussian membership functions have been considered for the fuzzy controller design. This paper considers a Luo-converter-fed brushless DC motor (BLDC) driving a PV water pump. Experimental responses certify the effectiveness of the suggested motor-pump system under diverse operating states. The Luo converter, a newly developed DC-DC converter, has high power density, better voltage gain transfer and a superior output waveform, and can track optimal power from PV modules. For BLDC speed control there is no extra circuitry, and no phase current sensors are required for this scheme. Such an adaptive neuro-fuzzy inference system (ANFIS)-FPA-operated BLDC-driven PV pump with an advanced Luo converter has not been previously reported.

  1. Application of genetic algorithm to land use optimization for non-point source pollution control based on CLUE-S and SWAT

    Science.gov (United States)

    Wang, Qingrui; Liu, Ruimin; Men, Cong; Guo, Lijia

    2018-05-01

    The genetic algorithm (GA) was combined with the Conversion of Land Use and its Effects at Small regional extent (CLUE-S) model to obtain an optimized land use pattern for controlling non-point source (NPS) pollution, and the performance of the combination was evaluated. The effect of the optimized land use pattern on NPS pollution control was estimated with the Soil and Water Assessment Tool (SWAT) model, and an assistant map was drawn to support future land use planning. The Xiangxi River watershed was selected as the study area. Two scenarios were used to simulate land use change. Under the historical trend scenario (Markov chain prediction), the forest area decreased by 2035.06 ha and was mainly converted into paddy and dryland area. In contrast, under the optimized scenario (genetic algorithm prediction), up to 3370 ha of dryland area was converted into forest area. Spatially, the conversion of paddy and dryland into forest occurred mainly in the northwest and southeast of the watershed, where slope land occupies a large proportion. The organic and inorganic phosphorus loads decreased by 3.6% and 3.7%, respectively, in the optimized scenario compared to the historical trend scenario. GA showed better performance in optimized land use prediction. A comparison of the land use patterns in 2010 under the real situation and in 2020 under the optimized situation showed that Shennongjia and Shuiyuesi should convert 1201.76 ha and 1115.33 ha of dryland into forest areas, respectively, the greatest changes of all regions in the watershed. The results of this study indicate that GA and the CLUE-S model can be used to optimize future land use patterns and that SWAT can be used to evaluate the effect of land use optimization on NPS pollution control. These methods may provide support for land use planning of an area.
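A toy version of the GA land-use optimization can be sketched without CLUE-S or SWAT: a binary chromosome marks which parcels to afforest, the fitness is total phosphorus export under invented per-parcel loads with an assumed 80% export reduction for forest, and a penalty enforces an area constraint (standing in for food-production limits). None of the numbers below come from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical per-parcel phosphorus export (kg/yr) under dryland use;
# converting a parcel to forest is assumed to cut its export by 80%.
loads = rng.uniform(1.0, 10.0, 40)
max_convert = 10          # area constraint: afforest at most 10 parcels

def fitness(chrom):
    """Total P export, heavily penalized if the area constraint is violated."""
    export = np.sum(loads * np.where(chrom == 1, 0.2, 1.0))
    penalty = 1e3 * max(0, int(chrom.sum()) - max_convert)
    return export + penalty

def ga(pop_size=60, gens=100, pmut=0.02):
    pop = rng.integers(0, 2, (pop_size, loads.size))
    for _ in range(gens):
        cost = np.array([fitness(c) for c in pop])
        # Tournament selection between random pairs
        i, j = rng.integers(0, pop_size, (2, pop_size))
        parents = np.where((cost[i] < cost[j])[:, None], pop[i], pop[j])
        # One-point crossover between consecutive parents
        cut = rng.integers(1, loads.size, pop_size)
        mask = np.arange(loads.size) < cut[:, None]
        children = np.where(mask, parents, np.roll(parents, 1, axis=0))
        # Bit-flip mutation
        flip = rng.random(children.shape) < pmut
        pop = np.where(flip, 1 - children, children)
    return pop[np.argmin([fitness(c) for c in pop])]

best = ga()
```

The optimum of this toy problem is simply to afforest the ten highest-load parcels; in the real study the fitness evaluation is replaced by the SWAT-simulated NPS load under the CLUE-S-constrained land use map.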

  2. Proximity Queries between Interval-Based CSG Octrees

    International Nuclear Information System (INIS)

    Dyllong, Eva; Grimm, Cornelius

    2007-01-01

    This short paper is concerned with a new algorithm for collision and distance calculation between CSG octrees, a generalization of an octree model created from a Constructive Solid Geometry (CSG) object. The data structure uses interval arithmetic and allows us to extend the tests for classifying points in space as inside, on the boundary, or outside a CSG object to entire sections of the space at once. Tree nodes with additional information about relevant parts of the CSG object are introduced in order to reduce the depth of the required subdivision. The new data structure reduces the input complexity and enables us to reconstruct the CSG object. We present an efficient algorithm for computing the distance between CSG objects encoded by the new data structure. The distance algorithm is based on a distance algorithm for classical octrees but, additionally, it utilizes an elaborated sort sequence and differentiated handling of pairs of octree nodes to enhance its efficiency. Experimental results indicate that, in comparison to common octrees, the new representation has advantages in the field of proximity query

  3. Generic primal-dual interior point methods based on a new kernel function

    NARCIS (Netherlands)

    EL Ghami, M.; Roos, C.

    2008-01-01

    In this paper we present a generic primal-dual interior point method (IPM) for linear optimization in which the search direction depends on a univariate kernel function which is also used as the proximity measure in the analysis of the algorithm. The proposed kernel function does not satisfy all the

  4. A points-based algorithm for prognosticating clinical outcome of Chiari malformation Type I with syringomyelia: results from a predictive model analysis of 82 surgically managed adult patients.

    Science.gov (United States)

    Thakar, Sumit; Sivaraju, Laxminadh; Jacob, Kuruthukulangara S; Arun, Aditya Atal; Aryan, Saritha; Mohan, Dilip; Sai Kiran, Narayanam Anantha; Hegde, Alangar S

    2018-01-01

    OBJECTIVE Although various predictors of postoperative outcome have been previously identified in patients with Chiari malformation Type I (CMI) with syringomyelia, there is no known algorithm for predicting a multifactorial outcome measure in this widely studied disorder. Using one of the largest preoperative variable arrays used so far in CMI research, the authors attempted to generate a formula for predicting postoperative outcome. METHODS Data from the clinical records of 82 symptomatic adult patients with CMI and altered hindbrain CSF flow who were managed with foramen magnum decompression, C-1 laminectomy, and duraplasty over an 8-year period were collected and analyzed. Various preoperative clinical and radiological variables in the 57 patients who formed the study cohort were assessed in a bivariate analysis to determine their ability to predict clinical outcome (as measured on the Chicago Chiari Outcome Scale [CCOS]) and the resolution of syrinx at the last follow-up. The variables that were significant in the bivariate analysis were further analyzed in a multiple linear regression analysis. Different regression models were tested, and the model with the best prediction of CCOS was identified and internally validated in a subcohort of 25 patients. RESULTS There was no correlation between CCOS score and syrinx resolution (p = 0.24) at a mean ± SD follow-up of 40.29 ± 10.36 months. Multiple linear regression analysis revealed that the presence of gait instability, obex position, and the M-line-fourth ventricle vertex (FVV) distance correlated with CCOS score, while the presence of motor deficits was associated with poor syrinx resolution (p ≤ 0.05). The algorithm generated from the regression model demonstrated good diagnostic accuracy (area under curve 0.81), with a score of more than 128 points demonstrating 100% specificity for clinical improvement (CCOS score of 11 or greater). The model had excellent reliability (κ = 0.85) and was validated with

  5. Proximity functions for general right cylinders

    International Nuclear Information System (INIS)

    Kellerer, A.M.

    1981-01-01

    Distributions of distances between pairs of points within geometrical objects, or the closely related proximity functions and geometric reduction factors, have applications to dosimetric and microdosimetric calculations. For convex bodies these functions are linked to the chord-length distributions that result from random intersections by straight lines. A synopsis of the most important relations is given. The proximity functions and related functions are derived for right cylinders with arbitrary cross sections. The solution utilizes the fact that the squares of the distances between two random points are sums of independently distributed squares of distances parallel and perpendicular to the axis of the cylinder. Analogous formulas are derived for the proximity functions or geometric reduction factors for a cylinder relative to a point. This requires only a minor modification of the solution
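The decomposition this record's solution relies on — the squared distance between two uniform random points in a right cylinder splits into independent cross-sectional (perpendicular) and axial parts — can be checked numerically. A minimal Monte Carlo sketch for a circular right cylinder, with arbitrary radius and height:

```python
import numpy as np

rng = np.random.default_rng(3)
R, H, n = 1.0, 2.0, 200_000     # arbitrary cylinder radius, height, sample count

def disc_points(n):
    """Uniform points in the disc of radius R (the cylinder cross-section)."""
    r = R * np.sqrt(rng.random(n))          # sqrt for area-uniform radius
    phi = 2.0 * np.pi * rng.random(n)
    return r * np.cos(phi), r * np.sin(phi)

# Pairs of independent uniform points in the cylinder
x1, y1 = disc_points(n)
z1 = H * rng.random(n)
x2, y2 = disc_points(n)
z2 = H * rng.random(n)

# Squared pair distance = perpendicular part + axial part, independently distributed
d2_perp = (x1 - x2) ** 2 + (y1 - y2) ** 2
d2_ax = (z1 - z2) ** 2
d = np.sqrt(d2_perp + d2_ax)
mean_d = d.mean()               # moment of the pair-distance distribution
```

The axial difference of two uniform points on [0, H] has the triangular density with mean |z1 - z2| equal to H/3, which gives a quick sanity check on the sampler alongside the independence of the two parts.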

  6. Performance of two commercial electron beam algorithms over regions close to the lung-mediastinum interface, against Monte Carlo simulation and point dosimetry in virtual and anthropomorphic phantoms.

    Science.gov (United States)

    Ojala, J; Hyödynmaa, S; Barańczyk, R; Góra, E; Waligórski, M P R

    2014-03-01

    Electron radiotherapy is applied to treat the chest wall close to the mediastinum. The performance of the GGPB and eMC algorithms implemented in the Varian Eclipse treatment planning system (TPS) was studied in this region for 9 and 16 MeV beams, against Monte Carlo (MC) simulations, point dosimetry in a water phantom and dose distributions calculated in virtual phantoms. For the 16 MeV beam, the accuracy of these algorithms was also compared over the lung-mediastinum interface region of an anthropomorphic phantom, against MC calculations and thermoluminescence dosimetry (TLD). In the phantom with a lung-equivalent slab the results were generally congruent, the eMC results for the 9 MeV beam slightly overestimating the lung dose, and the GGPB results for the 16 MeV beam underestimating the lung dose. Over the lung-mediastinum interface, for 9 and 16 MeV beams, the GGPB code underestimated the lung dose and overestimated the dose in water close to the lung, compared to the congruent eMC and MC results. In the anthropomorphic phantom, results of TLD measurements and MC and eMC calculations agreed, while the GGPB code underestimated the lung dose. Good agreement between TLD measurements and MC calculations attests to the accuracy of "full" MC simulations as a reference for benchmarking TPS codes. Application of the GGPB code in chest wall radiotherapy may result in significant underestimation of the lung dose and overestimation of dose to the mediastinum, affecting plan optimization over volumes close to the lung-mediastinum interface, such as the lung or heart. Copyright © 2013 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  7. Evaluation of the new electron-transport algorithm in MCNP6.1 for the simulation of dose point kernel in water

    Science.gov (United States)

    Antoni, Rodolphe; Bourgois, Laurent

    2017-12-01

    In this work, the calculation of specific dose distributions in water is evaluated in MCNP6.1 with the regular condensed-history algorithm, the "detailed electron energy-loss straggling logic", and with the newly proposed electron transport algorithm, the "single event algorithm". The Dose Point Kernel (DPK) is calculated with monoenergetic electrons of 50, 100, 500, 1000 and 3000 keV for different scoring cell dimensions. A comparison between MCNP6 results and well-validated codes for electron dosimetry, i.e., EGSnrc or Penelope, is performed. When the detailed electron energy-loss straggling logic is used with the default setting (down to the cut-off energy of 1 keV), we infer that the depth of the dose peak increases with decreasing thickness of the scoring cell, largely due to combined step-size and boundary-crossing artifacts. This finding is less prominent for the 500 keV, 1 MeV and 3 MeV dose profiles. With an appropriate number of sub-steps (the ESTEP value in MCNP6), the dose-peak shift is almost completely absent for 50 keV and 100 keV electrons. However, the dose peak is more prominent compared to EGSnrc and the absorbed dose tends to be underestimated at greater depths, meaning that boundary-crossing artifacts still occur while step-size artifacts are greatly reduced. When the single-event mode is used for the whole transport, we observe good agreement between the reference and calculated profiles for 50 and 100 keV electrons. The remaining artifacts fully vanish, showing a possible transport treatment for energies less than a hundred keV and agreement with the reference for any scoring cell dimension, even though the single-event method was initially intended to support electron transport at energies below 1 keV. Conversely, results for 500 keV, 1 MeV and 3 MeV show a dramatic discrepancy with the reference curves. These poor results, and thus the current unreliability of the method, are partly due to inappropriate elastic cross-section treatment from the ENDF/B-VI.8 library in those

  8. Neighborhoods and manageable proximity

    Directory of Open Access Journals (Sweden)

    Stavros Stavrides

    2011-08-01

    Full Text Available The theatricality of urban encounters is above all a theatricality of distances which allow for the encounter. The absolute “strangeness” of the crowd (Simmel 1997: 74 expressed, in its purest form, in the absolute proximity of a crowded subway train, does not generally allow for any movements of approach, but only for nervous hostile reactions and submissive hypnotic gestures. Neither forced intersections in the course of pedestrians or vehicles, nor the instantaneous crossing of distances by the technology of live broadcasting and remote control give birth to places of encounter. In the forced proximity of the metropolitan crowd which haunted the city of the 19th and 20th century, as well as in the forced proximity of the tele-presence which haunts the dystopic prospect of the future “omnipolis” (Virilio 1997: 74, the necessary distance, which is the stage of an encounter between different instances of otherness, is dissipated.

  9. Proximal collagenous gastroenteritides:

    DEFF Research Database (Denmark)

    Nielsen, Ole Haagen; Riis, Lene Buhl; Danese, Silvio

    2014-01-01

    AIM: While collagenous colitis represents the most common form of the collagenous gastroenteritides, the collagenous entities affecting the proximal part of the gastrointestinal tract are much less recognized and possibly overlooked. The aim was to summarize the latest information through a syste...

  10. Proximal femoral fractures

    DEFF Research Database (Denmark)

    Palm, Henrik; Teixidor, Jordi

    2015-01-01

    searched the homepages of the national heath authorities and national orthopedic societies in West Europe and found 11 national or regional (in case of no national) guidelines including any type of proximal femoral fracture surgery. RESULTS: Pathway consensus is outspread (internal fixation for un...

  11. Proximate Analysis of Coal

    Science.gov (United States)

    Donahue, Craig J.; Rais, Elizabeth A.

    2009-01-01

    This lab experiment illustrates the use of thermogravimetric analysis (TGA) to perform proximate analysis on a series of coal samples of different rank. Peat and coke are also examined. A total of four exercises are described. These are dry exercises as students interpret previously recorded scans. The weight percent moisture, volatile matter,…

  12. Quantum Proximity Resonances

    International Nuclear Information System (INIS)

    Heller, E.J.

    1996-01-01

    It is well known that at long wavelengths λ an s-wave scatterer can have a scattering cross section σ on the order of λ², much larger than its physical size as measured by the range of its potential. Very interesting phenomena can arise when two or more identical scatterers are placed close together, well within one wavelength. We show that, for a pair of identical scatterers, an extremely narrow p-wave “proximity” resonance develops from a broader s-wave resonance of the individual scatterers. A new s-wave resonance of the pair also appears. The relation of these proximity resonances (so called because they appear when the scatterers are close together) to the Thomas and Efimov effects is discussed. copyright 1996 The American Physical Society

  13. Analyzing algorithms for nonlinear and spatially nonuniform phase shifts in the liquid crystal point diffraction interferometer. 1998 summer research program for high school juniors at the University of Rochester's Laboratory for Laser Energetics. Student research reports

    International Nuclear Information System (INIS)

    Jain, N.

    1999-03-01

    Phase-shifting interferometry has many advantages, and the phase-shifting nature of the Liquid Crystal Point Diffraction Interferometer (LCPDI) promises to provide significant improvement over other current OMEGA wavefront sensors. However, while phase-shifting capabilities improve its accuracy as an interferometer, phase-shifting itself introduces errors. Phase-shifting algorithms are designed to eliminate certain types of phase-shift errors, and it is important to choose an algorithm that is best suited for use with the LCPDI. Using polarization microscopy, the authors have observed a correlation between LC alignment around the microsphere and fringe behavior. After designing a procedure to compare phase-shifting algorithms, they were able to predict the accuracy of two particular algorithms through computer modeling of device-specific phase-shift errors.
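
    The mechanics behind such comparisons are easy to illustrate. As a minimal sketch (the classic four-step algorithm, not necessarily the LCPDI-specific algorithms studied in the report), the wrapped phase is recovered from four fringe intensities taken at nominal π/2 shifts:

```python
import math

def four_step_phase(i1, i2, i3, i4):
    """Recover the wrapped phase from four intensity samples taken at
    nominal phase shifts of 0, pi/2, pi, 3*pi/2."""
    return math.atan2(i4 - i2, i1 - i3)

# Synthetic fringe: I_k = A + B*cos(phi + k*pi/2)
A, B, phi = 2.0, 1.0, 0.7
frames = [A + B * math.cos(phi + k * math.pi / 2) for k in range(4)]
print(four_step_phase(*frames))  # recovers phi = 0.7
```

    Phase-shift calibration errors perturb the nominal π/2 steps, which is exactly the kind of device-specific error such algorithms are compared on.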

  14. Inexact proximal Newton methods for self-concordant functions

    DEFF Research Database (Denmark)

    Li, Jinchao; Andersen, Martin Skovgaard; Vandenberghe, Lieven

    2016-01-01

    with an application to L1-regularized covariance selection, in which prior constraints on the sparsity pattern of the inverse covariance matrix are imposed. In the numerical experiments the proximal Newton steps are computed by an accelerated proximal gradient method, and multifrontal algorithms for positive definite...... matrices with chordal sparsity patterns are used to evaluate gradients and matrix-vector products with the Hessian of the smooth component of the objective....

  15. Proximity friction reexamined

    International Nuclear Information System (INIS)

    Krappe, H.J.

    1989-01-01

    The contribution of inelastic excitations to radial and tangential friction form-factors in heavy-ion collisions is investigated in the frame-work of perturbation theory. The dependence of the form factors on the essential geometrical and level-density parameters of the scattering system is exhibited in a rather closed form. The conditions for the existence of time-local friction coefficients are discussed. Results are compared to form factors from other models, in particular the transfer-related proximity friction. For the radial friction coefficient the inelastic excitation mechanism seems to be the dominant contribution in peripheral collisions. (orig.)

  16. Proximal femoral fractures.

    Science.gov (United States)

    Webb, Lawrence X

    2002-01-01

    Fractures of the proximal femur include fractures of the head, neck, intertrochanteric, and subtrochanteric regions. Head fractures commonly accompany dislocations. Neck fractures and intertrochanteric fractures occur with greatest frequency in elderly patients with a low bone mineral density and are produced by low-energy mechanisms. Subtrochanteric fractures occur in a predominantly strong cortical osseous region which is exposed to large compressive stresses. Implants used to address these fractures must be able to accommodate significant loads while the fractures consolidate. Complications secondary to these injuries produce significant morbidity and include infection, nonunion, malunion, decubitus ulcers, fat emboli, deep venous thrombosis, pulmonary embolus, pneumonia, myocardial infarction, stroke, and death.

  17. Echosonography with proximity sensors

    International Nuclear Information System (INIS)

    Thaisiam, W; Laithong, T; Meekhun, S; Chaiwathyothin, N; Thanlarp, P; Danworaphong, S

    2013-01-01

    We propose the use of a commercial ultrasonic proximity sensor kit for profiling an altitude-varying surface by employing echosonography. The proximity sensor kit, two identical transducers together with its dedicated operating circuit, is used as a profiler for the construction of an image. Ultrasonic pulses are emitted from one of the transducers and received by the other. The time duration between the pulses allows us to determine the traveling distance of each pulse. In the experiment, the circuit is used with the addition of two copper wires for directing the outgoing and incoming signals to an oscilloscope. The time of flight of ultrasonic pulses can thus be determined. Square grids of 5 × 5 cm² are made from fishing lines, forming pixels in the image. The grids are designed to hold the detection unit in place, about 30 cm above a flat surface. The surface to be imaged is constructed to be height varying and placed on the flat surface underneath the grids. Our result shows that an image of the profiled surface can be created by varying the location of the detection unit along the grid. We also investigate the deviation in relation to the time of flight of the ultrasonic pulse. Such an experiment should be valuable for conveying the concept of ultrasonic imaging to physical and medical science undergraduate students. Due to its simplicity, the setup could be made in any undergraduate laboratory relatively inexpensively and it requires no complex parts. The results illustrate the concept of echosonography. (paper)
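
    The range computation behind this setup is elementary: the round-trip time of flight maps to distance through the speed of sound. A sketch of that conversion; the temperature correction is our own addition, not a feature of the kit described:

```python
def echo_distance(time_of_flight_s, temp_c=20.0):
    """Estimate distance to a reflecting surface from the round-trip
    time of flight of an ultrasonic pulse, using a linear
    approximation for the temperature-dependent speed of sound in air."""
    v_sound = 331.3 + 0.606 * temp_c  # m/s
    return v_sound * time_of_flight_s / 2.0

# A pulse returning after ~1.75 ms corresponds to ~30 cm at 20 C,
# roughly the grid-to-surface distance in the experiment.
print(round(echo_distance(1.75e-3), 3))
```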

  18. Proximity detection system underground

    Energy Technology Data Exchange (ETDEWEB)

    Denis Kent [Mine Site Technologies (Australia)

    2008-04-15

    Mine Site Technologies (MST), with the support of ACARP and Xstrata Coal NSW as well as assistance from Centennial Coal, has developed a Proximity Detection System to proof-of-concept stage as per plan. The basic aim of the project was to develop a system to reduce the risk of people coming into contact with vehicles in an uncontrolled manner (i.e. being 'run over'). The potential to extend the developed technology into other areas, such as controls for vehicle-vehicle collisions and restricting access of vehicles or people into certain zones (e.g. non-FLP vehicles into Hazardous Zones/ERZ), was also assessed. The project leveraged off MST's existing Intellectual Property and experience gained with our ImPact TRACKER tagging technology, allowing the development to be fast tracked. The basic concept developed uses active RFID Tags worn by miners underground that are detected by vehicle-mounted Readers. These Readers in turn provide outputs that can be used to alert a driver (e.g. by light and/or audible alarm) that a person (Tag) is approaching within their vicinity. The prototype/test kit developed proved the concept and technology, the four main components being: Active RFID Tags to send out signals for detection by vehicle-mounted receivers; Receiver electronics to detect RFID Tags approaching within the vicinity of the unit, creating a long-range detection system (60 m to 120 m); A transmitting/exciter device to enable an inner detection zone (within 5 m to 20 m); and A software/hardware device to process and log incoming Tag reads and create certain outputs. Tests undertaken in the laboratory and at a number of mine sites confirmed that the technology path taken could form the basis of a reliable Proximity Detection/Alert System.

  19. PROXIMITY MANAGEMENT IN CRISIS CONDITIONS

    Directory of Open Access Journals (Sweden)

    Ion Dorin BUMBENECI

    2010-01-01

    Full Text Available The purpose of this study is to evaluate the level of assimilation of the terms "Proximity Management" and "Proximity Manager", both in the specialized literature and in practice. The study has two parts: a theoretical study of the two terms, and an evaluation of the use of proximity management in 32 companies in Gorj, Romania. The evaluation covers 27 companies with fewer than 50 employees and 5 companies with more than 50 employees.

  20. Localization method of picking point of apple target based on smoothing contour symmetry axis algorithm

    Institute of Scientific and Technical Information of China (English)

    王丹丹; 徐越; 宋怀波; 何东健

    2015-01-01

    The accurate localization of the picking point is a key problem that picking robots must solve. Given the good symmetry of apple targets, and exploiting the translation and rotation invariance of the moment of inertia together with the fact that it reaches extreme values along the symmetry axis direction, a picking-point localization method based on a contour symmetry axis is proposed. To address the low localization accuracy caused by the rough edges of segmented apple targets, a contour-smoothing method is also presented. To verify the algorithm, 20 randomly selected images of unoccluded single apples were processed both with and without contour smoothing. The average localization error was 20.678° without contour smoothing and 4.542° with it, a reduction of 78.035%; the average running time was 10.2 ms without smoothing and 7.5 ms with it, a reduction of 25.839%, showing that contour smoothing improves both localization accuracy and computational efficiency. The smoothed contour symmetry axis algorithm reliably finds the symmetry axis of the apple target and locates the picking point, indicating that the method is feasible for symmetry-axis extraction and picking-point localization of apple targets.
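
    One common way to realize the moment-of-inertia idea in this abstract is to take the axis of minimum second moment from image moments of the object's pixels. The sketch below illustrates that idea under our own assumptions; it is not the authors' exact procedure:

```python
import math

def symmetry_axis_angle(points):
    """Estimate an object's principal (candidate symmetry) axis from
    its pixel coordinates via second-order central moments; this axis
    minimizes the moment of inertia, the property exploited above."""
    n = len(points)
    cx = sum(x for x, _ in points) / n
    cy = sum(y for _, y in points) / n
    mu20 = sum((x - cx) ** 2 for x, _ in points)
    mu02 = sum((y - cy) ** 2 for _, y in points)
    mu11 = sum((x - cx) * (y - cy) for x, y in points)
    return 0.5 * math.atan2(2 * mu11, mu20 - mu02)

# An elongated point set aligned with the 45-degree direction
pts = [(t, t) for t in range(10)] + [(t + 0.3, t - 0.3) for t in range(10)]
print(math.degrees(symmetry_axis_angle(pts)))  # close to 45
```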

  1. Autodriver algorithm

    Directory of Open Access Journals (Sweden)

    Anna Bourmistrova

    2011-02-01

    Full Text Available The autodriver algorithm is an intelligent method to eliminate the need for steering by a driver on a well-defined road. The proposed method performs best on a four-wheel-steering (4WS) vehicle, though it is also applicable to two-wheel-steering (TWS) vehicles. The algorithm is based on making the actual vehicle center of rotation coincide with the road center of curvature, by adjusting the kinematic center of rotation. The road center of curvature is assumed to be prior information for a given road, while the dynamic center of rotation is the output of the dynamic equations of motion of the vehicle, using steering angle and velocity measurements as inputs. We use the kinematic condition of steering to set the steering angles in such a way that the kinematic center of rotation of the vehicle sits at a desired point. At low speeds the ideal and actual paths of the vehicle are very close. With increasing forward speed, the road and tire characteristics, along with the motion dynamics of the vehicle, cause the vehicle to turn about time-varying points. By adjusting the steering angles, our algorithm controls the dynamic turning center of the vehicle so that it coincides with the road curvature center, hence keeping the vehicle on a given road autonomously. The position and orientation errors are used as feedback signals in a closed-loop control to adjust the steering angles. The application of the presented autodriver algorithm demonstrates reliable performance under different driving conditions.
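
    The kinematic condition can be sketched for a simplified 4WS bicycle model. The geometry below, with the center of rotation assumed to lie directly to the side of the center of gravity, is our own illustrative simplification, not the paper's dynamic controller:

```python
import math

def four_ws_angles(a, b, radius):
    """Kinematic four-wheel-steering angles that place the vehicle's
    instantaneous center of rotation at the road's center of
    curvature, assumed to lie abeam of the center of gravity at
    distance `radius` (a, b = CG-to-front/rear-axle distances, m)."""
    delta_front = math.atan2(a, radius)   # front wheels steer into the turn
    delta_rear = -math.atan2(b, radius)   # rear wheels counter-steer
    return delta_front, delta_rear

# 50 m radius curve; CG 1.2 m behind the front axle, 1.6 m ahead of rear
df, dr = four_ws_angles(1.2, 1.6, 50.0)
print(math.degrees(df), math.degrees(dr))
```

    Each wheel's heading is set perpendicular to its radius from the turning center, so both axles sweep circles about the same point, which is the condition the autodriver algorithm enforces dynamically.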

  2. Mathematical representation of the normal proximal human femur: application in planning of cam hip surgery.

    Science.gov (United States)

    Masjedi, Milad; Harris, Simon J; Davda, Kinner; Cobb, Justin P

    2013-04-01

    Precise modelling of the proximal femur can be used for detecting and planning corrective surgery for subjects with deformed femurs using robotic technology or navigation systems. In this study, the proximal femoral geometry has been modelled mathematically. It is hypothesised that it is possible to fit a quadratic surface or combinations of them onto different bone surfaces with a relatively good fit. Forty-six computed tomography datasets of normal proximal femora were segmented. A least-squares fitting algorithm was used to fit a quadratic surface on the femoral head and neck such that the sum of distances between a set of points on the femoral neck and the quadratic surface was minimised. Furthermore, the position of the head-neck articular margin was also measured. The femoral neck was found to be represented as a good fit to a hyperboloid with an average root mean-squared error of 1.0 ± 0.13 mm while the shape of the femoral articular margin was a reproducible sinusoidal wave form with two peaks. The mathematical description in this study can be used for planning corrective surgery for subjects with cam-type femoroacetabular impingement.
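
    The least-squares quadric fit described here can be sketched with a linear algebraic-distance formulation, a common simplification that may differ from the authors' exact (geometric-distance) procedure:

```python
import numpy as np

def fit_quadric(points):
    """Least-squares fit of a general quadric surface
        a1 x^2 + a2 y^2 + a3 z^2 + a4 xy + a5 xz + a6 yz
          + a7 x + a8 y + a9 z = 1
    to 3D points, minimizing the algebraic residual ||A c - 1||."""
    p = np.asarray(points, dtype=float)
    x, y, z = p[:, 0], p[:, 1], p[:, 2]
    A = np.column_stack([x * x, y * y, z * z, x * y, x * z, y * z, x, y, z])
    coeffs, *_ = np.linalg.lstsq(A, np.ones(len(p)), rcond=None)
    return coeffs

# Sanity check: points on the unit sphere x^2 + y^2 + z^2 = 1
rng = np.random.default_rng(0)
v = rng.normal(size=(200, 3))
sphere = v / np.linalg.norm(v, axis=1, keepdims=True)
c = fit_quadric(sphere)
print(np.round(c[:3], 6))  # ~ [1, 1, 1]; cross and linear terms ~ 0
```

    A hyperboloid, as reported for the femoral neck, is simply a quadric whose quadratic-term coefficients differ in sign.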

  3. Proximal caries detection: Sirona Sidexis versus Kodak Ektaspeed Plus.

    Science.gov (United States)

    Khan, Emad A; Tyndall, Donald A; Ludlow, John B; Caplan, Daniel

    2005-01-01

    This study compared the accuracy of intraoral film and a charge-coupled device (CCD) receptor for proximal caries detection. Four observers evaluated images of the proximal surfaces of 40 extracted posterior teeth. The presence or absence of caries was scored using a five-point confidence scale. The actual status of each surface was determined from ground-section histology. Responses were evaluated by means of receiver operating characteristic (ROC) analysis. Areas under ROC curves (Az) were compared with a paired t-test. The performance of the CCD-based intraoral sensor was not statistically different from that of Ektaspeed Plus film in detecting proximal caries.
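
    The ROC methodology used here can be illustrated in a few lines: with ordinal confidence ratings, the area under the ROC curve equals the Mann-Whitney probability that a truly carious surface outscores a sound one (ties counted as 1/2). A sketch with invented toy ratings, not the study's data:

```python
def auc_from_scores(scores, labels):
    """Area under the ROC curve from ordinal confidence scores
    (e.g. a five-point caries-confidence scale) computed as the
    Mann-Whitney statistic, with ties counted as 1/2."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy ratings: higher score = more confident the surface is carious
scores = [5, 4, 4, 3, 2, 2, 1, 1]
labels = [1, 1, 0, 1, 0, 0, 0, 0]
print(auc_from_scores(scores, labels))  # 0.9
```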

  4. Hemiarthroplasty for proximal humeral fracture: restoration of the Gothic arch.

    Science.gov (United States)

    Krishnan, Sumant G; Bennion, Phillip W; Reineck, John R; Burkhead, Wayne Z

    2008-10-01

    Proximal humerus fractures are the most common fractures of the shoulder girdle, and initial management of these injuries often determines final outcome. When arthroplasty is used to manage proximal humeral fractures, surgery remains technically demanding, and outcomes have been unpredictable. Recent advances in both technique and prosthetic implants have led to more successful and reproducible results. Key technical points include restoration of the Gothic arch, anatomic tuberosity reconstruction, and minimal soft tissue dissection.

  5. Autonomous vision-based navigation for proximity operations around binary asteroids

    Science.gov (United States)

    Gil-Fernandez, Jesus; Ortega-Hernando, Guillermo

    2018-06-01

    Future missions to small bodies demand higher level of autonomy in the Guidance, Navigation and Control system for higher scientific return and lower operational costs. Different navigation strategies have been assessed for ESA's asteroid impact mission (AIM). The main objective of AIM is the detailed characterization of binary asteroid Didymos. The trajectories for the proximity operations shall be intrinsically safe, i.e., no collision in presence of failures (e.g., spacecraft entering safe mode), perturbations (e.g., non-spherical gravity field), and errors (e.g., maneuver execution error). Hyperbolic arcs with sufficient hyperbolic excess velocity are designed to fulfil the safety, scientific, and operational requirements. The trajectory relative to the asteroid is determined using visual camera images. The ground-based trajectory prediction error at some points is comparable to the camera Field Of View (FOV). Therefore, some images do not contain the entire asteroid. Autonomous navigation can update the state of the spacecraft relative to the asteroid at higher frequency. The objective of the autonomous navigation is to improve the on-board knowledge compared to the ground prediction. The algorithms shall fit in off-the-shelf, space-qualified avionics. This note presents suitable image processing and relative-state filter algorithms for autonomous navigation in proximity operations around binary asteroids.

  6. Cook-Levin Theorem Algorithmic-Reducibility/Completeness = Wilson Renormalization-(Semi)-Group Fixed-Points; ``Noise''-Induced Phase-Transitions (NITs) to Accelerate Algorithmics (``NIT-Picking'') REPLACING CRUTCHES!!!: Models: Turing-machine, finite-state-models, finite-automata

    Science.gov (United States)

    Young, Frederic; Siegel, Edward

    Cook-Levin theorem algorithmic computational-complexity(C-C) algorithmic-equivalence reducibility/completeness equivalence to renormalization-(semi)-group phase-transitions critical-phenomena statistical-physics universality-classes fixed-points, is exploited via Siegel FUZZYICS =CATEGORYICS = ANALOGYICS =PRAGMATYICS/CATEGORY-SEMANTICS ONTOLOGY COGNITION ANALYTICS-Aristotle ``square-of-opposition'' tabular list-format truth-table matrix analytics predicts and implements ''noise''-induced phase-transitions (NITs) to accelerate versus to decelerate Harel [Algorithmics (1987)]-Sipser[Intro.Thy. Computation(`97)] algorithmic C-C: ''NIT-picking''(!!!), to optimize optimization-problems optimally(OOPO). Versus iso-''noise'' power-spectrum quantitative-only amplitude/magnitude-only variation stochastic-resonance, ''NIT-picking'' is ''noise'' power-spectrum QUALitative-type variation via quantitative critical-exponents variation. Computer-''science''/SEANCE algorithmic C-C models: Turing-machine, finite-state-models, finite-automata,..., discrete-maths graph-theory equivalence to physics Feynman-diagrams are identified as early-days once-workable valid but limiting IMPEDING CRUTCHES(!!!), ONLY IMPEDE latter-days new-insights!!!

  7. Fractures of the proximal humerus

    DEFF Research Database (Denmark)

    Brorson, Stig

    2013-01-01

    Fractures of the proximal humerus have been diagnosed and managed since the earliest known surgical texts. For more than four millennia the preferred treatment was forceful traction, closed reduction, and immobilization with linen soaked in combinations of oil, honey, alum, wine, or cerate......, classification of proximal humeral fractures remains a challenge for the conduct, reporting, and interpretation of clinical trials. The evidence for the benefits of surgery in complex fractures of the proximal humerus is weak. In three systematic reviews I studied the outcome after locking plate osteosynthesis...

  8. Quantum ferromagnet in the proximity of the tricritical point

    Czech Academy of Sciences Publication Activity Database

    Opletal, P.; Prokleška, J.; Valenta, J.; Proschek, P.; Tkáč, V.; Tarasenko, R.; Běhounková, M.; Matoušková, Šárka; Abd-Elmeguid, M. M.; Sechovský, V.

    2017-01-01

    Roč. 2, JUN 13 2017 (2017), č. článku 29. ISSN 2397-4648 Institutional support: RVO:67985831 Keywords : metamagnetic transition * high-pressure * liquid * UCoAL * state * destruction * criticality Subject RIV: BM - Solid Matter Physics ; Magnetism OBOR OECD: Condensed matter physics (including formerly solid state physics , supercond.)

  9. An automated three-dimensional detection and segmentation method for touching cells by integrating concave points clustering and random walker algorithm.

    Directory of Open Access Journals (Sweden)

    Yong He

    Full Text Available Characterizing cytoarchitecture is crucial for understanding brain functions and neural diseases. In neuroanatomy, it is an important task to accurately extract cell populations' centroids and contours. Recent advances have permitted imaging at single cell resolution for an entire mouse brain using the Nissl staining method. However, it is difficult to precisely segment numerous cells, especially those cells touching each other. As presented herein, we have developed an automated three-dimensional detection and segmentation method applied to the Nissl staining data, with the following two key steps: (1) concave points clustering to determine the seed points of touching cells; and (2) random walker segmentation to obtain cell contours. We have also evaluated the performance of our proposed method with several mouse brain datasets, which were captured with the micro-optical sectioning tomography imaging system, and the datasets include closely touching cells. Compared with traditional detection and segmentation methods, our approach shows promising detection accuracy and high robustness.
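
    The first step, concave point detection, can be sketched for a polygonal contour: at a concave vertex of a counter-clockwise contour, the cross product of consecutive edge vectors is negative. This illustrates the geometric criterion only, not the authors' clustering procedure:

```python
def concave_points(contour):
    """Flag concave vertices of a closed polygonal contour listed
    counter-clockwise: at a concave vertex the z-component of the
    cross product of the incoming and outgoing edges is negative.
    Touching-cell boundaries produce such concavities, which seed
    the split between the cells."""
    out = []
    n = len(contour)
    for i in range(n):
        (x0, y0), (x1, y1), (x2, y2) = (contour[i - 1], contour[i],
                                        contour[(i + 1) % n])
        cross = (x1 - x0) * (y2 - y1) - (y1 - y0) * (x2 - x1)
        if cross < 0:
            out.append(contour[i])
    return out

# A square with one vertex pushed inward (a notch between "cells")
shape = [(0, 0), (4, 0), (4, 4), (2, 1), (0, 4)]
print(concave_points(shape))  # the pushed-in vertex (2, 1)
```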

  10. Development of procedures for programmable proximity aperture lithography

    Energy Technology Data Exchange (ETDEWEB)

    Whitlow, H.J., E-mail: harry.whitlow@he-arc.ch [Institut des Microtechnologies Appliquées Arc, Haute Ecole Arc Ingénierie, Eplatures-Grise 17, CH-2300 La Chaux-de-Fonds (Switzerland); Department of Physics, University of Jyväskylä, P.O. Box 35 (YFL), FI-40014 Jyväskylä (Finland); Gorelick, S. [VTT Technical Research Centre of Finland, P.O. Box 1000, Tietotie 3, Espoo, FI-02044 VTT (Finland); Puttaraksa, N. [Department of Physics, University of Jyväskylä, P.O. Box 35 (YFL), FI-40014 Jyväskylä (Finland); Plasma and Beam Physics Research Facility, Department of Physics and Materials Science, Faculty of Science, Chiang Mai University, Chiang Mai 50200 (Thailand); Napari, M.; Hokkanen, M.J.; Norarat, R. [Department of Physics, University of Jyväskylä, P.O. Box 35 (YFL), FI-40014 Jyväskylä (Finland)

    2013-07-01

    Programmable proximity aperture lithography (PPAL) with MeV ions has been used at Jyväskylä and Chiang Mai universities for a number of years. Here we describe a number of innovations and procedures that have been incorporated into the LabView-based software. The basic operation involves the coordination of the beam blanker and five motor-actuated translators with high accuracy, close to the minimum step size, with proper anti-collision algorithms. By using special approaches, such as writing calibration patterns, linearisation of position, and careful backlash correction, the absolute accuracy of the aperture size and position can be improved beyond the standard afforded by the repeatability of the translator end-point switches. Another area of consideration has been the fluence control procedures. These involve control of the uniformity of the beam, where different approaches to fluence measurement are used, such as simultaneous measurement of the aperture current and of the ion current passing through the aperture with a Faraday cup. Microfluidic patterns may contain many elements that make up mixing sections, reaction chambers, separation columns and fluid reservoirs. To facilitate conception and planning we have implemented a .svg file interpreter that allows the use of scalable vector graphics files produced by standard drawing software for the generation of patterns made up of rectangular elements.

  11. Research on an uplink carrier sense multiple access algorithm of large indoor visible light communication networks based on an optical hard core point process.

    Science.gov (United States)

    Nan, Zhufen; Chi, Xuefen

    2016-12-20

    The IEEE 802.15.7 protocol suggests that it could coordinate the channel access process based on the competitive method of carrier sensing. However, the directionality of light and randomness of diffuse reflection would give rise to a serious imperfect carrier sense (ICS) problem [e.g., hidden node (HN) problem and exposed node (EN) problem], which brings great challenges in realizing the optical carrier sense multiple access (CSMA) mechanism. In this paper, the carrier sense process implemented by diffuse reflection light is modeled as the choice of independent sets. We establish an ICS model with the presence of ENs and HNs for the multi-point to multi-point visible light communication (VLC) uplink communications system. Considering the severe optical ICS problem, an optical hard core point process (OHCPP) is developed, which characterizes the optical CSMA for the indoor VLC uplink communications system. Due to the limited coverage of the transmitted optical signal, in our OHCPP, the ENs within the transmitters' carrier sense region could be retained provided that they could not corrupt the ongoing communications. Moreover, because of the directionality of both light emitting diode (LED) transmitters and receivers, theoretical analysis of the HN problem becomes difficult. In this paper, we derive the closed-form expression for approximating the outage probability and transmission capacity of VLC networks with the presence of HNs and ENs. Simulation results validate the analysis and also show the existence of an optimal physical carrier-sensing threshold that maximizes the transmission capacity for a given emission angle of LED.
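
    The hard core point process underlying this analysis can be illustrated with the generic Matérn type-II construction; the paper's OHCPP modifies it to retain harmless exposed nodes, whereas the sketch below is the unmodified textbook version:

```python
import math
import random

def matern_hardcore(n, area, r, seed=1):
    """Matern type-II hard-core thinning: each parent point is kept
    only if no point with a smaller random mark lies within distance
    r, so surviving transmitters never sense a concurrent transmitter
    closer than r (a generic analogue of the optical CSMA model)."""
    rng = random.Random(seed)
    side = math.sqrt(area)
    pts = [(rng.uniform(0, side), rng.uniform(0, side), rng.random())
           for _ in range(n)]
    kept = []
    for x, y, m in pts:
        if all(m < m2 or (x - x2) ** 2 + (y - y2) ** 2 >= r * r
               for x2, y2, m2 in pts if (x2, y2, m2) != (x, y, m)):
            kept.append((x, y))
    return kept

survivors = matern_hardcore(200, area=100.0, r=1.0)
print(len(survivors))  # fewer than 200; no surviving pair closer than r
```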

  12. An algorithm for finding a common solution for a system of mixed equilibrium problem, quasi-variational inclusion problem and fixed point problem of nonexpansive semigroup

    Directory of Open Access Journals (Sweden)

    Liu Min

    2010-01-01

    Full Text Available In this paper, we introduce a hybrid iterative scheme for finding a common element of the set of solutions for a system of mixed equilibrium problems, the set of common fixed points for a nonexpansive semigroup and the set of solutions of the quasi-variational inclusion problem with multi-valued maximal monotone mappings and inverse-strongly monotone mappings in a Hilbert space. Under suitable conditions, some strong convergence theorems are proved. Our results extend some recent results in the literature.

  13. A 3D-Space Vector Modulation Algorithm for Three Phase Four Wire Neutral Point Clamped Inverter Systems as Power Quality Compensator

    Directory of Open Access Journals (Sweden)

    Palanisamy Ramasamy

    2017-11-01

    Full Text Available A Unified Power Quality Conditioner (UPQC is designed using a Neutral Point Clamped (NPC multilevel inverter to improve the power quality. When designed for high/medium voltage and power applications, the voltage stress across the switches and harmonic content in the output voltage are increased. A 3-phase 4-wire NPC inverter system is developed as Power Quality Conditioner using an effectual three dimensional Space Vector Modulation (3D-SVM technique. The proposed system behaves like a UPQC with shunt and series active filter under balanced and unbalanced loading conditions. In addition to the improvement of the power quality issues, it also balances the neutral point voltage and voltage balancing across the capacitors under unbalanced condition. The hardware and simulation results of proposed system are compared with 2D-SVM and 3D-SVM. The proposed system is stimulated using MATLAB and the hardware is designed using FPGA. From the results it is evident that effectual 3D-SVM technique gives better performance compared to other control methods.

  14. Solving discrete zero point problems

    NARCIS (Netherlands)

    van der Laan, G.; Talman, A.J.J.; Yang, Z.F.

    2004-01-01

    In this paper an algorithm is proposed to find a discrete zero point of a function on the collection of integral points in the n-dimensional Euclidean space IR^n. Starting with a given integral point, the algorithm generates a finite sequence of adjacent integral simplices of varying dimension and
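
    To make the notion concrete: a discrete zero point is an integral x with f(x) = 0. The sketch below finds one by naive neighbor descent on the l1 norm of f; this is only an illustration of the concept and lacks the convergence guarantee of the paper's simplicial algorithm:

```python
def discrete_zero_point(f, start, max_steps=10_000):
    """Greedy search for an integral zero point of f (a point x in
    Z^n with f(x) = 0): repeatedly move to the unit-step neighbor
    that most reduces ||f(x)||_1, stopping at a zero or a local
    minimum. Illustrative only; it can stall where a simplicial
    pivoting method would not."""
    x = list(start)
    n = len(x)
    norm1 = lambda v: sum(abs(t) for t in v)
    for _ in range(max_steps):
        if norm1(f(x)) == 0:
            return tuple(x)
        best, best_val = None, norm1(f(x))
        for i in range(n):
            for d in (-1, 1):
                y = x[:]
                y[i] += d
                v = norm1(f(y))
                if v < best_val:
                    best, best_val = y, v
        if best is None:
            return None  # stuck in a local minimum
        x = best
    return None

# f(x) = x - (3, -2): unique integral zero at (3, -2)
f = lambda x: [x[0] - 3, x[1] + 2]
print(discrete_zero_point(f, (0, 0)))  # (3, -2)
```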

  15. Primal-dual predictor-corrector interior point algorithm for quadratic semidefinite programming

    Institute of Scientific and Technical Information of China (English)

    黄静静; 商朋见; 王爱文

    2011-01-01

    This paper extends the interior-point algorithm for semidefinite programming (SDP) to quadratic semidefinite programming (QSDP), with emphasis on how the AHO search direction is generated. First, the nonlinear equations for solving QSDP are derived using Wolfe's duality theory; applying Newton's method to these equations yields the AHO search direction for the interior-point algorithm, and the existence and uniqueness of this search direction are proved. The detailed steps of a predictor-corrector interior-point algorithm for QSDP are then given, and numerical experiments with interior-point algorithms based on three different search directions suggest that the algorithm using the NT direction is the most robust.
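
    The search-direction computation rests on a standard pattern: apply Newton's method to a system of nonlinear (optimality) equations, solving one linearized system per iteration. A generic sketch of that pattern on a toy 2-by-2 system, not the QSDP optimality system itself:

```python
import numpy as np

def newton_system(F, J, x0, tol=1e-10, max_iter=50):
    """Newton's method for F(x) = 0: each iteration solves the
    linear system J(x) dx = -F(x), the same linear-solve pattern
    used to generate interior-point search directions."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        dx = np.linalg.solve(J(x), -F(x))
        x = x + dx
        if np.linalg.norm(dx) < tol:
            return x
    return x

# Toy system: x^2 + y^2 = 4, x*y = 1
F = lambda v: np.array([v[0] ** 2 + v[1] ** 2 - 4, v[0] * v[1] - 1])
J = lambda v: np.array([[2 * v[0], 2 * v[1]], [v[1], v[0]]])
sol = newton_system(F, J, [2.0, 0.0])
print(np.round(sol, 6))
```

    In a predictor-corrector method this linear solve is performed twice per iteration, once for the affine (predictor) direction and once for the centering (corrector) direction, against the same factorized Jacobian.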

  16. Approximate solutions of common fixed-point problems

    CERN Document Server

    Zaslavski, Alexander J

    2016-01-01

    This book presents results on the convergence behavior of algorithms which are known as vital tools for solving convex feasibility problems and common fixed point problems. The main goal for us in dealing with a known computational error is to find what approximate solution can be obtained and how many iterates one needs to find it. According to known results, these algorithms should converge to a solution. In this exposition, these algorithms are studied, taking into account computational errors which remain consistent in practice. In this case the convergence to a solution does not take place. We show that our algorithms generate a good approximate solution if computational errors are bounded from above by a small positive constant. Beginning with an introduction, this monograph moves on to study: · dynamic string-averaging methods for common fixed point problems in a Hilbert space · dynamic string methods for common fixed point problems in a metric space · dynamic string-averaging version of the proximal...
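
    A toy averaged-projection step with a bounded computational error, on intervals of the real line, illustrates the behavior studied: with zero error the iterates converge to the intersection, and a bounded perturbation keeps them in its neighborhood. This minimal example is our own, not from the book:

```python
def project_interval(x, lo, hi):
    """Metric projection of x onto the interval [lo, hi]."""
    return min(max(x, lo), hi)

def averaged_projections(x, intervals, steps=100, err=0.0):
    """Averaged-projections iteration for a convex feasibility
    problem on the line: replace x by the average of its projections
    onto each interval, optionally perturbed by a bounded
    computational error `err` at every step."""
    for _ in range(steps):
        x = sum(project_interval(x, lo, hi)
                for lo, hi in intervals) / len(intervals) + err
    return x

# Feasible set: intersection of [1, 4] and [3, 6] is [3, 4]
print(averaged_projections(0.0, [(1, 4), (3, 6)]))  # converges to 3
```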

  17. Preconditioned alternating projection algorithms for maximum a posteriori ECT reconstruction

    International Nuclear Information System (INIS)

    Krol, Andrzej; Li, Si; Shen, Lixin; Xu, Yuesheng

    2012-01-01

    We propose a preconditioned alternating projection algorithm (PAPA) for solving the maximum a posteriori (MAP) emission computed tomography (ECT) reconstruction problem. Specifically, we formulate the reconstruction problem as a constrained convex optimization problem with the total variation (TV) regularization. We then characterize the solution of the constrained convex optimization problem and show that it satisfies a system of fixed-point equations defined in terms of two proximity operators arising from the convex functions that define the TV-norm and the constraint involved in the problem. The characterization (of the solution) via the proximity operators that define two projection operators naturally leads to an alternating projection algorithm for finding the solution. For efficient numerical computation, we introduce to the alternating projection algorithm a preconditioning matrix (the EM-preconditioner) for the dense system matrix involved in the optimization problem. We prove convergence of the PAPA theoretically. In numerical experiments, performance of our algorithms, with an appropriately selected preconditioning matrix, is compared with performance of the conventional MAP expectation-maximization (MAP-EM) algorithm with TV regularizer (EM-TV) and that of the recently developed nested EM-TV algorithm for ECT reconstruction. Based on the numerical experiments performed in this work, we observe that the alternating projection algorithm with the EM-preconditioner outperforms significantly the EM-TV in all aspects including the convergence speed, the noise in the reconstructed images and the image quality. It also outperforms the nested EM-TV in the convergence speed while providing comparable image quality. (paper)
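
    The proximity operators at the heart of such formulations are easy to illustrate with the l1-norm, whose prox is soft-thresholding; plugging it into a proximal-gradient (ISTA) loop shows the fixed-point machinery in miniature. This sketch uses the l1 prox rather than the TV prox and EM-preconditioning of the paper:

```python
import numpy as np

def prox_l1(v, t):
    """Proximity operator of t*||.||_1 (soft-thresholding), the kind
    of proximity operator whose fixed-point equations underlie
    algorithms such as PAPA (which uses the TV-norm prox instead)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_gradient(A, b, lam, steps=500):
    """ISTA: minimize 0.5*||Ax - b||^2 + lam*||x||_1 by alternating a
    gradient step on the smooth term with the l1 prox."""
    L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        x = prox_l1(x - (A.T @ (A @ x - b)) / L, lam / L)
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(20, 8))
x_true = np.array([1.5, 0, 0, -2.0, 0, 0, 0, 0])
b = A @ x_true
x_hat = proximal_gradient(A, b, lam=0.01)
print(np.round(x_hat, 2))  # sparse, close to x_true
```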

  18. Parallel algorithms

    CERN Document Server

    Casanova, Henri; Robert, Yves

    2008-01-01

    "…The authors of the present book, who have extensive credentials in both research and instruction in the area of parallelism, present a sound, principled treatment of parallel algorithms. … This book is very well written and extremely well designed from an instructional point of view. … The authors have created an instructive and fascinating text. The book will serve researchers as well as instructors who need a solid, readable text for a course on parallelism in computing. Indeed, for anyone who wants an understandable text from which to acquire a current, rigorous, and broad vi

  19. Preliminary study on leadership proximity

    Directory of Open Access Journals (Sweden)

    Ghinea Valentina Mihaela

    2017-07-01

    Full Text Available In general, it is agreed that effective leadership requires a certain degree of proximity, either physical or mental, which enables leaders to maintain control over their followers and communicate their vision. Although we agree with the leadership proximity principle, which states that leaders are able to efficiently serve only those people with whom they interact frequently, in this article we focus instead on the disadvantages of being too close and the way in which close proximity can actually hurt the effectiveness of leadership. The main effects that we discuss concern the way in which proximity and familiarity allow followers to see the weaknesses and faults of the leader much more easily, and thus diminish the leader's heroic aura, and the emotional bias that results from a leader being too familiar with his followers, which impedes rational decision making. As a result, we argue that there exists a functional proximity which allows the leader the necessary space in which to perform effective identity work and to hide the backstage aspects of leadership, while also allowing him an emotional buffer zone which enables him to maintain the ability to see clearly and make rational decisions.

  20. Correction of Misclassifications Using a Proximity-Based Estimation Method

    Directory of Open Access Journals (Sweden)

    Shmulevich Ilya

    2004-01-01

    Full Text Available An estimation method for correcting misclassifications in signal and image processing is presented. The method is based on the use of context-based (temporal or spatial) information in a sliding-window fashion. The classes can be purely nominal, that is, an ordering of the classes is not required. The method employs nonlinear operations based on class proximities defined by a proximity matrix. Two case studies are presented. In the first, the proposed method is applied to one-dimensional signals for processing data that are obtained by a musical key-finding algorithm. In the second, the estimation method is applied to two-dimensional signals for correction of misclassifications in images. In the first case study, the proximity matrix employed by the estimation method follows directly from music perception studies, whereas in the second case study, the optimal proximity matrix is obtained with genetic algorithms as the learning rule in a training-based optimization framework. Simulation results are presented in both case studies and the degree of improvement in classification accuracy that is obtained by the proposed method is assessed statistically using Kappa analysis.
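The sliding-window estimation idea can be made concrete with a toy corrector. The proximity matrix, window size, and maximum-summed-proximity rule below are illustrative assumptions rather than the paper's exact nonlinear operator; with an identity proximity matrix the rule degenerates to a plain majority vote over the window:

```python
import numpy as np

def proximity_filter(labels, P, half_window=1):
    """Relabel each sample with the class whose summed proximity to all
    labels in a sliding window is largest. With an identity proximity
    matrix this reduces to a majority vote over the window."""
    labels = list(labels)
    out = []
    for i in range(len(labels)):
        window = labels[max(0, i - half_window): i + half_window + 1]
        # score each candidate class by its total proximity to the window
        scores = [sum(P[c][l] for l in window) for c in range(len(P))]
        out.append(int(np.argmax(scores)))
    return out

# identity proximities: the filter removes a lone misclassified sample
P = np.eye(2)
print(proximity_filter([0, 0, 1, 0, 0], P))  # → [0, 0, 0, 0, 0]
```

A non-trivial proximity matrix (e.g. one derived from perceptual key distances, as in the first case study) would let nearby classes reinforce each other instead of treating all confusions as equally distant.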

  1. Algorithming the Algorithm

    DEFF Research Database (Denmark)

    Mahnke, Martina; Uprichard, Emma

    2014-01-01

    Imagine sailing across the ocean. The sun is shining, vastness all around you. And suddenly [BOOM] you’ve hit an invisible wall. Welcome to the Truman Show! Ever since Eli Pariser published his thoughts on a potential filter bubble, this movie scenario seems to have become reality, just with slight...... changes: it’s not the ocean, it’s the internet we’re talking about, and it’s not a TV show producer, but algorithms that constitute a sort of invisible wall. Building on this assumption, most research is trying to ‘tame the algorithmic tiger’. While this is a valuable and often inspiring approach, we...

  2. Distributed Solutions for Loosely Coupled Feasibility Problems Using Proximal Splitting Methods

    DEFF Research Database (Denmark)

    Pakazad, Sina Khoshfetrat; Andersen, Martin Skovgaard; Hansson, Anders

    2014-01-01

    In this paper, we consider convex feasibility problems (CFPs) where the underlying sets are loosely coupled, and we propose several algorithms to solve such problems in a distributed manner. These algorithms are obtained by applying proximal splitting methods to convex minimization reformulations ...

  3. Sensitivity Analysis of the Proximal-Based Parallel Decomposition Methods

    Directory of Open Access Journals (Sweden)

    Feng Ma

    2014-01-01

    Full Text Available The proximal-based parallel decomposition methods were recently proposed to solve structured convex optimization problems. These algorithms are eligible for parallel computation and can be used efficiently for solving large-scale separable problems. In this paper, compared with the previous theoretical results, we show that the range of the involved parameters can be enlarged while the convergence can be still established. Preliminary numerical tests on stable principal component pursuit problem testify to the advantages of the enlargement.

  4. Community detection in complex networks using proximate support vector clustering

    Science.gov (United States)

    Wang, Feifan; Zhang, Baihai; Chai, Senchun; Xia, Yuanqing

    2018-03-01

    Community structure, one of the most attention-attracting properties in complex networks, has been a cornerstone in advances of various scientific branches. A number of tools have been developed in recent studies concentrating on community detection algorithms. In this paper, we propose a support vector clustering method based on a proximity graph, owing to which the introduced algorithm surpasses the traditional support vector approach both in accuracy and complexity. Results of extensive experiments undertaken on computer-generated networks and real-world data sets illustrate competent performance in comparison with the other counterparts.

  5. APPLICATION OF A PRIMAL-DUAL INTERIOR POINT ALGORITHM USING EXACT SECOND ORDER INFORMATION WITH A NOVEL NON-MONOTONE LINE SEARCH METHOD TO GENERALLY CONSTRAINED MINIMAX OPTIMISATION PROBLEMS

    Directory of Open Access Journals (Sweden)

    INTAN S. AHMAD

    2008-04-01

    Full Text Available This work presents the application of a primal-dual interior point method to minimax optimisation problems. The algorithm differs significantly from previous approaches as it involves a novel non-monotone line search procedure, which is based on the use of standard penalty methods as the merit function for line search. The crucial novel concept is the discretisation of the penalty parameter over a finite range of orders of magnitude and the provision of a memory list for each such order. An implementation within a logarithmic barrier algorithm for bounds handling is presented with capabilities for large-scale application. Case studies presented demonstrate the capabilities of the proposed methodology, which relies on the reformulation of minimax models into standard nonlinear optimisation models. Some previously reported case studies from the open literature have been solved, with significantly better optimal solutions identified. We believe that the nature of the non-monotone line search scheme allows the search procedure to escape from local minima, hence the encouraging results obtained.

  6. World's largest particle physics laboratory selects Proxim Wireless Mesh

    CERN Multimedia

    2007-01-01

    "Proxim Wireless has announced that the European Organization for Nuclear Research (CERN), the world's largest particle physics laboratory and the birthplace of the World Wide Web, is using its ORiNOCO AP-4000 mesh access points to extend the range of the laboratory's Wi-Fi network and to provide continuous monitoring of the lab's calorimeters" (1/2 page)

  7. Topology of digital images visual pattern discovery in proximity spaces

    CERN Document Server

    Peters, James F

    2014-01-01

    This book carries forward recent work on visual patterns and structures in digital images and introduces a near set-based topology of digital images. Visual patterns arise naturally in digital images viewed as sets of non-abstract points endowed with some form of proximity (nearness) relation. Proximity relations make it possible to construct uniform topologies on the sets of points that constitute a digital image. In keeping with an interest in gaining an understanding of digital images themselves as a rich source of patterns, this book introduces the basics of digital images from a computer vision perspective. In parallel with a computer vision perspective on digital images, this book also introduces the basics of proximity spaces. Not only the traditional view of spatial proximity relations but also the more recent descriptive proximity relations are considered. The beauty of the descriptive proximity approach is that it is possible to discover visual set patterns among sets that are non-overlapping ...

  8. Fast geometric algorithms

    International Nuclear Information System (INIS)

    Noga, M.T.

    1984-01-01

    This thesis addresses a number of important problems that fall within the framework of the new discipline of Computational Geometry. The list of topics covered includes sorting and selection, convex hull algorithms, the L1 hull, determination of the minimum encasing rectangle of a set of points, the Euclidean and L1 diameter of a set of points, the metric traveling salesman problem, and finding the superrange of star-shaped and monotone polygons. The main theme of all the work was to develop a set of very fast state-of-the-art algorithms that supersede any rivals in terms of speed and ease of implementation. In some cases existing algorithms were refined; for others, new techniques were developed that add to the present database of fast adaptive geometric algorithms. What emerges is a collection of techniques that is successful at merging modern tools developed in analysis of algorithms with those of classical geometry
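As a flavor of the convex hull work mentioned above, here is a standard O(n log n) planar hull routine (Andrew's monotone chain), a textbook algorithm rather than anything specific to the thesis:

```python
def cross(o, a, b):
    """z-component of (a - o) x (b - o); positive for a left turn."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Andrew's monotone chain: convex hull in counter-clockwise order."""
    pts = sorted(set(points))       # lexicographic sort dominates the cost
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:                   # build the lower hull left to right
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):         # build the upper hull right to left
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]  # drop duplicated endpoints

square = [(0, 0), (1, 0), (1, 1), (0, 1), (0.5, 0.5)]
print(convex_hull(square))  # → [(0, 0), (1, 0), (1, 1), (0, 1)]
```

Each point is pushed and popped at most once per chain, so the work after sorting is linear, which is why the sort dominates the running time.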

  9. Inter-proximal enamel reduction in contemporary orthodontics.

    Science.gov (United States)

    Pindoria, J; Fleming, P S; Sharma, P K

    2016-12-16

    Inter-proximal enamel reduction has gained increasing prominence in recent years, being advocated to provide space for orthodontic alignment, to refine contact points and to potentially improve long-term stability. An array of techniques and products is available, ranging from hand-held abrasive strips to handpiece-mounted burs and discs. The indications for inter-proximal enamel reduction and the importance of formal space analysis, together with the various techniques and armamentarium which may be used to perform it safely in both the labial and buccal segments, are outlined.

  10. Electromagnetic properties of proximity systems

    Science.gov (United States)

    Kresin, Vladimir Z.

    1985-07-01

    Magnetic screening in the proximity system Sα-Mβ, where Mβ is a normal metal N, semiconductor (semimetal), or a superconductor, is studied. Main attention is paid to the low-temperature region where nonlocality plays an important role. The thermodynamic Green's-function method is employed in order to describe the behavior of the proximity system in an external field. The temperature and thickness dependences of the penetration depth λ are obtained. The dependence λ(T) differs in a striking way from the dependence in usual superconductors. The strong-coupling effect is taken into account. A special case of screening in a superconducting film backed by a size-quantizing semimetal film is considered. The results obtained are in good agreement with experimental data.

  11. Electromagnetic properties of proximity systems

    International Nuclear Information System (INIS)

    Kresin, V.Z.

    1985-01-01

    Magnetic screening in the proximity system Sα-Mβ, where Mβ is a normal metal N, semiconductor (semimetal), or a superconductor, is studied. Main attention is paid to the low-temperature region where nonlocality plays an important role. The thermodynamic Green's-function method is employed in order to describe the behavior of the proximity system in an external field. The temperature and thickness dependences of the penetration depth λ are obtained. The dependence λ(T) differs in a striking way from the dependence in usual superconductors. The strong-coupling effect is taken into account. A special case of screening in a superconducting film backed by a size-quantizing semimetal film is considered. The results obtained are in good agreement with experimental data

  12. Proximity effect at Millikelvin temperatures

    International Nuclear Information System (INIS)

    Mota, A.C.

    1986-01-01

    Proximity effects have been studied extensively for the past 25 years. Typically, they are in films several thousand angstroms thick at temperatures not so far below T_CNS, the transition temperature of the NS system. Interesting, however, is the proximity effect at temperatures much lower than T_CNS. In this case, the Cooper-pair amplitudes are not small and very long pair penetration lengths into the normal metal can be expected. Thus, we have observed pair penetration lengths. For these investigations very suitable specimens are commercial wires of one filament of NbTi or Nb embedded in a copper matrix. The reasons are the high transmission coefficient at the interface between the copper and the superconductor and the fact that the copper in these commercial wires is rather clean, with electron mean free paths between 5 to 10 μm long. In this paper, the magnetic properties of thick proximity systems in the range of temperatures between T_CNS and 5 × 10⁻⁴ T_CNS in both low and high magnetic fields are discussed

  13. Promoter proximal polyadenylation sites reduce transcription activity

    DEFF Research Database (Denmark)

    Andersen, Pia Kjølhede; Lykke-Andersen, Søren; Jensen, Torben Heick

    2012-01-01

    Gene expression relies on the functional communication between mRNA processing and transcription. We previously described the negative impact of a point-mutated splice donor (SD) site on transcription. Here we demonstrate that this mutation activates an upstream cryptic polyadenylation (CpA) site......, which in turn causes reduced transcription. Functional depletion of U1 snRNP in the context of the wild-type SD triggers the same CpA event accompanied by decreased RNA levels. Thus, in accordance with recent findings, U1 snRNP can shield premature pA sites. The negative impact of unshielded pA sites...... on transcription requires promoter proximity, as demonstrated using artificial constructs and supported by a genome-wide data set. Importantly, transcription down-regulation can be recapitulated in a gene context devoid of splice sites by placing a functional bona fide pA site/transcription terminator within ∼500...

  14. General Algorithm (High level)

    Indian Academy of Sciences (India)

    General Algorithm (High level). Iteratively. Use Tightness Property to remove points of P1,..,Pi. Use random sampling to get a Random Sample (of enough points) from the next largest cluster, Pi+1. Use the Random Sampling Procedure to approximate ci+1 using the ...

  15. Sound algorithms

    OpenAIRE

    De Götzen , Amalia; Mion , Luca; Tache , Olivier

    2007-01-01

    International audience; We call sound algorithms the categories of algorithms that deal with digital sound signals. Sound algorithms appeared in the very infancy of computers. Sound algorithms present strong specificities that are the consequence of two dual considerations: the properties of the digital sound signal itself and its uses, and the properties of auditory perception.

  16. Genetic algorithms

    Science.gov (United States)

    Wang, Lui; Bayer, Steven E.

    1991-01-01

    Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithms concepts are introduced, genetic algorithm applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.
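The basic concepts the abstract names (a population of candidate solutions, fitness-based selection, crossover, and mutation) can be sketched in a few lines. Everything below — the OneMax fitness, tournament selection, single-elite survival, and all parameter values — is a generic textbook illustration, not a method from the report:

```python
import random

def one_max(bits):
    """Toy fitness: number of ones in the bit string."""
    return sum(bits)

def genetic_algorithm(n_bits=12, pop_size=20, generations=60,
                      p_mut=0.05, seed=0):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    best = max(pop, key=one_max)
    for _ in range(generations):
        def select():                       # tournament selection, size 2
            a, b = rng.sample(pop, 2)
            return a if one_max(a) >= one_max(b) else b
        children = [best[:]]                # elitism: keep the best individual
        while len(children) < pop_size:
            p1, p2 = select(), select()
            cut = rng.randrange(1, n_bits)  # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [b ^ (rng.random() < p_mut) for b in child]  # mutation
            children.append(child)
        pop = children
        best = max(pop, key=one_max)
    return best

best = genetic_algorithm()
print(one_max(best))  # with elitism, the best fitness never decreases
```

Because the elite individual is copied into every new generation, the best fitness found is monotone non-decreasing over the run, which is the usual safeguard against losing good schemata to crossover and mutation.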

  17. Challenges of diagnosing acute HIV-1 subtype C infection in African women: performance of a clinical algorithm and the need for point-of-care nucleic-acid based testing.

    Directory of Open Access Journals (Sweden)

    Koleka Mlisana

    Full Text Available Prompt diagnosis of acute HIV infection (AHI) benefits the individual and provides opportunities for public health intervention. The aim of this study was to describe the most common signs and symptoms of AHI, correlate these with early disease progression and develop a clinical algorithm to identify acute HIV cases in resource-limited settings. 245 South African women at high risk of HIV-1 were assessed for AHI and received monthly HIV-1 antibody and RNA testing. Signs and symptoms at the first HIV-positive visit were compared to HIV-negative visits. Logistic regression identified clinical predictors of AHI. A model-based score was assigned to each predictor to create a risk score for every woman. Twenty-eight women seroconverted after a total of 390 person-years of follow-up, with an HIV incidence of 7.2/100 person-years (95% CI 4.5-9.8). Fifty-seven percent reported ≥1 sign or symptom at the AHI visit. Factors predictive of AHI included age <25 years (OR = 3.2; 1.4-7.1), rash (OR = 6.1; 2.4-15.4), sore throat (OR = 2.7; 1.0-7.6), weight loss (OR = 4.4; 1.5-13.4), genital ulcers (OR = 8.0; 1.6-39.5) and vaginal discharge (OR = 5.4; 1.6-18.4). A risk score of 2 correctly predicted AHI in 50.0% of cases. The number of signs and symptoms correlated with higher HIV-1 RNA at diagnosis (r = 0.63; p<0.001). Accurate recognition of signs and symptoms of AHI is critical for early diagnosis of HIV infection. Our algorithm may assist in risk-stratifying individuals for AHI, especially in resource-limited settings where there is no routine testing for AHI. Independent validation of the algorithm on another cohort is needed to assess its utility further. Point-of-care antigen or viral load technology is required, however, to detect asymptomatic, antibody-negative cases, enabling early interventions and prevention of transmission.
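To show the shape of such a clinical risk score, here is a deliberately simplified sketch. The predictor names follow the abstract, but the unit weights and the flag threshold are illustrative stand-ins; the study assigned model-based scores per predictor that are not reproduced here:

```python
# Hypothetical screening sketch: predictors are taken from the abstract,
# but the unit weights and the threshold of 2 risk points are illustrative
# stand-ins, not the published model-based scores.
PREDICTORS = ["age_under_25", "rash", "sore_throat",
              "weight_loss", "genital_ulcers", "vaginal_discharge"]

def risk_score(findings):
    """Count how many predictive signs/symptoms are present."""
    return sum(1 for p in PREDICTORS if findings.get(p))

def flag_for_ahi_testing(findings, threshold=2):
    """Flag a patient for confirmatory nucleic-acid-based AHI testing."""
    return risk_score(findings) >= threshold

patient = {"age_under_25": True, "rash": True, "sore_throat": False}
print(flag_for_ahi_testing(patient))  # → True
```

In practice each predictor would carry its model-derived weight, and the threshold would be chosen from the cohort's sensitivity/specificity trade-off rather than fixed a priori.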

  18. Sequential unconstrained minimization algorithms for constrained optimization

    International Nuclear Information System (INIS)

    Byrne, Charles

    2008-01-01

    The problem of minimizing a function f(x): R^J → R, subject to constraints on the vector variable x, occurs frequently in inverse problems. Even without constraints, finding a minimizer of f(x) may require iterative methods. We consider here a general class of iterative algorithms that find a solution to the constrained minimization problem as the limit of a sequence of vectors, each solving an unconstrained minimization problem. Our sequential unconstrained minimization algorithm (SUMMA) is an iterative procedure for constrained minimization. At the kth step we minimize the function G_k(x) = f(x) + g_k(x) to obtain x^k. The auxiliary functions g_k(x): D ⊂ R^J → R_+ are nonnegative on the set D, each x^k is assumed to lie within D, and the objective is to minimize the continuous function f: R^J → R over x in the set C = D̄, the closure of D. We assume that such minimizers exist, and denote one such by x̂. We assume that the functions g_k(x) satisfy the inequalities 0 ≤ g_k(x) ≤ G_{k-1}(x) − G_{k-1}(x^{k-1}), for k = 2, 3, .... Using this assumption, we show that the sequence {f(x^k)} is decreasing and converges to f(x̂). If the restriction of f(x) to D has bounded level sets, which happens if x̂ is unique and f(x) is closed, proper and convex, then the sequence {x^k} is bounded, and f(x*) = f(x̂) for any cluster point x*. Therefore, if x̂ is unique, x* = x̂ and {x^k} → x̂. When x̂ is not unique, convergence can still be obtained in particular cases. The SUMMA includes, as particular cases, the well-known barrier- and penalty-function methods, the simultaneous multiplicative algebraic reconstruction technique (SMART), the proximal minimization algorithm of Censor and Zenios, the entropic proximal methods of Teboulle, as well as certain cases of gradient descent and the Newton–Raphson method. The proof techniques used for SUMMA can be extended to obtain related results
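One concrete member of the SUMMA family named in the abstract is proximal minimization, with auxiliary function g_k(x) = ½(x − x^{k−1})². The sketch below applies it to the toy objective f(x) = (x − 3)² (my choice of example, not one from the paper); for this quadratic, each unconstrained subproblem has a closed-form minimizer:

```python
def summa_proximal(x0=0.0, steps=40):
    """SUMMA with g_k(x) = 0.5*(x - x_prev)^2 (proximal minimization),
    applied to f(x) = (x - 3)^2. Each step minimizes
    G_k(x) = (x - 3)^2 + 0.5*(x - x_prev)^2; the stationarity condition
    2*(x - 3) + (x - x_prev) = 0 gives x = (6 + x_prev) / 3."""
    x = x0
    history = [x]
    for _ in range(steps):
        x = (6.0 + x) / 3.0
        history.append(x)
    return history

hist = summa_proximal()
f = lambda x: (x - 3.0) ** 2
# f(x_k) is decreasing, as the SUMMA framework guarantees, and x_k -> 3
assert all(f(a) >= f(b) for a, b in zip(hist, hist[1:]))
print(round(hist[-1], 6))  # → 3.0
```

The update is a contraction with factor 1/3, so the iterates converge geometrically to the minimizer x̂ = 3, and the objective values f(x^k) decrease monotonically exactly as the SUMMA inequality 0 ≤ g_k(x) ≤ G_{k−1}(x) − G_{k−1}(x^{k−1}) predicts.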

  19. Equilibrium properties of proximity effect

    International Nuclear Information System (INIS)

    Esteve, D.; Pothier, H.; Gueron, S.; Birge, N.O.; Devoret, M.

    1996-01-01

    The proximity effect in diffusive normal-superconducting (NS) nano-structures is described by the Usadel equations for the electron pair correlations. We show that these equations obey a variational principle with a potential which generalizes the Ginzburg-Landau energy functional. We discuss simple examples of NS circuits using this formalism. In order to test the theoretical predictions of the Usadel equations, we have measured the density of states as a function of energy on a long N wire in contact with a S wire at one end, at different distances from the NS interface. (authors)

  20. Equilibrium properties of proximity effect

    Energy Technology Data Exchange (ETDEWEB)

    Esteve, D.; Pothier, H.; Gueron, S.; Birge, N.O.; Devoret, M.

    1996-12-31

    The proximity effect in diffusive normal-superconducting (NS) nano-structures is described by the Usadel equations for the electron pair correlations. We show that these equations obey a variational principle with a potential which generalizes the Ginzburg-Landau energy functional. We discuss simple examples of NS circuits using this formalism. In order to test the theoretical predictions of the Usadel equations, we have measured the density of states as a function of energy on a long N wire in contact with a S wire at one end, at different distances from the NS interface. (authors). 12 refs.

  1. An anatomical study of the proximal aspect of the medial femoral condyle to define the proximal-distal condylar length

    Directory of Open Access Journals (Sweden)

    Chia-Ming Chang

    2017-01-01

    Full Text Available Objective: Despite its possible role in knee arthroplasty, the proximal-distal condylar length (PDCL) of the femur has never been reported in the literature. We conducted an anatomic study of the proximal aspect of the medial femoral condyle to propose a method for measuring the PDCL. Materials and Methods: Inspection of dried bone specimens was carried out to assure the most proximal condylar margin (MPCM) as the eligible starting point to measure the PDCL. Simulation surgery was performed on seven pairs of cadaveric knees to verify the clinical application of measuring the PDCL after locating the MPCM. Interobserver reliability of this procedure was also analyzed. Results: Observation of the bone specimens showed that the MPCM is a concavity formed by the junction of the distal end of the supracondylar ridge and the proximal margin of the medial condyle. This anatomically distinctive structure made the MPCM an unambiguous landmark. The cadaveric simulation surgical dissection demonstrated that the MPCM is easily accessed in a surgical setting, making the measurement of the PDCL plausible. The intraclass correlation coefficient was 0.78, indicating good interobserver reliability for this technique. Conclusion: This study has suggested that the PDCL can be measured based on the MPCM in a surgical setting. PDCL measurement might be useful in joint line position management, selection of femoral component sizes, and other applications related to the proximal-distal dimension of the knee. Further investigation is required.

  2. Algorithmic cryptanalysis

    CERN Document Server

    Joux, Antoine

    2009-01-01

    Illustrating the power of algorithms, Algorithmic Cryptanalysis describes algorithmic methods with cryptographically relevant examples. Focusing on both private- and public-key cryptographic algorithms, it presents each algorithm either as a textual description, in pseudo-code, or in a C code program.Divided into three parts, the book begins with a short introduction to cryptography and a background chapter on elementary number theory and algebra. It then moves on to algorithms, with each chapter in this section dedicated to a single topic and often illustrated with simple cryptographic applic

  3. Algorithmic mathematics

    CERN Document Server

    Hougardy, Stefan

    2016-01-01

    Algorithms play an increasingly important role in nearly all fields of mathematics. This book allows readers to develop basic mathematical abilities, in particular those concerning the design and analysis of algorithms as well as their implementation. It presents not only fundamental algorithms like the sieve of Eratosthenes, the Euclidean algorithm, sorting algorithms, algorithms on graphs, and Gaussian elimination, but also discusses elementary data structures, basic graph theory, and numerical questions. In addition, it provides an introduction to programming and demonstrates in detail how to implement algorithms in C++. This textbook is suitable for students who are new to the subject and covers a basic mathematical lecture course, complementing traditional courses on analysis and linear algebra. Both authors have given this "Algorithmic Mathematics" course at the University of Bonn several times in recent years.

  4. Where genetic algorithms excel.

    Science.gov (United States)

    Baum, E B; Boneh, D; Garrett, C

    2001-01-01

    We analyze the performance of a genetic algorithm (GA) we call Culling, and a variety of other algorithms, on a problem we refer to as the Additive Search Problem (ASP). We show that the problem of learning the Ising perceptron is reducible to a noisy version of ASP. Noisy ASP is the first problem we are aware of where a genetic-type algorithm bests all known competitors. We generalize ASP to k-ASP to study whether GAs will achieve "implicit parallelism" in a problem with many more schemata. GAs fail to achieve this implicit parallelism, but we describe an algorithm we call Explicitly Parallel Search that succeeds. We also compute the optimal culling point for selective breeding, which turns out to be independent of the fitness function or the population distribution. We also analyze a mean field theoretic algorithm performing similarly to Culling on many problems. These results provide insight into when and how GAs can beat competing methods.

  5. A novel electronic algorithm using host biomarker point-of-care tests for the management of febrile illnesses in Tanzanian children (e-POCT): A randomized, controlled non-inferiority trial.

    Directory of Open Access Journals (Sweden)

    Kristina Keitel

    2017-10-01

    Full Text Available The management of childhood infections remains inadequate in resource-limited countries, resulting in high mortality and irrational use of antimicrobials. Current disease management tools, such as the Integrated Management of Childhood Illness (IMCI) algorithm, rely solely on clinical signs and have not made use of available point-of-care tests (POCTs) that can help to identify children with severe infections and children in need of antibiotic treatment. e-POCT is a novel electronic algorithm based on current evidence; it guides clinicians through the entire consultation and recommends treatment based on a few clinical signs and POCT results, some performed in all patients (malaria rapid diagnostic test, hemoglobin, oximeter) and others in selected subgroups only (C-reactive protein, procalcitonin, glucometer). The objective of this trial was to determine whether the clinical outcome of febrile children managed by the e-POCT tool was non-inferior to that of febrile children managed by a validated electronic algorithm derived from IMCI (ALMANACH), while reducing the proportion with antibiotic prescription. We performed a randomized (at patient level, blocks of 4), controlled non-inferiority study among children aged 2-59 months presenting with acute febrile illness to 9 outpatient clinics in Dar es Salaam, Tanzania. In parallel, routine care was documented in 2 health centers. The primary outcome was the proportion of clinical failures (development of severe symptoms, clinical pneumonia on/after day 3, or persistent symptoms at day 7) by day 7 of follow-up. Non-inferiority would be declared if the proportion of clinical failures with e-POCT was no worse than the proportion of clinical failures with ALMANACH, within statistical variability, by a margin of 3%. The secondary outcomes included the proportion with antibiotics prescribed on day 0, primary referrals, and severe adverse events by day 30 (secondary hospitalizations and deaths). We enrolled 3

  6. Total algorithms

    NARCIS (Netherlands)

    Tel, G.

    We define the notion of total algorithms for networks of processes. A total algorithm enforces that a "decision" is taken by a subset of the processes, and that participation of all processes is required to reach this decision. Total algorithms are an important building block in the design of

  7. Modular endoprosthetic replacement for metastatic tumours of the proximal femur

    Directory of Open Access Journals (Sweden)

    Carter Simon R

    2008-11-01

    Full Text Available Abstract Background and aims Endoprosthetic replacements of the proximal femur are commonly required to treat destructive metastases with either impending or actual pathological fractures at this site. Modular prostheses provide off-the-shelf availability and can be adapted to most reconstructive situations for proximal femoral replacements. The aim of this study was to assess the clinical and functional outcomes following modular tumour prosthesis reconstruction of the proximal femur in 100 consecutive patients with metastatic tumours and to compare them with the published results of patients with modular and custom-made endoprosthetic replacements. Methods 100 consecutive patients who underwent modular tumour prosthetic reconstruction of the proximal femur for metastases using the METS system from 2001 to 2007 were studied. The patient, tumour and treatment factors in relation to overall survival, local control, implant survival and complications were analysed. Functional scores were obtained from surviving patients. Results and conclusion There were 45 male and 55 female patients. The mean age was 60.2 years. The indications were metastases. Seventy-five patients presented with pathological fracture or with failed fixation and 25 patients were at a high risk of developing a fracture. The mean follow-up was 15.9 months [range 0–77]. Three patients died within 2 weeks following surgery. 69 patients have died and 31 are alive. Of the 69 patients who died, 68 did not need revision surgery, indicating that the implant provided single definitive treatment which outlived the patient. There were three dislocations (2/5 with THR and 1/95 with unipolar femoral heads). 6 patients had deep infections. The estimated five-year implant survival (Kaplan-Meier analysis) was 83.1% with revision as end point. The mean TESS score was 64% (54%–82%). We conclude that the METS modular tumour prosthesis for the proximal femur provides versatility; low implant related

  8. Stability and the proximity theorem in Casimir actuated nano devices

    Science.gov (United States)

    Esquivel-Sirvent, R.; Reyes, L.; Bárcenas, J.

    2006-10-01

    A brief description of the stability problem in micro and nano electromechanical devices (MEMS/NEMS) actuated by Casimir forces is given. To enhance the stability, we propose the use of curved surfaces and recalculate the stability conditions by means of the proximity force approximation. The use of curved surfaces changes the bifurcation point, and the radius of curvature becomes a control parameter, allowing a rescaling of the elastic restitution constant and/or of the typical dimensions of the device.

  9. Realities of proximity facility siting

    International Nuclear Information System (INIS)

    DeMott, D.L.

    1981-01-01

Numerous commercial nuclear power plant sites have 2 to 3 reactors located together, and a group of facilities with capabilities for fuel fabrication, a nuclear reactor, a storage area for spent fuel, and a maintenance area for contaminated equipment and radioactive waste storage are being designed and constructed in the US. The proximity of these facilities to each other ensures that the ordinary flow of materials remains within a limited area. Interactions between the various facilities include shared resources such as communication, fire protection, security, medical services, transportation, water, electrical, personnel, emergency planning, transport of hazardous material between facilities, and common safety and radiological requirements between facilities. This paper explores the advantages and disadvantages of multiple facilities at one site. Problem areas are identified, and recommendations for planning and coordination are discussed.

  10. Hybrid external fixation of the proximal tibia: strategies to improve frame stability.

    Science.gov (United States)

    Roberts, Craig S; Dodds, James C; Perry, Kelvin; Beck, Dennis; Seligson, David; Voor, Michael J

    2003-07-01

OBJECTIVE: To determine the specific frame construction strategies that can increase the stability of hybrid (ring with tensioned wires proximally connected by bars to half-pins distally) external fixation of proximal tibia fractures. DESIGN: Repeated measures biomechanical testing. SETTING: Laboratory. SPECIMENS: Composite fiberglass tibias. METHODS: Using the Heidelberg and Ilizarov systems, external fixators were tested on composite fiberglass tibias with a 1-cm proximal osteotomy (OTA fracture classification 41-A3.3) in seven frame configurations: unilateral frames with 5-mm diameter half-pins and 6-mm diameter half-pins; hybrid (as described above), with and without a 6-mm anterior proximal half-pin; a "box" hybrid (additional ring group distal to the fracture connected by symmetrically spaced bars to the proximal rings) with and without an anterior, proximal half-pin; and a full, four-ring configuration. Each configuration was loaded in four positions (central, medial, posterior, and posteromedial). MAIN OUTCOME MEASUREMENT: Displacement at the point of loading of the proximal fragment. RESULTS: The "box" hybrid was stiffer than the standard hybrid for all loading positions. The addition of an anterior half-pin stiffened the standard hybrid and the "box" hybrid. CONCLUSION: The most dramatic improvements in the stability of hybrid frames used for proximal tibial fractures result from the addition of an anterior, proximal half-pin.

  11. Proximal supination osteotomy of the first metatarsal for hallux valgus.

    Science.gov (United States)

    Yasuda, Toshito; Okuda, Ryuzo; Jotoku, Tsuyoshi; Shima, Hiroaki; Hida, Takashi; Neo, Masashi

    2015-06-01

Risk factors for hallux valgus recurrence include a postoperative round-shaped lateral edge of the first metatarsal head and postoperative incomplete reduction of the sesamoids. To prevent the occurrence of such conditions, we developed a proximal supination osteotomy of the first metatarsal. Our aim was to describe this novel technique and report its outcomes. Sixty-six patients (83 feet) underwent a distal soft tissue procedure combined with a proximal supination osteotomy. After the proximal crescentic osteotomy, the proximal fragment was pushed medially, the distal fragment was abducted, and the distal fragment of the first metatarsal was then manually supinated. Outcomes were assessed using the American Orthopaedic Foot & Ankle Society (AOFAS) score and radiographic examinations. The average follow-up duration was 34 (range, 25 to 52) months. The mean AOFAS score improved significantly from 58.0 points preoperatively to 93.8 points postoperatively. The hallux valgus and intermetatarsal angles decreased significantly from 38.6 and 18.0 degrees preoperatively to 11.0 and 7.9 degrees postoperatively, respectively. Recurrent hallux valgus was defined as a hallux valgus angle ≥ 25 degrees. The rates of occurrence of a positive round sign and of incomplete reduction of the sesamoids decreased significantly postoperatively, which may have contributed to the low hallux valgus recurrence rate. We conclude that a proximal supination osteotomy was an effective procedure for correction of hallux valgus and can achieve a low rate of hallux valgus recurrence. Level IV, retrospective case series. © The Author(s) 2015.

  12. comparative proximate composition and antioxidant vitamins

    African Journals Online (AJOL)

    DR. AMINU

    Keywords: Comparative, proximate composition, antioxidant vitamins, honey. INTRODUCTION ... solution of inverted sugars and complex mixture of other saccharides ... enzymatic browning in apple slices and grape juice. (Khan, 1985).

  13. Proximate, Mineral and Phytochemical Composition of Dioscorea ...

    African Journals Online (AJOL)

    ADOWIE PERE

    Keywords: Dioscorea dumetorum, proximate composition, mineral analysis, phytochemical screening ... were analyzed using atomic absorption ... determined using a Hack Dr/200 Spectrophotometer. ... Lead Acetate. +. +. + .... cosmetics.

  14. Proximate composition and antinutrient content of pumpkin ...

    African Journals Online (AJOL)

    Proximate composition and antinutrient content of pumpkin ( Cucurbita pepo ) and sorghum ( Sorghum bicolor ) flour blends fermented with Lactobacillus plantarum , Aspergillus niger and Bacillus subtilis.

  15. Tipping Point

    Medline Plus

    Full Text Available ... en español Blog About OnSafety CPSC Stands for Safety The Tipping Point Home > 60 Seconds of Safety (Videos) > The Tipping Point The Tipping Point by ... danger death electrical fall furniture head injury product safety television tipover tv Watch the video in Adobe ...

  16. Transverse and Longitudinal proximity effect

    Science.gov (United States)

    Jalan, Pryianka; Chand, Hum; Srianand, Raghunathan

    2018-04-01

With close pairs (~1.5 arcmin) of quasars (QSOs), absorption in the spectrum of a background quasar in the vicinity of a foreground quasar can be used to study the environment of the latter at kpc-Mpc scales. For this we used a sample of 205 quasar pairs from the Sloan Digital Sky Survey Data Release 12 (SDSS DR12) in the redshift range 2.5 to 3.5, studying their H I Ly-α absorption. We study the environment of QSOs in both the longitudinal and the transverse directions by carrying out a statistical comparison of the Ly-α absorption lines in the quasar vicinity with the absorption lines caused by the inter-galactic medium (IGM). This comparison was done with IGM regions matched in absorption redshift and signal-to-noise ratio (SNR) to the proximity region. In contrast to the measurements along the line of sight, the regions transverse to the quasars exhibit enhanced H I Ly-α absorption. This discrepancy can be interpreted either as due to anisotropic emission from the quasars or as a consequence of their finite lifetime.

  17. Space Network Time Distribution and Synchronization Protocol Development for Mars Proximity Link

    Science.gov (United States)

    Woo, Simon S.; Gao, Jay L.; Mills, David

    2010-01-01

Time distribution and synchronization in deep space networks are challenging due to long propagation delays, spacecraft movements, and relativistic effects. Further, the Network Time Protocol (NTP), designed for terrestrial networks, may not work properly in space. In this work, we consider a time distribution protocol based on time message exchanges similar to NTP. We present the Proximity-1 Space Link Interleaved Time Synchronization (PITS) algorithm, which works with the CCSDS Proximity-1 Space Data Link Protocol. The PITS algorithm provides faster time synchronization via two-way time transfer over proximity links, improves scalability as the number of spacecraft increases, lowers the storage space requirement for collecting time samples, and is robust against packet loss and duplication through the underlying protocol mechanisms.
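As a rough illustration of the two-way time-transfer arithmetic the abstract refers to, the following sketch shows the generic NTP-style four-timestamp calculation (this is not the PITS protocol itself; the function name and the sample timestamps are made up for the example):

```python
def ntp_offset_delay(t1, t2, t3, t4):
    """Classic two-way time-transfer estimates used by NTP-style protocols:
    t1 = client send, t4 = client receive (client clock);
    t2 = server receive, t3 = server send (server clock)."""
    offset = ((t2 - t1) + (t3 - t4)) / 2.0   # server clock minus client clock
    delay = (t4 - t1) - (t3 - t2)            # round-trip network delay
    return offset, delay

# Example: client clock runs 5.0 s behind the server; 0.1 s one-way delay.
t1 = 100.0   # client send (client clock)
t2 = 105.1   # server receive (server clock) = 100.0 + 0.1 + 5.0
t3 = 105.2   # server send (server clock)
t4 = 100.3   # client receive (client clock) = t3 - 5.0 + 0.1
offset, delay = ntp_offset_delay(t1, t2, t3, t4)
```

With symmetric path delays the offset estimate is exact; asymmetry shows up as error bounded by half the delay, which is one reason proximity-link transfers (short, fairly symmetric paths) synchronize faster than deep-space ones.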

  18. Proximal Participation: A Pathway into Work

    Science.gov (United States)

    Chan, Selena

    2013-01-01

    In a longitudinal case study of apprentices, the term proximal participation was coined to describe the entry process of young people, with unclear career destinations, into the trade of baking. This article unravels the significance of proximal participation in the decision-making processes of young people who enter a trade through initial…

  19. Bimalleolar ankle fracture with proximal fibular fracture

    NARCIS (Netherlands)

    Colenbrander, R. J.; Struijs, P. A. A.; Ultee, J. M.

    2005-01-01

    A 56-year-old female patient suffered a bimalleolar ankle fracture with an additional proximal fibular fracture. This is an unusual fracture type, seldom reported in literature. It was operatively treated by open reduction and internal fixation of the lateral malleolar fracture. The proximal fibular

  20. Novel implant for peri-prosthetic proximal tibia fractures.

    Science.gov (United States)

    Tran, Ton; Chen, Bernard K; Wu, Xinhua; Pun, Chung Lun

    2018-03-01

Repair of peri-prosthetic proximal tibia fractures is very challenging in patients with a total knee replacement or arthroplasty. The tibial component of the knee implant severely restricts the fixation points of the tibial implant used to repair peri-prosthetic fractures. A novel implant has been designed with an extended flange over the anterior tibial condyle to provide additional points of fixation, overcoming limitations of existing generic locking plates used for proximal tibia fractures. Furthermore, the screws fixed through the extended flange provide additional support to prevent subsidence of the tibial component of the knee implant. The design methodology involved extraction of bone data from CT scans into a flexible CAD format, implant design, structural evaluation and optimisation using FEM, as well as prototype development and manufacture by selective laser melting 3D printing technology with Ti6Al4V powder. A prototype tibia implant was developed based on a patient-specific bone structure, which was regenerated from the CT images of the patient's tibia. The design is described in detail and fits up to 80% of patients, for both left and right sides, based on the average dimensions and shape of the bone structure from a wide range of CT images. A novel tibial implant has been developed to repair peri-prosthetic proximal tibia fractures which overcomes significant constraints from the tibial component of existing knee implants. Copyright © 2018 Elsevier Ltd. All rights reserved.

  1. SU-F-T-15: Evaluation of 192Ir, 60Co and 169Yb Sources for High Dose Rate Prostate Brachytherapy Inverse Planning Using An Interior Point Constraint Generation Algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Mok Tsze Chung, E; Aleman, D [University of Toronto, Toronto, Ontario (Canada); Safigholi, H; Nicolae, A; Davidson, M; Ravi, A; Song, W [Odette Cancer Centre, Sunnybrook Health Sciences Centre, Toronto, Ontario (Canada)

    2016-06-15

Purpose: The effectiveness of using a combination of three sources, {sup 60}Co, {sup 192}Ir and {sup 169}Yb, is analyzed. Different combinations are compared against a single {sup 192}Ir source on prostate cancer cases. A novel inverse planning interior point algorithm is developed in-house to generate the treatment plans. Methods: Thirteen prostate cancer patients are separated into two groups: Group A includes eight patients with the prostate as target volume, while group B consists of four patients with a boost nodule inside the prostate that is assigned 150% of the prescription dose. The mean target volume is 35.7±9.3cc and 30.6±8.5cc for groups A and B, respectively. All patients are treated with each source individually, then with paired sources, and finally with all three sources. To compare the results, boost volume V150 and D90, urethra Dmax and D10, and rectum Dmax and V80 are evaluated. For fair comparison, all plans are normalized to a uniform V100=100. Results: Overall, double- and triple-source plans were better than single-source plans. The triple-source plans resulted in an average decrease of 21.7% and 1.5% in urethra Dmax and D10, respectively, and 8.0% and 0.8% in rectum Dmax and V80, respectively, for group A. For group B, boost volume V150 and D90 increased by 4.7% and 3.0%, respectively, while keeping similar dose delivered to the urethra and rectum. {sup 60}Co and {sup 192}Ir produced better plans than their counterparts in the double-source category, whereas {sup 60}Co produced more favorable results than the remaining individual sources. Conclusion: This study demonstrates the potential advantage of using a combination of two or three sources, reflected in dose reduction to organs-at-risk and more conformal dose to the target. Our results show that {sup 60}Co, {sup 192}Ir and {sup 169}Yb produce the best plans when used simultaneously and

  2. Streaming Algorithms for Line Simplification

    DEFF Research Database (Denmark)

    Abam, Mohammad; de Berg, Mark; Hachenberger, Peter

    2010-01-01

this problem in a streaming setting, where we only have a limited amount of storage, so that we cannot store all the points. We analyze the competitive ratio of our algorithms, allowing resource augmentation: we let our algorithm maintain a simplification with 2k (internal) points and compare the error of our simplification to the error of the optimal simplification with k points. We obtain algorithms with O(1) competitive ratio for three cases: convex paths, where the error is measured using the Hausdorff distance (or Fréchet distance); xy-monotone paths, where the error is measured using the Hausdorff distance (or Fréchet distance); and general paths, where the error is measured using the Fréchet distance. In the first case the algorithm needs O(k) additional storage, and in the latter two cases the algorithm needs O(k^2) additional storage.
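The error measure being compared can be illustrated with a brute-force sketch: the Hausdorff-style error of a simplification is how far any original vertex lies from the simplified polyline. This is only an illustrative checker (helper names and the sample path are made up), not the paper's streaming algorithm:

```python
import math

def point_segment_dist(p, a, b):
    """Euclidean distance from point p to the segment ab."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    t = ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)
    t = max(0.0, min(1.0, t))                 # clamp to the segment
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def simplification_error(path, simplified):
    """Furthest any original vertex lies from the simplified polyline."""
    return max(
        min(point_segment_dist(p, simplified[i], simplified[i + 1])
            for i in range(len(simplified) - 1))
        for p in path
    )

path = [(0, 0), (1, 1), (2, 0), (3, 1), (4, 0)]
simplified = [(0, 0), (4, 0)]                  # keep only the endpoints
err = simplification_error(path, simplified)   # zig-zag peaks stick out by 1
```

A streaming algorithm cannot afford this全-path comparison, which is exactly why the competitive-ratio analysis against the optimal k-point simplification is needed.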

  3. Online co-regularized algorithms

    NARCIS (Netherlands)

    Ruijter, T. de; Tsivtsivadze, E.; Heskes, T.

    2012-01-01

We propose an online co-regularized learning algorithm for classification and regression tasks. We demonstrate that by sequentially co-regularizing prediction functions on unlabeled data points, our algorithm provides improved performance in comparison to supervised methods on several UCI benchmarks.

  4. A unified framework of descent algorithms for nonlinear programs and variational inequalities

    International Nuclear Information System (INIS)

    Patriksson, M.

    1993-01-01

We present a framework of algorithms for the solution of continuous optimization and variational inequality problems. In the general algorithm, a search direction is found by solving an auxiliary problem obtained by replacing the original cost function with an approximating monotone cost function. The proposed framework encompasses algorithm classes presented earlier by Cohen, Dafermos, Migdalas, and Tseng, and includes numerous descent and successive approximation type methods, such as Newton methods, Jacobi and Gauss-Seidel type decomposition methods for problems defined over Cartesian product sets, and proximal point methods, among others. The auxiliary problem of the general algorithm also induces equivalent optimization reformulations and descent methods for asymmetric variational inequalities. We study the convergence properties of the general algorithm when applied to unconstrained optimization, nondifferentiable optimization, constrained differentiable optimization, and variational inequalities; the emphasis of the convergence analyses is placed on basic convergence results, convergence using different line search strategies and truncated subproblem solutions, and convergence rate results. This analysis offers a unification of known results; moreover, it provides strengthenings of convergence results for many existing algorithms and indicates possible improvements of their realizations. 482 refs
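Since proximal point methods are among the classes this framework covers, a minimal sketch of the basic iteration x_{k+1} = argmin_x f(x) + (1/(2λ))‖x − x_k‖² may help. It is specialized to the scalar case f(x) = |x|, whose proximal map is soft-thresholding; the function names, starting point, and step size are illustrative, not taken from the record:

```python
def prox_abs(z, lam):
    """Proximal operator of f(x) = |x|: soft-thresholding,
    i.e. argmin_x |x| + (1/(2*lam)) * (x - z)**2."""
    if z > lam:
        return z - lam
    if z < -lam:
        return z + lam
    return 0.0

def proximal_point(prox, x0, lam=0.5, iters=20):
    """Basic proximal point iteration x_{k+1} = prox_{lam*f}(x_k)."""
    x = x0
    for _ in range(iters):
        x = prox(x, lam)
    return x

x_star = proximal_point(prox_abs, x0=3.0, lam=0.5, iters=20)
# The iterates shrink toward 0, the minimizer of |x|.
```

Each step solves a strongly convex auxiliary problem, which is the pattern the framework generalizes: replace the original cost with a better-behaved approximation, solve, repeat.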

  5. FRAMEWORK FOR COMPARING SEGMENTATION ALGORITHMS

    Directory of Open Access Journals (Sweden)

    G. Sithole

    2015-05-01

Full Text Available The notion of a ‘Best’ segmentation does not exist. A segmentation algorithm is chosen based on the features it yields, the properties of the segments (point sets) it generates, and the complexity of its algorithm. The segmentation is then assessed based on a variety of metrics such as homogeneity, heterogeneity, fragmentation, etc. Even after an algorithm is chosen its performance is still uncertain, because the landscape/scenarios represented in a point cloud have a strong influence on the eventual segmentation. Thus selecting an appropriate segmentation algorithm is a process of trial and error. Automating the selection of segmentation algorithms and their parameters first requires methods to evaluate segmentations. Three common approaches for evaluating segmentation algorithms are ‘goodness methods’, ‘discrepancy methods’ and ‘benchmarks’. Benchmarks are considered the most comprehensive method of evaluation. In this paper, shortcomings in current benchmark methods are identified, and a framework is proposed that permits both a visual and a numerical evaluation of segmentations for different algorithms, algorithm parameters and evaluation metrics. The concept of the framework is demonstrated on a real point cloud. Current results are promising and suggest that it can be used to predict the performance of segmentation algorithms.

  6. Uncemented allograft-prosthetic composite reconstruction of the proximal femur

    Directory of Open Access Journals (Sweden)

    Li Min

    2014-01-01

Full Text Available Background: Allograft-prosthetic composites can be divided into three groups, namely cemented, uncemented, and partially cemented. Previous studies have mainly reported outcomes in cemented and partially cemented allograft-prosthetic composites, but have rarely focused on the uncemented allograft-prosthetic composites. The objectives of our study were to describe a surgical technique for using a proximal femoral uncemented allograft-prosthetic composite and to present the radiographic and clinical results. Materials and Methods: Twelve patients who underwent uncemented allograft-prosthetic composite reconstruction of the proximal femur after bone tumor resection were retrospectively evaluated at an average followup of 24.0 months. Clinical records and radiographs were evaluated. Results: In our series, union occurred in all the patients (100%; range, 5-9 months). Until the most recent followup, there were no cases of infection, nonunion of the greater trochanter, junctional bone resorption, dislocation, allergic reaction, wear of the acetabulum socket, recurrence, or metastasis. But there were three periprosthetic fractures, which were fixed using cerclage wire during surgery. Five cases had bone resorption in and around the greater trochanter. The average Musculoskeletal Tumor Society (MSTS) score and Harris hip score (HHS) were 26.2 points (range 24-29 points) and 80.6 points (range 66.2-92.7 points), respectively. Conclusions: These results showed that an uncemented allograft-prosthetic composite could promote bone union through compression at the host-allograft junction and is a good choice for proximal femoral resection. Although this technology has its own merits, long term outcomes are not yet validated.

  7. Proximal Alternating Direction Method with Relaxed Proximal Parameters for the Least Squares Covariance Adjustment Problem

    Directory of Open Access Journals (Sweden)

    Minghua Xu

    2014-01-01

Full Text Available We consider the problem of seeking a symmetric positive semidefinite matrix in a closed convex set to approximate a given matrix. This problem may arise in several areas of numerical linear algebra or come from the finance industry or statistics, and thus has many applications. For solving this class of matrix optimization problems, many methods have been proposed in the literature. The proximal alternating direction method is one of those methods which can be easily applied to solve these matrix optimization problems. Generally, the proximal parameters of the proximal alternating direction method are greater than zero. In this paper, we conclude that the restriction on the proximal parameters can be relaxed for solving this kind of matrix optimization problem. Numerical experiments also show that the proximal alternating direction method with relaxed proximal parameters is convergent and generally has a better performance than the classical proximal alternating direction method.
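For intuition about the underlying approximation problem: when the closed convex set is the entire PSD cone, the Frobenius-norm projection has a well-known closed form via eigenvalue clipping. The sketch below shows only that special case (it is not the paper's proximal alternating direction method, and the function name is made up):

```python
import numpy as np

def nearest_psd(A):
    """Project a (nearly) symmetric matrix onto the PSD cone in the
    Frobenius norm: symmetrize, then clip negative eigenvalues to zero."""
    A = (A + A.T) / 2.0                      # enforce symmetry first
    w, V = np.linalg.eigh(A)                 # eigendecomposition of symmetric A
    w_clipped = np.clip(w, 0.0, None)        # drop the negative spectrum
    return V @ np.diag(w_clipped) @ V.T

A = np.array([[1.0, 2.0],
              [2.0, 1.0]])                   # eigenvalues 3 and -1: not PSD
P = nearest_psd(A)                           # nearest PSD matrix to A
```

Additional convex constraints (e.g. fixed diagonal, as in covariance adjustment) break this closed form, which is what motivates splitting schemes such as the proximal alternating direction method.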

  8. Fast algorithm for Morphological Filters

    International Nuclear Information System (INIS)

    Lou Shan; Jiang Xiangqian; Scott, Paul J

    2011-01-01

In surface metrology, morphological filters, which evolved from the envelope filtering system (E-system), work well for functional prediction of surface finish in the analysis of surfaces in contact. The naive algorithms are time consuming, especially for areal data, and are not generally adopted in practice. A fast algorithm is proposed based on the alpha shape. The hull obtained by rolling the alpha ball is equivalent to the morphological opening/closing in theory. The algorithm depends on Delaunay triangulation, with time complexity O(n log n). In comparison to the naive algorithms, it generates the opening and closing envelopes without combining dilation and erosion. Edge distortion is corrected by reflective padding for open profiles/surfaces. Spikes in the sample data are detected and points interpolated to prevent singularities. The proposed algorithm works well for both morphological profile and areal filters. Examples are presented to demonstrate the validity and superiority in efficiency of this algorithm over the naive algorithm.
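For contrast with the fast alpha-shape approach, the naive closing of a 1D profile (dilation followed by erosion) can be sketched as follows. A flat structuring element stands in for the ball of the record (an assumption for simplicity), and the O(n·w) cost per profile is what motivates the faster algorithm:

```python
def dilate(profile, half_width):
    """Naive flat-element dilation: max over a sliding window."""
    n = len(profile)
    return [max(profile[max(0, i - half_width):min(n, i + half_width + 1)])
            for i in range(n)]

def erode(profile, half_width):
    """Naive flat-element erosion: min over a sliding window."""
    n = len(profile)
    return [min(profile[max(0, i - half_width):min(n, i + half_width + 1)])
            for i in range(n)]

def closing(profile, half_width):
    """Morphological closing = dilation then erosion.  The closing
    envelope bridges valleys narrower than the structuring element,
    mimicking how a contacting surface would ride over them."""
    return erode(dilate(profile, half_width), half_width)

surface = [0, 0, 0, -3, 0, 0, 0]   # a narrow pit in an otherwise flat profile
closed = closing(surface, 1)       # the pit is filled by the closing envelope
```

The alpha-shape method of the record reaches the same envelope directly from the Delaunay triangulation instead of composing the two window passes.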

  9. A Parallel Butterfly Algorithm

    KAUST Repository

    Poulson, Jack; Demanet, Laurent; Maxwell, Nicholas; Ying, Lexing

    2014-01-01

    The butterfly algorithm is a fast algorithm which approximately evaluates a discrete analogue of the integral transform (Equation Presented.) at large numbers of target points when the kernel, K(x, y), is approximately low-rank when restricted to subdomains satisfying a certain simple geometric condition. In d dimensions with O(Nd) quasi-uniformly distributed source and target points, when each appropriate submatrix of K is approximately rank-r, the running time of the algorithm is at most O(r2Nd logN). A parallelization of the butterfly algorithm is introduced which, assuming a message latency of α and per-process inverse bandwidth of β, executes in at most (Equation Presented.) time using p processes. This parallel algorithm was then instantiated in the form of the open-source DistButterfly library for the special case where K(x, y) = exp(iΦ(x, y)), where Φ(x, y) is a black-box, sufficiently smooth, real-valued phase function. Experiments on Blue Gene/Q demonstrate impressive strong-scaling results for important classes of phase functions. Using quasi-uniform sources, hyperbolic Radon transforms, and an analogue of a three-dimensional generalized Radon transform were, respectively, observed to strong-scale from 1-node/16-cores up to 1024-nodes/16,384-cores with greater than 90% and 82% efficiency, respectively. © 2014 Society for Industrial and Applied Mathematics.


  11. Fixed Points

    Indian Academy of Sciences (India)

Resonance – Journal of Science Education; Volume 5; Issue 5. Fixed Points - From Russia with Love - A Primer of Fixed Point Theory. A K Vijaykumar. Book Review, Volume 5, Issue 5, May 2000, pp 101-102.


  14. Proximal Hamstring Tendinosis and Partial Ruptures.

    Science.gov (United States)

    Startzman, Ashley N; Fowler, Oliver; Carreira, Dominic

    2017-07-01

    Proximal hamstring tendinosis and partial hamstring origin ruptures are painful conditions of the proximal thigh and hip that may occur in the acute, chronic, or acute on chronic setting. Few publications exist related to their diagnosis and management. This systematic review discusses the incidence, treatment, and prognosis of proximal hamstring tendinosis and partial hamstring ruptures. Conservative treatment measures include nonsteroidal anti-inflammatory drugs, physical therapy, rest, and ice. If these measures fail, platelet-rich plasma or shockwave therapy may be considered. When refractory to conservative management, these injuries may be treated with surgical debridement and hamstring reattachment. [Orthopedics. 2017; 40(4):e574-e582.]. Copyright 2017, SLACK Incorporated.

  15. An Approximate Proximal Bundle Method to Minimize a Class of Maximum Eigenvalue Functions

    Directory of Open Access Journals (Sweden)

    Wei Wang

    2014-01-01

Full Text Available We present an approximate nonsmooth algorithm to solve a minimization problem in which the objective function is the sum of a maximum eigenvalue function of matrices and a convex function. The essential idea is similar to that of the proximal bundle method, but with the difference that we use approximate subgradients and function values to construct an approximate cutting-plane model of the problem above. An important advantage of the approximate cutting-plane model of the objective function is that it is more stable than the exact cutting-plane model. In addition, an approximate proximal bundle algorithm is given. Furthermore, the sequence generated by the algorithm converges to the optimal solution of the original problem.
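The (exact) cutting-plane model that bundle methods build can be sketched for a scalar convex function; the helper names and the sample cuts are illustrative, and the record's method replaces the exact values and subgradients used here with approximate ones:

```python
def cutting_plane_model(cuts, x):
    """Lower (cutting-plane) model of a convex function f built from
    (x_i, f(x_i), g_i) triples, where g_i is a subgradient at x_i:
        m(x) = max_i [ f(x_i) + g_i * (x - x_i) ].
    With exact subgradients the model underestimates f everywhere."""
    return max(fx + g * (x - xi) for xi, fx, g in cuts)

f = lambda x: x * x                         # f(x) = x^2, gradient 2x
cuts = [(xi, f(xi), 2 * xi) for xi in (-1.0, 0.5, 2.0)]
m = cutting_plane_model(cuts, 1.0)          # piecewise-linear lower bound at x = 1
```

A proximal bundle step then minimizes m(x) plus a proximal term centered at the current stability center, adding a new cut each iteration.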

  16. RFID Location Algorithm

    Directory of Open Access Journals (Sweden)

    Wang Zi Min

    2016-01-01

Full Text Available With the development of social services, rising living standards place further demands on positioning technology, and there is an urgent need for approaches that can adapt to complex new situations. In recent years, RFID technology has found a wide range of applications in everyday life and production, such as logistics tracking, car alarms, and the security of items. Using RFID technology for localization is a new direction pursued by various research institutions and scholars. RFID positioning systems offer stability, small error, and low cost, and their location algorithms are the focus of this study. This article analyzes RFID positioning methods and algorithms layer by layer. First, several common basic RFID methods are introduced; secondly, a higher-accuracy network-based location method is described; finally, the LANDMARC algorithm is presented. From this it can be seen that advanced and efficient algorithms play an important role in increasing RFID positioning accuracy. Finally, RFID location algorithms are summarized, their deficiencies are pointed out, requirements for follow-up study are put forward, and a vision of better future RFID positioning technology is given.
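A minimal sketch of the LANDMARC idea mentioned above: rank the reference tags by Euclidean distance in signal-strength space, then take a 1/E²-weighted centroid of the k nearest. The 1/E² weighting follows the published LANDMARC scheme, but the function names and the sample readings below are made up for illustration:

```python
def landmarc_estimate(tag_rss, ref_tags, k=3):
    """Estimate a tracked tag's (x, y) position.
    tag_rss  -- RSS readings of the tracked tag at each reader
    ref_tags -- list of ((x, y), rss_tuple) for reference tags at known spots"""
    scored = []
    for pos, rss in ref_tags:
        # Euclidean distance between RSS vectors ("E" in LANDMARC)
        e = sum((a - b) ** 2 for a, b in zip(tag_rss, rss)) ** 0.5
        scored.append((e, pos))
    scored.sort(key=lambda t: t[0])
    nearest = scored[:k]
    weights = [1.0 / (e * e + 1e-12) for e, _ in nearest]  # 1/E^2 weighting
    total = sum(weights)
    x = sum(w * p[0] for w, (_, p) in zip(weights, nearest)) / total
    y = sum(w * p[1] for w, (_, p) in zip(weights, nearest)) / total
    return x, y

# Reference tags at known positions, each with RSS readings from 3 readers.
refs = [((0.0, 0.0), (10.0, 20.0, 30.0)),
        ((1.0, 0.0), (12.0, 22.0, 28.0)),
        ((0.0, 1.0), (11.0, 19.0, 31.0)),
        ((5.0, 5.0), (40.0, 50.0, 60.0))]
pos = landmarc_estimate((11.0, 20.0, 30.0), refs, k=3)
```

Because the estimate leans on whichever reference tags "look" most similar in RSS space, accuracy depends heavily on reference-tag density, which is the main tuning knob of LANDMARC-style systems.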

  17. Promoting proximal formative assessment with relational discourse

    Science.gov (United States)

    Scherr, Rachel E.; Close, Hunter G.; McKagan, Sarah B.

    2012-02-01

    The practice of proximal formative assessment - the continual, responsive attention to students' developing understanding as it is expressed in real time - depends on students' sharing their ideas with instructors and on teachers' attending to them. Rogerian psychology presents an account of the conditions under which proximal formative assessment may be promoted or inhibited: (1) Normal classroom conditions, characterized by evaluation and attention to learning targets, may present threats to students' sense of their own competence and value, causing them to conceal their ideas and reducing the potential for proximal formative assessment. (2) In contrast, discourse patterns characterized by positive anticipation and attention to learner ideas increase the potential for proximal formative assessment and promote self-directed learning. We present an analysis methodology based on these principles and demonstrate its utility for understanding episodes of university physics instruction.

  18. THE PROXIMATE COMPOSITION OF AFRICAN BUSH MANGO ...

    African Journals Online (AJOL)

    BIG TIMMY

    Information regarding previous studies on these physico-chemical ... This behaviour may be attributed to its high myristic acid ... The authors express deep appreciation to the. Heads of ... of a typical rural processing method on the proximate ...

  19. Proximate composition and nutritional characterization of Chia ...

    African Journals Online (AJOL)

    ... dairy product associated with several beneficial nutritional and health effects. ... The results for amino acids showed that the essential and non-essential amino ... proximate composition and nutritional (amino acids, fatty acids, and minerals ...

  20. Algorithmic Verification of Linearizability for Ordinary Differential Equations

    KAUST Repository

    Lyakhov, Dmitry A.; Gerdt, Vladimir P.; Michels, Dominik L.

    2017-01-01

    one by a point transformation of the dependent and independent variables. The first algorithm is based on a construction of the Lie point symmetry algebra and on the computation of its derived algebra. The second algorithm exploits the differential

  1. Dew Point

    OpenAIRE

    Goldsmith, Shelly

    1999-01-01

    Dew Point was a solo exhibition originating at PriceWaterhouseCoopers Headquarters Gallery, London, UK and toured to the Centre de Documentacio i Museu Textil, Terrassa, Spain and Gallery Aoyama, Tokyo, Japan.

  2. Tipping Point

    Medline Plus

    Full Text Available ... Point by CPSC Blogger September 22, 2009 appliance child Childproofing CPSC danger death electrical fall furniture head injury product safety television tipover tv Watch the video in Adobe Flash ...

  3. Tipping Point

    Science.gov (United States)

    ... Point by CPSC Blogger September 22, 2009 appliance child Childproofing CPSC danger death electrical fall furniture head injury product safety television tipover tv Watch the video in Adobe Flash ...

  4. Tipping Point

    Medline Plus

    Full Text Available ... Point by CPSC Blogger September 22, 2009 appliance child Childproofing CPSC danger death electrical fall furniture head ... see news reports about horrible accidents involving young children and furniture, appliance and tv tip-overs. The ...

  5. Tipping Point

    Medline Plus

    Full Text Available ... Point by CPSC Blogger September 22, 2009 appliance child Childproofing CPSC danger death electrical fall furniture head ... TV falls with about the same force as child falling from the third story of a building. ...

  6. Tipping Point

    Medline Plus

    Full Text Available ... Tipping Point by CPSC Blogger September 22, 2009 appliance child Childproofing CPSC danger death electrical fall furniture ... about horrible accidents involving young children and furniture, appliance and tv tip-overs. The force of a ...

  7. In-Place Algorithms for Computing (Layers of) Maxima

    DEFF Research Database (Denmark)

    Blunck, Henrik; Vahrenhold, Jan

    2010-01-01

    We describe space-efficient algorithms for solving problems related to finding maxima among points in two and three dimensions. Our algorithms run in optimal time and occupy only constant extra space.
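
    The 2-D version of the maxima problem can be sketched with a sort-and-sweep that keeps only O(1) state beyond the sort and compacts the maximal points to the front of the array in place. This is an illustrative simplification assuming distinct x-coordinates, not the paper's algorithm:

```python
def maxima_in_place(pts):
    """Move the maximal points of a 2-D point set to the front of `pts`
    and return their count, using O(1) extra space beyond the sort.

    A point is maximal if no other point has both a larger x and a larger y
    (x-coordinates are assumed distinct in this sketch).
    """
    # Sort by x descending so a left-to-right sweep sees, for each point,
    # only points with larger x before it.
    pts.sort(key=lambda p: (-p[0], -p[1]))
    m = 0                       # maxima found so far (prefix of the list)
    best_y = float("-inf")      # largest y seen among larger-x points
    for i in range(len(pts)):
        if pts[i][1] > best_y:  # not dominated by any point to its right
            best_y = pts[i][1]
            pts[m], pts[i] = pts[i], pts[m]
            m += 1
    return m
```

The sweep works because any point whose y does not exceed `best_y` is dominated by some already-seen point with larger x.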

  8. Diversity-Guided Evolutionary Algorithms

    DEFF Research Database (Denmark)

    Ursem, Rasmus Kjær

    2002-01-01

    Population diversity is undoubtedly a key issue in the performance of evolutionary algorithms. A common hypothesis is that high diversity is important to avoid premature convergence and to escape local optima. Various diversity measures have been used to analyze algorithms, but so far few algorithms have used a measure to guide the search. The diversity-guided evolutionary algorithm (DGEA) uses the well-known distance-to-average-point measure to alternate between phases of exploration (mutation) and phases of exploitation (recombination and selection). The DGEA showed remarkable results...
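
    The distance-to-average-point measure and the DGEA's phase-switching rule can be sketched as follows (the thresholds d_low and d_high are illustrative placeholders; the abstract does not give the actual settings):

```python
def diversity(pop):
    """Distance-to-average-point measure: mean Euclidean distance from
    each individual to the population's average point."""
    n, dim = len(pop), len(pop[0])
    avg = [sum(ind[d] for ind in pop) / n for d in range(dim)]
    return sum(
        sum((ind[d] - avg[d]) ** 2 for d in range(dim)) ** 0.5 for ind in pop
    ) / n

def dgea_mode(pop, d_low=0.01, d_high=0.25, mode="exploit"):
    """Pick the DGEA phase from the current diversity (thresholds illustrative)."""
    d = diversity(pop)
    if d < d_low:
        return "explore"   # diversity collapsed: mutate to spread out
    if d > d_high:
        return "exploit"   # enough diversity: recombine and select
    return mode            # in between: keep the current phase
```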

  9. Proximal focal femoral deficiency: A case report

    Directory of Open Access Journals (Sweden)

    Shashank Sharma

    2015-01-01

    Full Text Available Proximal focal femoral deficiency (PFFD) is a rare congenital anomaly resulting in limb shortening and disability in the young. The exact cause of the disease is not known, and it may present with varying grades of involvement of the proximal femur and the acetabulum. Recognition of this rare abnormality on radiographs can help manage these cases better, since early institution of therapy may help in achieving adequate growth of the femur.

  10. Proximity sensor system development. CRADA final report

    International Nuclear Information System (INIS)

    Haley, D.C.; Pigoski, T.M.

    1998-01-01

    Lockheed Martin Energy Research Corporation (LMERC) and Merritt Systems, Inc. (MSI) entered into a Cooperative Research and Development Agreement (CRADA) for the development and demonstration of a compact, modular proximity sensing system suitable for application to a wide class of manipulator systems operated in support of environmental restoration and waste management activities. In teleoperated modes, proximity sensing provides the manipulator operator continuous information regarding the proximity of the manipulator to objects in the workspace. In teleoperated and robotic modes, proximity sensing provides added safety through the implementation of active whole arm collision avoidance capabilities. Oak Ridge National Laboratory (ORNL), managed by LMERC for the United States Department of Energy (DOE), has developed an application specific integrated circuit (ASIC) design for the electronics required to support a modular whole arm proximity sensing system based on the use of capacitive sensors developed at Sandia National Laboratories. The use of ASIC technology greatly reduces the size of the electronics required to support the selected sensor types allowing deployment of many small sensor nodes over a large area of the manipulator surface to provide maximum sensor coverage. The ASIC design also provides a communication interface to support sensor commands from and sensor data transmission to a distributed processing system which allows modular implementation and operation of the sensor system. MSI is a commercial small business specializing in proximity sensing systems based upon infrared and acoustic sensors

  11. Proximity sensor system development. CRADA final report

    Energy Technology Data Exchange (ETDEWEB)

    Haley, D.C. [Oak Ridge National Lab., TN (United States); Pigoski, T.M. [Merrit Systems, Inc. (United States)

    1998-01-01

    Lockheed Martin Energy Research Corporation (LMERC) and Merritt Systems, Inc. (MSI) entered into a Cooperative Research and Development Agreement (CRADA) for the development and demonstration of a compact, modular proximity sensing system suitable for application to a wide class of manipulator systems operated in support of environmental restoration and waste management activities. In teleoperated modes, proximity sensing provides the manipulator operator continuous information regarding the proximity of the manipulator to objects in the workspace. In teleoperated and robotic modes, proximity sensing provides added safety through the implementation of active whole arm collision avoidance capabilities. Oak Ridge National Laboratory (ORNL), managed by LMERC for the United States Department of Energy (DOE), has developed an application specific integrated circuit (ASIC) design for the electronics required to support a modular whole arm proximity sensing system based on the use of capacitive sensors developed at Sandia National Laboratories. The use of ASIC technology greatly reduces the size of the electronics required to support the selected sensor types allowing deployment of many small sensor nodes over a large area of the manipulator surface to provide maximum sensor coverage. The ASIC design also provides a communication interface to support sensor commands from and sensor data transmission to a distributed processing system which allows modular implementation and operation of the sensor system. MSI is a commercial small business specializing in proximity sensing systems based upon infrared and acoustic sensors.

  12. Multiple intramedullary nailing of proximal phalangeal fractures of hand

    Directory of Open Access Journals (Sweden)

    Patankar Hemant

    2008-01-01

    Full Text Available Background: Proximal phalangeal fractures are commonly encountered fractures in the hand. Majority of them are stable and can be treated by non-operative means. However, unstable fractures i.e. those with shortening, displacement, angulation, rotational deformity or segmental fractures need surgical intervention. This prospective study was undertaken to evaluate the functional outcome after surgical stabilization of these fractures with joint-sparing multiple intramedullary nailing technique. Materials and Methods: Thirty-five patients with 35 isolated unstable proximal phalangeal shaft fractures of hand were managed by surgical stabilization with multiple intramedullary nailing technique. Fractures of the thumb were excluded. All the patients were followed up for a minimum of six months. They were assessed radiologically and clinically. The clinical evaluation was based on two criteria. 1. total active range of motion for digital functional assessment as suggested by the American Society for Surgery of Hand and 2. grip strength. Results: All the patients showed radiological union at six weeks. The overall results were excellent in all the patients. Adventitious bursitis was observed at the point of insertion of nails in one patient. Conclusion: Joint-sparing multiple intramedullary nailing of unstable proximal phalangeal fractures of hand provides satisfactory results with good functional outcome and fewer complications.

  13. Elementary functions algorithms and implementation

    CERN Document Server

    Muller, Jean-Michel

    2016-01-01

    This textbook presents the concepts and tools necessary to understand, build, and implement algorithms for computing elementary functions (e.g., logarithms, exponentials, and the trigonometric functions). Both hardware- and software-oriented algorithms are included, along with issues related to accurate floating-point implementation. This third edition has been updated and expanded to incorporate the most recent advances in the field, new elementary function algorithms, and function software. After a preliminary chapter that briefly introduces some fundamental concepts of computer arithmetic, such as floating-point arithmetic and redundant number systems, the text is divided into three main parts. Part I considers the computation of elementary functions using algorithms based on polynomial or rational approximations and using table-based methods; the final chapter in this section deals with basic principles of multiple-precision arithmetic. Part II is devoted to a presentation of “shift-and-add” algorithm...
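
    As a taste of the "shift-and-add" family covered in Part II, here is a generic rotation-mode CORDIC for sine and cosine. It is a standard textbook variant, not code from the book:

```python
import math

def cordic_sin_cos(theta, n=40):
    """Rotation-mode CORDIC: compute (cos(theta), sin(theta)) using only
    additions and (conceptually) shifts, plus a small arctangent table.
    Converges for |theta| <= sum(atan(2**-i)) ~ 1.7433 radians."""
    atans = [math.atan(2.0 ** -i) for i in range(n)]
    # CORDIC gain: each micro-rotation scales the vector by sqrt(1 + 2^-2i);
    # start from 1/gain so the final vector comes out unit length.
    k = 1.0
    for i in range(n):
        k /= math.sqrt(1.0 + 2.0 ** (-2 * i))
    x, y, z = k, 0.0, theta
    for i in range(n):
        d = 1.0 if z >= 0.0 else -1.0          # rotate toward z = 0
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * atans[i]
    return x, y                                 # (cos, sin)
```

In hardware the multiplications by 2^-i are wiring-free right shifts, which is the point of the shift-and-add approach.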

  14. Algorithmic alternatives

    International Nuclear Information System (INIS)

    Creutz, M.

    1987-11-01

    A large variety of Monte Carlo algorithms are being used for lattice gauge simulations. For purely bosonic theories, present approaches are generally adequate; nevertheless, overrelaxation techniques promise savings by a factor of about three in computer time. For fermionic fields the situation is more difficult and less clear. Algorithms which involve an extrapolation to a vanishing step size are all quite closely related. Methods which do not require such an approximation tend to require computer time which grows as the square of the volume of the system. Recent developments combining global accept/reject stages with Langevin or microcanonical updatings promise to reduce this growth to V^(4/3)
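
    The accept/reject step at the heart of these methods can be illustrated on a trivial target far removed from lattice gauge theory: a one-dimensional Gaussian sampled by a plain Metropolis chain (step size and chain length are arbitrary choices for illustration):

```python
import math
import random

def metropolis_gaussian(n_steps, step=1.0, seed=0):
    """Sample from pi(x) ~ exp(-x^2/2) with a Metropolis accept/reject chain."""
    rng = random.Random(seed)
    x, samples = 0.0, []
    for _ in range(n_steps):
        proposal = x + rng.uniform(-step, step)
        # Accept with probability min(1, pi(proposal)/pi(x))
        if rng.random() < math.exp((x * x - proposal * proposal) / 2.0):
            x = proposal
        samples.append(x)
    return samples
```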

  15. Combinatorial algorithms

    CERN Document Server

    Hu, T C

    2002-01-01

    Newly enlarged, updated second edition of a valuable, widely used text presents algorithms for shortest paths, maximum flows, dynamic programming and backtracking. Also discussed are binary trees, heuristic and near optimums, matrix multiplication, and NP-complete problems. 153 black-and-white illus. 23 tables. New to this edition: Chapter 9

  16. Parkinson's Disease Prevalence and Proximity to Agricultural Cultivated Fields

    Science.gov (United States)

    Yitshak Sade, Maayan; Zlotnik, Yair; Kloog, Itai; Novack, Victor; Peretz, Chava; Ifergane, Gal

    2015-01-01

    The risk for developing Parkinson's disease (PD) is a combination of multiple environmental and genetic factors. The Negev (Southern Israel) contains approximately 252.5 km2 of agricultural cultivated fields (ACF). We aimed to estimate the prevalence and incidence of PD and to examine possible geographical clustering and associations with agricultural exposures. We screened all “Clalit” Health Services members in the Negev (70% of the population) between the years 2000 and 2012. Individual demographic, clinical, and medication prescription data were available. We used a refined medication tracer algorithm to identify PD patients. We used mixed Poisson models to calculate the smoothed standardized incidence rates (SIRs) for each locality. We identified ACF and calculated the size of the fields and their distance from each locality. We identified 3,792 cases of PD. SIRs were higher than expected in Jewish rural localities (median SIR [95% CI]: 1.41 [1.28; 1.53] in 2001–2004, 1.62 [1.48; 1.76] in 2005–2008, and 1.57 [1.44; 1.80] in 2009–2012). The highest SIR was observed in localities located in proximity to large ACF (SIR 1.54, 95% CI 1.32; 1.79). In conclusion, in this population-based study we found that PD SIRs were higher than expected in rural localities. Furthermore, it appears that proximity to ACF and field size contribute to PD risk. PMID:26357584
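
    A plain (unsmoothed) standardized incidence ratio with an approximate log-normal confidence interval can be sketched as below. This is a generic epidemiological calculation, not the mixed Poisson smoothing used in the study:

```python
import math

def sir_with_ci(observed, expected, z=1.96):
    """Standardized incidence ratio O/E with an approximate log-normal
    95% confidence interval, assuming observed counts are Poisson."""
    sir = observed / expected
    se_log = 1.0 / math.sqrt(observed)   # approximate SE of log(SIR)
    lo = sir * math.exp(-z * se_log)
    hi = sir * math.exp(z * se_log)
    return sir, lo, hi
```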

  17. Optical proximity correction for anamorphic extreme ultraviolet lithography

    Science.gov (United States)

    Clifford, Chris; Lam, Michael; Raghunathan, Ananthan; Jiang, Fan; Fenger, Germain; Adam, Kostas

    2017-10-01

    The change from isomorphic to anamorphic optics in high numerical aperture (NA) extreme ultraviolet (EUV) scanners necessitates changes to the mask data preparation flow. The required changes for each step in the mask tape out process are discussed, with a focus on optical proximity correction (OPC). When necessary, solutions to new problems are demonstrated, and verified by rigorous simulation. Additions to the OPC model include accounting for anamorphic effects in the optics, mask electromagnetics, and mask manufacturing. The correction algorithm is updated to include awareness of anamorphic mask geometry for mask rule checking (MRC). OPC verification through process window conditions is enhanced to test different wafer scale mask error ranges in the horizontal and vertical directions. This work will show that existing models and methods can be updated to support anamorphic optics without major changes. Also, the larger mask size in the Y direction can result in better model accuracy, easier OPC convergence, and designs which are more tolerant to mask errors.
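
    The anamorphic bookkeeping can be sketched as a per-axis scaling from wafer to mask coordinates followed by an axis-aware mask rule check. The 4x/8x magnifications match the high-NA EUV convention described above, but the 60 nm minimum mask CD is purely illustrative:

```python
def wafer_to_mask(polygon, mag_x=4.0, mag_y=8.0):
    """Scale wafer-scale coordinates (nm) up to mask scale anamorphically."""
    return [(x * mag_x, y * mag_y) for x, y in polygon]

def mrc_min_width(rect_mask, min_cd_mask=60.0):
    """Toy mask-rule check: both mask-scale dimensions of an axis-aligned
    rectangle (given by two opposite corners) must meet the minimum mask CD.
    Because magnification differs per axis, a wafer feature can pass in Y
    yet fail in X."""
    (x0, y0), (x1, y1) = rect_mask
    return abs(x1 - x0) >= min_cd_mask and abs(y1 - y0) >= min_cd_mask

# A 10 nm x 10 nm wafer feature becomes 40 nm x 80 nm on the mask,
# so it violates the toy 60 nm rule only in the X direction.
rect = wafer_to_mask([(0, 0), (10, 10)])
```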

  18. Locking plate fixation for proximal humerus fractures.

    LENUS (Irish Health Repository)

    Burke, Neil G

    2012-02-01

    Locking plates are increasingly used to surgically treat proximal humerus fractures. Knowledge of the bone quality of the proximal humerus is important. Studies have shown the medial and dorsal aspects of the proximal humeral head to have the highest bone strength, and this should be exploited by fixation techniques, particularly in elderly patients with osteoporosis. The goals of surgery for proximal humeral fractures should involve minimal soft tissue dissection and achieve anatomic reduction of the head complex with sufficient stability to allow for early shoulder mobilization. This article reviews various treatment options, in particular locking plate fixation. Locking plate fixation is associated with a high complication rate, such as avascular necrosis (7.9%), screw cutout (11.6%), and revision surgery (13.7%). These complications are frequently due to the varus deformation of the humeral head. Strategic screw placement in the humeral head would minimize the possibility of loss of fracture reduction and potential hardware complications. Locking plate fixation is a good surgical option for the management of proximal humerus fractures. Complications can be avoided by using better bone stock and by careful screw placement in the humeral head.

  19. Giant proximity effect in ferromagnetic bilayers

    Science.gov (United States)

    Ramos, Silvia; Charlton, Tim; Quintanilla, Jorge; Suter, Andreas; Moodera, Jagadeesh; Prokscha, Thomas; Salman, Zaher; Forgan, Ted

    2013-03-01

    The proximity effect is a phenomenon where an ordered state leaks from a material into an adjacent one over some finite distance, ξ. For superconductors, this distance is ~ the coherence length. Nevertheless much longer-range, ``giant'' proximity effects have been observed in cuprate junctions. This surprising effect can be understood as a consequence of critical opalescence. Since this occurs near all second order phase transitions, giant proximity effects should be very general and, in particular, they should be present in magnetic systems. The ferromagnetic proximity effect has the advantage that its order parameter (magnetization) can be observed directly. We investigate the above phenomenon in Co/EuS bilayer films, where both materials undergo ferromagnetic transitions but at rather different temperatures (bulk TC of 1400K for Co and 16.6K for EuS). A dramatic increase in the range of the proximity effect is expected near the TC of EuS. We present the results of our measurements of the magnetization profiles as a function of temperature, carried out using the complementary techniques of low energy muon rotation and polarized neutron reflectivity. Work supported by EPSRC, STFC and ONR grant N00014-09-1-0177 and NSF grant DMR 0504158.

  20. Proximity operations concept design study, task 6

    Science.gov (United States)

    Williams, A. N.

    1990-01-01

    The feasibility of using optical technology to perform the mission of the proximity operations communications subsystem on Space Station Freedom was determined. Proximity operations mission requirements are determined and the relationship to the overall operational environment of the space station is defined. From this information, the design requirements of the communication subsystem are derived. Based on these requirements, a preliminary design is developed and the feasibility of implementation determined. To support the Orbital Maneuvering Vehicle and National Space Transportation System, the optical system development is straightforward. The requirements on extra-vehicular activity are such as to allow large fields of uncertainty, thus exacerbating the acquisition problem; however, an approach is given that could mitigate this problem. In general, it is found that such a system could indeed perform the proximity operations mission requirement, with some development required to support extra-vehicular activity.

  1. Endomedullar nail of metacarpal and proximal phalanges

    International Nuclear Information System (INIS)

    Mendez Olaya, Francisco Javier; Sanchez Mesa, Pedro Antonio

    2002-01-01

    Prospective study, case series; it included patients with diaphyseal fractures and fractures at the diaphysis-neck or diaphysis-base junction of the metacarpals and proximal phalanges, treated with antegrade intramedullary nailing after closed reduction of the fracture, using a prevent intramedullary nail of 1.6 mm (cem 16) for metacarpal fractures and two prevent nails of 1.0 mm (cem 10) for proximal phalangeal fractures. Indications: transverse and short oblique fractures, spiral fractures, and those with bicortical comminution. Mean follow-up was 5.7 months, with an average of 2.2 patients operated on per month. Using this surgical technique, a total of 20 (twenty) patients with 21 (twenty-one) fractures were operated on: 16 (sixteen) metacarpal fractures and 5 (five) proximal phalangeal fractures, all assessed using clinical and radiological parameters. Results: good 82%, fair 18%, and poor 0%, obtaining bony consolidation and early rehabilitation with return to habitual work.

  2. Correlation between social proximity and mobility similarity.

    Science.gov (United States)

    Fan, Chao; Liu, Yiding; Huang, Junming; Rong, Zhihai; Zhou, Tao

    2017-09-20

    Human behaviors exhibit ubiquitous correlations in many aspects, at individual and collective levels, in temporal and spatial dimensions, and across content, social, and geographical layers. With rich Internet data on online behaviors becoming available, exploring human mobility similarity from the perspective of social network proximity has attracted academic interest. Existing analysis shows a strong correlation between online social proximity and offline mobility similarity: mobility records of friends are significantly more similar than those of strangers, and records of friends with common neighbors are more similar still. We argue for the importance of the number and diversity of common friends, with a counterintuitive finding that the number of common friends has no positive impact on mobility similarity while their diversity plays a key role, disagreeing with previous studies. Our analysis provides a novel view for better understanding the coupling between human online and offline behaviors, and will help model and predict human behaviors based on social proximity.
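
    One way to operationalize the "number vs. diversity of common friends" distinction is to count common neighbors and measure the Shannon entropy of their community labels. The entropy proxy here is an assumption for illustration, not necessarily the measure used in the paper:

```python
import math
from collections import Counter

def common_friends(adj, u, v):
    """Common neighbours of u and v in an adjacency dict of sets."""
    return adj[u] & adj[v]

def community_entropy(nodes, community):
    """Shannon entropy (bits) of the community labels of `nodes`:
    an illustrative diversity measure for a set of common friends."""
    if not nodes:
        return 0.0
    counts = Counter(community[n] for n in nodes)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())
```

Two pairs can share the same number of common friends yet differ sharply in this diversity score, which is the contrast the abstract emphasizes.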

  3. Evaluation and Management of Proximal Humerus Fractures

    Directory of Open Access Journals (Sweden)

    Ekaterina Khmelnitskaya

    2012-01-01

    Full Text Available Proximal humerus fractures are common injuries, especially among older osteoporotic women. Restoration of function requires a thorough understanding of the neurovascular, musculotendinous, and bony anatomy. This paper addresses the relevant anatomy and highlights various management options, including indication for arthroplasty. In the vast majority of cases, proximal humerus fractures may be treated nonoperatively. In the case of displaced fractures, when surgical intervention may be pursued, numerous constructs have been investigated. Of these, the proximal humerus locking plate is the most widely used. Arthroplasty is generally reserved for comminuted 4-part fractures, head-split fractures, or fractures with significant underlying arthritic changes. Reverse total shoulder arthroplasty is reserved for patients with a deficient rotator cuff, or highly comminuted tuberosities.

  4. The Life Saving Effects of Hospital Proximity

    DEFF Research Database (Denmark)

    Bertoli, Paola; Grembi, Veronica

    We assess the lifesaving effect of hospital proximity using data on fatality rates of road-traffic accidents. While most of the literature on this topic is based on changes in distance to the nearest hospital triggered by hospital closures and uses OLS estimates, our identification comes from......) increases the fatality rate by 13.84% on the sample average. This is equal to 0.92 additional deaths per 100 accidents. We show that OLS estimates provide a downward-biased measure of the real effect of hospital proximity because they do not fully solve spatial sorting problems. Proximity matters more when road safety is low, the emergency service is not properly organized, and the nearest hospital has lower quality standards....

  5. [Partial replantation following proximal limb injury].

    Science.gov (United States)

    Dubert, T; Malikov, S A; Dinh, A; Kupatadze, D D; Oberlin, C; Alnot, J Y; Nabokov, B B

    2000-11-01

    Proximal replantation is a technically feasible but life-threatening procedure. Indications must be restricted to patients in good condition with a good functional prognosis. The goal of replantation must be focused not only on reimplanting the amputated limb but also on achieving a good functional outcome. For the lower limb, simple terminalization remains the best choice in many cases. When a proximal amputation is not suitable for replantation, the main aim of the surgical procedure must be to reconstruct a stump long enough to permit fitting a prosthesis preserving the function of the adjacent joint. If the proximal stump beyond the last joint is very short, it may be possible to restore some length by partial replantation of spared tissues from the amputated part. We present here the results we obtained following this policy. This series included 16 cases of partial replantations, 14 involving the lower limb and 2 the upper limb. All were osteocutaneous microsurgical transfers. For the lower limb, all transfers recovered protective sensitivity following tibial nerve repair. The functional calcaeoplantar unit was used in 13 cases. The transfer of this specialized weight bearing tissue provided a stable distal surface making higher support unnecessary. In one case, we raised a 13-cm vascularized tibial segment covered with foot skin for additional length. For the upper limb, the osteocutaneous transfer, based on the radial artery, was not reinnervated, but this lack of sensitivity did not impair prosthesis fitting. One vascular failure was finally amputated. This was the only unsuccessful result. For all other patients, the surgical procedure facilitated prosthesis fitting and preserved the proximal joint function despite an initially very proximal amputation. The advantages of partial replantation are obvious compared with simple terminalization or secondary reconstruction. There is no secondary donor site and, because there is no major muscle mass in the

  6. The developmental spectrum of proximal radioulnar synostosis

    Energy Technology Data Exchange (ETDEWEB)

    Elliott, Alison M. [University of Manitoba, Winnipeg Regional Health Association Program of Genetics and Metabolism, Winnipeg, MB (Canada); University of Manitoba, Department of Paediatrics and Child Health, Winnipeg, MB (Canada); University of Manitoba, Department of Biochemistry and Medical Genetics, Winnipeg, MB (Canada); University of Manitoba, WRHA Program of Genetics and Metabolism, Departments of Paediatrics and Child Health, Biochemistry and Medical Genetics, Winnipeg, MB (Canada); Kibria, Lisa [University of Manitoba, Department of School of Medical Rehabilitation, Winnipeg, MB (Canada); Reed, Martin H. [University of Manitoba, Department of Paediatrics and Child Health, Winnipeg, MB (Canada); University of Manitoba, Department of Biochemistry and Medical Genetics, Winnipeg, MB (Canada); University of Manitoba, Department of Diagnostic Imaging, Winnipeg, MB (Canada)

    2010-01-15

    Proximal radioulnar synostosis is a rare upper limb malformation. The elbow is first identifiable at 35 days (after conception), at which stage the cartilaginous anlagen of the humerus, radius and ulna are continuous. Subsequently, longitudinal segmentation produces separation of the distal radius and ulna. However, temporarily, the proximal ends are united and continue to share a common perichondrium. We investigated the hypothesis that posterior congenital dislocation of the radial head and proximal radioulnar fusion are different clinical manifestations of the same primary developmental abnormality. Records were searched for "proximal radioulnar fusion/posterior radial head dislocation" in patients followed at the local Children's Hospital and Rehabilitation Centre for Children. Relevant radiographic, demographic and clinical data were recorded. Ethics approval was obtained through the University Research Ethics Board. In total, 28 patients met the inclusion criteria. The majority of patients (16) had bilateral involvement; eight with posterior dislocation of the radial head only; five had posterior radial head dislocation with radioulnar fusion and two had radioulnar fusion without dislocation. One patient had bilateral proximal radioulnar fusion and posterior dislocation of the left radial head. Nine patients had only left-sided involvement, and three had only right-sided involvement. The degree of proximal fusion varied, with some patients showing 'complete' proximal fusion and others showing fusion that occurred slightly distal to the radial head: 'partially separated.' Associated disorders in our cohort included Poland syndrome (two patients), Cornelia de Lange syndrome, chromosome anomalies (including tetrasomy X) and Cenani Lenz syndactyly. The suggestion of a developmental relationship between posterior dislocation of the radial head and proximal radioulnar fusion is supported by the fact that both anomalies

  7. Proximity effects in ferromagnet/superconductor structures

    International Nuclear Information System (INIS)

    Yu, H.L.; Sun, G.Y.; Yang, L.Y.; Xing, D.Y.

    2004-01-01

    The Nambu spinor Green's function approach is applied to study proximity effects in ferromagnet/superconductor (FM/SC) structures. They include the induced superconducting order parameter and density of states (DOS) with superconducting feature on the FM side, and spin-dependent DOS within the energy gap on the SC side. The latter indicates an appearance of gapless superconductivity and a coexistence of ferromagnetism and superconductivity in a small regime near the interface. The influence of exchange energy in FM and barrier strength at interface on the proximity effects is discussed

  8. Ultimate and proximate explanations of strong reciprocity.

    Science.gov (United States)

    Vromen, Jack

    2017-08-23

    Strong reciprocity (SR) has recently been subject to heated debate. In this debate, the "West camp" (West et al. in Evol Hum Behav 32(4):231-262, 2011), which is critical of the case for SR, and the "Laland camp" (Laland et al. in Science, 334(6062):1512-1516, 2011, Biol Philos 28(5):719-745, 2013), which is sympathetic to the case of SR, seem to take diametrically opposed positions. The West camp criticizes advocates of SR for conflating proximate and ultimate causation. SR is said to be a proximate mechanism that is put forward by its advocates as an ultimate explanation of human cooperation. The West camp thus accuses advocates of SR of not heeding Mayr's original distinction between ultimate and proximate causation. The Laland camp praises advocates of SR for revising Mayr's distinction. Advocates of SR are said to replace Mayr's uni-directional view on the relation between ultimate and proximate causes with the bi-directional one of reciprocal causation. The paper argues that both the West camp and the Laland camp misrepresent what advocates of SR are up to. The West camp is right that SR is a proximate cause of human cooperation. But rather than putting forward SR as an ultimate explanation, as the West camp argues, advocates of SR believe that SR itself is in need of ultimate explanation. Advocates of SR tend to take gene-culture co-evolutionary theory as the correct meta-theoretical framework for advancing ultimate explanations of SR. Appearances notwithstanding, gene-culture coevolutionary theory does not imply Laland et al.'s notion of reciprocal causation. "Reciprocal causation" suggests that proximate and ultimate causes interact simultaneously, while advocates of SR assume that they interact sequentially. I end by arguing that the best way to understand the debate is by disambiguating Mayr's ultimate-proximate distinction. I propose to reserve "ultimate" and "proximate" for different sorts of explanations, and to use other terms for distinguishing

  9. Infiltrating/sealing proximal caries lesions

    DEFF Research Database (Denmark)

    Martignon, S; Ekstrand, K R; Gomez, J

    2012-01-01

    This randomized split-mouth controlled clinical trial aimed at assessing the therapeutic effects of infiltration vs. sealing for controlling caries progression on proximal surfaces. Out of 90 adult students/patients assessed at university clinics and agreeing to participate, 39, each with 3...... differences in lesion progression between infiltration and placebo (P = 0.0012) and between sealing and placebo (P = 0.0269). The study showed that infiltration and sealing are significantly better than placebo treatment for controlling caries progression on proximal lesions. No significant difference...

  10. MM Algorithms for Geometric and Signomial Programming.

    Science.gov (United States)

    Lange, Kenneth; Zhou, Hua

    2014-02-01

    This paper derives new algorithms for signomial programming, a generalization of geometric programming. The algorithms are based on a generic principle for optimization called the MM algorithm. In this setting, one can apply the geometric-arithmetic mean inequality and a supporting hyperplane inequality to create a surrogate function with parameters separated. Thus, unconstrained signomial programming reduces to a sequence of one-dimensional minimization problems. Simple examples demonstrate that the MM algorithm derived can converge to a boundary point or to one point of a continuum of minimum points. Conditions under which the minimum point is unique or occurs in the interior of parameter space are proved for geometric programming. Convergence to an interior point occurs at a linear rate. Finally, the MM framework easily accommodates equality and inequality constraints of signomial type. For the most important special case, constrained quadratic programming, the MM algorithm involves very simple updates.
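
    The MM principle the abstract describes can be illustrated on a toy problem. The sketch below is not the paper's signomial-programming algorithm; it applies the same majorize-minimize idea to the nonsmooth function f(x) = Σᵢ |x − yᵢ|, whose quadratic majorizer at the current iterate yields a closed-form weighted-average update (all names here are illustrative).

```python
# Toy illustration of the MM (majorize-minimize) principle, not the paper's
# signomial-programming algorithm: minimize the nonsmooth f(x) = sum_i |x - y_i|
# by repeatedly minimizing the quadratic majorizer
#   |x - y| <= (x - y)**2 / (2*|x_k - y|) + |x_k - y| / 2,
# which touches f at the current iterate x_k and lies above f everywhere.
def mm_median(ys, x0, iters=50, eps=1e-12):
    x = x0
    for _ in range(iters):
        w = [1.0 / max(abs(x - y), eps) for y in ys]        # majorizer weights
        x = sum(wi * yi for wi, yi in zip(w, ys)) / sum(w)  # minimize surrogate
    return x

# The minimizer of sum_i |x - y_i| is the sample median.
print(round(mm_median([3.0, -1.0, 0.0], x0=1.0), 6))  # → 0.0
```

    Because each surrogate majorizes f and touches it at the current iterate, every update drives f downhill, which is the defining descent property of an MM algorithm.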

  11. A Parametric k-Means Algorithm

    Science.gov (United States)

    Tarpey, Thaddeus

    2007-01-01

    Summary The k points that optimally represent a distribution (usually in terms of a squared error loss) are called the k principal points. This paper presents a computationally intensive method that automatically determines the principal points of a parametric distribution. Cluster means from the k-means algorithm are nonparametric estimators of principal points. A parametric k-means approach is introduced for estimating principal points by running the k-means algorithm on a very large simulated data set from a distribution whose parameters are estimated using maximum likelihood. Theoretical and simulation results are presented comparing the parametric k-means algorithm to the usual k-means algorithm and an example on determining sizes of gas masks is used to illustrate the parametric k-means algorithm. PMID:17917692
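
    The procedure the abstract outlines (fit the parameters by maximum likelihood, simulate a very large sample from the fitted model, then run k-means on the simulation) can be sketched as follows for a univariate normal model. Function names, the choice of model and the sample sizes are illustrative assumptions, and a tiny Lloyd's-algorithm implementation stands in for a library k-means.

```python
import random, statistics

def lloyd(data, k, iters=30):
    """Plain k-means (Lloyd's algorithm) on 1-D data."""
    centers = sorted(random.sample(data, k))
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for x in data:
            j = min(range(k), key=lambda j: (x - centers[j]) ** 2)
            clusters[j].append(x)
        # Keep the old center if a cluster happens to be empty.
        centers = [statistics.fmean(c) if c else centers[j]
                   for j, c in enumerate(clusters)]
    return sorted(centers)

def parametric_kmeans(sample, k, n_sim=20_000, seed=0):
    """Sketch of the parametric k-means idea for a normal model:
    1) estimate the parameters by maximum likelihood,
    2) simulate a very large data set from the fitted model,
    3) run ordinary k-means on the simulation; the cluster means
       estimate the k principal points of the fitted distribution."""
    random.seed(seed)
    mu = statistics.fmean(sample)      # MLE of the mean
    sigma = statistics.pstdev(sample)  # MLE of the standard deviation
    sim = [random.gauss(mu, sigma) for _ in range(n_sim)]
    return lloyd(sim, k)
```

    For a standard normal distribution the two principal points are known to be approximately ±0.798σ, which the sketch recovers from a moderate sample.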

  12. Algorithmic Self

    DEFF Research Database (Denmark)

    Markham, Annette

    This paper takes an actor network theory approach to explore some of the ways that algorithms co-construct identity and relational meaning in contemporary use of social media. Based on intensive interviews with participants as well as activity logging and data tracking, the author presents a richly...... layered set of accounts to help build our understanding of how individuals relate to their devices, search systems, and social network sites. This work extends critical analyses of the power of algorithms in implicating the social self by offering narrative accounts from multiple perspectives. It also...... contributes an innovative method for blending actor network theory with symbolic interaction to grapple with the complexity of everyday sensemaking practices within networked global information flows....

  13. Stochastic Reformulations of Linear Systems: Algorithms and Convergence Theory

    KAUST Repository

    Richtarik, Peter; Takáč, Martin

    2017-01-01

    We develop a family of reformulations of an arbitrary consistent linear system into a stochastic problem. The reformulations are governed by two user-defined parameters: a positive definite matrix defining a norm, and an arbitrary discrete or continuous distribution over random matrices. Our reformulation has several equivalent interpretations, allowing for researchers from various communities to leverage their domain specific insights. In particular, our reformulation can be equivalently seen as a stochastic optimization problem, stochastic linear system, stochastic fixed point problem and a probabilistic intersection problem. We prove sufficient, and necessary and sufficient conditions for the reformulation to be exact. Further, we propose and analyze three stochastic algorithms for solving the reformulated problem---basic, parallel and accelerated methods---with global linear convergence rates. The rates can be interpreted as condition numbers of a matrix which depends on the system matrix and on the reformulation parameters. This gives rise to a new phenomenon which we call stochastic preconditioning, and which refers to the problem of finding parameters (matrix and distribution) leading to a sufficiently small condition number. Our basic method can be equivalently interpreted as stochastic gradient descent, stochastic Newton method, stochastic proximal point method, stochastic fixed point method, and stochastic projection method, with fixed stepsize (relaxation parameter), applied to the reformulations.
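
    One concrete member of this family can be sketched in a few lines: with the identity norm matrix and a distribution that samples single rows of the system, the basic method reduces to randomized Kaczmarz, each step being a stochastic projection onto one equation's solution set. The code below is an illustrative sketch under those assumptions, not the paper's general implementation.

```python
import random

def randomized_kaczmarz(A, b, iters=2000, seed=0):
    """Randomized Kaczmarz for a consistent system Ax = b: at each step,
    project the iterate onto the hyperplane <a_i, x> = b_i of one randomly
    chosen row (stochastic projection with fixed relaxation parameter 1)."""
    random.seed(seed)
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        i = random.randrange(m)                        # sample a row
        ai, bi = A[i], b[i]
        r = sum(a * xj for a, xj in zip(ai, x)) - bi   # residual of row i
        nrm2 = sum(a * a for a in ai)
        x = [xj - (r / nrm2) * a for xj, a in zip(x, ai)]
    return x
```

    On a consistent overdetermined system the iterates converge linearly to the solution, with a rate governed by a condition number of the (reformulated) system, as the abstract describes.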

  15. Phytochemical screening, proximate analysis and acute toxicity ...

    African Journals Online (AJOL)

    Phytochemical screening results indicate the presence of saponins, flavonoids, phytosterols and phenols. Acute toxicity study showed there was no mortality at 8000 mg/kg of the extract. The results indicate that the plant is rich in phytochemicals and is relatively safe. Key words: Phytochemicals, acute toxicity, proximate ...

  16. PROXIMATE AND ELEMENTAL COMPOSITION OF WHITE GRUBS

    African Journals Online (AJOL)

    DR. AMINU

    This study determined the proximate and mineral element composition of whole white grubs using standard methods of analysis. ... and 12.75 ± 3.65% respectively. Mineral contents of white grub in terms of relative concentration .... of intracellular Ca, bone mineralization, blood coagulation, and plasma membrane potential ...

  17. Phytochemical Screening and Proximate Analysis of Newbouldia ...

    African Journals Online (AJOL)

    The study was conducted to assess the phytochemical and proximate composition of Newboudia laevis leaves and Allium sativum bulb extracts. The leaves and bulbs extracts were analyzed for their chemical composition and antinutritional factors (ANFs) which include moisture, crude protein, crude fat, crude fiber, total ash ...

  18. Phytochemical Screening, Proximate and Mineral Composition of ...

    African Journals Online (AJOL)

    Leaves of sweet potato (Ipomoea batatas) grown in Tepi area was studied for their class of phytochemicals, mineral and proximate composition using standard analytical methods. The phytochemical screening revealed the presence of alkaloids, flavonoid, terpenoids, saponins, quinones, phenol, tannins, amino acid and ...

  19. Phytochemical screening, proximate and elemental analysis of ...

    African Journals Online (AJOL)

    Citrus sinensis was screened for its phytochemical composition and was evaluated for the proximate and elemental analysis. The phytochemical analysis indicated the presence of reducing sugar, saponins, cardiac glycosides, tannins and flavonoids. The elemental analysis indicated the presence of the following mineral ...

  20. Modified Koyanagi Technique in Management of Proximal ...

    African Journals Online (AJOL)

    Modified Koyanagi Technique in Management of Proximal Hypospadias. Adham Elsaied, Basem Saied, and Mohammed El- ... All operations were performed by the authors, using fine instruments and under 3.5X loupe ... the other needed an operation to close the fistula six months later. The case with meatal recession had ...

  1. Proximity focusing RICH with TOF capabilities

    International Nuclear Information System (INIS)

    Korpar, S.; Adachi, I.; Fujita, K.; Fukushima, T.; Gorisek, A.; Hayashi, D.; Iijima, T.; Ikado, T.; Ishikawa, T.; Kawai, H.; Kozakai, Y.; Krizan, P.; Kuratani, A.; Mazuka, Y.; Nakagawa, T.; Nishida, S.; Ogawa, S.; Pestotnik, R.; Seki, T.; Sumiyoshi, T.; Tabata, M.; Unno, Y.

    2007-01-01

    A proximity focusing RICH counter with a multi-channel micro-channel plate (MCP) PMT was tested as a time-of-flight counter. Cherenkov photons emitted in the radiator medium as well as in the entrance window of the PMT were used for the time-of-flight measurement, and an excellent performance of the counter could be demonstrated

  2. Proximate composition and mycological characterization of peanut ...

    African Journals Online (AJOL)

    SARAH

    2013-12-30

    Dec 30, 2013 ... ABSTRACT. Objective: The aim of this work was to contribute to the food safety of Ivorian consumers by investigating the proximate composition and the toxic fungal contamination of peanut butters offered for retail sale on the different markets of Abidjan. Methodology and results: Peanut butter samples (45) ...

  3. Prosthetic replacement for proximal humeral fractures.

    Science.gov (United States)

    Kontakis, George; Tosounidis, Theodoros; Galanakis, Ioannis; Megas, Panagiotis

    2008-12-01

    The ideal management of complex proximal humeral fractures continues to be debatable. Evolution of proximal humeral fracture management, during the past decade, led to the implementation of many innovations in surgical treatment. Even though the pendulum of treatment seems to swing towards new trends such as locked plating, hemiarthroplasty remains a valid and reliable option that serves the patient's needs well. Hemiarthroplasty is indicated for complex proximal humeral fractures in elderly patients with poor bone stock and when internal fixation is difficult or unreliable. Hemiarthroplasty provides a better result when it is performed early post-injury. Stem height, retroversion and tuberosity positioning are technical aspects of utmost importance. Additionally reverse total shoulder arthroplasty is an alternative new modality that can be used as a primary solution in selected patients with proximal humeral fracture treatment. Failed hemiarthroplasty and fracture sequelae can be successfully managed with reverse total shoulder arthroplasty. Individual decision-making and tailored treatment that takes into consideration the personality of the fracture and the patient's characteristics should be used.

  4. Phytochemistry and proximate composition of ginger ( Zingiber ...

    African Journals Online (AJOL)

    The results of the phytochemical screening showed that alkaloids, carbohydrates, glycosides, proteins, saponins, steroids, flavonoids and terpenoids were present, while reducing sugars, tannins, oils and acid compounds were absent. Similarly, the results of the proximate analysis of the rhizome showed that ginger ...

  5. Disability occurrence and proximity to death

    NARCIS (Netherlands)

    Klijs, Bart; Mackenbach, Johan P.; Kunst, Anton E.

    2010-01-01

    Purpose. This paper aims to assess whether disability occurrence is related more strongly to proximity to death than to age. Method. Self reported disability and vital status were available from six annual waves and a subsequent 12-year mortality follow-up of the Dutch GLOBE longitudinal study.

  6. Proximate composition, bread characteristics and sensory ...

    African Journals Online (AJOL)

    This study was carried out to investigate proximate composition, bread characteristics and sensory evaluation of cocoyam-wheat composite breads at different levels of cocoyam flour substitution for human consumption. A whole wheat bread (WWB) and cocoyam-composite breads (CCB1, CCB2 and CCB3) were prepared ...

  7. The Mean as Balance Point

    Science.gov (United States)

    O'Dell, Robin S.

    2012-01-01

    There are two primary interpretations of the mean: as a leveler of data (Uccellini 1996, pp. 113-114) and as a balance point of a data set. Typically, both interpretations of the mean are ignored in elementary school and middle school curricula. They are replaced with a rote emphasis on calculation using the standard algorithm. When students are…

  8. Algorithm 865

    DEFF Research Database (Denmark)

    Gustavson, Fred G.; Reid, John K.; Wasniewski, Jerzy

    2007-01-01

    We present subroutines for the Cholesky factorization of a positive-definite symmetric matrix and for solving corresponding sets of linear equations. They exploit cache memory by using the block hybrid format proposed by the authors in a companion article. The matrix is packed into n(n + 1)/2 real...... variables, and the speed is usually better than that of the LAPACK algorithm that uses full storage (n² variables). Included are subroutines for rearranging a matrix whose upper- or lower-triangular part is packed by columns to this format and for the inverse rearrangement. Also included is a kernel...
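
    For reference, a Cholesky factorization operating directly on the conventional packed layout (the lower triangle stored by columns in n(n + 1)/2 entries) can be sketched as below. This is the plain packed format, not the authors' block hybrid format; the function name and the in-place convention are illustrative.

```python
import math

def chol_packed_lower(ap, n):
    """In-place Cholesky factorization A = L * L^T on the lower triangle
    of A packed by columns into n(n + 1)/2 entries (classic packed layout;
    NOT the block hybrid format of the article, which reorders the data
    for cache efficiency)."""
    def idx(i, j):  # position of A[i][j] (i >= j) in packed-by-columns storage
        return j * n - j * (j - 1) // 2 + (i - j)
    for j in range(n):
        # Diagonal entry: L[j][j] = sqrt(A[j][j] - sum_k L[j][k]^2).
        s = ap[idx(j, j)] - sum(ap[idx(j, k)] ** 2 for k in range(j))
        ap[idx(j, j)] = math.sqrt(s)
        # Subdiagonal entries of column j.
        for i in range(j + 1, n):
            t = ap[idx(i, j)] - sum(ap[idx(i, k)] * ap[idx(j, k)] for k in range(j))
            ap[idx(i, j)] = t / ap[idx(j, j)]
    return ap
```

    The column-major traversal above is exactly what makes plain packed storage cache-unfriendly for large n, which is the problem the block hybrid format addresses.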

  9. Calculation of nondiffused proximity functions from cloud-chamber data

    International Nuclear Information System (INIS)

    Zaider, M.

    1987-01-01

    To a large extent the cloud chamber is an ideal microdosimetric device: by measuring the positions of ionizing events in charged-particle tracks one can generate - with a flexibility matched only by Monte-Carlo simulations - any microdosimetric quantity of interest, ranging from lineal energy spectra (in volumes of practically arbitrary shape and size) to proximity functions, that is, distributions of distances between energy transfer points in the track. Cloud-chamber data analyzed in such ways have indeed been reported for a variety of radiations. In view of these clear advantages it is certainly surprising that, within the microdosimetric community, only one group (at Harwell, UK) is actively involved in such work and that, furthermore, cloud-chamber results are used essentially only as a testing ground for Monte-Carlo calculations. It appears that this reluctance can be traced to the fact that the tracks are distorted by the diffusion of droplets during their growth. This diffusion, which is of the order of several nanometers (in unit-density material), although rather insignificant vis-a-vis conventional microdosimetry, can be a serious limitation in view of modern theories of radiation action which emphasize energy deposition events at the nanometer level. The purpose of this research activity is to show that, using a rather straightforward mathematical procedure, one can unfold the effect of diffusion from proximity functions. Since the nondiffused proximity function can be used to calculate other microdosimetric quantities, an important limitation of the cloud-chamber data can thus be avoided

  10. A Penalization-Gradient Algorithm for Variational Inequalities

    Directory of Open Access Journals (Sweden)

    Abdellatif Moudafi

    2011-01-01

    Full Text Available This paper is concerned with the study of a penalization-gradient algorithm for solving variational inequalities, namely, find x̅∈C such that 〈Ax̅,y-x̅〉≥0 for all y∈C, where A:H→H is a single-valued operator and C is a closed convex set of a real Hilbert space H. Given Ψ:H→R  ∪  {+∞}, which acts as a penalization function with respect to the constraint x̅∈C, and a penalization parameter βk, we consider an algorithm which alternates a proximal step with respect to ∂Ψ and a gradient step with respect to A and reads as xk = (I + λkβk∂Ψ)⁻¹(xk−1 − λkAxk−1). Under mild hypotheses, we obtain weak convergence for an inverse strongly monotone operator and strong convergence for a Lipschitz continuous and strongly monotone operator. Applications to hierarchical minimization and fixed-point problems are also given, and the multivalued case is reached by replacing the multivalued operator by its Yosida approximation, which is always Lipschitz continuous.
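
    A minimal special case of such a penalization-gradient scheme: when Ψ is the indicator function of C, the proximal (resolvent) step is exactly the projection onto C, and the iteration becomes projected gradient. The sketch below solves the variational inequality for the illustrative choice A(x) = x − a (Lipschitz continuous and strongly monotone) on the box C = [0,1]ⁿ, whose solution is the projection of a onto C; all names are assumptions for illustration.

```python
def solve_vi_box(a, lam=0.5, iters=200):
    """Projected-gradient special case of a penalization-gradient scheme:
    with Psi the indicator of C, the proximal step (I + lam*beta*dPsi)^(-1)
    is the projection onto C. Here A(x) = x - a and C = [0,1]^n, so the VI
    solution <A(x), y - x> >= 0 for all y in C is the projection of a onto C."""
    proj = lambda v: [min(1.0, max(0.0, vi)) for vi in v]  # prox of indicator
    x = [0.0] * len(a)
    for _ in range(iters):
        grad = [xi - ai for xi, ai in zip(x, a)]            # forward step: A(x)
        x = proj([xi - lam * g for xi, g in zip(x, grad)])  # backward step
    return x
```

    For this strongly monotone A the iteration is a contraction, matching the strong-convergence regime described in the abstract.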

  11. Music analysis and point-set compression

    DEFF Research Database (Denmark)

    Meredith, David

    2015-01-01

    COSIATEC, SIATECCompress and Forth’s algorithm are point-set compression algorithms developed for discovering repeated patterns in music, such as themes and motives that would be of interest to a music analyst. To investigate their effectiveness and versatility, these algorithms were evaluated...... on three analytical tasks that depend on the discovery of repeated patterns: classifying folk song melodies into tune families, discovering themes and sections in polyphonic music, and discovering subject and countersubject entries in fugues. Each algorithm computes a compressed encoding of a point......-set representation of a musical object in the form of a list of compact patterns, each pattern being given with a set of vectors indicating its occurrences. However, the algorithms adopt different strategies in their attempts to discover encodings that maximize compression.The best-performing algorithm on the folk...
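
    The pattern-discovery core these algorithms share can be hinted at with the maximal translatable pattern (MTP) computation: for a point set P and a vector v, the MTP of v is the set of points p in P with p + v also in P. A brief sketch (illustrative only; COSIATEC and SIATECCompress add pattern selection and compression on top of this step):

```python
from collections import defaultdict

def mtps(points):
    """Core step behind SIATEC-style point-set compression: for every ordered
    pair of points compute the translation vector between them, then group the
    source points by vector. The points grouped under a vector v form its
    maximal translatable pattern (MTP): the points still in the set after
    translation by v."""
    pts = sorted(points)
    table = defaultdict(list)
    for p in pts:
        for q in pts:
            if q > p:                        # each ordered pair once
                v = (q[0] - p[0], q[1] - p[1])
                table[v].append(p)           # p is translatable by v (to q)
    return dict(table)
```

    For musical data the points are typically (onset time, pitch) pairs, so an MTP with many occurrences corresponds to repeated material such as a theme or a fugue subject.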

  12. Proximal tubular hypertrophy and enlarged glomerular and proximal tubular urinary space in obese subjects with proteinuria.

    Directory of Open Access Journals (Sweden)

    Ana Tobar

    Full Text Available BACKGROUND: Obesity is associated with glomerular hyperfiltration, increased proximal tubular sodium reabsorption, glomerular enlargement and renal hypertrophy. A single experimental study reported an increased glomerular urinary space in obese dogs. Whether proximal tubular volume is increased in obese subjects and whether their glomerular and tubular urinary spaces are enlarged is unknown. OBJECTIVE: To determine whether proximal tubules and glomerular and tubular urinary space are enlarged in obese subjects with proteinuria and glomerular hyperfiltration. METHODS: Kidney biopsies from 11 non-diabetic obese with proteinuria and 14 non-diabetic lean patients with a creatinine clearance above 50 ml/min and with mild or no interstitial fibrosis were retrospectively analyzed using morphometric methods. The cross-sectional area of the proximal tubular epithelium and lumen, the volume of the glomerular tuft and of Bowman's space and the nuclei number per tubular profile were estimated. RESULTS: Creatinine clearance was higher in the obese than in the lean group (P=0.03). Proteinuria was similarly increased in both groups. Compared to the lean group, the obese group displayed a 104% higher glomerular tuft volume (P=0.001), a 94% higher Bowman's space volume (P=0.003), a 33% higher cross-sectional area of the proximal tubular epithelium (P=0.02) and a 54% higher cross-sectional area of the proximal tubular lumen (P=0.01). The nuclei number per proximal tubular profile was similar in both groups, suggesting that the increase in tubular volume is due to hypertrophy and not to hyperplasia. CONCLUSIONS: Obesity-related glomerular hyperfiltration is associated with proximal tubular epithelial hypertrophy and increased glomerular and tubular urinary space volume in subjects with proteinuria. The expanded glomerular and urinary space is probably a direct consequence of glomerular hyperfiltration. These effects may be involved in the pathogenesis of obesity

  13. Calorimetry end-point predictions

    International Nuclear Information System (INIS)

    Fox, M.A.

    1981-01-01

    This paper describes a portion of the work presently in progress at Rocky Flats in the field of calorimetry. In particular, calorimetry end-point predictions are outlined. The problems associated with end-point predictions and the progress made in overcoming these obstacles are discussed. The two major problems, noise and an accurate description of the heat function, are dealt with to obtain the most accurate results. Data are taken from an actual calorimeter and are processed by means of three different noise reduction techniques. The processed data are then utilized by one to four algorithms, depending on the accuracy desired, to determine the end-point
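
    The paper does not spell out its algorithms, but a common end-point prediction scheme of this kind can be sketched as follows: assume the calorimeter signal relaxes exponentially toward equilibrium, so three equally spaced samples determine the equilibrium value in closed form (Aitken's delta-squared extrapolation). Everything here is an illustrative assumption, not the Rocky Flats implementation.

```python
def predict_end_point(y0, y1, y2):
    """Predict the equilibrium (end-point) value from three equally spaced
    samples of a signal assumed to relax exponentially,
        y(t) = y_inf + B * exp(-t / tau).
    The successive differences then decay geometrically, so the ratio of
    differences estimates rho = exp(-dt / tau) and y_inf follows in closed
    form (Aitken's delta-squared extrapolation)."""
    rho = (y2 - y1) / (y1 - y0)          # estimated decay factor per step
    return y0 + (y1 - y0) / (1.0 - rho)  # extrapolated equilibrium value
```

    Noise in the samples propagates strongly through the difference ratio, which is consistent with the paper's emphasis on noise reduction before prediction.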

  14. SINA: A test system for proximity fuses

    Science.gov (United States)

    Ruizenaar, M. G. A.

    1989-04-01

    SINA, a signal generator that can be used for testing proximity fuses, is described. The circuitry of proximity fuses is presented; the output signal of the RF circuit results from a mixing of the emitted signal and the received signal that is Doppler shifted in frequency by the relative motion of the fuse with respect to the reflecting target or surface. With SINA, digitized and stored target and clutter signals (previously measured) can be transformed to Doppler signals, for example during a real flight. SINA can be used for testing fuse circuitry, for example in the verification of results of computer simulations of the low frequency Doppler signal processing. The software of SINA and its use are explained.

  15. Greedy algorithm with weights for decision tree construction

    KAUST Repository

    Moshkov, Mikhail

    2010-12-01

    An approximate algorithm for minimization of weighted depth of decision trees is considered. A bound on the accuracy of this algorithm is obtained which is unimprovable in the general case. Under some natural assumptions on the class NP, the considered algorithm is close (from the point of view of accuracy) to the best polynomial approximate algorithms for minimization of weighted depth of decision trees.

  17. Isolated Proximal Tibiofibular Dislocation during Soccer

    Directory of Open Access Journals (Sweden)

    Casey Chiu

    2015-01-01

    Full Text Available Proximal tibiofibular dislocations are rarely encountered in the Emergency Department (ED. We present a case involving a man presenting to the ED with left knee pain after making a sharp left turn on the soccer field. His physical exam was only remarkable for tenderness over the lateral fibular head. His X-rays showed subtle abnormalities of the tibiofibular joint. The dislocation was reduced and the patient was discharged from the ED with orthopedic follow-up.

  18. Superconducting proximity effect in topological materials

    Science.gov (United States)

    Reeg, Christopher R.

    In recent years, there has been a renewed interest in the proximity effect due to its role in the realization of topological superconductivity. In this dissertation, we discuss several results that have been obtained in the field of proximity-induced superconductivity and relate the results to the search for Majorana fermions. First, we show that repulsive electron-electron interactions can induce a non-Majorana zero-energy bound state at the interface between a conventional superconductor and a normal metal. We show that this state is very sensitive to disorder, owing to its lack of topological protection. Second, we show that Rashba spin-orbit coupling, which is one of the key ingredients in engineering a topological superconductor, induces triplet pairing in the proximity effect. When the spin-orbit coupling is strong (i.e., when the characteristic energy scale for spin-orbit coupling is comparable to the Fermi energy), the induced singlet and triplet pairing amplitudes can be comparable in magnitude. Finally, we discuss how the size of the proximity-induced gap, which appears in a low-dimensional material coupled to a superconductor, evolves as the thickness of the (quasi-)low-dimensional material is increased. We show that the induced gap can be comparable to the bulk energy gap of the underlying superconductor in materials that are much thicker than the Fermi wavelength, even in the presence of an interfacial barrier and strong Fermi surface mismatch. This result has important experimental consequences for topological superconductivity, as a sizable gap is required to isolate and detect the Majorana modes.

  19. [Proximity and breastfeeding at the maternity hospital].

    Science.gov (United States)

    Fradin-Charrier, Anne-Claire

    2015-01-01

    The establishment of breastfeeding, as well as its duration, are facilitated through the proximity of the mother with her new baby. However, in maternity hospitals, breastfeeding mothers very often leave their baby in the nursery at night time. A study carried out in 2014 in several maternity hospitals put forward suggestions and highlighted areas to improve in everyday practice. Copyright © 2015 Elsevier Masson SAS. All rights reserved.

  20. Unconventional Algorithms: Complementarity of Axiomatics and Construction

    Directory of Open Access Journals (Sweden)

    Gordana Dodig Crnkovic

    2012-10-01

    Full Text Available In this paper, we analyze axiomatic and constructive issues of unconventional computations from a methodological and philosophical point of view. We explain how the new models of algorithms and unconventional computations change the algorithmic universe, making it open and allowing increased flexibility and expressive power that augment creativity. At the same time, the greater power of new types of algorithms also results in the greater complexity of the algorithmic universe, transforming it into the algorithmic multiverse and demanding new tools for its study. That is why we analyze new powerful tools brought forth by local mathematics, local logics, logical varieties and the axiomatic theory of algorithms, automata and computation. We demonstrate how these new tools allow efficient navigation in the algorithmic multiverse. Further work includes study of natural computation by unconventional algorithms and constructive approaches.

  1. Proximity effects in topological insulator heterostructures

    International Nuclear Information System (INIS)

    Li Xiao-Guang; Wu Guang-Fen; Zhang Gu-Feng; Culcer Dimitrie; Zhang Zhen-Yu; Chen Hua

    2013-01-01

    Topological insulators (TIs) are bulk insulators that possess robust helical conducting states along their interfaces with conventional insulators. A tremendous research effort has recently been devoted to TI-based heterostructures, in which conventional proximity effects give rise to a series of exotic physical phenomena. This paper reviews our recent studies on the potential existence of topological proximity effects at the interface between a topological insulator and a normal insulator or other topologically trivial systems. Using first-principles approaches, we have realized the tunability of the vertical location of the topological helical state via intriguing dual-proximity effects. To further elucidate the control parameters of this effect, we have used graphene-based heterostructures as prototypical systems to reveal a more complete phase diagram. On the application side of the topological helical states, we have presented a catalysis example, where the topological helical state plays an essential role in facilitating surface reactions by serving as an effective electron bath. These discoveries lay the foundation for accurate manipulation of the real-space properties of the topological helical state in TI-based heterostructures and pave the way for realization of the salient functionality of topological insulators in future device applications. (topical review - low-dimensional nanostructures and devices)

  2. [Augmentation technique on the proximal humerus].

    Science.gov (United States)

    Scola, A; Gebhard, F; Röderer, G

    2015-09-01

    The treatment of osteoporotic fractures is still a challenge. The advantages of augmentation with respect to primary in vitro stability and the clinical use for the proximal humerus are presented in this article. In this study six paired human humeri were randomized into an augmented and a non-augmented group. Osteosynthesis was performed with a PHILOS plate (Synthes®). In the augmented group the two screws finding purchase in the weakest cancellous bone were augmented. The specimens were tested in a 3-part fracture model in a varus bending test. The augmented PHILOS plates withstood significantly more load cycles until failure. The correlation to bone mineral density (BMD) showed that augmentation could partially compensate for low BMD. The augmentation of the screws in locked plating in a proximal humerus fracture model is effective in improving the primary stability in a cyclic varus bending test. The targeted augmentation of two particular screws in a region of low bone quality within the humeral head was almost as effective as four screws with twice the amount of bone cement. Screw augmentation combined with a knowledge of the local bone quality could be more effective in enhancing the primary stability of a proximal humerus locking plate because the effect of augmentation can be exploited more effectively limiting it to the degree required. The technique of augmentation is simple and can be applied in open and minimally invasive procedures. When the correct procedure is used, complications (cement leakage into the joint) can be avoided.

  3. A proximity effect in adults' contamination intuitions

    Directory of Open Access Journals (Sweden)

    Laura R. Kim

    2011-04-01

    Full Text Available Magical beliefs about contagion via contact (Rozin, Nemeroff, Wane, and Sherrod, 1989) may emerge when people overgeneralize real-world mechanisms of contamination beyond their appropriate boundaries (Lindeman and Aarnio, 2007). Do people similarly overextend knowledge of airborne contamination mechanisms? Previous work has shown that very young children believe merely being close to a contamination source can contaminate an item (Springer and Belk, 1994); we asked whether this same hyper-avoidant intuition is also reflected in adults' judgments. In two studies, we measured adults' ratings of the desirability of an object that had made contact with a source of contamination, an object nearby that had made no contact with the contaminant, and an object far away that had also made no contact. Adults showed a clear proximity effect, wherein objects near the contamination source were perceived to be less desirable than those far away, even though a separate group of adults unanimously acknowledged that contaminants could not possibly have made contact with either the nearby or far-away object (Study 1). The proximity effect also remained robust when a third group of adults was explicitly told that no contaminating particles had made contact with the objects at any time (Study 2). We discuss implications of our findings for extending the scope of magical contagion effects beyond the contact principle, for understanding the persistence of intuitive theories despite broad acceptance of science-based theories, and for constraining interpretations of the developmental work on proximity beliefs.

  4. MOVING WINDOW SEGMENTATION FRAMEWORK FOR POINT CLOUDS

    Directory of Open Access Journals (Sweden)

    G. Sithole

    2012-07-01

    Full Text Available As lidar point clouds become larger, streamed processing becomes more attractive. This paper presents a framework for the streamed segmentation of point clouds, with the intention of segmenting unstructured point clouds in real time. The framework is composed of two main components. The first component segments points within a window shifting over the point cloud. The second component stitches the segments within the windows together. In this fashion a point cloud can be streamed through these two components in sequence, thus producing a segmentation. The algorithm has been tested on an airborne lidar point cloud and some results of the performance of the framework are presented.
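    The two-stage design described above can be sketched in miniature. The toy example below (a 1-D "point cloud"; the window, overlap, and gap parameters are our own illustrative assumptions, not the paper's) segments points inside a sliding window and stitches together segments that share points across window boundaries:

```python
# Sketch of the two-stage idea: (1) segment points inside a window that
# slides over the cloud, (2) stitch window segments that share points.
# 1-D data and all thresholds are illustrative assumptions.

def segment_window(points, max_gap=1.0):
    """Group sorted 1-D points into runs separated by gaps > max_gap."""
    segments, current = [], [points[0]]
    for p in points[1:]:
        if p - current[-1] <= max_gap:
            current.append(p)
        else:
            segments.append(current)
            current = [p]
    segments.append(current)
    return segments

def streamed_segmentation(points, window=10.0, overlap=2.0, max_gap=1.0):
    """Segment overlapping windows, then stitch segments that share points."""
    points = sorted(points)
    start, segments = points[0], []
    while start <= points[-1]:
        chunk = [p for p in points if start <= p < start + window + overlap]
        if chunk:
            for seg in segment_window(chunk, max_gap):
                # Stitch: merge with the previous segment if they share a point.
                if segments and set(seg) & set(segments[-1]):
                    segments[-1] = sorted(set(segments[-1]) | set(seg))
                else:
                    segments.append(seg)
        start += window
    return segments

segments = streamed_segmentation([0.0, 0.5, 1.0, 5.0, 5.5, 11.0, 11.5])
```

    The overlap lets a segment that straddles a window boundary appear in both windows, which is what makes the point-sharing stitch possible.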

  5. Maximum power point tracking

    International Nuclear Information System (INIS)

    Enslin, J.H.R.

    1990-01-01

    A well-engineered renewable remote energy system, utilizing the principle of Maximum Power Point Tracking, can be more cost effective, has a higher reliability and can improve the quality of life in remote areas. This paper reports that a high-efficiency power electronic converter, for converting the output voltage of a solar panel, or wind generator, to the required DC battery bus voltage has been realized. The converter is controlled to track the maximum power point of the input source under varying input and output parameters. Maximum power point tracking for relatively small systems is achieved by maximization of the output current in a battery charging regulator, using an optimized hill-climbing, inexpensive microprocessor-based algorithm. Through practical field measurements it is shown that a minimum input source saving of 15% on 3-5 kWh/day systems can easily be achieved. A total cost saving of at least 10-15% on the capital cost of these systems is achievable for relatively small Remote Area Power Supply systems. The advantages at larger temperature variations and larger power rated systems are much higher. Other advantages include optimal sizing and system monitoring and control.
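    The hill-climbing idea mentioned above can be illustrated with a small perturb-and-observe sketch. The panel power curve and step sizes below are invented for demonstration and are not the converter firmware described in the paper:

```python
# Perturb-and-observe hill climbing toward the maximum power point.
# panel_power is a toy unimodal curve (assumption) standing in for the
# measured charging power; the real regulator maximizes output current.

def panel_power(duty):
    """Toy power curve peaking at duty cycle 0.6 (illustrative)."""
    return max(0.0, 100.0 - 400.0 * (duty - 0.6) ** 2)

def track_mpp(duty=0.1, step=0.01, iterations=200):
    """Perturb the operating point; keep the direction that raises power."""
    power = panel_power(duty)
    direction = 1.0
    for _ in range(iterations):
        candidate = min(max(duty + direction * step, 0.0), 1.0)
        new_power = panel_power(candidate)
        if new_power >= power:
            duty, power = candidate, new_power  # keep climbing
        else:
            direction = -direction              # past the peak: reverse

    return duty, power

duty, power = track_mpp()
```

    Once the climber reaches the peak it oscillates one step around it, which is the characteristic steady-state behavior of perturb-and-observe trackers.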

  6. Treatment of proximal ulna and olecranon fractures by dorsal plating

    NARCIS (Netherlands)

    Kloen, Peter; Buijze, Geert A.

    2009-01-01

    OBJECTIVE: Anatomic reconstruction of proximal ulna and olecranon fractures allowing early mobilization and prevention of ulnohumeral arthritis. INDICATIONS: Comminuted olecranon or proximal ulna fractures (including Monteggia fractures), olecranon fractures extending distally from the coronoid

  7. The female geriatric proximal humeral fracture: protagonist for straight antegrade nailing?

    Science.gov (United States)

    Lindtner, Richard A; Kralinger, Franz S; Kapferer, Sebastian; Hengg, Clemens; Wambacher, Markus; Euler, Simon A

    2017-10-01

    Straight antegrade humeral nailing (SAHN) has become a standard technique for the surgical fixation of proximal humeral fractures, which predominantly affect elderly females. The nail's proximal anchoring point has been demonstrated to be critical to ensure reliable fixation in osteoporotic bone and to prevent iatrogenic damage to the superior rotator cuff bony insertion. Anatomical variations of the proximal humerus, however, may preclude satisfactory anchoring of the nail's proximal end and may bear the risk of rotator cuff violation, even though the nail is inserted as recommended. The aim of this study was to evaluate the anatomical suitability of proximal humeri of geriatric females aged 75 years and older for SAHN. Specifically, we sought to assess the proportion of humeri not anatomically amenable to SAHN for proximal humeral fracture. A total of 303 proximal humeri of 241 females aged 75 years and older (mean age 84.5 ± 5.0 years; range 75-102 years) were analyzed for this study. Multiplanar two-dimensional reformations (true ap, true lateral, and axial) were reconstructed from shoulder computed tomography (CT) data sets. The straight antegrade nail's ideal entry point, "critical point" (CP), and critical distance (CD; distance between ideal entry point and CP) were determined. The rate of proximal humeri not anatomically suitable for SAHN (critical type) was assessed regarding proximal reaming diameters of currently available straight antegrade humeral nails. Overall, 35.6% (108/303) of all proximal humeri were found to be "critical types" (CD straight antegrade nails currently in use. Moreover, 43.2% (131/303) of the humeri were considered "critical types" with regard to the alternatively used larger proximal reaming diameter of 11.5 mm. Mean CD was 9.0 ± 1.7 mm (range 3.5-13.5 mm) and did not correlate with age (r = -0.04, P = 0.54). No significant differences in CD and rate of "critical types" were found between left and right humeri.

  8. Strong Proximities on Smooth Manifolds and Voronoï Diagrams

    OpenAIRE

    Peters, J. F.; Guadagni, C.

    2015-01-01

    This article introduces strongly near smooth manifolds. The main results are (i) second countability of the strongly hit and far-miss topology on a family $\\mathcal{B}$ of subsets on the Lodato proximity space of regular open sets to which singletons are added, (ii) manifold strong proximity, (iii) strong proximity of charts in manifold atlases implies that the charts have nonempty intersection. The application of these results is given in terms of the nearness of atlases and charts of proxim...

  9. NEUTRON ALGORITHM VERIFICATION TESTING

    International Nuclear Information System (INIS)

    COWGILL, M.; MOSBY, W.; ARGONNE NATIONAL LABORATORY-WEST

    2000-01-01

    Active well coincidence counter assays have been performed on uranium metal highly enriched in 235U. The data obtained in the present program, together with highly enriched uranium (HEU) metal data obtained in other programs, have been analyzed using two approaches, the standard approach and an alternative approach developed at BNL. Analysis of the data with the standard approach revealed that the form of the relationship between the measured reals and the 235U mass varied, being sometimes linear and sometimes a second-order polynomial. In contrast, application of the BNL algorithm, which takes into consideration the totals, consistently yielded linear relationships between the totals-corrected reals and the 235U mass. The constants in these linear relationships varied with geometric configuration and level of enrichment. This indicates that, when the BNL algorithm is used, calibration curves can be established with fewer data points and with more certainty than if a standard algorithm is used. However, this potential advantage has only been established for assays of HEU metal. In addition, the method is sensitive to the stability of the natural background in the measurement facility.

  10. Distribution Bottlenecks in Classification Algorithms

    NARCIS (Netherlands)

    Zwartjes, G.J.; Havinga, Paul J.M.; Smit, Gerardus Johannes Maria; Hurink, Johann L.

    2012-01-01

    The abundance of data available on Wireless Sensor Networks makes online processing necessary. In industrial applications, for example, the correct operation of equipment can be the point of interest while raw sampled data is of minor importance. Classification algorithms can be used to make state

  11. Adaptive protection algorithm and system

    Science.gov (United States)

    Hedrick, Paul [Pittsburgh, PA; Toms, Helen L [Irwin, PA; Miller, Roger M [Mars, PA

    2009-04-28

    An adaptive protection algorithm and system for protecting electrical distribution systems traces the flow of power through a distribution system, assigns a value (or rank) to each circuit breaker in the system and then determines the appropriate trip set points based on the assigned rank.
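    As a rough illustration of the ranking idea, the sketch below traces power flow through a toy radial feed tree, ranks each breaker by its distance from the source, and derives a trip set point from the rank so that downstream breakers trip first. The topology, rank rule, and set-point formula are illustrative assumptions, not the patented method:

```python
# Illustrative sketch: breadth-first trace of power flow assigns each
# breaker a rank (depth from the source); the trip threshold shrinks
# with rank so faults clear at the breaker nearest the fault.
# The tree, the rank rule, and the halving formula are all assumptions.

def assign_ranks(feed_tree, root):
    """Rank = number of breakers between this breaker and the source."""
    ranks, frontier = {root: 0}, [root]
    while frontier:
        nxt = []
        for node in frontier:
            for child in feed_tree.get(node, []):
                if child not in ranks:
                    ranks[child] = ranks[node] + 1
                    nxt.append(child)
        frontier = nxt
    return ranks

def trip_set_points(ranks, base_amps=1000.0, factor=0.5):
    """Halve the trip threshold per rank level (illustrative rule)."""
    return {breaker: base_amps * factor ** r for breaker, r in ranks.items()}

tree = {"main": ["feeder_a", "feeder_b"], "feeder_a": ["branch_a1"]}
ranks = assign_ranks(tree, "main")
set_points = trip_set_points(ranks)
```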

  12. Correlation analysis of alveolar bone loss in buccal/palatal and proximal surfaces in rats

    Directory of Open Access Journals (Sweden)

    Carolina Barrera de Azambuja

    2012-12-01

    Full Text Available The aim was to correlate alveolar bone loss in the buccal/palatal and the mesial/distal surfaces of upper molars in rats. Thirty-three 60-day-old male Wistar rats were divided into two groups, one treated with alcohol and the other not treated with alcohol. All rats received silk ligatures on the right upper second molars for 4 weeks. The rats were then euthanized and their maxillae were split and defleshed with sodium hypochlorite (9%). The cemento-enamel junction (CEJ) was stained with 1% methylene blue and the alveolar bone loss on the buccal/palatal surfaces was measured linearly at 5 points on standardized digital photographs. Measurement of the proximal sites was performed by sectioning the hemimaxillae, restaining the CEJ and measuring the alveolar bone loss linearly at 3 points. A calibrated and blinded examiner performed all the measurements. The Intraclass Correlation Coefficient revealed values of 0.96 and 0.89 for buccal/lingual and proximal surfaces, respectively. The Pearson Correlation Coefficient (r) between measurements on buccal/palatal and proximal surfaces was 0.35 and 0.05 for the group treated with alcohol, with and without ligatures, respectively. The best correlations between buccal/palatal and proximal surfaces were observed in animals not treated with alcohol, at sites both with and without ligatures (r = 0.59 and 0.65, respectively). A positive correlation was found between alveolar bone loss on buccal/palatal and proximal surfaces. The correlation is stronger in animals that were not treated with alcohol, at sites without ligatures. Areas with and without ligature-induced periodontal destruction allow detection of alveolar bone loss on buccal/palatal and proximal surfaces.

  13. Algorithmic chemistry

    Energy Technology Data Exchange (ETDEWEB)

    Fontana, W.

    1990-12-13

    In this paper complex adaptive systems are defined by a self-referential loop in which objects encode functions that act back on these objects. A model for this loop is presented. It uses a simple recursive formal language, derived from the lambda-calculus, to provide a semantics that maps character strings into functions that manipulate symbols on strings. The interaction between two functions, or algorithms, is defined naturally within the language through function composition, and results in the production of a new function. An iterated map acting on sets of functions and a corresponding graph representation are defined. Their properties are useful to discuss the behavior of a fixed-size ensemble of randomly interacting functions. This "function gas", or "Turing gas", is studied under various conditions, and evolves cooperative interaction patterns of considerable intricacy. These patterns adapt under the influence of perturbations consisting of the addition of new random functions to the system. Different organizations emerge depending on the availability of self-replicators.

  14. A study of Hough Transform-based fingerprint alignment algorithms

    CSIR Research Space (South Africa)

    Mlambo, CS

    2014-10-01

    Full Text Available the implementation of each algorithm. The comparison is performed by considering the alignment results computed using each group of algorithms when varying the number of minutiae points, rotation angle, and translation. In addition, the memory usage, computing time...

  15. Critical Proximity as a Methodological Move in Techno-Anthropology

    DEFF Research Database (Denmark)

    Birkbak, Andreas; Petersen, Morten Krogh; Elgaard Jensen, Torben

    2015-01-01

    proximity.’ Critical proximity offers an alternative to critical distance, especially with respect to avoiding premature references to abstract panoramas such as democratization and capitalist exploitation in the quest to conduct ‘critical’ analysis. Critical proximity implies, instead, granting the beings...

  16. 75 FR 5009 - Proximity Detection Systems for Underground Mines

    Science.gov (United States)

    2010-02-01

    ... Proximity Detection Systems for Underground Mines AGENCY: Mine Safety and Health Administration, Labor... information regarding whether the use of proximity detection systems would reduce the risk of accidents where... . Information on MSHA-approved proximity detection systems is available on the Internet at http://www.msha.gov...

  17. Proximal sensing for soil carbon accounting

    Science.gov (United States)

    England, Jacqueline R.; Viscarra Rossel, Raphael A.

    2018-05-01

    Maintaining or increasing soil organic carbon (C) is vital for securing food production and for mitigating greenhouse gas (GHG) emissions, climate change, and land degradation. Some land management practices in cropping, grazing, horticultural, and mixed farming systems can be used to increase organic C in soil, but to assess their effectiveness, we need accurate and cost-efficient methods for measuring and monitoring the change. To determine the stock of organic C in soil, one requires measurements of soil organic C concentration, bulk density, and gravel content, but using conventional laboratory-based analytical methods is expensive. Our aim here is to review the current state of proximal sensing for the development of new soil C accounting methods for emissions reporting and in emissions reduction schemes. We evaluated sensing techniques in terms of their rapidity, cost, accuracy, safety, readiness, and their state of development. The most suitable method for measuring soil organic C concentrations appears to be visible-near-infrared (vis-NIR) spectroscopy and, for bulk density, active gamma-ray attenuation. Sensors for measuring gravel have not been developed, but an interim solution with rapid wet sieving and automated measurement appears useful. Field-deployable, multi-sensor systems are needed for cost-efficient soil C accounting. Proximal sensing can be used for soil organic C accounting, but the methods need to be standardized and procedural guidelines need to be developed to ensure proficient measurement and accurate reporting and verification. These are particularly important if the schemes use financial incentives for landholders to adopt management practices to sequester soil organic C. We list and discuss requirements for developing new soil C accounting methods based on proximal sensing, including requirements for recording, verification, and auditing.

  18. Modulation Algorithms for Manipulating Nuclear Spin States

    OpenAIRE

    Liu, Boyang; Zhang, Ming; Dai, Hong-Yi

    2013-01-01

    We explore the impact of exact frequency modulation on the transition time of steering nuclear spin states from a theoretical point of view. 1-stage and 2-stage Frequency-Amplitude-Phase modulation (FAPM) algorithms are proposed in contrast with 1-stage and 3-stage Amplitude-Phase modulation (APM) algorithms. Sufficient conditions are further presented for transiting nuclear spin states within the specified time by these four modulation algorithms. It is demonstrated that transition time performa...

  19. Algorithms for reconstructing images for industrial applications

    International Nuclear Information System (INIS)

    Lopes, R.T.; Crispim, V.R.

    1986-01-01

    Several algorithms for reconstructing objects from their projections are being studied in our Laboratory for industrial applications. Such algorithms are useful for locating the position and shape of different compositions of materials in the object. A comparative study of two algorithms is made. The two investigated algorithms are the MART (Multiplicative-Algebraic Reconstruction Technique) and the Convolution Method. The comparison is carried out from the point of view of the quality of the reconstructed image, number of views, and cost. (Author) [pt

  20. Keldysh proximity action for disordered superconductors

    International Nuclear Information System (INIS)

    Feigel'man, M.V.; Larkin, A.I.; Skvortsov, M.A.

    2005-01-01

    We review a novel approach to the superconductive proximity effect in disordered normal-superconducting (N-S) structures. The method is based on the multicharge Keldysh action and is suitable for the treatment of interaction and fluctuation effects. As an application of the formalism, we study the subgap conductance and noise in a two-dimensional N-S system in the presence of the electron-electron interaction in the Cooper channel. It is shown that the singular nature of the interaction correction at large scales leads to a nonmonotonic temperature, voltage and magnetic field dependence of the Andreev conductance. (author)

  1. Phonon structure in proximity tunnel junctions

    International Nuclear Information System (INIS)

    Zarate, H.G.; Carbotte, J.P.

    1985-01-01

    We have iterated to convergence, for the first time, a set of four coupled real axis Eliashberg equations for the superconducting gap and renormalization functions on each side of a proximity sandwich. We find that the phenomenological procedures developed to extract the size of the normal side electron-phonon interaction from tunneling data are often reasonable but may in some cases need modifications. In all the cases considered the superconducting phonon structure reflected on the normal side, as well as other structures, shows considerable agreement with experiment as to size, shape, and variation with barrier transmission coefficient. Finally, we study the effects of depairing on these structures

  2. Proximal iliotibial band syndrome: case report

    Directory of Open Access Journals (Sweden)

    Guilherme Guadagnini Falotico

    2013-08-01

    Full Text Available OBJECTIVE: Overuse injuries of the hip joint occur commonly in sports practitioners and currently, due to technical advances in diagnostic imaging, especially magnetic resonance imaging (MRI), are often misdiagnosed. Recently, a group of patients, all female, was reported with pain and swelling in the pelvic region. T2-weighted MRI showed increased signal in the enthesis of the iliotibial band (ITB) along the lower border of the iliac tubercle. We report a case of a 34-year-old woman, a non-professional runner, with pain at the iliac crest, no history of trauma, and MRI findings compatible with proximal iliotibial band syndrome.

  3. Noise measurements on proximity effect bridges

    International Nuclear Information System (INIS)

    Decker, S.K.; Mercereau, J.E.

    1975-01-01

    Audio-frequency noise density measurements were performed on weakly superconducting proximity effect bridges using a cooled transformer and a room-temperature low-noise preamplifier. The noise temperature of the measuring system is approximately 4 K for a 0.9 Ω resistor. Noise density was measured as a function of bias current and temperature for the bridges. Excess noise, above that expected from Johnson noise for a resistor equal to the dynamic resistance of the bridges, was observed in the region near the critical current of the device. At currents high compared to the critical current, the noise density closely approaches that given by Johnson noise.

  4. Robotics in hostile environment I.S.I.S. robot - automatic positioning and docking with proximity and force feedback sensors

    Energy Technology Data Exchange (ETDEWEB)

    Gery, D

    1987-01-01

    Recent improvements in control command systems and the development of tactile proximity and force feedback sensors make it possible to robotize complex inspection and maintenance operations in hostile environments which could not have been performed by classical remotely operated manipulators. We describe the I.S.I.S. robot characteristics, the control command system software principles, and the tactile and force-torque sensors which have been developed for the different sequences of a hostile-environment inspection and repair: generation of access trajectories with obstacle avoidance, and final positioning and docking using parametric algorithms that take into account measurements from the end-of-arm proximity and force-torque sensors.

  5. Handbook of floating-point arithmetic

    CERN Document Server

    Muller, Jean-Michel; de Dinechin, Florent; Jeannerod, Claude-Pierre; Joldes, Mioara; Lefèvre, Vincent; Melquiond, Guillaume; Revol, Nathalie; Torres, Serge

    2018-01-01

    This handbook is a definitive guide to the effective use of modern floating-point arithmetic, which has considerably evolved, from the frequently inconsistent floating-point number systems of early computing to the recent IEEE 754-2008 standard. Most of computational mathematics depends on floating-point numbers, and understanding their various implementations will allow readers to develop programs specifically tailored for the standard’s technical features. Algorithms for floating-point arithmetic are presented throughout the book and illustrated where possible by example programs which show how these techniques appear in actual coding and design. The volume itself breaks its core topic into four parts: the basic concepts and history of floating-point arithmetic; methods of analyzing floating-point algorithms and optimizing them; implementations of IEEE 754-2008 in hardware and software; and useful extensions to the standard floating-point system, such as interval arithmetic, double- and triple-word arithm...
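    A small demonstration of why the book's subject matters: binary floating point cannot represent 0.1 exactly, so a naive sum of ten copies drifts away from 1.0, while an error-compensated summation (here Python's math.fsum) returns the correctly rounded exact sum:

```python
# 0.1 has no finite binary expansion, so each stored 0.1 carries a tiny
# representation error that naive accumulation compounds. math.fsum
# tracks the rounding error terms and rounds only once at the end.
import math

naive = sum([0.1] * 10)          # accumulates representation error
exact = math.fsum([0.1] * 10)    # error-free summation, one final rounding

drift = abs(naive - 1.0)         # on the order of one ulp of 1.0
```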

  6. Cortical thickness estimation of the proximal femur from multi-view dual-energy X-ray absorptiometry (DXA)

    Science.gov (United States)

    Tsaousis, N.; Gee, A. H.; Treece, G. M.; Poole, K. E. S.

    2013-02-01

    Hip fracture is the leading cause of acute orthopaedic hospital admission amongst the elderly, with around a third of patients not surviving one year post-fracture. Although various preventative therapies are available, patient selection is difficult. The current state-of-the-art risk assessment tool (FRAX) ignores focal structural defects, such as cortical bone thinning, a critical component in characterizing hip fragility. Cortical thickness can be measured using CT, but this is expensive and involves a significant radiation dose. Instead, Dual-Energy X-ray Absorptiometry (DXA) is currently the preferred imaging modality for assessing hip fracture risk and is used routinely in clinical practice. Our ambition is to develop a tool to measure cortical thickness using multi-view DXA instead of CT. In this initial study, we work with digitally reconstructed radiographs (DRRs) derived from CT data as a surrogate for DXA scans: this enables us to compare directly the thickness estimates with the gold-standard CT results. Our approach involves a model-based femoral shape reconstruction followed by a data-driven algorithm to extract numerous cortical thickness point estimates. In a series of experiments on the shaft and trochanteric regions of 48 proximal femurs, we validated our algorithm and established its performance limits using 20 views in the range 0°-171°: estimation errors were 0.19 ± 0.53 mm (mean ± one standard deviation). In a more clinically viable protocol using four views in the range 0°-51°, where no other bony structures obstruct the projection of the femur, measurement errors were -0.07 ± 0.79 mm.

  7. K-means Clustering: Lloyd's algorithm

    Indian Academy of Sciences (India)

    K-means Clustering: Lloyd's algorithm. Refines clusters iteratively: cluster points using a Voronoi partitioning of the centers; the centroids of the clusters determine the new centers. Bad example: k = 3, n = 4.
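    The iteration summarized on the slide can be sketched in a few lines; the 1-D data and starting centers below are illustrative:

```python
# Lloyd's algorithm on 1-D data: assign each point to its nearest
# center (Voronoi step), move each center to its cluster's centroid,
# and repeat until the centers stop moving.

def lloyd(points, centers, iterations=100):
    for _ in range(iterations):
        clusters = {c: [] for c in centers}
        for p in points:                          # Voronoi assignment
            nearest = min(centers, key=lambda c: abs(p - c))
            clusters[nearest].append(p)
        new_centers = [sum(pts) / len(pts) if pts else c
                       for c, pts in clusters.items()]
        if new_centers == centers:                # converged
            break
        centers = new_centers
    return sorted(centers)

centers = lloyd([1.0, 1.2, 0.8, 9.0, 9.4, 8.6], [0.0, 5.0])
```

    Note the slide's caveat: with a bad initialization (e.g. k = 3 centers for n = 4 points), Lloyd's iteration converges only to a local optimum.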

  8. A Sweepline Algorithm for Generalized Delaunay Triangulations

    DEFF Research Database (Denmark)

    Skyum, Sven

    We give a deterministic O(n log n) sweepline algorithm to construct the generalized Voronoi diagram for n points in the plane or rather its dual the generalized Delaunay triangulation. The algorithm uses no transformations and it is developed solely from the sweepline paradigm together...

  9. Two General Extension Algorithms of Latin Hypercube Sampling

    Directory of Open Access Journals (Sweden)

    Zhi-zhao Liu

    2015-01-01

    Full Text Available To reserve original sampling points and thereby reduce the number of simulation runs, two general extension algorithms of Latin Hypercube Sampling (LHS) are proposed. The extension algorithms start with an original LHS of size m and construct a new LHS of size m+n that contains as many of the original points as possible. In order to obtain a strict LHS of larger size, some original points might be deleted. The relationship of the original sampling points in the new LHS structure is shown by a simple undirected acyclic graph. The basic general extension algorithm is proposed to reserve the most original points, but it costs too much time. Therefore, a general extension algorithm based on a greedy algorithm is proposed to reduce the extension time, though it cannot guarantee to contain the most original points. These algorithms are illustrated by an example and applied to evaluating sample means to demonstrate their effectiveness.
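    For context, a plain LHS of size m (the starting design that the extension algorithms grow) can be sketched as follows; the stratified-shuffle construction is the standard one, and the parameter names are ours:

```python
# Plain Latin Hypercube Sample of size m in `dims` dimensions: cut each
# dimension into m equal strata, shuffle the strata independently per
# dimension, and draw one uniform value inside each assigned stratum,
# so every stratum of every dimension contains exactly one point.
import random

def latin_hypercube(m, dims, seed=0):
    rng = random.Random(seed)
    strata = [list(range(m)) for _ in range(dims)]
    for s in strata:
        rng.shuffle(s)           # independent stratum order per dimension
    return [[(strata[d][i] + rng.random()) / m for d in range(dims)]
            for i in range(m)]

design = latin_hypercube(5, 2)
```

    Extending such a design from size m to m+n while keeping the one-point-per-stratum property is exactly what forces the papers' algorithms to occasionally delete original points.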

  10. Residential proximity to agricultural fumigant use and IQ, attention and hyperactivity in 7-year old children.

    Science.gov (United States)

    Gunier, Robert B; Bradman, Asa; Castorina, Rosemary; Holland, Nina T; Avery, Dylan; Harley, Kim G; Eskenazi, Brenda

    2017-10-01

    Our objective was to examine the relationship between residential proximity to agricultural fumigant use and neurodevelopment in 7-year old children. Participants were living in the agricultural Salinas Valley, California and enrolled in the Center for the Health Assessment of Mothers and Children Of Salinas (CHAMACOS) study. We administered the Wechsler Intelligence Scale for Children (4th Edition) to assess cognition and the Behavioral Assessment System for Children (2nd Edition) to assess behavior. We estimated agricultural fumigant use within 3, 5 and 8 km of residences during pregnancy and from birth to age 7 using California's Pesticide Use Report data. We evaluated the association between prenatal (n = 285) and postnatal (n = 255) residential proximity to agricultural use of methyl bromide, chloropicrin, metam sodium and 1,3-dichloropropene with neurodevelopment. We observed decreases of 2.6 points (95% Confidence Interval (CI): -5.2, 0.0) and 2.4 points (95% CI: -4.7, -0.2) in Full-Scale intelligence quotient for each ten-fold increase in methyl bromide and chloropicrin use within 8 km of the child's residences from birth to 7-years of age, respectively. There were no associations between residential proximity to use of other fumigants and cognition or proximity to use of any fumigant and hyperactivity or attention problems. These findings should be explored in larger studies. Copyright © 2017. Published by Elsevier Inc.

  11. Event-triggered Decision Propagation in Proximity Networks

    Directory of Open Access Journals (Sweden)

    Soumik eSarkar

    2014-12-01

    Full Text Available This paper proposes a novel event-triggered formulation as an extension of the recently developed generalized gossip algorithm for decision/awareness propagation in mobile sensor networks modeled as proximity networks. The key idea is to expend energy for communication (message transmission and reception) only when there is an event of interest in the region of surveillance. The idea is implemented by using an agent's belief about the presence of a hotspot as feedback to change its probability of (communication) activity. In the original formulation, the evolution of network topology and the dynamics of decision propagation were completely decoupled, which is no longer the case as a consequence of this feedback policy. Analytical results and numerical experiments are presented to show a significant gain in energy savings with no change in the first-moment characteristics of decision propagation. However, numerical experiments show that the second-moment characteristics may change, and theoretical results are provided for upper and lower bounds on the second-moment characteristics. Effects of false alarms on network formation and communication activity are also investigated.

  12. Congenital anomalies and proximity to landfill sites.

    LENUS (Irish Health Repository)

    Boyle, E

    2004-01-01

    The occurrence of congenital anomalies in proximity to municipal landfill sites in the Eastern Region (counties Dublin, Kildare, Wicklow) was examined by small area (district electoral division), distance and clustering tendencies in relation to 83 landfills, five of which were major sites. The study included 2136 cases of congenital anomaly, 37,487 births and 1423 controls between 1986 and 1990. For the more populous areas of the region 50% of the population lived within 2-3 km of a landfill, and within 4-5 km for more rural areas. In the area-level analysis, the standardised prevalence ratios, empirical and full Bayesian modelling, and Kulldorff's spatial scan statistic found no association between the residential area of cases and the location of landfills. In the case-control analysis, the mean distance of cases and controls from the nearest landfill was similar. The odds ratios of cases compared to controls for increasing distances from all landfills and major landfills showed no significant difference from the baseline value of 1. The kernel and K methods showed no tendency of cases to cluster in relation to landfills. In conclusion, congenital anomalies were not found to occur more commonly in proximity to municipal landfills.

  13. Obesity and supermarket access: proximity or price?

    Science.gov (United States)

    Drewnowski, Adam; Aggarwal, Anju; Hurvitz, Philip M; Monsivais, Pablo; Moudon, Anne V

    2012-08-01

    We examined whether physical proximity to supermarkets or supermarket price was more strongly associated with obesity risk. The Seattle Obesity Study (SOS) collected and geocoded data on home addresses and food shopping destinations for a representative sample of adult residents of King County, Washington. Supermarkets were stratified into 3 price levels based on average cost of the market basket. Sociodemographic and health data were obtained from a telephone survey. Modified Poisson regression was used to test the associations between obesity and supermarket variables. Only 1 in 7 respondents reported shopping at the nearest supermarket. The risk of obesity was not associated with street network distances between home and the nearest supermarket or the supermarket that SOS participants reported as their primary food source. The type of supermarket, by price, was found to be inversely and significantly associated with obesity rates, even after adjusting for individual-level sociodemographic and lifestyle variables, and proximity measures (adjusted relative risk = 0.34; 95% confidence interval = 0.19, 0.63). Improving physical access to supermarkets may be one strategy to deal with the obesity epidemic; improving economic access to healthy foods is another.

  14. Software Modules for the Proximity-1 Space Link Interleaved Time Synchronization (PITS) Protocol

    Science.gov (United States)

    Woo, Simon S.; Veregge, John R.; Gao, Jay L.; Clare, Loren P.; Mills, David

    2012-01-01

    The Proximity-1 Space Link Interleaved Time Synchronization (PITS) protocol provides time distribution and synchronization services for space systems. A software prototype implementation of the PITS algorithm has been developed that also provides the test harness to evaluate the key functionalities of PITS with simulated data source and sink. PITS integrates time synchronization functionality into the link layer of the CCSDS Proximity-1 Space Link Protocol. The software prototype implements the network packet format, data structures, and transmit- and receive-timestamp function for a time server and a client. The software also simulates the transmit and receive-time stamp exchanges via UDP (User Datagram Protocol) socket between a time server and a time client, and produces relative time offsets and delay estimates.
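    The transmit/receive-timestamp exchange described above resembles the classic NTP four-timestamp scheme (David Mills's design lineage); a sketch of that standard offset and round-trip-delay computation is below. The toy scenario and variable names are illustrative, not the PITS packet format:

```python
# Classic four-timestamp time transfer: t1 = client transmit,
# t2 = server receive, t3 = server transmit, t4 = client receive
# (t1, t4 on the client clock; t2, t3 on the server clock).

def offset_and_delay(t1, t2, t3, t4):
    """Relative clock offset and round-trip delay from one exchange."""
    offset = ((t2 - t1) + (t3 - t4)) / 2.0
    delay = (t4 - t1) - (t3 - t2)
    return offset, delay

# Toy scenario: server clock runs 5.0 s ahead; one-way light time 1.0 s;
# server processing time 0.5 s.
t1 = 100.0                  # client sends request (client clock)
t2 = t1 + 1.0 + 5.0         # server receives (server clock)
t3 = t2 + 0.5               # server replies (server clock)
t4 = t1 + 1.0 + 0.5 + 1.0   # client receives (client clock)

offset, delay = offset_and_delay(t1, t2, t3, t4)
```

    The offset formula assumes a symmetric path, which is why deep-space protocols in the Proximity-1 setting interleave repeated exchanges to refine the estimate.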

  15. Pseudo-deterministic Algorithms

    OpenAIRE

    Goldwasser , Shafi

    2012-01-01

    International audience; In this talk we describe a new type of probabilistic algorithm which we call Bellagio Algorithms: a randomized algorithm which is guaranteed to run in expected polynomial time, and to produce a correct and unique solution with high probability. These algorithms are pseudo-deterministic: they cannot be distinguished from deterministic algorithms in polynomial time by a probabilistic polynomial time observer with black box access to the algorithm. We show a necessary an...

  16. Common iliac vein thrombosis as a result of proximal venous stenosis following renal transplantation: A case report

    Directory of Open Access Journals (Sweden)

    Atish Chopra

    2016-12-01

    Full Text Available Proximal iliac vein stenosis resulting in iliac vein thrombus and venous outflow obstruction in renal transplant patients is an exceedingly rare occurrence. We present a case of a 63-year-old male who underwent deceased donor renal transplantation and presented 12 days later with ipsilateral lower extremity swelling and plateauing serum creatinine. Further work-up demonstrated proximal iliac vein deep venous thrombosis and anticoagulation was initiated. However, propagation of the thrombus developed despite receiving therapeutic anticoagulation. Subsequent venography demonstrated proximal iliac venous stenosis and the patient underwent successful catheter-directed alteplase thrombolysis, inferior vena cava filter placement and iliac vein stenting with salvage of the renal allograft. A diagnostic strategy and management algorithm for iliac vein stenosis and thrombosis in a renal transplant recipient is proposed.

  17. Children’s proximal societal conditions

    DEFF Research Database (Denmark)

    Stanek, Anja Hvidtfeldt

    or the children’s everyday life, but something that is represented through societal structures and actual persons participating (in political ways) within the institutional settings, in ways that has meaning to children’s possibilities to participate, learn and develop. Understanding school or daycare as (part of......) the children’s proximal societal conditions for development and learning, means for instance that considerations about an inclusive agenda in a (Danish) welfare state with well-developed school- and daycare system, are no longer simply thoughts about the school having space for as many pupils as possible...... (schools for all). Such thoughts can or should be supplemented by reflections about which version of ‘the societal’ we wish to present our children with, and which version of ‘the societal’ we wish to set up as the condition for children’s participation and development. These questions require an ethical...

  18. Image Registration Algorithm Based on Parallax Constraint and Clustering Analysis

    Science.gov (United States)

    Wang, Zhe; Dong, Min; Mu, Xiaomin; Wang, Song

    2018-01-01

    To resolve the problem of slow computation speed and low matching accuracy in image registration, a new image registration algorithm based on parallax constraint and clustering analysis is proposed. Firstly, the Harris corner detection algorithm is used to extract the feature points of the two images. Secondly, the Normalized Cross Correlation (NCC) function is used to perform approximate matching of the feature points, yielding the initial feature pairs. Then, according to the parallax constraint condition, the initial feature pairs are preprocessed by the K-means clustering algorithm, which removes feature point pairs with obvious errors introduced by the approximate matching. Finally, the Random Sample Consensus (RANSAC) algorithm is adopted to optimize the feature points and obtain the final matching result, realizing fast and accurate image registration. The experimental results show that the proposed algorithm improves the accuracy of image matching while ensuring the real-time performance of the algorithm.
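The corner-detection and NCC steps need image data, but the outlier-rejection stage can be illustrated on its own. The sketch below is not the paper's implementation: it runs RANSAC on synthetic correspondences to estimate a pure 2D translation (a stand-in for the general transform), rejecting the grossly wrong pairs that the clustering step would also remove; all names and data are illustrative.

```python
import random

def ransac_translation(pairs, iters=200, tol=1.0, seed=0):
    """Estimate a 2D translation (dx, dy) from point correspondences with RANSAC."""
    rng = random.Random(seed)
    best_model, best_inliers = None, []
    for _ in range(iters):
        (xa, ya), (xb, yb) = rng.choice(pairs)       # minimal sample: one pair
        dx, dy = xb - xa, yb - ya
        inliers = [p for p in pairs
                   if abs((p[1][0] - p[0][0]) - dx) < tol
                   and abs((p[1][1] - p[0][1]) - dy) < tol]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = (dx, dy), inliers
    # Refit on all inliers (least squares = mean displacement).
    n = len(best_inliers)
    dx = sum(b[0] - a[0] for a, b in best_inliers) / n
    dy = sum(b[1] - a[1] for a, b in best_inliers) / n
    return (dx, dy), best_inliers

# Synthetic data: true translation (5, -3), plus two gross mismatches.
good = [((x, y), (x + 5.0, y - 3.0)) for x, y in [(0,0),(1,2),(3,1),(4,4),(2,5),(6,0)]]
bad = [((0, 0), (40.0, 7.0)), ((1, 1), (-20.0, 3.0))]
model, inliers = ransac_translation(good + bad)
print(model, len(inliers))
```

Any sample drawn from a correct pair reproduces the true translation and collects all six good pairs as inliers, so the mismatches never survive the consensus vote.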

  19. Perturbation resilience and superiorization of iterative algorithms

    International Nuclear Information System (INIS)

    Censor, Y; Davidi, R; Herman, G T

    2010-01-01

    Iterative algorithms aimed at solving some problems are discussed. For certain problems, such as finding a common point in the intersection of a finite number of convex sets, there often exist iterative algorithms that impose very little demand on computer resources. For other problems, such as finding that point in the intersection at which the value of a given function is optimal, algorithms tend to need more computer memory and longer execution time. A methodology is presented whose aim is to produce automatically for an iterative algorithm of the first kind a 'superiorized version' of it that retains its computational efficiency but nevertheless goes a long way toward solving an optimization problem. This is possible to do if the original algorithm is 'perturbation resilient', which is shown to be the case for various projection algorithms for solving the consistent convex feasibility problem. The superiorized versions of such algorithms use perturbations that steer the process in the direction of a superior feasible point, which is not necessarily optimal, with respect to the given function. After presenting these intuitive ideas in a precise mathematical form, they are illustrated in image reconstruction from projections for two different projection algorithms superiorized for the function whose value is the total variation of the image.
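A minimal toy sketch of the idea (not from the paper): alternating projections onto two half-planes solve the feasibility problem, and interleaved summable perturbation steps steer the iterates toward a smaller value of the objective ||x||², ending at a superior, though not necessarily optimal, feasible point.

```python
def project_halfplane(x, a, b):
    """Project x onto the half-plane {y : a[0]*y[0] + a[1]*y[1] <= b}."""
    ax = a[0]*x[0] + a[1]*x[1]
    if ax <= b:
        return x
    s = (ax - b) / (a[0]**2 + a[1]**2)
    return (x[0] - s*a[0], x[1] - s*a[1])

def superiorized_pocs(x, constraints, steps=200):
    """Alternating projections, superiorized for the objective ||x||^2."""
    for k in range(steps):
        # Perturbation: step along -gradient direction of ||x||^2,
        # with summable step sizes beta_k (perturbation resilience).
        beta = 0.5 ** k
        norm = (x[0]**2 + x[1]**2) ** 0.5
        if norm > 0:
            x = (x[0] - beta * x[0]/norm, x[1] - beta * x[1]/norm)
        # Basic algorithm: project onto each constraint set in turn.
        for a, b in constraints:
            x = project_halfplane(x, a, b)
    return x

# Feasible set: x + y >= 2 (written as -x - y <= -2) and x - y <= 1.
C = [((-1.0, -1.0), -2.0), ((1.0, -1.0), 1.0)]
x = superiorized_pocs((10.0, 0.0), C)
print(x)
```

Plain alternating projections from the same start stop at (5.5, 4.5); the superiorized run ends at a feasible point of visibly smaller norm, while the vanishing perturbations leave the feasibility-seeking behavior intact.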

  20. Bayesian analysis of Markov point processes

    DEFF Research Database (Denmark)

    Berthelsen, Kasper Klitgaard; Møller, Jesper

    2006-01-01

    Recently Møller, Pettitt, Berthelsen and Reeves introduced a new MCMC methodology for drawing samples from a posterior distribution when the likelihood function is only specified up to a normalising constant. We illustrate the method in the setting of Bayesian inference for Markov point processes...... a partially ordered Markov point process as the auxiliary variable. As the method requires simulation from the "unknown" likelihood, perfect simulation algorithms for spatial point processes become useful....

  1. Multimodal optimization by using hybrid of artificial bee colony algorithm and BFGS algorithm

    Science.gov (United States)

    Anam, S.

    2017-10-01

    Optimization has become one of the important fields in Mathematics. Many problems in engineering and science can be formulated as optimization problems, and they may have many local optima. The optimization problem with many local optima, known as the multimodal optimization problem, is the problem of finding the global solution. Several metaheuristic methods have been proposed to solve multimodal optimization problems, such as Particle Swarm Optimization (PSO), the Genetic Algorithm (GA), and the Artificial Bee Colony (ABC) algorithm. The performance of the ABC algorithm is better than or similar to that of other population-based algorithms, with the advantage of employing fewer control parameters. The ABC algorithm also has the advantages of strong robustness, fast convergence and high flexibility. However, it suffers from premature convergence in the later search period, and the accuracy of the optimal value sometimes cannot meet the requirements. The Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm is a good iterative method for finding a local optimum, and compares favorably with other local optimization methods. Based on the advantages of the ABC algorithm and the BFGS algorithm, this paper proposes a hybrid of the two to solve the multimodal optimization problem. In the first step, the ABC algorithm is run to find a point; in the second step, that point is used as the initial point of the BFGS algorithm. The results show that the hybrid method overcomes the problems of the basic ABC algorithm for almost all test functions. However, if the shape of the function is flat, the proposed method does not work well.
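The paper's parameter settings are not given here, so the following is a simplified stand-in for the two-step scheme: a crude bee-colony-style random search explores a 1-D Rastrigin function globally, and a plain numerical gradient descent (substituting for BFGS, which needs more machinery than fits a sketch) polishes the returned point into a nearby stationary point.

```python
import math
import random

def f(x):
    """1-D Rastrigin: many local minima, global minimum at x = 0."""
    return x*x - 10.0*math.cos(2.0*math.pi*x) + 10.0

def abc_search(lo, hi, n_bees=20, cycles=100, seed=1):
    """Very simplified artificial-bee-colony-style global search."""
    rng = random.Random(seed)
    foods = [rng.uniform(lo, hi) for _ in range(n_bees)]
    for _ in range(cycles):
        for i in range(n_bees):
            # Employed bee: local move relative to a random partner source.
            partner = rng.choice(foods)
            cand = foods[i] + rng.uniform(-1, 1) * (foods[i] - partner)
            cand = min(hi, max(lo, cand))
            if f(cand) < f(foods[i]):       # greedy selection
                foods[i] = cand
        # Scout bee: replace the worst food source with a random one.
        worst = max(range(n_bees), key=lambda i: f(foods[i]))
        foods[worst] = rng.uniform(lo, hi)
    return min(foods, key=f)

def local_descent(x, lr=0.002, iters=800, h=1e-6):
    """Gradient descent with a central-difference derivative (BFGS stand-in);
    lr is chosen below 1/L for Rastrigin's curvature bound L ~ 2 + 40*pi^2."""
    for _ in range(iters):
        g = (f(x + h) - f(x - h)) / (2 * h)
        x -= lr * g
    return x

x0 = abc_search(-5.12, 5.12)      # step 1: global search
x_star = local_descent(x0)        # step 2: local refinement
print(x_star, f(x_star))
```

The refinement never worsens the objective (the step size is below the stability threshold) and drives the gradient toward zero; whether the final point is the global minimum still depends on the basin the global phase found, which is exactly the division of labor the abstract describes.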

  2. Characteristic point algorithm in laser ektacytometry of red blood cells

    Science.gov (United States)

    Nikitin, S. Yu.; Ustinov, V. D.

    2018-01-01

    We consider the problem of measuring red blood cell deformability by laser diffractometry in shear flow (ektacytometry). A new equation is derived that relates the parameters of the diffraction pattern to the width of the erythrocyte deformability distribution. The numerical simulation method shows that this equation provides a higher accuracy of measurements in comparison with the analogous equation obtained by us earlier.

  3. Algorithms and Data Structures for Strings, Points and Integers

    DEFF Research Database (Denmark)

    Vind, Søren Juhl

    a string under a compression scheme that can achieve better than entropy compression. We also give improved results for the substring concatenation problem, and an extension of our structure can be used as a black box to get an improved solution to the previously studied dynamic text static pattern problem....... Compressed Pattern Matching. In the streaming model, input data flows past a client one item at a time, but is far too large for the client to store. The annotated streaming model extends the model by introducing a powerful but untrusted annotator (representing “the cloud”) that can annotate input elements...... with additional information, sent as one-way communication to the client. We generalize the annotated streaming model to be able to solve problems on strings and present a data structure that allows us to trade off client space and annotation size. This lets us exploit the power of the annotator. In compressed...

  4. Beam finding algorithms at the interaction point of B factories

    International Nuclear Information System (INIS)

    Kozanecki, W.

    1992-10-01

    We review existing methods to bring beams into collision in circular machines, and examine collision alignment strategies proposed for e+e- B-factories. The two-ring feature of such machines, while imposing more stringent demands on beam control, also opens up new diagnostic possibilities.

  5. A medial point cloud based algorithm for dental cast segmentation

    NARCIS (Netherlands)

    Kustra, J.; Jager, de M.K.J.; Jalba, A.C.; Telea, A.C.

    2014-01-01

    Although the detailed oral anatomy can be acquired using techniques such as Computer Tomography, the development of mass consumer products is dependent of non-invasive, safe acquisition techniques such as the bite imprint, commonly used in orthodontic treatments. However, bite imprints provide only

  6. Inter-organizational proximity in the context of logistics – research challenges

    Directory of Open Access Journals (Sweden)

    Patrycja Klimas

    2015-03-01

    Full Text Available Background: One of the major areas of modern research connected with management issues covers inter-organizational networks (including supply chains) and the cooperation processes aimed at improving the effectiveness of their performance in such networks. Logistics is the main factor responsible for the effectiveness of the supply chain. A possible and quite new direction of research in the area of the performance of inter-organizational cooperation processes is the proximity hypothesis, which is considered in five dimensions (geographical, organizational, social, cognitive, and institutional). However, according to many authors, there is a lack of research on supply chains conducted from the logistics point of view. The proximity hypothesis in this area of research can be seen as a kind of novum. Therefore, this paper presents the proximity concept from the perspective of management science, an overview of prior research covering inter-organizational proximity in the supply chain from the logistics point of view, as well as possible future directions of empirical efforts. Methods: The aim of this paper is to present previous theoretical and empirical results of research covering inter-organizational proximity in logistics and to show current and up-to-date research challenges in this area. The method of critical analysis of the literature is used to realize the goal constructed this way. Results: Knowledge about the influence of inter-organizational proximity on the performance of supply chains is rather limited, and the research conducted so far is rather fragmentary and not free of limitations of a conceptual and methodological nature. Additional rationales for further research in this area include the knowledge and cognitive gaps identified in this paper. 
According to the authors, the aims of future empirical research should be as follows: (1) unification and update of used conceptual and methodological approaches

  7. Sub-quadratic decoding of one-point hermitian codes

    DEFF Research Database (Denmark)

    Nielsen, Johan Sebastian Rosenkilde; Beelen, Peter

    2015-01-01

    We present the first two sub-quadratic complexity decoding algorithms for one-point Hermitian codes. The first is based on a fast realization of the Guruswami-Sudan algorithm using state-of-the-art algorithms from computer algebra for polynomial-ring matrix minimization. The second is a power decoding algorithm: an extension of classical key equation decoding which gives a probabilistic decoding algorithm up to the Sudan radius. We show how the resulting key equations can be solved by the matrix minimization algorithms from computer algebra, yielding similar asymptotic complexities.

  8. Detection of proximal caries using digital radiographic systems with different resolutions.

    Science.gov (United States)

    Nikneshan, Sima; Abbas, Fatemeh Mashhadi; Sabbagh, Sedigheh

    2015-01-01

    Dental radiography is an important tool for detection of caries, and digital radiography is the latest advancement in this regard. Spatial resolution is a characteristic of digital receptors used for describing the quality of images. This study aimed to compare the diagnostic accuracy of two digital radiographic systems with three different resolutions for detection of noncavitated proximal caries. Diagnostic accuracy. Seventy premolar teeth were mounted in 14 gypsum blocks. Digora Optime and RVG Access were used for obtaining digital radiographs. Six observers evaluated the proximal surfaces in radiographs for each resolution in order to determine the depth of caries based on a 4-point scale. The teeth were then histologically sectioned, and the results of histologic analysis were considered as the gold standard. Data were entered using SPSS version 18 software and the Kruskal-Wallis test was used for data analysis. The systems did not differ significantly in detection of proximal caries (P > 0.05). The RVG Access system had the highest specificity (87.7%) and Digora Optime at high resolution had the lowest specificity (84.2%). Furthermore, Digora Optime had higher sensitivity for detection of caries exceeding the outer half of enamel. Judgment of oral radiologists regarding the depth of caries had higher reliability than that of restorative dentistry specialists. The three resolutions of Digora Optime and RVG Access had similar accuracy in detection of noncavitated proximal caries.

  9. Dental flossing as a diagnostic method for proximal gingivitis: a validation study.

    Science.gov (United States)

    Grellmann, Alessandra Pascotini; Kantorski, Karla Zanini; Ardenghi, Thiago Machado; Moreira, Carlos Heitor Cunha; Danesi, Cristiane Cademartori; Zanatta, Fabricio Batistin

    2016-05-20

    This study evaluated the clinical diagnosis of proximal gingivitis by comparing two methods: dental flossing and the gingival bleeding index (GBI). One hundred subjects (aged at least 18 years, with 15% of positive proximal sites for GBI, without proximal attachment loss) were randomized into five evaluation protocols. Each protocol consisted of two assessments with a 10-minute interval between them: first GBI/second floss, first floss/second GBI, first GBI/second GBI, first tooth floss/second floss, and first gum floss-second floss. The dental floss was slid against the tooth surface (TF) and the gingival tissue (GF). The evaluated proximal sites should present teeth with established point of contact and probing depth ≤ 3mm. One trained and calibrated examiner performed all the assessments. The mean percentages of agreement and disagreement were calculated for the sites with gingival bleeding in both evaluation methods (GBI and flossing). The primary outcome was the percentage of disagreement between the assessments in the different protocols. The data were analyzed by one-way ANOVA, McNemar, chi-square and Tukey's post hoc tests, with a 5% significance level. When gingivitis was absent in the first assessment (negative GBI), bleeding was detected in the second assessment by TF and GF in 41.7% (p gingivitis in the second assessment (negative GBI), TF and GF detected bleeding in the first assessment in 38.9% (p = 0.004) and 58.3% (p gingivitis than GBI.

  10. Hamiltonian Algorithm Sound Synthesis

    OpenAIRE

    大矢, 健一

    2013-01-01

    Hamiltonian Algorithm (HA) is an algorithm for searching for solutions in optimization problems. This paper introduces a sound synthesis technique using the Hamiltonian Algorithm and shows a simple example. "Hamiltonian Algorithm Sound Synthesis" uses the phase transition effect in HA. Because of this transition effect, totally new waveforms are produced.

  11. Progressive geometric algorithms

    NARCIS (Netherlands)

    Alewijnse, S.P.A.; Bagautdinov, T.M.; de Berg, M.T.; Bouts, Q.W.; ten Brink, Alex P.; Buchin, K.A.; Westenberg, M.A.

    2015-01-01

    Progressive algorithms are algorithms that, on the way to computing a complete solution to the problem at hand, output intermediate solutions that approximate the complete solution increasingly well. We present a framework for analyzing such algorithms, and develop efficient progressive algorithms

  12. Progressive geometric algorithms

    NARCIS (Netherlands)

    Alewijnse, S.P.A.; Bagautdinov, T.M.; Berg, de M.T.; Bouts, Q.W.; Brink, ten A.P.; Buchin, K.; Westenberg, M.A.

    2014-01-01

    Progressive algorithms are algorithms that, on the way to computing a complete solution to the problem at hand, output intermediate solutions that approximate the complete solution increasingly well. We present a framework for analyzing such algorithms, and develop efficient progressive algorithms

  13. PROXIMAL DISABILITY AND SPINAL DEFORMITY INDEX IN PATIENTS WITH PROXIMAL FEMUR FRACTURES

    Directory of Open Access Journals (Sweden)

    Sylvio Mistro Neto

    2015-12-01

    Full Text Available Objective: To evaluate the quality of life related to the spine in patients with proximal femoral fractures. Methods: Study conducted in a tertiary public hospital in patients with proximal femoral fractures caused by low-energy trauma, using the Oswestry Disability Index questionnaire to assess complaints related to the spine in the period of life prior to the femoral fracture. The thoracic and lumbar spine of the patients were also evaluated applying the radiographic index described by Genant (Spinal Deformity Index), which assesses the number and severity of fractures. Results: Seventeen subjects completed the study. All had some degree of vertebral fracture. Patients were classified in the categories of severe and very severe disability in the questionnaire about quality of life. It was found that the higher the SDI, the better the quality of life. Conclusion: There is a strong association of disability related to the spine in patients with proximal femoral fracture, and this complaint must be systematically evaluated in patients with appendicular fracture.

  14. Initial outcome and efficacy of S3 proximal humerus locking plate in the treatment of proximal humerus fractures

    International Nuclear Information System (INIS)

    Zhang Zhiming; Zhu Xuesong; Bao Zhaohua; Yang Huilin

    2012-01-01

    Objective: To explore the initial outcome and efficacy of the S3 proximal humerus locking plate in the treatment of proximal humerus fractures. Methods: Twenty-two patients with proximal humerus fractures were treated with the S3 proximal humerus locking plate. Most of the fractures were complex: two-part (n=4), three-part (n=11) and four-part (n=7) fractures according to the Neer classification of proximal humerus fractures. Results: All patients were followed up for 3-15 months. There were no implant-related complications such as loosening or breakage of the plate. Good and excellent results were documented in 17 patients and fair results in 4 patients according to the Neer shoulder scores. Conclusion: The new design concepts of the S3 proximal humerus plate provide subchondral support and internal fixation support. With the addition of proper exercise of the shoulder joint, the outcomes are satisfactory. (authors)

  15. The Algorithmic Imaginary

    DEFF Research Database (Denmark)

    Bucher, Taina

    2017-01-01

    the notion of the algorithmic imaginary. It is argued that the algorithmic imaginary – ways of thinking about what algorithms are, what they should be and how they function – is not just productive of different moods and sensations but plays a generative role in moulding the Facebook algorithm itself...... of algorithms affect people's use of these platforms, if at all? To help answer these questions, this article examines people's personal stories about the Facebook algorithm through tweets and interviews with 25 ordinary users. To understand the spaces where people and algorithms meet, this article develops...

  16. The BR eigenvalue algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Geist, G.A. [Oak Ridge National Lab., TN (United States). Computer Science and Mathematics Div.; Howell, G.W. [Florida Inst. of Tech., Melbourne, FL (United States). Dept. of Applied Mathematics; Watkins, D.S. [Washington State Univ., Pullman, WA (United States). Dept. of Pure and Applied Mathematics

    1997-11-01

    The BR algorithm, a new method for calculating the eigenvalues of an upper Hessenberg matrix, is introduced. It is a bulge-chasing algorithm like the QR algorithm, but, unlike the QR algorithm, it is well adapted to computing the eigenvalues of the narrowband, nearly tridiagonal matrices generated by the look-ahead Lanczos process. This paper describes the BR algorithm and gives numerical evidence that it works well in conjunction with the Lanczos process. On the biggest problems run so far, the BR algorithm beats the QR algorithm by a factor of 30--60 in computing time and a factor of over 100 in matrix storage space.
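The BR algorithm itself is too intricate for a short listing, but the bulge-chasing family it belongs to builds on the plain QR iteration it is benchmarked against. As background only (not the BR method), the unshifted QR iteration on a 2×2 symmetric matrix can be sketched in pure Python with a Gram-Schmidt factorization:

```python
def qr_2x2(A):
    """QR factorization of a 2x2 matrix A = ((a, b), (c, d)) via Gram-Schmidt."""
    (a, b), (c, d) = A
    n = (a*a + c*c) ** 0.5
    q1 = (a/n, c/n)                          # first orthonormal column
    r12 = q1[0]*b + q1[1]*d
    u = (b - r12*q1[0], d - r12*q1[1])       # second column minus projection
    m = (u[0]**2 + u[1]**2) ** 0.5
    q2 = (u[0]/m, u[1]/m)
    Q = ((q1[0], q2[0]), (q1[1], q2[1]))     # columns q1, q2
    R = ((n, r12), (0.0, q2[0]*b + q2[1]*d))
    return Q, R

def qr_algorithm(A, iters=60):
    """Unshifted QR iteration: A <- R*Q drives A toward triangular form,
    exposing the eigenvalues on the diagonal."""
    for _ in range(iters):
        Q, R = qr_2x2(A)
        A = ((R[0][0]*Q[0][0] + R[0][1]*Q[1][0], R[0][0]*Q[0][1] + R[0][1]*Q[1][1]),
             (R[1][1]*Q[1][0],                   R[1][1]*Q[1][1]))
    return A

A = ((2.0, 1.0), (1.0, 2.0))                 # eigenvalues 3 and 1
T = qr_algorithm(A)
print(T[0][0], T[1][1])                      # diagonal converges to 3, 1
```

The off-diagonal entry shrinks by roughly the eigenvalue ratio each sweep; the BR and QR algorithms differ in how they preserve and exploit matrix structure during exactly this kind of iteration.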

  17. An Approximate Redistributed Proximal Bundle Method with Inexact Data for Minimizing Nonsmooth Nonconvex Functions

    Directory of Open Access Journals (Sweden)

    Jie Shen

    2015-01-01

    Full Text Available We describe an extension of the redistributed technique from the classical proximal bundle method to the inexact situation for minimizing nonsmooth nonconvex functions. The cutting-plane model we construct is not an approximation to the whole nonconvex function, but to the local convexification of the approximate objective function, and this kind of local convexification is modified dynamically in order to always yield nonnegative linearization errors. Since we only employ approximate function values and approximate subgradients, theoretical convergence analysis shows that an approximate stationary point or some double approximate stationary point can be obtained under mild conditions.

  18. A polylogarithmic competitive algorithm for the k-server problem

    NARCIS (Netherlands)

    Bansal, N.; Buchbinder, N.; Madry, A.; Naor, J.

    2011-01-01

    We give the first polylogarithmic-competitive randomized online algorithm for the $k$-server problem on an arbitrary finite metric space. In particular, our algorithm achieves a competitive ratio of O(log^3 n log^2 k log log n) for any metric space on n points. Our algorithm improves upon the
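The randomized polylogarithmic-competitive algorithm is far beyond a sketch, but for contrast the classical deterministic Double Coverage algorithm for the k-server problem on the line (known to be k-competitive) is short enough to write out; integer positions keep the arithmetic exact:

```python
def double_coverage(servers, requests):
    """Double Coverage on the line: if a request falls between two servers,
    both move toward it at equal speed until one arrives; if it falls outside
    the servers' hull, only the nearest server moves. Returns (positions, cost)."""
    servers = sorted(servers)
    total = 0
    for r in requests:
        left = max((s for s in servers if s <= r), default=None)
        right = min((s for s in servers if s >= r), default=None)
        if left is None:                   # request left of every server
            i = servers.index(right)
            total += right - r
            servers[i] = r
        elif right is None:                # request right of every server
            i = servers.index(left)
            total += r - left
            servers[i] = r
        else:                              # between two servers: both move
            d = min(r - left, right - r)   # the nearer one reaches r exactly
            i, j = servers.index(left), servers.index(right)
            servers[i] += d
            servers[j] -= d
            total += 2 * d
        servers.sort()
    return servers, total

final, cost = double_coverage([0, 10], [4, 6, 3])
print(final, cost)
```

On the request sequence above the two servers first converge symmetrically onto 4 and 6, and the final request at 3 is served by the nearest server alone, for a total movement of 9.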

  19. Proximity Operations and Docking Sensor Development

    Science.gov (United States)

    Howard, Richard T.; Bryan, Thomas C.; Brewster, Linda L.; Lee, James E.

    2009-01-01

    The Next Generation Advanced Video Guidance Sensor (NGAVGS) has been under development for the last three years as a long-range proximity operations and docking sensor for use in an Automated Rendezvous and Docking (AR&D) system. The first autonomous rendezvous and docking in the history of the U.S. Space Program was successfully accomplished by Orbital Express, using the Advanced Video Guidance Sensor (AVGS) as the primary docking sensor. That flight proved that the United States now has a mature and flight proven sensor technology for supporting Crew Exploration Vehicles (CEV) and Commercial Orbital Transport Systems (COTS) Automated Rendezvous and Docking (AR&D). NASA video sensors have worked well in the past: the AVGS used on the Demonstration of Autonomous Rendezvous Technology (DART) mission operated successfully in spot mode out to 2 km, and the first generation rendezvous and docking sensor, the Video Guidance Sensor (VGS), was developed and successfully flown on Space Shuttle flights in 1997 and 1998. Parts obsolescence issues prevent the construction of more AVGS units, and the next generation sensor was updated to allow it to support the CEV and COTS programs. The flight proven AR&D sensor has been redesigned to update parts and add additional capabilities for CEV and COTS with the development of the Next Generation AVGS at the Marshall Space Flight Center. The obsolete imager and processor are being replaced with new radiation tolerant parts. In addition, new capabilities include greater sensor range, auto ranging capability, and real-time video output. This paper presents some sensor hardware trades, use of highly integrated laser components, and addresses the needs of future vehicles that may rendezvous and dock with the International Space Station (ISS) and other Constellation vehicles. It also discusses approaches for upgrading AVGS to address parts obsolescence, and concepts for minimizing the sensor footprint, weight, and power requirements.

  20. A Fast and Accurate Algorithm for l1 Minimization Problems in Compressive Sampling (Preprint)

    Science.gov (United States)

    2013-01-22

    However, updating uk+1 via the formulation of Step 2 in Algorithm 1 can be implemented through the use of the component-wise Gauss-Seidel iteration which...may accelerate the rate of convergence of the algorithm and therefore reduce the total CPU-time consumed. The efficiency of component-wise Gauss-Seidel ...Micchelli, L. Shen, and Y. Xu, A proximity algorithm accelerated by Gauss-Seidel iterations for L1/TV denoising models, Inverse Problems, 28 (2012), p

  1. The Chandra Source Catalog: Algorithms

    Science.gov (United States)

    McDowell, Jonathan; Evans, I. N.; Primini, F. A.; Glotfelty, K. J.; McCollough, M. L.; Houck, J. C.; Nowak, M. A.; Karovska, M.; Davis, J. E.; Rots, A. H.; Siemiginowska, A. L.; Hain, R.; Evans, J. D.; Anderson, C. S.; Bonaventura, N. R.; Chen, J. C.; Doe, S. M.; Fabbiano, G.; Galle, E. C.; Gibbs, D. G., II; Grier, J. D.; Hall, D. M.; Harbo, P. N.; He, X.; Lauer, J.; Miller, J. B.; Mitschang, A. W.; Morgan, D. L.; Nichols, J. S.; Plummer, D. A.; Refsdal, B. L.; Sundheim, B. A.; Tibbetts, M. S.; van Stone, D. W.; Winkelman, S. L.; Zografou, P.

    2009-09-01

    Creation of the Chandra Source Catalog (CSC) required adjustment of existing pipeline processing, adaptation of existing interactive analysis software for automated use, and development of entirely new algorithms. Data calibration was based on the existing pipeline, but more rigorous data cleaning was applied and the latest calibration data products were used. For source detection, a local background map was created including the effects of ACIS source readout streaks. The existing wavelet source detection algorithm was modified and a set of post-processing scripts used to correct the results. To analyse the source properties we ran the SAOTrace ray-trace code for each source to generate a model point spread function, allowing us to find encircled energy correction factors and estimate source extent. Further algorithms were developed to characterize the spectral, spatial and temporal properties of the sources and to estimate the confidence intervals on count rates and fluxes. Finally, sources detected in multiple observations were matched, and best estimates of their merged properties derived. In this paper we present an overview of the algorithms used, with more detailed treatment of some of the newly developed algorithms presented in companion papers.

  2. Does a point lie inside a polygon

    International Nuclear Information System (INIS)

    Milgram, M.S.

    1988-01-01

    A superficially simple problem in computational geometry is that of determining whether a query point P lies in the interior of a polygon if it lies in the polygon's plane. Answering this question is often required when tracking particles in a Monte Carlo program; it is asked frequently and an efficient algorithm is crucial. Littlefield has recently rediscovered Shimrat's algorithm, while in separate works, Wooff, Preparata and Shamos and Mehlhorn, as well as Yamaguchi, give other algorithms. A practical algorithm answering this question when the polygon's plane is skewed in space is not immediately evident from most of these methods. Additionally, all but one fail when two sides extend to infinity (open polygons). In this paper the author reviews the above methods and presents a new, efficient algorithm, valid for all convex polygons, open or closed, and topologically connected in n-dimensional space (n ≥ 2)
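The standard even-odd (ray-crossing) test, essentially the Shimrat algorithm rediscovered by Littlefield, is a few lines for the planar closed case; the sketch below handles only that case, not the skewed-plane or open polygons the paper addresses, and leaves points exactly on the boundary undefined:

```python
def point_in_polygon(p, poly):
    """Even-odd test: cast a ray from p in the +x direction and count
    how many polygon edges it crosses; an odd count means 'inside'."""
    x, y = p
    inside = False
    n = len(poly)
    for i in range(n):
        (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % n]
        # Does this edge straddle the horizontal line through p?
        if (y1 > y) != (y2 > y):
            # x-coordinate where the edge crosses that horizontal line.
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

square = [(0, 0), (4, 0), (4, 4), (0, 4)]
print(point_in_polygon((2, 2), square))   # inside
print(point_in_polygon((5, 2), square))   # outside
```

The strict `(y1 > y) != (y2 > y)` comparison is what makes vertices on the ray count exactly once, the classic pitfall of naive implementations.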

  3. Algorithmic randomness and physical entropy

    International Nuclear Information System (INIS)

    Zurek, W.H.

    1989-01-01

    Algorithmic randomness provides a rigorous, entropylike measure of disorder of an individual, microscopic, definite state of a physical system. It is defined by the size (in binary digits) of the shortest message specifying the microstate uniquely up to the assumed resolution. Equivalently, algorithmic randomness can be expressed as the number of bits in the smallest program for a universal computer that can reproduce the state in question (for instance, by plotting it with the assumed accuracy). In contrast to the traditional definitions of entropy, algorithmic randomness can be used to measure disorder without any recourse to probabilities. Algorithmic randomness is typically very difficult to calculate exactly but relatively easy to estimate. In large systems, probabilistic ensemble definitions of entropy (e.g., coarse-grained entropy of Gibbs and Boltzmann's entropy H=lnW, as well as Shannon's information-theoretic entropy) provide accurate estimates of the algorithmic entropy of an individual system or its average value for an ensemble. One is thus able to rederive much of thermodynamics and statistical mechanics in a setting very different from the usual. Physical entropy, I suggest, is a sum of (i) the missing information measured by Shannon's formula and (ii) the algorithmic information content (algorithmic randomness) present in the available data about the system. This definition of entropy is essential in describing the operation of thermodynamic engines from the viewpoint of information gathering and using systems. These Maxwell demon-type entities are capable of acquiring and processing information and therefore can "decide" on the basis of the results of their measurements and computations the best strategy for extracting energy from their surroundings. From their internal point of view the outcome of each measurement is definite.
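Algorithmic randomness is uncomputable exactly, but, as the abstract notes, relatively easy to estimate: the compressed size of a state's description upper-bounds its algorithmic information content. A quick illustration with zlib standing in (crudely) for the universal computer:

```python
import random
import zlib

def compressed_bits(data: bytes) -> int:
    """Upper bound (in bits) on the algorithmic information in `data`:
    the length of one particular program (a zlib stream) reproducing it."""
    return 8 * len(zlib.compress(data, 9))

ordered = b"ab" * 5000                       # highly regular 'microstate'
random.seed(0)
disordered = bytes(random.randrange(256) for _ in range(10000))

print(compressed_bits(ordered))       # small: a short description suffices
print(compressed_bits(disordered))    # near 8 bits/byte: incompressible
```

The regular string compresses to a few hundred bits while the random one stays near its raw size, mirroring the claim that disorder can be measured for an individual state without invoking probabilities.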

  4. Algorithmically specialized parallel computers

    CERN Document Server

    Snyder, Lawrence; Gannon, Dennis B

    1985-01-01

    Algorithmically Specialized Parallel Computers focuses on the concept and characteristics of an algorithmically specialized computer. This book discusses algorithmically specialized computers, algorithmic specialization using VLSI, and innovative architectures. The architectures and algorithms for digital signal, speech, and image processing and specialized architectures for numerical computations are also elaborated. Other topics include the model for analyzing generalized inter-processor, pipelined architecture for search tree maintenance, and specialized computer organization for raster

  5. Optimal Point-to-Point Trajectory Tracking of Redundant Manipulators using Generalized Pattern Search

    Directory of Open Access Journals (Sweden)

    Thi Rein Myo

    2008-11-01

    Full Text Available Optimal point-to-point trajectory planning for a planar redundant manipulator is considered in this study. The main objective is to minimize the sum of the position errors of the end-effector at the intermediate points along the trajectory, so that the end-effector can track the prescribed trajectory accurately. An algorithm combining a Genetic Algorithm with Pattern Search, the Generalized Pattern Search (GPS), is introduced to design the optimal trajectory. To verify the proposed algorithm, simulations for a 3-DOF planar manipulator with different end-effector trajectories were carried out. A comparison between the Genetic Algorithm and the Generalized Pattern Search shows that the GPS gives excellent tracking performance.
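
    The poll-and-contract core of a pattern search is simple to state: probe the objective a fixed step along each coordinate direction, move on improvement, and halve the step when no probe improves. The sketch below is a generic illustration of that idea, not the authors' implementation; the names (`pattern_search`, `f`, `x0`) and parameter defaults are hypothetical.

    ```python
    def pattern_search(f, x0, step=1.0, tol=1e-6, max_iter=1000):
        """Minimal coordinate pattern search: poll +/- step along each axis,
        move on improvement, halve the step when the poll fails."""
        x = list(x0)
        for _ in range(max_iter):
            if step <= tol:
                break
            fx = f(x)
            improved = False
            for i in range(len(x)):
                for d in (step, -step):
                    y = x[:]
                    y[i] += d
                    if f(y) < fx:
                        x, improved = y, True
                        break
                if improved:
                    break
            if not improved:
                step *= 0.5  # contract the mesh
        return x
    ```

    In a GA/GPS hybrid such as the one described, a sketch like this would refine the candidate trajectories produced by the genetic search.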

  6. Visibility of noisy point cloud data

    KAUST Repository

    Mehra, Ravish

    2010-06-01

    We present a robust algorithm for estimating visibility from a given viewpoint for a point set containing concavities, non-uniformly spaced samples, and possibly corrupted with noise. Instead of performing an explicit surface reconstruction for the point set, visibility is computed based on a construction involving the convex hull in a dual space, an idea inspired by the work of Katz et al. [26]. We derive theoretical bounds on the behavior of the method in the presence of noise and concavities, and use the derivations to develop a robust visibility estimation algorithm. In addition, computing visibility from a set of adaptively placed viewpoints allows us to generate locally consistent partial reconstructions. Using a graph-based approximation algorithm, we couple such reconstructions to extract globally consistent reconstructions. We test our method on a variety of 2D and 3D point sets of varying complexity and noise content. © 2010 Elsevier Ltd. All rights reserved.
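
    The dual-space construction of Katz et al. (the "hidden point removal" operator) can be illustrated in 2D: each point is spherically flipped about the viewpoint, and the points whose flipped images lie on the convex hull of the flipped set (plus the viewpoint) are declared visible. The sketch below is a minimal illustration of that idea under our own naming (`hidden_point_removal`, `radius_factor`), not the robust noise-aware algorithm of the paper.

    ```python
    import math

    def convex_hull_indices(pts):
        """Vertex indices of the 2D convex hull (Andrew's monotone chain)."""
        order = sorted(range(len(pts)), key=lambda i: pts[i])
        def cross(o, a, b):
            return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
        def chain(idx):
            h = []
            for i in idx:
                while len(h) >= 2 and cross(pts[h[-2]], pts[h[-1]], pts[i]) <= 0:
                    h.pop()
                h.append(i)
            return h[:-1]
        return set(chain(order) + chain(order[::-1]))

    def hidden_point_removal(points, viewpoint, radius_factor=100.0):
        """HPR operator: spherically flip the points about the viewpoint,
        then keep the points whose flipped images are convex-hull vertices."""
        p = [(x - viewpoint[0], y - viewpoint[1]) for x, y in points]
        norms = [math.hypot(x, y) for x, y in p]
        big_r = radius_factor * max(norms)
        flipped = [(x * (2 * big_r / n - 1), y * (2 * big_r / n - 1))
                   for (x, y), n in zip(p, norms)]
        hull = convex_hull_indices(flipped + [(0.0, 0.0)])  # include the viewpoint
        return sorted(i for i in hull if i < len(points))
    ```

    For points sampled on a circle viewed from outside, the sketch keeps the near side and discards the far side, which is the qualitative behavior the paper's robust variant refines.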

  7. Design of relative trajectories for in orbit proximity operations

    Science.gov (United States)

    Opromolla, Roberto; Fasano, Giancarmine; Rufino, Giancarlo; Grassi, Michele

    2018-04-01

    This paper presents an innovative approach to design relative trajectories suitable for close-proximity operations in orbit, by assigning high-level constraints regarding their stability, shape and orientation. Specifically, this work is relevant to space mission scenarios, e.g. formation flying, on-orbit servicing, and active debris removal, which involve either the presence of two spacecraft carrying out coordinated maneuvers, or a servicing/recovery spacecraft (chaser) performing monitoring, rendezvous and docking with respect to another space object (target). In the above-mentioned scenarios, an important aspect is the capability of reducing collision risks and of providing robust and accurate relative navigation solutions. To this aim, the proposed approach exploits a relative motion model relevant to two-satellite formations, and developed in mean orbit parameters, which takes the perturbation effect due to secular Earth oblateness, as well as the motion of the target along a small-eccentricity orbit, into account. This model is used to design trajectories which ensure safe relative motion, to minimize collision risks and relax control requirements, providing at the same time favorable conditions, in terms of target-chaser relative observation geometry for pose determination and relative navigation with passive or active electro-optical sensors on board the chaser. Specifically, three design strategies are proposed in the context of a space target monitoring scenario, considering as design cases both operational spacecraft and debris, characterized by highly variable shape, size and absolute rotational dynamics. The effectiveness of the proposed design approach in providing favorable observation conditions for target-chaser relative pose estimation is demonstrated within a simulation environment which reproduces the designed target-chaser relative trajectory, the operation of an active LIDAR installed on board the chaser, and pose estimation algorithms.

  8. Proximity coupling in superconductor-graphene heterostructures

    Science.gov (United States)

    Lee, Gil-Ho; Lee, Hu-Jong

    2018-05-01

    This review discusses the electronic properties and the prospective research directions of superconductor-graphene heterostructures. The basic electronic properties of graphene are introduced to highlight the unique possibility of combining two seemingly unrelated physics, superconductivity and relativity. We then focus on graphene-based Josephson junctions, one of the most versatile superconducting quantum devices. The various theoretical methods that have been developed to describe graphene Josephson junctions are examined, together with their advantages and limitations, followed by a discussion on the advances in device fabrication and the relevant length scales. The phase-sensitive properties and phase-particle dynamics of graphene Josephson junctions are examined to provide an understanding of the underlying mechanisms of Josephson coupling via graphene. Thereafter, microscopic transport of correlated quasiparticles produced by Andreev reflections at superconducting interfaces and their phase-coherent behaviors are discussed. Quantum phase transitions studied with graphene as an electrostatically tunable 2D platform are reviewed. The interplay between proximity-induced superconductivity and the quantum-Hall phase is discussed as a possible route to study topological superconductivity and non-Abelian physics. Finally, a brief summary on the prospective future research directions is given.

  9. [Ophthalmologists in the proximity of Adolf Hitler].

    Science.gov (United States)

    Rohrbach, J M

    2012-10-01

    Adolf Hitler met or at least knew about 5 ophthalmologists. The chair of ophthalmology in Berlin, Walther Löhlein, personally examined Hitler's eyes at least two times. The chair of ophthalmology in Breslau, Walter Dieter, developed "air raid protection spectacles" with the aid of high representatives of the NS-system and probably Adolf Hitler himself. Heinrich Wilhelm Kranz became rector of the universities of Giessen and Frankfurt/Main. He was known as a very strict advocate of the NS-race hygiene. Werner Zabel made plans for Hitler's diet and tried to interfere with Hitler's medical treatment. Finally, Hellmuth Unger was an influential representative of the medical press and a famous writer. Three of his novels with medical topics were made into a film which Hitler probably saw. Hitler had, so to say, a small "ophthalmological proximity" which, however, did not play a significant role for himself or the NS-state. © Georg Thieme Verlag KG Stuttgart · New York.

  10. Semiconductor detectors with proximity signal readout

    International Nuclear Information System (INIS)

    Asztalos, Stephen J.

    2012-01-01

    Semiconductor-based radiation detectors are routinely used for the detection, imaging, and spectroscopy of x-rays, gamma rays, and charged particles for applications in the areas of nuclear and medical physics, astrophysics, environmental remediation, nuclear nonproliferation, and homeland security. Detectors used for imaging and particle tracking are more complex in that they typically must also measure the location of the radiation interaction in addition to the deposited energy. In such detectors, the position measurement is often achieved by dividing or segmenting the electrodes into many strips or pixels and then reading out the signals from all of the electrode segments. Fine electrode segmentation is problematic for many of the standard semiconductor detector technologies. Clearly there is a need for a semiconductor-based radiation detector technology that can achieve fine position resolution while maintaining the excellent energy resolution intrinsic to semiconductor detectors, can be fabricated through simple processes, does not require complex electrical interconnections to the detector, and can reduce the number of required channels of readout electronics. Proximity electrode signal readout (PESR), in which the electrodes are not in physical contact with the detector surface, satisfies this need

  11. Imaging of rectus femoris proximal tendinopathies

    International Nuclear Information System (INIS)

    Pesquer, Lionel; Poussange, Nicolas; Meyer, Philippe; Dallaudiere, Benjamin; Feldis, Matthieu; Sonnery-Cottet, Bertrand; Graveleau, Nicolas

    2016-01-01

    The rectus femoris is the most commonly injured muscle of the anterior thigh among athletes, especially soccer players. Although the injury pattern of the muscle belly is well documented, less is known about the anatomy and specific lesions of the proximal tendons. For each head, three distinctive patterns may be encountered according to the location of the injury, which can be at the enthesis, within the tendon, or at the musculotendinous junction. In children, injuries correspond most commonly to avulsion of the anteroinferior iliac spine from the direct head and can lead to subspine impingement. Calcific tendinitis and traumatic tears may be encountered in adults. Recent studies have shown that traumatic injuries of the indirect head may be underdiagnosed and that injuries of both heads may have a surgical issue. Finally, in the case of tears, functional outcome and treatment may vary if the rupture involves one or both tendons and if the tear is partial or complete. Thus, it is mandatory for the radiologist to know the different ultrasound and magnetic resonance imaging (MRI) patterns of these lesions in order to provide accurate diagnosis and treatment. The purpose of this article is to recall the anatomy of the two heads of rectus femoris, describe a reliable method of assessment with ultrasound and MRI and know the main injury patterns, through our own experience and literature review. (orig.)

  12. Proximal spinal muscular atrophy: current orthopedic perspective

    Directory of Open Access Journals (Sweden)

    Haaker G

    2013-11-01

    Full Text Available Gerrit Haaker, Albert Fujak Department of Orthopaedic Surgery, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany Abstract: Spinal muscular atrophy (SMA is a hereditary neuromuscular disease of lower motor neurons, caused by a defective "survival motor neuron" (SMN protein and mainly associated with proximal progressive muscle weakness and atrophy. Although SMA involves a wide range of disease severity and a high mortality and morbidity rate, recent advances in multidisciplinary supportive care have enhanced quality of life and life expectancy. Active research into possible treatment options has been possible since the disease-causing gene defect was identified in 1995. Nevertheless, a causal therapy is not available at present, and therapeutic management of SMA remains challenging; with prolonged survival, orthopedic, respiratory and nutritional problems are becoming more prevalent. This review focuses on orthopedic management of the disease, with discussion of key aspects that include scoliosis, muscular contractures, hip joint disorders, fractures, technical devices, and a comparison of conservative and surgical treatment. Also emphasized are associated complications including respiratory involvement, perioperative care and anesthesia, nutrition problems, and rehabilitation. The course of SMA can be greatly improved by adequate therapy using established orthopedic procedures within a multidisciplinary therapeutic approach. Keywords: spinal muscular atrophy, scoliosis, contractures, fractures, lung function, treatment, rehabilitation, surgery, ventilation, nutrition, perioperative management

  13. Imaging of rectus femoris proximal tendinopathies

    Energy Technology Data Exchange (ETDEWEB)

    Pesquer, Lionel; Poussange, Nicolas; Meyer, Philippe; Dallaudiere, Benjamin; Feldis, Matthieu [Clinique du Sport de Bordeaux, Centre d' Imagerie Osteo-articulaire, Merignac (France); Sonnery-Cottet, Bertrand [Groupe Ramsay Generale de Sante - Hopital Prive Jean Mermoz, Centre Orthopedique Santy, Lyon (France); Graveleau, Nicolas [Clinique du Sport de Bordeaux, Centre de Chirurgie Orthopedique et Sportive, Merignac (France)

    2016-07-15

    The rectus femoris is the most commonly injured muscle of the anterior thigh among athletes, especially soccer players. Although the injury pattern of the muscle belly is well documented, less is known about the anatomy and specific lesions of the proximal tendons. For each head, three distinctive patterns may be encountered according to the location of the injury, which can be at the enthesis, within the tendon, or at the musculotendinous junction. In children, injuries correspond most commonly to avulsion of the anteroinferior iliac spine from the direct head and can lead to subspine impingement. Calcific tendinitis and traumatic tears may be encountered in adults. Recent studies have shown that traumatic injuries of the indirect head may be underdiagnosed and that injuries of both heads may have a surgical issue. Finally, in the case of tears, functional outcome and treatment may vary if the rupture involves one or both tendons and if the tear is partial or complete. Thus, it is mandatory for the radiologist to know the different ultrasound and magnetic resonance imaging (MRI) patterns of these lesions in order to provide accurate diagnosis and treatment. The purpose of this article is to recall the anatomy of the two heads of rectus femoris, describe a reliable method of assessment with ultrasound and MRI and know the main injury patterns, through our own experience and literature review. (orig.)

  14. A note on the linear memory Baum-Welch algorithm

    DEFF Research Database (Denmark)

    Jensen, Jens Ledet

    2009-01-01

    We demonstrate the simplicity and generality of the recently introduced linear space Baum-Welch algorithm for hidden Markov models. We also point to previous literature on the subject.

  15. Proximate analysis of female population of wild feather back fish ...

    African Journals Online (AJOL)

    User

    2011-05-09

    Key words: Body composition, Notopterus notopterus, condition factor, wild fish. INTRODUCTION. Proximate body composition is the analysis of water, fat, protein and ash contents of the fish (Love, 1980). Proximate composition is a good indicator of physiology which is needed for routine analysis of ...

  16. Proximate analysis of Lentinus squarrosulus (Mont.) Singer and ...

    African Journals Online (AJOL)

    Each of the mushroom species was separated into its stipe and pileus and used for proximate analysis. There was a highly significant difference (p<0.01) in the proximate composition of the two species. P. atroumbonata had significantly higher crude protein, crude fibre and moisture content than L. squarrosulus while the ...

  17. Proximal Focal Femoral Deficiency in Ibadan a Developing ...

    African Journals Online (AJOL)

    The cultural aversion to amputation in our environment makes it difficult to employ that option of treatment. Proximal focal femoral deficiency in Ibadan: a developing country's perspective and a review of the literature. Keywords: proximal focal femoral deficiency, congenital malformations, limb malformations, lower limb ...

  18. Proximity approach to problems in topology and analysis

    CERN Document Server

    Naimpally, Somashekhar

    2009-01-01

    This book concentrates the current state of knowledge on the proximity concept and presents it to the reader in a well-structured form. The main focus is on the many possibilities that arise from the proximity concept of spatial nearness and its generalization in the nearness concept.

  19. Systemic calciphylaxis presenting as a painful, proximal myopathy.

    OpenAIRE

    Edelstein, C. L.; Wickham, M. K.; Kirby, P. A.

    1992-01-01

    A renal transplant patient who presented with a painful, proximal myopathy due to systemic calciphylaxis is described. The myopathy preceded the characteristic skin and soft tissue necrosis. Systemic calciphylaxis should be considered in a dialysis or a renal transplant patient presenting with a painful proximal myopathy even in the absence of necrotic skin lesions.

  20. Genetics Home Reference: proximal 18q deletion syndrome

    Science.gov (United States)

    ... characteristic features. Most cases of proximal 18q deletion syndrome are the result of a new (de novo) deletion and are not inherited from a ... J, Fox PT, Stratton RF, Perry B, Hale DE. Recurrent interstitial deletions of proximal 18q: a new syndrome involving expressive speech delay. Am J Med Genet ...

  1. Proximal soil sensors and data fusion for precision agriculture

    NARCIS (Netherlands)

    Mahmood, H.S.

    2013-01-01

    Different remote and proximal soil sensors are available today that can scan entire fields and give detailed information on various physical, chemical, mechanical and biological soil properties. The first objective of this thesis was to evaluate different proximal soil sensors available today and to

  2. Two Stages repair of proximal hypospadias: Review of 33 cases

    African Journals Online (AJOL)

    HussamHassan

    Background/Purpose: Proximal hypospadias with chordee is the most challenging variant of hypospadias to reconstruct. During the last 10 years, the approach to severe hypospadias has been controversial. Materials & Methods: During the period from June 2002 to December 2009, I performed 33 cases with proximal.

  3. Photovoltaic Cells Mppt Algorithm and Design of Controller Monitoring System

    Science.gov (United States)

    Meng, X. Z.; Feng, H. B.

    2017-10-01

    This paper combines the advantages of existing maximum power point tracking (MPPT) algorithms and puts forward an algorithm with higher speed and higher precision; based on this algorithm, an ARM-based maximum power point tracking controller was designed. The controller, communication technology, and PC software form a control system. Results of the simulation and experiment showed that the maximum power tracking process was effective and the system was stable.
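
    The paper does not specify which MPPT variant underlies the combined algorithm. As an illustration of the basic idea behind most MPPT schemes, here is a minimal perturb-and-observe sketch; the `measure_pv` callback and all parameter defaults are hypothetical.

    ```python
    def perturb_and_observe(measure_pv, v_start=10.0, dv=0.1, steps=200):
        """Minimal perturb-and-observe MPPT loop: keep perturbing the operating
        voltage in the same direction while power rises, reverse when it falls."""
        v, direction = v_start, 1.0
        p_prev = measure_pv(v)
        for _ in range(steps):
            v += direction * dv
            p = measure_pv(v)
            if p < p_prev:
                direction = -direction  # power dropped: reverse the perturbation
            p_prev = p
        return v
    ```

    On a single-peak power-voltage curve the loop settles into a small oscillation around the maximum power point; the oscillation amplitude is set by the perturbation size `dv`.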

  4. Proximity correction of high-dosed frame with PROXECCO

    Science.gov (United States)

    Eisenmann, Hans; Waas, Thomas; Hartmann, Hans

    1994-05-01

    The usefulness of electron beam lithography is strongly related to the efficiency and quality of the methods used for proximity correction. This paper addresses this issue by proposing an extension to the new proximity correction program PROXECCO. Combining a framing step with PROXECCO produces a pattern with very high edge accuracy while still allowing use of the fast correction procedure. Writing a frame with a higher dose imitates a fine-resolution correction in which the coarse part is disregarded. After the high-resolution effect is handled by framing, an additional coarse correction is still needed. Higher doses make a larger contribution to the proximity effect; this additional contribution is taken into account via the multi-dose input of PROXECCO. The dose of the frame is variable, depending on the deposited energy contributed by backscattering. Simulation confirms the very high edge accuracy of the applied method.

  5. Sim3C: simulation of Hi-C and Meta3C proximity ligation sequencing technologies.

    Science.gov (United States)

    DeMaere, Matthew Z; Darling, Aaron E

    2018-02-01

    Chromosome conformation capture (3C) and Hi-C DNA sequencing methods have rapidly advanced our understanding of the spatial organization of genomes and metagenomes. Many variants of these protocols have been developed, each with their own strengths. Currently there is no systematic means for simulating sequence data from this family of sequencing protocols, potentially hindering the advancement of algorithms to exploit this new datatype. We describe a computational simulator that, given simple parameters and reference genome sequences, will simulate Hi-C sequencing on those sequences. The simulator models the basic spatial structure in genomes that is commonly observed in Hi-C and 3C datasets, including the distance-decay relationship in proximity ligation, differences in the frequency of interaction within and across chromosomes, and the structure imposed by cells. A means to model the 3D structure of randomly generated topologically associating domains is provided. The simulator considers several sources of error common to 3C and Hi-C library preparation and sequencing methods, including spurious proximity ligation events and sequencing error. We have introduced the first comprehensive simulator for 3C and Hi-C sequencing protocols. We expect the simulator to have use in testing of Hi-C data analysis algorithms, as well as more general value for experimental design, where questions such as the required depth of sequencing, enzyme choice, and other decisions can be made in advance in order to ensure adequate statistical power with respect to experimental hypothesis testing.
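
    One ingredient such a simulator needs is the distance-decay relationship: the probability of a proximity-ligation event falls off with genomic separation, commonly modeled as a power law P(s) ∝ s^(-alpha). A minimal inverse-transform sampler for that model might look like the sketch below; the names and defaults are assumptions for illustration, not Sim3C's actual code.

    ```python
    import math
    import random

    def sample_separation(s_min, s_max, alpha=1.0):
        """Sample a genomic separation s from P(s) proportional to s**(-alpha)
        on [s_min, s_max] via inverse-transform sampling; alpha=1 is the
        log-uniform special case."""
        u = random.random()
        if alpha == 1.0:
            return s_min * (s_max / s_min) ** u
        a = 1.0 - alpha
        return (s_min ** a + u * (s_max ** a - s_min ** a)) ** (1.0 / a)
    ```

    Drawing many separations from this distribution and pairing each with a random genomic locus reproduces the short-range enrichment characteristic of Hi-C contact maps.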

  6. Proximal methods for the resolution of inverse problems: application to positron emission tomography

    International Nuclear Information System (INIS)

    Pustelnik, N.

    2010-12-01

    The objective of this work is to propose reliable, efficient and fast methods for minimizing convex criteria that arise in inverse problems in imaging. We focus on restoration/reconstruction problems in which data are degraded by both a linear operator and noise, where the latter is not necessarily assumed to be additive. The reliability of the method is ensured through the use of proximal algorithms, whose convergence is guaranteed when a convex criterion is considered. Efficiency is sought through the choice of criteria adapted to the noise characteristics, the linear operators and the image specificities. Of particular interest are regularization terms based on total variation and/or sparsity of signal frame coefficients. As a consequence of the use of frames, two approaches are investigated, depending on whether the analysis or the synthesis formulation is chosen. Fast processing requirements lead us to consider proximal algorithms with a parallel structure. Theoretical results are illustrated on several large-size inverse problems arising in image restoration, stereoscopy, multi-spectral imagery and decomposition into texture and geometry components. We focus on a particular application, namely Positron Emission Tomography (PET), which is particularly difficult because of the presence of a projection operator combined with Poisson noise, leading to highly corrupted data. To optimize the quality of the reconstruction, we make use of the spatio-temporal characteristics of brain tissue activity. (author)
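
    As a concrete instance of the proximal algorithms this line of work builds on, consider the forward-backward (proximal gradient) iteration for an l1-regularized least-squares criterion, whose proximity operator is componentwise soft-thresholding. This is a textbook sketch under our own naming, not the thesis code.

    ```python
    import numpy as np

    def soft_threshold(x, t):
        """Proximity operator of t*||.||_1: componentwise soft-thresholding."""
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    def ista(A, b, lam, n_iter=500):
        """Forward-backward splitting for 0.5*||Ax - b||^2 + lam*||x||_1:
        a gradient step on the smooth term, then the prox of the l1 term."""
        x = np.zeros(A.shape[1])
        step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1 / Lipschitz constant of the gradient
        for _ in range(n_iter):
            x = soft_threshold(x - step * (A.T @ (A @ x - b)), lam * step)
        return x
    ```

    Convergence of this scheme is guaranteed for any convex data-fidelity term with Lipschitz gradient, which is the reliability property the abstract refers to.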

  7. Morphological analysis of the proximal femur by computed tomography in Japanese subjects

    International Nuclear Information System (INIS)

    Hagiwara, Masashi

    1995-01-01

    In order to evaluate the morphological features of the proximal femur in the Japanese, 100 femora of normal Japanese subjects (normal group) and 60 femora of 43 Japanese patients with secondary osteoarthrosis of the hip (OA group) were analyzed using CT images. The scans for the dried bones (normal group) were done at a setting of 80 kV and 20 mA, for 2 sec duration. The scans were reconstructed using the soft tissue algorithm built into the GE-9800 scanner. The patient scans (OA group) were done at 120 kV and 170 mA also for 2 sec duration, and reconstructed using the same bone algorithm. The results were as follows: Thinning of the femoral cortex occurred in normal females over 60 years of age. The canal flare index at the proximal part of the femoral diaphysis was negatively correlated with the canal diameter at the isthmus. The index at the upper part was greater than that at the lower part. The two groups showed no statistical difference in this index. In the metaphysis, the canal flare index at the anterior portion was twice that at the posterior portion. In absolute terms, the OA group had a reduced flare or curve along the medial portion. In cross-section, the canal shape of the diaphysis was more elliptical in the OA group than in the normal group. The longitudinal axis of the canal was directed more sagittally in the OA group than in the normal group. (author)

  8. Quantum Computation and Algorithms

    International Nuclear Information System (INIS)

    Biham, O.; Biron, D.; Biham, E.; Grassi, M.; Lidar, D.A.

    1999-01-01

    It is now firmly established that quantum algorithms provide a substantial speedup over classical algorithms for a variety of problems, including the factorization of large numbers and the search for a marked element in an unsorted database. In this talk I will review the principles of quantum algorithms, the basic quantum gates and their operation. The combination of superposition and interference, that makes these algorithms efficient, will be discussed. In particular, Grover's search algorithm will be presented as an example. I will show that the time evolution of the amplitudes in Grover's algorithm can be found exactly using recursion equations, for any initial amplitude distribution
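
    The recursion mentioned for Grover's algorithm is easy to state: with one marked item among N, the oracle flips the sign of the marked amplitude and the diffusion step reflects every amplitude about the mean. A short sketch of the exact amplitude recursion (our own naming):

    ```python
    import math

    def grover_amplitudes(n_items, n_iter):
        """Exact amplitude recursion for Grover search with one marked item:
        returns (marked, unmarked) amplitudes after n_iter iterations."""
        a = b = 1.0 / math.sqrt(n_items)        # uniform initial superposition
        for _ in range(n_iter):
            a = -a                               # oracle: phase-flip the marked item
            m = (a + (n_items - 1) * b) / n_items
            a, b = 2 * m - a, 2 * m - b          # diffusion: inversion about the mean
        return a, b
    ```

    The recursion reproduces the closed form a_t = sin((2t+1)·arcsin(1/√N)); for N = 4 a single iteration drives the marked amplitude to exactly 1.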

  9. Track length estimation applied to point detectors

    International Nuclear Information System (INIS)

    Rief, H.; Dubi, A.; Elperin, T.

    1984-01-01

    The concept of the track length estimator is applied to the uncollided point flux estimator (UCF) leading to a new algorithm of calculating fluxes at a point. It consists essentially of a line integral of the UCF, and although its variance is unbounded, the convergence rate is that of a bounded variance estimator. In certain applications, involving detector points in the vicinity of collimated beam sources, it has a lower variance than the once-more-collided point flux estimator, and its application is more straightforward

  10. Interactive orbital proximity operations planning system

    Science.gov (United States)

    Grunwald, Arthur J.; Ellis, Stephen R.

    1990-01-01

    An interactive graphical planning system for on-site planning of proximity operations in the congested multispacecraft environment about the space station is presented. The system shows the astronaut a bird's eye perspective of the space station, the orbital plane, and the co-orbiting spacecraft. The system operates in two operational modes: (1) a viewpoint mode, in which the astronaut is able to move the viewpoint around in the orbital plane to range in on areas of interest; and (2) a trajectory design mode, in which the trajectory is planned. Trajectory design involves the composition of a set of waypoints which result in a fuel-optimal trajectory which satisfies all operational constraints, such as departure and arrival constraints, plume impingement constraints, and structural constraints. The main purpose of the system is to present the trajectory and the constraints in an easily interpretable graphical format. Through a graphical interactive process, the trajectory waypoints are edited until all operational constraints are satisfied. A series of experiments was conducted to evaluate the system. Eight airline pilots with no prior background in orbital mechanics participated in the experiments. Subject training included a stand-alone training session of about 6 hours duration, in which the subjects became familiar with orbital mechanics concepts and performed a series of exercises to familiarize themselves with the control and display features of the system. They then carried out a series of production runs in which 90 different trajectory design situations were randomly addressed. The purpose of these experiments was to investigate how the planning time, planning efforts, and fuel expenditures were affected by the planning difficulty. Some results of these experiments are presented.

  11. Selective proximal vagotomy with and without pyloroplasty

    International Nuclear Information System (INIS)

    Brodersen, E.

    1984-01-01

    It was the aim of the study described here to gain information relevant to the well-being of patients subjected to selective proximal vagotomy (SPV) with or without pyloroplasty as soon as possible after surgery. For this purpose, particular care was taken to ascertain the frequency of recurrence and the post-operative occurrence of disturbances in the emptying of gastric contents. In 35 patients solely undergoing SPV and a further 12 individuals in whom both SPV and pyloroplasty had been performed, gastric emptying was monitored using a gamma camera and computer system. All patients were given a standardised test meal consisting of 500 ml ready-made milk labeled with 2 mCi 99mTc-HSA. After the patients had been assigned to different study groups according to the gastric emptying rates established in the individual cases, it became evident that there was a correlation between gastric emptying time (T/2) and the occurrence of post-operative discomfort. In the majority of patients the gastric emptying rate was found to be increased compared to individuals with a healthy stomach. Among a total of 8 patients showing delayed gastric emptying, only one, who solely underwent SPV, reported post-operative discomfort. Markedly increased rates of gastric emptying (T/2 ≤ 5 min) were predominantly determined in patients subjected to SPV in conjunction with pyloroplasty. A dumping syndrome and diarrhea were diagnosed in every third patient. Clinical follow-up studies and questionnaires distributed among the study patients showed relapses to occur with a frequency of 6.7%, the recurrence of ulcers being confined to the group of patients merely undergoing SPV. (TRV) [de]

  12. Nanocrystal Bioassembly: Asymmetry, Proximity, and Enzymatic Manipulation

    Energy Technology Data Exchange (ETDEWEB)

    Claridge, Shelley A. [Univ. of California, Berkeley, CA (United States)

    2008-05-01

    Research at the interface between biomolecules and inorganic nanocrystals has resulted in a great number of new discoveries. In part this arises from the synergistic duality of the system: biomolecules may act as self-assembly agents for organizing inorganic nanocrystals into functional materials; alternatively, nanocrystals may act as microscopic or spectroscopic labels for elucidating the behavior of complex biomolecular systems. However, success in either of these functions relies heavily upon the ability to control the conjugation and assembly processes. In the work presented here, we first design a branched DNA scaffold which allows hybridization of DNA-nanocrystal monoconjugates to form discrete assemblies. Importantly, the asymmetry of the branched scaffold allows the formation of asymmetric assemblies of nanocrystals. In the context of a self-assembled device, this can be considered a step toward the ability to engineer functionally distinct inputs and outputs. Next we develop an anion-exchange high performance liquid chromatography purification method which allows large gold nanocrystals attached to single strands of very short DNA to be purified. When two such complementary conjugates are hybridized, the large nanocrystals are brought into close proximity, allowing their plasmon resonances to couple. Such plasmon-coupled constructs are of interest both as optical interconnects for nanoscale devices and as 'plasmon ruler' biomolecular probes. We then present an enzymatic ligation strategy for creating multi-nanoparticle building blocks for self-assembly. In constructing a nanoscale device, such a strategy would allow pre-assembly and purification of components; these constructs can also act as multi-label probes of single-stranded DNA conformational dynamics. Finally we demonstrate a simple proof-of-concept of a nanoparticle analog of the polymerase chain reaction.

  13. Proximally exposed A-bomb survivors. 2

    International Nuclear Information System (INIS)

    Kamada, Nanao

    1992-01-01

Methods for observing chromosomes can be chronologically divided into the era of the non-differential staining technique (1962-1975) and the era of the differential staining method (since 1976). This paper reviews the literature on chromosomal aberrations in bone marrow cells found in the two eras. Findings during the era of 1962-1975 include the frequency of chromosomal aberrations in bone marrow cells, comparison of chromosomal aberrations in bone marrow cells and T lymphocytes, and annual variation of chromosomal aberrations. The frequency of chromosomal aberrations was high in proximally exposed A-bomb survivors (90.5% and 52.6% in A-bomb survivors exposed within 500 m and at 501-1,000 m, respectively); on the contrary, it was low in those exposed beyond 1,000 m (6.2% or less). The frequency of chromosomal aberrations in bone marrow cells was lower than that in T lymphocytes (21.5% vs 27.1% in those exposed within 500 m and 14.1% vs 23% in those exposed at 501-1,000 m). Annual analysis of chromosomal aberrations has shown some dependence upon medullary hematopoiesis and virus infection. The advent of the differential staining technique since 1976 has made it possible to clarify the type of chromosomal aberrations and the site of breakage. Of 710 bone marrow cells taken from 13 A-bomb survivors exposed within 1,000 m, 121 cells (from 11 A-bomb survivors) exhibited chromosomal aberrations. In differential staining analysis, all 121 cells but one were found to be of stable type, such as translocation and inversion. Furthermore, the site of breakage was found to be non-randomly distributed. Analysis of chromosomal aberrations in bone marrow cells has the advantages of reflecting the dynamic condition of these cells and determining gradual progression into leukemia. (N.K.)

  14. Hypospadias and residential proximity to pesticide applications.

    Science.gov (United States)

    Carmichael, Suzan L; Yang, Wei; Roberts, Eric M; Kegley, Susan E; Wolff, Craig; Guo, Liang; Lammer, Edward J; English, Paul; Shaw, Gary M

    2013-11-01

Experimental evidence suggests pesticides may be associated with hypospadias. Our objective was to examine the association of hypospadias with residential proximity to commercial agricultural pesticide applications. The study population included male infants born from 1991 to 2004 to mothers residing in 8 California counties. Cases (n = 690) were ascertained by the California Birth Defects Monitoring Program; controls were selected randomly from the birth population (n = 2195). We determined early pregnancy exposure to pesticide applications within a 500-m radius of the mother's residential address, using detailed data on applications and land use. Associations with exposures to physicochemical groups of pesticides and specific chemicals were assessed using logistic regression adjusted for maternal race or ethnicity and age and infant birth year. Forty-one percent of cases and controls were classified as exposed to 57 chemical groups and 292 chemicals. Despite >500 statistical comparisons, there were few elevated odds ratios with confidence intervals that excluded 1 for chemical groups or specific chemicals. Those that did were for monochlorophenoxy acid or ester herbicides; the insecticides aldicarb, dimethoate, phorate, and petroleum oils; and the adjuvant polyoxyethylene sorbitol among all cases; 2,6-dinitroaniline herbicides, the herbicide oxyfluorfen, and the fungicide copper sulfate among mild cases; and chloroacetanilide herbicides, polyalkyloxy compounds used as adjuvants, the insecticides aldicarb and acephate, and the adjuvant nonyl-phenoxy-poly(ethylene oxy)ethanol among moderate and severe cases. Odds ratios ranged from 1.9 to 2.9. Most pesticides were not associated with elevated hypospadias risk. For the few that were associated, results should be interpreted with caution until replicated in other study populations.

  15. Ligament augmentation for prevention of proximal junctional kyphosis and proximal junctional failure in adult spinal deformity.

    Science.gov (United States)

    Safaee, Michael M; Deviren, Vedat; Dalle Ore, Cecilia; Scheer, Justin K; Lau, Darryl; Osorio, Joseph A; Nicholls, Fred; Ames, Christopher P

    2018-05-01

OBJECTIVE Proximal junctional kyphosis (PJK) is a well-recognized, yet incompletely defined, complication of adult spinal deformity surgery. There is no standardized definition for PJK, but most studies describe PJK as an increase in the proximal junctional angle (PJA) of greater than 10°-20°. Ligament augmentation is a novel strategy for PJK reduction that provides strength to the upper instrumented vertebra (UIV) and adjacent segments while also reducing junctional stress at those levels. METHODS In this study, ligament augmentation was used in a consecutive series of adult spinal deformity patients at a single institution. Patient demographics, including age; sex; indication for surgery; revision surgery; surgical approach; and use of 3-column osteotomies, vertebroplasty, or hook fixation at the UIV, were collected. The PJA was measured preoperatively and at last follow-up using 36-inch radiographs. Data on change in PJA and need for revision surgery were collected. Univariate and multivariate analyses were performed to identify factors associated with change in PJA and proximal junctional failure (PJF), defined as PJK requiring surgical correction. RESULTS A total of 200 consecutive patients were included: 100 patients before implementation of ligament augmentation and 100 patients after implementation of this technique. The mean age of the ligament augmentation cohort was 66 years, and 67% of patients were women. Over half of these cases (51%) were revision surgeries, with 38% involving a combined anterior or lateral and posterior approach. The mean change in PJA was 6° in the ligament augmentation group compared with 14° in the control group, a statistically significant difference. CONCLUSIONS Compared with a historical cohort, ligament augmentation is associated with a significant decrease in PJK and PJF. These data support the implementation of ligament augmentation in surgery for adult spinal deformity, particularly in patients with a high risk of developing PJK and PJF.

  16. Fast Change Point Detection for Electricity Market Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Berkeley, UC; Gu, William; Choi, Jaesik; Gu, Ming; Simon, Horst; Wu, Kesheng

    2013-08-25

    Electricity is a vital part of our daily life; therefore it is important to avoid irregularities such as the California Electricity Crisis of 2000 and 2001. In this work, we seek to predict anomalies using advanced machine learning algorithms. These algorithms are effective, but computationally expensive, especially if we plan to apply them on hourly electricity market data covering a number of years. To address this challenge, we significantly accelerate the computation of the Gaussian Process (GP) for time series data. In the context of a Change Point Detection (CPD) algorithm, we reduce its computational complexity from O($n^{5}$) to O($n^{2}$). Our efficient algorithm makes it possible to compute the Change Points using the hourly price data from the California Electricity Crisis. By comparing the detected Change Points with known events, we show that the Change Point Detection algorithm is indeed effective in detecting signals preceding major events.
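The record above describes a Gaussian-Process-based Change Point Detection algorithm whose details are not reproduced here. As a much simpler illustration of the underlying idea, the sketch below locates a single mean shift in a synthetic series by scanning every split point and minimizing a least-squares cost; this toy criterion and the synthetic data are assumptions, not the authors' GP method.

```python
import numpy as np

rng = np.random.default_rng(42)
# Piecewise-constant series with a change at index 60 (synthetic, assumed)
series = np.concatenate([rng.normal(0.0, 0.5, 60),   # regime 1
                         rng.normal(3.0, 0.5, 40)])  # regime 2

def split_cost(x, k):
    """Residual cost of modelling x with one mean before index k and another after."""
    left, right = x[:k], x[k:]
    return ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()

# Scan all candidate split points; the minimum-cost split is the change point.
k_hat = min(range(1, len(series)), key=lambda k: split_cost(series, k))
```

Scanning all splits this way is quadratic in the series length, which is exactly the kind of cost the paper's fast GP computation is designed to tame for multi-year hourly data.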

  17. Quantum multiple scattering: Eigenmode expansion and its applications to proximity resonance

    International Nuclear Information System (INIS)

    Li Sheng; Heller, Eric J.

    2003-01-01

We show that for a general system of N s-wave point scatterers, there are always N eigenmodes. These eigenmodes or eigenchannels play the same role as spherical harmonics for a spherically symmetric target--they give a phase shift only. In other words, the T matrix of the system is of rank N, and the eigenmodes are eigenvectors corresponding to nonzero eigenvalues of the T matrix. The eigenmode expansion approach can give insight into the total scattering cross section; the position, width, and superradiant or subradiant nature of resonance peaks; the asymmetric Fano line shape of sharp proximity resonance peaks based on the high-energy tail of a broad band; and other properties. Off-resonant eigenmodes for identical proximate scatterers are approximately angular-momentum eigenstates.

  18. Customizing Extensor Reconstruction in Vascularized Toe Joint Transfers to Finger Proximal Interphalangeal Joints: A Strategic Approach for Correcting Extensor Lag.

    Science.gov (United States)

    Loh, Charles Yuen Yung; Hsu, Chung-Chen; Lin, Cheng-Hung; Chen, Shih-Heng; Lien, Shwu-Huei; Lin, Chih-Hung; Wei, Fu-Chan; Lin, Yu-Te

    2017-04-01

    Vascularized toe proximal interphalangeal joint transfer allows the restoration of damaged joints. However, extensor lag and poor arc of motion have been reported. The authors present their outcomes of treatment according to a novel reconstructive algorithm that addresses extensor lag and allows for consistent results postoperatively. Vascularized toe joint transfers were performed in a consecutive series of 26 digits in 25 patients. The average age was 30.5 years, with 14 right and 12 left hands. Reconstructed digits included eight index, 10 middle, and eight ring fingers. Simultaneous extensor reconstructions were performed and eight were centralization of lateral bands, five were direct extensor digitorum longus-to-extensor digitorum communis repairs, and 13 were central slip reconstructions. The average length of follow-up was 16.7 months. The average extension lag was 17.9 degrees. The arc of motion was 57.7 degrees (81.7 percent functional use of pretransfer toe proximal interphalangeal joint arc of motion). There was no significant difference in the reconstructed proximal interphalangeal joint arc of motion for the handedness (p = 0.23), recipient digits (p = 0.37), or surgical experience in vascularized toe joint transfer (p = 0.25). The outcomes of different techniques of extensor mechanism reconstruction were similar in terms of extensor lag, arc of motion, and reconstructed finger arc of motion compared with the pretransfer toe proximal interphalangeal joint arc of motion. With this treatment algorithm, consistent outcomes can be produced with minimal extensor lag and maximum use of potential toe proximal interphalangeal joint arc of motion. Therapeutic, IV.

  19. Efficient triangulation of Poisson-disk sampled point sets

    KAUST Repository

    Guo, Jianwei

    2014-05-06

    In this paper, we present a simple yet efficient algorithm for triangulating a 2D input domain containing a Poisson-disk sampled point set. The proposed algorithm combines a regular grid and a discrete clustering approach to speedup the triangulation. Moreover, our triangulation algorithm is flexible and performs well on more general point sets such as adaptive, non-maximal Poisson-disk sets. The experimental results demonstrate that our algorithm is robust for a wide range of input domains and achieves significant performance improvement compared to the current state-of-the-art approaches. © 2014 Springer-Verlag Berlin Heidelberg.
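Part of the paper's speedup comes from combining a regular grid with the point set. The sketch below illustrates only that grid-bucketing idea for fixed-radius neighbor queries over a Poisson-disk-like sample; the point set, radius, and helper names are illustrative assumptions, and the triangulation algorithm itself is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(7)
pts = rng.random((500, 2))   # stand-in for a Poisson-disk sample in the unit square
r = 0.05                     # query radius, assumed to be of disk-radius order

# Bucket points into a uniform grid with cell side ~ r, so a radius-r query
# only has to inspect the 3x3 block of cells around the query point.
cells = int(np.ceil(1.0 / r))
grid = {}
for idx, (px, py) in enumerate(pts):
    grid.setdefault((int(px * cells), int(py * cells)), []).append(idx)

def neighbors(q):
    """Indices of all points within distance r of q (hypothetical helper)."""
    cx, cy = int(q[0] * cells), int(q[1] * cells)
    found = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for idx in grid.get((cx + dx, cy + dy), []):
                if np.linalg.norm(pts[idx] - q) <= r:
                    found.append(idx)
    return found
```

Because the cell side equals the query radius, any point within r of the query lies in one of the nine inspected cells, turning an O(n) scan into an O(1) expected-time lookup.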

  20. Area, and Power Performance Analysis of a Floating-Point Based Application on FPGAs

    National Research Council Canada - National Science Library

    Govindu, Gokul

    2003-01-01

.... However, the inevitable quantization effects and the complexity of converting a floating-point algorithm into a fixed-point one limit the use of fixed-point arithmetic for high-precision embedded computing...

  1. NeatSort - A practical adaptive algorithm

    OpenAIRE

    La Rocca, Marcello; Cantone, Domenico

    2014-01-01

We present a new adaptive sorting algorithm which is optimal for most disorder metrics and, more importantly, has a simple and quick implementation. On input $X$, our algorithm has a theoretical $\Omega(|X|)$ lower bound and a $\mathcal{O}(|X|\log|X|)$ upper bound, exhibiting strong adaptive properties which make it run closer to its lower bound as disorder (computed on different metrics) diminishes. From a practical point of view, NeatSort has proven itself competitive with (and of...

  2. Fermion cluster algorithms

    International Nuclear Information System (INIS)

    Chandrasekharan, Shailesh

    2000-01-01

Cluster algorithms have been recently used to eliminate sign problems that plague Monte Carlo methods in a variety of systems. In particular, such algorithms can also be used to solve sign problems associated with the permutation of fermion world lines. This solution leads to the possibility of designing fermion cluster algorithms in certain cases. Using the example of free non-relativistic fermions, we discuss the ideas underlying the algorithm.

  3. Autonomous Star Tracker Algorithms

    DEFF Research Database (Denmark)

    Betto, Maurizio; Jørgensen, John Leif; Kilsgaard, Søren

    1998-01-01

Proposal, in response to an ESA R.f.P., to design algorithms for autonomous star tracker operations. The proposal also included the development of a star tracker breadboard to test the algorithms' performances.

  4. An efficient quantum algorithm for spectral estimation

    Science.gov (United States)

    Steffens, Adrian; Rebentrost, Patrick; Marvian, Iman; Eisert, Jens; Lloyd, Seth

    2017-03-01

    We develop an efficient quantum implementation of an important signal processing algorithm for line spectral estimation: the matrix pencil method, which determines the frequencies and damping factors of signals consisting of finite sums of exponentially damped sinusoids. Our algorithm provides a quantum speedup in a natural regime where the sampling rate is much higher than the number of sinusoid components. Along the way, we develop techniques that are expected to be useful for other quantum algorithms as well—consecutive phase estimations to efficiently make products of asymmetric low rank matrices classically accessible and an alternative method to efficiently exponentiate non-Hermitian matrices. Our algorithm features an efficient quantum-classical division of labor: the time-critical steps are implemented in quantum superposition, while an interjacent step, requiring much fewer parameters, can operate classically. We show that frequencies and damping factors can be obtained in time logarithmic in the number of sampling points, exponentially faster than known classical algorithms.
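The classical matrix pencil method that the quantum algorithm accelerates can be sketched in a few lines: build two time-shifted Hankel-structured matrices from the samples, and read off the signal poles as the dominant eigenvalues of the pencil. This is a minimal noiseless sketch of the classical method with assumed toy parameters, not the quantum implementation.

```python
import numpy as np

fs = 100.0                       # sampling rate in Hz (assumed)
t = np.arange(200) / fs
# Two exponentially damped complex sinusoids: (frequency Hz, damping 1/s)
components = [(5.0, 1.0), (12.0, 2.0)]
y = sum(np.exp((-d + 2j * np.pi * f) * t) for f, d in components)

# Two shifted Hankel-structured matrices built from the samples.
L = len(y) // 2                  # pencil parameter
rows = np.array([y[i:i + L] for i in range(len(y) - L + 1)])
Y0, Y1 = rows[:-1], rows[1:]     # Y1 is Y0 advanced by one sample

# The signal poles z_k = exp((-d_k + 2*pi*i*f_k)/fs) appear as the nonzero
# eigenvalues of pinv(Y0) @ Y1; the rest are numerically ~0.
z = np.linalg.eigvals(np.linalg.pinv(Y0) @ Y1)
z = z[np.argsort(-np.abs(z))[:len(components)]]   # keep the dominant poles

freqs = np.sort(np.angle(z)) * fs / (2 * np.pi)   # recovered frequencies (Hz)
damps = np.sort(-np.log(np.abs(z))) * fs          # recovered dampings (1/s)
```

The quantum speedup in the record targets exactly the linear-algebra steps above (the pseudoinverse and eigendecomposition of the pencil) in the regime where the sampling rate far exceeds the number of components.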

  5. A Direct Search Algorithm for Global Optimization

    Directory of Open Access Journals (Sweden)

    Enrique Baeyens

    2016-06-01

A direct search algorithm is proposed for minimizing an arbitrary real valued function. The algorithm uses a new function transformation and three simplex-based operations. The function transformation provides global exploration features, while the simplex-based operations guarantee the termination of the algorithm and provide global convergence to a stationary point if the cost function is differentiable and its gradient is Lipschitz continuous. The algorithm's performance has been extensively tested using benchmark functions and compared to some well-known global optimization algorithms. The results of the computational study show that the algorithm combines both simplicity and efficiency and is competitive with the heuristics-based strategies presently used for global optimization.
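The paper's own function transformation and simplex operations are not reproduced here; as a generic illustration of the direct search family it belongss to, below is a minimal compass (coordinate) search that uses only function values, no gradients. The objective and starting point are assumed for the example.

```python
import numpy as np

def compass_search(f, x0, step=1.0, tol=1e-8, max_iter=10000):
    """Minimal compass direct search: poll +/- step along each coordinate
    axis, move to any improving point, and halve the step when stuck."""
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    for _ in range(max_iter):
        improved = False
        for i in range(len(x)):
            for s in (step, -step):
                cand = x.copy()
                cand[i] += s
                fc = f(cand)
                if fc < fx:
                    x, fx, improved = cand, fc, True
        if not improved:
            step *= 0.5          # no poll point improved: refine the mesh
            if step < tol:
                break
    return x, fx

# Assumed toy objective with minimum at (1, -2)
xmin, fmin = compass_search(lambda x: (x[0] - 1) ** 2 + (x[1] + 2) ** 2, [5.0, 5.0])
```

Like the paper's simplex-based operations, the step-halving rule is what guarantees termination; convergence to a stationary point likewise relies on smoothness of the objective.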

  6. Treatment of Unstable Trochanteric Femur Fractures: Proximal Femur Nail Versus Proximal Femur Locking Compression Plate.

    Science.gov (United States)

    Singh, Ashutosh Kumar; Narsaria, Nidi; G R, Arun; Srivastava, Vivek

    Unstable trochanteric femur fractures are common fractures that are difficult to manage. We conducted a prospective study to compare functional outcomes and complications of 2 different implant designs, proximal femur nail (PFN) and proximal femur locking compression plate (PFLCP), used in internal fixation of unstable trochanteric femur fractures. On hospital admission, 48 patients with unstable trochanteric fractures were randomly assigned (using a sealed envelope method) to treatment with either PFN (24 patients) or PFLCP (24 patients). Perioperative data and complications were recorded. All cases were followed up for 2 years. The groups did not differ significantly (P > .05) in operative time, reduction quality, complications, hospital length of stay, union rate, or time to union. Compared with the PFLCP group, the PFN group had shorter incisions and less blood loss. Regarding functional outcomes, there was no significant difference in mean Harris Hip Score (P = .48) or Palmer and Parker mobility score (P = .58). Both PFN and PFLCP are effective in internal fixation of unstable trochanteric femur fractures.

  7. Dual pathology proximal median nerve compression of the forearm.

    LENUS (Irish Health Repository)

    Murphy, Siun M

    2013-12-01

We report an unusual case of synchronous pathology in the forearm: the coexistence of a large lipoma of the median nerve together with an osteochondroma of the proximal ulna, giving rise to a dual proximal median nerve compression. Proximal median nerve compression neuropathies in the forearm are uncommon compared to the prevalence of distal compression neuropathies (e.g., carpal tunnel syndrome). Both neural fibrolipomas (Refs. 1,2) and osteochondromas of the proximal ulna (Ref. 3) in isolation are rare but well documented. Unlike that of a distal compression, a proximal compression of the median nerve will often have a definite cause. Neural fibrolipomas, also called fibrolipomatous hamartomas, are rare, slow-growing, benign tumours of peripheral nerves, most often occurring in the median nerve of younger patients. To our knowledge, this is the first report of such dual pathology in the same forearm, giving rise to a severe proximal compression of the median nerve. In this case, the nerve was being pushed anteriorly by the osteochondroma and compressed from within by the intraneural lipoma. This unusual case highlights the advantage of preoperative imaging as part of the workup of proximal median nerve compression.

  8. Dual pathology proximal median nerve compression of the forearm.

    Science.gov (United States)

    Murphy, Siun M; Browne, Katherine; Tuite, David J; O'Shaughnessy, Michael

    2013-12-01

We report an unusual case of synchronous pathology in the forearm: the coexistence of a large lipoma of the median nerve together with an osteochondroma of the proximal ulna, giving rise to a dual proximal median nerve compression. Proximal median nerve compression neuropathies in the forearm are uncommon compared to the prevalence of distal compression neuropathies (e.g., carpal tunnel syndrome). Both neural fibrolipomas (Refs. 1,2) and osteochondromas of the proximal ulna (Ref. 3) in isolation are rare but well documented. Unlike that of a distal compression, a proximal compression of the median nerve will often have a definite cause. Neural fibrolipomas, also called fibrolipomatous hamartomas, are rare, slow-growing, benign tumours of peripheral nerves, most often occurring in the median nerve of younger patients. To our knowledge, this is the first report of such dual pathology in the same forearm, giving rise to a severe proximal compression of the median nerve. In this case, the nerve was being pushed anteriorly by the osteochondroma and compressed from within by the intraneural lipoma. This unusual case highlights the advantage of preoperative imaging as part of the workup of proximal median nerve compression. Copyright © 2013 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.

  9. Digital contrast subtraction radiography for proximal caries diagnosis

    International Nuclear Information System (INIS)

    Kang, Byung Cheol; Yoon, Suk Ja

    2002-01-01

To determine whether subtraction images utilizing contrast media can improve the diagnostic performance of proximal caries diagnosis compared to conventional periapical radiographic images. Thirty-six teeth with 57 proximal surfaces were radiographed using a size no. 2 RVG-ui sensor (Trophy Radiology, Marne-la-Vallee, France). The teeth were then immersed in water-soluble contrast media and subtraction images were obtained. Each tooth was then sectioned for histologic examination. The digital radiographic images and subtraction images were examined and interpreted by three dentists for proximal caries. The results of the proximal caries diagnosis were then verified against the results of the histologic examination. The proximal caries sensitivity using digital subtraction radiography was significantly higher than when simply examining a single digital radiograph. The sensitivity for proximal dentinal carious lesions when analyzed with the subtraction radiograph and the radiograph together was higher than with the subtraction radiograph or the radiograph alone. The use of subtraction radiography with contrast media may be useful for detecting proximal dentinal carious lesions.

  10. Effect of age on proximal esophageal response to swallowing

    Directory of Open Access Journals (Sweden)

    Roberto Oliveira Dantas

    2010-12-01

CONTEXT: It has been demonstrated that the ageing process affects esophageal motility. OBJECTIVES: To evaluate the effect of age on the proximal esophageal response to wet swallows. METHOD: We measured the proximal esophageal response to swallows of a 5 mL bolus of water in 69 healthy volunteers, 20 of them aged 18-30 years (group I), 27 aged 31-50 years (group II), and 22 aged 51-74 years (group III). We used the manometric method with continuous perfusion. The proximal esophageal contractions were recorded 5 cm from a pharyngeal recording site located 1 cm above the upper esophageal sphincter. The time between the onset of the pharyngeal and of the proximal esophageal recording (pharyngeal-esophageal time) and the amplitude, duration and area under the curve of the proximal esophageal contraction were measured. RESULTS: The pharyngeal-esophageal time was shorter in group I subjects than in group II and III subjects (P<0.05). The duration of proximal esophageal contractions was longer in group I than in groups II and III (P<0.001). There were no differences between groups in the amplitude or area under the curve of contractions. There were no differences between groups II and III for any of the measurements. CONCLUSION: We conclude that age may affect the response of the proximal esophagus to wet swallows.

  11. Giant proximity effect and critical opalescence in EuS

    Science.gov (United States)

    Charlton, Timothy; Ramos, Silvia; Quintanilla, Jorge; Suter, Andreas; Moodera, Jagadeesh

    2015-03-01

The proximity effect is a type of wetting phenomenon where an ordered state, usually magnetism or superconductivity, "leaks" from one material into an adjacent one over some finite distance. For superconductors, the characteristic range is of the order of the coherence length, usually hundreds of nm. Nevertheless much longer, "giant" proximity effects have been observed in cuprate perovskite junctions. Such giant proximity effects can be understood by taking into account the divergence of the pairing susceptibility in the non-superconducting material when it is itself close to a superconducting instability: a superconducting version of critical opalescence. Since critical opalescence occurs in all second-order phase transitions, giant proximity effects are expected to be general; there must therefore also be a giant ferromagnetic proximity effect. Compared to its superconducting counterpart, the giant ferromagnetic proximity effect has the advantage that the order parameter (magnetization) can be observed directly. We have fabricated Co/EuS thin films and measured the magnetization profiles as a function of temperature using the complementary techniques of low-energy muon relaxation and polarized neutron reflectivity. Details of the proximity effect near the Curie temperature of EuS will be presented.

  12. A verified LLL algorithm

    NARCIS (Netherlands)

    Divasón, Jose; Joosten, Sebastiaan; Thiemann, René; Yamada, Akihisa

    2018-01-01

The Lenstra-Lenstra-Lovász basis reduction algorithm, also known as the LLL algorithm, is an algorithm to find a basis with short, nearly orthogonal vectors of an integer lattice. Thereby, it can also be seen as an approximation to solve the shortest vector problem (SVP), which is an NP-hard problem.
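As a concrete companion to the description above, here is a textbook-style LLL reduction over exact rationals (standard size-reduction/Lovász-condition loop with δ = 3/4). It recomputes the Gram-Schmidt data from scratch at every step, so it is only a sketch for small inputs, nothing like the verified formalization in the record.

```python
from fractions import Fraction

def lll_reduce(basis, delta=Fraction(3, 4)):
    """Textbook LLL lattice basis reduction over exact rationals."""
    b = [[Fraction(x) for x in v] for v in basis]
    n = len(b)

    def dot(u, v):
        return sum(x * y for x, y in zip(u, v))

    def gram_schmidt():
        bstar = []
        mu = [[Fraction(0)] * n for _ in range(n)]
        for i in range(n):
            v = list(b[i])
            for j in range(i):
                mu[i][j] = dot(b[i], bstar[j]) / dot(bstar[j], bstar[j])
                v = [x - mu[i][j] * y for x, y in zip(v, bstar[j])]
            bstar.append(v)
        return bstar, mu

    k = 1
    while k < n:
        for j in range(k - 1, -1, -1):        # size-reduce b[k] against b[j]
            _, mu = gram_schmidt()
            q = round(mu[k][j])
            if q:
                b[k] = [x - q * y for x, y in zip(b[k], b[j])]
        bstar, mu = gram_schmidt()
        lhs = dot(bstar[k], bstar[k])
        rhs = (delta - mu[k][k - 1] ** 2) * dot(bstar[k - 1], bstar[k - 1])
        if lhs >= rhs:                        # Lovász condition holds
            k += 1
        else:                                 # swap and step back
            b[k], b[k - 1] = b[k - 1], b[k]
            k = max(k - 1, 1)
    return [[int(x) for x in v] for v in b]

reduced = lll_reduce([[1, 1, 1], [-1, 0, 2], [3, 5, 6]])
```

Swaps and integer size-reduction steps are unimodular, so the reduced basis spans the same lattice (the determinant is preserved up to sign) while its vectors become short and nearly orthogonal.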

  13. Cancer in proximity to TV towers

    International Nuclear Information System (INIS)

    Hocking, B.; Gordon, I.; Hatfield, G.

    1996-01-01

Standard (AS2772.1). The few actual measurements available indicate even lower levels. The analysis shows no increased risk of brain cancer. An increased risk of childhood leukaemia is indicated in the municipalities close to the TV towers. The rate ratio for incidence is 1.6 (95% CI: 1.08-2.41) and for mortality is 2.25 (95% CI: 1.29-3.92), mainly due to lymphatic leukaemia (1.63 for incidence, 2.84 for mortality). An association between increased childhood leukaemia and proximity to TV towers is indicated. Further studies are needed to test this association and determine any dose-response relationship before firm conclusions may be reached.

  14. Reoperations following proximal interphalangeal joint nonconstrained arthroplasties.

    Science.gov (United States)

    Pritsch, Tamir; Rizzo, Marco

    2011-09-01

    To retrospectively analyze the reasons for reoperations following primary nonconstrained proximal interphalangeal (PIP) joint arthroplasty and review clinical outcomes in this group of patients with 1 or more reoperations. Between 2001 and 2009, 294 nonconstrained (203 pyrocarbon and 91 metal-plastic) PIP joint replacements were performed in our institution. A total of 76 fingers (59 patients) required reoperation (50 pyrocarbon and 26 metal-plastic). There were 40 women and 19 men with an average age of 51 years (range, 19-83 y). Primary diagnoses included osteoarthritis in 35, posttraumatic arthritis in 24, and inflammatory arthritis in 17 patients. There were 21 index, 27 middle, 18 ring, and 10 small fingers. The average number of reoperations per PIP joint was 1.6 (range, 1-4). A total of 45 joints had 1 reoperation, 19 had 2, 11 had 3, and 1 had 4. Extensor mechanism dysfunction was the most common reason for reoperation; it involved 51 of 76 fingers and was associated with Chamay or tendon-reflecting surgical approaches. Additional etiologies included component loosening in 17, collateral ligament failure in 10, and volar plate contracture in 8 cases. Inflammatory arthritis was associated with collateral ligament failure. Six fingers were eventually amputated, 9 had PIP joint arthrodeses, and 2 had resection arthroplasties. The arthrodesis and amputation rates correlated with the increased number of reoperations per finger. Clinically, most patients had no or mild pain at the most recent follow-up, and the PIP joint range-of-motion was not significantly different from preoperative values. Pain levels improved with longer follow-up. Reoperations following primary nonconstrained PIP joint arthroplasties are common. Extensor mechanism dysfunction was the most common reason for reoperation. The average reoperation rate was 1.6, and arthrodesis and amputation are associated with an increasing number of operations. Overall clinical outcomes demonstrated no

  15. Diffuse scattering from crystals with point defects

    International Nuclear Information System (INIS)

    Andrushevsky, N.M.; Shchedrin, B.M.; Simonov, V.I.; Malakhova, L.F.

    2002-01-01

    The analytical expressions for calculating the intensities of X-ray diffuse scattering from a crystal of finite dimensions and monatomic substitutional, interstitial, or vacancy-type point defects have been derived. The method for the determination of the three-dimensional structure by experimental diffuse-scattering data from crystals with point defects having various concentrations is discussed and corresponding numerical algorithms are suggested

  16. Computing half-plane and strip discrepancy of planar point sets

    NARCIS (Netherlands)

    Berg, de M.

    1996-01-01

We present efficient algorithms for two problems concerning the discrepancy of a set S of n points in the unit square in the plane. First, we describe an algorithm for maintaining the half-plane discrepancy of S under insertions and deletions of points. The algorithm runs in O(n log n) worst-case time.

  17. Linear programming mathematics, theory and algorithms

    CERN Document Server

    1996-01-01

Linear Programming provides an in-depth look at simplex-based as well as the more recent interior point techniques for solving linear programming problems. Starting with a review of the mathematical underpinnings of these approaches, the text provides details of the primal and dual simplex methods with the primal-dual, composite, and steepest edge simplex algorithms. This is then followed by a discussion of interior point techniques, including projective and affine potential reduction, primal and dual affine scaling, and path-following algorithms. Also covered is the theory and solution of the linear complementarity problem using both the complementary pivot algorithm and interior point routines. A feature of the book is its early and extensive development and use of duality theory. Audience: The book is written for students in the areas of mathematics, economics, engineering and management science, and professionals who need a sound foundation in the important and dynamic discipline of linear programming.

  18. An algorithm for online optimization of accelerators

    Energy Technology Data Exchange (ETDEWEB)

    Huang, Xiaobiao [SLAC National Accelerator Lab., Menlo Park, CA (United States); Corbett, Jeff [SLAC National Accelerator Lab., Menlo Park, CA (United States); Safranek, James [SLAC National Accelerator Lab., Menlo Park, CA (United States); Wu, Juhao [SLAC National Accelerator Lab., Menlo Park, CA (United States)

    2013-10-01

We developed a general algorithm for online optimization of accelerator performance, i.e., online tuning, using the performance measure as the objective function. This method, named robust conjugate direction search (RCDS), combines the conjugate direction set approach of Powell's method with a robust line optimizer which accounts for random noise when bracketing the minimum and uses a parabolic fit of data points that uniformly sample the bracketed zone. The method is much more robust against noise than traditional algorithms and is therefore suitable for online application. Simulation and experimental studies have been carried out to demonstrate the strength of the new algorithm.
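RCDS itself is not reproduced here, but its robust line optimizer rests on a simple idea: uniformly sample the bracketed zone and fit a parabola, so that random measurement noise averages out. A toy sketch of that single step, with an assumed noisy objective and bracket:

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_objective(x):
    """Assumed toy objective: true minimum at x = 2, plus measurement noise."""
    return (x - 2.0) ** 2 + 0.01 * rng.normal()

# Uniformly sample the bracketed zone [0, 4] and fit a parabola; the
# least-squares fit averages out noise that would defeat a pointwise search.
xs = np.linspace(0.0, 4.0, 41)
ys = np.array([noisy_objective(x) for x in xs])
a, b, c = np.polyfit(xs, ys, 2)      # fit y ~ a*x^2 + b*x + c
x_min = -b / (2.0 * a)               # vertex of the fitted parabola
```

A single noisy evaluation near the minimum is uninformative, but the fitted vertex is accurate because all 41 samples constrain the parabola; this is the noise robustness the record attributes to the line optimizer.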

  19. Fast algorithm of adaptive Fourier series

    Science.gov (United States)

    Gao, You; Ku, Min; Qian, Tao

    2018-05-01

Adaptive Fourier decomposition (AFD, precisely 1-D AFD or Core-AFD) was originated for the goal of positive frequency representations of signals. It achieved the goal and at the same time offered fast decompositions of signals. There then arose several types of AFDs. AFD merged with the greedy algorithm idea and, in particular, motivated the so-called pre-orthogonal greedy algorithm (Pre-OGA), which was proven to be the most efficient greedy algorithm. The cost of the advantages of the AFD-type decompositions is, however, the high computational complexity due to the involvement of maximal selections of the dictionary parameters. The present paper offers one formulation of the 1-D AFD algorithm by building the FFT algorithm into it. Accordingly, the algorithm complexity is reduced from the original $\mathcal{O}(M N^2)$ to $\mathcal{O}(M N \log_2 N)$, where $N$ denotes the number of the discretization points on the unit circle and $M$ denotes the number of points in $[0,1)$. This greatly enhances the applicability of AFD. Experiments are carried out to show the high efficiency of the proposed algorithm.
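The complexity reduction rests on replacing an $\mathcal{O}(N^2)$ inner computation with the FFT. A quick numerical sanity check (not the AFD algorithm itself) that the dense $\mathcal{O}(N^2)$ DFT matrix product and the $\mathcal{O}(N \log_2 N)$ FFT compute the same transform:

```python
import numpy as np

N = 256
x = np.random.default_rng(1).standard_normal(N)

n = np.arange(N)
F = np.exp(-2j * np.pi * np.outer(n, n) / N)   # dense DFT matrix
direct = F @ x                                 # O(N^2) matrix-vector product
fast = np.fft.fft(x)                           # O(N log N) FFT
```

With the two results identical to machine precision, every maximal-selection sweep over the dictionary can safely use the FFT path, which is where the $MN^2 \to MN\log_2 N$ saving in the paper comes from.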

  20. [Avulsion of the Proximal Hamstring Insertion. Case Reports].

    Science.gov (United States)

    Mizera, R; Harcuba, R; Kratochvíl, J

    2016-01-01

    Proximal hamstring avulsion is an uncommon muscle injury, with a lack of consensus on the indications, timing, and technique of surgery. Subtle clinical symptoms and difficulties in the diagnostic process can lead to a false diagnosis. The authors present three cases of proximal hamstring avulsion: two complete ruptures and one partial rupture of the biceps femoris muscle. MRI and ultrasound scans were used to guide treatment. Acute surgical reconstruction (hamstring strength. Two interesting systematic reviews published on the treatment of proximal hamstring avulsion are discussed in the final part of the paper. Key words: hamstring, rupture, avulsion.

  1. Nature-inspired optimization algorithms

    CERN Document Server

    Yang, Xin-She

    2014-01-01

    Nature-Inspired Optimization Algorithms provides a systematic introduction to all major nature-inspired algorithms for optimization. The book's unified approach, balancing algorithm introduction, theoretical background and practical implementation, complements extensive literature with well-chosen case studies to illustrate how these algorithms work. Topics include particle swarm optimization, ant and bee algorithms, simulated annealing, cuckoo search, firefly algorithm, bat algorithm, flower algorithm, harmony search, algorithm analysis, constraint handling, hybrid methods, parameter tuning

  2. Calculating Graph Algorithms for Dominance and Shortest Path

    DEFF Research Database (Denmark)

    Sergey, Ilya; Midtgaard, Jan; Clarke, Dave

    2012-01-01

    We calculate two iterative, polynomial-time graph algorithms from the literature: a dominance algorithm and an algorithm for the single-source shortest path problem. Both algorithms are calculated directly from the definition of the properties by fixed-point fusion of (1) a least fixed point expressing all finite paths through a directed graph and (2) Galois connections that capture dominance and path length. The approach illustrates that reasoning in the style of fixed-point calculus extends gracefully to the domain of graph algorithms. We thereby bridge common practice from the school of program calculation with common practice from the school of static program analysis, and build a novel view on iterative graph algorithms as instances of abstract interpretation.
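    The fixed-point view of an iterative graph algorithm can be made concrete with a small sketch (illustrative only, not the paper's calculational derivation): single-source shortest paths computed as Kleene iteration toward the least fixed point of the Bellman equations, starting from the bottom element (all distances infinite except the source).

```python
import math

def shortest_paths_fixpoint(graph, source):
    """Iterate D -> min(D, one-step relaxations of D) until nothing
    changes; the result is the least fixed point, i.e. the shortest
    distances (graph: node -> list of (successor, weight) pairs)."""
    nodes = set(graph) | {v for es in graph.values() for v, _ in es}
    dist = {v: math.inf for v in nodes}   # bottom of the lattice
    dist[source] = 0.0
    changed = True
    while changed:                        # Kleene iteration to the fixed point
        changed = False
        for u, edges in graph.items():
            for v, w in edges:
                if dist[u] + w < dist[v]:
                    dist[v] = dist[u] + w
                    changed = True
    return dist
```

    For non-negative (or cycle-free negative) weights the iteration terminates at the same answer Bellman-Ford produces; the fusion in the paper derives such loops rather than postulating them.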

  3. VISUALIZATION OF PAGERANK ALGORITHM

    OpenAIRE

    Perhaj, Ervin

    2013-01-01

    The goal of the thesis is to develop a web application that helps users understand the functioning of the PageRank algorithm. The thesis consists of two parts. First we develop an algorithm to calculate the PageRank values of web pages. The input to the algorithm is a list of web pages and the links between them, which the user enters through the web interface. From these data the algorithm calculates a PageRank value for each page. The algorithm repeats the process, until the difference of PageRank va...
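    The iteration described — recompute PageRank values until they stop changing — can be sketched as follows (a minimal power-iteration illustration; the function and parameter names are hypothetical, not from the thesis):

```python
def pagerank(links, d=0.85, tol=1e-8, max_iter=100):
    """links: page -> list of pages it links to.  Iterate the PageRank
    update until the total change falls below tol."""
    pages = sorted(set(links) | {q for tgts in links.values() for q in tgts})
    n = len(pages)
    pr = {p: 1.0 / n for p in pages}          # uniform start
    for _ in range(max_iter):
        new = {p: (1 - d) / n for p in pages}  # teleportation term
        for p, targets in links.items():
            if targets:
                share = d * pr[p] / len(targets)
                for q in targets:
                    new[q] += share
            else:                              # dangling page: spread evenly
                for q in pages:
                    new[q] += d * pr[p] / n
        diff = sum(abs(new[p] - pr[p]) for p in pages)
        pr = new
        if diff < tol:                         # converged
            break
    return pr
```

    The stopping test on the change between successive iterates is exactly the "difference of PageRank values" criterion the abstract is describing.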

  4. Parallel sorting algorithms

    CERN Document Server

    Akl, Selim G

    1985-01-01

    Parallel Sorting Algorithms explains how to use parallel algorithms to sort a sequence of items on a variety of parallel computers. The book reviews the sorting problem, the parallel models of computation, parallel algorithms, and the lower bounds on the parallel sorting problems. The text also presents twenty different algorithms, such as linear arrays, mesh-connected computers, cube-connected computers. Another example where algorithm can be applied is on the shared-memory SIMD (single instruction stream multiple data stream) computers in which the whole sequence to be sorted can fit in the

  5. Kelp diagrams : Point set membership visualization

    NARCIS (Netherlands)

    Dinkla, K.; Kreveld, van M.J.; Speckmann, B.; Westenberg, M.A.

    2012-01-01

    We present Kelp Diagrams, a novel method to depict set relations over points, i.e., elements with predefined positions. Our method creates schematic drawings and has been designed to take aesthetic quality, efficiency, and effectiveness into account. This is achieved by a routing algorithm, which

  6. Fingerprint Analysis with Marked Point Processes

    DEFF Research Database (Denmark)

    Forbes, Peter G. M.; Lauritzen, Steffen; Møller, Jesper

    We present a framework for fingerprint matching based on marked point process models. An efficient Monte Carlo algorithm is developed to calculate the marginal likelihood ratio for the hypothesis that two observed prints originate from the same finger against the hypothesis that they originate from different fingers. Our model achieves good performance on an NIST-FBI fingerprint database of 258 matched fingerprint pairs.

  7. Modified Clipped LMS Algorithm

    Directory of Open Access Journals (Sweden)

    Lotfizad Mojtaba

    2005-01-01

    A new algorithm is proposed for updating the weights of an adaptive filter. The proposed algorithm is a modification of an existing method, namely, the clipped LMS, and uses a three-level quantization scheme that involves threshold clipping of the input signals in the filter weight update formula. Mathematical analysis shows the convergence of the filter weights to the optimum Wiener filter weights. Also, it can be proved that the proposed modified clipped LMS (MCLMS) algorithm has better tracking than the LMS algorithm. In addition, this algorithm has reduced computational complexity relative to the unmodified one. By using a suitable threshold, it is possible to increase the tracking capability of the MCLMS algorithm compared to the LMS algorithm, but this causes slower convergence. Computer simulations confirm the mathematical analysis presented.
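    The three-level clipping idea can be sketched in a few lines (an illustrative sign/zero-quantized LMS update applied to system identification, under assumed names and parameters, not the authors' exact formulation):

```python
import numpy as np

def quantize3(x, threshold):
    """Three-level quantizer: sign(x) where |x| exceeds the threshold, else 0."""
    return np.where(np.abs(x) > threshold, np.sign(x), 0.0)

def mclms_identify(x, d, n_taps, mu=0.01, threshold=0.1):
    """Adaptive FIR identification with the clipped update
    w <- w + mu * e * Q(x_vec): the input vector enters the update only
    through its quantized levels {-1, 0, +1}, replacing multiplications
    with sign flips (and skipping small inputs entirely)."""
    w = np.zeros(n_taps)
    for n in range(n_taps - 1, len(x)):
        x_vec = x[n - n_taps + 1:n + 1][::-1]   # most recent sample first
        e = d[n] - w @ x_vec                     # a-priori estimation error
        w += mu * e * quantize3(x_vec, threshold)
    return w
```

    Raising the threshold zeroes more update terms (cheaper, better tracking per the abstract) at the price of slower convergence, since fewer taps move per step.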

  8. Proximal Pole Scaphoid Nonunion Reconstruction With 1,2 Intercompartmental Supraretinacular Artery Vascularized Graft and Compression Screw Fixation.

    Science.gov (United States)

    Morris, Mark S; Zhu, Andy F; Ozer, Kagan; Lawton, Jeffrey N

    2018-02-06

    To review the incidence of union in patients with proximal pole scaphoid fracture nonunions treated with a 1,2 intercompartmental supraretinacular artery (1,2 ICSRA) vascularized graft and a small compression screw. This is a retrospective case series of 12 patients. The size of the proximal pole fragment relative to the total scaphoid was calculated from posteroanterior scaphoid radiographs taken with the wrist in ulnar deviation and flat on the cassette. Analyses were repeated 3 times per subject, and the average ratio of the proximal pole fragment to the entire scaphoid was calculated. We reviewed the medical records, radiographs, and computed tomography (CT) scans of these 12 patients. CT scans performed after an average of 12 weeks were ultimately used to confirm union of the scaphoid fractures. One patient could not undergo CT and was excluded from the final calculation. All 11 (100%) scaphoid fractures assessed by CT were found to be healed at the 12-week assessment point. The mean proximal pole fragment size was 18% (range, 7%-27%) of the entire scaphoid. The 1,2 ICSRA vascularized graft with compression screw fixation was an effective treatment for patients with proximal pole scaphoid fractures. Therapeutic IV. Copyright © 2018 American Society for Surgery of the Hand. Published by Elsevier Inc. All rights reserved.

  9. Proximity effects of high voltage electric power transmission lines on ...

    African Journals Online (AJOL)

    Yomi

    2010-08-18

    Aug 18, 2010 ... transmission lines on ornamental plant growth. Zeki Demir ... The effects of proximity to power-line on specific leaf area and seedling dbh were tested .... during vegetation season is about 72% and common wind blow.

  10. Nutritive Value Assessment of Four Crop Residues by Proximate ...

    African Journals Online (AJOL)

    Dr Grace Tona

    Ladoke Akintola University of Technology, P.M.B. 4000, Ogbomoso, Nigeria ... Abstract. This study estimated the proximate composition and in vitro gas production parameters of rice husk, bean ... farms and industries generate large quantities.

  11. Population Exposure Estimates in Proximity to Nuclear Power Plants, Locations

    Data.gov (United States)

    National Aeronautics and Space Administration — The Population Exposure Estimates in Proximity to Nuclear Power Plants, Locations data set combines information from a global data set developed by Declan Butler of...

  12. Proximate composition and nutrient content of some wild and ...

    African Journals Online (AJOL)

    Proximate composition and nutrient content of some wild and cultivated ... Ca, P, K), one minor mineral (Fe) constituent and vitamin C content were determined. ... Mineral content (P and K) in the mushroom sporophores were found to be ...

  13. Effects of Fermentation and Extrusion on the Proximate Composition ...

    African Journals Online (AJOL)

    Prof. Ogunji

    protein malnutrition persists as a principal health problem among children .... Proximate analysis: Moisture content ... nitrogen by a factor of 6.25. Crude fat ... Statistical analysis: The data were ..... interaction of amino acid in maillard reactions.

  14. Proximate composition and mineral contents of Pebbly fish, Alestes ...

    African Journals Online (AJOL)

    ACSS

    /100 g) were significantly .... Proximate analysis of A. baremoze fillets according to sample sizes in a study in. Uganda .... present in the fish tissues in trace amounts. ... The zinc contents in fish samples of ..... (mercury, cadmium, lead, tin and.

  15. Effects of Planting Locations on the Proximate Compositions of ...

    African Journals Online (AJOL)

    ADOWIE PERE

    Ash, moisture, crude fat, crude fibre, carbohydrate and protein contents were determined according ... All fruits, vegetables, legumes (beans and peas), and the grains we eat ... isolate and quantify each proximate present in the plant material.

  16. Development and experimentation of LQR/APF guidance and control for autonomous proximity maneuvers of multiple spacecraft

    Science.gov (United States)

    Bevilacqua, R.; Lehmann, T.; Romano, M.

    2011-04-01

    This work introduces a novel control algorithm for close-proximity multiple-spacecraft autonomous maneuvers, based on a hybrid linear quadratic regulator/artificial potential function (LQR/APF) approach, for applications including autonomous docking, on-orbit assembly, and spacecraft servicing. Both theoretical developments and experimental validation of the proposed approach are presented. Fuel consumption is approximately minimized in real time by recomputing the LQR at each sample time, while collision avoidance is performed through the APF and a high-level decision logic. The underlying LQR/APF controller is integrated with a customized wall-following technique and a decision logic that overcome problems such as local minima. The algorithm is experimentally tested on a four-spacecraft-simulator test bed at the Spacecraft Robotics Laboratory of the Naval Postgraduate School. The metrics used to evaluate the control algorithm are: autonomy of the system in making decisions, successful completion of the maneuver, required time, and propellant consumption.
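    The hybrid scheme — an LQR pulling the chaser toward the target plus an APF repulsion near obstacles — can be sketched for a planar double integrator (a simplified stand-in for the relative-motion dynamics used in the paper; all gains, names, and parameters here are illustrative assumptions):

```python
import numpy as np

dt = 0.1
# Planar double integrator: state [x, y, vx, vy], control = acceleration.
A = np.block([[np.eye(2), dt * np.eye(2)],
              [np.zeros((2, 2)), np.eye(2)]])
B = np.vstack([0.5 * dt**2 * np.eye(2), dt * np.eye(2)])

def lqr_gain(A, B, Q, R, iters=500):
    """Fixed-point iteration of the discrete-time Riccati equation;
    returns the state-feedback gain K for u = -K x."""
    P = Q.copy()
    for _ in range(iters):
        BtP = B.T @ P
        K = np.linalg.solve(R + BtP @ B, BtP @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

def apf_repulsion(pos, obstacle, k_rep=0.2, d0=0.5):
    """Repulsive acceleration of the artificial potential, active within d0."""
    diff = pos - obstacle
    d = float(np.linalg.norm(diff))
    if d >= d0 or d == 0.0:
        return np.zeros(2)
    return k_rep * (1.0 / d - 1.0 / d0) / d**2 * (diff / d)

K = lqr_gain(A, B, np.eye(4), np.eye(2))
x = np.array([2.0, 2.0, 0.0, 0.0])          # chaser starts offset, at rest
obstacle = np.array([1.0, 0.5])
for _ in range(600):
    u = -K @ x + apf_repulsion(x[:2], obstacle)   # LQR pull + APF push
    x = A @ x + B @ u                              # propagate one step
```

    Recomputing `K` at each sample time (as the paper does for the time-varying relative dynamics) is what makes the fuel usage approximately optimal; here the dynamics are constant, so one gain suffices.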

  17. Anchor proximal migration in the medial patellofemoral ligament reconstruction in skeletally immature patients

    Directory of Open Access Journals (Sweden)

    Fabiano Kupczik

    2013-09-01

    Injury to the medial patellofemoral ligament (MPFL) has been considered instrumental in lateral patellar instability after patellar dislocation. Consequently, the focus on the study of reconstruction of this ligament has increased in recent years. The anatomical femoral origin of the MPFL is of great importance at the moment of reconstruction surgery, because graft fixation in a non-anatomical position may result in medial overload, medial subluxation of the patella, or excessive tensioning of the graft with subsequent failure. In the pediatric population, locating this point is complicated by the presence of the femoral physis. The literature is still controversial regarding the best placement of the graft. We describe two cases of skeletally immature patients in whom MPFL reconstruction was performed. Femoral fixation was achieved with anchors placed above the physis. With the growth and development of the patients, the femoral origin point of the graft migrated proximally, resulting in failure in these two cases.

  18. Interval Mathematics Applied to Critical Point Transitions

    Directory of Open Access Journals (Sweden)

    Benito A. Stradi

    2012-03-01

    The determination of critical points of mixtures is important for both practical and theoretical reasons in the modeling of phase behavior, especially at high pressure. The equations that describe the behavior of complex mixtures near critical points are highly nonlinear, and the critical point equations admit multiple solutions. Interval arithmetic can be used to reliably locate all the critical points of a given mixture. The method also verifies the nonexistence of a critical point if a mixture of a given composition does not have one. This study uses an interval Newton/generalized bisection algorithm that provides a mathematical and computational guarantee that all mixture critical points are located. The technique is illustrated using several example problems. These problems involve cubic equation-of-state models; however, the technique is general purpose and can be applied in connection with other nonlinear problems.
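    A one-dimensional interval Newton/bisection of the kind described can be sketched as follows (a toy illustration on f(x) = x³ − 2x with hand-rolled interval arithmetic, not the paper's multidimensional equation-of-state solver; rounding is ignored, so the "guarantee" here is only up to floating-point error):

```python
import math

def imul(a, b):
    """Interval product [a] * [b]."""
    ps = (a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1])
    return (min(ps), max(ps))

def f(x):                       # f(x) = x^3 - 2x; roots 0 and +/- sqrt(2)
    return x**3 - 2 * x

def f_iv(X):                    # interval extension of f
    cube = imul(imul(X, X), X)
    return (cube[0] - 2 * X[1], cube[1] - 2 * X[0])

def fp_iv(X):                   # interval extension of f'(x) = 3x^2 - 2
    sq = imul(X, X)
    return (3 * sq[0] - 2, 3 * sq[1] - 2)

def interval_newton(X, tol=1e-10, out=None):
    """Enclose ALL roots of f in X: discard boxes whose range excludes 0,
    contract with the Newton operator when f'(X) excludes 0, else bisect."""
    if out is None:
        out = []
    lo, hi = X
    Fx = f_iv(X)
    if Fx[0] > 0 or Fx[1] < 0:          # range excludes 0: provably no root
        return out
    if hi - lo < tol:
        out.append(0.5 * (lo + hi))
        return out
    m = 0.5 * (lo + hi)
    Fp = fp_iv(X)
    if Fp[0] <= 0 <= Fp[1]:             # derivative may vanish: bisect
        interval_newton((lo, m), tol, out)
        interval_newton((m, hi), tol, out)
        return out
    q = sorted((f(m) / Fp[0], f(m) / Fp[1]))
    N = (m - q[1], m - q[0])            # Newton operator N(X) = m - f(m)/f'(X)
    new = (max(lo, N[0]), min(hi, N[1]))
    if new[0] > new[1]:                 # empty intersection: no root in X
        return out
    if new[1] - new[0] > 0.8 * (hi - lo):   # contraction stalled: bisect
        m2 = 0.5 * (new[0] + new[1])
        interval_newton((new[0], m2), tol, out)
        interval_newton((m2, new[1]), tol, out)
    else:
        interval_newton(new, tol, out)
    return out

def all_roots(X, tol=1e-10):
    """Merge near-duplicate enclosures produced at shared bisection points."""
    roots = []
    for r in sorted(interval_newton(X, tol)):
        if not roots or r - roots[-1] > 1000 * tol:
            roots.append(r)
    return roots
```

    The key property mirrored from the paper: a box is discarded only when interval arithmetic proves it root-free, so no solution of the equations can be missed.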

  19. In-Place Algorithms for Computing (Layers of) Maxima

    DEFF Research Database (Denmark)

    Blunck, Henrik; Vahrenhold, Jan

    2006-01-01

    We describe space-efficient algorithms for solving problems related to finding maxima among points in two and three dimensions. Our algorithms run in optimal O(n log n) time and require O(1) space in addition to the representation of the input.
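    The underlying maxima problem can be illustrated with a standard staircase sweep (not the paper's in-place O(1)-extra-space method, which is its actual contribution; this sketch uses O(n) extra space for clarity):

```python
def maxima_2d(points):
    """A point is maximal if no other point exceeds it in both coordinates.
    Sort by x descending and sweep, keeping the best y seen so far: each
    kept point starts a new step of the 'staircase'.  O(n log n) total."""
    best_y = float('-inf')
    result = []
    for x, y in sorted(points, key=lambda p: (-p[0], -p[1])):
        if y > best_y:          # nothing to the right is higher: maximal
            result.append((x, y))
            best_y = y
    return result
```

    The in-place variants in the paper reach the same output while using the input array itself as the only working storage.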

  20. A simple algorithm for computing the smallest enclosing circle

    DEFF Research Database (Denmark)

    Skyum, Sven

    1991-01-01

    Presented is a simple O(n log n) algorithm for computing the smallest enclosing circle of a convex polygon. It can be easily extended to algorithms that compute the farthest- and the closest-point Voronoi diagram of a convex polygon within the same time bound.
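    For context, the smallest-enclosing-circle problem can also be solved for arbitrary (not necessarily convex-polygon) point sets by Welzl-style randomized incremental construction in expected O(n) time — a different, general-purpose technique, sketched here as an illustration rather than the paper's convex-polygon algorithm:

```python
import math
import random

def _circle_two(p, q):
    """Circle with segment pq as diameter."""
    return ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2, math.dist(p, q) / 2)

def _circle_three(a, b, c):
    """Circumcircle of three points; falls back to the widest two-point
    circle when the points are (nearly) collinear."""
    ax, ay = a; bx, by = b; cx, cy = c
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if abs(d) < 1e-12:
        return max((_circle_two(p, q) for p, q in ((a, b), (b, c), (a, c))),
                   key=lambda circ: circ[2])
    ux = ((ax*ax + ay*ay) * (by - cy) + (bx*bx + by*by) * (cy - ay)
          + (cx*cx + cy*cy) * (ay - by)) / d
    uy = ((ax*ax + ay*ay) * (cx - bx) + (bx*bx + by*by) * (ax - cx)
          + (cx*cx + cy*cy) * (bx - ax)) / d
    return (ux, uy, math.dist((ux, uy), a))

def _contains(c, p, eps=1e-9):
    return c is not None and math.dist((c[0], c[1]), p) <= c[2] + eps

def smallest_enclosing_circle(points):
    """Randomized incremental construction: whenever a point falls outside
    the current circle, it must lie on the boundary of the new one."""
    pts = list(points)
    random.shuffle(pts)
    c = None
    for i, p in enumerate(pts):
        if not _contains(c, p):
            c = (p[0], p[1], 0.0)
            for j, q in enumerate(pts[:i]):
                if not _contains(c, q):
                    c = _circle_two(p, q)
                    for k in pts[:j]:
                        if not _contains(c, k):
                            c = _circle_three(p, q, k)
    return c
```

    Exploiting convex-polygon vertex order, as the paper does, removes the need for randomization while keeping the same output.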