WorldWideScience

Sample records for simple algorithmic principles

  1. What to Do When K-Means Clustering Fails: A Simple yet Principled Alternative Algorithm.

    Science.gov (United States)

    Raykov, Yordan P; Boukouvalas, Alexis; Baig, Fahd; Little, Max A

    The K-means algorithm is one of the most popular clustering algorithms in current use, as it is relatively fast yet simple to understand and deploy in practice. Nevertheless, its use entails certain restrictive assumptions about the data, the negative consequences of which are not always immediately apparent, as we demonstrate. While more flexible algorithms have been developed, their widespread use has been hindered by their computational and technical complexity. Motivated by these considerations, we present a flexible alternative to K-means that relaxes most of the assumptions, whilst remaining almost as fast and simple. This novel algorithm, which we call MAP-DP (maximum a posteriori Dirichlet process mixtures), is statistically rigorous as it is based on nonparametric Bayesian Dirichlet process mixture modeling. This approach allows us to overcome most of the limitations imposed by K-means. The number of clusters K is estimated from the data instead of being fixed a priori as in K-means. In addition, while K-means is restricted to continuous data, the MAP-DP framework can be applied to many kinds of data, for example, binary, count or ordinal data. Also, it can efficiently separate outliers from the data. This additional flexibility does not incur a significant computational overhead compared to K-means, with MAP-DP convergence typically achieved in the order of seconds for many practical problems. Finally, in contrast to K-means, since the algorithm is based on an underlying statistical model, the MAP-DP framework can deal with missing data and enables model testing such as cross validation in a principled way. We demonstrate the simplicity and effectiveness of this algorithm on the health informatics problem of clinical sub-typing in a cluster of diseases known as parkinsonism.
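
The fixed-K assumption that MAP-DP relaxes is easiest to see in the baseline algorithm itself. Below is a minimal pure-Python sketch of Lloyd's K-means iteration (not the MAP-DP algorithm from the paper); note that K must be chosen up front, which is exactly the restriction the abstract discusses.

```python
import random

def kmeans(points, k, iters=100, seed=0):
    """Minimal Lloyd's algorithm on tuples of floats (Euclidean distance).
    Illustrates the fixed-K assumption that MAP-DP removes."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        # Assignment step: each point joins its nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[j].append(p)
        # Update step: move each center to the mean of its cluster
        # (keep the old center if a cluster is empty).
        new_centers = [
            tuple(sum(xs) / len(c) for xs in zip(*c)) if c else centers[j]
            for j, c in enumerate(clusters)
        ]
        if new_centers == centers:
            break
        centers = new_centers
    return centers, clusters
```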

  2. Algorithmic Principles of Mathematical Programming

    NARCIS (Netherlands)

    Faigle, Ulrich; Kern, Walter; Still, Georg

    2002-01-01

    Algorithmic Principles of Mathematical Programming investigates the mathematical structures and principles underlying the design of efficient algorithms for optimization problems. Recent advances in algorithmic theory have shown that the traditionally separate areas of discrete optimization, linear

  3. Simple sorting algorithm test based on CUDA

    OpenAIRE

    Meng, Hongyu; Guo, Fangjin

    2015-01-01

    With the development of computing technology, CUDA has become a very important tool. In computer programming, sorting algorithms are widely used. There are many simple sorting algorithms, such as enumeration sort, bubble sort and merge sort. In this paper, we test some simple sorting algorithms based on CUDA and draw some useful conclusions.
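
For reference, here are sequential sketches of two of the sorts named in the abstract, in plain Python rather than CUDA; the GPU versions parallelize these ideas, but the comparison logic is the same.

```python
def bubble_sort(a):
    """Bubble sort: repeatedly swap adjacent out-of-order pairs;
    stop early when a full pass makes no swap."""
    a = list(a)
    for n in range(len(a), 1, -1):
        swapped = False
        for i in range(n - 1):
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
                swapped = True
        if not swapped:
            break
    return a

def merge_sort(a):
    """Top-down merge sort: sort halves recursively, then merge."""
    if len(a) <= 1:
        return list(a)
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]
```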

  4. Time-advance algorithms based on Hamilton's principle

    International Nuclear Information System (INIS)

    Lewis, H.R.; Kostelec, P.J.

    1993-01-01

    Time-advance algorithms based on Hamilton's variational principle are being developed for application to problems in plasma physics and other areas. Hamilton's principle was applied previously to derive a system of ordinary differential equations in time whose solution provides an approximation to the evolution of a plasma described by the Vlasov-Maxwell equations. However, the variational principle was not used to obtain an algorithm for solving the ordinary differential equations numerically. The present research addresses the numerical solution of systems of ordinary differential equations via Hamilton's principle. The basic idea is first to choose a class of functions for approximating the solution of the ordinary differential equations over a specific time interval. Then the parameters in the approximating function are determined by applying Hamilton's principle exactly within the class of approximating functions. For example, if an approximate solution is desired between time t and time t + Δt, the class of approximating functions could be polynomials in time up to some degree. The issue of how to choose time-advance algorithms is very important for achieving efficient, physically meaningful computer simulations. The objective is to reliably simulate those characteristics of an evolving system that are scientifically most relevant. Preliminary numerical results are presented, including comparisons with other computational methods.
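
As a concrete instance of the trial-function idea, applying a discrete Hamilton's principle with piecewise-linear (degree-one polynomial) approximating functions on each step can be shown to recover the Störmer-Verlet scheme. The sketch below uses a unit harmonic oscillator as an assumed test problem (not an example taken from the paper) to show the resulting time-advance and its good long-time energy behaviour.

```python
def verlet_oscillator(q0=1.0, p0=0.0, dt=0.05, steps=2000):
    """Stormer-Verlet (velocity Verlet) for H = (p^2 + q^2)/2.
    This update is the variational integrator obtained from Hamilton's
    principle with piecewise-linear trial trajectories on each step."""
    q, p = q0, p0
    traj = [(q, p)]
    for _ in range(steps):
        p -= 0.5 * dt * q   # half kick (force = -q)
        q += dt * p         # drift with updated momentum
        p -= 0.5 * dt * q   # half kick at the new position
        traj.append((q, p))
    return traj
```

Because the scheme is symplectic, the energy error stays bounded instead of drifting, which is the kind of physically meaningful behaviour the abstract emphasizes.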

  5. Simple Activity Demonstrates Wind Energy Principles

    Science.gov (United States)

    Roman, Harry T.

    2012-01-01

    Wind energy is an exciting and clean energy option often described as the fastest-growing energy system on the planet. With some simple materials, teachers can easily demonstrate its key principles in their classroom. (Contains 1 figure and 2 tables.)

  6. A simple algorithm for computing the smallest enclosing circle

    DEFF Research Database (Denmark)

    Skyum, Sven

    1991-01-01

    Presented is a simple O(n log n) algorithm for computing the smallest enclosing circle of a convex polygon. It can be easily extended to algorithms that compute the farthest- and the closest-point Voronoi diagram of a convex polygon within the same time bound.
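
A brute-force baseline makes the problem concrete: every smallest enclosing circle is determined either by two points (as a diameter) or by three points (as a circumcircle), so an O(n^4) search over those candidates finds it. This is only an illustrative sketch, not Skyum's O(n log n) algorithm from the record.

```python
from itertools import combinations
import math

def _circle_two(p, q):
    # Circle with segment pq as diameter.
    cx, cy = (p[0] + q[0]) / 2, (p[1] + q[1]) / 2
    return cx, cy, math.dist(p, q) / 2

def _circle_three(p, q, r):
    # Circumcircle of three non-collinear points.
    ax, ay = p; bx, by = q; cx, cy = r
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if abs(d) < 1e-12:
        return None  # collinear: no circumcircle
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return ux, uy, math.dist((ux, uy), p)

def smallest_enclosing_circle(pts):
    """Brute force: try every pair/triple circle, keep the smallest
    one that covers all points. Returns (cx, cy, r)."""
    eps = 1e-9
    candidates = [_circle_two(p, q) for p, q in combinations(pts, 2)]
    candidates += [c for t in combinations(pts, 3)
                   if (c := _circle_three(*t)) is not None]
    best = None
    for cx, cy, r in candidates:
        if all(math.dist((cx, cy), p) <= r + eps for p in pts):
            if best is None or r < best[2]:
                best = (cx, cy, r)
    return best
```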

  7. Graphics and visualization principles & algorithms

    CERN Document Server

    Theoharis, T; Platis, Nikolaos; Patrikalakis, Nicholas M

    2008-01-01

    Computer and engineering collections strong in applied graphics and analysis of visual data via computer will find that Graphics & Visualization: Principles and Algorithms makes an excellent classroom text as well as supplemental reading. It integrates coverage of computer graphics and other visualization topics, from shadow generation and particle tracing to spatial subdivision and vector data visualization, and it provides a thorough review of literature from multiple experts, making for a comprehensive review essential to any advanced computer study.-California Bookw

  8. Connectivity algorithm with depth first search (DFS) on simple graphs

    Science.gov (United States)

    Riansanti, O.; Ihsan, M.; Suhaimi, D.

    2018-01-01

    This paper discusses an algorithm to detect connectivity of a simple graph using Depth First Search (DFS). The DFS implementation in this paper differs from other research in how it counts the number of visited vertices. The algorithm derives a counter s from the number of vertices, then visits the source vertex, followed by its adjacent vertices, until the last vertex adjacent to a previously visited vertex. Any simple graph is connected if s equals 0 and disconnected if s is greater than 0. The complexity of the algorithm is O(n²).
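
The counting idea in the abstract can be sketched as follows: run DFS from one source and count the vertices never reached (s); the graph is connected iff s = 0. Vertex labels 0..n-1 and the edge-list input format are assumptions made for this sketch.

```python
def is_connected(n, edges):
    """Connectivity check for a simple undirected graph on vertices
    0..n-1 via iterative DFS from vertex 0; s counts unreached vertices."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    seen = [False] * n
    stack = [0]
    seen[0] = True
    while stack:
        u = stack.pop()
        for v in adj[u]:
            if not seen[v]:
                seen[v] = True
                stack.append(v)
    s = seen.count(False)  # vertices never reached from the source
    return s == 0
```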

  9. Training nuclei detection algorithms with simple annotations

    Directory of Open Access Journals (Sweden)

    Henning Kost

    2017-01-01

    Full Text Available Background: Generating good training datasets is essential for machine learning-based nuclei detection methods. However, creating exhaustive nuclei contour annotations, to derive optimal training data from, is often infeasible. Methods: We compared different approaches for training nuclei detection methods solely based on nucleus center markers. Such markers contain less accurate information, especially with regard to nuclear boundaries, but can be produced much easier and in greater quantities. The approaches use different automated sample extraction methods to derive image positions and class labels from nucleus center markers. In addition, the approaches use different automated sample selection methods to improve the detection quality of the classification algorithm and reduce the run time of the training process. We evaluated the approaches based on a previously published generic nuclei detection algorithm and a set of Ki-67-stained breast cancer images. Results: A Voronoi tessellation-based sample extraction method produced the best performing training sets. However, subsampling of the extracted training samples was crucial. Even simple class balancing improved the detection quality considerably. The incorporation of active learning led to a further increase in detection quality. Conclusions: With appropriate sample extraction and selection methods, nuclei detection algorithms trained on the basis of simple center marker annotations can produce comparable quality to algorithms trained on conventionally created training sets.

  10. Modified SIMPLE algorithm for the numerical analysis of incompressible flows with free surface

    International Nuclear Information System (INIS)

    Mok, Jin Ho; Hong, Chun Pyo; Lee, Jin Ho

    2005-01-01

    While the SIMPLE algorithm is most widely used for simulations of flow phenomena that take place in industrial equipment or manufacturing processes, it is less often adopted for simulations of free surface flow. Though the SIMPLE algorithm is free from the time-step limitation, the free surface behavior imposes a restriction on the time step. As a result, explicit schemes are faster than the implicit scheme in terms of computation time when the same time step is applied, since the implicit scheme must solve simultaneous equations in its procedure. If the computation time of the SIMPLE algorithm can be reduced when it is applied to unsteady free surface flow problems, the calculation can be carried out in a more stable way and, in the design process, the process variables can be controlled based on a more accurate database. In this study, a modified SIMPLE algorithm is presented for free surface flow. The broken water column problem is adopted for validation of the modified algorithm (MoSIMPLE) and for comparison to the conventional SIMPLE algorithm.

  11. Inverse synthetic aperture radar imaging principles, algorithms and applications

    CERN Document Server

    Chen, Victor C

    2014-01-01

    Inverse Synthetic Aperture Radar Imaging: Principles, Algorithms and Applications is based on the latest research on ISAR imaging of moving targets and non-cooperative target recognition (NCTR). With a focus on the advances and applications, this book will provide readers with a working knowledge on various algorithms of ISAR imaging of targets and implementation with MATLAB. These MATLAB algorithms will prove useful in order to visualize and manipulate some simulated ISAR images.

  12. Simple Obstacle Avoidance Algorithm for Rehabilitation Robots

    NARCIS (Netherlands)

    Stuyt, Floran H.A.; Römer, GertWillem R.B.E.; Stuyt, Harry J.A.

    2007-01-01

    The efficiency of a rehabilitation robot is improved by offering record-and-replay to operate the robot. While automatically moving to a stored target (replay) collisions of the robot with obstacles in its work space must be avoided. A simple, though effective, generic and deterministic algorithm

  13. A new simple iterative reconstruction algorithm for SPECT transmission measurement

    International Nuclear Information System (INIS)

    Hwang, D.S.; Zeng, G.L.

    2005-01-01

    This paper proposes a new iterative reconstruction algorithm for transmission tomography and compares this algorithm with several other methods. The new algorithm is simple and resembles the emission ML-EM algorithm in form. Due to its simplicity, it is easy to implement and fast to compute a new update at each iteration. The algorithm also always guarantees non-negative solutions. Evaluations are performed using simulation studies and real phantom data. Comparisons with other algorithms such as convex, gradient, and logMLEM show that the proposed algorithm is as good as others and performs better in some cases
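
The paper's new transmission algorithm is only summarized above, but the emission ML-EM update it is said to resemble has the following well-known multiplicative form, which preserves non-negativity whenever the system matrix, data, and starting point are non-negative. Pure-Python sketch with illustrative toy data:

```python
def mlem(A, y, iters=50):
    """Classical emission ML-EM iteration:
        x_j <- x_j / s_j * sum_i A[i][j] * y[i] / (A x)_i,   s_j = sum_i A[i][j].
    A is an m x n list-of-lists system matrix, y the measured projections."""
    m, n = len(A), len(A[0])
    x = [1.0] * n                                   # positive start
    s = [sum(A[i][j] for i in range(m)) for j in range(n)]
    for _ in range(iters):
        Ax = [sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]
        back = [sum(A[i][j] * y[i] / Ax[i] for i in range(m)) for j in range(n)]
        x = [x[j] * back[j] / s[j] for j in range(n)]
    return x
```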

  14. The algorithms and principles of non-photorealistic graphics

    CERN Document Server

    Geng, Weidong

    2011-01-01

    ""The Algorithms and Principles of Non-photorealistic Graphics: Artistic Rendering and Cartoon Animation"" provides a conceptual framework for and comprehensive and up-to-date coverage of research on non-photorealistic computer graphics including methodologies, algorithms and software tools dedicated to generating artistic and meaningful images and animations. This book mainly discusses how to create art from a blank canvas, how to convert the source images into pictures with the desired visual effects, how to generate artistic renditions from 3D models, how to synthesize expressive pictures f

  15. Nodal algorithm derived from a new variational principle

    International Nuclear Information System (INIS)

    Watson, Fernando V.

    1995-01-01

    As a by-product of the research being carried out by the author on methods of recovering the pin power distribution of PWR cores, a nodal algorithm based on a modified variational principle for the two-group diffusion equations has been obtained. The main feature of the new algorithm is the low dimensionality achieved by the reduction of the original diffusion equations to a system of algebraic eigenvalue equations involving the average sources only, instead of the sources and interface group currents used in conventional nodal methods. The advantages of this procedure are discussed, and results generated by the new algorithm and by a finite difference code are compared. (author). 2 refs, 7 tabs

  16. Fast algorithms for transforming back and forth between a signed permutation and its equivalent simple permutation.

    Science.gov (United States)

    Gog, Simon; Bader, Martin

    2008-10-01

    The problem of sorting signed permutations by reversals is a well-studied problem in computational biology. The first polynomial time algorithm was presented by Hannenhalli and Pevzner in 1995. The algorithm was improved several times, and nowadays the most efficient algorithm has a subquadratic running time. Simple permutations played an important role in the development of these algorithms. Although the latest result of Tannier et al. does not require simple permutations, the preliminary version of their algorithm as well as the first polynomial time algorithm of Hannenhalli and Pevzner use the structure of simple permutations. More precisely, the latter algorithms require a precomputation that transforms a permutation into an equivalent simple permutation. To the best of our knowledge, all published algorithms for this transformation have at least a quadratic running time. For further investigations on genome rearrangement problems, the existence of a fast algorithm for the transformation could be crucial. Another important task is the back transformation, i.e. if we have a sorting on the simple permutation, transform it into a sorting on the original permutation. Again, the naive approach results in an algorithm with quadratic running time. In this paper, we present a linear time algorithm for transforming a permutation into an equivalent simple permutation, and an O(n log n) algorithm for the back transformation of the sorting sequence.

  17. A simple and efficient parallel FFT algorithm using the BSP model

    NARCIS (Netherlands)

    Bisseling, R.H.; Inda, M.A.

    2000-01-01

    In this paper we present a new parallel radix FFT algorithm based on the BSP model. Our parallel algorithm uses the group-cyclic distribution family, which makes it simple to understand and easy to implement. We show how to reduce the communication cost of the algorithm by a factor of three in the case
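
The computation being distributed over BSP processes is the standard radix-2 FFT. A sequential Cooley-Tukey sketch (the paper's contribution is the parallel group-cyclic data distribution, which is not shown here):

```python
import cmath

def fft(a):
    """Recursive radix-2 Cooley-Tukey FFT; len(a) must be a power of two.
    Splits into even/odd halves, then combines with twiddle factors."""
    n = len(a)
    if n == 1:
        return list(a)
    even = fft(a[0::2])
    odd = fft(a[1::2])
    out = [0] * n
    for k in range(n // 2):
        w = cmath.exp(-2j * cmath.pi * k / n) * odd[k]  # twiddle * odd part
        out[k] = even[k] + w
        out[k + n // 2] = even[k] - w
    return out
```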

  18. A simple two stage optimization algorithm for constrained power economic dispatch

    International Nuclear Information System (INIS)

    Huang, G.; Song, K.

    1994-01-01

    A simple two stage optimization algorithm is proposed and investigated for fast computation of constrained power economic dispatch control problems. The method is a simple demonstration of the hierarchical aggregation-disaggregation (HAD) concept. The algorithm first solves an aggregated problem to obtain an initial solution. This aggregated problem turns out to be the classical economic dispatch formulation, and it can be solved in 1% of the overall computation time. In the second stage, a linear programming method finds the optimal solution which satisfies power balance constraints, generation and transmission inequality constraints, and security constraints. Implementation of the algorithm for IEEE systems and EPRI Scenario systems shows that the two stage method obtains an average speedup ratio of 10.64 as compared to the classical LP-based method.
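
The first-stage aggregated problem, classical economic dispatch, can be solved by equal-incremental-cost (lambda) iteration: all unconstrained units run at the same marginal cost. A sketch assuming quadratic unit costs a + b·P + c·P² with box limits (the tuple format and bisection on lambda are illustrative choices, not the paper's implementation):

```python
def economic_dispatch(units, demand, tol=1e-9):
    """Classical economic dispatch via bisection on the incremental
    cost lambda. Each unit is (b, c, pmin, pmax) for cost a + b*P + c*P^2;
    unconstrained optimum per unit is P = (lambda - b) / (2c), clipped."""
    def output(lam):
        total = 0.0
        for b, c, pmin, pmax in units:
            p = (lam - b) / (2 * c)           # equal incremental cost
            total += min(max(p, pmin), pmax)  # clip to generation limits
        return total

    lo = min(b for b, c, pmin, pmax in units)            # output(lo) ~ minimum
    hi = max(b + 2 * c * pmax for b, c, pmin, pmax in units)  # all at max
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if output(mid) < demand:
            lo = mid
        else:
            hi = mid
    lam = (lo + hi) / 2
    dispatch = [min(max((lam - b) / (2 * c), pmin), pmax)
                for b, c, pmin, pmax in units]
    return lam, dispatch
```

The second-stage LP (power balance, transmission and security constraints) is omitted; this first stage only supplies the initial solution, as in the abstract.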

  19. A Simple FDTD Algorithm for Simulating EM-Wave Propagation in General Dispersive Anisotropic Material

    KAUST Repository

    Al-Jabr, Ahmad Ali; Alsunaidi, Mohammad A.; Ng, Tien Khee; Ooi, Boon S.

    2013-01-01

    In this paper, a finite-difference time-domain (FDTD) algorithm for simulating propagation of EM waves in anisotropic material is presented. The algorithm is based on the auxiliary differential equation and the general polarization formulation. In anisotropic materials, electric fields are coupled and elements in the permittivity tensor are, in general, multiterm dispersive. The presented algorithm resolves the field coupling using a formulation based on electric polarizations. It also offers a simple procedure for the treatment of multiterm dispersion in the FDTD scheme. The algorithm is tested by simulating wave propagation in 1-D magnetized plasma showing excellent agreement with analytical solutions. Extension of the algorithm to multidimensional structures is straightforward. The presented algorithm is efficient and simple compared to other algorithms found in the literature. © 2012 IEEE.
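
Stripped of the dispersive anisotropic machinery, the underlying scheme the paper builds on is the Yee leapfrog update. A bare 1-D vacuum sketch in normalized units follows; the grid size, source position, and Gaussian soft source are arbitrary choices for illustration, and the paper's auxiliary-differential-equation polarization updates would be layered on top of updates like these.

```python
import math

def fdtd_1d(steps=200, n=400):
    """Bare 1-D Yee/FDTD leapfrog in vacuum, normalized units
    (Courant number 1, which is stable in 1-D). Returns the final E field."""
    ez = [0.0] * n
    hy = [0.0] * n
    for t in range(steps):
        for i in range(n - 1):                 # H update from curl of E
            hy[i] += ez[i + 1] - ez[i]
        # Soft (additive) Gaussian source at one quarter of the grid.
        ez[n // 4] += math.exp(-((t - 30) ** 2) / 100.0)
        for i in range(1, n):                  # E update from curl of H
            ez[i] += hy[i] - hy[i - 1]
    return ez
```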

  1. Linear Programming, the Simplex Algorithm and Simple Polytopes

    Directory of Open Access Journals (Sweden)

    Das Bhusan

    2010-09-01

    In the first part of the paper we survey some far-reaching applications of the basic facts of linear programming to the combinatorial theory of simple polytopes. In the second part we discuss some recent developments concerning the simplex algorithm. We describe sub-exponential randomized pivot rules and upper bounds on the diameter of graphs of polytopes.

  2. A Simple But Effective Canonical Dual Theory Unified Algorithm for Global Optimization

    OpenAIRE

    Zhang, Jiapu

    2011-01-01

    Numerical global optimization methods are often very time consuming and cannot be applied to high-dimensional nonconvex/nonsmooth optimization problems. Due to the nonconvexity/nonsmoothness, directly solving the primal problems is sometimes very difficult. This paper presents a very simple but very effective canonical duality theory (CDT) unified global optimization algorithm. The convergence of this algorithm is proved in this paper. More importantly, for this CDT-unified algorithm, numerous...

  3. A simple algorithm for the identification of clinical COPD phenotypes

    DEFF Research Database (Denmark)

    Burgel, Pierre-Régis; Paillasseur, Jean-Louis; Janssens, Wim

    2017-01-01

    This study aimed to identify simple rules for allocating chronic obstructive pulmonary disease (COPD) patients to clinical phenotypes identified by cluster analyses. Data from 2409 COPD patients of French/Belgian COPD cohorts were analysed using cluster analysis, resulting in the identification of subgroups, for which clinical relevance was determined by comparing 3-year all-cause mortality. Classification and regression trees (CARTs) were used to develop an algorithm for allocating patients to these subgroups. This algorithm was tested in 3651 patients from the COPD Cohorts Collaborative International Assessment (3CIA) initiative. Cluster analysis identified five subgroups of COPD patients with different clinical characteristics (especially regarding severity of respiratory disease and the presence of cardiovascular comorbidities and diabetes). The CART-based algorithm indicated...

  4. An ant colony optimization algorithm for phylogenetic estimation under the minimum evolution principle

    Directory of Open Access Journals (Sweden)

    Milinkovitch Michel C

    2007-11-01

    Background: Distance matrix methods constitute a major family of phylogenetic estimation methods, and the minimum evolution (ME) principle (aiming at recovering the phylogeny with shortest length) is one of the most commonly used optimality criteria for estimating phylogenetic trees. The major difficulty for its application is that the number of possible phylogenies grows exponentially with the number of taxa analyzed, and the minimum evolution principle is known to belong to the NP-hard class of problems. Results: In this paper, we introduce an Ant Colony Optimization (ACO) algorithm to estimate phylogenies under the minimum evolution principle. ACO is an optimization technique inspired by the foraging behavior of real ant colonies. This behavior is exploited in artificial ant colonies for the search of approximate solutions to discrete optimization problems. Conclusion: We show that the ACO algorithm is potentially competitive in comparison with state-of-the-art algorithms for the minimum evolution principle. This is the first application of an ACO algorithm to the phylogenetic estimation problem.

  5. A simple greedy algorithm for dynamic graph orientation

    DEFF Research Database (Denmark)

    Berglin, Edvin; Brodal, Gerth Stølting

    2017-01-01

    Graph orientations with low out-degree are one of several ways to efficiently store sparse graphs. If the graphs allow for insertion and deletion of edges, one may have to flip the orientation of some edges to prevent blowing up the maximum out-degree. We use arboricity as our sparsity measure. With an immensely simple greedy algorithm, we get parametrized trade-off bounds between out-degree and worst case number of flips, which previously only existed for amortized number of flips. We match the previous best worst-case algorithm (in O(log n) flips) for general arboricity and beat it for either constant or super-logarithmic arboricity. We also match a previous best amortized result for at least logarithmic arboricity, and give the first results with worst-case O(1) and O(sqrt(log n)) flips nearly matching degree bounds to their respective amortized solutions.
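
A toy version of the greedy idea: orient each inserted edge away from the endpoint with the smaller current out-degree. This is only illustrative of the flavour of such algorithms; the paper's algorithm additionally flips already-oriented edges to obtain its worst-case out-degree/flip trade-offs.

```python
class GraphOrientation:
    """Toy greedy edge orientation for a dynamic undirected graph:
    each new edge points away from the endpoint with smaller out-degree,
    keeping out-degrees balanced on sparse (low-arboricity) graphs."""

    def __init__(self):
        self.out = {}  # vertex -> set of out-neighbours

    def insert(self, u, v):
        self.out.setdefault(u, set())
        self.out.setdefault(v, set())
        if len(self.out[u]) <= len(self.out[v]):
            self.out[u].add(v)   # orient u -> v
        else:
            self.out[v].add(u)   # orient v -> u

    def max_out_degree(self):
        return max((len(s) for s in self.out.values()), default=0)
```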

  6. A Simple Sizing Algorithm for Stand-Alone PV/Wind/Battery Hybrid Microgrids

    Directory of Open Access Journals (Sweden)

    Jing Li

    2012-12-01

    In this paper, we develop a simple algorithm to determine the required number of generating units of the wind-turbine generator and photovoltaic array, and the associated storage capacity, for a stand-alone hybrid microgrid. The algorithm is based on the observation that the state of charge of the battery should be periodically invariant. The optimal sizing of the hybrid microgrid is given in the sense that the life-cycle cost of the system is minimized while the given load power demand can be satisfied without load rejection. We also report a case study to show the efficacy of the developed algorithm.
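
The periodic-invariance observation can be sketched directly: for the state of charge to return to its initial value, total generation must cover total load over one period, and the battery must span the largest swing of the cumulative net-energy curve. A simplified sketch of that observation, not the paper's full life-cycle-cost optimization:

```python
def required_battery_capacity(gen, load):
    """Given periodic hourly generation and load series of equal length,
    return the battery capacity needed so the state of charge is
    periodically invariant, or None if the period is infeasible."""
    assert len(gen) == len(load)
    if sum(gen) < sum(load):
        return None  # SOC cannot return to its initial value
    soc, lo, hi = 0.0, 0.0, 0.0
    for g, l in zip(gen, load):
        soc += g - l                      # cumulative net energy
        lo, hi = min(lo, soc), max(hi, soc)
    return hi - lo                        # peak-to-trough swing
```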

  7. Architectures of soft robotic locomotion enabled by simple mechanical principles.

    Science.gov (United States)

    Zhu, Liangliang; Cao, Yunteng; Liu, Yilun; Yang, Zhe; Chen, Xi

    2017-06-28

    In nature, a variety of limbless locomotion patterns flourish, from the small or basic life forms (Escherichia coli, amoebae, etc.) to the large or intelligent creatures (e.g., slugs, starfishes, earthworms, octopuses, jellyfishes, and snakes). Many bioinspired soft robots based on locomotion have been developed in the past few decades. In this work, based on the kinematics and dynamics of two representative locomotion modes (i.e., worm-like crawling and snake-like slithering), we propose a broad set of innovative designs for soft mobile robots through simple mechanical principles. Inspired by and going beyond the existing biological systems, these designs include 1-D (dimensional), 2-D, and 3-D robotic locomotion patterns enabled by the simple actuation of continuous beams. We report herein over 20 locomotion modes achieving various locomotion functions, including crawling, rising, running, creeping, squirming, slithering, swimming, jumping, turning, turning over, helix rolling, wheeling, etc. Some are able to reach high speed, high efficiency, and overcome obstacles. All these locomotion strategies and functions can be integrated into a simple beam model. The proposed simple and robust models are adaptive for severe and complex environments. These elegant designs for diverse robotic locomotion patterns are expected to underpin future deployments of soft robots and to inspire a series of advanced designs.

  8. Calculation of propellant gas pressure by simple extended corresponding state principle

    OpenAIRE

    Bin Xu; San-jiu Ying; Xin Liao

    2016-01-01

    The virial equation can describe the gas state well at high temperature and pressure, but the difficulties in virial coefficient calculation limit the use of the virial equation. The simple extended corresponding state principle (SE-CSP) is introduced into the virial equation. Based on a corresponding state equation including three characteristic parameters, an extended parameter is introduced to describe the second virial coefficient expressions of the main products of propellant gas. The modified SE-CSP second ...

  9. MM Algorithms for Geometric and Signomial Programming.

    Science.gov (United States)

    Lange, Kenneth; Zhou, Hua

    2014-02-01

    This paper derives new algorithms for signomial programming, a generalization of geometric programming. The algorithms are based on a generic principle for optimization called the MM algorithm. In this setting, one can apply the geometric-arithmetic mean inequality and a supporting hyperplane inequality to create a surrogate function with parameters separated. Thus, unconstrained signomial programming reduces to a sequence of one-dimensional minimization problems. Simple examples demonstrate that the MM algorithm derived can converge to a boundary point or to one point of a continuum of minimum points. Conditions under which the minimum point is unique or occurs in the interior of parameter space are proved for geometric programming. Convergence to an interior point occurs at a linear rate. Finally, the MM framework easily accommodates equality and inequality constraints of signomial type. For the most important special case, constrained quadratic programming, the MM algorithm involves very simple updates.
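
A minimal worked instance of the arithmetic-geometric mean majorization described above, on a toy posynomial chosen for illustration (not taken from the paper): for f(x, y) = xy + 4/x + 4/y on x, y > 0, the coupling term is majorized at the current point (cx, cy) by xy ≤ (cy/(2cx))·x² + (cx/(2cy))·y², with equality at (cx, cy). The surrogate is then separable and each one-dimensional minimization is closed-form, exactly as the abstract describes.

```python
def mm_minimize(x=1.0, y=2.0, iters=60):
    """MM iteration for f(x, y) = x*y + 4/x + 4/y on x, y > 0.
    Minimizing the separable AM-GM surrogate gives the closed-form updates
        x <- (4*cx/cy)**(1/3),   y <- (4*cy/cx)**(1/3),
    which converge to the global minimizer x = y = 4**(1/3)."""
    for _ in range(iters):
        x, y = (4 * x / y) ** (1 / 3), (4 * y / x) ** (1 / 3)
    return x, y
```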

  10. Design principles and algorithms for automated air traffic management

    Science.gov (United States)

    Erzberger, Heinz

    1995-01-01

    This paper presents design principles and algorithms for building a real-time scheduler. The primary objective of the scheduler is to assign arrival aircraft to a favorable landing runway and schedule them to land at times that minimize delays. A further objective of the scheduler is to allocate delays between high-altitude airspace far from the airport and low-altitude airspace near the airport. A method of delay allocation is described that minimizes the average operating cost in the presence of errors in controlling aircraft to a specified landing time.

  11. Efficiency Analysis of the Parallel Implementation of the SIMPLE Algorithm on Multiprocessor Computers

    Science.gov (United States)

    Lashkin, S. V.; Kozelkov, A. S.; Yalozo, A. V.; Gerasimov, V. Yu.; Zelensky, D. K.

    2017-12-01

    This paper describes the details of the parallel implementation of the SIMPLE algorithm for numerical solution of the Navier-Stokes system of equations on arbitrary unstructured grids. The iteration schemes for the serial and parallel versions of the SIMPLE algorithm are implemented. In the description of the parallel implementation, special attention is paid to computational data exchange among processors under the condition of the grid model decomposition using fictitious cells. We discuss the specific features for the storage of distributed matrices and implementation of vector-matrix operations in parallel mode. It is shown that the proposed way of matrix storage reduces the number of interprocessor exchanges. A series of numerical experiments illustrates the effect of the multigrid SLAE solver tuning on the general efficiency of the algorithm; the tuning involves the types of the cycles used (V, W, and F), the number of iterations of a smoothing operator, and the number of cells for coarsening. Two ways (direct and indirect) of efficiency evaluation for parallelization of the numerical algorithm are demonstrated. The paper presents the results of solving some internal and external flow problems with the evaluation of parallelization efficiency by two algorithms. It is shown that the proposed parallel implementation enables efficient computations for the problems on a thousand processors. Based on the results obtained, some general recommendations are made for the optimal tuning of the multigrid solver, as well as for selecting the optimal number of cells per processor.

  12. A simple algorithm for measuring particle size distributions on an uneven background from TEM images

    DEFF Research Database (Denmark)

    Gontard, Lionel Cervera; Ozkaya, Dogan; Dunin-Borkowski, Rafal E.

    2011-01-01

    Nanoparticles have a wide range of applications in science and technology. Their sizes are often measured using transmission electron microscopy (TEM) or X-ray diffraction. Here, we describe a simple computer algorithm for measuring particle size distributions from TEM images in the presence of an uneven background. An application to images of heterogeneous catalysts is presented.

  13. Statistical behaviour of adaptive multilevel splitting algorithms in simple models

    International Nuclear Information System (INIS)

    Rolland, Joran; Simonnet, Eric

    2015-01-01

Adaptive multilevel splitting algorithms have been introduced rather recently for estimating tail distributions in a fast and efficient way. In particular, they can be used for computing the so-called reactive trajectories corresponding to direct transitions from one metastable state to another. The algorithm is based on successive selection–mutation steps performed on the system in a controlled way. It has two intrinsic parameters, the number of particles/trajectories and the reaction coordinate used for discriminating good or bad trajectories. We investigate first the convergence in law of the algorithm as a function of the timestep for several simple stochastic models. Second, we consider the average duration of reactive trajectories for which no theoretical predictions exist. The most important aspect of this work concerns some systems with two degrees of freedom. They are studied in detail as a function of the reaction coordinate in the asymptotic regime where the number of trajectories goes to infinity. We show that during phase transitions, the statistics of the algorithm deviate significantly from known theoretical results when using non-optimal reaction coordinates. In this case, the variance of the algorithm peaks at the transition and the convergence of the algorithm can be much slower than the usual expected central limit behaviour. The duration of trajectories is affected as well. Moreover, reactive trajectories do not correspond to the most probable ones. Such behaviour disappears when using the optimal reaction coordinate called the committor, as predicted by the theory. We finally investigate a three-state Markov chain which reproduces this phenomenon and show logarithmic convergence of the trajectory durations.

  14. A Cubature-Principle-Assisted IMM-Adaptive UKF Algorithm for Maneuvering Target Tracking Caused by Sensor Faults

    Directory of Open Access Journals (Sweden)

    Huan Zhou

    2017-09-01

Full Text Available Aimed at solving the problem of decreased filtering precision in maneuvering target tracking caused by non-Gaussian distributions and sensor faults, we developed an efficient interacting multiple model-unscented Kalman filter (IMM-UKF) algorithm. The algorithm divides the IMM-UKF into two links. In the external link, it introduces the cubature principle to approximate the probability density of the random variable after the interaction, which constitutes the cubature-principle-assisted IMM method (CPIMM) for solving the non-Gaussian problem and leads to an adaptive matrix to balance the contribution of the state. In the internal link, it provides filtering solutions through a new adaptive UKF algorithm (NAUKF) to address sensor faults. The proposed CPIMM-NAUKF is evaluated in a numerical simulation and two practical experiments, including one navigation experiment and one maneuvering target tracking experiment. The simulation and experiment results show that the proposed CPIMM-NAUKF has greater filtering precision and faster convergence than the existing IMM-UKF. The proposed algorithm achieves a very good tracking performance, and will be effective and applicable in the field of maneuvering target tracking.

  15. Simple algorithm for improved security in the FDDI protocol

    Science.gov (United States)

    Lundy, G. M.; Jones, Benjamin

    1993-02-01

We propose a modification to the Fiber Distributed Data Interface (FDDI) protocol based on a simple algorithm which will improve confidential communication capability. This proposed modification provides a simple and reliable system which exploits some of the inherent security properties in a fiber optic ring network. This method differs from conventional methods in that end-to-end encryption can be facilitated at the media access control sublayer of the data link layer in the OSI network model. Our method is based on a variation of the bit stream cipher method. The transmitting station takes the intended confidential message and uses a simple modulo-two addition operation against an initialization vector. The encrypted message is virtually unbreakable without the initialization vector. None of the stations on the ring will have access to both the encrypted message and the initialization vector except the transmitting and receiving stations. The generation of the initialization vector is unique for each confidential transmission and thus provides a unique approach to the key distribution problem. The FDDI protocol is of particular interest to the military in terms of LAN/MAN implementations. Both the Army and the Navy are considering the standard as the basis for future network systems. A simple and reliable security mechanism with the potential to support real-time communications is a necessary consideration in the implementation of these systems. The proposed method offers several advantages over traditional methods in terms of speed, reliability, and standardization.
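The core modulo-two (XOR) operation against a per-message initialization vector can be illustrated as follows; the variable names are hypothetical, and the sketch ignores FDDI framing and IV distribution entirely.

```python
import os

def xor_bytes(data: bytes, key: bytes) -> bytes:
    # modulo-two addition (XOR) of the message against the keystream
    return bytes(d ^ k for d, k in zip(data, key))

message = b"confidential traffic"
iv = os.urandom(len(message))          # fresh initialization vector per message
ciphertext = xor_bytes(message, iv)    # what travels on the ring
recovered = xor_bytes(ciphertext, iv)  # receiving station applies the same IV
```

Because XOR is its own inverse, applying the same IV twice recovers the plaintext; without the IV the ciphertext is statistically random.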

  16. Simple Exact Algorithm for Transistor Sizing of Low-Power High-Speed Arithmetic Circuits

    Directory of Open Access Journals (Sweden)

    Tooraj Nikoubin

    2010-01-01

Full Text Available A new transistor sizing algorithm, SEA (Simple Exact Algorithm), for optimizing low-power and high-speed arithmetic integrated circuits is proposed. In comparison with other transistor sizing algorithms, simplicity, accuracy, independence of the order and initial sizing factors of transistors, and flexibility in choosing the optimization parameters such as power consumption, delay, Power-Delay Product (PDP), chip area or a combination of them are considered the advantages of this new algorithm. More exhaustive rules for grouping transistors are the main trait of our algorithm. Hence, the SEA algorithm excels in major transistor sizing metrics such as optimization rate, simulation speed, and reliability. In an approximate comparison of the SEA algorithm with MDE and ADC for a number of conventional full adder circuits, delay and PDP improved by 55.01% and 57.92% on average, respectively. By comparing the SEA and Chang's algorithms, 25.64% improvement in PDP and 33.16% improvement in delay have been achieved. All the simulations have been performed with 0.13 μm technology based on the BSIM3v3 model using the HSpice simulator software.

  17. Design Principles and Algorithms for Air Traffic Arrival Scheduling

    Science.gov (United States)

    Erzberger, Heinz; Itoh, Eri

    2014-01-01

This report presents design principles and algorithms for building a real-time scheduler of arrival aircraft based on a first-come-first-served (FCFS) scheduling protocol. The algorithms provide the conceptual and computational foundation for the Traffic Management Advisor (TMA) of the Center/terminal radar approach control facilities (TRACON) automation system, which comprises a set of decision support tools for managing arrival traffic at major airports in the United States. The primary objective of the scheduler is to assign arrival aircraft to a favorable landing runway and schedule them to land at times that minimize delays. A further objective of the scheduler is to allocate delays between high-altitude airspace far away from the airport and low-altitude airspace near the airport. A method of delay allocation is described that minimizes the average operating cost in the presence of errors in controlling aircraft to a specified landing time. This report is a revision of an earlier paper first presented as part of an Advisory Group for Aerospace Research and Development (AGARD) lecture series in September 1995. The authors, during vigorous discussions over the details of this paper, felt it was important to the air-traffic-management (ATM) community to revise and extend the original 1995 paper, providing more detail and clarity and thereby allowing future researchers to understand this foundational work as the basis for the TMA's scheduling algorithms.

  18. Mathematical Optimization Algorithm for Minimizing the Cost Function of GHG Emission in AS/RS Using Positive Selection Based Clonal Selection Principle

    Science.gov (United States)

    Mahalakshmi; Murugesan, R.

    2018-04-01

This paper addresses the minimization of the total cost of Greenhouse Gas (GHG) emission in an Automated Storage and Retrieval System (AS/RS). A mathematical model is constructed based on the tax cost, penalty cost and discount cost of GHG emission of the AS/RS. A two-stage algorithm, namely the positive selection based clonal selection principle (PSBCSP), is used to find the optimal solution of the constructed model. In the first stage, the positive selection principle is used to reduce the search space of the optimal solution by fixing a threshold value. In the second stage, the clonal selection principle is used to generate the best solutions. The obtained results are compared with other existing algorithms in the literature, showing that the proposed algorithm yields better results.
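A generic sketch of such a two-stage scheme, under the assumption that stage one filters random candidates by a cost threshold and stage two clones and mutates the best survivor; the operators, bounds, and constants here are illustrative, not the paper's.

```python
import random

def psbcsp_minimise(cost, dim, threshold, generations=50, pool=200,
                    n_clones=10, seed=1):
    rng = random.Random(seed)

    def rand_point():
        return [rng.uniform(-5.0, 5.0) for _ in range(dim)]

    # stage 1: positive selection -- keep only candidates whose cost is
    # below the threshold, pruning the search space
    survivors = [p for p in (rand_point() for _ in range(pool))
                 if cost(p) < threshold]
    best = min(survivors, key=cost) if survivors else rand_point()
    # stage 2: clonal selection -- clone the best candidate, hypermutate
    # the clones, and keep the fittest
    for _ in range(generations):
        clones = [[x + rng.gauss(0.0, 0.1) for x in best]
                  for _ in range(n_clones)]
        best = min(clones + [best], key=cost)
    return best
```

On a simple sphere cost function the two stages quickly drive the cost well below the stage-one threshold.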

  19. A phenomenological model for the structure-composition relationship of the high Tc cuprates based on simple chemical principles

    International Nuclear Information System (INIS)

    Alarco, J.A.; Talbot, P.C.

    2012-01-01

A simple phenomenological model for the relationship between structure and composition of the high Tc cuprates is presented. The model is based on two simple crystal chemistry principles: unit cell doping and charge balance within unit cells. These principles are inspired by key experimental observations of how the materials accommodate large deviations from stoichiometry. Significant HTSC properties can be explained consistently without any additional assumptions, while retaining valuable insight for geometric interpretation. Combining these two chemical principles with a review of Crystal Field Theory (CFT) or Ligand Field Theory (LFT), it becomes clear that the two oxidation states in the conduction planes (typically d^8 and d^9) belong to the most strongly divergent d-levels as a function of deformation from regular octahedral coordination. This observation offers a link to a range of coupling effects relating vibrations and spin waves through application of Hund's rules. An indication of this model's capacity to predict physical properties for HTSC is provided and will be elaborated in subsequent publications. Simple criteria for the relationship between structure and composition in HTSC systems may guide chemical syntheses within new material systems.

  20. Some algorithms for reordering a sequence of objects, with application to E. Sparre Andersen's principle of equivalence in mathematical statistics

    NARCIS (Netherlands)

    Bruijn, de N.G.

    1972-01-01

    Recently A. W. Joseph described an algorithm providing combinatorial insight into E. Sparre Andersen's so-called Principle of Equivalence in mathematical statistics. In the present paper such algorithms are discussed systematically.

  1. Energy Aware Simple Ant Routing Algorithm for Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Sohail Jabbar

    2015-01-01

Full Text Available Network lifetime is one of the most prominent barriers in deploying wireless sensor networks for large-scale applications because these networks employ sensors with nonrenewable, scarce energy resources. Sensor nodes dissipate most of their energy in complex routing mechanisms. To cope with the limited-energy problem, we present EASARA, an energy aware simple ant routing algorithm based on ant colony optimization. Unlike most algorithms, EASARA strives to avoid low-energy routes and optimizes the routing process through selection of the least-hop-count path with more energy. It consists of three phases, that is, route discovery, forwarding-node selection, and route selection. We have improved the route discovery procedure and concentrate mainly on energy-efficient forwarding-node and route selection, so that the network lifetime can be prolonged. The four possible cases of forwarding-node and route selection are presented. The performance of EASARA is validated through simulation. Simulation results demonstrate the performance supremacy of EASARA over a contemporary scheme in terms of various metrics.
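The stated selection rule (least hop count, preferring routes with more energy) might be sketched as follows; the route representation and the bottleneck-energy tie-break are assumptions, not EASARA's exact rule.

```python
def select_route(routes):
    # least hop count first; ties broken by the larger bottleneck
    # (minimum residual) energy along the route
    return min(routes, key=lambda r: (len(r["hops"]), -min(r["energy"])))
```

For example, between two equal-length routes the one whose weakest node has more residual energy wins, and a shorter route beats a longer one.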

  2. Development and validation of a simple algorithm for initiation of CPAP in neonates with respiratory distress in Malawi.

    Science.gov (United States)

    Hundalani, Shilpa G; Richards-Kortum, Rebecca; Oden, Maria; Kawaza, Kondwani; Gest, Alfred; Molyneux, Elizabeth

    2015-07-01

Low-cost bubble continuous positive airway pressure (bCPAP) systems have been shown to improve survival in neonates with respiratory distress in developing countries, including Malawi. District hospitals in Malawi implementing CPAP requested simple and reliable guidelines to enable healthcare workers with basic skills and minimal training to determine when treatment with CPAP is necessary. We developed and validated TRY (T: Tone is good, R: Respiratory distress, Y: Yes) CPAP, a simple algorithm to identify neonates with respiratory distress who would benefit from CPAP, and aimed to validate it in a low-resource setting. We constructed an algorithm using a combination of vital signs, tone and birth weight to determine the need for CPAP in neonates with respiratory distress. Neonates admitted to the neonatal ward of Queen Elizabeth Central Hospital, in Blantyre, Malawi, were assessed in a prospective, cross-sectional study. Nurses and paediatricians-in-training assessed neonates to determine whether they required CPAP using the TRY CPAP algorithm. To establish the accuracy of the TRY CPAP algorithm in evaluating the need for CPAP, their assessment was compared with the decision of a neonatologist blinded to the TRY CPAP algorithm findings. 325 neonates were evaluated over a 2-month period; 13% were deemed to require CPAP by the neonatologist. The inter-rater reliability with the algorithm was 0.90 for nurses and 0.97 for paediatricians-in-training, using the neonatologist's assessment as the reference standard. The TRY CPAP algorithm has the potential to be a simple and reliable tool to assist nurses and clinicians in identifying neonates who require treatment with CPAP in low-resource settings.
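The structure of such a screen (tone, respiratory distress, birth weight) can be illustrated as below; the weight cut-off is a hypothetical placeholder, and the published TRY CPAP criteria must be consulted for the actual thresholds.

```python
def try_cpap_indicated(tone_good, respiratory_distress, birth_weight_g,
                       min_weight_g=1000):
    # T: tone is good, R: respiratory distress present -> Y: yes, give CPAP.
    # The weight cut-off here is a hypothetical placeholder, not the
    # published criterion.
    return bool(tone_good and respiratory_distress
                and birth_weight_g >= min_weight_g)
```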

  3. Using Coevolution Genetic Algorithm with Pareto Principles to Solve Project Scheduling Problem under Duration and Cost Constraints

    Directory of Open Access Journals (Sweden)

    Alexandr Victorovich Budylskiy

    2014-06-01

Full Text Available This article considers a multicriteria optimization approach using a modified genetic algorithm to solve the project-scheduling problem under duration and cost constraints. The work surveys the available options for solving this problem and justifies the multicriteria optimization approach. The study describes the Pareto principles, which are used in the modified genetic algorithm. We define the mathematical model of the project-scheduling problem and introduce the modified genetic algorithm, the ranking strategies, and the elitism approaches. The article concludes with an example.

  4. A Simple Density with Distance Based Initial Seed Selection Technique for K Means Algorithm

    Directory of Open Access Journals (Sweden)

    Sajidha Syed Azimuddin

    2017-01-01

Full Text Available Open issues with respect to the K-means algorithm include identifying the number of clusters, initial seed selection, clustering tendency, handling empty clusters, identifying outliers, etc. In this paper we propose a novel and simple technique considering both the density and the distance of the concepts in a dataset to identify initial seed concepts for clustering. Many authors have proposed different techniques to identify initial seed concepts, but our method ensures that the initial seed concepts are chosen from the different clusters that are to be generated by the clustering solution. The hallmark of our algorithm is that it is a single-pass algorithm that does not require any extra parameters to be estimated. Further, our seed concepts are among the actual concepts, not the means of representative concepts, as is the case in many other algorithms. We have implemented our proposed algorithm and compared the results with the interval-based technique of Fouad Khan. We see that our method outperforms the interval-based method. We have also compared our method with the original random K-means and K-means++ algorithms.
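A minimal sketch of density-with-distance seeding: the densest point becomes the first seed, and each further seed maximizes density times distance to the nearest chosen seed, so seeds are actual data points drawn from different clusters. The Gaussian density estimate and bandwidth are assumptions, not the authors' exact formulation.

```python
import numpy as np

def density_distance_seeds(X, k, bandwidth=1.0):
    # pairwise squared distances and a Gaussian kernel density estimate
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    density = np.exp(-d2 / (2 * bandwidth ** 2)).sum(1)
    seeds = [int(np.argmax(density))]          # densest point first
    for _ in range(k - 1):
        dist_to_seed = d2[:, seeds].min(1)     # distance to nearest seed
        seeds.append(int(np.argmax(density * dist_to_seed)))
    return X[seeds]
```

On two well-separated blobs, the two seeds land in different blobs, which is the property the paper's method is designed to guarantee.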

  5. A simple algorithm for estimation of source-to-detector distance in Compton imaging

    International Nuclear Information System (INIS)

    Rawool-Sullivan, Mohini W.; Sullivan, John P.; Tornga, Shawn R.; Brumby, Steven P.

    2008-01-01

Compton imaging is used to predict the location of gamma-emitting radiation sources. The X and Y coordinates of the source can be obtained using a back-projected image and a two-dimensional peak-finding algorithm. The emphasis of this work is to estimate the source-to-detector distance (Z). The algorithm presented uses the solid angle subtended by the reconstructed image at various source-to-detector distances. This algorithm was validated using both measured data from the prototype Compton imager (PCI) constructed at the Los Alamos National Laboratory and simulated data of the same imager. Results show this method can be applied successfully to estimate Z, and it provides a way of determining Z without prior knowledge of the source location. This method is faster than methods that employ maximum-likelihood estimation because it is based on simple back projections of Compton scatter data.

  6. Ant-based extraction of rules in simple decision systems over ontological graphs

    Directory of Open Access Journals (Sweden)

    Pancerz Krzysztof

    2015-06-01

Full Text Available In the paper, the problem of extraction of complex decision rules in simple decision systems over ontological graphs is considered. The extracted rules are consistent with the dominance principle similar to that applied in the dominance-based rough set approach (DRSA). In our study, we propose to use a heuristic algorithm, utilizing the ant-based clustering approach, searching the semantic spaces of concepts presented by means of ontological graphs. Concepts included in the semantic spaces are values of attributes describing objects in simple decision systems.

  7. Pretest probability of a normal echocardiography: validation of a simple and practical algorithm for routine use.

    Science.gov (United States)

    Hammoudi, Nadjib; Duprey, Matthieu; Régnier, Philippe; Achkar, Marc; Boubrit, Lila; Preud'homme, Gisèle; Healy-Brucker, Aude; Vignalou, Jean-Baptiste; Pousset, Françoise; Komajda, Michel; Isnard, Richard

    2014-02-01

Management of increased referrals for transthoracic echocardiography (TTE) examinations is a challenge. Patients with normal TTE examinations take less time to explore than those with heart abnormalities. A reliable method for assessing the pretest probability of a normal TTE may optimize management of requests. Our aim was to establish and validate, based on requests for examinations, a simple algorithm for defining the pretest probability of a normal TTE. In a retrospective phase, factors associated with normality were investigated and an algorithm was designed. In a prospective phase, patients were classified in accordance with the algorithm as being at high or low probability of having a normal TTE. In the retrospective phase, 42% of 618 examinations were normal. In multivariable analysis, age and absence of cardiac history were associated with normality. Low pretest probability of a normal TTE was defined by known cardiac history or, in case of doubt about cardiac history, by age > 70 years. In the prospective phase, the prevalences of normality were 72% and 25% in the high (n=167) and low (n=241) pretest probability of normality groups, respectively. The mean duration of normal examinations was significantly shorter than that of abnormal examinations (13.8 ± 9.2 min vs 17.6 ± 11.1 min; P=0.0003). A simple algorithm can classify patients referred for TTE as being at high or low pretest probability of having a normal examination. This algorithm might help to optimize management of requests in routine practice.
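The decision rule stated in the abstract translates almost directly into code; the labels and argument names below are illustrative.

```python
def pretest_normal_tte(cardiac_history, age, history_uncertain=False):
    # Known cardiac history -> low probability of a normal examination;
    # if the history is in doubt, age > 70 years -> low; otherwise high.
    if cardiac_history:
        return "low"
    if history_uncertain and age > 70:
        return "low"
    return "high"
```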

  8. A simple and efficient algorithm to estimate daily global solar radiation from geostationary satellite data

    International Nuclear Information System (INIS)

    Lu, Ning; Qin, Jun; Yang, Kun; Sun, Jiulin

    2011-01-01

    Surface global solar radiation (GSR) is the primary renewable energy in nature. Geostationary satellite data are used to map GSR in many inversion algorithms in which ground GSR measurements merely serve to validate the satellite retrievals. In this study, a simple algorithm with artificial neural network (ANN) modeling is proposed to explore the non-linear physical relationship between ground daily GSR measurements and Multi-functional Transport Satellite (MTSAT) all-channel observations in an effort to fully exploit information contained in both data sets. Singular value decomposition is implemented to extract the principal signals from satellite data and a novel method is applied to enhance ANN performance at high altitude. A three-layer feed-forward ANN model is trained with one year of daily GSR measurements at ten ground sites. This trained ANN is then used to map continuous daily GSR for two years, and its performance is validated at all 83 ground sites in China. The evaluation result demonstrates that this algorithm can quickly and efficiently build the ANN model that estimates daily GSR from geostationary satellite data with good accuracy in both space and time. -- Highlights: → A simple and efficient algorithm to estimate GSR from geostationary satellite data. → ANN model fully exploits both the information from satellite and ground measurements. → Good performance of the ANN model is comparable to that of the classical models. → Surface elevation and infrared information enhance GSR inversion.

  9. First-principles molecular dynamics for metals

    International Nuclear Information System (INIS)

    Fernando, G.W.; Qian, G.; Weinert, M.; Davenport, J.W.

    1989-01-01

A Car-Parrinello-type first-principles molecular-dynamics approach capable of treating the partial occupancy of electronic states that occurs at the Fermi level in a metal is presented. The algorithms used to study metals are both simple and computationally efficient. We also discuss the connection between ordinary electronic-structure calculations and molecular-dynamics simulations, as well as the role of Brillouin-zone sampling. This extension should be useful not only for metallic solids but also for solids that become metals in their liquid and/or amorphous phases.
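The partial occupancies mentioned above are fractional fillings of states near the Fermi level; a minimal Fermi-Dirac smearing sketch (an illustration of the concept, not the paper's scheme) looks like:

```python
import math

def fermi_occupations(energies, mu, kT):
    # Fermi-Dirac smearing: fractional occupation of each eigenstate,
    # 1 well below the chemical potential mu, 0 well above, 1/2 at mu
    return [1.0 / (1.0 + math.exp((e - mu) / kT)) for e in energies]
```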

  10. On the Runtime of Randomized Local Search and Simple Evolutionary Algorithms for Dynamic Makespan Scheduling

    DEFF Research Database (Denmark)

    Neumann, Frank; Witt, Carsten

    2015-01-01

    combinatorial optimization problem, namely makespan scheduling. We study the model of a strong adversary which is allowed to change one job at regular intervals. Furthermore, we investigate the setting of random changes. Our results show that randomized local search and a simple evolutionary algorithm are very...
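Randomized local search on makespan scheduling, as studied in this line of work, can be sketched for the static two-machine case: flip one random job's machine assignment and accept non-worsening moves. The instance encoding and step budget are illustrative.

```python
import random

def rls_makespan(jobs, steps=2000, seed=0):
    rng = random.Random(seed)
    assign = [rng.randrange(2) for _ in jobs]  # machine 0 or 1 per job

    def makespan(a):
        loads = [0.0, 0.0]
        for job, m in zip(jobs, a):
            loads[m] += job
        return max(loads)

    best = makespan(assign)
    for _ in range(steps):
        i = rng.randrange(len(jobs))
        assign[i] ^= 1                 # move one job to the other machine
        cand = makespan(assign)
        if cand <= best:               # accept non-worsening moves
            best = cand
        else:
            assign[i] ^= 1             # undo a worsening flip
    return best
```

On the instance {4, 3, 2, 1} the optimal makespan is 5 ({4, 1} vs {3, 2}); the search reliably reaches a near-optimal schedule.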

  11. Implementation and testing of a simple data assimilation algorithm in the regional air pollution forecast model, DEOM

    DEFF Research Database (Denmark)

    Frydendall, Jan; Brandt, J.; Christensen, J. H.

    2009-01-01

A simple data assimilation algorithm based on statistical interpolation has been developed and coupled to a long-range chemistry transport model, the Danish Eulerian Operational Model (DEOM), applied for air pollution forecasting at the National Environmental Research Institute (NERI), Denmark. In this paper, the algorithm and the results from experiments designed to find the optimal setup of the algorithm are described. The algorithm has been developed and optimized via eight different experiments where the results from different model setups have been tested against measurements from the EMEP (European Monitoring and Evaluation Programme) network covering a half-year period, April-September 1999. The best performing setup of the data assimilation algorithm for surface ozone concentrations has been found, including the combination of determining the covariances using the Hollingsworth method...

  12. Introductory review on 'Flying Triangulation': a motion-robust optical 3D measurement principle

    Science.gov (United States)

    Ettl, Svenja

    2015-04-01

'Flying Triangulation' (FlyTri) is a recently developed principle which allows for motion-robust optical 3D measurement of rough surfaces. It combines a simple sensor with sophisticated algorithms: a single-shot sensor acquires 2D camera images. From each camera image, a 3D profile is generated. The series of 3D profiles generated is aligned by algorithms, without relying on any external tracking device. The method delivers real-time feedback of the measurement process, which enables an all-around measurement of objects. The principle has great potential for small-space acquisition environments, such as the measurement of the interior of a car, and motion-sensitive measurement tasks, such as the intraoral measurement of teeth. This article gives an overview of the basic ideas and applications of FlyTri. The main challenges and their solutions are discussed. Measurement examples are also given to demonstrate the potential of the measurement principle.

  13. Improved FHT Algorithms for Fast Computation of the Discrete Hartley Transform

    Directory of Open Access Journals (Sweden)

    M. T. Hamood

    2013-05-01

Full Text Available In this paper, by using the symmetrical properties of the discrete Hartley transform (DHT), an improved radix-2 fast Hartley transform (FHT) algorithm with arithmetic complexity comparable to that of the real-valued fast Fourier transform (RFFT) is developed. It has a simple and regular butterfly structure and possesses the in-place computation property. Furthermore, using the same principles, the development can be extended to more efficient radix-based FHT algorithms. An example for the improved radix-4 FHT algorithm is given to show the validity of the presented method. The arithmetic complexity of the new algorithms is computed and then compared with that of the existing FHT algorithms. The results of these comparisons show that the developed algorithms reduce the number of multiplications and additions considerably.
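For reference, the transform these fast algorithms compute is H[k] = Σ_n x[n]·cas(2πnk/N) with cas(t) = cos(t) + sin(t). A direct O(N²) evaluation, useful as a correctness check for a fast implementation, is:

```python
import math

def dht(x):
    # Direct discrete Hartley transform: H[k] = sum_n x[n] * cas(2*pi*n*k/N),
    # where cas(t) = cos(t) + sin(t). O(N^2); the FHT factors this like an FFT.
    N = len(x)
    return [sum(x[n] * (math.cos(2 * math.pi * n * k / N)
                        + math.sin(2 * math.pi * n * k / N))
                for n in range(N))
            for k in range(N)]
```

A handy property for testing: the DHT is its own inverse up to a factor of N, i.e. applying it twice returns N times the input.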

  14. A linear time algorithm for finding the convex ropes between two vertices of a simple polygon without triangulation

    International Nuclear Information System (INIS)

    Phan Thanh An

    2008-06-01

The convex rope problem, posed by Peshkin and Sanderson in IEEE J. Robotics Automat, 2 (1986) pp. 53-58, is to find the counterclockwise and clockwise convex ropes starting at the vertex a and ending at the vertex b of a simple polygon, where a is on the boundary of the convex hull of the polygon and b is visible from infinity. In this paper, we present a linear time algorithm for solving this problem without resorting to either a linear-time triangulation algorithm or a convex hull algorithm for the polygon. The counterclockwise (respectively, clockwise) convex rope consists of two polylines obtained by a basic incremental strategy, described in convex hull algorithms, applied to the polylines forming the polygon from a to b. (author)
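The basic incremental strategy referred to above is the familiar pop-while-non-convex step from convex hull algorithms. A sketch for one chain of an ordered polyline (an illustration of that step, not the paper's linear-time convex rope algorithm) is:

```python
def convex_chain(points):
    # Incremental convex chain over an ordered polyline: pop the middle
    # point whenever the last three points fail to make a left turn.
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    chain = []
    for p in points:
        while len(chain) >= 2 and cross(chain[-2], chain[-1], p) <= 0:
            chain.pop()
        chain.append(p)
    return chain
```

Reversing the comparison (popping on left turns instead) yields the chain of the opposite orientation, which is how the two ropes differ.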

  15. Inversion of self-potential anomalies caused by simple-geometry bodies using global optimization algorithms

    International Nuclear Information System (INIS)

    Göktürkler, G; Balkaya, Ç

    2012-01-01

Three naturally inspired meta-heuristic algorithms—the genetic algorithm (GA), simulated annealing (SA) and particle swarm optimization (PSO)—were used to invert some of the self-potential (SP) anomalies generated by polarized bodies with simple geometries. Both synthetic and field data sets were considered. The tests with the synthetic data comprised solutions with both noise-free and noisy data; in the tests with the field data, some SP anomalies observed over a copper belt (India), graphite deposits (Germany) and a metallic sulfide (Turkey) were inverted. The model parameters included the electric dipole moment, polarization angle, depth, shape factor and origin of the anomaly. The estimated parameters were compared with those from previous studies using various optimization algorithms, mainly least-squares approaches, on the same data sets. During the test studies, the solutions by GA, PSO and SA were consistent with each other; a good starting model was not a requirement to reach the global minimum. It can be concluded that the global optimization algorithms considered in this study were able to yield solutions compatible with those from widely used local optimization algorithms. (paper)

  16. A matrix S for all simple current extensions

    International Nuclear Information System (INIS)

    Fuchs, J.; Schellekens, A.N.; Schweigert, C.

    1996-01-01

A formula is presented for the modular transformation matrix S for any simple current extension of the chiral algebra of a conformal field theory. This provides in particular an algorithm for resolving arbitrary simple current fixed points, in such a way that the matrix S we obtain is unitary and symmetric and furnishes a modular group representation. The formalism works in principle for any conformal field theory. A crucial ingredient is a set of matrices S^J_{ab}, where J is a simple current and a and b are fixed points of J. We expect that these input matrices realize the modular group for the torus one-point functions of the simple currents. In the case of WZW-models these matrices can be identified with the S-matrices of the orbit Lie algebras that were introduced recently. As a special case of our conjecture we obtain the modular matrix S for WZW-theories based on group manifolds that are not simply connected, as well as for most coset models. (orig.)

  17. A simple MC-based algorithm for evaluating reliability of stochastic-flow network with unreliable nodes

    International Nuclear Information System (INIS)

    Yeh, W.-C.

    2004-01-01

A minimal path (MP)/minimal cutset (MC) is a path/cut set such that if any edge is removed from this path/cut set, then the remaining set is no longer a path/cut set. An intuitive method is proposed to evaluate the reliability in terms of MCs in a stochastic-flow network subject to both edge and node failures, under the condition that all of the MCs are given in advance. This is an extension of the best known algorithms for solving the d-MC (a special MC but formatted in a system-state vector, where d is the lower bound point of the system capacity level) problem from stochastic-flow networks without unreliable nodes to those with unreliable nodes, by introducing some simple concepts. These concepts were first developed in the literature to implement the proposed algorithm to reduce the number of d-MC candidates. This method is more efficient than the best known existing algorithms regardless of whether the network has unreliable nodes. Two examples illustrate how the reliability is determined using the proposed algorithm in networks with and without unreliable nodes. The computational complexity of the proposed algorithm is analyzed and compared with that of the existing methods.

  18. Simple nuclear norm based algorithms for imputing missing data and forecasting in time series

    OpenAIRE

    Butcher, Holly Louise; Gillard, Jonathan William

    2017-01-01

    There has been much recent progress on the use of the nuclear norm for the so-called matrix completion problem (the problem of imputing missing values of a matrix). In this paper we investigate the use of the nuclear norm for modelling time series, with particular attention to imputing missing data and forecasting. We introduce a simple alternating projections type algorithm based on the nuclear norm for these tasks, and consider a number of practical examples.
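
    The alternating-projections idea can be made concrete: alternate between soft-thresholding the singular values (which shrinks the nuclear norm) and restoring the observed entries. This is a generic soft-impute-style sketch of the approach described, with an arbitrary threshold tau, not the authors' exact algorithm:

    ```python
    import numpy as np

    def soft_impute(X, mask, tau=1.0, n_iter=200):
        """Alternating-projections sketch: alternate between (i) singular-value
        soft-thresholding, which shrinks the nuclear norm, and (ii) restoring
        the observed entries. mask is True where entries are observed."""
        Z = np.where(mask, X, 0.0)
        for _ in range(n_iter):
            U, s, Vt = np.linalg.svd(Z, full_matrices=False)
            Z = U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt  # shrink toward low rank
            Z = np.where(mask, X, Z)                        # keep observed data
        return Z

    # Rank-1 demo: recover a hidden entry of an outer-product matrix.
    A = np.outer([1.0, 2.0, 3.0], [1.0, 1.0, 2.0])
    mask = np.ones_like(A, dtype=bool)
    mask[2, 2] = False                  # hide one entry (true value 6.0)
    Z = soft_impute(np.where(mask, A, 0.0), mask, tau=0.1)
    ```

    With a small threshold the imputed entry lands close to the rank-consistent value 6.0; in a time-series setting the matrix would hold lagged copies of the series, and forecasting amounts to imputing the last column.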

  19. A simple algorithm for large-scale mapping of evergreen forests in tropical America, Africa and Asia

    Science.gov (United States)

    Xiangming Xiao; Chandrashekhar M. Biradar; Christina Czarnecki; Tunrayo Alabi; Michael Keller

    2009-01-01

    The areal extent and spatial distribution of evergreen forests in the tropical zones are important for the study of climate, carbon cycle and biodiversity. However, frequent cloud cover in the tropical regions makes mapping evergreen forests a challenging task. In this study we developed a simple and novel mapping algorithm that is based on the temporal profile...

  20. Constructing first-principles phase diagrams of amorphous LixSi using machine-learning-assisted sampling with an evolutionary algorithm

    Science.gov (United States)

    Artrith, Nongnuch; Urban, Alexander; Ceder, Gerbrand

    2018-06-01

    The atomistic modeling of amorphous materials requires structure sizes and sampling statistics that are challenging to achieve with first-principles methods. Here, we propose a methodology to speed up the sampling of amorphous and disordered materials using a combination of a genetic algorithm and a specialized machine-learning potential based on artificial neural networks (ANNs). We show for the example of the amorphous LiSi alloy that around 1000 first-principles calculations are sufficient for the ANN-potential assisted sampling of low-energy atomic configurations in the entire amorphous LixSi phase space. The obtained phase diagram is validated by comparison with the results from an extensive sampling of LixSi configurations using molecular dynamics simulations and a general ANN potential trained to ˜45 000 first-principles calculations. This demonstrates the utility of the approach for the first-principles modeling of amorphous materials.
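
    The sampling loop pairs a genetic algorithm with a cheap surrogate energy. As a structural sketch only, here is a minimal genetic algorithm with tournament selection, one-point crossover, mutation and elitism, using a toy quadratic "energy" where the real workflow would call the ANN potential; all names and parameters are illustrative:

    ```python
    import random

    def genetic_search(fitness, n_genes, pop_size=20, n_gen=100, seed=0):
        """Minimal GA: tournament selection, one-point crossover, per-gene
        Gaussian mutation, and elitism (the best individual always survives,
        so the best fitness never worsens)."""
        rng = random.Random(seed)
        pop = [[rng.uniform(-1.0, 1.0) for _ in range(n_genes)]
               for _ in range(pop_size)]
        best = min(pop, key=fitness)
        for _ in range(n_gen):
            def tournament():
                a, b = rng.sample(pop, 2)
                return a if fitness(a) < fitness(b) else b
            nxt = [best[:]]                              # elitism
            while len(nxt) < pop_size:
                p1, p2 = tournament(), tournament()
                cut = rng.randrange(1, n_genes)
                child = p1[:cut] + p2[cut:]              # one-point crossover
                if rng.random() < 0.3:                   # mutate one gene
                    child[rng.randrange(n_genes)] += rng.gauss(0.0, 0.1)
                nxt.append(child)
            pop = nxt
            best = min(pop, key=fitness)
        return best

    energy = lambda x: sum(t * t for t in x)  # toy stand-in for the ANN energy
    best = genetic_search(energy, n_genes=4)
    ```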

  1. Convex hull ranking algorithm for multi-objective evolutionary algorithms

    NARCIS (Netherlands)

    Davoodi Monfrared, M.; Mohades, A.; Rezaei, J.

    2012-01-01

    Due to the many applications of multi-objective evolutionary algorithms to real-world optimization problems, several studies have been done in recent years to improve these algorithms. Since most multi-objective evolutionary algorithms are based on the non-dominated principle, and their complexity...

  2. Implementation and testing of a simple data assimilation algorithm in the regional air pollution forecast model, DEOM

    Directory of Open Access Journals (Sweden)

    J. Frydendall

    2009-08-01

    A simple data assimilation algorithm based on statistical interpolation has been developed and coupled to a long-range chemistry transport model, the Danish Eulerian Operational Model (DEOM), applied for air pollution forecasting at the National Environmental Research Institute (NERI), Denmark. In this paper, the algorithm and the results from experiments designed to find its optimal setup are described. The algorithm has been developed and optimized via eight experiments in which the results from different model setups were tested against measurements from the EMEP (European Monitoring and Evaluation Programme) network covering a half-year period, April–September 1999. The best-performing setup of the data assimilation algorithm for surface ozone concentrations was found to combine determining the covariances using the Hollingsworth method, varying the correlation length according to the number of adjacent observation stations, and applying the assimilation routine at three successive hours during the morning. Improvements in the correlation coefficient in the range of 0.1 to 0.21 were found between the results from the reference setup and the optimal configuration of the data assimilation algorithm. The data assimilation algorithm will in the future be used in the operational THOR integrated air pollution forecast system, which includes the DEOM.
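
    Statistical interpolation (optimal interpolation) has a standard update that experiments of this kind tune: the analysis is the background plus a gain times the innovation, with gain K = B H^T (H B H^T + R)^(-1). A minimal sketch with hypothetical numbers (two grid points, one ozone observation):

    ```python
    import numpy as np

    def optimal_interpolation(xb, B, H, y, R):
        """One statistical-interpolation (optimal interpolation) update:
        analysis = background + K (y - H xb),
        K = B H^T (H B H^T + R)^(-1)."""
        K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
        return xb + K @ (y - H @ xb)

    # Two grid points, one observation at the first point (hypothetical values).
    xb = np.array([40.0, 50.0])              # background ozone (ppb)
    B = np.array([[4.0, 2.0], [2.0, 4.0]])   # background error covariance
    H = np.array([[1.0, 0.0]])               # observation operator: point 0 only
    y = np.array([46.0])                     # observed value
    R = np.array([[1.0]])                    # observation error variance
    xa = optimal_interpolation(xb, B, H, y, R)
    ```

    The off-diagonal term of B spreads the correction to the unobserved point, which is exactly what the correlation-length tuning in the experiments controls.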

  3. A simple model based magnet sorting algorithm for planar hybrid undulators

    International Nuclear Information System (INIS)

    Rakowsky, G.

    2010-01-01

    Various magnet sorting strategies have been used to optimize undulator performance, ranging from intuitive pairing of high- and low-strength magnets, to full 3D FEM simulation with 3-axis Helmholtz coil magnet data. In the extreme, swapping magnets in a full field model to minimize trajectory wander and rms phase error can be time consuming. This paper presents a simpler approach, extending the field error signature concept to obtain trajectory displacement, kick angle and phase error signatures for each component of magnetization error from a Radia model of a short hybrid-PM undulator. We demonstrate that steering errors and phase errors are essentially decoupled and scalable from measured X, Y and Z components of magnetization. Then, for any given sequence of magnets, rms trajectory and phase errors are obtained from simple cumulative sums of the scaled displacements and phase errors. The cost function (a weighted sum of these errors) is then minimized by swapping magnets, using one's favorite optimization algorithm. This approach was applied recently at NSLS to a short in-vacuum undulator, which required no subsequent trajectory or phase shimming. Trajectory and phase signatures are also obtained for some mechanical errors, to guide 'virtual shimming' and specifying mechanical tolerances. Some simple inhomogeneities are modeled to assess their error contributions.
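
    The signature idea reduces the cost evaluation to cumulative sums, after which any swap-based optimizer will do. A toy sketch with hypothetical scaled kick/phase signatures and a greedy random-swap search (one simple choice of "favorite optimization algorithm"):

    ```python
    import random

    def cost(order, kick, phase, w_traj=1.0, w_phase=1.0):
        """Cost = weighted sum of the rms cumulative kick (a trajectory proxy)
        and the rms cumulative phase error: per-magnet error signatures simply
        accumulate along the sequence."""
        t = p = t2 = p2 = 0.0
        for i in order:
            t += kick[i]; p += phase[i]
            t2 += t * t;  p2 += p * p
        n = len(order)
        return w_traj * (t2 / n) ** 0.5 + w_phase * (p2 / n) ** 0.5

    def sort_magnets(kick, phase, n_trials=20000, seed=1):
        """Greedy pair-swap search: accept a swap only if the cost improves."""
        rng = random.Random(seed)
        order = list(range(len(kick)))
        best = cost(order, kick, phase)
        for _ in range(n_trials):
            i, j = rng.sample(range(len(order)), 2)
            order[i], order[j] = order[j], order[i]
            c = cost(order, kick, phase)
            if c < best:
                best = c
            else:
                order[i], order[j] = order[j], order[i]  # undo the swap
        return order, best

    kick = [0.3, -0.2, 0.1, -0.3, 0.2, -0.1]   # hypothetical scaled signatures
    phase = [0.1, 0.1, -0.2, 0.2, -0.1, -0.1]
    order, c = sort_magnets(kick, phase)
    ```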

  4. Hamiltonians and variational principles for Alfvén simple waves

    International Nuclear Information System (INIS)

    Webb, G M; Hu, Q; Roux, J A le; Dasgupta, B; Zank, G P

    2012-01-01

    The evolution equations for the magnetic field induction B with the wave phase for Alfvén simple waves are expressed as variational principles and in the Hamiltonian form. The evolution of B with the phase (which is a function of the space and time variables) depends on the generalized Frenet–Serret equations, in which the wave normal n (which is a function of the phase) is taken to be tangent to a curve X, in a 3D Cartesian geometry vector space. The physical variables (the gas density, fluid velocity, gas pressure and magnetic field induction) in the wave depend only on the phase. Three approaches are developed. One approach exploits the fact that the Frenet equations may be written as a 3D Hamiltonian system, which can be described using the Nambu bracket. It is shown that B as a function of the phase satisfies a modified version of the Frenet equations, and hence the magnetic field evolution equations can be expressed in the Hamiltonian form. A second approach develops an Euler–Poincaré variational formulation. A third approach uses the Frenet frame formulation, in which the hodograph of B moves on a sphere of constant radius and uses a stereographic projection transformation due to Darboux. The equations for the projected field components reduce to a complex Riccati equation. By using a Cole–Hopf transformation, the Riccati equation reduces to a linear second order differential equation for the new variable. A Hamiltonian formulation of the second order differential equation then allows the system to be written in the Hamiltonian form. Alignment dynamics equations for Alfvén simple waves give rise to a complex Riccati equation or, equivalently, to a quaternionic Riccati equation, which can be mapped onto the Riccati equation obtained by stereographic projection. (paper)

  5. A Flocking Based algorithm for Document Clustering Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Cui, Xiaohui [ORNL; Gao, Jinzhu [ORNL; Potok, Thomas E [ORNL

    2006-01-01

    Social animals or insects in nature often exhibit a form of emergent collective behavior known as flocking. In this paper, we present a novel Flocking based approach for document clustering analysis. Our Flocking clustering algorithm uses stochastic and heuristic principles discovered from observing bird flocks or fish schools. Unlike other partition clustering algorithms such as K-means, the Flocking based algorithm does not require initial partitional seeds. The algorithm generates a clustering of a given set of data through the embedding of the high-dimensional data items on a two-dimensional grid for easy clustering result retrieval and visualization. Inspired by the self-organized behavior of bird flocks, we represent each document object with a flock boid. The simple local rules followed by each flock boid result in the entire document flock generating complex global behaviors, which eventually result in a clustering of the documents. We evaluate the efficiency of our algorithm with both a synthetic dataset and a real document collection that includes 100 news articles collected from the Internet. Our results show that the Flocking clustering algorithm achieves better performance compared to the K-means and the Ant clustering algorithms for real document clustering.

  6. A Simple Checking Algorithm with Perturb and Observe Maximum Power Point Tracking for Partially Shaded Photovoltaic System

    Directory of Open Access Journals (Sweden)

    Rozana Alik

    2016-03-01

    This paper presents a simple checking algorithm for the maximum power point tracking (MPPT) technique for a photovoltaic (PV) system using the Perturb and Observe (P&O) algorithm. The main benefit of this checking algorithm is the simplicity and efficiency of the system: the duty cycle produced by the MPPT is smoother and changes faster according to the maximum power point (MPP). The checking algorithm determines the maximum power first, before the P&O algorithm takes place, to identify the voltage at the MPP (VMPP), which is needed to calculate the duty cycle for the boost converter. To test the effectiveness of the algorithm, a simulation model of the PV system was carried out in MATLAB/Simulink under different levels of irradiation, i.e. partially shaded conditions of the PV array. The results from the system using the proposed approach show a faster response and lower ripple. Besides, the results are close to the desired outputs and exhibit a system efficiency of approximately 98.25%. The system with the conventional P&O MPPT, on the other hand, appears unstable and has a higher percentage of error. In summary, the proposed method is useful under varying levels of irradiation, giving higher system efficiency.
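
    For reference, the underlying P&O step itself is only a few lines: keep perturbing the duty cycle in the same direction while power rises, and reverse when it falls. A toy sketch against a hypothetical single-peak power curve (the paper's checking algorithm adds a pre-step on top of this):

    ```python
    def perturb_and_observe(power, prev_power, duty, direction, step=0.01):
        """One P&O iteration on the converter duty cycle: keep perturbing in
        the same direction while power rises, reverse when it falls."""
        if power < prev_power:
            direction = -direction
        duty = min(max(duty + direction * step, 0.0), 1.0)
        return duty, direction

    def pv_power(duty):
        """Toy PV curve: concave in duty with a single MPP at duty = 0.5
        (hypothetical, stands in for the panel + boost converter)."""
        return max(0.0, 100.0 * duty * (1.0 - duty))

    duty, direction = 0.2, +1
    prev_p = pv_power(duty)
    for _ in range(200):
        p = pv_power(duty)
        duty, direction = perturb_and_observe(p, prev_p, duty, direction)
        prev_p = p
    ```

    The tracker climbs to the MPP and then oscillates within one step of it, which is the steady-state ripple the checking algorithm is designed to reduce.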

  7. Calculation of propellant gas pressure by simple extended corresponding state principle

    Directory of Open Access Journals (Sweden)

    Bin Xu

    2016-04-01

    The virial equation describes the gas state well at high temperature and pressure, but the difficulty of calculating virial coefficients limits its use. The simple extended corresponding state principle (SE-CSP) is introduced into the virial equation. Based on a corresponding state equation including three characteristic parameters, an extended parameter is introduced to describe the second virial coefficient expressions of the main products of propellant gas. The modified SE-CSP second virial coefficient expression was extrapolated beyond the experimental temperature range of the virial coefficients, and the second virial coefficients obtained are in good agreement with experimental data at low temperature and with theoretical values at high temperature. The maximum pressure in the closed bomb test was calculated with the modified SE-CSP virial coefficient expressions with a calculated error of less than 2%, smaller than the error of the result calculated with the reported values under the same conditions. The modified SE-CSP virial coefficient expression provides a convenient and efficient method for practical virial coefficient calculation without resorting to complicated molecular model design and integral calculation.
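
    Once B(T) is available from the SE-CSP expressions, the pressure follows from the virial equation truncated at the second coefficient, p = (R T / Vm)(1 + B/Vm). A one-function sketch with hypothetical values (real propellant-gas B values would come from the paper's expressions, not from here):

    ```python
    R = 8.314  # gas constant, J mol^-1 K^-1

    def second_virial_pressure(T, Vm, B):
        """Pressure from the virial equation truncated at the second virial
        coefficient: p = (R T / Vm) * (1 + B(T)/Vm). T in K, Vm (molar
        volume) in m^3/mol, B in m^3/mol; returns Pa."""
        return (R * T / Vm) * (1.0 + B / Vm)

    # Hypothetical hot, dense propellant gas: T = 2500 K, Vm = 1e-4 m^3/mol,
    # B = 3e-5 m^3/mol, so the non-ideal correction is a 30% increase.
    p = second_virial_pressure(T=2500.0, Vm=1.0e-4, B=3.0e-5)
    ```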

  8. Simple and Effective Algorithms: Computer-Adaptive Testing.

    Science.gov (United States)

    Linacre, John Michael

    Computer-adaptive testing (CAT) allows improved security, greater scoring accuracy, shorter testing periods, quicker availability of results, and reduced guessing and other undesirable test behavior. Simple approaches can be applied by the classroom teacher, or other content specialist, who possesses simple computer equipment and elementary…

  9. Majorization arrow in quantum-algorithm design

    International Nuclear Information System (INIS)

    Latorre, J.I.; Martin-Delgado, M.A.

    2002-01-01

    We apply majorization theory to study the quantum algorithms known so far and find that there is a majorization principle underlying the way they operate. Grover's algorithm is a neat instance of this principle, where majorization works step by step until the optimal target state is found. Extensions of this situation are also found in algorithms based on quantum adiabatic evolution and the family of quantum phase-estimation algorithms, including Shor's algorithm. We state that in quantum algorithms the time arrow is a majorization arrow.

  10. Multiway simple cycle separators and I/O-efficient algorithms for planar graphs

    DEFF Research Database (Denmark)

    Arge, L.; Walderveen, Freek van; Zeh, Norbert

    2013-01-01

    …in internal memory, thereby completely negating the performance gain achieved by minimizing the number of disk accesses. In this paper, we show how to make these algorithms simultaneously efficient in internal and external memory so they achieve I/O complexity O(sort(N)) and take O(N log N) time in internal memory, where sort(N) is the number of I/Os needed to sort N items in external memory. The key, and the main technical contribution of this paper, is a multiway version of Miller's simple cycle separator theorem. We show how to compute these separators in linear time in internal memory, and using O(sort(N)) I/Os and O(N log N) (internal-memory computation) time in external memory.

  11. Quantitative x-ray photoelectron spectroscopy: Simple algorithm to determine the amount of atoms in the outermost few nanometers

    International Nuclear Information System (INIS)

    Tougaard, Sven

    2003-01-01

    It is well known that due to inelastic electron scattering, the measured x-ray photoelectron spectroscopy peak intensity depends strongly on the in-depth atom distribution. Quantification based only on the peak intensity can therefore give large errors. The problem was basically solved by developing algorithms for the detailed analysis of the energy distribution of emitted electrons. These algorithms have been extensively tested experimentally and found to be able to determine the depth distribution of atoms with nanometer resolution. Practical application of these algorithms has increased after ready-to-use software packages were made available and they are now being used in laboratories worldwide. These software packages are easy to use but they need operator interaction. They are not well suited for automatic data processing and there is an additional need for simplified quantification strategies that can be automated. In this article we report on a very simple algorithm. It is a slightly more accurate version of our previous algorithm. The algorithm gives the amount of atoms within the outermost three inelastic mean free paths and it also gives a rough estimate for the in-depth distribution. An experimental example of its application is also presented

  12. A simple algorithm for subregional striatal uptake analysis with partial volume correction in dopaminergic PET imaging

    International Nuclear Information System (INIS)

    Lue Kunhan; Lin Hsinhon; Chuang Kehshih; Kao Chihhao, K.; Hsieh Hungjen; Liu Shuhsin

    2014-01-01

    In positron emission tomography (PET) of the dopaminergic system, quantitative measurements of nigrostriatal dopamine function are useful for differential diagnosis. A subregional analysis of striatal uptake enables the diagnostic performance to be more powerful. However, the partial volume effect (PVE) induces an underestimation of the true radioactivity concentration in small structures. This work proposes a simple algorithm for subregional analysis of striatal uptake with partial volume correction (PVC) in dopaminergic PET imaging. The PVC algorithm analyzes the separate striatal subregions and takes into account the PVE based on the recovery coefficient (RC). The RC is defined as the ratio of the PVE-uncorrected to PVE-corrected radioactivity concentration, and is derived from a combination of the traditional volume of interest (VOI) analysis and the large VOI technique. The clinical studies, comprising 11 patients with Parkinson's disease (PD) and 6 healthy subjects, were used to assess the impact of PVC on the quantitative measurements. Simulations on a numerical phantom that mimicked realistic healthy and neurodegenerative situations were used to evaluate the performance of the proposed PVC algorithm. In both the clinical and the simulation studies, the striatal-to-occipital ratio (SOR) values for the entire striatum and its subregions were calculated with and without PVC. In the clinical studies, the SOR values in each structure (caudate, anterior putamen, posterior putamen, putamen, and striatum) were significantly higher with PVC than without. Among the PD patients, the SOR values in each structure and quantitative disease severity ratings were shown to be significantly related only when PVC was used. For the simulation studies, the average absolute percentage errors of the SOR estimates before and after PVC were 22.74% and 1.54% in the healthy situation, respectively; those in the neurodegenerative situation were 20.69% and 2

  13. Parallelization of MCNP4 code by using simple FORTRAN algorithms

    International Nuclear Information System (INIS)

    Yazid, P.I.; Takano, Makoto; Masukawa, Fumihiro; Naito, Yoshitaka.

    1993-12-01

    Simple FORTRAN algorithms, relying only on open, close, read and write statements together with disk files and some UNIX commands, have been applied to the parallelization of MCNP4. The code, named MCNPNFS, maintains almost all capabilities of MCNP4 in solving shielding problems. It is able to perform parallel computing on any set of UNIX workstations connected by a network, regardless of hardware heterogeneity, provided that all processors produce binary files in the same format. Further, it is confirmed that MCNPNFS can also be executed on the Monte-4 vector-parallel computer. MCNPNFS has been tested intensively by executing 5 photon-neutron benchmark problems, a spent fuel cask problem and 17 sample problems included in the original code package of MCNP4. Three different workstations, connected by a network, have been used to execute MCNPNFS in parallel. By measuring CPU time, the parallel efficiency is determined to be 58% to 99%, 86% on average. On Monte-4, MCNPNFS has been executed using 4 processors concurrently and achieved a parallel efficiency of 79% on average. (author)

  14. Simple Algorithms for Distributed Leader Election in Anonymous Synchronous Rings and Complete Networks Inspired by Neural Development in Fruit Flies.

    Science.gov (United States)

    Xu, Lei; Jeavons, Peter

    2015-11-01

    Leader election in anonymous rings and complete networks is a very practical problem in distributed computing. Previous algorithms for this problem are generally designed for a classical message passing model where complex messages are exchanged. However, the need to send and receive complex messages makes such algorithms less practical for some real applications. We present some simple synchronous algorithms for distributed leader election in anonymous rings and complete networks that are inspired by the development of the neural system of the fruit fly. Our leader election algorithms all assume that only one-bit messages are broadcast by nodes in the network and processors are only able to distinguish between silence and the arrival of one or more messages. These restrictions allow implementations to use a simpler message-passing architecture. Even with these harsh restrictions our algorithms are shown to achieve good time and message complexity both analytically and experimentally.
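
    Under this model a natural protocol is: each active node fires a one-bit broadcast with some probability, and a node that fired while hearing silence from everyone else elects itself (announcing its win in the following round so the rest stop). A small simulation sketch of that idea, not the paper's exact algorithm:

    ```python
    import random

    def elect_leader(n, p=None, seed=0, max_rounds=10000):
        """Each round, every node fires (broadcasts one bit) with probability
        p. A node that fired while hearing no other node elects itself; in
        the real protocol it would announce this in the next round. Nodes
        only sense silence vs. >= 1 message, as in the paper's model."""
        rng = random.Random(seed)
        p = p if p is not None else 1.0 / n   # fire with probability ~1/n
        for _ in range(max_rounds):
            fired = [i for i in range(n) if rng.random() < p]
            if len(fired) == 1:   # the unique firer hears silence from others
                return fired[0]
        return None

    leader = elect_leader(8, seed=42)
    ```

    With p = 1/n, the probability that exactly one node fires in a round is about 1/e, so the expected number of rounds is a small constant.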

  15. Improved hybridization of Fuzzy Analytic Hierarchy Process (FAHP) algorithm with Fuzzy Multiple Attribute Decision Making - Simple Additive Weighting (FMADM-SAW)

    Science.gov (United States)

    Zaiwani, B. E.; Zarlis, M.; Efendi, S.

    2018-03-01

    In earlier research, a hybridization of the Fuzzy Analytic Hierarchy Process (FAHP) algorithm with the Fuzzy Technique for Order Preference by Similarity to Ideal Solution (FTOPSIS) was used to select the best bank chief inspector based on several qualitative and quantitative criteria with various priorities. To improve on that work, a hybridization of the FAHP algorithm with Fuzzy Multiple Attribute Decision Making - Simple Additive Weighting (FMADM-SAW) was adopted, applying FAHP to the weighting process and SAW to the ranking process, to determine the promotion of employees at a government institution. The improvement raises the average Efficiency Rate (ER) to 85.24%, surpassing the previous research result of 77.82%. Keywords: Ranking and Selection, Fuzzy AHP, Fuzzy TOPSIS, FMADM-SAW.
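
    The SAW half of the hybrid is easy to make concrete: normalize each criterion column (benefit criteria by x/max, cost criteria by min/x) and rank by the weighted sum. A sketch with hypothetical candidates and weights such as an FAHP weighting step might produce:

    ```python
    def saw_rank(matrix, weights, benefit):
        """Simple Additive Weighting: normalize each criterion column
        (benefit: x/max, cost: min/x), then score each alternative as the
        weighted sum of its normalized values."""
        cols = list(zip(*matrix))
        norm_cols = []
        for j, col in enumerate(cols):
            if benefit[j]:
                m = max(col)
                norm_cols.append([x / m for x in col])
            else:
                m = min(col)
                norm_cols.append([m / x for x in col])
        return [sum(w * norm_cols[j][i] for j, w in enumerate(weights))
                for i in range(len(matrix))]

    # Three hypothetical candidates, two benefit criteria and one cost criterion.
    matrix = [[80, 70, 3], [90, 60, 4], [70, 90, 2]]
    weights = [0.5, 0.3, 0.2]          # e.g. produced by an FAHP weighting step
    benefit = [True, True, False]
    scores = saw_rank(matrix, weights, benefit)
    best = scores.index(max(scores))   # highest score wins the promotion
    ```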

  16. Insights: Simple Models for Teaching Equilibrium and Le Chatelier's Principle.

    Science.gov (United States)

    Russell, Joan M.

    1988-01-01

    Presents three models that have been effective for teaching chemical equilibrium and Le Chatelier's principle: (1) the liquid transfer model, (2) the fish model, and (3) the teeter-totter model. Explains each model and its relation to Le Chatelier's principle. (MVL)

  17. Simple prostatectomy

    Science.gov (United States)

    ... Han M, Partin AW. Simple prostatectomy: open and robot-assisted laparoscopic approaches. In: Wein AJ, Kavoussi LR, ...

  18. Absorption cooling sources atmospheric emissions decrease by implementation of simple algorithm for limiting temperature of cooling water

    Science.gov (United States)

    Wojdyga, Krzysztof; Malicki, Marcin

    2017-11-01

    The constant drive to improve energy efficiency forces activities aimed at reducing energy consumption and hence the emission of contaminants to the atmosphere. Cooling demand, both for air-conditioning and process cooling, plays an increasingly important role in the balance of the Polish electricity generation and distribution system in summer. In recent years, demand for electricity during the summer months has been steadily and significantly increasing, leading to deficits in energy availability during particularly hot periods. This increases the importance of, and interest in, trigeneration power sources and heat recovery systems producing chilled water. The key component of such a system is a thermally driven chiller, mostly absorption, based on a lithium-bromide and water mixture. Absorption cooling systems also exist in Poland as stand-alone systems, supplied with heat from various sources, generated solely for them or recovered as waste or otherwise useless energy. The publication presents a simple algorithm designed to reduce the amount of heat supplied to absorption chillers producing chilled water for air conditioning, by reducing the temperature of the cooling water, and its impact on decreasing emissions of harmful substances into the atmosphere. The scale of the environmental advantages has been rated for specific sources, enabling evaluation and estimation of the effect of implementing the simple algorithm at sources existing nationally.

  19. Predicting the onset of hazardous alcohol drinking in primary care: development and validation of a simple risk algorithm.

    Science.gov (United States)

    Bellón, Juan Ángel; de Dios Luna, Juan; King, Michael; Nazareth, Irwin; Motrico, Emma; GildeGómez-Barragán, María Josefa; Torres-González, Francisco; Montón-Franco, Carmen; Sánchez-Celaya, Marta; Díaz-Barreiros, Miguel Ángel; Vicens, Catalina; Moreno-Peral, Patricia

    2017-04-01

    Little is known about the risk of progressing to hazardous alcohol use in abstinent or low-risk drinkers. To develop and validate a simple brief risk algorithm for the onset of hazardous alcohol drinking (HAD) over 12 months for use in primary care. Prospective cohort study in 32 health centres from six Spanish provinces, with evaluations at baseline, 6 months, and 12 months. Forty-one risk factors were measured and multilevel logistic regression and inverse probability weighting were used to build the risk algorithm. The outcome was new occurrence of HAD during the study, as measured by the AUDIT. From the lists of 174 GPs, 3954 adult abstinent or low-risk drinkers were recruited. The 'predictAL-10' risk algorithm included just nine variables (10 questions): province, sex, age, cigarette consumption, perception of financial strain, having ever received treatment for an alcohol problem, childhood sexual abuse, AUDIT-C, and interaction AUDIT-C*Age. The c-index was 0.886 (95% CI = 0.854 to 0.918). The optimal cutoff had a sensitivity of 0.83 and specificity of 0.80. Excluding childhood sexual abuse from the model (the 'predictAL-9'), the c-index was 0.880 (95% CI = 0.847 to 0.913), sensitivity 0.79, and specificity 0.81. There was no statistically significant difference between the c-indexes of predictAL-10 and predictAL-9. The predictAL-10/9 is a simple and internally valid risk algorithm to predict the onset of hazardous alcohol drinking over 12 months in primary care attendees; it is a brief tool that is potentially useful for primary prevention of hazardous alcohol drinking. © British Journal of General Practice 2017.
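
    Risk algorithms of this kind are typically logistic models: a weighted sum of predictors passed through a sigmoid. The sketch below shows the mechanics only, with made-up coefficients, not the published predictAL-10/9 weights:

    ```python
    import math

    def predict_risk(features, coeffs, intercept):
        """Generic logistic risk score:
        probability = sigmoid(intercept + sum(coef * feature)).
        The coefficients here are hypothetical, purely for illustration."""
        z = intercept + sum(c * x for c, x in zip(coeffs, features))
        return 1.0 / (1.0 + math.exp(-z))

    # Hypothetical predictors: AUDIT-C score, age, current-smoker flag.
    risk = predict_risk(features=[3.0, 45.0, 1.0],
                        coeffs=[0.6, -0.02, 0.4],
                        intercept=-3.0)
    ```

    In practice one would compare the predicted probability against the published cutoff to trade off the reported sensitivity (0.83) and specificity (0.80).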

  20. Simple experimentally derived algorithm for computer calculated dose rates associated with 137Cs gynecologic insertions

    International Nuclear Information System (INIS)

    Wrede, D.E.; Dawalibi, H.

    1980-01-01

    A simple mathematical algorithm is derived from experimental data for dose rates from 137 Cs sources in a finite tissue equivalent medium corresponding to the female pelvis. An analytical expression for a point source of 137 Cs along with a simple numerical integration routine allows for rapid as well as accurate dose rate calculations at points of interest for gynecologic insertions. When compared with theoretical models assuming an infinite unit density medium, the measured dose rates are found to be systematically lower at distances away from a single source; 5 per cent at 2 cm and 10 per cent at 7 cm along the transverse axis. Allowance in the program for print out of dose rates from individual sources to a given point and the feature of source strength modification allows for optimization in terms of increasing the difference in dose rate between reference treatment points and sensitive structures such as the bladder, rectum and colon. (Auth.)
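
    The structure described, a point-source expression integrated numerically along the source, can be sketched as follows. The attenuation and scatter corrections (which the paper derives from measurements in a tissue-equivalent medium) are omitted here, so this is only the bare inverse-square kernel with hypothetical units:

    ```python
    import math

    def dose_rate_line_source(activity, gamma, length, point, n=1000):
        """Numerically integrate a line source as n point sources: each
        element carries activity/n and contributes gamma * (activity/n) / r^2.
        Tissue attenuation and scatter factors are omitted for brevity."""
        total = 0.0
        for i in range(n):
            # position of the source element along the z-axis, centered at 0
            z = -length / 2 + (i + 0.5) * length / n
            dx, dy, dz = point[0], point[1], point[2] - z
            r2 = dx * dx + dy * dy + dz * dz
            total += gamma * (activity / n) / r2
        return total

    # 2 cm linear source, dose point 2 cm off-axis (hypothetical units).
    d = dose_rate_line_source(activity=10.0, gamma=0.3, length=2.0,
                              point=(2.0, 0.0, 0.0))
    ```

    For this geometry the unattenuated analytic answer is gamma * A / (L h) * 2 * atan(L / (2 h)), which the midpoint sum reproduces closely.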

  1. A Simple and Efficient Artificial Bee Colony Algorithm

    Directory of Open Access Journals (Sweden)

    Yunfeng Xu

    2013-01-01

    Artificial bee colony (ABC) is a new population-based stochastic algorithm which has shown good search abilities on many optimization problems. However, the original ABC shows slow convergence speed during the search process. In order to enhance the performance of ABC, this paper proposes a new artificial bee colony (NABC) algorithm, which modifies the search pattern of both employed and onlooker bees. A solution pool is constructed by storing some best solutions of the current swarm. New candidate solutions are generated by searching the neighborhood of solutions randomly chosen from the solution pool. Experiments are conducted on a set of twelve benchmark functions. Simulation results show that our approach is significantly better or at least comparable to the original ABC and seven other stochastic algorithms.
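
    One plausible reading of the modified search pattern can be sketched as follows: keep a pool of the best solutions, generate each candidate in the neighborhood of a pool member, and keep it only if it improves on its parent. This is an illustration on a toy sphere function, not the paper's exact update rule:

    ```python
    import random

    def nabc_step(swarm, fitness, pool_size=3, rng=random):
        """One sketch iteration: candidates are generated around solutions
        drawn from a pool of the current best, v_j = r_j + phi * (r_j - k_j)
        with phi ~ U(-1, 1); greedy selection keeps the better of parent and
        candidate."""
        order = sorted(range(len(swarm)), key=lambda i: fitness(swarm[i]))
        pool = [swarm[i] for i in order[:pool_size]]   # best solutions
        out = []
        for x in swarm:
            r = rng.choice(pool)
            k = rng.choice(swarm)
            j = rng.randrange(len(x))                  # perturb one dimension
            v = list(x)
            v[j] = r[j] + rng.uniform(-1.0, 1.0) * (r[j] - k[j])
            out.append(v if fitness(v) < fitness(x) else x)
        return out

    sphere = lambda x: sum(t * t for t in x)           # toy benchmark function
    rng = random.Random(0)
    swarm = [[rng.uniform(-5.0, 5.0) for _ in range(3)] for _ in range(10)]
    init_best = min(sphere(x) for x in swarm)
    for _ in range(200):
        swarm = nabc_step(swarm, sphere, rng=rng)
    best = min(sphere(x) for x in swarm)
    ```

    Because selection is greedy per individual, the best fitness can only improve or stay put from one iteration to the next.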

  2. Low dose CT reconstruction via L1 norm dictionary learning using alternating minimization algorithm and balancing principle.

    Science.gov (United States)

    Wu, Junfeng; Dai, Fang; Hu, Gang; Mou, Xuanqin

    2018-04-18

    Excessive radiation exposure in computed tomography (CT) scans increases the chance of developing cancer and has become a major clinical concern. Recently, statistical iterative reconstruction (SIR) with l0-norm dictionary learning regularization has been developed to reconstruct CT images from low-dose and few-view datasets in order to reduce radiation dose. Nonetheless, the sparse regularization term adopted in this approach is the l0-norm, which cannot guarantee global convergence of the algorithm. To address this problem, in this study we introduced the l1-norm dictionary learning penalty into the SIR framework for low-dose CT image reconstruction, and developed an alternating minimization algorithm to minimize the associated objective function, which transforms the CT image reconstruction problem into a sparse coding subproblem and an image updating subproblem. During the image updating process, an efficient model function approach based on the balancing principle is applied to choose the regularization parameters. The proposed alternating minimization algorithm was evaluated first using real projection data of a sheep lung CT perfusion and then using numerical simulations based on a sheep lung CT image and a chest image. Both visual assessment and quantitative comparison in terms of root mean square error (RMSE) and structural similarity (SSIM) index demonstrated that the new image reconstruction algorithm yielded performance similar to the l0-norm dictionary learning penalty and outperformed the conventional filtered backprojection (FBP) and total variation (TV) minimization algorithms.
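
    The sparse coding subproblem with an l1 penalty has a standard solver: ISTA, a gradient step followed by soft-thresholding (the proximal map of the l1 norm). A generic sketch of that subproblem only, not the paper's full reconstruction pipeline:

    ```python
    import numpy as np

    def ista_sparse_code(D, y, lam=0.1, n_iter=500):
        """Solve min_a 0.5 * ||y - D a||^2 + lam * ||a||_1 with ISTA:
        gradient step, then soft-thresholding (the l1 proximal map)."""
        L = np.linalg.norm(D, 2) ** 2      # Lipschitz constant of the gradient
        a = np.zeros(D.shape[1])
        for _ in range(n_iter):
            g = D.T @ (D @ a - y)          # gradient of the quadratic term
            z = a - g / L
            a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
        return a

    # Dictionary with two orthonormal atoms; y is (nearly) the first atom,
    # so the sparse code should concentrate on coefficient 0.
    D = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])
    y = np.array([1.0, 0.05, 0.0])
    a = ista_sparse_code(D, y, lam=0.1)
    ```

    For an orthonormal dictionary the solution is simply the soft-thresholded correlations, here (0.9, 0), which ISTA reaches exactly.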

  3. A simple fall detection algorithm for Powered Two Wheelers

    OpenAIRE

    BOUBEZOUL, Abderrahmane; ESPIE, Stéphane; LARNAUDIE, Bruno; BOUAZIZ, Samir

    2013-01-01

    The aim of this study is to evaluate a low-complexity fall detection algorithm that uses both acceleration and angular velocity signals to trigger an alert system or to inflate an airbag jacket. The proposed fall detection algorithm is a threshold-based algorithm, using data from 3-accelerometer and 3-gyroscope sensors mounted on the motorcycle. During the first step, common fall accident configurations were selected and analyzed in order to identify the main causation factors. On the...

  4. A Simple Model of Entrepreneurship for Principles of Economics Courses

    Science.gov (United States)

    Gunter, Frank R.

    2012-01-01

    The critical roles of entrepreneurs in creating, operating, and destroying markets, as well as their importance in driving long-term economic growth are still generally either absent from principles of economics texts or relegated to later chapters. The primary difficulties in explaining entrepreneurship at the principles level are the lack of a…

  5. Essential algorithms a practical approach to computer algorithms

    CERN Document Server

    Stephens, Rod

    2013-01-01

    A friendly and accessible introduction to the most useful algorithms Computer algorithms are the basic recipes for programming. Professional programmers need to know how to use algorithms to solve difficult programming problems. Written in simple, intuitive English, this book describes how and when to use the most practical classic algorithms, and even how to create new algorithms to meet future needs. The book also includes a collection of questions that can help readers prepare for a programming job interview. Reveals methods for manipulating common data structures s

  6. Algorithmic cryptanalysis

    CERN Document Server

    Joux, Antoine

    2009-01-01

    Illustrating the power of algorithms, Algorithmic Cryptanalysis describes algorithmic methods with cryptographically relevant examples. Focusing on both private- and public-key cryptographic algorithms, it presents each algorithm either as a textual description, in pseudo-code, or in a C code program. Divided into three parts, the book begins with a short introduction to cryptography and a background chapter on elementary number theory and algebra. It then moves on to algorithms, with each chapter in this section dedicated to a single topic and often illustrated with simple cryptographic applic

  7. A simple biota removal algorithm for 35 GHz cloud radar measurements

    Science.gov (United States)

    Kalapureddy, Madhu Chandra R.; Sukanya, Patra; Das, Subrata K.; Deshpande, Sachin M.; Pandithurai, Govindan; Pazamany, Andrew L.; Ambuj K., Jha; Chakravarty, Kaustav; Kalekar, Prasad; Krishna Devisetty, Hari; Annam, Sreenivas

    2018-03-01

    promisingly simple in realization but powerful in performance due to the flexibility in constraining, identifying and filtering out the biota and screening out the true cloud content, especially the CBL clouds. Therefore, the TEST algorithm is superior for screening out the low-level clouds that are strongly linked to the rainmaking mechanism associated with the Indian Summer Monsoon region's CVS.

  8. Invited Article: Acousto-optic finite-difference frequency-domain algorithm for first-principles simulations of on-chip acousto-optic devices

    Directory of Open Access Journals (Sweden)

    Yu Shi

    2017-02-01

    Full Text Available We introduce a finite-difference frequency-domain algorithm for coupled acousto-optic simulations. First-principles acousto-optic simulation in time domain has been challenging due to the fact that the acoustic and optical frequencies differ by many orders of magnitude. We bypass this difficulty by formulating the interactions between the optical and acoustic waves rigorously as a system of coupled nonlinear equations in frequency domain. This approach is particularly suited for on-chip devices that are based on a variety of acousto-optic interactions such as the stimulated Brillouin scattering. We validate our algorithm by simulating a stimulated Brillouin scattering process in a suspended waveguide structure and find excellent agreement with coupled-mode theory. We further provide an example of a simulation for a compact on-chip resonator device that greatly enhances the effect of stimulated Brillouin scattering. Our algorithm should facilitate the design of nanophotonic on-chip devices for the harnessing of photon-phonon interactions.

  9. FAST (Four chamber view And Swing Technique) Echo: a Novel and Simple Algorithm to Visualize Standard Fetal Echocardiographic Planes

    Science.gov (United States)

    Yeo, Lami; Romero, Roberto; Jodicke, Cristiano; Oggè, Giovanna; Lee, Wesley; Kusanovic, Juan Pedro; Vaisbuch, Edi; Hassan, Sonia S.

    2010-01-01

    Objective To describe a novel and simple algorithm (FAST Echo: Four chamber view And Swing Technique) to visualize standard diagnostic planes of fetal echocardiography from dataset volumes obtained with spatiotemporal image correlation (STIC) and applying a new display technology (OmniView). Methods We developed an algorithm to image standard fetal echocardiographic planes by drawing four dissecting lines through the longitudinal view of the ductal arch contained in a STIC volume dataset. Three of the lines are locked to provide simultaneous visualization of targeted planes, and the fourth line (unlocked) “swings” through the ductal arch image (“swing technique”), providing an infinite number of cardiac planes in sequence. Each line generated the following plane(s): 1) Line 1: three-vessels and trachea view; 2) Line 2: five-chamber view and long axis view of the aorta (obtained by rotation of the five-chamber view on the y-axis); 3) Line 3: four-chamber view; and 4) “Swing” line: three-vessels and trachea view, five-chamber view and/or long axis view of the aorta, four-chamber view, and stomach. The algorithm was then tested in 50 normal hearts (15.3 – 40 weeks of gestation) and visualization rates for cardiac diagnostic planes were calculated. To determine if the algorithm could identify planes that departed from the normal images, we tested the algorithm in 5 cases with proven congenital heart defects. Results In normal cases, the FAST Echo algorithm (3 locked lines and rotation of the five-chamber view on the y-axis) was able to generate the intended planes (longitudinal view of the ductal arch, pulmonary artery, three-vessels and trachea view, five-chamber view, long axis view of the aorta, four-chamber view): 1) individually in 100% of cases [except for the three-vessel and trachea view, which was seen in 98% (49/50)]; and 2) simultaneously in 98% (49/50). The “swing technique” was able to generate the three-vessels and trachea view, five

  10. Four-chamber view and 'swing technique' (FAST) echo: a novel and simple algorithm to visualize standard fetal echocardiographic planes.

    Science.gov (United States)

    Yeo, L; Romero, R; Jodicke, C; Oggè, G; Lee, W; Kusanovic, J P; Vaisbuch, E; Hassan, S

    2011-04-01

    To describe a novel and simple algorithm (four-chamber view and 'swing technique' (FAST) echo) for visualization of standard diagnostic planes of fetal echocardiography from dataset volumes obtained with spatiotemporal image correlation (STIC) and applying a new display technology (OmniView). We developed an algorithm to image standard fetal echocardiographic planes by drawing four dissecting lines through the longitudinal view of the ductal arch contained in a STIC volume dataset. Three of the lines are locked to provide simultaneous visualization of targeted planes, and the fourth line (unlocked) 'swings' through the ductal arch image (swing technique), providing an infinite number of cardiac planes in sequence. Each line generates the following plane(s): (a) Line 1: three-vessels and trachea view; (b) Line 2: five-chamber view and long-axis view of the aorta (obtained by rotation of the five-chamber view on the y-axis); (c) Line 3: four-chamber view; and (d) 'swing line': three-vessels and trachea view, five-chamber view and/or long-axis view of the aorta, four-chamber view and stomach. The algorithm was then tested in 50 normal hearts in fetuses at 15.3-40 weeks' gestation and visualization rates for cardiac diagnostic planes were calculated. To determine whether the algorithm could identify planes that departed from the normal images, we tested the algorithm in five cases with proven congenital heart defects. In normal cases, the FAST echo algorithm (three locked lines and rotation of the five-chamber view on the y-axis) was able to generate the intended planes (longitudinal view of the ductal arch, pulmonary artery, three-vessels and trachea view, five-chamber view, long-axis view of the aorta, four-chamber view) individually in 100% of cases (except for the three-vessels and trachea view, which was seen in 98% (49/50)) and simultaneously in 98% (49/50). 
The swing technique was able to generate the three-vessels and trachea view, five-chamber view and/or long

  11. Acceleration of the direct reconstruction of linear parametric images using nested algorithms

    International Nuclear Information System (INIS)

    Wang Guobao; Qi Jinyi

    2010-01-01

    Parametric imaging using dynamic positron emission tomography (PET) provides important information for biological research and clinical diagnosis. Indirect and direct methods have been developed for reconstructing linear parametric images from dynamic PET data. Indirect methods are relatively simple and easy to implement because the image reconstruction and kinetic modeling are performed in two separate steps. Direct methods estimate parametric images directly from raw PET data and are statistically more efficient. However, the convergence rate of direct algorithms can be slow due to the coupling between the reconstruction and kinetic modeling. Here we present two fast gradient-type algorithms for direct reconstruction of linear parametric images. The new algorithms decouple the reconstruction and linear parametric modeling at each iteration by employing the principle of optimization transfer. Convergence speed is accelerated by running more sub-iterations of linear parametric estimation because the computation cost of the linear parametric modeling is much less than that of the image reconstruction. Computer simulation studies demonstrated that the new algorithms converge much faster than the traditional expectation maximization (EM) and the preconditioned conjugate gradient algorithms for dynamic PET.

  12. Quantum Computations: Fundamentals and Algorithms

    International Nuclear Information System (INIS)

    Duplij, S.A.; Shapoval, I.I.

    2007-01-01

    Basic concepts of quantum information theory, the principles of quantum computation, and the possibility of building on this basis a device unique in computational power and operating principle, the quantum computer, are considered. The main blocks of quantum logic and schemes for implementing quantum computations are presented, together with some effective quantum algorithms known today that realize the advantages of quantum computation over classical computation. Among them, a special place is taken by Shor's algorithm for number factorization and Grover's algorithm for unsorted database search. The phenomenon of decoherence, its influence on quantum computer stability, and methods of quantum error correction are described.

  13. Hamiltonian Algorithm Sound Synthesis

    OpenAIRE

    大矢, 健一

    2013-01-01

    Hamiltonian Algorithm (HA) is an algorithm for searching for solutions to optimization problems. This paper introduces a sound synthesis technique using the Hamiltonian Algorithm and shows a simple example. "Hamiltonian Algorithm Sound Synthesis" uses the phase transition effect in HA. Because of this transition effect, totally new waveforms are produced.

  14. Simple experimentally derived algorithm for computer calculated dose rates associated with ¹³⁷Cs gynecologic insertions

    Energy Technology Data Exchange (ETDEWEB)

    Wrede, D E; Dawalibi, H [King Faisal Specialist Hospital and Research Centre, Department of Medical Physics. Riyadh (Saudi Arabia)

    1980-01-01

    A simple mathematical algorithm is derived from experimental data for dose rates from ¹³⁷Cs sources in a finite tissue equivalent medium corresponding to the female pelvis. An analytical expression for a point source of ¹³⁷Cs along with a simple numerical integration routine allows for rapid as well as accurate dose rate calculations at points of interest for gynecologic insertions. When compared with theoretical models assuming an infinite unit density medium, the measured dose rates are found to be systematically lower at distances away from a single source; 5 per cent at 2 cm and 10 per cent at 7 cm along the transverse axis. Allowance in the program for printout of dose rates from individual sources to a given point and the feature of source strength modification allow for optimization in terms of increasing the difference in dose rate between reference treatment points and sensitive structures such as the bladder, rectum and colon.
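The computational pattern the abstract describes, an analytical point-source expression summed by a simple numerical integration routine along each source, can be sketched as follows. The bare inverse-square kernel and the `strength` parameter are illustrative placeholders for the paper's experimentally fitted Cs-137 expression, which also accounts for tissue attenuation:

```python
def dose_rate(point, src_start, src_end, strength, n_seg=100):
    """Dose rate at `point` from a line source: split the source into short
    segments, treat each as a point source with an inverse-square kernel,
    and sum.  `strength` is total source strength; kernel is a placeholder."""
    total = 0.0
    for i in range(n_seg):
        t = (i + 0.5) / n_seg                      # segment midpoint parameter
        sx = src_start[0] + t * (src_end[0] - src_start[0])
        sy = src_start[1] + t * (src_end[1] - src_start[1])
        sz = src_start[2] + t * (src_end[2] - src_start[2])
        r2 = (point[0] - sx) ** 2 + (point[1] - sy) ** 2 + (point[2] - sz) ** 2
        total += (strength / n_seg) / r2           # inverse-square point kernel
    return total
```

Summing the per-source contributions at a reference point is then what allows the source-strength optimization the abstract mentions.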

  15. Subpixelic measurement of large 1D displacements: principle, processing algorithms, performances and software.

    Science.gov (United States)

    Guelpa, Valérian; Laurent, Guillaume J; Sandoz, Patrick; Zea, July Galeano; Clévy, Cédric

    2014-03-12

    This paper presents a visual measurement method able to sense 1D rigid body displacements with very high resolutions, large ranges and high processing rates. Sub-pixelic resolution is obtained thanks to a structured pattern placed on the target. The pattern is made of twin periodic grids with slightly different periods. The periodic frames are suited for Fourier-like phase calculations-leading to high resolution-while the period difference allows the removal of phase ambiguity and thus a high range-to-resolution ratio. The paper presents the measurement principle as well as the processing algorithms (source files are provided as supplementary materials). The theoretical and experimental performances are also discussed. The processing time is around 3 µs for a line of 780 pixels, which means that the measurement rate is mostly limited by the image acquisition frame rate. A 3-σ repeatability of 5 nm is experimentally demonstrated which has to be compared with the 168 µm measurement range.
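The two-period principle can be made concrete: each grid yields a wrapped phase, and the wrapped phase difference varies with the beat period p1*p2/|p2 - p1|, giving a coarse but unambiguous position that selects the fringe order of the fine, high-resolution phase. A sketch under these assumptions (names are illustrative, not taken from the paper's supplementary software):

```python
import math

def twin_grid_position(phi1, phi2, p1, p2):
    """Recover an unambiguous 1D position from two wrapped phases measured on
    twin grids with slightly different periods p1 and p2.
    Valid for 0 <= x < p1 * p2 / abs(p2 - p1) (one beat period)."""
    beat = p1 * p2 / abs(p2 - p1)
    dphi = (phi1 - phi2) % (2 * math.pi)           # wrapped phase difference
    x_coarse = dphi / (2 * math.pi) * beat         # coarse, unambiguous estimate
    # Select the integer fringe order of grid 1, then refine with its phase:
    n = round(x_coarse / p1 - phi1 / (2 * math.pi))
    return (n + phi1 / (2 * math.pi)) * p1
```

The coarse estimate inherits the beat period's range, while the final answer inherits the fine grid's resolution, which is the high range-to-resolution ratio the abstract refers to.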

  16. First-principles study of complex material systems

    Science.gov (United States)

    He, Lixin

    This thesis covers several topics concerning the study of complex materials systems by first-principles methods. It contains four chapters. A brief, introductory motivation of this work will be given in Chapter 1. In Chapter 2, I will give a short overview of the first-principles methods, including density-functional theory (DFT), planewave pseudopotential methods, and the Berry-phase theory of polarization in crystalline insulators. I then discuss in detail the locality and exponential decay properties of Wannier functions and of related quantities such as the density matrix, and their application in linear-scaling algorithms. In Chapter 3, I investigate the interaction of oxygen vacancies and 180° domain walls in tetragonal PbTiO3 using first-principles methods. Our calculations indicate that the oxygen vacancies have a lower formation energy in the domain wall than in the bulk, thereby confirming the tendency of these defects to migrate to, and pin, the domain walls. The pinning energies are reported for each of the three possible orientations of the original Ti--O--Ti bonds, and attempts to model the results with simple continuum models are discussed. CaCu3Ti4O12 (CCTO) has attracted a lot of attention recently because it was found to have an enormous dielectric response over a very wide temperature range. In Chapter 4, I study the electronic and lattice structure, and the lattice dynamical properties, of this system. Our first-principles calculations together with experimental results point towards an extrinsic mechanism as the origin of the unusual dielectric response.

  17. Complexity-aware simple modeling.

    Science.gov (United States)

    Gómez-Schiavon, Mariana; El-Samad, Hana

    2018-02-26

    Mathematical models continue to be essential for deepening our understanding of biology. On one extreme, simple or small-scale models help delineate general biological principles. However, the parsimony of detail in these models as well as their assumption of modularity and insulation make them inaccurate for describing quantitative features. On the other extreme, large-scale and detailed models can quantitatively recapitulate a phenotype of interest, but have to rely on many unknown parameters, making them often difficult to parse mechanistically and to use for extracting general principles. We discuss some examples of a new approach, complexity-aware simple modeling, that can bridge the gap between the small-scale and large-scale approaches. Copyright © 2018 Elsevier Ltd. All rights reserved.

  18. New algorithms for motion error detection of numerical control machine tool by laser tracking measurement on the basis of GPS principle

    Science.gov (United States)

    Wang, Jindong; Chen, Peng; Deng, Yufen; Guo, Junjie

    2018-01-01

    As a three-dimensional measuring instrument, the laser tracker is widely used in industrial measurement. To avoid the influence of angle measurement error on the overall measurement accuracy, multi-station and time-sharing measurement with a laser tracker is introduced on the basis of the global positioning system (GPS) principle in this paper. For the proposed method, how to accurately determine the coordinates of each measuring point from a large amount of measured data is a critical issue. Taking the detection of motion error of a numerical control machine tool as an example, the corresponding measurement algorithms are investigated thoroughly. By establishing the mathematical model of detecting the motion error of a machine tool with this method, an analytical algorithm for base station calibration and measuring point determination is deduced that requires no initial iterative value. However, when the motion area of the machine tool lies in a 2D plane, the coefficient matrix of the base station calibration is singular, which distorts the result. In order to overcome this limitation of the original algorithm, an improved analytical algorithm is also derived. Meanwhile, the calibration accuracy of the base station with the improved algorithm is compared with that of the original analytical algorithm and of iterative algorithms such as the Gauss-Newton and Levenberg-Marquardt algorithms. The experiment further verifies the feasibility and effectiveness of the improved algorithm. In addition, the influence of different machine tool motion areas on the calibration accuracy of the base station, and the corresponding influence of measurement error on the calibration result as a function of the condition number of the coefficient matrix, are analyzed.

  19. Flux-corrected transport principles, algorithms, and applications

    CERN Document Server

    Kuzmin, Dmitri; Turek, Stefan

    2005-01-01

    Addressing students and researchers as well as CFD practitioners, this book describes the state of the art in the development of high-resolution schemes based on the Flux-Corrected Transport (FCT) paradigm. Intended for readers who have a solid background in Computational Fluid Dynamics, the book begins with historical notes by J.P. Boris and D.L. Book. Review articles that follow describe recent advances in the design of FCT algorithms as well as various algorithmic aspects. The topics addressed in the book and its main highlights include: the derivation and analysis of classical FCT schemes with special emphasis on the underlying physical and mathematical constraints; flux limiting for hyperbolic systems; generalization of FCT to implicit time-stepping and finite element discretizations on unstructured meshes and its role as a subgrid scale model for Monotonically Integrated Large Eddy Simulation (MILES) of turbulent flows. The proposed enhancements of the FCT methodology also comprise the prelimiting and '...

  20. The action uncertainty principle and quantum gravity

    Science.gov (United States)

    Mensky, Michael B.

    1992-02-01

    Results of the path-integral approach to the quantum theory of continuous measurements have been formulated in a preceding paper in the form of an inequality of the type of the uncertainty principle. The new inequality was called the action uncertainty principle (AUP). It was shown that the AUP allows one to find in a simple way which outputs of the continuous measurements will occur with high probability. Here a simpler form of the AUP will be formulated, δS ≳ ħ. When applied to quantum gravity, it leads in a very simple way to the Rosenfeld inequality for the measurability of the average curvature.

  1. Solving simple stochastic games with few coin toss positions

    DEFF Research Database (Denmark)

    Ibsen-Jensen, Rasmus; Miltersen, Peter Bro

    2011-01-01

    Gimbert and Horn gave an algorithm for solving simple stochastic games with running time O(r! n) where n is the number of positions of the simple stochastic game and r is the number of its coin toss positions. Chatterjee et al. pointed out that a variant of strategy iteration can be implemented...... to solve this problem in time 4^r r^{O(1)} n^{O(1)}. In this paper, we show that an algorithm combining value iteration with retrograde analysis achieves a time bound of O(r 2^r (r log r + n)), thus improving both time bounds. While the algorithm is simple, the analysis leading to this time bound...
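For context, the plain value-iteration subroutine that the combined algorithm builds on can be sketched as a fixed-point iteration over position values (the retrograde-analysis acceleration and the O(r 2^r (r log r + n)) bookkeeping from the paper are not reproduced here):

```python
def value_iteration(game, iters=1000):
    """Value iteration for a simple stochastic game.  `game` maps each
    position to ('max' | 'min' | 'avg', [successors]) or ('sink', value);
    'avg' positions are the coin toss positions (uniform over successors)."""
    v = {p: (spec[1] if spec[0] == 'sink' else 0.0) for p, spec in game.items()}
    for _ in range(iters):
        for p, (kind, succ) in game.items():
            if kind == 'sink':
                continue
            vals = [v[q] for q in succ]
            if kind == 'max':
                v[p] = max(vals)       # maximizer picks the best successor
            elif kind == 'min':
                v[p] = min(vals)       # minimizer picks the worst successor
            else:
                v[p] = sum(vals) / len(vals)   # coin toss: average
    return v
```

Values converge toward each position's winning probability under optimal play; the paper's insight is that the expensive part of the fixed point can be confined to the r coin toss positions.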

  2. A Simple Two Aircraft Conflict Resolution Algorithm

    Science.gov (United States)

    Chatterji, Gano B.

    2006-01-01

    Conflict detection and resolution methods are crucial for distributed air-ground traffic management, in which the crew in the cockpit, dispatchers in operations control centers and traffic controllers in the ground-based air traffic management facilities share information and participate in the traffic flow and traffic control functions. This paper describes a conflict detection method and a conflict resolution method. The conflict detection method predicts the minimum separation and the time-to-go to the closest point of approach by assuming that both aircraft will continue to fly at their current speeds along their current headings. The conflict resolution method described here is motivated by the proportional navigation algorithm, which is often used for missile guidance during the terminal phase. It generates speed and heading commands to rotate the line-of-sight either clockwise or counter-clockwise for conflict resolution. Once the aircraft achieve a positive range-rate and no further conflict is predicted, the algorithm generates heading commands to turn the aircraft back to their nominal trajectories. The speed commands are set to the optimal pre-resolution speeds. Six numerical examples are presented to demonstrate the conflict detection and conflict resolution methods.
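The detection step is a closed-form calculation once both aircraft are assumed to hold current speed and heading: minimize the relative distance over time. A 2D sketch (variable names and units are illustrative, not from the paper):

```python
import math

def cpa(p1, v1, p2, v2):
    """Time-to-go and minimum separation at the closest point of approach
    (CPA) for two aircraft flying straight at constant velocity.
    p1, p2: 2D positions; v1, v2: 2D velocities.  Returns (t_cpa, d_cpa)."""
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]          # relative position
    wx, wy = v2[0] - v1[0], v2[1] - v1[1]          # relative velocity
    w2 = wx * wx + wy * wy
    if w2 == 0.0:                                  # same velocity: constant range
        return 0.0, math.hypot(rx, ry)
    t = -(rx * wx + ry * wy) / w2                  # minimizes |r + w t|
    t = max(t, 0.0)                                # CPA in the past: diverging now
    return t, math.hypot(rx + wx * t, ry + wy * t)
```

A conflict is then declared when `d_cpa` falls below the required separation within the look-ahead horizon, which is what triggers the resolution logic.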

  3. Simple concurrent garbage collection almost without synchronization

    NARCIS (Netherlands)

    Hesselink, Wim H.; Lali, M.I.

    We present two simple mark and sweep algorithms, A and B, for concurrent garbage collection by a single collector running concurrently with a number of mutators that concurrently modify shared data. Both algorithms are based on the ideas of Ben-Ari's classical algorithm for on-the-fly garbage

  4. Quantum Computation and Algorithms

    International Nuclear Information System (INIS)

    Biham, O.; Biron, D.; Biham, E.; Grassi, M.; Lidar, D.A.

    1999-01-01

    It is now firmly established that quantum algorithms provide a substantial speedup over classical algorithms for a variety of problems, including the factorization of large numbers and the search for a marked element in an unsorted database. In this talk I will review the principles of quantum algorithms, the basic quantum gates and their operation. The combination of superposition and interference, that makes these algorithms efficient, will be discussed. In particular, Grover's search algorithm will be presented as an example. I will show that the time evolution of the amplitudes in Grover's algorithm can be found exactly using recursion equations, for any initial amplitude distribution
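The recursion for the amplitudes is easy to state: with k marked items among N, all marked items share one amplitude and all unmarked items another, and each Grover iteration applies the oracle's sign flip followed by inversion about the mean. A short sketch of this exact evolution:

```python
def grover_amplitudes(N, k, iterations):
    """Exact amplitude recursion for Grover search: a is the amplitude of
    each of the k marked items, b of each of the N - k unmarked items.
    Returns the list of (a, b) pairs after each iteration."""
    a = b = 1.0 / N ** 0.5                 # uniform initial superposition
    history = [(a, b)]
    for _ in range(iterations):
        mean = ((N - k) * b - k * a) / N   # mean amplitude after the oracle flip
        a, b = 2 * mean + a, 2 * mean - b  # inversion about the mean
        history.append((a, b))
    return history
```

For N = 4 with one marked item a single iteration drives the marked amplitude to 1, the textbook special case of the sin((2m + 1)θ) closed form.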

  5. The Basic Principles and Methods of the System Approach to Compression of Telemetry Data

    Science.gov (United States)

    Levenets, A. V.

    2018-01-01

    The task of compressing measurement data remains urgent for information-measurement systems. In this paper, basic principles necessary for designing highly effective systems for the compression of telemetric information are offered. The basis of the offered principles is the representation of a telemetric frame as a whole information space in which existing correlations can be found. Methods of data transformation and compression algorithms realizing the offered principles are described. The compression ratio for the offered compression algorithm is about 1.8 times higher than for a classic algorithm. The results of the research thus show the good prospects of these methods and algorithms.

  6. A simple algorithm for calculating the area of an arbitrary polygon

    Directory of Open Access Journals (Sweden)

    K.R. Wijeweera

    2017-06-01

    Full Text Available Computing the area of an arbitrary polygon is a popular problem in pure mathematics. The two methods used are the Shoelace Method (SM) and the Orthogonal Trapezoids Method (OTM). In OTM, the polygon is partitioned into trapezoids by drawing either horizontal or vertical lines through its vertices. The area of each trapezoid is computed and the resultant areas are added up. In SM, a formula which is a generalization of Green’s Theorem for the discrete case is used. Most of the available systems are based on SM. Since an algorithm for OTM is not available in the literature, this paper proposes an algorithm for OTM along with an efficient implementation. Conversion of a pure mathematical method into an efficient computer program is not straightforward. In order to reduce the run time, minimal computation needs to be achieved. Handling indeterminate forms and special cases separately can support this. On the other hand, precision error should also be avoided. A salient feature of the proposed algorithm is that it successfully handles these situations while achieving minimum run time. Experimental results of the proposed method are compared against those of the existing algorithm. Moreover, the proposed algorithm suggests a way to partition a polygon into orthogonal trapezoids, which is not an easy task. Additionally, the proposed algorithm uses only basic mathematical concepts while Green’s theorem uses complicated mathematical concepts. The proposed algorithm can be used when simplicity is more important than speed.
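For reference, the Shoelace Method (SM) against which the proposed OTM algorithm is compared takes only a few lines; a minimal sketch:

```python
def shoelace_area(vertices):
    """Area of a simple polygon by the Shoelace Method: sum the signed
    cross products of consecutive vertices and halve the absolute value.
    `vertices` is a list of (x, y) pairs in either orientation."""
    n = len(vertices)
    s = 0.0
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]     # wrap around to close the polygon
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0
```

This is the discrete Green's theorem formula the abstract mentions; the absolute value hides the orientation sign.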

  7. Principle Paradigms Revisiting the Dublin Core 1:1 Principle

    Science.gov (United States)

    Urban, Richard J.

    2012-01-01

    The Dublin Core "1:1 Principle" asserts that "related but conceptually different entities, for example a painting and a digital image of the painting, are described by separate metadata records" (Woodley et al., 2005). While this seems to be a simple requirement, studies of metadata quality have found that cultural heritage…

  8. A simple derivation and analysis of a helical cone beam tomographic algorithm for long object imaging via a novel definition of region of interest

    International Nuclear Information System (INIS)

    Hu Jicun; Tam, Kwok; Johnson, Roger H

    2004-01-01

    We derive and analyse a simple algorithm first proposed by Kudo et al (2001 Proc. 2001 Meeting on Fully 3D Image Reconstruction in Radiology and Nuclear Medicine (Pacific Grove, CA) pp 7-10) for long object imaging from truncated helical cone beam data via a novel definition of region of interest (ROI). Our approach is based on the theory of short object imaging by Kudo et al (1998 Phys. Med. Biol. 43 2885-909). One of the key findings in their work is that filtering of the truncated projection can be divided into two parts: one, finite in the axial direction, results from ramp filtering the data within the Tam window. The other, infinite in the z direction, results from unbounded filtering of ray sums over PI lines only. We show that for an ROI defined by PI lines emanating from the initial and final source positions on a helical segment, the boundary data which would otherwise contaminate the reconstruction of the ROI can be completely excluded. This novel definition of the ROI leads to a simple algorithm for long object imaging. The overscan of the algorithm is analytically calculated and it is the same as that of the zero boundary method. The reconstructed ROI can be divided into two regions: one is minimally contaminated by the portion outside the ROI, while the other is reconstructed free of contamination. We validate the algorithm with a 3D Shepp-Logan phantom and a disc phantom

  9. From properties to materials: An efficient and simple approach.

    Science.gov (United States)

    Huwig, Kai; Fan, Chencheng; Springborg, Michael

    2017-12-21

    We present an inverse-design method, the poor man's materials optimization, that is designed to identify materials within a very large class with optimized values for a pre-chosen property. The method combines an efficient genetic-algorithm-based optimization, an automatic approach for generating modified molecules, a simple approach for calculating the property of interest, and a mathematical formulation of the quantity whose value shall be optimized. In order to illustrate the performance of our approach, we study the properties of organic molecules related to those used in dye-sensitized solar cells, whereby we, for the sake of proof of principle, consider benzene as a simple test system. Using a genetic algorithm, the substituents attached to the organic backbone are varied and the best performing molecules are identified. We consider several properties to describe the performance of organic molecules, including the HOMO-LUMO gap, the sunlight absorption, the spatial distance of the orbitals, and the reorganisation energy. The results show that our method is able to identify a large number of good candidate structures within a short time. In some cases, chemical/physical intuition can be used to rationalize the substitution pattern of the best structures, although this is not always possible. The present investigations provide a solid foundation for dealing with more complex and technically relevant systems such as porphyrins. Furthermore, our "properties first, materials second" approach is not limited to solar-energy harvesting but can be applied to many other fields, as briefly is discussed in the paper.

  10. From properties to materials: An efficient and simple approach

    Science.gov (United States)

    Huwig, Kai; Fan, Chencheng; Springborg, Michael

    2017-12-01

    We present an inverse-design method, the poor man's materials optimization, that is designed to identify materials within a very large class with optimized values for a pre-chosen property. The method combines an efficient genetic-algorithm-based optimization, an automatic approach for generating modified molecules, a simple approach for calculating the property of interest, and a mathematical formulation of the quantity whose value shall be optimized. In order to illustrate the performance of our approach, we study the properties of organic molecules related to those used in dye-sensitized solar cells, whereby we, for the sake of proof of principle, consider benzene as a simple test system. Using a genetic algorithm, the substituents attached to the organic backbone are varied and the best performing molecules are identified. We consider several properties to describe the performance of organic molecules, including the HOMO-LUMO gap, the sunlight absorption, the spatial distance of the orbitals, and the reorganisation energy. The results show that our method is able to identify a large number of good candidate structures within a short time. In some cases, chemical/physical intuition can be used to rationalize the substitution pattern of the best structures, although this is not always possible. The present investigations provide a solid foundation for dealing with more complex and technically relevant systems such as porphyrins. Furthermore, our "properties first, materials second" approach is not limited to solar-energy harvesting but can be applied to many other fields, as briefly is discussed in the paper.

  11. Water productivity using SAFER - Simple Algorithm for Evapotranspiration Retrieving in watershed

    Directory of Open Access Journals (Sweden)

    Daniel N. Coaguila

    Full Text Available ABSTRACT The Cabeceira Comprida stream’s watershed, located in Santa Fé do Sul, São Paulo state, has great environmental importance. It is essential for supplying water to the population and generating surpluses for sewage dilution. This study aimed to evaluate the annual performance of the components of water productivity from Landsat-8 images of 2015, using the Simple Algorithm for Evapotranspiration Retrieving (SAFER), calculating the actual evapotranspiration (ETa), biomass (BIO) and water productivity (WP). The annual averages of ETa, BIO and WP were 1.03 mm day-1, 36.04 kg ha-1 day-1 and 3.19 kg m-3, respectively. The average annual values of ETa for land use and occupation were 1.40, 1.23, 1.05, 0.97 and 1.08 mm day-1 for the remaining forest (RF), invasive species (IS), pasture (Pa), annual crop (AC) and perennial crop (PC), respectively, with BIO of 57.64, 46.10, 36.78, 32.69, 40.03 kg ha-1 day-1 for RF, IS, Pa, AC and PC, respectively, resulting in WP of 3.94, 3.59, 3.25, 3.09, 3.35 kg m-3 for RF, IS, Pa, AC and PC, respectively. The ETa, BIO and WP follow the seasonality of the region, and RF and IS stood out with the highest values.

  12. Clustering Using Boosted Constrained k-Means Algorithm

    Directory of Open Access Journals (Sweden)

    Masayuki Okabe

    2018-03-01

    Full Text Available This article proposes a constrained clustering algorithm with performance competitive with, and computation time lower than, state-of-the-art methods; it consists of a constrained k-means algorithm enhanced by the boosting principle. Constrained k-means clustering, which uses constraints as background knowledge, is easy to implement and quick but has insufficient performance compared with metric learning-based methods. Since it simply adds a function into the data assignment process of the k-means algorithm to check for constraint violations, it often exploits only a small number of constraints. Metric learning-based methods, which exploit constraints to create a new metric for data similarity, have shown promising results, although the methods proposed so far are often slow depending on the amount of data or the number of feature dimensions. We present a method that exploits the advantages of both the constrained k-means and metric learning approaches. It incorporates a mechanism for accepting constraint priorities and a metric learning framework based on the boosting principle into a constrained k-means algorithm. In this framework, a metric is learned in the form of a kernel matrix that integrates weak cluster hypotheses produced by the constrained k-means algorithm, which works as a weak learner under the boosting principle. Experimental results for 12 data sets from 3 data sources demonstrate that our method has performance competitive with state-of-the-art constrained clustering methods for most data sets and that it takes much less computation time. The experimental evaluation also demonstrates the effectiveness of controlling the constraint priorities by using the boosting principle and shows that our constrained k-means algorithm functions correctly as a weak learner of boosting.
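
    The constraint-violation check that the abstract describes can be sketched as follows (a COP-KMeans-style assignment pass; the function names and the cannot-link-only check are illustrative assumptions, not the paper's code):

```python
import numpy as np

def violates(i, c, labels, cannot_link):
    """True if assigning point i to cluster c breaks a cannot-link pair."""
    return any(labels[j] == c
               for a, b in cannot_link
               for j in ((b,) if a == i else (a,) if b == i else ()))

def constrained_assign(X, centers, cannot_link):
    """One assignment pass of constrained k-means: each point takes its
    nearest feasible center, skipping centers that violate a constraint."""
    labels = -np.ones(len(X), dtype=int)
    for i in range(len(X)):
        for c in np.argsort(np.linalg.norm(X[i] - centers, axis=1)):
            if not violates(i, c, labels, cannot_link):
                labels[i] = c
                break
    return labels

X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0]])
centers = np.array([[0.0, 0.0], [5.0, 5.0]])
# a cannot-link pair forces points 0 and 1 into different clusters
print(constrained_assign(X, centers, [(0, 1)]))  # -> [0 1 1]
```

    As the abstract notes, such a check exploits only the few constraints that happen to bind during assignment, which is what the boosting-based metric learning is meant to improve on.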

  13. Microscopic Description of Le Chatelier's Principle

    Science.gov (United States)

    Novak, Igor

    2005-01-01

    A simple approach that "demystifies" Le Chatelier's principle (LCP) and stimulates students to think about the fundamental physics behind this well-known principle is presented. The approach uses microscopic descriptors of matter like energy levels and populations and does not require any assumption about the fixed amount of substance being…

  14. A Data-Guided Lexisearch Algorithm for the Asymmetric Traveling Salesman Problem

    Directory of Open Access Journals (Sweden)

    Zakir Hussain Ahmed

    2011-01-01

    Full Text Available A simple lexisearch algorithm that uses the path representation method for the asymmetric traveling salesman problem (ATSP) is proposed, along with an illustrative example, to obtain an exact optimal solution to the problem. Then a data-guided lexisearch algorithm is presented. First, the cost matrix of the problem is transposed depending on the variance of its rows and columns, and then the simple lexisearch algorithm is applied. It is shown that this minor preprocessing of the data before the simple lexisearch algorithm is applied improves the computational time substantially. The efficiency of our algorithms on this problem has been examined against two existing algorithms for some TSPLIB and random instances of various sizes. The results show remarkably better performance of our algorithms, especially the data-guided algorithm.
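
    The data-guided preprocessing step can be pictured as below (a minimal sketch; the specific variance criterion is an assumption, since the abstract does not state the exact rule):

```python
import numpy as np

def data_guided_matrix(C):
    """Transpose the ATSP cost matrix when the mean column variance exceeds
    the mean row variance (hypothetical criterion). For the ATSP, solving
    the transposed matrix yields the reverse of an optimal tour, so the
    optimal cost is unchanged."""
    if C.var(axis=0).mean() > C.var(axis=1).mean():
        return C.T.copy(), True
    return C, False

C = np.array([[0, 1, 9],
              [1, 0, 2],
              [8, 3, 0]], dtype=float)
M, transposed = data_guided_matrix(C)  # this matrix gets transposed
```

    The point of such a rule is that lexisearch prunes branches by row-wise lower bounds, so orienting the matrix favourably can shrink the search tree without changing the optimum.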

  15. Development of a Two-Phase Flow Analysis Code based on an Unstructured-Mesh SIMPLE Algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jong Tae; Park, Ik Kyu; Cho, Heong Kyu; Yoon, Han Young; Kim, Kyung Doo; Jeong, Jae Jun

    2008-09-15

    For analyses of multi-phase flows in a water-cooled nuclear power plant, a three-dimensional SIMPLE-algorithm based hydrodynamic solver CUPID-S has been developed. As governing equations, it adopts a two-fluid three-field model for the two-phase flows. The three fields represent continuous liquid, dispersed droplets, and vapour. The governing equations are discretized by a finite volume method on an unstructured grid to handle the geometrical complexity of nuclear reactors. The phasic momentum equations are coupled and solved with a sparse block Gauss-Seidel matrix solver to increase numerical stability. The pressure correction equation derived by summing the phasic volume fraction equations is applied on the unstructured mesh in the context of a cell-centered co-located scheme. This paper presents the numerical method and the preliminary results of the calculations.

  16. Combinatorial structures to modeling simple games and applications

    Science.gov (United States)

    Molinero, Xavier

    2017-09-01

    We connect three different topics: combinatorial structures, game theory and chemistry. In particular, we establish the bases to represent some simple games, defined as influence games, and molecules, defined from atoms, by using combinatorial structures. First, we characterize simple games as influence games using influence graphs. This lets us model simple games as combinatorial structures (from the viewpoint of structures or graphs). Second, we formally define molecules as combinations of atoms. This lets us model molecules as combinatorial structures (from the viewpoint of combinations). It remains open to generate such combinatorial structures using specific techniques such as genetic algorithms, (meta-)heuristic algorithms and parallel programming, among others.

  17. Cognitive radio resource allocation based on coupled chaotic genetic algorithm

    International Nuclear Information System (INIS)

    Zu Yun-Xiao; Zhou Jie; Zeng Chang-Chang

    2010-01-01

    A coupled chaotic genetic algorithm for cognitive radio resource allocation, based on a genetic algorithm and a coupled Logistic map, is proposed. A fitness function for cognitive radio resource allocation is provided. Simulations of cognitive radio resource allocation are conducted using the coupled chaotic genetic algorithm, a simple genetic algorithm and a dynamic allocation algorithm, respectively. The simulation results show that, compared with the simple genetic and dynamic allocation algorithms, the coupled chaotic genetic algorithm reduces the total transmission power and bit error rate in the cognitive radio system, and has a faster convergence speed.
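
    A coupled Logistic map of the kind mentioned can be sketched as follows (the symmetric coupling form and the parameter values are assumptions for illustration; the abstract does not give the paper's exact scheme):

```python
def coupled_logistic(x, y, r=4.0, eps=0.1, steps=1):
    """Iterate two symmetrically coupled logistic maps:
    x' = (1 - eps) * f(x) + eps * f(y), and symmetrically for y,
    with f(v) = r * v * (1 - v). The resulting chaotic sequences can
    replace uniform random numbers inside genetic operators."""
    f = lambda v: r * v * (1.0 - v)
    for _ in range(steps):
        x, y = (1 - eps) * f(x) + eps * f(y), (1 - eps) * f(y) + eps * f(x)
    return x, y

x, y = coupled_logistic(0.2, 0.7)
print(round(x, 6), round(y, 6))  # -> 0.66 0.82
```

    The attraction of chaotic sequences in a GA is that they are deterministic yet ergodic over the unit interval, which can improve the diversity of the population.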

  18. Development of a simple unified volatility-based scheme (SUVS) for secondary organic aerosol formation using genetic algorithms

    Directory of Open Access Journals (Sweden)

    A. G. Xia

    2011-07-01

    Full Text Available A new method is proposed to simplify complex atmospheric chemistry reaction schemes, while preserving SOA formation properties, using genetic algorithms. The method is first applied in this study to the gas-phase α-pinene oxidation scheme. The simple unified volatility-based scheme (SUVS) reflects the multi-generation evolution of chemical species from a near-explicit master chemical mechanism (MCM) and, at the same time, uses the volatility-basis-set speciation for condensable products. The SUVS also unifies reactions between SOA precursors with different oxidants under different atmospheric conditions. A total of 412 unknown parameters (product yields of parameterized products, reaction rates, etc.) from the SUVS are estimated by using genetic algorithms operating on the detailed mechanism. The number of organic species was reduced from 310 in the detailed mechanism to 31 in the SUVS. Output species profiles, obtained from the original subset of the MCM reaction scheme for α-pinene oxidation, are reproduced with a maximum fractional error of 0.10 for scenarios under a wide range of ambient HC/NOx conditions. Ultimately, the same SUVS with updated parameters could be used to describe SOA formation from different precursors.

  19. Experimental realization of Shor's quantum factoring algorithm using nuclear magnetic resonance.

    Science.gov (United States)

    Vandersypen, L M; Steffen, M; Breyta, G; Yannoni, C S; Sherwood, M H; Chuang, I L

    The number of steps any classical computer requires in order to find the prime factors of an l-digit integer N increases exponentially with l, at least using algorithms known at present. Factoring large integers is therefore conjectured to be intractable classically, an observation underlying the security of widely used cryptographic codes. Quantum computers, however, could factor integers in only polynomial time, using Shor's quantum factoring algorithm. Although important for the study of quantum computers, experimental demonstration of this algorithm has proved elusive. Here we report an implementation of the simplest instance of Shor's algorithm: factorization of N = 15 (whose prime factors are 3 and 5). We use seven spin-1/2 nuclei in a molecule as quantum bits, which can be manipulated with room temperature liquid-state nuclear magnetic resonance techniques. This method of using nuclei to store quantum information is in principle scalable to systems containing many quantum bits, but such scalability is not implied by the present work. The significance of our work lies in the demonstration of experimental and theoretical techniques for precise control and modelling of complex quantum computers. In particular, we present a simple, parameter-free but predictive model of decoherence effects in our system.
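
    The classical post-processing that turns a measured period into factors is simple enough to sketch (standard Shor post-processing; illustrative Python, not the authors' code):

```python
from math import gcd

def factors_from_period(a, r, N):
    """Classical step of Shor's algorithm: given the period r of
    f(x) = a^x mod N (found by the quantum part), recover factors of N."""
    if r % 2:
        return None  # odd period: retry with another base a
    y = pow(a, r // 2, N)
    if y == N - 1:
        return None  # trivial square root: retry
    return tuple(sorted({gcd(y - 1, N), gcd(y + 1, N)}))

# the experiment's instance: N = 15 with base a = 7, whose period is 4
print(factors_from_period(7, 4, 15))  # -> (3, 5)
```

    For the reported experiment the quantum circuit only has to find the period r = 4 of 7^x mod 15; the gcd computations above then yield 3 and 5.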

  20. Geometric approximation algorithms

    CERN Document Server

    Har-Peled, Sariel

    2011-01-01

    Exact algorithms for dealing with geometric objects are complicated, hard to implement in practice, and slow. Over the last 20 years a theory of geometric approximation algorithms has emerged. These algorithms tend to be simple, fast, and more robust than their exact counterparts. This book is the first to cover geometric approximation algorithms in detail. In addition, more traditional computational geometry techniques that are widely used in developing such algorithms, like sampling, linear programming, etc., are also surveyed. Other topics covered include approximate nearest-neighbor search, shape approximation, coresets, dimension reduction, and embeddings. The topics covered are relatively independent and are supplemented by exercises. Close to 200 color figures are included in the text to illustrate proofs and ideas.

  1. Predictive minimum description length principle approach to inferring gene regulatory networks.

    Science.gov (United States)

    Chaitankar, Vijender; Zhang, Chaoyang; Ghosh, Preetam; Gong, Ping; Perkins, Edward J; Deng, Youping

    2011-01-01

    Reverse engineering of gene regulatory networks using information theory models has received much attention due to its simplicity, low computational cost, and capability of inferring large networks. One of the major problems with information theory models is to determine the threshold that defines the regulatory relationships between genes. The minimum description length (MDL) principle has been implemented to overcome this problem. The description length of the MDL principle is the sum of model length and data encoding length. A user-specified fine-tuning parameter is used as a control mechanism between model and data encoding, but it is difficult to find the optimal parameter. In this work, we propose a new inference algorithm that incorporates mutual information (MI), conditional mutual information (CMI), and the predictive minimum description length (PMDL) principle to infer gene regulatory networks from DNA microarray data. In this algorithm, the information theoretic quantities MI and CMI determine the regulatory relationships between genes and the PMDL principle method attempts to determine the best MI threshold without the need of a user-specified fine-tuning parameter. The performance of the proposed algorithm is evaluated using both synthetic time series data sets and a biological time series data set (Saccharomyces cerevisiae). The results show that the proposed algorithm produced fewer false edges and significantly improved the precision when compared to the existing MDL algorithm.
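
    The quantity being thresholded is ordinary empirical mutual information between discretized expression profiles; a minimal version (the discretization and any bias correction used in the paper are not reproduced) looks like:

```python
from collections import Counter
import numpy as np

def mutual_information(x, y):
    """Empirical mutual information (in bits) between two discrete
    sequences: I(X;Y) = sum p(x,y) * log2(p(x,y) / (p(x) p(y)))."""
    n = len(x)
    px, py = Counter(x), Counter(y)
    pxy = Counter(zip(x, y))
    return sum((c / n) * np.log2(c * n / (px[a] * py[b]))
               for (a, b), c in pxy.items())

x = [0, 0, 1, 1]
y = [0, 0, 1, 1]  # perfectly dependent binary profiles
print(round(mutual_information(x, y), 3))  # -> 1.0
```

    An edge between two genes is then kept only when this value exceeds the threshold that the PMDL criterion selects.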

  2. Mao-Gilles Stabilization Algorithm

    OpenAIRE

    Jérôme Gilles

    2013-01-01

    Originally, the Mao-Gilles stabilization algorithm was designed to compensate the non-rigid deformations due to atmospheric turbulence. Given a sequence of frames affected by atmospheric turbulence, the algorithm uses a variational model combining optical flow and regularization to characterize the static observed scene. The optimization problem is solved by Bregman Iteration and the operator splitting method. The algorithm is simple, efficient, and can be easily generalized for different scenarios involving non-rigid deformations.

  3. A novel hybrid algorithm of GSA with Kepler algorithm for numerical optimization

    Directory of Open Access Journals (Sweden)

    Soroor Sarafrazi

    2015-07-01

    Full Text Available It is now well recognized that pure algorithms can be promisingly improved by hybridization with other techniques. One of the relatively new metaheuristic algorithms is the Gravitational Search Algorithm (GSA), which is based on Newton's laws. In this paper, to enhance the performance of GSA, a novel algorithm called “Kepler”, inspired by astrophysics, is introduced. The Kepler algorithm is based on the principle of Kepler's first law. The hybridization of GSA and the Kepler algorithm is an efficient approach to provide much stronger specialization in intensification and/or diversification. The performance of GSA–Kepler is evaluated by applying it to 14 benchmark functions with 20–1000 dimensions and to the optimal approximation of a linear system as a practical optimization problem. The results obtained reveal that the proposed hybrid algorithm is robust enough to optimize the benchmark functions and practical optimization problems.

  4. A New Modified Firefly Algorithm

    Directory of Open Access Journals (Sweden)

    Medha Gupta

    2016-07-01

    Full Text Available Nature-inspired meta-heuristic algorithms study the emergent collective intelligence of groups of simple agents. The Firefly Algorithm is one such new swarm-based metaheuristic algorithm, inspired by the flashing behavior of fireflies. The algorithm was first proposed in 2008 and has since been successfully used for solving various optimization problems. In this work, we propose a new modified version of the Firefly algorithm (MoFA) and compare its performance with the standard firefly algorithm along with various other meta-heuristic algorithms. Numerical studies and results demonstrate that the proposed algorithm is superior to existing algorithms.
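
    The standard firefly move that such variants build on can be sketched as follows (the generic FA update rule; the specific modifications of MoFA are not reproduced here):

```python
import numpy as np

def firefly_move(xi, xj, beta0=1.0, gamma=1.0, alpha=0.0, rng=None):
    """Move firefly i toward the brighter firefly j. Attractiveness decays
    with distance as beta0 * exp(-gamma * r^2); alpha scales an optional
    random-walk term."""
    rng = rng or np.random.default_rng(0)
    beta = beta0 * np.exp(-gamma * np.sum((xi - xj) ** 2))
    return xi + beta * (xj - xi) + alpha * (rng.random(xi.shape) - 0.5)

xi, xj = np.array([0.0, 0.0]), np.array([1.0, 1.0])
xi_new = firefly_move(xi, xj)  # with alpha = 0 the move is deterministic
```

    Repeating this move for every pair of fireflies, ranked by objective value, gives the basic algorithm; modified versions typically change how beta0, gamma, or alpha evolve over iterations.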

  5. The theory of hybrid stochastic algorithms

    International Nuclear Information System (INIS)

    Kennedy, A.D.

    1989-01-01

    These lectures introduce the family of Hybrid Stochastic Algorithms for performing Monte Carlo calculations in Quantum Field Theory. After explaining the basic concepts of Monte Carlo integration we discuss the properties of Markov processes and one particularly useful example of them: the Metropolis algorithm. Building upon this framework we consider the Hybrid and Langevin algorithms from the viewpoint that they are approximate versions of the Hybrid Monte Carlo method; and thus we are led to consider Molecular Dynamics using the Leapfrog algorithm. The lectures conclude by reviewing recent progress in these areas, explaining higher-order integration schemes, the asymptotic large-volume behaviour of the various algorithms, and some simple exact results obtained by applying them to free field theory. It is attempted throughout to give simple yet correct proofs of the various results encountered. 38 refs
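
    The leapfrog step at the heart of these algorithms is short enough to show (a textbook sketch, not the lecture notes' code):

```python
def leapfrog(q, p, grad_U, eps, n_steps):
    """Leapfrog integration of Hamiltonian dynamics as used in Hybrid
    (Hamiltonian) Monte Carlo: a half step in momentum, alternating full
    steps in position and momentum, and a final half step in momentum.
    The scheme is time-reversible and preserves phase-space volume."""
    p = p - 0.5 * eps * grad_U(q)
    for _ in range(n_steps - 1):
        q = q + eps * p
        p = p - eps * grad_U(q)
    q = q + eps * p
    p = p - 0.5 * eps * grad_U(q)
    return q, p

# harmonic oscillator U(q) = q^2 / 2: energy is nearly conserved
q, p = leapfrog(1.0, 0.0, lambda q: q, 0.1, 100)
print(round(0.5 * q * q + 0.5 * p * p, 3))  # close to the initial 0.5
```

    Reversibility and volume preservation are exactly what make the Hybrid Monte Carlo accept/reject step valid, while the O(eps^2) energy error controls the acceptance rate.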

  6. The parallel plate avalanche counter: a simple, rugged, imaging X-ray counter

    International Nuclear Information System (INIS)

    Joensen, K.D.; Budtz-Joergensen, C.; Bahnsen, A.; Madsen, M.M.; Olesen, C.; Schnopper, H.W.

    1995-01-01

    A two-dimensional parallel gap proportional counter has been developed at the Danish Space Research Institute. Imaging over the 120 mm diameter active area is obtained using the positive ion component of the avalanche signals as recorded by a system of wedge- and strip-electrodes. An electronically simple, but very effective background rejection is obtained by using the fast electron component of the avalanche signal. Gas gains up to 8x10^5 have been achieved. An energy-resolution of 16% and a sub-millimeter spatial resolution have been measured at 5.9 keV for an operating gas gain of 10^5. In principle, the position coordinates are linear functions of electronic readouts. The present model, however, exhibits non-linearities, caused by imperfections in the wedge and strip-electrode pattern. These non-linearities are corrected by using a bilinear correction algorithm. We conclude that the rugged construction, the simple electronics, the effectiveness of the background rejection and the actual imaging performance makes this a very attractive laboratory detector for low and intermediate count rate imaging applications. ((orig.))

  7. Unsupervised Classification Using Immune Algorithm

    OpenAIRE

    Al-Muallim, M. T.; El-Kouatly, R.

    2012-01-01

    An unsupervised classification algorithm based on the clonal selection principle, named Unsupervised Clonal Selection Classification (UCSC), is proposed in this paper. The new proposed algorithm is data driven and self-adaptive; it adjusts its parameters to the data to make the classification operation as fast as possible. The performance of UCSC is evaluated by comparing it with the well-known K-means algorithm using several artificial and real-life data sets. The experiments show that the proposed U...

  8. Maximum-entropy clustering algorithm and its global convergence analysis

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    Constructing a batch of differentiable entropy functions to uniformly approximate an objective function by means of the maximum-entropy principle, a new clustering algorithm, called the maximum-entropy clustering algorithm, is proposed based on optimization theory. This algorithm is a soft generalization of the hard C-means algorithm and possesses global convergence. Its relations with other clustering algorithms are discussed.
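
    The soft generalization mentioned here can be illustrated with Boltzmann-weighted memberships (a generic maximum-entropy assignment rule; the paper's exact iteration is not reproduced):

```python
import numpy as np

def maxent_memberships(X, centers, beta):
    """Maximum-entropy soft assignments: membership of point n in cluster k
    is proportional to exp(-beta * ||x_n - c_k||^2). As beta -> infinity
    the memberships harden into the C-means (k-means) assignment; as
    beta -> 0 they become uniform."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    w = np.exp(-beta * (d2 - d2.min(axis=1, keepdims=True)))  # stable exp
    return w / w.sum(axis=1, keepdims=True)

X = np.array([[0.0], [1.0]])
centers = np.array([[0.0], [1.0]])
print(maxent_memberships(X, centers, beta=50.0).round(3))  # nearly hard
```

    Alternating these memberships with weighted center updates gives the soft clustering iteration; annealing beta upward recovers hard C-means in the limit.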

  9. Automatic modulation classification principles, algorithms and applications

    CERN Document Server

    Zhu, Zhechen

    2014-01-01

    Automatic Modulation Classification (AMC) has been a key technology in many military, security, and civilian telecommunication applications for decades. In military and security applications, modulation often serves as another level of encryption; in modern civilian applications, multiple modulation types can be employed by a signal transmitter to control the data rate and link reliability. This book offers comprehensive documentation of AMC models, algorithms and implementations for successful modulation recognition. It provides an invaluable theoretical and numerical comparison of AMC algorithms.

  10. Basic Knowledge for Market Principle: Approaches to the Price Coordination Mechanism by Using Optimization Theory and Algorithm

    Science.gov (United States)

    Aiyoshi, Eitaro; Masuda, Kazuaki

    On the basis of market fundamentalism, new types of social systems with the market mechanism such as electricity trading markets and carbon dioxide (CO2) emission trading markets have been developed. However, there are few textbooks in science and technology which present the explanation that Lagrange multipliers can be interpreted as market prices. This tutorial paper explains that (1) the steepest descent method for dual problems in optimization, and (2) Gauss-Seidel method for solving the stationary conditions of Lagrange problems with market principles, can formulate the mechanism of market pricing, which works even in the information-oriented modern society. The authors expect readers to acquire basic knowledge on optimization theory and algorithms related to economics and to utilize them for designing the mechanism of more complicated markets.
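
    The interpretation of the Lagrange multiplier as a market price can be made concrete with a tiny dual-ascent (tatonnement) loop; the quadratic utilities below are invented for illustration and are not from the tutorial:

```python
def market_price(total_supply=10.0, a=(8.0, 6.0), lr=0.2, iters=200):
    """Steepest ascent on the dual of a resource-allocation problem.
    Each agent i maximizes a_i*x - x^2/2 - p*x, so its demand is
    x_i = max(a_i - p, 0); the price p (the Lagrange multiplier of the
    supply constraint) rises while total demand exceeds supply."""
    p = 0.0
    for _ in range(iters):
        demand = sum(max(ai - p, 0.0) for ai in a)
        p += lr * (demand - total_supply)  # dual (sub)gradient step
    return p

p = market_price()
print(round(p, 3))  # equilibrium: (8 - p) + (6 - p) = 10  =>  p = 2
```

    The loop is exactly the steepest-descent-on-the-dual mechanism the tutorial describes: the multiplier update mimics a market clearing excess demand.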

  11. On König's root finding algorithms

    DEFF Research Database (Denmark)

    Buff, Xavier; Henriksen, Christian

    2003-01-01

    In this paper, we first recall the definition of a family of root-finding algorithms known as König's algorithms. We establish some local and some global properties of those algorithms. We give a characterization of rational maps which arise as König's methods of polynomials with simple roots. We...

  12. Exploring Simple Algorithms for Estimating Gross Primary Production in Forested Areas from Satellite Data

    Directory of Open Access Journals (Sweden)

    Ramakrishna R. Nemani

    2012-01-01

    Full Text Available Algorithms that use remotely-sensed vegetation indices to estimate gross primary production (GPP), a key component of the global carbon cycle, have gained a lot of popularity in the past decade. Yet despite the amount of research on the topic, the most appropriate approach is still under debate. As an attempt to address this question, we compared the performance of different vegetation indices from the Moderate Resolution Imaging Spectroradiometer (MODIS) in capturing the seasonal and the annual variability of GPP estimates from an optimal network of 21 FLUXNET forest tower sites. The tested indices include the Normalized Difference Vegetation Index (NDVI), Enhanced Vegetation Index (EVI), Leaf Area Index (LAI), and Fraction of Photosynthetically Active Radiation absorbed by plant canopies (FPAR). Our results indicated that single vegetation indices captured 50–80% of the variability of tower-estimated GPP, but no one index performed universally well in all situations. In particular, EVI outperformed the other MODIS products in tracking seasonal variations in tower-estimated GPP, but annual mean MODIS LAI was the best estimator of the spatial distribution of annual flux-tower GPP (GPP = 615 × LAI − 376, where GPP is in g C/m^2/year). This simple algorithm rehabilitated earlier approaches linking ground measurements of LAI to flux-tower estimates of GPP and produced annual GPP estimates comparable to the MODIS 17 GPP product. As such, remote sensing-based estimates of GPP continue to offer a useful alternative to estimates from biophysical models, and the choice of the most appropriate approach depends on whether the estimates are required at annual or sub-annual temporal resolution.
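
    The reported regression is simple enough to apply directly (coefficients taken from the abstract above):

```python
def annual_gpp_from_lai(lai):
    """Annual GPP (g C / m^2 / year) from annual mean MODIS LAI, using the
    paper's linear fit GPP = 615 * LAI - 376."""
    return 615.0 * lai - 376.0

print(annual_gpp_from_lai(2.0))  # -> 854.0
```

    Note this fit is only reported for annual means over forest sites; the abstract recommends EVI instead for sub-annual (seasonal) tracking.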

  13. DEVELOPMENT OF A NEW ALGORITHM FOR KEY AND S-BOX GENERATION IN BLOWFISH ALGORITHM

    Directory of Open Access Journals (Sweden)

    TAYSEER S. ATIA

    2014-08-01

    Full Text Available The Blowfish algorithm is a block cipher: a strong, simple algorithm used to encrypt data in blocks of 64 bits. The key and S-box generation process in this algorithm requires time and memory space, which makes the algorithm inconvenient for smart cards or applications that require changing the secret key frequently. In this paper a new key and S-box generation process was developed based on the Self Synchronization Stream Cipher (SSS) algorithm, whose key generation process was modified to be used with the Blowfish algorithm. Test results show that the new generation process requires relatively little time and a reasonably low amount of memory, which enhances the algorithm and gives it the possibility of different usage.

  14. A new algorithm for coding geological terminology

    Science.gov (United States)

    Apon, W.

    The Geological Survey of The Netherlands has developed an algorithm to convert the plain geological language of lithologic well logs into codes suitable for computer processing and to link these to existing plotting programs. The algorithm is based on the "direct method" and operates in three steps: (1) searching for defined word combinations and assigning codes; (2) deleting duplicated codes; (3) correcting incorrect code combinations. Two simple auxiliary files are used. A simple PC demonstration program is included to enable readers to experiment with this algorithm. The Department of Quaternary Geology of the Geological Survey of The Netherlands possesses a large database of shallow lithologic well logs in plain language and has been using a program based on this algorithm for about 3 yr. Erroneous codes resulting from using this algorithm amount to less than 2%.
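
    The three steps can be sketched as follows (the vocabulary, the codes, and the correction rule are invented for illustration; the Survey's actual auxiliary files are far richer):

```python
def code_lithology(text, vocabulary):
    """Sketch of the three-step 'direct method': (1) match defined word
    combinations and assign codes, (2) delete duplicated codes,
    (3) correct incorrect code combinations."""
    # step 1: longest phrases first, so "fine sand" wins over "sand"
    codes = []
    for phrase, code in sorted(vocabulary.items(), key=lambda kv: -len(kv[0])):
        if phrase in text:
            codes.append(code)
            text = text.replace(phrase, "")
    # step 2: delete duplicated codes, preserving order
    codes = list(dict.fromkeys(codes))
    # step 3: correct incorrect combinations (toy rule: CLAY supersedes SILT)
    if "CLAY" in codes and "SILT" in codes:
        codes.remove("SILT")
    return codes

vocab = {"fine sand": "FSAND", "sand": "SAND", "clay": "CLAY", "silt": "SILT"}
print(code_lithology("clay with some silt and fine sand", vocab))
```

    Matching longer word combinations before their sub-phrases is what makes the "direct method" unambiguous without a full grammar of the logging language.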

  15. Genetic Algorithms Principles Towards Hidden Markov Model

    Directory of Open Access Journals (Sweden)

    Nabil M. Hewahi

    2011-10-01

    Full Text Available In this paper we propose a general approach based on Genetic Algorithms (GAs) to evolve Hidden Markov Models (HMMs). The problem appears when experts assign probability values for an HMM using only some limited inputs; the assigned probability values might not be accurate enough to serve in other cases within the same domain. We introduce an approach based on GAs to find suitable probability values for the HMM so that it is correct in more cases than those used to assign the original probability values.

  16. Deriving the Normalized Min-Sum Algorithm from Cooperative Optimization

    OpenAIRE

    Huang, Xiaofei

    2006-01-01

    The normalized min-sum algorithm can achieve near-optimal performance at decoding LDPC codes. However, it is a critical question to understand the mathematical principle underlying the algorithm. Traditionally, people thought that the normalized min-sum algorithm is a good approximation to the sum-product algorithm, the best known algorithm for decoding LDPC codes and Turbo codes. This paper offers an alternative approach to understand the normalized min-sum algorithm. The algorithm is derive...

  17. Mao-Gilles Stabilization Algorithm

    Directory of Open Access Journals (Sweden)

    Jérôme Gilles

    2013-07-01

    Full Text Available Originally, the Mao-Gilles stabilization algorithm was designed to compensate the non-rigid deformations due to atmospheric turbulence. Given a sequence of frames affected by atmospheric turbulence, the algorithm uses a variational model combining optical flow and regularization to characterize the static observed scene. The optimization problem is solved by Bregman Iteration and the operator splitting method. The algorithm is simple, efficient, and can be easily generalized for different scenarios involving non-rigid deformations.

  18. A simple algorithm to retrieve soil moisture and vegetation biomass using passive microwave measurements over crop fields

    International Nuclear Information System (INIS)

    Wigneron, J.P.; Chanzy, A.; Calvet, J.C.; Bruguier, N.

    1995-01-01

    A simple algorithm to retrieve soil moisture and vegetation water content from passive microwave measurements is analyzed in this study. The approach is based on a zeroth-order solution of the radiative transfer equations in a vegetation layer. In this study, the single scattering albedo accounts for scattering effects and two parameters account for the dependence of the optical thickness on polarization, incidence angle, and frequency. The algorithm requires only ancillary information about crop type and surface temperature. Retrievals of the surface parameters from two radiometric data sets acquired over a soybean and a wheat crop have been attempted. The model parameters have been fitted in order to achieve the best match between measured and retrieved surface data. The results of the inversion are analyzed for different configurations of the radiometric observations: one or several look angles, L-band, C-band or (L-band and C-band). Sensitivity of the retrievals to the best fit values of the model parameters has also been investigated. The best configurations, requiring simultaneous measurements at L- and C-band, produce retrievals of soil moisture and biomass with a 15% estimated precision (about 0.06 m^3/m^3 for soil moisture and 0.3 kg/m^2 for biomass) and exhibit a limited sensitivity to the best fit parameters. (author)
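
    The zeroth-order solution referred to is the tau-omega model; a minimal forward version (single effective temperature, and illustrative parameter values rather than the paper's fitted ones) looks like:

```python
import numpy as np

def tau_omega_tb(ts, emissivity, tau, omega, theta_deg):
    """Zeroth-order radiative-transfer (tau-omega) brightness temperature
    of a vegetated surface: soil emission attenuated by the canopy, plus
    direct canopy emission and canopy emission reflected by the soil.
    ts: effective temperature (K), emissivity: soil emissivity, tau:
    nadir optical thickness, omega: single scattering albedo."""
    gamma = np.exp(-tau / np.cos(np.radians(theta_deg)))  # canopy transmissivity
    tb_soil = emissivity * ts * gamma
    tb_veg = (1 - omega) * (1 - gamma) * ts * (1 + (1 - emissivity) * gamma)
    return tb_soil + tb_veg

tb = tau_omega_tb(300.0, 0.9, 0.2, 0.05, 40.0)  # below the physical 300 K
```

    The retrieval inverts this forward model: soil moisture enters through the soil emissivity and vegetation water content through tau, which is why two channels (L- and C-band) are needed to separate the two unknowns.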

  19. MANAGER PRINCIPLES AS BASIS OF MANAGEMENT STYLE TRANSFORMATION

    OpenAIRE

    R. A. Kopytov

    2011-01-01

    The paper considers an approach which is based on non-conventional mechanisms of management style formation. The preset level of sustainable management is maintained by a self-organized environment created in the process of transforming the management style into efficient management principles. Their efficiency is checked within an adaptive algorithm. The algorithm is developed on the basis of a combination of evaluative tools and a base of operational proofs. The operating algorithm capability is te...

  20. Using a Simple Neural Network to Delineate Some Principles of Distributed Economic Choice.

    Science.gov (United States)

    Balasubramani, Pragathi P; Moreno-Bote, Rubén; Hayden, Benjamin Y

    2018-01-01

    The brain uses a mixture of distributed and modular organization to perform computations and generate appropriate actions. While the principles under which the brain might perform computations using modular systems have been more amenable to modeling, the principles by which the brain might make choices using distributed principles have not been explored. Our goal in this perspective is to delineate some of those distributed principles using a neural network method and use its results as a lens through which to reconsider some previously published neurophysiological data. To allow for direct comparison with our own data, we trained the neural network to perform binary risky choices. We find that value correlates are ubiquitous and are always accompanied by non-value information, including spatial information (i.e., no pure value signals). Evaluation, comparison, and selection were not distinct processes; indeed, value signals even in the earliest stages contributed directly, albeit weakly, to action selection. There was no place, other than at the level of action selection, at which dimensions were fully integrated. No units were specialized for specific offers; rather, all units encoded the values of both offers in an anti-correlated format, thus contributing to comparison. Individual network layers corresponded to stages in a continuous rotation from input to output space rather than to functionally distinct modules. While our network is likely to not be a direct reflection of brain processes, we propose that these principles should serve as hypotheses to be tested and evaluated for future studies.

  1. Online learning algorithm for ensemble of decision rules

    KAUST Repository

    Chikalov, Igor; Moshkov, Mikhail; Zielosko, Beata

    2011-01-01

    We describe an online learning algorithm that builds a system of decision rules for a classification problem. Rules are constructed according to the minimum description length principle by a greedy algorithm or using the dynamic programming approach.

  2. Archimedean copula estimation of distribution algorithm based on artificial bee colony algorithm

    Institute of Scientific and Technical Information of China (English)

    Haidong Xu; Mingyan Jiang; Kun Xu

    2015-01-01

    The artificial bee colony (ABC) algorithm is a competitive stochastic population-based optimization algorithm. However, the ABC algorithm does not use the social information and lacks the knowledge of the problem structure, which leads to insufficiency in both convergent speed and searching precision. Archimedean copula estimation of distribution algorithm (ACEDA) is a relatively simple, time-economic and multivariate correlated EDA. This paper proposes a novel hybrid algorithm based on the ABC algorithm and ACEDA called Archimedean copula estimation of distribution based on the artificial bee colony (ACABC) algorithm. The hybrid algorithm utilizes ACEDA to estimate the distribution model and then uses the information to help artificial bees to search more efficiently in the search space. Six benchmark functions are introduced to assess the performance of the ACABC algorithm on numerical function optimization. Experimental results show that the ACABC algorithm converges much faster with greater precision compared with the ABC algorithm, ACEDA and the global best (gbest)-guided ABC (GABC) algorithm in most of the experiments.

  3. Hardware modules of the RSA algorithm

    Directory of Open Access Journals (Sweden)

    Škobić Velibor

    2014-01-01

    Full Text Available This paper describes basic principles of data protection using the RSA algorithm, as well as algorithms for its calculation. The RSA algorithm is implemented on the FPGA integrated circuit EP4CE115F29C7 (family Cyclone IV, Altera). Four modules of the Montgomery algorithm are designed using VHDL. Synthesis and simulation are done using Quartus II software and ModelSim. The modules are analyzed for different key lengths (16 to 1024) in terms of the number of logic elements, the maximum frequency and speed.
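    The core RSA operation such hardware modules accelerate is modular exponentiation, with Montgomery multiplication speeding up the repeated modular products. As a software-level illustration only (not the VHDL design itself), here is a square-and-multiply sketch with a toy key; the key values and function name are assumptions for demonstration:

    ```python
    def mod_exp(base, exp, mod):
        """Binary square-and-multiply: compute (base ** exp) % mod."""
        result = 1
        base %= mod
        while exp > 0:
            if exp & 1:                     # multiply when the current bit is 1
                result = (result * base) % mod
            base = (base * base) % mod      # square for the next bit
            exp >>= 1
        return result

    # Toy RSA round trip (illustrative tiny key, nothing like 1024 bits):
    p, q, e = 61, 53, 17
    n = p * q
    d = pow(e, -1, (p - 1) * (q - 1))       # private exponent (Python 3.8+)
    cipher = mod_exp(42, e, n)              # encrypt message m = 42
    plain = mod_exp(cipher, d, n)           # decrypt recovers the message
    ```

    In hardware, each modular product inside the loop is where a Montgomery multiplier replaces the costly division by the modulus.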

  4. Solving Simple Stochastic Games with Few Coin Toss Positions

    DEFF Research Database (Denmark)

    Ibsen-Jensen, Rasmus; Miltersen, Peter Bro

    2012-01-01

    Gimbert and Horn gave an algorithm for solving simple stochastic games with running time O(r! n) where n is the number of positions of the simple stochastic game and r is the number of its coin toss positions. Chatterjee et al. pointed out that a variant of strategy iteration can be implemented...... to solve this problem in time 4 r n O(1). In this paper, we show that an algorithm combining value iteration with retrograde analysis achieves a time bound of O(r 2 r (r logr + n)), thus improving both time bounds. We also improve the analysis of Chatterjee et al. and show that their algorithm in fact has...

  5. An Ordering Linear Unification Algorithm

    Institute of Scientific and Technical Information of China (English)

    胡运发

    1989-01-01

    In this paper, we present an ordering linear unification algorithm (OLU). A new idea on substitution of the binding terms is introduced into the algorithm, which is able to overcome some drawbacks of other algorithms, e.g., the MM algorithm[1] and the RG1 and RG2 algorithms[2]. In particular, if we use directed cyclic graphs, the algorithm need not check the binding order, so the OLU algorithm can also be applied to infinite tree data structures, and a higher efficiency can be expected. The paper focuses on the OLU algorithm and a partial order structure with respect to the unification algorithm. The algorithm has been implemented in the GKD-PROLOG/VAX 780 interpreting system. Experimental results have shown that the algorithm is very simple and efficient.

  6. Using a Simple Neural Network to Delineate Some Principles of Distributed Economic Choice

    Directory of Open Access Journals (Sweden)

    Pragathi P. Balasubramani

    2018-03-01

    Full Text Available The brain uses a mixture of distributed and modular organization to perform computations and generate appropriate actions. While the principles under which the brain might perform computations using modular systems have been more amenable to modeling, the principles by which the brain might make choices using distributed principles have not been explored. Our goal in this perspective is to delineate some of those distributed principles using a neural network method and use its results as a lens through which to reconsider some previously published neurophysiological data. To allow for direct comparison with our own data, we trained the neural network to perform binary risky choices. We find that value correlates are ubiquitous and are always accompanied by non-value information, including spatial information (i.e., no pure value signals). Evaluation, comparison, and selection were not distinct processes; indeed, value signals even in the earliest stages contributed directly, albeit weakly, to action selection. There was no place, other than at the level of action selection, at which dimensions were fully integrated. No units were specialized for specific offers; rather, all units encoded the values of both offers in an anti-correlated format, thus contributing to comparison. Individual network layers corresponded to stages in a continuous rotation from input to output space rather than to functionally distinct modules. While our network is likely to not be a direct reflection of brain processes, we propose that these principles should serve as hypotheses to be tested and evaluated for future studies.

  7. Online learning algorithm for ensemble of decision rules

    KAUST Repository

    Chikalov, Igor

    2011-01-01

    We describe an online learning algorithm that builds a system of decision rules for a classification problem. Rules are constructed according to the minimum description length principle by a greedy algorithm or using the dynamic programming approach. © 2011 Springer-Verlag.

  8. Numerical simulation of turbulent flow and heat transfer in a parallel channel. Verification of the field synergy principle

    International Nuclear Information System (INIS)

    Tian Wenxi; Su, G.H.; Qiu Suizheng; Jia Dounan

    2004-01-01

    The field synergy principle was proposed by Guo (1998) on the basis of 2-D laminar boundary-layer flow; it resulted from a second look at the mechanism of convective heat transfer. Numerical verification of this principle's validity for turbulent flow has been carried out by very few researchers, mostly using commercial software such as FLUENT, CFX etc. In this paper, numerical simulation of turbulent flow with recirculation was developed using the SIMPLE algorithm with the two-equation k-ε model. The extension-of-computational-region method and the wall function method were adopted to regulate the whole computational region geometrically. Keeping the inlet Reynolds number constant at 10000 and changing the height of the solid obstacle, the simulation showed that the wall heat flux decreased as the angle between the velocity vector and the temperature gradient increased. Thus it is validated that the field synergy principle, derived for 2-D laminar boundary-layer flow, can also be applied to complex turbulent flow, even with recirculation. (author)

  9. A novel gene network inference algorithm using predictive minimum description length approach.

    Science.gov (United States)

    Chaitankar, Vijender; Ghosh, Preetam; Perkins, Edward J; Gong, Ping; Deng, Youping; Zhang, Chaoyang

    2010-05-28

    Reverse engineering of gene regulatory networks using information theory models has received much attention due to its simplicity, low computational cost, and capability of inferring large networks. One of the major problems with information theory models is to determine the threshold which defines the regulatory relationships between genes. The minimum description length (MDL) principle has been implemented to overcome this problem. The description length of the MDL principle is the sum of model length and data encoding length. A user-specified fine-tuning parameter is used as a control mechanism between model and data encoding, but it is difficult to find the optimal parameter. In this work, we proposed a new inference algorithm which incorporates mutual information (MI), conditional mutual information (CMI) and the predictive minimum description length (PMDL) principle to infer gene regulatory networks from DNA microarray data. In this algorithm, the information theoretic quantities MI and CMI determine the regulatory relationships between genes and the PMDL principle attempts to determine the best MI threshold without the need of a user-specified fine-tuning parameter. The performance of the proposed algorithm was evaluated using both synthetic time series data sets and a biological time series data set for the yeast Saccharomyces cerevisiae. The benchmark quantities precision and recall were used as performance measures. The results show that the proposed algorithm produced fewer false edges and significantly improved the precision, as compared to the existing algorithm. For further analysis the performance of the algorithms was observed over different sizes of data. We have proposed a new algorithm that implements the PMDL principle for inferring gene regulatory networks from time series DNA microarray data and eliminates the need of a fine-tuning parameter. The evaluation results obtained from both synthetic and actual biological data sets show that the
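    The information-theoretic core described above can be sketched as follows. The function names, the discretized profiles, and the fixed threshold are all illustrative assumptions: in the paper, the PMDL principle selects the threshold rather than the user.

    ```python
    import math
    from collections import Counter

    def mutual_information(x, y):
        """MI in bits between two discretized expression profiles of equal length."""
        n = len(x)
        px, py, pxy = Counter(x), Counter(y), Counter(zip(x, y))
        return sum((c / n) * math.log2((c / n) / ((px[a] / n) * (py[b] / n)))
                   for (a, b), c in pxy.items())

    def infer_edges(profiles, threshold):
        """Keep gene pairs whose pairwise MI exceeds the threshold."""
        genes = sorted(profiles)
        edges = []
        for i, g in enumerate(genes):
            for h in genes[i + 1:]:
                mi = mutual_information(profiles[g], profiles[h])
                if mi > threshold:
                    edges.append((g, h, mi))
        return edges
    ```

    A real implementation would additionally use CMI to prune indirect edges; this sketch shows only the thresholding step that PMDL is designed to automate.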

  10. Emergence of an optimal search strategy from a simple random walk.

    Science.gov (United States)

    Sakiyama, Tomoko; Gunji, Yukio-Pegio

    2013-09-06

    In reports addressing animal foraging strategies, it has been stated that Lévy-like algorithms represent an optimal search strategy in an unknown environment, because of their super-diffusion properties and power-law-distributed step lengths. Here, starting with a simple random walk algorithm, which offers the agent a randomly determined direction at each time step with a fixed move length, we investigated how flexible exploration is achieved if an agent alters its randomly determined next step forward and the rule that controls its random movement based on its own directional moving experiences. We showed that our algorithm led to an effective food-searching performance compared with a simple random walk algorithm and exhibited super-diffusion properties, despite the uniform step lengths. Moreover, our algorithm exhibited a power-law distribution independent of uniform step lengths.
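    The baseline the authors start from, a walker that picks a random direction at each time step with a fixed move length, can be sketched as below; the target location, step budget, and function name are assumptions for illustration, not details from the paper:

    ```python
    import random

    def random_walk_search(target, max_steps=100000, seed=0):
        """Baseline searcher on a 2-D lattice: random direction, unit step length."""
        rng = random.Random(seed)
        x = y = 0
        moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]
        for step in range(1, max_steps + 1):
            dx, dy = rng.choice(moves)      # fixed step length, random direction
            x, y = x + dx, y + dy
            if (x, y) == target:
                return step                 # number of steps to reach the food item
        return None                         # food not found within the budget
    ```

    The paper's contribution is to let the agent override this rule based on its own directional history; the sketch above is only the unmodified random walk it is compared against.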

  11. On the dosimetric behaviour of photon dose calculation algorithms in the presence of simple geometric heterogeneities: comparison with Monte Carlo calculations

    Science.gov (United States)

    Fogliata, Antonella; Vanetti, Eugenio; Albers, Dirk; Brink, Carsten; Clivio, Alessandro; Knöös, Tommy; Nicolini, Giorgia; Cozzi, Luca

    2007-03-01

    A comparative study was performed to reveal differences and relative figures of merit of seven different calculation algorithms for photon beams when applied to inhomogeneous media. The following algorithms were investigated: Varian Eclipse: the anisotropic analytical algorithm, and the pencil beam with modified Batho correction; Nucletron Helax-TMS: the collapsed cone and the pencil beam with equivalent path length correction; CMS XiO: the multigrid superposition and the fast Fourier transform convolution; Philips Pinnacle: the collapsed cone. Monte Carlo simulations (MC) performed with the EGSnrc codes BEAMnrc and DOSxyznrc from NRCC in Ottawa were used as a benchmark. The study was carried out in simple geometrical water phantoms (ρ = 1.00 g cm-3) with inserts of different densities simulating light lung tissue (ρ = 0.035 g cm-3), normal lung (ρ = 0.20 g cm-3) and cortical bone tissue (ρ = 1.80 g cm-3). Experiments were performed for low- and high-energy photon beams (6 and 15 MV) and for square (13 × 13 cm2) and elongated rectangular (2.8 × 13 cm2) fields. Analysis was carried out on the basis of depth dose curves and transverse profiles at several depths. Assuming the MC data as reference, γ index analysis was carried out distinguishing between regions inside the non-water inserts or inside the uniform water. For this study, a distance to agreement was set to 3 mm while the dose difference varied from 2% to 10%. In general all algorithms based on pencil-beam convolutions showed a systematic deficiency in managing the presence of heterogeneous media. In contrast, complicated patterns were observed for the advanced algorithms with significant discrepancies observed between algorithms in the lighter materials (ρ = 0.035 g cm-3), enhanced for the most energetic beam. For denser, and more clinical, densities a better agreement among the sophisticated algorithms with respect to MC was observed.

  12. Structural and computational aspects of simple and influence games

    OpenAIRE

    Riquelme Csori, Fabián

    2014-01-01

    Simple games are a fundamental class of cooperative games. They have a huge relevance in several areas of computer science, social sciences and discrete applied mathematics. The algorithmic and computational complexity aspects of simple games have been gaining notoriety in the recent years. In this thesis we review different computational problems related to properties, parameters, and solution concepts of simple games. We consider different forms of representation of simple games, regular...

  13. Clarifying the Misconception about the Principle of Floatation

    Science.gov (United States)

    Yadav, Manoj K.

    2014-01-01

    This paper aims to clarify the misconception about the violation of the principle of floatation. Improper understanding of the definition of "displaced fluid" by a floating body leads to the misconception. With the help of simple experiments, this article shows that there is no violation of the principle of floatation.

  14. Substoichiometric method in the simple radiometric analysis

    International Nuclear Information System (INIS)

    Ikeda, N.; Noguchi, K.

    1979-01-01

    The substoichiometric method is applied to simple radiometric analysis. Two methods - the standard reagent method and the standard sample method - are proposed. The validity of the principle of the methods is verified experimentally in the determination of silver by the precipitation method, or of zinc by the ion-exchange or solvent-extraction method. The proposed methods are simple and rapid compared with the conventional superstoichiometric method. (author)

  15. ANNIT - An Efficient Inversion Algorithm based on Prediction Principles

    Science.gov (United States)

    Růžek, B.; Kolář, P.

    2009-04-01

    Solution of inverse problems represents a meaningful job in geophysics. The amount of data is continuously increasing, methods of modeling are being improved, and computer facilities are also making great technical progress. Therefore the development of new and efficient algorithms and computer codes for both forward and inverse modeling is still up to date. ANNIT contributes to this stream since it is a tool for efficient solution of a set of non-linear equations. Typical geophysical problems are based on a parametric approach. The system is characterized by a vector of parameters p, and the response of the system is characterized by a vector of data d. The forward problem is usually represented by a unique mapping F(p)=d. The inverse problem is much more complex: the inverse mapping p=G(d) is available in an analytical or closed form only exceptionally, and generally it may not exist at all. Technically, both the forward and inverse mappings F and G are sets of non-linear equations. ANNIT handles this situation as follows: (i) joint subspaces {pD, pM} of the original data and model spaces D, M, resp., are searched for, within which the forward mapping F is sufficiently smooth that the inverse mapping G does exist; (ii) a numerical approximation of G in the subspaces {pD, pM} is found; (iii) a candidate solution is predicted by using this numerical approximation. ANNIT works iteratively, in cycles. The subspaces {pD, pM} are searched for by generating suitable populations of individuals (models) covering the data and model spaces. The approximation of the inverse mapping is made by using three methods: (a) linear regression, (b) the Radial Basis Function Network technique, (c) linear prediction (also known as "Kriging"). The ANNIT algorithm also has a built-in archive of already evaluated models. Archive models are re-used in a suitable way and thus the number of forward evaluations is minimized. ANNIT is now implemented both in MATLAB and SCILAB. Numerical tests show good

  16. Principles of a new treatment algorithm in multiple sclerosis

    DEFF Research Database (Denmark)

    Hartung, Hans-Peter; Montalban, Xavier; Sorensen, Per Soelberg

    2011-01-01

    We are entering a new era in the management of patients with multiple sclerosis (MS). The first oral treatment (fingolimod) has now gained US FDA approval, addressing an unmet need for patients with MS who wish to avoid parenteral administration. A second agent (cladribine) is currently being...... considered for approval. With the arrival of these oral agents, a key question is where they may fit into the existing MS treatment algorithm. This article aims to help answer this question by analyzing the trial data for the new oral therapies, as well as for existing MS treatments, by applying practical...... clinical experience, and through consideration of our increased understanding of how to define treatment success in MS. This article also provides a speculative look at what the treatment algorithm may look like in 5 years, with the availability of new data, greater experience and, potentially, other novel...

  17. A simple algorithm for computing positively weighted straight skeletons of monotone polygons

    Science.gov (United States)

    Biedl, Therese; Held, Martin; Huber, Stefan; Kaaser, Dominik; Palfrader, Peter

    2015-01-01

    We study the characteristics of straight skeletons of monotone polygonal chains and use them to devise an algorithm for computing positively weighted straight skeletons of monotone polygons. Our algorithm runs in O(n log n) time and O(n) space, where n denotes the number of vertices of the polygon. PMID:25648376

  18. A simple algorithm for computing positively weighted straight skeletons of monotone polygons.

    Science.gov (United States)

    Biedl, Therese; Held, Martin; Huber, Stefan; Kaaser, Dominik; Palfrader, Peter

    2015-02-01

    We study the characteristics of straight skeletons of monotone polygonal chains and use them to devise an algorithm for computing positively weighted straight skeletons of monotone polygons. Our algorithm runs in O(n log n) time and O(n) space, where n denotes the number of vertices of the polygon.

  19. Robust MST-Based Clustering Algorithm.

    Science.gov (United States)

    Liu, Qidong; Zhang, Ruisheng; Zhao, Zhili; Wang, Zhenghai; Jiao, Mengyao; Wang, Guangjing

    2018-06-01

    Minimax similarity stresses the connectedness of points via mediating elements rather than favoring high mutual similarity. This grouping principle yields superior clustering results when mining arbitrarily shaped clusters in data. However, it is not robust against noise and outliers in the data. There are two main problems with the grouping principle: first, a single object that is far away from all other objects defines a separate cluster, and second, two connected clusters would be regarded as two parts of one cluster. In order to solve such problems, we propose a robust minimum spanning tree (MST)-based clustering algorithm in this letter. First, we separate the connected objects by applying a density-based coarsening phase, resulting in a low-rank matrix in which each element denotes a supernode formed by combining a set of nodes. Then a greedy method is presented to partition those supernodes by working on the low-rank matrix. Instead of removing the longest edges from the MST, our algorithm groups the data set based on the minimax similarity. Finally, the assignment of all data points can be achieved through their corresponding supernodes. Experimental results on many synthetic and real-world data sets show that our algorithm consistently outperforms the compared clustering algorithms.
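    For context, the classic MST baseline that this letter improves on (build the tree, then cut the k-1 longest edges so the remaining components become clusters) can be sketched as follows; the function names and toy data are illustrative, and this is the contrasted baseline, not the letter's algorithm:

    ```python
    import math

    def mst_clusters(points, k):
        """Classic MST clustering: Prim's tree, then delete the k-1 longest edges."""
        n = len(points)
        dist = lambda a, b: math.dist(points[a], points[b])
        # Prim's algorithm, recording the tree edges as (length, u, v).
        in_tree = {0}
        edges = []
        best = {j: (dist(0, j), 0) for j in range(1, n)}
        while len(in_tree) < n:
            j = min(best, key=lambda v: best[v][0])
            d, parent = best.pop(j)
            in_tree.add(j)
            edges.append((d, parent, j))
            for v in best:
                dv = dist(j, v)
                if dv < best[v][0]:
                    best[v] = (dv, j)
        # Keep all but the k-1 longest edges; components are the clusters.
        edges.sort()
        keep = edges[: n - k]
        # Union-find over the kept edges to label components.
        parent = list(range(n))
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x
        for _, a, b in keep:
            parent[find(a)] = find(b)
        return [find(i) for i in range(n)]
    ```

    The letter's point is precisely that this longest-edge cut is fragile under noise, which motivates the minimax-similarity grouping instead.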

  20. A Simple Wave Driver

    Science.gov (United States)

    Temiz, Burak Kagan; Yavuz, Ahmet

    2015-01-01

    This study was done to develop a simple and inexpensive wave driver that can be used in experiments on string waves. The wave driver was made using a battery-operated toy car, and the apparatus can be used to produce string waves at a fixed frequency. The working principle of the apparatus is as follows: shortly after the car is turned on, the…

  1. Nonlinear optics principles and applications

    CERN Document Server

    Li, Chunfei

    2017-01-01

    This book reflects the latest advances in nonlinear optics. Besides the simple, strict mathematical deduction, it also discusses the experimental verification and possible future applications, such as the all-optical switches. It consistently uses the practical unit system throughout. It employs simple physical images, such as "light waves" and "photons", to systematically explain the main principles of nonlinear optical effects. It uses the first-order nonlinear wave equation in frequency domain under the condition of "slowly varying amplitude approximation" and the classical model of the interaction between the light and electric dipole. At the same time, it also uses the rate equations based on the energy-level transition of particle systems excited by photons and the energy and momentum conservation principles to explain the nonlinear optical phenomenon. The book is intended for researchers, engineers and graduate students in the field of the optics, optoelectronics, fiber communication, information tech...

  2. Using the Perceptron Algorithm to Find Consistent Hypotheses

    OpenAIRE

    Anthony, M.; Shawe-Taylor, J.

    1993-01-01

    The perceptron learning algorithm yields quite naturally an algorithm for finding a linearly separable boolean function consistent with a sample of such a function. Using the idea of a specifying sample, we give a simple proof that this algorithm is not efficient, in general.
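    The underlying procedure (cycle through the sample, updating on each mistake, until no example is misclassified) can be sketched as below; the function name and the Boolean AND example are illustrative, and the paper's point is precisely that the number of updates can be large in general, even though the result is always a consistent hypothesis:

    ```python
    def perceptron(samples, max_epochs=1000):
        """Find weights consistent with a linearly separable sample (labels ±1)."""
        n = len(samples[0][0])
        w, b = [0.0] * n, 0.0
        for _ in range(max_epochs):
            mistakes = 0
            for x, y in samples:                 # y is -1 or +1
                activation = sum(wi * xi for wi, xi in zip(w, x)) + b
                if y * activation <= 0:          # misclassified (or on boundary)
                    w = [wi + y * xi for wi, xi in zip(w, x)]
                    b += y
                    mistakes += 1
            if mistakes == 0:
                return w, b                      # consistent hypothesis found
        return None

    # Boolean AND is linearly separable, so the loop terminates:
    data = [((0, 0), -1), ((0, 1), -1), ((1, 0), -1), ((1, 1), 1)]
    w, b = perceptron(data)
    ```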

  3. Gravitation and Special Relativity from Compton Wave Interactions at the Planck Scale: An Algorithmic Approach

    Science.gov (United States)

    Blackwell, William C., Jr.

    2004-01-01

    In this paper space is modeled as a lattice of Compton wave oscillators (CWOs) of near-Planck size. It is shown that gravitation and special relativity emerge from the interaction between particles' Compton waves. To develop this CWO model an algorithmic approach was taken, incorporating simple rules of interaction at the Planck scale developed using well-known physical laws. This technique naturally leads to Newton's law of gravitation and a new form of doubly special relativity. The model is in apparent agreement with the holographic principle, and it predicts a cutoff energy for ultrahigh-energy cosmic rays that is consistent with observational data.

  4. A simple calculation algorithm to separate high-resolution CH4 flux measurements into ebullition and diffusion-derived components

    Science.gov (United States)

    Hoffmann, Mathias; Schulz-Hanke, Maximilian; Garcia Alba, Joana; Jurisch, Nicole; Hagemann, Ulrike; Sachs, Torsten; Sommer, Michael; Augustin, Jürgen

    2016-04-01

    Processes driving methane (CH4) emissions in wetland ecosystems are highly complex. In particular, the separation of CH4 emissions into ebullition- and diffusion-derived flux components, a prerequisite for mechanistic process understanding and the identification of potential environmental drivers, is rather challenging. We present a simple calculation algorithm, based on an adaptive R-script, which separates open-water, closed-chamber CH4 flux measurements into diffusion- and ebullition-derived components. Hence, flux-component-specific dynamics are revealed and potential environmental drivers identified. Flux separation is based on a statistical approach, using ebullition-related sudden concentration changes obtained during high-resolution CH4 concentration measurements. By applying the lower and upper quartiles ± the interquartile range (IQR) as a variable threshold, diffusion-dominated periods of the flux measurement are filtered. Subsequently, flux calculation and separation are performed. The algorithm was verified in a laboratory experiment and tested under field conditions, using flux measurement data (July to September 2013) from a flooded, former fen grassland site. Erratic ebullition events contributed 46% to total CH4 emissions, which is comparable to values reported in the literature. Additionally, a shift in the diurnal trend of diffusive fluxes throughout the measurement period, driven by the water temperature gradient, was revealed.
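    A minimal sketch of the described filtering idea, assuming equally spaced concentration readings and a quartile ± IQR threshold on the step-to-step changes; the function name and data handling are illustrative, not the authors' R-script:

    ```python
    import statistics

    def separate_fluxes(concentrations):
        """Split concentration increments into diffusion- and ebullition-like parts.

        Increments outside [Q1 - IQR, Q3 + IQR] are flagged as ebullition events;
        the remainder is treated as diffusion-dominated.
        """
        deltas = [b - a for a, b in zip(concentrations, concentrations[1:])]
        q1, _, q3 = statistics.quantiles(deltas, n=4)
        iqr = q3 - q1
        lo, hi = q1 - iqr, q3 + iqr
        diffusion = [d for d in deltas if lo <= d <= hi]
        ebullition = [d for d in deltas if d < lo or d > hi]
        return diffusion, ebullition

    # A steady rise with one sudden jump: the jump is flagged as ebullition.
    conc = [0, 1, 2, 3, 10, 11, 12]
    diffusion, ebullition = separate_fluxes(conc)
    ```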

  5. The serial message-passing schedule for LDPC decoding algorithms

    Science.gov (United States)

    Liu, Mingshan; Liu, Shanshan; Zhou, Yuan; Jiang, Xue

    2015-12-01

    The conventional message-passing schedule for LDPC decoding algorithms is the so-called flooding schedule. It has the disadvantage that the updated messages cannot be used until the next iteration, thus reducing the convergence speed. To address this, the layered decoding algorithm (LBP), based on a serial message-passing schedule, was proposed. In this paper the decoding principle of the LBP algorithm is briefly introduced, and two improved algorithms are then proposed: the grouped serial decoding algorithm (Grouped LBP) and the semi-serial decoding algorithm. They improve the LBP algorithm's decoding speed while maintaining a good decoding performance.

  6. Numerical Method based on SIMPLE Algorithm for a Two-Phase Flow with Non-condensable Gas

    International Nuclear Information System (INIS)

    Kim, Jong Tae

    2009-08-01

    In this study, a numerical method based on the SIMPLE algorithm for a two-phase flow with non-condensable gas has been developed in order to simulate thermal hydraulics in the containment of a nuclear power plant. As governing equations, it adopts a two-fluid three-field model for the two-phase flows. The three fields include gas, drops, and continuous liquid. The gas field can contain vapor and non-condensable gases such as air and hydrogen. In order to resolve mixing phenomena of gas species, gas transport equations for each species, based on the gas mass fractions, are solved together with the gas-phase governing equations for mass, momentum and energy. Methods to evaluate the properties of the gas species were implemented in the code: constant values or polynomial functions based on user input, and a property library from Chemkin and the JANAF tables for gas specific heat. Properties of the gas mixture, which depend on the mole fractions of the gas species, were evaluated by a mixing rule.
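    The mole-fraction mixing rule mentioned for mixture properties can be sketched as below; the function name and the species values are illustrative placeholders, not numbers from the code:

    ```python
    def mixture_property(mole_fractions, species_values):
        """Linear mole-fraction-weighted mixing rule: phi_mix = sum_i x_i * phi_i."""
        assert abs(sum(mole_fractions.values()) - 1.0) < 1e-9  # fractions sum to 1
        return sum(x * species_values[s] for s, x in mole_fractions.items())

    # Example: molar heat capacity of a steam/air/hydrogen mixture
    # (J/mol/K; rough illustrative values only).
    cp_mix = mixture_property({"H2O": 0.6, "air": 0.3, "H2": 0.1},
                              {"H2O": 33.6, "air": 29.1, "H2": 28.8})
    ```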

  7. A new model and simple algorithms for multi-label mumford-shah problems

    KAUST Repository

    Hong, Byungwoo

    2013-06-01

    In this work, we address the multi-label Mumford-Shah problem, i.e., the problem of jointly estimating a partitioning of the domain of the image, and functions defined within regions of the partition. We create algorithms that are efficient, robust to undesirable local minima, and easy to implement. Our algorithms are formulated by slightly modifying the underlying statistical model from which the multi-label Mumford-Shah functional is derived. The advantage of this statistical model is that the underlying variables: the labels and the functions are less coupled than in the original formulation, and the labels can be computed from the functions with more global updates. The resulting algorithms can be tuned to the desired level of locality of the solution: from fully global updates to more local updates. We demonstrate our algorithm on two applications: joint multi-label segmentation and denoising, and joint multi-label motion segmentation and flow estimation. We compare to the state-of-the-art in multi-label Mumford-Shah problems and show that we achieve more promising results. © 2013 IEEE.

  8. Loss avoidance as selection principle: evidence from simple stag-hunt games

    Czech Academy of Sciences Publication Activity Database

    Rydval, Ondřej; Ortmann, Andreas

    2005-01-01

    Roč. 88, č. 1 (2005), s. 101-107 ISSN 0165-1765 Institutional research plan: CEZ:AV0Z70850503 Keywords : loss avoidance * selection principles * stag-hunt games Subject RIV: AH - Economics Impact factor: 0.381, year: 2005 http://dx.doi.org/10.1016/j.econlet.2004.12.027

  9. The mGA1.0: A common LISP implementation of a messy genetic algorithm

    Science.gov (United States)

    Goldberg, David E.; Kerzic, Travis

    1990-01-01

    Genetic algorithms (GAs) are finding increased application in difficult search, optimization, and machine learning problems in science and engineering. Increasing demands are being placed on algorithm performance, and the remaining challenges of genetic algorithm theory and practice are becoming increasingly unavoidable. Perhaps the most difficult of these challenges is the so-called linkage problem. Messy GAs were created to overcome the linkage problem of simple genetic algorithms by combining variable-length strings, gene expression, messy operators, and a nonhomogeneous phasing of evolutionary processing. Results on a number of difficult deceptive test functions are encouraging, with the mGA always finding global optima in a polynomial number of function evaluations. Theoretical and empirical studies are continuing, and a first version of a messy GA is ready for testing by others. A Common LISP implementation called mGA1.0 is documented and related to the basic principles and operators developed by Goldberg et al. (1989, 1990). Although the code was prepared with care, it is not a general-purpose code, only a research version. Important data structures and global variables are described. Thereafter brief function descriptions are given, and sample input data are presented together with sample program output. A source listing with comments is also included.

  10. Fermat's principle and nonlinear traveltime tomography

    International Nuclear Information System (INIS)

    Berryman, J.G. (Courant Institute of Mathematical Sciences, New York University, 251 Mercer Street, New York, New York 10012)

    1989-01-01

    Fermat's principle shows that a definite convex set of feasible slowness models, depending only on the traveltime data, exists for the fully nonlinear traveltime inversion problem. In a new iterative reconstruction algorithm, the minimum number of nonfeasible ray paths is used as a figure of merit to determine the optimum size of the model correction at each step. The numerical results show that the new algorithm is robust, stable, and produces very good reconstructions even for high contrast materials where standard methods tend to diverge

  11. The simple mono-canal algorithm for estimating the temperature of ...

    African Journals Online (AJOL)

    30 June 2010 ... the brightness temperature (Tb) at the sensor level. This algorithm ... texture attributes and the fusion of segmentations: application to the zone ... retrieved from thermal infrared single-channel remote sensing data. 2004 ...

  12. Algorithms

    Indian Academy of Sciences (India)

    will become clear in the next article when we discuss a simple Logo-like programming language. ... Rod B may be used as an auxiliary store. The problem is to find an algorithm which performs this task. ... N₀ disks are moved from A to B using C as auxiliary rod. • move_disk(A, C); the (N₀ + 1)th disk is moved from A to C directly ...
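
    The snippet above describes the classic recursive Towers of Hanoi scheme: move the top N₀ disks aside, move the largest disk directly, then move the N₀ disks back on top. A minimal sketch of that recursion (rod names A, B, C are assumed from the snippet):

```python
def hanoi(n, src, dst, aux, moves):
    """Move n disks from rod src to rod dst, using aux as the auxiliary rod."""
    if n == 0:
        return
    hanoi(n - 1, src, aux, dst, moves)   # clear the top n-1 disks onto aux
    moves.append((src, dst))             # move the largest remaining disk directly
    hanoi(n - 1, aux, dst, src, moves)   # stack the n-1 disks back on top

moves = []
hanoi(3, "A", "C", "B", moves)
print(len(moves))   # 2^3 - 1 = 7 moves
```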

  13. Simple mechanical parameters identification of induction machine using voltage sensor only

    International Nuclear Information System (INIS)

    Horen, Yoram; Strajnikov, Pavel; Kuperman, Alon

    2015-01-01

    Highlights: • A simple low-cost algorithm for induction motor mechanical parameter estimation is proposed. • Voltage sensing only is performed; a speed sensor is not required. • The method is suitable for both wound rotor and squirrel cage motors. - Abstract: A simple low-cost algorithm for induction motor mechanical parameter estimation without a speed sensor is presented in this paper. Estimation is carried out by recording the stator terminal voltage during natural braking and subsequent offline curve fitting. The algorithm allows accurate reconstruction of the mechanical time constant as well as the speed dependency of the load torque. Although the mathematical basis of the presented method is developed for wound rotor motors, it is shown to be suitable for squirrel cage motors as well. The algorithm is first tested by reconstruction of simulation model parameters and then by processing measurement results of several motors. Simulation and experimental results support the validity of the proposed algorithm.
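
    The offline curve-fitting step can be illustrated with a toy computation. The block below assumes, purely for illustration, that the recorded voltage envelope decays exponentially with the mechanical time constant (a simplification of the paper's model); a log-linear least-squares fit then recovers that constant:

```python
import numpy as np

# Synthetic "measured" terminal-voltage envelope during natural braking.
# Assumed model for illustration: V(t) = V0 * exp(-t / tau_m).
tau_true = 0.8          # mechanical time constant in seconds (assumed value)
t = np.linspace(0.0, 3.0, 200)
v = 120.0 * np.exp(-t / tau_true)

# Offline curve fitting: a log-linear least-squares fit recovers tau_m.
slope, intercept = np.polyfit(t, np.log(v), 1)
tau_est = -1.0 / slope
print(round(tau_est, 3))   # ≈ 0.8
```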

  14. A Parallel Prefix Algorithm for Almost Toeplitz Tridiagonal Systems

    Science.gov (United States)

    Sun, Xian-He; Joslin, Ronald D.

    1995-01-01

    A compact scheme is a discretization scheme that is advantageous in obtaining highly accurate solutions. However, the resulting systems from compact schemes are tridiagonal systems that are difficult to solve efficiently on parallel computers. Considering the almost symmetric Toeplitz structure, a parallel algorithm, simple parallel prefix (SPP), is proposed. The SPP algorithm requires less memory than the conventional LU decomposition and is efficient on parallel machines. It consists of a prefix communication pattern and AXPY operations. Both the computation and the communication can be truncated without degrading the accuracy when the system is diagonally dominant. A formal accuracy study has been conducted to provide a simple truncation formula. Experiments were conducted on a MasPar MP-1 SIMD machine and on a Cray 2 vector machine. The results show that the simple parallel prefix algorithm is well suited to symmetric and almost symmetric Toeplitz tridiagonal systems and to the compact scheme on high-performance computers.

  15. Is weak violation of the Pauli principle possible?

    International Nuclear Information System (INIS)

    Ignat'ev, A.Yu.; Kuz'min, V.A.

    1987-01-01

    The question considered in this work is whether there are models which can account for a small violation of the Pauli principle. A simple algebra is constructed for the creation-annihilation operators, which contains a parameter β and describes a small violation of the Pauli principle (the Pauli principle is valid exactly for β = 0). The commutation relations in this algebra are trilinear. A model based upon this commutator algebra is presented, which allows transitions violating the Pauli principle, their probability being suppressed by a factor of β² (even though the Hamiltonian does not contain small parameters)

  16. Space mapping optimization algorithms for engineering design

    DEFF Research Database (Denmark)

    Koziel, Slawomir; Bandler, John W.; Madsen, Kaj

    2006-01-01

    A simple, efficient optimization algorithm based on space mapping (SM) is presented. It utilizes input SM to reduce the misalignment between the coarse and fine models of the optimized object over a region of interest, and output space mapping (OSM) to ensure matching of response and first...... to a benchmark problem. In comparison with SMIS, the models presented are simple and have a small number of parameters that need to be extracted. The new algorithm is applied to the optimization of a coupled-line band-pass filter.

  17. Simple-MSSM: a simple and efficient method for simultaneous multi-site saturation mutagenesis.

    Science.gov (United States)

    Cheng, Feng; Xu, Jian-Miao; Xiang, Chao; Liu, Zhi-Qiang; Zhao, Li-Qing; Zheng, Yu-Guo

    2017-04-01

    To develop a practically simple and robust multi-site saturation mutagenesis (MSSM) method that enables simultaneous recombination of amino acid positions for focused mutant library generation. A general restriction-enzyme-free and ligase-free MSSM method (Simple-MSSM) based on prolonged overlap extension PCR (POE-PCR) and Simple Cloning techniques was developed. As a proof of principle of Simple-MSSM, the gene of eGFP (enhanced green fluorescent protein) was used as a template gene for simultaneous mutagenesis of five codons. Forty-eight randomly selected clones were sequenced. Sequencing revealed that all 48 clones showed at least one mutant codon (mutation efficiency = 100%), and 46 out of the 48 clones had mutations at all five codons. The obtained diversities at these five codons are 27, 24, 26, 26 and 22, respectively, which correspond to 84, 75, 81, 81, and 69% of the theoretical diversity offered by NNK-degeneration (32 codons; NNK, K = T or G). The enzyme-free Simple-MSSM method can simultaneously and efficiently saturate five codons within one day, and therefore avoids missing interactions between residues in interacting amino acid networks.
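
    The reported coverage percentages follow from simple arithmetic on the NNK codon count; a quick check of the numbers in the abstract:

```python
# Per-codon diversities reported from the 48 sequenced clones, and the
# theoretical diversity of NNK degeneration (N = A/C/G/T, K = T/G): 4*4*2 codons.
observed = [27, 24, 26, 26, 22]
theoretical = 4 * 4 * 2   # 32 NNK codons
coverage = [round(100 * d / theoretical) for d in observed]
print(theoretical, coverage)   # 32 [84, 75, 81, 81, 69]
```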

  18. VARIATIONAL PRINCIPLE FOR PLANETARY INTERIORS

    International Nuclear Information System (INIS)

    Zeng, Li; Jacobsen, Stein B.

    2016-01-01

    In the past few years, the number of confirmed planets has grown above 2000. It is clear that they represent a diversity of structures not seen in our own solar system. In addition to very detailed interior modeling, it is valuable to have a simple analytical framework for describing planetary structures. The variational principle is a fundamental principle in physics, entailing that a physical system follows the trajectory that minimizes its action. It is an alternative to the differential-equation formulation of a physical system. Applying the variational principle to the planetary interior neatly condenses the set of differential equations into a single one, which provides some insight into the problem. From this principle, a universal mass–radius relation, an estimate of the error propagation from the equation of state to the mass–radius relation, and a form of the virial theorem applicable to planetary interiors are derived.

  19. Robustness of Multiple Clustering Algorithms on Hyperspectral Images

    National Research Council Canada - National Science Library

    Williams, Jason P

    2007-01-01

    .... Various clustering algorithms were employed, including a hierarchical method, ISODATA, K-means, and X-means, and were used on a simple two dimensional dataset in order to discover potential problems with the algorithms...

  20. Automatic bounding estimation in modified NLMS algorithm

    International Nuclear Information System (INIS)

    Shahtalebi, K.; Doost-Hoseini, A.M.

    2002-01-01

    The Modified Normalized Least Mean Square algorithm, a sign form of NLMS based on set-membership (SM) theory in the class of optimal bounding ellipsoid (OBE) algorithms, requires a priori knowledge of error bounds, which is unavailable in most applications. For a special but common case of measurement noise, a simple algorithm for estimating these bounds automatically has been proposed. With some simulation examples, the performance of the algorithm is compared with that of the Modified Normalized Least Mean Square.
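
    For context, a baseline NLMS adaptive filter (the conventional algorithm, not the modified set-membership variant of the abstract) can be sketched in a few lines; the unknown FIR system, step size, and signal lengths below are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Unknown system to identify: a short FIR filter (assumed for illustration).
h_true = np.array([0.5, -0.3, 0.2])
x = rng.standard_normal(2000)
d = np.convolve(x, h_true)[: len(x)]          # desired (reference) signal

# Normalized LMS update: w <- w + mu * e * x_vec / (eps + ||x_vec||^2)
mu, eps = 0.5, 1e-8
w = np.zeros(3)
for n in range(3, len(x)):
    x_vec = x[n - 2 : n + 1][::-1]            # most recent sample first
    e = d[n] - w @ x_vec                      # a-priori output error
    w += mu * e * x_vec / (eps + x_vec @ x_vec)

print(np.round(w, 3))   # ≈ [ 0.5 -0.3  0.2]
```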

  1. Adaptive modification of the delayed feedback control algorithm with a continuously varying time delay

    International Nuclear Information System (INIS)

    Pyragas, V.; Pyragas, K.

    2011-01-01

    We propose a simple adaptive delayed feedback control algorithm for stabilization of unstable periodic orbits with unknown periods. The state-dependent time delay is varied continuously towards the period of the controlled orbit according to a gradient-descent method realized through three simple ordinary differential equations. We demonstrate the efficiency of the algorithm with the Roessler and Mackey-Glass chaotic systems. The stability of the controlled orbits is proven by computation of the Lyapunov exponents of the linearized equations. -- Highlights: → A simple adaptive modification of the delayed feedback control algorithm is proposed. → It enables the control of unstable periodic orbits with unknown periods. → The delay time is varied continuously according to a gradient-descent method. → The algorithm is embodied by three simple ordinary differential equations. → The validity of the algorithm is proven by computation of the Lyapunov exponents.

  2. Storage capacity of the Tilinglike Learning Algorithm

    International Nuclear Information System (INIS)

    Buhot, Arnaud; Gordon, Mirta B.

    2001-01-01

    The storage capacity of an incremental learning algorithm for the parity machine, the Tilinglike Learning Algorithm, is analytically determined in the limit of a large number of hidden perceptrons. Different learning rules for the simple perceptron are investigated. The usual Gardner-Derrida rule leads to a storage capacity close to the upper bound, which is independent of the learning algorithm considered

  3. Simple stochastic simulation.

    Science.gov (United States)

    Schilstra, Maria J; Martin, Stephen R

    2009-01-01

    Stochastic simulations may be used to describe changes with time of a reaction system in a way that explicitly accounts for the fact that molecules show a significant degree of randomness in their dynamic behavior. The stochastic approach is almost invariably used when small numbers of molecules or molecular assemblies are involved because this randomness leads to significant deviations from the predictions of the conventional deterministic (or continuous) approach to the simulation of biochemical kinetics. Advances in computational methods over the three decades that have elapsed since the publication of Daniel Gillespie's seminal paper in 1977 (J. Phys. Chem. 81, 2340-2361) have allowed researchers to produce highly sophisticated models of complex biological systems. However, these models are frequently highly specific for the particular application and their description often involves mathematical treatments inaccessible to the nonspecialist. For anyone completely new to the field to apply such techniques in their own work might seem at first sight to be a rather intimidating prospect. However, the fundamental principles underlying the approach are in essence rather simple, and the aim of this article is to provide an entry point to the field for a newcomer. It focuses mainly on these general principles, both kinetic and computational, which tend to be not particularly well covered in specialist literature, and shows that interesting information may even be obtained using very simple operations in a conventional spreadsheet.
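
    As an entry-level illustration of Gillespie's direct method discussed above, here is a minimal simulation of a single decay reaction A → ∅ (a toy system chosen for illustration, not one of the article's examples): each step samples an exponentially distributed waiting time from the total propensity, then fires the only possible reaction.

```python
import random

def gillespie_decay(n0, k, t_end, rng=random.Random(42)):
    """Gillespie's direct method for the decay reaction A -> 0 with rate constant k.

    Returns the trajectory as a list of (time, molecule count) pairs.
    """
    t, n = 0.0, n0
    traj = [(t, n)]
    while n > 0 and t < t_end:
        a = k * n                        # total propensity of the system
        t += rng.expovariate(a)          # waiting time to next reaction ~ Exp(a)
        n -= 1                           # the decay reaction fires
        traj.append((t, n))
    return traj

traj = gillespie_decay(n0=100, k=1.0, t_end=10.0)
print(traj[0], traj[-1][1])   # starts at (0.0, 100); usually ends at 0 molecules
```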

  4. SIMPLE HEURISTIC ALGORITHM FOR DYNAMIC VM REALLOCATION IN IAAS CLOUDS

    Directory of Open Access Journals (Sweden)

    Nikita A. Balashov

    2018-03-01

    Full Text Available The rapid development of cloud technologies and its high prevalence in both commercial and academic areas have stimulated active research in the domain of optimal cloud resource management. One of the most active research directions is dynamic virtual machine (VM placement optimization in clouds build on Infrastructure-as-a-Service model. This kind of research may pursue different goals with energy-aware optimization being the most common goal as it aims at a urgent problem of green cloud computing - reducing energy consumption by data centers. In this paper we present a new heuristic algorithm of dynamic reallocation of VMs based on an approach presented in one of our previous works. In the algorithm we apply a 2-rank strategy to classify VMs and servers corresponding to the highly and lowly active VMs and solve four tasks: VM classification, host classification, forming a VM migration map and VMs migration. Dividing all of the VMs and servers into two classes we attempt to implement the possibility of risk reduction in case of hardware overloads under overcommitment conditions and to reduce the influence of the occurring overloads on the performance of the cloud VMs. Presented algorithm was developed based on the workload profile of the JINR cloud (a scientific private cloud with the goal of maximizing its usage, but it can also be applied in both public and private commercial clouds to organize the simultaneous use of different SLA and QoS levels in the same cloud environment by giving each VM rank its own level of overcommitment.

  5. The Effect of Swarming on a Voltage Potential-Based Conflict Resolution Algorithm

    NARCIS (Netherlands)

    Maas, J.B.; Sunil, E.; Ellerbroek, J.; Hoekstra, J.M.; Tra, M.A.P.

    2016-01-01

    Several conflict resolution algorithms for airborne self-separation rely on principles derived from the repulsive forces that exist between similarly charged particles. This research investigates whether the performance of the Modified Voltage Potential algorithm, which is based on this algorithm,

  6. Time-Delay System Identification Using Genetic Algorithm

    DEFF Research Database (Denmark)

    Yang, Zhenyu; Seested, Glen Thane

    2013-01-01

    Due to the unknown dead-time coefficient, time-delay system identification turns out to be a non-convex optimization problem. This paper investigates the identification of a simple time-delay system, named First-Order-Plus-Dead-Time (FOPDT), by using the Genetic Algorithm (GA) technique.

  7. Bio Inspired Algorithms in Single and Multiobjective Reliability Optimization

    DEFF Research Database (Denmark)

    Madsen, Henrik; Albeanu, Grigore; Burtschy, Bernard

    2014-01-01

    Non-traditional search and optimization methods based on natural phenomena have been proposed recently in order to avoid local or unstable behavior when running towards an optimum state. This paper describes the principles of bio-inspired algorithms and reports on Migration Algorithms and Bees...

  8. On a class of adjustable rate mortgage loans subject to a strict balance principle

    DEFF Research Database (Denmark)

    Astrup Jensen, Bjarne

    We describe the background and the basic funding mechanisms for the type of adjustable rate mortgage loans that were introduced in the Danish market in 1996. Each loan is funded separately by tap issuing pass-through mortgage bonds (`strict balance principle'). The novelty is a funding mechanism that uses a roll-over strategy, where long term loans are funded by sequentially issuing short term pass-through bonds, and the first issuer of these loans obtained a patent on the funding principles in 1999. Publicly available descriptions of the principles leave an impression of very complicated numerical algorithms. The algorithms described here show that the essentials can be reduced to a `back of an envelope' complexity. Keywords: Adjustable rate mortgages, balance principle, patent, yield curve riding...

  9. Simple Algorithms to Calculate Asymptotic Null Distributions of Robust Tests in Case-Control Genetic Association Studies in R

    Directory of Open Access Journals (Sweden)

    Wing Kam Fung

    2010-02-01

    Full Text Available The case-control study is an important design for testing association between genetic markers and a disease. The Cochran-Armitage trend test (CATT) is one of the most commonly used statistics for the analysis of case-control genetic association studies. The asymptotically optimal CATT can be used when the underlying genetic model (mode of inheritance) is known. However, for most complex diseases, the underlying genetic models are unknown. Thus, tests robust to genetic model misspecification are preferable to the model-dependent CATT. Two robust tests, MAX3 and the genetic model selection (GMS) test, were recently proposed. Their asymptotic null distributions are often obtained by Monte-Carlo simulations, because they either have not been fully studied or involve multiple integrations. In this article, we study how components of each robust statistic are correlated, and find a linear dependence among the components. Using this new finding, we propose simple algorithms to calculate asymptotic null distributions for MAX3 and GMS, which greatly reduce the computing intensity. Furthermore, we have developed the R package Rassoc implementing the proposed algorithms to calculate the empirical and asymptotic p values for MAX3 and GMS as well as other commonly used tests in case-control association studies. For illustration, Rassoc is applied to the analysis of case-control data for the 17 most significant SNPs reported in four genome-wide association studies.
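
    For reference, the CATT statistic itself is simple to compute from a 2x3 genotype table. The sketch below uses the standard additive scores (0, 1, 2) and is only a baseline illustration, not the article's MAX3/GMS algorithms:

```python
from math import sqrt

def catt_z(cases, controls, x=(0, 1, 2)):
    """Cochran-Armitage trend test Z statistic for a 2x3 case-control genotype table.

    cases/controls: genotype counts (n0, n1, n2); x: per-genotype scores.
    """
    R, S = sum(cases), sum(controls)
    N = R + S
    n = [c + d for c, d in zip(cases, controls)]          # column totals
    sxr = sum(xi * ri for xi, ri in zip(x, cases))
    sxn = sum(xi * ni for xi, ni in zip(x, n))
    sxxn = sum(xi * xi * ni for xi, ni in zip(x, n))
    num = N * sxr - R * sxn                               # trend numerator
    return num * sqrt(N / (R * S * (N * sxxn - sxn ** 2)))

# A balanced table gives Z = 0; a table with a genotype trend gives a large Z.
print(round(catt_z((10, 20, 30), (30, 20, 10)), 3))   # → 4.472 (= sqrt(20))
```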

  10. Pareto Principle in Datamining: an Above-Average Fencing Algorithm

    Directory of Open Access Journals (Sweden)

    K. Macek

    2008-01-01

    Full Text Available This paper formulates a new datamining problem: which subset of the input space has the relatively highest output, where the minimal size of this subset is given. This can be useful where usual datamining methods fail because of asymmetry in the error distribution. The paper provides a novel algorithm for this datamining problem and compares it with clustering of above-average individuals.

  11. An inductive algorithm for smooth approximation of functions

    International Nuclear Information System (INIS)

    Kupenova, T.N.

    2011-01-01

    An inductive algorithm is presented for smooth approximation of functions, based on the Tikhonov regularization method and applied to a specific kind of Tikhonov parametric functional. The discrepancy principle is used for estimating the regularization parameter. The principle of heuristic self-organization is applied for assessment of some parameters of the approximating function.
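
    Tikhonov-regularized smoothing of sampled data can be sketched directly: minimizing ||f - y||^2 + lam*||D2 f||^2 with a second-difference penalty D2 leads to one linear system. The setup below (penalty choice, fixed lam, test signal) is an assumed illustration, not the article's inductive algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)

# Noisy samples of a smooth function (assumed setup for illustration).
t = np.linspace(0, 1, 101)
y = np.sin(2 * np.pi * t) + 0.1 * rng.standard_normal(t.size)

# Minimizer of ||f - y||^2 + lam*||D2 f||^2 solves (I + lam * D2^T D2) f = y.
# (The discrepancy principle would instead pick lam so that ||f - y|| matches
# the noise level; here lam is simply fixed.)
lam = 10.0
D2 = np.diff(np.eye(t.size), n=2, axis=0)     # second-difference operator
f = np.linalg.solve(np.eye(t.size) + lam * D2.T @ D2, y)

# The smoothed curve has far less second-difference "roughness" than the data.
rough = lambda v: np.sum(np.diff(v, 2) ** 2)
print(rough(f) < rough(y))   # True
```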

  12. Phase-unwrapping algorithm by a rounding-least-squares approach

    Science.gov (United States)

    Juarez-Salazar, Rigoberto; Robledo-Sanchez, Carlos; Guerrero-Sanchez, Fermin

    2014-02-01

    A simple and efficient phase-unwrapping algorithm based on a rounding procedure and a global least-squares minimization is proposed. Instead of processing the gradient of the wrapped phase, this algorithm operates over the gradient of the phase jumps by a robust and noniterative scheme. Thus, the residue-spreading and over-smoothing effects are reduced. The algorithm's performance is compared with four well-known phase-unwrapping methods: minimum cost network flow (MCNF), fast Fourier transform (FFT), quality-guided, and branch-cut. A computer simulation and experimental results show that the proposed algorithm reaches a higher accuracy level than the MCNF method at a low computing time similar to that of the FFT phase-unwrapping method. Moreover, since the proposed algorithm is simple, fast, and free of user-set parameters, it could be used in metrological interferometric and fringe-projection automatic real-time applications.
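
    As a point of comparison, the classical one-dimensional unwrapper (Itoh's method) simply integrates wrapped phase differences; the article's algorithm is a 2D least-squares approach, so the block below is only a baseline sketch with an assumed test signal:

```python
import numpy as np

# Ground-truth phase ramp and its wrapped version.
true_phase = np.linspace(0, 6 * np.pi, 50)
wrapped = np.angle(np.exp(1j * true_phase))      # values folded into (-pi, pi]

# Itoh's 1D unwrapper: wrap each sample-to-sample jump back into (-pi, pi],
# then integrate the jumps. Valid while true phase steps stay below pi.
d = np.diff(wrapped)
d = (d + np.pi) % (2 * np.pi) - np.pi
unwrapped = np.concatenate([[wrapped[0]], wrapped[0] + np.cumsum(d)])

print(np.allclose(unwrapped, true_phase))        # True
```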

  13. Approximation algorithms for guarding holey polygons ...

    African Journals Online (AJOL)

    Guarding edges of polygons is a version of art gallery problem.The goal is finding the minimum number of guards to cover the edges of a polygon. This problem is NP-hard, and to our knowledge there are approximation algorithms just for simple polygons. In this paper we present two approximation algorithms for guarding ...

  14. Clustered K nearest neighbor algorithm for daily inflow forecasting

    NARCIS (Netherlands)

    Akbari, M.; Van Overloop, P.J.A.T.M.; Afshar, A.

    2010-01-01

    Instance based learning (IBL) algorithms are a common choice among data driven algorithms for inflow forecasting. They are based on the similarity principle, and prediction is made from a finite number of similar neighbors. In this sense, the similarity of a query instance is estimated according to

  15. An investigation of genetic algorithms

    International Nuclear Information System (INIS)

    Douglas, S.R.

    1995-04-01

    Genetic algorithms mimic biological evolution by natural selection in their search for better individuals within a changing population. They can be used as efficient optimizers. This report discusses the developing field of genetic algorithms. It gives a simple example of the search process and introduces the concept of schema. It also discusses modifications to the basic genetic algorithm that result in species and niche formation, in machine learning and artificial evolution of computer programs, and in the streamlining of human-computer interaction. (author). 3 refs., 1 tab., 2 figs
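
    A minimal genetic algorithm of the kind the report introduces can be sketched on the standard OneMax toy problem (all parameter values below are illustrative choices, not from the report):

```python
import random

rng = random.Random(0)

def onemax(bits):          # fitness: number of 1s; the optimum is the all-ones string
    return sum(bits)

def evolve(n_bits=20, pop_size=30, generations=60, p_mut=0.05):
    """Minimal generational GA: tournament selection, one-point crossover, bit-flip mutation."""
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        nxt = []
        while len(nxt) < pop_size:
            p1 = max(rng.sample(pop, 3), key=onemax)      # tournament selection
            p2 = max(rng.sample(pop, 3), key=onemax)
            cut = rng.randrange(1, n_bits)                # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [b ^ (rng.random() < p_mut) for b in child]   # bit-flip mutation
            nxt.append(child)
        pop = nxt
    return max(pop, key=onemax)

best = evolve()
print(onemax(best))   # close to (often exactly) the optimum of 20
```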

  16. Quantum Image Encryption Algorithm Based on Image Correlation Decomposition

    Science.gov (United States)

    Hua, Tianxiang; Chen, Jiamin; Pei, Dongju; Zhang, Wenquan; Zhou, Nanrun

    2015-02-01

    A novel quantum gray-level image encryption and decryption algorithm based on image correlation decomposition is proposed. The correlation among image pixels is established by utilizing the superposition and measurement principle of quantum states. And a whole quantum image is divided into a series of sub-images. These sub-images are stored into a complete binary tree array constructed previously and then randomly performed by one of the operations of quantum random-phase gate, quantum revolving gate and Hadamard transform. The encrypted image can be obtained by superimposing the resulting sub-images with the superposition principle of quantum states. For the encryption algorithm, the keys are the parameters of random phase gate, rotation angle, binary sequence and orthonormal basis states. The security and the computational complexity of the proposed algorithm are analyzed. The proposed encryption algorithm can resist brute force attack due to its very large key space and has lower computational complexity than its classical counterparts.

  17. A Simple Closed-Form Expression For Calculation Of The Electrospun Nanofiber Diameter By Using ABC Algorithm

    Directory of Open Access Journals (Sweden)

    Cagdas Yilmaz

    2017-05-01

    Full Text Available The production of nanofiber tissue scaffolds is quite important for enhancing success in tissue engineering. The electrospinning method is frequently used to produce such scaffolds. In this study, a simple and novel expression derived by using the artificial bee colony (ABC) optimization algorithm is presented to calculate the average fiber diameter (AFD) of the electrospun gelatin/bioactive glass (Gt/BG) scaffold. The diameter of the fiber produced by the electrospinning technique depends on various parameters, such as process, solution, and environmental parameters. The experimental results previously published in the literature, which include one solution parameter (BG content) as well as two process parameters (tip-to-collector distance and solution flow rate) related to the production of electrospun Gt/BG nanofiber, have been used for the optimization process. At first the AFD expression has been constructed with the use of the solution and process parameters, and then the unknown coefficients belonging to this expression have been accurately determined by using the ABC algorithm. Of the 19 experimental data, 15 are used for the optimization phase while the other 4 are utilized in the verification phase. The values of average percentage error between the calculated average fiber diameters and the experimental ones are 2.2% and 5.7% for the optimization and verification phases, respectively. The results obtained from the proposed expression have also been confirmed by comparison with those of an AFD expression reported elsewhere. It is illustrated that the AFD of electrospun Gt/BG can be accurately calculated by the expression proposed here without requiring any complicated or sophisticated knowledge of the mathematical and physical background.

  18. Elementary functions algorithms and implementation

    CERN Document Server

    Muller, Jean-Michel

    2016-01-01

    This textbook presents the concepts and tools necessary to understand, build, and implement algorithms for computing elementary functions (e.g., logarithms, exponentials, and the trigonometric functions). Both hardware- and software-oriented algorithms are included, along with issues related to accurate floating-point implementation. This third edition has been updated and expanded to incorporate the most recent advances in the field, new elementary function algorithms, and function software. After a preliminary chapter that briefly introduces some fundamental concepts of computer arithmetic, such as floating-point arithmetic and redundant number systems, the text is divided into three main parts. Part I considers the computation of elementary functions using algorithms based on polynomial or rational approximations and using table-based methods; the final chapter in this section deals with basic principles of multiple-precision arithmetic. Part II is devoted to a presentation of “shift-and-add” algorithm...

  19. An algorithm for learning real-time automata

    NARCIS (Netherlands)

    Verwer, S.E.; De Weerdt, M.M.; Witteveen, C.

    2007-01-01

    We describe an algorithm for learning simple timed automata, known as real-time automata. The transitions of real-time automata can have a temporal constraint on the time of occurrence of the current symbol relative to the previous symbol. The learning algorithm is similar to the red-blue fringe

  20. Comparison of two (geometric) algorithms for auto OMA

    DEFF Research Database (Denmark)

    Juul, Martin; Olsen, Peter; Balling, Ole

    2018-01-01

    parameters. The two algorithms are compared and illustrated on simulated data. Different choices of distance measures are discussed and evaluated. It is illustrated how a simple distance measure outperforms traditional distance measures from other Auto OMA algorithms. Traditional measures are unable...

  1. Is a weak violation of the Pauli principle possible?

    International Nuclear Information System (INIS)

    Ignat'ev, A.Y.; Kuz'min, V.A.

    1987-01-01

    We examine models in which there is a weak violation of the Pauli principle. A simple algebra of creation and annihilation operators is constructed which contains a parameter β and describes a weak violation of the Pauli principle (when β = 0 the Pauli principle is satisfied exactly). The commutation relations in this algebra turn out to be trilinear. A model based on this algebra is described. It allows transitions in which the Pauli principle is violated, but the probability of these transitions is suppressed by the quantity β² (even though the interaction Hamiltonian does not contain small parameters)

  2. Data structures and algorithm analysis in C++

    CERN Document Server

    Shaffer, Clifford A

    2011-01-01

    With its focus on creating efficient data structures and algorithms, this comprehensive text helps readers understand how to select or design the tools that will best solve specific problems. It uses Microsoft C++ as the programming language and is suitable for second-year data structure courses and computer science courses in algorithm analysis.Techniques for representing data are presented within the context of assessing costs and benefits, promoting an understanding of the principles of algorithm analysis and the effects of a chosen physical medium. The text also explores tradeoff issues, f

  3. Data structures and algorithm analysis in Java

    CERN Document Server

    Shaffer, Clifford A

    2011-01-01

    With its focus on creating efficient data structures and algorithms, this comprehensive text helps readers understand how to select or design the tools that will best solve specific problems. It uses Java as the programming language and is suitable for second-year data structure courses and computer science courses in algorithm analysis. Techniques for representing data are presented within the context of assessing costs and benefits, promoting an understanding of the principles of algorithm analysis and the effects of a chosen physical medium. The text also explores tradeoff issues, familiari

  4. Demonstrating Fermat's Principle in Optics

    Science.gov (United States)

    Paleiov, Orr; Pupko, Ofir; Lipson, S. G.

    2011-01-01

    We demonstrate Fermat's principle in optics by a simple experiment using reflection from an arbitrarily shaped one-dimensional reflector. We investigated a range of possible light paths from a lamp to a fixed slit by reflection in a curved reflector and showed by direct measurement that the paths along which light is concentrated have either…

  5. The Top Ten Algorithms in Data Mining

    CERN Document Server

    Wu, Xindong

    2009-01-01

    From classification and clustering to statistical learning, association analysis, and link mining, this book covers the most important topics in data mining research. It presents the ten most influential algorithms used in the data mining community today. Each chapter provides a detailed description of the algorithm, a discussion of available software implementation, advanced topics, and exercises. With a simple data set, examples illustrate how each algorithm works and highlight the overall performance of each algorithm in a real-world application. Featuring contributions from leading researc

  6. Identification of Super Phenix steam generator by a simple polynomial model

    International Nuclear Information System (INIS)

    Rousseau, I.

    1981-01-01

    This note suggests an identification method using simple polynomial models for the steam generator of the Super-Phenix fast neutron power plant. This approach is justified by the choice of adaptive control. The identification algorithms presented will be applied to multivariable input-output behaviours. The results obtained with the autoregressive representation and with simple polynomial models will be compared, and the effect of perturbations on the output signal will be tested, in order to select a good identification algorithm for multivariable adaptive regulation [fr]

  7. Rules Extraction with an Immune Algorithm

    Directory of Open Access Journals (Sweden)

    Deqin Yan

    2007-12-01

    Full Text Available In this paper, a method of extracting rules from information systems with immune algorithms is proposed. The immune algorithm is designed around a sharing mechanism for rule extraction. The principle of sharing and competing for resources in the sharing mechanism is consistent with the relationship of sharing and rivalry among rules. In order to extract rules efficiently, a new concept of flexible confidence and rule measurement is introduced. Experiments demonstrate that the proposed method is effective.

  8. Particle swarm genetic algorithm and its application

    International Nuclear Information System (INIS)

    Liu Chengxiang; Yan Changxiang; Wang Jianjun; Liu Zhenhai

    2012-01-01

    To solve the problems of slow convergence speed and the tendency to fall into local optima that standard particle swarm optimization exhibits when dealing with nonlinear constrained optimization problems, a particle swarm genetic algorithm is designed. The proposed algorithm adopts the feasibility principle to handle constraint conditions, which avoids the difficulty of selecting the punishment factor in the penalty function method; it generates the initial feasible group randomly, which accelerates particle swarm convergence; and it introduces the genetic algorithm's crossover and mutation strategies to avoid the particle swarm falling into local optima. Optimization calculations on typical test functions show that the particle swarm genetic algorithm has better optimization performance. The algorithm is applied to nuclear power plant optimization, and the optimization results are significant. (authors)
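
    For comparison, a plain global-best particle swarm optimizer (without the genetic crossover/mutation or feasibility handling that the abstract adds) can be sketched as follows; the search bounds and coefficients are assumed illustrative values:

```python
import random

rng = random.Random(3)

def pso(f, dim=2, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimal global-best PSO minimizing f over the box [-5, 5]^dim."""
    X = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [x[:] for x in X]                     # personal best positions
    g = min(P, key=f)[:]                      # global best position
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                V[i][d] = (w * V[i][d]
                           + c1 * rng.random() * (P[i][d] - X[i][d])
                           + c2 * rng.random() * (g[d] - X[i][d]))
                X[i][d] += V[i][d]
            if f(X[i]) < f(P[i]):             # update personal and global bests
                P[i] = X[i][:]
                if f(P[i]) < f(g):
                    g = P[i][:]
    return g

sphere = lambda x: sum(v * v for v in x)
best = pso(sphere)
print(sphere(best) < 1e-4)   # True: the swarm converges near the origin
```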

  9. Flux-corrected transport principles, algorithms, and applications

    CERN Document Server

    Löhner, Rainald; Turek, Stefan

    2012-01-01

    Many modern high-resolution schemes for Computational Fluid Dynamics trace their origins to the Flux-Corrected Transport (FCT) paradigm. FCT maintains monotonicity using a nonoscillatory low-order scheme to determine the bounds for a constrained high-order approximation. This book begins with historical notes by J.P. Boris and D.L. Book who invented FCT in the early 1970s. The chapters that follow describe the design of fully multidimensional FCT algorithms for structured and unstructured grids, limiting for systems of conservation laws, and the use of FCT as an implicit subgrid scale model. The second edition presents 200 pages of additional material. The main highlights of the three new chapters include: FCT-constrained interpolation for Arbitrary Lagrangian-Eulerian methods, an optimization-based approach to flux correction, and FCT simulations of high-speed flows on overset grids. Addressing students and researchers, as well as CFD practitioners, the book is focused on computational aspects and contains m...

  10. An Educational System for Learning Search Algorithms and Automatically Assessing Student Performance

    Science.gov (United States)

    Grivokostopoulou, Foteini; Perikos, Isidoros; Hatzilygeroudis, Ioannis

    2017-01-01

    In this paper, first we present an educational system that assists students in learning and tutors in teaching search algorithms, an artificial intelligence topic. Learning is achieved through a wide range of learning activities. Algorithm visualizations demonstrate the operational functionality of algorithms according to the principles of active…

  11. Simple model of the arms race

    International Nuclear Information System (INIS)

    Zane, L.I.

    1982-01-01

    A simple model of a two-party arms race is developed based on the principle that the race will continue so long as either side can unleash an effective first strike against the other side. The model is used to examine how secrecy, the ABM, MIRV-ing, and an MX system affect the arms race

  12. Real time algorithms for sharp wave ripple detection.

    Science.gov (United States)

    Sethi, Ankit; Kemere, Caleb

    2014-01-01

    Neural activity during sharp wave ripples (SWR), short bursts of co-ordinated oscillatory activity in the CA1 region of the rodent hippocampus, is implicated in a variety of memory functions from consolidation to recall. Detection of these events in an algorithmic framework has thus far relied on simple thresholding techniques with heuristically derived parameters. This study is an investigation into testing and improving the current methods for detection of SWR events in neural recordings. We propose and profile methods to reduce latency in ripple detection. The proposed algorithms are tested on simulated ripple data. The findings show that simple real-time algorithms can improve upon existing power thresholding methods and can detect ripple activity with latencies in the range of 10-20 ms.
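
The power-thresholding baseline that such work improves on can be sketched generically: square the (band-limited) signal, smooth it, and flag crossings of a threshold expressed in standard deviations. This is an illustration of the baseline technique, not the authors' code; the window length and threshold are placeholder values.

```python
import numpy as np

def moving_average(x, w):
    """Boxcar smoother, a crude stand-in for a proper envelope estimate."""
    return np.convolve(x, np.ones(w) / w, mode="same")

def detect_events(power, threshold_sd=3.0):
    """Return sample indices where smoothed power first crosses
    mean + threshold_sd * std (event onsets = rising edges)."""
    mu, sd = power.mean(), power.std()
    above = power > mu + threshold_sd * sd
    return np.flatnonzero(above[1:] & ~above[:-1]) + 1
```

A synthetic burst embedded in noise is detected within a few samples of its true onset; the detection latency inherent in the smoothing window is exactly what lower-latency methods aim to reduce.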

  13. Algorithms for Brownian first-passage-time estimation

    Science.gov (United States)

    Adib, Artur B.

    2009-09-01

    A class of algorithms in discrete space and continuous time for Brownian first-passage-time estimation is considered. A simple algorithm is derived that yields exact mean first-passage times (MFPTs) for linear potentials in one dimension, regardless of the lattice spacing. When applied to nonlinear potentials and/or higher spatial dimensions, numerical evidence suggests that this algorithm yields MFPT estimates that either outperform or rival Langevin-based (discrete time and continuous space) estimates.
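
The flavour of a discrete-space, continuous-time estimator can be shown for the zero-potential case, where the exact lattice MFPT from the reflecting end of an N-site chain with hop rate r is N(N+1)/(2r). This toy version is only a plain continuous-time random walk; the algorithm in the paper additionally handles linear potentials exactly.

```python
import random

def mfpt_ctrw(n_sites, rate=1.0, trials=4000, seed=1):
    """Monte Carlo mean first-passage time of a symmetric continuous-time
    random walk: reflecting at site 0, absorbed at site n_sites."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        x, t = 0, 0.0
        while x < n_sites:
            if x == 0:
                t += rng.expovariate(rate)       # only a rightward hop
                x = 1
            else:
                t += rng.expovariate(2 * rate)   # exponential waiting time
                x += 1 if rng.random() < 0.5 else -1
        total += t
    return total / trials
```

For n_sites = 10 and rate = 1 the exact lattice answer is 10·11/2 = 55, and the estimate lands close to it.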

  14. A simple algorithm for the identification of clinical COPD phenotypes

    NARCIS (Netherlands)

    Burgel, Pierre-Régis; Paillasseur, Jean-Louis; Janssens, Wim; Piquet, Jacques; ter Riet, Gerben; Garcia-Aymerich, Judith; Cosio, Borja; Bakke, Per; Puhan, Milo A.; Langhammer, Arnulf; Alfageme, Inmaculada; Almagro, Pere; Ancochea, Julio; Celli, Bartolome R.; Casanova, Ciro; de-Torres, Juan P.; Decramer, Marc; Echazarreta, Andrés; Esteban, Cristobal; Gomez Punter, Rosa Mar; Han, MeiLan K.; Johannessen, Ane; Kaiser, Bernhard; Lamprecht, Bernd; Lange, Peter; Leivseth, Linda; Marin, Jose M.; Martin, Francis; Martinez-Camblor, Pablo; Miravitlles, Marc; Oga, Toru; Sofia Ramírez, Ana; Sin, Don D.; Sobradillo, Patricia; Soler-Cataluña, Juan J.; Turner, Alice M.; Verdu Rivera, Francisco Javier; Soriano, Joan B.; Roche, Nicolas

    2017-01-01

    This study aimed to identify simple rules for allocating chronic obstructive pulmonary disease (COPD) patients to clinical phenotypes identified by cluster analyses. Data from 2409 COPD patients of French/Belgian COPD cohorts were analysed using cluster analysis resulting in the identification of

  15. The Porter Stemming Algorithm: Then and Now

    Science.gov (United States)

    Willett, Peter

    2006-01-01

    Purpose: In 1980, Porter presented a simple algorithm for stemming English language words. This paper summarises the main features of the algorithm, and highlights its role not just in modern information retrieval research, but also in a range of related subject domains. Design/methodology/approach: Review of literature and research involving use…

  16. Performance of the Tariff Method: validation of a simple additive algorithm for analysis of verbal autopsies

    Directory of Open Access Journals (Sweden)

    Murray Christopher JL

    2011-08-01

    Full Text Available Abstract Background Verbal autopsies provide valuable information for studying mortality patterns in populations that lack reliable vital registration data. Methods for transforming verbal autopsy results into meaningful information for health workers and policymakers, however, are often costly or complicated to use. We present a simple additive algorithm, the Tariff Method (termed Tariff), which can be used for assigning individual cause of death and for determining cause-specific mortality fractions (CSMFs) from verbal autopsy data. Methods Tariff calculates a score, or "tariff," for each cause, for each sign/symptom, across a pool of validated verbal autopsy data. The tariffs are summed for a given response pattern in a verbal autopsy, and this sum (score) provides the basis for predicting the cause of death in a dataset. We implemented this algorithm and evaluated the method's predictive ability, both in terms of chance-corrected concordance at the individual cause assignment level and in terms of CSMF accuracy at the population level. The analysis was conducted separately for adult, child, and neonatal verbal autopsies across 500 pairs of train-test validation verbal autopsy data. Results Tariff is capable of outperforming physician-certified verbal autopsy in most cases. In terms of chance-corrected concordance, the method achieves 44.5% in adults, 39% in children, and 23.9% in neonates. CSMF accuracy was 0.745 in adults, 0.709 in children, and 0.679 in neonates. Conclusions Verbal autopsies can be an efficient means of obtaining cause of death data, and Tariff provides an intuitive, reliable method for generating individual cause assignment and CSMFs. The method is transparent and flexible and can be readily implemented by users without training in statistics or computer science.
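
The additive scoring idea is simple enough to sketch directly. The real Tariff Method normalises symptom endorsement rates using medians and interquartile ranges over validated data; this toy version substitutes a median/MAD robust z-score and made-up endorsement fractions, so it illustrates the shape of the algorithm rather than its calibrated form.

```python
from statistics import median

def train_tariffs(endorsement):
    """endorsement: {cause: {symptom: fraction of deaths endorsing it}}.
    Tariff of (cause, symptom) = how far that cause's endorsement rate sits
    from the median across causes, in robust (median/MAD) units."""
    symptoms = {s for d in endorsement.values() for s in d}
    tariffs = {}
    for s in symptoms:
        vals = [endorsement[c].get(s, 0.0) for c in endorsement]
        med = median(vals)
        mad = median([abs(v - med) for v in vals]) or 1.0
        for c in endorsement:
            tariffs[(c, s)] = (endorsement[c].get(s, 0.0) - med) / mad
    return tariffs

def predict(tariffs, causes, endorsed_symptoms):
    """Sum the tariffs of the endorsed symptoms per cause; argmax wins."""
    scores = {c: sum(tariffs.get((c, s), 0.0) for s in endorsed_symptoms)
              for c in causes}
    return max(scores, key=scores.get)
```

Because the score is a plain sum, each prediction is transparent: every endorsed symptom contributes one inspectable tariff per candidate cause.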

  17. A simple, practical and complete O(n³/log n)-time Algorithm for RNA folding using the Four-Russians Speedup

    Directory of Open Access Journals (Sweden)

    Gusfield Dan

    2010-01-01

    Full Text Available Abstract Background The problem of computationally predicting the secondary structure (or folding) of RNA molecules was first introduced more than thirty years ago and yet continues to be an area of active research and development. The basic RNA-folding problem of finding a maximum-cardinality, non-crossing matching of complementary nucleotides in an RNA sequence of length n has an O(n³)-time dynamic programming solution that is widely applied. It is known that an o(n³) worst-case time solution is possible, but the published and suggested methods are complex and have not been established to be practical. Significant practical improvements to the original dynamic programming method have been introduced, but they retain the O(n³) worst-case time bound when n is the only problem parameter used in the bound. Surprisingly, the most widely used general technique to achieve a worst-case (and often practical) speed up of dynamic programming, the Four-Russians technique, has not been previously applied to the RNA-folding problem. This is perhaps due to technical issues in adapting the technique to RNA folding. Results In this paper, we give a simple, complete, and practical Four-Russians algorithm for the basic RNA-folding problem, achieving a worst-case time bound of O(n³/log n). Conclusions We show that this time bound can also be obtained for richer nucleotide-matching scoring schemes, and that the method achieves consistent speed-ups in practice. The contribution is both theoretical and practical, since the basic RNA-folding problem is often solved multiple times in the inner loop of more complex algorithms, and for long RNA molecules in the study of RNA virus genomes.
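
The basic O(n³) dynamic program being accelerated is the classic Nussinov-style recurrence: each entry M[i][j] is the best of leaving base i unpaired or pairing it with some k in (i, j]. A compact sketch (Watson-Crick pairs only, no minimum loop length, and of course without the Four-Russians speedup) looks like this:

```python
def nussinov(seq):
    """Maximum number of non-crossing complementary base pairs in seq,
    via the O(n^3) dynamic program for the basic RNA-folding problem."""
    pairs = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G")}
    n = len(seq)
    M = [[0] * n for _ in range(n)]
    for span in range(1, n):
        for i in range(n - span):
            j = i + span
            best = M[i + 1][j]                   # base i left unpaired
            for k in range(i + 1, j + 1):        # base i paired with base k
                if (seq[i], seq[k]) in pairs:
                    left = M[i + 1][k - 1] if k > i + 1 else 0
                    right = M[k + 1][j] if k < j else 0
                    best = max(best, 1 + left + right)
            M[i][j] = best
    return M[0][n - 1] if n else 0
```

The inner loop over k is what the Four-Russians technique compresses, trading table lookups over precomputed blocks for the log n factor.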

  18. An Empirical Derivation of the Run Time of the Bubble Sort Algorithm.

    Science.gov (United States)

    Gonzales, Michael G.

    1984-01-01

    Suggests a moving pictorial tool to help teach principles in the bubble sort algorithm. Develops such a tool applied to an unsorted list of numbers and describes a method to derive the run time of the algorithm. The method can be modified to run the times of various other algorithms. (JN)
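
A counter-instrumented bubble sort makes that derivation easy to reproduce empirically: for a reversed list of n items the comparison count is n(n-1)/2, the quadratic worst case, while an already-sorted list exits after one clean pass.

```python
def bubble_sort(a):
    """Bubble sort with a comparison counter, so run-time growth can be
    measured directly instead of derived on paper."""
    a = list(a)
    comparisons = 0
    for i in range(len(a) - 1):
        swapped = False
        for j in range(len(a) - 1 - i):          # unsorted prefix shrinks
            comparisons += 1
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]  # bubble the larger item up
                swapped = True
        if not swapped:                          # early exit: already sorted
            break
    return a, comparisons
```

Plotting the counter against n for reversed inputs reproduces the quadratic curve; the same instrumentation trick carries over to other comparison sorts.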

  19. Actuator Disc Model Using a Modified Rhie-Chow/SIMPLE Pressure Correction Algorithm

    DEFF Research Database (Denmark)

    Rethore, Pierre-Elouan; Sørensen, Niels

    2008-01-01

    An actuator disc model for the flow solver EllipSys (2D&3D) is proposed. It is based on a correction of the Rhie-Chow algorithm for using discrete body forces in a collocated-variable finite-volume CFD code. It is compared with three cases where an analytical solution is known.

  20. Simple Numerical Simulation of Strain Measurement

    Science.gov (United States)

    Tai, H.

    2002-01-01

    By adopting the basic principle of the reflection (and transmission) of a plane polarized electromagnetic wave incident normal to a stack of films of alternating refractive index, a simple numerical code was written to simulate the maximum reflectivity (transmittivity) of a fiber optic Bragg grating corresponding to various non-uniform strain conditions including photo-elastic effect in certain cases.
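
The reflection principle the note builds on is the standard characteristic-matrix (transfer-matrix) treatment of a layer stack at normal incidence. A bare-bones sketch follows; the ambient index of 1.0 on both sides and lossless real layer indices are simplifying assumptions, and a real fiber Bragg grating would be modelled with many alternating layers plus the strain-dependent photo-elastic index change.

```python
import cmath
from math import pi

def stack_reflectivity(n_list, d_list, wavelength):
    """Normal-incidence reflectivity of a dielectric layer stack via the
    characteristic (transfer) matrix method. Thicknesses share units with
    the wavelength; ambient index 1.0 assumed on both sides."""
    M = [[1.0, 0.0], [0.0, 1.0]]                 # running matrix product
    for n, d in zip(n_list, d_list):
        delta = 2 * pi * n * d / wavelength      # phase thickness of layer
        m = [[cmath.cos(delta), 1j * cmath.sin(delta) / n],
             [1j * n * cmath.sin(delta), cmath.cos(delta)]]
        M = [[M[0][0] * m[0][0] + M[0][1] * m[1][0],
              M[0][0] * m[0][1] + M[0][1] * m[1][1]],
             [M[1][0] * m[0][0] + M[1][1] * m[1][0],
              M[1][0] * m[0][1] + M[1][1] * m[1][1]]]
    n_in = n_out = 1.0
    A, B, C, D = M[0][0], M[0][1], M[1][0], M[1][1]
    r = (n_in * A + n_in * n_out * B - C - n_out * D) / \
        (n_in * A + n_in * n_out * B + C + n_out * D)
    return abs(r) ** 2
```

As a sanity check, a single quarter-wave layer of index n between two index-1 media has r = (1 - n²)/(1 + n²), so n = 2 gives a reflectivity of 0.36.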

  1. Algorithms for worst-case tolerance optimization

    DEFF Research Database (Denmark)

    Schjær-Jacobsen, Hans; Madsen, Kaj

    1979-01-01

    New algorithms are presented for the solution of optimum tolerance assignment problems. The problems considered are defined mathematically as a worst-case problem (WCP), a fixed tolerance problem (FTP), and a variable tolerance problem (VTP). The basic optimization problem without tolerances is denoted the zero tolerance problem (ZTP). For solution of the WCP we suggest application of interval arithmetic and also alternative methods. For solution of the FTP an algorithm is suggested which is conceptually similar to algorithms previously developed by the authors for the ZTP. Finally, the VTP is solved by a double-iterative algorithm in which the inner iteration is performed by the FTP-algorithm. The application of the algorithm is demonstrated by means of relatively simple numerical examples. Basic properties, such as convergence properties, are displayed based on the examples.

  2. Semioptimal practicable algorithmic cooling

    International Nuclear Information System (INIS)

    Elias, Yuval; Mor, Tal; Weinstein, Yossi

    2011-01-01

    Algorithmic cooling (AC) of spins applies entropy manipulation algorithms in open spin systems in order to cool spins far beyond Shannon's entropy bound. Algorithmic cooling of nuclear spins was demonstrated experimentally and may contribute to nuclear magnetic resonance spectroscopy. Several cooling algorithms were suggested in recent years, including practicable algorithmic cooling (PAC) and exhaustive AC. Practicable algorithms have simple implementations, yet their level of cooling is far from optimal; exhaustive algorithms, on the other hand, cool much better, and some even reach (asymptotically) an optimal level of cooling, but they are not practicable. We introduce here semioptimal practicable AC (SOPAC), wherein a few cycles (typically two to six) are performed at each recursive level. Two classes of SOPAC algorithms are proposed and analyzed. Both attain cooling levels significantly better than PAC and are much more efficient than the exhaustive algorithms. These algorithms are shown to bridge the gap between PAC and exhaustive AC. In addition, we calculated the number of spins required by SOPAC in order to purify qubits for quantum computation. As few as 12 and 7 spins are required (in an ideal scenario) to yield a mildly pure spin (60% polarized) from initial polarizations of 1% and 10%, respectively. In the latter case, about five more spins are sufficient to produce a highly pure spin (99.99% polarized), which could be relevant for fault-tolerant quantum computing.

  3. CHESS-changing horizon efficient set search: A simple principle for multiobjective optimization

    DEFF Research Database (Denmark)

    Borges, Pedro Manuel F. C.

    2000-01-01

    This paper presents a new concept for generating approximations to the non-dominated set in multiobjective optimization problems. The approximation set A is constructed by solving several single-objective minimization problems in which a particular function D(A, z) is minimized. A new algorithm t...

  4. Realization of seven-qubit Deutsch-Jozsa algorithm on NMR quantum computer

    International Nuclear Information System (INIS)

    Wei Daxiu; Yang Xiaodong; Luo Jun; Sun Xianping; Zeng Xizhi; Liu Maili; Ding Shangwu

    2002-01-01

    In recent years, remarkable progress has been made in the experimental realization of quantum information processing, especially based on nuclear magnetic resonance (NMR) theory. Among quantum algorithms, the Deutsch-Jozsa algorithm has been widely studied. It can be realized on an NMR quantum computer and can also be simplified by using Cirac's scheme. First the principle of the Deutsch-Jozsa quantum algorithm is analyzed, then the authors implement the seven-qubit Deutsch-Jozsa algorithm on an NMR quantum computer.

  5. Forecasting building energy consumption with hybrid genetic algorithm-hierarchical adaptive network-based fuzzy inference system

    Energy Technology Data Exchange (ETDEWEB)

    Li, Kangji [Institute of Cyber-Systems and Control, Zhejiang University, Hangzhou 310027 (China); School of Electricity Information Engineering, Jiangsu University, Zhenjiang 212013 (China); Su, Hongye [Institute of Cyber-Systems and Control, Zhejiang University, Hangzhou 310027 (China)

    2010-11-15

    There are several ways to forecast building energy consumption, varying from simple regression to models based on physical principles. In this paper, a new method, namely, the hybrid genetic algorithm-hierarchical adaptive network-based fuzzy inference system (GA-HANFIS) model is developed. In this model, the hierarchical structure decreases the rule base dimension. Both clustering and rule base parameters are optimized by GAs and neural networks (NNs). The model is applied to predict a hotel's daily air conditioning consumption for a period over 3 months. The results obtained by the proposed model are presented and compared with those of a regular NN method, indicating that the GA-HANFIS model achieves better forecasting accuracy than NNs. (author)

  6. A simple algorithm for identifying periods of snow accumulation on a radiometer

    Science.gov (United States)

    Lapo, Karl E.; Hinkelman, Laura M.; Landry, Christopher C.; Massmann, Adam K.; Lundquist, Jessica D.

    2015-09-01

    Downwelling solar, Qsi, and longwave, Qli, irradiances at the earth's surface are the primary energy inputs for many hydrologic processes, and uncertainties in measurements of these two terms confound evaluations of estimated irradiances and negatively impact hydrologic modeling. Observations of Qsi and Qli in cold environments are subject to conditions that create additional uncertainties not encountered in other climates, specifically the accumulation of snow on uplooking radiometers. To address this issue, we present an automated method for estimating these periods of snow accumulation. Our method is based on forest interception of snow and uses common meteorological observations. In this algorithm, snow accumulation must exceed a threshold to obscure the sensor and is only removed through scouring by wind or melting. The algorithm is evaluated at two sites representing different mountain climates: (1) Snoqualmie Pass, Washington (maritime) and (2) the Senator Beck Basin Study Area, Colorado (continental). The algorithm agrees well with time-lapse camera observations at the Washington site and with multiple measurements at the Colorado site, with 70-80% of observed snow accumulation events correctly identified. We suggest using the method for quality controlling irradiance observations in snow-dominated climates where regular, daily maintenance is not possible.
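
The interception-style bookkeeping described above can be sketched as a small state machine: snow accumulates on the sensor during cold precipitation, obscures it once a threshold is exceeded, and is only removed by wind scouring or melt. All thresholds below are illustrative placeholders, not the paper's calibrated values.

```python
def flag_snow_on_sensor(precip, temp, wind,
                        snow_temp=0.5, accum_limit=1.0,
                        scour_wind=5.0, melt_temp=2.0):
    """Flag time steps when an uplooking radiometer is likely snow-covered.
    precip/temp/wind are equal-length per-time-step sequences."""
    depth, flags = 0.0, []
    for p, t, w in zip(precip, temp, wind):
        if t < snow_temp:
            depth += p            # precipitation accumulates as snow
        if w > scour_wind or t > melt_temp:
            depth = 0.0           # removed by wind scouring or melt
        flags.append(depth > accum_limit)
    return flags
```

Flagged periods would then be excluded (or down-weighted) when quality-controlling the irradiance record.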

  7. A retrodictive stochastic simulation algorithm

    International Nuclear Information System (INIS)

    Vaughan, T.G.; Drummond, P.D.; Drummond, A.J.

    2010-01-01

    In this paper we describe a simple method for inferring the initial states of systems evolving stochastically according to master equations, given knowledge of the final states. This is achieved through the use of a retrodictive stochastic simulation algorithm which complements the usual predictive stochastic simulation approach. We demonstrate the utility of this new algorithm by applying it to example problems, including the derivation of likely ancestral states of a gene sequence given a Markovian model of genetic mutation.
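
The retrodictive idea, inferring initial states from observed final states, can be illustrated with a toy discrete-time analogue: condition forward simulations of a Markov chain on the observed final state and tally where the kept trajectories started. This rejection-sampling sketch only conveys the inference goal; the paper's algorithm works directly with continuous-time master equations.

```python
import random

def retrodict(T, prior, final_state, steps, samples=4000, seed=0):
    """Posterior over initial states of a Markov chain with transition
    matrix T (rows sum to 1), given the state observed after `steps`."""
    rng = random.Random(seed)
    n = len(prior)
    counts, kept = [0] * n, 0
    for _ in range(samples):
        s0 = rng.choices(range(n), weights=prior)[0]
        s = s0
        for _ in range(steps):
            s = rng.choices(range(n), weights=T[s])[0]
        if s == final_state:          # keep trajectories consistent with data
            counts[s0] += 1
            kept += 1
    return [c / kept for c in counts] if kept else counts
```

For a sticky two-state chain observed in state 0, the inferred initial distribution correctly favours state 0.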

  8. An algorithm for discovering Lagrangians automatically from data

    Directory of Open Access Journals (Sweden)

    Daniel J.A. Hills

    2015-11-01

    Full Text Available An activity fundamental to science is building mathematical models. These models are used to both predict the results of future experiments and gain insight into the structure of the system under study. We present an algorithm that automates the model building process in a scientifically principled way. The algorithm can take observed trajectories from a wide variety of mechanical systems and, without any other prior knowledge or tuning of parameters, predict the future evolution of the system. It does this by applying the principle of least action and searching for the simplest Lagrangian that describes the system’s behaviour. By generating this Lagrangian in a human interpretable form, it can also provide insight into the workings of the system.

  9. Executive Financial Reporting: Seven Principles to Use in Developing Effective Reports.

    Science.gov (United States)

    Jenkins, William A.; Fischer, Mary

    1991-01-01

    Higher education institution business officers need to follow principles of presentation, judgment, and measurement in developing effective executive financial reports. Principles include (1) keep the statement simple; (2) be consistent in reporting from year to year; (3) determine user needs and interests; (4) limit data; (5) provide trend lines;…

  10. Branch and peg algorithms for the simple plant location problem

    NARCIS (Netherlands)

    Goldengorin, B.; Ghosh, D.; Sierksma, G.

    The simple plant location problem is a well-studied problem in combinatorial optimization. It is one of deciding where to locate a set of plants so that a set of clients can be supplied by them at the minimum cost. This problem often appears as a subproblem in other combinatorial problems. Several

  11. Branch and peg algorithms for the simple plant location problem

    NARCIS (Netherlands)

    Goldengorin, Boris; Ghosh, Diptesh; Sierksma, Gerard

    2001-01-01

    The simple plant location problem is a well-studied problem in combinatorial optimization. It is one of deciding where to locate a set of plants so that a set of clients can be supplied by them at the minimum cost. This problem often appears as a subproblem in other combinatorial problems. Several

  12. Numerical algorithms for contact problems in linear elastostatics

    International Nuclear Information System (INIS)

    Barbosa, H.J.C.; Feijoo, R.A.

    1984-01-01

    In this work contact problems in linear elasticity are analysed by means of Finite Elements and Mathematical Programming Techniques. The principle of virtual work leads in this case to a variational inequality which in turn is equivalent, for Hookean materials and infinitesimal strains, to the minimization of the total potential energy over the set of all admissible virtual displacements. The use of Gauss-Seidel algorithm with relaxation and projection and also Lemke's algorithm and Uzawa's algorithm for solving the minimization problem is discussed. Finally numerical examples are presented. (Author) [pt

  13. Efficient Algorithms for gcd and Cubic Residuosity in the Ring of Eisenstein Integers

    DEFF Research Database (Denmark)

    Damgård, Ivan Bjerre; Frandsen, Gudmund Skovbjerg

    2003-01-01

    We present simple and efficient algorithms for computing gcd and cubic residuosity in the ring of Eisenstein integers, Z[ζ], i.e. the integers extended with ζ, a complex primitive third root of unity. The algorithms are similar and may be seen as generalisations of the binary integer gcd and derived algorithms.
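
For reference, the binary (Stein) gcd being generalised replaces division with shifts and subtraction, which is what makes it attractive to carry over to rings like the Eisenstein integers. On ordinary integers it reads:

```python
def binary_gcd(a, b):
    """Binary (Stein) gcd of non-negative integers: strip common factors
    of two, then repeatedly subtract the smaller odd value from the larger."""
    if a == 0:
        return b
    if b == 0:
        return a
    shift = 0
    while (a | b) & 1 == 0:          # both even: factor out a 2
        a, b, shift = a >> 1, b >> 1, shift + 1
    while a & 1 == 0:                # make a odd
        a >>= 1
    while b:
        while b & 1 == 0:            # 2 divides b but not gcd(a, b)
            b >>= 1
        if a > b:
            a, b = b, a
        b -= a                       # both odd: difference is even
    return a << shift
```

The Eisenstein-integer version in the paper plays the same game with the small prime 1 - ζ in place of 2.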

  14. Kinetics of enzyme action: essential principles for drug hunters

    National Research Council Canada - National Science Library

    Stein, Ross L

    2011-01-01

    ... field. Beginning with the most basic principles pertaining to simple, one-substrate enzyme reactions and their inhibitors, and progressing to a thorough treatment of two-substrate enzymes, Kinetics of Enzyme Action...

  15. ASP made simple

    CERN Document Server

    Deane, Sharon

    2003-01-01

    ASP Made Simple provides a brief introduction to ASP for the person who favours self teaching and/or does not have expensive computing facilities to learn on. The book will demonstrate how the principles of ASP can be learned with an ordinary PC running Personal Web Server, MS Access and a general text editor like Notepad.After working through the material readers should be able to:* Write ASP scripts that can display changing information on a web browser* Request records from a remote database or add records to it* Check user names & passwords and take this knowledge forward, either for their

  16. Application of the maximum entropy production principle to electrical systems

    International Nuclear Information System (INIS)

    Christen, Thomas

    2006-01-01

    For a simple class of electrical systems, the principle of the maximum entropy production rate (MaxEP) is discussed. First, we compare the MaxEP principle and the principle of the minimum entropy production rate and illustrate the superiority of the MaxEP principle for the example of two parallel constant resistors. Secondly, we show that the Steenbeck principle for the electric arc as well as the ohmic contact behaviour of space-charge limited conductors follow from the MaxEP principle. In line with work by Dewar, the investigations seem to suggest that the MaxEP principle can also be applied to systems far from equilibrium, provided appropriate information is available that enters the constraints of the optimization problem. Finally, we apply the MaxEP principle to a mesoscopic system and show that the universal conductance quantum, e²/h, of a one-dimensional ballistic conductor can be estimated.

  17. Maximum hardness and minimum polarizability principles through lattice energies of ionic compounds

    International Nuclear Information System (INIS)

    Kaya, Savaş; Kaya, Cemal; Islam, Nazmul

    2016-01-01

    The maximum hardness (MHP) and minimum polarizability (MPP) principles have been analyzed using the relationship among the lattice energies of ionic compounds with their electronegativities, chemical hardnesses and electrophilicities. Lattice energy, electronegativity, chemical hardness and electrophilicity values of ionic compounds considered in the present study have been calculated using new equations derived by some of the authors in recent years. For 4 simple reactions, the changes of the hardness (Δη), polarizability (Δα) and electrophilicity index (Δω) were calculated. It is shown that the maximum hardness principle is obeyed by all chemical reactions but minimum polarizability principles and minimum electrophilicity principle are not valid for all reactions. We also proposed simple methods to compute the percentage of ionic character and the internuclear distances of ionic compounds. Comparative studies with experimental sets of data reveal that the proposed methods of computation of the percentage of ionic character and the internuclear distances of ionic compounds are valid.

  18. Maximum hardness and minimum polarizability principles through lattice energies of ionic compounds

    Energy Technology Data Exchange (ETDEWEB)

    Kaya, Savaş, E-mail: savaskaya@cumhuriyet.edu.tr [Department of Chemistry, Faculty of Science, Cumhuriyet University, Sivas 58140 (Turkey); Kaya, Cemal, E-mail: kaya@cumhuriyet.edu.tr [Department of Chemistry, Faculty of Science, Cumhuriyet University, Sivas 58140 (Turkey); Islam, Nazmul, E-mail: nazmul.islam786@gmail.com [Theoretical and Computational Chemistry Research Laboratory, Department of Basic Science and Humanities/Chemistry Techno Global-Balurghat, Balurghat, D. Dinajpur 733103 (India)

    2016-03-15

    The maximum hardness (MHP) and minimum polarizability (MPP) principles have been analyzed using the relationship among the lattice energies of ionic compounds with their electronegativities, chemical hardnesses and electrophilicities. Lattice energy, electronegativity, chemical hardness and electrophilicity values of ionic compounds considered in the present study have been calculated using new equations derived by some of the authors in recent years. For 4 simple reactions, the changes of the hardness (Δη), polarizability (Δα) and electrophilicity index (Δω) were calculated. It is shown that the maximum hardness principle is obeyed by all chemical reactions but minimum polarizability principles and minimum electrophilicity principle are not valid for all reactions. We also proposed simple methods to compute the percentage of ionic character and the internuclear distances of ionic compounds. Comparative studies with experimental sets of data reveal that the proposed methods of computation of the percentage of ionic character and the internuclear distances of ionic compounds are valid.

  19. VLSI PARTITIONING ALGORITHM WITH ADAPTIVE CONTROL PARAMETER

    Directory of Open Access Journals (Sweden)

    P. N. Filippenko

    2013-03-01

    Full Text Available The article deals with the problem of very large-scale integration circuit partitioning. A graph is selected as a mathematical model describing integrated circuit. Modification of ant colony optimization algorithm is presented, which is used to solve graph partitioning problem. Ant colony optimization algorithm is an optimization method based on the principles of self-organization and other useful features of the ants’ behavior. The proposed search system is based on ant colony optimization algorithm with the improved method of the initial distribution and dynamic adjustment of the control search parameters. The experimental results and performance comparison show that the proposed method of very large-scale integration circuit partitioning provides the better search performance over other well known algorithms.

  20. Chemical Kinetics, Heat Transfer, and Sensor Dynamics Revisited in a Simple Experiment

    Science.gov (United States)

    Sad, Maria E.; Sad, Mario R.; Castro, Alberto A.; Garetto, Teresita F.

    2008-01-01

    A simple experiment about thermal effects in chemical reactors is described, which can be used to illustrate chemical reactor models, the determination and validation of their parameters, and some simple principles of heat transfer and sensor dynamics. It is based in the exothermic reaction between aqueous solutions of sodium thiosulfate and…

  1. Models and algorithms for biomolecules and molecular networks

    CERN Document Server

    DasGupta, Bhaskar

    2016-01-01

    By providing expositions to modeling principles, theories, computational solutions, and open problems, this reference presents a full scope on relevant biological phenomena, modeling frameworks, technical challenges, and algorithms. * Up-to-date developments of structures of biomolecules, systems biology, advanced models, and algorithms * Sampling techniques for estimating evolutionary rates and generating molecular structures * Accurate computation of probability landscape of stochastic networks, solving discrete chemical master equations * End-of-chapter exercises

  2. Applying Kitaev's algorithm in an ion trap quantum computer

    International Nuclear Information System (INIS)

    Travaglione, B.; Milburn, G.J.

    2000-01-01

    Full text: Kitaev's algorithm is a method of estimating eigenvalues associated with an operator. Shor's factoring algorithm, which enables a quantum computer to crack RSA encryption codes, is a specific example of Kitaev's algorithm. It has been proposed that the algorithm can also be used to generate eigenstates. We extend this proposal for small quantum systems, identifying the conditions under which the algorithm can successfully generate eigenstates. We then propose an implementation scheme based on an ion trap quantum computer. This scheme allows us to illustrate a simple example, in which the algorithm effectively generates eigenstates

  3. An improved Landauer principle with finite-size corrections

    International Nuclear Information System (INIS)

    Reeb, David; Wolf, Michael M

    2014-01-01

    Landauer's principle relates entropy decrease and heat dissipation during logically irreversible processes. Most theoretical justifications of Landauer's principle either use thermodynamic reasoning or rely on specific models based on arguable assumptions. Here, we aim at a general and minimal setup to formulate Landauer's principle in precise terms. We provide a simple and rigorous proof of an improved version of the principle, which is formulated in terms of an equality rather than an inequality. The proof is based on quantum statistical mechanics concepts rather than on thermodynamic argumentation. From this equality version, we obtain explicit improvements of Landauer's bound that depend on the effective size of the thermal reservoir and reduce to Landauer's bound only for infinite-sized reservoirs. (paper)
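
In its textbook inequality form (the baseline that an equality version sharpens), the principle bounds the heat Q dissipated into a reservoir at temperature T by the entropy decrease of the system being reset:

```latex
\beta\, Q \;\ge\; \Delta S \,=\, S(\rho_{\mathrm{initial}}) - S(\rho_{\mathrm{final}}),
\qquad \beta = \frac{1}{k_B T},
```

so erasing one unbiased bit, for which the entropy decrease is ln 2, costs at least Q ≥ k_B T ln 2. Finite-size reservoir corrections tighten this bound, recovering it only in the infinite-reservoir limit.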

  4. Teaching learning based optimization algorithm and its engineering applications

    CERN Document Server

    Rao, R Venkata

    2016-01-01

    Describing a new optimization algorithm, the “Teaching-Learning-Based Optimization (TLBO),” in a clear and lucid style, this book maximizes reader insights into how the TLBO algorithm can be used to solve continuous and discrete optimization problems involving single or multiple objectives. As the algorithm operates on the principle of teaching and learning, where teachers influence the quality of learners’ results, the elitist version of TLBO algorithm (ETLBO) is described along with applications of the TLBO algorithm in the fields of electrical engineering, mechanical design, thermal engineering, manufacturing engineering, civil engineering, structural engineering, computer engineering, electronics engineering, physics and biotechnology. The book offers a valuable resource for scientists, engineers and practitioners involved in the development and usage of advanced optimization algorithms.
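The teacher and learner phases can be sketched as follows (a minimal Python sketch of the standard TLBO update rules, here minimizing a sphere function; an illustration, not code from the book):

```python
import random

def tlbo(f, dim, bounds, pop_size=20, iters=100):
    """Minimal Teaching-Learning-Based Optimization sketch (minimization)."""
    lo, hi = bounds
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    for _ in range(iters):
        # Teacher phase: move every learner toward the best solution,
        # away from the class mean.
        teacher = pop[fit.index(min(fit))]
        mean = [sum(x[d] for x in pop) / pop_size for d in range(dim)]
        for i in range(pop_size):
            tf = random.randint(1, 2)  # teaching factor
            cand = [pop[i][d] + random.random() * (teacher[d] - tf * mean[d])
                    for d in range(dim)]
            cand = [min(max(c, lo), hi) for c in cand]
            if f(cand) < fit[i]:
                pop[i], fit[i] = cand, f(cand)
        # Learner phase: each learner interacts with a random peer.
        for i in range(pop_size):
            j = random.choice([k for k in range(pop_size) if k != i])
            sign = 1 if fit[j] < fit[i] else -1
            cand = [pop[i][d] + sign * random.random() * (pop[j][d] - pop[i][d])
                    for d in range(dim)]
            cand = [min(max(c, lo), hi) for c in cand]
            if f(cand) < fit[i]:
                pop[i], fit[i] = cand, f(cand)
    return min(zip(fit, pop))

best_val, best_x = tlbo(lambda x: sum(v * v for v in x), dim=3, bounds=(-5, 5))
```

Greedy acceptance in both phases makes the sketch elitist in spirit; the ETLBO variant described in the book adds explicit elite preservation.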

  5. ALGORITHMS FOR TETRAHEDRAL NETWORK (TEN) GENERATION

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    The Tetrahedral Network (TEN) is a powerful 3-D vector structure in GIS, with advantages such as a simple structure, fast topological relation processing and rapid visualization. The difficulty in applying TEN is the automatic creation of its data structure. Although a raster algorithm has been introduced by some authors, problems of accuracy, memory requirement, speed and integrity remain. In this paper, the raster algorithm is completed and a vector algorithm is presented, after a 3-D data model and the structure of TEN have been introduced. Finally, experiments, conclusions and future work are discussed.

  6. A simple technique to increase profits in wood products marketing

    Science.gov (United States)

    George B. Harpole

    1971-01-01

    Mathematical models can be used to solve quickly some simple day-to-day marketing problems. This note explains how a sawmill production manager, who has an essentially fixed-capacity mill, can solve several optimization problems by using pencil and paper, a forecast of market prices, and a simple algorithm. One such problem is to maximize profits in an operating period...

  7. Fast algorithm of track detection

    International Nuclear Information System (INIS)

    Nehrguj, B.

    1980-01-01

    A fast algorithm of variable-slope histograms is proposed, which allows a considerable reduction of computer memory size and is quite simple to carry out. Corresponding FORTRAN subprograms, giving a threefold speed gain, have been included in the spiral reader data handling software

  8. The Design of SimpleITK

    Directory of Open Access Journals (Sweden)

    Bradley Christopher Lowekamp

    2013-12-01

    Full Text Available SimpleITK is a new interface to the Insight Segmentation and Registration Toolkit (ITK) designed to facilitate rapid prototyping, education and scientific activities via high-level programming languages. ITK is a templated C++ library of image processing algorithms and frameworks for biomedical and other applications, and it was designed to be generic, flexible and extensible. Initially, ITK provided a direct wrapping interface to languages such as Python and Tcl through the WrapITK system. Unlike WrapITK, which exposed ITK's complex templated interface, SimpleITK was designed to provide an easy-to-use and simplified interface to ITK's algorithms. It includes procedural methods, hides ITK's demand-driven pipeline, and provides a template-less layer. SimpleITK also provides practical conveniences such as binary distribution packages and overloaded operators. Our user-friendly design goals dictated a departure from the direct interface wrapping approach of WrapITK towards a new facade class structure that only exposes the required functionality, hiding ITK's extensive template use. Internally, SimpleITK utilizes a manual description of each filter with code generation and advanced C++ meta-programming to provide the higher-level interface, bringing the capabilities of ITK to a wider audience. SimpleITK is licensed as open source software under the Apache License Version 2.0, and more information about downloading it can be found at http://www.simpleitk.org.

  9. Implementation on Landsat Data of a Simple Cloud Mask Algorithm Developed for MODIS Land Bands

    Science.gov (United States)

    Oreopoulos, Lazaros; Wilson, Michael J.; Varnai, Tamas

    2010-01-01

    This letter assesses the performance on Landsat-7 images of a modified version of a cloud masking algorithm originally developed for clear-sky compositing of Moderate Resolution Imaging Spectroradiometer (MODIS) images at northern mid-latitudes. While data from recent Landsat missions include measurements at thermal wavelengths, and such measurements are also planned for the next mission, thermal tests are not included in the suggested algorithm in its present form to maintain greater versatility and ease of use. To evaluate the masking algorithm we take advantage of the availability of manual (visual) cloud masks developed at USGS for the collection of Landsat scenes used here. As part of our evaluation we also include the Automated Cloud Cover Assessment (ACCA) algorithm that includes thermal tests and is used operationally by the Landsat-7 mission to provide scene cloud fractions, but no cloud masks. We show that the suggested algorithm can perform about as well as ACCA both in terms of scene cloud fraction and pixel-level cloud identification. Specifically, we find that the algorithm gives an error of 1.3% for the scene cloud fraction of 156 scenes, and a root mean square error of 7.2%, while it agrees with the manual mask for 93% of the pixels, figures very similar to those from ACCA (1.2%, 7.1%, 93.7%).

  10. A NEW HYBRID GENETIC ALGORITHM FOR VERTEX COVER PROBLEM

    OpenAIRE

    UĞURLU, Onur

    2015-01-01

    The minimum vertex cover problem belongs to the class of NP-complete graph theoretical problems. This paper presents a hybrid genetic algorithm to solve the minimum vertex cover problem. It is shown that when a local optimization technique is added to a genetic algorithm to form a hybrid genetic algorithm, it gives higher-quality solutions than a simple genetic algorithm. Also, a new mutation operator has been developed especially for minimum verte...

  11. Empirical study of parallel LRU simulation algorithms

    Science.gov (United States)

    Carr, Eric; Nicol, David M.

    1994-01-01

    This paper reports on the performance of five parallel algorithms for simulating a fully associative cache operating under the LRU (Least-Recently-Used) replacement policy. Three of the algorithms are SIMD, and are implemented on the MasPar MP-2 architecture. Two other algorithms are parallelizations of an efficient serial algorithm on the Intel Paragon. One SIMD algorithm is quite simple, but its cost is linear in the cache size. The two other SIMD algorithms are more complex but have costs that are independent of the cache size. Both the second and third SIMD algorithms compute all stack distances; the second SIMD algorithm is completely general, whereas the third presumes and takes advantage of bounds on the range of reference tags. Both MIMD algorithms implemented on the Paragon are general and compute all stack distances; they differ in one step that may affect their respective scalability. We assess the strengths and weaknesses of these algorithms as a function of problem size and characteristics, and compare their performance on traces derived from execution of three SPEC benchmark programs.
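The serial stack-distance computation that these parallel algorithms reproduce can be sketched as follows (our illustration, not the paper's code):

```python
def lru_stack_distances(trace):
    """Compute the LRU stack distance of each reference in a trace.

    Distance d means the item was the d-th most recently used (1-based);
    None marks a cold miss. A reference hits in any LRU cache whose
    size is at least its stack distance.
    """
    stack = []   # most recently used item at index 0
    dists = []
    for ref in trace:
        if ref in stack:
            d = stack.index(ref) + 1
            stack.remove(ref)
            dists.append(d)
        else:
            dists.append(None)   # first touch: infinite distance
        stack.insert(0, ref)     # promote to most recently used
    return dists

# e.g. lru_stack_distances("abcba") -> [None, None, None, 2, 3]
```

The linear search in `stack.index` is what makes the naive version slow; the parallel algorithms in the paper distribute exactly this depth computation.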

  12. Principles of speech coding

    CERN Document Server

    Ogunfunmi, Tokunbo

    2010-01-01

    It is becoming increasingly apparent that all forms of communication, including voice, will be transmitted through packet-switched networks based on the Internet Protocol (IP). Therefore, the design of modern devices that rely on speech interfaces, such as cell phones and PDAs, requires a complete and up-to-date understanding of the basics of speech coding. Outlines key signal processing algorithms used to mitigate impairments to speech quality in VoIP networks. Offering a detailed yet easily accessible introduction to the field, Principles of Speech Coding provides an in-depth examination of the

  13. Turning Simulation into Estimation: Generalized Exchange Algorithms for Exponential Family Models.

    Directory of Open Access Journals (Sweden)

    Maarten Marsman

    Full Text Available The Single Variable Exchange algorithm is based on a simple idea: any model that can be simulated can be estimated by producing draws from the posterior distribution. We build on this simple idea by framing the Exchange algorithm as a mixture of Metropolis transition kernels and propose strategies that automatically select the more efficient transition kernels. In this manner we achieve significant improvements in convergence rate and autocorrelation of the Markov chain without relying on more than being able to simulate from the model. Our focus is on statistical models in the Exponential Family, and we use two simple models from educational measurement to illustrate the contribution.
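The core exchange move can be sketched on a toy one-dimensional Ising chain, where "perfect simulation" is done by brute-force enumeration (our illustration, not the authors' code; the point is that the intractable normalizing constants cancel in the acceptance ratio):

```python
import itertools
import math
import random

def suff_stat(s):
    # Sufficient statistic of a 1D Ising chain: sum of neighbor products.
    return sum(a * b for a, b in zip(s, s[1:]))

def simulate(theta, n):
    # Exact draw from p(s|theta) ∝ exp(theta * S(s)), by enumerating all
    # 2^n states (a stand-in for "any model that can be simulated").
    states = list(itertools.product((-1, 1), repeat=n))
    weights = [math.exp(theta * suff_stat(s)) for s in states]
    return random.choices(states, weights=weights)[0]

def single_variable_exchange(y, n, iters=2000, step=0.5, theta0=0.0):
    """SVE sampler for the posterior of theta given data y (flat prior,
    symmetric Gaussian proposal). The auxiliary draw x ~ p(.|theta')
    makes the normalizing constants cancel: the log acceptance ratio
    is (theta' - theta) * (S(y) - S(x))."""
    theta, chain = theta0, []
    sy = suff_stat(y)
    for _ in range(iters):
        theta_p = theta + random.gauss(0, step)
        x = simulate(theta_p, n)                    # auxiliary variable
        log_acc = (theta_p - theta) * (sy - suff_stat(x))
        if math.log(random.random()) < log_acc:
            theta = theta_p
        chain.append(theta)
    return chain

y = (1, 1, 1, 1, -1, -1, 1, 1)      # toy "observed" configuration
chain = single_variable_exchange(y, n=len(y))
```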

  14. Global structural optimizations of surface systems with a genetic algorithm

    International Nuclear Information System (INIS)

    Chuang, Feng-Chuan

    2005-01-01

    Global structural optimizations with a genetic algorithm were performed for atomic cluster and surface systems including aluminum atomic clusters, Si magic clusters on the Si(111) 7 x 7 surface, silicon high-index surfaces, and Ag-induced Si(111) reconstructions. First, the global structural optimizations of neutral aluminum clusters Al_n (n up to 23) were performed using a genetic algorithm coupled with a tight-binding potential. Second, a genetic algorithm in combination with tight-binding and first-principles calculations was used to study the structures of magic clusters on the Si(111) 7 x 7 surface. Extensive calculations show that the magic cluster observed in scanning tunneling microscopy (STM) experiments consists of eight Si atoms. Simulated STM images of the Si magic cluster exhibit a ring-like feature similar to STM experiments. Third, a genetic algorithm coupled with a highly optimized empirical potential was used to determine the lowest energy structure of high-index semiconductor surfaces. The lowest energy structures of Si(105) and Si(114) were determined successfully. The results for Si(105) and Si(114) are reported within the framework of the highly optimized empirical potential and first-principles calculations. Finally, a genetic algorithm coupled with Si and Ag tight-binding potentials was used to search for Ag-induced Si(111) reconstructions at various Ag and Si coverages. The optimized structural models of √3 x √3, 3 x 1, and 5 x 2 phases are reported using first-principles calculations. A novel model is found to have lower surface energy than the proposed double-honeycomb chained (DHC) model both for Au/Si(111) 5 x 2 and Ag/Si(111) 5 x 2 systems
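The global-optimization loop common to these studies can be sketched generically (here minimizing the Rastrigin test function in place of a tight-binding or first-principles total energy; an illustration, not the author's code):

```python
import math
import random

def rastrigin(x):
    # Toy multimodal "energy surface"; in the cited work the fitness would
    # be a tight-binding or first-principles total energy of a structure.
    return 10 * len(x) + sum(v * v - 10 * math.cos(2 * math.pi * v) for v in x)

def genetic_minimize(f, dim, bounds, pop_size=30, gens=200,
                     mut_rate=0.1, mut_scale=0.3):
    """Real-coded GA: truncation selection, uniform crossover,
    Gaussian mutation."""
    lo, hi = bounds
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=f)
        survivors = pop[:pop_size // 2]          # keep the fitter half
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            child = [random.choice(pair) for pair in zip(a, b)]  # crossover
            child = [min(max(v + random.gauss(0, mut_scale), lo), hi)
                     if random.random() < mut_rate else v
                     for v in child]                              # mutation
            children.append(child)
        pop = survivors + children
    return min(pop, key=f)

best = genetic_minimize(rastrigin, dim=2, bounds=(-5.12, 5.12))
```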

  15. Goal-Setting Learning Principles: A Lesson From Practitioner

    OpenAIRE

    Zainudin bin Abu Bakar; Lee Mei Yun; NG Siew Keow; Tan Hui Li

    2014-01-01

    One of the prominent theories is the goal-setting theory, which has been widely used in educational settings. It is an approach that can enhance teaching and learning activities in the classroom. This is a report paper on a simple study of the implementation of the goal-setting principle in the classroom. Clinical data from the teaching and learning session were then analysed to address several issues highlighted. It is found that the goal-setting principles, if understood clearly by the te...

  16. Simple Electromagnetic Analysis in Cryptography

    Directory of Open Access Journals (Sweden)

    Zdenek Martinasek

    2012-07-01

    Full Text Available The article describes the main principle and methods of simple electromagnetic analysis (SEMA) and thus provides an overview of the technique. The introductory chapters describe specific SPA attacks using visual inspection of EM traces, template-based attacks and collision attacks. After reading the article, the reader is sufficiently informed of the context of SEMA. Another aim of the article is the practical realization of SEMA, focused on an AES implementation. The visual inspection of the EM trace of AES is performed step by step, and the result is the determination of the secret key's Hamming weight. On the resulting EM trace, the Hamming weight of the secret key (1 to 8) was clearly visible. This method allows a reduction of the number of possible keys for a following brute-force attack.
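To make the keyspace-reduction claim concrete, a small Python sketch (our illustration, not the authors' code) counts the AES-128 keys consistent with leaked per-byte Hamming weights:

```python
from math import comb

def hamming_weight(byte):
    # Number of set bits in one key byte (0..8).
    return bin(byte).count("1")

def remaining_keyspace(hw_per_byte):
    """Number of candidate keys consistent with the observed per-byte
    Hamming weights: the product over bytes of C(8, hw)."""
    space = 1
    for hw in hw_per_byte:
        space *= comb(8, hw)
    return space

# Hypothetical leaked key (the FIPS-197 example key, used only as data).
key = bytes.fromhex("2b7e151628aed2a6abf7158809cf4f3c")
hws = [hamming_weight(b) for b in key]
print(remaining_keyspace(hws), "candidates instead of", 2 ** 128)
```

Even in the worst case (every byte has weight 4) the space shrinks from 2^128 to 70^16 ≈ 2^98, which is the sense in which SEMA "allows reduction of the number of possible keys" before brute force.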

  17. Simple simulation schemes for CIR and Wishart processes

    DEFF Research Database (Denmark)

    Pisani, Camilla

    2013-01-01

    We develop some simple simulation algorithms for CIR and Wishart processes. The main idea is the splitting of their generator into the sum of the square of an Ornstein-Uhlenbeck matrix process and a deterministic process. Joint work with Paolo Baldi, Tor Vergata University, Rome...

  18. A Simple Encryption Algorithm for Quantum Color Image

    Science.gov (United States)

    Li, Panchi; Zhao, Ya

    2017-06-01

    In this paper, a simple encryption scheme for quantum color images is proposed. Firstly, a color image is transformed into a quantum superposition state by employing NEQR (novel enhanced quantum representation), where the R, G, B values of every pixel in a 24-bit RGB true color image are represented by 24 single-qubit basic states, with 8 qubits per value. Then, these 24 qubits are respectively transformed from a basic state into a balanced superposition state by employing controlled rotation gates. At this point, the gray-scale values of R, G, B of every pixel are in a balanced superposition of 2^24 multi-qubit basic states. After measuring, the whole image is uniform white noise, which does not provide any information. Decryption is the reverse process of encryption. The experimental results on a classical computer show that the proposed encryption scheme has better security.

  19. Algorithmic Mechanism Design of Evolutionary Computation.

    Science.gov (United States)

    Pei, Yan

    2015-01-01

    We consider algorithmic design, enhancement, and improvement of evolutionary computation as a mechanism design problem. All individuals or several groups of individuals can be considered as self-interested agents. The individuals in evolutionary computation can manipulate parameter settings and operations by satisfying their own preferences, which are defined by an evolutionary computation algorithm designer, rather than by following a fixed algorithm rule. Evolutionary computation algorithm designers or self-adaptive methods should construct proper rules and mechanisms for all agents (individuals) to conduct their evolution behaviour correctly in order to achieve the desired and preset objective(s). As a case study, we propose a formal framework for parameter setting, strategy selection, and algorithmic design of evolutionary computation by considering the Nash strategy equilibrium of a mechanism design in the search process. The evaluation results demonstrate the efficiency of the framework. This primary principle can be implemented in any evolutionary computation algorithm that needs to consider strategy selection issues in its optimization process. The final objective of our work is to treat evolutionary computation design as an algorithmic mechanism design problem and establish its fundamental aspects from this perspective. This paper is the first step towards achieving this objective by implementing a strategy equilibrium solution (such as a Nash equilibrium) in an evolutionary computation algorithm.

  20. Key Distribution and Changing Key Cryptosystem Based on Phase Retrieval Algorithm and RSA Public-Key Algorithm

    Directory of Open Access Journals (Sweden)

    Tieyu Zhao

    2015-01-01

    Full Text Available The optical image encryption has attracted more and more researchers' attention, and various encryption schemes have been proposed. In existing optical cryptosystems, the phase functions or images are usually used as the encryption keys, and it is difficult to use a traditional public-key algorithm (such as RSA, ECC, etc.) to complete the transfer of large numerical keys. In this paper, we propose a key distribution scheme based on the phase retrieval algorithm and the RSA public-key algorithm, which solves the problem of key distribution in optical image encryption systems. Furthermore, we also propose a novel image encryption system based on the key distribution principle. In the system, different keys can be used in every encryption process, which greatly improves the security of the system.
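The RSA leg of such a hybrid scheme can be sketched with textbook RSA (toy parameters, no padding; purely illustrative, not the paper's system, in which the transported number would seed the phase-retrieval cipher):

```python
import random

def gcd(a, b):
    while b:
        a, b = b, a % b
    return a

def textbook_rsa_demo():
    # Toy parameters; real systems use >= 2048-bit moduli and OAEP padding.
    p, q = 61, 53
    n, phi = p * q, (p - 1) * (q - 1)
    e = 17
    assert gcd(e, phi) == 1
    d = pow(e, -1, phi)                    # private exponent (Python 3.8+)

    session_key = random.randrange(2, n)   # numeric key for the optical cipher
    cipher = pow(session_key, e, n)        # sender: encrypt with public (e, n)
    recovered = pow(cipher, d, n)          # receiver: decrypt with private d
    return session_key, recovered

k, r = textbook_rsa_demo()
assert k == r
```

The asymmetric channel carries only the numeric session key; the bulk image data is then protected by the (symmetric-style) optical encryption, which is exactly the division of labor the paper's key-distribution scheme exploits.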

  1. Modeling of diatomic molecule using the Morse potential and the Verlet algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Fidiani, Elok [Department of Physics, Parahyangan Catholic University, Bandung-Jawa Barat (Indonesia)

    2016-03-11

    Performing molecular modeling usually uses special software for Molecular Dynamics (MD) such as GROMACS, NAMD or JMOL. Molecular dynamics is a computational method to calculate the time-dependent behavior of a molecular system. In this work, MATLAB was used as a numerical tool for simple modeling of some diatomic molecules: HCl, H_2 and O_2. MATLAB is matrix-based numerical software; in order to do numerical analysis, all the functions and equations describing the properties of atoms and molecules must be developed manually in MATLAB. In this work, a Morse potential was generated to describe the bond interaction between the two atoms. In order to analyze the simultaneous motion of the molecules, the Verlet algorithm derived from Newton's equations of motion (classical mechanics) was used. Both the Morse potential and the Verlet algorithm were integrated using MATLAB to derive physical properties and the trajectory of the molecules. The data computed by MATLAB are always in the form of a matrix. To visualize them, Visual Molecular Dynamics (VMD) was used. This method is useful for developing and testing some types of interaction on a molecular scale. Besides, it can be very helpful for describing some basic principles of molecular interaction for educational purposes.

  2. Modeling of diatomic molecule using the Morse potential and the Verlet algorithm

    International Nuclear Information System (INIS)

    Fidiani, Elok

    2016-01-01

    Performing molecular modeling usually uses special software for Molecular Dynamics (MD) such as GROMACS, NAMD or JMOL. Molecular dynamics is a computational method to calculate the time-dependent behavior of a molecular system. In this work, MATLAB was used as a numerical tool for simple modeling of some diatomic molecules: HCl, H_2 and O_2. MATLAB is matrix-based numerical software; in order to do numerical analysis, all the functions and equations describing the properties of atoms and molecules must be developed manually in MATLAB. In this work, a Morse potential was generated to describe the bond interaction between the two atoms. In order to analyze the simultaneous motion of the molecules, the Verlet algorithm derived from Newton's equations of motion (classical mechanics) was used. Both the Morse potential and the Verlet algorithm were integrated using MATLAB to derive physical properties and the trajectory of the molecules. The data computed by MATLAB are always in the form of a matrix. To visualize them, Visual Molecular Dynamics (VMD) was used. This method is useful for developing and testing some types of interaction on a molecular scale. Besides, it can be very helpful for describing some basic principles of molecular interaction for educational purposes.
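The Morse-plus-Verlet scheme described in these records can be sketched in Python (velocity Verlet in the relative coordinate; the parameters are illustrative placeholders, not the paper's MATLAB values):

```python
import math

# Morse parameters for a generic diatomic in arbitrary units (illustrative).
D, a, r_e = 0.17, 1.0, 2.4   # well depth, stiffness, equilibrium separation
mu = 1.0                     # reduced mass of the diatomic

def morse_force(r):
    # V(r) = D * (1 - exp(-a*(r - r_e)))**2 ;  F(r) = -dV/dr
    x = math.exp(-a * (r - r_e))
    return -2.0 * D * a * (1.0 - x) * x

def velocity_verlet(r0, v0, dt, steps):
    """Integrate the 1-D relative motion with the velocity Verlet scheme."""
    r, v = r0, v0
    f = morse_force(r)
    traj = [r]
    for _ in range(steps):
        v_half = v + 0.5 * dt * f / mu   # half-step velocity update
        r = r + dt * v_half              # full-step position update
        f = morse_force(r)               # force at the new position
        v = v_half + 0.5 * dt * f / mu   # complete the velocity update
        traj.append(r)
    return traj

# Start slightly stretched beyond r_e with zero velocity: bound oscillation.
traj = velocity_verlet(r0=2.6, v0=0.0, dt=0.01, steps=5000)
```

Starting at r = 2.6 with zero velocity, the bond length oscillates between the two Morse turning points around r_e, which is the behavior one would pass to VMD for visualization.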

  3. Quad-Polarization Transmission for High-Capacity IM/DD Links

    DEFF Research Database (Denmark)

    Estaran Tolosa, Jose Manuel; Castaneda, Mario A. Usuga; Porto da Silva, Edson

    2014-01-01

    We report the first experimental demonstration of IM/DD links using four states of polarization. Fiber-induced polarization rotation is compensated with a simple tracking algorithm operating on the Stokes space. The principle is proven at 128 Gb/s over 2-km SSMF.

  4. A Simple Inquiry-Based Lab for Teaching Osmosis

    Science.gov (United States)

    Taylor, John R.

    2014-01-01

    This simple inquiry-based lab was designed to teach the principle of osmosis while also providing an experience for students to use the skills and practices commonly found in science. Students first design their own experiment using very basic equipment and supplies, which generally results in mixed, but mostly poor, outcomes. Classroom "talk…

  5. Description of a Normal-Force In-Situ Turbulence Algorithm for Airplanes

    Science.gov (United States)

    Stewart, Eric C.

    2003-01-01

    A normal-force in-situ turbulence algorithm for potential use on commercial airliners is described. The algorithm can produce information that can be used to predict hazardous accelerations of airplanes or to aid meteorologists in forecasting weather patterns. The algorithm uses normal acceleration and other measures of the airplane state to approximate the vertical gust velocity. That is, the fundamental, yet simple, relationship between normal acceleration and the change in normal force coefficient is exploited to produce an estimate of the vertical gust velocity. This simple approach is robust and produces a time history of the vertical gust velocity that would be intuitively useful to pilots. With proper processing, the time history can be transformed into the eddy dissipation rate that would be useful to meteorologists. Flight data for a simplified research implementation of the algorithm are presented for a severe turbulence encounter of the NASA ARIES Boeing 757 research airplane. The results indicate that the algorithm has potential for producing accurate in-situ turbulence measurements. However, more extensive tests and analysis are needed with an operational implementation of the algorithm to make comparisons with other algorithms or methods.
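The core normal-force relation can be sketched as follows (a quasi-steady small-angle simplification with assumed illustrative numbers; the flight algorithm described in the paper uses additional airplane-state terms):

```python
def vertical_gust_estimate(delta_az, mass, rho, V, S, CL_alpha):
    """Estimate vertical gust velocity w_g (m/s) from excess normal
    acceleration, inverting the quasi-steady lift relation

        delta_az = 0.5 * rho * V * S * CL_alpha * w_g / mass

    where the gust changes the angle of attack by w_g / V.
    All parameter values below are illustrative assumptions.
    """
    return 2.0 * mass * delta_az / (rho * V * S * CL_alpha)

# e.g. a 0.5 g normal-acceleration bump on a 757-class airplane at cruise
w = vertical_gust_estimate(delta_az=0.5 * 9.81,  # m/s^2
                           mass=90000.0,         # kg
                           rho=0.4,              # kg/m^3 at altitude
                           V=230.0,              # true airspeed, m/s
                           S=185.0,              # wing area, m^2
                           CL_alpha=5.0)         # lift-curve slope, 1/rad
```

With these assumed numbers the estimate comes out on the order of 10 m/s, i.e. a severe gust, which illustrates why the time history would be "intuitively useful to pilots."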

  6. Nuclear detectors. Physical principles of operation

    International Nuclear Information System (INIS)

    Pochet, Th.

    2005-01-01

    Nuclear detection is used in several domains of activity, from physics research to the nuclear industry, the medical and industrial sectors, security, etc. The particles of interest are α, β, X and γ radiation and neutrons. This article treats the basic physical properties of radiation detection, the general characteristics of the different classes of existing detectors and particle/matter interactions: 1 - general considerations; 2 - measurement types and definitions: pulse mode, current mode, definitions; 3 - physical principles of direct detection: introduction and general problem, materials used in detection, simple device, junction semiconductor device, charge generation and transport inside matter, signal generation; 4 - physical principles of indirect detection: introduction, scintillation mechanisms, definition and properties of scintillators. (J.S.)

  7. Transfer-Matrix Monte Carlo Estimates of Critical Points in the Simple Cubic Ising, Planar and Heisenberg Models

    NARCIS (Netherlands)

    Nightingale, M.P.; Blöte, H.W.J.

    1996-01-01

    The principle and the efficiency of the Monte Carlo transfer-matrix algorithm are discussed. Enhancements of this algorithm are illustrated by applications to several phase transitions in lattice spin models. We demonstrate how the statistical noise can be reduced considerably by a similarity

  8. Optimal Power Flow in Islanded Microgrids Using a Simple Distributed Algorithm

    DEFF Research Database (Denmark)

    Sanseverino, Eleonora Riva; Di Silvestre, Maria Luisa; Badalamenti, Romina

    2015-01-01

    In this paper, the problem of distributed minimization of power losses in islanded distribution systems is dealt with. The problem is formulated in a very simple manner and a solution is reached after a few iterations. The considered distribution system, a microgrid, will not need large bandwidth co...

  9. Parallel algorithms

    CERN Document Server

    Casanova, Henri; Robert, Yves

    2008-01-01

    "…The authors of the present book, who have extensive credentials in both research and instruction in the area of parallelism, present a sound, principled treatment of parallel algorithms. … This book is very well written and extremely well designed from an instructional point of view. … The authors have created an instructive and fascinating text. The book will serve researchers as well as instructors who need a solid, readable text for a course on parallelism in computing. Indeed, for anyone who wants an understandable text from which to acquire a current, rigorous, and broad vi

  10. SPHINX--an algorithm for taxonomic binning of metagenomic sequences.

    Science.gov (United States)

    Mohammed, Monzoorul Haque; Ghosh, Tarini Shankar; Singh, Nitin Kumar; Mande, Sharmila S

    2011-01-01

    Compared with composition-based binning algorithms, the binning accuracy and specificity of alignment-based binning algorithms is significantly higher. However, being alignment-based, the latter class of algorithms requires an enormous amount of time and computing resources for binning huge metagenomic datasets. The motivation was to develop a binning approach that can analyze metagenomic datasets as rapidly as composition-based approaches, but nevertheless has the accuracy and specificity of alignment-based algorithms. This article describes a hybrid binning approach (SPHINX) that achieves high binning efficiency by utilizing the principles of both 'composition'- and 'alignment'-based binning algorithms. Validation results with simulated sequence datasets indicate that SPHINX is able to analyze metagenomic sequences as rapidly as composition-based algorithms. Furthermore, the binning efficiency (in terms of accuracy and specificity of assignments) of SPHINX is observed to be comparable with results obtained using alignment-based algorithms. A web server for the SPHINX algorithm is available at http://metagenomics.atc.tcs.com/SPHINX/.

  11. Principles of fluid-structure interaction

    International Nuclear Information System (INIS)

    Schumann, U.; Kernforschungszentrum Karlsruhe G.m.b.H.

    1981-01-01

    Fluid-structure interaction (FSI) is an important physical phenomenon which has attracted significant attention in nuclear reactor safety analysis. Here, simple explanations of the principal effects of FSI are given and illustrated by reference to numerical and experimental results. First, a very simple fluid-structure model is introduced which consists of a spring-supported piston closing a fluid-filled rigid pipe. The motion of the piston and the fluid is approximately described by one degree of freedom, respectively. Depending on the load frequency and material parameters one finds that the coupled system is characterized by virtual masses and stiffnesses or by the inverse properties, which are termed virtual fluidities and compressibilities. Thus the two parts interact as if they are connected in series or in parallel. The two eigenfrequencies of the coupled system enclose the eigenfrequencies of the individual fluid and structure parts. Second, the great importance of Hamilton's principle for derivation of the coupled equations of motion is emphasized. From this principle upper and lower bounds for the effective density of a heterogeneous fluid-solid mixture are deduced. Continuum models for such mixtures contain a virtual density tensor. Finally, we discuss FSI for the case of a loss-of-coolant accident (LOCA) in a pressurized water reactor (PWR) in the first (subcooled) blowdown period. Here, the fluid imposes pressure loadings on internal structures like the core barrel, and the motion of these structures influences the fluid motion. Recent experimental results obtained at the HDR are compared with numerical predictions of the FLUX 2 code. The fair agreement confirms that we have well understood the principal effects of FSI. (orig.) [de

  12. NeatSort - A practical adaptive algorithm

    OpenAIRE

    La Rocca, Marcello; Cantone, Domenico

    2014-01-01

    We present a new adaptive sorting algorithm which is optimal for most disorder metrics and, more importantly, has a simple and quick implementation. On input $X$, our algorithm has a theoretical $\Omega(|X|)$ lower bound and a $\mathcal{O}(|X|\log|X|)$ upper bound, exhibiting remarkable adaptive properties which make it run closer to its lower bound as disorder (computed on different metrics) diminishes. From a practical point of view, NeatSort has proven itself competitive with (and of...
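NeatSort itself is not reproduced here; a natural merge sort illustrates the adaptive principle the abstract describes, with cost approaching linear as the input's disorder diminishes (our sketch, not the authors' algorithm):

```python
def natural_merge_sort(a):
    """Adaptive sort: detect pre-existing sorted runs, then merge them
    pairwise. A fully sorted input is a single run and costs O(n)."""
    if not a:
        return []
    # 1. Split the input into maximal non-decreasing runs.
    runs, run = [], [a[0]]
    for x in a[1:]:
        if x >= run[-1]:
            run.append(x)
        else:
            runs.append(run)
            run = [x]
    runs.append(run)

    # 2. Standard two-way merge of sorted lists.
    def merge(u, v):
        out, i, j = [], 0, 0
        while i < len(u) and j < len(v):
            if u[i] <= v[j]:
                out.append(u[i]); i += 1
            else:
                out.append(v[j]); j += 1
        return out + u[i:] + v[j:]

    # 3. Merge runs pairwise until a single run remains.
    while len(runs) > 1:
        runs = [merge(runs[k], runs[k + 1]) if k + 1 < len(runs) else runs[k]
                for k in range(0, len(runs), 2)]
    return runs[0]
```

The number of merge rounds is logarithmic in the number of runs, not in the input length, which is what ties the cost to the disorder metric rather than to $|X|$ alone.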

  13. A new model and simple algorithms for multi-label mumford-shah problems

    KAUST Repository

    Hong, Byungwoo; Lu, Zhaojin; Sundaramoorthi, Ganesh

    2013-01-01

    is that the underlying variables: the labels and the functions are less coupled than in the original formulation, and the labels can be computed from the functions with more global updates. The resulting algorithms can be tuned to the desired level of locality

  14. Goal-Setting Learning Principles: A Lesson From Practitioner

    Directory of Open Access Journals (Sweden)

    Zainudin bin Abu Bakar

    2014-02-01

    Full Text Available One of the prominent theories is the goal-setting theory, which has been widely used in educational settings. It is an approach that can enhance teaching and learning activities in the classroom. This is a report paper on a simple study of the implementation of the goal-setting principle in the classroom. Clinical data from the teaching and learning session were then analysed to address several issues highlighted. It is found that the goal-setting principles, if understood clearly by the teachers, can enhance teaching and learning activities. Failure to see the needs of the session will diminish the students' interest in learning. It is suggested that goal-setting learning principles could become a powerful aid for teachers in the classroom.

  15. The principle of least action history and physics

    CERN Document Server

    Rojo, Alberto

    2018-01-01

    The principle of least action originates in the idea that, if nature has a purpose, it should follow a minimum or critical path. This simple principle, and its variants and generalizations, applies to optics, mechanics, electromagnetism, relativity, and quantum mechanics, and provides an essential guide to understanding the beauty of physics. This unique text provides an accessible introduction to the action principle across these various fields of physics, and examines its history and fundamental role in science. It includes, with varying levels of mathematical sophistication, explanations from historical sources, discussion of classic papers, and original worked examples. The result is a story that is understandable to those with a modest mathematical background, as well as to researchers and students in physics and the history of physics.

  16. Decoding using back-project algorithm from coded image in ICF

    International Nuclear Information System (INIS)

    Jiang shaoen; Liu Zhongli; Zheng Zhijian; Tang Daoyuan

    1999-01-01

    The principle of coded imaging and its decoding in inertial confinement fusion (ICF) is briefly described. The authors take a ring-aperture microscope as an example and use a back-projection (BP) algorithm to decode the coded image. A decoding program was implemented for numerical simulation. Simulations of two models show that the accuracy of the BP algorithm is high and the reconstruction quality is good, indicating that the BP algorithm is applicable to decoding coded images in ICF experiments
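    The decoding idea can be illustrated with a toy sketch (not the authors' code): a point source imaged through a ring aperture produces a ring-shaped coded image, and back-projection recovers the source by correlating the coded image with the aperture pattern. The grid size, ring radius, and source position below are arbitrary choices for illustration.

```python
import numpy as np

def ring_mask(size, r, width=1.0):
    """Binary ring aperture of radius r centered in a size x size grid."""
    y, x = np.mgrid[:size, :size]
    rho = np.hypot(x - size // 2, y - size // 2)
    return (np.abs(rho - r) <= width).astype(float)

def encode(source, mask):
    """Coded image = source convolved with the aperture (via FFT)."""
    return np.real(np.fft.ifft2(np.fft.fft2(source) * np.fft.fft2(mask)))

def bp_decode(coded, mask):
    """Back-projection = correlation of the coded image with the aperture."""
    return np.real(np.fft.ifft2(np.fft.fft2(coded) * np.conj(np.fft.fft2(mask))))

size = 64
mask = ring_mask(size, r=10)
source = np.zeros((size, size))
source[20, 30] = 1.0                      # single point source

decoded = bp_decode(encode(source, mask), mask)
peak = np.unravel_index(np.argmax(decoded), decoded.shape)
print(tuple(int(i) for i in peak))        # → (20, 30), the source position
```

The peak of the back-projected image lands at the source position because correlating the shifted aperture pattern with itself peaks at the shift.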

  17. A New Approximate Chimera Donor Cell Search Algorithm

    Science.gov (United States)

    Holst, Terry L.; Nixon, David (Technical Monitor)

    1998-01-01

    The objectives of this study were to develop a chimera-based full potential methodology compatible with the OVERFLOW (Euler/Navier-Stokes) chimera flow solver, and to develop a fast donor cell search algorithm compatible with the chimera full potential approach. This work presents a new donor cell search algorithm suitable for use with a chimera-based full potential solver. The algorithm was found to be extremely fast and simple, producing donor cells at rates of up to 60,000 per second.

  18. Local anesthesia selection algorithm in patients with concomitant somatic diseases.

    Science.gov (United States)

    Anisimova, E N; Sokhov, S T; Letunova, N Y; Orekhova, I V; Gromovik, M V; Erilin, E A; Ryazantsev, N A

    2016-01-01

    The paper presents basic principles of local anesthesia selection in patients with concomitant somatic diseases. These principles are: history taking; analysis of drug interactions with local anesthetic and sedation agents; determination of the functional status of the patient; correction of patient anxiety; and dental care with monitoring of hemodynamic parameters. Adhering to this algorithm was found to promote the prevention of urgent conditions in patients in outpatient dentistry.

  19. An improved VSS NLMS algorithm for active noise cancellation

    Science.gov (United States)

    Sun, Yunzhuo; Wang, Mingjiang; Han, Yufei; Zhang, Congyan

    2017-08-01

    In this paper, an improved variable step size (VSS) NLMS algorithm is proposed. NLMS has a fast convergence rate and low steady-state error compared with other traditional adaptive filtering algorithms, but there is a trade-off between convergence speed and steady-state error that limits its performance. The proposed algorithm dynamically changes the step size according to the current error and the iteration count. It has a simple formulation and easily set parameters, and effectively resolves the trade-off in NLMS. Simulation results show that the proposed algorithm simultaneously achieves good tracking ability, a fast convergence rate, and low steady-state error.
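    The abstract does not give the paper's exact step-size rule, so the sketch below uses an assumed stand-in (step grows with the error and decays with the iteration count) inside an otherwise standard NLMS update, applied to a toy system-identification task:

```python
import numpy as np

def vss_nlms(x, d, taps=16, mu_max=1.0, alpha=10.0, beta=1e-4, eps=1e-8):
    """NLMS with an illustrative variable step size (not the paper's rule)."""
    w = np.zeros(taps)
    e = np.zeros(len(x))
    for n in range(taps, len(x)):
        u = x[n - taps + 1:n + 1][::-1]     # most recent samples first
        e[n] = d[n] - w @ u
        # assumed rule: large error -> large step; step decays with iteration n
        mu = mu_max * (1.0 - np.exp(-alpha * e[n] ** 2)) / (1.0 + beta * n)
        w += mu * e[n] * u / (u @ u + eps)  # normalized (NLMS) update
    return w, e

rng = np.random.default_rng(0)
x = rng.standard_normal(5000)
h = np.array([0.7, -0.3, 0.2, 0.1])         # unknown system to identify
d = np.convolve(x, h, mode="full")[:len(x)]
w, e = vss_nlms(x, d, taps=4)
print(np.round(w, 2))                        # ≈ [0.7, -0.3, 0.2, 0.1]
```

Early on the error is large, so the step stays near `mu_max` for fast convergence; as the error shrinks, the step shrinks too, lowering the steady-state misadjustment.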

  20. Principles of broadband switching and networking

    CERN Document Server

    Liew, Soung C

    2010-01-01

    An authoritative introduction to the roles of switching and transmission in broadband integrated services networks Principles of Broadband Switching and Networking explains the design and analysis of switch architectures suitable for broadband integrated services networks, emphasizing packet-switched interconnection networks with distributed routing algorithms. The text examines the mathematical properties of these networks, rather than specific implementation technologies. Although the pedagogical explanations in this book are in the context of switches, many of the fundamenta

  1. Foam: A general purpose Monte Carlo cellular algorithm

    International Nuclear Information System (INIS)

    Jadach, S.

    2003-01-01

    A general-purpose, self-adapting Monte Carlo (MC) algorithm implemented in the program Foam is described. High MC efficiency, that is, a small maximum weight or variance of the MC weight, is achieved by dividing the integration domain into small cells. The cells can be n-dimensional simplices or hyperrectangles. The next cell to be divided, and the position/direction of the division hyperplane, are chosen by the algorithm, which optimizes the ratio of the maximum weight to the average weight or (optionally) the total variance. The algorithm can, in principle, deal with an arbitrary pattern of singularities in the distribution
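    The cellular idea can be sketched in one dimension (this is an illustration of the principle, not the Foam code): repeatedly split the cell that dominates the maximum-weight estimate, then importance-sample across the cells in proportion to their weights.

```python
import random

def integrand(x):
    return 1.0 / (0.01 + (x - 0.5) ** 2)        # sharply peaked at x = 0.5

def cell_max(f, a, b, probes=32):
    return max(f(a + (b - a) * (i + 0.5) / probes) for i in range(probes))

def build_cells(f, n_cells=64):
    cells = [(0.0, 1.0)]
    for _ in range(n_cells - 1):
        # the cell with the largest (max f) * width dominates the maximum
        # weight of naive sampling, so split it at its midpoint
        worst = max(cells, key=lambda c: cell_max(f, c[0], c[1]) * (c[1] - c[0]))
        cells.remove(worst)
        a, b = worst
        cells += [(a, (a + b) / 2), ((a + b) / 2, b)]
    return cells

def foam_like_integrate(f, cells, n=20000, seed=1):
    rng = random.Random(seed)
    weights = [cell_max(f, a, b) * (b - a) for a, b in cells]
    total = sum(weights)
    acc = 0.0
    for _ in range(n):
        i = rng.choices(range(len(cells)), weights=weights)[0]
        a, b = cells[i]
        x = rng.uniform(a, b)
        acc += f(x) * total * (b - a) / weights[i]   # importance-sampling weight
    return acc / n

cells = build_cells(integrand)
est = foam_like_integrate(integrand, cells)
print(round(est, 2))                 # ≈ 27.47 (the exact integral is 20·atan(5))
```

Because the cell structure adapts to the peak, the weight f(x)/max(f) inside each cell stays close to one, which is exactly the small maximum-weight property the abstract describes.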

  2. Evaluation of simple rapid HIV assays and development of national rapid HIV test algorithms in Dar es Salaam, Tanzania.

    Science.gov (United States)

    Lyamuya, Eligius F; Aboud, Said; Urassa, Willy K; Sufi, Jaffer; Mbwana, Judica; Ndugulile, Faustin; Massambu, Charles

    2009-02-18

    Suitable algorithms based on a combination of two or more simple rapid HIV assays have been shown to have diagnostic accuracy comparable to double enzyme-linked immunosorbent assay (ELISA) or double ELISA with Western blot strategies. The aims of this study were to evaluate the performance of five simple rapid HIV assays using whole blood samples from HIV-infected patients, pregnant women, voluntary counseling and testing attendees and blood donors, and to formulate an alternative confirmatory strategy based on rapid HIV testing algorithms suitable for use in Tanzania. Five rapid HIV assays: Determine HIV-1/2 (Inverness Medical), SD Bioline HIV 1/2 3.0 (Standard Diagnostics Inc.), First Response HIV Card 1-2.0 (PMC Medical India Pvt Ltd), HIV1/2 Stat-Pak Dipstick (Chembio Diagnostic Systems, Inc) and Uni-Gold HIV-1/2 (Trinity Biotech) were evaluated between June and September 2006 using 1433 whole blood samples from hospital patients, pregnant women, voluntary counseling and testing attendees and blood donors. All samples that were reactive on any of the five rapid assays, and 10% of non-reactive samples, were tested on a confirmatory Inno-Lia HIV I/II immunoblot assay (Immunogenetics). Three hundred and ninety samples were confirmed HIV-1 antibody positive, while 1043 were HIV negative. The sensitivity at initial testing of Determine, SD Bioline and Uni-Gold was 100% (95% CI; 99.1-100), while First Response and Stat-Pak had sensitivities of 99.5% (95% CI; 98.2-99.9) and 97.7% (95% CI; 95.7-98.9), respectively, which increased to 100% (95% CI; 99.1-100) on repeat testing. The initial specificity of the Uni-Gold assay was 100% (95% CI; 99.6-100), while specificities were 99.6% (95% CI; 99-99.9), 99.4% (95% CI; 98.8-99.7), 99.6% (95% CI; 99-99.9) and 99.8% (95% CI; 99.3-99.9) for Determine, SD Bioline, First Response and Stat-Pak assays, respectively. No sample was concordantly false positive on the Uni-Gold, Determine and SD Bioline assays. An
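    The sensitivity figures above can be recomputed from the counts. The paper's intervals appear to be exact (Clopper-Pearson) ones; the Wilson score interval below is a close, dependency-free approximation used here only to illustrate the calculation.

```python
import math

def wilson_ci(successes, n, z=1.96):
    """95% Wilson score confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return center - half, center + half

# Uni-Gold at initial testing: all 390 confirmed positives detected
sens = 390 / 390
lo, hi = wilson_ci(390, 390)
print(f"sensitivity {sens:.1%}, 95% CI {lo:.1%}-{min(hi, 1.0):.1%}")
```

Even at 100% observed sensitivity, the lower confidence bound stays just below 100% because only 390 positives were tested, matching the 99.1-100 interval reported.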

  3. A simple iterative independent component analysis algorithm for vibration source signal identification of complex structures

    Directory of Open Access Journals (Sweden)

    Dong-Sup Lee

    2015-01-01

    Full Text Available Independent Component Analysis (ICA, one of the blind source separation methods, can be applied to extract unknown source signals from received signals alone. This is accomplished by finding statistical independence of signal mixtures, and it has been successfully applied to myriad fields such as medical science, image processing, and numerous others. Nevertheless, there are inherent problems that have been reported when using this technique: instability and invalid ordering of separated signals, particularly when using a conventional ICA technique in vibratory source signal identification of complex structures. In this study, a simple iterative algorithm based on conventional ICA is proposed to mitigate these problems. The proposed method extracts more stable source signals in a valid order through an iterative process that reorders the extracted mixing matrix to reconstruct the finally converged source signals, referring to the magnitudes of the correlation coefficients between the intermediately separated signals and the signals measured on or near the sources. In order to review the problems of the conventional ICA technique and to validate the proposed method, numerical analyses have been carried out for a virtual response model and a 30 m class submarine model. Moreover, in order to investigate the applicability of the proposed method to a real problem with a complex structure, an experiment has been carried out on a scaled submarine mockup. The results show that the proposed method can resolve the inherent problems of a conventional ICA technique.
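    The reordering step the paper adds on top of ICA can be sketched in numpy (this is an illustration, not the paper's implementation): separate the mixtures with a basic FastICA-style routine, then reorder and re-sign each component by its correlation with a reference signal, standing in for the "signals measured on or near sources".

```python
import numpy as np

def fast_ica(X, iters=200, seed=0):
    """Symmetric FastICA with tanh nonlinearity; X is (sources, samples)."""
    X = X - X.mean(axis=1, keepdims=True)
    d, E = np.linalg.eigh(np.cov(X))            # whitening transform
    Xw = (E / np.sqrt(d)).T @ X
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[0], X.shape[0]))
    for _ in range(iters):
        G = np.tanh(W @ Xw)
        W_new = G @ Xw.T / Xw.shape[1] - np.diag((1 - G**2).mean(axis=1)) @ W
        u, _, vt = np.linalg.svd(W_new)         # symmetric decorrelation
        W = u @ vt
    return W @ Xw

def reorder_by_reference(S, refs):
    """Match each reference to the component with the largest |correlation|."""
    out = np.empty_like(S)
    used = set()
    for i, r in enumerate(refs):
        corr = [0 if j in used else np.corrcoef(r, s)[0, 1]
                for j, s in enumerate(S)]
        j = int(np.argmax(np.abs(corr)))
        used.add(j)
        out[i] = np.sign(corr[j]) * S[j]        # fix the arbitrary sign as well
    return out

t = np.linspace(0, 8, 4000)
s1, s2 = np.sin(2 * np.pi * t), np.sign(np.sin(2 * np.pi * 3 * t))
S_true = np.vstack([s1, s2])
A = np.array([[1.0, 0.6], [0.4, 1.0]])          # mixing matrix
S_est = reorder_by_reference(fast_ica(A @ S_true), S_true)
print(np.round([np.corrcoef(S_true[i], S_est[i])[0, 1] for i in range(2)], 2))
```

Without the reordering step, ICA returns the components in an arbitrary order and sign; correlating against reference measurements is what pins both down.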

  4. The principles of electronic and electromechanic power conversion a systems approach

    CERN Document Server

    Ferreira, Braham

    2013-01-01

    Teaching the principles of power electronics and electromechanical power conversion through a unique top-down systems approach, The Principles of Electromechanical Power Conversion takes the role and system context of power conversion functions as the starting point. Following this approach, the text defines the building blocks of the system and describes the theory of how they exchange power with each other. The authors introduce a modern, simple approach to machines, which makes the principles of field oriented control and space vector theory approachable to undergraduate students as well as

  5. Sequential bayes estimation algorithm with cubic splines on uniform meshes

    International Nuclear Information System (INIS)

    Hossfeld, F.; Mika, K.; Plesser-Walk, E.

    1975-11-01

    After outlining the principles of some recent developments in parameter estimation, a sequential numerical algorithm for generalized curve-fitting applications is presented, combining results from statistical estimation concepts and spline analysis. Due to its recursive nature, the algorithm can be used most efficiently in online experimentation. Using computer-simulated and experimental data, the efficiency and flexibility of this sequential estimation procedure are extensively demonstrated. (orig.) [de
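    The sequential idea can be sketched as a recursive (Kalman/RLS-style) Bayes update of basis-function coefficients, processing one data point at a time so the fit can run online. A cubic polynomial basis stands in here for the paper's cubic splines on uniform meshes; the prior and noise variances are illustrative choices.

```python
import numpy as np

def basis(x):
    return np.array([1.0, x, x**2, x**3])

def sequential_fit(xs, ys, prior_var=1e6, noise_var=0.01):
    theta = np.zeros(4)                   # prior mean of the coefficients
    P = prior_var * np.eye(4)             # prior covariance
    for x, y in zip(xs, ys):
        phi = basis(x)
        k = P @ phi / (phi @ P @ phi + noise_var)   # gain for this data point
        theta = theta + k * (y - phi @ theta)       # sequential Bayes update
        P = P - np.outer(k, phi) @ P
    return theta

rng = np.random.default_rng(2)
xs = np.linspace(-1, 1, 400)
true = np.array([0.5, -1.0, 2.0, 0.3])
ys = np.array([basis(x) @ true for x in xs]) + 0.1 * rng.standard_normal(400)
theta = sequential_fit(xs, ys)
print(np.round(theta, 1))                 # ≈ [0.5, -1.0, 2.0, 0.3]
```

Each new measurement refines the coefficient estimate and its covariance without refitting the whole data set, which is what makes the recursive form suited to online experimentation.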

  6. BIBLIO: A Reprint File Management Algorithm

    Science.gov (United States)

    Zelnio, Robert N.; And Others

    1977-01-01

    The development of a simple computer algorithm designed for use by the individual educator or researcher in maintaining and searching reprint files is reported. Called BIBLIO, the system is inexpensive and easy to operate and maintain without sacrificing flexibility and utility. (LBH)

  7. Optimization algorithms intended for self-tuning feedwater heater model

    International Nuclear Information System (INIS)

    Czop, P; Barszcz, T; Bednarz, J

    2013-01-01

    This work presents a self-tuning feedwater heater model, continuing previous work on the first-principle gray-box methodology applied to diagnostics and condition assessment of power plant components. The objective is to review and benchmark optimization algorithms with respect to the time required to achieve the best model fit to operational power plant data. The paper recommends the most effective algorithm for use in the model adjustment process.

  8. Basic flight mechanics a simple approach without equations

    CERN Document Server

    Tewari, Ashish

    2016-01-01

    This book presents flight mechanics of aircraft, spacecraft, and rockets to technical and non-technical readers in simple terms and based purely on physical principles. Adapting an accessible and lucid writing style, the book retains the scientific authority and conceptual substance of an engineering textbook without requiring a background in physics or engineering mathematics. Professor Tewari explains relevant physical principles of flight by straightforward examples and meticulous diagrams and figures. Important aspects of both atmospheric and space flight mechanics are covered, including performance, stability and control, aeroelasticity, orbital mechanics, and altitude control. The book describes airplanes, gliders, rotary wing and flapping wing flight vehicles, rockets, and spacecraft and visualizes the essential principles using detailed illustration. It is an ideal resource for managers and technicians in the aerospace industry without engineering degrees, pilots, and anyone interested in the mechanic...

  9. Fast Algorithms for Fitting Active Appearance Models to Unconstrained Images

    NARCIS (Netherlands)

    Tzimiropoulos, Georgios; Pantic, Maja

    2016-01-01

    Fitting algorithms for Active Appearance Models (AAMs) are usually considered to be robust but slow or fast but less able to generalize well to unseen variations. In this paper, we look into AAM fitting algorithms and make the following orthogonal contributions: We present a simple "project-out"

  10. Basic principles of forest fuel reduction treatments

    Science.gov (United States)

    James K. Agee; Carl N. Skinner

    2005-01-01

    Successful fire exclusion in the 20th century has created severe fire problems across the West. Not every forest is at risk of uncharacteristically severe wildfire, but drier forests are in need of active management to mitigate fire hazard. We summarize a set of simple principles important to address in fuel reduction treatments: reduction of surface fuels, increasing...

  11. Development of an inter-layer solute transport algorithm for SOLTR computer program. Part 1. The algorithm

    International Nuclear Information System (INIS)

    Miller, I.; Roman, K.

    1979-12-01

    In order to perform studies of the influence of regional groundwater flow systems on the long-term performance of potential high-level nuclear waste repositories, it was determined that an adequate computer model would have to consider the full three-dimensional flow system. Golder Associates' SOLTR code, while three-dimensional, has an overly simple algorithm for simulating the passage of radionuclides from one aquifer to another above or below it. Part 1 of this report describes the algorithm developed to provide SOLTR with an improved capability for simulating interaquifer transport

  12. Fractal Hypothesis of the Pelagic Microbial Ecosystem—Can Simple Ecological Principles Lead to Self-Similar Complexity in the Pelagic Microbial Food Web?

    Science.gov (United States)

    Våge, Selina; Thingstad, T. Frede

    2015-01-01

    Trophic interactions are highly complex, and modern sequencing techniques reveal enormous biodiversity across multiple scales in marine microbial communities. Within the chemically and physically relatively homogeneous pelagic environment, this calls for an explanation beyond spatial and temporal heterogeneity. Based on observations of simple parasite-host and predator-prey interactions occurring at different trophic levels and levels of phylogenetic resolution, we present a theoretical perspective on this enormous biodiversity, discussing in particular self-similar aspects of pelagic microbial food web organization. Fractal methods have been used to describe a variety of natural phenomena, with studies of habitat structures being one application in ecology. In contrast to mathematical fractals, where the pattern-generating rules are readily known, identifying the mechanisms that lead to natural fractals is not straightforward. Here we put forward the hypothesis that trophic interactions between pelagic microbes may be organized in a fractal-like manner, with the emergent network resembling the structure of the Sierpinski triangle. We discuss a mechanism that could underlie the formation of repeated patterns at different trophic levels, and how this may help explain the characteristic biomass size-spectra that hint at scale-invariant properties of the pelagic environment. If the idea of simple underlying principles leading to a fractal-like organization of the pelagic food web could be formalized, this would extend an ecologist's mindset on how biological complexity can be accounted for. It may furthermore benefit ecosystem modeling by facilitating adequate model resolution across multiple scales. PMID:26648929
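    As a purely illustrative aside, the Sierpinski triangle the hypothesis alludes to is itself generated by one simple repeated rule (the "chaos game"), a toy analogue of simple interaction rules repeating across scales:

```python
import random

def chaos_game(n=20000, seed=0):
    rng = random.Random(seed)
    vertices = [(0.0, 0.0), (1.0, 0.0), (0.5, 0.866)]
    x, y = 0.25, 0.25
    points = []
    for _ in range(n):
        vx, vy = rng.choice(vertices)     # jump halfway toward a random vertex
        x, y = (x + vx) / 2, (y + vy) / 2
        points.append((x, y))
    return points

pts = chaos_game()
# every generated point stays inside the enclosing triangle
print(all(0 <= x <= 1 and 0 <= y <= 0.866 for x, y in pts))   # → True
```

Plotting `pts` reveals the self-similar triangle-within-triangle structure, although no step of the rule mentions triangles at all, which is the sense in which simple rules can yield fractal-like organization.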

  13. Evaluation of train-speed control algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Slavik, M.M. [BKS Advantech (Pty.) Ltd., Pretoria (South Africa)

    2000-07-01

    A relatively simple and fast simulator has been developed and used for preliminary testing of train cruise-control algorithms. The simulation runs in software on a PC. The simulator is used to gauge the consequences and feasibility of a cruise-control strategy prior to more elaborate testing and evaluation. The tool was used to design and pre-test a train cruise-control algorithm called NSS, which does not require knowledge of the exact train mass, vertical alignment, or actual braking force; only continuous measurements of the train's speed and electrical current are required. With this modest input, the NSS algorithm effected speed changes smoothly and efficiently over a wide range of operating conditions. (orig.)
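    The abstract does not publish NSS itself, but the mass-free idea can be illustrated with a hedged toy: a feedback law on measured speed alone regulates trains of very different (unknown) masses to the same equilibrium speed. The gains, force limit, and Davis-like resistance below are all assumed illustrative values.

```python
def simulate(mass, target, steps=3000, dt=0.1, gain=20000.0, f_max=200e3):
    """Toy train: proportional traction control using only measured speed."""
    v = 0.0
    for _ in range(steps):
        force = max(-f_max, min(f_max, gain * (target - v)))  # P-control on speed
        drag = 5000.0 + 30.0 * v * v      # assumed resistance (Davis-like)
        v += dt * (force - drag) / mass   # the controller never reads `mass`
        v = max(v, 0.0)
    return v

# the same controller settles at the same speed for very different train masses
for m in (200e3, 400e3):
    print(round(simulate(m, target=20.0), 1))   # ≈ 19.2 m/s in both cases
```

Mass only changes how quickly the equilibrium is reached, not where it lies, which is why a speed-feedback law can work without knowing the train mass; the small steady-state offset is the usual cost of pure proportional control.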

  14. Hi-Plex for Simple, Accurate, and Cost-Effective Amplicon-based Targeted DNA Sequencing.

    Science.gov (United States)

    Pope, Bernard J; Hammet, Fleur; Nguyen-Dumont, Tu; Park, Daniel J

    2018-01-01

    Hi-Plex is a suite of methods to enable simple, accurate, and cost-effective highly multiplex PCR-based targeted sequencing (Nguyen-Dumont et al., Biotechniques 58:33-36, 2015). At its core is the principle of using gene-specific primers (GSPs) to "seed" (or target) the reaction and universal primers to "drive" the majority of the reaction. In this manner, effects on amplification efficiencies across the target amplicons can, to a large extent, be restricted to early seeding cycles. Product sizes are defined within a relatively narrow range to enable high-specificity size selection, replication uniformity across target sites (including in the context of fragmented input DNA such as that derived from fixed tumor specimens (Nguyen-Dumont et al., Biotechniques 55:69-74, 2013; Nguyen-Dumont et al., Anal Biochem 470:48-51, 2015), and application of high-specificity genetic variant calling algorithms (Pope et al., Source Code Biol Med 9:3, 2014; Park et al., BMC Bioinformatics 17:165, 2016). Hi-Plex offers a streamlined workflow that is suitable for testing large numbers of specimens without the need for automation.

  15. A Medieval Clock Made out of Simple Materials

    Science.gov (United States)

    Danese, B.; Oss, S.

    2008-01-01

    A cheap replica of the verge-and-foliot clock has been built from simple materials. It is a didactic tool of great power for physics teaching at every stage of schooling, in particular at university level. An account is given of its construction and its working principles, together with motivated examples of a few activities. (Contains 3 tables…

  16. Optimisation combinatoire Theorie et algorithmes

    CERN Document Server

    Korte, Bernhard; Fonlupt, Jean

    2010-01-01

    This book is the French translation of the fourth and final edition of Combinatorial Optimization: Theory and Algorithms, written by two eminent specialists in the field: Bernhard Korte and Jens Vygen of the University of Bonn in Germany. It emphasizes the theoretical aspects of combinatorial optimization as well as efficient, exact problem-solving algorithms, which distinguishes it from the simpler heuristic approaches often described elsewhere. The book contains many concise and elegant proofs of difficult results. Intended for students...

  17. TURING MACHINE AS UNIVERSAL ALGORITHM EXECUTOR AND ITS APPLICATION IN THE PROCESS OF HIGH-SCHOOL STUDENTS` ADVANCED STUDY OF ALGORITHMIZATION AND PROGRAMMING FUNDAMENTALS

    Directory of Open Access Journals (Sweden)

    Oleksandr B. Yashchyk

    2016-05-01

    Full Text Available The article discusses the importance of studying the notion of an algorithm and its formal specification using Turing machines. It identifies Turing's basic hypothesis of the theory of algorithms, reviews research by modern scientists devoted to this issue, and outlines the main principles of the Turing machine as an abstract mathematical model. The article analyzes how including the topic "Study and Application of the Turing Machine as a Universal Algorithm Executor" in an informatics course helps form components of information competence and information culture and develops students' logical thinking.
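    A minimal Turing machine simulator of the kind such a course might use can be written in a few lines: states, a tape alphabet, and a transition table mapping (state, symbol) to (new state, written symbol, head move).

```python
def run_tm(transitions, tape, state="q0", blank="_", max_steps=10000):
    """Run a single-tape Turing machine; returns the non-blank tape contents."""
    tape = dict(enumerate(tape))          # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, blank)
        state, write, move = transitions[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    cells = [tape[i] for i in sorted(tape)]
    return "".join(cells).strip(blank)

# example machine: unary increment — append one '1' to a block of '1's
inc = {
    ("q0", "1"): ("q0", "1", "R"),   # scan right over the 1s
    ("q0", "_"): ("halt", "1", "R"), # write one more 1, then halt
}
print(run_tm(inc, "111"))            # → 1111
```

Students can extend the transition table to build adders or recognizers, which makes the "universal algorithm executor" idea concrete.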

  18. Geometry correction Algorithm for UAV Remote Sensing Image Based on Improved Neural Network

    Science.gov (United States)

    Liu, Ruian; Liu, Nan; Zeng, Beibei; Chen, Tingting; Yin, Ninghao

    2018-03-01

    To address the shortcomings of current geometry correction algorithms for UAV remote sensing images, a new algorithm is proposed that introduces an adaptive genetic algorithm (AGA) and an RBF neural network. Combined with the geometry correction principle for UAV remote sensing images, the AGA-RBF algorithm and its solution steps are presented. Correction accuracy and operational efficiency are improved by optimizing the structure and connection weights of the RBF neural network with AGA and the LMS algorithm, respectively. Experiments show that the AGA-RBF algorithm achieves high correction accuracy, a high running rate, and strong generalization ability.
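    The RBF part of the idea can be sketched in numpy (the AGA tuning of the network structure is omitted here): learn a mapping from distorted image coordinates to corrected ground coordinates from control points via Gaussian RBF interpolation. The synthetic rotation-plus-shift distortion, kernel width, and ridge term are illustrative assumptions.

```python
import numpy as np

def rbf_fit(centers, values, sigma=0.3, ridge=1e-6):
    """Fit Gaussian-RBF weights (one column per output dimension)."""
    d = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=2)
    Phi = np.exp(-(d / sigma) ** 2)
    return np.linalg.solve(Phi + ridge * np.eye(len(centers)), values)

def rbf_predict(centers, weights, pts, sigma=0.3):
    d = np.linalg.norm(pts[:, None, :] - centers[None, :, :], axis=2)
    return np.exp(-(d / sigma) ** 2) @ weights

# synthetic distortion: a slight rotation plus shift stands in for UAV warp
rng = np.random.default_rng(3)
ground = rng.uniform(0, 1, (40, 2))       # control points (ground truth)
c, s = np.cos(0.1), np.sin(0.1)
distorted = ground @ np.array([[c, -s], [s, c]]) + 0.05

w = rbf_fit(distorted, ground)            # learn distorted -> ground mapping
print(np.round(rbf_predict(distorted, w, np.array([[0.5, 0.5]])), 2))
```

At the control points the fitted network reproduces the ground coordinates almost exactly, and between them the smooth RBF basis interpolates the warp; in the paper, AGA additionally searches over the network structure.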

  19. Algorithmic Complexity and Reprogrammability of Chemical Structure Networks

    KAUST Repository

    Zenil, Hector; Kiani, Narsis A.; Shang, Ming-mei; Tegner, Jesper

    2018-01-01

    Here we address the challenge of profiling causal properties and tracking the transformation of chemical compounds from an algorithmic perspective. We explore the potential of applying a computational interventional calculus based on the principles of algorithmic probability to chemical structure networks. We profile the sensitivity of the elements and covalent bonds in a chemical structure network algorithmically, asking whether reprogrammability affords information about thermodynamic and chemical processes involved in the transformation of different compound classes. We arrive at numerical results suggesting a correspondence between some physical, structural and functional properties. Our methods are capable of separating chemical classes that reflect functional and natural differences without considering any information about atomic and molecular properties. We conclude that these methods, with their links to chemoinformatics via algorithmic probability, hold promise for future research.

  2. A simple, remote, video based breathing monitor.

    Science.gov (United States)

    Regev, Nir; Wulich, Dov

    2017-07-01

    Breathing monitors have become the all-important cornerstone of a wide variety of commercial and personal safety applications, ranging from elderly care to baby monitoring. Many such monitors exist on the market, some with vital-sign monitoring capabilities, but none are remote. This paper presents a simple yet efficient real-time method of extracting the subject's breathing rhythm. Points of interest are detected on the subject's body, and the corresponding optical flow is estimated and tracked frame by frame using the well-known Lucas-Kanade algorithm. A generalized likelihood ratio test is then applied to each of the many interest points to detect which is moving in a harmonic fashion. Finally, a spectral estimation algorithm based on Pisarenko harmonic decomposition tracks the harmonic frequency in real time, and a fusion maximum likelihood algorithm optimally estimates the breathing rate using all points considered. The results show a maximal error of 1 BPM between the true breathing rate and the algorithm's calculated rate, based on experiments with two babies and three adults.
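    The Pisarenko step of such a pipeline can be sketched for a single real sinusoid in noise: the eigenvector of the smallest eigenvalue of the 3x3 autocorrelation matrix encodes the frequency. The 2 Hz sample rate below stands in for a decimated camera signal and is an assumption, not a detail from the paper.

```python
import numpy as np

def pisarenko_freq(x, fs):
    """Pisarenko frequency estimate for one real sinusoid in white noise."""
    r = [np.mean(x[:len(x) - k] * x[k:]) for k in range(3)]   # lags 0..2
    R = np.array([[r[0], r[1], r[2]],
                  [r[1], r[0], r[1]],
                  [r[2], r[1], r[0]]])
    _, vecs = np.linalg.eigh(R)
    v = vecs[:, 0]                        # noise-subspace eigenvector
    # for a real sinusoid the eigenvector is symmetric: cos(w) = -v1/(v0+v2)
    w = np.arccos(np.clip(-v[1] / (v[0] + v[2]), -1.0, 1.0))
    return w * fs / (2 * np.pi)

fs = 2.0                                  # frames/s after assumed decimation
t = np.arange(0, 120, 1 / fs)
rng = np.random.default_rng(4)
breathing = np.sin(2 * np.pi * 0.25 * t) + 0.1 * rng.standard_normal(len(t))
print(round(pisarenko_freq(breathing, fs) * 60, 1))   # ≈ 15 breaths per minute
```

Because only three autocorrelation lags and one eigendecomposition are needed, the estimate is cheap enough to update on every frame, which suits the real-time tracking the paper describes.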

  3. Genetic algorithms and supernovae type Ia analysis

    International Nuclear Information System (INIS)

    Bogdanos, Charalampos; Nesseris, Savvas

    2009-01-01

    We introduce genetic algorithms as a means to analyze supernovae type Ia data and extract model-independent constraints on the evolution of the Dark Energy equation of state w(z) ≡ P_DE/ρ_DE. Specifically, we will give a brief introduction to the genetic algorithms along with some simple examples to illustrate their advantages and finally we will apply them to the supernovae type Ia data. We find that genetic algorithms can lead to results in line with already established parametric and non-parametric reconstruction methods and could be used as a complementary way of treating SNIa data. As a non-parametric method, genetic algorithms provide a model-independent way to analyze data and can minimize bias due to premature choice of a dark energy model
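    A minimal genetic algorithm (an illustration, not the authors' code) shows the selection/crossover/mutation loop on a toy version of the problem: fitting the parameters of an assumed CPL-style form w(z) = w0 + wa·z/(1+z) to mock noiseless "data".

```python
import random

def fitness(ind, data):
    """Negative sum of squared residuals of the toy w(z) model."""
    return -sum((w - (ind[0] + ind[1] * z / (1 + z))) ** 2 for z, w in data)

def evolve(data, pop_size=60, gens=200, seed=5):
    rng = random.Random(seed)
    pop = [[rng.uniform(-3, 3), rng.uniform(-3, 3)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda ind: fitness(ind, data), reverse=True)
        parents = pop[:pop_size // 2]                 # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.random()                        # blend crossover
            child = [cut * a[i] + (1 - cut) * b[i] for i in range(2)]
            if rng.random() < 0.3:                    # mutation
                child[rng.randrange(2)] += rng.gauss(0, 0.1)
            children.append(child)
        pop = parents + children                      # parents act as elites
    return max(pop, key=lambda ind: fitness(ind, data))

# mock equation-of-state "data" generated from w0 = -1.0, wa = 0.5
data = [(z / 10, -1.0 + 0.5 * (z / 10) / (1 + z / 10)) for z in range(11)]
best = evolve(data)
print([round(v, 2) for v in best])        # close to [-1.0, 0.5]
```

In the paper's non-parametric use, the individuals encode whole candidate functions rather than two parameters, but the evolutionary loop is the same.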

  4. Principle and Reconstruction Algorithm for Atomic-Resolution Holography

    Science.gov (United States)

    Matsushita, Tomohiro; Muro, Takayuki; Matsui, Fumihiko; Happo, Naohisa; Hosokawa, Shinya; Ohoyama, Kenji; Sato-Tomita, Ayana; Sasaki, Yuji C.; Hayashi, Kouichi

    2018-06-01

    Atomic-resolution holography makes it possible to obtain the three-dimensional (3D) structure around a target atomic site. Translational symmetry of the atomic arrangement of the sample is not necessary, and the 3D atomic image can be measured when the local structure of the target atomic site is oriented. Therefore, 3D local atomic structures such as dopants and adsorbates are observable. Here, atomic-resolution holography techniques comprising photoelectron holography, X-ray fluorescence holography, neutron holography, and their inverse modes are treated. Although the measurement methods differ, they can be handled with a unified theory. The algorithm for reconstructing 3D atomic images from holograms plays an important role. Although Fourier transform-based methods have been proposed, they require multiple-energy holograms. In addition, they cannot be directly applied to photoelectron holography because of the phase shift problem. We have developed fitting-based methods for reconstructing images from single-energy and photoelectron holograms. The developed methods are applicable to all types of atomic-resolution holography.

  5. Babinet principle and diffraction losses in laser resonators

    International Nuclear Information System (INIS)

    Kubarev, V V

    2000-01-01

    A simple analytical technique, based on the Babinet principle, for calculating low diffraction losses of different kinds in stable resonators is described. The technique was verified by comparison with the known numerical and analytical calculations of the losses in specific diffraction problems. (laser applications and other topics in quantum electronics)

  6. Optimal quantum state estimation with use of the no-signaling principle

    International Nuclear Information System (INIS)

    Han, Yeong-Deok; Bae, Joonwoo; Wang Xiangbin; Hwang, Won-Young

    2010-01-01

    A simple derivation of the optimal state estimation of a quantum bit was obtained by using the no-signaling principle. In particular, the no-signaling principle determines a unique form of the guessing probability independent of figures of merit, such as the fidelity or information gain. This proves that the optimal estimation for a quantum bit can be achieved by the same measurement for almost all figures of merit.

  7. Polyhedral Computations for the Simple Graph Partitioning Problem

    DEFF Research Database (Denmark)

    Sørensen, Michael Malmros

    The simple graph partitioning problem is to partition an edge-weighted graph into mutually disjoint subgraphs, each containing no more than b nodes, such that the sum of the weights of all edges in the subgraphs is maximal. In this paper we present a branch-and-cut algorithm for the problem that ...

  8. Historical Feature Pattern Extraction Based Network Attack Situation Sensing Algorithm

    OpenAIRE

    Zeng, Yong; Liu, Dacheng; Lei, Zhou

    2014-01-01

    The situation sequence contains a series of complicated and multivariate random trends that are sudden, uncertain, and difficult to recognize, and whose principles are hard to describe with traditional algorithms. To solve these problems, estimating the parameters of very long situation sequences is essential but very difficult, so this paper proposes a situation prediction method based on historical feature pattern extraction (HFPE). First, the HFPE algorithm seeks similar indications from the history si...

  9. Incoherent beam combining based on the momentum SPGD algorithm

    Science.gov (United States)

    Yang, Guoqing; Liu, Lisheng; Jiang, Zhenhua; Guo, Jin; Wang, Tingfeng

    2018-05-01

    Incoherent beam combining (ICBC) technology is one of the most promising ways to achieve high-energy, near-diffraction-limited laser output. In this paper, the momentum method is proposed as a modification of the stochastic parallel gradient descent (SPGD) algorithm. The momentum method can efficiently improve the convergence speed of the combining system. An analytical approach is employed to interpret the principle of the momentum method. Furthermore, the proposed algorithm is validated through simulations as well as experiments. The results show that the proposed algorithm not only accelerates the iteration but also keeps the combining process stable, demonstrating its feasibility in a beam combining system.
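    SPGD with momentum can be sketched on a toy quality metric (not the combining testbed): each iteration applies a random ± perturbation to all control variables in parallel, estimates the gradient from the metric change, and adds a momentum term to the update.

```python
import random

def spgd_momentum(metric, n_params, iters=400, gain=0.5, delta=0.05,
                  beta=0.8, seed=6):
    rng = random.Random(seed)
    u = [0.0] * n_params                  # control variables (e.g. phases)
    v = [0.0] * n_params                  # momentum accumulator
    for _ in range(iters):
        p = [delta * rng.choice((-1.0, 1.0)) for _ in range(n_params)]
        dJ = metric([ui + pi for ui, pi in zip(u, p)]) - \
             metric([ui - pi for ui, pi in zip(u, p)])
        for i in range(n_params):
            v[i] = beta * v[i] + gain * dJ * p[i]   # momentum + SPGD step
            u[i] += v[i]
    return u

# toy quality metric peaked at u = (1, -2, 0.5); its maximum value is 1.0
target = [1.0, -2.0, 0.5]
def metric(u):
    return 1.0 - sum((ui - ti) ** 2 for ui, ti in zip(u, target))

u = spgd_momentum(metric, 3)
print(round(metric(u), 2))                # should approach 1.0
```

With `beta = 0` this reduces to plain SPGD; the momentum term accumulates consistent gradient estimates across iterations, which is the source of the speed-up the paper reports.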

  10. Simple and robust image-based autofocusing for digital microscopy.

    Science.gov (United States)

    Yazdanfar, Siavash; Kenny, Kevin B; Tasimi, Krenar; Corwin, Alex D; Dixon, Elizabeth L; Filkins, Robert J

    2008-06-09

    A simple image-based autofocusing scheme for digital microscopy is demonstrated that uses as few as two intermediate images to bring the sample into focus. The algorithm is adapted to a commercial inverted microscope and used to automate brightfield and fluorescence imaging of histopathology tissue sections.

  11. Equalization Algorithm for Distributed Energy Storage Systems in Islanded AC Microgrids

    DEFF Research Database (Denmark)

    Aldana, Nelson Leonardo Diaz; Hernández, Adriana Carolina Luna; Quintero, Juan Carlos Vasquez

    2015-01-01

    This paper presents a centralized strategy for equalizing the state of charge of distributed energy storage systems in an islanded ac microgrid. The strategy is based on a simple algorithm, denoted the equalization algorithm, which modifies the charge or discharge rate over time for distributed...

  12. Hydraulic Pump Fault Diagnosis Control Research Based on PARD-BP Algorithm

    Directory of Open Access Journals (Sweden)

    LV Dongmei

    2014-12-01

    Combining the working principle and failure mechanism of the RZU2000HM hydraulic press, and drawing on its collected fault cases, the working principle of the oil pressure and the fault phenomena of the hydraulic power unit's swash-plate axial piston pump were studied with emphasis, since its faults directly affect the dynamic performance of the oil pressure and flow. To make the hydraulic power unit work reliably, the PARD-BP (Pruning Algorithm based on Random Degree) neural network fault algorithm was introduced, with the swash-plate axial piston pump's vibration fault sample data as input and the fault mode matrix as target output, so that the PARD-BP algorithm could be trained. Finally, the vibration results were verified by a vibration modal test, which showed that the biggest upward peaks of the vacuum pump in the X-, Y- and Z-directions fell by 30.49 %, 21.13 % and 18.73 %, respectively, verifying that the PARD-BP algorithm can be used for online fault detection and diagnosis of the hydraulic pump.

  13. Mi Quinto Libro de Maquinas Simples: El Plano Inclinado. Escuela Intermedia Grados 7, 8 y 9 (My Fifth Book of Simple Machines: The Inclined Plane. Intermediate School Grades 7, 8, and 9).

    Science.gov (United States)

    Alvarado, Patricio R.; Montalvo, Luis

    This is the fifth book in a five-book physical science series on simple machines. The books are designed for Spanish-speaking junior high school students. This volume explains the principles and some of the uses of inclined planes, as they appear in simple machines, by suggesting experiments and posing questions concerning drawings in the book…

  14. Cosmological principles. II. Physical principles

    International Nuclear Information System (INIS)

    Harrison, E.R.

    1974-01-01

    The discussion of cosmological principles covers the uniformity principle of the laws of physics, the gravitation and cognizability principles, and the Dirac creation, chaos, and bootstrap principles. (U.S.)

  15. Controlling principles for prior probability assignments in nuclear risk assessment

    International Nuclear Information System (INIS)

    Cook, I.; Unwin, S.D.

    1986-01-01

    As performed conventionally, nuclear probabilistic risk assessment (PRA) may be criticized as utilizing inscrutable and unjustifiably "precise" quantitative informed judgment, or extrapolation from that judgment. To meet this criticism, controlling principles that govern the formulation of probability densities are proposed, given only the informed input that would be required for a simple bounding analysis. These principles are founded upon information-theoretic ideas of maximum uncertainty and cover both cases in which there exists a stochastic model of the phenomenon of interest and cases in which there is no such model. In part, the principles are conventional, and such an approach is justified by appealing to certain analogies in accounting practice and judicial decision making. Examples are given. Appropriate employment of these principles is expected to facilitate substantial progress toward PRA scrutability and transparency

  16. Time series classification using k-Nearest neighbours, Multilayer Perceptron and Learning Vector Quantization algorithms

    Directory of Open Access Journals (Sweden)

    Jiří Fejfar

    2012-01-01

    We present a comparison of three artificial intelligence algorithms in a classification of time series derived from musical excerpts. The algorithms were chosen to represent different principles of classification: a statistical approach, neural networks and competitive learning. The first algorithm is the classical k-Nearest neighbours algorithm, the second is the Multilayer Perceptron (MLP), an example of an artificial neural network, and the third is the Learning Vector Quantization (LVQ) algorithm, the supervised counterpart to the unsupervised Self Organizing Map (SOM). After our own former experiments with unlabelled data we moved on to utilizing data labels, which generally led to better classification accuracy. As we needed a huge data set of labelled time series (with a priori knowledge of the correct class to which each time series instance belongs), we used musical excerpts, with which we have had good experience in former studies, as a source of real-world time series. We use the standard deviation of the sound signal as a descriptor of a musical excerpt's volume level. We briefly describe the principle of each algorithm as well as its implementation, giving links for further research. The classification results of each algorithm are presented in a confusion matrix showing the numbers of misclassifications and allowing evaluation of the overall accuracy of the algorithm. Results are compared and particular misclassifications are discussed for each algorithm. Finally the best solution is chosen and further research goals are given.
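
    The k-NN step and the confusion-matrix evaluation described above can be sketched in a few lines (the two-class "volume level" feature vectors are invented for illustration):

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest neighbours.
    train: list of (feature_vector, label) pairs; Euclidean distance."""
    nearest = sorted(train, key=lambda fl: math.dist(fl[0], query))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

def confusion_matrix(labels, pairs):
    """Rows = true class, columns = predicted class."""
    idx = {lab: i for i, lab in enumerate(labels)}
    m = [[0] * len(labels) for _ in labels]
    for true, pred in pairs:
        m[idx[true]][idx[pred]] += 1
    return m

# invented two-class "volume level" features
train = [([0.10, 0.20], "quiet"), ([0.15, 0.10], "quiet"),
         ([0.90, 0.80], "loud"), ([0.85, 0.95], "loud")]
queries = [([0.12, 0.15], "quiet"), ([0.88, 0.90], "loud")]
cm = confusion_matrix(["quiet", "loud"],
                      [(t, knn_predict(train, x)) for x, t in queries])
```

    The diagonal of the confusion matrix counts correct classifications; off-diagonal entries are the misclassifications discussed in the abstract.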

  17. Ab initio optimization principle for the ground states of translationally invariant strongly correlated quantum lattice models.

    Science.gov (United States)

    Ran, Shi-Ju

    2016-05-01

    In this work, a simple and fundamental numeric scheme dubbed the ab initio optimization principle (AOP) is proposed for the ground states of translationally invariant strongly correlated quantum lattice models. The idea is to transform a nondeterministic-polynomial-hard ground-state simulation with infinite degrees of freedom into a single optimization problem of a local function with a finite number of physical and ancillary degrees of freedom. This work contributes mainly in the following aspects: (1) AOP provides a simple and efficient scheme to simulate the ground state by solving a local optimization problem. Its solution contains two kinds of boundary states, one of which plays the role of the entanglement bath that mimics the interactions between a supercell and the infinite environment, while the other gives the ground state in a tensor network (TN) form. (2) In the TN context, a novel decomposition named tensor ring decomposition (TRD) is proposed to implement AOP. Instead of following the contraction-truncation scheme used by many existing TN-based algorithms, TRD solves the contraction of a uniform TN in the opposite way, by encoding the contraction in a set of self-consistent equations that automatically reconstruct the whole TN, making the simulation simple and unified. (3) AOP inherits and develops the ideas of several well-established methods, including the density matrix renormalization group (DMRG), infinite time-evolving block decimation (iTEBD), network contractor dynamics and density matrix embedding theory, providing a unified perspective that was previously missing in this field. (4) AOP and TRD carry novel implications for existing TN-based algorithms: a modified iTEBD is suggested, and the two-dimensional (2D) AOP is argued to be an intrinsic 2D extension of DMRG based on the infinite projected entangled pair state. This paper focuses on one-dimensional quantum models to present AOP. The benchmark is given on a transverse Ising

  18. Physics Without Physics. The Power of Information-theoretical Principles

    Science.gov (United States)

    D'Ariano, Giacomo Mauro

    2017-01-01

    David Finkelstein was very fond of the new information-theoretic paradigm of physics advocated by John Archibald Wheeler and Richard Feynman. Only recently, however, has the paradigm concretely shown its full power, with the derivation of quantum theory (Chiribella et al., Phys. Rev. A 84:012311, 2011; D'Ariano et al., 2017) and of free quantum field theory (D'Ariano and Perinotti, Phys. Rev. A 90:062106, 2014; Bisio et al., Phys. Rev. A 88:032301, 2013; Bisio et al., Ann. Phys. 354:244, 2015; Bisio et al., Ann. Phys. 368:177, 2016) from informational principles. The paradigm has opened for the first time the possibility of avoiding physical primitives in the axioms of the physical theory, allowing a re-foundation of the whole of physics on logically solid grounds. In addition to such methodological value, the new information-theoretic derivation of quantum field theory is particularly interesting for establishing a theoretical framework for quantum gravity, with the idea of obtaining gravity itself as emergent from quantum information processing, as also suggested by the role played by information in the holographic principle (Susskind, J. Math. Phys. 36:6377, 1995; Bousso, Rev. Mod. Phys. 74:825, 2002). In this paper I review how free quantum field theory is derived without using mechanical primitives, including space-time, special relativity, Hamiltonians, and quantization rules. The theory is simply provided by the simplest quantum algorithm encompassing a countable set of quantum systems whose network of interactions satisfies the three following simple principles: homogeneity, locality, and isotropy. The inherent discrete nature of the informational derivation leads to an extension of quantum field theory in terms of quantum cellular automata and quantum walks. A simple heuristic argument sets the scale to the Planck one, and the currently observed regime where discreteness is not visible is the so-called "relativistic regime" of small wavevectors, which

  19. Inverse kinematic solution for near-simple robots and its application to robot calibration

    Science.gov (United States)

    Hayati, Samad A.; Roston, Gerald P.

    1986-01-01

    This paper provides an inverse kinematic solution for a class of robot manipulators called near-simple manipulators. The kinematics of these manipulators differ from those of simple robots by small parameter variations. Although most robots are simple by design, in practice, due to manufacturing tolerances, every robot is near-simple. The method in this paper gives an approximate inverse kinematics solution for real-time applications based on the nominal solution for these robots. The validity of the results is tested both by a simulation study and by applying the algorithm to a PUMA robot.

  20. Deterministic and stochastic algorithms for resolving the flow fields in ducts and networks using energy minimization

    Science.gov (United States)

    Sochi, Taha

    2016-09-01

    Several deterministic and stochastic multi-variable global optimization algorithms (Conjugate Gradient, Nelder-Mead, Quasi-Newton and global) are investigated in conjunction with energy minimization principle to resolve the pressure and volumetric flow rate fields in single ducts and networks of interconnected ducts. The algorithms are tested with seven types of fluid: Newtonian, power law, Bingham, Herschel-Bulkley, Ellis, Ree-Eyring and Casson. The results obtained from all those algorithms for all these types of fluid agree very well with the analytically derived solutions as obtained from the traditional methods which are based on the conservation principles and fluid constitutive relations. The results confirm and generalize the findings of our previous investigations that the energy minimization principle is at the heart of the flow dynamics systems. The investigation also enriches the methods of computational fluid dynamics for solving the flow fields in tubes and networks for various types of Newtonian and non-Newtonian fluids.

  1. A Nonmonotone Trust Region Method for Nonlinear Programming with Simple Bound Constraints

    International Nuclear Information System (INIS)

    Chen, Z.-W.; Han, J.-Y.; Xu, D.-C.

    2001-01-01

    In this paper we propose a nonmonotone trust region algorithm for optimization with simple bound constraints. Under mild conditions, we prove the global convergence of the algorithm. For the monotone case it is also proved that the correct active set can be identified in a finite number of iterations if the strict complementarity slackness condition holds, so that the proposed algorithm finally reduces to an unconstrained minimization method in a finite number of iterations, allowing a fast asymptotic rate of convergence. Numerical experiments show that the method is efficient.

  2. Research on retailer data clustering algorithm based on Spark

    Science.gov (United States)

    Huang, Qiuman; Zhou, Feng

    2017-03-01

    Big data analysis is currently a hot topic in the IT field. Spark is a high-reliability, high-performance distributed parallel computing framework for big data sets. The k-means algorithm is one of the classical partitioning methods in clustering. In this paper, we study the k-means clustering algorithm on Spark. First, the principle of the algorithm is analyzed; then a clustering analysis of supermarket customers is carried out experimentally to find out their different shopping patterns. The paper also proposes a parallelization of the k-means algorithm on the Spark distributed computing framework, and gives a concrete design and implementation scheme. Two years of sales data from a supermarket are used to validate the proposed clustering algorithm and to subdivide the customers; the clustering results are then analyzed to help the enterprise adopt different marketing strategies for different customer groups and improve sales performance.
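
    A single-machine sketch of the k-means procedure described above (Lloyd's iteration; a Spark version would distribute the assignment step, e.g. with map and reduceByKey, as the paper proposes):

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Lloyd's k-means on 2-D points. In a Spark port, the assignment step
    is a map over the partitioned points and the centroid update a
    reduceByKey on the cluster index."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for x, y in points:  # assignment step: nearest centre
            i = min(range(k),
                    key=lambda c: (x - centers[c][0]) ** 2 + (y - centers[c][1]) ** 2)
            clusters[i].append((x, y))
        for i, cl in enumerate(clusters):  # update step: cluster means
            if cl:
                centers[i] = (sum(p[0] for p in cl) / len(cl),
                              sum(p[1] for p in cl) / len(cl))
    return centers

# two well-separated toy "customer" groups
pts = [(0.1, 0.0), (0.0, 0.2), (5.0, 5.1), (5.2, 4.9)]
centers = sorted(kmeans(pts, 2))
```

    Each iteration is embarrassingly parallel over the data points, which is why k-means maps naturally onto Spark's model.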

  3. Figuring Control in the Algorithmic Era

    DEFF Research Database (Denmark)

    Markham, Annette; Bossen, Claus

    Drawing on actor network theory, we follow how algorithms, information, selfhood and identity-for-others tangle in interesting and unexpected ways. Starting with simple moments in everyday life that might be described as having implications for ‘control,’ we focus attention on the ways in which t...

  4. Optimized theory for simple and molecular fluids.

    Science.gov (United States)

    Marucho, M; Montgomery Pettitt, B

    2007-03-28

    An optimized closure approximation for both simple and molecular fluids is presented. A smooth interpolation between the Percus-Yevick and hypernetted chain closures is optimized by minimizing the free energy self-consistently with respect to the interpolation parameter(s). The molecular version is derived from a refinement of the method for simple fluids. In doing so, a method is proposed which appropriately couples an optimized closure with the variant of the diagrammatically proper integral equation recently introduced by this laboratory [K. M. Dyer et al., J. Chem. Phys. 123, 204512 (2005)]. The simplicity of the expressions involved in this proposed theory has allowed the authors to obtain an analytic expression for the approximate excess chemical potential. This is shown to be an efficient tool to estimate, from first principles, the numerical value of the interpolation parameters defining the aforementioned closure. As a preliminary test, representative models for simple fluids and homonuclear diatomic Lennard-Jones fluids were analyzed, obtaining site-site correlation functions in excellent agreement with simulation data.

  5. Mi Tercer Libro de Maquinas Simples: La Rueda y la Polea. Escuela Intermedia Grados 7, 8 y 9 (My Third Book of Simple Machines: The Wheel and the Pulley. Intermediate School Grades 7, 8, and 9).

    Science.gov (United States)

    Alvarado, Patricio R.; Montalvo, Luis

    This is the third book in a five-book physical science series on simple machines. The books are designed for Spanish-speaking junior high school students. This volume explains principles governing wheels and pulleys by suggesting experiments and posing questions concerning drawings in the book which illustrate the scientific principles. Friction…

  6. Simple regular black hole with logarithmic entropy correction

    Energy Technology Data Exchange (ETDEWEB)

    Morales-Duran, Nicolas; Vargas, Andres F.; Hoyos-Restrepo, Paulina; Bargueno, Pedro [Universidad de los Andes, Departamento de Fisica, Bogota, Distrito Capital (Colombia)

    2016-10-15

    A simple regular black hole solution satisfying the weak energy condition is obtained within Einstein-non-linear electrodynamics theory. We have computed the thermodynamic properties of this black hole by a careful analysis of the horizons, and we have found that the usual Bekenstein-Hawking entropy gets corrected by a logarithmic term. In this sense, our model therefore realises some quantum gravity predictions which add this kind of correction to the black hole entropy. In particular, we have established some similarities between our model and a quadratic generalised uncertainty principle. This similarity is confirmed by the existence of a remnant, which prevents complete evaporation, in agreement with the quadratic generalised uncertainty principle case. (orig.)

  7. Trophic dynamics of a simple model ecosystem.

    Science.gov (United States)

    Bell, Graham; Fortier-Dubois, Étienne

    2017-09-13

    We have constructed a model of community dynamics that is simple enough to enumerate all possible food webs, yet complex enough to represent a wide range of ecological processes. We use the transition matrix to predict the outcome of succession and then investigate how the transition probabilities are governed by resource supply and immigration. Low-input regimes lead to simple communities whereas trophically complex communities develop when there is an adequate supply of both resources and immigrants. Our interpretation of trophic dynamics in complex communities hinges on a new principle of mutual replenishment, defined as the reciprocal alternation of state in a pair of communities linked by the invasion and extinction of a shared species. Such neutral couples are the outcome of succession under local dispersal and imply that food webs will often be made up of suites of trophically equivalent species. When immigrants arrive from an external pool of fixed composition a similar principle predicts a dynamic core of webs constituting a neutral interchange network, although communities may express an extensive range of other webs whose membership is only in part predictable. The food web is not in general predictable from whole-community properties such as productivity or stability, although it may profoundly influence these properties. © 2017 The Author(s).

  8. Applied predictive analytics principles and techniques for the professional data analyst

    CERN Document Server

    Abbott, Dean

    2014-01-01

    Learn the art and science of predictive analytics - techniques that get results Predictive analytics is what translates big data into meaningful, usable business information. Written by a leading expert in the field, this guide examines the science of the underlying algorithms as well as the principles and best practices that govern the art of predictive analytics. It clearly explains the theory behind predictive analytics, teaches the methods, principles, and techniques for conducting predictive analytics projects, and offers tips and tricks that are essential for successful predictive mode

  9. Application of genetic algorithms for parameter estimation in liquid chromatography

    International Nuclear Information System (INIS)

    Hernandez Torres, Reynier; Irizar Mesa, Mirtha; Tavares Camara, Leoncio Diogenes

    2012-01-01

    In chromatography, complex inverse problems arise in parameter estimation and process optimization. Metaheuristic methods are general-purpose approximate algorithms which seek, and hopefully find, good solutions at a reasonable computational cost; they are iterative procedures that perform a robust search of a solution space. Genetic algorithms are optimization techniques based on the principles of genetics and natural selection. They have demonstrated very good performance as global optimizers in many types of applications, including inverse problems. In this work, the effectiveness of genetic algorithms for estimating parameters in liquid chromatography is investigated
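
    A minimal real-coded genetic algorithm of the kind used for such parameter estimation can be sketched as follows (the toy quadratic misfit, population size and operator choices are illustrative assumptions, not the paper's setup):

```python
import random

def genetic_estimate(loss, bounds, pop=40, gens=60, seed=0):
    """Minimal real-coded GA: tournament selection, blend crossover,
    one-gene Gaussian mutation, two-elite survival. `loss` maps a
    parameter vector to a misfit to MINIMIZE; `bounds` = [(lo, hi), ...]."""
    rng = random.Random(seed)
    population = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop)]
    for _ in range(gens):
        ranked = sorted(population, key=loss)
        nxt = ranked[:2]                                 # elitism
        while len(nxt) < pop:
            a = min(rng.sample(ranked, 3), key=loss)     # tournament of 3
            b = min(rng.sample(ranked, 3), key=loss)
            child = [ai + rng.random() * (bi - ai) for ai, bi in zip(a, b)]
            g = rng.randrange(len(bounds))               # mutate one gene
            lo, hi = bounds[g]
            child[g] = min(hi, max(lo, child[g] + rng.gauss(0, 0.1 * (hi - lo))))
            nxt.append(child)
        population = nxt
    return min(population, key=loss)

# toy inverse problem: recover the true parameters (2.0, 0.5)
misfit = lambda p: (p[0] - 2.0) ** 2 + (p[1] - 0.5) ** 2
best = genetic_estimate(misfit, [(0.0, 5.0), (0.0, 1.0)])
```

    In a chromatography setting, `loss` would instead compare simulated and measured elution profiles for a candidate parameter vector.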

  10. Two General Extension Algorithms of Latin Hypercube Sampling

    Directory of Open Access Journals (Sweden)

    Zhi-zhao Liu

    2015-01-01

    To reuse original sampling points and thereby reduce the number of simulation runs, two general extension algorithms for Latin Hypercube Sampling (LHS) are proposed. The extension algorithms start with an original LHS of size m and construct a new LHS of size m+n that contains as many of the original points as possible. In order to obtain a strict LHS of larger size, some original points may have to be deleted. The relationship of the original sampling points in the new LHS structure is shown by a simple undirected acyclic graph. The basic general extension algorithm is proposed to retain the most original points, but it is too time-consuming. Therefore, a general extension algorithm based on a greedy strategy is proposed to reduce the extension time, although it cannot guarantee retaining the most original points. These algorithms are illustrated by an example and applied to evaluating sample means to demonstrate their effectiveness.
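
    For reference, plain LHS of size m (one point in each of the m equal-probability strata per dimension) can be sketched as below; the paper's contribution, extending a design from size m to m+n, is not reproduced here:

```python
import random

def latin_hypercube(m, d, seed=0):
    """Plain LHS: m points in [0, 1]^d with exactly one point in each of the
    m equal-width strata of every dimension."""
    rng = random.Random(seed)
    cols = []
    for _ in range(d):
        # one uniform draw inside each stratum, then a random permutation
        col = [(i + rng.random()) / m for i in range(m)]
        rng.shuffle(col)
        cols.append(col)
    return list(zip(*cols))  # m sample points of dimension d

pts = latin_hypercube(5, 2)
```

    The extension problem is then to add n further points while preserving this one-point-per-stratum property on the refined (m+n)-way grid, deleting original points only when unavoidable.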

  11. Reconstruction algorithm in compressed sensing based on maximum a posteriori estimation

    International Nuclear Information System (INIS)

    Takeda, Koujin; Kabashima, Yoshiyuki

    2013-01-01

    We propose a systematic method for constructing a sparse data reconstruction algorithm in compressed sensing at a relatively low computational cost for a general observation matrix. It is known that the cost of ℓ₁-norm minimization using a standard linear programming algorithm is O(N³). We show that this cost can be reduced to O(N²) by applying the approach of posterior maximization. Furthermore, in principle, the algorithm from our approach is expected to achieve the widest successful reconstruction region, as evaluated from theoretical arguments. We also discuss the relation between the belief propagation-based reconstruction algorithm introduced in preceding works and our approach

  12. A fast algorithm for identifying friends-of-friends halos

    Science.gov (United States)

    Feng, Y.; Modi, C.

    2017-07-01

    We describe a simple and fast algorithm for identifying friends-of-friends features and prove its correctness. The algorithm avoids unnecessary expensive neighbor queries, uses minimal memory overhead, and avoids slowdown in high over-density regions. We define our algorithm formally based on pair enumeration, a problem that has been heavily studied in fast 2-point correlation codes, and our reference implementation employs a dual KD-tree correlation function code. We construct features in a hierarchical tree structure, and use a splay operation to reduce the average cost of identifying the root of a feature from O[log L] to O[1] (L is the size of a feature) without additional memory costs. This reduces the overall time complexity of merging trees from O[L log L] to O[L], reducing the number of operations per splay by orders of magnitude. We next introduce a pruning operation that skips merge operations between two fully self-connected KD-tree nodes. This improves the robustness of the algorithm, reducing the number of merge operations in high density peaks from O[δ²] to O[δ]. We show that for cosmological data sets the algorithm eliminates more than half of the merge operations for typically used linking lengths b ∼ 0.2 (relative to the mean separation). Furthermore, our algorithm is extremely simple and easy to implement on top of an existing pair enumeration code, reusing the optimization effort that has been invested in fast correlation function codes.
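
    The root-finding-and-merging structure can be illustrated with brute-force pair enumeration over a union-find forest; here path compression stands in for the splay step (a didactic sketch, not the paper's KD-tree implementation):

```python
def friends_of_friends(points, b):
    """Brute-force FoF: any pair closer than linking length b is joined into
    one feature. Union-find with path compression plays the role the paper
    assigns to the splay step: it flattens the tree so that later root
    lookups cost nearly O[1]."""
    parent = list(range(len(points)))

    def find(i):
        root = i
        while parent[root] != root:
            root = parent[root]
        while parent[i] != root:      # path compression pass
            parent[i], i = root, parent[i]
        return root

    for i in range(len(points)):      # pair enumeration (quadratic here;
        for j in range(i + 1, len(points)):  # the paper uses dual KD-trees)
            if sum((a - c) ** 2 for a, c in zip(points[i], points[j])) <= b * b:
                parent[find(i)] = find(j)

    groups = {}
    for i in range(len(points)):
        groups.setdefault(find(i), []).append(i)
    return sorted(groups.values())

halos = friends_of_friends([(0, 0), (0.1, 0), (5, 5), (5, 5.1), (9, 9)], b=0.2)
```

    The paper's pruning step would additionally skip the inner comparison entirely for pairs of tree nodes already known to be fully self-connected.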

  13. Extracting quantum dynamics from genetic learning algorithms through principal control analysis

    International Nuclear Information System (INIS)

    White, J L; Pearson, B J; Bucksbaum, P H

    2004-01-01

    Genetic learning algorithms are widely used to control ultrafast optical pulse shapes for photo-induced quantum control of atoms and molecules. An unresolved issue is how to use the solutions found by these algorithms to learn about the system's quantum dynamics. We propose a simple method based on covariance analysis of the control space, which can reveal the degrees of freedom in the effective control Hamiltonian. We have applied this technique to stimulated Raman scattering in liquid methanol. A simple model of two-mode stimulated Raman scattering is consistent with the results. (letter to the editor)

  14. Biology, the way it should have been, experiments with a Lamarckian algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Brown, F.M.; Snider, J. [Univ. of Kansas, Lawrence, KS (United States)

    1996-12-31

    This paper investigates the case where some information can be extracted directly from the fitness function of a genetic algorithm so that mutation may be achieved essentially on the Lamarckian principle of acquired characteristics. The basic rationale is that such additional information will provide better mutations, thus speeding up the search process. Comparisons are made between a pure Neo-Darwinian genetic algorithm and this Lamarckian algorithm on a number of problems, including a problem of interest to the US Army.

  15. Evaluation of simple rapid HIV assays and development of national rapid HIV test algorithms in Dar es Salaam, Tanzania

    Directory of Open Access Journals (Sweden)

    Mbwana Judica

    2009-02-01

    Background: Suitable algorithms based on a combination of two or more simple rapid HIV assays have been shown to have a diagnostic accuracy comparable to double enzyme-linked immunosorbent assay (ELISA) or double ELISA with Western blot strategies. The aims of this study were to evaluate the performance of five simple rapid HIV assays using whole blood samples from HIV-infected patients, pregnant women, voluntary counseling and testing attendees and blood donors, and to formulate an alternative confirmatory strategy based on rapid HIV testing algorithms suitable for use in Tanzania. Methods: Five rapid HIV assays (Determine™ HIV-1/2, Inverness Medical; SD Bioline HIV 1/2 3.0, Standard Diagnostics Inc.; First Response HIV Card 1–2.0, PMC Medical India Pvt Ltd; HIV1/2 Stat-Pak Dipstick, Chembio Diagnostic System, Inc.; and Uni-Gold™ HIV-1/2, Trinity Biotech) were evaluated between June and September 2006 using 1433 whole blood samples from hospital patients, pregnant women, voluntary counseling and testing attendees and blood donors. All samples that were reactive on all or any of the five rapid assays, and 10% of non-reactive samples, were tested on a confirmatory Inno-Lia HIV I/II immunoblot assay (Immunogenetics). Results: Three hundred and ninety samples were confirmed HIV-1 antibody positive, while 1043 were HIV negative. The sensitivity at initial testing of Determine, SD Bioline and Uni-Gold™ was 100% (95% CI: 99.1–100), while First Response and Stat-Pak had sensitivities of 99.5% (95% CI: 98.2–99.9) and 97.7% (95% CI: 95.7–98.9), respectively, which increased to 100% (95% CI: 99.1–100) on repeat testing. The initial specificity of the Uni-Gold™ assay was 100% (95% CI: 99.6–100), while specificities were 99.6% (95% CI: 99–99.9), 99.4% (95% CI: 98.8–99.7), 99.6% (95% CI: 99–99.9) and 99.8% (95% CI: 99.3–99.9) for the Determine, SD Bioline, First Response and Stat-Pak assays, respectively. There was no sample which was
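
    As an illustration of the evaluation arithmetic, sensitivity and specificity can be computed from raw counts; the counts below are back-calculated from the Stat-Pak percentages above purely for illustration, and the approximate Wald intervals shown differ from the exact-style intervals the study reports:

```python
import math

def sens_spec(tp, fn, tn, fp):
    """Sensitivity and specificity with approximate 95% Wald intervals.
    (Illustrative only; the study's reported intervals use a different,
    exact-style method.)"""
    def rate(k, n):
        p = k / n
        half = 1.96 * math.sqrt(p * (1 - p) / n)
        return p, (max(0.0, p - half), min(1.0, p + half))
    return {"sensitivity": rate(tp, tp + fn), "specificity": rate(tn, tn + fp)}

# counts back-calculated from the Stat-Pak figures quoted above:
# sensitivity 97.7% of 390 positives, specificity 99.8% of 1043 negatives
res = sens_spec(tp=381, fn=9, tn=1041, fp=2)
```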

  16. Fast readout algorithm for cylindrical beam position monitors providing good accuracy for particle bunches with large offsets

    Science.gov (United States)

    Thieberger, P.; Gassner, D.; Hulsart, R.; Michnoff, R.; Miller, T.; Minty, M.; Sorrell, Z.; Bartnik, A.

    2018-04-01

    A simple, analytically correct algorithm is developed for calculating "pencil" relativistic beam coordinates using the signals from an ideal cylindrical particle beam position monitor (BPM) with four pickup electrodes (PUEs) of infinitesimal widths. The algorithm is then applied to simulations of realistic BPMs with finite width PUEs. Surprisingly small deviations are found. Simple empirically determined correction terms reduce the deviations even further. The algorithm is then tested with simulations for non-relativistic beams. As an example of the data acquisition speed advantage, a Field Programmable Gate Array-based BPM readout implementation of the new algorithm has been developed and characterized. Finally, the algorithm is tested with BPM data from the Cornell Preinjector.
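
    For contrast with the paper's analytically exact algorithm (whose formula is not reproduced here), the conventional difference-over-sum estimate used in standard four-electrode BPM readout looks like this; it is linear only for small offsets, which is precisely the large-offset limitation the paper addresses:

```python
def bpm_position(v_right, v_left, v_top, v_bottom, k=1.0):
    """Conventional difference-over-sum estimate (NOT the paper's exact
    algorithm): linear in the beam offset only for small displacements.
    k is a sensitivity constant set by the chamber geometry (assumed here)."""
    x = k * (v_right - v_left) / (v_right + v_left)
    y = k * (v_top - v_bottom) / (v_top + v_bottom)
    return x, y

# a centred beam induces equal signals on all four pickup electrodes
center = bpm_position(1.0, 1.0, 1.0, 1.0)   # -> (0.0, 0.0)
offset = bpm_position(1.2, 0.8, 1.0, 1.0)   # beam displaced horizontally
```

    Because it needs only sums, differences and one division, this class of estimate is also what makes single-FPGA readout implementations attractive.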

  17. Some special features of the le chatelier-braun principle

    Science.gov (United States)

    Nesis, E. I.; Skibin, Yu. N.

    2000-07-01

    The relaxation response of a system, which follows from the Le Chatelier-Braun principle and weakens the effect of an external influence, turns out to be more intense under a complex action. A method for quantitative determination of the weakening effect for simple and complex actions is suggested.

  18. Star identification methods, techniques and algorithms

    CERN Document Server

    Zhang, Guangjun

    2017-01-01

    This book summarizes the research advances in star identification that the author's team has made over the past 10 years, systematically introducing the principles of star identification, general methods, key techniques and practicable algorithms. It also offers examples of hardware implementation and performance evaluation for the star identification algorithms. Star identification is the key step for celestial navigation and greatly improves the performance of star sensors, and as such the book includes the fundamentals of star sensors and celestial navigation, the processing of the star catalog and star images, star identification using modified triangle algorithms, star identification using star patterns and using neural networks, rapid star tracking using star matching between adjacent frames, as well as hardware implementation and performance tests for star identification. It is not only valuable as a reference book for star sensor designers and researchers working in pattern recognition and othe...

  19. Geomagnetic matching navigation algorithm based on robust estimation

    Science.gov (United States)

    Xie, Weinan; Huang, Liping; Qu, Zhenshen; Wang, Zhenhuan

    2017-08-01

    Outliers in geomagnetic survey data seriously affect the precision of geomagnetic matching navigation and badly disrupt its reliability. A novel algorithm which can eliminate the influence of outliers is investigated in this paper. First, the weight function is designed and its robust-estimation principle is introduced. By combining the relation equation between the matching trajectory and the reference trajectory with a Taylor series expansion of the geomagnetic information, a mathematical expression for the longitude, latitude and heading errors is acquired. The robust target function is obtained from the weight function and this mathematical expression. The geomagnetic matching problem is thereby converted to the solution of nonlinear equations, and Newton iteration is applied to implement the novel algorithm. Simulation results show that the matching error of the novel algorithm is reduced to 7.75% of that of the conventional mean square difference (MSD) algorithm, and to 18.39% of that of the conventional iterative contour matching algorithm, when the outlier is 40 nT. Meanwhile, the position error of the novel algorithm is 0.017° when the outlier is 400 nT, while the other two algorithms fail to match.
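
    A typical robust weight function of the kind described, shown here as a Huber-type weight inside an iteratively reweighted mean (the paper designs its own weight function, so this is only an illustration of the down-weighting principle):

```python
def huber_weight(residual, c=1.345):
    """Down-weight large residuals: weight 1 inside the threshold c,
    decaying as c/|r| outside, so gross outliers get little influence.
    (The paper designs its own weight function; this one is illustrative.)"""
    r = abs(residual)
    return 1.0 if r <= c else c / r

def robust_mean(samples, c=1.345, iters=10):
    """Iteratively reweighted robust mean of scalar measurements."""
    mu = sum(samples) / len(samples)
    for _ in range(iters):
        w = [huber_weight(s - mu, c) for s in samples]
        mu = sum(wi * si for wi, si in zip(w, samples)) / sum(w)
    return mu

# a 400 nT outlier barely moves the robust mean of clean ~10 nT readings
robust = robust_mean([10.0, 10.2, 9.8, 10.1, 9.9, 400.0])
```

    An ordinary mean of the same six readings would be pulled to roughly 75 nT; the reweighting drives the outlier's weight toward zero instead.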

  20. Rack Protection Monitor - A Simple System

    International Nuclear Information System (INIS)

    Orr, S.

    1997-12-01

    The Rack Protection Monitor is a simple, fail-safe device to monitor smoke, temperature and ventilation sensors. It accepts inputs from redundant sensors and has a hardwired algorithm to prevent nuisance power trips due to random sensor failures. When a sensor is triggered, the Rack Protection Monitor latches and annunciates the alarm. If a second sensor is triggered, the Rack Protection Monitor locally shuts down power to the relay rack and sends an alarm to central control.
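
    The two-out-of-N voting behaviour described above can be captured in a few lines; this is a hypothetical sketch of the logic, not the device's actual hardwired circuit:

```python
def evaluate(triggered):
    """Return (alarm, trip) for a set of triggered redundant sensors.

    A single triggered sensor only latches and annunciates an alarm;
    a second, independent sensor is required before rack power is cut,
    so one random sensor failure cannot cause a nuisance trip.
    """
    alarm = len(triggered) >= 1
    trip = len(triggered) >= 2
    return alarm, trip
```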

  1. Dynamics Assessment of Grid-Synchronization Algorithms for Single-Phase Grid-Connected Converters

    DEFF Research Database (Denmark)

    Han, Yang; Luo, Mingyu; Guerrero, Josep M.

    2015-01-01

    Several advanced phase-locked-loop (PLL) algorithms have been proposed for single-phase power electronic systems. Among these algorithms, orthogonal signal generators (OSGs) are widely utilized to generate a set of in-quadrature signals, owing to their benefit of simple digital implementation and...

  2. Hippocrates' First Aphorism: Reflections on Ageless Principles for the Practice of Medicine.

    Science.gov (United States)

    Loscalzo, Joseph

    2016-01-01

    Hippocrates' first aphorism presents a structurally simple but conceptually complex series of observations on the art and science of medicine. Its principles are timeless, relevant to physicians in antiquity as well as in the current era. This article analyzes Hippocrates' aphorism in light of Galen's and others' commentaries on it and interprets the principles espoused by Hippocrates in light of the perennial challenges of the practice of medicine.

  3. A Cultural Algorithm for Optimal Design of Truss Structures

    Directory of Open Access Journals (Sweden)

    Shahin Jalili

    Full Text Available Abstract A cultural algorithm was utilized in this study to solve the optimal design problem for truss structures, achieving the minimum weight objective under stress and deflection constraints. The algorithm is inspired by principles of human social evolution: it simulates the social interaction among individuals and their beliefs in a belief space. The Cultural Algorithm (CA) utilizes a belief space and a population space, which affect each other through acceptance and influence functions. The belief space of CA consists of different knowledge components; in this paper, only the situational and normative knowledge components are used. The performance of the method is demonstrated through four benchmark design examples. Comparison of the obtained results with those of some previous studies demonstrates the efficiency of this algorithm.

  4. A Superresolution Image Reconstruction Algorithm Based on Landweber in Electrical Capacitance Tomography

    Directory of Open Access Journals (Sweden)

    Chen Deyun

    2013-01-01

    Full Text Available Because the image reconstruction accuracy is limited by the “soft field” nature and ill-conditioning of electrical capacitance tomography, a superresolution image reconstruction algorithm based on Landweber is proposed in this paper, built on the working principle of the electrical capacitance tomography system. The method regularizes the solution and derives a closed-form solution by fast Fourier transform of the convolution kernel. This ensures the uniqueness of the solution and improves the stability and quality of the image reconstruction results. Simulation results show that the imaging precision and real-time performance of the algorithm are better than those of the standard Landweber algorithm, providing a new method for electrical capacitance tomography image reconstruction.
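
    For reference, the plain Landweber iteration the proposed method builds on is the gradient descent x_{k+1} = x_k + tau * A^T (b - A x_k) for a linear system A x = b; a minimal sketch on a generic well-posed system (not an ECT sensitivity matrix):

```python
import numpy as np

def landweber(A, b, iters=3000, tau=None):
    """Plain Landweber iteration x_{k+1} = x_k + tau * A^T (b - A x_k),
    convergent for 0 < tau < 2 / ||A||_2^2."""
    if tau is None:
        tau = 1.0 / np.linalg.norm(A, 2) ** 2   # safe default step size
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = x + tau * (A.T @ (b - A @ x))
    return x
```

    On well-posed problems the iterates converge to the least-squares solution; in tomography the iteration count itself acts as a regularization parameter.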

  5. Quantum approximate optimization algorithm for MaxCut: A fermionic view

    Science.gov (United States)

    Wang, Zhihui; Hadfield, Stuart; Jiang, Zhang; Rieffel, Eleanor G.

    2018-02-01

    Farhi et al. recently proposed a class of quantum algorithms, the quantum approximate optimization algorithm (QAOA), for approximately solving combinatorial optimization problems (E. Farhi et al., arXiv:1411.4028; arXiv:1412.6062; arXiv:1602.07674). A level-p QAOA circuit consists of p steps; in each step a classical Hamiltonian, derived from the cost function, is applied followed by a mixing Hamiltonian. The 2p times for which these two Hamiltonians are applied are the parameters of the algorithm, which are to be optimized classically for the best performance. As p increases, parameter optimization becomes inefficient due to the curse of dimensionality. The success of the QAOA approach will depend, in part, on finding effective parameter-setting strategies. Here we analytically and numerically study parameter setting for the QAOA applied to MaxCut. For the level-1 QAOA, we derive an analytical expression for a general graph. In principle, expressions for higher p could be derived, but the number of terms quickly becomes prohibitive. For a special case of MaxCut, the "ring of disagrees," or the one-dimensional antiferromagnetic ring, we provide an analysis for an arbitrarily high level. Using a fermionic representation, the evolution of the system under the QAOA translates into quantum control of an ensemble of independent spins. This treatment enables us to obtain analytical expressions for the performance of the QAOA for any p. It also greatly simplifies the numerical search for the optimal values of the parameters. By exploring symmetries, we identify a lower-dimensional submanifold of interest; the search effort can be accordingly reduced. This analysis also explains an observed symmetry in the optimal parameter values. Further, we numerically investigate the parameter landscape and show that it is a simple one in the sense of having no local optima.
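
    The level-1 behaviour can be checked with a small statevector simulation: for a 2-regular "ring of disagrees" the best level-1 expectation approaches 3/4 of the edge count, consistent with the known result. The graph, grid search, and helper below are illustrative, not the authors' code:

```python
import numpy as np

def qaoa_p1_expectation(n, edges, gamma, beta):
    """Expected cut value of the level-1 QAOA state on n qubits:
    uniform superposition -> phase e^{-i gamma C} -> mixer e^{-i beta X}
    on every qubit, then measure the cut-counting cost C."""
    dim = 2 ** n
    z = np.arange(dim)
    bits = (z[:, None] >> np.arange(n)) & 1
    cost = np.zeros(dim)
    for u, v in edges:                       # C(z) = number of cut edges
        cost += bits[:, u] != bits[:, v]
    state = np.full(dim, 1.0 / np.sqrt(dim), dtype=complex)
    state *= np.exp(-1j * gamma * cost)      # phase-separation step
    rx = np.array([[np.cos(beta), -1j * np.sin(beta)],
                   [-1j * np.sin(beta), np.cos(beta)]])   # e^{-i beta X}
    psi = state.reshape((2,) * n)
    for q in range(n):                       # apply the mixer qubit by qubit
        psi = np.tensordot(rx, psi, axes=([1], [q]))
        psi = np.moveaxis(psi, 0, q)
    psi = psi.reshape(dim)
    return float(np.real(np.sum(cost * np.abs(psi) ** 2)))
```

    A coarse grid search over (gamma, beta) on a 6-node ring finds a best expectation near 4.5 out of a maximum cut of 6, i.e. the 3/4 level-1 approximation ratio for this family.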

  6. Efficient Implementation of Nested-Loop Multimedia Algorithms

    Directory of Open Access Journals (Sweden)

    Kittitornkun Surin

    2001-01-01

    Full Text Available A novel dependence graph representation called the multiple-order dependence graph for nested-loop formulated multimedia signal processing algorithms is proposed. It allows a concise representation of an entire family of dependence graphs. This powerful representation facilitates the development of innovative implementation approach for nested-loop formulated multimedia algorithms such as motion estimation, matrix-matrix product, 2D linear transform, and others. In particular, algebraic linear mapping (assignment and scheduling) methodology can be applied to implement such algorithms on an array of simple-processing elements. The feasibility of this new approach is demonstrated in three major target architectures: application-specific integrated circuit (ASIC), field programmable gate array (FPGA), and a programmable clustered VLIW processor.

  7. Control algorithms and applications of the wavefront sensorless adaptive optics

    Science.gov (United States)

    Ma, Liang; Wang, Bin; Zhou, Yuanshen; Yang, Huizhen

    2017-10-01

    Compared with the conventional adaptive optics (AO) system, the wavefront sensorless (WFSless) AO system does not need to measure and reconstruct the wavefront. It is simpler than conventional AO in system architecture and can be applied under complex conditions. Based on an analysis of the principle and system model of the WFSless AO system, wavefront correction methods for the WFSless AO system are divided into two categories: model-free and model-based control algorithms. A WFSless AO system based on model-free control algorithms commonly treats the performance metric as a function of the control parameters and then uses a certain optimization algorithm to improve the performance metric. The model-based control algorithms include modal control algorithms, nonlinear control algorithms and control algorithms based on geometrical optics. After a brief description of the above typical control algorithms, hybrid methods combining a model-free control algorithm with a model-based control algorithm are generalized. Additionally, the characteristics of the various control algorithms are compared and analyzed. We also discuss the extensive applications of WFSless AO systems in free space optical communication (FSO), retinal imaging in the human eye, confocal microscopy, coherent beam combination (CBC) techniques and extended objects.
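
    A minimal sketch of one widely used model-free WFSless controller, stochastic parallel gradient descent (SPGD): all actuators are dithered in parallel with random +/- perturbations and the controller climbs the estimated gradient of a scalar image-quality metric. The quadratic metric here stands in for a real sharpness metric and is purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def spgd(metric, n_act, gain=0.5, perturb=0.1, iters=600):
    """Stochastic parallel gradient descent: estimate the metric
    gradient from a single pair of +/- parallel dithers per step."""
    u = np.zeros(n_act)                       # actuator commands
    for _ in range(iters):
        delta = perturb * rng.choice([-1.0, 1.0], size=n_act)
        dJ = metric(u + delta) - metric(u - delta)   # two metric reads
        u += gain * dJ * delta                # step along estimated gradient
    return u
```

    Only two metric evaluations per iteration are needed regardless of the number of actuators, which is why SPGD scales well for high-actuator-count deformable mirrors.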

  8. Primal-dual path-following algorithms for circular programming

    Directory of Open Access Journals (Sweden)

    Baha Alzalg

    2017-06-01

    Full Text Available Circular programming problems are a new class of convex optimization problems that include second-order cone programming problems as a special case. Alizadeh and Goldfarb [Math. Program. Ser. A 95 (2003) 3--51] introduced primal-dual path-following algorithms for solving second-order cone programming problems. In this paper, we generalize their work by using the machinery of Euclidean Jordan algebras associated with the circular cones to derive primal-dual path-following interior point algorithms for circular programming problems. We prove polynomial convergence of the proposed algorithms by showing that the circular logarithmic barrier is a strongly self-concordant barrier. The numerical examples show the path-following algorithms are simple and efficient.

  9. Feicim: A browser for data and algorithms

    International Nuclear Information System (INIS)

    Lazar, Z I; McNulty, R; Kechadi, T

    2008-01-01

    As programming and programming environments become increasingly complex, more effort must be invested in presenting the user with a simple yet comprehensive interface. Feicim is a tool that unifies the representation of data and algorithms. It provides resource discovery of data-files, data-content and algorithm implementations through an intuitive graphical user interface. It allows local or remote data stored on Grid-type platforms to be accessed by users, the viewing and creation of user-defined or collaboration-defined algorithms, the implementation of algorithms, and the production of output data-files and/or histograms. An application of Feicim is illustrated using LHCb data. It provides a graphical view of the Gaudi architecture, the LHCb event data model, and interfaces to the file catalogue. Feicim is particularly suited to frameworks such as Gaudi which consider algorithms as objects [2]. Instant viewing of any LHCb data will be of particular value in the commissioning of the detector and for quickly familiarizing newcomers with the data and software environment.

  10. The lucky image-motion prediction for simple scene observation based soft-sensor technology

    Science.gov (United States)

    Li, Yan; Su, Yun; Hu, Bin

    2015-08-01

    High resolution is important for earth remote sensors, while vibration of the remote sensors' platforms is a major factor restricting high-resolution imaging. Image-motion prediction and real-time compensation are key technologies for solving this problem. Because the traditional autocorrelation image algorithm cannot meet the demands of simple scene image stabilization, this paper proposes to utilize soft-sensor technology for image-motion prediction, and focuses on algorithm optimization for imaging image-motion prediction. Simulation results indicate that the improved lucky image-motion stabilization algorithm, combining the Back Propagation neural network (BP NN) and support vector machine (SVM), is the most suitable for simple scene image stabilization. The relative error of the image-motion prediction based on the soft-sensor technology is below 5%, and the training speed of the mathematical prediction model is fast enough for real-time image stabilization in aerial photography.

  11. Principles of Stagewise Separation Process Calculations: A Simple Algebraic Approach Using Solvent Extraction.

    Science.gov (United States)

    Crittenden, Barry D.

    1991-01-01

    A simple liquid-liquid equilibrium (LLE) system involving a constant partition coefficient based on solute ratios is used to develop an algebraic understanding of multistage contacting in a first-year separation processes course. This algebraic approach to the LLE system is shown to be operable for the introduction of graphical techniques…
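
    The algebra behind such a first-year treatment is compact: with a constant partition coefficient on solute ratios and fresh solvent fed to each cross-current stage, every stage retains a fraction 1/(1+E) of the solute, where E is the extraction factor. A hypothetical illustration (the values and stage arrangement are assumptions, not the article's worked example):

```python
def raffinate_ratio(E, stages):
    """Solute ratio remaining in the raffinate after `stages`
    cross-current contacts with fresh solvent, for a constant
    extraction factor E = K * (solvent rate / feed rate)."""
    return (1.0 / (1.0 + E)) ** stages
```

    For E = 1, three stages leave (1/2)**3 = 12.5% of the solute in the raffinate, which is the kind of result the graphical stage-to-stage construction reproduces.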

  12. A Parallel Butterfly Algorithm

    KAUST Repository

    Poulson, Jack; Demanet, Laurent; Maxwell, Nicholas; Ying, Lexing

    2014-01-01

    The butterfly algorithm is a fast algorithm which approximately evaluates a discrete analogue of the integral transform (Equation Presented.) at large numbers of target points when the kernel, K(x, y), is approximately low-rank when restricted to subdomains satisfying a certain simple geometric condition. In d dimensions with O(Nd) quasi-uniformly distributed source and target points, when each appropriate submatrix of K is approximately rank-r, the running time of the algorithm is at most O(r2Nd logN). A parallelization of the butterfly algorithm is introduced which, assuming a message latency of α and per-process inverse bandwidth of β, executes in at most (Equation Presented.) time using p processes. This parallel algorithm was then instantiated in the form of the open-source DistButterfly library for the special case where K(x, y) = exp(iΦ(x, y)), where Φ(x, y) is a black-box, sufficiently smooth, real-valued phase function. Experiments on Blue Gene/Q demonstrate impressive strong-scaling results for important classes of phase functions. Using quasi-uniform sources, hyperbolic Radon transforms and an analogue of a three-dimensional generalized Radon transform were observed to strong-scale from 1 node/16 cores up to 1024 nodes/16,384 cores with greater than 90% and 82% efficiency, respectively. © 2014 Society for Industrial and Applied Mathematics.

  13. A Parallel Butterfly Algorithm

    KAUST Repository

    Poulson, Jack

    2014-02-04

    The butterfly algorithm is a fast algorithm which approximately evaluates a discrete analogue of the integral transform (Equation Presented.) at large numbers of target points when the kernel, K(x, y), is approximately low-rank when restricted to subdomains satisfying a certain simple geometric condition. In d dimensions with O(Nd) quasi-uniformly distributed source and target points, when each appropriate submatrix of K is approximately rank-r, the running time of the algorithm is at most O(r2Nd logN). A parallelization of the butterfly algorithm is introduced which, assuming a message latency of α and per-process inverse bandwidth of β, executes in at most (Equation Presented.) time using p processes. This parallel algorithm was then instantiated in the form of the open-source DistButterfly library for the special case where K(x, y) = exp(iΦ(x, y)), where Φ(x, y) is a black-box, sufficiently smooth, real-valued phase function. Experiments on Blue Gene/Q demonstrate impressive strong-scaling results for important classes of phase functions. Using quasi-uniform sources, hyperbolic Radon transforms and an analogue of a three-dimensional generalized Radon transform were observed to strong-scale from 1 node/16 cores up to 1024 nodes/16,384 cores with greater than 90% and 82% efficiency, respectively. © 2014 Society for Industrial and Applied Mathematics.

  14. Multidimensional generalized-ensemble algorithms for complex systems.

    Science.gov (United States)

    Mitsutake, Ayori; Okamoto, Yuko

    2009-06-07

    We give general formulations of the multidimensional multicanonical algorithm, simulated tempering, and the replica-exchange method. We generalize the original potential energy function E(0) by adding any physical quantity V of interest as a new energy term. These multidimensional generalized-ensemble algorithms then perform a random walk not only in E(0) space but also in V space. Among the three algorithms, the replica-exchange method is the easiest to perform because the weight factor is just a product of regular Boltzmann-like factors, while the weight factors for the multicanonical algorithm and simulated tempering are not a priori known. We give a simple procedure for obtaining the weight factors for these two latter algorithms, which uses a short replica-exchange simulation and the multiple-histogram reweighting techniques. As an example of the application of these algorithms, we have performed a two-dimensional replica-exchange simulation and a two-dimensional simulated-tempering simulation using an alpha-helical peptide system. From these simulations, we study the helix-coil transitions of the peptide in the gas phase and in aqueous solution.
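
    The one-dimensional replica-exchange (parallel tempering) building block that the paper generalizes can be sketched on a toy double-well potential; the potential, temperatures, and sweep counts below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def energy(x):
    # double-well potential: minima at x = +/-1 separated by a barrier
    return (x ** 2 - 1.0) ** 2 * 8.0

def replica_exchange(temps, n_sweeps=5000, step=0.4):
    """Minimal replica exchange: each replica runs Metropolis at its own
    temperature; neighbouring replicas periodically attempt swaps with
    probability min(1, exp((1/T_i - 1/T_j) * (E_i - E_j)))."""
    x = np.zeros(len(temps))
    samples = []
    for sweep in range(n_sweeps):
        for i, T in enumerate(temps):         # local Metropolis moves
            prop = x[i] + rng.normal(0.0, step)
            if rng.random() < np.exp(min(0.0, -(energy(prop) - energy(x[i])) / T)):
                x[i] = prop
        if sweep % 5 == 0:                    # neighbour swap attempts
            for i in range(len(temps) - 1):
                d = (1.0 / temps[i] - 1.0 / temps[i + 1]) * (energy(x[i]) - energy(x[i + 1]))
                if rng.random() < np.exp(min(0.0, d)):
                    x[i], x[i + 1] = x[i + 1], x[i]
        samples.append(x[0])                  # track the coldest replica
    return np.array(samples)
```

    At the coldest temperature the barrier is essentially uncrossable by local moves alone; the swaps with hotter replicas are what let the cold chain visit both wells.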

  15. HARDWARE AND ALGORITHMIC STRUCTURE OF MULTIPOSITION RADIO RANGEFINDING NAVIGATION AND LANDING AERO-SYSTEM

    Directory of Open Access Journals (Sweden)

    Yaroslav Kondrashov

    2011-03-01

    Full Text Available Abstract. The principles of formation of the structure and functioning algorithms are developed and presented for a multiposition radio rangefinding land-based system which identifies the location of commuter aircraft in the vertical and horizontal planes by the method of three ranges without using an altimeter. Keywords: aircraft, commuter airlines, aeronavigation, landing, airborne radio ranging equipment, operation algorithms.

  16. Transcranial electrical neuromodulation based on the reciprocity principle

    Directory of Open Access Journals (Sweden)

    Mariano Fernandez Corazza

    2016-05-01

    Full Text Available A key challenge in multi-electrode transcranial electrical stimulation (TES) or transcranial direct current stimulation (tDCS) is to find a current injection pattern that delivers the necessary current density at a target and minimizes it in the rest of the head, which is mathematically modelled as an optimization problem. Such an optimization with the Least Squares (LS) or Linearly Constrained Minimum Variance (LCMV) algorithms is generally computationally expensive and requires multiple independent current sources. Based on the reciprocity principle in electroencephalography (EEG) and TES, it could be possible to find the optimal TES patterns quickly whenever the solution of the forward EEG problem is available for a brain region of interest. Here, we investigate the reciprocity principle as a guideline for finding optimal current injection patterns in TES that comply with safety constraints. We define four different trial cortical targets in a detailed seven-tissue finite element head model, and analyze the performance of the reciprocity family of TES methods in terms of electrode density, targeting error, focality, intensity, and directionality using the LS and LCMV solutions as the reference standards. It is found that the reciprocity algorithms show good performance comparable to the LCMV and LS solutions. Comparing the 128 and 256 electrode cases, we found that use of greater electrode density improves focality, directionality, and intensity parameters. The results show that the reciprocity principle can be used to quickly determine optimal current injection patterns in TES and help to simplify TES protocols that are consistent with hardware and software availability and with safety constraints.

  17. Transcranial Electrical Neuromodulation Based on the Reciprocity Principle.

    Science.gov (United States)

    Fernández-Corazza, Mariano; Turovets, Sergei; Luu, Phan; Anderson, Erik; Tucker, Don

    2016-01-01

    A key challenge in multi-electrode transcranial electrical stimulation (TES) or transcranial direct current stimulation (tDCS) is to find a current injection pattern that delivers the necessary current density at a target and minimizes it in the rest of the head, which is mathematically modeled as an optimization problem. Such an optimization with the Least Squares (LS) or Linearly Constrained Minimum Variance (LCMV) algorithms is generally computationally expensive and requires multiple independent current sources. Based on the reciprocity principle in electroencephalography (EEG) and TES, it could be possible to find the optimal TES patterns quickly whenever the solution of the forward EEG problem is available for a brain region of interest. Here, we investigate the reciprocity principle as a guideline for finding optimal current injection patterns in TES that comply with safety constraints. We define four different trial cortical targets in a detailed seven-tissue finite element head model, and analyze the performance of the reciprocity family of TES methods in terms of electrode density, targeting error, focality, intensity, and directionality using the LS and LCMV solutions as the reference standards. It is found that the reciprocity algorithms show good performance comparable to the LCMV and LS solutions. Comparing the 128 and 256 electrode cases, we found that use of greater electrode density improves focality, directionality, and intensity parameters. The results show that the reciprocity principle can be used to quickly determine optimal current injection patterns in TES and help to simplify TES protocols that are consistent with hardware and software availability and with safety constraints.

  18. A scalable and practical one-pass clustering algorithm for recommender system

    Science.gov (United States)

    Khalid, Asra; Ghazanfar, Mustansar Ali; Azam, Awais; Alahmari, Saad Ali

    2015-12-01

    K-Means clustering-based recommendation algorithms have been proposed, claiming to increase the scalability of recommender systems. One potential drawback of these algorithms is that they perform training offline and hence cannot accommodate incremental updates with the arrival of new data, making them unsuitable for dynamic environments. From this line of research, a new clustering algorithm called One-Pass is proposed, which is simple, fast, and accurate. We show empirically that the proposed algorithm outperforms K-Means in terms of recommendation and training time while maintaining a good level of accuracy.
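
    The general shape of a single-pass (leader-style) clustering scheme can be sketched as follows; this illustrates the idea of incremental, one-scan cluster formation, not the paper's exact One-Pass algorithm:

```python
import numpy as np

def one_pass_cluster(points, threshold):
    """Leader-style one-pass clustering: each point joins the nearest
    existing centroid within `threshold` (updating it as a running
    mean), otherwise it starts a new cluster. Each point is seen once,
    so new data can be absorbed incrementally."""
    centroids, counts, labels = [], [], []
    for p in points:
        if centroids:
            d = [np.linalg.norm(p - c) for c in centroids]
            j = int(np.argmin(d))
        if not centroids or d[j] > threshold:
            centroids.append(np.array(p, dtype=float))
            counts.append(1)
            labels.append(len(centroids) - 1)
        else:
            counts[j] += 1
            centroids[j] += (p - centroids[j]) / counts[j]  # running mean
            labels.append(j)
    return centroids, labels
```

    Unlike batch K-Means, nothing is retrained when a new point arrives; the cost of the scheme is one nearest-centroid search per point.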

  19. The correctness of Newman’s typability algorithm and some of its extensions

    NARCIS (Netherlands)

    Geuvers, J.H.; Krebbers, R.

    2011-01-01

    We study Newman’s typability algorithm (Newman, 1943) [14] for simple type theory. The algorithm originates from 1943, but was left unnoticed until it was recently rediscovered by Hindley (2008) [10]. The remarkable thing is that it decides typability without computing a type. We

  20. Basic economic principles of road pricing: From theory to applications

    NARCIS (Netherlands)

    Rouwendal, J.; Verhoef, E.T.

    2006-01-01

    This paper presents, a non-technical introduction to the economic principles relevant for transport pricing design and analysis. We provide the basic rationale behind pricing of externalities, discuss why simple Pigouvian tax rules that equate charges to marginal external costs are not optimal in

  1. Designing and implementing of improved cryptographic algorithm using modular arithmetic theory

    Directory of Open Access Journals (Sweden)

    Maryam Kamarzarrin

    2015-05-01

    Full Text Available Maintaining the privacy and security of people's information are two of the most important principles of an electronic health plan. One method of providing privacy and security of information is public key cryptography. In this paper, we compare two algorithms, the Common and Fast Exponentiation algorithms, for enhancing the efficiency of public key cryptography. We show that a system designed with the Fast Exponentiation algorithm has higher speed and performance, together with lower power consumption and occupied area, compared with the Common Exponentiation algorithm. Although systems designed with the Common Exponentiation algorithm have slower speed and lower performance, designing with this algorithm involves less complexity and easier design compared with the Fast Exponentiation algorithm. In this paper, we examine and compare these two methods of exponentiation, and observe the performance impact of the two approaches implemented in hardware with the VHDL language on an FPGA.
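
    The speed gap between the two methods comes from the square-and-multiply idea: O(log e) modular multiplications instead of O(e). A software sketch of the fast method (the paper's actual designs are in VHDL on an FPGA):

```python
def fast_pow(base, exp, mod):
    """Right-to-left square-and-multiply modular exponentiation:
    scan the exponent bits, squaring the base each step and
    multiplying into the result when the current bit is 1."""
    result = 1
    base %= mod
    while exp:
        if exp & 1:
            result = (result * base) % mod
        base = (base * base) % mod
        exp >>= 1
    return result
```

    The result matches Python's built-in three-argument pow, which uses the same principle internally.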

  2. Search Parameter Optimization for Discrete, Bayesian, and Continuous Search Algorithms

    Science.gov (United States)

    2017-09-01

    Naval Postgraduate School, Monterey, California. Thesis: Search Parameter Optimization for Discrete, Bayesian, and Continuous Search Algorithms; report date 09-22-2017. ... simple search and rescue acts to prosecuting aerial/surface/submersible targets on mission. This research looks at varying the known discrete and

  3. A simple, practical and complete O(n3/log n)-time algorithm for RNA folding using the Four-Russians speedup.

    Science.gov (United States)

    Frid, Yelena; Gusfield, Dan

    2010-01-04

    The problem of computationally predicting the secondary structure (or folding) of RNA molecules was first introduced more than thirty years ago and yet continues to be an area of active research and development. The basic RNA-folding problem of finding a maximum cardinality, non-crossing, matching of complementary nucleotides in an RNA sequence of length n, has an O(n3)-time dynamic programming solution that is widely applied. It is known that an o(n3) worst-case time solution is possible, but the published and suggested methods are complex and have not been established to be practical. Significant practical improvements to the original dynamic programming method have been introduced, but they retain the O(n3) worst-case time bound when n is the only problem-parameter used in the bound. Surprisingly, the most widely-used, general technique to achieve a worst-case (and often practical) speed up of dynamic programming, the Four-Russians technique, has not been previously applied to the RNA-folding problem. This is perhaps due to technical issues in adapting the technique to RNA-folding. In this paper, we give a simple, complete, and practical Four-Russians algorithm for the basic RNA-folding problem, achieving a worst-case time-bound of O(n3/log(n)). We show that this time-bound can also be obtained for richer nucleotide matching scoring-schemes, and that the method achieves consistent speed-ups in practice. The contribution is both theoretical and practical, since the basic RNA-folding problem is often solved multiple times in the inner-loop of more complex algorithms, and for long RNA molecules in the study of RNA virus genomes.
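
    The O(n^3) dynamic program that the Four-Russians technique accelerates is the classic Nussinov-style recurrence: dp[i][j] is the maximum number of non-crossing complementary pairs in seq[i..j], and position i is either left unpaired or paired with some k. A compact sketch (scoring and loop constraints simplified relative to production folders):

```python
def nussinov(seq, min_loop=0):
    """Maximum number of non-crossing complementary base pairs:
    dp[i][j] = max(dp[i+1][j],
                   max over k with (seq[i], seq[k]) complementary of
                   1 + dp[i+1][k-1] + dp[k+1][j])."""
    pairs = {("A", "U"), ("U", "A"), ("G", "C"),
             ("C", "G"), ("G", "U"), ("U", "G")}
    n = len(seq)
    dp = [[0] * n for _ in range(n)]
    for span in range(1, n):
        for i in range(n - span):
            j = i + span
            best = dp[i + 1][j]                    # i unpaired
            for k in range(i + 1, j + 1):          # i paired with k
                if (seq[i], seq[k]) in pairs and k - i > min_loop:
                    left = dp[i + 1][k - 1] if k - 1 >= i + 1 else 0
                    right = dp[k + 1][j] if k + 1 <= j else 0
                    best = max(best, 1 + left + right)
            dp[i][j] = best
    return dp[0][n - 1] if n else 0
```

    The inner maximization over k is exactly the O(n) work per cell that the Four-Russians table-lookup trick compresses by a log factor.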

  4. Double-resolution electron holography with simple Fourier transform of fringe-shifted holograms

    International Nuclear Information System (INIS)

    Volkov, V.V.; Han, M.G.; Zhu, Y.

    2013-01-01

    We propose a fringe-shifting holographic method with an appropriate image wave recovery algorithm leading to exact solution of holographic equations. With this new method the complex object image wave recovered from holograms appears to have much less traditional artifacts caused by the autocorrelation band present practically in all Fourier transformed holograms. The new analytical solutions make possible a double-resolution electron holography free from autocorrelation band artifacts and thus push the limits for phase resolution. The new image wave recovery algorithm uses a popular Fourier solution of the side band-pass filter technique, while the fringe-shifting holographic method is simple to implement in practice. - Highlights: • We propose a fringe-shifting holographic method simple enough for practical implementations. • Our new image-wave-recovery algorithm follows from exact solution of holographic equations. • With autocorrelation band removal from holograms it is possible to achieve double-resolution electron holography data free from several commonly known artifacts. • The new fringe-shifting method can reach an image wave resolution close to single fringe spacing

  5. Double-resolution electron holography with simple Fourier transform of fringe-shifted holograms

    Energy Technology Data Exchange (ETDEWEB)

    Volkov, V.V., E-mail: volkov@bnl.gov; Han, M.G.; Zhu, Y.

    2013-11-15

    We propose a fringe-shifting holographic method with an appropriate image wave recovery algorithm leading to exact solution of holographic equations. With this new method the complex object image wave recovered from holograms appears to have much less traditional artifacts caused by the autocorrelation band present practically in all Fourier transformed holograms. The new analytical solutions make possible a double-resolution electron holography free from autocorrelation band artifacts and thus push the limits for phase resolution. The new image wave recovery algorithm uses a popular Fourier solution of the side band-pass filter technique, while the fringe-shifting holographic method is simple to implement in practice. - Highlights: • We propose a fringe-shifting holographic method simple enough for practical implementations. • Our new image-wave-recovery algorithm follows from exact solution of holographic equations. • With autocorrelation band removal from holograms it is possible to achieve double-resolution electron holography data free from several commonly known artifacts. • The new fringe-shifting method can reach an image wave resolution close to single fringe spacing.

  6. Design principle and structure of the ANI data centre

    International Nuclear Information System (INIS)

    Akopov, N.Z.; Arutyunyan, S.Kh.; Chilingaryan, A.A.; Galfayan, S.Kh.; Matevosyan, V.Kh.; Zazyan, M.Z.

    1985-01-01

    The design principles and structure of the applied statistical programs used for processing data from the ANI experiments are described. Nonparametric algorithms enable the development of highly efficient methods for the simultaneous analysis of computerized and experimental data from cosmic ray experiments. A relational database for unified data storage, protection, updating and erasure, as well as for fast and convenient information retrieval, is considered.

  7. Model parameter estimations from residual gravity anomalies due to simple-shaped sources using Differential Evolution Algorithm

    Science.gov (United States)

    Ekinci, Yunus Levent; Balkaya, Çağlayan; Göktürkler, Gökhan; Turan, Seçil

    2016-06-01

    An efficient approach to estimate model parameters from residual gravity data based on differential evolution (DE), a stochastic vector-based metaheuristic algorithm, is presented. We have shown the applicability and effectiveness of this algorithm on both synthetic and field anomalies. To our knowledge, this is the first attempt to apply DE to the parameter estimation of residual gravity anomalies due to isolated causative sources embedded in the subsurface. The model parameters dealt with here are the amplitude coefficient (A), the depth and exact origin of the causative source (zo and xo, respectively) and the shape factors (q and η). The error energy maps generated for some parameter pairs have successfully revealed the nature of the parameter estimation problem under consideration. Noise-free and noisy synthetic single gravity anomalies have been evaluated with success via DE/best/1/bin, a widely used strategy in DE. Additionally, some complicated gravity anomalies caused by multiple source bodies have been considered, and the results obtained have shown the efficiency of the algorithm. Then, using the strategy applied to the synthetic examples, field anomalies observed in various mineral explorations, such as a chromite deposit (Camaguey district, Cuba), a manganese deposit (Nagpur, India) and a base metal sulphide deposit (Quebec, Canada), have been considered to estimate the model parameters of the ore bodies. These applications show that the obtained results, such as the depths and shapes of the ore bodies, are quite consistent with those published in the literature. Uncertainty in the solutions obtained from the DE algorithm has also been investigated by a Metropolis-Hastings (M-H) sampling algorithm based on simulated annealing without a cooling schedule.
Based on the resulting histogram reconstructions of both synthetic and field data examples, the algorithm has provided reliable parameter estimations within the sampling limits of
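The DE/best/1/bin strategy named in the abstract is compact enough to sketch directly. The following is a minimal, illustrative Python implementation applied to a toy inversion with a simplified sphere-like decay forward model; the forward model, parameter bounds, and population settings here are assumptions for demonstration, not the authors' gravity formulation.

```python
import random

def de_best_1_bin(obj, bounds, np_=20, F=0.7, CR=0.9, gens=300, seed=1):
    """Minimize obj over a box via the DE/best/1/bin strategy."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(np_)]
    cost = [obj(x) for x in pop]
    for _ in range(gens):
        best = pop[min(range(np_), key=cost.__getitem__)]
        for i in range(np_):
            r1, r2 = rng.sample([j for j in range(np_) if j != i], 2)
            jrand = rng.randrange(dim)
            trial = list(pop[i])
            for j in range(dim):
                if rng.random() < CR or j == jrand:   # binomial crossover
                    lo, hi = bounds[j]
                    v = best[j] + F * (pop[r1][j] - pop[r2][j])
                    trial[j] = min(max(v, lo), hi)    # clip mutant to the box
            c = obj(trial)
            if c <= cost[i]:                          # greedy selection
                pop[i], cost[i] = trial, c
    k = min(range(np_), key=cost.__getitem__)
    return pop[k], cost[k]

# Toy inversion: recover amplitude A and depth z0 from synthetic "anomaly" samples.
def anomaly(p, x):                                    # simplified sphere-like decay
    A, z0 = p
    return A * z0 / (x * x + z0 * z0) ** 1.5

truth = (10.0, 3.0)
xs = [0.5 * i for i in range(-20, 21)]
data = [anomaly(truth, x) for x in xs]
misfit = lambda p: sum((anomaly(p, x) - d) ** 2 for x, d in zip(xs, data))
est, err = de_best_1_bin(misfit, [(0.1, 50.0), (0.5, 10.0)])
```

On this smooth two-parameter misfit the estimated (A, z0) settles near the synthetic truth; real gravity inversions add noise and parameter trade-offs that the paper addresses with M-H uncertainty sampling.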

  8. A Simple Introduction to Grobner Basis Methods in String Phenomenology

    International Nuclear Information System (INIS)

    Gray, J.

    2011-01-01

    I give an elementary introduction to the key algorithm used in recent applications of computational algebraic geometry to the subject of string phenomenology. I begin with a simple description of the algorithm itself and then give three examples of its use in physics. I describe how it can be used to obtain constraints on flux parameters, how it can simplify the equations describing vacua in 4D string models, and lastly how it can be used to compute the vacuum space of the electroweak sector of the MSSM.

  9. Simple and tight monogamy relations for a class of Bell inequalities

    Science.gov (United States)

    Augusiak, Remigiusz

    2017-01-01

    Physical principles constrain the way nonlocal correlations can be distributed among distant parties in a Bell-type experiment. These constraints are usually expressed by monogamy relations that bound the amount of Bell inequality violation observed by a set of parties by the violation observed by a different set of parties. Here we show that the no-signaling principle yields simple and tight monogamy relations for an important class of bipartite and multipartite Bell inequalities. We also link these trade-offs to the guessing probability—a key quantity in device-independent information processing.

  10. Regularization parameter estimation for underdetermined problems by the χ² principle with application to 2D focusing gravity inversion

    International Nuclear Information System (INIS)

    Vatankhah, Saeed; Ardestani, Vahid E; Renaut, Rosemary A

    2014-01-01

    The χ² principle generalizes the Morozov discrepancy principle to the augmented residual of the Tikhonov regularized least squares problem. For weighting of the data fidelity by a known Gaussian noise distribution on the measured data, when the stabilizing, or regularization, term is considered to be weighted by unknown inverse covariance information on the model parameters, the minimum of the Tikhonov functional becomes a random variable that follows a χ²-distribution with m+p−n degrees of freedom for the model matrix G of size m×n, m⩾n, and regularizer L of size p×n. Then, a Newton root-finding algorithm, employing the generalized singular value decomposition, or singular value decomposition when L = I, can be used to find the regularization parameter α. Here the result and algorithm are extended to the underdetermined case, m < n. The χ² principle and the unbiased predictive risk estimator of the regularization parameter are used for the first time in this context. For a simulated underdetermined data set with noise, these regularization parameter estimation methods, as well as the generalized cross validation method, are contrasted with the use of the L-curve and the Morozov discrepancy principle. Experiments demonstrate the efficiency and robustness of the χ² principle and unbiased predictive risk estimator, moreover showing that the L-curve and Morozov discrepancy principle are outperformed in general by the other three techniques. Furthermore, the minimum support stabilizer is of general use for the χ² principle when implemented without the desirable knowledge of the mean value of the model. (paper)

  11. An improved optimum-path forest clustering algorithm for remote sensing image segmentation

    Science.gov (United States)

    Chen, Siya; Sun, Tieli; Yang, Fengqin; Sun, Hongguang; Guan, Yu

    2018-03-01

    Remote sensing image segmentation is a key technology for processing remote sensing images. The image segmentation results can be used for feature extraction, target identification and object description. Thus, image segmentation directly affects the subsequent processing results. This paper proposes a novel Optimum-Path Forest (OPF) clustering algorithm that can be used for remote sensing segmentation. The method utilizes the principle that cluster centres are characterized by their own densities and by their distances to samples with higher densities. A new probability density function for the OPF clustering algorithm is defined based on this principle and applied to remote sensing image segmentation. Experiments are conducted using five remote sensing land cover images. The experimental results illustrate that the proposed method can outperform the original OPF approach.
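The selection principle described (centres combine high local density with a large distance to any sample of higher density) can be sketched generically. This is a density-peak-style sketch in Python, not the paper's OPF formulation; the Gaussian kernel width `dc` and the two-blob demo are illustrative assumptions.

```python
import math

def density_peaks(points, dc=1.0, k=2):
    """Pick cluster centres that combine high density with large distance
    to any point of higher density, then assign points to nearest centre."""
    n = len(points)
    dist = [[math.dist(p, q) for q in points] for p in points]
    # local density via a Gaussian kernel of width dc
    rho = [sum(math.exp(-(dist[i][j] / dc) ** 2) for j in range(n) if j != i)
           for i in range(n)]
    # distance to the nearest point of strictly higher density
    delta = []
    for i in range(n):
        higher = [dist[i][j] for j in range(n) if rho[j] > rho[i]]
        delta.append(min(higher) if higher else max(dist[i]))
    gamma = [rho[i] * delta[i] for i in range(n)]       # centre score
    centres = sorted(range(n), key=gamma.__getitem__, reverse=True)[:k]
    labels = [min(centres, key=lambda c: dist[i][c]) for i in range(n)]
    return centres, labels

pts = [(0, 0), (0.5, 0), (0, 0.5), (10, 10), (10.5, 10), (10, 10.5)]
centres, labels = density_peaks(pts)
```

Points interior to each blob score high on density but low on delta, so only one representative per blob survives as a centre.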

  12. A Hybrid Genetic Algorithm Approach for Optimal Power Flow

    Directory of Open Access Journals (Sweden)

    Sydulu Maheswarapu

    2011-08-01

    Full Text Available This paper puts forward a reformed hybrid genetic algorithm (GA) based approach to the optimal power flow. In the approach followed here, continuous variables are designed using a real-coded GA and discrete variables are processed as binary strings. The outcomes are compared with many other methods, such as the simple genetic algorithm (GA), adaptive genetic algorithm (AGA), differential evolution (DE), particle swarm optimization (PSO) and music based harmony search (MBHS), on an IEEE 30-bus test bed with a total load of 283.4 MW. The proposed algorithm is found to offer the lowest fuel cost, and to be computationally faster, more robust and superior, with promising convergence characteristics.

  13. Strategies and Principles of Distributed Machine Learning on Big Data

    Directory of Open Access Journals (Sweden)

    Eric P. Xing

    2016-06-01

    Full Text Available The rise of big data has led to new demands for machine learning (ML systems to learn complex models, with millions to billions of parameters, that promise adequate capacity to digest massive datasets and offer powerful predictive analytics (such as high-dimensional latent features, intermediate representations, and decision functions thereupon. In order to run ML algorithms at such scales, on a distributed cluster with tens to thousands of machines, it is often the case that significant engineering efforts are required—and one might fairly ask whether such engineering truly falls within the domain of ML research. Taking the view that “big” ML systems can benefit greatly from ML-rooted statistical and algorithmic insights—and that ML researchers should therefore not shy away from such systems design—we discuss a series of principles and strategies distilled from our recent efforts on industrial-scale ML solutions. These principles and strategies span a continuum from application, to engineering, and to theoretical research and development of big ML systems and architectures, with the goal of understanding how to make them efficient, generally applicable, and supported with convergence and scaling guarantees. They concern four key questions that traditionally receive little attention in ML research: How can an ML program be distributed over a cluster? How can ML computation be bridged with inter-machine communication? How can such communication be performed? What should be communicated between machines? By exposing underlying statistical and algorithmic characteristics unique to ML programs but not typically seen in traditional computer programs, and by dissecting successful cases to reveal how we have harnessed these principles to design and develop both high-performance distributed ML software as well as general-purpose ML frameworks, we present opportunities for ML researchers and practitioners to further shape and enlarge the area

  14. Presentation of a general algorithm for effect-assessment on secondary poisoning. II Terrestrial food chains

    NARCIS (Netherlands)

    Romijn CAFM; Luttik R; Slooff W; Canton JH

    1991-01-01

    In an earlier report, a simple algorithm for effect-assessment on secondary poisoning of birds and mammals was presented. This algorithm (MAR = NOEC/BCF) was drawn up by analyzing an aquatic food chain. In the present study it was tested whether this algorithm can be used equally well for

  15. A new BP Fourier algorithm and its application in English teaching evaluation

    Science.gov (United States)

    Pei, Xuehui; Pei, Guixin

    2017-08-01

    The BP neural network algorithm has wide adaptability and accuracy when used in complicated system evaluation, but calculation defects such as slow convergence have limited its practical application. This paper tries to speed up the convergence of the BP neural network algorithm with Fourier basis functions and presents a new BP Fourier algorithm for complicated system evaluation. First, the shortcomings and working principle of the BP algorithm are analyzed for subsequent targeted improvement. Second, the presented BP Fourier algorithm adopts Fourier basis functions to simplify the calculation structure, designs a new transfer function between the input and output layers, and conducts theoretical analysis to prove the efficiency of the presented algorithm. Finally, the presented algorithm is used in evaluating university English teaching, and the application results show that the presented BP Fourier algorithm has better calculation efficiency and evaluation accuracy and can be applied in practice to evaluate complicated systems.
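The abstract does not give the exact transfer-function design, but the general idea of pairing a simple trainable model with Fourier basis functions can be illustrated. The sketch below fits a linear-in-parameters Fourier model by stochastic gradient descent (the backpropagation update collapses to a single delta rule because the model is linear in its weights); the basis size, learning rate, and target function are illustrative assumptions, not the paper's network.

```python
import math, random

def fourier_features(x, n_basis=5):
    """Scalar input -> [1, sin(x), cos(x), sin(2x), cos(2x), ...]."""
    feats = [1.0]
    for k in range(1, n_basis + 1):
        feats += [math.sin(k * x), math.cos(k * x)]
    return feats

def predict(w, x, n_basis=5):
    return sum(wi * pi for wi, pi in zip(w, fourier_features(x, n_basis)))

def train(xs, ys, n_basis=5, lr=0.05, epochs=2000, seed=0):
    """Delta-rule (gradient descent) training of the linear Fourier model."""
    rng = random.Random(seed)
    w = [rng.uniform(-0.1, 0.1) for _ in range(2 * n_basis + 1)]
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            phi = fourier_features(x, n_basis)
            err = sum(wi * pi for wi, pi in zip(w, phi)) - y
            w = [wi - lr * err * pi for wi, pi in zip(w, phi)]
    return w

# Learn a smooth periodic "evaluation score" from samples.
xs = [2 * math.pi * i / 40 for i in range(40)]
ys = [0.5 + 0.3 * math.sin(x) + 0.2 * math.cos(2 * x) for x in xs]
w = train(xs, ys)
```

Because the target lies in the span of the basis, the iterative updates drive the training error essentially to zero, which is the convergence speed-up the basis change buys.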

  16. Investigation of energy windowing algorithms for effective cargo screening with radiation portal monitors

    International Nuclear Information System (INIS)

    Hevener, Ryne; Yim, Man-Sung; Baird, Ken

    2013-01-01

    Radiation portal monitors (RPMs) are distributed across the globe in an effort to decrease the illicit trafficking of nuclear materials. Many current-generation RPMs utilize large polyvinyltoluene (PVT) plastic scintillators. These detectors are low cost and reliable but have very poor energy resolution. The lack of spectroscopic detail available from PVT spectra has in the past restricted these systems primarily to performing simple gross counting measurements. A common approach to extend the capability of PVT detectors beyond simple “gross-gamma” use is to apply a technique known as energy windowing (EW) to perform rough nuclide identification with limited spectral information. An approach to creating EW algorithms was developed in this work utilizing a specific set of calibration sources and modified EW equations; this algorithm provided a degree of increased identification capability. A simulated real-time emulation of the algorithm utilizing actual port-of-entry RPM data supplied by ORNL provided an extensive proving ground for the algorithm. This algorithm is able to identify four potential threat nuclides and the major NORM source with a high degree of accuracy. High-energy masking, a major detriment of EW algorithms, is reduced by the algorithm's design. - Highlights: • Gross counting algorithms do not produce detailed screenings. • Energy windowing algorithms enhance nuclide identification capability. • Proper use of an EW algorithm can identify multiple threat nuclides. • Utilizing a specific set of calibration sources is important for nuclide identification
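A minimal energy-windowing sketch: bin a spectrum into a few coarse windows, normalize the window counts into ratios, and match the ratio vector against a small signature library. The window boundaries, signature values, and tolerance below are hypothetical stand-ins, not the calibration set or EW equations used in this work.

```python
def window_ratios(spectrum, windows):
    """Fraction of total counts in each energy window.
    spectrum: iterable of (energy_keV, counts); windows: list of (lo, hi)."""
    totals = [sum(c for e, c in spectrum if lo <= e < hi) for lo, hi in windows]
    grand = sum(totals)
    return [t / grand for t in totals] if grand else [0.0] * len(windows)

def identify(spectrum, windows, signatures, tol=0.1):
    """Nearest-signature match on window ratios; 'unknown' if nothing is close."""
    r = window_ratios(spectrum, windows)
    best = min(signatures, key=lambda n: sum((a - b) ** 2
                                             for a, b in zip(r, signatures[n])))
    d = sum((a - b) ** 2 for a, b in zip(r, signatures[best]))
    return best if d < tol else "unknown"

windows = [(0, 500), (500, 1500), (1500, 3000)]          # keV bounds, hypothetical
signatures = {"nuclide_A": [0.80, 0.15, 0.05],           # made-up ratio library
              "nuclide_B": [0.20, 0.50, 0.30]}
```

Normalizing to ratios rather than raw counts is what lets such schemes tolerate varying source strength; the high-energy masking the abstract mentions arises when one window's counts swamp the others.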

  17. Reactor controller design using genetic algorithms with simulated annealing

    International Nuclear Information System (INIS)

    Erkan, K.; Buetuen, E.

    2000-01-01

    This chapter presents a digital control system for the ITU TRIGA Mark-II reactor using genetic algorithms with simulated annealing. The basic principles of genetic algorithms for problem solving are inspired by the mechanism of natural selection, a biological process in which stronger individuals are likely to be the winners in a competing environment. Genetic algorithms use a direct analogy of natural evolution. They are global search techniques for optimisation, but they are poor at hill-climbing, whereas simulated annealing has the ability of probabilistic hill-climbing. Thus, the two techniques are combined here to get a fine-tuned algorithm that yields faster convergence and a more accurate search, by introducing a new mutation operator like simulated annealing or an adaptive cooling schedule. In control system design, there are currently no systematic approaches to choosing the controller parameters to obtain a desired performance; the controller parameters are usually determined by trial and error with simulation and experimental analysis. Here, the genetic algorithm is used to automatically and efficiently search for a set of controller parameters giving better performance. (orig.)
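The combination described, a GA whose mutation operator accepts uphill moves with a Metropolis probability under a cooling temperature, can be sketched as follows. The encoding, operators, and toy objective are illustrative assumptions rather than the reactor controller setup.

```python
import math, random

def ga_sa(obj, bounds, pop_size=20, gens=150, t0=1.0, cooling=0.97, seed=3):
    """GA whose mutation operator is annealed: uphill mutations are kept
    with Metropolis probability exp(-delta/T), and T cools each generation.
    Assumes at least two decision variables (for one-point crossover)."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    temp = t0
    for _ in range(gens):
        pop.sort(key=obj)                       # elitism: best half survive
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, dim)         # one-point crossover
            child = a[:cut] + b[cut:]
            j = rng.randrange(dim)              # mutate one gene
            lo, hi = bounds[j]
            mutant = list(child)
            mutant[j] = min(max(child[j] + rng.gauss(0, 0.1 * (hi - lo)), lo), hi)
            delta = obj(mutant) - obj(child)
            if delta < 0 or rng.random() < math.exp(-delta / max(temp, 1e-12)):
                child = mutant                  # annealed acceptance
            children.append(child)
        pop = parents + children
        temp *= cooling
    return min(pop, key=obj)

best = ga_sa(lambda p: (p[0] - 2.0) ** 2 + (p[1] + 1.0) ** 2, [(-5, 5), (-5, 5)])
```

Early on the high temperature lets mutations escape local basins (the GA's weakness the abstract notes); as the temperature decays, only downhill mutations survive and the search fine-tunes.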

  18. Systolic array processing of the sequential decoding algorithm

    Science.gov (United States)

    Chang, C. Y.; Yao, K.

    1989-01-01

    A systolic array processing technique is applied to implementing the stack algorithm form of the sequential decoding algorithm. It is shown that sorting, a key function in the stack algorithm, can be efficiently realized by a special type of systolic arrays known as systolic priority queues. Compared to the stack-bucket algorithm, this approach is shown to have the advantages that the decoding always moves along the optimal path, that it has a fast and constant decoding speed and that its simple and regular hardware architecture is suitable for VLSI implementation. Three types of systolic priority queues are discussed: random access scheme, shift register scheme and ripple register scheme. The property of the entries stored in the systolic priority queue is also investigated. The results are applicable to many other basic sorting type problems.

  19. An algorithm to construct the basic algebra of a skew group algebra

    NARCIS (Netherlands)

    Horobeţ, E.

    2016-01-01

    We give an algorithm for the computation of the basic algebra Morita equivalent to a skew group algebra of a path algebra by obtaining formulas for the number of vertices and arrows of the new quiver Qb. We apply this algorithm to compute the basic algebra corresponding to all simple quaternion

  20. Foam: A general purpose Monte Carlo cellular algorithm

    International Nuclear Information System (INIS)

    Jadach, S.

    2002-01-01

    A general-purpose, self-adapting Monte Carlo (MC) algorithm implemented in the program Foam is described. The high efficiency of the MC, that is, a small maximum weight or variance of the MC weight, is achieved by means of dividing the integration domain into small cells. The cells can be n-dimensional simplices, hyperrectangles or a Cartesian product of them. The grid of cells, called 'foam', is produced in the process of the binary split of the cells. The choice of the next cell to be divided and the position/direction of the division hyperplane is driven by the algorithm which optimizes the ratio of the maximum weight to the average weight or (optionally) the total variance. The algorithm is able to deal, in principle, with an arbitrary pattern of the singularities in the distribution. (author)
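The core idea of such cellular MC algorithms (recursively bisect the cell contributing most to the spread of the MC weight, then sample stratified over the resulting grid) can be sketched in one dimension. The cell count, probe size, and integrand below are illustrative assumptions, not Foam's actual implementation, which works in many dimensions with simplices and hyperrectangles.

```python
import random

def build_foam(f, a, b, n_cells=32, probe=50, seed=7):
    """Grow a 1D 'foam' by repeatedly bisecting the cell with the largest
    estimated contribution to the spread of the MC weight."""
    rng = random.Random(seed)
    cells = [(a, b)]
    def spread(lo, hi):
        vals = [f(rng.uniform(lo, hi)) for _ in range(probe)]
        return (max(vals) - min(vals)) * (hi - lo)
    while len(cells) < n_cells:
        worst = max(range(len(cells)), key=lambda i: spread(*cells[i]))
        lo, hi = cells.pop(worst)
        mid = 0.5 * (lo + hi)
        cells += [(lo, mid), (mid, hi)]
    return cells

def foam_integrate(f, cells, n_per_cell=200, seed=8):
    """Stratified MC: each cell contributes its width times the mean of f."""
    rng = random.Random(seed)
    return sum((hi - lo) * sum(f(rng.uniform(lo, hi)) for _ in range(n_per_cell))
               / n_per_cell for lo, hi in cells)

cells = build_foam(lambda x: x * x, 0.0, 1.0)
est = foam_integrate(lambda x: x * x, cells)      # integral of x^2 on [0,1] is 1/3
```

Within each small cell the integrand is nearly flat, so the weight distribution is flat too, which is exactly the max-weight/average-weight ratio the real algorithm optimizes.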

  1. Foam: A General Purpose Monte Carlo Cellular Algorithm

    CERN Document Server

    Jadach, Stanislaw

    2002-01-01

    A general-purpose, self-adapting Monte Carlo (MC) algorithm implemented in the program Foam is described. The high efficiency of the MC, that is, a small maximum weight or variance of the MC weight, is achieved by means of dividing the integration domain into small cells. The cells can be n-dimensional simplices, hyperrectangles or a Cartesian product of them. The grid of cells, the 'foam', is produced in the process of the binary split of the cells. The next cell to be divided and the position/direction of the division hyperplane is chosen by the algorithm which optimizes the ratio of the maximum weight to the average weight or (optionally) the total variance. The algorithm is able to deal, in principle, with an arbitrary pattern of the singularities in the distribution.

  2. Optimal control and optimal trajectories of regional macroeconomic dynamics based on the Pontryagin maximum principle

    Science.gov (United States)

    Bulgakov, V. K.; Strigunov, V. V.

    2009-05-01

    The Pontryagin maximum principle is used to prove a theorem concerning optimal control in regional macroeconomics. A boundary value problem for optimal trajectories of the state and adjoint variables is formulated, and optimal curves are analyzed. An algorithm is proposed for solving the boundary value problem of optimal control. The performance of the algorithm is demonstrated by computing an optimal control and the corresponding optimal trajectories.
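The shooting structure of such boundary value problems can be shown on a toy linear-quadratic example unrelated to the paper's macroeconomic model: minimize the integral of u²/2 over [0, 1] subject to x' = u, x(0) = 0, x(1) = 1. The maximum principle gives u = -λ with λ' = 0, so a bisection on the unknown initial adjoint λ(0) enforces the terminal condition (the analytic solution is λ = -1, u = 1, x(t) = t).

```python
def shoot(lam0, n=1000):
    """Forward Euler integration of the state/adjoint system with u = -lambda."""
    x, lam, dt = 0.0, lam0, 1.0 / n
    for _ in range(n):
        u = -lam            # pointwise minimizer of H = u**2/2 + lam*u
        x += u * dt         # state equation: x' = u
        # adjoint equation: lam' = -dH/dx = 0, so lam stays constant
    return x

def solve_bvp(target=1.0, lo=-10.0, hi=10.0, iters=60):
    """Bisection on lambda(0) to satisfy the terminal condition x(1) = target."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if shoot(mid) > target:   # x(1) = -lambda0 is decreasing in lambda0
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

lam0 = solve_bvp()    # analytic answer: lambda = -1, so u = 1 and x(t) = t
```

Realistic problems replace the bisection with a multidimensional root-finder on the vector of unknown initial adjoints, but the forward-integrate-and-match structure is the same.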

  3. Highly accurate symplectic element based on two variational principles

    Science.gov (United States)

    Qing, Guanghui; Tian, Jia

    2018-02-01

    To meet the stability requirement on numerical results, the mathematical theory of classical mixed methods is relatively complex. However, generalized mixed methods are automatically stable, and their building process is simple and straightforward. In this paper, based on the seminal idea of the generalized mixed methods, a simple, stable, and highly accurate 8-node noncompatible symplectic element (NCSE8) was developed by combining the modified Hellinger-Reissner mixed variational principle and the minimum energy principle. To ensure the accuracy of in-plane stress results, a simultaneous equation approach was also suggested. Numerical experimentation shows that the stress accuracy of NCSE8 is nearly the same as that of displacement methods, and the results are in good agreement with the exact solutions when the mesh is relatively fine. NCSE8 has the advantages of a clear concept, easy calculation in a finite element computer program, higher accuracy and wide applicability for various linear elasticity compressible and nearly incompressible material problems. NCSE8 may prove even more advantageous for fracture problems due to its better stress accuracy.

  4. Analytic representation for first-principles pseudopotentials

    International Nuclear Information System (INIS)

    Lam, P.K.; Cohen, M.L.; Zunger, A.

    1980-01-01

    The first-principles pseudopotentials developed by Zunger and Cohen are fit with a simple analytic form chosen to model the main physical properties of the potentials. The fitting parameters for the first three rows of the Periodic Table are presented, and the quality of the fit is discussed. The parameters reflect chemical trends of the elements. We find that a minimum of three parameters is required to reproduce the regularities of the Periodic Table. Application of these analytic potentials is also discussed

  5. Algorithms of maximum likelihood data clustering with applications

    Science.gov (United States)

    Giada, Lorenzo; Marsili, Matteo

    2002-12-01

    We address the problem of data clustering by introducing an unsupervised, parameter-free approach based on the maximum likelihood principle. Starting from the observation that data sets belonging to the same cluster share common information, we construct an expression for the likelihood of any possible cluster structure. The likelihood in turn depends only on the Pearson correlation coefficients of the data. We discuss clustering algorithms that provide a fast and reliable approximation to maximum likelihood configurations. Compared to standard clustering methods, our approach has the advantages that (i) it is parameter free, (ii) the number of clusters need not be fixed in advance and (iii) the interpretation of the results is transparent. In order to test our approach and compare it with standard clustering algorithms, we analyze two very different data sets: time series of financial market returns and gene expression data. We find that different maximization algorithms produce similar cluster structures whereas the outcome of standard algorithms has a much wider variability.

  6. Superconductivity in the elements, alloys and simple compounds

    Energy Technology Data Exchange (ETDEWEB)

    Webb, G.W. [Department of Physics, University of California, San Diego, La Jolla, CA 92093-0319 (United States); Marsiglio, F. [Department of Physics, University of Alberta, Edmonton, Alberta, Canada T6G 2E1 (Canada); Hirsch, J.E., E-mail: jhirsch@ucsd.edu [Department of Physics, University of California, San Diego, La Jolla, CA 92093-0319 (United States)

    2015-07-15

    Highlights: • A review of superconductivity in elements, alloys and simple compounds is given. • These materials are believed to be described by BCS–Eliashberg theory. • The McMillan–Rowell inversion process is described. • Calculations of Tc’s from first principles are surveyed. • Other criteria beyond BCS to characterize these superconductors are discussed. - Abstract: We give a brief review of superconductivity at ambient pressure in elements, alloys, and simple three-dimensional compounds. Historically these were the first superconducting materials studied, and based on the experimental knowledge gained from them the BCS theory of superconductivity was developed in 1957. Extended to include the effect of phonon retardation, the theory is believed to describe the subset of superconducting materials known as ‘conventional superconductors’, where superconductivity is caused by the electron–phonon interaction. These include the elements, alloys and simple compounds discussed in this article and several other classes of materials discussed in other articles in this Special Issue.

  7. Superconductivity in the elements, alloys and simple compounds

    International Nuclear Information System (INIS)

    Webb, G.W.; Marsiglio, F.; Hirsch, J.E.

    2015-01-01

    Highlights: • A review of superconductivity in elements, alloys and simple compounds is given. • These materials are believed to be described by BCS–Eliashberg theory. • The McMillan–Rowell inversion process is described. • Calculations of Tc’s from first principles are surveyed. • Other criteria beyond BCS to characterize these superconductors are discussed. - Abstract: We give a brief review of superconductivity at ambient pressure in elements, alloys, and simple three-dimensional compounds. Historically these were the first superconducting materials studied, and based on the experimental knowledge gained from them the BCS theory of superconductivity was developed in 1957. Extended to include the effect of phonon retardation, the theory is believed to describe the subset of superconducting materials known as ‘conventional superconductors’, where superconductivity is caused by the electron–phonon interaction. These include the elements, alloys and simple compounds discussed in this article and several other classes of materials discussed in other articles in this Special Issue

  8. Improved CORF model of simple cell combined with non-classical receptive field and its application on edge detection

    Science.gov (United States)

    Sun, Xiao; Chai, Guobei; Liu, Wei; Bao, Wenzhuo; Zhao, Xiaoning; Ming, Delie

    2018-02-01

    Simple cells in primary visual cortex are believed to extract local edge information from a visual scene. In this paper, inspired by the different receptive field properties and visual information flow paths of neurons, an improved Combination of Receptive Fields (CORF) model combined with non-classical receptive fields was proposed to simulate the responses of simple cells' receptive fields. Compared to the classical model, the proposed model better imitates the simple cell's physiological structure by taking into account the facilitation and suppression of non-classical receptive fields. On this basis, an edge detection algorithm was proposed as an application of the improved CORF model. Experimental results validate the robustness of the proposed algorithm to noise and background interference.

  9. 3D Radiative Transfer in Eta Carinae: Application of the SimpleX Algorithm to 3D SPH Simulations of Binary Colliding Winds

    Science.gov (United States)

    Clementel, N.; Madura, T. I.; Kruip, C. J. H.; Icke, V.; Gull, T. R.

    2014-01-01

    Eta Carinae is an ideal astrophysical laboratory for studying massive binary interactions and evolution, and stellar wind-wind collisions. Recent three-dimensional (3D) simulations set the stage for understanding the highly complex 3D flows in Eta Car. Observations of different broad high- and low-ionization forbidden emission lines provide an excellent tool to constrain the orientation of the system, the primary's mass-loss rate, and the ionizing flux of the hot secondary. In this work we present the first steps towards generating synthetic observations to compare with available and future HST/STIS data. We present initial results from full 3D radiative transfer simulations of the interacting winds in Eta Car. We use the SimpleX algorithm to post-process the output from 3D SPH simulations and obtain the ionization fractions of hydrogen and helium assuming three different mass-loss rates for the primary star. The resultant ionization maps of both species constrain the regions where the observed forbidden emission lines can form. Including collisional ionization is necessary to achieve a better description of the ionization states, especially in the areas shielded from the secondary's radiation. We find that reducing the primary's mass-loss rate increases the volume of ionized gas, creating larger areas where the forbidden emission lines can form. We conclude that post processing 3D SPH data with SimpleX is a viable tool to create ionization maps for Eta Car.

  10. A Monte Carlo algorithm for the Vavilov distribution

    International Nuclear Information System (INIS)

    Yi, Chul-Young; Han, Hyon-Soo

    1999-01-01

    Using the convolution property of the inverse Laplace transform, an improved Monte Carlo algorithm for the Vavilov energy-loss straggling distribution of the charged particle is developed, which is relatively simple and gives enough accuracy to be used for most Monte Carlo applications

  11. Multi Dimensional Honey Bee Foraging Algorithm Based on Optimal Energy Consumption

    Science.gov (United States)

    Saritha, R.; Vinod Chandra, S. S.

    2017-10-01

    In this paper a new nature-inspired algorithm is proposed based on the natural foraging behavior of multi-dimensional honey bee colonies. This method handles issues that arise when food is shared from multiple sources by multiple swarms at multiple destinations. The self-organizing nature of natural honey bee swarms in multiple colonies is based on the principle of energy consumption. Swarms of multiple colonies select a food source to optimally fulfill the requirements of their colonies, based on the energy required to transport food between a source and a destination. Minimum use of energy leads to maximized profit in each colony. The mathematical model proposed here is based on this principle. It has been successfully evaluated by applying it to a multi-objective transportation problem, optimizing cost and time. The algorithm optimizes the needs at each destination in linear time.
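The energy-based source selection can be sketched as a simple assignment rule: each colony (destination) picks the source minimizing transport energy, modeled here as distance times demand. This greedy single-pass rule and the particular energy model are simplifying assumptions; the paper's full multi-objective treatment of cost and time is richer.

```python
import math

def assign_sources(colonies, sources, demand):
    """Each colony picks the source minimizing transport energy, modeled as
    Euclidean distance from source to destination times the colony's demand."""
    plan, total = {}, 0.0
    for name, pos in colonies.items():
        energy = lambda s: math.dist(pos, sources[s]) * demand[name]
        best = min(sources, key=energy)
        plan[name] = best
        total += energy(best)
    return plan, total

colonies = {"c1": (0.0, 0.0), "c2": (10.0, 0.0)}
demand = {"c1": 2.0, "c2": 1.0}
sources = {"s1": (1.0, 0.0), "s2": (9.0, 0.0)}
plan, total = assign_sources(colonies, sources, demand)
```

Because each colony is handled once, the assignment runs in time linear in the number of destinations (for a fixed source set), echoing the linear-time claim in the abstract.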

  12. Upper Bounds on the Number of Errors Corrected by the Koetter–Vardy Algorithm

    DEFF Research Database (Denmark)

    Justesen, Jørn

    2007-01-01

    By introducing a few simplifying assumptions we derive a simple condition for successful decoding using the Koetter-Vardy algorithm for soft-decision decoding of Reed-Solomon codes. We show that the algorithm has a significant advantage over hard decision decoding when the code rate is low, when ...

  13. Acoustic levitation and the Boltzmann-Ehrenfest principle

    Science.gov (United States)

    Putterman, S.; Rudnick, Joseph; Barmatz, M.

    1989-01-01

    The Boltzmann-Ehrenfest principle of adiabatic invariance relates the acoustic potential acting on a sample positioned in a single-mode cavity to the shift in resonant frequency caused by the presence of this sample. This general and simple relation applies to samples and cavities of arbitrary shape, dimension, and compressibility. Positioning forces and torques can, therefore, be determined from straightforward measurements of frequency shifts. Applications to the Rayleigh disk phenomenon and levitated cylinders are presented.

  14. Research on AHP decision algorithms based on BP algorithm

    Science.gov (United States)

    Ma, Ning; Guan, Jianhe

    2017-10-01

    Decision making is the thinking activity by which people choose or judge, and scientific decision-making has always been a hot research topic. The Analytic Hierarchy Process (AHP) is a simple and practical multi-criteria, multi-objective decision-making method that combines quantitative and qualitative analysis and can express subjective judgments in numerical form. In decision analysis using the AHP method, the rationality of the pairwise judgment matrix has a great influence on the decision result. However, in dealing with real problems, the judgment matrix produced by pairwise comparison is often inconsistent, that is, it does not meet the consistency requirements. The BP neural network algorithm is an adaptive nonlinear dynamic system with powerful collective computing and learning ability; it can refine the data by constantly modifying the weights and thresholds of the network to minimize the mean square error. In this paper, the BP algorithm is used to deal with the consistency of the pairwise judgment matrix of the AHP.
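The consistency check that motivates the BP treatment is itself easy to compute: Saaty's consistency ratio compares the principal eigenvalue of the judgment matrix with the matrix order. A pure-Python sketch using power iteration follows (the random-index table is truncated to small orders, and the paper's BP correction step is not reproduced).

```python
def consistency_ratio(M, iters=200):
    """Saaty consistency ratio CR = CI / RI for a pairwise judgment matrix M."""
    n = len(M)
    v = [1.0 / n] * n
    for _ in range(iters):                      # power iteration for lambda_max
        w = [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
        s = sum(w)
        v = [x / s for x in w]
    lam = sum(sum(M[i][j] * v[j] for j in range(n)) for i in range(n))  # sum(v) == 1
    ci = (lam - n) / (n - 1)                    # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]         # Saaty random indices (small n only)
    return ci / ri

consistent = [[1.0, 0.5, 0.25], [2.0, 1.0, 0.5], [4.0, 2.0, 1.0]]    # from weights 1:2:4
cyclic = [[1.0, 3.0, 1 / 3], [1 / 3, 1.0, 3.0], [3.0, 1 / 3, 1.0]]   # maximally inconsistent
```

A perfectly consistent matrix gives CR = 0 (lambda_max equals n), while the cyclic preference matrix fails the usual CR < 0.1 acceptance threshold badly, which is the situation the paper's BP adjustment targets.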

  15. The Principle of Energetic Consistency

    Science.gov (United States)

    Cohn, Stephen E.

    2009-01-01

    A basic result in estimation theory is that the minimum variance estimate of the dynamical state, given the observations, is the conditional mean estimate. This result holds independently of the specifics of any dynamical or observation nonlinearity or stochasticity, requiring only that the probability density function of the state, conditioned on the observations, has two moments. For nonlinear dynamics that conserve a total energy, this general result implies the principle of energetic consistency: if the dynamical variables are taken to be the natural energy variables, then the sum of the total energy of the conditional mean and the trace of the conditional covariance matrix (the total variance) is constant between observations. Ensemble Kalman filtering methods are designed to approximate the evolution of the conditional mean and covariance matrix. For them the principle of energetic consistency holds independently of ensemble size, even with covariance localization. However, full Kalman filter experiments with advection dynamics have shown that a small amount of numerical dissipation can cause a large, state-dependent loss of total variance, to the detriment of filter performance. The principle of energetic consistency offers a simple way to test whether this spurious loss of variance limits ensemble filter performance in full-blown applications. The classical second-moment closure (third-moment discard) equations also satisfy the principle of energetic consistency, independently of the rank of the conditional covariance matrix. Low-rank approximation of these equations offers an energetically consistent, computationally viable alternative to ensemble filtering. Current formulations of long-window, weak-constraint, four-dimensional variational methods are designed to approximate the conditional mode rather than the conditional mean. Thus they neglect the nonlinear bias term in the second-moment closure equation for the conditional mean. The principle of

  16. Algorithm for Compressing Time-Series Data

    Science.gov (United States)

    Hawkins, S. Edward, III; Darlington, Edward Hugo

    2012-01-01

    An algorithm based on Chebyshev polynomials effects lossy compression of time-series data or other one-dimensional data streams (e.g., spectral data) that are arranged in blocks for sequential transmission. The algorithm was developed for use in transmitting data from spacecraft scientific instruments to Earth stations. In spite of its lossy nature, the algorithm preserves the information needed for scientific analysis. The algorithm is computationally simple, yet compresses data streams by factors much greater than two. The algorithm is not restricted to spacecraft or scientific uses: it is applicable to time-series data in general. The algorithm can also be applied to general multidimensional data that have been converted to time-series data, a typical example being image data acquired by raster scanning. However, unlike most prior image-data-compression algorithms, this algorithm neither depends on nor exploits the two-dimensional spatial correlations that are generally present in images. In order to understand the essence of this compression algorithm, it is necessary to understand that the net effect of this algorithm and the associated decompression algorithm is to approximate the original stream of data as a sequence of finite series of Chebyshev polynomials. For the purpose of this algorithm, a block of data or interval of time for which a Chebyshev polynomial series is fitted to the original data is denoted a fitting interval. Chebyshev approximation has two properties that make it particularly effective for compressing serial data streams with minimal loss of scientific information: The errors associated with a Chebyshev approximation are nearly uniformly distributed over the fitting interval (this is known in the art as the "equal error property"); and the maximum deviations of the fitted Chebyshev polynomial from the original data have the smallest possible values (this is known in the art as the "min-max property").
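The essence of the compression step can be sketched directly: resample a data block at Chebyshev nodes, obtain series coefficients from the discrete orthogonality of the cosine, and keep only those coefficients. The block length, coefficient count, and linear interpolation onto the nodes below are illustrative choices, not the flight algorithm's actual parameters.

```python
import math

def cheb_compress(block, n_coef):
    """Lossy-compress a data block into n_coef Chebyshev coefficients by
    sampling the (linearly interpolated) block at Chebyshev nodes."""
    m = len(block)
    def sample(t):                       # t in [-1, 1] -> interpolated data value
        pos = (t + 1.0) / 2.0 * (m - 1)
        i = min(int(pos), m - 2)
        frac = pos - i
        return block[i] * (1.0 - frac) + block[i + 1] * frac
    vals = [sample(math.cos(math.pi * (k + 0.5) / n_coef)) for k in range(n_coef)]
    # discrete orthogonality of cos gives the interpolant's coefficients
    return [2.0 / n_coef * sum(vals[k] * math.cos(j * math.pi * (k + 0.5) / n_coef)
                               for k in range(n_coef)) for j in range(n_coef)]

def cheb_decompress(coefs, m):
    """Evaluate the truncated Chebyshev series back onto m uniform samples."""
    out = []
    for i in range(m):
        t = 2.0 * i / (m - 1) - 1.0
        theta = math.acos(max(-1.0, min(1.0, t)))
        out.append(coefs[0] / 2.0 +
                   sum(c * math.cos((j + 1) * theta) for j, c in enumerate(coefs[1:])))
    return out

block = [math.sin(0.1 * i) + 0.5 for i in range(64)]      # 64 samples -> 8 numbers
rec = cheb_decompress(cheb_compress(block, 8), 64)
```

Here 64 samples are represented by 8 coefficients, a factor-of-eight compression, while the near-uniform error distribution of the Chebyshev fit keeps the worst-case reconstruction error small over the whole fitting interval.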

  17. A balance principle approach for modeling phase transformation kinetics

    International Nuclear Information System (INIS)

    Lusk, M.; Krauss, G.; Jou, H.J.

    1995-01-01

A balance principle is offered to model the volume fraction kinetics of phase transformations at a continuum level. This microbalance provides a differential equation for transformation kinetics which is coupled to the differential equations governing the mechanical and thermal aspects of the process. Application here is restricted to diffusive transformations for the sake of clarity, although the principle is discussed for martensitic phase transitions as well. Avrami-type kinetics are shown to result from a special class of energy functions. An illustrative example using a 0.5% C Chromium steel demonstrates how TTT and CCT curves can be generated using a particularly simple effective energy function. (orig.)

  18. Implementing Modified Burg Algorithms in Multivariate Subset Autoregressive Modeling

    Directory of Open Access Journals (Sweden)

    A. Alexandre Trindade

    2003-02-01

    Full Text Available The large number of parameters in subset vector autoregressive models often leads one to procure fast, simple, and efficient alternatives or precursors to maximum likelihood estimation. We present the solution of the multivariate subset Yule-Walker equations as one such alternative. In recent work, Brockwell, Dahlhaus, and Trindade (2002) show that the Yule-Walker estimators can actually be obtained as a special case of a general recursive Burg-type algorithm. We illustrate the structure of this algorithm, and discuss its implementation in a high-level programming language. Applications of the algorithm in univariate and bivariate modeling are showcased in examples. Univariate and bivariate versions of the algorithm written in Fortran 90 are included in the appendix, and their use illustrated.
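For the univariate case, the Yule-Walker estimator that the Burg-type recursion generalizes can be sketched in a few lines (an illustrative stand-in, not the Fortran 90 routines from the record's appendix; the AR(1) example data are simulated):

```python
import numpy as np

def yule_walker(x, order):
    """Estimate AR(order) coefficients by solving the Yule-Walker equations."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    # Biased sample autocovariances gamma(0), ..., gamma(order)
    gamma = np.array([x[:n - k] @ x[k:] / n for k in range(order + 1)])
    # Toeplitz system R * phi = r
    R = np.array([[gamma[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R, gamma[1:order + 1])

# Simulated AR(1) series: x_t = 0.6 x_{t-1} + white noise
rng = np.random.default_rng(0)
x = np.zeros(2000)
for t in range(1, 2000):
    x[t] = 0.6 * x[t - 1] + rng.standard_normal()
phi = yule_walker(x, order=1)
```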

  19. Greatest Happiness Principle in a Complex System Approach

    Directory of Open Access Journals (Sweden)

    Katalin Martinás

    2012-06-01

    Full Text Available The principle of greatest happiness was the basis of ethics in Plato’s and Aristotle’s work, it served as the basis of the utility principle in economics, and happiness research has recently become a hot topic in the social sciences in Western countries, particularly in economics. Nevertheless, there is considerable scientific pessimism over whether it is even possible to effect sustainable increases in happiness. In this paper we outline an economic theory of decision based on the greatest happiness principle (GHP). Modern equilibrium economics is a simple-system simplification of the GHP, while the complex approach outlines a non-equilibrium economic theory. The comparison of the approaches reveals that some of the results – laws of modern economics – follow from the simplifications and run counter to economic nature. The most important consequence is that within the free market economy one cannot be sure that the path found by it leads to a beneficial economic system.

  20. THE RESPONSIBILITY PRINCIPLE

    Directory of Open Access Journals (Sweden)

    Elena ANGHEL

    2015-07-01

    Full Text Available "I'm wishing Law this: all legal obligations should be executed with the scrupulosity with which moral obligations are performed by those people who feel bound by them ...", as beautifully portrayed in Nicolae Titulescu's words. Life in society means more than a simple coexistence of human beings; it actually means living together, collaborating and cooperating. That is why I always have to relate to other people and to be aware that only by limiting my freedom of action is the freedom of others feasible. Neminem laedere should be a principle of life for each of us. The individual is a responsible being. But responsibility exceeds legal prescriptions. The Romanian Constitution underlines that I have to exercise my rights and freedoms in good faith, without infringing the rights and freedoms of others. The legal norm, a development of the constitutional principles, is endowed with sanction, which grants it exigibility. But I wonder: if I choose to obey the law, is my decision essentially determined only by the fear of punishment? Is it not because I am a rational being, who has developed over a lifetime a conscience of values, and thus I understand that I have to respect the law and choose to comply with it?

  1. Image Encryption Using a Lightweight Stream Encryption Algorithm

    Directory of Open Access Journals (Sweden)

    Saeed Bahrami

    2012-01-01

    Full Text Available Security of multimedia data, including image and video, is one of the basic requirements for telecommunications and computer networks. In this paper, we consider a simple and lightweight stream encryption algorithm for image encryption, and a series of tests are performed to confirm the suitability of the described encryption algorithm. These tests include a visual test, histogram analysis, information entropy, encryption quality, correlation analysis, differential analysis, and performance analysis. Based on this analysis, it can be concluded that the present algorithm offers the same security level as the A5/1 and W7 stream ciphers, is faster, and is suitable for real-time applications.
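The abstract does not reproduce the cipher itself; the sketch below only illustrates the general shape of a lightweight LFSR-based stream cipher of the same family as A5/1 and W7. The register size, taps, and seed are toy parameters and are not secure.

```python
def lfsr_keystream(seed, taps, nbits, nbytes):
    """Generate a keystream from a Fibonacci LFSR (illustration only, not secure)."""
    mask = (1 << nbits) - 1
    state = seed & mask
    out = bytearray()
    for _ in range(nbytes):
        byte = 0
        for _ in range(8):
            bit = 0
            for t in taps:                    # feedback = XOR of tapped bits
                bit ^= (state >> t) & 1
            state = ((state << 1) | bit) & mask
            byte = (byte << 1) | bit
        out.append(byte)
    return bytes(out)

def xor_cipher(data, seed):
    """Stream encryption/decryption: XOR the data with the keystream."""
    ks = lfsr_keystream(seed, taps=(15, 13, 12, 10), nbits=16, nbytes=len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

pixels = bytes(range(256))                    # stand-in for flattened image bytes
cipher = xor_cipher(pixels, seed=0xACE1)
plain = xor_cipher(cipher, seed=0xACE1)       # the same operation decrypts
```

Because encryption is a keystream XOR, applying the cipher twice with the same seed recovers the plaintext, which is what makes such designs cheap enough for real-time image data.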

  2. Development of a mathematical model of the heating phase of rubber mixture and development of the synthesis of the heating control algorithm using the Pontryagin maximum principle

    Directory of Open Access Journals (Sweden)

    V. S. Kudryashov

    2017-01-01

    Full Text Available The article is devoted to the development of the algorithm for controlling the heating phase of a rubber compound for CJSC “Voronezh tyre plant”. The algorithm is designed for implementation on the basis of a Siemens S-300 controller to control the RS-270 mixer. To compile the algorithm, a systematic analysis of the heating process as a control object has been performed, and a mathematical model of the heating phase has been developed on the basis of the heat balance equation, which describes the heating of the heat-transfer agent in the heat exchanger and the further heating of the mixture in the mixer. The dynamic characteristics of the temperature of the heat exchanger and the rubber mixer have been obtained. Taking into account the complexity and nonlinearity of the control object – a rubber mixer – as well as the availability of methods and extensive experience in managing this machine in an industrial environment, the algorithm has been implemented using the Pontryagin maximum principle. The optimization problem is reduced to determining the optimal control (heating steam supply) and the optimal path of the object’s output coordinates (the temperature of the mixture) which ensure the least flow of steam while heating the rubber compound in a limited time. To do this, the mathematical model of the heating phase has been written in matrix form. Coefficient matrices for each state of the control, control and disturbance vectors have been created, the Hamilton function has been obtained, and the switching times have been found for constructing the optimal control and trajectory of the object. Analysis of the model experiments and of practical research results obtained while programming the controller has shown a decrease of 24.4% in heating steam consumption during the heating phase of the rubber compound.

  3. A self-organizing algorithm for modeling protein loops.

    Directory of Open Access Journals (Sweden)

    Pu Liu

    2009-08-01

    Full Text Available Protein loops, the flexible short segments connecting two stable secondary structural units in proteins, play a critical role in protein structure and function. Constructing chemically sensible conformations of protein loops that seamlessly bridge the gap between the anchor points without introducing any steric collisions remains an open challenge. A variety of algorithms have been developed to tackle the loop closure problem, ranging from inverse kinematics to knowledge-based approaches that utilize pre-existing fragments extracted from known protein structures. However, many of these approaches focus on the generation of conformations that mainly satisfy the fixed end point condition, leaving the steric constraints to be resolved in subsequent post-processing steps. In the present work, we describe a simple solution that simultaneously satisfies not only the end point and steric conditions, but also chirality and planarity constraints. Starting from random initial atomic coordinates, each individual conformation is generated independently by using a simple alternating scheme of pairwise distance adjustments of randomly chosen atoms, followed by fast geometric matching of the conformationally rigid components of the constituent amino acids. The method is conceptually simple, numerically stable and computationally efficient. Very importantly, additional constraints, such as those derived from NMR experiments, hydrogen bonds or salt bridges, can be incorporated into the algorithm in a straightforward and inexpensive way, making the method ideal for solving more complex multi-loop problems. The remarkable performance and robustness of the algorithm are demonstrated on a set of protein loops of length 4, 8, and 12 that have been used in previous studies.

  4. Robotics, vision and control fundamental algorithms in Matlab

    CERN Document Server

    Corke, Peter

    2017-01-01

    Robotic vision, the combination of robotics and computer vision, involves the application of computer algorithms to data acquired from sensors. The research community has developed a large body of such algorithms but for a newcomer to the field this can be quite daunting. For over 20 years the author has maintained two open-source MATLAB® Toolboxes, one for robotics and one for vision. They provide implementations of many important algorithms and allow users to work with real problems, not just trivial examples. This book makes the fundamental algorithms of robotics, vision and control accessible to all. It weaves together theory, algorithms and examples in a narrative that covers robotics and computer vision separately and together. Using the latest versions of the Toolboxes the author shows how complex problems can be decomposed and solved using just a few simple lines of code. The topics covered are guided by real problems observed by the author over many years as a practitioner of both robotics and compu...

  5. Faster Algorithms for Computing Longest Common Increasing Subsequences

    DEFF Research Database (Denmark)

    Kutz, Martin; Brodal, Gerth Stølting; Kaligosi, Kanela

    2011-01-01

We present algorithms for finding a longest common increasing subsequence of two or more input sequences. For two sequences of lengths n and m, where m⩾n, we present an algorithm with an output-dependent expected running time of … and O(m) space, where ℓ is the length of an LCIS, σ is the size of the alphabet, and Sort is the time to sort each input sequence. For k⩾3 length-n sequences we present an algorithm which improves the previous best bound by more than a factor k for many inputs. In both cases, our algorithms are conceptually quite simple but rely on existing sophisticated data structures. Finally, we introduce the problem of longest common weakly-increasing (or non-decreasing) subsequences (LCWIS), for which we present an …-time algorithm for the 3-letter alphabet case. For the extensively studied longest common subsequence problem, comparable speedups have not been achieved for small…
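For reference, the LCIS problem itself can be solved by a classical O(nm)-time dynamic program; the record's faster algorithms improve on this using more sophisticated data structures:

```python
def lcis(a, b):
    """Length of a longest common increasing subsequence of a and b, O(len(a)*len(b))."""
    m = len(b)
    dp = [0] * m                  # dp[j] = LCIS length ending with b[j]
    for x in a:
        best = 0                  # best dp[j] over b[j] < x seen so far in this row
        for j in range(m):
            if b[j] == x and best + 1 > dp[j]:
                dp[j] = best + 1
            elif b[j] < x and dp[j] > best:
                best = dp[j]
    return max(dp, default=0)
```

For example, `lcis([2, 3, 1, 6, 5, 4, 6], [1, 3, 5, 6])` is 3, realized by the common increasing subsequence [3, 5, 6].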

  6. Nonlinear Algorithms for Channel Equalization and Map Symbol Detection.

    Science.gov (United States)

    Giridhar, K.

    The transfer of information through a communication medium invariably results in various kinds of distortion to the transmitted signal. In this dissertation, a feed -forward neural network-based equalizer, and a family of maximum a posteriori (MAP) symbol detectors are proposed for signal recovery in the presence of intersymbol interference (ISI) and additive white Gaussian noise. The proposed neural network-based equalizer employs a novel bit-mapping strategy to handle multilevel data signals in an equivalent bipolar representation. It uses a training procedure to learn the channel characteristics, and at the end of training, the multilevel symbols are recovered from the corresponding inverse bit-mapping. When the channel characteristics are unknown and no training sequences are available, blind estimation of the channel (or its inverse) and simultaneous data recovery is required. Convergence properties of several existing Bussgang-type blind equalization algorithms are studied through computer simulations, and a unique gain independent approach is used to obtain a fair comparison of their rates of convergence. Although simple to implement, the slow convergence of these Bussgang-type blind equalizers make them unsuitable for many high data-rate applications. Rapidly converging blind algorithms based on the principle of MAP symbol-by -symbol detection are proposed, which adaptively estimate the channel impulse response (CIR) and simultaneously decode the received data sequence. Assuming a linear and Gaussian measurement model, the near-optimal blind MAP symbol detector (MAPSD) consists of a parallel bank of conditional Kalman channel estimators, where the conditioning is done on each possible data subsequence that can convolve with the CIR. This algorithm is also extended to the recovery of convolutionally encoded waveforms in the presence of ISI. Since the complexity of the MAPSD algorithm increases exponentially with the length of the assumed CIR, a suboptimal

  7. A simple model for retrieving bare soil moisture from radar-scattering coefficients

    International Nuclear Information System (INIS)

    Chen, K.S.; Yen, S.K.; Huang, W.P.

    1995-01-01

    A simple algorithm based on a rough surface scattering model was developed to invert the bare soil moisture content from active microwave remote sensing data. In the algorithm development, a frequency mixing model was used to relate soil moisture to the dielectric constant. In particular, the Integral Equation Model (IEM) was used over a wide range of surface roughness and radar frequencies. To derive the algorithm, a sensitivity analysis was performed using a Monte Carlo simulation to study the effects of surface parameters, including height variance, correlation length, and dielectric constant. Because radar return is inherently dependent on both moisture content and surface roughness, the purpose of the sensitivity testing was to select the proper radar parameters so as to optimally decouple these two factors, in an attempt to minimize the effects of one while the other was observed. As a result, the optimal radar parameter ranges can be chosen for the purpose of soil moisture content inversion. One thousand samples were then generated with the IEM model followed by multivariate linear regression analysis to obtain an empirical soil moisture model. Numerical comparisons were made to illustrate the inversion performance using experimental measurements. Results indicate that the present algorithm is simple and accurate, and can be a useful tool for the remote sensing of bare soil surfaces. (author)
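The last step, regressing soil moisture on simulated radar observables, can be sketched as follows. The predictors, coefficient values, and noise level below are invented stand-ins for the IEM-generated Monte Carlo samples:

```python
import numpy as np

# Synthetic stand-ins for the IEM-driven Monte Carlo samples: backscatter (dB)
# plus a roughness term as predictors, volumetric soil moisture as the response.
rng = np.random.default_rng(1)
n = 1000
sigma0 = rng.uniform(-20.0, -5.0, n)      # backscatter coefficient, dB
rms_height = rng.uniform(0.2, 3.0, n)     # surface rms height, cm
moisture = 0.45 + 0.015 * sigma0 - 0.02 * rms_height + rng.normal(0.0, 0.01, n)

# Multivariate linear regression: fit the empirical inversion model
X = np.column_stack([np.ones(n), sigma0, rms_height])
coef, *_ = np.linalg.lstsq(X, moisture, rcond=None)
predicted = X @ coef
rmse = float(np.sqrt(np.mean((predicted - moisture) ** 2)))
```

With the model fitted on simulated samples, the same linear form can then be applied to measured backscatter to invert moisture content.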

  8. Data-driven gradient algorithm for high-precision quantum control

    Science.gov (United States)

    Wu, Re-Bing; Chu, Bing; Owens, David H.; Rabitz, Herschel

    2018-04-01

    In the quest to achieve scalable quantum information processing technologies, gradient-based optimal control algorithms (e.g., grape) are broadly used for implementing high-precision quantum gates, but their performance is often hindered by deterministic or random errors in the system model and the control electronics. In this paper, we show that grape can be taught to be more effective by jointly learning from the design model and the experimental data obtained from process tomography. The resulting data-driven gradient optimization algorithm (d-grape) can in principle correct all deterministic gate errors, with a mild efficiency loss. The d-grape algorithm may become more powerful with broadband controls that involve a large number of control parameters, while other algorithms usually slow down due to the increased size of the search space. These advantages are demonstrated by simulating the implementation of a two-qubit controlled-not gate.

  9. Implicit assumptions underlying simple harvest models of marine bird populations can mislead environmental management decisions.

    Science.gov (United States)

    O'Brien, Susan H; Cook, Aonghais S C P; Robinson, Robert A

    2017-10-01

    Assessing the potential impact of additional mortality from anthropogenic causes on animal populations requires detailed demographic information. However, these data are frequently lacking, making simple algorithms, which require little data, appealing. Because of their simplicity, these algorithms often rely on implicit assumptions, some of which may be quite restrictive. Potential Biological Removal (PBR) is a simple harvest model that estimates the number of additional mortalities that a population can theoretically sustain without causing population extinction. However, PBR relies on a number of implicit assumptions, particularly around density dependence and population trajectory that limit its applicability in many situations. Among several uses, it has been widely employed in Europe in Environmental Impact Assessments (EIA), to examine the acceptability of potential effects of offshore wind farms on marine bird populations. As a case study, we use PBR to estimate the number of additional mortalities that a population with characteristics typical of a seabird population can theoretically sustain. We incorporated this level of additional mortality within Leslie matrix models to test assumptions within the PBR algorithm about density dependence and current population trajectory. Our analyses suggest that the PBR algorithm identifies levels of mortality which cause population declines for most population trajectories and forms of population regulation. Consequently, we recommend that practitioners do not use PBR in an EIA context for offshore wind energy developments. Rather than using simple algorithms that rely on potentially invalid implicit assumptions, we recommend use of Leslie matrix models for assessing the impact of additional mortality on a population, enabling the user to explicitly define assumptions and test their importance. Copyright © 2017 Elsevier Ltd. All rights reserved.
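A minimal sketch of the comparison described above, using Wade's standard PBR formula and a small Leslie matrix with hypothetical seabird-like vital rates (not the paper's values):

```python
import numpy as np

def pbr(n_min, r_max, f_r):
    """Potential Biological Removal: 0.5 * Rmax * Fr * Nmin (Wade's formula)."""
    return 0.5 * r_max * f_r * n_min

def growth_rate(adult_survival, immature_survival, fecundity):
    """Dominant eigenvalue of a simple 4-class Leslie matrix."""
    L = np.zeros((4, 4))
    L[0, 3] = fecundity * adult_survival   # recruitment from surviving adults
    for i in range(3):
        L[i + 1, i] = immature_survival    # promotion through immature classes
    L[3, 3] = adult_survival               # adults remain adult
    return float(max(np.real(np.linalg.eigvals(L))))

# Hypothetical seabird-like vital rates (illustrative only)
lam_baseline = growth_rate(0.92, 0.85, 0.35)
allowed = pbr(n_min=10_000, r_max=0.10, f_r=0.5)   # theoretical sustainable removals
# PBR's implicit assumptions can be probed by imposing extra mortality in the
# matrix, e.g. an additional 2% adult mortality, and inspecting the new lambda:
lam_impacted = growth_rate(0.90, 0.85, 0.35)
```

Comparing `lam_impacted` against `lam_baseline` makes the density-dependence and trajectory assumptions explicit, which is the workflow the authors recommend over the bare PBR number.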

  10. GenClust: A genetic algorithm for clustering gene expression data

    Directory of Open Access Journals (Sweden)

    Raimondi Alessandra

    2005-12-01

    Full Text Available Abstract Background Clustering is a key step in the analysis of gene expression data, and in fact, many classical clustering algorithms are used, or more innovative ones have been designed and validated for the task. Despite the widespread use of artificial intelligence techniques in bioinformatics and, more generally, data analysis, there are very few clustering algorithms based on the genetic paradigm, yet that paradigm has great potential in finding good heuristic solutions to a difficult optimization problem such as clustering. Results GenClust is a new genetic algorithm for clustering gene expression data. It has two key features: (a) a novel coding of the search space that is simple, compact and easy to update; (b) it can be used naturally in conjunction with data driven internal validation methods. We have experimented with the FOM methodology, specifically conceived for validating clusters of gene expression data. The validity of GenClust has been assessed experimentally on real data sets, both with the use of validation measures and in comparison with other algorithms, i.e., Average Link, Cast, Click and K-means. Conclusion Experiments show that none of the algorithms we have used is markedly superior to the others across data sets and validation measures; i.e., in many cases the observed differences between the worst and best performing algorithm may be statistically insignificant and they could be considered equivalent. However, there are cases in which an algorithm may be better than others and therefore worthwhile. In particular, experiments for GenClust show that, although simple in its data representation, it converges very rapidly to a local optimum and that its ability to identify meaningful clusters is comparable, and sometimes superior, to that of more sophisticated algorithms. In addition, it is well suited for use in conjunction with data driven internal validation measures and, in particular, the FOM methodology.

  11. A Teaching Approach from the Exhaustive Search Method to the Needleman-Wunsch Algorithm

    Science.gov (United States)

    Xu, Zhongneng; Yang, Yayun; Huang, Beibei

    2017-01-01

    The Needleman-Wunsch algorithm has become one of the core algorithms in bioinformatics; however, this programming requires more suitable explanations for students with different major backgrounds. In supposing sample sequences and using a simple store system, the connection between the exhaustive search method and the Needleman-Wunsch algorithm…
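A compact reference implementation of the Needleman-Wunsch score recurrence, useful alongside the teaching approach described here (the default scoring parameters are the common textbook choice, not values from this record):

```python
def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-1):
    """Global alignment score of a and b via the Needleman-Wunsch dynamic program."""
    n, m = len(a), len(b)
    # score[i][j] = best score aligning a[:i] with b[:j]
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = i * gap                  # a[:i] aligned entirely against gaps
    for j in range(1, m + 1):
        score[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diag = score[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            score[i][j] = max(diag, score[i - 1][j] + gap, score[i][j - 1] + gap)
    return score[n][m]
```

The exhaustive-search view enumerates all alignments; the table above stores the best score for each prefix pair so every subproblem is solved once.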

  12. Principles of linear algebra with Mathematica

    CERN Document Server

    Shiskowski, Kenneth M

    2013-01-01

    A hands-on introduction to the theoretical and computational aspects of linear algebra using Mathematica® Many topics in linear algebra are simple, yet computationally intensive, and computer algebra systems such as Mathematica® are essential not only for learning to apply the concepts to computationally challenging problems, but also for visualizing many of the geometric aspects within this field of study. Principles of Linear Algebra with Mathematica uniquely bridges the gap between beginning linear algebra and computational linear algebra that is often encountered in applied settings,

  13. Super-Resolution Algorithm in Cumulative Virtual Blanking

    Science.gov (United States)

    Montillet, J. P.; Meng, X.; Roberts, G. W.; Woolfson, M. S.

    2008-11-01

    The proliferation of mobile devices and the emergence of wireless location-based services have generated consumer demand for precise location. In this paper, the MUSIC super-resolution algorithm is applied to time delay estimation for positioning purposes in cellular networks. The goal is to position a Mobile Station with UMTS technology. The problem of Base-Station hearability is solved using Cumulative Virtual Blanking. A simple simulator using a DS-SS signal is presented. The results show that the MUSIC algorithm improves the time delay estimation in both cases, whether or not Cumulative Virtual Blanking is carried out.
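A minimal sketch of the MUSIC idea, shown here for frequency estimation of a single complex sinusoid rather than the paper's UMTS time-delay setting; the subspace split, steering vector, and pseudospectrum peak search are the same ingredients. The snapshot size, noise level, and frequency are invented.

```python
import numpy as np

def music_spectrum(x, p, grid, m=8):
    """MUSIC pseudospectrum for p complex sinusoids in the sampled signal x."""
    n = len(x) - m + 1
    # Sample covariance from overlapping length-m snapshots
    X = np.array([x[i:i + m] for i in range(n)]).T
    R = X @ X.conj().T / n
    w, V = np.linalg.eigh(R)                  # eigenvalues in ascending order
    En = V[:, :m - p]                         # noise subspace
    spec = []
    for f in grid:
        a = np.exp(2j * np.pi * f * np.arange(m))   # steering vector at trial f
        spec.append(1.0 / np.linalg.norm(En.conj().T @ a) ** 2)
    return np.array(spec)

rng = np.random.default_rng(2)
t = np.arange(200)
f_true = 0.21                                 # normalized frequency (made-up example)
x = np.exp(2j * np.pi * f_true * t) \
    + 0.1 * (rng.standard_normal(200) + 1j * rng.standard_normal(200))
grid = np.linspace(0.0, 0.5, 501)
f_est = float(grid[int(np.argmax(music_spectrum(x, p=1, grid=grid)))])
```

The pseudospectrum peaks where the steering vector is nearly orthogonal to the noise subspace, which is what gives MUSIC resolution beyond the Fourier limit.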

  14. The Generalized Principle of the Golden Section and its applications in mathematics, science, and engineering

    International Nuclear Information System (INIS)

    Stakhov, A.P.

    2005-01-01

    The 'Dichotomy Principle' and the classical 'Golden Section Principle' are two of the most important principles of Nature, Science and also Art. The Generalized Principle of the Golden Section that follows from studying the diagonal sums of the Pascal triangle is a sweeping generalization of these important principles. This underlies the foundation of 'Harmony Mathematics', a newly proposed mathematical direction. Harmony Mathematics includes a number of new mathematical theories: an algorithmic measurement theory, a new number theory, a new theory of hyperbolic functions based on Fibonacci and Lucas numbers, and a theory of the Fibonacci and 'Golden' matrices. These mathematical theories are the source of many new ideas in mathematics, philosophy, botany and biology, electrical and computer science and engineering, communication systems, mathematical education, as well as theoretical physics and the physics of high energy particles.
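One common formalization of these diagonal sums is Stakhov's Fibonacci p-numbers, obeying F_p(n) = F_p(n-1) + F_p(n-p-1) and reducing to the ordinary Fibonacci numbers for p = 1. A quick numerical check of the diagonal-sum identity, with the indexing convention assumed here:

```python
from math import comb

def fib_p(p, n):
    """Fibonacci p-number: F_p(1) = ... = F_p(p+1) = 1, then the recurrence above."""
    f = [0] + [1] * (p + 1)            # 1-indexed; f[1..p+1] = 1
    for k in range(p + 2, n + 1):
        f.append(f[k - 1] + f[k - p - 1])
    return f[n]

def diagonal_sum(p, n):
    """Sum along a 'shallow diagonal' of Pascal's triangle with step p."""
    return sum(comb(n - p * k, k) for k in range(n // (p + 1) + 1))
```

For p = 1 the diagonal sums reproduce 1, 1, 2, 3, 5, 8, ...; for larger p they give the slower-growing p-number sequences.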

  15. Selecting a general-purpose data compression algorithm

    Science.gov (United States)

    Mathews, Gary Jason

    1995-01-01

    The National Space Science Data Center's Common Data Format (CDF) is capable of storing many types of data such as scalar data items, vectors, and multidimensional arrays of bytes, integers, or floating point values. However, regardless of the dimensionality and data type, the data break down into a sequence of bytes that can be fed into a data compression function to reduce the amount of data without losing data integrity and thus remaining fully reconstructible. Because of the diversity of data types and high performance speed requirements, a general-purpose, fast, simple data compression algorithm is required to incorporate data compression into CDF. The questions to ask are how to evaluate and compare compression algorithms, and what compression algorithm meets all requirements. The object of this paper is to address these questions and determine the most appropriate compression algorithm to use within the CDF data management package that would be applicable to other software packages with similar data compression needs.
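The evaluation question raised here can be explored with the standard library's general-purpose compressors; this sketch measures ratio and speed on a synthetic byte stream standing in for CDF data (both the data and the set of candidates are illustrative, not the paper's benchmark):

```python
import bz2
import lzma
import time
import zlib

def evaluate(name, compress, data):
    """Time one compressor and report its compression ratio on the byte stream."""
    start = time.perf_counter()
    out = compress(data)
    return {"name": name, "ratio": len(data) / len(out),
            "seconds": time.perf_counter() - start}

# Stand-in for a CDF byte stream: repetitive structured records
data = (b"\x00\x01\x02\x03" * 100 + b"scalar,vector,array\n") * 200

results = [
    evaluate("zlib-1", lambda d: zlib.compress(d, 1), data),  # fastest zlib level
    evaluate("zlib-9", lambda d: zlib.compress(d, 9), data),  # best zlib level
    evaluate("bz2", bz2.compress, data),
    evaluate("lzma", lzma.compress, data),
]
best = max(results, key=lambda r: r["ratio"])
```

Ranking by ratio alone is not enough: a CDF-style library would weight speed and simplicity as well, which is exactly the trade-off the paper examines.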

  16. Analysis and application of two recursive parametric estimation algorithms for an asynchronous machine

    International Nuclear Information System (INIS)

    Damek, Nawel; Kamoun, Samira

    2011-01-01

    In this communication, two recursive parametric estimation algorithms are analyzed and applied to a squirrel-cage asynchronous machine located at the research ''Unit of Automatic Control'' (UCA) at ENIS. The first algorithm, which uses the transfer-matrix mathematical model, is based on the gradient principle. The second algorithm, which uses the state-space mathematical model, is based on the minimization of the estimation error. These algorithms are applied as a key technique to estimate an asynchronous machine with unknown, but constant or time-varying, parameters. Stator voltage and current are used as measured data. The proposed recursive parametric estimation algorithms are validated on the experimental data of an asynchronous machine under normal operating conditions at full load. The results show that these algorithms can effectively estimate the machine parameters with reliability.
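The second, error-minimization approach is in the spirit of recursive least squares; a generic RLS sketch on a hypothetical ARX-type model (not the authors' exact machine model) looks like this:

```python
import numpy as np

def rls_step(theta, P, phi, y, lam=1.0):
    """One recursive least-squares update with forgetting factor lam."""
    Pphi = P @ phi
    k = Pphi / (lam + phi @ Pphi)           # gain vector
    theta = theta + k * (y - phi @ theta)   # correct by the prediction error
    P = (P - np.outer(k, Pphi)) / lam       # covariance update
    return theta, P

# Identify y_t = a*y_{t-1} + b*u_t with true (a, b) = (0.7, 1.5)
rng = np.random.default_rng(3)
u = rng.standard_normal(500)
y = np.zeros(500)
for t in range(1, 500):
    y[t] = 0.7 * y[t - 1] + 1.5 * u[t] + 0.01 * rng.standard_normal()

theta = np.zeros(2)
P = np.eye(2) * 1000.0                 # large P: uninformative initial estimate
for t in range(1, 500):
    phi = np.array([y[t - 1], u[t]])
    theta, P = rls_step(theta, P, phi, y[t])
```

A forgetting factor lam < 1 discounts old samples, which is what lets such recursions track the time-varying parameters mentioned in the record.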

  17. Simple simulation training system for short-wave radio station

    Science.gov (United States)

    Tan, Xianglin; Shao, Zhichao; Tu, Jianhua; Qu, Fuqi

    2018-04-01

    The short-wave radio station is one of the most important pieces of transmission equipment of our signal corps, but in the actual teaching process there are few radio sets for many students, so each student's time for operating and practising on a short-wave radio is very limited. In order to solve this problem, it is necessary to develop a simple simulation training system for the short-wave radio station. The system is developed by combining hardware and software to simulate the voice communication operation and signal principles of a short-wave radio station, and it can test the signal flow of the station. The test results indicate that this system is simple to operate, has a friendly human-machine interface, and can improve teaching efficiency.

  18. AdaBoost-based algorithm for network intrusion detection.

    Science.gov (United States)

    Hu, Weiming; Hu, Wei; Maybank, Steve

    2008-04-01

    Network intrusion detection aims at distinguishing the attacks on the Internet from normal use of the Internet. It is an indispensable part of the information security system. Due to the variety of network behaviors and the rapid development of attack fashions, it is necessary to develop fast machine-learning-based intrusion detection algorithms with high detection rates and low false-alarm rates. In this correspondence, we propose an intrusion detection algorithm based on the AdaBoost algorithm. In the algorithm, decision stumps are used as weak classifiers. The decision rules are provided for both categorical and continuous features. By combining the weak classifiers for continuous features and the weak classifiers for categorical features into a strong classifier, the relations between these two different types of features are handled naturally, without any forced conversions between continuous and categorical features. Adaptable initial weights and a simple strategy for avoiding overfitting are adopted to improve the performance of the algorithm. Experimental results show that our algorithm has low computational complexity and error rates, as compared with algorithms of higher computational complexity, as tested on the benchmark sample data.
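A minimal AdaBoost with threshold decision stumps, on toy two-feature data standing in for continuous network features. The dataset and parameters are invented, and the real algorithm additionally handles categorical features and adaptable initial weights:

```python
import numpy as np

def stump_predict(X, j, thr, sign):
    """Threshold decision stump on feature j, output in {-1, +1}."""
    return sign * np.where(X[:, j] <= thr, 1, -1)

def train_adaboost(X, y, rounds=40):
    """AdaBoost with decision stumps as weak classifiers; y must be in {-1, +1}."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)                      # sample weights
    ensemble = []
    for _ in range(rounds):
        best = None
        for j in range(d):                       # exhaustive stump search
            for thr in np.unique(X[:, j]):
                for sign in (1, -1):
                    err = w[stump_predict(X, j, thr, sign) != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, thr, sign)
        err, j, thr, sign = best
        alpha = 0.5 * np.log((1 - err) / max(err, 1e-12))
        w *= np.exp(-alpha * y * stump_predict(X, j, thr, sign))
        w /= w.sum()                             # re-focus weight on the mistakes
        ensemble.append((alpha, j, thr, sign))
    return ensemble

def predict(ensemble, X):
    votes = sum(a * stump_predict(X, j, thr, s) for a, j, thr, s in ensemble)
    return np.sign(votes)

# Toy continuous "traffic features" with a diagonal decision boundary
rng = np.random.default_rng(4)
X = rng.uniform(-1.0, 1.0, (200, 2))
y = np.where(X[:, 0] + X[:, 1] > 0.0, 1, -1)
model = train_adaboost(X, y)
accuracy = float(np.mean(predict(model, X) == y))
```

Combining many weighted stumps yields a strong classifier even though each stump alone only slightly beats random guessing, which is the property the intrusion-detection system relies on.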

  19. Development of Variational Guiding Center Algorithms for Parallel Calculations in Experimental Magnetic Equilibria

    Energy Technology Data Exchange (ETDEWEB)

    Ellison, C. Leland [PPPL; Finn, J. M. [LANL; Qin, H. [PPPL; Tang, William M. [PPPL

    2014-10-01

    Structure-preserving algorithms obtained via discrete variational principles exhibit strong promise for the calculation of guiding center test particle trajectories. The non-canonical Hamiltonian structure of the guiding center equations forms a novel and challenging context for geometric integration. To demonstrate the practical relevance of these methods, a prototypical variational midpoint algorithm is applied to an experimental magnetic equilibrium. The stability characteristics, conservation properties, and implementation requirements associated with the variational algorithms are addressed. Furthermore, computational run time is reduced for large numbers of particles by parallelizing the calculation on GPU hardware.

  20. Adler's overrelaxation algorithm for Goldstone bosons

    International Nuclear Information System (INIS)

    Neuberger, H.

    1987-01-01

    A very simple derivation of a closed-form solution to the stochastic evolution defined by Adler's overrelaxation algorithm is given for free massive and massless scalar fields on a finite lattice with periodic boundary conditions and checkerboard updating. It is argued that the results are directly relevant when critical slowing down reflects the existence of Goldstone bosons in the system

  1. A Systematic Approach to Modified BCJR MAP Algorithms for Convolutional Codes

    Directory of Open Access Journals (Sweden)

    Patenaude François

    2006-01-01

    Full Text Available Since Berrou, Glavieux and Thitimajshima published their landmark paper in 1993, different modified BCJR MAP algorithms have appeared in the literature. The existence of a relatively large number of similar but different modified BCJR MAP algorithms, derived using the Markov chain properties of convolutional codes, naturally leads to the following questions. What is the relationship among the different modified BCJR MAP algorithms? What are their relative performance, computational complexities, and memory requirements? In this paper, we answer these questions. We derive systematically four major modified BCJR MAP algorithms from the BCJR MAP algorithm using simple mathematical transformations. The connections between the original and the four modified BCJR MAP algorithms are established. A detailed analysis of the different modified BCJR MAP algorithms shows that they have identical computational complexities and memory requirements. Computer simulations demonstrate that the four modified BCJR MAP algorithms all have identical performance to the BCJR MAP algorithm.

  2. Exploring design tradeoffs of a distributed algorithm for cosmic ray event detection

    Science.gov (United States)

    Yousaf, S.; Bakhshi, R.; van Steen, M.; Voulgaris, S.; Kelley, J. L.

    2013-03-01

    Many sensor networks, including large particle detector arrays measuring high-energy cosmic-ray air showers, traditionally rely on centralised trigger algorithms to find spatial and temporal coincidences of individual nodes. Such schemes suffer from scalability problems, especially if the nodes communicate wirelessly or have bandwidth limitations. However, nodes which instead communicate with each other can, in principle, use a distributed algorithm to find coincident events themselves without communication with a central node. We present such an algorithm and consider various design tradeoffs involved, in the context of a potential trigger for the Auger Engineering Radio Array (AERA).
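The local part of such a distributed trigger can be sketched very simply: each node merges its own event timestamps with those gossiped by neighbours and searches the merged list for time-windowed coincidences involving enough distinct nodes. The node IDs, window, and threshold below are hypothetical, not AERA parameters.

```python
def local_coincidences(events, window, min_nodes):
    """Find event clusters from enough distinct nodes within a time window.

    events: (node_id, timestamp) pairs known locally, i.e. this node's own
    hits merged with those received from its neighbours."""
    events = sorted(events, key=lambda e: e[1])
    found = []
    for i, (_, t0) in enumerate(events):
        cluster = [e for e in events[i:] if e[1] - t0 <= window]
        if len({node for node, _ in cluster}) >= min_nodes:
            found.append(cluster)
    return found

# Three neighbouring detector nodes; an air shower hits all three near t = 100
events = [("A", 100.0), ("B", 100.4), ("C", 100.9), ("A", 250.0), ("C", 400.0)]
hits = local_coincidences(events, window=2.0, min_nodes=3)
```

The design trade-offs the paper explores then concern how far timestamps are gossiped and how much bandwidth that costs, rather than the coincidence test itself.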

  3. A Simple Hybrid Synchronization for a Class of Chaotic Financial Systems

    Directory of Open Access Journals (Sweden)

    Jiming Zheng

    2017-01-01

    Full Text Available It is important to achieve hybrid synchronization of chaotic financial systems. Chaos synchronization is equivalent to asymptotic stability of the corresponding error system. The hybrid synchronization of a class of finance chaotic systems is discussed. First, a simple single-variable controller is obtained to synchronize two identical chaotic financial systems with different initial conditions. Second, a novel algorithm is proposed to determine which variables of the master system should antisynchronize with the corresponding variables of the slave system, and this algorithm is used to determine the corresponding variables in the chaotic financial systems. The hybrid synchronization of the chaotic financial systems is realized by a simple controller. At the same time, different controllers can implement hybrid synchronization of the chaotic financial system. In comparison with existing results, the controllers obtained in this paper are simpler. Finally, numerical simulations show the effectiveness of the proposed results.

  4. Simple pragmatic approach to mesh routing using BATMAN

    CSIR Research Space (South Africa)

    Johnson, D

    2008-10-01

    Full Text Available ...realisation that a routing algorithm for a large static mesh needs to be developed from first principles, and as a result the BATMAN project was started. In BATMAN all nodes periodically broadcast hello packets, also known as originator messages, to its... not have. OLSR was used with the following settings:
    • HELLO interval = 1 second.
    • Topology Control (TC) interval = 1 second.
    • HELLO validity interval = 200 seconds.
    • TC validity interval = 100 seconds.
    • Fisheye = ON (TC messages are sent with 3...

  5. Fuzzy gain scheduling of velocity PI controller with intelligent learning algorithm for reactor control

    International Nuclear Information System (INIS)

    Kim, Dong Yun; Seong, Poong Hyun

    1996-01-01

    In this study, we propose a fuzzy gain scheduler with an intelligent learning algorithm for reactor control. In the proposed algorithm, we use the gradient descent method to learn the rule bases of a fuzzy algorithm. These rule bases are learned toward minimizing an objective function, called a performance cost function. The objective of the fuzzy gain scheduler with intelligent learning algorithm is the generation of adequate gains that minimize the system error. The condition of any plant generally changes as time goes by; that is, the initial gains obtained through analysis of the system are no longer suitable for the changed plant, and new gains are needed that minimize the error stemming from the changed plant condition. In this paper, we applied this strategy to reactor control of a nuclear power plant (NPP), and the results were compared with those of a simple PI controller with fixed gains. As a result, it was shown that the proposed algorithm was superior to the simple PI controller.
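
    The "learn parameters by minimizing a performance cost function" idea can be illustrated, in a much-simplified form without the fuzzy rule base, by tuning PI gains with finite-difference gradient descent on a toy first-order plant. The plant model, learning rate and all names here are assumptions for illustration, not the paper's scheme.

```python
# Sketch: gradient descent on a quadratic tracking-error cost J(kp, ki)
# for a toy discrete plant y[t+1] = 0.9*y[t] + 0.1*u[t] with setpoint 1.

def cost(kp, ki, steps=60):
    """Sum of squared tracking errors for a unit step under PI control."""
    y, integ, J = 0.0, 0.0, 0.0
    for _ in range(steps):
        e = 1.0 - y
        integ += e
        u = kp * e + ki * integ
        y = 0.9 * y + 0.1 * u
        J += e * e
    return J

def tune(kp, ki, lr=1e-3, eps=1e-4, iters=200):
    """Central-difference gradient descent on the performance cost."""
    for _ in range(iters):
        gp = (cost(kp + eps, ki) - cost(kp - eps, ki)) / (2 * eps)
        gi = (cost(kp, ki + eps) - cost(kp, ki - eps)) / (2 * eps)
        kp, ki = kp - lr * gp, ki - lr * gi
    return kp, ki

kp, ki = tune(1.0, 0.1)
print(cost(kp, ki), "vs initial", cost(1.0, 0.1))
```

    The paper's method applies the same descent principle to the parameters of fuzzy rule bases rather than directly to fixed gains.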

  6. Optimisation in radiotherapy II: Programmed and inversion optimisation algorithms

    International Nuclear Information System (INIS)

    Ebert, M.

    1997-01-01

    This is the second article in a three-part examination of optimisation in radiotherapy. The previous article established the bases of optimisation in radiotherapy and the formulation of the optimisation problem. This paper outlines several algorithms that have been used in radiotherapy to search for the best irradiation strategy within the full set of possible strategies. Two principal classes of algorithm are considered: those associated with mathematical programming, which employ specific search techniques, linear-programming-type searches or artificial intelligence, and those which seek to perform a numerical inversion of the optimisation problem, finishing with deterministic iterative inversion. (author)

  7. Effective calculation algorithm for nuclear chains of arbitrary length and branching

    International Nuclear Information System (INIS)

    Chirkov, V.A.; Mishanin, B.V.

    1994-01-01

    An effective algorithm for calculation of the isotope concentration in spent nuclear fuel during storage is presented. Using the superposition principle and representing the transfer function in a rather compact form, it becomes possible to achieve high calculation speed and a moderate computer code size. The algorithm is applied to the calculation of activity, energy release and toxicity of heavy nuclides and their decay products during fuel storage. (authors). 1 ref., 4 tabs

  8. “Stringy” coherent states inspired by generalized uncertainty principle

    Science.gov (United States)

    Ghosh, Subir; Roy, Pinaki

    2012-05-01

    Coherent States with Fractional Revival property, that explicitly satisfy the Generalized Uncertainty Principle (GUP), have been constructed in the context of Generalized Harmonic Oscillator. The existence of such states is essential in motivating the GUP based phenomenological results present in the literature which otherwise would be of purely academic interest. The effective phase space is Non-Canonical (or Non-Commutative in popular terminology). Our results have a smooth commutative limit, equivalent to Heisenberg Uncertainty Principle. The Fractional Revival time analysis yields an independent bound on the GUP parameter. Using this and similar bounds obtained here, we derive the largest possible value of the (GUP induced) minimum length scale. Mandel parameter analysis shows that the statistics is Sub-Poissonian. Correspondence Principle is deformed in an interesting way. Our computational scheme is very simple as it requires only first order corrected energy values and undeformed basis states.

  9. “Stringy” coherent states inspired by generalized uncertainty principle

    International Nuclear Information System (INIS)

    Ghosh, Subir; Roy, Pinaki

    2012-01-01

    Coherent States with Fractional Revival property, that explicitly satisfy the Generalized Uncertainty Principle (GUP), have been constructed in the context of Generalized Harmonic Oscillator. The existence of such states is essential in motivating the GUP based phenomenological results present in the literature which otherwise would be of purely academic interest. The effective phase space is Non-Canonical (or Non-Commutative in popular terminology). Our results have a smooth commutative limit, equivalent to Heisenberg Uncertainty Principle. The Fractional Revival time analysis yields an independent bound on the GUP parameter. Using this and similar bounds obtained here, we derive the largest possible value of the (GUP induced) minimum length scale. Mandel parameter analysis shows that the statistics is Sub-Poissonian. Correspondence Principle is deformed in an interesting way. Our computational scheme is very simple as it requires only first order corrected energy values and undeformed basis states.

  10. A simple stationary semi-analytical wake model

    DEFF Research Database (Denmark)

    Larsen, Gunner Chr.

    We present an idealized, simple, but fast, semi-analytical algorithm for computation of stationary wind farm wind fields, with a possible potential within a multi-fidelity strategy for wind farm topology optimization. Basically, the model considers wakes as linear perturbations on the ambient non-uniform mean wind field, although the modelling of the individual stationary wake flow fields includes non-linear terms. The simulation of the individual wake contributions is based on an analytical solution of the thin shear layer approximation of the NS equations. The wake flow fields are assumed... With each of these approaches, a parabolic system is described, which is initiated by first considering the most upwind located turbines and subsequently successively solved in the downstream direction. Algorithms for the resulting wind farm flow fields are proposed, and it is shown that in the limit...

  11. Performance Analysis of Binary Search Algorithm in RFID

    Directory of Open Access Journals (Sweden)

    Xiangmei SONG

    2014-12-01

    Full Text Available The binary search algorithm (BS) is an important anti-collision algorithm in Radio Frequency Identification (RFID) and one of the key technologies that determine whether the information in a tag can be identified by the reader quickly and reliably. The performance of BS directly affects the quality of service in the Internet of Things. This paper adopts an automated formal technique, probabilistic model checking, to analyze the performance of the BS algorithm formally. First, according to the working principle of the BS algorithm, its dynamic behavior is abstracted into a Discrete Time Markov Chain, which can describe deterministic behaviour, discrete time and probabilistic selection. On this model we then calculate the probability of data being sent successfully and the expected time for tags to complete the data transmission. Compared to S-ALOHA, another typical anti-collision protocol in RFID, experimental results show that as the number of tags increases the BS algorithm has lower space and time consumption, its average number of conflicts grows more slowly than under the S-ALOHA protocol, it needs less expected time to complete the data transmission, and its average data transmission speed is 1.6 times that of the S-ALOHA protocol.
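
    The tree-splitting idea behind the BS algorithm can be sketched as follows. This simplified model (prefix queries, collision detection, splitting on the next bit) illustrates the principle and omits the details of the actual RFID air protocol.

```python
# Sketch of binary-search (tree-splitting) anti-collision: the reader
# queries an ID prefix; if two or more tags match, a collision occurs and
# the prefix is split on the next bit until each tag answers alone.

def identify(tags, bits, prefix=""):
    """Return (identified_ids, number_of_reader_queries)."""
    matching = [t for t in tags if t.startswith(prefix)]
    queries = 1                                   # this prefix query
    if len(matching) == 0:
        return [], queries
    if len(matching) == 1 or len(prefix) == bits:
        return matching, queries                  # single response: identified
    ids0, q0 = identify(tags, bits, prefix + "0") # collision: split on next bit
    ids1, q1 = identify(tags, bits, prefix + "1")
    return ids0 + ids1, queries + q0 + q1

tags = ["0010", "0111", "1100"]
ids, queries = identify(tags, bits=4)
print(ids, queries)
```

    Counting the queries for a given tag population is the deterministic analogue of the expected-time analysis the paper carries out with probabilistic model checking.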

  12. Multi-user cognitive radio network resource allocation based on the adaptive niche immune genetic algorithm

    International Nuclear Information System (INIS)

    Zu Yun-Xiao; Zhou Jie

    2012-01-01

    Multi-user cognitive radio network resource allocation based on the adaptive niche immune genetic algorithm is proposed, and a fitness function is provided. Simulations are conducted using the adaptive niche immune genetic algorithm, the simulated annealing algorithm, the quantum genetic algorithm and the simple genetic algorithm, respectively. The results show that the adaptive niche immune genetic algorithm performs better than the other three algorithms in terms of the multi-user cognitive radio network resource allocation, and has quick convergence speed and strong global searching capability, which effectively reduces the system power consumption and bit error rate. (geophysics, astronomy, and astrophysics)

  13. Successive approximation algorithm for cancellation of artifacts in DSA images

    International Nuclear Information System (INIS)

    Funakami, Raiko; Hiroshima, Kyoichi; Nishino, Junji

    2000-01-01

    In this paper, we propose an algorithm for cancellation of artifacts in DSA images. We have already proposed an automatic registration method based on the detection of local movements. When motion of the object is large, it is difficult to estimate the exact movement, and the cancellation of artifacts may therefore fail. The algorithm we propose here is based on a simple rigid model. We present the results of applying the proposed method to a series of experimental X-ray images, as well as the results of applying the algorithm as preprocessing for a registration method based on local movement. (author)

  14. Genetic algorithms applied to nuclear reactor design optimization

    International Nuclear Information System (INIS)

    Pereira, C.M.N.A.; Schirru, R.; Martinez, A.S.

    2000-01-01

    A genetic algorithm is a powerful search technique that simulates natural evolution in order to fit a population of computational structures to the solution of an optimization problem. This technique presents several advantages over classical ones, such as techniques based on linear programming, often used in nuclear engineering optimization problems. However, genetic algorithms demand some extra computational cost. Nowadays, thanks to the fast computers available, the use of genetic algorithms has increased and their practical application has become a reality. In nuclear engineering there are many difficult optimization problems related to nuclear reactor design. Genetic algorithms are a suitable technique for facing such problems. This chapter presents applications of genetic algorithms to nuclear reactor core design optimization. A genetic algorithm has been designed to optimize nuclear reactor cell parameters, such as array pitch, isotopic enrichment, dimensions and cell materials. Some advantages of this genetic algorithm implementation over a classical method based on linear programming are revealed through the application of both techniques to a simple optimization problem. In order to emphasize the suitability of genetic algorithms for design optimization, the technique was successfully applied to a more complex problem, where the classical method is not suitable. Results and comments about the applications are also presented. (orig.)
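
    The basic loop of a genetic algorithm, selection, crossover and mutation over a population of encoded solutions, can be sketched on a toy bit-string fitness function (counting 1-bits); this is a generic illustration, not a reactor-design objective.

```python
# Minimal genetic algorithm: tournament selection, one-point crossover,
# bit-flip mutation, maximising the number of 1-bits in a string.

import random

def ga(n_bits=20, pop_size=30, generations=60, p_mut=0.02, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    fitness = lambda ind: sum(ind)                # toy objective: count of 1s
    for _ in range(generations):
        def pick():
            # Binary tournament selection
            a, b = rng.choice(pop), rng.choice(pop)
            return a if fitness(a) >= fitness(b) else b
        new_pop = []
        while len(new_pop) < pop_size:
            p1, p2 = pick(), pick()
            cut = rng.randrange(1, n_bits)        # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [b ^ 1 if rng.random() < p_mut else b for b in child]
            new_pop.append(child)
        pop = new_pop
    return max(pop, key=fitness)

best = ga()
print(sum(best), "of 20 bits set")
```

    A reactor-design GA replaces the bit-count objective with a physics evaluation of the encoded cell parameters; the evolutionary loop itself is unchanged.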

  15. The Quantitative Analysis of User Behavior Online - Data, Models and Algorithms

    Science.gov (United States)

    Raghavan, Prabhakar

    By blending principles from mechanism design, algorithms, machine learning and massive distributed computing, the search industry has become good at optimizing monetization on sound scientific principles. This represents a successful and growing partnership between computer science and microeconomics. When it comes to understanding how online users respond to the content and experiences presented to them, we have more of a lacuna in the collaboration between computer science and certain social sciences. We will use a concrete technical example from image search results presentation, developing in the process some algorithmic and machine learning problems of interest in their own right. We then use this example to motivate the kinds of studies that need to grow between computer science and the social sciences; a critical element of this is the need to blend large-scale data analysis with smaller-scale eye-tracking and "individualized" lab studies.

  16. Fourier Transform Infrared Spectroscopy (FT-IR) and Simple Algorithm Analysis for Rapid and Non-Destructive Assessment of Developmental Cotton Fibers.

    Science.gov (United States)

    Liu, Yongliang; Kim, Hee-Jin

    2017-06-22

    With cotton fiber growth or maturation, cellulose content in cotton fibers markedly increases. Traditional chemical methods have been developed to determine cellulose content, but they are time-consuming and labor-intensive, mostly owing to the slow hydrolysis of fiber cellulose components. As one approach, the attenuated total reflection Fourier transform infrared (ATR FT-IR) spectroscopy technique has also been utilized to monitor cotton cellulose formation, by implementing various spectral interpretation strategies based on both multivariate principal component analysis (PCA) and 1-, 2- or 3-band/-variable intensities or intensity ratios. The main objective of this study was to compare the correlations between cellulose content determined by chemical analysis and ATR FT-IR spectral indices acquired by the reported procedures, among developmental Texas Marker-1 (TM-1) and immature fiber (im) mutant cotton fibers. It was observed that the R value, CI IR, and the integrated intensity of the 895 cm⁻¹ band exhibited strong and linear relationships with cellulose content. The results demonstrate the suitability and utility of ATR FT-IR spectroscopy, combined with a simple algorithm analysis, for assessing cotton fiber cellulose content, maturity, and crystallinity in a manner which is rapid, routine, and non-destructive.

  17. Balancing Inverted Pendulum by Angle Sensing Using Fuzzy Logic Supervised PID Controller Optimized by Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Ashutosh K. AGARWAL

    2011-10-01

    Full Text Available Genetic algorithms are robust search techniques based on the principles of evolution. A genetic algorithm maintains a population of encoded solutions and guides the population towards the optimum solution. This important property of genetic algorithms is used in this paper to stabilize the inverted pendulum system. The paper highlights the application and stability of the inverted pendulum using a PID controller with a fuzzy logic genetic algorithm supervisor. There are a large number of well-established search techniques in use within the information technology industry. We propose a method to control the inverted pendulum's steady-state error and overshoot using the genetic algorithm technique.

  18. An improved harmony search algorithm for power economic load dispatch

    Energy Technology Data Exchange (ETDEWEB)

    Santos Coelho, Leandro dos [Pontifical Catholic University of Parana, PUCPR, Industrial and Systems Engineering Graduate Program, PPGEPS, Imaculada Conceicao, 1155, 80215-901 Curitiba, PR (Brazil)], E-mail: leandro.coelho@pucpr.br; Mariani, Viviana Cocco [Pontifical Catholic University of Parana, PUCPR, Department of Mechanical Engineering, PPGEM, Imaculada Conceicao, 1155, 80215-901 Curitiba, PR (Brazil)], E-mail: viviana.mariani@pucpr.br

    2009-10-15

    A meta-heuristic algorithm called harmony search (HS), mimicking the improvisation process of music players, has been recently developed. The HS algorithm has been successful in several optimization problems. The HS algorithm does not require derivative information and uses stochastic random search instead of a gradient search. In addition, the HS algorithm is simple in concept, few in parameters, and easy in implementation. This paper presents an improved harmony search (IHS) algorithm based on exponential distribution for solving economic dispatch problems. A 13-unit test system with incremental fuel cost function taking into account the valve-point loading effects is used to illustrate the effectiveness of the proposed IHS method. Numerical results show that the IHS method has good convergence property. Furthermore, the generation costs of the IHS method are lower than those of the classical HS and other optimization algorithms reported in recent literature.
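
    The improvisation steps of basic HS, memory consideration, pitch adjustment and random selection, can be sketched on a simple continuous test function. This follows the classical HS loop rather than the paper's IHS variant, and all parameter values are illustrative.

```python
# Bare-bones harmony search minimising the sphere function. Parameter names
# follow common HS conventions: harmony memory size (hms), memory
# considering rate (hmcr), pitch adjusting rate (par), bandwidth (bw).

import random

def harmony_search(f, dim, lo, hi, hms=10, hmcr=0.9, par=0.3, bw=0.1,
                   iters=2000, seed=0):
    rng = random.Random(seed)
    memory = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(hms)]
    for _ in range(iters):
        new = []
        for d in range(dim):
            if rng.random() < hmcr:                # memory consideration
                x = rng.choice(memory)[d]
                if rng.random() < par:             # pitch adjustment
                    x += rng.uniform(-bw, bw)
            else:                                  # random selection
                x = rng.uniform(lo, hi)
            new.append(min(hi, max(lo, x)))
        # Replace the worst harmony if the new one is better
        worst = max(range(hms), key=lambda i: f(memory[i]))
        if f(new) < f(memory[worst]):
            memory[worst] = new
    return min(memory, key=f)

sphere = lambda x: sum(v * v for v in x)
best = harmony_search(sphere, dim=3, lo=-5.0, hi=5.0)
print(round(sphere(best), 4))
```

    Note that HS needs no derivative information, only function evaluations, which is the property the abstract highlights; the IHS variant changes how bw and par are scheduled during the run.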

  19. An improved harmony search algorithm for power economic load dispatch

    Energy Technology Data Exchange (ETDEWEB)

    Coelho, Leandro dos Santos [Pontifical Catholic Univ. of Parana, PUCPR, Industrial and Systems Engineering Graduate Program, PPGEPS, Imaculada Conceicao, 1155, 80215-901 Curitiba, PR (Brazil); Mariani, Viviana Cocco [Pontifical Catholic Univ. of Parana, PUCPR, Dept. of Mechanical Engineering, PPGEM, Imaculada Conceicao, 1155, 80215-901 Curitiba, PR (Brazil)

    2009-10-15

    A meta-heuristic algorithm called harmony search (HS), mimicking the improvisation process of music players, has been recently developed. The HS algorithm has been successful in several optimization problems. The HS algorithm does not require derivative information and uses stochastic random search instead of a gradient search. In addition, the HS algorithm is simple in concept, few in parameters, and easy in implementation. This paper presents an improved harmony search (IHS) algorithm based on exponential distribution for solving economic dispatch problems. A 13-unit test system with incremental fuel cost function taking into account the valve-point loading effects is used to illustrate the effectiveness of the proposed IHS method. Numerical results show that the IHS method has good convergence property. Furthermore, the generation costs of the IHS method are lower than those of the classical HS and other optimization algorithms reported in recent literature. (author)

  20. An improved harmony search algorithm for power economic load dispatch

    International Nuclear Information System (INIS)

    Santos Coelho, Leandro dos; Mariani, Viviana Cocco

    2009-01-01

    A meta-heuristic algorithm called harmony search (HS), mimicking the improvisation process of music players, has been recently developed. The HS algorithm has been successful in several optimization problems. The HS algorithm does not require derivative information and uses stochastic random search instead of a gradient search. In addition, the HS algorithm is simple in concept, few in parameters, and easy in implementation. This paper presents an improved harmony search (IHS) algorithm based on exponential distribution for solving economic dispatch problems. A 13-unit test system with incremental fuel cost function taking into account the valve-point loading effects is used to illustrate the effectiveness of the proposed IHS method. Numerical results show that the IHS method has good convergence property. Furthermore, the generation costs of the IHS method are lower than those of the classical HS and other optimization algorithms reported in recent literature.

  1. New algorithms and methods to estimate maximum-likelihood phylogenies: assessing the performance of PhyML 3.0.

    Science.gov (United States)

    Guindon, Stéphane; Dufayard, Jean-François; Lefort, Vincent; Anisimova, Maria; Hordijk, Wim; Gascuel, Olivier

    2010-05-01

    PhyML is a phylogeny software based on the maximum-likelihood principle. Early PhyML versions used a fast algorithm performing nearest neighbor interchanges to improve a reasonable starting tree topology. Since the original publication (Guindon S., Gascuel O. 2003. A simple, fast and accurate algorithm to estimate large phylogenies by maximum likelihood. Syst. Biol. 52:696-704), PhyML has been widely used (>2500 citations in ISI Web of Science) because of its simplicity and a fair compromise between accuracy and speed. In the meantime, research around PhyML has continued, and this article describes the new algorithms and methods implemented in the program. First, we introduce a new algorithm to search the tree space with user-defined intensity using subtree pruning and regrafting topological moves. The parsimony criterion is used here to filter out the least promising topology modifications with respect to the likelihood function. The analysis of a large collection of real nucleotide and amino acid data sets of various sizes demonstrates the good performance of this method. Second, we describe a new test to assess the support of the data for internal branches of a phylogeny. This approach extends the recently proposed approximate likelihood-ratio test and relies on a nonparametric, Shimodaira-Hasegawa-like procedure. A detailed analysis of real alignments sheds light on the links between this new approach and the more classical nonparametric bootstrap method. Overall, our tests show that the last version (3.0) of PhyML is fast, accurate, stable, and ready to use. A Web server and binary files are available from http://www.atgc-montpellier.fr/phyml/.

  2. An enhanced deterministic K-Means clustering algorithm for cancer subtype prediction from gene expression data.

    Science.gov (United States)

    Nidheesh, N; Abdul Nazeer, K A; Ameer, P M

    2017-12-01

    Clustering algorithms with steps involving randomness usually give different results on different executions for the same dataset. This non-deterministic nature of algorithms such as the K-Means clustering algorithm limits their applicability in areas such as cancer subtype prediction using gene expression data. It is hard to sensibly compare the results of such algorithms with those of other algorithms. The non-deterministic nature of K-Means is due to its random selection of data points as initial centroids. We propose an improved, density based version of K-Means, which involves a novel and systematic method for selecting initial centroids. The key idea of the algorithm is to select data points which belong to dense regions and which are adequately separated in feature space as the initial centroids. We compared the proposed algorithm to a set of eleven widely used single clustering algorithms and a prominent ensemble clustering algorithm which is being used for cancer data classification, based on the performances on a set of datasets comprising ten cancer gene expression datasets. The proposed algorithm has shown better overall performance than the others. There is a pressing need in the Biomedical domain for simple, easy-to-use and more accurate Machine Learning tools for cancer subtype prediction. The proposed algorithm is simple, easy-to-use and gives stable results. Moreover, it provides comparatively better predictions of cancer subtypes from gene expression data. Copyright © 2017 Elsevier Ltd. All rights reserved.
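
    One way to realise the "dense and well-separated initial centroids" idea, as an illustration of the principle rather than the authors' exact procedure, is to rank points by local density and greedily enforce a minimum separation between accepted seeds.

```python
# Hedged sketch of deterministic density-based seeding for K-Means:
# rank points by neighbour count within a radius, then accept the densest
# points that keep a minimum pairwise separation.

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def density_seeds(points, k, radius, min_sep):
    # Local density: number of points (including self) within `radius`
    dens = [sum(dist2(p, q) <= radius ** 2 for q in points) for p in points]
    order = sorted(range(len(points)), key=lambda i: -dens[i])
    seeds = []
    for i in order:
        if all(dist2(points[i], s) >= min_sep ** 2 for s in seeds):
            seeds.append(points[i])
        if len(seeds) == k:
            break
    return seeds

pts = [(0, 0), (0.1, 0), (0, 0.1), (5, 5), (5.1, 5), (5, 5.1), (9, 0)]
print(density_seeds(pts, k=2, radius=0.5, min_sep=2.0))
```

    Because there is no random draw, repeated runs give identical seeds, which is precisely the determinism the abstract argues is needed for reproducible cancer-subtype clustering.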

  3. Answer Markup Algorithms for Southeast Asian Languages.

    Science.gov (United States)

    Henry, George M.

    1991-01-01

    Typical markup methods for providing feedback to foreign language learners are not applicable to languages not written in a strictly linear fashion. A modification of Hart's edit markup software is described, along with a second variation based on a simple edit distance algorithm adapted to a general Southeast Asian font system. (10 references)…
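
    A simple edit distance of the kind referred to can be computed with the classic dynamic-programming recurrence; the Southeast Asian font handling itself is outside this illustration.

```python
# Levenshtein edit distance with a rolling one-row table: the minimum
# number of insertions, deletions and substitutions between two strings.

def edit_distance(a, b):
    m, n = len(a), len(b)
    prev = list(range(n + 1))                      # distances from empty prefix
    for i in range(1, m + 1):
        cur = [i] + [0] * n
        for j in range(1, n + 1):
            cur[j] = min(prev[j] + 1,              # deletion
                         cur[j - 1] + 1,           # insertion
                         prev[j - 1] + (a[i - 1] != b[j - 1]))  # substitution
        prev = cur
    return prev[n]

print(edit_distance("kitten", "sitting"))  # 3
```

    Markup feedback then highlights the cheapest alignment path rather than assuming strictly linear, character-by-character correspondence.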

  4. An Implementable First-Order Primal-Dual Algorithm for Structured Convex Optimization

    Directory of Open Access Journals (Sweden)

    Feng Ma

    2014-01-01

    Full Text Available Many application problems of practical interest can be posed as structured convex optimization models. In this paper, we study a new first-order primal-dual algorithm. The method is easily implementable, provided that the resolvent operators of the component objective functions are simple to evaluate. We show that the proposed method can be interpreted as a proximal point algorithm with a customized metric proximal parameter. The convergence property is established under the analytic contraction framework. Finally, we verify the efficiency of the algorithm by solving the stable principal component pursuit problem.
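
    The assumption that resolvent (proximal) operators are "simple to evaluate" is well illustrated by the l1 norm, whose proximal operator is componentwise soft-thresholding, a standard building block in problems such as stable principal component pursuit. The sketch below is a generic example, independent of the paper's specific splitting.

```python
# Proximal operator of t*||.||_1: componentwise soft-thresholding.
# Each entry is shrunk toward zero by t; entries within [-t, t] become 0.

def prox_l1(x, t):
    return [v - t if v > t else v + t if v < -t else 0.0 for v in x]

r = prox_l1([3.0, -0.5, 1.2], 1.0)
print([round(v, 3) for v in r])  # [2.0, 0.0, 0.2]
```

    Closed-form resolvents like this one are what make each iteration of a first-order primal-dual method cheap.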

  5. Researching on YH100 Numerical Control Servo Press Hydraulic Control System and Control Algorithm

    Directory of Open Access Journals (Sweden)

    Kai LI

    2014-09-01

    Full Text Available This paper studies the hydraulic control system of a numerical control (NC) servo press and its control algorithm. The performance of the NC servo press and the control principle of its hydraulic control system are analyzed. According to the flow equation of the hydraulic control valve, the hydraulic cylinder flow continuity equation and the force balance equation of the loaded hydraulic cylinder, the mathematical model of the hydraulic control system is established, and the transfer function of the servo press hydraulic system is deduced. A suitable immune particle swarm control algorithm is introduced for the servo press hydraulic system, and the control system block diagram is established. The immune algorithm is used to optimize new control parameters of the system, and the optimization results are adopted in the system simulation. The simulation results show that the hydraulic system's transition time under the immune particle swarm algorithm is shorter than with traditional controllers, and the control performance is obviously improved. It can be concluded that immune particle swarm PID control is quick, stable and accurate. Applying this principle in practice, the YH100 numerical control servo press hydraulic control system meets the requirements.

  6. Fast weighted centroid algorithm for single particle localization near the information limit.

    Science.gov (United States)

    Fish, Jeremie; Scrimgeour, Jan

    2015-07-10

    A simple weighting scheme that enhances the localization precision of center of mass calculations for radially symmetric intensity distributions is presented. The algorithm effectively removes the biasing that is common in such center of mass calculations. Localization precision compares favorably with other localization algorithms used in super-resolution microscopy and particle tracking, while significantly reducing the processing time and memory usage. We expect that the algorithm presented will be of significant utility when fast computationally lightweight particle localization or tracking is desired.
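
    The bias-suppressing weighting idea can be sketched as a background-clipped centre-of-mass calculation; the clipping weight used here is an illustrative choice, not necessarily the paper's exact scheme.

```python
# Weighted centre-of-mass localisation: subtract an estimated background
# from each pixel and clip at zero, so dim background pixels far from the
# spot no longer bias the centroid.

def weighted_centroid(img, background=0.0):
    num_x = num_y = norm = 0.0
    for y, row in enumerate(img):
        for x, v in enumerate(row):
            w = max(v - background, 0.0)   # background pixels get zero weight
            num_x += w * x
            num_y += w * y
            norm += w
    return num_x / norm, num_y / norm

img = [[0, 1, 0],
       [1, 8, 1],
       [0, 1, 0]]
print(weighted_centroid(img, background=0.5))  # (1.0, 1.0), the bright centre
```

    The appeal over fitting-based localisers is exactly what the abstract notes: a handful of multiply-adds per pixel, with no iterative optimisation.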

  7. Improved hybrid optimization algorithm for 3D protein structure prediction.

    Science.gov (United States)

    Zhou, Changjun; Hou, Caixia; Wei, Xiaopeng; Zhang, Qiang

    2014-07-01

    A new improved hybrid optimization algorithm - PGATS algorithm, which is based on toy off-lattice model, is presented for dealing with three-dimensional protein structure prediction problems. The algorithm combines the particle swarm optimization (PSO), genetic algorithm (GA), and tabu search (TS) algorithms. Otherwise, we also take some different improved strategies. The factor of stochastic disturbance is joined in the particle swarm optimization to improve the search ability; the operations of crossover and mutation that are in the genetic algorithm are changed to a kind of random liner method; at last tabu search algorithm is improved by appending a mutation operator. Through the combination of a variety of strategies and algorithms, the protein structure prediction (PSP) in a 3D off-lattice model is achieved. The PSP problem is an NP-hard problem, but the problem can be attributed to a global optimization problem of multi-extremum and multi-parameters. This is the theoretical principle of the hybrid optimization algorithm that is proposed in this paper. The algorithm combines local search and global search, which overcomes the shortcoming of a single algorithm, giving full play to the advantage of each algorithm. In the current universal standard sequences, Fibonacci sequences and real protein sequences are certified. Experiments show that the proposed new method outperforms single algorithms on the accuracy of calculating the protein sequence energy value, which is proved to be an effective way to predict the structure of proteins.

  8. Image quality evaluation of full reference algorithm

    Science.gov (United States)

    He, Nannan; Xie, Kai; Li, Tong; Ye, Yushan

    2018-03-01

    Image quality evaluation is a classic research topic; the goal is to design algorithms whose evaluation values are consistent with subjective human judgement. This paper mainly introduces several typical full-reference objective evaluation methods: Mean Squared Error (MSE), Peak Signal to Noise Ratio (PSNR), Structural Similarity Image Metric (SSIM) and Feature Similarity (FSIM). The different evaluation methods are tested in Matlab, and their advantages and disadvantages are obtained by analysis and comparison. MSE and PSNR are simple, but they do not take the characteristics of the human visual system (HVS) into account, and their evaluation results are not ideal. SSIM correlates well with subjective judgement and is simple to compute, because it incorporates human visual effects into the evaluation; however, the SSIM method rests on a hypothesis, and its evaluation results are limited. The FSIM method can be used to test both gray and color images, with better results. Experimental results show that the new image quality evaluation algorithm based on FSIM is more accurate.
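
    The two simplest full-reference metrics mentioned above, MSE and PSNR, follow directly from their definitions; the sketch below assumes 8-bit images (peak value 255).

```python
# Full-reference MSE and PSNR computed from their definitions:
# MSE = mean squared pixel difference; PSNR = 10*log10(peak^2 / MSE).

import math

def mse(ref, img):
    n = sum(len(r) for r in ref)
    return sum((a - b) ** 2
               for r1, r2 in zip(ref, img)
               for a, b in zip(r1, r2)) / n

def psnr(ref, img, peak=255.0):
    m = mse(ref, img)
    return float("inf") if m == 0 else 10 * math.log10(peak ** 2 / m)

ref = [[10, 20], [30, 40]]
img = [[12, 20], [30, 44]]
print(round(mse(ref, img), 2), round(psnr(ref, img), 2))
```

    SSIM and FSIM are more involved because they compare local structure rather than raw pixel differences, which is exactly why they track HVS judgement better.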

  9. Soft magnetic tweezers: a proof of principle.

    Science.gov (United States)

    Mosconi, Francesco; Allemand, Jean François; Croquette, Vincent

    2011-03-01

    We present here the principle of soft magnetic tweezers which improve the traditional magnetic tweezers allowing the simultaneous application and measurement of an arbitrary torque to a deoxyribonucleic acid (DNA) molecule. They take advantage of a nonlinear coupling regime that appears when a fast rotating magnetic field is applied to a superparamagnetic bead immersed in a viscous fluid. In this work, we present the development of the technique and we compare it with other techniques capable of measuring the torque applied to the DNA molecule. In this proof of principle, we use standard electromagnets to achieve our experiments. Despite technical difficulties related to the present implementation of these electromagnets, the agreement of measurements with previous experiments is remarkable. Finally, we propose a simple way to modify the experimental design of electromagnets that should bring the performances of the device to a competitive level.

  10. Mapping robust parallel multigrid algorithms to scalable memory architectures

    Science.gov (United States)

    Overman, Andrea; Vanrosendale, John

    1993-01-01

    The convergence rate of standard multigrid algorithms degenerates on problems with stretched grids or anisotropic operators. The usual cure for this is the use of line or plane relaxation. However, multigrid algorithms based on line and plane relaxation have limited and awkward parallelism and are quite difficult to map effectively to highly parallel architectures. Newer multigrid algorithms that overcome anisotropy through the use of multiple coarse grids rather than relaxation are better suited to massively parallel architectures because they require only simple point-relaxation smoothers. In this paper, we look at the parallel implementation of a V-cycle multiple semicoarsened grid (MSG) algorithm on distributed-memory architectures such as the Intel iPSC/860 and Paragon computers. The MSG algorithms provide two levels of parallelism: parallelism within the relaxation or interpolation on each grid and across the grids on each multigrid level. Both levels of parallelism must be exploited to map these algorithms effectively to parallel architectures. This paper describes a mapping of an MSG algorithm to distributed-memory architectures that demonstrates how both levels of parallelism can be exploited. The result is a robust and effective multigrid algorithm for distributed-memory machines.
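
    The V-cycle structure underlying such algorithms can be illustrated with a deliberately small serial sketch: a 1-D Poisson problem with a weighted-Jacobi point smoother, the kind of simple point-relaxation smoother the multiple-coarse-grid approach relies on. All names and numbers below are illustrative, not taken from the paper:

```python
def residual(u, f, h):
    """r = f - A u for the 1-D Poisson operator A u = (2u_i - u_{i-1} - u_{i+1}) / h^2."""
    n = len(u)
    r = [0.0] * n
    for i in range(n):
        left = u[i - 1] if i > 0 else 0.0
        right = u[i + 1] if i < n - 1 else 0.0
        r[i] = f[i] - (2.0 * u[i] - left - right) / h ** 2
    return r

def smooth(u, f, h, sweeps=3, w=2.0 / 3.0):
    """Weighted-Jacobi point relaxation (the diagonal of A is 2/h^2)."""
    for _ in range(sweeps):
        r = residual(u, f, h)
        for i in range(len(u)):
            u[i] += w * r[i] * h ** 2 / 2.0

def v_cycle(u, f, h):
    """One V-cycle: smooth, restrict residual, recurse, interpolate, smooth."""
    n = len(u)
    if n == 1:                        # coarsest grid: solve directly
        u[0] = f[0] * h ** 2 / 2.0
        return u
    smooth(u, f, h)                   # pre-smoothing
    r = residual(u, f, h)
    n2 = (n - 1) // 2                 # coarse grid keeps every second interior point
    r2 = [(r[2 * i] + 2.0 * r[2 * i + 1] + r[2 * i + 2]) / 4.0 for i in range(n2)]
    e2 = v_cycle([0.0] * n2, r2, 2.0 * h)
    for i in range(n2):               # linear interpolation of the coarse correction
        u[2 * i + 1] += e2[i]
        u[2 * i] += 0.5 * e2[i]
        u[2 * i + 2] += 0.5 * e2[i]
    smooth(u, f, h)                   # post-smoothing
    return u
```

    In the MSG setting, the single coarse grid of this sketch is replaced by several semicoarsened grids, and both the relaxation on each grid and the work across grids can proceed in parallel.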

  11. Children, algorithm and the decimal numeral system

    Directory of Open Access Journals (Sweden)

    Clélia Maria Ignatius Nogueira

    2010-08-01

    Full Text Available A large number of studies in Mathematics Education address possible problems in the study of algorithms in the early school years of arithmetic teaching. However, this discussion is not exhausted. In this vein, this article presents the results of a research project that investigated whether the teaching of arithmetic, with emphasis on the algorithms of the fundamental operations, contributes to building mathematical knowledge, specifically of the Decimal Numeral System. To achieve this purpose, we interviewed twenty students from a public school, using Piaget's critical clinical method. The analysis of the results indicates that the students mechanically reproduce the standard algorithmic techniques without noticing the relations between those techniques and the principles and properties of the Decimal Numeral System.

  12. Maximal frustration as an immunological principle.

    Science.gov (United States)

    de Abreu, F Vistulo; Mostardinha, P

    2009-03-06

    A fundamental problem in immunology is that of understanding how the immune system selects promptly which cells to kill without harming the body. This problem poses an apparent paradox. Strong reactivity against pathogens seems incompatible with perfect tolerance towards self. We propose a different view on cellular reactivity to overcome this paradox: effector functions should be seen as the outcome of cellular decisions which can be in conflict with other cells' decisions. We argue that if cellular systems are frustrated, then extensive cross-reactivity among the elements in the system can decrease the reactivity of the system as a whole and induce perfect tolerance. Using numerical and mathematical analyses, we discuss two simple models that perform optimal pathogenic detection with no autoimmunity if cells are maximally frustrated. This study strongly suggests that a principle of maximal frustration could be used to build artificial immune systems. It would be interesting to test this principle in the real adaptive immune system.

  13. Simple models of equilibrium and nonequilibrium phenomena

    International Nuclear Information System (INIS)

    Lebowitz, J.L.

    1987-01-01

    This volume consists of two chapters of particular interest to researchers in the field of statistical mechanics. The first chapter is based on the premise that the best way to understand the qualitative properties that characterize many-body (i.e. macroscopic) systems is to study 'a number of the more significant model systems which, at least in principle, are susceptible of complete analysis'. The second chapter deals exclusively with nonequilibrium phenomena. It reviews the theory of fluctuations in open systems, to which the authors have made important contributions. Simple but interesting model examples are emphasised.

  14. A Qualitative Comparison between the Proportional Navigation and Differential Geometry Guidance Algorithms

    Directory of Open Access Journals (Sweden)

    Yunes Sh. ALQUDSI

    2018-06-01

    Full Text Available This paper presents an overview of the proportional navigation (PN) guidance law as well as the differential geometry (DG) guidance algorithm, both used to develop an intercept course to a given target. The intent of this study is to illustrate the advantages of the guidance algorithm based on the concepts of differential geometry over the well-known PN guidance law. The basic principles behind both algorithms are described, and the different versions of the PN approach are briefly reviewed to show the essential improvement from one version to the next. The paper ends with numerous two-dimensional simulation figures that serve as visual aids, illustrating the significant relations and the main features and properties of both algorithms.
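
    The core of the PN law is the commanded lateral acceleration a = N · V_c · λ̇ (navigation constant times closing velocity times line-of-sight rate). A minimal planar kinematic sketch of an engagement, with all numerical values and the function name invented for illustration, might look like:

```python
import math

def simulate_pn(N=4.0, dt=0.001, t_max=30.0):
    """Planar pursuit of a constant-velocity target under proportional navigation.

    Returns (miss_distance_m, time_s). Simple Euler integration; constant missile speed.
    """
    mx, my = 0.0, 0.0                  # missile position
    tx, ty = 5000.0, 3000.0            # target position
    vm = 300.0                         # missile speed, m/s
    vtx, vty = -100.0, 0.0             # target velocity, m/s
    heading = math.atan2(ty - my, tx - mx)   # launch along the initial line of sight
    los_prev = heading
    r = math.hypot(tx - mx, ty - my)
    for step in range(int(t_max / dt)):
        tx += vtx * dt
        ty += vty * dt
        mx += vm * math.cos(heading) * dt
        my += vm * math.sin(heading) * dt
        rx, ry = tx - mx, ty - my
        r = math.hypot(rx, ry)
        if r < 5.0:                    # treat a close pass as interception
            return r, step * dt
        los = math.atan2(ry, rx)
        los_rate = (los - los_prev) / dt
        los_prev = los
        vcx, vcy = vtx - vm * math.cos(heading), vty - vm * math.sin(heading)
        vc = -(rx * vcx + ry * vcy) / r          # closing velocity
        a_cmd = N * vc * los_rate                # the PN law
        heading += (a_cmd / vm) * dt             # lateral acceleration turns the velocity
    return r, t_max
```

    For this non-maneuvering target the commanded acceleration drives the line-of-sight rate toward zero, which is the defining property of PN.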

  15. The Generalized Principle of the Golden Section and its applications in mathematics, science, and engineering

    Energy Technology Data Exchange (ETDEWEB)

    Stakhov, A.P. [International Club of the Golden Section, 6 McCreary Trail, Bolton, ON, L7E 2C8 (Canada)] e-mail: goldenmuseum@rogers.com

    2005-10-01

    The 'Dichotomy Principle' and the classical 'Golden Section Principle' are two of the most important principles of Nature, Science and also Art. The Generalized Principle of the Golden Section, which follows from studying the diagonal sums of the Pascal triangle, is a sweeping generalization of these important principles. It underlies the foundation of 'Harmony Mathematics', a newly proposed mathematical direction. Harmony Mathematics includes a number of new mathematical theories: an algorithmic measurement theory, a new number theory, a new theory of hyperbolic functions based on Fibonacci and Lucas numbers, and a theory of the Fibonacci and 'Golden' matrices. These mathematical theories are the source of many new ideas in mathematics, philosophy, botany and biology, electrical and computer science and engineering, communication systems, and mathematical education, as well as theoretical physics and the physics of high-energy particles.
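
    One concrete piece of this program is easy to check in code: the Fibonacci p-numbers arise both from the recurrence F_p(n) = F_p(n-1) + F_p(n-p-1) and as diagonal sums of Pascal's triangle, and their consecutive ratios approach the golden p-ratio, the real root of x^(p+1) = x^p + 1 (p = 1 gives the classical golden section). A small sketch with our own function names:

```python
from math import comb

def fib_p(n, p):
    """Fibonacci p-number via the recurrence F_p(n) = F_p(n-1) + F_p(n-p-1),
    with F_p(1) = ... = F_p(p+1) = 1."""
    if n <= 0:
        return 0
    vals = [1] * (p + 1)
    for _ in range(p + 1, n):
        vals.append(vals[-1] + vals[-(p + 1)])
    return vals[n - 1]

def fib_p_pascal(n, p):
    """The same number as a diagonal sum of Pascal's triangle."""
    m = n - 1
    return sum(comb(m - p * k, k) for k in range(m // (p + 1) + 1))
```

    For p = 1 the diagonal sum reduces to the familiar identity F(n) = sum_k C(n-k-1, k).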

  16. Algorithms for Calculating Alternating Infinite Series

    International Nuclear Information System (INIS)

    Garcia, Hector Luna; Garcia, Luz Maria

    2015-01-01

    This paper presents novel algorithms for the exact limits of a broad class of infinite alternating series. Many of these series appear in physics and other branches of science, and the exact values we obtain are in complete agreement with the values found by other authors. Finally, these simple methods are very powerful in calculating the limits of many series, as shown by the examples

  17. Use of ECG and Other Simple Non-Invasive Tools to Assess Pulmonary Hypertension.

    Directory of Open Access Journals (Sweden)

    Gabor Kovacs

    Full Text Available There is a broad consensus that pulmonary hypertension (PH) is to be diagnosed by right heart catheterization (RHC) and that the most important non-invasive tool is echocardiography. However, the role of simple non-invasive tools in the work-up of PH is not clearly defined. We hypothesized that the use of simple non-invasive techniques may help to guide important decisions in the diagnostics of pulmonary hypertension. We aimed to develop an algorithm using simple, non-invasive tools to identify patients with a very high or very low likelihood of PH. We retrospectively analyzed all consecutive patients undergoing RHC between 2005 and 2010 in our center, performed logistic regression of simple non-invasive parameters regarding the detection and exclusion of PH, and derived a two-step algorithm. In a prospective study we evaluated this algorithm between 2011 and 2013. The retrospective cohort consisted of n = 394 patients, of whom 49% presented with PH. Right axis deviation in the ECG was present in 90/394 patients and had a positive predictive value (PPV) of 93% for PH. The combination of non-right axis deviation, N-terminal pro-brain natriuretic peptide (NT-proBNP) < 333 pg/ml, arterial oxygen saturation (SO2) ≥ 95.5% and WHO functional class I-II was present in 69/394 patients and excluded PH with a negative predictive value (NPV) of 96%. The prospective study confirmed these results in a cohort of n = 168 patients (PPV: 92%, NPV: 97%). Taken together, simple non-invasive tools allowed a prediction regarding the presence or absence of PH in 42% of patients with suspected PH. ECG, NT-proBNP, SO2 and WHO functional class may predict the presence or absence of PH in almost half of the patients with suspected PH, suggesting an important role for these variables in the work-up of patients at risk for PH. NCT01607502.
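
    The two-step decision rule described above can be written as a tiny sketch; the threshold values come from the abstract, while the function name and return labels are ours:

```python
def screen_ph(right_axis_deviation, nt_probnp_pg_ml, sao2_percent, who_class):
    """Two-step non-invasive screen for pulmonary hypertension.

    Step 1: right axis deviation on ECG -> PH likely (PPV ~93% retrospectively).
    Step 2: all four low-risk criteria met -> PH unlikely (NPV ~96%).
    Otherwise the simple tools make no prediction and RHC work-up proceeds as usual.
    """
    if right_axis_deviation:
        return "PH likely"
    if nt_probnp_pg_ml < 333 and sao2_percent >= 95.5 and who_class <= 2:
        return "PH unlikely"
    return "no prediction"
```

    In the study cohorts, roughly 42% of patients fell into one of the first two branches.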

  18. Cell light scattering characteristic numerical simulation research based on FDTD algorithm

    Science.gov (United States)

    Lin, Xiaogang; Wan, Nan; Zhu, Hao; Weng, Lingdong

    2017-01-01

    In this study, the finite-difference time-domain (FDTD) algorithm has been used to solve the cell light scattering problem. Before starting the simulation comparison, it is necessary to identify the changes or differences between normal cells and abnormal cells, which may be cancerous or maldeveloped. The preparation for the simulation consists of building a simple cell model comprising organelles, nucleus and cytoplasm, and choosing a suitable mesh precision. Setting up a total-field/scattered-field source as the excitation, together with a far-field projection analysis group, is also important. Each step needs to be justified by mathematical principles such as numerical dispersion, the perfectly matched layer boundary condition and near-to-far-field extrapolation. The simulation results indicate that a change in the position of the nucleus increases the backscattering intensity, and that significant differences in the peak scattering intensity may result from changes in the size of the cytoplasm. The study may help us find regularities in the simulation results that can be meaningful for the early diagnosis of cancers.
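
    The leapfrog update at the heart of any FDTD code is simple to sketch in one dimension (normalized Yee scheme, Courant number 0.5, soft Gaussian source, perfectly conducting ends). This toy sketch is ours and is far simpler than the 3-D cell model with PML boundaries used in the study:

```python
import math

def fdtd_1d(size=200, steps=200):
    """1-D FDTD: alternating updates of E and H on a staggered grid (PEC ends)."""
    ez = [0.0] * size
    hy = [0.0] * size
    for t in range(steps):
        for k in range(1, size):                 # update E from the curl of H
            ez[k] += 0.5 * (hy[k - 1] - hy[k])
        ez[size // 4] += math.exp(-((t - 30) / 10.0) ** 2)   # soft Gaussian source
        for k in range(size - 1):                # update H from the curl of E
            hy[k] += 0.5 * (ez[k] - ez[k + 1])
    return ez
```

    With the Courant factor of 0.5 the scheme is stable, and the injected pulse splits into two counter-propagating waves, as expected from the 1-D wave equation.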

  19. Speckle imaging using the principal value decomposition method

    International Nuclear Information System (INIS)

    Sherman, J.W.

    1978-01-01

    Obtaining diffraction-limited images in the presence of atmospheric turbulence is a topic of current interest. Two types of approaches have evolved: real-time correction and speckle imaging. A speckle imaging reconstruction method was developed using an ''optimal'' filtering approach. This method is based on a nonlinear integral equation which is solved by principal value decomposition. The method was implemented on a CDC 7600 for study. The restoration algorithm is discussed and its performance is illustrated. 7 figures

  20. Exponential-Time Algorithms and Complexity of NP-Hard Graph Problems

    DEFF Research Database (Denmark)

    Taslaman, Nina Sofia

    NP-hard problems are deemed highly unlikely to be solvable in polynomial time. Still, one can often find algorithms that are substantially faster than brute force solutions. This thesis concerns such algorithms for problems from graph theory: techniques for constructing and improving this type of algorithms, as well as investigations into how far such improvements can get under reasonable assumptions. The first part is concerned with detection of cycles in graphs, especially parameterized generalizations of Hamiltonian cycles. A remarkably simple Monte Carlo algorithm is presented, and with high probability any found solution is shortest possible. Moreover, the algorithm can be used to find a cycle of given parity through the specified elements. The second part concerns the hardness of problems encoded as evaluations of the Tutte polynomial at some fixed point in the rational plane...

  1. The study of Kruskal's and Prim's algorithms on the Multiple Instruction and Single Data stream computer system

    Directory of Open Access Journals (Sweden)

    A. Yu. Popov

    2015-01-01

    Full Text Available Bauman Moscow State Technical University is implementing a project to develop the operating principles of a computer system with a radically new architecture. A working model of the system allowed us to evaluate the efficiency of the developed hardware and software. The experimental results presented in previous studies, together with an analysis of the operating principles of the new computer system, permit conclusions regarding its efficiency in solving discrete optimization problems related to the processing of sets. The new architecture is based on direct hardware support for the operations of discrete mathematics, reflected in special facilities for processing sets and data structures. Within the framework of the project, a special device was designed, the structure processor (SP), which improves performance without limiting the range of applications of such a computer system. Previous works presented the basic principles of organizing the computational process in a MISD (Multiple Instructions, Single Data) system, and showed the structure and features of the structure processor together with general principles for solving discrete optimization problems on graphs. This paper examines two minimum-spanning-tree search algorithms, namely Kruskal's and Prim's. It studies implementations of the algorithms for two SP operation modes: coprocessor mode and MISD mode. The paper presents results of an experimental comparison of the MISD system's performance in coprocessor mode with mainframes.
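
    For reference, Kruskal's algorithm itself is short; the point of the architecture above is that the set operations it relies on (sorting edges, union-find over vertex sets) are precisely what the structure processor accelerates in hardware. A conventional software sketch:

```python
def kruskal(n, edges):
    """Kruskal's MST: scan edges by increasing weight, keep those that join
    two different components (tracked with a union-find structure).

    edges: iterable of (weight, u, v) with vertices 0..n-1.
    Returns (mst_edges, total_weight)."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    mst, weight = [], 0
    for w, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            mst.append((u, v, w))
            weight += w
    return mst, weight
```

    Prim's algorithm grows a single tree from one vertex instead; both produce a minimum spanning tree on connected graphs.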

  2. A Faster Algorithm for Computing Motorcycle Graphs

    KAUST Repository

    Vigneron, Antoine E.; Yan, Lie

    2014-01-01

    We present a new algorithm for computing motorcycle graphs that runs in (Formula presented.) time for any (Formula presented.), improving on all previously known algorithms. The main application of this result is to computing the straight skeleton of a polygon. It allows us to compute the straight skeleton of a non-degenerate polygon with (Formula presented.) holes in (Formula presented.) expected time. If all input coordinates are (Formula presented.)-bit rational numbers, we can compute the straight skeleton of a (possibly degenerate) polygon with (Formula presented.) holes in (Formula presented.) expected time. In particular, it means that we can compute the straight skeleton of a simple polygon in (Formula presented.) expected time if all input coordinates are (Formula presented.)-bit rationals, while all previously known algorithms have worst-case running time (Formula presented.). © 2014 Springer Science+Business Media New York.

  4. [Algorithms for treatment of complex hand injuries].

    Science.gov (United States)

    Pillukat, T; Prommersberger, K-J

    2011-07-01

    The primary treatment strongly influences the course and prognosis of hand injuries. Complex injuries which compromise functional recovery are especially challenging. Despite an apparently unlimited number of injury patterns, it is possible to develop strategies that facilitate a standardized approach to operative treatment. In this situation, algorithms can be important guidelines for a rational approach. The following algorithms have proven themselves in our own experience with the treatment of complex injuries of the hand. They were modified according to the current literature and cover prehospital care, emergency room management, basic strategy in general, and, in detail, the reconstruction of bone and joints, vessels, nerves, tendons and soft tissue coverage. Algorithms facilitate the treatment of severe hand injuries: by applying simple yes/no decisions, complex injury patterns are split into distinct partial problems which can be managed step by step.

  5. A Simple Apparatus for Demonstrating Fluid Forces and Newton's Third Law

    Science.gov (United States)

    Mohazzabi, Pirooz; James, Mark C.

    2012-01-01

    Over 2200 years ago, in order to determine the purity of a golden crown of the king of Syracuse, Archimedes submerged the crown in water and determined its volume by measuring the volume of the displaced water. This simple experiment became the foundation of what eventually became known as Archimedes' principle: An object fully or partially…

  6. General Quantum Interference Principle and Duality Computer

    International Nuclear Information System (INIS)

    Long Guilu

    2006-01-01

    In this article, we propose a general principle of quantum interference for quantum systems, and based on it we propose a new type of computing machine, the duality computer, that may in principle outperform both the classical computer and the quantum computer. According to the general principle of quantum interference, the very essence of quantum interference is the interference of the sub-waves of the quantum system itself. A quantum system considered here can be any quantum system: a single microscopic particle, a composite quantum system such as an atom or a molecule, or a loose collection of a few quantum objects such as two independent photons. In the duality computer, the wave of the duality computer is split into several sub-waves that pass through different routes, where different computing gate operations are performed. These sub-waves are then re-combined to interfere and give the computational results. The quantum computer, by contrast, uses only the particle nature of quantum objects. In a duality computer, it may be possible to find a marked item in an unsorted database using only a single query, and all NP-complete problems may have polynomial algorithms. Two proof-of-principle designs of the duality computer are presented: the giant molecule scheme and the nonlinear quantum optics scheme. We also propose a thought experiment to check a related fundamental issue, the measurement efficiency of a partial wave function.

  7. Remote sensing of suspended sediment water research: principles, methods, and progress

    Science.gov (United States)

    Shen, Ping; Zhang, Jing

    2011-12-01

    In this paper, we review the principles, data, methods and steps of suspended sediment research using remote sensing, summarize some representative models and methods, and analyze the deficiencies of existing methods. Drawing on recent progress in remote sensing theory and its application to suspended sediment research, we introduce data processing methods such as atmospheric correction and adjacency-effect correction, together with intelligent algorithms such as neural networks, genetic algorithms and support vector machines, into suspended sediment inversion research. Combined with other geographic information, and based on Bayesian theory, we improve the inversion precision for suspended sediment, aiming to provide a reference for related researchers.

  8. Principle and realization of segmenting contour series algorithm in reverse engineering based on X-ray computerized tomography

    International Nuclear Information System (INIS)

    Wang Yanfang; Liu Li; Yan Yonglian; Shan Baoci; Tang Xiaowei

    2007-01-01

    A new algorithm for segmenting contour series of images is presented, which achieves three-dimensional reconstruction with parametric recognition in reverse engineering based on X-ray CT. First, to obtain the nesting relationships between contours, a ray cast at a fixed angle is used. Second, to locate the contours within one slice, another approach is presented that generates the contour tree by scanning the relevant vector only once. Last, a judging algorithm is put forward to match contours between slices using qualitative and quantitative properties. The example shows that this algorithm can segment contour series of CT parts rapidly and precisely. (authors)
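
    The ray idea for establishing contour nesting can be sketched with an even-odd crossing test: cast a horizontal ray from a contour vertex and count how many times it crosses another contour; an odd count means containment. The code below is an illustrative stand-in for that step, not the authors' implementation (it assumes non-intersecting polygonal contours and picks the smallest enclosing contour as the parent):

```python
def point_in_polygon(pt, poly):
    """Even-odd test: cast a ray toward +x and count edge crossings."""
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):                       # edge straddles the ray
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:
                inside = not inside
    return inside

def area(poly):
    """Shoelace area, used to pick the innermost enclosing contour."""
    s = 0.0
    for i in range(len(poly)):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % len(poly)]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

def contour_parent(contours):
    """For each contour, the index of its smallest enclosing contour, or None."""
    parents = []
    for i, c in enumerate(contours):
        enclosing = [j for j, o in enumerate(contours)
                     if j != i and point_in_polygon(c[0], o)]
        parents.append(min(enclosing, key=lambda j: area(contours[j]))
                       if enclosing else None)
    return parents
```

    The parent list is exactly the contour tree the slice-location step needs.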

  9. Complex-based OCT angiography algorithm recovers microvascular information better than amplitude- or phase-based algorithms in phase-stable systems.

    Science.gov (United States)

    Xu, Jingjiang; Song, Shaozhen; Li, Yuandong; Wang, Ruikang K

    2017-12-19

    Optical coherence tomography angiography (OCTA) is becoming an increasingly popular inspection tool for biomedical imaging applications. By exploiting the amplitude, phase and complex information available in OCT signals, numerous algorithms have been proposed to contrast functional vessel networks within microcirculatory tissue beds. However, it is not clear which algorithm delivers optimal imaging performance. Here, we investigate systematically how amplitude and phase information impact OCTA imaging performance, to establish the relationship of amplitude and phase stability with OCT signal-to-noise ratio (SNR), time interval and particle dynamics. With either repeated A-scan or repeated B-scan imaging protocols, the amplitude noise increases as OCT SNR increases, whereas the phase noise does the opposite, increasing as OCT SNR decreases. Coupled with experimental measurements, we utilize a simple Monte Carlo (MC) model to simulate the performance of amplitude-, phase- and complex-based algorithms for OCTA imaging, the results of which suggest that complex-based algorithms deliver the best performance when the phase noise is low. In phase-stable systems, the complex-based algorithm delivers better performance than either the amplitude- or phase-based algorithms for both the repeated A-scan and the repeated B-scan imaging protocols, which agrees well with the conclusion drawn from the MC simulations.

  10. A Hybrid Optimization Algorithm for Low RCS Antenna Design

    Directory of Open Access Journals (Sweden)

    W. Shao

    2012-12-01

    Full Text Available In this article, a simple and efficient method is presented to design low radar cross section (RCS) patch antennas. The method consists of a hybrid optimization algorithm, which combines a genetic algorithm (GA) with a tabu search algorithm (TSA), and an electromagnetic field solver. The TSA, embedded into the GA frame, defines the acceptable neighborhood region of parameters and screens out poor-scoring individuals. Thus, repeated searches are avoided and the number of time-consuming electromagnetic simulations is greatly reduced. Moreover, the whole design procedure is automated by programming in the VBScript language. A slot patch antenna example is provided to verify the accuracy and efficiency of the proposed method.
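
    The interplay between the two heuristics can be sketched on a toy bit-string problem: the GA supplies selection, crossover and mutation, while a tabu set rejects genomes that were already evaluated, so no expensive objective evaluation (in the paper, an EM solve) is ever repeated. Everything below, including the OneMax stand-in objective, is an illustration rather than the authors' code:

```python
import random

def ga_with_tabu(n_bits=16, pop_size=20, gens=60, seed=7):
    """Toy hybrid GA: a tabu set screens out already-seen genomes."""
    rng = random.Random(seed)
    fitness = lambda g: sum(g)              # stand-in for a costly EM simulation
    pop = [tuple(rng.randint(0, 1) for _ in range(n_bits)) for _ in range(pop_size)]
    tabu = set(pop)                         # every evaluated genome goes here
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == n_bits:
            break
        parents = pop[:pop_size // 2]       # elitist truncation selection
        children = list(parents)
        tries = 0
        while len(children) < pop_size and tries < 10000:
            tries += 1
            p1, p2 = rng.sample(parents, 2)
            cut = rng.randrange(1, n_bits)  # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = tuple(b ^ 1 if rng.random() < 0.05 else b for b in child)
            if child in tabu:
                continue                    # tabu: never re-evaluate a genome
            tabu.add(child)
            children.append(child)
        pop = children
    return max(pop, key=fitness)
```

    The `tries` guard simply prevents the loop from stalling if fresh genomes become hard to find late in the run.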

  11. Novel prediction- and subblock-based algorithm for fractal image compression

    International Nuclear Information System (INIS)

    Chung, K.-L.; Hsu, C.-H.

    2006-01-01

    Fractal encoding is the most time-consuming part of fractal image compression. In this paper, a novel two-phase prediction- and subblock-based fractal encoding algorithm is presented. Initially, the original gray image is partitioned into a set of variable-size blocks according to the S-tree- and interpolation-based decomposition principle. In the first phase, each variable-size range block tries to find the best-matched domain block using the proposed prediction-based search strategy, which exploits the relevant neighboring variable-size domain blocks; this phase leads to a significant computation saving. If the domain block found within the predicted search space is unacceptable, in the second phase a subblock strategy is employed to partition the current variable-size range block into smaller blocks to improve the image quality. Experimental results show that the proposed prediction- and subblock-based fractal encoding algorithm outperforms the conventional full-search algorithm and the recently published spatial-correlation-based algorithm by Truong et al. in terms of encoding time and image quality. In addition, performance comparisons among the proposed algorithm and two other algorithms, the no-search algorithm and the quadtree-based algorithm, are also investigated

  12. [Orthogonal Vector Projection Algorithm for Spectral Unmixing].

    Science.gov (United States)

    Song, Mei-ping; Xu, Xing-wei; Chang, Chein-I; An, Ju-bai; Yao, Li

    2015-12-01

    Spectral unmixing is an important part of hyperspectral technology and is essential for material abundance analysis in hyperspectral imagery. Most linear unmixing algorithms require matrix multiplication together with matrix inversion or determinant computation. These are difficult to program and especially hard to realize in hardware, and the computational cost of the algorithms increases significantly as the number of endmembers grows. Here, based on the traditional Orthogonal Subspace Projection algorithm, a new method called Orthogonal Vector Projection is proposed using the orthogonality principle. It simplifies the process by avoiding matrix multiplication and inversion. It first computes the final orthogonal vector via the Gram-Schmidt process for each endmember spectrum. These orthogonal vectors are then used as projection vectors for the pixel signature. The unconstrained abundance can be obtained directly by projecting the signature onto the projection vectors and computing the ratio of the projected vector length to the orthogonal vector length. Compared to the Orthogonal Subspace Projection and Least Squares Error algorithms, this method needs no matrix inversion, which is computationally costly and hard to implement in hardware. It completes the orthogonalization process by repeated vector operations, making it easy to apply in both parallel computation and hardware. The soundness of the algorithm is proved through its relationship with the Orthogonal Subspace Projection and Least Squares Error algorithms, and its computational complexity, the lowest of the three, is compared with theirs. Finally, experimental results on synthetic and real images are provided, giving further evidence of the effectiveness of the method.
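
    The per-endmember computation can be sketched as follows: for endmember e_i, Gram-Schmidt produces a vector q_i orthogonal to all the other endmembers, and the unconstrained abundance is the ratio of projections, a_i = (x·q_i)/(e_i·q_i), since x·q_i kills every other endmember's contribution. A small illustrative implementation (our names, pure vector operations as the abstract emphasizes):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def ovp_abundances(pixel, endmembers):
    """Unconstrained abundances by projecting onto per-endmember orthogonal vectors."""
    abund = []
    for i, e in enumerate(endmembers):
        # Orthonormal basis for the span of the *other* endmembers (Gram-Schmidt).
        basis = []
        for j, other in enumerate(endmembers):
            if j == i:
                continue
            v = list(other)
            for b in basis:
                c = dot(v, b)
                v = [x - c * y for x, y in zip(v, b)]
            norm = dot(v, v) ** 0.5
            if norm > 1e-12:
                basis.append([x / norm for x in v])
        # q_i: the part of e_i orthogonal to all other endmembers.
        q = list(e)
        for b in basis:
            c = dot(q, b)
            q = [x - c * y for x, y in zip(q, b)]
        abund.append(dot(pixel, q) / dot(e, q))
    return abund
```

    For a noiseless linear mixture the ratios recover the abundances exactly, with no matrix inversion anywhere.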

  13. Cluster algorithms with emphasis on quantum spin systems

    International Nuclear Information System (INIS)

    Gubernatis, J.E.; Kawashima, Naoki

    1995-01-01

    The purpose of this lecture is to discuss in detail the generalized approach of Kawashima and Gubernatis for the construction of cluster algorithms. We first present a brief refresher on the Monte Carlo method, describe the Swendsen-Wang algorithm, show how this algorithm follows from the Fortuin-Kasteleyn transformation, and re-interpret this transformation in a form which is the basis of the generalized approach. We then derive the essential equations of the generalized approach. This derivation is remarkably simple if done from the viewpoint of probability theory, and the essential assumptions will be clearly stated. These assumptions are implicit in all useful cluster algorithms of which we are aware. They lead to a quite different perspective on cluster algorithms than found in the seminal works and in Ising model applications. Next, we illustrate how the generalized approach leads to a cluster algorithm for world-line quantum Monte Carlo simulations of Heisenberg models with S = 1/2. More succinctly, we also discuss the generalization of the Fortuin-Kasteleyn transformation to higher spin models and illustrate the essential steps for a S = 1 Heisenberg model. Finally, we summarize how to go beyond S = 1 to a general spin, XYZ model
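
    As a concrete instance of the Fortuin-Kasteleyn construction, a minimal Swendsen-Wang update for the classical 2-D Ising model activates a bond between aligned neighbours with probability p = 1 - exp(-2β), then flips each resulting cluster with probability 1/2. A compact sketch of ours (periodic boundaries, union-find clusters), not the lecture's code:

```python
import math
import random

def swendsen_wang(L=8, beta=0.6, sweeps=50, seed=2):
    """Swendsen-Wang cluster updates for the 2-D Ising model; returns final spins."""
    rng = random.Random(seed)
    n = L * L
    spin = [rng.choice((-1, 1)) for _ in range(n)]
    p = 1.0 - math.exp(-2.0 * beta)      # FK bond probability for aligned neighbours
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for _ in range(sweeps):
        for i in range(n):
            parent[i] = i
        for i in range(n):               # activate bonds (right and down neighbours)
            x, y = i % L, i // L
            for j in (((x + 1) % L) + y * L, x + ((y + 1) % L) * L):
                if spin[i] == spin[j] and rng.random() < p:
                    parent[find(i)] = find(j)
        flip = {}                        # flip each cluster with probability 1/2
        for i in range(n):
            root = find(i)
            if root not in flip:
                flip[root] = rng.random() < 0.5
            if flip[root]:
                spin[i] = -spin[i]
    return spin
```

    Because whole clusters move at once, critical slowing down is drastically reduced compared with single-spin Metropolis updates.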

  14. STREAMFINDER I: A New Algorithm for detecting Stellar Streams

    Science.gov (United States)

    Malhan, Khyati; Ibata, Rodrigo A.

    2018-04-01

    We have designed a powerful new algorithm to detect stellar streams in an automated and systematic way. The algorithm, which we call the STREAMFINDER, is well suited for finding dynamically cold and thin stream structures that may lie along any simple or complex orbits in Galactic stellar surveys containing any combination of positional and kinematic information. In the present contribution we introduce the algorithm, lay out the ideas behind it, explain the methodology adopted to detect streams and detail its workings by running it on a suite of simulations of mock Galactic survey data of similar quality to that expected from the ESA/Gaia mission. We show that our algorithm is able to detect even ultra-faint stream features lying well below previous detection limits. Tests show that our algorithm will be able to detect distant halo stream structures >10° long containing as few as ~15 members (ΣG ~ 33.6 mag arcsec^-2) in the Gaia dataset.

  15. New Algorithm of Automatic Complex Password Generator Employing Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Sura Jasim Mohammed

    2018-01-01

    Full Text Available Due to the increase in information sharing, internet popularization, e-commerce transactions and data transfer, security and authenticity have become important and necessary subjects. In this paper, an automated scheme is proposed to generate a strong and complex password from initial input data such as text (meaningful or not), by encoding it and then employing a genetic algorithm, using its crossover and mutation operations, to generate data different from the input. The generated password is non-guessable and can be used in many different applications and internet services, such as social networks, secured systems, distributed systems and online services. The proposed password generator achieves the diffusion, randomness and confusion that are required in the resulting password. Note also that the length of the generated password differs from the length of the initial data, and any simple change or modification in the initial data produces a substantial change in the generated password. The proposed work was implemented in the Visual Basic programming language.

  16. On integral formulation of the Mach principle in a conformally flat space

    International Nuclear Information System (INIS)

    Mal'tsev, V.K.

    1976-01-01

    The integral formulation of the Mach principle involves a rather complicated mathematical formalism in which many aspects of the physical content of the theory are not clear. Below, an attempt is made to consider the integral representation for the simplest case of conformally flat spaces. The fact that in this formalism there is only one scalar function makes it possible to analyse in more detail many physical peculiarities of this representation of the Mach principle: the absence of asymptotically flat spaces, problems of inertia and gravity, constraints on equations of state, etc

  17. An implementation of the maximum-caliber principle by replica-averaged time-resolved restrained simulations.

    Science.gov (United States)

    Capelli, Riccardo; Tiana, Guido; Camilloni, Carlo

    2018-05-14

Inferential methods can be used to integrate experimental information and molecular simulations. The maximum entropy principle provides a framework for using equilibrium experimental data, and it has been shown that replica-averaged simulations, restrained using a static potential, are a practical and powerful implementation of such a principle. Here we show that replica-averaged simulations restrained using a time-dependent potential are equivalent to the principle of maximum caliber, the dynamic version of the principle of maximum entropy, and thus may allow us to integrate time-resolved data in molecular dynamics simulations. We provide an analytical proof of the equivalence as well as a computational validation making use of simple models and synthetic data. Some limitations and possible solutions are also discussed.

  18. Few group collapsing of covariance matrix data based on a conservation principle

    International Nuclear Information System (INIS)

    Hiruta, H.; Palmiotti, G.; Salvatores, M.; Arcilla, R. Jr.; Oblozinsky, P.; McKnight, R.D.

    2008-01-01

A new algorithm for rigorous collapsing of covariance data is proposed, derived, implemented, and tested. The method is based on a conservation principle that preserves, in a broad energy group structure, the uncertainty calculated in a fine energy group structure for a specific integral parameter, using the associated sensitivity coefficients as weights.
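The conservation principle can be illustrated with a small sensitivity-weighted collapsing sketch; the group boundaries, covariance values, and sensitivities below are invented, and the weighting formula is the standard sandwich-rule-preserving form rather than the paper's exact derivation:

```python
# Sensitivity-weighted collapsing: the broad-group uncertainty of the
# integral parameter exactly reproduces the fine-group value.

def collapse_covariance(cov, sens, groups):
    """cov: fine-group relative covariance (list of lists); sens: fine-group
    sensitivities; groups: list of fine-group index lists per broad group."""
    S_broad = [sum(sens[i] for i in g) for g in groups]
    n = len(groups)
    cov_broad = [[0.0] * n for _ in range(n)]
    for I, gI in enumerate(groups):
        for J, gJ in enumerate(groups):
            num = sum(sens[i] * cov[i][j] * sens[j] for i in gI for j in gJ)
            cov_broad[I][J] = num / (S_broad[I] * S_broad[J])
    return cov_broad, S_broad

def uncertainty(cov, sens):
    """Variance of the integral parameter via the sandwich rule: s^T C s."""
    return sum(sens[i] * cov[i][j] * sens[j]
               for i in range(len(sens)) for j in range(len(sens)))

# 4 fine groups collapsed into 2 broad groups (made-up data)
cov_fine = [[0.04, 0.01, 0.00, 0.00],
            [0.01, 0.09, 0.02, 0.00],
            [0.00, 0.02, 0.16, 0.03],
            [0.00, 0.00, 0.03, 0.25]]
sens_fine = [0.5, 1.2, -0.8, 0.3]
cov_b, sens_b = collapse_covariance(cov_fine, sens_fine, [[0, 1], [2, 3]])
```

By construction, evaluating the sandwich rule with the collapsed covariance and the summed broad-group sensitivities returns exactly the fine-group uncertainty, which is the conservation property the abstract describes.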

  19. A feature extraction algorithm based on corner and spots in self-driving vehicles

    Directory of Open Access Journals (Sweden)

    Yupeng FENG

    2017-06-01

Full Text Available To solve the poor real-time performance of visual odometry on embedded systems with limited computing resources, an image matching method based on Harris and SIFT is proposed, namely the Harris-SIFT algorithm. Following a review of the SIFT algorithm, the principle of the Harris-SIFT algorithm is presented. First, the Harris algorithm is used to extract corners of the image as candidate feature points, and scale-invariant feature transform (SIFT) features are then extracted from those candidate points. Finally, the algorithm is simulated in Matlab on an example, and its complexity and other performance characteristics are analyzed. The experimental results show that the proposed method reduces computational complexity and improves the speed of feature extraction. The Harris-SIFT algorithm can be used in real-time visual odometry systems, supporting wider application of visual odometry in embedded navigation systems.
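The corner-candidate stage can be sketched with a pure-Python Harris response; the window size, the constant k, and the synthetic test image are typical textbook choices, not values from the paper (a real system would use an optimized image library):

```python
# Harris corner response R = det(M) - k * trace(M)^2, where M is the
# structure tensor of image gradients summed over a local window.

def harris_response(img, k=0.04):
    h, w = len(img), len(img[0])
    # Central-difference gradients (clamped at the borders)
    Ix = [[(img[y][min(x + 1, w - 1)] - img[y][max(x - 1, 0)]) / 2.0
           for x in range(w)] for y in range(h)]
    Iy = [[(img[min(y + 1, h - 1)][x] - img[max(y - 1, 0)][x]) / 2.0
           for x in range(w)] for y in range(h)]
    R = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # Structure tensor summed over a 3x3 window
            Sxx = Sxy = Syy = 0.0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    ix, iy = Ix[y + dy][x + dx], Iy[y + dy][x + dx]
                    Sxx += ix * ix
                    Sxy += ix * iy
                    Syy += iy * iy
            det = Sxx * Syy - Sxy * Sxy
            trace = Sxx + Syy
            R[y][x] = det - k * trace * trace
    return R

# Synthetic 8x8 image: a bright square in the lower-right block, so
# pixel (4, 4) is a corner, (6, 4) lies on a straight edge.
img = [[255.0 if y >= 4 and x >= 4 else 0.0 for x in range(8)] for y in range(8)]
R = harris_response(img)
```

The response is positive where both gradient directions are strong (corners), negative along one-directional edges, and zero in flat regions, which is what makes it a cheap pre-filter before computing the more expensive SIFT descriptors.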

  20. Practical boundary surveying legal and technical principles

    CERN Document Server

    Gay, Paul

    2015-01-01

    This guide to boundary surveying provides landowners, land surveyors, students and others with the necessary foundation to understand boundary surveying techniques and the common legal issues that govern boundary establishment.  Boundary surveying is sometimes mistakenly considered a strictly technical discipline with simple and straightforward technical solutions.  In reality, boundary establishment is often a difficult and complex matter, requiring years of experience and a thorough understanding of boundary law.  This book helps readers to understand the challenges often encountered by boundary surveyors and some of the available solutions. Using only simple and logically explained mathematics, the principles and practice of boundary surveying are demystified for those without prior experience, and the focused coverage of pivotal issues such as easements and setting lot corners will aid even licensed practitioners in untangling thorny cases. Practical advice on using both basic and advanced instruments ...

  1. Modeling Algorithms in SystemC and ACL2

    Directory of Open Access Journals (Sweden)

    John W. O'Leary

    2014-06-01

    Full Text Available We describe the formal language MASC, based on a subset of SystemC and intended for modeling algorithms to be implemented in hardware. By means of a special-purpose parser, an algorithm coded in SystemC is converted to a MASC model for the purpose of documentation, which in turn is translated to ACL2 for formal verification. The parser also generates a SystemC variant that is suitable as input to a high-level synthesis tool. As an illustration of this methodology, we describe a proof of correctness of a simple 32-bit radix-4 multiplier.
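As a sketch of the kind of arithmetic algorithm this flow targets, here is a generic radix-4 (modified Booth) multiplication routine; it is a textbook formulation for illustration only, not the verified SystemC design from the paper:

```python
# Radix-4 Booth multiplication: scan the multiplier two bits per step
# (with one bit of overlap) and accumulate partial products of
# 0, +/-a, or +/-2a, halving the number of additions versus radix-2.

def booth_radix4_multiply(a: int, b: int, bits: int = 32) -> int:
    """Multiply a by the signed `bits`-wide integer b via Booth recoding."""
    mask = (1 << bits) - 1
    # Two's-complement encode the multiplier and append an implicit 0 LSB
    m = (b & mask) << 1
    acc = 0
    for i in range(0, bits, 2):
        triplet = (m >> i) & 0b111
        # Booth recoding: bits (b_{i+1}, b_i, b_{i-1}) -> multiple of a
        multiple = {0b000: 0, 0b001: 1, 0b010: 1, 0b011: 2,
                    0b100: -2, 0b101: -1, 0b110: -1, 0b111: 0}[triplet]
        acc += (multiple * a) << i
    return acc
```

Each triplet encodes the value -2*b_{i+1} + b_i + b_{i-1}, so summing the shifted partial products reconstructs the exact signed product; a hardware datapath would additionally truncate the accumulator to the output width.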

  2. Fully multidimensional flux-corrected transport algorithms for fluids

    International Nuclear Information System (INIS)

    Zalesak, S.T.

    1979-01-01

The theory of flux-corrected transport (FCT) developed by Boris and Book is placed in a simple, generalized format, and a new algorithm for implementing the critical flux-limiting stage in multidimensions without resort to time splitting is presented. The new flux-limiting algorithm allows the use of FCT techniques in multidimensional fluid problems for which time splitting would produce unacceptable numerical results, such as those involving incompressible or nearly incompressible flow fields. The 'clipping' problem associated with the original one-dimensional flux limiter is also eliminated or alleviated. Test results and applications to a two-dimensional fluid plasma problem are presented.
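The FCT recipe can be illustrated with a one-dimensional sketch: a diffusive low-order (upwind) flux, a high-order (Lax-Wendroff) flux, and a limiter of Zalesak's form applied to the antidiffusive difference. The grid, profile, and Courant number below are illustrative, and the 1D setting of course omits the multidimensional aspects that are the paper's contribution:

```python
# One FCT time step for linear advection u_t + v u_x = 0 on a periodic
# grid with v > 0. All fluxes are pre-scaled by dt/dx; face i sits
# between cells i and i+1.

def fct_step(u, c):
    """Advance u by one step; c = v*dt/dx is the Courant number (0 < c <= 1)."""
    n = len(u)
    ip = lambda i: (i + 1) % n
    fl = [c * u[i] for i in range(n)]                       # low order: upwind
    fh = [c * 0.5 * (u[i] + u[ip(i)]) - 0.5 * c * c * (u[ip(i)] - u[i])
          for i in range(n)]                                # high order: Lax-Wendroff
    a = [fh[i] - fl[i] for i in range(n)]                   # antidiffusive fluxes
    # Transported-diffused (low-order) solution
    utd = [u[i] - (fl[i] - fl[i - 1]) for i in range(n)]
    # Allowed extrema from the local neighbourhood
    umax = [max(u[i - 1], u[i], u[ip(i)], utd[i - 1], utd[i], utd[ip(i)])
            for i in range(n)]
    umin = [min(u[i - 1], u[i], u[ip(i)], utd[i - 1], utd[i], utd[ip(i)])
            for i in range(n)]
    rp, rm = [0.0] * n, [0.0] * n
    for i in range(n):
        pp = max(0.0, a[i - 1]) - min(0.0, a[i])   # total antidiffusive inflow
        pm = max(0.0, a[i]) - min(0.0, a[i - 1])   # total antidiffusive outflow
        rp[i] = min(1.0, (umax[i] - utd[i]) / pp) if pp > 0 else 0.0
        rm[i] = min(1.0, (utd[i] - umin[i]) / pm) if pm > 0 else 0.0
    # Limit each face flux by the stricter of its donor/receiver cells
    cc = [min(rp[ip(i)], rm[i]) if a[i] >= 0 else min(rp[i], rm[ip(i)])
          for i in range(n)]
    return [utd[i] - (cc[i] * a[i] - cc[i - 1] * a[i - 1]) for i in range(n)]

u0 = [1.0 if 4 <= i < 12 else 0.0 for i in range(32)]  # step profile
u1 = fct_step(u0, 0.5)
```

Because the limited antidiffusive fluxes can never push a cell beyond the local bounds umin/umax, the advected step stays free of new over- and undershoots, and the flux form keeps the scheme exactly conservative.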

  3. Effect of threshold quantization in opportunistic splitting algorithm

    KAUST Repository

    Nam, Haewoon

    2011-12-01

    This paper discusses algorithms to find the optimal threshold and also investigates the impact of threshold quantization on the scheduling outage performance of the opportunistic splitting scheduling algorithm. Since this algorithm aims at finding the user with the highest channel quality within the minimal number of mini-slots by adjusting the threshold every mini-slot, optimizing the threshold is of paramount importance. Hence, in this paper we first discuss how to compute the optimal threshold along with two tight approximations for the optimal threshold. Closed-form expressions are provided for those approximations for simple calculations. Then, we consider linear quantization of the threshold to take the limited number of bits for signaling messages in practical systems into consideration. Due to the limited granularity for the quantized threshold value, an irreducible scheduling outage floor is observed. The numerical results show that the two approximations offer lower scheduling outage probability floors compared to the conventional algorithm when the threshold is quantized. © 2006 IEEE.
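The interplay between threshold quantization and the splitting procedure can be sketched as follows; the bisection-style threshold update and all parameter values are simplifications for illustration, not the paper's optimal threshold sequence:

```python
# Opportunistic splitting with a linearly quantized threshold: each
# mini-slot, users whose channel gain exceeds the threshold reply; the
# threshold is lowered on silence and raised on collision until exactly
# one user replies, or the quantization granularity is exhausted.

def quantize(x, lo, hi, bits):
    """Linear quantization of a threshold onto 2**bits levels in [lo, hi]."""
    levels = (1 << bits) - 1
    step = (hi - lo) / levels
    return lo + round((x - lo) / step) * step

def split_schedule(gains, bits=8, lo=0.0, hi=2.0, max_slots=20):
    """Return the index of the selected user, or None on scheduling outage."""
    low, high = lo, hi
    for _ in range(max_slots):
        th = quantize((low + high) / 2.0, lo, hi, bits)
        above = [i for i, g in enumerate(gains) if g > th]
        if len(above) == 1:          # exactly one reply: user found
            return above[0]
        if len(above) == 0:          # no reply: lower the threshold
            high = th
        else:                        # collision: raise the threshold
            low = th
        if high - low < (hi - lo) / ((1 << bits) - 1):
            break                    # granularity exhausted
    return None                      # outage floor caused by quantization

gains = [0.31, 1.27, 0.88, 0.45, 1.05]
winner = split_schedule(gains)
```

Reducing `bits` makes neighbouring users indistinguishable to the quantized threshold, which is exactly the mechanism behind the irreducible scheduling-outage floor observed in the paper.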

  4. A finite state, finite memory minimum principle, part 2. [a discussion of game theory, signaling, stochastic processes, and control theory

    Science.gov (United States)

    Sandell, N. R., Jr.; Athans, M.

    1975-01-01

The development of the theory of the finite-state, finite-memory (FSFM) stochastic control problem is discussed. The sufficiency of the FSFM minimum principle (which is in general only a necessary condition) was investigated. By introducing the notion of a signaling strategy as defined in the literature on games, conditions under which the FSFM minimum principle is sufficient were determined. This result explicitly interconnects the information structure of the FSFM problem with its optimality conditions. The min-H algorithm for the FSFM problem was studied. It is demonstrated that a version of the algorithm always converges to a particular type of local minimum termed a person-by-person extremal.

  5. A Supervised Classification Algorithm for Note Onset Detection

    Directory of Open Access Journals (Sweden)

    Douglas Eck

    2007-01-01

Full Text Available This paper presents a novel approach to detecting onsets in music audio files. We use a supervised learning algorithm to classify spectrogram frames extracted from digital audio as being onsets or non-onsets. Frames classified as onsets are then treated with a simple peak-picking algorithm based on a moving average. We present two versions of this approach. The first version uses a single neural network classifier. The second version combines the predictions of several networks trained using different hyperparameters. We describe the details of the algorithm and summarize the performance of both variants on several datasets. We also examine our choice of hyperparameters by describing results of cross-validation experiments done on a custom dataset. We conclude that a supervised learning approach to note onset detection performs well and warrants further investigation.
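The moving-average peak-picking stage can be sketched as below; the window size, threshold offset, and synthetic activation curve are illustrative, not the paper's tuned hyperparameters:

```python
# Peak picking over a frame-wise onset activation: keep frames that are
# local maxima and exceed the local moving average by a margin delta.

def pick_peaks(activation, window=3, delta=0.1):
    """Return indices that are local maxima exceeding the moving average
    of the surrounding 2*window+1 frames by at least delta."""
    onsets = []
    n = len(activation)
    for i in range(n):
        lo, hi = max(0, i - window), min(n, i + window + 1)
        avg = sum(activation[lo:hi]) / (hi - lo)
        if activation[i] >= avg + delta and activation[i] == max(activation[lo:hi]):
            onsets.append(i)
    return onsets

# Synthetic activation curve with two clear peaks at frames 3 and 9
act = [0.0, 0.1, 0.2, 0.9, 0.2, 0.1, 0.0, 0.1, 0.3, 0.8, 0.2, 0.0]
onsets = pick_peaks(act)
```

Comparing against a local average rather than a fixed threshold makes the picker robust to slow drifts in the classifier's output level between quiet and loud passages.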

  6. Mi Segundo Libro de Maquinas Simples: Las Palancas. Escuela Intermedia Grados 7, 8 y 9 (My Second Book of Simple Machines: Levers. Intermediate School Grades 7, 8, and 9).

    Science.gov (United States)

    Alvarado, Patricio R.; Montalvo, Luis

    This is the second book in a five-book physical science series on simple machines. The books are designed for Spanish-speaking junior high school students. By suggesting experiments and posing questions concerning drawings in the book which illustrate the scientific principles, this book explains the workings of three types of levers. Resistance…

  7. Uncertainty principles for inverse source problems for electromagnetic and elastic waves

    Science.gov (United States)

    Griesmaier, Roland; Sylvester, John

    2018-06-01

    In isotropic homogeneous media, far fields of time-harmonic electromagnetic waves radiated by compactly supported volume currents, and elastic waves radiated by compactly supported body force densities can be modelled in very similar fashions. Both are projected restricted Fourier transforms of vector-valued source terms. In this work we generalize two types of uncertainty principles recently developed for far fields of scalar-valued time-harmonic waves in Griesmaier and Sylvester (2017 SIAM J. Appl. Math. 77 154–80) to this vector-valued setting. These uncertainty principles yield stability criteria and algorithms for splitting far fields radiated by collections of well-separated sources into the far fields radiated by individual source components, and for the restoration of missing data segments. We discuss proper regularization strategies for these inverse problems, provide stability estimates based on the new uncertainty principles, and comment on reconstruction schemes. A numerical example illustrates our theoretical findings.

  8. Algorithm for prevention of molten steel sticking onto mold in continous casting process

    Directory of Open Access Journals (Sweden)

    Blažević, D.

    2008-01-01

Full Text Available In continuous casting steel production, a significant reduction of losses – in terms of scrap material, time and money – can be achieved by developing an appropriate algorithm for the prevention of molten steel sticking onto the mould. The logic of such an algorithm should be simple and manageable to ensure its practical implementation on a computer system using thermal sensors. This suggests that both the algorithm and the automated data collection can be implemented by means of application software. Despite its simplicity, the algorithm should accurately trace the physical phenomena in molten steel.
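One widely described sticker signature in mould thermocouple traces is a temperature rise in the upper sensor row that propagates downward while the upper row cools. The rule below is an illustrative detector of that pattern only; the thresholds, the two-row layout, and the synthetic traces are all assumptions, not the paper's algorithm:

```python
# Rule-based sticker alarm over synchronized temperature traces from an
# upper and a lower thermocouple row (one sample per time step).

def sticker_alarm(upper, lower, rise=3.0, lag=2):
    """Return the first index showing the downward-propagating pattern
    (upper row rose `lag` steps ago, is now cooling, lower row rising),
    or None if no such pattern occurs."""
    for t in range(lag + 1, len(upper) - 1):
        upper_rose = upper[t - lag] - upper[t - lag - 1] > rise
        upper_falling = upper[t + 1] < upper[t]
        lower_rising = lower[t + 1] - lower[t] > rise
        if upper_rose and upper_falling and lower_rising:
            return t
    return None

# Synthetic traces: the upper row spikes, then the warm spot reaches
# the lower row two steps later.
upper = [100.0, 100.0, 105.0, 110.0, 109.0, 107.0, 105.0, 104.0]
lower = [90.0, 90.0, 90.0, 90.0, 94.0, 99.0, 98.0, 97.0]
alarm_at = sticker_alarm(upper, lower)
```

Keeping the decision logic to a handful of comparisons per time step is what makes this kind of rule practical for the on-line, sensor-driven implementation the abstract calls for.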

  9. Practical Constraint K-Segment Principal Curve Algorithms for Generating Railway GPS Digital Map

    Directory of Open Access Journals (Sweden)

    Dewang Chen

    2013-01-01

Full Text Available In order to obtain a decent trade-off between low-cost, low-accuracy Global Positioning System (GPS) receivers and the requirements of high-precision digital maps for modern railways, using the concept of constraint K-segment principal curves (CKPCS) and expert knowledge on railways, we propose three practical CKPCS generation algorithms with reduced computational complexity, and thereafter more suitable for engineering applications. The three algorithms are named ALLopt, MPMopt, and DCopt, in which ALLopt exploits global optimization while MPMopt and DCopt apply local optimization with different initial solutions. We compare the three practical algorithms according to their performance on average projection error, stability, and fitness for simple and complex simulated trajectories with noisy data. It is found that ALLopt works well only for simple curves and small data sets. The other two algorithms work better for complex curves and large data sets. Moreover, MPMopt runs faster than DCopt, but DCopt works better for some curves with cross points. The three algorithms are also applied to generating GPS digital maps for two railway GPS data sets measured on the Qinghai-Tibet Railway (QTR). Results similar to those on the synthetic data are obtained. Because the trajectory of a railway is relatively simple and straight, we conclude that MPMopt works best when both computation speed and the quality of the generated CKPCS are considered. MPMopt can be used to obtain a set of key points representing a large amount of GPS data; hence, it can greatly reduce data storage requirements and increase positioning speed for real-time digital map applications.

  10. Parallel algorithms on the ASTRA SIMD machine

    International Nuclear Information System (INIS)

    Odor, G.; Rohrbach, F.; Vesztergombi, G.; Varga, G.; Tatrai, F.

    1996-01-01

In view of the tremendous jump in computing power of modern RISC processors, interest in parallel computing seems to be thinning out. Why use a complicated system of parallel processors if the problem can be solved by a single powerful micro-chip? It is a general law, however, that exponential growth will always end in some kind of saturation, and then parallelism will again become a hot topic. We try to prepare ourselves for this eventuality. The MPPC project started in 1990 in the heyday of parallelism and produced four ASTRA machines (presented at CHEP'92) with 4k processors (expandable to 16k) based on yesterday's chip technology (chip presented at CHEP'91). These machines now provide excellent test-beds for algorithmic developments in a complete, real environment. We are developing, for example, fast pattern-recognition algorithms which could be used in high-energy physics experiments at the LHC (planned to be operational after 2004 at CERN) for triggering and data reduction. The basic feature of our ASP (Associative String Processor) approach is to use extremely simple (thus very cheap) processor elements, but in huge quantities (up to millions of processors) connected together by a very simple string-like communication chain. In this paper we present powerful algorithms based on this architecture, indicating the performance perspectives if the hardware quality reaches present or even future technology levels. (author)

  11. Trans-algorithmic nature of learning in biological systems.

    Science.gov (United States)

    Shimansky, Yury P

    2018-05-02

Learning ability is a vitally important, distinctive property of biological systems, which provides dynamic stability in non-stationary environments. Although several different types of learning have been successfully modeled using a universal computer, in general, learning cannot be described by an algorithm. In other words, an algorithmic approach to describing the functioning of biological systems is not sufficient for an adequate grasp of what life is. Since biosystems are parts of the physical world, one might hope that adding some physical mechanisms and principles to the concept of algorithm could provide extra possibilities for describing learning in its full generality. However, a straightforward approach to that through so-called physical hypercomputation has so far not been successful. Here an alternative approach is proposed. Biosystems are described as achieving enumeration of possible physical compositions through random incremental modifications inflicted on them by active operating resources (AORs) in the environment. Biosystems learn through algorithmic regulation of the intensity of the above modifications according to a specific optimality criterion. From the perspective of external observers, biosystems move in the space of different algorithms driven by random modifications imposed by the environmental AORs. A particular algorithm is only a snapshot of that motion, while the motion itself is essentially trans-algorithmic. In this conceptual framework, death of unfit members of a population, for example, is viewed as a trans-algorithmic modification made in the population as a biosystem by environmental AORs. Numerous examples of AOR utilization in biosystems of different complexity, from viruses to multicellular organisms, are provided.

  12. A Faster Algorithm to Recognize Even-Hole-Free Graphs

    OpenAIRE

    Chang, Hsien-Chih; Lu, Hsueh-I

    2013-01-01

We study the problem of determining whether an $n$-node graph $G$ has an even hole, i.e., an induced simple cycle consisting of an even number of nodes. Conforti, Cornuéjols, Kapoor, and Vušković gave the first polynomial-time algorithm for the problem, which runs in $O(n^{40})$ time. Later, Chudnovsky, Kawarabayashi, and Seymour reduced the running time to $O(n^{31})$. The best previously known algorithm for the problem, due to da Silva and Vušković, runs in $O(n^{19})$ time. I...

  13. Annealed Demon Algorithms Solving the Environmental / Economic Dispatch Problem

    Directory of Open Access Journals (Sweden)

    Aristidis VLACHOS

    2013-06-01

Full Text Available This paper presents an efficient and reliable Annealed Demon (AD) algorithm for the Environmental/Economic Dispatch (EED) problem. The EED problem is a multi-objective non-linear optimization problem with constraints, and is one of the fundamental issues in power system operation. The generation system comprises thermal generators whose emissions involve sulphur oxides (SO2) and nitrogen oxides (NOx). The aim is to minimize the total fuel cost of the system while controlling emissions. The proposed AD algorithm is applied to the EED of a simple power system.
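A demon algorithm accepts a move whenever its cost increase can be paid from a "demon" energy budget, which banks the gains of downhill moves; annealing drains that budget over time. The sketch below applies this idea to a toy two-generator dispatch with a weighted cost-plus-emission objective; the cost and emission coefficients, demand, and annealing schedule are all invented, not the paper's test system:

```python
import random

# Annealed demon search on a toy environmental/economic dispatch with
# two generators and a single power-balance constraint P1 + P2 = D.

def total_cost(p1, demand=400.0, w=0.5):
    """Weighted fuel cost + emission for quadratic generator curves."""
    p2 = demand - p1                      # power balance constraint
    fuel = (0.004 * p1**2 + 5.3 * p1) + (0.006 * p2**2 + 5.5 * p2)
    emission = (0.0012 * p1**2 + 0.1 * p1) + (0.0009 * p2**2 + 0.12 * p2)
    return w * fuel + (1 - w) * emission

def annealed_demon(iters=2000, demon0=50.0, anneal=0.999, pmin=50.0, pmax=350.0):
    random.seed(7)
    p = random.uniform(pmin, pmax)
    demon = demon0
    cost = total_cost(p)
    best, best_cost = p, cost
    for _ in range(iters):
        q = min(pmax, max(pmin, p + random.uniform(-5.0, 5.0)))
        new_cost = total_cost(q)
        dE = new_cost - cost
        if dE <= demon:                   # demon pays for uphill moves...
            demon -= dE                   # ...and banks downhill gains
            p, cost = q, new_cost
            if cost < best_cost:
                best, best_cost = p, cost
        demon *= anneal                   # annealing: drain the demon
    return best, best_cost

best_p, best_c = annealed_demon()
```

Unlike simulated annealing, acceptance here is deterministic given the demon's energy, so the only tuning knobs are the initial demon energy and the annealing rate.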

  14. Le Chatelier's Principle in Sensation and Perception: Fractal-Like Enfolding at Different Scales

    Science.gov (United States)

    Norwich, Kenneth H.

    2010-01-01

Le Chatelier's principle asserts that a disturbance, when applied to a resting system, may drive the system away from its equilibrium state, but will invoke a countervailing influence that will counteract the effect of the disturbance. When applied to the field of sensation and perception, a generalized stimulus will displace the system from equilibrium, and a generalized adaptation process will serve as the countervailing influence tending to reduce the impact of the stimulus. The principle applies at all levels, from the behavioral to the neural, the larger enfolding the smaller in fractal-like form. Le Chatelier's principle, so applied, leads to the unification of many concepts in sensory science. Ideas as diverse as sensory adaptation, reflex arcs, and simple deductive logic can be brought under the umbrella of a single orienting principle. Beyond unification, this principle allows us to approach many questions in pathophysiology from a different perspective. For example, we find new direction toward the reduction of phantom-limb pain and possibly of vertigo. PMID:21423359

  15. Alanna Conners and the Origins of Principled Data Analysis

    Science.gov (United States)

    Scargle, Jeffrey D.

    2013-01-01

Alanna was one of the most important pioneers in the development not just of sophisticated algorithms for analyzing astronomical data but, more importantly, of an overall viewpoint emphasizing the use of statistically sound principles in place of the blind application of cook-book recipes or black boxes. I will outline some of the threads of this viewpoint, emphasizing time-series data, with a focus on the importance of these developments for the Age of Digital Astronomy that we are entering.

  16. System principles, mathematical models and methods to ensure high reliability of safety systems

    Science.gov (United States)

    Zaslavskyi, V.

    2017-04-01

Modern safety and security systems are composed of a large number of components designed for detection, localization, tracking, collection, and processing of information from monitoring, telemetry, and control systems. They are required to be highly reliable in order to correctly perform data aggregation, processing, and analysis for subsequent decision-making support. During the design and construction phases of such systems, various types of components (elements, devices, and subsystems) are considered and used to ensure highly reliable signal detection, noise isolation, and reduction of erroneous commands. When generating design solutions for highly reliable systems, a number of restrictions and conditions, such as the available component types and various constraints on resources, should be considered. Different component types perform identical functions; however, they are implemented using diverse principles and approaches and have distinct technical and economic indicators such as cost or power consumption. The systematic use of different component types increases the probability of performing the tasks and eliminates common-cause failures. We consider the type-variety principle as an engineering principle of system analysis, together with mathematical models based on this principle and algorithms for solving the optimization problems arising in the design of highly reliable safety and security systems. The mathematical models are formalized as a class of two-level discrete optimization problems of large dimension. The proposed approach, mathematical models, and algorithms can be used to solve problems of optimal redundancy on the basis of a variety of methods and control devices for fault and defect detection in technical systems, telecommunication networks, and energy systems.

  17. Study on solitary word based on HMM model and Baum-Welch algorithm

    Directory of Open Access Journals (Sweden)

    Junxia CHEN

Full Text Available This paper introduces the principle of the Hidden Markov Model (HMM), which describes a Markov process with unknown parameters and is a probability model for the statistical properties of a random process. On this basis, a solitary-word detection experiment based on the HMM is designed. By optimizing the experimental model and using the Baum-Welch algorithm to solve the training problem of the HMM, an estimate of the model parameter set λ is found, which is mathematically equivalent to other linear prediction coefficients. The experiment reduces unnecessary HMM training and, at the same time, reduces the complexity of the algorithm. To test the effectiveness of the Baum-Welch algorithm, the experimental data were simulated; the results show that the algorithm is effective.
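The training step described above can be sketched as a compact Baum-Welch re-estimation for a two-state discrete HMM; the model sizes, probabilities, and observation sequence are invented, and production code would add numerical scaling for long sequences:

```python
# Baum-Welch (EM) re-estimation for a discrete HMM with parameters
# lambda = (A, B, pi): transition matrix, emission matrix, initial dist.

def forward(A, B, pi, obs):
    """Forward algorithm; returns alpha and the sequence likelihood P(O|lambda)."""
    n = len(pi)
    alpha = [[pi[i] * B[i][obs[0]] for i in range(n)]]
    for o in obs[1:]:
        alpha.append([sum(alpha[-1][j] * A[j][i] for j in range(n)) * B[i][o]
                      for i in range(n)])
    return alpha, sum(alpha[-1])

def backward(A, B, obs, n):
    beta = [[1.0] * n]
    for o in reversed(obs[1:]):
        beta.insert(0, [sum(A[i][j] * B[j][o] * beta[0][j] for j in range(n))
                        for i in range(n)])
    return beta

def baum_welch_step(A, B, pi, obs):
    """One EM re-estimation of (A, B, pi); the likelihood never decreases."""
    n, m, T = len(pi), len(B[0]), len(obs)
    alpha, like = forward(A, B, pi, obs)
    beta = backward(A, B, obs, n)
    gamma = [[alpha[t][i] * beta[t][i] / like for i in range(n)] for t in range(T)]
    xi = [[[alpha[t][i] * A[i][j] * B[j][obs[t + 1]] * beta[t + 1][j] / like
            for j in range(n)] for i in range(n)] for t in range(T - 1)]
    new_pi = gamma[0][:]
    new_A = [[sum(xi[t][i][j] for t in range(T - 1)) /
              sum(gamma[t][i] for t in range(T - 1))
              for j in range(n)] for i in range(n)]
    new_B = [[sum(gamma[t][i] for t in range(T) if obs[t] == k) /
              sum(gamma[t][i] for t in range(T))
              for k in range(m)] for i in range(n)]
    return new_A, new_B, new_pi

A = [[0.7, 0.3], [0.4, 0.6]]
B = [[0.6, 0.4], [0.2, 0.8]]
pi = [0.5, 0.5]
obs = [0, 1, 1, 0, 1, 1, 1, 0]
_, before = forward(A, B, pi, obs)
A, B, pi = baum_welch_step(A, B, pi, obs)
_, after = forward(A, B, pi, obs)
```

Because each step is an EM update, the sequence likelihood is guaranteed not to decrease from one iteration to the next, which is what makes early stopping a safe way to cut unnecessary training.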

  18. On the dosimetric behaviour of photon dose calculation algorithms in the presence of simple geometric heterogeneities: comparison with Monte Carlo calculations

    DEFF Research Database (Denmark)

    Fogliata, Antonella; Vanetti, Eugenio; Albers, Dirk

    2007-01-01

    A comparative study was performed to reveal differences and relative figures of merit of seven different calculation algorithms for photon beams when applied to inhomogeneous media. The following algorithms were investigated: Varian Eclipse: the anisotropic analytical algorithm, and the pencil beam...... a systematic deficiency in managing the presence of heterogeneous media. In contrast, complicated patterns were observed for the advanced algorithms with significant discrepancies observed between algorithms in the lighter materials (rho = 0.035 g cm(-3)), enhanced for the most energetic beam. For denser...

  19. Crane Double Cycling in Container Ports: Algorithms, Evaluation, and Planning

    OpenAIRE

    Goodchild, Anne Victoria

    2005-01-01

    Loading ships as they are unloaded (double-cycling) can improve the efficiency of a quay crane and container port. This dissertation describes the double-cycling problem, and presents solution algorithms and simple formulae to estimate benefits. In Chapter 2 we focus on reducing the number of operations necessary to turn around a ship. First an intuitive lower bound is developed. We then present a greedy algorithm that was developed based on the physical properties of the problem and yields a...

  20. A novel approach in recognizing magnetic material with simplified algorithm

    KAUST Repository

    Talukdar, Abdul Hafiz Ibne

    2011-04-01

    In this article a cost-effective and simple system (circuit and algorithm) which allows recognizing different kinds of films by their magneto-field conductive properties is demonstrated. The studied signals are generated by a proposed circuit. This signal was further analyzed (recognized) in frequency domain creating the Fourier frequency spectrum which is easily used to detect the response of magnetic sample. The novel algorithm in detecting magnetic field is presented here with both simulation and experimental results. © 2011 IEEE.