WorldWideScience

Sample records for simple algorithmic principles

  1. What to Do When K-Means Clustering Fails: A Simple yet Principled Alternative Algorithm.

    Science.gov (United States)

    Raykov, Yordan P; Boukouvalas, Alexis; Baig, Fahd; Little, Max A

    The K-means algorithm is one of the most popular clustering algorithms in current use, as it is relatively fast yet simple to understand and deploy in practice. Nevertheless, its use entails certain restrictive assumptions about the data, the negative consequences of which are not always immediately apparent, as we demonstrate. While more flexible algorithms have been developed, their widespread use has been hindered by their computational and technical complexity. Motivated by these considerations, we present a flexible alternative to K-means that relaxes most of the assumptions whilst remaining almost as fast and simple. This novel algorithm, which we call MAP-DP (maximum a-posteriori Dirichlet process mixtures), is statistically rigorous, as it is based on nonparametric Bayesian Dirichlet process mixture modeling. This approach allows us to overcome most of the limitations imposed by K-means. The number of clusters K is estimated from the data instead of being fixed a priori as in K-means. In addition, while K-means is restricted to continuous data, the MAP-DP framework can be applied to many kinds of data, for example binary, count or ordinal data. Also, it can efficiently separate outliers from the data. This additional flexibility does not incur a significant computational overhead compared to K-means, with MAP-DP convergence typically achieved on the order of seconds for many practical problems. Finally, in contrast to K-means, since the algorithm is based on an underlying statistical model, the MAP-DP framework can deal with missing data and enables model testing such as cross-validation in a principled way. We demonstrate the simplicity and effectiveness of this algorithm on the health informatics problem of clinical sub-typing in a cluster of diseases known as parkinsonism.
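
    To make the contrast concrete, the sketch below implements DP-means (Kulis and Jordan's hard-clustering relative of Dirichlet process mixtures), in which a point farther than a penalty threshold from every center opens a new cluster, so the number of clusters emerges from the data. This is an illustrative stand-in, not the authors' MAP-DP; the penalty lam and the iteration count are assumed tuning choices.

        import numpy as np

        def dp_means(X, lam, n_iter=50):
            # Toy DP-means clustering: a point whose squared distance to every
            # center exceeds lam opens a new cluster, so K is inferred from the
            # data rather than fixed a priori. NOT the authors' MAP-DP, only an
            # illustration of the shared idea.
            centers = [X.mean(axis=0)]            # start with one global cluster
            for _ in range(n_iter):
                labels = np.empty(len(X), dtype=int)
                for i, x in enumerate(X):
                    d2 = [np.sum((x - c) ** 2) for c in centers]
                    if min(d2) > lam:
                        centers.append(x.copy())  # open a new cluster
                        labels[i] = len(centers) - 1
                    else:
                        labels[i] = int(np.argmin(d2))
                centers = [X[labels == k].mean(axis=0)
                           for k in range(len(centers)) if np.any(labels == k)]
            # final assignment against the updated centers
            d2 = np.array([[np.sum((x - c) ** 2) for c in centers] for x in X])
            return np.array(centers), d2.argmin(axis=1)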

  2. Algorithmic Principles of Mathematical Programming

    NARCIS (Netherlands)

    Faigle, Ulrich; Kern, Walter; Still, Georg

    2002-01-01

    Algorithmic Principles of Mathematical Programming investigates the mathematical structures and principles underlying the design of efficient algorithms for optimization problems. Recent advances in algorithmic theory have shown that the traditionally separate areas of discrete optimization, linear

  3. Graphics and visualization principles & algorithms

    CERN Document Server

    Theoharis, T; Platis, Nikolaos; Patrikalakis, Nicholas M

    2008-01-01

    Computer and engineering collections strong in applied graphics and analysis of visual data via computer will find Graphics & Visualization: Principles and Algorithms makes an excellent classroom text as well as supplemental reading. It integrates coverage of computer graphics and other visualization topics, from shadow generation and particle tracing to spatial subdivision and vector data visualization, and it provides a thorough review of literature from multiple experts, making for a comprehensive review essential to any advanced computer study. -California Bookwatch

  4. Simple Activity Demonstrates Wind Energy Principles

    Science.gov (United States)

    Roman, Harry T.

    2012-01-01

    Wind energy is an exciting and clean energy option often described as the fastest-growing energy system on the planet. With some simple materials, teachers can easily demonstrate its key principles in their classroom. (Contains 1 figure and 2 tables.)

  5. [A simple algorithm for anemia].

    Science.gov (United States)

    Egyed, Miklós

    2014-03-09

    The author presents a novel algorithm for anaemia based on the erythrocyte haemoglobin content. The scheme is based on the aberrations of erythropoiesis and not on the pathophysiology of anaemia. The haemoglobin content of one erythrocyte is between 28 and 35 picograms. Any disturbance in haemoglobin synthesis can lead to a haemoglobin content of the erythrocyte lower than 28 picograms, which will lead to hypochromic anaemia. In contrast, disturbances of nucleic acid metabolism will result in a haemoglobin content greater than 36 picograms, and this will result in hyperchromic anaemia. Normochromic anaemia, characterised by a haemoglobin content of erythrocytes between 28 and 35 picograms, is the result of alterations in the proliferation of erythropoiesis. Based on these three categories of anaemia, a unique system can be constructed, which can be used as a model for basic laboratory investigations and work-up of anaemic patients.
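
    The abstract's three-way scheme reduces to a threshold rule on the haemoglobin content per erythrocyte (in picograms). A minimal sketch, using only the thresholds quoted above (the gap between 35 and 36 pg is left as in the original):

        def classify_anaemia_by_mch(mch_pg):
            # Threshold rule from the abstract: <28 pg hypochromic,
            # 28-35 pg normochromic, >36 pg hyperchromic.
            if mch_pg < 28:
                return "hypochromic: disturbed haemoglobin synthesis"
            if mch_pg <= 35:
                return "normochromic: altered erythropoietic proliferation"
            return "hyperchromic: disturbed nucleic acid metabolism"

        print(classify_anaemia_by_mch(24))   # -> hypochromic: ...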

  6. Simple Obstacle Avoidance Algorithm for Rehabilitation Robots

    NARCIS (Netherlands)

    Stuyt, Floran H.A.; Römer, GertWillem R.B.E.; Stuyt, Harry J.A.

    2007-01-01

    The efficiency of a rehabilitation robot is improved by offering record-and-replay operation of the robot. While automatically moving to a stored target (replay), collisions of the robot with obstacles in its workspace must be avoided. A simple, though effective, generic and deterministic algorithm ...

  7. Training nuclei detection algorithms with simple annotations

    Directory of Open Access Journals (Sweden)

    Henning Kost

    2017-01-01

    Background: Generating good training datasets is essential for machine learning-based nuclei detection methods. However, creating exhaustive nuclei contour annotations, to derive optimal training data from, is often infeasible. Methods: We compared different approaches for training nuclei detection methods solely based on nucleus center markers. Such markers contain less accurate information, especially with regard to nuclear boundaries, but can be produced much easier and in greater quantities. The approaches use different automated sample extraction methods to derive image positions and class labels from nucleus center markers. In addition, the approaches use different automated sample selection methods to improve the detection quality of the classification algorithm and reduce the run time of the training process. We evaluated the approaches based on a previously published generic nuclei detection algorithm and a set of Ki-67-stained breast cancer images. Results: A Voronoi tessellation-based sample extraction method produced the best performing training sets. However, subsampling of the extracted training samples was crucial. Even simple class balancing improved the detection quality considerably. The incorporation of active learning led to a further increase in detection quality. Conclusions: With appropriate sample extraction and selection methods, nuclei detection algorithms trained on the basis of simple center marker annotations can produce comparable quality to algorithms trained on conventionally created training sets.

  8. Genetic Algorithm for Solving Simple Mathematical Equality Problem

    OpenAIRE

    Hermawanto, Denny

    2013-01-01

    This paper explains the genetic algorithm for novices in the field. The basic philosophy of the genetic algorithm and its flowchart are described. A step-by-step numerical computation of the genetic algorithm for solving a simple mathematical equality problem is briefly explained.
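
    The abstract does not restate the equality itself; the example usually associated with this tutorial is finding non-negative integers a, b, c, d with a + 2b + 3c + 4d = 30, which the toy GA below adopts as an assumption. Selection, one-point crossover and mutation follow the standard textbook flow.

        import random

        TARGET = 30  # assumed example equality: a + 2b + 3c + 4d = 30

        def fitness(ch):
            a, b, c, d = ch
            return -abs(a + 2*b + 3*c + 4*d - TARGET)   # 0 means solved

        def evolve(pop_size=50, gens=200, p_mut=0.1):
            pop = [[random.randint(0, 30) for _ in range(4)]
                   for _ in range(pop_size)]
            for _ in range(gens):
                pop.sort(key=fitness, reverse=True)     # best first
                if fitness(pop[0]) == 0:
                    break
                parents = pop[:pop_size // 2]           # truncation selection
                children = []
                while len(children) < pop_size - len(parents):
                    p1, p2 = random.sample(parents, 2)
                    cut = random.randint(1, 3)          # one-point crossover
                    child = p1[:cut] + p2[cut:]
                    if random.random() < p_mut:         # mutation
                        child[random.randrange(4)] = random.randint(0, 30)
                    children.append(child)
                pop = parents + children
            return max(pop, key=fitness)

        print(evolve())   # e.g. a chromosome satisfying the equality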

  9. The Analysis of a Simple k-Means Clustering Algorithm

    National Research Council Canada - National Science Library

    Kanungo, T; Mount, D. M; Netanyahu, N. S; Piatko, C; Silverman, R; Wu, A. Y

    2000-01-01

    A popular heuristic for k-means clustering is Lloyd's algorithm. In this paper, we present a simple and efficient implementation of Lloyd's k-means clustering algorithm, which we call the filtering algorithm...
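
    For reference, a plain Lloyd's iteration is sketched below; the paper's filtering algorithm accelerates exactly this assignment/update loop with a kd-tree, a pruning step omitted here.

        import numpy as np

        def lloyd(X, k, n_iter=100, seed=0):
            # Plain Lloyd's k-means: alternate nearest-center assignment
            # and centroid update until the centers stop moving.
            rng = np.random.default_rng(seed)
            centers = X[rng.choice(len(X), k, replace=False)]
            labels = np.zeros(len(X), dtype=int)
            for _ in range(n_iter):
                # assignment step: nearest center for each point
                d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
                labels = d2.argmin(axis=1)
                # update step: move each center to the centroid of its points
                new = np.array([X[labels == j].mean(axis=0)
                                if np.any(labels == j) else centers[j]
                                for j in range(k)])
                if np.allclose(new, centers):
                    break
                centers = new
            return centers, labels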

  10. Simple algorithm in the management of fetal sacrococcygeal teratoma

    African Journals Online (AJOL)

    A developing fetus with SCT (sacrococcygeal teratoma) is prone to high-output cardiac failure, hydrops, placentomegaly and complications of delivery. Understanding a simple way of managing this condition is important in developing countries to prevent morbidities and mortalities. A simplified algorithm of ...

  11. Fully Consistent SIMPLE-Like Algorithms on Collocated Grids

    DEFF Research Database (Denmark)

    Kolmogorov, Dmitry; Shen, Wen Zhong; Sørensen, Niels N.

    2015-01-01

    To increase the convergence rate of SIMPLE-like algorithms on collocated grids, a compatibility condition between mass flux interpolation methods and SIMPLE-like algorithms is presented. Results of unsteady flow computations show that the SIMPLEC algorithm, when obeying the compatibility condition, may obtain up to 35% higher convergence rate as compared to the standard SIMPLEC algorithm. Two new interpolation methods, fully compatible with the SIMPLEC algorithm, are presented and compared with some existing interpolation methods, including the standard methods of Choi [9] and Shen et al. [8] ...

  12. Timescape: a simple space-time interpolation geostatistical Algorithm

    Science.gov (United States)

    Ciolfi, Marco; Chiocchini, Francesca; Gravichkova, Olga; Pisanelli, Andrea; Portarena, Silvia; Scartazza, Andrea; Brugnoli, Enrico; Lauteri, Marco

    2016-04-01

    Environmental sciences include both time and space variability in their datasets. Some established tools exist for both spatial interpolation and time series analysis alone, but mixing space and time variability calls for compromise: Researchers are often forced to choose which is the main source of variation, neglecting the other. We propose a simple algorithm, which can be used in many fields of Earth and environmental sciences when both time and space variability must be considered on equal grounds. The algorithm has already been implemented in the Java language and the software is currently available at https://sourceforge.net/projects/timescapeglobal/ (it is published under the GNU-GPL v3.0 Free Software License). The published version of the software, Timescape Global, is focused on continent- to Earth-wide spatial domains, using global longitude-latitude coordinates for sample localization. The companion Timescape Local software is currently under development and will be published with an open license as well; it will use projected coordinates for a local to regional space scale. The basic idea of the Timescape Algorithm consists of converting time into a sort of third spatial dimension, with the addition of some causal constraints, which drive the interpolation by including or excluding observations according to some user-defined rules. The algorithm is applicable, as a matter of principle, to anything that can be represented by a continuous variable (a scalar field, technically speaking). The input dataset should contain position, time and observed value of all samples. Ancillary data can be included in the interpolation as well. After the time-space conversion, Timescape basically follows the old-fashioned IDW (Inverse Distance Weighted) interpolation algorithm, although users have a wide choice of customization options that, at least partially, overcome some of the known issues of IDW. The three-dimensional model produced by the Timescape Algorithm can be ...
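
    The core idea, time mapped onto a third spatial axis followed by IDW, fits in a few lines. The scale factor c (distance per unit time) and the omission of the causal inclusion/exclusion rules are simplifications of this sketch, not features of the published software.

        import numpy as np

        def timescape_idw(samples, query_xy, query_t, c=1.0, power=2.0):
            # samples: array of rows (x, y, t, value). Map time onto a third
            # spatial axis with scale factor c, then do plain inverse-distance
            # weighting in the resulting 3-D space.
            xyz = np.column_stack([samples[:, 0], samples[:, 1], c * samples[:, 2]])
            q = np.array([query_xy[0], query_xy[1], c * query_t])
            d = np.linalg.norm(xyz - q, axis=1)
            if np.any(d == 0):                 # query coincides with a sample
                return float(samples[d == 0, 3][0])
            w = 1.0 / d ** power
            return float(np.sum(w * samples[:, 3]) / np.sum(w))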

  13. A simple algorithm for computing the smallest enclosing circle

    DEFF Research Database (Denmark)

    Skyum, Sven

    1991-01-01

    Presented is a simple O(n log n) algorithm for computing the smallest enclosing circle of a convex polygon. It can be easily extended to algorithms that compute the farthest- and the closest-point Voronoi diagram of a convex polygon within the same time bound.
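
    Skyum's O(n log n) method itself is beyond a short sketch, but a brute-force reference implementation is useful for checking results on small inputs: the minimal enclosing circle is always determined by two or three of the points.

        import itertools, math

        def circle_from(points):
            # Circle through 2 points (diameter) or 3 points (circumcircle).
            if len(points) == 2:
                (x1, y1), (x2, y2) = points
                cx, cy = (x1 + x2) / 2, (y1 + y2) / 2
            else:
                (ax, ay), (bx, by), (qx, qy) = points
                d = 2 * (ax * (by - qy) + bx * (qy - ay) + qx * (ay - by))
                if d == 0:                      # collinear, no circumcircle
                    return None
                cx = ((ax**2 + ay**2) * (by - qy) + (bx**2 + by**2) * (qy - ay)
                      + (qx**2 + qy**2) * (ay - by)) / d
                cy = ((ax**2 + ay**2) * (qx - bx) + (bx**2 + by**2) * (ax - qx)
                      + (qx**2 + qy**2) * (bx - ax)) / d
            r = math.hypot(points[0][0] - cx, points[0][1] - cy)
            return (cx, cy), r

        def smallest_enclosing_circle(pts):
            # O(n^4) brute force, NOT Skyum's algorithm: try every 2- and
            # 3-point circle and keep the smallest one containing all points.
            best = None
            for m in (2, 3):
                for subset in itertools.combinations(pts, m):
                    c = circle_from(list(subset))
                    if c is None:
                        continue
                    (cx, cy), r = c
                    if all(math.hypot(px - cx, py - cy) <= r + 1e-9
                           for px, py in pts):
                        if best is None or r < best[1]:
                            best = ((cx, cy), r)
            return best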

  14. Improved time complexity analysis of the Simple Genetic Algorithm

    DEFF Research Database (Denmark)

    Oliveto, Pietro S.; Witt, Carsten

    2015-01-01

    A runtime analysis of the Simple Genetic Algorithm (SGA) for the OneMax problem has recently been presented, proving that the algorithm with population size μ ≤ n^(1/8−ε) requires exponential time with overwhelming probability. This paper presents an improved analysis which overcomes some limitations ...

  15. A simple consensus algorithm for distributed averaging in random ...

    Indian Academy of Sciences (India)

    ... guaranteed convergence with this simple algorithm. Keywords: sensor networks; random geographical networks; distributed averaging; consensus algorithms. Wireless sensor networks are increasingly used in many applications ranging from environmental to ...

  16. Inverse synthetic aperture radar imaging principles, algorithms and applications

    CERN Document Server

    Chen, Victor C

    2014-01-01

    Inverse Synthetic Aperture Radar Imaging: Principles, Algorithms and Applications is based on the latest research on ISAR imaging of moving targets and non-cooperative target recognition (NCTR). With a focus on advances and applications, this book provides readers with a working knowledge of various algorithms for ISAR imaging of targets and their implementation in MATLAB. These MATLAB algorithms will prove useful for visualizing and manipulating some simulated ISAR images.

  17. Automatic modulation classification principles, algorithms and applications

    CERN Document Server

    Zhu, Zhechen

    2014-01-01

    Automatic Modulation Classification (AMC) has been a key technology in many military, security, and civilian telecommunication applications for decades. In military and security applications, modulation often serves as another level of encryption; in modern civilian applications, multiple modulation types can be employed by a signal transmitter to control the data rate and link reliability. This book offers comprehensive documentation of AMC models, algorithms and implementations for successful modulation recognition. It provides an invaluable theoretical and numerical comparison of AMC algo

  18. Genetic Algorithms Principles Towards Hidden Markov Model

    Directory of Open Access Journals (Sweden)

    Nabil M. Hewahi

    2011-10-01

    In this paper we propose a general approach based on Genetic Algorithms (GAs) to evolve Hidden Markov Models (HMM). The problem appears when experts assign probability values for an HMM: they use only some limited inputs, and the assigned probability values might not be accurate enough to serve in other cases related to the same domain. We introduce an approach based on GAs to find suitable probability values for the HMM so that it is mostly correct in more cases than those used to assign the original probability values.

  19. Architectures of soft robotic locomotion enabled by simple mechanical principles.

    Science.gov (United States)

    Zhu, Liangliang; Cao, Yunteng; Liu, Yilun; Yang, Zhe; Chen, Xi

    2017-06-28

    In nature, a variety of limbless locomotion patterns flourish, from small or basic life forms (Escherichia coli, amoebae, etc.) to large or intelligent creatures (e.g., slugs, starfishes, earthworms, octopuses, jellyfishes, and snakes). Many bioinspired soft robots capable of such locomotion have been developed in the past few decades. In this work, based on the kinematics and dynamics of two representative locomotion modes (i.e., worm-like crawling and snake-like slithering), we propose a broad set of innovative designs for soft mobile robots built on simple mechanical principles. Inspired by and going beyond the existing biological systems, these designs include 1-D (dimensional), 2-D, and 3-D robotic locomotion patterns enabled by the simple actuation of continuous beams. We report herein over 20 locomotion modes achieving various locomotion functions, including crawling, rising, running, creeping, squirming, slithering, swimming, jumping, turning, turning over, helix rolling, wheeling, etc. Some reach high speed and high efficiency, and some can overcome obstacles. All these locomotion strategies and functions can be integrated into a simple beam model. The proposed simple and robust models are adaptive to severe and complex environments. These elegant designs for diverse robotic locomotion patterns are expected to underpin future deployments of soft robots and to inspire a series of advanced designs.

  20. On the runtime analysis of the Simple Genetic Algorithm

    DEFF Research Database (Denmark)

    Oliveto, Pietro S.; Witt, Carsten

    2014-01-01

    For many years it has been a challenge to analyze the time complexity of Genetic Algorithms (GAs) using stochastic selection together with crossover and mutation. This paper presents a rigorous runtime analysis of the well-known Simple Genetic Algorithm (SGA) for OneMax. It is proved that the SGA ... for a standard benchmark function. The presented techniques might serve as a first basis towards systematic runtime analyses of GAs.

  1. On the Analysis of the Simple Genetic Algorithm

    DEFF Research Database (Denmark)

    Oliveto, Pietro S.; Witt, Carsten

    2012-01-01

    For many years it has been a challenge to analyze the time complexity of Genetic Algorithms (GAs) using stochastic selection together with crossover and mutation. This paper presents a rigorous runtime analysis of the well-known Simple Genetic Algorithm (SGA) for OneMax. It is proved that the SGA ... benchmark function. The presented techniques might serve as a first basis towards systematic runtime analyses of GAs.

  2. Nodal algorithm derived from a new variational principle

    International Nuclear Information System (INIS)

    Watson, Fernando V.

    1995-01-01

    As a by-product of the research being carried out by the author on methods of recovering the pin power distribution of PWR cores, a nodal algorithm based on a modified variational principle for the two-group diffusion equations has been obtained. The main feature of the new algorithm is the low dimensionality achieved by the reduction of the original diffusion equations to a system of algebraic eigenvalue equations involving the average sources only, instead of the sources and interface group currents used in conventional nodal methods. The advantages of this procedure are discussed, and results generated by the new algorithm and by a finite difference code are compared. (author). 2 refs, 7 tabs

  3. Improved Runtime Analysis of the Simple Genetic Algorithm

    DEFF Research Database (Denmark)

    Oliveto, Pietro S.; Witt, Carsten

    2013-01-01

    A runtime analysis of the Simple Genetic Algorithm (SGA) for the OneMax problem has recently been presented, proving that the algorithm requires exponential time with overwhelming probability. This paper presents an improved analysis which overcomes some limitations of our previous one. Firstly, the new result holds for population sizes up to μ = n^(1/4−ε), which is an improvement of up to a power of 2. Secondly, we present a technique to bound the diversity of the population that does not require a bound on its bandwidth. Apart from allowing a stronger result, we believe this is a major improvement towards the reusability of the techniques in future systematic analyses of GAs. Finally, we consider the more natural SGA using selection with replacement rather than without replacement, although the results hold for both algorithmic versions. Experiments are presented to explore the limits ...

  4. The algorithms and principles of non-photorealistic graphics

    CERN Document Server

    Geng, Weidong

    2011-01-01

    ""The Algorithms and Principles of Non-photorealistic Graphics: Artistic Rendering and Cartoon Animation"" provides a conceptual framework for and comprehensive and up-to-date coverage of research on non-photorealistic computer graphics including methodologies, algorithms and software tools dedicated to generating artistic and meaningful images and animations. This book mainly discusses how to create art from a blank canvas, how to convert the source images into pictures with the desired visual effects, how to generate artistic renditions from 3D models, how to synthesize expressive pictures f

  5. Insights: Simple Models for Teaching Equilibrium and Le Chatelier's Principle.

    Science.gov (United States)

    Russell, Joan M.

    1988-01-01

    Presents three models that have been effective for teaching chemical equilibrium and Le Chatelier's principle: (1) the liquid transfer model, (2) the fish model, and (3) the teeter-totter model. Explains each model and its relation to Le Chatelier's principle. (MVL)

  6. A Simple Model of Entrepreneurship for Principles of Economics Courses

    Science.gov (United States)

    Gunter, Frank R.

    2012-01-01

    The critical roles of entrepreneurs in creating, operating, and destroying markets, as well as their importance in driving long-term economic growth are still generally either absent from principles of economics texts or relegated to later chapters. The primary difficulties in explaining entrepreneurship at the principles level are the lack of a…

  7. Statistical behaviour of adaptive multilevel splitting algorithms in simple models

    International Nuclear Information System (INIS)

    Rolland, Joran; Simonnet, Eric

    2015-01-01

    Adaptive multilevel splitting algorithms have been introduced rather recently for estimating tail distributions in a fast and efficient way. In particular, they can be used for computing the so-called reactive trajectories corresponding to direct transitions from one metastable state to another. The algorithm is based on successive selection-mutation steps performed on the system in a controlled way. It has two intrinsic parameters, the number of particles/trajectories and the reaction coordinate used for discriminating good or bad trajectories. We first investigate the convergence in law of the algorithm as a function of the timestep for several simple stochastic models. Second, we consider the average duration of reactive trajectories, for which no theoretical predictions exist. The most important aspect of this work concerns some systems with two degrees of freedom. They are studied in detail as a function of the reaction coordinate in the asymptotic regime where the number of trajectories goes to infinity. We show that during phase transitions, the statistics of the algorithm deviate significantly from known theoretical results when non-optimal reaction coordinates are used. In this case, the variance of the algorithm peaks at the transition and the convergence of the algorithm can be much slower than the usual expected central limit behaviour. The duration of trajectories is affected as well. Moreover, reactive trajectories do not correspond to the most probable ones. Such behaviour disappears when using the optimal reaction coordinate, called the committor, as predicted by the theory. We finally investigate a three-state Markov chain which reproduces this phenomenon and show logarithmic convergence of the trajectory durations.

  8. Simple algorithm for improved security in the FDDI protocol

    Science.gov (United States)

    Lundy, G. M.; Jones, Benjamin

    1993-02-01

    We propose a modification to the Fiber Distributed Data Interface (FDDI) protocol based on a simple algorithm which will improve confidential communication capability. This proposed modification provides a simple and reliable system which exploits some of the inherent security properties in a fiber optic ring network. This method differs from conventional methods in that end to end encryption can be facilitated at the media access control sublayer of the data link layer in the OSI network model. Our method is based on a variation of the bit stream cipher method. The transmitting station takes the intended confidential message and uses a simple modulo two addition operation against an initialization vector. The encrypted message is virtually unbreakable without the initialization vector. None of the stations on the ring will have access to both the encrypted message and the initialization vector except the transmitting and receiving stations. The generation of the initialization vector is unique for each confidential transmission and thus provides a unique approach to the key distribution problem. The FDDI protocol is of particular interest to the military in terms of LAN/MAN implementations. Both the Army and the Navy are considering the standard as the basis for future network systems. A simple and reliable security mechanism with the potential to support realtime communications is a necessary consideration in the implementation of these systems. The proposed method offers several advantages over traditional methods in terms of speed, reliability, and standardization.
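
    The "modulo two addition against an initialization vector" is an XOR stream operation, sketched below. Key generation, distribution over the ring, and the MAC-sublayer framing of the actual proposal are not modeled; os.urandom merely stands in for the per-transmission initialization vector.

        import os

        def xor_stream(data: bytes, key: bytes) -> bytes:
            # Modulo-two (XOR) combination of a message with a keystream; here
            # the keystream is simply the key repeated, a simplification of
            # the paper's bit-stream-cipher variation.
            return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

        iv = os.urandom(16)                     # stand-in initialization vector
        ciphertext = xor_stream(b"confidential frame payload", iv)
        plaintext = xor_stream(ciphertext, iv)  # XOR is its own inverse
        assert plaintext == b"confidential frame payload"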

  9. The principles of uncomplicated exodontia: simple steps for safe extractions.

    Science.gov (United States)

    Sullivan, S M

    1999-01-01

    This article reviews the basic principles of patient evaluation and surgical techniques to accomplish extraction of teeth in an uncomplicated manner. Also presented are techniques for extraction-site grafting with bioactive glass.

  10. Design principles and algorithms for automated air traffic management

    Science.gov (United States)

    Erzberger, Heinz

    1995-01-01

    This paper presents design principles and algorithms for building a real-time scheduler. The primary objective of the scheduler is to assign arrival aircraft to a favorable landing runway and schedule them to land at times that minimize delays. A further objective of the scheduler is to allocate delays between high-altitude airspace far from the airport and low-altitude airspace near the airport. A method of delay allocation is described that minimizes the average operating cost in the presence of errors in controlling aircraft to a specified landing time.

  11. Design Principles and Algorithms for Air Traffic Arrival Scheduling

    Science.gov (United States)

    Erzberger, Heinz; Itoh, Eri

    2014-01-01

    This report presents design principles and algorithms for building a real-time scheduler of arrival aircraft based on a first-come-first-served (FCFS) scheduling protocol. The algorithms provide the conceptual and computational foundation for the Traffic Management Advisor (TMA) of the Center/terminal radar approach control facilities (TRACON) automation system, which comprises a set of decision support tools for managing arrival traffic at major airports in the United States. The primary objective of the scheduler is to assign arrival aircraft to a favorable landing runway and schedule them to land at times that minimize delays. A further objective of the scheduler is to allocate delays between high-altitude airspace far away from the airport and low-altitude airspace near the airport. A method of delay allocation is described that minimizes the average operating cost in the presence of errors in controlling aircraft to a specified landing time. This report is a revision of an earlier paper first presented as part of an Advisory Group for Aerospace Research and Development (AGARD) lecture series in September 1995. The authors, during vigorous discussions over the details of this paper, felt it was important to the air-traffic-management (ATM) community to revise and extend the original 1995 paper, providing more detail and clarity and thereby allowing future researchers to understand this foundational work as the basis for the TMA's scheduling algorithms.
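
    A minimal sketch of the FCFS core: aircraft keep their estimated-time-of-arrival order and are spaced by a required separation. The single runway and the fixed 90 s separation are assumptions; runway allocation and the TMA's delay-allocation logic are omitted.

        def fcfs_schedule(etas, separation=90.0):
            # Schedule each aircraft no earlier than its ETA and no closer
            # than `separation` seconds behind the preceding lander,
            # preserving the first-come-first-served order.
            order = sorted(range(len(etas)), key=lambda i: etas[i])
            sta, last = {}, None
            for i in order:
                t = etas[i] if last is None else max(etas[i], last + separation)
                sta[i] = t
                last = t
            return sta  # scheduled times of arrival, indexed like etas

        print(fcfs_schedule([0.0, 30.0, 200.0]))  # -> {0: 0.0, 1: 90.0, 2: 200.0}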

  12. Parallelization of MCNP4 code by using simple FORTRAN algorithms

    International Nuclear Information System (INIS)

    Yazid, P.I.; Takano, Makoto; Masukawa, Fumihiro; Naito, Yoshitaka.

    1993-12-01

    Simple FORTRAN algorithms, relying only on open, close, read and write statements, together with disk files and some UNIX commands, have been applied to the parallelization of MCNP4. The code, named MCNPNFS, maintains almost all capabilities of MCNP4 in solving shielding problems. It is able to perform parallel computing on a set of any UNIX workstations connected by a network, regardless of the heterogeneity of the hardware, provided that all processors produce binary files in the same format. Further, it is confirmed that MCNPNFS can also be executed on the Monte-4 vector-parallel computer. MCNPNFS has been tested intensively by executing 5 photon-neutron benchmark problems, a spent-fuel cask problem and 17 sample problems included in the original code package of MCNP4. Three different workstations, connected by a network, have been used to execute MCNPNFS in parallel. By measuring CPU time, the parallel efficiency is determined to be 58% to 99%, with an average of 86%. On Monte-4, MCNPNFS has been executed using 4 processors concurrently and has achieved a parallel efficiency of 79% on average. (author)

  13. A Simple Linear Ranking Algorithm Using Query Dependent Intercept Variables

    OpenAIRE

    Ailon, Nir

    2008-01-01

    The LETOR website contains three information retrieval datasets used as a benchmark for testing machine learning ideas for ranking. Algorithms participating in the challenge are required to assign score values to search results for a collection of queries, and are measured using standard IR ranking measures (NDCG, precision, MAP) that depend only on the relative score-induced order of the results. Similarly to many of the ideas proposed in the participating algorithms, we train a linear classifi...

  14. A simple and efficient parallel FFT algorithm using the BSP model

    NARCIS (Netherlands)

    Bisseling, R.H.; Inda, M.A.

    2000-01-01

    In this paper we present a new parallel radix FFT algorithm based on the BSP model. Our parallel algorithm uses the group-cyclic distribution family, which makes it simple to understand and easy to implement. We show how to reduce the communication cost of the algorithm by a factor of three in the case ...

  15. A Simple and Efficient Artificial Bee Colony Algorithm

    Directory of Open Access Journals (Sweden)

    Yunfeng Xu

    2013-01-01

    Artificial bee colony (ABC) is a new population-based stochastic algorithm which has shown good search abilities on many optimization problems. However, the original ABC shows slow convergence speed during the search process. In order to enhance the performance of ABC, this paper proposes a new artificial bee colony (NABC) algorithm, which modifies the search pattern of both employed and onlooker bees. A solution pool is constructed by storing some of the best solutions of the current swarm. New candidate solutions are generated by searching the neighborhood of solutions randomly chosen from the solution pool. Experiments are conducted on a set of twelve benchmark functions. Simulation results show that our approach is significantly better than, or at least comparable to, the original ABC and seven other stochastic algorithms.
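
    The modified search pattern can be sketched as follows: a candidate is produced by perturbing one dimension of a solution drawn from the best-solutions pool, mirroring the classic ABC update with phi in [-1, 1]. The employed/onlooker bookkeeping and greedy selection of the full NABC are omitted, so treat this as a reconstruction of the abstract's idea, not the paper's exact procedure.

        import random

        def nabc_candidate(pool, dim, bounds):
            # pool: list of good solutions found so far (needs >= 2 entries).
            # A new candidate perturbs one dimension of a pool member toward
            # or away from a second pool member, as in the classic ABC update.
            base, mate = random.sample(pool, 2)
            j = random.randrange(dim)               # dimension to perturb
            cand = list(base)
            phi = random.uniform(-1.0, 1.0)
            cand[j] = base[j] + phi * (base[j] - mate[j])
            lo, hi = bounds
            cand[j] = min(max(cand[j], lo), hi)     # clamp to the search range
            return cand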

  16. A simple algorithm for the identification of clinical COPD phenotypes

    DEFF Research Database (Denmark)

    Burgel, Pierre-Régis; Paillasseur, Jean-Louis; Janssens, Wim

    2017-01-01

    ... International Assessment (3CIA) initiative. Cluster analysis identified five subgroups of COPD patients with different clinical characteristics (especially regarding severity of respiratory disease and the presence of cardiovascular comorbidities and diabetes). The CART-based algorithm indicated that the variables relevant for patient grouping differed markedly between patients with isolated respiratory disease (FEV1, dyspnoea grade) and those with multi-morbidity (dyspnoea grade, age, FEV1 and body mass index). Application of this algorithm to the 3CIA cohorts confirmed that it identified subgroups ...

  17. A simple consensus algorithm for distributed averaging in random ...

    Indian Academy of Sciences (India)

    http://www.ias.ac.in/article/fulltext/pram/079/03/0493-0499. Keywords: sensor networks; random geographical networks; distributed averaging; consensus algorithms. Abstract: Random geographical networks are realistic models for wireless sensor networks which are used in many applications. Achieving average ...

  18. Genetic algorithms principles and perspectives : a guide to GA theory

    CERN Document Server

    Reeves, Colin R

    2002-01-01

    Genetic Algorithms (GAs) have become a highly effective tool for solving hard optimization problems. This text provides a survey of some important theoretical contributions, many of which have been proposed and developed in the Foundations of Genetic Algorithms series of workshops.

  19. A simple algorithm for the identification of clinical COPD phenotypes

    NARCIS (Netherlands)

    Burgel, Pierre-Régis; Paillasseur, Jean-Louis; Janssens, Wim; Piquet, Jacques; ter Riet, Gerben; Garcia-Aymerich, Judith; Cosio, Borja; Bakke, Per; Puhan, Milo A.; Langhammer, Arnulf; Alfageme, Inmaculada; Almagro, Pere; Ancochea, Julio; Celli, Bartolome R.; Casanova, Ciro; de-Torres, Juan P.; Decramer, Marc; Echazarreta, Andrés; Esteban, Cristobal; Gomez Punter, Rosa Mar; Han, MeiLan K.; Johannessen, Ane; Kaiser, Bernhard; Lamprecht, Bernd; Lange, Peter; Leivseth, Linda; Marin, Jose M.; Martin, Francis; Martinez-Camblor, Pablo; Miravitlles, Marc; Oga, Toru; Sofia Ramírez, Ana; Sin, Don D.; Sobradillo, Patricia; Soler-Cataluña, Juan J.; Turner, Alice M.; Verdu Rivera, Francisco Javier; Soriano, Joan B.; Roche, Nicolas

    2017-01-01

    This study aimed to identify simple rules for allocating chronic obstructive pulmonary disease (COPD) patients to clinical phenotypes identified by cluster analyses. Data from 2409 COPD patients of French/Belgian COPD cohorts were analysed using cluster analysis resulting in the identification of

  20. Branch and peg algorithms for the simple plant location problem

    NARCIS (Netherlands)

    Goldengorin, Boris; Ghosh, Diptesh; Sierksma, Gerard

    2001-01-01

    The simple plant location problem is a well-studied problem in combinatorial optimization. It is one of deciding where to locate a set of plants so that a set of clients can be supplied by them at the minimum cost. This problem often appears as a subproblem in other combinatorial problems. Several

  1. Branch and peg algorithms for the simple plant location problem

    NARCIS (Netherlands)

    Goldengorin, B.; Ghosh, D.; Sierksma, G.

    The simple plant location problem is a well-studied problem in combinatorial optimization. It is one of deciding where to locate a set of plants so that a set of clients can be supplied by them at the minimum cost. This problem often appears as a subproblem in other combinatorial problems. Several

  2. A Simple FDTD Algorithm for Simulating EM-Wave Propagation in General Dispersive Anisotropic Material

    KAUST Repository

    Al-Jabr, Ahmad Ali

    2013-03-01

    In this paper, a finite-difference time-domain (FDTD) algorithm for simulating the propagation of EM waves in anisotropic material is presented. The algorithm is based on the auxiliary differential equation and the general polarization formulation. In anisotropic materials, electric fields are coupled and elements in the permittivity tensor are, in general, multiterm dispersive. The presented algorithm resolves the field coupling using a formulation based on electric polarizations. It also offers a simple procedure for the treatment of multiterm dispersion in the FDTD scheme. The algorithm is tested by simulating wave propagation in 1-D magnetized plasma, showing excellent agreement with analytical solutions. Extension of the algorithm to multidimensional structures is straightforward. The presented algorithm is efficient and simple compared to other algorithms found in the literature. © 2012 IEEE.

  3. Modified SIMPLE algorithm for the numerical analysis of incompressible flows with free surface

    International Nuclear Information System (INIS)

    Mok, Jin Ho; Hong, Chun Pyo; Lee, Jin Ho

    2005-01-01

    While the SIMPLE algorithm is most widely used for simulating flow phenomena in industrial equipment and manufacturing processes, it is less often adopted for simulations of free-surface flow. Though the SIMPLE algorithm is free from time-step limitations, free-surface behavior imposes a restriction on the time step. As a result, explicit schemes are faster than the implicit scheme in terms of computation time when the same time step is applied, since the implicit scheme includes a numerical method for solving simultaneous equations in its procedure. If the computation time of the SIMPLE algorithm can be reduced for unsteady free-surface flow problems, the calculation can be carried out in a more stable way, and, in the design process, the process variables can be controlled based on a more accurate database. In this study, a modified SIMPLE algorithm is presented for free-surface flow. The broken water column problem is adopted for validation of the modified algorithm (MoSIMPLE) and for comparison to the conventional SIMPLE algorithm.

  4. Pareto Principle in Datamining: an Above-Average Fencing Algorithm

    Directory of Open Access Journals (Sweden)

    K. Macek

    2008-01-01

    This paper formulates a new datamining problem: which subset of the input space has the relatively highest output, where the minimal size of this subset is given. This can be useful where usual datamining methods fail because of error distribution asymmetry. The paper provides a novel algorithm for this datamining problem and compares it with clustering of above-average individuals.

  5. A Simple But Effective Canonical Dual Theory Unified Algorithm for Global Optimization

    OpenAIRE

    Zhang, Jiapu

    2011-01-01

    Numerical global optimization methods are often very time consuming and cannot be applied to high-dimensional nonconvex/nonsmooth optimization problems. Due to the nonconvexity/nonsmoothness, directly solving the primal problems is sometimes very difficult. This paper presents a very simple but effective canonical duality theory (CDT) unified global optimization algorithm, whose convergence is proved in this paper. More importantly, for this CDT-unified algorithm, numerous...

  6. Calculation of propellant gas pressure by simple extended corresponding state principle

    OpenAIRE

    Bin Xu; San-jiu Ying; Xin Liao

    2016-01-01

    The virial equation can describe the gas state well at high temperature and pressure, but the difficulties in virial coefficient calculation limit the use of the virial equation. The simple extended corresponding state principle (SE-CSP) is introduced into the virial equation. Based on a corresponding state equation including three characteristic parameters, an extended parameter is introduced to describe the second virial coefficient expressions of the main products of propellant gas. The modified SE-CSP second ...

  7. Flux-corrected transport principles, algorithms, and applications

    CERN Document Server

    Kuzmin, Dmitri; Turek, Stefan

    2005-01-01

    Addressing students and researchers as well as CFD practitioners, this book describes the state of the art in the development of high-resolution schemes based on the Flux-Corrected Transport (FCT) paradigm. Intended for readers who have a solid background in Computational Fluid Dynamics, the book begins with historical notes by J.P. Boris and D.L. Book. Review articles that follow describe recent advances in the design of FCT algorithms as well as various algorithmic aspects. The topics addressed in the book and its main highlights include: the derivation and analysis of classical FCT schemes with special emphasis on the underlying physical and mathematical constraints; flux limiting for hyperbolic systems; generalization of FCT to implicit time-stepping and finite element discretizations on unstructured meshes and its role as a subgrid scale model for Monotonically Integrated Large Eddy Simulation (MILES) of turbulent flows. The proposed enhancements of the FCT methodology also comprise the prelimiting and '...

  8. ANNIT - An Efficient Inversion Algorithm based on Prediction Principles

    Science.gov (United States)

    Růžek, B.; Kolář, P.

    2009-04-01

    Solving inverse problems is a meaningful task in geophysics. The amount of data is continuously increasing, methods of modeling are being improved, and computer facilities are making great technical progress. Therefore the development of new and efficient algorithms and computer codes for both forward and inverse modeling is still up to date. ANNIT contributes to this stream, since it is a tool for the efficient solution of a set of non-linear equations. Typical geophysical problems are based on a parametric approach. The system is characterized by a vector of parameters p; the response of the system is characterized by a vector of data d. The forward problem is usually represented by a unique mapping F(p)=d. The inverse problem is much more complex: the inverse mapping p=G(d) is available in an analytical or closed form only exceptionally, and generally it may not exist at all. Technically, both the forward and inverse mappings F and G are sets of non-linear equations. ANNIT handles this situation as follows: (i) joint subspaces {pD, pM} of the original data and model spaces D, M, respectively, are searched for, within which the forward mapping F is sufficiently smooth that the inverse mapping G does exist; (ii) a numerical approximation of G in the subspaces {pD, pM} is found; (iii) a candidate solution is predicted by using this numerical approximation. ANNIT works iteratively in cycles. The subspaces {pD, pM} are searched for by generating suitable populations of individuals (models) covering the data and model spaces. The approximation of the inverse mapping is made using three methods: (a) linear regression, (b) the Radial Basis Function Network technique, (c) linear prediction (also known as "kriging"). The ANNIT algorithm also has a built-in archive of already evaluated models. Archive models are re-used in a suitable way, and thus the number of forward evaluations is minimized. ANNIT is now implemented both in MATLAB and SCILAB. Numerical tests show good ...

  9. Flux-corrected transport principles, algorithms, and applications

    CERN Document Server

    Löhner, Rainald; Turek, Stefan

    2012-01-01

    Many modern high-resolution schemes for Computational Fluid Dynamics trace their origins to the Flux-Corrected Transport (FCT) paradigm. FCT maintains monotonicity using a nonoscillatory low-order scheme to determine the bounds for a constrained high-order approximation. This book begins with historical notes by J.P. Boris and D.L. Book who invented FCT in the early 1970s. The chapters that follow describe the design of fully multidimensional FCT algorithms for structured and unstructured grids, limiting for systems of conservation laws, and the use of FCT as an implicit subgrid scale model. The second edition presents 200 pages of additional material. The main highlights of the three new chapters include: FCT-constrained interpolation for Arbitrary Lagrangian-Eulerian methods, an optimization-based approach to flux correction, and FCT simulations of high-speed flows on overset grids. Addressing students and researchers, as well as CFD practitioners, the book is focused on computational aspects and contains m...

  10. A Simple Sizing Algorithm for Stand-Alone PV/Wind/Battery Hybrid Microgrids

    Directory of Open Access Journals (Sweden)

    Jing Li

    2012-12-01

    In this paper, we develop a simple algorithm to determine the required numbers of wind-turbine generator and photovoltaic array generating units, and the associated storage capacity, for a stand-alone hybrid microgrid. The algorithm is based on the observation that the state of charge of the battery should be periodically invariant. The optimal sizing of the hybrid microgrid is given in the sense that the life cycle cost of the system is minimized while the given load power demand is satisfied without load rejection. We also report a case study to show the efficacy of the developed algorithm.
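
    The periodic-invariance observation suggests a simple feasibility test for any candidate sizing, sketched below under assumed hourly per-unit profiles and a single round-trip efficiency; an outer loop would enumerate (n_pv, n_wt, cap) triples and keep the cheapest feasible one.

        def feasible(n_pv, n_wt, cap, pv_unit, wt_unit, load, eff=0.9):
            # pv_unit, wt_unit: per-unit generation profiles (kWh per step);
            # load: demand profile. The battery must absorb every surplus or
            # deficit without load rejection, and its state of charge must
            # return to its starting value over the period (periodic
            # invariance, as in the abstract).
            soc = soc0 = 0.5 * cap
            for pv, wt, ld in zip(pv_unit, wt_unit, load):
                net = n_pv * pv + n_wt * wt - ld
                soc += eff * net if net > 0 else net / eff
                if soc < 0:             # deficit the battery cannot cover
                    return False
                soc = min(soc, cap)     # surplus beyond capacity is curtailed
            return soc >= soc0          # SOC periodically invariant (or better)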

  11. Performance evaluation of simple linear iterative clustering algorithm on medical image processing.

    Science.gov (United States)

    Cong, Jinyu; Wei, Benzheng; Yin, Yilong; Xi, Xiaoming; Zheng, Yuanjie

    2014-01-01

    The Simple Linear Iterative Clustering (SLIC) algorithm is increasingly applied to different kinds of image processing because of its excellent perceptually meaningful characteristics. In order to better meet the needs of medical image processing and provide a technical reference for applying SLIC to medical image segmentation, two indicators, boundary accuracy and superpixel uniformity, are introduced alongside other indicators to systematically analyze the performance of the SLIC algorithm in comparison with the Normalized cuts and Turbopixels algorithms. The extensive experimental results show that SLIC is faster and less sensitive to the image type and the chosen superpixel number than similar algorithms such as Turbopixels and Normalized cuts. It also performs well in terms of boundary recall, robustness to fuzzy boundaries, flexibility in setting the superpixel size, and overall segmentation performance on medical images.

  12. On the Runtime of Randomized Local Search and Simple Evolutionary Algorithms for Dynamic Makespan Scheduling

    DEFF Research Database (Denmark)

    Neumann, Frank; Witt, Carsten

    2015-01-01

    ... combinatorial optimization problem, namely makespan scheduling. We study the model of a strong adversary which is allowed to change one job at regular intervals. Furthermore, we investigate the setting of random changes. Our results show that randomized local search and a simple evolutionary algorithm are very ...

  13. A simple dual ascent algorithm for the multilevel facility location problem

    NARCIS (Netherlands)

    Bumb, A.F.; Kern, Walter

    2001-01-01

    We present a simple dual ascent method for the multilevel facility location problem which finds a solution within $6$ times the optimum for the uncapacitated case and within $12$ times the optimum for the capacitated one. The algorithm is deterministic and based on the primal-dual technique.

  14. A simple algorithm for measuring particle size distributions on an uneven background from TEM images

    DEFF Research Database (Denmark)

    Gontard, Lionel Cervera; Ozkaya, Dogan; Dunin-Borkowski, Rafal E.

    2011-01-01

    Nanoparticles have a wide range of applications in science and technology. Their sizes are often measured using transmission electron microscopy (TEM) or X-ray diffraction. Here, we describe a simple computer algorithm for measuring particle size distributions from TEM images in the presence of an uneven background ... An application to images of heterogeneous catalysts is presented.

  15. Passification based simple adaptive control of quadrotor attitude: Algorithms and testbed results

    Science.gov (United States)

    Tomashevich, Stanislav; Belyavskyi, Andrey; Andrievsky, Boris

    2017-01-01

    In the paper, the results of the Passification Method with the Implicit Reference Model (IRM) approach are applied to designing a simple adaptive controller for quadrotor attitude. The IRM design technique makes it possible to relax the matching condition known for habitual MRAC systems, and leads to simple adaptive controllers ensuring fast tuning of the controller gains and high robustness with respect to nonlinearities in the control loop, external disturbances and unmodeled plant dynamics. For experimental evaluation of the adaptive system's performance, a 2DOF laboratory setup has been created. The testbed allows new control algorithms to be safely tested in a small laboratory area, with changes made promptly in case of failure. The testing results for simple adaptive control of quadrotor attitude are presented, demonstrating the efficacy of the applied simple adaptive control method. The experiments demonstrate good performance quality and a high adaptation rate of the simple adaptive control system.

  16. Maximum Principles and Application to the Analysis of An Explicit Time Marching Algorithm

    Science.gov (United States)

    LeTallec, Patrick; Tidriri, Moulay D.

    1996-01-01

    In this paper we develop local and global estimates for the solution of convection-diffusion problems. We then study the convergence properties of a Time Marching Algorithm solving Advection-Diffusion problems on two domains using incompatible discretizations. This study is based on a De-Giorgi-Nash maximum principle.

  17. A Simple Density with Distance Based Initial Seed Selection Technique for K Means Algorithm

    Directory of Open Access Journals (Sweden)

    Sajidha Syed Azimuddin

    2017-01-01

    Open issues with respect to the K-means algorithm include identifying the number of clusters, initial seed concept selection, clustering tendency, handling empty clusters, identifying outliers, etc. In this paper we propose a novel and simple technique considering both density and distance of the concepts in a dataset to identify initial seed concepts for clustering. Many authors have proposed different techniques to identify initial seed concepts, but our method ensures that the initial seed concepts are chosen from the different clusters that are to be generated by the clustering solution. The hallmark of our algorithm is that it is a single-pass algorithm that does not require any extra parameters to be estimated. Further, our seed concepts are among the actual concepts and not the means of representative concepts, as is the case in many other algorithms. We have implemented our proposed algorithm and compared the results with the interval-based technique of Fouad Khan. We see that our method outperforms the interval-based method. We have also compared our method with the original random K-means and K-means++ algorithms.
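
    A generic reconstruction of the density-with-distance idea (not the paper's exact procedure): the first seed is the densest concept, and each further seed maximizes density times distance to the nearest already-chosen seed, so seeds come from distinct dense regions and are actual concepts, matching the hallmark described above. The neighborhood radius is an assumed parameter.

        import numpy as np

        def density_distance_seeds(X, k, radius):
            # Pairwise distances and a simple radius-based density estimate.
            d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
            density = (d < radius).sum(axis=1)       # neighbors within radius
            seeds = [int(np.argmax(density))]        # densest concept first
            for _ in range(k - 1):
                dist_to_seeds = d[:, seeds].min(axis=1)
                # next seed: dense AND far from every chosen seed
                seeds.append(int(np.argmax(density * dist_to_seeds)))
            return X[seeds]                          # actual concepts, not means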

  18. A simple and efficient algorithm to estimate daily global solar radiation from geostationary satellite data

    International Nuclear Information System (INIS)

    Lu, Ning; Qin, Jun; Yang, Kun; Sun, Jiulin

    2011-01-01

    Surface global solar radiation (GSR) is the primary renewable energy in nature. Geostationary satellite data are used to map GSR in many inversion algorithms, in which ground GSR measurements merely serve to validate the satellite retrievals. In this study, a simple algorithm with artificial neural network (ANN) modeling is proposed to explore the non-linear physical relationship between ground daily GSR measurements and Multi-functional Transport Satellite (MTSAT) all-channel observations, in an effort to fully exploit the information contained in both data sets. Singular value decomposition is implemented to extract the principal signals from the satellite data, and a novel method is applied to enhance ANN performance at high altitude. A three-layer feed-forward ANN model is trained with one year of daily GSR measurements at ten ground sites. This trained ANN is then used to map continuous daily GSR for two years, and its performance is validated at all 83 ground sites in China. The evaluation demonstrates that this algorithm can quickly and efficiently build an ANN model that estimates daily GSR from geostationary satellite data with good accuracy in both space and time. Highlights: a simple and efficient algorithm to estimate GSR from geostationary satellite data; the ANN model fully exploits information from both satellite and ground measurements; the model's good performance is comparable to that of the classical models; surface elevation and infrared information enhance GSR inversion.

  19. A simple principled approach for modeling and understanding uniform color metrics.

    Science.gov (United States)

    Smet, Kevin A G; Webster, Michael A; Whitehead, Lorne A

    2016-03-01

    An important goal in characterizing human color vision is to order color percepts in a way that captures their similarities and differences. This has resulted in the continuing evolution of "uniform color spaces," in which the distances within the space represent the perceptual differences between the stimuli. While these metrics are now very successful in predicting how color percepts are scaled, they do so in largely empirical, ad hoc ways, with limited reference to actual mechanisms of color vision. In this article our aim is to instead begin with general and plausible assumptions about color coding, and then develop a model of color appearance that explicitly incorporates them. We show that many of the features of empirically defined color order systems (those of Munsell, Pantone, NCS, and others) as well as many of the basic phenomena of color perception, emerge naturally from fairly simple principles of color information encoding in the visual system and how it can be optimized for the spectral characteristics of the environment.

  20. Calculation of propellant gas pressure by simple extended corresponding state principle

    Directory of Open Access Journals (Sweden)

    Bin Xu

    2016-04-01

    The virial equation can describe the gas state well at high temperature and pressure, but the difficulties in virial coefficient calculation limit the use of the virial equation. The simple extended corresponding state principle (SE-CSP) is introduced into the virial equation. Based on a corresponding state equation including three characteristic parameters, an extended parameter is introduced to describe the second virial coefficient expressions of the main products of propellant gas. The modified SE-CSP second virial coefficient expression was extrapolated based on the virial coefficient experimental temperatures, and the second virial coefficients obtained are in good agreement with the experimental data at low temperature and the theoretical values at high temperature. The maximum pressure in the closed bomb test was calculated with the modified SE-CSP virial coefficient expressions with a calculation error of less than 2%, and the error was smaller than the result calculated with the reported values under the same calculation conditions. The modified SE-CSP virial coefficient expression provides a convenient and efficient method for practical virial coefficient calculation without resorting to complicated molecular model design and integral calculation.
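
    For orientation, the pressure calculation behind such work is the virial equation truncated after the second coefficient, p = ρRT(1 + Bρ) with molar density ρ = n/V. In the paper, B comes from the modified SE-CSP expressions; the sketch below simply takes B as an input.

        R = 8.314  # gas constant, J/(mol*K)

        def virial_pressure(n_mol, V_m3, T_K, B_m3_per_mol):
            # Virial equation truncated after the second coefficient:
            # p = rho*R*T * (1 + B*rho), rho = n/V in mol/m^3.
            rho = n_mol / V_m3
            return rho * R * T_K * (1.0 + B_m3_per_mol * rho)

        # e.g. 1 mol in 1 L at 2500 K with an assumed B of 30 cm^3/mol
        print(virial_pressure(1.0, 1e-3, 2500.0, 30e-6))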

  1. Simple approach to sediment provenance tracing using element analysis and fundamental principles

    Science.gov (United States)

    Matys Grygar, Tomas; Elznicova, Jitka; Popelka, Jan

    2016-04-01

    Common sediment fingerprinting techniques use either (1) extensive analytical datasets, sometimes nearly complete with respect to accessible characterization techniques, processed by multidimensional statistics that rest on certain assumptions about the distribution functions of the analytical results and the conservativeness/additivity of some components, or (2) analytically demanding characteristics such as isotope ratios, assumed to be unequivocal "labels" of the parent material unaltered by any catchment process. The inherent problem with approach (1) is that the interpretation of statistical components ("sources") is done ex post and remains purely formal. The problem with approach (2) is that catchment processes (weathering, transport, deposition) can modify most geochemical parameters of soils and sediments; in other words, the idea that some geochemical parameters are "conservative" may be idealistic. Grain-size effects and sediment provenance have a joint influence on the chemical composition of fluvial sediments that is not easy to disentangle. Attempts to separate these two main components using statistics alone seem risky and equivocal, because the grain-size dependence of element composition is nearly individual for each element and reflects sediment maturity and catchment-specific formation and transport processes. We suppose that the use of less extensive datasets of analytical results, interpreted with respect for fundamental principles, should be more robust than purely statistical tools applied to overwhelming datasets. We examined sediment composition, both published by other researchers and gathered by us, and found some general principles which are, in our opinion, relevant for fingerprinting: (1) concentrations of all elements are grain-size sensitive, i.e. there are no "conservative" elements in the conventional sense of provenance or transport-pathway tracing; (2) fractionation by catchment processes and fluvial transport changes

  2. Simple but novel test method for quantitatively comparing robot mapping algorithms using SLAM and dead reckoning

    Science.gov (United States)

    Davey, Neil S.; Godil, Haris

    2013-05-01

    This article presents a comparative study between a well-known SLAM (Simultaneous Localization and Mapping) algorithm, called Gmapping, and a standard dead-reckoning algorithm; the study is based on experimental results of both approaches using a commercial skid-based turning robot, the P3DX. Five main base-case scenarios were conducted to evaluate and test the effectiveness of both algorithms. The results show that SLAM outperformed dead reckoning in terms of map-making accuracy in all scenarios but one, since SLAM did not work well in a rapidly changing environment. Although the main conclusion about the excellence of SLAM is not surprising, the presented test method is valuable to professionals working in this area of mobile robots, as it is highly practical and provides solid and valuable results. The novelty of this study lies in its simplicity. The simple but novel test method for quantitatively comparing robot mapping algorithms using SLAM and dead reckoning, and some applications using autonomous robots, are being patented by the authors in U.S. Patent Application Nos. 13/400,726 and 13/584,862.
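
    For context, the dead-reckoning baseline in such a comparison amounts to integrating wheel odometry; a minimal sketch under a unicycle motion model (velocities and time step invented) might look like this:

```python
import math

def dead_reckon(pose, v, omega, dt):
    """One odometry integration step; pose = (x, y, theta)."""
    x, y, theta = pose
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += omega * dt
    return (x, y, theta)

pose = (0.0, 0.0, 0.0)
# Drive straight for 2 s, then turn in place for 0.4 s (dt = 0.1 s).
for v, omega in [(0.5, 0.0)] * 20 + [(0.0, math.pi / 4)] * 4:
    pose = dead_reckon(pose, v, omega, 0.1)
print(pose)  # accumulated pose estimate; odometry error grows without bound
```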

  3. Inversion of self-potential anomalies caused by simple-geometry bodies using global optimization algorithms

    International Nuclear Information System (INIS)

    Göktürkler, G; Balkaya, Ç

    2012-01-01

    Three naturally inspired meta-heuristic algorithms—the genetic algorithm (GA), simulated annealing (SA) and particle swarm optimization (PSO)—were used to invert some of the self-potential (SP) anomalies originating from polarized bodies with simple geometries. Both synthetic and field data sets were considered. The tests with the synthetic data comprised solutions with both noise-free and noisy data; in the tests with the field data, SP anomalies observed over a copper belt (India), graphite deposits (Germany) and a metallic sulfide (Turkey) were inverted. The model parameters included the electric dipole moment, polarization angle, depth, shape factor and origin of the anomaly. The estimated parameters were compared with those from previous studies using various optimization algorithms, mainly least-squares approaches, on the same data sets. During the test studies the solutions by GA, PSO and SA were found to be consistent with each other, and a good starting model was not a requirement to reach the global minimum. It can be concluded that the global optimization algorithms considered in this study were able to yield solutions compatible with those from widely used local optimization algorithms. (paper)
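
    A widely used closed-form expression for the SP anomaly of a polarized body with simple geometry involves exactly the parameters listed above (dipole moment K, polarization angle, depth, shape factor and origin); assuming that forward model, a bare-bones simulated annealing inversion can be sketched as follows (the model form and all tuning constants are illustrative assumptions):

```python
import math, random

def sp_anomaly(x, K, theta, h, q, x0):
    """Assumed closed-form SP anomaly of a polarized simple body
    (sphere/cylinder selected via the shape factor q)."""
    return K * ((x - x0) * math.cos(theta) + h * math.sin(theta)) / \
           (((x - x0) ** 2 + h ** 2) ** q)

def misfit(params, xs, obs):
    return sum((sp_anomaly(x, *params) - v) ** 2 for x, v in zip(xs, obs))

def anneal(xs, obs, p0, steps=20000, T0=1.0, cool=0.9995, scale=0.05):
    """Bare-bones simulated annealing over (K, theta, h, q, x0)."""
    p, e, T = list(p0), misfit(p0, xs, obs), T0
    for _ in range(steps):
        cand = [v + scale * (abs(v) + 1.0) * random.uniform(-1, 1) for v in p]
        ec = misfit(cand, xs, obs)
        # accept downhill moves always, uphill moves with probability e^(-dE/T)
        if ec < e or random.random() < math.exp((e - ec) / T):
            p, e = cand, ec
        T *= cool
    return p, e
```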

  4. The Wang Landau parallel algorithm for the simple grids. Optimizing OpenMPI parallel implementation

    Science.gov (United States)

    Kussainov, A. S.

    2017-12-01

    The Wang-Landau Monte Carlo algorithm was implemented to calculate the density of states for different simple spin lattices. The energy space was split between the individual threads and balanced according to the expected runtime of the individual processes. A custom spin-clustering mechanism, necessary to overcome the critical slowdown in certain energy subspaces, was devised. Stable reconstruction of the density of states was of primary importance. Some data post-processing techniques were involved to produce the expected smooth density of states.
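
    The single-window core of the Wang-Landau method, here for a small 2D Ising lattice, can be sketched as below; the paper's thread-level energy-range splitting and spin clustering are not shown, and the sweep length and flatness criterion are typical choices rather than the authors':

```python
import math, random

def wang_landau(L=8, flat=0.8, lnf_min=1e-6):
    """Serial Wang-Landau sampling of the 2D Ising density of states
    on an L x L periodic lattice (single energy window)."""
    N = L * L
    spins = [[random.choice((-1, 1)) for _ in range(L)] for _ in range(L)]

    def site_energy(i, j):
        s = spins[i][j]
        return -s * (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
                     + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])

    E = sum(site_energy(i, j) for i in range(L) for j in range(L)) // 2
    lng, hist, lnf = {}, {}, 1.0   # log g(E), visit histogram, modification factor
    while lnf > lnf_min:
        for _ in range(10000 * N):
            i, j = random.randrange(L), random.randrange(L)
            Enew = E - 2 * site_energy(i, j)        # energy after flipping (i, j)
            # accept with probability min(1, g(E)/g(Enew))
            if random.random() < math.exp(min(0.0, lng.get(E, 0.0) - lng.get(Enew, 0.0))):
                spins[i][j] *= -1
                E = Enew
            lng[E] = lng.get(E, 0.0) + lnf
            hist[E] = hist.get(E, 0) + 1
        if hist and min(hist.values()) > flat * sum(hist.values()) / len(hist):
            hist, lnf = {}, lnf / 2.0   # histogram flat: halve ln f, reset counts
    return lng  # relative log density of states
```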

  5. Pulse filtering and correction for CZT detectors using simple digital algorithms based on the wavelet transform

    International Nuclear Information System (INIS)

    Perez, J.M.; Garcia-Belmonte, G.

    1998-01-01

    The authors report an approach to the double-Gaussian filtering used in classical works on dual-parameter pulse processing. The technique has been implemented by creating a bank of Gaussian-like digital filters based on wavelet transforms. A simple method to correct for the charge loss inherent to room-temperature semiconductor gamma detectors has been developed; this method is based on multi-resolution signal analysis. Results are reported from tests of these algorithms on commercial CZT detectors, and two trapped-hole charge correction levels are compared. Finally, the advantages and limitations of this new approach to detector pulse processing are discussed

  6. MAJOR PRINCIPLES OF EPILEPSY TREATMENT. ALGORITHM OF SELECTION OF ANTIEPILEPTIC DRUGS

    Directory of Open Access Journals (Sweden)

    K. Yu. Mukhin

    2014-01-01

    Full Text Available The authors reviewed the general principles of epilepsy treatment in detail and provided their proprietary algorithm for the selection of antiepileptic drugs, developed at Svt. Luka's Institute of Child Neurology and Epilepsy. This algorithm is designed for general practitioners who deal with the treatment of epilepsy. When selecting the first antiepileptic drug, the doctor must take into consideration the age of the patient and assess their level of development, the clinical manifestations of seizures, and the data of electroencephalography and magnetic resonance imaging. These data allow the type of seizures to be determined, a syndrome-related diagnosis to be supposed, and the most appropriate first-choice antiepileptic drug to be selected in each specific case. Recommendations for further examination of patients and for monitoring the efficacy of therapy are also provided.

  7. A Simple Fatigue Life Prediction Algorithm Using the Modified NASGRO Equation

    Directory of Open Access Journals (Sweden)

    Wei Zhang

    2016-01-01

    Full Text Available A simple fatigue life prediction algorithm using the modified NASGRO equation is proposed in this paper. The NASGRO equation is modified by introducing the concept of an intrinsic effective threshold stress intensity factor (SIF) range ΔKeffth. One advantage of the proposed method is that the complex growth behavior analysis of small cracks can be avoided, so the fatigue life can be calculated by directly integrating the crack growth model from the initial defect size to the critical crack size. The fatigue limit and the intrinsic effective threshold SIF range ΔKeffth are used to calculate the initial defect size or initial flaw size. The value of ΔKeffth is determined by extrapolating the crack propagation rate curves. Instead of using the fatigue limit determined by the fatigue strength at a specific fatigue life, the fatigue limit is selected based on the horizontal tendency of the S-N curve. The calculated fatigue lives are compared to the experimental data of two different alloys. The predicted S-N curves agree well with the test data. In addition, the prediction results are compared with those calculated using the FASTRAN code. Results indicate that the proposed life prediction algorithm is simple and efficient.
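
    The life integration step described above can be illustrated with a plain Paris-law crack-growth model standing in for the modified NASGRO equation; all material constants below are invented for the sketch.

```python
import math

# Paris-law stand-in for the modified NASGRO model; constants are illustrative.
C, m, Y = 3.0e-12, 3.0, 1.12            # Paris constants (MPa*sqrt(m) units), geometry factor
dsigma = 120.0                          # stress range, MPa
a0, ac, steps = 5.0e-5, 5.0e-3, 100000  # initial defect size and critical size, m

def dadN(a):
    dK = Y * dsigma * math.sqrt(math.pi * a)  # SIF range, MPa*sqrt(m)
    return C * dK ** m                        # crack growth per cycle, m

# Life = integral of da / (da/dN) from a0 to ac (midpoint rule).
da = (ac - a0) / steps
N = sum(da / dadN(a0 + (k + 0.5) * da) for k in range(steps))
print(f"predicted life: {N:.3g} cycles")
```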

  8. A simple algorithm for subregional striatal uptake analysis with partial volume correction in dopaminergic PET imaging

    International Nuclear Information System (INIS)

    Lue Kunhan; Lin Hsinhon; Chuang Kehshih; Kao Chihhao, K.; Hsieh Hungjen; Liu Shuhsin

    2014-01-01

    In positron emission tomography (PET) of the dopaminergic system, quantitative measurements of nigrostriatal dopamine function are useful for differential diagnosis. A subregional analysis of striatal uptake enables the diagnostic performance to be more powerful. However, the partial volume effect (PVE) induces an underestimation of the true radioactivity concentration in small structures. This work proposes a simple algorithm for subregional analysis of striatal uptake with partial volume correction (PVC) in dopaminergic PET imaging. The PVC algorithm analyzes the separate striatal subregions and takes into account the PVE based on the recovery coefficient (RC). The RC is defined as the ratio of the PVE-uncorrected to PVE-corrected radioactivity concentration, and is derived from a combination of the traditional volume of interest (VOI) analysis and the large VOI technique. The clinical studies, comprising 11 patients with Parkinson's disease (PD) and 6 healthy subjects, were used to assess the impact of PVC on the quantitative measurements. Simulations on a numerical phantom that mimicked realistic healthy and neurodegenerative situations were used to evaluate the performance of the proposed PVC algorithm. In both the clinical and the simulation studies, the striatal-to-occipital ratio (SOR) values for the entire striatum and its subregions were calculated with and without PVC. In the clinical studies, the SOR values in each structure (caudate, anterior putamen, posterior putamen, putamen, and striatum) were significantly higher by using PVC in contrast to those without. Among the PD patients, the SOR values in each structure and quantitative disease severity ratings were shown to be significantly related only when PVC was used. For the simulation studies, the average absolute percentage error of the SOR estimates before and after PVC were 22.74% and 1.54% in the healthy situation, respectively; those in the neurodegenerative situation were 20.69% and 2
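
    The recovery-coefficient correction itself is a one-line operation, since RC is defined as the ratio of the PVE-uncorrected to the PVE-corrected concentration; a sketch with invented uptake values and RCs:

```python
# RC = (PVE-uncorrected concentration) / (PVE-corrected concentration),
# so the corrected value is measured / RC. All numbers here are invented.
def corrected_sor(measured_subregion, measured_occipital, rc):
    """Striatal-to-occipital ratio after partial volume correction."""
    return (measured_subregion / rc) / measured_occipital

uptake = {"caudate": 2.1, "anterior_putamen": 2.4, "posterior_putamen": 1.9}
rc = {"caudate": 0.72, "anterior_putamen": 0.78, "posterior_putamen": 0.70}
occipital = 1.0  # reference (non-specific) uptake

for region in uptake:
    print(region, round(corrected_sor(uptake[region], occipital, rc[region]), 2))
```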

  9. Multiway simple cycle separators and I/O-efficient algorithms for planar graphs

    DEFF Research Database (Denmark)

    Arge, L.; Walderveen, Freek van; Zeh, Norbert

    2013-01-01

    We revisit I/O-efficient solutions to a number of fundamental problems on planar graphs: single-source shortest paths, topological sorting, and computing strongly connected components. Existing I/O-efficient solutions to these problems pay for I/O efficiency using excessive computation time in internal memory, thereby completely negating the performance gain achieved by minimizing the number of disk accesses. In this paper, we show how to make these algorithms simultaneously efficient in internal and external memory so they achieve I/O complexity O(sort(N)) and take O(N log N) time in internal memory, where sort(N) is the number of I/Os needed to sort N items in external memory. The key, and the main technical contribution of this paper, is a multiway version of Miller's simple cycle separator theorem. We show how to compute these separators in linear time in internal memory, and using O(sort(N)) I/Os in external memory.

  10. Simple Signal Detection Algorithm for 4+12+16 APSK in Satellite and Space Communications

    Directory of Open Access Journals (Sweden)

    Jaeyoon Lee

    2010-09-01

    Full Text Available A 4+12+16 amplitude phase shift keying (APSK) modulation outperforms other 32-APSK modulations in a nonlinear additive white Gaussian noise (AWGN) channel because of its intrinsic robustness against the AM/AM and AM/PM distortions caused by the nonlinear characteristics of a high-power amplifier. This modulation scheme has therefore been adopted in the digital video broadcasting - satellite 2 (DVB-S2) European standard, and it has been considered for high-rate transmission of telemetry data in deep space communications by the Consultative Committee for Space Data Systems (CCSDS), which provides a forum for discussion of common problems in the development and operation of space data systems. In this paper, we present an improved bits-to-symbol mapping scheme with a better bit error rate for a 4+12+16 APSK signal in a nonlinear AWGN channel and propose a simple signal detection algorithm for the 4+12+16 APSK based on the presented bit mapping.
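
    For orientation, a 4+12+16 APSK constellation and the brute-force minimum-distance detector it is usually compared against can be sketched as follows; the ring radii and phase offsets are DVB-S2-style assumptions (rate-dependent in the standard), and the paper's simplified detector exploits the ring structure rather than searching all 32 points:

```python
import numpy as np

def apsk_constellation(radii=(1.0, 2.84, 5.27), counts=(4, 12, 16),
                       offsets=(np.pi / 4, 0.0, np.pi / 16)):
    """Three concentric rings of 4, 12 and 16 phase-shift-keyed points."""
    points = []
    for r, n, ph in zip(radii, counts, offsets):
        points.extend(r * np.exp(1j * (2 * np.pi * np.arange(n) / n + ph)))
    return np.array(points)

def detect(received, constellation):
    """Brute-force minimum-Euclidean-distance decisions over all 32 points."""
    return np.argmin(np.abs(received[:, None] - constellation[None, :]), axis=1)

const = apsk_constellation()
tx = const[np.random.randint(0, 32, 1000)]
rx = tx + 0.3 * (np.random.randn(1000) + 1j * np.random.randn(1000))  # AWGN
decisions = detect(rx, const)
```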

  11. A phenomenological model for the structure-composition relationship of the high Tc cuprates based on simple chemical principles

    International Nuclear Information System (INIS)

    Alarco, J.A.; Talbot, P.C.

    2012-01-01

    A simple phenomenological model for the relationship between structure and composition of the high-Tc cuprates is presented. The model is based on two simple crystal chemistry principles: unit cell doping and charge balance within unit cells. These principles are inspired by key experimental observations of how the materials accommodate large deviations from stoichiometry. Significant HTSC properties can be explained consistently without any additional assumptions, while retaining valuable insight for geometric interpretation. Combining these two chemical principles with a review of Crystal Field Theory (CFT) or Ligand Field Theory (LFT), it becomes clear that the two oxidation states in the conduction planes (typically d⁸ and d⁹) belong to the most strongly divergent d-levels as a function of deformation from regular octahedral coordination. This observation offers a link to a range of coupling effects relating vibrations and spin waves through application of Hund’s rules. An indication of this model’s capacity to predict physical properties for HTSC is provided and will be elaborated in subsequent publications. Simple criteria for the relationship between structure and composition in HTSC systems may guide chemical syntheses within new material systems.

  12. Using a Simple Neural Network to Delineate Some Principles of Distributed Economic Choice

    Directory of Open Access Journals (Sweden)

    Pragathi P. Balasubramani

    2018-03-01

    Full Text Available The brain uses a mixture of distributed and modular organization to perform computations and generate appropriate actions. While the principles under which the brain might perform computations using modular systems have been more amenable to modeling, the principles by which the brain might make choices using distributed principles have not been explored. Our goal in this perspective is to delineate some of those distributed principles using a neural network method and use its results as a lens through which to reconsider some previously published neurophysiological data. To allow for direct comparison with our own data, we trained the neural network to perform binary risky choices. We find that value correlates are ubiquitous and are always accompanied by non-value information, including spatial information (i.e., no pure value signals). Evaluation, comparison, and selection were not distinct processes; indeed, value signals even in the earliest stages contributed directly, albeit weakly, to action selection. There was no place, other than at the level of action selection, at which dimensions were fully integrated. No units were specialized for specific offers; rather, all units encoded the values of both offers in an anti-correlated format, thus contributing to comparison. Individual network layers corresponded to stages in a continuous rotation from input to output space rather than to functionally distinct modules. While our network is likely to not be a direct reflection of brain processes, we propose that these principles should serve as hypotheses to be tested and evaluated for future studies.

  13. Using a Simple Neural Network to Delineate Some Principles of Distributed Economic Choice.

    Science.gov (United States)

    Balasubramani, Pragathi P; Moreno-Bote, Rubén; Hayden, Benjamin Y

    2018-01-01

    The brain uses a mixture of distributed and modular organization to perform computations and generate appropriate actions. While the principles under which the brain might perform computations using modular systems have been more amenable to modeling, the principles by which the brain might make choices using distributed principles have not been explored. Our goal in this perspective is to delineate some of those distributed principles using a neural network method and use its results as a lens through which to reconsider some previously published neurophysiological data. To allow for direct comparison with our own data, we trained the neural network to perform binary risky choices. We find that value correlates are ubiquitous and are always accompanied by non-value information, including spatial information (i.e., no pure value signals). Evaluation, comparison, and selection were not distinct processes; indeed, value signals even in the earliest stages contributed directly, albeit weakly, to action selection. There was no place, other than at the level of action selection, at which dimensions were fully integrated. No units were specialized for specific offers; rather, all units encoded the values of both offers in an anti-correlated format, thus contributing to comparison. Individual network layers corresponded to stages in a continuous rotation from input to output space rather than to functionally distinct modules. While our network is likely to not be a direct reflection of brain processes, we propose that these principles should serve as hypotheses to be tested and evaluated for future studies.

  14. A simple and efficient algorithm operating with linear time for MCEEG data compression.

    Science.gov (United States)

    Titus, Geevarghese; Sudhakar, M S

    2017-09-01

    Popularisation of electroencephalograph (EEG) signals in diversified fields has increased the need for devices capable of operating at lower power and storage requirements. This has led to a great deal of research in data compression that can address (a) low latency in the coding of the signal, (b) reduced hardware and software dependencies, (c) quantifying system anomalies, and (d) effective reconstruction of the compressed signal. This paper proposes a computationally simple and novel coding scheme named spatial pseudo codec (SPC) to achieve lossy to near-lossless compression of multichannel EEG (MCEEG). In the proposed system, MCEEG signals are initially normalized, followed by two parallel processes: one operating on the integer part and the other on the fractional part of the normalized data. The redundancies in the integer part are exploited using a spatial domain encoder, and the fractional part is coded as pseudo integers. The proposed method has been tested on a wide range of databases having variable sampling rates and resolutions. Results indicate that the algorithm has a good recovery performance, with an average percentage root mean square deviation (PRD) of 2.72 for an average compression ratio (CR) of 3.16. Furthermore, the algorithm has a complexity of only O(n), with an average encoding and decoding time per sample of 0.3 ms and 0.04 ms respectively. The performance of the algorithm is comparable with recent methods like the fast discrete cosine transform (fDCT) and tensor decomposition methods. The results validate the feasibility of the proposed compression scheme for practical MCEEG recording, archiving and brain-computer interfacing systems.
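
    The quoted figures rest on two standard metrics, which can be computed as below; this is the evaluation arithmetic, not the SPC codec itself.

```python
import numpy as np

def prd(x, x_rec):
    """Percentage root-mean-square difference between original and reconstruction."""
    return 100.0 * np.sqrt(np.sum((x - x_rec) ** 2) / np.sum(x ** 2))

def compression_ratio(original_bits, compressed_bits):
    return original_bits / compressed_bits

# Toy check with a synthetic signal and a slightly perturbed reconstruction:
x = np.sin(np.linspace(0, 20, 2048))
x_rec = x + 0.01 * np.random.randn(x.size)
print(f"PRD = {prd(x, x_rec):.2f}%, "
      f"CR = {compression_ratio(16 * x.size, 10 * x.size):.2f}")
```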

  15. A Simple and Universal Aerosol Retrieval Algorithm for Landsat Series Images Over Complex Surfaces

    Science.gov (United States)

    Wei, Jing; Huang, Bo; Sun, Lin; Zhang, Zhaoyang; Wang, Lunche; Bilal, Muhammad

    2017-12-01

    Operational aerosol optical depth (AOD) products are available at coarse spatial resolutions of several to tens of kilometers. These resolutions limit the application of such products for monitoring atmospheric pollutants at the city level. Therefore, a simple, universal, and high-resolution (30 m) Landsat aerosol retrieval algorithm over complex urban surfaces is developed. The surface reflectance is estimated from a combination of top-of-atmosphere reflectance at short-wave infrared (2.22 μm) and Landsat 4-7 surface reflectance climate data records over densely vegetated areas and bright areas. The aerosol type is determined using the historical aerosol optical properties derived from the local urban Aerosol Robotic Network (AERONET) site (Beijing). AERONET ground-based sun photometer AOD measurements from five sites located in urban and rural areas are used to validate the AOD retrievals. Terra MODerate resolution Imaging Spectroradiometer Collection (C) 6 AOD products (MOD04), including the dark target (DT), the deep blue (DB), and the combined DT and DB (DT&DB) retrievals at 10 km spatial resolution, are obtained for comparison purposes. Validation results show that the Landsat AOD retrievals at a 30 m resolution are well correlated with the AERONET AOD measurements (R2 = 0.932) and that approximately 77.46% of the retrievals fall within the expected error, with a low mean absolute error of 0.090 and a root-mean-square error of 0.126. Comparison results show that the Landsat AOD retrievals are overall better and less biased than the MOD04 AOD products, indicating that the new algorithm is robust and performs well in AOD retrieval over complex surfaces. The new algorithm can provide continuous and detailed spatial distributions of AOD during both low and high aerosol loadings.

  16. Implementation and testing of a simple data assimilation algorithm in the regional air pollution forecast model, DEOM

    DEFF Research Database (Denmark)

    Frydendall, Jan; Brandt, J.; Christensen, J. H.

    2009-01-01

    A simple data assimilation algorithm based on statistical interpolation has been developed and coupled to a long-range chemistry transport model, the Danish Eulerian Operational Model (DEOM), applied for air pollution forecasting at the National Environmental Research Institute (NERI), Denmark. In this paper, the algorithm and the results from experiments designed to find the optimal setup of the algorithm are described. The algorithm has been developed and optimized via eight different experiments where the results from different model setups have been tested against measurements from the EMEP (European Monitoring and Evaluation Programme) network. Improvements in the correlation coefficient in the range of 0.1 to 0.21 between the results from the reference and the optimal configuration of the data assimilation algorithm were found. The data assimilation algorithm will in the future be used in the operational THOR integrated air pollution forecast system, which includes the DEOM.

  17. CHESS-changing horizon efficient set search: A simple principle for multiobjective optimization

    DEFF Research Database (Denmark)

    Borges, Pedro Manuel F. C.

    2000-01-01

    This paper presents a new concept for generating approximations to the non-dominated set in multiobjective optimization problems. The approximation set A is constructed by solving several single-objective minimization problems in which a particular function D(A, z) is minimized. A new algorithm t...

  18. Principles of Stagewise Separation Process Calculations: A Simple Algebraic Approach Using Solvent Extraction.

    Science.gov (United States)

    Crittenden, Barry D.

    1991-01-01

    A simple liquid-liquid equilibrium (LLE) system involving a constant partition coefficient based on solute ratios is used to develop an algebraic understanding of multistage contacting in a first-year separation processes course. This algebraic approach to the LLE system is shown to be operable for the introduction of graphical techniques…
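
    A minimal sketch of the kind of stage-by-stage algebra involved, assuming the constant partition coefficient K applies to solute ratios and the carrier and solvent are immiscible (a cross-current cascade; all numbers invented): a solute balance over each stage, F·X(n-1) = F·X(n) + S·Y(n) with Y(n) = K·X(n), gives X(n) = X(n-1) / (1 + K·S/F).

```python
def crosscurrent_raffinate(x0, K, S_over_F, n_stages):
    """Raffinate solute ratio after n cross-current stages with fresh solvent
    at each stage: X_n = X_{n-1} / (1 + K S / F)."""
    x = x0
    for _ in range(n_stages):
        x = x / (1.0 + K * S_over_F)
    return x

# e.g. K = 4, solvent-to-feed ratio 0.5 per stage, three stages:
print(crosscurrent_raffinate(0.10, 4.0, 0.5, 3))  # ~0.0037
```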

  19. A Simple Algorithm for Predicting Bacteremia Using Food Consumption and Shaking Chills: A Prospective Observational Study.

    Science.gov (United States)

    Komatsu, Takayuki; Takahashi, Erika; Mishima, Kentaro; Toyoda, Takeo; Saitoh, Fumihiro; Yasuda, Akari; Matsuoka, Joe; Sugita, Manabu; Branch, Joel; Aoki, Makoto; Tierney, Lawrence; Inoue, Kenji

    2017-07-01

    Predicting the presence of true bacteremia based on clinical examination is unreliable. We aimed to construct a simple algorithm for predicting true bacteremia by using food consumption and shaking chills. A prospective multicenter observational study was conducted at three hospital centers in a large Japanese city. In total, 1,943 hospitalized patients aged 14 to 96 years who underwent blood culture acquisition between April 2013 and August 2014 were enrolled. Patients with anorexia-inducing conditions were excluded. We assessed the patients' oral food intake based on the meal immediately prior to the blood culture, defined as "normal food consumption" when >80% of the meal was consumed and "poor food consumption" when <80% was consumed. We also concurrently evaluated for a history of shaking chills. We calculated the statistical characteristics of food consumption and shaking chills for the presence of true bacteremia, and subsequently built the algorithm by using recursive partitioning analysis. Among 1,943 patients, 223 cases were true bacteremia. Among patients with normal food consumption and without shaking chills, the incidence of true bacteremia was 2.4% (13/552). Among patients with poor food consumption and shaking chills, the incidence of true bacteremia was 47.7% (51/107). The presence of poor food consumption had a sensitivity of 93.7% (95% confidence interval [CI], 89.4%-97.9%) for true bacteremia, and the absence of poor food consumption (ie, normal food consumption) had a negative likelihood ratio (LR) of 0.18 (95% CI, 0.17-0.19) for excluding true bacteremia. Conversely, the presence of shaking chills had a specificity of 95.1% (95% CI, 90.7%-99.4%) and a positive LR of 4.78 (95% CI, 4.56-5.00) for true bacteremia. A 2-item screening checklist for food consumption and shaking chills had excellent statistical properties as a brief screening instrument for predicting true bacteremia. © 2017 Society of Hospital Medicine
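
    The resulting 2-item checklist is simple enough to state as a decision rule; the incidences below are those reported in the abstract, and the intermediate strata are left open because the abstract does not quote them.

```python
def bacteremia_risk(food_intake_fraction, shaking_chills):
    """Return the observed true-bacteremia incidence for each checklist stratum."""
    poor_intake = food_intake_fraction < 0.8
    if not poor_intake and not shaking_chills:
        return 0.024   # 13/552 in the study
    if poor_intake and shaking_chills:
        return 0.477   # 51/107 in the study
    return None        # intermediate strata: incidence not quoted in the abstract

print(bacteremia_risk(0.9, False))  # low-risk patient
print(bacteremia_risk(0.5, True))   # high-risk patient
```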

  20. Exploring Simple Algorithms for Estimating Gross Primary Production in Forested Areas from Satellite Data

    Directory of Open Access Journals (Sweden)

    Ramakrishna R. Nemani

    2012-01-01

    Full Text Available Algorithms that use remotely-sensed vegetation indices to estimate gross primary production (GPP), a key component of the global carbon cycle, have gained a lot of popularity in the past decade. Yet despite the amount of research on the topic, the most appropriate approach is still under debate. As an attempt to address this question, we compared the performance of different vegetation indices from the Moderate Resolution Imaging Spectroradiometer (MODIS) in capturing the seasonal and the annual variability of GPP estimates from an optimal network of 21 FLUXNET forest tower sites. The tested indices include the Normalized Difference Vegetation Index (NDVI), Enhanced Vegetation Index (EVI), Leaf Area Index (LAI), and Fraction of Photosynthetically Active Radiation absorbed by plant canopies (FPAR). Our results indicated that single vegetation indices captured 50–80% of the variability of tower-estimated GPP, but no one index performed universally well in all situations. In particular, EVI outperformed the other MODIS products in tracking seasonal variations in tower-estimated GPP, but annual mean MODIS LAI was the best estimator of the spatial distribution of annual flux-tower GPP (GPP = 615 × LAI − 376, where GPP is in g C/m²/year). This simple algorithm rehabilitated earlier approaches linking ground measurements of LAI to flux-tower estimates of GPP and produced annual GPP estimates comparable to the MODIS 17 GPP product. As such, remote sensing-based estimates of GPP continue to offer a useful alternative to estimates from biophysical models, and the choice of the most appropriate approach depends on whether the estimates are required at annual or sub-annual temporal resolution.
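
    The annual-scale regression reported above can be applied directly:

```python
def annual_gpp(lai):
    """Annual GPP (g C / m^2 / year) from mean annual MODIS LAI,
    using the paper's regression GPP = 615 * LAI - 376."""
    return 615.0 * lai - 376.0

for lai in (1.0, 3.0, 5.0):
    print(lai, annual_gpp(lai))
```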

  1. A Simple MPPT Algorithm for Novel PV Power Generation System by High Output Voltage DC-DC Boost Converter

    DEFF Research Database (Denmark)

    Sanjeevikumar, Padmanaban; Grandi, Gabriele; Wheeler, Patrick

    2015-01-01

    This paper presents a novel topology for a photovoltaic (PV) power generation system with a simple maximum power point tracking (MPPT) algorithm in voltage operating mode. The power circuit consists of a high output voltage DC-DC boost converter which maximizes the output of the PV panel. Compared with traditional converters, the proposed scheme substantially improves the high output voltage using a simple MPPT closed-loop proportional-integral (P-I) controller, and requires only two sensors for feedback. The complete numerical model of the converter circuit, along with the PV MPPT algorithm, is developed in numerical simulation (Matlab/Simulink) software...
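
    As an illustration of voltage-mode MPPT, the sketch below uses a perturb-and-observe step to generate the voltage reference that a P-I loop would track; the paper specifies only a simple P-I closed loop with two feedback sensors, so the P&O logic and all numerical values here are assumptions.

```python
def perturb_and_observe(v, p, v_prev, p_prev, step=0.5):
    """Return the next PV voltage reference (V) for the P-I loop to track."""
    if p > p_prev:
        # power increased: keep perturbing in the same direction
        return v + step if v > v_prev else v - step
    else:
        # power decreased: reverse the perturbation direction
        return v - step if v > v_prev else v + step

# One iteration, with panel voltage and current from the two feedback sensors:
v_prev, p_prev = 17.0, 51.0
v, i = 17.5, 3.0
v_ref = perturb_and_observe(v, v * i, v_prev, p_prev)
print(v_ref)
```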

  2. A simple biota removal algorithm for 35 GHz cloud radar measurements

    Science.gov (United States)

    Kalapureddy, Madhu Chandra R.; Sukanya, Patra; Das, Subrata K.; Deshpande, Sachin M.; Pandithurai, Govindan; Pazamany, Andrew L.; Ambuj K., Jha; Chakravarty, Kaustav; Kalekar, Prasad; Krishna Devisetty, Hari; Annam, Sreenivas

    2018-03-01

    … promisingly simple in realization but powerful in performance, due to the flexibility in constraining, identifying and filtering out the biota and screening out the true cloud content, especially the CBL clouds. Therefore, the TEST algorithm is superior for screening out the low-level clouds that are strongly linked to the rain-making mechanism associated with the Indian Summer Monsoon region's CVS.

  3. A simple iterative independent component analysis algorithm for vibration source signal identification of complex structures

    Directory of Open Access Journals (Sweden)

    Dong-Sup Lee

    2015-01-01

    Full Text Available Independent Component Analysis (ICA), one of the blind source separation methods, can be applied to extract unknown source signals from received signals alone. This is accomplished by finding statistical independence of signal mixtures, and it has been successfully applied to myriad fields such as medical science and image processing. Nevertheless, there are inherent problems that have been reported when using this technique: instability and invalid ordering of the separated signals, particularly when a conventional ICA technique is used for vibratory source signal identification in complex structures. In this study, a simple iterative algorithm based on the conventional ICA has been proposed to mitigate these problems. The proposed method extracts more stable source signals in a valid order through an iterative reordering process on the extracted mixing matrix, reconstructing finally converged source signals by referring to the magnitudes of the correlation coefficients between the intermediately separated signals and the signals measured on or near the sources. In order to review the problems of the conventional ICA technique and to validate the proposed method, numerical analyses have been carried out for a virtual response model and a 30 m class submarine model. Moreover, in order to investigate the applicability of the proposed method to real problems in complex structures, an experiment has been carried out on a scaled submarine mockup. The results show that the proposed method can resolve the inherent problems of the conventional ICA technique.
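
    The correlation-based reordering step can be sketched as below using an off-the-shelf FastICA; the paper's method iterates on the mixing matrix itself, so this shows only the reordering idea, on a toy two-source mixture (a full implementation would also enforce a one-to-one assignment):

```python
import numpy as np
from sklearn.decomposition import FastICA

def reorder_by_reference(separated, references):
    """Match each separated signal to the reference it correlates with most.
    separated, references: arrays of shape (n_sources, n_samples)."""
    order = []
    for ref in references:
        corr = [abs(np.corrcoef(ref, s)[0, 1]) for s in separated]
        order.append(int(np.argmax(corr)))
    return separated[order]

t = np.linspace(0, 1, 2000)
sources = np.vstack([np.sin(2 * np.pi * 5 * t),
                     np.sign(np.sin(2 * np.pi * 3 * t))])
mixed = (np.array([[1.0, 0.5], [0.4, 1.0]]) @ sources).T   # (n_samples, n_mix)
separated = FastICA(n_components=2, random_state=0).fit_transform(mixed).T
aligned = reorder_by_reference(separated, sources)          # stable ordering
```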

  4. Simple Algorithms for Distributed Leader Election in Anonymous Synchronous Rings and Complete Networks Inspired by Neural Development in Fruit Flies.

    Science.gov (United States)

    Xu, Lei; Jeavons, Peter

    2015-11-01

    Leader election in anonymous rings and complete networks is a very practical problem in distributed computing. Previous algorithms for this problem are generally designed for a classical message passing model where complex messages are exchanged. However, the need to send and receive complex messages makes such algorithms less practical for some real applications. We present some simple synchronous algorithms for distributed leader election in anonymous rings and complete networks that are inspired by the development of the neural system of the fruit fly. Our leader election algorithms all assume that only one-bit messages are broadcast by nodes in the network and processors are only able to distinguish between silence and the arrival of one or more messages. These restrictions allow implementations to use a simpler message-passing architecture. Even with these harsh restrictions our algorithms are shown to achieve good time and message complexity both analytically and experimentally.
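
    A simulation of the one-bit model makes the idea concrete; the sketch below is an illustrative randomized variant (assuming a broadcaster can tell whether any other node transmitted in the same round), not the paper's exact algorithm:

```python
import random

def elect_leader(n, p=None, max_rounds=10_000):
    """One-bit leader election in a complete network: in each round every
    remaining candidate broadcasts with probability p; a candidate that was
    the round's only broadcaster wins."""
    p = p if p is not None else 1.0 / n
    candidates = list(range(n))
    for rnd in range(1, max_rounds + 1):
        senders = [i for i in candidates if random.random() < p]
        if len(senders) == 1:
            return senders[0], rnd      # unique broadcaster becomes leader
        if len(senders) > 1:
            candidates = senders        # silent nodes drop out of contention
    return None, max_rounds

leader, rounds = elect_leader(64)
print(f"leader {leader} elected after {rounds} round(s)")
```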

  5. Principle governing the simple determination of characteristic parameters for the supervision of vibratory behaviour

    International Nuclear Information System (INIS)

    Carre, J.C.; Epstein, A.

    1984-10-01

    The aim of supervision of the vibratory behaviour is the early detection of an anomaly, i.e. of any deformation liable to appear in the resonances of interest: a rise or fall in maximum power, a change in central frequency, a widening or narrowing of the half-width. This study shows that any deformation of the frequency curve can easily be detected from a spectral power density of the noise and a limited number of mean values and variances calculated on a few frequency bands. This simple method is easy to install in an on-line system and is able to process a great number of signals [fr]

  6. A simple algorithm for large-scale mapping of evergreen forests in tropical America, Africa and Asia

    Science.gov (United States)

    Xiangming Xiao; Chandrashekhar M. Biradar; Christina Czarnecki; Tunrayo Alabi; Michael Keller

    2009-01-01

    The areal extent and spatial distribution of evergreen forests in the tropical zones are important for the study of climate, carbon cycle and biodiversity. However, frequent cloud cover in the tropical regions makes mapping evergreen forests a challenging task. In this study we developed a simple and novel mapping algorithm that is based on the temporal profile...

  7. Derivation and validation of a simple exercise-based algorithm for prediction of genetic testing in relatives of LQTS probands

    NARCIS (Netherlands)

    Sy, Raymond W.; van der Werf, Christian; Chattha, Ishvinder S.; Chockalingam, Priya; Adler, Arnon; Healey, Jeffrey S.; Perrin, Mark; Gollob, Michael H.; Skanes, Allan C.; Yee, Raymond; Gula, Lorne J.; Leong-Sit, Peter; Viskin, Sami; Klein, George J.; Wilde, Arthur A.; Krahn, Andrew D.

    2011-01-01

    Genetic testing can diagnose long-QT syndrome (LQTS) in asymptomatic relatives of patients with an identified mutation; however, it is costly and subject to availability. The accuracy of a simple algorithm that incorporates resting and exercise ECG parameters for screening LQTS in asymptomatic

  8. GAtor: A First-Principles Genetic Algorithm for Molecular Crystal Structure Prediction.

    Science.gov (United States)

    Curtis, Farren; Li, Xiayue; Rose, Timothy; Vázquez-Mayagoitia, Álvaro; Bhattacharya, Saswata; Ghiringhelli, Luca M; Marom, Noa

    2018-04-10

    We present the implementation of GAtor, a massively parallel, first-principles genetic algorithm (GA) for molecular crystal structure prediction. GAtor is written in Python and currently interfaces with the FHI-aims code to perform local optimizations and energy evaluations using dispersion-inclusive density functional theory (DFT). GAtor offers a variety of fitness evaluation, selection, crossover, and mutation schemes. Breeding operators designed specifically for molecular crystals provide a balance between exploration and exploitation. Evolutionary niching is implemented in GAtor by using machine learning to cluster the dynamically updated population by structural similarity and then employing a cluster-based fitness function. Evolutionary niching promotes uniform sampling of the potential energy surface by evolving several subpopulations, which helps overcome initial pool biases and selection biases (genetic drift). The various settings offered by GAtor increase the likelihood of locating numerous low-energy minima, including those located in disconnected, hard to reach regions of the potential energy landscape. The best structures generated are re-relaxed and re-ranked using a hierarchy of increasingly accurate DFT functionals and dispersion methods. GAtor is applied to a chemically diverse set of four past blind test targets, characterized by different types of intermolecular interactions. The experimentally observed structures and other low-energy structures are found for all four targets. In particular, for Target II, 5-cyano-3-hydroxythiophene, the top ranked putative crystal structure is a Z' = 2 structure with P1̅ symmetry and a scaffold packing motif, which has not been reported previously.

  9. Implementation and testing of a simple data assimilation algorithm in the regional air pollution forecast model, DEOM

    Directory of Open Access Journals (Sweden)

    J. Frydendall

    2009-08-01

    Full Text Available A simple data assimilation algorithm based on statistical interpolation has been developed and coupled to a long-range chemistry transport model, the Danish Eulerian Operational Model (DEOM), applied for air pollution forecasting at the National Environmental Research Institute (NERI), Denmark. In this paper, the algorithm and the results from experiments designed to find the optimal setup of the algorithm are described. The algorithm has been developed and optimized via eight different experiments where the results from different model setups have been tested against measurements from the EMEP (European Monitoring and Evaluation Programme) network covering a half-year period, April–September 1999. The best performing setup of the data assimilation algorithm for surface ozone concentrations has been found, including the combination of determining the covariances using the Hollingsworth method, varying the correlation length according to the number of adjacent observation stations and applying the assimilation routine at three successive hours during the morning. Improvements in the correlation coefficient in the range of 0.1 to 0.21 between the results from the reference and the optimal configuration of the data assimilation algorithm were found. The data assimilation algorithm will in the future be used in the operational THOR integrated air pollution forecast system, which includes the DEOM.
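
    The statistical-interpolation building block underlying such a scheme is the standard optimal-interpolation update; in the sketch below, the background covariance B, observation covariance R and observation operator H are toy values, not DEOM's Hollingsworth-derived covariances.

```python
import numpy as np

def oi_update(xb, y, H, B, R):
    """Analysis xa = xb + K (y - H xb), with gain K = B H^T (H B H^T + R)^-1."""
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
    return xb + K @ (y - H @ xb)

xb = np.array([40.0, 55.0, 60.0])        # background ozone at 3 grid points
H = np.array([[1.0, 0.0, 0.0],           # two stations observe points 1 and 3
              [0.0, 0.0, 1.0]])
y = np.array([46.0, 52.0])               # station observations
# Toy background covariance with exponentially decaying correlation:
B = 25.0 * np.exp(-np.abs(np.subtract.outer(range(3), range(3))) / 2.0)
R = 4.0 * np.eye(2)                      # observation error covariance
print(oi_update(xb, y, H, B, R))
```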

  10. A Cubature-Principle-Assisted IMM-Adaptive UKF Algorithm for Maneuvering Target Tracking Caused by Sensor Faults

    Directory of Open Access Journals (Sweden)

    Huan Zhou

    2017-09-01

    Full Text Available To address the problem of decreased filtering precision in maneuvering target tracking caused by non-Gaussian distributions and sensor faults, we developed an efficient interacting multiple model-unscented Kalman filter (IMM-UKF) algorithm. By dividing the IMM-UKF into two links, the algorithm introduces the cubature principle to approximate the probability density of the random variable after the interaction; considering the external link of the IMM-UKF, this constitutes the cubature-principle-assisted IMM method (CPIMM) for solving the non-Gaussian problem, and leads to an adaptive matrix that balances the contribution of the state. The algorithm provides filtering solutions by considering the internal link of the IMM-UKF, through a new adaptive UKF algorithm (NAUKF) that addresses sensor faults. The proposed CPIMM-NAUKF is evaluated in a numerical simulation and two practical experiments, including one navigation experiment and one maneuvering target tracking experiment. The simulation and experiment results show that the proposed CPIMM-NAUKF has greater filtering precision and faster convergence than the existing IMM-UKF. The proposed algorithm achieves very good tracking performance, and will be effective and applicable in the field of maneuvering target tracking.

  11. A simple algorithm for calculating the area of an arbitrary polygon

    Directory of Open Access Journals (Sweden)

    K.R. Wijeweera

    2017-06-01

    Full Text Available Computing the area of an arbitrary polygon is a popular problem in pure mathematics. The two methods used are the Shoelace Method (SM) and the Orthogonal Trapezoids Method (OTM). In OTM, the polygon is partitioned into trapezoids by drawing either horizontal or vertical lines through its vertices; the area of each trapezoid is computed and the resulting areas are added up. In SM, a formula which is a generalization of Green’s Theorem for the discrete case is used. Most of the available systems are based on SM. Since an algorithm for OTM is not available in the literature, this paper proposes an algorithm for OTM along with an efficient implementation. Conversion of a pure mathematical method into an efficient computer program is not straightforward. In order to reduce the run time, minimal computation needs to be achieved; handling indeterminate forms and special cases separately can support this. On the other hand, precision error should also be avoided. A salient feature of the proposed algorithm is that it successfully handles these situations while achieving minimum run time. Experimental results of the proposed method are compared against those of the existing algorithm. The proposed algorithm also suggests a way to partition a polygon into orthogonal trapezoids, which is not an easy task. Additionally, the proposed algorithm uses only basic mathematical concepts, while Green’s theorem uses more advanced mathematical concepts. The proposed algorithm can be used when simplicity is more important than speed.
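
    The Shoelace Method referred to above is only a few lines:

```python
def shoelace_area(vertices):
    """Area of a simple polygon from an ordered vertex list; the absolute
    value makes the result independent of orientation."""
    n = len(vertices)
    s = 0.0
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]   # wrap around to close the polygon
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

print(shoelace_area([(0, 0), (4, 0), (4, 3), (0, 3)]))  # rectangle -> 12.0
```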

  12. Implementation on Landsat Data of a Simple Cloud Mask Algorithm Developed for MODIS Land Bands

    Science.gov (United States)

    Oreopoulos, Lazaros; Wilson, Michael J.; Varnai, Tamas

    2010-01-01

    This letter assesses the performance on Landsat-7 images of a modified version of a cloud masking algorithm originally developed for clear-sky compositing of Moderate Resolution Imaging Spectroradiometer (MODIS) images at northern mid-latitudes. While data from recent Landsat missions include measurements at thermal wavelengths, and such measurements are also planned for the next mission, thermal tests are not included in the suggested algorithm in its present form to maintain greater versatility and ease of use. To evaluate the masking algorithm we take advantage of the availability of manual (visual) cloud masks developed at USGS for the collection of Landsat scenes used here. As part of our evaluation we also include the Automated Cloud Cover Assessment (ACCA) algorithm that includes thermal tests and is used operationally by the Landsat-7 mission to provide scene cloud fractions, but no cloud masks. We show that the suggested algorithm can perform about as well as ACCA both in terms of scene cloud fraction and pixel-level cloud identification. Specifically, we find that the algorithm gives an error of 1.3% for the scene cloud fraction of 156 scenes, and a root mean square error of 7.2%, while it agrees with the manual mask for 93% of the pixels, figures very similar to those from ACCA (1.2%, 7.1%, 93.7%).

  13. A new model and simple algorithms for multi-label mumford-shah problems

    KAUST Repository

    Hong, Byungwoo

    2013-06-01

    In this work, we address the multi-label Mumford-Shah problem, i.e., the problem of jointly estimating a partitioning of the domain of the image and functions defined within regions of the partition. We create algorithms that are efficient, robust to undesirable local minima, and easy to implement. Our algorithms are formulated by slightly modifying the underlying statistical model from which the multi-label Mumford-Shah functional is derived. The advantage of this statistical model is that the underlying variables (the labels and the functions) are less coupled than in the original formulation, and the labels can be computed from the functions with more global updates. The resulting algorithms can be tuned to the desired level of locality of the solution: from fully global updates to more local updates. We demonstrate our algorithm on two applications: joint multi-label segmentation and denoising, and joint multi-label motion segmentation and flow estimation. We compare to the state-of-the-art in multi-label Mumford-Shah problems and show that we achieve more promising results. © 2013 IEEE.

  14. The Simple Mono-Canal Algorithm for the Temperature Estimating of ...

    African Journals Online (AJOL)

    The knowledge of the surface temperature is strongly required in several applications, for instance in agrometeorology, climatology and environmental studies. In this study we have developed a mono-channel algorithm to estimate the land surface temperature (Ts) from the thermal infrared (IR) channel of METEOSAT-7.

  15. A Scalable O(N) Algorithm for Large-Scale Parallel First-Principles Molecular Dynamics Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Osei-Kuffuor, Daniel [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Fattebert, Jean-Luc [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2014-01-01

    Traditional algorithms for first-principles molecular dynamics (FPMD) simulations only gain a modest capability increase from current petascale computers, due to their O(N³) complexity and their heavy use of global communications. To address this issue, we are developing a truly scalable O(N) complexity FPMD algorithm, based on density functional theory (DFT), which avoids global communications. The computational model uses a general nonorthogonal orbital formulation for the DFT energy functional, which requires knowledge of selected elements of the inverse of the associated overlap matrix. We present a scalable algorithm for approximately computing selected entries of the inverse of the overlap matrix, based on an approximate inverse technique, by inverting local blocks corresponding to principal submatrices of the global overlap matrix. The new FPMD algorithm exploits sparsity and uses nearest neighbor communication to provide a computational scheme capable of extreme scalability. Accuracy is controlled by the mesh spacing of the finite difference discretization, the size of the localization regions in which the electronic orbitals are confined, and a cutoff beyond which the entries of the overlap matrix can be omitted when computing selected entries of its inverse. We demonstrate the algorithm's excellent parallel scaling for up to O(100K) atoms on O(100K) processors, with a wall-clock time of O(1) minute per molecular dynamics time step.

  16. Genetic Algorithms and Local Search

    Science.gov (United States)

    Whitley, Darrell

    1996-01-01

    The first part of this presentation is a tutorial level introduction to the principles of genetic search and models of simple genetic algorithms. The second half covers the combination of genetic algorithms with local search methods to produce hybrid genetic algorithms. Hybrid algorithms can be modeled within the existing theoretical framework developed for simple genetic algorithms. An application of a hybrid to geometric model matching is given. The hybrid algorithm yields results that improve on the current state-of-the-art for this problem.
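
    A minimal hybrid (memetic) scheme of the kind described pairs a standard GA loop with a greedy local search applied to each offspring; the sketch below uses the toy ONEMAX fitness, and all GA settings are illustrative.

```python
import random

def onemax(bits):
    return sum(bits)  # toy fitness: count of 1-bits

def local_search(bits, fitness):
    """Greedy single-bit-flip hill climbing until no flip helps."""
    best, improved = fitness(bits), True
    while improved:
        improved = False
        for i in range(len(bits)):
            bits[i] ^= 1
            f = fitness(bits)
            if f > best:
                best, improved = f, True
            else:
                bits[i] ^= 1   # undo unhelpful flip
    return bits

def hybrid_ga(n=40, pop_size=20, gens=50, fitness=onemax):
    pop = [[random.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]               # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n)
            child = a[:cut] + b[cut:]                            # one-point crossover
            child = [g ^ (random.random() < 1 / n) for g in child]  # bit-flip mutation
            children.append(local_search(child, fitness))        # the "hybrid" step
        pop = parents + children
    return max(pop, key=fitness)

print(onemax(hybrid_ga()))  # typically reaches the optimum, 40
```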

  17. First Principles and Genetic Algorithm Studies of Lanthanide Metal Oxides for Optimal Fuel Cell Electrolyte Design

    Science.gov (United States)

    Ismail, Arif

    As the demand for clean and renewable energy sources continues to grow, much attention has been given to solid oxide fuel cells (SOFCs) due to their efficiency and low operating temperature. However, the components of SOFCs must still be improved before commercialization can be reached. Of particular interest is the solid electrolyte, which conducts oxygen ions from the cathode to the anode. Samarium-doped ceria (SDC) is the electrolyte of choice in most SOFCs today, due mostly to its high ionic conductivity at low temperatures. However, the underlying principles that contribute to high ionic conductivity in doped ceria remain unknown, and so it is difficult to improve upon the design of SOFCs. This thesis focuses on identifying the atomistic interactions in SDC which contribute to its favourable performance in the fuel cell. Unfortunately, information as basic as the structure of SDC has not yet been found due to the difficulty in experimentally characterizing and computationally modelling the system. For instance, to evaluate 10.3% SDC, which is close to the 11.1% concentration used in fuel cells, one must investigate 194 trillion configurations, due to the numerous ways of arranging the Sm ions and oxygen vacancies in the simulation cell. As an exhaustive search is clearly unfeasible, we develop a genetic algorithm (GA) to search the vast potential energy surface for the low-energy configurations, which will be most prevalent in the real material. With the GA, we investigate the structure of SDC for the first time at the DFT+U level of theory. Importantly, we find key differences between our results and prior calculations of this system which used less accurate methods, demonstrating the importance of accurately modelling the system. Overall, our simulation results for the structure of SDC agree with experimental measurements. We identify the structural significance of defects in the doped ceria lattice which contribute to oxygen ion conductivity. Thus

  18. A simple algorithm to estimate genetic variance in an animal threshold model using Bayesian inference

    Directory of Open Access Journals (Sweden)

    Heringstad Bjørg

    2010-07-01

    Full Text Available Abstract Background In the genetic analysis of binary traits with one observation per animal, animal threshold models frequently give biased heritability estimates. In some cases, this problem can be circumvented by fitting sire- or sire-dam models. However, these models are not appropriate in cases where individual records exist on parents. Therefore, the aim of our study was to develop a new Gibbs sampling algorithm for a proper estimation of genetic (co)variance components within an animal threshold model framework. Methods In the proposed algorithm, individuals are classified as either "informative" or "non-informative" with respect to genetic (co)variance components. The "non-informative" individuals are characterized by their Mendelian sampling deviations (deviations from the mid-parent mean) being completely confounded with a single residual on the underlying liability scale. For threshold models, residual variance on the underlying scale is not identifiable. Hence, the variance of fully confounded Mendelian sampling deviations cannot be identified either, but can be inferred from the between-family variation. In the new algorithm, breeding values are sampled as in a standard animal model using the full relationship matrix, but genetic (co)variance components are inferred from the sampled breeding values and relationships between "informative" individuals (usually parents) only. The latter is analogous to a sire-dam model (in cases with no individual records on the parents). Results When applied to simulated data sets, the standard animal threshold model failed to produce useful results since samples of genetic variance always drifted towards infinity, while the new algorithm produced proper parameter estimates essentially identical to the results from a sire-dam model (given the fact that no individual records exist for the parents). Furthermore, the new algorithm showed much faster Markov chain mixing properties for genetic parameters (similar to

  19. Connecting the dots : analysis, development and applications of the SimpleX algorithm

    NARCIS (Netherlands)

    Kruip, Chael

    2011-01-01

    The SimpleX radiative transfer method is based on the interpretation of photons as particles interacting on a natural scale: the local mean free path. In our method, light is transported along the lines of an unstructured Delaunay mesh that encodes this natural distance and represents the physical

  20. The Cardiac Safety Research Consortium electrocardiogram warehouse: thorough QT database specifications and principles of use for algorithm development and testing.

    Science.gov (United States)

    Kligfield, Paul; Green, Cynthia L; Mortara, Justin; Sager, Philip; Stockbridge, Norman; Li, Michael; Zhang, Joanne; George, Samuel; Rodriguez, Ignacio; Bloomfield, Daniel; Krucoff, Mitchell W

    2010-12-01

    This document examines the formation, structure, and principles guiding the use of electrocardiogram (ECG) data sets obtained during thorough QT studies that have been derived from the ECG Warehouse of the Cardiac Safety Research Consortium (CSRC). These principles are designed to preserve the fairness and public interest of access to these data, commensurate with the mission of the CSRC. The data sets comprise anonymized XML formatted digitized ECGs and descriptive variables from placebo and positive control arms of individual studies previously submitted on a proprietary basis to the US Food and Drug Administration by pharmaceutical sponsors. Sponsors permit the release of these studies into the public domain through the CSRC on behalf of the Food and Drug Administration's Critical Path Initiative and public health interest. For algorithm research protocols submitted to and approved by CSRC, unblinded "training" ECG data sets are provided for algorithm development and for initial evaluation, whereas separate blinded "testing" data sets are used for formal algorithm evaluation in cooperation with the CSRC according to methods detailed in this document. Copyright © 2010 Mosby, Inc. All rights reserved.

  1. Energy Management through Heat Integration: a Simple Algorithmic Approach for Introducing Pinch Analysis

    Directory of Open Access Journals (Sweden)

    Nasser A. Al-Azri

    2015-12-01

    Full Text Available Pinch analysis is a methodology used for minimizing energy and material consumption in engineering processes. It features the identification of the pinch point and of the minimum external resources. Two established approaches are used to identify these features: the graphical approach and the algebraic method, both of which are time-consuming and susceptible to human and calculation errors when used for a large number of process streams. This paper presents an algorithmic procedure for heat integration based on the algebraic approach. The procedure is explained in a didactic manner to introduce pinch analysis to students and novice researchers in the field. Matlab code is presented, which is also intended for developing a Matlab toolbox for process integration.
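
    The algebraic core being automated here is the problem-table cascade: shift stream temperatures by ΔTmin/2, cascade the interval heat balances, and read off the minimum utilities and the (shifted) pinch temperature. The sketch below uses a standard textbook four-stream example, not the paper's data.

```python
def problem_table(streams, dtmin=20.0):
    """streams: list of (T_supply, T_target, CP); hot streams have Ts > Tt."""
    shifted = []
    for ts, tt, cp in streams:
        shift = -dtmin / 2 if ts > tt else dtmin / 2   # shift hot down, cold up
        shifted.append((ts + shift, tt + shift, cp))
    bounds = sorted({t for s in shifted for t in s[:2]}, reverse=True)
    cascade, heat = [0.0], 0.0
    for hi, lo in zip(bounds, bounds[1:]):
        cp_net = sum(cp if ts > tt else -cp            # hot supplies, cold demands
                     for ts, tt, cp in shifted
                     if min(ts, tt) <= lo and max(ts, tt) >= hi)
        heat += cp_net * (hi - lo)                     # interval surplus/deficit
        cascade.append(heat)
    qh_min = -min(min(cascade), 0.0)                   # hot utility closes worst deficit
    feasible = [qh_min + h for h in cascade]
    return qh_min, feasible[-1], bounds[feasible.index(min(feasible))]

# Textbook four-stream example (dTmin = 20): expect QH = 107.5, QC = 40,
# shifted pinch at 80 (hot-side 90 / cold-side 70).
streams = [(150, 60, 2.0), (90, 60, 8.0), (20, 125, 2.5), (25, 100, 3.0)]
print(problem_table(streams))
```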

  2. A simple but efficient voice activity detection algorithm through Hilbert transform and dynamic threshold for speech pathologies

    Science.gov (United States)

    Ortiz P., D.; Villa, Luisa F.; Salazar, Carlos; Quintero, O. L.

    2016-04-01

    A simple but efficient voice activity detector based on the Hilbert transform and a dynamic threshold is presented, to be used in the pre-processing of audio signals. The algorithm that defines the dynamic threshold is a modification of a convex combination found in the literature. This scheme allows the detection of prosodic and silence segments in speech in the presence of non-ideal conditions such as spectrally overlapped noise. The present work shows preliminary results on a database built from political speeches. The tests were performed by adding artificial noise, as well as natural noise, to the audio signals, and several algorithms are compared. The results will be extrapolated to the field of adaptive filtering of monophonic signals and the analysis of speech pathologies in future work.
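
    An envelope-plus-dynamic-threshold detector in this spirit can be sketched as follows; the convex-combination weight and the framing parameters are assumptions, not the paper's exact rule.

```python
import numpy as np
from scipy.signal import hilbert

def vad(x, fs, frame_ms=20, lam=0.85):
    """Frame-level voice activity decisions from the Hilbert envelope and a
    dynamic threshold formed as a convex combination of the envelope extremes."""
    frame = int(fs * frame_ms / 1000)
    env = np.abs(hilbert(x))                          # analytic-signal envelope
    n_frames = len(x) // frame
    e = env[: n_frames * frame].reshape(n_frames, frame).mean(axis=1)
    thr = lam * e.min() + (1 - lam) * e.max()         # signal-derived threshold
    return e > thr                                    # True = speech frame

fs = 8000
t = np.arange(fs) / fs
x = np.where(t < 0.5, 0.01, 1.0) * np.sin(2 * np.pi * 220 * t)  # quiet, then loud
print(vad(x, fs).astype(int))
```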

  3. Predicting the onset of hazardous alcohol drinking in primary care: development and validation of a simple risk algorithm.

    Science.gov (United States)

    Bellón, Juan Ángel; de Dios Luna, Juan; King, Michael; Nazareth, Irwin; Motrico, Emma; GildeGómez-Barragán, María Josefa; Torres-González, Francisco; Montón-Franco, Carmen; Sánchez-Celaya, Marta; Díaz-Barreiros, Miguel Ángel; Vicens, Catalina; Moreno-Peral, Patricia

    2017-04-01

    Little is known about the risk of progressing to hazardous alcohol use in abstinent or low-risk drinkers. To develop and validate a simple brief risk algorithm for the onset of hazardous alcohol drinking (HAD) over 12 months for use in primary care. Prospective cohort study in 32 health centres from six Spanish provinces, with evaluations at baseline, 6 months, and 12 months. Forty-one risk factors were measured and multilevel logistic regression and inverse probability weighting were used to build the risk algorithm. The outcome was new occurrence of HAD during the study, as measured by the AUDIT. From the lists of 174 GPs, 3954 adult abstinent or low-risk drinkers were recruited. The 'predictAL-10' risk algorithm included just nine variables (10 questions): province, sex, age, cigarette consumption, perception of financial strain, having ever received treatment for an alcohol problem, childhood sexual abuse, AUDIT-C, and interaction AUDIT-C*Age. The c-index was 0.886 (95% CI = 0.854 to 0.918). The optimal cutoff had a sensitivity of 0.83 and specificity of 0.80. Excluding childhood sexual abuse from the model (the 'predictAL-9'), the c-index was 0.880 (95% CI = 0.847 to 0.913), sensitivity 0.79, and specificity 0.81. There was no statistically significant difference between the c-indexes of predictAL-10 and predictAL-9. The predictAL-10/9 is a simple and internally valid risk algorithm to predict the onset of hazardous alcohol drinking over 12 months in primary care attendees; it is a brief tool that is potentially useful for primary prevention of hazardous alcohol drinking. © British Journal of General Practice 2017.

  4. A simple but usually fast branch-and-bound algorithm for the capacitated facility location problem

    DEFF Research Database (Denmark)

    Görtz, Simon; Klose, Andreas

    2012-01-01

    This paper presents a simple branch-and-bound method based on Lagrangean relaxation and subgradient optimization for solving large instances of the capacitated facility location problem (CFLP) to optimality. To guess a primal solution to the Lagrangean dual, we average solutions to the Lagrangean subproblem. Branching decisions are then based on this estimated (fractional) primal solution. Extensive numerical results reveal that the method is much faster and more robust than other state-of-the-art methods for solving the CFLP exactly.

  5. Accurate and Scalable O(N) Algorithm for First-Principles Molecular-Dynamics Computations on Large Parallel Computers

    Energy Technology Data Exchange (ETDEWEB)

    Osei-Kuffuor, Daniel [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Fattebert, Jean-Luc [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2014-01-01

    We present the first truly scalable first-principles molecular dynamics algorithm with O(N) complexity and controllable accuracy, capable of simulating systems with finite band gaps of sizes that were previously impossible with this degree of accuracy. By avoiding global communications, we provide a practical computational scheme capable of extreme scalability. Accuracy is controlled by the mesh spacing of the finite difference discretization, the size of the localization regions in which the electronic wave functions are confined, and a cutoff beyond which the components of the overlap matrix can be omitted when computing selected elements of its inverse. We demonstrate the algorithm's excellent parallel scaling for up to 101 952 atoms on 23 328 processors, with a wall-clock time of the order of 1 min per molecular dynamics time step and numerical error on the forces of less than 7×10⁻⁴ Ha/Bohr.

  6. A simple algorithm to estimate the effective regional atmospheric parameters for thermal-inertia mapping

    Science.gov (United States)

    Watson, K.; Hummer-Miller, S.

    1981-01-01

    A method based solely on remote sensing data has been developed to estimate those meteorological effects which are required for thermal-inertia mapping. It assumes that the atmospheric fluxes are spatially invariant and that the solar, sky, and sensible heat fluxes can be approximated by a simple mathematical form. Coefficients are determined by a least-squares fit of observational data to our thermal model. A comparison between field measurements and the model-derived flux shows the type of agreement which can be achieved. An analysis of the limitations of the method is also provided.

  7. Absorption cooling sources atmospheric emissions decrease by implementation of simple algorithm for limiting temperature of cooling water

    Science.gov (United States)

    Wojdyga, Krzysztof; Malicki, Marcin

    2017-11-01

    The constant drive to improve energy efficiency forces activities aimed at reducing energy consumption and hence the amount of pollutant emissions to the atmosphere. Cooling demand, both for air-conditioning and process cooling, plays an increasingly important role in the summer balance of the Polish electricity generation and distribution system. In recent years, demand for electricity during the summer months has been increasing steadily and significantly, leading to deficits in energy availability during particularly hot periods. This drives growing importance of, and interest in, trigeneration power sources and heat recovery systems producing chilled water. The key component of such a system is a thermally driven chiller, mostly absorption, based on a lithium-bromide and water mixture. Absorption cooling systems also exist in Poland as stand-alone systems, supplied with heat from various sources, generated solely for them or recovered as waste or otherwise unused energy. The publication presents a simple algorithm, designed to reduce the amount of heat supplied to absorption chillers producing chilled water for air conditioning by reducing the temperature of the cooling water, and assesses its impact on decreasing emissions of harmful substances into the atmosphere. The scale of the environmental advantages has been rated for specific sources, which enabled an evaluation and estimate of implementing the simple algorithm at existing national sources.

  8. Absorption cooling sources atmospheric emissions decrease by implementation of simple algorithm for limiting temperature of cooling water

    Directory of Open Access Journals (Sweden)

    Wojdyga Krzysztof

    2017-01-01

    Full Text Available The constant drive to improve energy efficiency forces activities aimed at reducing energy consumption and hence the amount of pollutant emissions to the atmosphere. Cooling demand, both for air-conditioning and process cooling, plays an increasingly important role in the summer balance of the Polish electricity generation and distribution system. In recent years, demand for electricity during the summer months has been increasing steadily and significantly, leading to deficits in energy availability during particularly hot periods. This drives growing importance of, and interest in, trigeneration power sources and heat recovery systems producing chilled water. The key component of such a system is a thermally driven chiller, mostly absorption, based on a lithium-bromide and water mixture. Absorption cooling systems also exist in Poland as stand-alone systems, supplied with heat from various sources, generated solely for them or recovered as waste or otherwise unused energy. The publication presents a simple algorithm, designed to reduce the amount of heat supplied to absorption chillers producing chilled water for air conditioning by reducing the temperature of the cooling water, and assesses its impact on decreasing emissions of harmful substances into the atmosphere. The scale of the environmental advantages has been rated for specific sources, which enabled an evaluation and estimate of implementing the simple algorithm at existing national sources.

  9. Enhancing a Simple MODIS Cloud Mask Algorithm for the Landsat Data Continuity Mission

    Science.gov (United States)

    Wilson, Michael J.; Oreopoulos, Lazarous

    2011-01-01

    The presence of clouds in images acquired by the Landsat series of satellites is usually an undesirable, but generally unavoidable, fact. With the emphasis of the program being on land imaging, the suspended liquid/ice particles of which clouds are made fully or partially obscure the desired observational target. Knowing the amount and location of clouds in a Landsat scene is therefore valuable information for scene selection, for making clear-sky composites from multiple scenes, and for scheduling future acquisitions. The two instruments in the upcoming Landsat Data Continuity Mission (LDCM) will include new channels that will enhance our ability to detect high clouds, which are often also thin in the sense that a large fraction of solar radiation can pass through them. This work studies the potential impact of these new channels on enhancing LDCM's cloud detection capabilities compared to previous Landsat missions. We revisit a previously published scheme for cloud detection and add new tests to capture more of the thin clouds that are harder to detect with the more limited arsenal of channels. Since there are no Landsat data yet that include the new LDCM channels, we resort to data from another instrument, MODIS, which has these bands as well as the other bands of LDCM, to test the capabilities of our new algorithm. By comparing our revised scheme's performance against that of the official MODIS cloud detection scheme, we conclude that the new scheme performs better than the earlier one, which was not very good at thin cloud detection.

  10. Unit commitment problem of thermal generation units for short term operational planning using simple genetic algorithm

    International Nuclear Information System (INIS)

    Ahmad, A.; Malik, T.N.; Ahmad, A.

    2006-01-01

    The unit commitment (UC) problem plays a major role in power systems, since improved UC schedules may save electric utilities millions of dollars per year in production cost. The objective of optimal commitment is to determine the on/off states of the units in the system so as to meet the load and spinning reserve requirements at each time period at minimum overall generation cost, while satisfying various constraints. Much research has been done in this field over the past three decades. With the development of modern power systems, it is not practical to use classical approaches to solve the large-scale UC problem, owing to the limitations of mathematical programming methods; AI-based techniques are used instead. GA is an adaptive search method for an optimal or near-optimal commitment order: it can find a sub-optimal solution that is very close to the global optimum and can meet the demands of engineering applications. A GA implementation using the standard reproduction, crossover, and mutation operators has been used to obtain the solution. The GA approach to UC consists of repeating the process of economic dispatch and minimizing the objective function for various unit combinations over a population of feasible solutions. The GA-based UC solution has been programmed and implemented in C++. The performance of the GA approach is initially tested as a case study for a 3-generator system and a 24-hour load pattern. This paper applies the GA to the UC scheduling problem, illustrates details of the genetic algorithm's performance, and proposes the suitability of this new approach to the solution of the UC problem. (author)
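
    A minimal genetic-algorithm sketch for a toy commitment problem follows, in Python rather than the paper's C++. The unit data, load pattern, proportional "economic dispatch", and penalty handling of constraints are all simplifying assumptions, not the paper's implementation.

      import random

      # Toy data: 3 units (Pmin, Pmax, cost per MWh, startup cost) and a 24 h
      # load; all numbers are invented for illustration.
      UNITS = [
          (50, 200, 20.0, 500.0),
          (30, 150, 25.0, 300.0),
          (10, 80, 40.0, 100.0),
      ]
      LOAD = [120, 110, 100, 100, 110, 140, 180, 220, 260, 280, 290, 300,
              300, 290, 280, 270, 260, 270, 290, 280, 240, 200, 160, 130]
      H, N = len(LOAD), len(UNITS)

      def cost(sched):
          """Fuel plus startup cost; a heavy penalty marks infeasible hours."""
          total, prev = 0.0, [0] * N
          for t in range(H):
              on = sched[t * N:(t + 1) * N]
              cap = sum(u[1] for u, o in zip(UNITS, on) if o)
              low = sum(u[0] for u, o in zip(UNITS, on) if o)
              if cap < LOAD[t] or low > LOAD[t]:
                  total += 1e6          # committed units cannot carry the load
              else:
                  # crude stand-in for economic dispatch: share load by Pmax
                  total += sum(u[2] * LOAD[t] * u[1] / cap
                               for u, o in zip(UNITS, on) if o)
              total += sum(u[3] for u, o, p in zip(UNITS, on, prev) if o and not p)
              prev = on
          return total

      def ga(pop_size=60, gens=200, pm=0.02):
          pop = [[random.randint(0, 1) for _ in range(H * N)] for _ in range(pop_size)]
          for _ in range(gens):
              pop.sort(key=cost)
              nxt = pop[:10]                          # elitism
              while len(nxt) < pop_size:
                  a, b = random.sample(pop[:30], 2)   # parents from the fitter half
                  cut = random.randrange(1, H * N)    # one-point crossover
                  child = [g ^ (random.random() < pm) # bit-flip mutation
                           for g in a[:cut] + b[cut:]]
                  nxt.append(child)
              pop = nxt
          return min(pop, key=cost)

      best = ga()
      print("best schedule cost:", cost(best))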

  11. Development of a Two-Phase Flow Analysis Code based on an Unstructured-Mesh SIMPLE Algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jong Tae; Park, Ik Kyu; Cho, Heong Kyu; Yoon, Han Young; Kim, Kyung Doo; Jeong, Jae Jun

    2008-09-15

    For analyses of multi-phase flows in a water-cooled nuclear power plant, a three-dimensional SIMPLE-algorithm based hydrodynamic solver CUPID-S has been developed. As governing equations, it adopts a two-fluid three-field model for the two-phase flows. The three fields represent a continuous liquid, dispersed droplets, and vapour. The governing equations are discretized by a finite volume method on an unstructured grid to handle the geometrical complexity of nuclear reactors. The phasic momentum equations are coupled and solved with a sparse block Gauss-Seidel matrix solver to increase numerical stability. The pressure correction equation, derived by summing the phasic volume fraction equations, is applied on the unstructured mesh in the context of a cell-centered co-located scheme. This paper presents the numerical method and preliminary results of the calculations.

  12. A simple algorithm to retrieve soil moisture and vegetation biomass using passive microwave measurements over crop fields

    International Nuclear Information System (INIS)

    Wigneron, J.P.; Chanzy, A.; Calvet, J.C.; Bruguier, N.

    1995-01-01

    A simple algorithm to retrieve soil moisture and vegetation water content from passive microwave measurements is analyzed in this study. The approach is based on a zeroth-order solution of the radiative transfer equations in a vegetation layer. In this study, the single scattering albedo accounts for scattering effects and two parameters account for the dependence of the optical thickness on polarization, incidence angle, and frequency. The algorithm requires only ancillary information about crop type and surface temperature. Retrievals of the surface parameters from two radiometric data sets acquired over a soybean and a wheat crop have been attempted. The model parameters have been fitted in order to achieve the best match between measured and retrieved surface data. The results of the inversion are analyzed for different configurations of the radiometric observations: one or several look angles, L-band, C-band, or both L- and C-band. Sensitivity of the retrievals to the best-fit values of the model parameters has also been investigated. The best configurations, requiring simultaneous measurements at L- and C-band, produce retrievals of soil moisture and biomass with a 15% estimated precision (about 0.06 m³/m³ for soil moisture and 0.3 kg/m² for biomass) and exhibit a limited sensitivity to the best-fit parameters. (author)

  13. Simple Algorithms to Calculate Asymptotic Null Distributions of Robust Tests in Case-Control Genetic Association Studies in R

    Directory of Open Access Journals (Sweden)

    Wing Kam Fung

    2010-02-01

    Full Text Available The case-control study is an important design for testing association between genetic markers and a disease. The Cochran-Armitage trend test (CATT) is one of the most commonly used statistics for the analysis of case-control genetic association studies. The asymptotically optimal CATT can be used when the underlying genetic model (mode of inheritance) is known. However, for most complex diseases, the underlying genetic models are unknown. Thus, tests robust to genetic model misspecification are preferable to the model-dependent CATT. Two robust tests, MAX3 and the genetic model selection (GMS) test, were recently proposed. Their asymptotic null distributions are often obtained by Monte Carlo simulations, because they either have not been fully studied or involve multiple integrations. In this article, we study how the components of each robust statistic are correlated, and find a linear dependence among the components. Using this new finding, we propose simple algorithms to calculate the asymptotic null distributions of MAX3 and GMS, which greatly reduce the computing intensity. Furthermore, we have developed the R package Rassoc, implementing the proposed algorithms to calculate the empirical and asymptotic p values for MAX3 and GMS as well as other commonly used tests in case-control association studies. For illustration, Rassoc is applied to the analysis of case-control data for the 17 most significant SNPs reported in four genome-wide association studies.
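
    As an illustration of the kind of statistic involved, the following Python sketch computes the asymptotic Cochran-Armitage trend test as a score test with additive scores; it does not implement the Rassoc package's MAX3 or GMS algorithms, and the genotype counts are invented.

      import math

      def catt_z(cases, controls, scores=(0.0, 0.5, 1.0)):
          """Cochran-Armitage trend test Z for a 2x3 genotype table.
          cases/controls: counts of (AA, Aa, aa); additive scores by default."""
          n = [c + d for c, d in zip(cases, controls)]
          N, R = sum(n), sum(cases)
          p = R / N                                   # overall case fraction
          # Score statistic and its variance under H0 (column totals fixed).
          u = sum(x * (r - nk * p) for x, r, nk in zip(scores, cases, n))
          s1 = sum(x * nk for x, nk in zip(scores, n))
          s2 = sum(x * x * nk for x, nk in zip(scores, n))
          var = p * (1 - p) * (s2 - s1 * s1 / N)
          return u / math.sqrt(var)

      # Illustrative case-control genotype counts.
      z = catt_z(cases=(40, 90, 70), controls=(60, 100, 40))
      p_two_sided = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
      print(z, p_two_sided)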

  14. A simple calculation algorithm to separate high-resolution CH4 flux measurements into ebullition and diffusion-derived components

    Science.gov (United States)

    Hoffmann, Mathias; Schulz-Hanke, Maximilian; Garcia Alba, Joana; Jurisch, Nicole; Hagemann, Ulrike; Sachs, Torsten; Sommer, Michael; Augustin, Jürgen

    2016-04-01

    Processes driving methane (CH4) emissions in wetland ecosystems are highly complex. In particular, the separation of CH4 emissions into ebullition- and diffusion-derived flux components, a prerequisite for mechanistic process understanding and the identification of potential environmental drivers, is rather challenging. We present a simple calculation algorithm, based on an adaptive R-script, which separates open-water, closed-chamber CH4 flux measurements into diffusion- and ebullition-derived components. Hence, flux-component-specific dynamics are revealed and potential environmental drivers identified. Flux separation is based on a statistical approach, using ebullition-related sudden concentration changes obtained during high-resolution CH4 concentration measurements. By applying the lower and upper quartile ± the interquartile range (IQR) as a variable threshold, diffusion-dominated periods of the flux measurement are filtered. Subsequently, flux calculation and separation are performed. The algorithm was verified in a laboratory experiment and tested under field conditions, using flux measurement data (July to September 2013) from a flooded, former fen grassland site. Erratic ebullition events contributed 46% of total CH4 emissions, which is comparable to values reported in the literature. Additionally, a shift in the diurnal trend of diffusive fluxes throughout the measurement period, driven by the water temperature gradient, was revealed.
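
    A minimal Python sketch of the quartile ± IQR filtering idea follows, assuming a chamber concentration series at constant time spacing. The toy data and the use of the median for the diffusive slope are simplifications of the authors' R script.

      import numpy as np

      def separate_fluxes(conc, dt=1.0):
          """Split a chamber CH4 concentration series into diffusion and
          ebullition components; returns (diffusive rate, ebullition jump sum)."""
          d = np.diff(conc)                          # per-step concentration changes
          q1, q3 = np.percentile(d, [25, 75])
          iqr = q3 - q1
          # Quartile +/- IQR as the variable threshold, as in the abstract.
          calm = (d >= q1 - iqr) & (d <= q3 + iqr)   # diffusion-dominated steps
          diffusion_rate = np.median(d[calm]) / dt   # robust diffusive slope
          ebullition = d[~calm].sum()                # abrupt jumps from bubbles
          return diffusion_rate, ebullition

      # Toy series: slow linear rise plus two bubble events.
      rng = np.random.default_rng(0)
      c = np.cumsum(0.5 + 0.05 * rng.standard_normal(300))
      c[100:] += 8.0
      c[220:] += 5.0
      print(separate_fluxes(c))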

  15. Algorithms

    Indian Academy of Sciences (India)

    have been found in Vedic Mathematics which are dated much before Euclid's algorithm. A programming language is used to describe an algorithm for execution on a computer. An algorithm expressed using a programming language is called a program. From activities 1-3, we can observe that: • Each activity is a command.

  16. Algorithms

    Indian Academy of Sciences (India)

    algorithms such as synthetic (polynomial) division have been found in Vedic Mathematics which are dated much before Euclid's algorithm. A programming language ... [Figure 2: symbols used in the flowchart language to represent Assignment (e.g., x := sin(theta)), Read (e.g., Read A, B, C) and Print (e.g., Print x, y, z).]

  17. Algorithms

    Indian Academy of Sciences (India)

    In the previous articles, we have discussed various common data-structures such as arrays, lists, queues and trees and illustrated the widely used algorithm design paradigm referred to as 'divide-and-conquer'. Although there has been a large effort in realizing efficient algorithms, there are not many universally accepted ...

  18. Demonstrating Principles of Spectrophotometry by Constructing a Simple, Low-Cost, Functional Spectrophotometer Utilizing the Light Sensor on a Smartphone

    Science.gov (United States)

    Hosker, Bill S.

    2018-01-01

    A highly simplified variation on the do-it-yourself spectrophotometer using a smartphone's light sensor as a detector and an app to calculate and display absorbance values was constructed and tested. This simple version requires no need for electronic components or postmeasurement spectral analysis. Calibration graphs constructed from two…

  19. Management of temporary urinary retention after arthroscopic knee surgery in low-dose spinal anesthesia: development of a simple algorithm.

    Science.gov (United States)

    Luger, Thomas J; Garoscio, Ivo; Rehder, Peter; Oberladstätter, Jürgen; Voelckel, Wolfgang

    2008-06-01

    In practice, trauma and orthopedic surgery during spinal anesthesia are often performed with routine urethral catheterization of the bladder to prevent overdistention of the bladder. However, use of a catheter has inherent risks. Ultrasound examination of the bladder (Bladderscan) can precisely determine the bladder volume. Thus, the aim of this study was to identify parameters indicative of urinary retention after low-dose spinal anesthesia and to develop a simple algorithm for patient care. This prospective pilot study approved by the Ethics Committee enrolled 45 patients after obtaining their written informed consent. Patients who underwent arthroscopic knee surgery received low-dose spinal anesthesia with 1.4 ml 0.5% bupivacaine at level L3/L4. Bladder volume was measured by urinary bladder scanning at baseline, at the end of surgery, and up to 4 h later. The incidence of spontaneous urination versus catheterization was assessed and the relative risk for catheterization was calculated. The Mann-Whitney test, χ² test with Fisher's exact test, and the relative odds ratio were applied as appropriate (*P < 0.05). Patients with a bladder volume > 300 ml postoperatively had a 6.5-fold greater likelihood of urinary retention. In the management of patients with short-lasting spinal anesthesia for arthroscopic knee surgery we recommend monitoring bladder volume by Bladderscan instead of routine catheterization. Anesthesiologists or nurses under protocol should assess bladder volume preoperatively and at the end of surgery. If bladder volume is >300 ml, catheterization should be performed in the OR. Patients with a bladder volume of 500 ml.

  20. A simple methodology for characterization of germanium coaxial detectors by using Monte Carlo simulation and evolutionary algorithms.

    Science.gov (United States)

    Guerra, J G; Rubiano, J G; Winter, G; Guerra, A G; Alonso, H; Arnedo, M A; Tejera, A; Gil, J M; Rodríguez, R; Martel, P; Bolivar, J P

    2015-11-01

    The determination in a sample of the activity concentration of a specific radionuclide by gamma spectrometry requires knowledge of the full energy peak efficiency (FEPE) for the energy of interest. The difficulties related to the experimental calibration make it advisable to have alternative methods for FEPE determination, such as the simulation of the transport of photons in the crystal by the Monte Carlo method, which requires an accurate knowledge of the characteristics and geometry of the detector. The characterization process is mainly carried out by Canberra Industries Inc. using proprietary techniques and methodologies developed by that company. It is a costly procedure (due to shipping and to the cost of the process itself), and for some research laboratories an alternative in situ procedure can be very useful. The main goal of this paper is to find an alternative to this costly characterization process by establishing a method for optimizing the parameters that characterize the detector, through a computational procedure which can be reproduced at a standard research lab. This method consists in the determination of the detector geometric parameters by using Monte Carlo simulation in parallel with an optimization process, based on evolutionary algorithms, starting from a set of reference FEPEs determined experimentally or computationally. The proposed method has proven to be effective and simple to implement. It provides a set of characterization parameters which has been successfully validated for different source-detector geometries, and also for a wide range of environmental samples and certified materials. Copyright © 2015 Elsevier Ltd. All rights reserved.
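
    The structure of such a characterization loop can be sketched in Python as below. A smooth parametric surrogate stands in for the Monte Carlo photon-transport simulation that the method actually requires, and the parameter set, bounds, and evolutionary settings are invented for illustration; what is validated is the agreement of the fitted efficiencies with the reference set, mirroring the paper's validation by FEPE agreement.

      import numpy as np

      rng = np.random.default_rng(1)
      ENERGIES = np.array([60., 122., 344., 662., 1173., 1332.])  # keV, illustrative

      def fepe_model(params, E):
          """Surrogate forward model mapping detector parameters to FEPEs.
          In the paper this role is played by a Monte Carlo photon-transport
          simulation; a smooth analytic stand-in keeps the sketch runnable."""
          r, l, dead = params  # crystal radius, length, dead-layer thickness
          geom = r * r * l / (r * r * l + 50.0)        # crude size-driven term
          return geom * np.exp(-dead * 5.0 / E) * (E / 122.0) ** -0.3

      TRUE = np.array([2.5, 3.5, 0.4])                 # "unknown" detector
      REF = fepe_model(TRUE, ENERGIES)                 # reference efficiencies

      def fitness(p):
          return np.sum((fepe_model(p, ENERGIES) - REF) ** 2)

      # (mu + lambda)-style evolutionary loop over the geometric parameters.
      lo, hi = np.array([1.0, 1.0, 0.0]), np.array([5.0, 8.0, 2.0])
      pop = rng.uniform(lo, hi, size=(40, 3))
      for _ in range(300):
          pop = pop[np.argsort([fitness(p) for p in pop])]
          parents = pop[:10]
          kids = parents[rng.integers(0, 10, size=30)] + rng.normal(0, 0.05, (30, 3))
          pop = np.vstack([parents, np.clip(kids, lo, hi)])

      best = min(pop, key=fitness)
      print("fitted parameters:", best, "residual:", fitness(best))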

  1. Algorithms

    Indian Academy of Sciences (India)

    In the previous article of this series, we looked at simple data types and their representation in computer memory. The notion of a simple data type can be extended to denote a set of elements corresponding to one data item at a higher level. The process of structuring or grouping of the basic data elements is often referred ...

  2. Simple prostatectomy

    Science.gov (United States)

    ... Han M, Partin AW. Simple prostatectomy: open and robot-assisted laparoscopic approaches. In: Wein AJ, Kavoussi LR, ...

  3. A simple methodology for characterization of germanium coaxial detectors by using Monte Carlo simulation and evolutionary algorithms

    International Nuclear Information System (INIS)

    Guerra, J.G.; Rubiano, J.G.; Winter, G.; Guerra, A.G.; Alonso, H.; Arnedo, M.A.; Tejera, A.; Gil, J.M.; Rodríguez, R.; Martel, P.; Bolivar, J.P.

    2015-01-01

    The determination in a sample of the activity concentration of a specific radionuclide by gamma spectrometry requires knowledge of the full energy peak efficiency (FEPE) for the energy of interest. The difficulties related to the experimental calibration make it advisable to have alternative methods for FEPE determination, such as the simulation of the transport of photons in the crystal by the Monte Carlo method, which requires an accurate knowledge of the characteristics and geometry of the detector. The characterization process is mainly carried out by Canberra Industries Inc. using proprietary techniques and methodologies developed by that company. It is a costly procedure (due to shipping and to the cost of the process itself), and for some research laboratories an alternative in situ procedure can be very useful. The main goal of this paper is to find an alternative to this costly characterization process by establishing a method for optimizing the parameters that characterize the detector, through a computational procedure which can be reproduced at a standard research lab. This method consists in the determination of the detector geometric parameters by using Monte Carlo simulation in parallel with an optimization process, based on evolutionary algorithms, starting from a set of reference FEPEs determined experimentally or computationally. The proposed method has proven to be effective and simple to implement. It provides a set of characterization parameters which has been successfully validated for different source-detector geometries, and also for a wide range of environmental samples and certified materials. - Highlights: • A computational method for characterizing an HPGe spectrometer has been developed. • The detector is characterized using reference photopeak efficiencies obtained experimentally or by Monte Carlo calibration. • The characterization obtained has been validated for samples with different geometries and compositions. • Good agreement

  4. Improved hybridization of Fuzzy Analytic Hierarchy Process (FAHP) algorithm with Fuzzy Multiple Attribute Decision Making - Simple Additive Weighting (FMADM-SAW)

    Science.gov (United States)

    Zaiwani, B. E.; Zarlis, M.; Efendi, S.

    2018-03-01

    This research improves on the hybridization of the Fuzzy Analytic Hierarchy Process (FAHP) algorithm with the Fuzzy Technique for Order Preference by Similarity to Ideal Solution (FTOPSIS), previously used to select the best bank chief inspector based on several qualitative and quantitative criteria with various priorities. To improve on that earlier work, a hybridization of the FAHP algorithm with Fuzzy Multiple Attribute Decision Making - Simple Additive Weighting (FMADM-SAW) was adopted, applying FAHP to the weighting process and SAW to the ranking process, to determine the promotion of employees at a government institution. The improved average Efficiency Rate (ER) is 85.24%, compared to 77.82% in the previous research. Keywords: Ranking and Selection, Fuzzy AHP, Fuzzy TOPSIS, FMADM-SAW.
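
    The SAW step itself is straightforward; the Python sketch below ranks candidates from a weighted, normalized decision matrix. The matrix, weights, and benefit/cost labels are invented; in the paper the weights come from the FAHP stage and the attributes are fuzzy rather than crisp.

      import numpy as np

      # Decision matrix: rows = candidates, columns = criteria (toy scores).
      X = np.array([
          [70., 80., 3.0, 9.0],
          [90., 60., 4.0, 7.0],
          [80., 85., 2.0, 8.0],
      ])
      weights = np.array([0.35, 0.25, 0.15, 0.25])   # e.g. from an (F)AHP step
      benefit = np.array([True, True, False, True])  # False marks a cost criterion

      # SAW: normalize each column, then take the weighted sum and rank.
      R = np.where(benefit, X / X.max(axis=0), X.min(axis=0) / X)
      scores = R @ weights
      ranking = np.argsort(-scores)
      print("scores:", scores.round(3), "best candidate:", ranking[0])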

  5. Algorithms

    Indian Academy of Sciences (India)

    In the program shown in Figure 1, we have repeated the algorithm M times, and we can make the following observations. Each block is essentially a different instance of "code"; that is, the objects differ by the value to which N is initialized before the execution of the "code" block. Thus, we can now avoid the repetition of the ...

  6. Algorithms

    Indian Academy of Sciences (India)

    algorithms built into the computer corresponding to the logic-circuit rules that are used to .... For the purpose of carrying out arithmetic or logical operations the memory is organized in terms .... In fixed point representation, one essentially uses integer arithmetic operators assuming the binary point to be at some point other ...

  7. New algorithms for motion error detection of numerical control machine tool by laser tracking measurement on the basis of GPS principle

    Science.gov (United States)

    Wang, Jindong; Chen, Peng; Deng, Yufen; Guo, Junjie

    2018-01-01

    As a three-dimensional measuring instrument, the laser tracker is widely used in industrial measurement. To avoid the influence of angle measurement error on the overall measurement accuracy, multi-station and time-sharing measurement with a laser tracker is introduced on the basis of the global positioning system (GPS) principle in this paper. For the proposed method, how to accurately determine the coordinates of each measuring point from a large amount of measured data is a critical issue. Taking the detection of the motion error of a numerical control machine tool as an example, the corresponding measurement algorithms are investigated thoroughly. By establishing the mathematical model of detecting the motion error of a machine tool with this method, an analytical algorithm for base station calibration and measuring point determination is deduced that does not require selecting an initial iterative value. However, when the motion area of the machine tool lies in a 2D plane, the coefficient matrix of the base station calibration is singular, which produces a distorted result. To overcome this limitation of the original algorithm, an improved analytical algorithm is also derived. Meanwhile, the calibration accuracy of the base station with the improved algorithm is compared with that of the original analytical algorithm and of iterative algorithms such as the Gauss-Newton and Levenberg-Marquardt algorithms. An experiment further verifies the feasibility and effectiveness of the improved algorithm. In addition, the different motion areas of the machine tool have a certain influence on the calibration accuracy of the base station, and the corresponding influence of measurement error on the calibration result, which depends on the condition number of the coefficient matrix, is analyzed.
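
    The measuring-point determination step amounts to multilateration from calibrated base stations. The following Python sketch solves the range equations for one point by Gauss-Newton iteration, with invented station positions and noise-free ranges; the paper's analytical, iteration-free algorithm is not reproduced. Near-coplanar stations make the Jacobian ill-conditioned, mirroring the 2D degeneracy discussed in the abstract.

      import numpy as np

      # Known base-station positions (after calibration) and measured ranges to
      # one target point; values are illustrative.
      STATIONS = np.array([[0., 0., 0.], [2., 0., 0.1], [0., 2., 0.2], [1.5, 1.8, 1.0]])
      TARGET = np.array([0.8, 1.1, 0.4])
      ranges = np.linalg.norm(STATIONS - TARGET, axis=1)   # ideal range data

      def trilaterate(stations, d, iters=20):
          """Gauss-Newton solution of the multi-station range equations."""
          x = np.mean(stations, axis=0)           # start from the centroid
          for _ in range(iters):
              diff = x - stations                 # vectors station -> estimate
              r = np.linalg.norm(diff, axis=1)
              J = diff / r[:, None]               # Jacobian of |x - s_i| w.r.t. x
              dx, *_ = np.linalg.lstsq(J, d - r, rcond=None)
              x += dx
              if np.linalg.norm(dx) < 1e-12:
                  break
          return x

      print(trilaterate(STATIONS, ranges))  # recovers TARGET up to round-off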

  8. Fractal Hypothesis of the Pelagic Microbial Ecosystem-Can Simple Ecological Principles Lead to Self-Similar Complexity in the Pelagic Microbial Food Web?

    Science.gov (United States)

    Våge, Selina; Thingstad, T Frede

    2015-01-01

    Trophic interactions are highly complex and modern sequencing techniques reveal enormous biodiversity across multiple scales in marine microbial communities. Within the chemically and physically relatively homogeneous pelagic environment, this calls for an explanation beyond spatial and temporal heterogeneity. Based on observations of simple parasite-host and predator-prey interactions occurring at different trophic levels and levels of phylogenetic resolution, we present a theoretical perspective on this enormous biodiversity, discussing in particular self-similar aspects of pelagic microbial food web organization. Fractal methods have been used to describe a variety of natural phenomena, with studies of habitat structures being an application in ecology. In contrast to mathematical fractals where pattern generating rules are readily known, however, identifying mechanisms that lead to natural fractals is not straight-forward. Here we put forward the hypothesis that trophic interactions between pelagic microbes may be organized in a fractal-like manner, with the emergent network resembling the structure of the Sierpinski triangle. We discuss a mechanism that could be underlying the formation of repeated patterns at different trophic levels and discuss how this may help understand characteristic biomass size-spectra that hint at scale-invariant properties of the pelagic environment. If the idea of simple underlying principles leading to a fractal-like organization of the pelagic food web could be formalized, this would extend an ecologist's mindset on how biological complexity could be accounted for. It may furthermore benefit ecosystem modeling by facilitating adequate model resolution across multiple scales.

  9. Fractal hypothesis of the pelagic microbial ecosystem - Can simple ecological principles lead to self-similar complexity in the pelagic microbial food web?

    Directory of Open Access Journals (Sweden)

    Selina eVåge

    2015-12-01

    Full Text Available Trophic interactions are highly complex and modern sequencing techniques reveal enormous biodiversity across multiple scales in marine microbial communities. Within the chemically and physically relatively homogeneous pelagic environment, this calls for an explanation beyond spatial and temporal heterogeneity. Based on observations of simple parasite-host and predator-prey interactions occurring at different trophic levels and levels of phylogenetic resolution, we present a theoretical perspective on this enormous biodiversity, discussing in particular self-similar aspects of pelagic microbial food web organization. Fractal methods have been used to describe a variety of natural phenomena, with studies of habitat structures being an application in ecology. In contrast to mathematical fractals where pattern generating rules are readily known, however, identifying mechanisms that lead to natural fractals is not straight-forward. Here we put forward the hypothesis that trophic interactions between pelagic microbes may be organized in a fractal-like manner, with the emergent network resembling the structure of the Sierpinski triangle. We discuss a mechanism that could be underlying the formation of repeated patterns at different trophic levels and discuss how this may help understand characteristic biomass size-spectra that hint at scale-invariant properties of the pelagic environment. If the idea of simple underlying principles leading to a fractal-like organization of the pelagic food web could be formalized, this would extend an ecologist's mindset on how biological complexity could be accounted for. It may furthermore benefit ecosystem modeling by facilitating adequate model resolution across multiple scales.

  10. Fractal Hypothesis of the Pelagic Microbial Ecosystem—Can Simple Ecological Principles Lead to Self-Similar Complexity in the Pelagic Microbial Food Web?

    Science.gov (United States)

    Våge, Selina; Thingstad, T. Frede

    2015-01-01

    Trophic interactions are highly complex and modern sequencing techniques reveal enormous biodiversity across multiple scales in marine microbial communities. Within the chemically and physically relatively homogeneous pelagic environment, this calls for an explanation beyond spatial and temporal heterogeneity. Based on observations of simple parasite-host and predator-prey interactions occurring at different trophic levels and levels of phylogenetic resolution, we present a theoretical perspective on this enormous biodiversity, discussing in particular self-similar aspects of pelagic microbial food web organization. Fractal methods have been used to describe a variety of natural phenomena, with studies of habitat structures being an application in ecology. In contrast to mathematical fractals where pattern generating rules are readily known, however, identifying mechanisms that lead to natural fractals is not straight-forward. Here we put forward the hypothesis that trophic interactions between pelagic microbes may be organized in a fractal-like manner, with the emergent network resembling the structure of the Sierpinski triangle. We discuss a mechanism that could be underlying the formation of repeated patterns at different trophic levels and discuss how this may help understand characteristic biomass size-spectra that hint at scale-invariant properties of the pelagic environment. If the idea of simple underlying principles leading to a fractal-like organization of the pelagic food web could be formalized, this would extend an ecologist's mindset on how biological complexity could be accounted for. It may furthermore benefit ecosystem modeling by facilitating adequate model resolution across multiple scales. PMID:26648929

  11. Algorithmically-generated Corpora that use Serial Compositional Principles Can Contribute to the Modeling of Sequential Pitch Structure in Non-tonal Music

    Directory of Open Access Journals (Sweden)

    Roger Thornton Dean

    2016-07-01

    Full Text Available We investigate whether pitch sequences in non-tonal music can be modeled by an information-theoretic approach using algorithmically-generated melodic sequences, made according to 12-tone serial principles, as the training corpus. This is potentially useful, because symbolic corpora of non-tonal music are not readily available. A non-tonal corpus of serially-composed melodies was constructed algorithmically using classic principles of 12-tone music, including prime, inversion, retrograde and retrograde inversion transforms. A similar algorithm generated a tonal melodic corpus of tonal transformations, in each case based on a novel tonal melody and expressed in alternating major keys. A cognitive model of auditory expectation (IDyOM) was used first to analyze the sequential pitch structure of the corpora, in some cases with pre-training on established tonal folk-song corpora (Essen, Schaffrath, 1995). The two algorithmic corpora can be distinguished in terms of their information content, and they were quite different from random corpora and from the folk-song corpus. We then demonstrate that the algorithmic serial corpora can assist modeling of canonical non-tonal compositions by Webern and Schoenberg, and also non-tonal segments of improvisations by skilled musicians. Separately, we developed the process of algorithmic melody composition into a software system (the Serial Collaborator) capable of generating multi-stranded serial keyboard music. Corpora of such keyboard compositions based either on the non-tonal or the tonal melodic corpora were generated and assessed for their information-theoretic modeling properties.

  12. On the dosimetric behaviour of photon dose calculation algorithms in the presence of simple geometric heterogeneities: comparison with Monte Carlo calculations

    Science.gov (United States)

    Fogliata, Antonella; Vanetti, Eugenio; Albers, Dirk; Brink, Carsten; Clivio, Alessandro; Knöös, Tommy; Nicolini, Giorgia; Cozzi, Luca

    2007-03-01

    A comparative study was performed to reveal differences and relative figures of merit of seven different calculation algorithms for photon beams when applied to inhomogeneous media. The following algorithms were investigated: Varian Eclipse: the anisotropic analytical algorithm, and the pencil beam with modified Batho correction; Nucletron Helax-TMS: the collapsed cone and the pencil beam with equivalent path length correction; CMS XiO: the multigrid superposition and the fast Fourier transform convolution; Philips Pinnacle: the collapsed cone. Monte Carlo simulations (MC) performed with the EGSnrc codes BEAMnrc and DOSxyznrc from NRCC in Ottawa were used as a benchmark. The study was carried out in simple geometrical water phantoms (ρ = 1.00 g cm⁻³) with inserts of different densities simulating light lung tissue (ρ = 0.035 g cm⁻³), normal lung (ρ = 0.20 g cm⁻³) and cortical bone tissue (ρ = 1.80 g cm⁻³). Experiments were performed for low- and high-energy photon beams (6 and 15 MV) and for square (13 × 13 cm²) and elongated rectangular (2.8 × 13 cm²) fields. Analysis was carried out on the basis of depth dose curves and transverse profiles at several depths. Assuming the MC data as reference, γ index analysis was carried out distinguishing between regions inside the non-water inserts or inside the uniform water. For this study, a distance to agreement was set to 3 mm while the dose difference varied from 2% to 10%. In general all algorithms based on pencil-beam convolutions showed a systematic deficiency in managing the presence of heterogeneous media. In contrast, complicated patterns were observed for the advanced algorithms with significant discrepancies observed between algorithms in the lighter materials (ρ = 0.035 g cm⁻³), enhanced for the most energetic beam. For denser, and more clinical, densities a better agreement among the sophisticated algorithms with respect to MC was observed.

  13. Model parameter estimations from residual gravity anomalies due to simple-shaped sources using Differential Evolution Algorithm

    Science.gov (United States)

    Ekinci, Yunus Levent; Balkaya, Çağlayan; Göktürkler, Gökhan; Turan, Seçil

    2016-06-01

    An efficient approach to estimating model parameters from residual gravity data based on differential evolution (DE), a stochastic vector-based metaheuristic algorithm, is presented. We show the applicability and effectiveness of this algorithm on both synthetic and field anomalies. To our knowledge, this is the first attempt to apply DE to the parameter estimation of residual gravity anomalies due to isolated causative sources embedded in the subsurface. The model parameters dealt with here are the amplitude coefficient (A), the depth and exact origin of the causative source (zo and xo, respectively) and the shape factors (q and η). The error energy maps generated for some parameter pairs have successfully revealed the nature of the parameter estimation problem under consideration. Noise-free and noisy synthetic single gravity anomalies have been evaluated with success via DE/best/1/bin, which is a widely used strategy in DE. Additionally, some complicated gravity anomalies caused by multiple source bodies have been considered, and the results obtained show the efficiency of the algorithm. Then, using the strategy applied in the synthetic examples, some field anomalies observed for various mineral explorations, such as a chromite deposit (Camaguey district, Cuba), a manganese deposit (Nagpur, India) and a base metal sulphide deposit (Quebec, Canada), have been considered to estimate the model parameters of the ore bodies. The applications show that the obtained results, such as the depths and shapes of the ore bodies, are quite consistent with those published in the literature. Uncertainty in the solutions obtained from the DE algorithm has also been investigated by the Metropolis-Hastings (M-H) sampling algorithm based on simulated annealing without a cooling schedule. Based on the resulting histogram reconstructions of both synthetic and field data examples, the algorithm has provided reliable parameter estimations being within the sampling limits of
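
    A runnable Python sketch of this kind of inversion is given below, using SciPy's differential_evolution, whose default strategy 'best1bin' corresponds to the DE/best/1/bin strategy named in the abstract. The single-body forward model g(x) = A·z0/((x − x0)² + z0²)^q and all numerical values are illustrative assumptions, not the paper's exact formulation.

      import numpy as np
      from scipy.optimize import differential_evolution

      def forward(x, A, x0, z0, q):
          """Residual gravity anomaly over a simple-shaped body; q is the shape
          factor (q = 1.5 for a sphere in this common parametrization)."""
          return A * z0 / ((x - x0) ** 2 + z0 ** 2) ** q

      x = np.linspace(-100, 100, 81)                   # profile coordinates (m)
      true = (2.0e4, 10.0, 25.0, 1.5)                  # A, x0, z0, q (synthetic)
      obs = forward(x, *true) + np.random.default_rng(2).normal(0, 0.02, x.size)

      def misfit(p):
          """Least-squares error energy between model and observed anomaly."""
          return np.sum((forward(x, *p) - obs) ** 2)

      bounds = [(1e2, 1e6), (-50, 50), (1, 100), (0.5, 2.5)]
      result = differential_evolution(misfit, bounds, seed=0, tol=1e-10)
      print(result.x)   # estimated (A, x0, z0, q) vs. the synthetic truth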

  14. Rarity-weighted richness: a simple and reliable alternative to integer programming and heuristic algorithms for minimum set and maximum coverage problems in conservation planning.

    Science.gov (United States)

    Albuquerque, Fabio; Beier, Paul

    2015-01-01

    Here we report that prioritizing sites in order of rarity-weighted richness (RWR) is a simple, reliable way to identify sites that represent all species in the fewest number of sites (minimum set problem) or to identify sites that represent the largest number of species within a given number of sites (maximum coverage problem). We compared the number of species represented in sites prioritized by RWR to numbers of species represented in sites prioritized by the Zonation software package for 11 datasets in which the size of individual planning units (sites) varied. Integer programming remains the only guaranteed way to find an optimal solution, and heuristic algorithms remain superior for conservation prioritizations that consider compactness and multiple near-optimal solutions in addition to species representation. But because RWR can be implemented easily and quickly in R or a spreadsheet, it is an attractive alternative to integer programming or heuristic algorithms in some conservation prioritization contexts.
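
    RWR itself is a one-line computation, which is the point of the paper. The Python sketch below scores sites on a toy presence-absence matrix and walks down the ranking until all species are covered; the matrix is invented.

      import numpy as np

      # Presence-absence matrix: rows = sites, columns = species (toy data).
      P = np.array([
          [1, 1, 0, 0, 0],
          [1, 0, 1, 0, 0],
          [0, 1, 1, 1, 0],
          [0, 0, 0, 1, 1],
          [1, 0, 0, 0, 1],
      ])

      # Rarity weight of a species = 1 / number of sites it occupies; the RWR
      # of a site is the sum of the weights of the species present there.
      occ = P.sum(axis=0)
      rwr = (P / occ).sum(axis=1)
      order = np.argsort(-rwr)                      # prioritize sites by RWR

      # Minimum-set check: walk down the ranking until all species are covered.
      covered, picked = np.zeros(P.shape[1], bool), []
      for site in order:
          picked.append(site)
          covered |= P[site].astype(bool)
          if covered.all():
              break
      print("priority order:", order, "sites needed:", picked)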

  15. The Spectrum Prize: A simple algorithm to evaluate the relative sensitivity of γ-ray spectra, representative of detection systems

    Energy Technology Data Exchange (ETDEWEB)

    Spolaore, P.

    2016-03-11

    A simple analysis of gamma spectra, selected to represent the performance of different detection systems or, for one and the same system, different operation modes or stages of development, makes it possible to compare the relative average sensitivities of the represented systems themselves, as operated in the selected cases. The obtained SP figure-of-merit takes into account and correlates the main parameters commonly used to estimate the performance of a system. An example of application is given.

  16. Cellular Gauge Symmetry and the Li Organization Principle: A Mathematical Addendum. Quantifying energetic dynamics in physical and biological systems through a simple geometric tool and geodetic curves.

    Science.gov (United States)

    Yurkin, Alexander; Tozzi, Arturo; Peters, James F; Marijuán, Pedro C

    2017-12-01

    The present Addendum complements the accompanying paper "Cellular Gauge Symmetry and the Li Organization Principle"; it illustrates a recently-developed geometrical physical model able to assess electronic movements and energetic paths in atomic shells. The model describes a multi-level system of circular, wavy and zigzag paths which can be projected onto a horizontal tape. This model ushers in a visual interpretation of the distribution of atomic electrons' energy levels and the corresponding quantum numbers through rather simple tools, such as compasses, rulers and straightforward calculations. Here we show how this geometrical model, with the due corrections, among them the use of geodetic curves, might be able to describe and quantify the structure and the temporal development of countless physical and biological systems, from Langevin equations for random paths, to symmetry breaking occurring ubiquitously in physical and biological phenomena, to the relationships among different frequencies of EEG electric spikes. Therefore, in our work we explore the possible association of the binomial distribution and geodetic curves, forming a uniform approach to the study of natural phenomena in biology, medicine, and the neurosciences. Copyright © 2017 Elsevier Ltd. All rights reserved.

  17. Fuzzy clustering, genetic algorithms and neuro-fuzzy methods compared for hybrid fuzzy-first principles modeling

    NARCIS (Netherlands)

    van Lith, Pascal; van Lith, P.F.; Betlem, Bernardus H.L.; Roffel, B.

    2002-01-01

    Hybrid fuzzy-first principles models can be a good alternative if a complete physical model is difficult to derive. These hybrid models consist of a framework of dynamic mass and energy balances, supplemented by fuzzy submodels describing additional equations, such as mass transformation and

  18. Fuzzy Clustering, Genetic Algorithms and Neuro-Fuzzy Methods Compared for Hybrid Fuzzy-First Principles Modeling

    NARCIS (Netherlands)

    Lith, Pascal F. van; Betlem, Ben H.L.; Roffel, Brian

    2002-01-01

    Hybrid fuzzy-first principles models can be a good alternative if a complete physical model is difficult to derive. These hybrid models consist of a framework of dynamic mass and energy balances, supplemented by fuzzy submodels describing additional equations, such as mass transformation and

  19. Fast, Simple and Accurate Handwritten Digit Classification by Training Shallow Neural Network Classifiers with the 'Extreme Learning Machine' Algorithm.

    Science.gov (United States)

    McDonnell, Mark D; Tissera, Migel D; Vladusich, Tony; van Schaik, André; Tapson, Jonathan

    2015-01-01

    Recent advances in training deep (multi-layer) architectures have inspired a renaissance in neural network use. For example, deep convolutional networks are becoming the default option for difficult tasks on large datasets, such as image and speech recognition. However, here we show that error rates below 1% on the MNIST handwritten digit benchmark can be replicated with shallow non-convolutional neural networks. This is achieved by training such networks using the 'Extreme Learning Machine' (ELM) approach, which also enables a very rapid training time (∼ 10 minutes). Adding distortions, as is common practice for MNIST, reduces error rates even further. Our methods are also shown to be capable of achieving less than 5.5% error rates on the NORB image database. To achieve these results, we introduce several enhancements to the standard ELM algorithm, which individually and in combination can significantly improve performance. The main innovation is to ensure each hidden-unit operates only on a randomly sized and positioned patch of each image. This form of random 'receptive field' sampling of the input ensures the input weight matrix is sparse, with about 90% of weights equal to zero. Furthermore, combining our methods with a small number of iterations of a single-batch backpropagation method can significantly reduce the number of hidden-units required to achieve a particular performance. Our close to state-of-the-art results for MNIST and NORB suggest that the ease of use and accuracy of the ELM algorithm for designing a single-hidden-layer neural network classifier should cause it to be given greater consideration either as a standalone method for simpler problems, or as the final classification stage in deep neural networks applied to more difficult problems.
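
    The core ELM training step, random fixed hidden weights plus a closed-form least-squares readout, can be sketched in Python as follows. The toy data, hidden-layer size, and ridge term are assumptions; the paper's receptive-field sparsification of the input weights is only indicated in a comment.

      import numpy as np

      rng = np.random.default_rng(3)

      # Toy two-class problem standing in for image data.
      X = rng.standard_normal((1000, 20))
      y = (X[:, :5].sum(axis=1) > 0).astype(float)
      Y = np.stack([1 - y, y], axis=1)                 # one-hot targets

      # ELM: random, fixed input weights; only the output layer is trained,
      # in closed form via a regularized least-squares solve.
      n_hidden = 300
      W = rng.standard_normal((20, n_hidden))
      b = rng.standard_normal(n_hidden)
      H = np.tanh(X @ W + b)                           # hidden activations
      beta = np.linalg.solve(H.T @ H + 1e-3 * np.eye(n_hidden), H.T @ Y)

      pred = (H @ beta).argmax(axis=1)
      print("training accuracy:", (pred == y).mean())

      # The paper's main enhancement constrains each hidden unit to a random
      # patch of the image, i.e. it forces most entries of W to zero.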

  20. Fast, Simple and Accurate Handwritten Digit Classification by Training Shallow Neural Network Classifiers with the 'Extreme Learning Machine' Algorithm.

    Directory of Open Access Journals (Sweden)

    Mark D McDonnell

    Full Text Available Recent advances in training deep (multi-layer) architectures have inspired a renaissance in neural network use. For example, deep convolutional networks are becoming the default option for difficult tasks on large datasets, such as image and speech recognition. However, here we show that error rates below 1% on the MNIST handwritten digit benchmark can be replicated with shallow non-convolutional neural networks. This is achieved by training such networks using the 'Extreme Learning Machine' (ELM) approach, which also enables a very rapid training time (∼10 minutes). Adding distortions, as is common practice for MNIST, reduces error rates even further. Our methods are also shown to be capable of achieving less than 5.5% error rates on the NORB image database. To achieve these results, we introduce several enhancements to the standard ELM algorithm, which individually and in combination can significantly improve performance. The main innovation is to ensure each hidden-unit operates only on a randomly sized and positioned patch of each image. This form of random 'receptive field' sampling of the input ensures the input weight matrix is sparse, with about 90% of weights equal to zero. Furthermore, combining our methods with a small number of iterations of a single-batch backpropagation method can significantly reduce the number of hidden-units required to achieve a particular performance. Our close to state-of-the-art results for MNIST and NORB suggest that the ease of use and accuracy of the ELM algorithm for designing a single-hidden-layer neural network classifier should cause it to be given greater consideration either as a standalone method for simpler problems, or as the final classification stage in deep neural networks applied to more difficult problems.

  1. A simple algorithm improves mass accuracy to 50-100 ppm for delayed extraction linear MALDI-TOF mass spectrometry

    Energy Technology Data Exchange (ETDEWEB)

    Hack, Christopher A.; Benner, W. Henry

    2001-10-31

    A simple mathematical technique for improving the mass calibration accuracy of linear delayed extraction matrix-assisted laser desorption ionization time-of-flight mass spectrometry (DE MALDI-TOF MS) spectra is presented. The method involves fitting a parabola to a plot of Δm vs. mass, where Δm is the difference between the theoretical mass of the calibrants and the mass obtained from a linear relationship between the square root of m/z and ion time of flight. The quadratic equation that describes the parabola is then used to correct the mass of unknowns by subtracting the deviation predicted by the quadratic equation from the measured data. By subtracting the value of the parabola at each mass from the calibrated data, the accuracy of mass data points can be improved by factors of 10 or more. This method produces highly similar results whether or not initial ion velocity is accounted for in the calibration equation; consequently, there is no need to depend on that uncertain parameter when using the quadratic correction. This method can be used to correct the internally calibrated masses of protein digest peaks. The effect of nitrocellulose as a matrix additive is also briefly discussed, and it is shown that using nitrocellulose as an additive to a CHCA matrix does not significantly change the initial ion velocity but does change the average position of ions relative to the sample electrode at the instant the extraction voltage is applied.
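
    The quadratic correction itself reduces to a three-coefficient polynomial fit. The Python sketch below applies it to invented calibrant masses with ppm-level deviations; the numbers are illustrative only.

      import numpy as np

      # Calibrant data: theoretical masses and masses from the linear sqrt(m/z)
      # time-of-flight calibration (illustrative, ppm-level deviations).
      m_theory = np.array([1000.0, 1500.0, 2000.0, 2500.0, 3000.0, 3500.0])
      m_meas = m_theory * (1 + 1e-6 * np.array([-40.0, 10.0, 35.0, 40.0, 20.0, -25.0]))

      # Fit a parabola to the residuals dm = m_meas - m_theory vs. mass.
      dm = m_meas - m_theory
      a, b, c = np.polyfit(m_meas, dm, deg=2)

      def correct(m):
          """Subtract the parabola-predicted deviation from a measured mass."""
          return m - (a * m * m + b * m + c)

      print(correct(2750.0))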

  2. Simple and fast spectral domain algorithm for quantitative phase imaging of living cells with digital holographic microscopy

    Science.gov (United States)

    Min, Junwei; Yao, Baoli; Ketelhut, Steffi; Kemper, Björn

    2017-02-01

    The modular combination of optical microscopes with digital holographic microscopy (DHM) has been proven to be a powerful tool for quantitative live cell imaging. The introduction of a condenser and different microscope objectives (MO) simplifies the usage of the technique and makes it easier to measure different kinds of specimens at different magnifications. However, the high flexibility of illumination and imaging also causes variable phase aberrations that need to be eliminated for high resolution quantitative phase imaging. Existing phase aberration compensation methods either require additional elements in the reference arm or need specimen-free reference areas or separate reference holograms to build suitable digital phase masks. These requirements make them impractical for highly variable illumination and imaging systems and prevent on-line monitoring of living cells. In this paper, we present a simple numerical method for phase aberration compensation based on the analysis of holograms in the spatial frequency domain, with capabilities for on-line quantitative phase imaging. From a single-shot off-axis hologram, the whole phase aberration can be eliminated automatically without numerical fitting or pre-knowledge of the setup. The capabilities and robustness for quantitative phase imaging of living cancer cells are demonstrated.
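
    A minimal Python version of the spatial-frequency-domain route (locate the +1 order, window it, re-center it, and invert) can be sketched on a synthetic off-axis hologram as follows. The carrier frequencies, window size, and zero-order exclusion radius are assumptions, and the authors' automatic aberration-compensation step is not reproduced.

      import numpy as np

      def phase_from_hologram(holo, sigma=0.1):
          """Quantitative phase from one off-axis hologram: isolate the +1
          order in the spectrum, shift it to the center, and invert."""
          F = np.fft.fftshift(np.fft.fft2(holo))
          ny, nx = holo.shape
          ky, kx = np.indices((ny, nx))
          # Suppress the zero order, then locate the +1 order in a half plane.
          mask0 = np.hypot(kx - nx // 2, ky - ny // 2) > min(nx, ny) * 0.05
          half = (kx > nx // 2) & mask0
          py, px = np.unravel_index(np.argmax(np.abs(F) * half), F.shape)
          # Window around the carrier and shift it to the spectrum center.
          win = np.hypot(kx - px, ky - py) < min(nx, ny) * sigma
          Fc = np.roll(F * win, (ny // 2 - py, nx // 2 - px), axis=(0, 1))
          field = np.fft.ifft2(np.fft.ifftshift(Fc))
          return np.angle(field)

      # Synthetic off-axis hologram of a smooth phase object.
      n = 256
      yy, xx = np.indices((n, n))
      phi = 2.0 * np.exp(-((xx - n / 2) ** 2 + (yy - n / 2) ** 2) / (2 * 30 ** 2))
      carrier = 2 * np.pi * (0.25 * xx + 0.125 * yy)    # reference-beam tilt
      holo = 2 + 2 * np.cos(carrier + phi)              # |O + R|^2 intensity
      print(np.round(phase_from_hologram(holo).max(), 2))  # ~ peak of phi (2.0)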

  3. Evolutionary Connectionism: Algorithmic Principles Underlying the Evolution of Biological Organisation in Evo-Devo, Evo-Eco and Evolutionary Transitions.

    Science.gov (United States)

    Watson, Richard A; Mills, Rob; Buckley, C L; Kouvaris, Kostas; Jackson, Adam; Powers, Simon T; Cox, Chris; Tudge, Simon; Davies, Adam; Kounios, Loizos; Power, Daniel

    2016-01-01

    The mechanisms of variation, selection and inheritance, on which evolution by natural selection depends, are not fixed over evolutionary time. Current evolutionary biology is increasingly focussed on understanding how the evolution of developmental organisations modifies the distribution of phenotypic variation, the evolution of ecological relationships modifies the selective environment, and the evolution of reproductive relationships modifies the heritability of the evolutionary unit. The major transitions in evolution, in particular, involve radical changes in developmental, ecological and reproductive organisations that instantiate variation, selection and inheritance at a higher level of biological organisation. However, current evolutionary theory is poorly equipped to describe how these organisations change over evolutionary time and especially how that results in adaptive complexes at successive scales of organisation (the key problem is that evolution is self-referential, i.e. the products of evolution change the parameters of the evolutionary process). Here we first reinterpret the central open questions in these domains from a perspective that emphasises the common underlying themes. We then synthesise the findings from a developing body of work that is building a new theoretical approach to these questions by converting well-understood theory and results from models of cognitive learning. Specifically, connectionist models of memory and learning demonstrate how simple incremental mechanisms, adjusting the relationships between individually-simple components, can produce organisations that exhibit complex system-level behaviours and improve the adaptive capabilities of the system. We use the term "evolutionary connectionism" to recognise that, by functionally equivalent processes, natural selection acting on the relationships within and between evolutionary entities can result in organisations that produce complex system-level behaviours in evolutionary

  4. Patch-based models and algorithms for image processing: a review of the basic principles and methods, and their application in computed tomography.

    Science.gov (United States)

    Karimi, Davood; Ward, Rabab K

    2016-10-01

    Image models are central to all image processing tasks. The great advancements in digital image processing would not have been made possible without powerful models which, themselves, have evolved over time. In the past decade, "patch-based" models have emerged as one of the most effective models for natural images. Patch-based methods have outperformed other competing methods in many image processing tasks. These developments have come at a time when greater availability of powerful computational resources and growing concerns over the health risks of ionizing radiation encourage research on image processing algorithms for computed tomography (CT). The goal of this paper is to explain the principles of patch-based methods and to review some of their recent applications in CT. We first review the central concepts in patch-based image processing and explain some of the state-of-the-art algorithms, with a focus on aspects that are more relevant to CT. Then, we review some of the recent applications of patch-based methods in CT. Patch-based methods have already transformed the field of image processing, leading to state-of-the-art results in many applications. More recently, several studies have proposed patch-based algorithms for various image processing tasks in CT, from denoising and restoration to iterative reconstruction. Although these studies have reported good results, the true potential of patch-based methods for CT has not yet been appreciated. Patch-based methods can play a central role in image reconstruction and processing for CT. They have the potential to lead to substantial improvements in the current state of the art.

  5. Algorithmic cryptanalysis

    CERN Document Server

    Joux, Antoine

    2009-01-01

    Illustrating the power of algorithms, Algorithmic Cryptanalysis describes algorithmic methods with cryptographically relevant examples. Focusing on both private- and public-key cryptographic algorithms, it presents each algorithm either as a textual description, in pseudo-code, or in a C code program. Divided into three parts, the book begins with a short introduction to cryptography and a background chapter on elementary number theory and algebra. It then moves on to algorithms, with each chapter in this section dedicated to a single topic and often illustrated with simple cryptographic applic

  6. Algorithms for Simple Temporal Reasoning

    NARCIS (Netherlands)

    Planken, L.R.

    2013-01-01

    This dissertation describes research into new methods for automated temporal reasoning. For this purpose, several frameworks are available in literature. Chapter 1 presents a concise literature survey that provides a new overview of their interrelation. In the remainder of the dissertation, the

  7. MM Algorithms for Geometric and Signomial Programming.

    Science.gov (United States)

    Lange, Kenneth; Zhou, Hua

    2014-02-01

    This paper derives new algorithms for signomial programming, a generalization of geometric programming. The algorithms are based on a generic principle for optimization called the MM algorithm. In this setting, one can apply the geometric-arithmetic mean inequality and a supporting hyperplane inequality to create a surrogate function with parameters separated. Thus, unconstrained signomial programming reduces to a sequence of one-dimensional minimization problems. Simple examples demonstrate that the MM algorithm derived can converge to a boundary point or to one point of a continuum of minimum points. Conditions under which the minimum point is unique or occurs in the interior of parameter space are proved for geometric programming. Convergence to an interior point occurs at a linear rate. Finally, the MM framework easily accommodates equality and inequality constraints of signomial type. For the most important special case, constrained quadratic programming, the MM algorithm involves very simple updates.
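
    As a hedged illustration of the MM principle itself (not the authors' signomial algorithm), the following Python sketch applies the standard quadratic majorization of the absolute value to compute a sample median; each surrogate minimization is a simple closed-form update, and the objective never increases:

        import numpy as np

        def mm_median(a, iters=50, eps=1e-12):
            """Estimate a median by MM: majorize |x - a_i| by the quadratic
            |r| <= r**2 / (2|r_k|) + |r_k| / 2, which touches |r| at r_k."""
            a = np.asarray(a, dtype=float)
            x = a.mean()
            for _ in range(iters):
                w = 1.0 / np.maximum(np.abs(x - a), eps)  # surrogate weights
                x = np.sum(w * a) / np.sum(w)             # closed-form surrogate minimizer
            return x

        print(mm_median([1.0, 2.0, 3.0, 10.0, 11.0]))  # approaches the median, 3.0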

  8. Fourier Transform Infrared Spectroscopy (FT-IR) and Simple Algorithm Analysis for Rapid and Non-Destructive Assessment of Developmental Cotton Fibers.

    Science.gov (United States)

    Liu, Yongliang; Kim, Hee-Jin

    2017-06-22

    With cotton fiber growth or maturation, cellulose content in cotton fibers markedly increases. Traditional chemical methods have been developed to determine cellulose content, but they are time-consuming and labor-intensive, mostly owing to the slow hydrolysis process of fiber cellulose components. As one approach, the attenuated total reflection Fourier transform infrared (ATR FT-IR) spectroscopy technique has also been utilized to monitor cotton cellulose formation, by implementing various spectral interpretation strategies of both multivariate principal component analysis (PCA) and 1-, 2- or 3-band/-variable intensity or intensity ratios. The main objective of this study was to compare the correlations between cellulose content determined by chemical analysis and ATR FT-IR spectral indices acquired by the reported procedures, among developmental Texas Marker-1 (TM-1) and immature fiber (im) mutant cotton fibers. It was observed that the R value, CI_IR, and the integrated intensity of the 895 cm^-1 band exhibited strong and linear relationships with cellulose content. The results have demonstrated the suitability and utility of ATR FT-IR spectroscopy, combined with a simple algorithm analysis, in assessing cotton fiber cellulose content, maturity, and crystallinity in a manner which is rapid, routine, and non-destructive.

  9. Simple process-led algorithms for simulating habitats (SPLASH v.1.0): robust indices of radiation, evapotranspiration and plant-available moisture

    Science.gov (United States)

    Davis, Tyler W.; Prentice, I. Colin; Stocker, Benjamin D.; Thomas, Rebecca T.; Whitley, Rhys J.; Wang, Han; Evans, Bradley J.; Gallego-Sala, Angela V.; Sykes, Martin T.; Cramer, Wolfgang

    2017-02-01

    Bioclimatic indices for use in studies of ecosystem function, species distribution, and vegetation dynamics under changing climate scenarios depend on estimates of surface fluxes and other quantities, such as radiation, evapotranspiration and soil moisture, for which direct observations are sparse. These quantities can be derived indirectly from meteorological variables, such as near-surface air temperature, precipitation and cloudiness. Here we present a consolidated set of simple process-led algorithms for simulating habitats (SPLASH) allowing robust approximations of key quantities at ecologically relevant timescales. We specify equations, derivations, simplifications, and assumptions for the estimation of daily and monthly quantities of top-of-the-atmosphere solar radiation, net surface radiation, photosynthetic photon flux density, evapotranspiration (potential, equilibrium, and actual), condensation, soil moisture, and runoff, based on analysis of their relationship to fundamental climatic drivers. The climatic drivers include a minimum of three meteorological inputs: precipitation, air temperature, and fraction of bright sunshine hours. Indices, such as the moisture index, the climatic water deficit, and the Priestley-Taylor coefficient, are also defined. The SPLASH code is transcribed in C++, FORTRAN, Python, and R. A total of 1 year of results are presented at the local and global scales to exemplify the spatiotemporal patterns of daily and monthly model outputs along with comparisons to other model results.
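
    SPLASH specifies its own equations in C++, FORTRAN, Python and R; as a rough stand-in for one of its building blocks, this sketch computes Priestley-Taylor potential evapotranspiration from net radiation and air temperature using textbook FAO-56-style constants (the exact SPLASH formulation and coefficients may differ):

        import math

        ALPHA = 1.26     # Priestley-Taylor coefficient (dimensionless)
        GAMMA = 0.067    # psychrometric constant, kPa per degC (near sea level)
        LAMBDA = 2.45    # latent heat of vaporization, MJ per kg

        def sat_slope(t_air):
            """Slope of the saturation vapour pressure curve, kPa/degC (FAO-56 form)."""
            es = 0.6108 * math.exp(17.27 * t_air / (t_air + 237.3))
            return 4098.0 * es / (t_air + 237.3) ** 2

        def pet_priestley_taylor(net_rad, t_air):
            """Daily potential evapotranspiration in mm/day, from net radiation
            (MJ m^-2 day^-1) and mean near-surface air temperature (degC)."""
            s = sat_slope(t_air)
            return ALPHA * (s / (s + GAMMA)) * net_rad / LAMBDA

        print(pet_priestley_taylor(net_rad=12.0, t_air=20.0))  # roughly 4 mm/day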

  10. Simple Machines Made Simple.

    Science.gov (United States)

    St. Andre, Ralph E.

    Simple machines have become a lost point of study in elementary schools as teachers continue to have more material to cover. This manual provides hands-on, cooperative learning activities for grades three through eight concerning the six simple machines: wheel and axle, inclined plane, screw, pulley, wedge, and lever. Most activities can be…

  11. A new anaesthetic breathing system combining Mapleson A, D and E principles. A simple apparatus for low flow universal use without carbon dioxide absorption.

    Science.gov (United States)

    Humphrey, D

    1983-04-01

    A new simple anaesthetic breathing system is described which has been designed to incorporate into a single system advantages of Mapleson A, D and E type systems. Coaxial and non-coaxial versions are available. The system can be used for adults, children or neonates and allows both spontaneous and controlled ventilation with low fresh gas flows at all times. As a Mapleson A system for spontaneous respiration a considerable saving of anaesthetic gases and vapours is achieved since, for adults, the new system requires a lower fresh gas flow even than that for the Magill. For children breathing spontaneously the system requires only one third of the fresh gas flow necessary for the Jackson Rees modification of Ayre's T-piece. For controlled ventilation the system behaves as a modified Mapleson D/E (Bain type) system with the advantage of predictable CO2 tensions and good humidification. The system is safe, simple in design and operation, and is easily sterilized. Further, it offers low resistance to expiration and facilitates scavenging at all times which, with low anaesthetic gas flows, permits complete theatre pollution control. Its potential application in academic and rural environments and major advantages over the circle absorber system are discussed.

  12. Archimedes' Principle in Action

    Science.gov (United States)

    Kires, Marian

    2007-01-01

    The conceptual understanding of Archimedes' principle can be verified in experimental procedures which determine mass and density using a floating object. This is demonstrated by simple experiments using graduated beakers. (Contains 5 figures.)

  13. A Simple Method for Discovering Druggable, Specific Glycosaminoglycan-Protein Systems. Elucidation of Key Principles from Heparin/Heparan Sulfate-Binding Proteins.

    Directory of Open Access Journals (Sweden)

    Aurijit Sarkar

    Full Text Available Glycosaminoglycans (GAGs affect human physiology and pathology by modulating more than 500 proteins. GAG-protein interactions are generally assumed to be ionic and nonspecific, but specific interactions do exist. Here, we present a simple method to identify the GAG-binding site (GBS on proteins that in turn helps predict high specific GAG-protein systems. Contrary to contemporary thinking, we found that the electrostatic potential at basic arginine and lysine residues neither identifies the GBS consistently, nor its specificity. GBSs are better identified by considering the potential at neutral hydrogen bond donors such as asparagine or glutamine sidechains. Our studies also reveal that an unusual constellation of ionic and non-ionic residues in the binding site leads to specificity. Nature engineers the local environment of Asn45 of antithrombin, Gln255 of 3-O-sulfotransferase 3, Gln163 and Asn167 of 3-O-sulfotransferase 1 and Asn27 of basic fibroblast growth factor in the respective GBSs to induce specificity. Such residues are distinct from other uncharged residues on the same protein structure in possessing a significantly higher electrostatic potential, resultant from the local topology. In contrast, uncharged residues on nonspecific GBSs such as thrombin and serum albumin possess a diffuse spread of electrostatic potential. Our findings also contradict the paradigm that GAG-binding sites are simply a collection of contiguous Arg/Lys residues. Our work demonstrates the basis for discovering specifically interacting and druggable GAG-protein systems based on the structure of protein alone, without requiring access to any structure-function relationship data.

  14. Genetic algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Grefenstette, J.J.

    1994-12-31

    Genetic algorithms solve problems by using principles inspired by natural population genetics: They maintain a population of knowledge structures that represent candidate solutions, and then let that population evolve over time through competition and controlled variation. GAs are being applied to a wide range of optimization and learning problems in many domains.
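
    As a minimal sketch of the principles just described (a population of candidate solutions evolving through competition and controlled variation), here is a toy genetic algorithm for the onemax problem; the operators and parameters are generic illustrative choices, not Grefenstette's:

        import random

        def evolve(pop_size=30, n_bits=20, gens=60, p_mut=0.02):
            """Maximize the number of 1-bits via tournament selection,
            one-point crossover and bit-flip mutation."""
            pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
            for _ in range(gens):
                nxt = []
                while len(nxt) < pop_size:
                    p1 = max(random.sample(pop, 2), key=sum)   # size-2 tournaments
                    p2 = max(random.sample(pop, 2), key=sum)
                    cut = random.randrange(1, n_bits)          # one-point crossover
                    child = p1[:cut] + p2[cut:]
                    child = [b ^ (random.random() < p_mut) for b in child]
                    nxt.append(child)
                pop = nxt
            return max(pop, key=sum)

        best = evolve()
        print(sum(best), "of", len(best), "bits set")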

  15. A simple algorithm to estimate genetic variance in an animal threshold model using Bayesian inference Genetics Selection Evolution 2010, 42:29

    DEFF Research Database (Denmark)

    Ødegård, Jørgen; Meuwissen, Theo HE; Heringstad, Bjørg

    2010-01-01

    " or "non-informative" with respect to genetic (co)variance components. The "non-informative" individuals are characterized by their Mendelian sampling deviations (deviance from the mid-parent mean) being completely confounded with a single residual on the underlying liability scale. For threshold models...... individual records exist on parents. Therefore, the aim of our study was to develop a new Gibbs sampling algorithm for a proper estimation of genetic (co)variance components within an animal threshold model framework. Methods In the proposed algorithm, individuals are classified as either "informative......, residual variance on the underlying scale is not identifiable. Hence, variance of fully confounded Mendelian sampling deviations cannot be identified either, but can be inferred from the between-family variation. In the new algorithm, breeding values are sampled as in a standard animal model using the full...

  16. SU-E-T-33: A Feasibility-Seeking Algorithm Applied to Planning of Intensity Modulated Proton Therapy: A Proof of Principle Study

    International Nuclear Information System (INIS)

    Penfold, S; Casiraghi, M; Dou, T; Schulte, R; Censor, Y

    2015-01-01

    Purpose: To investigate the applicability of feasibility-seeking cyclic orthogonal projections to the field of intensity modulated proton therapy (IMPT) inverse planning. Feasibility of constraints only, as opposed to optimization of a merit function, is less demanding algorithmically and holds a promise of parallel computations capability with non-cyclic orthogonal projections algorithms such as string-averaging or block-iterative strategies. Methods: A virtual 2D geometry was designed containing a C-shaped planning target volume (PTV) surrounding an organ at risk (OAR). The geometry was pixelized into 1 mm pixels. Four beams containing a subset of proton pencil beams were simulated in Geant4 to provide the system matrix A whose elements a_ij correspond to the dose delivered to pixel i by a unit intensity pencil beam j. A cyclic orthogonal projections algorithm was applied with the goal of finding a pencil beam intensity distribution that would meet the following dose requirements: D_OAR < 54 Gy and 57 Gy < D_PTV < 64.2 Gy. The cyclic algorithm was based on the concept of orthogonal projections onto half-spaces according to the Agmon-Motzkin-Schoenberg algorithm, also known as ‘ART for inequalities’. Results: The cyclic orthogonal projections algorithm resulted in less than 5% of the PTV pixels and less than 1% of OAR pixels violating their dose constraints, respectively. Because of the abutting OAR-PTV geometry and the realistic modelling of the pencil beam penumbra, complete satisfaction of the dose objectives was not achieved, although this would be a clinically acceptable plan for a meningioma abutting the brainstem, for example. Conclusion: The cyclic orthogonal projections algorithm was demonstrated to be an effective tool for inverse IMPT planning in the 2D test geometry described. We plan to further develop this linear algorithm to be capable of incorporating dose-volume constraints into the feasibility-seeking algorithm.
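
    A minimal sketch of the underlying Agmon-Motzkin-Schoenberg scheme ('ART for inequalities'): cyclic orthogonal projections onto the half-spaces a_i . x <= b_i, with an extra projection keeping pencil-beam intensities non-negative. The toy system matrix below is invented for illustration, not derived from Geant4:

        import numpy as np

        def cyclic_projections(A, b, x0, sweeps=200, lam=1.0):
            """Cyclically project x onto {x : a_i . x <= b_i}; lam in (0, 2]
            is a relaxation parameter (lam = 1 gives the exact projection)."""
            x = x0.astype(float).copy()
            for _ in range(sweeps):
                for a, bi in zip(A, b):
                    viol = a @ x - bi
                    if viol > 0.0:                     # move only if the constraint is violated
                        x -= lam * viol / (a @ a) * a  # orthogonal projection onto the half-space
                x = np.maximum(x, 0.0)                 # intensities must stay non-negative
            return x

        # Toy demo: 3 pencil beams, 2 voxels, target dose window [1.0, 1.2] per voxel.
        A = np.array([[0.6, 0.3, 0.1],
                      [0.1, 0.4, 0.5]])
        lo, hi = 1.0, 1.2
        half_A = np.vstack([A, -A])                    # A x <= hi  and  -A x <= -lo
        half_b = np.hstack([np.full(2, hi), np.full(2, -lo)])
        x = cyclic_projections(half_A, half_b, x0=np.zeros(3))
        print(A @ x)                                   # doses approach the [1.0, 1.2] window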

  18. Cosmological principles. II. Physical principles

    International Nuclear Information System (INIS)

    Harrison, E.R.

    1974-01-01

    The discussion of cosmological principles covers the uniformity principle of the laws of physics, the gravitation and cognizability principles, and the Dirac creation, chaos, and bootstrap principles. (U.S.)

  19. Effect of a simple two-step warfarin dosing algorithm on anticoagulant control as measured by time in therapeutic range : a pilot study

    NARCIS (Netherlands)

    Kim, Y. -K.; Nieuwlaat, R.; Connolly, S. J.; Schulman, S.; Meijer, K.; Raju, N.; Kaatz, S.; Eikelboom, J. W.

    Background: The efficacy and safety of vitamin K antagonists for the prevention of thromboembolism are dependent on the time for which the International Normalized Ratio (INR) is in the therapeutic range. The objective of our study was to determine the effect of introducing a simple two-step dosing

  20. Chronic obstructive pulmonary disease and coronary disease: COPDCoRi, a simple and effective algorithm for predicting the risk of coronary artery disease in COPD patients.

    Science.gov (United States)

    Cazzola, Mario; Calzetta, Luigino; Matera, Maria Gabriella; Muscoli, Saverio; Rogliani, Paola; Romeo, Francesco

    2015-08-01

    Chronic obstructive pulmonary disease (COPD) is often associated with coronary artery disease (CAD), representing a potential and independent risk factor for cardiovascular morbidity. Therefore, the aim of this study was to identify an algorithm for predicting the risk of CAD in COPD patients. We analyzed data from patients referred to the Cardiology ward and the Respiratory Diseases outpatient clinic of Tor Vergata University (2010-2012, 1596 records). The study population was clustered as a training population (COPD patients undergoing coronary arteriography), a control population (non-COPD patients undergoing coronary arteriography) and a test population (COPD patients whose records reported information on coronary status). The predictive model was built via causal relationships between variables, stepwise binary logistic regression and Hosmer-Lemeshow analysis. The algorithm was validated via the split-sample validation method and receiver operating characteristic (ROC) curve analysis. The diagnostic accuracy was assessed. In the training population, the variables gender (men/women OR: 1.7, 95% CI: 1.237-2.5, P < ...) were correlated with CAD in COPD patients, whereas in the control population age and diabetes were also correlated. The stepwise binary logistic regressions permitted building a well-fitting predictive model for the training population but not for the control population. The predictive algorithm showed a diagnostic accuracy of 81.5% (95% CI: 77.78-84.71) and an AUC of 0.81 (95% CI: 0.78-0.85) for the validation set. The proposed algorithm is effective for predicting the risk of CAD in COPD patients via a rapid, inexpensive and non-invasive approach. Copyright © 2015 Elsevier Ltd. All rights reserved.
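
    The reported pipeline (binary logistic regression, split-sample validation, ROC/AUC) can be mimicked on synthetic data; the sketch below uses scikit-learn with invented covariates and coefficients purely to show the workflow, not the published COPDCoRi model:

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        n = 1596                                   # sample size loosely mirroring the study
        X = np.column_stack([
            rng.integers(0, 2, n),                 # sex (1 = male), hypothetical
            rng.normal(68, 10, n),                 # age in years, hypothetical
            rng.integers(0, 2, n),                 # diabetes (yes/no), hypothetical
        ])
        logit = -6.0 + 0.5 * X[:, 0] + 0.07 * X[:, 1] + 0.4 * X[:, 2]
        y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))   # synthetic CAD outcome

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)  # split-sample validation
        model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
        auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
        print(f"validation AUC = {auc:.2f}")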

  1. First-principles molecular dynamics for metals

    International Nuclear Information System (INIS)

    Fernando, G.W.; Qian, G.; Weinert, M.; Davenport, J.W.

    1989-01-01

    A Car-Parrinello-type first-principles molecular-dynamics approach capable of treating the partial occupancy of electronic states that occurs at the Fermi level in a metal is presented. The algorithms used to study metals are both simple and computationally efficient. We also discuss the connection between ordinary electronic-structure calculations and molecular-dynamics simulations as well as the role of Brillouin-zone sampling. This extension should be useful not only for metallic solids but also for solids that become metals in their liquid and/or amorphous phases

  2. On the dosimetric behaviour of photon dose calculation algorithms in the presence of simple geometric heterogeneities: comparison with Monte Carlo calculations

    DEFF Research Database (Denmark)

    Fogliata, Antonella; Vanetti, Eugenio; Albers, Dirk

    2007-01-01

    with modified Batho correction; Nucletron Helax-TMS: the collapsed cone and the pencil beam with equivalent path length correction; CMS XiO: the multigrid superposition and the fast Fourier transform convolution; Philips Pinnacle: the collapsed cone. Monte Carlo simulations (MC) performed with the EGSnrc codes BEAMnrc and DOSxyznrc from NRCC in Ottawa were used as a benchmark. The study was carried out in simple geometrical water phantoms (rho = 1.00 g cm^-3) with inserts of different densities simulating light lung tissue (rho = 0.035 g cm^-3), normal lung (rho = 0.20 g cm^-3) and cortical bone tissue (rho

  3. A simple chemical view of relaxations at stoichiometric (1 1 0) surfaces of rutile-structure type oxides: A first-principles study of stishovite, SiO2

    Science.gov (United States)

    Muscenti, Thomas M.; Gibbs, G. V.; Cox, David F.

    2005-12-01

    First-principles electronic structure calculations have been used to examine the geometric and electronic structure of the bulk and (1 1 0) surface of stishovite, the rutile-structure polymorph of SiO2. The primary changes in geometric and electronic structure associated with surface relaxation are similar to those predicted for stoichiometric (1 1 0) surfaces of other rutile-structure oxides: TiO2, SnO2, RuO2. Occupied surface states can be attributed primarily to changes in the local coordination environment (hybridization) of surface oxygen anions, and the relaxations that lead to "rumpling" of the stoichiometric (1 1 0) surface can be viewed as a change in hybridization of 3-coordinated in-plane oxygen from a planar (sp2) bulk local coordination environment to a lower-energy, non-planar, pyramidal (sp3) surface geometry, following earlier descriptions by Godin and LaFemina for SnO2(1 1 0). It is demonstrated that these descriptions follow naturally from a visual examination of the 3D valence charge density distributions and the electron localization function (ELF), which provide a view of the electronic structure in terms of electron bond pairs and lone pairs. Consideration of the surface relaxations in terms of molecular analogs suggests that the simple valence shell electron pair repulsion (VSEPR) model provides insight into the chemical driving force for surface relaxation and oxygen rehybridization.

  4. A new simple h-mesh adaptation algorithm for standard Smagorinsky LES: a first step of Taylor scale as a refinement variable

    Directory of Open Access Journals (Sweden)

    S Kaennakham

    2016-09-01

    Full Text Available The interaction between discretization error and modeling error has led to some doubts about adopting Solution Adaptive Grid (SAG) strategies with LES. Existing SAG approaches contain undesirable aspects that make them complicated and inconvenient to apply to real engineering applications. In this work, a new refinement algorithm is proposed, aiming to enhance the efficiency of the SAG methodology in terms of simplicity of definition, reduced reliance on user judgment and computational affordability, designed especially for standard Smagorinsky LES. The construction of a new refinement variable as a function of the Taylor scale, corresponding to the kinetic energy balance requirement of the Smagorinsky SGS model, is presented. The numerical study was carried out on a turbulent plane jet in two dimensions. It is found that result quality can be effectively improved, with a significant reduction in CPU time compared to fixed-grid cases.

  5. Simple machines

    CERN Document Server

    Graybill, George

    2007-01-01

    Just how simple are simple machines? With our ready-to-use resource, they are simple to teach and easy to learn! Chock-full of information and activities, we begin with a look at force, motion and work, and examples of simple machines in daily life are given. With this background, we move on to different kinds of simple machines including: Levers, Inclined Planes, Wedges, Screws, Pulleys, and Wheels and Axles. An exploration of some compound machines follows, such as the can opener. Our resource is a real time-saver as all the reading passages and student activities are provided. Presented in s

  6. Mosaic Texture and Double c-Axis Periodicity of β-NiOOH: Insights from First-Principles and Genetic Algorithm Calculations.

    Science.gov (United States)

    Li, Ye-Fei; Selloni, Annabella

    2014-11-20

    Fe-doped NiOx has recently emerged as a promising anode material for the oxygen evolution reaction, but the origin of the high activity is still unclear, due largely to the structural uncertainty of the active phase of NiOx. Here, we report a theoretical study of the structure of β-NiOOH, one of the active components of NiOx. Using a genetic algorithm search of crystal structures combined with dispersion-corrected hybrid density functional theory calculations, we identify two groups of favorable structures: (i) layered structures with alternate Ni(OH)2 and NiO2 layers, consistent with the doubling of the c axis observed in high resolution transmission electron microscopy (TEM) measurements, and (ii) tunnel structures isostructural with MnO2 polymorphs, which can provide a rationale for the mosaic textures observed in TEM. Analysis of the Ni ions oxidation state further indicates a disproportionation of half of the Ni(3+) cations to Ni(2+)/Ni(4+) pairs. Hybrid density functionals are found essential for a correct description of the electronic structure of β-NiOOH.

  7. Simple improvements of a simple solution for inverting resolution

    NARCIS (Netherlands)

    J.C. Bioch (Cor); P.R.J. van der Laag

    1991-01-01

    In this paper we address some simple improvements of the algorithm of Rouveirol and Puget [1989] for inverting resolution. Their approach is based on an automatic change of representation called flattening and unflattening of clauses in a logic program. This enables a simple implementation

  8. Parallel algorithms

    CERN Document Server

    Casanova, Henri; Robert, Yves

    2008-01-01

    ""…The authors of the present book, who have extensive credentials in both research and instruction in the area of parallelism, present a sound, principled treatment of parallel algorithms. … This book is very well written and extremely well designed from an instructional point of view. … The authors have created an instructive and fascinating text. The book will serve researchers as well as instructors who need a solid, readable text for a course on parallelism in computing. Indeed, for anyone who wants an understandable text from which to acquire a current, rigorous, and broad vi

  9. A New Modified Firefly Algorithm

    Directory of Open Access Journals (Sweden)

    Medha Gupta

    2016-07-01

    Full Text Available Nature-inspired meta-heuristic algorithms study the emergent collective intelligence of groups of simple agents. The Firefly Algorithm is one such new swarm-based meta-heuristic algorithm, inspired by the flashing behavior of fireflies. The algorithm was first proposed in 2008 and has since been successfully used for solving various optimization problems. In this work, we propose a new modified version of the Firefly Algorithm (MoFA) and compare its performance with the standard firefly algorithm and various other meta-heuristic algorithms. Numerical studies and results demonstrate that the proposed algorithm is superior to existing algorithms.
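
    For reference, a compact sketch of the standard firefly update as commonly formulated (attractiveness decaying with squared distance plus a damped random walk); the MoFA modifications themselves are not reproduced here:

        import numpy as np

        def firefly(f, dim=2, n=25, iters=100, alpha=0.2, beta0=1.0, gamma=1.0, seed=0):
            """Minimize f: each firefly moves toward every brighter (lower-f) one."""
            rng = np.random.default_rng(seed)
            X = rng.uniform(-5.0, 5.0, size=(n, dim))
            for _ in range(iters):
                F = np.array([f(x) for x in X])
                for i in range(n):
                    for j in range(n):
                        if F[j] < F[i]:                          # j is brighter than i
                            r2 = np.sum((X[i] - X[j]) ** 2)
                            beta = beta0 * np.exp(-gamma * r2)   # attractiveness decays with distance
                            X[i] += beta * (X[j] - X[i]) + alpha * rng.normal(size=dim)
                            F[i] = f(X[i])
                alpha *= 0.97                                    # gradually damp the random walk
            return X[np.argmin([f(x) for x in X])]

        print(firefly(lambda x: np.sum(x ** 2)))  # should approach the origin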

  10. Simple unification

    International Nuclear Information System (INIS)

    Ponce, W.A.; Zepeda, A.

    1987-08-01

    We present the results obtained from our systematic search for a simple Lie group that unifies weak and electromagnetic interactions in a single truly unified theory. We work with fractionally charged quarks, and allow for particles and antiparticles to belong to the same irreducible representation. We found that models based on SU(6), SU(7), SU(8) and SU(10) are viable candidates for simple unification. (author). 23 refs

  11. A simple algorithm for computing canonical forms

    Science.gov (United States)

    Ford, H.; Hunt, L. R.; Renjeng, S.

    1986-01-01

    It is well known that all linear time-invariant controllable systems can be transformed to Brunovsky canonical form by a transformation consisting only of coordinate changes and linear feedback. However, the actual procedures for doing this have tended to be overly complex. The technique introduced here is envisioned as an on-line procedure and is inspired by George Meyer's tangent model for nonlinear systems. The process utilizes Meyer's block triangular form as an intermediate step in going to Brunovsky form. The method also involves orthogonal matrices, thus eliminating the need for the computation of matrix inverses. In addition, the Kronecker indices can be computed as a by-product of this transformation, so it is not necessary to know them in advance.

  12. Convex hull ranking algorithm for multi-objective evolutionary algorithms

    NARCIS (Netherlands)

    Davoodi Monfrared, M.; Mohades, A.; Rezaei, J.

    2012-01-01

    Due to many applications of multi-objective evolutionary algorithms in real world optimization problems, several studies have been done to improve these algorithms in recent years. Since most multi-objective evolutionary algorithms are based on the non-dominated principle, and their complexity

  13. Simple Interactions

    DEFF Research Database (Denmark)

    and international public. The exhibition Simple Interactions. Sound Art from Japan presents works by 9 Japanese artists at the Museum of Contemporary Art Roskilde. The exhibition mixes installations, performances and documentations, all of which examine how simple interactions can create complex systems...... and patterns. Works and performances by the following artists are presented: Yuji DOGANE - Yukio FUJIMOTO - Atsuhiro ITO - Soichiro MIHARA - Atsushi NISHIJIMA - Jio SHIMIZU - Toshiya TSUNODA - Tetsuya UMEDA - Miki YUI The book presents texts by Minoru HATANAKa; Takashi KOJIMA, Rune SØCHTING and the editors...

  14. Introductory review on `Flying Triangulation': a motion-robust optical 3D measurement principle

    Science.gov (United States)

    Ettl, Svenja

    2015-04-01

    'Flying Triangulation' (FlyTri) is a recently developed principle which allows for a motion-robust optical 3D measurement of rough surfaces. It combines a simple sensor with sophisticated algorithms: a single-shot sensor acquires 2D camera images. From each camera image, a 3D profile is generated. The series of 3D profiles generated are aligned to one another by algorithms, without relying on any external tracking device. It delivers real-time feedback of the measurement process which enables an all-around measurement of objects. The principle has great potential for small-space acquisition environments, such as the measurement of the interior of a car, and motion-sensitive measurement tasks, such as the intraoral measurement of teeth. This article gives an overview of the basic ideas and applications of FlyTri. The main challenges and their solutions are discussed. Measurement examples are also given to demonstrate the potential of the measurement principle.

  15. Improved FHT Algorithms for Fast Computation of the Discrete Hartley Transform

    Directory of Open Access Journals (Sweden)

    M. T. Hamood

    2013-05-01

    Full Text Available In this paper, by using the symmetrical properties of the discrete Hartley transform (DHT), an improved radix-2 fast Hartley transform (FHT) algorithm with arithmetic complexity comparable to that of the real-valued fast Fourier transform (RFFT) is developed. It has a simple and regular butterfly structure and possesses the in-place computation property. Furthermore, using the same principles, the development can be extended to more efficient radix-based FHT algorithms. An example for the improved radix-4 FHT algorithm is given to show the validity of the presented method. The arithmetic complexity of the new algorithms is computed and then compared with that of the existing FHT algorithms. The results of these comparisons have shown that the developed algorithms reduce the number of multiplications and additions considerably.
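
    A direct O(N^2) reference implementation shows exactly what these radix algorithms compute in O(N log N):

        import numpy as np

        def dht(x):
            """Discrete Hartley transform by definition:
            H[k] = sum_n x[n] * cas(2*pi*n*k/N), where cas(t) = cos(t) + sin(t)."""
            x = np.asarray(x, dtype=float)
            N = len(x)
            n = np.arange(N)
            t = 2.0 * np.pi * np.outer(n, n) / N
            return (np.cos(t) + np.sin(t)) @ x

        x = np.random.rand(8)
        H = dht(x)
        print(np.allclose(dht(H) / 8, x))  # up to a factor 1/N, the DHT inverts itself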

  16. Ant-based extraction of rules in simple decision systems over ontological graphs

    Directory of Open Access Journals (Sweden)

    Pancerz Krzysztof

    2015-06-01

    Full Text Available In the paper, the problem of extraction of complex decision rules in simple decision systems over ontological graphs is considered. The extracted rules are consistent with the dominance principle similar to that applied in the dominance-based rough set approach (DRSA). In our study, we propose to use a heuristic algorithm, utilizing the ant-based clustering approach, searching the semantic spaces of concepts presented by means of ontological graphs. Concepts included in the semantic spaces are values of attributes describing objects in simple decision systems.

  17. Simple concurrent garbage collection almost without synchronization

    NARCIS (Netherlands)

    Hesselink, Wim H.; Lali, M.I.

    We present two simple mark and sweep algorithms, A and B, for concurrent garbage collection by a single collector running concurrently with a number of mutators that concurrently modify shared data. Both algorithms are based on the ideas of Ben-Ari's classical algorithm for on-the-fly garbage

  18. Algorithming the Algorithm

    DEFF Research Database (Denmark)

    Mahnke, Martina; Uprichard, Emma

    2014-01-01

    changes: it’s not the ocean, it’s the internet we’re talking about, and it’s not a TV show producer, but algorithms that constitute a sort of invisible wall. Building on this assumption, most research is trying to ‘tame the algorithmic tiger’. While this is a valuable and often inspiring approach, we...

  19. ASP made simple

    CERN Document Server

    Deane, Sharon

    2003-01-01

    ASP Made Simple provides a brief introduction to ASP for the person who favours self teaching and/or does not have expensive computing facilities to learn on. The book will demonstrate how the principles of ASP can be learned with an ordinary PC running Personal Web Server, MS Access and a general text editor like Notepad. After working through the material readers should be able to: * Write ASP scripts that can display changing information on a web browser; * Request records from a remote database or add records to it; * Check user names & passwords and take this knowledge forward, either for their

  20. Variational principles

    CERN Document Server

    Moiseiwitsch, B L

    2004-01-01

    This graduate-level text's primary objective is to demonstrate the expression of the equations of the various branches of mathematical physics in the succinct and elegant form of variational principles (and thereby illuminate their interrelationship). Its related intentions are to show how variational principles may be employed to determine the discrete eigenvalues for stationary state problems and to illustrate how to find the values of quantities (such as the phase shifts) that arise in the theory of scattering. Chapter-by-chapter treatment consists of analytical dynamics; optics, wave mecha

  1. Mach's Principle

    Indian Academy of Sciences (India)

    that allows one to write down the laws of motion and arrive at the concept of inertia is somehow intimately related to the background of distant parts of the universe. This argument is known as 'Mach's principle' and we will analyse its implications further. When expressed in the framework of the absolute space, Newton's ...

  2. Safety Principles

    Directory of Open Access Journals (Sweden)

    V. A. Grinenko

    2011-06-01

    Full Text Available The material in this article is arranged so that the reader can form a complete picture of the concept of "safety", its intrinsic characteristics and the possibilities for formalizing it. Principles and possible strategies of safety are considered. The article is intended for experts dealing with problems of safety.

  3. Mach's Principle

    Indian Academy of Sciences (India)

    popularize science. The underlying idea in Mach's principle is that the origin of inertia or mass of a particle is a dynamical quantity determined by the environment. Knowing the latitude of the location of the pendulum it is possible to calculate the Earth's spin period. The two methods give the same answer. At first sight this does ...

  4. Principles of speech coding

    CERN Document Server

    Ogunfunmi, Tokunbo

    2010-01-01

    It is becoming increasingly apparent that all forms of communication-including voice-will be transmitted through packet-switched networks based on the Internet Protocol (IP). Therefore, the design of modern devices that rely on speech interfaces, such as cell phones and PDAs, requires a complete and up-to-date understanding of the basics of speech coding. Outlines key signal processing algorithms used to mitigate impairments to speech quality in VoIP networks. Offering a detailed yet easily accessible introduction to the field, Principles of Speech Coding provides an in-depth examination of the

  5. Speed, Acceleration, and Velocity: Level II, Unit 9, Lesson 1; Force, Mass, and Distance: Lesson 2; Types of Motion and Rest: Lesson 3; Electricity and Magnetism: Lesson 4; Electrical, Magnetic, and Gravitational Fields: Lesson 5; The Conservation and Conversion of Matter and Energy: Lesson 6; Simple Machines and Work: Lesson 7; Gas Laws: Lesson 8; Principles of Heat Engines: Lesson 9; Sound and Sound Waves: Lesson 10; Light Waves and Particles: Lesson 11; Program. A High.....

    Science.gov (United States)

    Manpower Administration (DOL), Washington, DC. Job Corps.

    This self-study program for high-school level contains lessons on: Speed, Acceleration, and Velocity; Force, Mass, and Distance; Types of Motion and Rest; Electricity and Magnetism; Electrical, Magnetic, and Gravitational Fields; The Conservation and Conversion of Matter and Energy; Simple Machines and Work; Gas Laws; Principles of Heat Engines;…

  6. A Flocking Based algorithm for Document Clustering Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Cui, Xiaohui [ORNL; Gao, Jinzhu [ORNL; Potok, Thomas E [ORNL

    2006-01-01

    Social animals or insects in nature often exhibit a form of emergent collective behavior known as flocking. In this paper, we present a novel Flocking-based approach for document clustering analysis. Our Flocking clustering algorithm uses stochastic and heuristic principles discovered from observing bird flocks or fish schools. Unlike other partition clustering algorithms such as K-means, the Flocking-based algorithm does not require initial partitional seeds. The algorithm generates a clustering of a given set of data through the embedding of the high-dimensional data items on a two-dimensional grid for easy clustering result retrieval and visualization. Inspired by the self-organized behavior of bird flocks, we represent each document object with a flock boid. The simple local rules followed by each flock boid result in the entire document flock generating complex global behaviors, which eventually result in a clustering of the documents. We evaluate the efficiency of our algorithm with both a synthetic dataset and a real document collection that includes 100 news articles collected from the Internet. Our results show that the Flocking clustering algorithm achieves better performance compared to the K-means and the Ant clustering algorithms for real document clustering.
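
    A hedged sketch of the three classic boid rules that such flocking algorithms build on (cohesion, separation, alignment); the document-clustering variant additionally weights interactions by document similarity, which is omitted here:

        import numpy as np

        def flock_step(pos, vel, radius=2.0, w_coh=0.02, w_sep=0.08, w_ali=0.05, v_max=0.3):
            """One synchronous update of the classic boid rules in 2D."""
            new_vel = vel.copy()
            for i in range(len(pos)):
                diff = pos - pos[i]
                dist = np.linalg.norm(diff, axis=1)
                nbr = (dist > 0) & (dist < radius)
                if nbr.any():
                    new_vel[i] += w_coh * diff[nbr].mean(axis=0)                          # cohesion
                    new_vel[i] -= w_sep * (diff[nbr] / dist[nbr, None] ** 2).sum(axis=0)  # separation
                    new_vel[i] += w_ali * (vel[nbr].mean(axis=0) - vel[i])                # alignment
                speed = np.linalg.norm(new_vel[i])
                if speed > v_max:
                    new_vel[i] *= v_max / speed   # cap the speed
            return pos + new_vel, new_vel

        rng = np.random.default_rng(1)
        pos, vel = rng.uniform(0, 10, (50, 2)), rng.normal(0, 0.1, (50, 2))
        for _ in range(200):
            pos, vel = flock_step(pos, vel)       # boids gradually gather into flocks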

  7. Zymography Principles.

    Science.gov (United States)

    Wilkesman, Jeff; Kurz, Liliana

    2017-01-01

    Zymography, the detection, identification, and even quantification of enzyme activity fractionated by gel electrophoresis, has received increasing attention in recent years, as revealed by the number of articles published. A number of enzymes are routinely detected by zymography, especially those of clinical interest. This introductory chapter reviews the major principles behind zymography. New advances in this method are basically focused on two-dimensional zymography and transfer zymography, as will be explained in the rest of the chapters. Some general considerations when performing the experiments are outlined, as well as the major troubleshooting and safety issues necessary for correct development of the electrophoresis.

  8. Basic principles

    International Nuclear Information System (INIS)

    Wilson, P.D.

    1996-01-01

    Some basic explanations are given of the principles underlying the nuclear fuel cycle, starting with the physics of atomic and nuclear structure and continuing with nuclear energy and reactors, fuel and waste management and finally a discussion of economics and the future. An important aspect of the fuel cycle concerns the possibility of "closing the back end", i.e. reprocessing the waste or unused fuel in order to re-use it in reactors of various kinds. The alternative, the "once-through" cycle, discards the discharged fuel completely. An interim measure involves the prolonged storage of highly radioactive waste fuel. (UK)

  9. Demonstrating Fermat's Principle in Optics

    Science.gov (United States)

    Paleiov, Orr; Pupko, Ofir; Lipson, S. G.

    2011-01-01

    We demonstrate Fermat's principle in optics by a simple experiment using reflection from an arbitrarily shaped one-dimensional reflector. We investigated a range of possible light paths from a lamp to a fixed slit by reflection in a curved reflector and showed by direct measurement that the paths along which light is concentrated have either…

  10. Retrospective and Prospective Human Intravenous and Oral Pharmacokinetic Projection of Dipeptidyl peptidase-IV Inhibitors Using Simple Allometric Principles - Case Studies of ABT-279, ABT-341, Alogliptin, Carmegliptin, Sitagliptin and Vildagliptin.

    Science.gov (United States)

    Gilibili, Ravindranath R; Bhamidipati, Ravi Kanth; Mullangi, Ramesh; Srinivas, Nuggehally R

    2015-01-01

    The purpose of this exercise was to explore the utility of the allometric scaling approach for the prediction of intravenous and oral pharmacokinetics of six dipeptidyl peptidase-IV (DPP-IV) inhibitors viz. ABT-279, ABT-341, alogliptin, carmegliptin, sitagliptin and vildagliptin. The availability of intravenous and oral pharmacokinetic data in animals enabled the allometric scaling of 6 DPP-IV inhibitors. The relationship between the main pharmacokinetic parameters [viz. volume of distribution (Vd) and clearance (CL)] and body weight was studied across three or four mammalian species, using double logarithmic plots to predict the human pharmacokinetic parameters of CL and Vd using simple allometry. A simple allometry relationship, Y = aW^b, was found to be adequate for the prediction of intravenous and oral human clearance/volume of distribution for DPP-IV inhibitors. The allometric equations for alogliptin, carmegliptin, sitagliptin, vildagliptin, ABT-279 and ABT-341 were 1.867W^0.780, 1.170W^0.756, 2.020W^0.529, 1.959W^0.847, 0.672W^1.016 and 1.077W^0.649, respectively, to predict intravenous clearance (CL), and the corresponding equations to predict intravenous volume of distribution (Vd) were: 3.313W^0.987, 6.096W^0.992, 7.140W^0.805, 2.742W^0.941, 1.299W^0.695 and 5.370W^0.803. With the exception of a few discordant values the exponent rule appeared to hold for CL (0.75) and Vd (1.0) for the predictions of various DPP-IV inhibitors. Regardless of the routes, the predicted values were within 2-3 fold of observed values and intravenous allometry was better than oral allometry. Simple allometry retrospectively predicted with reasonable accuracy the human reported values of gliptins and could be used as a prospective tool for this class of drugs.
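
    The mechanics of simple allometry, Y = aW^b, amount to a straight-line fit on log-log axes; the sketch below uses invented per-species numbers purely to show the fitting and the prospective extrapolation to a 70 kg human:

        import numpy as np

        # Hypothetical per-species data: body weight (kg) and clearance (mL/min).
        weights = np.array([0.25, 2.5, 8.0, 35.0])
        clearances = np.array([2.1, 14.0, 36.0, 120.0])

        # Fit log CL = log a + b log W, i.e. CL = a * W^b.
        b, log_a = np.polyfit(np.log(weights), np.log(clearances), 1)
        a = np.exp(log_a)
        print(f"CL = {a:.3f} * W^{b:.3f}")

        # Prospective human prediction at 70 kg.
        print(f"predicted human CL = {a * 70.0 ** b:.1f} mL/min")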

  11. Simple machines made simple a teacher resource manual

    CERN Document Server

    Andre, Ralph E St

    1993-01-01

    This book allows you to present scientific principles and simple mechanics through hands-on cooperative learning activities. Using inexpensive materials (e.g., tape, paper clips), students build simple machines-such as levers, pulleys, spring scales, gears, wheels and axles, windmills, and wedges-that demonstrate how things work. Activities have easy-to-locate materials lists, time requirements, and step-by-step directions (usually illustrated) on presentation. Ideas for bulletin boards, learning centers, and computer-assisted instruction are an added bonus.

  12. Unsupervised Classification Using Immune Algorithm

    OpenAIRE

    Al-Muallim, M. T.; El-Kouatly, R.

    2012-01-01

    An unsupervised classification algorithm based on the clonal selection principle, named Unsupervised Clonal Selection Classification (UCSC), is proposed in this paper. The newly proposed algorithm is data-driven and self-adaptive; it adjusts its parameters to the data to make the classification operation as fast as possible. The performance of UCSC is evaluated by comparing it with the well known K-means algorithm using several artificial and real-life data sets. The experiments show that the proposed U...

  13. Gamescape Principles

    DEFF Research Database (Denmark)

    Nobaew, Banphot; Ryberg, Thomas

    2011-01-01

    This paper proposes a new theoretical framework or visual grammar for analysing visual aspects of digital 3D games, and for understanding more deeply the notion of Visual Digital Game Literacy. The framework focuses on the development of a visual grammar by drawing on the digital literacy framework developed by Buckingham. It supplements and extends this framework by offering a more detailed account of how visual principles and elements in games can be analysed. In developing this visual grammar we draw theoretically on existing approaches within: the arts, history, film study, semiotics, multimodal analysis, and game studies. We illustrate the theoretical and analytical framework by analysing samples of screenshots and video clips collected from the online game "World of Warcraft" (WoW) where we have conducted our online research. The research data is supplemented by ethnographic data (observation...

  14. A simple convex optimization problem with many applications

    DEFF Research Database (Denmark)

    Vidal, Rene Victor Valqui

    1994-01-01

    This paper presents an algorithm for the solution of a simple convex optimization problem. This problem is a generalization of several other optimization problems which have applications to resource allocation, optimal capacity expansion, and vehicle scheduling. The algorithm is based...

  15. Quantum Computations: Fundamentals and Algorithms

    International Nuclear Information System (INIS)

    Duplij, S.A.; Shapoval, I.I.

    2007-01-01

    Basic concepts of quantum information theory, the principles of quantum computation and the possibility of creating on this basis a device unique in its computational power and principle of operation, named a quantum computer, are considered. The main blocks of quantum logic, schemes for implementing quantum computations, as well as some effective quantum algorithms known today, designed to realize the advantages of quantum computation over classical computation, are presented. Among them a special place is taken by Shor's algorithm for number factorization and Grover's algorithm for unsorted database search. The phenomenon of decoherence, its influence on quantum computer stability and methods of quantum error correction are described.
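
    A tiny statevector simulation of Grover's search (oracle sign-flip followed by inversion about the mean) illustrates one of the algorithms mentioned above, in its textbook form:

        import numpy as np

        def grover(n_qubits, marked):
            """Return the most likely index and its probability after the
            standard ~(pi/4)*sqrt(N) Grover iterations."""
            N = 2 ** n_qubits
            iters = int(np.floor(np.pi / 4 * np.sqrt(N)))
            psi = np.full(N, 1.0 / np.sqrt(N))   # uniform superposition
            for _ in range(iters):
                psi[marked] *= -1.0              # oracle: flip the marked amplitude
                psi = 2.0 * psi.mean() - psi     # diffusion: inversion about the mean
            probs = psi ** 2
            return int(np.argmax(probs)), float(np.max(probs))

        print(grover(n_qubits=4, marked=11))     # finds index 11 with probability ~0.96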

  16. VARIATIONAL PRINCIPLE FOR PLANETARY INTERIORS

    International Nuclear Information System (INIS)

    Zeng, Li; Jacobsen, Stein B.

    2016-01-01

    In the past few years, the number of confirmed planets has grown above 2000. It is clear that they represent a diversity of structures not seen in our own solar system. In addition to very detailed interior modeling, it is valuable to have a simple analytical framework for describing planetary structures. The variational principle is a fundamental principle in physics, entailing that a physical system follows the trajectory which minimizes its action. It is an alternative to the differential equation formulation of a physical system. Applying the variational principle to the planetary interior can beautifully summarize the set of differential equations into one, which provides us with some insight into the problem. From this principle, a universal mass–radius relation, an estimate of the error propagation from the equation of state to the mass–radius relation, and a form of the virial theorem applicable to planetary interiors are derived.

  17. Essential algorithms a practical approach to computer algorithms

    CERN Document Server

    Stephens, Rod

    2013-01-01

    A friendly and accessible introduction to the most useful algorithms Computer algorithms are the basic recipes for programming. Professional programmers need to know how to use algorithms to solve difficult programming problems. Written in simple, intuitive English, this book describes how and when to use the most practical classic algorithms, and even how to create new algorithms to meet future needs. The book also includes a collection of questions that can help readers prepare for a programming job interview. Reveals methods for manipulating common data structures s

  18. Microscopic Description of Le Chatelier's Principle

    Science.gov (United States)

    Novak, Igor

    2005-01-01

    A simple approach that "demystifies" Le Chatelier's principle (LCP) and stimulates students to think about the fundamental physical background behind the well-known principles is presented. The approach uses microscopic descriptors of matter like energy levels and populations and does not require any assumption about the fixed amount of substance being…

  19. A Simple Approach to the Reconstruction of a Set of Points from the Multiset of n^2 Pairwise Distances in n^2 Steps for the Sequencing Problem: II. Algorithm.

    Science.gov (United States)

    Fomin, Eduard

    2016-12-01

    A new uniform algorithm based on sequential removal of redundancy from inputs is proposed to solve the turnpike and beltway problems. For error-free inputs that simulate experimental data with high accuracy, the size of inputs decreases from [Formula: see text] to [Formula: see text], which permits one to eliminate exhaustive search almost completely and reconstruct sequences in n^2 steps. Computational experiments show high efficiency of the algorithm for both the turnpike and beltway cases, with the reconstruction time for sequences of lengths up to several thousand elements being within 1 second on a modern PC.
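
    For contrast with the paper's redundancy-removal approach, here is the classic backtracking baseline for the turnpike problem that it improves upon: repeatedly place the largest unexplained distance against one of the two endpoints and backtrack on failure:

        from collections import Counter

        def turnpike(distances):
            """Reconstruct points on a line from the multiset of pairwise distances."""
            D = Counter(distances)
            width = max(D)
            D[width] -= 1
            points = {0, width}

            def consume(y):
                """Consume all |y - p| distances if available, else return None."""
                used = Counter(abs(y - p) for p in points)
                if all(D[d] >= c for d, c in used.items()):
                    D.subtract(used)
                    return used
                return None

            def place():
                if sum(D.values()) == 0:
                    return True
                d = max(k for k, c in D.items() if c > 0)
                for y in (d, width - d):          # the largest distance must touch an endpoint
                    used = consume(y)
                    if used is not None:
                        points.add(y)
                        if place():
                            return True
                        points.discard(y)         # backtrack
                        D.update(used)
                return False

            return sorted(points) if place() else None

        print(turnpike([1, 2, 2, 3, 3, 5, 5, 6, 7, 8]))  # one solution: [0, 2, 5, 7, 8]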

  20. Stochastic Programming with Simple Integer Recourse

    NARCIS (Netherlands)

    Louveaux, François V.; van der Vlerk, Maarten H.

    1993-01-01

    Stochastic integer programs are notoriously difficult. Very few properties are known and solution algorithms are very scarce. In this paper, we introduce the class of stochastic programs with simple integer recourse, a natural extension of the simple recourse case extensively studied in stochastic

  1. Recovery Rate of Clustering Algorithms

    NARCIS (Netherlands)

    Li, Fajie; Klette, Reinhard; Wada, T; Huang, F; Lin, S

    2009-01-01

    This article provides a simple and general way for defining the recovery rate of clustering algorithms using a given family of old clusters for evaluating the performance of the algorithm when calculating a family of new clusters. Under the assumption of dealing with simulated data (i.e., known old

  3. Mao-Gilles Stabilization Algorithm

    Directory of Open Access Journals (Sweden)

    Jérôme Gilles

    2013-07-01

    Full Text Available Originally, the Mao-Gilles stabilization algorithm was designed to compensate the non-rigid deformations due to atmospheric turbulence. Given a sequence of frames affected by atmospheric turbulence, the algorithm uses a variational model combining optical flow and regularization to characterize the static observed scene. The optimization problem is solved by Bregman Iteration and the operator splitting method. The algorithm is simple, efficient, and can be easily generalized for different scenarios involving non-rigid deformations.

  4. Algorithmic mathematics

    CERN Document Server

    Hougardy, Stefan

    2016-01-01

    Algorithms play an increasingly important role in nearly all fields of mathematics. This book allows readers to develop basic mathematical abilities, in particular those concerning the design and analysis of algorithms as well as their implementation. It presents not only fundamental algorithms like the sieve of Eratosthenes, the Euclidean algorithm, sorting algorithms, algorithms on graphs, and Gaussian elimination, but also discusses elementary data structures, basic graph theory, and numerical questions. In addition, it provides an introduction to programming and demonstrates in detail how to implement algorithms in C++. This textbook is suitable for students who are new to the subject and covers a basic mathematical lecture course, complementing traditional courses on analysis and linear algebra. Both authors have given this "Algorithmic Mathematics" course at the University of Bonn several times in recent years.

  5. Total algorithms

    NARCIS (Netherlands)

    Tel, G.

    We define the notion of total algorithms for networks of processes. A total algorithm enforces that a "decision" is taken by a subset of the processes, and that participation of all processes is required to reach this decision. Total algorithms are an important building block in the design of

  6. Nonlinear optics principles and applications

    CERN Document Server

    Li, Chunfei

    2017-01-01

    This book reflects the latest advances in nonlinear optics. Besides the simple, strict mathematical deduction, it also discusses the experimental verification and possible future applications, such as the all-optical switches. It consistently uses the practical unit system throughout. It employs simple physical images, such as "light waves" and "photons" to systematically explain the main principles of nonlinear optical effects. It uses the first-order nonlinear wave equation in frequency domain under the condition of “slowly varying amplitude approximation" and the classical model of the interaction between the light and electric dipole. At the same time, it also uses the rate equations based on the energy-level transition of particle systems excited by photons and the energy and momentum conservation principles to explain the nonlinear optical phenomenon. The book is intended for researchers, engineers and graduate students in the field of the optics, optoelectronics, fiber communication, information tech...

  7. The gauge principle vs. the equivalence principle

    International Nuclear Information System (INIS)

    Gates, S.J. Jr.

    1984-01-01

    Within the context of field theory, it is argued that the role of the equivalence principle may be replaced by the principle of gauge invariance to provide a logical framework for theories of gravitation

  8. Equivalence principles and electromagnetism

    Science.gov (United States)

    Ni, W.-T.

    1977-01-01

    The implications of the weak equivalence principles are investigated in detail for electromagnetic systems in a general framework. In particular, it is shown that the universality of free-fall trajectories (Galileo weak equivalence principle) does not imply the validity of the Einstein equivalence principle. However, the Galileo principle plus the universality of free-fall rotation states does imply the Einstein principle.

  9. The principle of least action

    CERN Document Server

    Rojo, Alberto

    2018-01-01

    The principle of least action originates in the idea that, if nature has a purpose, it should follow a minimum or critical path. This simple principle, and its variants and generalizations, applies to optics, mechanics, electromagnetism, relativity, and quantum mechanics, and provides an essential guide to understanding the beauty of physics. This unique text provides an accessible introduction to the action principle across these various fields of physics, and examines its history and fundamental role in science. It includes - with varying levels of mathematical sophistication - explanations from historical sources, discussion of classic papers, and original worked examples. The result is a story that is understandable to those with a modest mathematical background, as well as to researchers and students in physics and the history of physics.

  10. A Simple Inexpensive Procedure for Illustrating Some Principles of Tomography

    Science.gov (United States)

    Darvey, Ivan G.

    2013-01-01

    The experiment proposed here illustrates some concepts of tomography via a qualitative determination of the relative concentration of various dilutions of food dye without "a priori" knowledge of the concentration of each dye mixture. This is performed in a manner analogous to computed tomography (CT) scans. In order to determine the…

  11. Storage capacity of the Tilinglike Learning Algorithm

    International Nuclear Information System (INIS)

    Buhot, Arnaud; Gordon, Mirta B.

    2001-01-01

    The storage capacity of an incremental learning algorithm for the parity machine, the Tilinglike Learning Algorithm, is analytically determined in the limit of a large number of hidden perceptrons. Different learning rules for the simple perceptron are investigated. The usual Gardner-Derrida rule leads to a storage capacity close to the upper bound, which is independent of the learning algorithm considered

  12. On König's root finding algorithms

    DEFF Research Database (Denmark)

    Buff, Xavier; Henriksen, Christian

    2003-01-01

    In this paper, we first recall the definition of a family of root-finding algorithms known as König's algorithms. We establish some local and some global properties of those algorithms. We give a characterization of rational maps which arise as König's methods of polynomials with simple roots. We...

  13. Substoichiometric method in the simple radiometric analysis

    International Nuclear Information System (INIS)

    Ikeda, N.; Noguchi, K.

    1979-01-01

    The substoichiometric method is applied to simple radiometric analysis. Two methods - the standard reagent method and the standard sample method - are proposed. The validity of the principle of the methods is verified experimentally in the determination of silver by the precipitation method, or of zinc by the ion-exchange or solvent-extraction method. The proposed methods are simple and rapid compared with the conventional superstoichiometric method. (author)

  14. Simple Electromagnetic Analysis in Cryptography

    Directory of Open Access Journals (Sweden)

    Zdenek Martinasek

    2012-07-01

    Full Text Available The article describes the main principles and methods of simple electromagnetic analysis (SEMA) and thus provides an overview of the technique. The introductory chapters describe specific SPA-style attacks based on visual inspection of EM traces, template-based attacks, and collision attacks. After reading the article, the reader is sufficiently informed of the context of SEMA. Another aim of the article is a practical realization of SEMA, focused on an AES implementation. The visual inspection of the EM trace of AES is performed step by step, and the result is the determination of the Hamming weight of the secret key. On the resulting EM trace, Hamming weights of the secret key from 1 to 8 were clearly visible. This method allows a reduction of the number of possible keys for a subsequent brute-force attack.
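
    The quantity recovered by the visual inspection described above is the Hamming weight of key bytes. A hedged sketch (not the authors' measurement code) of why knowing that weight shrinks the brute-force search space:

    ```python
    from math import comb

    def hamming_weight(byte):
        """Number of set bits in one key byte (0..8)."""
        return bin(byte).count("1")

    # If SEMA reveals the Hamming weight of a key byte, only the bytes
    # with that weight remain as candidates for a brute-force search.
    for w in range(9):
        print(f"weight {w}: {comb(8, w)} candidate bytes out of 256")
    ```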

  15. An Experimental Method for the Active Learning of Greedy Algorithms

    Science.gov (United States)

    Velazquez-Iturbide, J. Angel

    2013-01-01

    Greedy algorithms constitute an apparently simple algorithm design technique, but their learning goals are not simple to achieve. We present a didactic method aimed at promoting active learning of greedy algorithms. The method is focused on the concept of the selection function and is based on explicit learning goals. It mainly consists of an…

  16. First-principles study of complex material systems

    Science.gov (United States)

    He, Lixin

    This thesis covers several topics concerning the study of complex materials systems by first-principles methods. It contains four chapters. A brief, introductory motivation of this work will be given in Chapter 1. In Chapter 2, I will give a short overview of the first-principles methods, including density-functional theory (DFT), planewave pseudopotential methods, and the Berry-phase theory of polarization in crystalline insulators. I then discuss in detail the locality and exponential decay properties of Wannier functions and of related quantities such as the density matrix, and their application in linear-scaling algorithms. In Chapter 3, I investigate the interaction of oxygen vacancies and 180° domain walls in tetragonal PbTiO3 using first-principles methods. Our calculations indicate that the oxygen vacancies have a lower formation energy in the domain wall than in the bulk, thereby confirming the tendency of these defects to migrate to, and pin, the domain walls. The pinning energies are reported for each of the three possible orientations of the original Ti--O--Ti bonds, and attempts to model the results with simple continuum models are discussed. CaCu3Ti4O12 (CCTO) has attracted a lot of attention recently because it was found to have an enormous dielectric response over a very wide temperature range. In Chapter 4, I study the electronic and lattice structure, and the lattice dynamical properties, of this system. Our first-principles calculations together with experimental results point towards an extrinsic mechanism as the origin of the unusual dielectric response.

  17. Simple Numerical Simulation of Strain Measurement

    Science.gov (United States)

    Tai, H.

    2002-01-01

    By adopting the basic principle of the reflection (and transmission) of a plane polarized electromagnetic wave incident normal to a stack of films of alternating refractive index, a simple numerical code was written to simulate the maximum reflectivity (transmittivity) of a fiber optic Bragg grating corresponding to various non-uniform strain conditions including photo-elastic effect in certain cases.
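
    The reflection principle mentioned here is the standard characteristic-matrix method for a film stack at normal incidence. A minimal sketch of that principle (not the author's code; the layer indices and thicknesses below are illustrative):

    ```python
    import numpy as np

    def stack_reflectivity(n_layers, d_layers, wavelength, n_in=1.0, n_out=1.0):
        """Reflectivity of a film stack at normal incidence via 2x2
        characteristic matrices (Born & Wolf convention)."""
        M = np.eye(2, dtype=complex)
        for n, d in zip(n_layers, d_layers):
            delta = 2 * np.pi * n * d / wavelength   # phase thickness
            layer = np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                              [1j * n * np.sin(delta), np.cos(delta)]])
            M = M @ layer
        m11, m12, m21, m22 = M[0, 0], M[0, 1], M[1, 0], M[1, 1]
        r = ((n_in * m11 + n_in * n_out * m12 - m21 - n_out * m22) /
             (n_in * m11 + n_in * n_out * m12 + m21 + n_out * m22))
        return abs(r) ** 2

    # Quarter-wave stack of alternating indices, tuned to 1550 nm
    lam = 1550e-9
    n_stack = [1.46, 1.47] * 20
    d_stack = [lam / (4 * n) for n in n_stack]
    print(stack_reflectivity(n_stack, d_stack, lam))
    ```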

  18. Elementary functions algorithms and implementation

    CERN Document Server

    Muller, Jean-Michel

    2016-01-01

    This textbook presents the concepts and tools necessary to understand, build, and implement algorithms for computing elementary functions (e.g., logarithms, exponentials, and the trigonometric functions). Both hardware- and software-oriented algorithms are included, along with issues related to accurate floating-point implementation. This third edition has been updated and expanded to incorporate the most recent advances in the field, new elementary function algorithms, and function software. After a preliminary chapter that briefly introduces some fundamental concepts of computer arithmetic, such as floating-point arithmetic and redundant number systems, the text is divided into three main parts. Part I considers the computation of elementary functions using algorithms based on polynomial or rational approximations and using table-based methods; the final chapter in this section deals with basic principles of multiple-precision arithmetic. Part II is devoted to a presentation of “shift-and-add” algorithm...

  19. A Simple Spectral Observer

    Directory of Open Access Journals (Sweden)

    Lizeth Torres

    2018-05-01

    Full Text Available The principal aim of a spectral observer is twofold: the reconstruction of a time signal via state estimation and the decomposition of such a signal into the frequencies that make it up. A spectral observer can be catalogued as an online algorithm for time-frequency analysis because it is a method that can compute on the fly the Fourier transform (FT) of a signal, without having the entire signal available from the start. In this regard, this paper presents a novel spectral observer with an adjustable constant gain for reconstructing a given signal by means of the recursive identification of the coefficients of a Fourier series. The reconstruction or estimation of a signal in the context of this work means to find the coefficients of a linear combination of sines and cosines that fits the signal such that it can be reproduced. The design procedure of the spectral observer is presented along with the following applications: (1) the reconstruction of a simple periodic signal, (2) the approximation of both a square and a triangular signal, (3) edge detection in signals by using the Fourier coefficients, (4) the fitting of the historical Bitcoin market data from 1 December 2014 to 8 January 2018, and (5) the estimation of an input force acting upon a Duffing oscillator. To round out this paper, we present a detailed discussion of the results of the applications as well as a comparative analysis of the proposed spectral observer vis-à-vis the Short Time Fourier Transform (STFT), which is a well-known method for time-frequency analysis.
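
    The recursive identification described in the abstract can be illustrated with a constant-gain, LMS-style update of Fourier coefficients. This is a sketch of the principle rather than the authors' observer; the gain and harmonic count are assumptions:

    ```python
    import numpy as np

    def recursive_fourier_fit(signal, t, omega0, n_harmonics=5, gain=0.05):
        """Recursively identify Fourier-series coefficients with a
        constant-gain update driven by the reconstruction error."""
        n_coeffs = 2 * n_harmonics + 1            # DC + sin/cos pairs
        theta = np.zeros(n_coeffs)
        estimates = np.zeros_like(signal)
        for k, (tk, yk) in enumerate(zip(t, signal)):
            # Regressor: [1, cos(w t), sin(w t), cos(2 w t), sin(2 w t), ...]
            phi = np.ones(n_coeffs)
            for h in range(1, n_harmonics + 1):
                phi[2 * h - 1] = np.cos(h * omega0 * tk)
                phi[2 * h] = np.sin(h * omega0 * tk)
            estimates[k] = phi @ theta
            theta += gain * (yk - estimates[k]) * phi   # constant-gain correction
        return theta, estimates

    t = np.linspace(0, 10, 2000)
    square = np.sign(np.sin(2 * np.pi * 0.5 * t))       # square wave at 0.5 Hz
    theta, est = recursive_fourier_fit(square, t, 2 * np.pi * 0.5)
    print(np.round(theta[:5], 3))
    ```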

  20. A Parallel Butterfly Algorithm

    KAUST Repository

    Poulson, Jack

    2014-02-04

    The butterfly algorithm is a fast algorithm which approximately evaluates a discrete analogue of the integral transform (Equation Presented.) at large numbers of target points when the kernel, K(x, y), is approximately low-rank when restricted to subdomains satisfying a certain simple geometric condition. In d dimensions with O(N^d) quasi-uniformly distributed source and target points, when each appropriate submatrix of K is approximately rank-r, the running time of the algorithm is at most O(r^2 N^d log N). A parallelization of the butterfly algorithm is introduced which, assuming a message latency of α and per-process inverse bandwidth of β, executes in at most (Equation Presented.) time using p processes. This parallel algorithm was then instantiated in the form of the open-source DistButterfly library for the special case where K(x, y) = exp(iΦ(x, y)), where Φ(x, y) is a black-box, sufficiently smooth, real-valued phase function. Experiments on Blue Gene/Q demonstrate impressive strong-scaling results for important classes of phase functions. Using quasi-uniform sources, hyperbolic Radon transforms, and an analogue of a three-dimensional generalized Radon transform were, respectively, observed to strong-scale from 1-node/16-cores up to 1024-nodes/16,384-cores with greater than 90% and 82% efficiency, respectively. © 2014 Society for Industrial and Applied Mathematics.

  1. A retrodictive stochastic simulation algorithm

    International Nuclear Information System (INIS)

    Vaughan, T.G.; Drummond, P.D.; Drummond, A.J.

    2010-01-01

    In this paper we describe a simple method for inferring the initial states of systems evolving stochastically according to master equations, given knowledge of the final states. This is achieved through the use of a retrodictive stochastic simulation algorithm which complements the usual predictive stochastic simulation approach. We demonstrate the utility of this new algorithm by applying it to example problems, including the derivation of likely ancestral states of a gene sequence given a Markovian model of genetic mutation.
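
    For contrast with the retrodictive variant, the usual predictive stochastic simulation algorithm that it complements can be sketched for a simple decay process (the rate constant and population below are illustrative):

    ```python
    import random

    def gillespie_decay(n0, k, t_max):
        """Predictive SSA for the decay reaction A -> 0 at rate k per
        molecule. The retrodictive variant described above instead infers
        likely initial states from final states; this sketch shows only
        the forward (predictive) algorithm."""
        t, n, trajectory = 0.0, n0, [(0.0, n0)]
        while t < t_max and n > 0:
            rate = k * n                       # total propensity
            t += random.expovariate(rate)      # waiting time to next event
            n -= 1                             # one decay event fires
            trajectory.append((t, n))
        return trajectory

    print(gillespie_decay(100, 0.1, 50.0)[-5:])
    ```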

  2. Principles of Chemistry (by Michael Munowitz)

    Science.gov (United States)

    Kovac, Reviewed By Jeffrey

    2000-05-01

    At a time when almost all general chemistry textbooks seem to have become commodities designed by marketing departments to offend no one, it is refreshing to find a book with a unique perspective. Michael Munowitz has written what I can only describe as a delightful chemistry book, full of conceptual insight, that uses a novel and interesting pedagogic strategy. This is a book that has much to recommend it. This is the best-written general chemistry book I have ever read. An editor with whom I have worked recently remarked that he felt his job was to help authors make their writing sing. Well, the writing in Principles of Chemistry sings with the full, rich harmonies and creative inventiveness of the King's Singers or Chanticleer. Here is the first sentence of the introduction: "Central to any understanding of the physical world is one discovery of paramount importance, a truth disarmingly simple yet profound in its implications: matter is not continuous." This is prose to be savored and celebrated. Principles of Chemistry has a distinct perspective on chemistry: the perspective of the physical chemist. The focus is on simplicity, what is common about molecules and reactions; begin with the microscopic and build bridges to the macroscopic. The author's perspective is clear from the organization of the book. After three rather broad introductory chapters, there are four chapters that develop the quantum mechanical theory of atoms and molecules, including a strong treatment of molecular orbital theory. Unlike many books, Principles of Chemistry presents the molecular orbital approach first and introduces valence bond theory later only as an approximation for dealing with more complicated molecules. The usual chapters on descriptive inorganic chemistry are absent (though there is an excellent chapter on organic and biological molecules and reactions as well as one on transition metal complexes). Instead, descriptive chemistry is integrated into the development of

  3. Cognitive radio resource allocation based on coupled chaotic genetic algorithm

    International Nuclear Information System (INIS)

    Zu Yun-Xiao; Zhou Jie; Zeng Chang-Chang

    2010-01-01

    A coupled chaotic genetic algorithm for cognitive radio resource allocation, based on a genetic algorithm and the coupled Logistic map, is proposed. A fitness function for cognitive radio resource allocation is provided. Simulations of cognitive radio resource allocation are conducted using the coupled chaotic genetic algorithm, a simple genetic algorithm, and a dynamic allocation algorithm. The simulation results show that, compared with the simple genetic and dynamic allocation algorithms, the coupled chaotic genetic algorithm reduces the total transmission power and the bit error rate in the cognitive radio system, and has a faster convergence speed.
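
    The chaotic ingredient named here is the coupled Logistic map. A minimal sketch of such a pair of coupled maps (the coupling constant and parameters are illustrative assumptions, not those of the paper):

    ```python
    def coupled_logistic(x, y, r=4.0, eps=0.1, steps=10):
        """Iterate two Logistic maps with symmetric linear coupling;
        the chaotic sequence can seed or perturb a genetic algorithm."""
        seq = []
        for _ in range(steps):
            fx, fy = r * x * (1 - x), r * y * (1 - y)
            x = (1 - eps) * fx + eps * fy      # each map feels the other
            y = (1 - eps) * fy + eps * fx
            seq.append((x, y))
        return seq

    for x, y in coupled_logistic(0.3, 0.31):
        print(f"{x:.4f}  {y:.4f}")
    ```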

  4. Algorithms for Global Positioning

    DEFF Research Database (Denmark)

    Borre, Kai; Strang, Gilbert

    The emergence of satellite technology has changed the lives of millions of people. In particular, GPS has brought an unprecedented level of accuracy to the field of geodesy. This text is a guide to the algorithms and mathematical principles that account for the success of GPS technology, and replaces the authors' previous work, Linear Algebra, Geodesy, and GPS (1997). An initial discussion of the basic concepts, characteristics and technical aspects of different satellite systems is followed by the necessary mathematical content, which is presented in a detailed and self-contained fashion. At the heart of the matter are the positioning algorithms on which GPS technology relies, the discussion of which will affirm the mathematical contents of the previous chapters. Numerous ready-to-use MATLAB codes are included for the reader. This comprehensive guide will be invaluable for engineers...
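
    The positioning principle at the heart of such texts is a linearized least-squares fit of position and receiver clock bias to pseudoranges. A textbook Gauss-Newton sketch, not the book's MATLAB code; the satellite coordinates below are illustrative:

    ```python
    import numpy as np

    def position_fix(sats, pseudoranges, iters=8):
        """Least-squares GPS fix: solve for (x, y, z, clock bias b) from
        pseudoranges rho_i = |sat_i - p| + b via Gauss-Newton."""
        est = np.zeros(4)                         # [x, y, z, b]
        for _ in range(iters):
            p, b = est[:3], est[3]
            ranges = np.linalg.norm(sats - p, axis=1)
            residuals = pseudoranges - (ranges + b)
            # Jacobian: minus the unit vectors to the satellites, and 1 for b
            J = np.hstack([-(sats - p) / ranges[:, None],
                           np.ones((len(sats), 1))])
            est += np.linalg.lstsq(J, residuals, rcond=None)[0]
        return est

    sats = np.array([[15600e3,  7540e3, 20140e3],
                     [18760e3,  2750e3, 18610e3],
                     [17610e3, 14630e3, 13480e3],
                     [19170e3,   610e3, 18390e3]])
    truth = np.array([6371e3, 0.0, 0.0])          # receiver on the x-axis
    rho = np.linalg.norm(sats - truth, axis=1) + 30.0   # 30 m clock bias
    print(np.round(position_fix(sats, rho)))
    ```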

  5. Crossing simple resonances

    Energy Technology Data Exchange (ETDEWEB)

    Collins, T.

    1985-08-01

    A simple criterion governs the beam distortion and/or loss of protons on a fast resonance crossing. Results from numerical integrations are illustrated for simple sextupole, octupole, and 10-pole resonances.

  6. From properties to materials: An efficient and simple approach

    Science.gov (United States)

    Huwig, Kai; Fan, Chencheng; Springborg, Michael

    2017-12-01

    We present an inverse-design method, the poor man's materials optimization, that is designed to identify materials within a very large class with optimized values for a pre-chosen property. The method combines an efficient genetic-algorithm-based optimization, an automatic approach for generating modified molecules, a simple approach for calculating the property of interest, and a mathematical formulation of the quantity whose value shall be optimized. In order to illustrate the performance of our approach, we study the properties of organic molecules related to those used in dye-sensitized solar cells, whereby we, for the sake of proof of principle, consider benzene as a simple test system. Using a genetic algorithm, the substituents attached to the organic backbone are varied and the best performing molecules are identified. We consider several properties to describe the performance of organic molecules, including the HOMO-LUMO gap, the sunlight absorption, the spatial distance of the orbitals, and the reorganisation energy. The results show that our method is able to identify a large number of good candidate structures within a short time. In some cases, chemical/physical intuition can be used to rationalize the substitution pattern of the best structures, although this is not always possible. The present investigations provide a solid foundation for dealing with more complex and technically relevant systems such as porphyrins. Furthermore, our "properties first, materials second" approach is not limited to solar-energy harvesting but can be applied to many other fields, as briefly is discussed in the paper.

  7. From properties to materials: An efficient and simple approach.

    Science.gov (United States)

    Huwig, Kai; Fan, Chencheng; Springborg, Michael

    2017-12-21

    We present an inverse-design method, the poor man's materials optimization, that is designed to identify materials within a very large class with optimized values for a pre-chosen property. The method combines an efficient genetic-algorithm-based optimization, an automatic approach for generating modified molecules, a simple approach for calculating the property of interest, and a mathematical formulation of the quantity whose value shall be optimized. In order to illustrate the performance of our approach, we study the properties of organic molecules related to those used in dye-sensitized solar cells, whereby we, for the sake of proof of principle, consider benzene as a simple test system. Using a genetic algorithm, the substituents attached to the organic backbone are varied and the best performing molecules are identified. We consider several properties to describe the performance of organic molecules, including the HOMO-LUMO gap, the sunlight absorption, the spatial distance of the orbitals, and the reorganisation energy. The results show that our method is able to identify a large number of good candidate structures within a short time. In some cases, chemical/physical intuition can be used to rationalize the substitution pattern of the best structures, although this is not always possible. The present investigations provide a solid foundation for dealing with more complex and technically relevant systems such as porphyrins. Furthermore, our "properties first, materials second" approach is not limited to solar-energy harvesting but can be applied to many other fields, as briefly is discussed in the paper.

  8. An investigation of genetic algorithms

    International Nuclear Information System (INIS)

    Douglas, S.R.

    1995-04-01

    Genetic algorithms mimic biological evolution by natural selection in their search for better individuals within a changing population. They can be used as efficient optimizers. This report discusses the developing field of genetic algorithms. It gives a simple example of the search process and introduces the concept of schema. It also discusses modifications to the basic genetic algorithm that result in species and niche formation, in machine learning and artificial evolution of computer programs, and in the streamlining of human-computer interaction. (author). 3 refs., 1 tab., 2 figs
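
    A simple example of the search process, in the spirit of the one the report gives, is a genetic algorithm maximizing the number of 1-bits in a string; the "one-max" fitness below is a textbook illustration, not the report's own test problem:

    ```python
    import random

    def one_max_ga(n_bits=20, pop_size=30, generations=40, p_mut=0.02):
        """Tiny genetic algorithm maximizing the number of 1-bits."""
        pop = [[random.randint(0, 1) for _ in range(n_bits)]
               for _ in range(pop_size)]
        fitness = sum
        for _ in range(generations):
            def pick():
                # Tournament selection between two random individuals
                a, b = random.sample(pop, 2)
                return a if fitness(a) >= fitness(b) else b
            new_pop = []
            while len(new_pop) < pop_size:
                p1, p2 = pick(), pick()
                cut = random.randrange(1, n_bits)          # one-point crossover
                child = p1[:cut] + p2[cut:]
                child = [1 - g if random.random() < p_mut else g
                         for g in child]                   # bit-flip mutation
                new_pop.append(child)
            pop = new_pop
        return max(pop, key=fitness)

    best = one_max_ga()
    print(best, sum(best))
    ```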

  9. Principles of project management

    Science.gov (United States)

    1982-01-01

    The basic principles of project management as practiced by NASA management personnel are presented. These principles are given as ground rules and guidelines to be used in the performance of research, development, construction or operational assignments.

  10. Robustness of Multiple Clustering Algorithms on Hyperspectral Images

    National Research Council Canada - National Science Library

    Williams, Jason P

    2007-01-01

    .... Various clustering algorithms were employed, including a hierarchical method, ISODATA, K-means, and X-means, and were used on a simple two dimensional dataset in order to discover potential problems with the algorithms...

  11. Gravitation and Special Relativity from Compton Wave Interactions at the Planck Scale: An Algorithmic Approach

    Science.gov (United States)

    Blackwell, William C., Jr.

    2004-01-01

    In this paper space is modeled as a lattice of Compton wave oscillators (CWOs) of near-Planck size. It is shown that gravitation and special relativity emerge from the interaction between particles' Compton waves. To develop this CWO model an algorithmic approach was taken, incorporating simple rules of interaction at the Planck scale developed using well-known physical laws. This technique naturally leads to Newton's law of gravitation and a new form of doubly special relativity. The model is in apparent agreement with the holographic principle, and it predicts a cutoff energy for ultrahigh-energy cosmic rays that is consistent with observational data.

  12. Economic principles in communication: An experimental study

    NARCIS (Netherlands)

    De Jaegher, K.; Rosenkranz, S.; Weitzel, G.U.

    2014-01-01

    This paper experimentally investigates how economic principles affect communication. In a simple sender–receiver game with common interests over payoffs, the sender can send a signal without a pre-given meaning in an infrequent or frequent state of the world. When the signal is costly, several

  13. Dimensional cosmological principles

    International Nuclear Information System (INIS)

    Chi, L.K.

    1985-01-01

    The dimensional cosmological principles proposed by Wesson require that the density, pressure, and mass of cosmological models be functions of the dimensionless variables which are themselves combinations of the gravitational constant, the speed of light, and the spacetime coordinates. The space coordinate is not the comoving coordinate. In this paper, the dimensional cosmological principle and the dimensional perfect cosmological principle are reformulated by using the comoving coordinate. The dimensional perfect cosmological principle is further modified to allow the possibility that mass creation may occur. Self-similar spacetimes are found to be models obeying the new dimensional cosmological principle

  14. Combinatorial algorithms

    CERN Document Server

    Hu, T C

    2002-01-01

    Newly enlarged, updated second edition of a valuable, widely used text presents algorithms for shortest paths, maximum flows, dynamic programming and backtracking. Also discussed are binary trees, heuristic and near optimums, matrix multiplication, and NP-complete problems. 153 black-and-white illus. 23 tables. New to this edition: Chapter 9

  15. Kinetics of enzyme action: essential principles for drug hunters

    National Research Council Canada - National Science Library

    Stein, Ross L

    2011-01-01

    ... field. Beginning with the most basic principles pertaining to simple, one-substrate enzyme reactions and their inhibitors, and progressing to a thorough treatment of two-substrate enzymes, Kinetics of Enzyme Action...

  16. Algorithmic Self

    DEFF Research Database (Denmark)

    Markham, Annette

    layered set of accounts to help build our understanding of how individuals relate to their devices, search systems, and social network sites. This work extends critical analyses of the power of algorithms in implicating the social self by offering narrative accounts from multiple perspectives. It also...

  17. Autodriver algorithm

    Directory of Open Access Journals (Sweden)

    Anna Bourmistrova

    2011-02-01

    Full Text Available The autodriver algorithm is an intelligent method to eliminate the need of steering by a driver on a well-defined road. The proposed method performs best on a four-wheel steering (4WS) vehicle, though it is also applicable to two-wheel-steering (TWS) vehicles. The algorithm is based on coinciding the actual vehicle center of rotation and road center of curvature, by adjusting the kinematic center of rotation. The road center of curvature is assumed prior information for a given road, while the dynamic center of rotation is the output of dynamic equations of motion of the vehicle using steering angle and velocity measurements as inputs. We use kinematic condition of steering to set the steering angles in such a way that the kinematic center of rotation of the vehicle sits at a desired point. At low speeds the ideal and actual paths of the vehicle are very close. With increase of forward speed the road and tire characteristics, along with the motion dynamics of the vehicle cause the vehicle to turn about time-varying points. By adjusting the steering angles, our algorithm controls the dynamic turning center of the vehicle so that it coincides with the road curvature center, hence keeping the vehicle on a given road autonomously. The position and orientation errors are used as feedback signals in a closed loop control to adjust the steering angles. The application of the presented autodriver algorithm demonstrates reliable performance under different driving conditions.

  18. MSDR-D Network Localization Algorithm

    Science.gov (United States)

    Coogan, Kevin; Khare, Varun; Kobourov, Stephen G.; Katz, Bastian

    We present a distributed multi-scale dead-reckoning (MSDR-D) algorithm for network localization that utilizes local distance and angular information for nearby sensors. The algorithm is anchor-free and does not require particular network topology, rigidity of the underlying communication graph, or high average connectivity. The algorithm scales well to large and sparse networks with complex topologies and outperforms previous algorithms when the noise levels are high. The algorithm is simple to implement and is available, along with source code, executables, and experimental results, at http://msdr-d.cs.arizona.edu/.
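
    The primitive underlying such schemes is the dead-reckoning step itself: placing a neighbor from a known position plus a measured distance and angle. A sketch of that step only, not of the multi-scale algorithm:

    ```python
    import math

    def dead_reckon(anchor_xy, distance, bearing_rad):
        """Estimate a neighbor's position from a known position plus a
        measured distance and angle (the basic dead-reckoning step)."""
        x, y = anchor_xy
        return (x + distance * math.cos(bearing_rad),
                y + distance * math.sin(bearing_rad))

    # Chain of estimates: each node places the next from local measurements
    p0 = (0.0, 0.0)
    p1 = dead_reckon(p0, 10.0, math.radians(30))
    p2 = dead_reckon(p1, 8.0, math.radians(-45))
    print(p1, p2)
    ```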

  19. The simple approach to deposition

    International Nuclear Information System (INIS)

    Jensen, N.O.

    1980-01-01

    The use of a simple top hat plume model in conjunction with the principle of source depletion facilitates an analytical treatment of the deposition problem. With such a model, explicit formulae for downwind deposition amounts and ground level atmospheric concentrations are given. The method has the advantage of allowing estimates of the most unfavorable parameter combinations for, say, the maximum deposition that can occur at a given distance from the source. With regard to the land contamination problem, where an area is defined as 'contaminated' when the amount of deposited material is greater than some minimum value, estimates of, for example, the maximum area contaminated and the maximum amount of contamination deposited will also be given

  20. Application of the maximum entropy production principle to electrical systems

    International Nuclear Information System (INIS)

    Christen, Thomas

    2006-01-01

    For a simple class of electrical systems, the principle of the maximum entropy production rate (MaxEP) is discussed. First, we compare the MaxEP principle and the principle of the minimum entropy production rate and illustrate the superiority of the MaxEP principle for the example of two parallel constant resistors. Secondly, we show that the Steenbeck principle for the electric arc as well as the ohmic contact behaviour of space-charge limited conductors follow from the MaxEP principle. In line with work by Dewar, the investigations seem to suggest that the MaxEP principle can also be applied to systems far from equilibrium, provided appropriate information is available that enters the constraints of the optimization problem. Finally, we apply the MaxEP principle to a mesoscopic system and show that the universal conductance quantum, e^2/h, of a one-dimensional ballistic conductor can be estimated

  1. On the sufficiency of the linear maximum principle

    DEFF Research Database (Denmark)

    Vidal, Rene Victor Valqui

    1987-01-01

    Presents a family of linear maximum principles for the discrete-time optimal control problem, derived from the saddle-point theorem of mathematical programming. Some simple examples illustrate the applicability of the main theoretical results.

  2. MANAGER PRINCIPLES AS BASIS OF MANAGEMENT STYLE TRANSFORMATION

    OpenAIRE

    R. A. Kopytov

    2011-01-01

    The paper considers an approach based on non-conventional mechanisms of management style formation. The preset level of sustainable management is maintained by a self-organized environment created in the process of transforming the management style into efficient management principles. Their efficiency is checked within an adaptive algorithm, developed on the basis of a combination of evaluative tools and a base of operational proofs. The operating algorithm capability is te...

  3. The Porter Stemming Algorithm: Then and Now

    Science.gov (United States)

    Willett, Peter

    2006-01-01

    Purpose: In 1980, Porter presented a simple algorithm for stemming English language words. This paper summarises the main features of the algorithm, and highlights its role not just in modern information retrieval research, but also in a range of related subject domains. Design/methodology/approach: Review of literature and research involving use…
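
    The flavor of Porter's suffix-stripping rules can be conveyed with a drastically simplified sketch; the few rules below are illustrative, whereas the real algorithm applies five ordered rule phases with conditions on the measure of the remaining stem:

    ```python
    def toy_stem(word):
        """A few Porter-style suffix-stripping rules, tried in priority
        order. The genuine Porter stemmer is more careful: its rules are
        grouped into phases and guarded by stem-measure conditions."""
        for suffix, replacement in [("ational", "ate"), ("ization", "ize"),
                                    ("fulness", "ful"), ("ing", ""),
                                    ("sses", "ss"), ("ies", "i"), ("s", "")]:
            if word.endswith(suffix) and len(word) - len(suffix) >= 2:
                return word[: len(word) - len(suffix)] + replacement
        return word

    for w in ["relational", "caresses", "ponies", "running", "cats"]:
        print(w, "->", toy_stem(w))
    ```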

  4. Approximation algorithms for guarding holey polygons ...

    African Journals Online (AJOL)

    Guarding edges of polygons is a version of art gallery problem.The goal is finding the minimum number of guards to cover the edges of a polygon. This problem is NP-hard, and to our knowledge there are approximation algorithms just for simple polygons. In this paper we present two approximation algorithms for guarding ...

  5. A simple consensus algorithm for distributed averaging in random ...

    Indian Academy of Sciences (India)

    Distributed averaging in random geographical networks. It can be simply proved that for values of the uniform step size σ in the range (0, 1/k_max], with k_max being the maximum degree of the graph, the above system is asymptotically globally convergent to [17]: ∀i, lim_{k→∞} x_i(k) = α = (1/N) Σ_{i=1}^{N} x_i(0), (3) which is ...
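
    The convergence statement above is easy to check numerically. A minimal sketch of the averaging iteration on a small non-bipartite graph (the topology and initial values are chosen for illustration):

    ```python
    import numpy as np

    def distributed_average(adj, x0, steps=200):
        """Iterate x_i(k+1) = x_i(k) + sigma * sum_j (x_j(k) - x_i(k))
        over neighbors j, with sigma = 1/k_max as in the abstract."""
        adj = np.asarray(adj, dtype=float)
        degrees = adj.sum(axis=1)
        sigma = 1.0 / degrees.max()            # step size 1/k_max
        x = np.asarray(x0, dtype=float)
        for _ in range(steps):
            x = x + sigma * (adj @ x - degrees * x)
        return x

    # 4-node graph with a triangle (edges 0-1, 1-2, 2-3, 3-0, 0-2)
    adj = [[0, 1, 1, 1],
           [1, 0, 1, 0],
           [1, 1, 0, 1],
           [1, 0, 1, 0]]
    x0 = [1.0, 5.0, 3.0, 7.0]
    print(distributed_average(adj, x0))        # all entries -> mean = 4.0
    ```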

  6. A simple greedy algorithm for dynamic graph orientation

    DEFF Research Database (Denmark)

    Berglin, Edvin; Brodal, Gerth Stølting

    2017-01-01

    Graph orientations with low out-degree are one of several ways to efficiently store sparse graphs. If the graphs allow for insertion and deletion of edges, one may have to flip the orientation of some edges to prevent blowing up the maximum out-degree. We use arboricity as our sparsity measure...
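
    A hedged sketch of the greedy primitive behind such orientation schemes (the paper's exact flipping rule and bounds may differ): orient each new edge away from the endpoint with the smaller out-degree, and flip edges when a cap is exceeded:

    ```python
    from collections import defaultdict

    def insert_edge(out_edges, u, v, cap):
        """Point the new edge away from the endpoint with smaller
        out-degree; if that endpoint then exceeds the out-degree cap,
        flip its out-edges (a one-level repair; a full algorithm must
        also bound the cascade of flips)."""
        src, dst = (u, v) if len(out_edges[u]) <= len(out_edges[v]) else (v, u)
        out_edges[src].add(dst)
        if len(out_edges[src]) > cap:
            for w in list(out_edges[src]):
                out_edges[src].remove(w)
                out_edges[w].add(src)

    out_edges = defaultdict(set)
    for u, v in [(0, 1), (0, 2), (0, 3), (1, 2), (2, 3)]:
        insert_edge(out_edges, u, v, cap=2)
    print({u: sorted(vs) for u, vs in out_edges.items()})
    ```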

  7. A Simple "Tubeless" Telescope

    Science.gov (United States)

    Straulino, S.; Bonechi, L.

    2010-01-01

    Two lenses make it possible to create a simple telescope with quite large magnification. The set-up is very simple and can be reproduced in schools, provided the laboratory has a range of lenses with different focal lengths. In this article, the authors adopt the Keplerian configuration, which is composed of two converging lenses. This instrument,…

  8. Simple Machine Junk Cars

    Science.gov (United States)

    Herald, Christine

    2010-01-01

    During the month of May, the author's eighth-grade physical science students study the six simple machines through hands-on activities, reading assignments, videos, and notes. At the end of the month, they can easily identify the six types of simple machine: inclined plane, wheel and axle, pulley, screw, wedge, and lever. To conclude this unit,…

  9. Biomechanics principles and practices

    CERN Document Server

    Peterson, Donald R

    2014-01-01

    Presents Current Principles and ApplicationsBiomedical engineering is considered to be the most expansive of all the engineering sciences. Its function involves the direct combination of core engineering sciences as well as knowledge of nonengineering disciplines such as biology and medicine. Drawing on material from the biomechanics section of The Biomedical Engineering Handbook, Fourth Edition and utilizing the expert knowledge of respected published scientists in the application and research of biomechanics, Biomechanics: Principles and Practices discusses the latest principles and applicat

  10. Fusion research principles

    CERN Document Server

    Dolan, Thomas James

    2013-01-01

    Fusion Research, Volume I: Principles provides a general description of the methods and problems of fusion research. The book contains three main parts: Principles, Experiments, and Technology. The Principles part describes the conditions necessary for a fusion reaction, as well as the fundamentals of plasma confinement, heating, and diagnostics. The Experiments part details about forty plasma confinement schemes and experiments. The last part explores various engineering problems associated with reactor design, vacuum and magnet systems, materials, plasma purity, fueling, blankets, neutronics

  11. Database principles programming performance

    CERN Document Server

    O'Neil, Patrick

    2014-01-01

    Database: Principles Programming Performance provides an introduction to the fundamental principles of database systems. This book focuses on database programming and the relationships between principles, programming, and performance.Organized into 10 chapters, this book begins with an overview of database design principles and presents a comprehensive introduction to the concepts used by a DBA. This text then provides grounding in many abstract concepts of the relational model. Other chapters introduce SQL, describing its capabilities and covering the statements and functions of the programmi

  12. Principles of ecotoxicology

    National Research Council Canada - National Science Library

    Walker, C. H

    2012-01-01

    "Now in its fourth edition, this exceptionally accessible text provides students with a multidisciplinary perspective and a grounding in the fundamental principles required for research in toxicology today...

  13. Hardware modules of the RSA algorithm

    Directory of Open Access Journals (Sweden)

    Škobić Velibor

    2014-01-01

    Full Text Available This paper describes basic principles of data protection using the RSA algorithm, as well as algorithms for its calculation. The RSA algorithm is implemented on the FPGA integrated circuit EP4CE115F29C7, family Cyclone IV, Altera. Four modules of the Montgomery algorithm are designed using VHDL. Synthesis and simulation are done using Quartus II software and ModelSim. The modules are analyzed for different key lengths (16 to 1024) in terms of the number of logic elements, the maximum frequency, and speed.
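
    At the core of RSA, and of the Montgomery modules described here, is modular exponentiation. A software sketch of the square-and-multiply principle (the FPGA design implements the Montgomery variant, which avoids costly division):

    ```python
    def modexp(base, exp, mod):
        """Right-to-left square-and-multiply modular exponentiation."""
        result = 1
        base %= mod
        while exp:
            if exp & 1:                 # fold in this bit's contribution
                result = (result * base) % mod
            base = (base * base) % mod  # square for the next bit
            exp >>= 1
        return result

    # Toy RSA round trip with tiny primes (never use such sizes in practice)
    p, q, e = 61, 53, 17
    n, phi = p * q, (p - 1) * (q - 1)
    d = pow(e, -1, phi)                 # private exponent
    c = modexp(42, e, n)                # encrypt
    print(modexp(c, d, n))              # decrypt -> 42
    ```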

  14. Nonlinear Gossip Algorithms for Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Chao Shi

    2014-01-01

    Full Text Available We study some nonlinear gossip algorithms for wireless sensor networks. Firstly, two types of nonlinear single gossip algorithms are proposed. By using Lyapunov theory, Lagrange mean value theorem, and stochastic Lasalle’s invariance principle, we prove that the nonlinear single gossip algorithms can converge to the average of initial states with probability one. Secondly, two types of nonlinear multigossip algorithms are also presented and the convergence is proved by the same methods. Finally, computer simulation is also given to show the validity of the theoretical results.

  15. A simple technique to increase profits in wood products marketing

    Science.gov (United States)

    George B. Harpole

    1971-01-01

    Mathematical models can be used to solve quickly some simple day-to-day marketing problems. This note explains how a sawmill production manager, who has an essentially fixed-capacity mill, can solve several optimization problems by using pencil and paper, a forecast of market prices, and a simple algorithm. One such problem is to maximize profits in an operating period...
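
    A hedged reconstruction of the kind of pencil-and-paper procedure the note describes: with fixed mill capacity, rank products by forecast profit per hour of capacity and fill greedily. The product data below are invented for illustration:

    ```python
    def product_mix(products, capacity_hours):
        """Greedy fill by profit per capacity-hour (classic ratio rule)."""
        plan = {}
        # Sort by forecast profit per hour of mill time, best first
        for name, profit, hours, demand in sorted(
                products, key=lambda p: p[1] / p[2], reverse=True):
            units = min(demand, int(capacity_hours // hours))
            if units > 0:
                plan[name] = units
                capacity_hours -= units * hours
        return plan

    products = [  # (name, profit per unit, hours per unit, max demand)
        ("dimension lumber", 12.0, 0.5, 300),
        ("select boards",    30.0, 1.5, 80),
        ("timbers",          22.0, 1.0, 120),
    ]
    print(product_mix(products, capacity_hours=250))
    ```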

  16. Solving Simple Stochastic Games with Few Random Vertices

    NARCIS (Netherlands)

    H. Gimbert; F. Horn (Florian)

    2009-01-01

    Simple stochastic games are two-player zero-sum stochastic games with turn-based moves, perfect information, and reachability winning conditions. We present two new algorithms computing the values of simple stochastic games. Both of them rely on the existence of optimal permutation

  17. Solving simple stochastic games with few coin toss positions

    DEFF Research Database (Denmark)

    Ibsen-Jensen, Rasmus; Miltersen, Peter Bro

    2011-01-01

    Gimbert and Horn gave an algorithm for solving simple stochastic games with running time O(r! n) where n is the number of positions of the simple stochastic game and r is the number of its coin toss positions. Chatterjee et al. pointed out that a variant of strategy iteration can be implemented...

  18. Solving Simple Stochastic Games with Few Coin Toss Positions

    DEFF Research Database (Denmark)

    Ibsen-Jensen, Rasmus; Miltersen, Peter Bro

    2012-01-01

    Gimbert and Horn gave an algorithm for solving simple stochastic games with running time O(r! n) where n is the number of positions of the simple stochastic game and r is the number of its coin toss positions. Chatterjee et al. pointed out that a variant of strategy iteration can be implemented...

  19. Fermat's principle and nonlinear traveltime tomography

    International Nuclear Information System (INIS)

    Berryman, J.G. (Courant Institute of Mathematical Sciences, New York University, 251 Mercer Street, New York, New York 10012)

    1989-01-01

    Fermat's principle shows that a definite convex set of feasible slowness models, depending only on the traveltime data, exists for the fully nonlinear traveltime inversion problem. In a new iterative reconstruction algorithm, the minimum number of nonfeasible ray paths is used as a figure of merit to determine the optimum size of the model correction at each step. The numerical results show that the new algorithm is robust, stable, and produces very good reconstructions even for high contrast materials where standard methods tend to diverge

  20. DEVELOPMENT OF A NEW ALGORITHM FOR KEY AND S-BOX GENERATION IN BLOWFISH ALGORITHM

    Directory of Open Access Journals (Sweden)

    TAYSEER S. ATIA

    2014-08-01

    Full Text Available The Blowfish algorithm is a block cipher: a strong, simple algorithm used to encrypt data in blocks of 64 bits. The key and S-box generation process in this algorithm requires time and memory space, which makes the algorithm inconvenient for smart cards or applications that require changing the secret key frequently. In this paper a new key and S-box generation process is developed based on the Self Synchronization Stream Cipher (SSS) algorithm, whose key generation process is modified to work with the Blowfish algorithm. Test results show that the new generation process requires relatively little time and a reasonably low amount of memory; this enhances the algorithm and gives it the possibility of wider usage.

  1. Assessment principles and tools.

    Science.gov (United States)

    Golnik, Karl C

    2014-01-01

    The goal of ophthalmology residency training is to produce competent ophthalmologists. Competence can only be determined by appropriately assessing resident performance. There are accepted guiding principles that should be applied to competence assessment methods. These principles are enumerated herein and ophthalmology-specific assessment tools that are available are described.

  2. The principle of equivalence

    International Nuclear Information System (INIS)

    Unnikrishnan, C.S.

    1994-01-01

    Principle of equivalence was the fundamental guiding principle in the formulation of the general theory of relativity. What are its key elements? What are the empirical observations which establish it? What is its relevance to some new experiments? These questions are discussed in this article. (author). 11 refs., 5 figs

  3. Principles of Critical Dialogue.

    Science.gov (United States)

    Lankford, E. Louis

    1986-01-01

    Proposes four principles of critical dialog designed to suggest a consistent pattern of preparation for criticism. The principles concern the characteristics of the intended audience, establishing the goals of art criticism, making a commitment to a context of relevant dialogue, and clarifying one's concept of art in qualifying an object for…

  4. The anthropic principle

    International Nuclear Information System (INIS)

    Carr, B.J.

    1982-01-01

    The anthropic principle (the conjecture that certain features of the world are determined by the existence of Man) is discussed with the listing of the objections, and is stated that nearly all the constants of nature may be determined by the anthropic principle which does not give exact values for the constants but only their orders of magnitude. (J.T.)

  5. Great Principles of Computing

    OpenAIRE

    Denning, Peter J.

    2008-01-01

    The Great Principles of Computing is a framework for understanding computing as a field of science.

  6. A Relation-algebraic Approach to Simple Games

    OpenAIRE

    Rudolf Berghammer; Harrie De Swart; Agnieszka Rusinowska

    2011-01-01

    Simple games are a powerful tool to analyze decision-making and coalition formation in social and political life. In this paper, we present relation-algebraic models of simple games and develop relational algorithms for solving some basic problems of them. In particular, we test certain fundamental properties of simple games (being monotone, proper, respectively strong) and compute specific players (dummies, dictators, vetoers, null players) and coalitions (minimal winning coalitions and vulne...

  7. Constrained Minimization Algorithms

    Science.gov (United States)

    Lantéri, H.; Theys, C.; Richard, C.

    2013-03-01

    In this paper, we consider the inverse problem of restoring an unknown signal or image, knowing the transformation suffered by the unknowns. More specifically we deal with transformations described by a linear model linking the unknown signal to an unnoisy version of the data. The measured data are generally corrupted by noise. This aspect of the problem is presented in the introduction for general models. In Section 2, we introduce the linear models, and some examples of linear inverse problems are presented. The specificities of the inverse problems are briefly mentioned and shown on a simple example. In Section 3, we give some information on classical distances or divergences. Indeed, an inverse problem is generally solved by minimizing a discrepancy function (divergence or distance) between the measured data and the model (here linear) of such data. Section 4 deals with likelihood maximization and with its links with divergence minimization. The physical constraints on the solution are indicated and the Split Gradient Method (SGM) is detailed in Section 5. A constraint on the inferior bound of the solution is introduced at first; the positivity constraint is a particular case of such a constraint. We show how to obtain, in a strict sense, the multiplicative form of the algorithms. In a second step, the so-called flux constraint is introduced, and a complete algorithmic form is given. In Section 6 we give some brief information on acceleration methods for such algorithms. A conclusion is given in Section 7.
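
    The strictly multiplicative, positivity-preserving form mentioned above can be illustrated by the familiar Richardson-Lucy-type update for minimizing the Kullback-Leibler divergence under a linear model; this sketches the principle, not the SGM derivation itself:

    ```python
    import numpy as np

    def multiplicative_restore(A, y, iters=500):
        """Fit y ~ A x with x >= 0 using a multiplicative update
        x <- x * A^T(y / Ax) / A^T 1: positivity is preserved
        automatically because every factor stays nonnegative."""
        x = np.ones(A.shape[1])
        ones = np.ones_like(y)
        for _ in range(iters):
            ratio = y / (A @ x + 1e-12)          # data / current model
            x *= (A.T @ ratio) / (A.T @ ones)    # multiplicative correction
        return x

    rng = np.random.default_rng(0)
    A = rng.random((40, 10))
    x_true = rng.random(10)
    y = A @ x_true                               # noiseless synthetic data
    print(np.round(multiplicative_restore(A, y) - x_true, 3))
    ```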

  8. Mach's holographic principle

    International Nuclear Information System (INIS)

    Khoury, Justin; Parikh, Maulik

    2009-01-01

    Mach's principle is the proposition that inertial frames are determined by matter. We put forth and implement a precise correspondence between matter and geometry that realizes Mach's principle. Einstein's equations are not modified and no selection principle is applied to their solutions; Mach's principle is realized wholly within Einstein's general theory of relativity. The key insight is the observation that, in addition to bulk matter, one can also add boundary matter. Given a space-time, and thus the inertial frames, we can read off both boundary and bulk stress tensors, thereby relating matter and geometry. We consider some global conditions that are necessary for the space-time to be reconstructible, in principle, from bulk and boundary matter. Our framework is similar to that of the black hole membrane paradigm and, in asymptotically anti-de Sitter space-times, is consistent with holographic duality.

  9. Variational principles in physics

    CERN Document Server

    Basdevant, Jean-Louis

    2007-01-01

    Optimization under constraints is an essential part of everyday life. Indeed, we routinely solve problems by striking a balance between contradictory interests, individual desires and material contingencies. This notion of equilibrium was dear to thinkers of the enlightenment, as illustrated by Montesquieu’s famous formulation: "In all magistracies, the greatness of the power must be compensated by the brevity of the duration." Astonishingly, natural laws are guided by a similar principle. Variational principles have proven to be surprisingly fertile. For example, Fermat used variational methods to demonstrate that light follows the fastest route from one point to another, an idea which came to be known as Fermat’s principle, a cornerstone of geometrical optics. Variational Principles in Physics explains variational principles and charts their use throughout modern physics. The heart of the book is devoted to the analytical mechanics of Lagrange and Hamilton, the basic tools of any physicist. Prof. Basdev...

  10. The Basic Principles and Methods of the System Approach to Compression of Telemetry Data

    Science.gov (United States)

    Levenets, A. V.

    2018-01-01

    The task of compressing measurement data is still urgent for information-measurement systems. In this paper, the basic principles necessary for designing highly effective systems for the compression of telemetric information are offered. A basis of the offered principles is the representation of a telemetric frame as a whole information space in which existing correlations can be found. The methods of data transformation and the compression algorithms realizing the offered principles are described. The compression ratio for the offered compression algorithm is about 1.8 times higher than for a classic algorithm. Thus, the results of the research show that these methods and algorithms have good prospects.
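
    One principle such systems build on, exploiting correlation inside a telemetric frame, can be sketched as delta encoding followed by a generic entropy coder; the frame contents below are invented for illustration:

    ```python
    import struct
    import zlib

    def delta_encode(samples):
        """First value plus successive differences: a slowly varying
        telemetry channel yields many small deltas, which an entropy
        coder then compresses far better than the raw samples."""
        return [samples[0]] + [b - a for a, b in zip(samples, samples[1:])]

    channel = [1000 + i * 2 + (i % 3) for i in range(500)]   # slowly varying
    raw = struct.pack(f"{len(channel)}i", *channel)
    dlt = struct.pack(f"{len(channel)}i", *delta_encode(channel))
    print(len(zlib.compress(raw)), "bytes raw vs",
          len(zlib.compress(dlt)), "bytes delta-encoded")
    ```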

  11. Principles of broadband switching and networking

    CERN Document Server

    Liew, Soung C

    2010-01-01

    An authoritative introduction to the roles of switching and transmission in broadband integrated services networks Principles of Broadband Switching and Networking explains the design and analysis of switch architectures suitable for broadband integrated services networks, emphasizing packet-switched interconnection networks with distributed routing algorithms. The text examines the mathematical properties of these networks, rather than specific implementation technologies. Although the pedagogical explanations in this book are in the context of switches, many of the fundamenta

  12. Fast affine projections and the regularized modified filtered-error algorithm in multichannel active noise control.

    Science.gov (United States)

    Wesselink, J M; Berkhoff, A P

    2008-08-01

    In this paper, real-time results are given for broadband multichannel active noise control using the regularized modified filtered-error algorithm. As compared to the standard filtered-error algorithm, the improved convergence rate and stability of the algorithm are obtained by using an inner-outer factorization of the transfer path between the actuators and the error sensors, combined with a delay compensation technique using double control filters and a regularization technique that preserves the factorization properties. The latter techniques allow the use of relatively simple and efficient adaptation schemes in which filtering of the reference signals is unnecessary. Results are given for a multichannel adaptive feedback implementation based on the internal model control principle. In feedforward systems based on this algorithm, colored reference signals may lead to reduced convergence rates. An adaptive extension based on the use of affine projections is presented, for which real-time results and simulations are given, showing the improved convergence rates of the regularized modified filtered-error algorithm for colored reference signals.

  13. kFOIL: Learning simple relational kernels

    OpenAIRE

    Landwehr, Niels; Passerini, Andrea; De Raedt, Luc; Frasconi, Paolo

    2006-01-01

    A novel and simple combination of inductive logic programming with kernel methods is presented. The kFOIL algorithm integrates the well-known inductive logic programming system FOIL with kernel methods. The feature space is constructed by leveraging FOIL search for a set of relevant clauses. The search is driven by the performance obtained by a support vector machine based on the resulting kernel. In this way, kFOIL implements a dynamic propositionalization approach. Both classification an...

  14. Simple Random Sampling-Based Probe Station Selection for Fault Detection in Wireless Sensor Networks

    Science.gov (United States)

    Huang, Rimao; Qiu, Xuesong; Rui, Lanlan

    2011-01-01

    Fault detection for wireless sensor networks (WSNs) has been studied intensively in recent years. Most existing works statically choose the manager nodes as probe stations and probe the network at a fixed frequency. This straightforward solution leads however to several deficiencies. Firstly, by assigning the fault detection task only to the manager node the whole network is out of balance, and this quickly overloads the already heavily burdened manager node, which in turn ultimately shortens the lifetime of the whole network. Secondly, probing with a fixed frequency often generates too much useless network traffic, which results in a waste of the limited network energy. Thirdly, the traditional algorithm for choosing a probing node is too complicated to be used in energy-critical wireless sensor networks. In this paper, we study the distribution characteristics of the faulty nodes in wireless sensor networks and validate the Pareto principle that a small number of clusters contain most of the faults. We then present a Simple Random Sampling-based algorithm to dynamically choose sensor nodes as probe stations. A dynamic adjusting rule for the probing frequency is also proposed to reduce the number of useless probing packets. The simulation experiments demonstrate that the algorithm and adjusting rule we present can effectively prolong the lifetime of a wireless sensor network without decreasing the fault detection rate. PMID:22163789
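
    The selection step proposed in the paper is plain simple random sampling over the sensor nodes, re-drawn each round to balance the energy cost. A minimal sketch (the sample size is an illustrative assumption):

    ```python
    import random

    def choose_probe_stations(nodes, k):
        """Simple random sampling: every node is equally likely to serve
        as a probe station this round, spreading the energy cost."""
        return random.sample(nodes, k)

    nodes = [f"sensor-{i}" for i in range(100)]
    for round_no in range(3):            # re-draw each round to balance load
        print(round_no, choose_probe_stations(nodes, k=5))
    ```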

  15. A Hybrid Chaotic Quantum Evolutionary Algorithm

    DEFF Research Database (Denmark)

    Cai, Y.; Zhang, M.; Cai, H.

    2010-01-01

    A hybrid chaotic quantum evolutionary algorithm is proposed to reduce the amount of computation, speed up convergence, and restrain premature convergence of the quantum evolutionary algorithm. The proposed algorithm adopts a chaotic initialization method to generate an initial population that forms a good distribution in the feasible solution space, taking advantage of the randomness and non-repetitive ergodicity of chaos; a simple quantum rotation gate to update non-optimal individuals of the population to reduce the amount of computation; and a hybrid chaotic search strategy to speed up convergence and enhance the global search ability. A large number of tests show that the proposed algorithm has a higher convergence speed and better optimizing ability than the quantum evolutionary algorithm, the real-coded quantum evolutionary algorithm, and the hybrid quantum genetic algorithm. Tests also show that when chaos...

  16. Principles and Algorithms for Natural and Engineered Systems

    Science.gov (United States)

    2014-12-16

    2000 these methods were introduced into the subject of collective behavior by the PI, with support from AFOSR, ARO and ONR. During the course of this... contrast to using forces of attraction and repulsion as encountered, say, in celestial mechanics and chemistry. This outlook led us to the idea of... pursuit strategy, and providing a detailed analysis of the two-particle mutual pursuit case. We complete the work by considering evasive strategies to

  17. Principles of a new treatment algorithm in multiple sclerosis

    DEFF Research Database (Denmark)

    Hartung, Hans-Peter; Montalban, Xavier; Sorensen, Per Soelberg

    2011-01-01

    We are entering a new era in the management of patients with multiple sclerosis (MS). The first oral treatment (fingolimod) has now gained US FDA approval, addressing an unmet need for patients with MS who wish to avoid parenteral administration. A second agent (cladribine) is currently being con...

  18. Maximum principle based algorithm for hysteresis in micromagnetics

    Czech Academy of Sciences Publication Activity Database

    Kružík, Martin

    2003-01-01

    Vol. 13, No. 2 (2003), pp. 461-485. ISSN 1343-4373. R&D Projects: GA AV ČR IAA1075005. Institutional research plan: CEZ:AV0Z1075907. Keywords: calculus of variations * convexification * ferromagnetism. Subject RIV: BA - General Mathematics

  19. Principles of a new treatment algorithm in multiple sclerosis

    DEFF Research Database (Denmark)

    Hartung, Hans-Peter; Montalban, Xavier; Sorensen, Per Soelberg

    2011-01-01

    We are entering a new era in the management of patients with multiple sclerosis (MS). The first oral treatment (fingolimod) has now gained US FDA approval, addressing an unmet need for patients with MS who wish to avoid parenteral administration. A second agent (cladribine) is currently being...

  20. A family of smoothing algorithms for electron and other spectroscopies based on the Chebyshev filter

    International Nuclear Information System (INIS)

    Lopez-Camacho, E.; Garcia-Cortes, A.; Palacio, C.

    2006-01-01

    Smoothing is a useful tool to improve the signal-to-noise ratio of spectroscopic data. This paper reports a new family of smoothing formulas, the Chebyshev filters, which are derived by approximating the original data by a polynomial and using the mini-max principle, that is, keeping the maximum error down to a minimum, as the fitting criterion. The properties of the filters are studied by analyzing their associated transfer functions in the frequency domain. This leads us to the concept of the spectral window and the spectral window width as the tool and parameter, respectively, used to remove the high-frequency noise components accompanying the experimental data. Also, simple criteria to choose the appropriate width of the spectral windows are put forward. The behaviour of the filters is easy to understand, and the filters are fast and simple to use and to program. Finally the proposed smoothing algorithms have been tested using synthetic data as well as X-ray photoelectron spectra and optical absorption spectra
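
    The "spectral window" idea, inspecting a smoothing filter's transfer function in the frequency domain, can be illustrated for any set of FIR weights; the moving-average weights below are illustrative, not the Chebyshev-filter coefficients of the paper:

    ```python
    import numpy as np

    def transfer_function(weights, n_freq=256):
        """Magnitude of the frequency response H(f) of an FIR smoothing
        filter; the 'spectral window' is the low-pass region it keeps."""
        H = np.fft.rfft(weights, n=n_freq)
        freqs = np.fft.rfftfreq(n_freq)          # cycles per sample
        return freqs, np.abs(H)

    weights = np.ones(7) / 7                     # 7-point moving average
    freqs, mag = transfer_function(weights)
    for f, m in list(zip(freqs, mag))[::16]:
        print(f"{f:.3f}  {m:.3f}")
    ```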

  1. Optimisation combinatoire Theorie et algorithmes

    CERN Document Server

    Korte, Bernhard; Fonlupt, Jean

    2010-01-01

    This book is the French translation of the fourth and final edition of Combinatorial Optimization: Theory and Algorithms, written by two eminent specialists in the field: Bernhard Korte and Jens Vygen of the University of Bonn in Germany. It emphasizes the theoretical aspects of combinatorial optimization as well as efficient and exact algorithms for solving problems, and in this it differs from the simpler heuristic approaches often described elsewhere. The book contains numerous concise and elegant proofs of difficult results. Intended for students...

  2. Intelligent instrumentation principles and applications

    CERN Document Server

    Bhuyan, Manabendra

    2011-01-01

    With the advent of microprocessors and digital-processing technologies as catalysts, classical sensors capable of simple signal-conditioning operations have evolved rapidly to take on higher and more specialized functions, including validation, compensation, and classification. This new category of sensor expands the scope for incorporating intelligence into instrumentation systems, yet with such rapid change, no universal standard for design, definition, or requirements has developed with which to unify intelligent instrumentation. Explaining the underlying design methodologies of intelligent instrumentation, Intelligent Instrumentation: Principles and Applications provides a comprehensive and authoritative resource on the scientific foundations from which to coordinate and advance the field. Employing textbook-like language, this book translates methodologies to more than 80 numerical examples, and provides applications in 14 case studies for a complete and working understanding of the material. Beginn...

  3. Strategy as simple rules.

    Science.gov (United States)

    Eisenhardt, K M; Sull, D N

    2001-01-01

    The success of Yahoo!, eBay, Enron, and other companies that have become adept at morphing to meet the demands of changing markets can't be explained using traditional thinking about competitive strategy. These companies have succeeded by pursuing constantly evolving strategies in market spaces that were considered unattractive according to traditional measures. In this article--the third in an HBR series by Kathleen Eisenhardt and Donald Sull on strategy in the new economy--the authors ask, what are the sources of competitive advantage in high-velocity markets? The secret, they say, is strategy as simple rules. The companies know that the greatest opportunities for competitive advantage lie in market confusion, but they recognize the need for a few crucial strategic processes and a few simple rules. In traditional strategy, advantage comes from exploiting resources or stable market positions. In strategy as simple rules, advantage comes from successfully seizing fleeting opportunities. Key strategic processes, such as product innovation, partnering, or spinout creation, place the company where the flow of opportunities is greatest. Simple rules then provide the guidelines within which managers can pursue such opportunities. Simple rules, which grow out of experience, fall into five broad categories: how-to rules, boundary conditions, priority rules, timing rules, and exit rules. Companies with simple-rules strategies must follow the rules religiously and avoid the temptation to change them too frequently. A consistent strategy helps managers sort through opportunities and gain short-term advantage by exploiting the attractive ones. In stable markets, managers rely on complicated strategies built on detailed predictions of the future. But when business is complicated, strategy should be simple.

  4. Principles of dynamics

    CERN Document Server

    Hill, Rodney

    2013-01-01

    Principles of Dynamics presents classical dynamics primarily as an exemplar of scientific theory and method. This book is divided into three major parts concerned with gravitational theory of planetary systems; general principles of the foundations of mechanics; and general motion of a rigid body. Some of the specific topics covered are Keplerian Laws of Planetary Motion; gravitational potential and potential energy; and fields of axisymmetric bodies. The principles of work and energy, fictitious body-forces, and inertial mass are also looked into. Other specific topics examined are kinematics

  5. Modern electronic maintenance principles

    CERN Document Server

    Garland, DJ

    2013-01-01

    Modern Electronic Maintenance Principles reviews the principles of maintaining modern, complex electronic equipment, with emphasis on preventive and corrective maintenance. Unfamiliar subjects such as the half-split method of fault location, functional diagrams, and fault finding guides are explained. This book consists of 12 chapters and begins by stressing the need for maintenance principles and discussing the problem of complexity as well as the requirements for a maintenance technician. The next chapter deals with the connection between reliability and maintenance and defines the terms fai

  6. [Bioethics of principles].

    Science.gov (United States)

    Pérez-Soba Díez del Corral, Juan José

    2008-01-01

    Bioethics emerges from the technological problems of acting on human life, and with it emerges the problem of determining moral limits, which seem external to this practice. The Bioethics of Principles takes its rationality from teleological thinking and from autonomism. This divergence reveals the epistemological fragility and the great difficulty of "moral" thinking. This is evident in the formulation of the principle of autonomy, which lacks the ethical content of Kant's proposal. We need a new ethical rationality, with fresh reflection on new principles that emerge from basic ethical experiences.

  7. Hamilton's principle for beginners

    International Nuclear Information System (INIS)

    Brun, J L

    2007-01-01

    I find that students have difficulty with Hamilton's principle, at least the first time they come into contact with it, and therefore it is worth designing some examples to help students grasp its complex meaning. This paper supplies the simplest example to consolidate the learning of the quoted principle: that of a free particle moving along a line. Next, students are challenged to add gravity to reinforce the argument and, finally, a two-dimensional motion in a vertical plane is considered. Furthermore these examples force us to be very clear about such an abstract principle
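
    A compact worked version of that simplest example can be given in one display; the derivation below is standard textbook material, not a quotation from the paper.

        % Free particle on a line: L = (1/2) m \dot{x}^2
        \[
          S[x] = \int_{t_1}^{t_2} \tfrac{1}{2} m \dot{x}^2 \, dt,
          \qquad
          \delta S = 0
          \;\Longrightarrow\;
          \frac{d}{dt}\frac{\partial L}{\partial \dot{x}}
          - \frac{\partial L}{\partial x}
          = m\ddot{x} = 0,
        \]
        % so the action is stationary exactly on the uniform motions
        % x(t) = x_0 + v_0 t; adding gravity, as the paper suggests,
        % replaces the right-hand side with m\ddot{x} = -mg.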

  8. Limitations of Boltzmann's principle

    International Nuclear Information System (INIS)

    Lavenda, B.H.

    1995-01-01

    The usual form of Boltzmann's principle assures that maximum entropy, or entropy reduction, occurs with maximum probability, implying a unimodal distribution. Boltzmann's principle cannot be applied to nonunimodal distributions, like the arcsine law, because the entropy may be concave only over a limited portion of the interval. The method of subordination shows that the arcsine distribution corresponds to a process with a single degree of freedom, thereby confirming the invalidation of Boltzmann's principle. The fractalization of time leads to a new distribution in which arcsine and Cauchy distributions can coexist simultaneously for nonintegral degrees of freedom between √2 and 2

  9. Biomedical engineering principles

    CERN Document Server

    Ritter, Arthur B; Valdevit, Antonio; Ascione, Alfred N

    2011-01-01

    Introduction: Modeling of Physiological Processes; Cell Physiology and Transport; Principles and Biomedical Applications of Hemodynamics; A Systems Approach to Physiology; The Cardiovascular System; Biomedical Signal Processing; Signal Acquisition and Processing; Techniques for Physiological Signal Processing; Examples of Physiological Signal Processing; Principles of Biomechanics; Practical Applications of Biomechanics; Biomaterials; Principles of Biomedical Capstone Design; Unmet Clinical Needs; Entrepreneurship: Reasons why Most Good Designs Never Get to Market; An Engineering Solution in Search of a Biomedical Problem

  10. The Top Ten Algorithms in Data Mining

    CERN Document Server

    Wu, Xindong

    2009-01-01

    From classification and clustering to statistical learning, association analysis, and link mining, this book covers the most important topics in data mining research. It presents the ten most influential algorithms used in the data mining community today. Each chapter provides a detailed description of the algorithm, a discussion of available software implementation, advanced topics, and exercises. With a simple data set, examples illustrate how each algorithm works and highlight the overall performance of each algorithm in a real-world application. Featuring contributions from leading researc

  11. Is a weak violation of the Pauli principle possible?

    International Nuclear Information System (INIS)

    Ignat'ev, A.Y.; Kuz'min, V.A.

    1987-01-01

    We examine models in which there is a weak violation of the Pauli principle. A simple algebra of creation and annihilation operators is constructed which contains a parameter β and describes a weak violation of the Pauli principle (when β = 0 the Pauli principle is satisfied exactly). The commutation relations in this algebra turn out to be trilinear. A model based on this algebra is described. It allows transitions in which the Pauli principle is violated, but the probability of these transitions is suppressed by the quantity β² (even though the interaction Hamiltonian does not contain small parameters)

  12. Is weak violation of the Pauli principle possible?

    International Nuclear Information System (INIS)

    Ignat'ev, A.Yu.; Kuz'min, V.A.

    1987-01-01

    The question considered in this work is whether there are models which can account for a small violation of the Pauli principle. A simple algebra is constructed for the creation-annihilation operators, which contains a parameter β and describes a small violation of the Pauli principle (the Pauli principle holds exactly for β = 0). The commutation relations in this algebra are trilinear. A model based upon this commutator algebra is presented which allows transitions violating the Pauli principle, their probability being suppressed by a factor of β² (even though the Hamiltonian does not contain small parameters)

  13. Using the Perceptron Algorithm to Find Consistent Hypotheses

    OpenAIRE

    Anthony, M.; Shawe-Taylor, J.

    1993-01-01

    The perceptron learning algorithm yields quite naturally an algorithm for finding a linearly separable boolean function consistent with a sample of such a function. Using the idea of a specifying sample, we give a simple proof that this algorithm is not efficient, in general.
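
    The algorithm in question is short enough to state in full. Below is a minimal sketch of the perceptron learning rule run until it is consistent with a linearly separable boolean sample; the data set and iteration budget are illustrative assumptions.

        import numpy as np

        def perceptron_consistent(samples, labels, max_epochs=1000):
            """Find a linear threshold function consistent with the sample.

            samples: array of shape (n, d) with boolean inputs in {0, 1}
            labels:  array of shape (n,) with targets in {-1, +1}
            Returns (weights, bias) once every example is classified
            correctly, i.e. a hypothesis consistent with the sample.
            """
            w = np.zeros(samples.shape[1])
            b = 0.0
            for _ in range(max_epochs):
                mistakes = 0
                for x, y in zip(samples, labels):
                    if y * (w @ x + b) <= 0:     # misclassified: update
                        w += y * x
                        b += y
                        mistakes += 1
                if mistakes == 0:                # consistent hypothesis found
                    return w, b
            raise RuntimeError("no consistent hypothesis found within budget")

        # Boolean AND, which is linearly separable
        X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
        y = np.array([-1, -1, -1, +1])
        w, b = perceptron_consistent(X, y)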

  14. Explicit filtering of building blocks for genetic algorithms

    NARCIS (Netherlands)

    C.H.M. van Kemenade

    1996-01-01

    textabstractGenetic algorithms are often applied to building block problems. We have developed a simple filtering algorithm that can locate building blocks within a bit-string, and does not make assumptions regarding the linkage of the bits. A comparison between the filtering algorithm and genetic

  15. Design Principles for Security

    National Research Council Canada - National Science Library

    Benzel, Terry V; Irvine, Cynthia E; Levin, Timothy E; Bhaskara, Ganesha; Nguyen, Thuy D; Clark, Paul C

    2005-01-01

    As a prelude to the clean-slate design for the SecureCore project, the fundamental security principles from more than four decades of research and development in information security technology were reviewed...

  16. Principles of applied statistics

    National Research Council Canada - National Science Library

    Cox, D. R; Donnelly, Christl A

    2011-01-01

    .... David Cox and Christl Donnelly distil decades of scientific experience into usable principles for the successful application of statistics, showing how good statistical strategy shapes every stage of an investigation...

  17. Minimum entropy production principle

    Czech Academy of Sciences Publication Activity Database

    Maes, C.; Netočný, Karel

    2013-01-01

    Roč. 8, č. 7 (2013), s. 9664-9677 ISSN 1941-6016 Institutional support: RVO:68378271 Keywords : MINEP Subject RIV: BE - Theoretical Physics http://www.scholarpedia.org/article/Minimum_entropy_production_principle

  18. Vaccinology: principles and practice

    National Research Council Canada - National Science Library

    Morrow, John

    2012-01-01

    ... principles to implementation. This is an authoritative textbook that details a comprehensive and systematic approach to the science of vaccinology, focusing not only on basic science but also on the many stages required to commercialize...

  19. Rules Extraction with an Immune Algorithm

    Directory of Open Access Journals (Sweden)

    Deqin Yan

    2007-12-01

    Full Text Available In this paper, a method of extracting rules from information systems with immune algorithms is proposed. The design of the immune algorithm is based on a sharing mechanism for extracting rules; the principle of sharing and competing for resources in this mechanism is consistent with the relationship of sharing and rivalry among rules. In order to extract rules efficiently, new concepts of flexible confidence and rule measurement are introduced. Experiments demonstrate that the proposed method is effective.

  20. Simple Finite Sums

    KAUST Repository

    Alabdulmohsin, Ibrahim M.

    2018-03-07

    We will begin our treatment of summability calculus by analyzing what will be referred to, throughout this book, as simple finite sums. Even though the results of this chapter are particular cases of the more general results presented in later chapters, they are important to start with for a few reasons. First, this chapter serves as an excellent introduction to what summability calculus can markedly accomplish. Second, simple finite sums are encountered more often and, hence, they deserve special treatment. Third, the results presented in this chapter for simple finite sums will, themselves, be used as building blocks for deriving the most general results in subsequent chapters. Among others, we establish that fractional finite sums are well-defined mathematical objects and show how various identities related to the Euler constant as well as the Riemann zeta function can actually be derived in an elementary manner using fractional finite sums.

  1. Analytic representation for first-principles pseudopotentials

    International Nuclear Information System (INIS)

    Lam, P.K.; Cohen, M.L.; Zunger, A.

    1980-01-01

    The first-principles pseudopotentials developed by Zunger and Cohen are fit with a simple analytic form chosen to model the main physical properties of the potentials. The fitting parameters for the first three rows of the Periodic Table are presented, and the quality of the fit is discussed. The parameters reflect chemical trends of the elements. We find that a minimum of three parameters is required to reproduce the regularities of the Periodic Table. Application of these analytic potentials is also discussed

  2. Electrical and electronic principles

    CERN Document Server

    Knight, S A

    1991-01-01

    Electrical and Electronic Principles, 2, Second Edition covers the syllabus requirements of BTEC Unit U86/329, including the principles of control systems and elements of data transmission. The book first tackles series and parallel circuits, electrical networks, and capacitors and capacitance. Discussions focus on flux density, electric force, permittivity, Kirchhoff's laws, superposition theorem, arrangement of resistors, internal resistance, and powers in a circuit. The text then takes a look at capacitors in circuit, magnetism and magnetization, electromagnetic induction, and alternating v

  3. Electrical and electronic principles

    CERN Document Server

    Knight, SA

    1988-01-01

    Electrical and Electronic Principles, 3 focuses on the principles involved in electrical and electronic circuits, including impedance, inductance, capacitance, and resistance.The book first deals with circuit elements and theorems, D.C. transients, and the series circuits of alternating current. Discussions focus on inductance and resistance in series, resistance and capacitance in series, power factor, impedance, circuit magnification, equation of charge, discharge of a capacitor, transfer of power, and decibels and attenuation. The manuscript then examines the parallel circuits of alternatin

  4. Remark on Heisenberg's principle

    International Nuclear Information System (INIS)

    Noguez, G.

    1988-01-01

    Application of Heisenberg's principle to inertial frame transformations allows a distinction between three commutative groups of reciprocal transformations along one direction: Galilean transformations, dual transformations, and Lorentz transformations. These are three conjugate groups and for a given direction, the related commutators are all proportional to one single conjugation transformation which compensates for uniform and rectilinear motions. The three transformation groups correspond to three complementary ways of measuring space-time as a whole. Heisenberg's Principle then gets another explanation [fr

  5. Microprocessors principles and applications

    CERN Document Server

    Debenham, Michael J

    1979-01-01

    Microprocessors: Principles and Applications deals with the principles and applications of microprocessors and covers topics ranging from computer architecture and programmed machines to microprocessor programming, support systems and software, and system design. A number of microprocessor applications are considered, including data processing, process control, and telephone switching. This book is comprised of 10 chapters and begins with a historical overview of computers and computing, followed by a discussion on computer architecture and programmed machines, paying particular attention to t

  6. Microwave system engineering principles

    CERN Document Server

    Raff, Samuel J

    1977-01-01

    Microwave System Engineering Principles focuses on the calculus, differential equations, and transforms of microwave systems. This book discusses the basic nature and principles that can be derived from thermal noise; statistical concepts and binomial distribution; incoherent signal processing; basic properties of antennas; and beam widths and useful approximations. The fundamentals of propagation; LaPlace's Equation and Transmission Line (TEM) waves; interfaces between homogeneous media; modulation, bandwidth, and noise; and communications satellites are also deliberated in this text. This bo

  7. A novel hybrid algorithm of GSA with Kepler algorithm for numerical optimization

    Directory of Open Access Journals (Sweden)

    Soroor Sarafrazi

    2015-07-01

    Full Text Available It is now well recognized that pure algorithms can be promisingly improved by hybridization with other techniques. One of the relatively new metaheuristic algorithms is the Gravitational Search Algorithm (GSA), which is based on Newton's laws. In this paper, to enhance the performance of GSA, a novel algorithm called "Kepler", inspired by astrophysics, is introduced. The Kepler algorithm is based on the principle of Kepler's first law. The hybridization of GSA and the Kepler algorithm is an efficient approach to providing much stronger specialization in intensification and/or diversification. The performance of GSA–Kepler is evaluated by applying it to 14 benchmark functions with 20–1000 dimensions and to the optimal approximation of a linear system as a practical optimization problem. The results obtained reveal that the proposed hybrid algorithm is robust enough to optimize the benchmark functions and practical optimization problems.
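
    To make the gravitational metaphor concrete, the following toy sketch implements the core GSA loop (fitness-derived masses, pairwise attraction, velocity and position updates) on an assumed sphere benchmark; the constants g0 and alpha and all sizes are illustrative, and the Kepler hybridization step is not reproduced here.

        import numpy as np

        rng = np.random.default_rng(0)

        def sphere(x):                           # toy benchmark to minimise
            return np.sum(x**2, axis=-1)

        def gsa(obj, n=30, dim=5, iters=200, g0=100.0, alpha=20.0):
            """Minimal Gravitational Search Algorithm sketch."""
            X = rng.uniform(-10, 10, (n, dim))
            V = np.zeros((n, dim))
            for t in range(iters):
                f = obj(X)
                best, worst = f.min(), f.max()
                m = (worst - f) / (worst - best + 1e-12)  # better fitness -> larger mass
                M = m / m.sum()
                G = g0 * np.exp(-alpha * t / iters)       # decaying gravitational constant
                A = np.zeros((n, dim))
                for i in range(n):
                    for j in range(n):
                        if i != j:
                            diff = X[j] - X[i]
                            r = np.linalg.norm(diff) + 1e-12
                            # random weighting keeps the search stochastic
                            A[i] += rng.random() * G * M[j] * diff / r
                V = rng.random((n, dim)) * V + A
                X = X + V
            return X[obj(X).argmin()]

        best_point = gsa(sphere)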

  8. Droids Made Simple

    CERN Document Server

    Mazo, Gary

    2011-01-01

    If you have a Droid series smartphone - Droid, Droid X, Droid 2, or Droid 2 Global - and are eager to get the most out of your device, Droids Made Simple is perfect for you. Authors Martin Trautschold, Gary Mazo and Marziah Karch guide you through all of the features, tips, and tricks using their proven combination of clear instructions and detailed visuals. With hundreds of annotated screenshots and step-by-step directions, Droids Made Simple will transform you into a Droid expert, improving your productivity, and most importantly, helping you take advantage of all of the cool features that c

  9. Excel 2010 Made Simple

    CERN Document Server

    Katz, Abbott

    2011-01-01

    Get the most out of Excel 2010 with Excel 2010 Made Simple - learn the key features, understand what's new, and utilize dozens of time-saving tips and tricks to get your job done. Over 500 screen visuals and clear-cut instructions guide you through the features of Excel 2010, from formulas and charts to navigating around a worksheet and understanding Visual Basic for Applications (VBA) and macros. Excel 2010 Made Simple takes a practical and highly effective approach to using Excel 2010, showing you the best way to complete your most common spreadsheet tasks. You'll learn how to input, format,

  10. Algorithmic causets

    International Nuclear Information System (INIS)

    Bolognesi, Tommaso

    2011-01-01

    In the context of quantum gravity theories, several researchers have proposed causal sets as appropriate discrete models of spacetime. We investigate families of causal sets obtained from two simple models of computation - 2D Turing machines and network mobile automata - that operate on 'high-dimensional' supports, namely 2D arrays of cells and planar graphs, respectively. We study a number of quantitative and qualitative emergent properties of these causal sets, including dimension, curvature and localized structures, or 'particles'. We show how the possibility to detect and separate particles from background space depends on the choice between a global or local view at the causal set. Finally, we spot very rare cases of pseudo-randomness, or deterministic chaos; these exhibit a spontaneous phenomenon of 'causal compartmentation' that appears as a prerequisite for the occurrence of anything of physical interest in the evolution of spacetime.

  11. Working with Simple Machines

    Science.gov (United States)

    Norbury, John W.

    2006-01-01

    A set of examples is provided that illustrate the use of work as applied to simple machines. The ramp, pulley, lever and hydraulic press are common experiences in the life of a student, and their theoretical analysis therefore makes the abstract concept of work more real. The mechanical advantage of each of these systems is also discussed so that…

  12. A Simple Hydrogen Electrode

    Science.gov (United States)

    Eggen, Per-Odd

    2009-01-01

    This article describes the construction of an inexpensive, robust, and simple hydrogen electrode, as well as the use of this electrode to measure "standard" potentials. In the experiment described here the students can measure the reduction potentials of metal-metal ion pairs directly, without using a secondary reference electrode. Measurements…

  13. Simple Driving Techniques

    DEFF Research Database (Denmark)

    Rosendahl, Mads

    2002-01-01

    ...-like language. Our aim is to extract a simple notion of driving and show that even in this tamed form it has much of the power of more general notions of driving. Our driving technique may be used to simplify functional programs which use function composition and will often be able to remove intermediate data...

  14. Simple cryogenic infrared window

    NARCIS (Netherlands)

    Hartemink, M.; Hartemink, M.; Godfried, H.P; Godfried, Herman

    1991-01-01

    A simple, cheap technique is reported that allows materials with both large and small thermal expansion coefficients to be mounted as windows in low temperature cryostats while at the same time avoiding thermal stresses. The construction may be thermally cycled many times with no change in its

  15. Structure of simple liquids

    International Nuclear Information System (INIS)

    Blain, J.F.

    1969-01-01

    On one hand, the results obtained by applying the two important methods of studying the structure of liquids, X-ray and neutron scattering, to argon and sodium are presented. On the other hand, the principal models employed for reconstituting the structure of simple liquids are described: mathematical models, lattice models and their derived models, and experimental models. (author) [fr

  16. Basic Principles - Chapter 6

    Data.gov (United States)

    National Aeronautics and Space Administration — This chapter described at a very high level some of the considerations that need to be made when designing algorithms for a vehicle health management application....

  17. Empirical study of parallel LRU simulation algorithms

    Science.gov (United States)

    Carr, Eric; Nicol, David M.

    1994-01-01

    This paper reports on the performance of five parallel algorithms for simulating a fully associative cache operating under the LRU (Least-Recently-Used) replacement policy. Three of the algorithms are SIMD and are implemented on the MasPar MP-2 architecture. The other two are parallelizations of an efficient serial algorithm on the Intel Paragon. One SIMD algorithm is quite simple, but its cost is linear in the cache size. The other two SIMD algorithms are more complex but have costs that are independent of the cache size. Both the second and third SIMD algorithms compute all stack distances; the second SIMD algorithm is completely general, whereas the third presumes, and takes advantage of, bounds on the range of reference tags. Both MIMD algorithms implemented on the Paragon are general and compute all stack distances; they differ in one step that may affect their respective scalability. We assess the strengths and weaknesses of these algorithms as a function of problem size and characteristics, and compare their performance on traces derived from the execution of three SPEC benchmark programs.
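
    For orientation, the stack-distance computation underlying these algorithms has a very short serial reference version; the sketch below is an assumption-level illustration, not one of the paper's parallel algorithms.

        def lru_stack_distances(trace):
            """Return the LRU stack distance of each reference in the trace.

            The stack distance of a reference is its depth in the LRU stack:
            one plus the number of distinct addresses touched since its
            previous use (infinity on first use). A reference hits in an LRU
            cache of size C iff its distance <= C, so one pass yields hit
            ratios for all cache sizes simultaneously.
            """
            stack = []              # most-recently-used address at the front
            distances = []
            for addr in trace:
                if addr in stack:
                    depth = stack.index(addr) + 1   # 1-based stack depth
                    stack.remove(addr)
                else:
                    depth = float("inf")            # compulsory miss
                stack.insert(0, addr)
                distances.append(depth)
            return distances

        # 'a' repeats with two distinct addresses in between -> distance 3
        print(lru_stack_distances(["a", "b", "c", "a", "b"]))
        # [inf, inf, inf, 3, 3]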

  18. Research Techniques Made Simple

    DEFF Research Database (Denmark)

    Kim, Noori; Fischer, Alexander H; Dyring-Andersen, Beatrice

    2017-01-01

    The statistical significance of results is an important component of drawing appropriate conclusions in a study. Choosing the correct statistical test to analyze results is essential to interpreting the validity of the study and centers on defining the study variables and the purpose of the analysis. The complexity of statistical modeling makes this a daunting task, so we propose a basic algorithmic approach as an initial step in determining what statistical method will be appropriate for a particular clinical study.

  19. An Improved Robot Path Planning Algorithm

    Directory of Open Access Journals (Sweden)

    Xuesong Yan

    2012-12-01

    Full Text Available Robot path planning is an NP problem, and traditional optimization methods are not very effective at solving it; a traditional genetic algorithm is easily trapped in local minima. We therefore form a new genetic algorithm that starts from a simple genetic algorithm, applies the orthogonal design method to population initialization, uses an intergenerational elite mechanism, and introduces an adaptive local search operator to avoid becoming trapped in local minima and to improve convergence speed. A series of numerical experiments proves the new algorithm to be efficient. We also use the proposed algorithm to solve the robot path planning problem, and the experimental results indicate that the new algorithm is efficient at solving robot path planning problems and that the best path can usually be found.
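
    As a point of reference for the ingredients listed above (elite preservation plus an adaptive local search operator), here is a deliberately small sketch of an elitist genetic algorithm with a bit-flip local search on a toy objective; the encoding, rates, and objective are assumptions and far simpler than a real path-planning setup.

        import random

        random.seed(42)

        def fitness(bits):                       # toy objective: maximise ones
            return sum(bits)

        def mutate(bits, rate=0.02):
            return [b ^ (random.random() < rate) for b in bits]

        def local_search(bits, tries=10):
            """Hill-climb by single-bit flips; keeps only improvements."""
            best = bits[:]
            for _ in range(tries):
                cand = best[:]
                i = random.randrange(len(cand))
                cand[i] ^= 1
                if fitness(cand) > fitness(best):
                    best = cand
            return best

        def elitist_ga(n=40, length=64, gens=100, elite=2):
            pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(n)]
            for _ in range(gens):
                pop.sort(key=fitness, reverse=True)
                nxt = [ind[:] for ind in pop[:elite]]          # elites survive unchanged
                while len(nxt) < n:
                    p1, p2 = random.sample(pop[: n // 2], 2)   # truncation selection
                    cut = random.randrange(1, length)
                    child = mutate(p1[:cut] + p2[cut:])        # one-point crossover
                    nxt.append(local_search(child))
                pop = nxt
            return max(pop, key=fitness)

        best = elitist_ga()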

  20. Algorithms for worst-case tolerance optimization

    DEFF Research Database (Denmark)

    Schjær-Jacobsen, Hans; Madsen, Kaj

    1979-01-01

    New algorithms are presented for the solution of optimum tolerance assignment problems. The problems considered are defined mathematically as a worst-case problem (WCP), a fixed tolerance problem (FTP), and a variable tolerance problem (VTP). The basic optimization problem without tolerances is denoted the zero tolerance problem (ZTP). For solution of the WCP we suggest the application of interval arithmetic as well as alternative methods. For solution of the FTP an algorithm is suggested which is conceptually similar to algorithms previously developed by the authors for the ZTP. Finally, the VTP is solved by a double-iterative algorithm in which the inner iteration is performed by the FTP algorithm. The application of the algorithms is demonstrated by means of relatively simple numerical examples, and basic properties, such as convergence properties, are displayed based on the examples.
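
    Interval arithmetic makes the worst-case idea tangible: propagating parameter tolerances as intervals bounds the worst-case response without sampling corners. The sketch below is a minimal illustration under assumed tolerances for a voltage divider, not the authors' WCP algorithm.

        class Interval:
            """Closed interval [lo, hi] with just the arithmetic needed here."""
            def __init__(self, lo, hi):
                self.lo, self.hi = min(lo, hi), max(lo, hi)

            def __add__(self, other):
                return Interval(self.lo + other.lo, self.hi + other.hi)

            def __truediv__(self, other):
                assert other.lo > 0, "divisor interval must exclude zero"
                return Interval(self.lo / other.hi, self.hi / other.lo)

            def __repr__(self):
                return f"[{self.lo:.4f}, {self.hi:.4f}]"

        def with_tolerance(nominal, tol):
            return Interval(nominal * (1 - tol), nominal * (1 + tol))

        # Worst-case gain of a divider R2/(R1+R2) with 5% resistors.
        # Because R2 appears twice, naive interval evaluation gives a
        # conservative (wider-than-true) but still valid worst-case bound.
        R1 = with_tolerance(1000.0, 0.05)
        R2 = with_tolerance(2000.0, 0.05)
        print(R2 / (R1 + R2))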

  1. GENERAL PRINCIPLES OF LAW

    Directory of Open Access Journals (Sweden)

    Elena ANGHEL

    2016-05-01

    Full Text Available According to Professor Djuvara, "law can be a science, and legal knowledge can also become science when, referring to as large a number as possible of the acts covered by law, it sorts and connects them by their essential characters into legal concepts or principles which are universally valid, just like the laws of nature". The general principles of law occupy a privileged place in the positive legal order and represent the foundation of any legal construction. The essence of legal principles resides in their generality. Regarding the term "general", Franck Moderne raised the question of the degree of generality required to define a principle as general: at the level of an institution, of a branch of the law, or of the entire legal order. The purpose of this study is to identify the characteristics of legal principles; in our opinion, four characteristics can be mentioned.

  2. Ethical principles of scientific communication

    Directory of Open Access Journals (Sweden)

    Baranov G. V.

    2017-03-01

    Full Text Available The article presents the principles of the ethical management of scientific communication. The author affirms the priority of the ethical principle of the scientist's social responsibility.

  3. Data structures and algorithm analysis in Java

    CERN Document Server

    Shaffer, Clifford A

    2011-01-01

    With its focus on creating efficient data structures and algorithms, this comprehensive text helps readers understand how to select or design the tools that will best solve specific problems. It uses Java as the programming language and is suitable for second-year data structure courses and computer science courses in algorithm analysis. Techniques for representing data are presented within the context of assessing costs and benefits, promoting an understanding of the principles of algorithm analysis and the effects of a chosen physical medium. The text also explores tradeoff issues, familiari

  4. Data structures and algorithm analysis in C++

    CERN Document Server

    Shaffer, Clifford A

    2011-01-01

    With its focus on creating efficient data structures and algorithms, this comprehensive text helps readers understand how to select or design the tools that will best solve specific problems. It uses Microsoft C++ as the programming language and is suitable for second-year data structure courses and computer science courses in algorithm analysis.Techniques for representing data are presented within the context of assessing costs and benefits, promoting an understanding of the principles of algorithm analysis and the effects of a chosen physical medium. The text also explores tradeoff issues, f

  5. Genetic Algorithm Tuned Fuzzy Logic for Gliding Return Trajectories

    Science.gov (United States)

    Burchett, Bradley T.

    2003-01-01

    The problem of designing and flying a trajectory for the successful recovery of a reusable launch vehicle is tackled using fuzzy logic control with genetic algorithm optimization. The plant is approximated by a simplified three-degree-of-freedom non-linear model. A baseline trajectory design and guidance algorithm consisting of several Mamdani-type fuzzy controllers is tuned using a simple genetic algorithm. Preliminary results show that the performance of the overall system improves with genetic algorithm tuning.
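
    As a flavor of what "tuning a Mamdani controller with a genetic algorithm" means in code, here is a toy sketch in which a GA could adjust the parameters of triangular membership functions; the single-input controller, parameter layout, and cost function are invented for illustration and are not from the paper.

        def tri(x, a, b, c):
            """Triangular membership function peaking at b."""
            if x <= a or x >= c:
                return 0.0
            return (x - a) / (b - a) if x < b else (c - x) / (c - b)

        def controller(error, params):
            """One-input Mamdani-style controller: three rules, defuzzified
            by a weighted average of the rule output centres."""
            neg, zero, pos = params          # each is (a, b, c) for tri()
            w = [tri(error, *neg), tri(error, *zero), tri(error, *pos)]
            centres = [-1.0, 0.0, 1.0]       # rule consequents
            s = sum(w)
            return sum(wi * ci for wi, ci in zip(w, centres)) / s if s else 0.0

        def cost(params):
            """Fitness a GA would minimise: tracking error on sample inputs."""
            samples = [-2, -1, -0.5, 0, 0.5, 1, 2]
            return sum((controller(e, params) - max(-1, min(1, e)))**2
                       for e in samples)

        # A GA individual is just the flattened membership parameters;
        # mutation perturbs them, and cost() drives selection.
        seed_params = ((-2.0, -1.0, 0.0), (-1.0, 0.0, 1.0), (0.0, 1.0, 2.0))
        print(cost(seed_params))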

  6. Developing principles of growth

    DEFF Research Database (Denmark)

    Neergaard, Helle; Fleck, Emma

    Although it has been widely recognized that the growth of women-owned businesses is central to wealth creation, innovation and economic development, limited attention has been devoted to understanding small business growth from a female perspective. This research seeks to develop an understanding of the principles of growth among women-owned firms. Using an in-depth case study methodology, data was collected from women-owned firms in Denmark and Ireland, as these countries are similar in contextual terms, e.g. population and business composition, dominated by micro, small and medium-sized enterprises. Extending on principles put forward in effectuation theory, we propose that women grow their firms according to five principles which enable women's enterprises to survive in the face of crises such as the current financial world crisis.

  7. Principles of musical acoustics

    CERN Document Server

    Hartmann, William M

    2013-01-01

    Principles of Musical Acoustics focuses on the basic principles in the science and technology of music. Musical examples and specific musical instruments demonstrate the principles. The book begins with a study of vibrations and waves, in that order. These topics constitute the basic physical properties of sound, one of two pillars supporting the science of musical acoustics. The second pillar is the human element, the physiological and psychological aspects of acoustical science. The perceptual topics include loudness, pitch, tone color, and localization of sound. With these two pillars in place, it is possible to go in a variety of directions. The book treats in turn, the topics of room acoustics, audio both analog and digital, broadcasting, and speech. It ends with chapters on the traditional musical instruments, organized by family. The mathematical level of this book assumes that the reader is familiar with elementary algebra. Trigonometric functions, logarithms and powers also appear in the book, but co...

  8. Bee Colony Optimization - part I: The algorithm overview

    Directory of Open Access Journals (Sweden)

    Davidović Tatjana

    2015-01-01

    Full Text Available This paper is an extensive survey of the Bee Colony Optimization (BCO algorithm, proposed for the first time in 2001. BCO and its numerous variants belong to a class of nature-inspired meta-heuristic methods, based on the foraging habits of honeybees. Our main goal is to promote it among the wide operations research community. BCO is a simple, but efficient meta-heuristic technique that has been successfully applied to many optimization problems, mostly in transport, location and scheduling fields. Firstly, we shall give a brief overview of the other meta-heuristics inspired by bees’ foraging principles pointing out the differences between them. Then, we shall provide the detailed description of the BCO algorithm and its modifications, including the strategies for BCO parallelization, and giving the preliminary results regarding its convergence. The application survey is elaborated in Part II of our paper. [Projekat Ministarstva nauke Republike Srbije, br. OI174010, br. OI174033 i br. TR36002

  9. The Design of SimpleITK

    Directory of Open Access Journals (Sweden)

    Bradley Christopher Lowekamp

    2013-12-01

    Full Text Available SimpleITK is a new interface to the Insight Segmentation and Registration Toolkit (ITK), designed to facilitate rapid prototyping, education and scientific activities via high-level programming languages. ITK is a templated C++ library of image processing algorithms and frameworks for biomedical and other applications, and it was designed to be generic, flexible and extensible. Initially, ITK provided a direct wrapping interface to languages such as Python and Tcl through the WrapITK system. Unlike WrapITK, which exposed ITK's complex templated interface, SimpleITK was designed to provide an easy-to-use and simplified interface to ITK's algorithms. It includes procedural methods, hides ITK's demand-driven pipeline, and provides a template-less layer. SimpleITK also provides practical conveniences such as binary distribution packages and overloaded operators. Our user-friendly design goals dictated a departure from the direct interface wrapping approach of WrapITK, towards a new facade class structure that only exposes the required functionality, hiding ITK's extensive template use. Internally SimpleITK utilizes a manual description of each filter with code generation and advanced C++ meta-programming to provide the higher-level interface, bringing the capabilities of ITK to a wider audience. SimpleITK is licensed as open source software under the Apache License Version 2.0 and more information about downloading it can be found at http://www.simpleitk.org.
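
    A few lines of Python convey the flavor of the facade interface described above. The calls are standard SimpleITK procedural methods; the file names are placeholders, and a grayscale input image is assumed.

        import SimpleITK as sitk

        # Read, smooth, and threshold an image through the procedural facade;
        # no templates, pipelines, or explicit pixel types are exposed.
        image = sitk.ReadImage("input.png")            # placeholder file name
        image = sitk.Cast(image, sitk.sitkFloat32)     # smoothing needs a real pixel type
        smoothed = sitk.SmoothingRecursiveGaussian(image, sigma=2.0)
        mask = sitk.OtsuThreshold(smoothed, insideValue=0, outsideValue=1)
        sitk.WriteImage(mask, "mask.png")              # placeholder file name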

  10. Complexity is simple!

    Science.gov (United States)

    Cottrell, William; Montero, Miguel

    2018-02-01

    In this note we investigate the role of Lloyd's computational bound in holographic complexity. Our goal is to translate the assumptions behind Lloyd's proof into the bulk language. In particular, we discuss the distinction between orthogonalizing and `simple' gates and argue that these notions are useful for diagnosing holographic complexity. We show that large black holes constructed from series circuits necessarily employ simple gates, and thus do not satisfy Lloyd's assumptions. We also estimate the degree of parallel processing required in this case for elementary gates to orthogonalize. Finally, we show that for small black holes at fixed chemical potential, the orthogonalization condition is satisfied near the phase transition, supporting a possible argument for the Weak Gravity Conjecture first advocated in [1].

  11. Modern mathematics made simple

    CERN Document Server

    Murphy, Patrick

    1982-01-01

    Modern Mathematics: Made Simple presents topics in modern mathematics, from elementary mathematical logic and switching circuits to multibase arithmetic and finite systems. Sets and relations, vectors and matrices, tesselations, and linear programming are also discussed.Comprised of 12 chapters, this book begins with an introduction to sets and basic operations on sets, as well as solving problems with Venn diagrams. The discussion then turns to elementary mathematical logic, with emphasis on inductive and deductive reasoning; conjunctions and disjunctions; compound statements and conditional

  12. Working with simple machines

    OpenAIRE

    Norbury, John W.

    2006-01-01

    A set of examples is provided that illustrate the use of work as applied to simple machines. The ramp, pulley, lever and hydraulic press are common experiences in the life of a student and their theoretical analysis therefore makes the abstract concept of work more real. The mechanical advantage of each of these systems is also discussed so that students can evaluate their usefulness as machines.

  13. The simple complex numbers

    OpenAIRE

    Zalesny, Jaroslaw

    2008-01-01

    A new simple geometrical interpretation of complex numbers is presented. It differs from their usual interpretation as points in the complex plane. From the new point of view, complex numbers are operations on vectors rather than points. Moreover, in this approach the real, imaginary and complex numbers have similar interpretations: they are simply operations on vectors. The presented interpretation is simpler, more natural, and better adjusted to possible applications in geometry and ...

  14. Information technology made simple

    CERN Document Server

    Carter, Roger

    1991-01-01

    Information Technology: Made Simple covers the full range of information technology topics, including more traditional subjects such as programming languages, data processing, and systems analysis. The book discusses information revolution, including topics about microchips, information processing operations, analog and digital systems, information processing system, and systems analysis. The text also describes computers, computer hardware, microprocessors, and microcomputers. The peripheral devices connected to the central processing unit; the main types of system software; application soft

  15. Simple substrates for complex cognition

    Directory of Open Access Journals (Sweden)

    Peter Dayan

    2008-12-01

    Full Text Available Complex cognitive tasks present a range of computational and algorithmic challenges for neural accounts of both learning and inference. In particular, it is extremely hard to solve them using the sort of simple policies that have been extensively studied as solutions to elementary Markov decision problems. There has thus been recent interest in architectures for the instantiation and even learning of policies that are formally more complicated than these, involving operations such as gated working memory. However, the focus of these ideas and methods has largely been on what might best be considered as automatized, routine or, in the sense of animal conditioning, habitual, performance. Thus, they have yet to provide a route towards understanding the workings of rule-based control, which is critical for cognitively sophisticated competence. Here, we review a recent suggestion for a uniform architecture for habitual and rule-based execution, discuss some of the habitual mechanisms that underpin the use of rules, and consider a statistical relationship between rules and habits.

  16. Electrical principles 3 checkbook

    CERN Document Server

    Bird, J O

    2013-01-01

    Electrical Principles 3 Checkbook aims to introduce students to the basic electrical principles needed by technicians in electrical engineering, electronics, and telecommunications.The book first tackles circuit theorems, single-phase series A.C. circuits, and single-phase parallel A.C. circuits. Discussions focus on worked problems on parallel A.C. circuits, worked problems on series A.C. circuits, main points concerned with D.C. circuit analysis, worked problems on circuit theorems, and further problems on circuit theorems. The manuscript then examines three-phase systems and D.C. transients

  17. Principles of statistics

    CERN Document Server

    Bulmer, M G

    1979-01-01

    There are many textbooks which describe current methods of statistical analysis, while neglecting related theory. There are equally many advanced textbooks which delve into the far reaches of statistical theory, while bypassing practical applications. But between these two approaches is an unfilled gap, in which theory and practice merge at an intermediate level. Professor M. G. Bulmer's Principles of Statistics, originally published in 1965, was created to fill that need. The new, corrected Dover edition of Principles of Statistics makes this invaluable mid-level text available once again fo

  18. Teaching/learning principles

    Science.gov (United States)

    Hankins, D. B.; Wake, W. H.

    1981-01-01

    The potential remote sensing user community is enormous, and the teaching and training tasks are even larger; however, some underlying principles may be synthesized and applied at all levels from elementary school children to sophisticated and knowledgeable adults. The basic rules applying to each of the six major elements of any training course and the underlying principle involved in each rule are summarized. The six identified major elements are: (1) field sites for problems and practice; (2) lectures and inside study; (3) learning materials and resources (the kit); (4) the field experience; (5) laboratory sessions; and (6) testing and evaluation.

  19. Principles of quantum electronics

    CERN Document Server

    Marcuse, Dietrich

    1980-01-01

    Principles of Quantum Electronics focuses on the concept of quantum electronics as the application of quantum theory to engineering problems. It examines the principles that govern specific quantum electronics devices and presents their theoretical applications to typical problems. Comprised of 10 chapters, this book starts with an overview of the Dirac formulation of quantum mechanics. This text then considers the derivation of the formalism of field quantization and discusses the properties of photons and phonons. Other chapters examine the interaction between the electromagnetic field and c

  20. Mechanical engineering principles

    CERN Document Server

    Bird, John

    2014-01-01

    A student-friendly introduction to core engineering topicsThis book introduces mechanical principles and technology through examples and applications, enabling students to develop a sound understanding of both engineering principles and their use in practice. These theoretical concepts are supported by 400 fully worked problems, 700 further problems with answers, and 300 multiple-choice questions, all of which add up to give the reader a firm grounding on each topic.The new edition is up to date with the latest BTEC National specifications and can also be used on undergraduate courses in mecha

  1. Particle swarm genetic algorithm and its application

    International Nuclear Information System (INIS)

    Liu Chengxiang; Yan Changxiang; Wang Jianjun; Liu Zhenhai

    2012-01-01

    To solve the problems of slow convergence speed and the tendency of standard particle swarm optimization to fall into local optima when dealing with nonlinear constrained optimization problems, a particle swarm genetic algorithm is designed. The proposed algorithm adopts the feasibility principle to handle constraint conditions, avoiding the difficulty of selecting a punishment factor in the penalty function method; generates the initial feasible group randomly, which accelerates the particle swarm convergence speed; and introduces genetic algorithm crossover and mutation strategies to keep the particle swarm from falling into local optima. Optimization calculations on typical test functions show that the particle swarm genetic algorithm has better optimization performance. The algorithm is applied in nuclear power plant optimization, and the optimization results are significant. (authors)
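
    The feasibility principle mentioned here is easy to state in code: when comparing two candidates, feasibility beats infeasibility, and ties are broken by objective value or by constraint violation. The sketch below shows only that comparison rule (often called Deb's feasibility rules; attributing that exact rule to the paper is an interpretation of the abstract), independent of the surrounding swarm and GA machinery.

        def violation(x, constraints):
            """Total constraint violation; constraints are g(x) <= 0."""
            return sum(max(0.0, g(x)) for g in constraints)

        def better(x, y, objective, constraints):
            """Feasibility-rule comparison: return True if x is preferred.

            1. A feasible solution beats an infeasible one.
            2. Two feasible solutions compare by objective value.
            3. Two infeasible solutions compare by total violation.
            No penalty factor is needed, which is the point of the rule.
            """
            vx, vy = violation(x, constraints), violation(y, constraints)
            if vx == 0.0 and vy == 0.0:
                return objective(x) < objective(y)
            if vx == 0.0 or vy == 0.0:
                return vx == 0.0
            return vx < vy

        # Toy problem: minimise x^2 subject to x >= 1 (i.e. 1 - x <= 0)
        obj = lambda x: x * x
        cons = [lambda x: 1.0 - x]
        print(better(1.2, 0.5, obj, cons))   # True: 1.2 is feasible, 0.5 is not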

  2. The mGA1.0: A common LISP implementation of a messy genetic algorithm

    Science.gov (United States)

    Goldberg, David E.; Kerzic, Travis

    1990-01-01

    Genetic algorithms (GAs) are finding increased application in difficult search, optimization, and machine learning problems in science and engineering. Increasing demands are being placed on algorithm performance, and the remaining challenges of genetic algorithm theory and practice are becoming increasingly unavoidable. Perhaps the most difficult of these challenges is the so-called linkage problem. Messy GAs were created to overcome the linkage problem of simple genetic algorithms by combining variable-length strings, gene expression, messy operators, and a nonhomogeneous phasing of evolutionary processing. Results on a number of difficult deceptive test functions are encouraging, with the mGA always finding global optima in a polynomial number of function evaluations. Theoretical and empirical studies are continuing, and a first version of a messy GA is ready for testing by others. A Common LISP implementation called mGA1.0 is documented and related to the basic principles and operators developed by Goldberg et al. (1989, 1990). Although the code was prepared with care, it is not a general-purpose code, only a research version. Important data structures and global variables are described. Thereafter brief function descriptions are given, and sample input data are presented together with sample program output. A source listing with comments is also included.

  3. Insulin degludec once-daily in type 2 diabetes: simple or step-wise titration (BEGIN: once simple use).

    Science.gov (United States)

    Philis-Tsimikas, Athena; Brod, Meryl; Niemeyer, Marcus; Ocampo Francisco, Ann Marie; Rothman, Jeffrey

    2013-06-01

    Insulin degludec (IDeg) is a new basal insulin in development with a flat, ultra-long action profile that may permit dosing using a simplified titration algorithm with less frequent self-measured blood glucose (SMBG) measurements and more simplified titration steps than currently available basal insulins. This 26-week, multi-center, open-label, randomized, treat-to-target study compared the efficacy and safety of IDeg administered once-daily in combination with metformin in insulin-naïve subjects with type 2 diabetes using two different patient-driven titration algorithms: a "Simple" algorithm, with dose adjustments based on one pre-breakfast SMBG measurement (n = 111) versus a "Step-wise" algorithm, with adjustments based on three consecutive pre-breakfast SMBG values (n = 111). IDeg was administered using the FlexTouch® insulin pen (Novo Nordisk A/S, Bagsværd, Denmark), with once-weekly dose titration in both groups. Glycosylated hemoglobin (HbA1c) decreased from baseline to week 26 in both groups (-1.09%, IDegSimple; -0.93%, IDegStep-wise). IDegSimple was non-inferior to IDegStep-wise in lowering HbA1c [estimated treatment difference (IDegSimple - IDegStep-wise): -0.16% points (-0.39; 0.07) 95% CI]. Fasting plasma glucose was reduced (-3.27 mmol/L, IDegSimple; -2.68 mmol/L, IDegStep-wise) with no significant difference between groups. Rates of confirmed hypoglycemia [1.60, IDegSimple; 1.17, IDegStep-wise events/patient year of exposure (PYE)] and nocturnal confirmed hypoglycemia (0.21, IDegSimple; 0.10, IDegStep-wise events/PYE) were low, with no significant differences between groups. Daily insulin dose after 26 weeks was 0.61 U/kg (IDegSimple) and 0.50 U/kg (IDegStep-wise). No significant difference in weight change was seen between groups by week 26 (+1.6 kg, IDegSimple; +1.1 kg, IDegStep-wise), and there were no clinically relevant differences in adverse event profiles. IDeg was effective and well tolerated using either the Simple or Step-wise titration

  4. Simple models of equilibrium and nonequilibrium phenomena

    International Nuclear Information System (INIS)

    Lebowitz, J.L.

    1987-01-01

    This volume consists of two chapters of particular interest to researchers in the field of statistical mechanics. The first chapter is based on the premise that the best way to understand the qualitative properties that characterize many-body (i.e. macroscopic) systems is to study 'a number of the more significant model systems which, at least in principle are susceptible of complete analysis'. The second chapter deals exclusively with nonequilibrium phenomena. It reviews the theory of fluctuations in open systems to which they have made important contributions. Simple but interesting model examples are emphasised

  5. Fundamental Safety Principles

    International Nuclear Information System (INIS)

    Abdelmalik, W.E.Y.

    2011-01-01

    This work presents a summary of the IAEA Safety Standards Series publication No. SF-1, entitled Fundamental Safety Principles, published in 2006. This publication states the fundamental safety objective and ten associated safety principles, and briefly describes their intent and purposes. Safety measures and security measures have in common the aim of protecting human life and health and the environment. The safety principles are: 1) Responsibility for safety, 2) Role of the government, 3) Leadership and management for safety, 4) Justification of facilities and activities, 5) Optimization of protection, 6) Limitation of risks to individuals, 7) Protection of present and future generations, 8) Prevention of accidents, 9) Emergency preparedness and response and 10) Protective action to reduce existing or unregulated radiation risks. The safety principles concern the security of facilities and activities to the extent that they apply to measures that contribute to both safety and security. Safety measures and security measures must be designed and implemented in an integrated manner, so that security measures do not compromise safety and safety measures do not compromise security.

  6. Principles of Protocol Design

    DEFF Research Database (Denmark)

    Sharp, Robin

    This is a new and updated edition of a book first published in 1994. The book introduces the reader to the principles used in the construction of a large range of modern data communication protocols, as used in distributed computer systems of all kinds. The approach taken is rather a formal one...

  7. Principles of Bridge Reliability

    DEFF Research Database (Denmark)

    Thoft-Christensen, Palle; Nowak, Andrzej S.

    The paper gives a brief introduction to the basic principles of structural reliability theory and its application to bridge engineering. Fundamental concepts like failure probability and reliability index are introduced. Ultimate as well as serviceability limit states for bridges are formulated...

  8. Pattern recognition principles

    Science.gov (United States)

    Tou, J. T.; Gonzalez, R. C.

    1974-01-01

    The present work gives an account of basic principles and available techniques for the analysis and design of pattern processing and recognition systems. Areas covered include decision functions, pattern classification by distance functions, pattern classification by likelihood functions, the perceptron and the potential function approaches to trainable pattern classifiers, statistical approach to trainable classifiers, pattern preprocessing and feature selection, and syntactic pattern recognition.

  9. The Handicap Principle

    Indian Academy of Sciences (India)

    IAS Admin

    The Handicap Principle is an idea proposed by the husband-and-wife scientist team of Amotz and Avishag Zahavi from Israel in the 1970s. It is among the most innovative ideas of the 20th century in the field of behavioural biology and attempts to explain several long-standing puzzles that have baffled naturalists since the ...

  10. Schrodinger's Uncertainty Principle?

    Indian Academy of Sciences (India)

    ...correlation between x and p. The virtue of Schrödinger's version (5) is that it accounts for this correlation. In special cases like the free particle and the harmonic oscillator, the 'Schrödinger uncertainty product' even remains constant with time, whereas Heisenberg's does not. The glory of giving the uncertainty principle to ...

  11. The traveltime holographic principle

    KAUST Repository

    Huang, Y.

    2014-11-06

    Fermat\\'s interferometric principle is used to compute interior transmission traveltimes τpq from exterior transmission traveltimes τsp and τsq. Here, the exterior traveltimes are computed for sources s on a boundary B that encloses a volume V of interior points p and q. Once the exterior traveltimes are computed, no further ray tracing is needed to calculate the interior times τpq. Therefore this interferometric approach can be more efficient than explicitly computing interior traveltimes τpq by ray tracing. Moreover, the memory requirement of the traveltimes is reduced by one dimension, because the boundary B is of one fewer dimension than the volume V. An application of this approach is demonstrated with interbed multiple (IM) elimination. Here, the IMs in the observed data are predicted from the migration image and are subsequently removed by adaptive subtraction. This prediction is enabled by the knowledge of interior transmission traveltimes τpq computed according to Fermat\\'s interferometric principle. We denote this principle as the ‘traveltime holographic principle’, by analogy with the holographic principle in cosmology where information in a volume is encoded on the region\\'s boundary.

  12. Cooperatives, Principles and Practices.

    Science.gov (United States)

    Schaars, Marvin A.

    A teaching aid and information source on activities, principles, and practices of cooperatives is presented. The following topics are included: (1) Basic Interests of People, (2) Legal Organization of Business in the United States, (3) What Is a Cooperative? (4) Procedure for Organizing Cooperatives, (5) How Cooperatives Are Run and Managed, (6)…

  13. Principles of economics textbooks

    DEFF Research Database (Denmark)

    Madsen, Poul Thøis

    2012-01-01

    Has the financial crisis already changed US principles of economics textbooks? Rather little has changed in individual textbooks, but taken as a whole ten of the best-selling textbooks suggest rather encompassing changes of core curriculum. A critical analysis of these changes shows how individual...

  14. Principles of Proper Validation

    DEFF Research Database (Denmark)

    Esbensen, Kim; Geladi, Paul

    2010-01-01

    Validation in chemometrics is presented using the exemplar context of multivariate calibration/prediction. A phenomenological analysis of common validation practices in data analysis and chemometrics leads to formulation of a set of generic Principles of Proper Validation (PPV), which is based...

  15. Schrodinger's Uncertainty Principle?

    Indian Academy of Sciences (India)

    Schrödinger's Uncertainty Principle? - Lilies can be Painted. Rajaram Nityananda. General Article, Resonance – Journal of Science Education, Volume 4, Issue 2, February 1999, pp. 24-26.

  16. Euthanasia: Buddhist principles.

    Science.gov (United States)

    Barnes, M

    1996-04-01

    Religions provide various forms of motivation for moral action. This chapter takes Buddhism as an example from within the Indian 'family' of religions and seeks to identify the doctrinal and cultural principles on which ethical decisions are taken. Although beginning from very different religious premises, it is argued that the conclusions to which Buddhism tends are broadly similar to those found within mainstream Christianity.

  17. NeatSort - A practical adaptive algorithm

    OpenAIRE

    La Rocca, Marcello; Cantone, Domenico

    2014-01-01

    We present a new adaptive sorting algorithm which is optimal for most disorder metrics and, more importantly, has a simple and quick implementation. On input $X$, our algorithm has a theoretical $\Omega(|X|)$ lower bound and an $\mathcal{O}(|X|\log|X|)$ upper bound, exhibiting remarkable adaptive properties which make it run closer to its lower bound as disorder (computed on different metrics) diminishes. From a practical point of view, \textit{NeatSort} has proven itself competitive with (and of...
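
    The abstract does not reproduce NeatSort's pseudocode, so the sketch below is a stand-in rather than the authors' algorithm: a natural merge sort, a classic adaptive sorter with the same O(|X|) best case on already-sorted input and O(|X| log |X|) worst case, illustrating how running time shrinks as disorder diminishes.

        # Illustrative stand-in only (not NeatSort itself): natural merge
        # sort runs in O(n) on sorted input (one run, nothing to merge)
        # and O(n log n) in the worst case.

        def runs(xs):
            """Split xs into maximal ascending runs."""
            out, start = [], 0
            for i in range(1, len(xs) + 1):
                if i == len(xs) or xs[i] < xs[i - 1]:
                    out.append(xs[start:i])
                    start = i
            return out

        def merge(a, b):
            out, i, j = [], 0, 0
            while i < len(a) and j < len(b):
                if a[i] <= b[j]:
                    out.append(a[i]); i += 1
                else:
                    out.append(b[j]); j += 1
            return out + a[i:] + b[j:]

        def natural_merge_sort(xs):
            rs = runs(list(xs))
            while len(rs) > 1:                       # merge runs pairwise
                rs = [merge(rs[k], rs[k + 1]) if k + 1 < len(rs) else rs[k]
                      for k in range(0, len(rs), 2)]
            return rs[0] if rs else []

        print(natural_merge_sort([3, 1, 4, 1, 5, 9, 2, 6]))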

  18. A simple technique for estimating EUVE sky survey exposure times

    Science.gov (United States)

    Carlisle, G. L.

    1986-01-01

    A simple way to estimate accumulated exposure time over the celestial sphere for a scanning telescope in earth orbit is described. Primary constraints on observation time, such as earth blockage, solar occultation, and passage through the South Atlantic Anomaly, are modeled using relatively straightforward, mainly closed-form geometrical solutions. The resulting algorithm is implemented on a desktop microcomputer. Though not rigorously precise, the algorithm is sufficient for conducting preliminary mission design studies for the Extreme Ultraviolet Explorer (EUVE).

  19. Simple simulation schemes for CIR and Wishart processes

    DEFF Research Database (Denmark)

    Pisani, Camilla

    2013-01-01

    We develop some simple simulation algorithms for CIR and Wishart processes. The main idea is the splitting of their generator into the sum of the square of an Ornstein-Uhlenbeck matrix process and a deterministic process. Joint work with Paolo Baldi, Tor Vergata University, Rome.
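
    The abstract only names the splitting idea. One exact special case is worth recording: if Y solves dY = −(b/2)Y dt + (σ/2) dW, then X = Y² is a CIR process with drift (σ²/4 − bX). The sketch below simulates that case only; the deterministic correction needed for a general CIR drift, and the Wishart analogue, are omitted, and all parameter values are assumed.

        import numpy as np

        # Sketch under an assumption: for the special CIR case
        #   dX = (sigma**2/4 - b*X) dt + sigma*sqrt(X) dW,
        # X is exactly the square of an Ornstein-Uhlenbeck process
        #   dY = -(b/2) Y dt + (sigma/2) dW,
        # so simulating Y exactly (Gaussian transitions) and squaring gives X.

        rng = np.random.default_rng(0)
        b, sigma = 1.5, 0.4
        dt, n_steps = 1e-2, 1000

        decay = np.exp(-0.5 * b * dt)                     # OU mean-reversion factor
        sd = (sigma / 2.0) * np.sqrt((1 - decay**2) / b)  # stdev of exact OU step

        y = np.empty(n_steps + 1)
        y[0] = 0.2
        for k in range(n_steps):
            y[k + 1] = decay * y[k] + sd * rng.standard_normal()

        x = y**2   # a CIR path with a = sigma**2/4; nonnegative by construction
        print(x[:5])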

  20. Online learning algorithm for ensemble of decision rules

    KAUST Repository

    Chikalov, Igor

    2011-01-01

    We describe an online learning algorithm that builds a system of decision rules for a classification problem. Rules are constructed according to the minimum description length principle by a greedy algorithm or using the dynamic programming approach. © 2011 Springer-Verlag.
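
    As a toy illustration of greedy rule construction (not the paper's MDL-scored procedure, whose details the abstract does not give), the sketch below grows a single rule by repeatedly adding the attribute-value condition whose covered examples are purest:

        # Toy sketch, not the paper's algorithm: greedily grow one decision
        # rule (a conjunction of attribute = value conditions) for a target
        # class by always adding the condition with the purest covered subset.

        def grow_rule(rows, labels, target):
            rule, covered = {}, list(range(len(rows)))
            while any(labels[i] != target for i in covered):
                best = None
                for a in range(len(rows[0])):
                    if a in rule:
                        continue
                    for v in {rows[i][a] for i in covered}:
                        sub = [i for i in covered if rows[i][a] == v]
                        purity = sum(labels[i] == target for i in sub) / len(sub)
                        if best is None or purity > best[0]:
                            best = (purity, a, v, sub)
                if best is None or len(best[3]) == len(covered):
                    break                      # no condition makes progress
                _, a, v, covered = best
                rule[a] = v
            return rule

        rows = [("sun", "hot"), ("sun", "cold"), ("rain", "hot"), ("rain", "cold")]
        labels = ["yes", "yes", "yes", "no"]
        print(grow_rule(rows, labels, "yes"))   # e.g. {0: 'sun'}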

  1. Bio Inspired Algorithms in Single and Multiobjective Reliability Optimization

    DEFF Research Database (Denmark)

    Madsen, Henrik; Albeanu, Grigore; Burtschy, Bernard

    2014-01-01

    Non-traditional search and optimization methods based on natural phenomena have been proposed recently in order to avoid local or unstable behavior when run towards an optimum state. This paper describes the principles of bio inspired algorithms and reports on Migration Algorithms and Bees...

  2. Clustered K nearest neighbor algorithm for daily inflow forecasting

    NARCIS (Netherlands)

    Akbari, M.; Van Overloop, P.J.A.T.M.; Afshar, A.

    2010-01-01

    Instance based learning (IBL) algorithms are a common choice among data driven algorithms for inflow forecasting. They are based on the similarity principle and prediction is made by the finite number of similar neighbors. In this sense, the similarity of a query instance is estimated according to
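
    A minimal sketch of the prediction step the abstract describes, with assumed details (Euclidean distance over lagged-inflow feature vectors, unweighted mean of the k nearest neighbours):

        import numpy as np

        # Minimal IBL sketch (assumed details): forecast tomorrow's inflow as
        # the average outcome of the k historical days most similar to the
        # query, with similarity measured by Euclidean distance over lags.

        def knn_forecast(history_features, history_targets, query, k=3):
            d = np.linalg.norm(history_features - query, axis=1)  # distances
            nearest = np.argsort(d)[:k]                           # k most similar
            return history_targets[nearest].mean()

        features = np.array([[10.0, 12.0], [11.0, 12.5], [30.0, 28.0], [29.5, 31.0]])
        targets = np.array([13.0, 13.5, 27.0, 33.0])   # next-day inflows
        print(knn_forecast(features, targets, np.array([10.5, 12.2]), k=2))  # ~13.25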

  3. Theory of simple liquids

    CERN Document Server

    Hansen, Jean-Pierre

    1986-01-01

    This book gives a comprehensive and up-to-date treatment of the theory of "simple" liquids. The new second edition has been rearranged and considerably expanded to give a balanced account both of basic theory and of the advances of the past decade. It presents the main ideas of modern liquid state theory in a way that is both pedagogical and self-contained. The book should be accessible to graduate students and research workers, both experimentalists and theorists, who have a good background in elementary mechanics. Key Features: Compares theoretical deductions with experimental r

  4. Beyond Simple Headquarters Configurations

    DEFF Research Database (Denmark)

    Dellestrand, Henrik; Kappen, Philip; Nell, Phillip Christopher

    Cross-divisional importance, i.e., an innovation that is important for the firm beyond the divisional boundaries, drives dual headquarters involvement in innovation development. Contrary to expectations, on average, a non-significant effect of cross-divisional embeddedness on dual headquarters involvement is found. Yet, both cross-divisional importance and embeddedness effects are contingent on the overall complexity of the innovation project as signified by the size of the development network. The results lend support for the notion that parenting in complex structures entails complex headquarters structures and that we need to go beyond simple headquarters configurations.

  5. Dimensional analysis made simple

    International Nuclear Information System (INIS)

    Lira, Ignacio

    2013-01-01

    An inductive strategy is proposed for teaching dimensional analysis to second- or third-year students of physics, chemistry, or engineering. In this strategy, Buckingham's theorem is seen as a consequence and not as the starting point. In order to concentrate on the basics, the mathematics is kept as elementary as possible. Simple examples are suggested for classroom demonstrations of the power of the technique and others are put forward for homework or experimentation, but instructors are encouraged to produce examples of their own. (paper)
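
    A classroom-sized specimen of the kind of example the abstract recommends (standard textbook material, not taken from the paper) is the period of a simple pendulum:

        % Standard worked example (not from the paper): the period T of a
        % simple pendulum, assumed to depend on length L, mass m and gravity g.
        \[
          T = k\,L^{a} m^{b} g^{c}
          \quad\Longrightarrow\quad
          \mathrm{s}^{1} = \mathrm{m}^{\,a+c}\;\mathrm{kg}^{\,b}\;\mathrm{s}^{-2c},
        \]
        % so b = 0, a + c = 0 and -2c = 1, giving a = 1/2, c = -1/2 and
        \[
          T = k\,\sqrt{L/g},
        \]
        % where the dimensionless constant k (in fact 2\pi) cannot come from
        % dimensional analysis alone; it must come from experiment or dynamics.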

  6. Data processing made simple

    CERN Document Server

    Wooldridge, Susan

    2013-01-01

    Data Processing: Made Simple, Second Edition presents discussions of a number of trends and developments in the world of commercial data processing. The book covers the rapid growth of micro- and mini-computers for both home and office use; word processing and the 'automated office'; the advent of distributed data processing; and the continued growth of database-oriented systems. The text also discusses modern digital computers; fundamental computer concepts; information and data processing requirements of commercial organizations; and the historical perspective of the computer industry. The

  7. Simple and surgical exodontia.

    Science.gov (United States)

    DeBowes, Linda J

    2005-07-01

    Preemptive and postoperative pain management is part of patient care when performing extractions. Simple extractions can become complicated when tooth roots are fractured. Adequate lighting, magnification, and surgical techniques are important when performing surgical (complicated) extractions. Radiographs should be taken before extractions and also during the procedure to assist with difficult extractions. Adequate flap design and bone removal are necessary when performing surgical extractions. Complications, including ocular trauma, jaw fracture, and soft tissue trauma, are avoided or minimized with proper patient selection and technique.

  8. Applied mathematics made simple

    CERN Document Server

    Murphy, Patrick

    1982-01-01

    Applied Mathematics: Made Simple provides an elementary study of the three main branches of classical applied mathematics: statics, hydrostatics, and dynamics. The book begins with discussion of the concepts of mechanics, parallel forces and rigid bodies, kinematics, motion with uniform acceleration in a straight line, and Newton's law of motion. Separate chapters cover vector algebra and coplanar motion, relative motion, projectiles, friction, and rigid bodies in equilibrium under the action of coplanar forces. The final chapters deal with machines and hydrostatics. The standard and conte

  9. Extremum principles for irreversible processes

    International Nuclear Information System (INIS)

    Hillert, M.; Agren, J.

    2006-01-01

    Hamilton's extremum principle is a powerful mathematical tool in classical mechanics. Onsager's extremum principle may play a similar role in irreversible thermodynamics and may also become a valuable tool. His principle may formally be regarded as a principle of maximum rate of entropy production but does not have a clear physical interpretation. Prigogine's principle of minimum rate of entropy production has a physical interpretation when it applies, but is not strictly valid except for a very special case

  10. A Multipath Mitigation Algorithm for vehicle with Smart Antenna

    Science.gov (United States)

    Ji, Jing; Zhang, Jiantong; Chen, Wei; Su, Deliang

    2018-01-01

    In this paper, an adaptive antenna array method is used to eliminate multipath interference in the GPS L1 frequency environment. The power inversion (PI) algorithm is combined with the minimum variance distortionless response (MVDR) algorithm; the anti-multipath antenna array is simulated and verified, the program is implemented in an FPGA, and actual tests are carried out on a CBD road. Theoretical analysis of the LCMV criterion and of the principles and characteristics of the PI and MVDR algorithms, together with the tests, verifies that the anti-multipath-interference performance of the MVDR algorithm is better than that of the PI algorithm. The work offers some guidance and reference for the engineering practice of satellite navigation in the vehicle field.
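
    The MVDR weights themselves have a standard closed form, w = R⁻¹a / (aᴴR⁻¹a), which is textbook material rather than anything specific to this paper; the toy below (array size, directions and covariance all assumed) shows how the distortionless constraint toward the satellite coexists with suppression of a multipath arrival:

        import numpy as np

        # Textbook MVDR beamformer (not specific to this paper): for array
        # covariance R and steering vector a toward the satellite, the weights
        #   w = R^{-1} a / (a^H R^{-1} a)
        # minimize output power subject to the distortionless response
        # w^H a = 1, which suppresses multipath from other directions.

        def mvdr_weights(R, a):
            Ri_a = np.linalg.solve(R, a)          # R^{-1} a, no explicit inverse
            return Ri_a / (a.conj() @ Ri_a)

        # 4-element uniform linear array, half-wavelength spacing (assumed)
        def steering(angle_rad, n=4):
            k = np.arange(n)
            return np.exp(1j * np.pi * k * np.sin(angle_rad))

        a_sig = steering(np.deg2rad(10.0))                 # desired direction
        a_mp = steering(np.deg2rad(-40.0))                 # multipath direction
        R = np.outer(a_mp, a_mp.conj()) * 10.0 + np.eye(4) # interference + noise

        w = mvdr_weights(R, a_sig)
        print(abs(w.conj() @ a_sig))   # ~1: distortionless toward the satellite
        print(abs(w.conj() @ a_mp))    # << 1: multipath suppressed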

  11. General Quantum Interference Principle and Duality Computer

    International Nuclear Information System (INIS)

    Long Guilu

    2006-01-01

    In this article, we propose a general principle of quantum interference for quantum systems, and based on this we propose a new type of computing machine, the duality computer, that may outperform in principle both the classical computer and the quantum computer. According to the general principle of quantum interference, the very essence of quantum interference is the interference of the sub-waves of the quantum system itself. A quantum system considered here can be any quantum system: a single microscopic particle, a composite quantum system such as an atom or a molecule, or a loose collection of a few quantum objects such as two independent photons. In the duality computer, the wave of the duality computer is split into several sub-waves and they pass through different routes, where different computing gate operations are performed. These sub-waves are then re-combined to interfere to give the computational results. The quantum computer, however, uses only the particle nature of the quantum object. In a duality computer, it may be possible to find a marked item from an unsorted database using only a single query, and all NP-complete problems may have polynomial algorithms. Two proof-of-principle designs of the duality computer are presented: the giant molecule scheme and the nonlinear quantum optics scheme. We also propose a thought experiment to check a related fundamental issue, the measurement efficiency of a partial wave function.

  12. Probabilistic simple sticker systems

    Science.gov (United States)

    Selvarajoo, Mathuri; Heng, Fong Wan; Sarmin, Nor Haniza; Turaev, Sherzod

    2017-04-01

    A model for DNA computing using the recombination behavior of DNA molecules, known as a sticker system, was introduced by L. Kari, G. Paun, G. Rozenberg, A. Salomaa, and S. Yu in the paper entitled DNA computing, sticker systems and universality, published in Acta Informatica, vol. 35, pp. 401-420, 1998. A sticker system uses the Watson-Crick complementary feature of DNA molecules: starting from incomplete double-stranded sequences, sticking operations are applied iteratively until a complete double-stranded sequence is obtained. It is known that sticker systems with finite sets of axioms and sticker rules generate only regular languages. Hence, different types of restrictions have been considered to increase the computational power of sticker systems. Recently, a variant of restricted sticker systems, called probabilistic sticker systems, has been introduced [4]. In this variant, probabilities are initially associated with the axioms, and the probability of a generated string is computed by multiplying the probabilities of all occurrences of the initial strings in the computation of the string. Strings for the language are selected according to some probabilistic requirements. In this paper, we study fundamental properties of probabilistic simple sticker systems. We prove that the probabilistic enhancement increases the computational power of simple sticker systems.

  13. MANAGER PRINCIPLES AS BASIS OF MANAGEMENT STYLE TRANSFORMATION

    Directory of Open Access Journals (Sweden)

    R. A. Kopytov

    2011-01-01

    Full Text Available The paper considers an approach based on non-conventional mechanisms of management style formation. A preset level of sustainable management is maintained by a self-organized environment created in the process of transforming the management style into efficient management principles. The efficiency of these principles is checked within an adaptive algorithm, developed by combining evaluative tools with a base of operational proofs. The capability of the operating algorithm is tested within the framework of an operating enterprise. The results obtained testify to the formation of a sustainable business.

  14. PERFORMANCE COMPARISON OF THE BIC, CUBIC AND HTCP ALGORITHMS ON DUMBBELL AND SIMPLE NETWORK TOPOLOGIES USING NS2

    Directory of Open Access Journals (Sweden)

    Rian Fahrizal

    2015-04-01

    Full Text Available High-speed computer networks with long delays are a common form of network for the future. In such networks, the commonly used TCP algorithms have difficulty in sending data. Several algorithms have been used for this setting, among them BIC, CUBIC and HTCP. These algorithms need to be tested to determine their performance when applied to two topologies, dumbbell and simple network. The test results show that HTCP performs best, having the smallest measured value.

  15. Protein Structure Prediction with Evolutionary Algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Hart, W.E.; Krasnogor, N.; Pelta, D.A.; Smith, J.

    1999-02-08

    Evolutionary algorithms have been successfully applied to a variety of molecular structure prediction problems. In this paper we reconsider the design of genetic algorithms that have been applied to a simple protein structure prediction problem. Our analysis considers the impact of several algorithmic factors for this problem: the conformational representation, the energy formulation, and the way in which infeasible conformations are penalized. Further, we empirically evaluate the impact of these factors on a small set of polymer sequences. Our analysis leads to specific recommendations for both GAs and other heuristic methods for solving PSP on the HP model.
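
    To make the design factors concrete, here is a toy GA on the 2-D HP lattice model; it illustrates the design space, it is not the paper's algorithm. The representation is a string of absolute moves, the energy counts non-bonded H-H contacts, and infeasible (self-colliding) conformations are handled by a penalty term; the sequence, penalty weight and GA settings are all assumed:

        import random

        # Toy sketch of the design choices discussed in the abstract, not
        # the paper's GA: 2-D HP lattice model, absolute-move encoding,
        # self-collisions penalized rather than repaired.

        MOVES = {"N": (0, 1), "S": (0, -1), "E": (1, 0), "W": (-1, 0)}
        SEQ = "HPHPPHHPHH"                       # toy H/P sequence (assumed)

        def positions(moves):
            pos = [(0, 0)]
            for m in moves:
                dx, dy = MOVES[m]
                pos.append((pos[-1][0] + dx, pos[-1][1] + dy))
            return pos

        def fitness(moves, penalty=4.0):
            pos = positions(moves)
            clashes = len(pos) - len(set(pos))   # self-collisions to penalize
            contacts = 0
            for i in range(len(SEQ)):
                for j in range(i + 2, len(SEQ)):            # non-bonded pairs
                    if SEQ[i] == SEQ[j] == "H":
                        dx = abs(pos[i][0] - pos[j][0])
                        dy = abs(pos[i][1] - pos[j][1])
                        if dx + dy == 1:
                            contacts += 1
            return contacts - penalty * clashes  # maximize

        def mutate(moves):
            i = random.randrange(len(moves))
            return moves[:i] + random.choice("NSEW") + moves[i + 1:]

        random.seed(1)
        pop = ["".join(random.choice("NSEW") for _ in range(len(SEQ) - 1))
               for _ in range(50)]
        for _ in range(200):                     # (mu + lambda)-style loop
            pop.sort(key=fitness, reverse=True)
            pop = pop[:25] + [mutate(random.choice(pop[:25])) for _ in range(25)]

        best = max(pop, key=fitness)
        print(best, fitness(best))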

  16. Space mapping optimization algorithms for engineering design

    DEFF Research Database (Denmark)

    Koziel, Slawomir; Bandler, John W.; Madsen, Kaj

    2006-01-01

    A simple, efficient optimization algorithm based on space mapping (SM) is presented. It utilizes input SM to reduce the misalignment between the coarse and fine models of the optimized object over a region of interest, and output space mapping (OSM) to ensure matching of response and first-order derivatives between the mapped coarse model and the fine model at the current iteration point. We also consider an enhanced version in which the input SM coefficients are frequency dependent. The performance of our new algorithms is comparable with the recently published SMIS algorithm when applied...
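
    A one-dimensional toy of input space mapping (not the authors' algorithm, and without the OSM and frequency-dependent refinements): the fine model is treated as expensive, the coarse model as cheap, and each iteration extracts a shift aligning the coarse model with the fine-model values seen so far, then optimizes the mapped coarse model. Both model functions are assumed stand-ins.

        import numpy as np

        # 1-D toy of input space mapping, not the paper's algorithm: the
        # fine model f is "expensive", the coarse model c is cheap. Each
        # iteration extracts a shift b aligning c with the f values seen so
        # far, then the next iterate minimizes the mapped coarse model.

        def f(x):  # "fine" model, optimum at x = 2.0 (assumed stand-in)
            return (x - 2.0) ** 2 + 0.5

        def c(x):  # "coarse" model, misaligned optimum at x = 2.5
            return (x - 2.5) ** 2 + 0.5

        grid = np.linspace(-5, 5, 20001)
        x, history = 0.0, []
        for it in range(6):
            history.append((x, f(x)))                  # one fine evaluation
            xs = np.array([h[0] for h in history])
            fs = np.array([h[1] for h in history])
            # parameter extraction: shift b minimizing the misalignment
            errs = [np.sum((c(xs + b) - fs) ** 2) for b in grid]
            b = grid[int(np.argmin(errs))]
            x = grid[int(np.argmin(c(grid + b)))]      # optimize mapped coarse
            print(f"iter {it}: b = {b:+.3f}, next x = {x:.3f}")
        # converges to x ~ 2.0, the fine optimum, with few fine evaluations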

  17. Torque Optimization Algorithm for SRM Drives Using a Robust Predictive Strategy

    DEFF Research Database (Denmark)

    Argeseanu, Alin; Ritchie, Ewen; Leban, Krisztina Monika

    2010-01-01

    This paper presents a new torque optimization algorithm to maximize the torque generated by an SRM drive. The new algorithm uses a predictive strategy. The behaviour of the SRM demands a sequential algorithm. To preserve the advantages of SRM drives (simple and rugged topology) the new algorithm...

  18. A Simple Inquiry-Based Lab for Teaching Osmosis

    Science.gov (United States)

    Taylor, John R.

    2014-01-01

    This simple inquiry-based lab was designed to teach the principle of osmosis while also providing an experience for students to use the skills and practices commonly found in science. Students first design their own experiment using very basic equipment and supplies, which generally results in mixed, but mostly poor, outcomes. Classroom "talk…

  19. Audiovisual Fundamentals; Basic Equipment Operation and Simple Materials Production.

    Science.gov (United States)

    Bullard, John R.; Mether, Calvin E.

    A guide illustrated with simple sketches explains the functions and step-by-step uses of audiovisual (AV) equipment. Principles of projection, audio, AV equipment, lettering, limited-quantity and quantity duplication, and materials preservation are outlined. Apparatus discussed include overhead, opaque, slide-filmstrip, and multiple-loading slide…

  20. Flutter signal extracting technique based on FOG and self-adaptive sparse representation algorithm

    Science.gov (United States)

    Lei, Jian; Meng, Xiangtao; Xiang, Zheng

    2016-10-01

    Because of the various moving parts inside it, a spacecraft running in orbit undergoes minor angular vibration of its structure, which blurs the images formed by the space camera. Image compensation techniques are therefore required to eliminate or alleviate the effect of this movement on image formation, and precise measurement of the flutter angle is necessary. Owing to advantages such as high sensitivity, broad bandwidth, simple structure and the absence of internal mechanical moving parts, a FOG (fiber optical gyro) is adopted in this study to measure the minor angular vibration; the movement that causes image degradation is then obtained by calculation. The movement-information extracting algorithm based on self-adaptive sparse representation uses an arctangent function approximating the L0 norm to construct an unconstrained sparse reconstruction model for the noisy signal, and solves the model with a method based on the steepest descent and BFGS algorithms to estimate the sparse signal. Then, taking advantage of the principle that random noise cannot be represented by a linear combination of dictionary elements, the useful signal and the random noise are separated effectively. Because the main interference of minor angular vibration with the image formation of a space camera is random noise, the sparse representation algorithm can extract the useful information to a large extent and serves as a suitable pre-processing step for image restoration. The self-adaptive sparse representation algorithm presented in this paper is used to process the measured minor-angular-vibration signal of the FOG used by a certain spacecraft. Component analysis of the processing results shows that the algorithm can extract the micro angular vibration signal of the FOG precisely and effectively, achieving a precision of 0.1".
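
    The core of the algorithm can be sketched compactly; the version below is an interpretation under stated assumptions (a random dictionary, fixed-step gradient descent in place of the paper's steepest-descent/BFGS combination, and made-up problem sizes):

        import numpy as np

        # Sketch of the core idea only: approximate the L0 norm by a smooth
        # arctangent penalty and minimize
        #   J(x) = 0.5*||y - D@x||**2 + lam * sum(arctan(|x_i| / s))
        # by fixed-step gradient descent. (The paper couples steepest
        # descent with BFGS; that refinement is omitted here.)

        rng = np.random.default_rng(3)
        n, m = 60, 120
        D = rng.standard_normal((n, m)) / np.sqrt(n)    # overcomplete dictionary
        x_true = np.zeros(m)
        x_true[[7, 42, 90]] = [1.5, -2.0, 1.0]          # sparse "useful signal"
        y = D @ x_true + 0.02 * rng.standard_normal(n)  # noisy measurement

        lam, s, step = 0.05, 0.1, 0.1
        x = np.zeros(m)
        for _ in range(4000):
            grad_fit = D.T @ (D @ x - y)                              # data fit
            grad_pen = lam * np.sign(x) / (s * (1.0 + (x / s) ** 2))  # penalty
            x -= step * (grad_fit + grad_pen)

        top = np.sort(np.argsort(-np.abs(x))[:3])
        print(top)   # largest coefficients; typically the true support [7 42 90]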

  1. Principles of geodynamics

    CERN Document Server

    Scheidegger, Adrian E

    1982-01-01

    Geodynamics is commonly thought to be one of the subjects which provide the basis for understanding the origin of the visible surface features of the Earth: the latter are usually assumed as having been built up by geodynamic forces originating inside the Earth ("endogenetic" processes) and then as having been degraded by geomorphological agents originating in the atmosphere and ocean ("exogenetic" agents). The modern view holds that the sequence of events is not as neat as it was once thought to be, and that, in effect, both geodynamic and geomorphological processes act simultaneously ("Principle of Antagonism"); however, the division of theoretical geology into the principles of geodynamics and those of theoretical geomorphology seems to be useful for didactic purposes. It has therefore been maintained in the present writer's works. This present treatise on geodynamics is the first part of the author's treatment of theoretical geology, the treatise on Theoretical Geomorphology (also published by the Sprin...

  2. [Ethical principles in electroconvulsive therapy].

    Science.gov (United States)

    Richa, S; De Carvalho, W

    2016-12-01

    Electroconvulsive therapy (ECT) is a therapeutic technique invented in 1935 but really developed after World War II, then spreading widely until the mid-1960s. The origins of this technique, and some forms of stigma including films, have contributed widely to making it suspect from a moral point of view. The ethical principles that support the establishment of a treatment by ECT are those relating to any action in psychiatry and are based, on the one hand, on the founding principles of bioethics: autonomy, beneficence, non-maleficence, and justice, and, on the other hand, on information about the technique and consent to this type of care. Copyright © 2016 L’Encéphale, Paris. Published by Elsevier Masson SAS. All rights reserved.

  3. Principles of Stellar Interferometry

    CERN Document Server

    Glindemann, Andreas

    2011-01-01

    Over the last decade, stellar interferometry has developed from a specialist tool to a mainstream observing technique, attracting scientists whose research benefits from milliarcsecond angular resolution. Stellar interferometry has become part of the astronomer’s toolbox, complementing single-telescope observations by providing unique capabilities that will advance astronomical research. This carefully written book is intended to provide a solid understanding of the principles of stellar interferometry to students starting an astronomical research project in this field or developing instruments, and to astronomers using interferometry but who are not interferometrists per se. Illustrated by excellent drawings and calculated graphs, the imaging process in stellar interferometers is explained starting from first principles on light propagation and diffraction; wave propagation through turbulence is described in detail using Kolmogorov statistics; the impact of turbulence on the imaging process is discussed both f...

  4. Principles of mobile communication

    CERN Document Server

    Stüber, Gordon L

    2017-01-01

    This mathematically rigorous overview of physical layer wireless communications is now in a 4th, fully revised and updated edition. The new edition features new content on 4G cellular systems, 5G cellular outlook, bandpass signals and systems, and polarization, among many other topics, in addition to a new chapters on channel assignment techniques. Along with coverage of fundamentals and basic principles sufficient for novice students, the volume includes finer details that satisfy the requirements of graduate students aiming to conduct in-depth research. The book begins with a survey of the field, introducing issues relevant to wireless communications. The book moves on to cover relevant discrete subjects, from radio propagation, to error probability performance, and cellular radio resource management. An appendix provides a tutorial on probability and random processes. The content stresses core principles that are applicable to a broad range of wireless standards. New examples are provided throughout the bo...

  5. Principles of harmonic analysis

    CERN Document Server

    Deitmar, Anton

    2014-01-01

    This book offers a complete and streamlined treatment of the central principles of abelian harmonic analysis: Pontryagin duality, the Plancherel theorem and the Poisson summation formula, as well as their respective generalizations to non-abelian groups, including the Selberg trace formula. The principles are then applied to spectral analysis of Heisenberg manifolds and Riemann surfaces. This new edition contains a new chapter on p-adic and adelic groups, as well as a complementary section on direct and projective limits. Many of the supporting proofs have been revised and refined. The book is an excellent resource for graduate students who wish to learn and understand harmonic analysis and for researchers seeking to apply it.

  6. Principles of systems science

    CERN Document Server

    Mobus, George E

    2015-01-01

    This pioneering text provides a comprehensive introduction to systems structure, function, and modeling as applied in all fields of science and engineering. Systems understanding is increasingly recognized as a key to a more holistic education and greater problem solving skills, and is also reflected in the trend toward interdisciplinary approaches to research on complex phenomena. The subject of systems science, as a basis for understanding the components and drivers of phenomena at all scales, should be viewed with the same importance as a traditional liberal arts education. Principles of Systems Science contains many graphs, illustrations, side bars, examples, and problems to enhance understanding. From basic principles of organization, complexity, abstract representations, and behavior (dynamics) to deeper aspects such as the relations between information, knowledge, computation, and system control, to higher order aspects such as auto-organization, emergence and evolution, the book provides an integrated...

  7. Principles of magnetodynamic chemotherapy.

    Science.gov (United States)

    Babincová, M; Leszczynska, D; Sourivong, P; Babinec, P; Leszczynski, J

    2004-01-01

    Basic principles of a novel method of cancer treatment are explained. The method is based on the thermal activation of an inactive prodrug encapsulated in magnetoliposomes via the Néel and Brown effects of inductive heating of subdomain superparamagnetic particles to sufficiently high temperatures. This principle may be combined with targeted drug delivery (using a constant magnetic field) and controlled release (using a high-frequency magnetic field) of an activated drug entrapped in magnetoliposomes. Using this method a drug may be applied very selectively at a particular site of the organism, and the procedure may be repeated several times using, e.g., stealth magnetoliposomes, which circulate in the bloodstream for several days. Moreover, the magnetoliposomes concentrated by an external constant magnetic field in the tumor vasculature may lead to embolic lesions and necrosis of the tumor body, and the heat produced for thermal activation of the drug further enhances the effect of chemotherapy by local hyperthermic treatment of neoplastic cells.

  8. Principles of photonics

    CERN Document Server

    Liu, Jia-Ming

    2016-01-01

    With this self-contained and comprehensive text, students will gain a detailed understanding of the fundamental concepts and major principles of photonics. Assuming only a basic background in optics, readers are guided through key topics such as the nature of optical fields, the properties of optical materials, and the principles of major photonic functions regarding the generation, propagation, coupling, interference, amplification, modulation, and detection of optical waves or signals. Numerous examples and problems are provided throughout to enhance understanding, and a solutions manual containing detailed solutions and explanations is available online for instructors. This is the ideal resource for electrical engineering and physics undergraduates taking introductory, single-semester or single-quarter courses in photonics, providing them with the knowledge and skills needed to progress to more advanced courses on photonic devices, systems and applications.

  9. Principles of Fourier analysis

    CERN Document Server

    Howell, Kenneth B

    2001-01-01

    Fourier analysis is one of the most useful and widely employed sets of tools for the engineer, the scientist, and the applied mathematician. As such, students and practitioners in these disciplines need a practical and mathematically solid introduction to its principles. They need straightforward verifications of its results and formulas, and they need clear indications of the limitations of those results and formulas. Principles of Fourier Analysis furnishes all this and more. It provides a comprehensive overview of the mathematical theory of Fourier analysis, including the development of Fourier series, "classical" Fourier transforms, generalized Fourier transforms and analysis, and the discrete theory. Much of the author's development is strikingly different from typical presentations. His approach to defining the classical Fourier transform results in a much cleaner, more coherent theory that leads naturally to a starting point for the generalized theory. He also introduces a new generalized theory based ...

  10. Principles of mathematical modeling

    CERN Document Server

    Dym, Clive

    2004-01-01

    Science and engineering students depend heavily on concepts of mathematical modeling. In an age where almost everything is done on a computer, author Clive Dym believes that students need to understand and "own" the underlying mathematics that computers are doing on their behalf. His goal for Principles of Mathematical Modeling, Second Edition, is to engage the student reader in developing a foundational understanding of the subject that will serve them well into their careers. The first half of the book begins with a clearly defined set of modeling principles, and then introduces a set of foundational tools including dimensional analysis, scaling techniques, and approximation and validation techniques. The second half demonstrates the latest applications for these tools to a broad variety of subjects, including exponential growth and decay in fields ranging from biology to economics, traffic flow, free and forced vibration of mechanical and other systems, and optimization problems in biology, structures, an...

  11. Principles of Mobile Communication

    CERN Document Server

    Stüber, Gordon L

    2012-01-01

    This mathematically rigorous overview of physical layer wireless communications is now in a third, fully revised and updated edition. Along with coverage of basic principles sufficient for novice students, the volume includes plenty of finer details that will satisfy the requirements of graduate students aiming to research the topic in depth. It also has a role as a handy reference for wireless engineers. The content stresses core principles that are applicable to a broad range of wireless standards. Beginning with a survey of the field that introduces an array of issues relevant to wireless communications and which traces the historical development of today’s accepted wireless standards, the book moves on to cover all the relevant discrete subjects, from radio propagation to error probability performance and cellular radio resource management. A valuable appendix provides a succinct and focused tutorial on probability and random processes, concepts widely used throughout the book. This new edition, revised...

  12. Principles of fluid mechanics

    International Nuclear Information System (INIS)

    Kreider, J.F.

    1985-01-01

    This book is an introduction on fluid mechanics incorporating computer applications. Topics covered are as follows: brief history; what is a fluid; two classes of fluids: liquids and gases; the continuum model of a fluid; methods of analyzing fluid flows; important characteristics of fluids; fundamentals and equations of motion; fluid statics; dimensional analysis and the similarity principle; laminar internal flows; ideal flow; external laminar and channel flows; turbulent flow; compressible flow; fluid flow measurements

  13. Principles of artificial intelligence

    CERN Document Server

    Nilsson, Nils J

    1980-01-01

    A classic introduction to artificial intelligence intended to bridge the gap between theory and practice, Principles of Artificial Intelligence describes fundamental AI ideas that underlie applications such as natural language processing, automatic programming, robotics, machine vision, automatic theorem proving, and intelligent data retrieval. Rather than focusing on the subject matter of the applications, the book is organized around general computational concepts involving the kinds of data structures used, the types of operations performed on the data structures, and the properties of th

  14. Principles of Protocol Design

    DEFF Research Database (Denmark)

    Sharp, Robin

    This is a new and updated edition of a book first published in 1994. The book introduces the reader to the principles used in the construction of a large range of modern data communication protocols, as used in distributed computer systems of all kinds. The approach taken is rather a formal one, primarily based on descriptions of the protocols in the notation of CSP (Communicating Sequential Processes).
  15. Principles of electrical safety

    CERN Document Server

    Sutherland, Peter E

    2015-01-01

    Principles of Electrical Safety discusses current issues in electrical safety, accompanied by a series of practical applications that can be used by practicing professionals, graduate students, and researchers. It provides extensive introductions to important topics in electrical safety, gives a comprehensive overview of inductance, resistance, and capacitance as applied to the human body, and serves as a preparatory guide for today's practicing engineers.

  16. General Principles Governing Liability

    International Nuclear Information System (INIS)

    Reyners, P.

    1998-01-01

    This paper contains a brief review of the basic principles which govern the special regime of liability and compensation for nuclear damage originating on nuclear installations, in particular the strict and exclusive liability of the nuclear operator, the provision of a financial security to cover this liability and the limits applicable both in amount and in time. The paper also reviews the most important international agreements currently in force which constitute the foundation of this special regime. (author)

  17. The Principle of Proportionality

    DEFF Research Database (Denmark)

    Bennedsen, Morten; Meisner Nielsen, Kasper

    2005-01-01

    Recent policy initiatives within the harmonization of European company laws have promoted a so-called "principle of proportionality" through proposals that regulate mechanisms opposing a proportional distribution of ownership and control. We scrutinize the foundation for these initiatives in relation to the process of harmonization of the European capital markets. JEL classifications: G30, G32, G34 and G38. Keywords: Ownership Structure, Dual Class Shares, Pyramids, EU company laws.

  18. Physics Without Physics. The Power of Information-theoretical Principles

    Science.gov (United States)

    D'Ariano, Giacomo Mauro

    2017-01-01

    David Finkelstein was very fond of the new information-theoretic paradigm of physics advocated by John Archibald Wheeler and Richard Feynman. Only recently, however, has the paradigm concretely shown its full power, with the derivation of quantum theory (Chiribella et al., Phys. Rev. A 84:012311, 2011; D'Ariano et al., 2017) and of free quantum field theory (D'Ariano and Perinotti, Phys. Rev. A 90:062106, 2014; Bisio et al., Phys. Rev. A 88:032301, 2013; Bisio et al., Ann. Phys. 354:244, 2015; Bisio et al., Ann. Phys. 368:177, 2016) from informational principles. The paradigm has opened for the first time the possibility of avoiding physical primitives in the axioms of the physical theory, allowing a re-foundation of the whole physics over logically solid grounds. In addition to such methodological value, the new information-theoretic derivation of quantum field theory is particularly interesting for establishing a theoretical framework for quantum gravity, with the idea of obtaining gravity itself as emergent from the quantum information processing, as also suggested by the role played by information in the holographic principle (Susskind, J. Math. Phys. 36:6377, 1995; Bousso, Rev. Mod. Phys. 74:825, 2002). In this paper I review how free quantum field theory is derived without using mechanical primitives, including space-time, special relativity, Hamiltonians, and quantization rules. The theory is simply provided by the simplest quantum algorithm encompassing a countable set of quantum systems whose network of interactions satisfies the three following simple principles: homogeneity, locality, and isotropy. The inherent discrete nature of the informational derivation leads to an extension of quantum field theory in terms of quantum cellular automata and quantum walks. A simple heuristic argument sets the scale to the Planck one, and the currently observed regime where discreteness is not visible is the so-called "relativistic regime" of small wavevectors, which

  19. Principled Missing Data Treatments.

    Science.gov (United States)

    Lang, Kyle M; Little, Todd D

    2018-04-01

    We review a number of issues regarding missing data treatments for intervention and prevention researchers. Many of the common missing data practices in prevention research are still, unfortunately, ill-advised (e.g., use of listwise and pairwise deletion, insufficient use of auxiliary variables). Our goal is to promote better practice in the handling of missing data. We review the current state of missing data methodology and recent missing data reporting in prevention research. We describe antiquated, ad hoc missing data treatments and discuss their limitations. We discuss two modern, principled missing data treatments: multiple imputation and full information maximum likelihood, and we offer practical tips on how to best employ these methods in prevention research. The principled missing data treatments that we discuss are couched in terms of how they improve causal and statistical inference in the prevention sciences. Our recommendations are firmly grounded in missing data theory and well-validated statistical principles for handling the missing data issues that are ubiquitous in biosocial and prevention research. We augment our broad survey of missing data analysis with references to more exhaustive resources.

  20. Principles of medical statistics

    National Research Council Canada - National Science Library

    Feinstein, Alvan R

    2002-01-01

    ... or limited attention. They are then offered a simple, superficial account of the most common doctrines and applications of statistical theory. The "get-it-over-with-quickly" approach has been encouraged and often necessitated by the short time given to statistics in modern biomedical education. The curriculum is supposed to provide fundament...

  1. Principles of managing stands

    Science.gov (United States)

    David A. Marquis; Rodney Jacobs

    1989-01-01

    Forest stands are managed to achieve some combination of desired products or values. These products or values may include income and tangible benefits from timber production or fees for hunting rights and other recreational activities. The values may be intangible, such as the enjoyment of seeing wildlife or flowering plants, or the simple satisfaction of knowing that...

  2. The parallel plate avalanche counter: a simple, rugged, imaging X-ray counter

    International Nuclear Information System (INIS)

    Joensen, K.D.; Budtz-Joergensen, C.; Bahnsen, A.; Madsen, M.M.; Olesen, C.; Schnopper, H.W.

    1995-01-01

    A two-dimensional parallel gap proportional counter has been developed at the Danish Space Research Institute. Imaging over the 120 mm diameter active area is obtained using the positive ion component of the avalanche signals as recorded by a system of wedge- and strip-electrodes. An electronically simple, but very effective background rejection is obtained by using the fast electron component of the avalanche signal. Gas gains up to 8x10^5 have been achieved. An energy resolution of 16% and a sub-millimeter spatial resolution have been measured at 5.9 keV for an operating gas gain of 10^5. In principle, the position coordinates are linear functions of the electronic readouts. The present model, however, exhibits non-linearities, caused by imperfections in the wedge and strip-electrode pattern. These non-linearities are corrected by using a bilinear correction algorithm. We conclude that the rugged construction, the simple electronics, the effectiveness of the background rejection and the actual imaging performance make this a very attractive laboratory detector for low and intermediate count rate imaging applications. (orig.)

  3. Design principle and structure of the ANI data centre

    International Nuclear Information System (INIS)

    Akopov, N.Z.; Arutyunyan, S.Kh.; Chilingaryan, A.A.; Galfayan, S.Kh.; Matevosyan, V.Kh.; Zazyan, M.Z.

    1985-01-01

    The design principles and structure of the applied statistical programs used for processing the data from the ANI experiments are described. Nonparametric algorithms enable the development of highly efficient methods for the simultaneous analysis of simulated and experimental data from cosmic ray experiments. A relational database for unified data storage, protection, updating and deletion, as well as for fast and convenient information retrieval, is considered.

  4. Trophic dynamics of a simple model ecosystem.

    Science.gov (United States)

    Bell, Graham; Fortier-Dubois, Étienne

    2017-09-13

    We have constructed a model of community dynamics that is simple enough to enumerate all possible food webs, yet complex enough to represent a wide range of ecological processes. We use the transition matrix to predict the outcome of succession and then investigate how the transition probabilities are governed by resource supply and immigration. Low-input regimes lead to simple communities whereas trophically complex communities develop when there is an adequate supply of both resources and immigrants. Our interpretation of trophic dynamics in complex communities hinges on a new principle of mutual replenishment, defined as the reciprocal alternation of state in a pair of communities linked by the invasion and extinction of a shared species. Such neutral couples are the outcome of succession under local dispersal and imply that food webs will often be made up of suites of trophically equivalent species. When immigrants arrive from an external pool of fixed composition a similar principle predicts a dynamic core of webs constituting a neutral interchange network, although communities may express an extensive range of other webs whose membership is only in part predictable. The food web is not in general predictable from whole-community properties such as productivity or stability, although it may profoundly influence these properties. © 2017 The Author(s).
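
    The tool named here, predicting the outcome of succession from the transition matrix, reduces to standard Markov-chain machinery; the numbers below are invented purely to illustrate it:

        import numpy as np

        # Illustration of the tool named in the abstract, with made-up
        # numbers: if community states (candidate food webs) form a Markov
        # chain whose transitions are invasions and extinctions, the long-run
        # outcome of succession is the stationary distribution of the chain.

        P = np.array([              # hypothetical 3-state transition matrix
            [0.80, 0.15, 0.05],     # rows sum to 1: P[i, j] = Pr(i -> j)
            [0.10, 0.70, 0.20],
            [0.05, 0.25, 0.70],
        ])

        dist = np.full(3, 1.0 / 3.0)
        for _ in range(500):        # power iteration toward stationarity
            dist = dist @ P
        print(dist)                 # limiting frequencies of the states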

  5. Simple relation algebras

    CERN Document Server

    Givant, Steven

    2017-01-01

    This monograph details several different methods for constructing simple relation algebras, many of which are new with this book. By drawing these seemingly different methods together, all are shown to be aspects of one general approach, for which several applications are given. These tools for constructing and analyzing relation algebras are of particular interest to mathematicians working in logic, algebraic logic, or universal algebra, but will also appeal to philosophers and theoretical computer scientists working in fields that use mathematics. The book is written with a broad audience in mind and features a careful, pedagogical approach; an appendix contains the requisite background material in relation algebras. Over 400 exercises provide ample opportunities to engage with the material, making this a monograph equally appropriate for use in a special topics course or for independent study. Readers interested in pursuing an extended background study of relation algebras will find a comprehensive treatme...

  6. A Simple Harmonic Universe

    Energy Technology Data Exchange (ETDEWEB)

    Graham, Peter W.; /Stanford U., ITP; Horn, Bart; Kachru, Shamit; /Stanford U., ITP /SLAC; Rajendran, Surjeet; /Johns Hopkins U. /Stanford U., ITP; Torroba, Gonzalo; /Stanford U., ITP /SLAC

    2011-12-14

    We explore simple but novel bouncing solutions of general relativity that avoid singularities. These solutions require curvature k = +1, and are supported by a negative cosmological term and matter with -1 < w < -1/3. In the case of moderate bounces (where the ratio of the maximal scale factor a_+ to the minimal scale factor a_- is O(1)), the solutions are shown to be classically stable and cycle through an infinite set of bounces. For more extreme cases with large a_+/a_-, the solutions can still oscillate many times before classical instabilities take them out of the regime of validity of our approximations. In this regime, quantum particle production also leads eventually to a departure from the realm of validity of semiclassical general relativity, likely yielding a singular crunch. We briefly discuss possible applications of these models to realistic cosmology.
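
    For orientation, the standard closed-FRW Friedmann equation into which these ingredients enter (textbook form, not quoted from the paper) reads:

        % Standard Friedmann equation for curvature k = +1 (not from the paper):
        \[
          \Big(\frac{\dot a}{a}\Big)^{2}
            = \frac{8\pi G}{3}\left(\rho_\Lambda + \rho_w\right) - \frac{1}{a^{2}},
          \qquad
          \rho_\Lambda = \frac{\Lambda}{8\pi G} < 0,
          \qquad
          \rho_w \propto a^{-3(1+w)},\ \ -1 < w < -\tfrac{1}{3}.
        \]
        % Since 0 < 3(1+w) < 2, the curvature term wins at small a and halts
        % the contraction (the bounce), while the negative cosmological term
        % wins at large a and halts the expansion (the turnaround).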

  7. SIMPLE LIFE AND RELIGION

    Directory of Open Access Journals (Sweden)

    Ahmet YILDIRIM

    2014-07-01

    Full Text Available The economy in which we live is one of the most important phenomena of the century for individuals. It presents itself as the sole determinant of people's lives and makes itself felt in almost every sphere. The most obvious objective of the economy is to induce people to consume by triggering needs, and consumer culture thus pervades every aspect of people's lives: whatever is consumed is consumed in the name of culture, beauty and value. The way out of this siege is to return to our moral and religious values. Today a way of life based on local cultural and religious values increasingly comes to the fore, and the plain or lean life close to the Muslim way of life appears to be the way of life preferred by many people. Indeed, the simple life has become widely accepted in the Western world as a way of life, a conception of life, a philosophy, a movement. In determining the Muslim way of life, the kind of life the Prophet (sa) lived is known to be a very important model, example and guide. The prophets, as carriers of religious values, have always been examples and models for the societies to which they were sent, because every aspect of the Prophet's life, his lifestyle and his surroundings has a distinctive feature. It is not possible to live our religion without knowing, learning and understanding his life and conduct. In our presentation we mainly scrutinize Islam's outlook on life and the lifestyle it envisages, including the simple life and lifestyle of the Prophet of Islam (sa), and reveal the related issues; in short, we try to find answers to questions regarding how Islam has embraced life and how the Prophet lived.

  8. Algorithmic Mechanism Design of Evolutionary Computation.

    Science.gov (United States)

    Pei, Yan

    2015-01-01

    We consider algorithmic design, enhancement, and improvement of evolutionary computation as a mechanism design problem. All individuals or several groups of individuals can be considered as self-interested agents. The individuals in evolutionary computation can manipulate parameter settings and operations by satisfying their own preferences, which are defined by an evolutionary computation algorithm designer, rather than by following a fixed algorithm rule. Evolutionary computation algorithm designers or self-adaptive methods should construct proper rules and mechanisms for all agents (individuals) to conduct their evolutionary behaviour correctly in order to achieve the desired and preset objective(s). As a case study, we propose a formal framework on parameter setting, strategy selection, and algorithmic design of evolutionary computation by considering the Nash strategy equilibrium of a mechanism design in the search process. The evaluation results demonstrate the efficiency of the framework. This primary principle can be implemented in any evolutionary computation algorithm that needs to consider strategy selection issues in its optimization process. The final objective of our work is to solve evolutionary computation design as an algorithmic mechanism design problem and to establish its fundamental aspect by taking this perspective. This paper is the first step towards achieving this objective by implementing a strategy equilibrium solution (such as Nash equilibrium) in an evolutionary computation algorithm.

  9. Design principles for shift current photovoltaics.

    Science.gov (United States)

    Cook, Ashley M; M Fregoso, Benjamin; de Juan, Fernando; Coh, Sinisa; Moore, Joel E

    2017-01-25

    While the basic principles of conventional solar cells are well understood, little attention has gone towards maximizing the efficiency of photovoltaic devices based on shift currents. By analysing effective models, here we outline simple design principles for the optimization of shift currents for frequencies near the band gap. Our method allows us to express the band edge shift current in terms of a few model parameters and to show it depends explicitly on wavefunctions in addition to standard band structure. We use our approach to identify two classes of shift current photovoltaics, ferroelectric polymer films and single-layer orthorhombic monochalcogenides such as GeS, which display the largest band edge responsivities reported so far. Moreover, exploring the parameter space of the tight-binding models that describe them we find photoresponsivities that can exceed 100 mA W^-1. Our results illustrate the great potential of shift current photovoltaics to compete with conventional solar cells.

  10. Maximal frustration as an immunological principle.

    Science.gov (United States)

    de Abreu, F Vistulo; Mostardinha, P

    2009-03-06

    A fundamental problem in immunology is that of understanding how the immune system selects promptly which cells to kill without harming the body. This problem poses an apparent paradox. Strong reactivity against pathogens seems incompatible with perfect tolerance towards self. We propose a different view on cellular reactivity to overcome this paradox: effector functions should be seen as the outcome of cellular decisions which can be in conflict with other cells' decisions. We argue that if cellular systems are frustrated, then extensive cross-reactivity among the elements in the system can decrease the reactivity of the system as a whole and induce perfect tolerance. Using numerical and mathematical analyses, we discuss two simple models that perform optimal pathogenic detection with no autoimmunity if cells are maximally frustrated. This study strongly suggests that a principle of maximal frustration could be used to build artificial immune systems. It would be interesting to test this principle in the real adaptive immune system.

  11. The algorithm design manual

    CERN Document Server

    Skiena, Steven S

    2008-01-01

    Explaining the design of algorithms and the analysis of their efficacy and efficiency, this book covers combinatorial algorithm technology, stressing design over analysis. It presents instruction on methods for designing and analyzing computer algorithms, and contains a catalog of algorithmic resources, implementations and a bibliography.

  12. The Algorithmic Imaginary

    DEFF Research Database (Denmark)

    Bucher, Taina

    2017-01-01

    of algorithms affect people's use of these platforms, if at all? To help answer these questions, this article examines people's personal stories about the Facebook algorithm through tweets and interviews with 25 ordinary users. To understand the spaces where people and algorithms meet, this article develops... Examining how algorithms make people feel, then, seems crucial if we want to understand their social power...

  13. Simple and Inexpensive Classroom Demonstrations of Nuclear Magnetic Resonance and Magnetic Resonance Imaging.

    Science.gov (United States)

    Olson, Joel A.; Nordell, Karen J.; Chesnik, Marla A.; Landis, Clark R.; Ellis, Arthur B.; Rzchowski, M. S.; Condren, S. Michael; Lisensky, George C.

    2000-01-01

    Describes a set of simple, inexpensive, classical demonstrations of nuclear magnetic resonance (NMR) and magnetic resonance imaging (MRI) principles that illustrate the resonance condition associated with magnetic dipoles and the dependence of the resonance frequency on environment. (WRM)

  14. Principles of linear algebra with Mathematica

    CERN Document Server

    Shiskowski, Kenneth M

    2013-01-01

    A hands-on introduction to the theoretical and computational aspects of linear algebra using Mathematica® Many topics in linear algebra are simple, yet computationally intensive, and computer algebra systems such as Mathematica® are essential not only for learning to apply the concepts to computationally challenging problems, but also for visualizing many of the geometric aspects within this field of study. Principles of Linear Algebra with Mathematica uniquely bridges the gap between beginning linear algebra and computational linear algebra that is often encountered in applied settings,

  15. Immersive Algorithms: Better Visualization with Less Information

    DEFF Research Database (Denmark)

    Bille, Philip; Gørtz, Inge Li

    2017-01-01

    Visualizing algorithms, such as drawings, slideshow presentations, animations, videos, and software tools, is a key concept to enhance and support student learning. A typical visualization of an algorithm shows the data and then performs computation on the data. For instance, a standard visualization of a sorting algorithm displays the full array at every step, even though the algorithm never "sees" the full sorted array, but only the single position that it accesses during each step of the computation. To fix this discrepancy we introduce the immersive principle that states that at any point in time, the displayed information should closely match the information accessed by the algorithm. We give several examples of immersive visualizations of basic algorithms and data structures, discuss methods for implementing it, and briefly evaluate it.
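
    A minimal way to act on the stated principle (an illustration, not the authors' tool): have an insertion sort report only the array positions it touches at each step, instead of displaying the whole array.

        # Minimal illustration of the immersive principle (not the authors'
        # tool): during insertion sort, show only the array positions the
        # algorithm actually accesses at each step.

        def insertion_sort_trace(xs):
            xs = list(xs)
            for i in range(1, len(xs)):
                j, key = i, xs[i]
                while j > 0 and xs[j - 1] > key:
                    xs[j] = xs[j - 1]
                    print(f"step: compared positions {j-1},{j}")  # accessed only
                    j -= 1
                xs[j] = key
                print(f"step: wrote position {j}")
            return xs

        print(insertion_sort_trace([3, 1, 2]))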

  16. VLSI PARTITIONING ALGORITHM WITH ADAPTIVE CONTROL PARAMETER

    Directory of Open Access Journals (Sweden)

    P. N. Filippenko

    2013-03-01

    Full Text Available The article deals with the problem of very large-scale integration circuit partitioning. A graph is selected as a mathematical model describing the integrated circuit. A modification of the ant colony optimization algorithm, used to solve the graph partitioning problem, is presented. Ant colony optimization is an optimization method based on the principles of self-organization and other useful features of ant behaviour. The proposed search system is based on the ant colony optimization algorithm with an improved method of initial distribution and dynamic adjustment of the search control parameters. The experimental results and performance comparison show that the proposed method of very large-scale integration circuit partitioning provides better search performance than other well-known algorithms.
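
    A compact sketch of the approach (an illustration only: the paper's initial-distribution and parameter-adaptation improvements are not reproduced, and the graph, penalty and parameters below are assumed): ants assign vertices to two blocks guided by pheromone, and the best assignment found reinforces its choices.

        import random

        # Compact ACO sketch for graph bipartitioning (an illustration of
        # the approach, not the paper's tuned algorithm): minimize the cut
        # size plus an imbalance penalty; the best ant reinforces pheromone.

        EDGES = [(0, 1), (1, 2), (2, 3), (3, 0),
                 (4, 5), (5, 6), (6, 7), (7, 4), (0, 4)]
        N = 8

        def cost(assign):
            cut = sum(assign[u] != assign[v] for u, v in EDGES)
            imbalance = abs(sum(assign) - N // 2)
            return cut + 2 * imbalance

        random.seed(7)
        tau = [[1.0, 1.0] for _ in range(N)]    # pheromone per (vertex, block)
        best, best_cost = None, float("inf")
        for _ in range(100):                    # iterations
            ants = []
            for _ in range(20):                 # ants per iteration
                a = [0 if random.random() < tau[v][0] / (tau[v][0] + tau[v][1])
                     else 1 for v in range(N)]
                ants.append((cost(a), a))
            c, a = min(ants)
            if c < best_cost:
                best, best_cost = a, c
            for v in range(N):                  # evaporate, then reinforce best
                tau[v] = [0.9 * t for t in tau[v]]
                tau[v][best[v]] += 1.0
        print(best, best_cost)                  # e.g. {0..3} vs {4..7}, cut 1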

  17. A very simple proof of Pascal's hexagon theorem and some ...

    Indian Academy of Sciences (India)

    In this article we present a simple and elegant algebraic proof of Pascal's hexagon theorem which requires only knowledge of basics on conic sections without theory of projective transformations. Also, we provide an efficient algorithm for finding an equation of the conic containing five given points and a criterion for ...

  18. Efficiency principles of consulting entrepreneurship

    OpenAIRE

    Moroz Yustina S.; Drozdov Igor N.

    2015-01-01

    The article reviews the primary goals and problems of consulting entrepreneurship. The principles defining the efficiency of entrepreneurship in the field of consulting are generalized. Special attention is given to the importance of ethical principles in conducting consulting entrepreneurship activity.

  19. Numerical simulation of turbulent flow and heat transfer in a parallel channel. Verification of the field synergy principle

    International Nuclear Information System (INIS)

    Tian Wenxi; Su, G.H.; Qiu Suizheng; Jia Dounan

    2004-01-01

    The field synergy principle, proposed by Guo (1998) on the basis of 2-D laminar boundary-layer flow, resulted from a second look at the mechanism of convective heat transfer. Numerical verification of this principle's validity for turbulent flow has been carried out by very few researchers, mostly using commercial software such as FLUENT and CFX. In this paper, a numerical simulation of turbulent flow with recirculation was developed using the SIMPLE algorithm with the two-equation k-ε model. The extension-of-computational-region method and the wall function method were adopted to regulate the whole computational region geometrically. Keeping the inlet Reynolds number constant at 10,000 and changing the height of the solid obstacle, the simulation showed that the wall heat flux decreased as the angle between the velocity vector and the temperature gradient increased. Thus it is validated that the field synergy principle, originally based on 2-D laminar boundary-layer flow, can also be applied to complex turbulent flow, even with recirculation. (author)
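
    The principle being verified can be stated compactly. In its usual integral form (standard notation, not taken from this paper), the wall heat flux follows from integrating the convective term across the thermal boundary layer of thickness δt:

```latex
q_w = \rho c_p \int_0^{\delta_t} \mathbf{U}\cdot\nabla T \,\mathrm{d}y
    = \rho c_p \int_0^{\delta_t} |\mathbf{U}|\,|\nabla T|\cos\theta \,\mathrm{d}y
```

    Heat transfer is enhanced as the synergy angle θ between the velocity vector and the temperature gradient approaches zero, which is exactly the trend the simulation above reproduces for turbulent flow with recirculation.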

  20. Algorithmically specialized parallel computers

    CERN Document Server

    Snyder, Lawrence; Gannon, Dennis B

    1985-01-01

    Algorithmically Specialized Parallel Computers focuses on the concept and characteristics of an algorithmically specialized computer. This book discusses algorithmically specialized computers, algorithmic specialization using VLSI, and innovative architectures. The architectures and algorithms for digital signal, speech, and image processing and specialized architectures for numerical computations are also elaborated. Other topics include the model for analyzing generalized inter-processor, pipelined architecture for search tree maintenance, and specialized computer organization for raster

  1. Practical boundary surveying legal and technical principles

    CERN Document Server

    Gay, Paul

    2015-01-01

    This guide to boundary surveying provides landowners, land surveyors, students and others with the necessary foundation to understand boundary surveying techniques and the common legal issues that govern boundary establishment.  Boundary surveying is sometimes mistakenly considered a strictly technical discipline with simple and straightforward technical solutions.  In reality, boundary establishment is often a difficult and complex matter, requiring years of experience and a thorough understanding of boundary law.  This book helps readers to understand the challenges often encountered by boundary surveyors and some of the available solutions. Using only simple and logically explained mathematics, the principles and practice of boundary surveying are demystified for those without prior experience, and the focused coverage of pivotal issues such as easements and setting lot corners will aid even licensed practitioners in untangling thorny cases. Practical advice on using both basic and advanced instruments ...

  2. Principles of visual attention

    DEFF Research Database (Denmark)

    Bundesen, Claus; Habekost, Thomas

    The nature of attention is one of the oldest and most central problems in psychology. A huge amount of research has been produced on this subject in the last half century, especially on attention in the visual modality, but a general explanation has remained elusive. Many still view attention res....... The book explains the TVA model and shows how it accounts for attentional effects observed across all the research areas described. Principles of Visual Attention offers a uniquely integrated view on a central topic in cognitive neuroscience....

  3. Principles of Uncertainty

    CERN Document Server

    Kadane, Joseph B

    2011-01-01

    An intuitive and mathematical introduction to subjective probability and Bayesian statistics. An accessible, comprehensive guide to the theory of Bayesian statistics, Principles of Uncertainty presents the subjective Bayesian approach, which has played a pivotal role in game theory, economics, and the recent boom in Markov Chain Monte Carlo methods. Both rigorous and friendly, the book contains: Introductory chapters examining each new concept or assumption Just-in-time mathematics -- the presentation of ideas just before they are applied Summary and exercises at the end of each chapter Discus

  4. Principles of smile design

    Science.gov (United States)

    Bhuvaneswaran, Mohan

    2010-01-01

    An organized and systematic approach is required to evaluate, diagnose and resolve esthetic problems predictably. It is of prime importance that the final result does not depend on looks alone. Our ultimate goal as clinicians is to achieve a pleasing composition in the smile by creating an arrangement of various esthetic elements. This article reviews the various principles that govern the art of smile designing. The literature search was done using PubMed and Medline. This article will provide the reader with the basic knowledge to bring out a functional, stable smile. PMID:21217950

  5. Principles of quantum chemistry

    CERN Document Server

    George, David V

    2013-01-01

    Principles of Quantum Chemistry focuses on the application of quantum mechanics in physical models and experiments of chemical systems.This book describes chemical bonding and its two specific problems - bonding in complexes and in conjugated organic molecules. The very basic theory of spectroscopy is also considered. Other topics include the early development of quantum theory; particle-in-a-box; general formulation of the theory of quantum mechanics; and treatment of angular momentum in quantum mechanics. The examples of solutions of Schroedinger equations; approximation methods in quantum c

  6. Principles of chemical kinetics

    CERN Document Server

    House, James E

    2007-01-01

    James House's revised Principles of Chemical Kinetics provides a clear and logical description of chemical kinetics in a manner unlike any other book of its kind. Clearly written with detailed derivations, the text allows students to move rapidly from theoretical concepts of rates of reaction to concrete applications. Unlike other texts, House presents a balanced treatment of kinetic reactions in gas, solution, and solid states. The entire text has been revised and includes many new sections and an additional chapter on applications of kinetics. The topics covered include quantitative rela

  7. Principles of meteoritics

    CERN Document Server

    Krinov, E L

    1960-01-01

    Principles of Meteoritics examines the significance of meteorites in relation to cosmogony and to the origin of the planetary system. The book discusses the science of meteoritics and the sources of meteorites. Scientists study the morphology of meteorites to determine their motion in the atmosphere. The scope of such study includes all forms of meteorites, the circumstances of their fall to earth, their motion in the atmosphere, and their orbits in space. Meteoric bodies vary in sizes; in calculating their motion in interplanetary space, astronomers apply the laws of Kepler. In the region of

  8. RFID design principles

    CERN Document Server

    Lehpamer, Harvey

    2012-01-01

    This revised edition of the Artech House bestseller, RFID Design Principles, serves as an up-to-date and comprehensive introduction to the subject. The second edition features numerous updates and brand new and expanded material on emerging topics such as the medical applications of RFID and new ethical challenges in the field. This practical book offers you a detailed understanding of RFID design essentials, key applications, and important management issues. The book explores the role of RFID technology in supply chain management, intelligent building design, transportation systems, military

  9. Principles of thermodynamics

    CERN Document Server

    Kaufman, Myron

    2002-01-01

    Ideal for one- or two-semester courses that assume elementary knowledge of calculus, this text presents the fundamental concepts of thermodynamics and applies these to problems dealing with properties of materials, phase transformations, chemical reactions, solutions and surfaces. The author utilizes principles of statistical mechanics to illustrate key concepts from a microscopic perspective, as well as to develop equations of kinetic theory. The book provides end-of-chapter question and problem sets, some using Mathcad™ and Mathematica™; a useful glossary containing important symbols, definitions, and units; and appendices covering multivariable calculus and valuable numerical methods.

  10. Statistical Mechanics Algorithms and Computations

    CERN Document Server

    Krauth, Werner

    2006-01-01

    This book discusses the computational approach in modern statistical physics, adopting simple language and an attractive format of many illustrations, tables and printed algorithms. The discussion of key subjects in classical and quantum statistical physics will appeal to students, teachers and researchers in physics and related sciences. The focus is on orientation, with implementation details kept to a minimum.

  11. Entropy Is Simple, Qualitatively

    Science.gov (United States)

    Lambert, Frank L.

    2002-10-01

    Qualitatively, entropy is simple. What it is, why it is useful in understanding the behavior of macro systems or of molecular systems is easy to state: Entropy increase from a macro viewpoint is a measure of the dispersal of energy from localized to spread out at a temperature T. The conventional q in q_rev/T is the energy dispersed to or from a substance or a system. On a molecular basis, entropy increase means that a system changes from having fewer accessible microstates to having a larger number of accessible microstates. Fundamentally based on statistical and quantum mechanics, this approach is superior to the non-fundamental "disorder" as a descriptor of entropy change. The foregoing in no way denies the subtlety or the difficulty presented by entropy in thermodynamics—to first-year students or to professionals. However, as an aid to beginners in their quantitative study of thermodynamics, the qualitative conclusions in this article give students the advantage of a clear bird’s-eye view of why entropy increases in a wide variety of basic cases: a substance going from 0 K to T, phase change, gas expansion, mixing of ideal gases or liquids, colligative effects, and the Gibbs equation. See Letter re: this article.
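
    As a quick worked illustration of the quantitative side (our example, not the article's): for the isothermal expansion of one mole of an ideal gas into twice its original volume,

```latex
\Delta S = nR \ln\frac{V_2}{V_1}
         = (1\,\mathrm{mol})\,(8.314\,\mathrm{J\,mol^{-1}K^{-1}})\,\ln 2
         \approx 5.76\,\mathrm{J\,K^{-1}}
```

    the increase reflecting the same energy now dispersed over a larger volume, that is, over a larger number of accessible microstates.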

  12. Quasispecies made simple.

    Directory of Open Access Journals (Sweden)

    J J Bull

    2005-11-01

    Quasispecies are clouds of genotypes that appear in a population at mutation-selection balance. This concept has recently attracted the attention of virologists, because many RNA viruses appear to generate high levels of genetic variation that may enhance the evolution of drug resistance and immune escape. The literature on these important evolutionary processes is, however, quite challenging. Here we use simple models to link mutation-selection balance theory to the most novel property of quasispecies: the error threshold, a mutation rate below which populations equilibrate in a traditional mutation-selection balance and above which the population experiences an error catastrophe, that is, the loss of the favored genotype through frequent deleterious mutations. These models show that a single fitness landscape may contain multiple, hierarchically organized error thresholds and that an error threshold is affected by the extent of back mutation and redundancy in the genotype-to-phenotype map. Importantly, an error threshold is distinct from an extinction threshold, which is the complete loss of the population through lethal mutations. Based on this framework, we argue that the lethal mutagenesis of a viral infection by mutation-inducing drugs is not a true error catastrophe, but an extinction catastrophe.
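
    A minimal sketch of the single-peak version of such a model (our simplification: one master genotype of fitness w > 1, all mutants of fitness 1, per-genome mutation rate u, back mutation ignored) makes the error threshold explicit:

```python
# Single-peak quasispecies sketch: the equilibrium frequency of the master
# genotype at mutation-selection balance is x* = (w(1-u) - 1)/(w - 1),
# which hits zero at the error threshold u_c = 1 - 1/w.

def master_equilibrium(w, u):
    x = (w * (1 - u) - 1) / (w - 1)
    return max(x, 0.0)            # beyond the threshold the master is lost

w = 2.0                           # master is twice as fit as the mutant cloud
for u in (0.1, 0.3, 0.5, 0.6):
    print(f"u = {u:.1f} -> master frequency {master_equilibrium(w, u):.3f}")
# Threshold here: u_c = 1 - 1/w = 0.5
```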

  13. Archimedes' Principle in General Coordinates

    Science.gov (United States)

    Ridgely, Charles T.

    2010-01-01

    Archimedes' principle is well known to state that a body submerged in a fluid is buoyed up by a force equal to the weight of the fluid displaced by the body. Herein, Archimedes' principle is derived from first principles by using conservation of the stress-energy-momentum tensor in general coordinates. The resulting expression for the force is…
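
    Whatever the coordinates, the result must reduce in the flat-space limit to the familiar statement, which is easy to check numerically (our illustrative numbers):

```latex
F_{\text{buoy}} = \rho_{\text{fluid}}\, g\, V_{\text{disp}}
   = (1000\,\mathrm{kg\,m^{-3}})(9.81\,\mathrm{m\,s^{-2}})(0.002\,\mathrm{m^{3}})
   \approx 19.6\,\mathrm{N}
```

    i.e., a 2-litre body fully submerged in water is buoyed up by roughly 19.6 N.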

  14. Fermat and the Minimum Principle

    Indian Academy of Sciences (India)

    Arguably, least action and minimum principles were offered or applied much earlier. These principles are among the fundamental, basic, unifying or organizing ones used to describe a variety of natural phenomena. They consider the amount of energy expended in performing a given action to be the least required ...

  15. Principles of Mechanical Excavation

    Energy Technology Data Exchange (ETDEWEB)

    Lislerud, A. [Tamrock Corp., Tampere (Finland)

    1997-12-01

    Mechanical excavation of rock today includes several methods such as tunnel boring, raiseboring, roadheading and various continuous mining systems. Of these, raiseboring is one potential technique for excavating shafts in the repository for spent nuclear fuel, and dry blind boring is a promising technique for excavation of deposition holes, as demonstrated in the Research Tunnel at Olkiluoto. In addition, there is potential for use of other mechanical excavation techniques in different parts of the repository. One of the main objectives of this study was to analyze the factors which affect the feasibility of mechanical rock excavation in hard rock conditions and to enhance the understanding of factors which affect rock cutting so as to provide an improved basis for excavator performance prediction modeling. The study included the following four main topics: (a) phenomenological model based on similarity analysis for roller disk cutting, (b) rock mass properties which affect rock cuttability and tool life, (c) principles for linear and field cutting tests and performance prediction modeling and (d) cutter head lacing design procedures and principles. As a conclusion of this study, a test rig was constructed, and field tests were planned and started up. The results of the study can be used to improve the performance prediction models used to assess the feasibility of different mechanical excavation techniques at various repository investigation sites. (orig.). 21 refs.

  16. Principle or constructive relativity

    Science.gov (United States)

    Frisch, Mathias

    Appealing to Albert Einstein's distinction between principle and constructive theories, Harvey Brown has argued for an interpretation of the theory of relativity as a dynamic and constructive theory. Brown's view has been challenged by Michel Janssen and in this paper I investigate their dispute. I argue that their disagreement appears larger than it actually is due to the two frameworks used by Brown and Janssen to express their respective views: Brown's appeal to Einstein's principle-constructive distinction and Janssen's framing of the disagreement as one over the question whether relativity provides a kinematic or a dynamic constraint. I appeal to a distinction between types of theories drawn by H. A. Lorentz two decades before Einstein's distinction to argue that Einstein's distinction represents a false dichotomy. I argue further that the disagreement concerning the kinematics-dynamics distinction is a disagreement about labels but not about substance. There remains a genuine disagreement over the explanatory role of spacetime geometry and here I agree with Brown arguing that Janssen sees a pressing need for an explanation of Lorentz invariance where no further explanation is needed.

  17. Principles of Mechanical Excavation

    International Nuclear Information System (INIS)

    Lislerud, A.

    1997-12-01

    Mechanical excavation of rock today includes several methods such as tunnel boring, raiseboring, roadheading and various continuous mining systems. Of these, raiseboring is one potential technique for excavating shafts in the repository for spent nuclear fuel, and dry blind boring is a promising technique for excavation of deposition holes, as demonstrated in the Research Tunnel at Olkiluoto. In addition, there is potential for use of other mechanical excavation techniques in different parts of the repository. One of the main objectives of this study was to analyze the factors which affect the feasibility of mechanical rock excavation in hard rock conditions and to enhance the understanding of factors which affect rock cutting so as to provide an improved basis for excavator performance prediction modeling. The study included the following four main topics: (a) phenomenological model based on similarity analysis for roller disk cutting, (b) rock mass properties which affect rock cuttability and tool life, (c) principles for linear and field cutting tests and performance prediction modeling and (d) cutter head lacing design procedures and principles. As a conclusion of this study, a test rig was constructed, and field tests were planned and started up. The results of the study can be used to improve the performance prediction models used to assess the feasibility of different mechanical excavation techniques at various repository investigation sites. (orig.)

  18. Basic economic principles of road pricing: From theory to applications

    NARCIS (Netherlands)

    Rouwendal, J.; Verhoef, E.T.

    2006-01-01

    This paper presents a non-technical introduction to the economic principles relevant for transport pricing design and analysis. We provide the basic rationale behind pricing of externalities, discuss why simple Pigouvian tax rules that equate charges to marginal external costs are not optimal in
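
    The simple Pigouvian rule referred to here (before the record truncates) can be written schematically, in standard notation rather than the paper's:

```latex
\tau^{*} = MEC(q^{*}) = MSC(q^{*}) - MPC(q^{*})
```

    the optimal charge equals the marginal external cost, the gap between marginal social and marginal private cost at the optimum; the paper's argument concerns why such first-best rules break down under real-world constraints.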

  19. Through the Looking Glass: Symmetry in Behavioral Principles?

    Science.gov (United States)

    Marr, M. Jackson

    2006-01-01

    In this article, the author discusses and presents seven possibilities that describe how symmetry principles are reflected in behavior analysis. First, if there are apparently no functional distinctions to be made between positive and negative reinforcement, then reinforcer effectiveness (by various measures) is invariant under a simple inversion…

  20. Some special features of the le chatelier-braun principle

    Science.gov (United States)

    Nesis, E. I.; Skibin, Yu. N.

    2000-07-01

    The relaxation reaction of a system that, according to the Le Chatelier-Braun principle, weakens the result of an external influence turns out to be more intense under a complex action. A method for quantitatively determining the weakening effect for simple and complex actions is suggested.

  1. Zero Energy Buildings – Design Principles and Built Examples

    DEFF Research Database (Denmark)

    for the development of zero energy houses. These strategies and technologies are illustrated through simple design principles and built examples • identify technical and architectural potentials and challenges related to design strategies of crucial importance to the development of zero energy houses • identify...

  2. Simple inflationary quintessential model

    Science.gov (United States)

    de Haro, Jaume; Amorós, Jaume; Pan, Supriya

    2016-04-01

    In the framework of a flat Friedmann-Lemaître-Robertson-Walker geometry, we present a non-geodesically past complete model of our Universe without the big bang singularity at finite cosmic time, describing its evolution starting from its early inflationary era up to the present accelerating phase. We found that a hydrodynamical fluid with a nonlinear equation of state could result in such a scenario, which, after the end of this inflationary stage, suffers a sudden phase transition and enters into the stiff matter dominated era, and the Universe becomes reheated due to a huge amount of particle production. Finally, it asymptotically enters into the de Sitter phase, concluding with the present accelerated expansion. Using the reconstruction technique, we also show that this background provides an extremely simple inflationary quintessential potential whose inflationary part is given by the well-known 1-dimensional Higgs potential, i.e., a double well inflationary potential, and whose quintessential part is an exponential potential that leads to a deflationary regime after this inflation and can depict the current cosmic acceleration at late times. Moreover, the Higgs potential leads to a power spectrum of the cosmological perturbations which fits well with the latest Planck estimations. Further, we compared our viable potential with some known inflationary quintessential potentials, which shows that our quintessential model, that is, the Higgs potential combined with the exponential one, is an improved version of them because it contains an analytic solution that allows us to perform all analytic calculations. Finally, we have shown that the introduction of a nonzero cosmological constant simplifies the potential considerably, with an analytic behavior of the background which again permits us to evaluate all the quantities analytically.
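
    Schematically, the reconstructed potential described above combines a double-well segment with an exponential tail; the form below is illustrative (coefficients and the matching point φ_E are fixed in the paper by the reconstruction, not assumed here):

```latex
V(\varphi) =
\begin{cases}
\dfrac{\lambda}{4}\,(\varphi^{2}-v^{2})^{2}, & \varphi \le \varphi_{E}
  \quad\text{(double-well Higgs part: inflation)}\\[2mm]
V_{E}\, e^{-\gamma(\varphi-\varphi_{E})}, & \varphi > \varphi_{E}
  \quad\text{(exponential part: quintessence)}
\end{cases}
```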

  3. An Educational System for Learning Search Algorithms and Automatically Assessing Student Performance

    Science.gov (United States)

    Grivokostopoulou, Foteini; Perikos, Isidoros; Hatzilygeroudis, Ioannis

    2017-01-01

    In this paper, first we present an educational system that assists students in learning and tutors in teaching search algorithms, an artificial intelligence topic. Learning is achieved through a wide range of learning activities. Algorithm visualizations demonstrate the operational functionality of algorithms according to the principles of active…

  4. The Effect of Swarming on a Voltage Potential-Based Conflict Resolution Algorithm

    NARCIS (Netherlands)

    Maas, J.B.; Sunil, E.; Ellerbroek, J.; Hoekstra, J.M.; Tra, M.A.P.

    2016-01-01

    Several conflict resolution algorithms for airborne self-separation rely on principles derived from the repulsive forces that exist between similarly charged particles. This research investigates whether the performance of the Modified Voltage Potential algorithm, which is based on this principle,

  5. Scrutinizing an algorithmic technique: the Bayes classifier as interested reading of reality

    NARCIS (Netherlands)

    Rieder, B.

    2017-01-01

    This paper outlines the notion of ‘algorithmic technique’ as a middle ground between concrete, implemented algorithms and the broader study and theorization of software. Algorithmic techniques specify principles and methods for doing things in the medium of software and they thus constitute units of

  6. Transmission Expansion Planning – A Multiyear Dynamic Approach Using a Discrete Evolutionary Particle Swarm Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Saraiva J. T.

    2012-10-01

    The basic objective of Transmission Expansion Planning (TEP) is to schedule a number of transmission projects along an extended planning horizon, minimizing the network construction and operational costs while satisfying the requirement of delivering power safely and reliably to load centres along the horizon. This principle is quite simple, but the complexity of the problem and its impact on society transform TEP into a challenging issue. This paper describes a new approach to solve the dynamic TEP problem, based on an improved discrete integer version of the Evolutionary Particle Swarm Optimization (EPSO) meta-heuristic algorithm. The paper includes sections describing in detail the enhanced EPSO approach, the mathematical formulation of the TEP problem, including the objective function and the constraints, and a section devoted to the application of the developed approach to this problem. Finally, the use of the developed approach is illustrated using a case study based on the IEEE 24 bus 38 branch test system.
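
    For orientation, a generic discrete particle swarm on a 0/1 decision vector (one bit per candidate project, say) looks as follows. This is the classic binary PSO with a sigmoid bit-flip rule, not the paper's enhanced EPSO, and the toy cost function is purely illustrative:

```python
import math
import random

def cost(x):
    # Toy objective: build as few projects as possible, but at least 3
    # (a crude stand-in for network constraints).
    return sum(x) + 10 * (sum(x) < 3)

def binary_pso(dim=8, particles=15, iters=60, w=0.7, c1=1.5, c2=1.5):
    X = [[random.randint(0, 1) for _ in range(dim)] for _ in range(particles)]
    V = [[0.0] * dim for _ in range(particles)]
    P = [x[:] for x in X]                      # personal bests
    g = min(P, key=cost)[:]                    # global best
    for _ in range(iters):
        for i in range(particles):
            for d in range(dim):
                V[i][d] = (w * V[i][d]
                           + c1 * random.random() * (P[i][d] - X[i][d])
                           + c2 * random.random() * (g[d] - X[i][d]))
                prob = 1 / (1 + math.exp(-V[i][d]))   # velocity -> bit prob.
                X[i][d] = 1 if random.random() < prob else 0
            if cost(X[i]) < cost(P[i]):
                P[i] = X[i][:]
                if cost(P[i]) < cost(g):
                    g = P[i][:]
    return g, cost(g)

print(binary_pso())
```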

  7. A survey of variational principles

    International Nuclear Information System (INIS)

    Lewins, J.D.

    1993-01-01

    This article presents a survey of variational principles. Variational principles play a significant role in mathematical theory, with emphasis on the physical aspects. They serve two principal purposes: to represent the equation of the system in a succinct way, and to enable a particular computation in the system to be carried out with greater accuracy. The survey of variational principles has ranged widely from its starting point in the Lagrange multiplier to optimisation principles. In an age of digital computation, these classic methods can be adapted to improve such calculations. We emphasize particularly the advantage of basic finite element methods based on variational principles. (A.B.)

  8. Mach's principle and rotating universes

    International Nuclear Information System (INIS)

    King, D.H.

    1990-01-01

    It is shown that the Bianchi 9 model universe satisfies the Mach principle. These closed rotating universes were previously thought to be counter-examples to the principle. The Mach principle is satisfied because the angular momentum of the rotating matter is compensated by the effective angular momentum of gravitational waves. A new formulation of the Mach principle is given that is based on the field theory interpretation of general relativity. Every closed universe with 3-sphere topology is shown to satisfy this formulation of the Mach principle. It is shown that the total angular momentum of the matter and gravitational waves in a closed 3-sphere topology universe is zero

  9. A simpler and elegant algorithm for computing fractal dimension in ...

    Indian Academy of Sciences (India)

    Conventional algorithms for computing the dimension of such systems in higher dimensional state space face an unavoidable problem of enormous storage requirements. Here we present an algorithm, which uses a simple but very powerful technique and faces no problem in computing dimension in higher dimensional state ...
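
    For contrast with the storage problem the abstract mentions, here is the generic box-counting estimator (a standard method, not the article's algorithm), which keeps one box index per occupied cell:

```python
import numpy as np

def box_counting_dimension(points, epsilons=(0.1, 0.05, 0.025, 0.0125)):
    """Estimate fractal dimension as the slope of log N(eps) vs log(1/eps)."""
    pts = np.asarray(points, dtype=float)
    counts = []
    for eps in epsilons:
        boxes = {tuple(idx) for idx in np.floor(pts / eps).astype(int)}
        counts.append(len(boxes))
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(epsilons)), np.log(counts), 1)
    return slope

# Points along a straight segment should give a dimension close to 1.
t = np.linspace(0.0, 1.0, 5000)
print(box_counting_dimension(np.column_stack([t, t])))
```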

  10. Comparison of two (geometric) algorithms for auto OMA

    DEFF Research Database (Denmark)

    Juul, Martin; Olsen, Peter; Balling, Ole

    2018-01-01

    parameters. The two algorithms are compared and illustrated on simulated data. Different choices of distance measures are discussed and evaluated. It is illustrated how a simple distance measure outperforms traditional distance measures from other Auto OMA algorithms. Traditional measures are unable...

  11. An algorithm for learning real-time automata

    NARCIS (Netherlands)

    Verwer, S.E.; De Weerdt, M.M.; Witteveen, C.

    2007-01-01

    We describe an algorithm for learning simple timed automata, known as real-time automata. The transitions of real-time automata can have a temporal constraint on the time of occurrence of the current symbol relative to the previous symbol. The learning algorithm is similar to the red-blue fringe

  12. Analysis of Pathfinder SST algorithm for global and regional conditions

    Indian Academy of Sciences (India)

    The initial algorithm was a simple linear combination of the channel 4 and ... in the current operational NLSST (non-linear SST; Walton et al 1998) and the Miami Pathfinder SST (see below, and Kilpatrick et al (2001)). Regardless of the form of the algorithm, the SST retrieval coefficients are derived by regression analysis.
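
    For reference, the operational NLSST mentioned above has the general regression form (after Walton et al 1998; the a_i are empirically fitted coefficients, T11 and T12 the 11 and 12 μm brightness temperatures, T_sfc a first-guess SST, θ the satellite zenith angle):

```latex
SST = a_0 + a_1 T_{11} + a_2\,(T_{11}-T_{12})\,T_{\mathrm{sfc}}
          + a_3\,(T_{11}-T_{12})\,(\sec\theta - 1)
```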

  13. THE EQUALITY PRINCIPLE REQUIREMENTS

    Directory of Open Access Journals (Sweden)

    CLAUDIA ANDRIŢOI

    2013-05-01

    The premises and objectives of the problem: the idea of inserting the equality principle between the freedom and justice principles is manifested in positive law in two stages, as a general idea of all judicial norms and as a requirement of the owner of a subjective right upon those applying objective law. Equality before the law and public authorities cannot involve the idea of standardization, of uniformity, of enlisting all citizens under the mark of the same judicial regime, regardless of their natural or socio-professional situation. Through the Beijing Platform and the position documents of the European Commission, we have defined the integrative approach to equality as representing an active and visible integration of the gender perspective in all sectors and at all levels. The research methods used are the conceptualist method, the logical method and the intuitive method, necessary as means of reasoning in order to support our demonstration. We have to underline the fact that the system analysis of the research methods of the judicial phenomenon does not admit a “value ranking”, because one value cannot be generalized in relation to another. At the same time, we must guard against methodological extremism. The final purpose of this study is the attainment of the stage of perfection/excellence by all individuals through the promotion of equality and freedom. This supposes that the existence of a frame favourable to non-discrimination (fairness) represents a means and a condition of self-determination, and the state of perfection/excellence is a result of this self-determination; the condition necessary for obtaining this non-discrimination frame for all of us, in conditions of freedom for all individuals, is the same condition that promotes the state of perfection/excellence. In conclusion we may state the fact that the equality principle represents a true catalyst of the

  14. Prioritizing Zakat Core Principles Criteria

    Directory of Open Access Journals (Sweden)

    Aam Slamet Rusydiana

    2017-06-01

    Prioritizing Zakat Core Principles Criteria. A zakat institution (OPZ) is an intermediary organization with a social basis; its entire operating expense is taken from the zakat and infaq funds collected. The Zakat Core Principles are a starting point for the frameworks and standards of zakat-based governance best practices. The Zakat Core Principles are mainly aimed at improving the quality of zakat systems by identifying weaknesses in existing supervision and regulation. This study tries to prioritize the principles of the ZCP, as well as the essential criteria at each level, using the Analytic Hierarchy Process (AHP). There are five zakat core principles: regulation; supervision; collection and disbursement management; risk management; and audit and transparency. Among these principles, the main priority is regulation, followed by audit and transparency. DOI: 10.15408/ess.v7i2.5275

  15. Una historia muy simple

    Directory of Open Access Journals (Sweden)

    Miha Mazzini

    2009-12-01

    I am going to tell you a very simple story. It will probably not seem like anything special to you, and I do not want to take up your time, so I will try to be as quick as possible. I enrolled in psychology because that is what my best friend did. We had been classmates since kindergarten and I always followed her in everything. In our third year at university she met her boyfriend and continued her studies in another country; for the first time, I could not follow her. When I had passed all my exams, the professor asked me whether I was interested in writing my thesis on the psychological profile of reality-show participants. I agreed so as not to have to think about another topic, although I did not watch much television, because I spent my nights with my textbooks. I soon realized that the professor had evidently signed a contract with the broadcaster: he was the one being paid, and I was the one who would have to do the work, but I did not mind. An undergraduate thesis is just that, and it has to be done. I evaluated the candidates and chose the participants who would live together for several months. Since the programme was made under a foreign licence and they already knew what interested the audience, I was given the traits of the psychological profiles that do not turn out well in that collective isolation. I had to choose varied people, but within the average; never anything truly special. When I graduated, my evenings became free: suddenly I had much more time and could have watched the programme, but it had already ended. I heard, though, that it had been a great success and that the kids, above all, had loved the reality show and the chosen participants.

  16. Principles of Bioenergetics

    CERN Document Server

    Skulachev, Vladimir P; Kasparinsky, Felix O

    2013-01-01

    Principles of Bioenergetics summarizes one of the quickly growing branches of modern biochemistry. Bioenergetics concerns the energy transductions occurring in living systems, and this book pays special attention to the molecular mechanisms of these processes. The main subject of the book is the "energy coupling membrane", which refers to the inner membranes of intracellular organelles, for example, mitochondria and chloroplasts; cellular cytoplasmic membranes hosting respiratory and photosynthetic energy transducers, as well as ion-transporting ATP-synthases (ATPases), are also part of this subject. Significant attention is paid to the alternative function of mitochondria as generators of reactive oxygen species (ROS) that mediate programmed death of cells (apoptosis and necrosis) and organisms (phenoptosis). The latter process is considered as a key mechanism of aging, which may be suppressed by mitochondria-targeted antioxidants.

  17. Principles of Lasers

    CERN Document Server

    Svelto, Orazio

    2010-01-01

    This new Fifth Edition of Principles of Lasers incorporates corrections to the previous edition. The text’s essential mission remains the same: to provide a wide-ranging yet unified description of laser behavior, physics, technology, and current applications. Dr. Svelto emphasizes the physical rather than the mathematical aspects of lasers, and presents the subject in the simplest terms compatible with a correct physical understanding. Praise for earlier editions: "Professor Svelto is himself a longtime laser pioneer and his text shows the breadth of his broad acquaintance with all aspects of the field … Anyone mastering the contents of this book will be well prepared to understand advanced treatises and research papers in laser science and technology." (Arthur L. Schawlow, 1981 Nobel Laureate in Physics) "Already well established as a self-contained introduction to the physics and technology of lasers … Professor Svelto’s book, in this lucid translation by David Hanna, can be strongly recommended for...

  18. Principles of modern physics

    CERN Document Server

    Saxena, A K

    2014-01-01

    Principles of Modern Physics, divided into twenty one chapters, begins with quantum ideas followed by discussions on special relativity, atomic structure, basic quantum mechanics, hydrogen atom (and Schrodinger equation) and periodic table, the three statistical distributions, X-rays, physics of solids, imperfections in crystals, magnetic properties of materials, superconductivity, Zeeman-, Stark- and Paschen Back- effects, Lasers, Nuclear physics (Yukawa's meson theory and various nuclear models), radioactivity and nuclear reactions, nuclear fission, fusion and plasma, particle accelerators and detectors, the universe, Elementary particles (classification, eight fold way and quark model, standard model and fundamental interactions), cosmic rays, deuteron problem in nuclear physics, and cathode ray oscilloscope. NEW TO THE FOURTH EDITION: The CO2 Laser Theory of magnetic moments on the basis of shell model Geological dating Laser Induced fusion and laser fusion reactor. Hawking radiation The cosmological red ...

  19. [Principles of PET].

    Science.gov (United States)

    Beuthien-Baumann, B

    2018-04-19

    Positron emission tomography (PET) is a procedure in nuclear medicine, which is applied predominantly in oncological diagnostics. In the form of modern hybrid machines, such as PET computed tomography (PET/CT) and PET magnetic resonance imaging (PET/MRI), it has found wide acceptance and availability. The PET procedure is more than just another imaging technique; it is a functional method with the capability for quantification in addition to showing the distribution pattern of the radiopharmaceutical, and its results are used for therapeutic decisions. A profound knowledge of the principles of PET, including the correct indications, patient preparation, and possible artifacts, is mandatory for the correct interpretation of PET results.

  20. Emulsion Science Basic Principles

    CERN Document Server

    Leal-Calderon, Fernando; Schmitt, Véronique

    2007-01-01

    Emulsions are generally made out of two immiscible fluids like oil and water, one being dispersed in the second in the presence of surface-active compounds. They are used as intermediate or end products in a huge range of areas including the food, chemical, cosmetic, pharmaceutical, paint, and coating industries. Besides the broad domain of technological interest, emulsions are raising a variety of fundamental questions at the frontier between physics and chemistry. This book aims to give an overview of the most recent advances in emulsion science. The basic principles, covering aspects of emulsions from their preparation to their destruction, are presented in close relation to both the fundamental physics and the applications of these materials. The book is intended to help scientists and engineers in formulating new materials by giving them the basics of emulsion science.

  1. Principles & practice of physics

    CERN Document Server

    Mazur, Eric; Dourmashkin, Peter A; Pedigo, Daryl; Bieniek, Ronald J

    2015-01-01

    Putting physics first Based on his storied research and teaching, Eric Mazur's Principles & Practice of Physics builds an understanding of physics that is both thorough and accessible. Unique organization and pedagogy allow you to develop a true conceptual understanding of physics alongside the quantitative skills needed in the course. • New learning architecture: The book is structured to help you learn physics in an organized way that encourages comprehension and reduces distraction. • Physics on a contemporary foundation: Traditional texts delay the introduction of ideas that we now see as unifying and foundational. This text builds physics on those unifying foundations, helping you to develop an understanding that is stronger, deeper, and fundamentally simpler. • Research-based instruction: This text uses a range of research-based instructional techniques to teach physics in the most effective manner possible. The result is a groundbreaking book that puts physics first, thereby making it more accessible to...

  2. Kepler and Mach's Principle

    Science.gov (United States)

    Barbour, Julian

    The definitive ideas that led to the creation of general relativity crystallized in Einstein's thinking during 1912 while he was in Prague. At the centenary meeting held there to mark the breakthrough, I was asked to talk about earlier great work of relevance to dynamics done at Prague, above all by Kepler and Mach. The main topics covered in this chapter are: some little known but basic facts about the planetary motions; the conceptual framework and most important discoveries of Ptolemy and Copernicus; the complete change of concepts that Kepler introduced and their role in his discoveries; the significance of them in Newton's work; Mach's realization that Kepler's conceptual revolution needed further development to free Newton's conceptual world of the last vestiges of the purely geometrical Ptolemaic world view; and the precise formulation of Mach's principle required to place GR correctly in the line of conceptual and technical evolution that began with the ancient Greek astronomers.

  3. The quantum gauge principle

    CERN Document Server

    Graudenz, Dirk

    1996-01-01

    We consider the evolution of quantum fields on a classical background space-time, formulated in the language of differential geometry. Time evolution along the worldlines of observers is described by parallel transport operators in an infinite-dimensional vector bundle over the space-time manifold. The time evolution equation and the dynamical equations for the matter fields are invariant under an arbitrary local change of frames along the restriction of the bundle to the worldline of an observer, thus implementing a "quantum gauge principle". We derive dynamical equations for the connection and a complex scalar quantum field based on a gauge field action. In the limit of vanishing curvature of the vector bundle, we recover the standard equation of motion of a scalar field in a curved background space-time.

  4. Fault Management Guiding Principles

    Science.gov (United States)

    Newhouse, Marilyn E.; Friberg, Kenneth H.; Fesq, Lorraine; Barley, Bryan

    2011-01-01

    Regardless of the mission type: deep space or low Earth orbit, robotic or human spaceflight, Fault Management (FM) is a critical aspect of NASA space missions. As the complexity of space missions grows, the complexity of supporting FM systems increases in turn. Data on recent NASA missions show that development of FM capabilities is a common driver for significant cost overruns late in the project development cycle. Efforts to understand the drivers behind these cost overruns, spearheaded by NASA's Science Mission Directorate (SMD), indicate that they are primarily caused by the growing complexity of FM systems and the lack of maturity of FM as an engineering discipline. NASA can and does develop FM systems that effectively protect mission functionality and assets. The cost growth results from a lack of FM planning and emphasis by project management, as well as from the maturity of FM as an engineering discipline, which lags behind the maturity of other engineering disciplines. As a step towards controlling the cost growth associated with FM development, SMD has commissioned a multi-institution team to develop a practitioner's handbook representing best practices for the end-to-end processes involved in engineering FM systems. While currently concentrating primarily on FM for science missions, the expectation is that this handbook will grow into a NASA-wide handbook, serving as a companion to the NASA Systems Engineering Handbook. This paper presents a snapshot of the principles that have been identified to guide FM development from cradle to grave. The principles range from considerations for integrating FM into the project and SE organizational structure, to the relationship between FM designs and mission risk, to the use of the various tools of FM (e.g., redundancy) to meet the FM goal of protecting mission functionality and assets.

  5. Dynamical principles in neuroscience

    International Nuclear Information System (INIS)

    Rabinovich, Mikhail I.; Varona, Pablo; Selverston, Allen I.; Abarbanel, Henry D. I.

    2006-01-01

    Dynamical modeling of neural systems and brain functions has a history of success over the last half century. This includes, for example, the explanation and prediction of some features of neural rhythmic behaviors. Many interesting dynamical models of learning and memory based on physiological experiments have been suggested over the last two decades. Dynamical models even of consciousness now exist. Usually these models and results are based on traditional approaches and paradigms of nonlinear dynamics including dynamical chaos. Neural systems are, however, an unusual subject for nonlinear dynamics for several reasons: (i) Even the simplest neural network, with only a few neurons and synaptic connections, has an enormous number of variables and control parameters. These make neural systems adaptive and flexible, and are critical to their biological function. (ii) In contrast to traditional physical systems described by well-known basic principles, first principles governing the dynamics of neural systems are unknown. (iii) Many different neural systems exhibit similar dynamics despite having different architectures and different levels of complexity. (iv) The network architecture and connection strengths are usually not known in detail and therefore the dynamical analysis must, in some sense, be probabilistic. (v) Since nervous systems are able to organize behavior based on sensory inputs, the dynamical modeling of these systems has to explain the transformation of temporal information into combinatorial or combinatorial-temporal codes, and vice versa, for memory and recognition. In this review these problems are discussed in the context of addressing the stimulating questions: What can neuroscience learn from nonlinear dynamics, and what can nonlinear dynamics learn from neuroscience?

  6. Principles of rockbolting design

    Directory of Open Access Journals (Sweden)

    Charlie C. Li

    2017-06-01

    This article introduces the principles of underground rockbolting design. The items discussed include underground loading conditions, the natural pressure zone around an underground opening, design methodologies, selection of rockbolt types, determination of bolt length and spacing, factor of safety, and compatibility between support elements. Different types of rockbolting used in engineering practice are also presented. The traditional principle of selecting strong rockbolts is valid only in conditions of low in situ stresses in the rock mass. Energy-absorbing rockbolts are preferred in the case of high in situ stresses. A natural pressure arch is formed in the rock at a certain distance behind the tunnel wall. Rockbolts should be long enough to reach the natural pressure arch when the failure zone is small. The bolt length should be at least 1 m beyond the failure zone. In the case of a vast failure zone, tightly spaced short rockbolts are installed to establish an artificial pressure arch within the failure zone and long cables are anchored on the natural pressure arch. In this case, the rockbolts are usually less than 3 m long in mine drifts, but can be up to 7 m in large-scale rock caverns. Bolt spacing is more important than bolt length in the case of establishing an artificial pressure arch. In addition to the factor of safety, the maximum allowable displacement in the tunnel and the ultimate displacement capacity of rockbolts must also be taken into account in the design. Finally, rockbolts should be compatible with other support elements in the same support system in terms of displacement and energy absorption capacities.

  7. A simple depression-filling method for raster and irregular elevation ...

    Indian Academy of Sciences (India)

    DEMs) in order to extract morphological properties of land surfaces. Almost all rely on depression filling to facilitate drainage analysis. This study proposes an intuitive and relatively simple depression-filling algorithm, which is readily applicable ...
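
    One common realization of such an algorithm is priority-flood: seed a min-heap with the raster border, then grow inward, never letting a cell sit below the lowest spill level that reaches it. The sketch below is this generic approach (the article's own method may differ in detail):

```python
import heapq

def fill_depressions(dem):
    """Priority-flood depression filling for a small raster DEM."""
    rows, cols = len(dem), len(dem[0])
    filled = [row[:] for row in dem]
    visited = [[False] * cols for _ in range(rows)]
    heap = []
    for r in range(rows):                     # seed with the border cells
        for c in range(cols):
            if r in (0, rows - 1) or c in (0, cols - 1):
                heapq.heappush(heap, (filled[r][c], r, c))
                visited[r][c] = True
    while heap:
        level, r, c = heapq.heappop(heap)
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and not visited[nr][nc]:
                visited[nr][nc] = True
                filled[nr][nc] = max(filled[nr][nc], level)  # raise pit cells
                heapq.heappush(heap, (filled[nr][nc], nr, nc))
    return filled

dem = [[9, 9, 9, 9],
       [9, 1, 2, 9],
       [9, 2, 1, 9],
       [9, 9, 8, 9]]
print(fill_depressions(dem))   # inner pit is raised to its spill level, 8
```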

  8. Computing a single cell in the overlay of two simple polygons

    NARCIS (Netherlands)

    Berg, M. de; Devillers, O.; Dobrindt, K.T.G.; Schwarzkopf, O.

    1997-01-01

    This note combines the lazy randomized incremental construction scheme with the technique of "connectivity acceleration" to obtain an O(n (log* n)^2) time randomized algorithm to compute a single face in the overlay of two simple polygons in the plane.

  9. Pure field theories and MACSYMA algorithms

    Science.gov (United States)

    Ament, W. S.

    1977-01-01

    A pure field theory attempts to describe physical phenomena through singularity-free solutions of field equations resulting from an action principle. The physics goes into forming the action principle and interpreting specific results. Algorithms for the intervening mathematical steps are sketched. Vacuum general relativity is a pure field theory, serving as a model and providing checks for generalizations. The fields of general relativity are the 10 components of a symmetric Riemannian metric tensor; those of the Einstein-Straus generalization are the 16 components of a nonsymmetric one. Algebraic properties are exploited in top-level MACSYMA commands toward performing some of the algorithms of that generalization. The light cone for the theory as left by Einstein and Straus is found, and simplifications of that theory are discussed.

  10. A Data-Guided Lexisearch Algorithm for the Asymmetric Traveling Salesman Problem

    Directory of Open Access Journals (Sweden)

    Zakir Hussain Ahmed

    2011-01-01

    A simple lexisearch algorithm that uses the path representation method for the asymmetric traveling salesman problem (ATSP) is proposed, along with an illustrative example, to obtain an exact optimal solution to the problem. Then a data-guided lexisearch algorithm is presented. First, the cost matrix of the problem is transposed, depending on the variance of its rows and columns, and then the simple lexisearch algorithm is applied. It is shown that this minor preprocessing of the data before the simple lexisearch algorithm is applied improves the computational time substantially. The efficiency of our algorithms has been examined against two existing algorithms for some TSPLIB and random instances of various sizes. The results show remarkably better performance of our algorithms, especially our data-guided algorithm.
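
    The data-guided step admits a compact sketch. The decision rule below (transpose when column variance exceeds row variance) is our reading of the idea, not necessarily the paper's exact criterion; an optimal tour on the transposed matrix is simply a tour of the original traversed in reverse:

```python
def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def maybe_transpose(cost):
    """Transpose an ATSP cost matrix if that looks easier for lexisearch."""
    n = len(cost)
    row_var = sum(variance([cost[i][j] for j in range(n) if j != i])
                  for i in range(n))
    col_var = sum(variance([cost[i][j] for i in range(n) if i != j])
                  for j in range(n))
    if col_var > row_var:        # assumed rule, for illustration only
        return [[cost[j][i] for j in range(n)] for i in range(n)], True
    return cost, False

cost = [[0, 3, 9],
        [7, 0, 2],
        [5, 8, 0]]
print(maybe_transpose(cost))
```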

  11. Minimising Computational Complexity of the RRT Algorithm

    DEFF Research Database (Denmark)

    Svenstrup, Mikael; Bak, Thomas; Andersen, Hans Jørgen

    2011-01-01

    method is Rapidly-exploring Random Trees (RRT's). One problem with this method is the nearest neighbour search time, which grows significantly when adding a large number of vertices. We propose an algorithm which decreases the computation time, such that more vertices can be added in the same amount...... of time to generate better trajectories. The algorithm is based on subdividing the configuration space into boxes, where only specific boxes needs to be searched to find the nearest neighbour. It is shown that the computational complexity is lowered from a theoretical point of view. The result...... is an algorithm that can provide better trajectories within a given time period, or alternatively compute trajectories faster. In simulation the algorithm is verified for a simple RRT implementation and in a more specific case where a robot has to plan a path through a human inhabited environment....
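
    A minimal sketch of the box-subdivision idea (ours, in 2-D with uniform cells; the paper's scheme is more elaborate): hash each vertex into a grid cell and answer nearest-neighbour queries by scanning rings of cells outward from the query point, stopping once no closer vertex can exist.

```python
import math
from collections import defaultdict

class GridNN:
    """Grid-bucketed nearest neighbour for 2-D points (tree must be non-empty)."""

    def __init__(self, cell=1.0):
        self.cell = cell
        self.buckets = defaultdict(list)

    def _key(self, p):
        return (int(p[0] // self.cell), int(p[1] // self.cell))

    def add(self, p):
        self.buckets[self._key(p)].append(p)

    def nearest(self, q):
        kx, ky = self._key(q)
        best, best_d = None, float('inf')
        r = 0
        while True:
            # Cells whose index is exactly r steps away (Chebyshev ring).
            ring = [(x, y) for x in range(kx - r, kx + r + 1)
                           for y in range(ky - r, ky + r + 1)
                    if max(abs(x - kx), abs(y - ky)) == r]
            for key in ring:
                for p in self.buckets.get(key, []):
                    d = math.dist(p, q)
                    if d < best_d:
                        best, best_d = p, d
            # After scanning ring r, every point within r*cell is covered.
            if best is not None and best_d <= r * self.cell:
                return best
            r += 1

nn = GridNN(cell=1.0)
for p in [(0.2, 0.3), (2.5, 2.1), (5.0, 4.8)]:
    nn.add(p)
print(nn.nearest((2.0, 2.0)))   # -> (2.5, 2.1)
```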

  12. Swarm-based algorithm for phase unwrapping.

    Science.gov (United States)

    da Silva Maciel, Lucas; Albertazzi, Armando G

    2014-08-20

    A novel algorithm for phase unwrapping based on swarm intelligence is proposed. The algorithm was designed based on three main goals: maximum coverage of reliable information, focused effort for better efficiency, and reliable unwrapping. Experiments were performed, and a new agent was designed to follow a simple set of five rules in order to collectively achieve these goals. These rules consist of random walking for unwrapping and searching, ambiguity evaluation by comparing unwrapped regions, and a replication behavior responsible for the good distribution of agents throughout the image. The results were comparable with the results from established methods. The swarm-based algorithm was able to suppress ambiguities better than the flood-fill algorithm without relying on lengthy processing times. In addition, future developments such as parallel processing and better-quality evaluation present great potential for the proposed method.

  13. Principles of precision medicine in stroke.

    Science.gov (United States)

    Hinman, Jason D; Rost, Natalia S; Leung, Thomas W; Montaner, Joan; Muir, Keith W; Brown, Scott; Arenillas, Juan F; Feldmann, Edward; Liebeskind, David S

    2017-01-01

    The era of precision medicine has arrived and conveys tremendous potential, particularly for stroke neurology. The diagnosis of stroke, its underlying aetiology, theranostic strategies, recurrence risk and path to recovery are populated by a series of highly individualised questions. Moreover, the phenotypic complexity of a clinical diagnosis of stroke makes a simple genetic risk assessment only partially informative on an individual basis. The guiding principles of precision medicine in stroke underscore the need to identify, value, organise and analyse the multitude of variables obtained from each individual to generate a precise approach to optimise cerebrovascular health. Existing data may be leveraged with novel technologies, informatics and practical clinical paradigms to apply these principles in stroke and realise the promise of precision medicine. Importantly, precision medicine in stroke will only be realised once efforts to collect, value and synthesise the wealth of data collected in clinical trials and routine care start. Stroke theranostics, the ultimate vision of synchronising tailored therapeutic strategies based on specific diagnostic data, demands cerebrovascular expertise on big data approaches to clinically relevant paradigms. This review considers such challenges and delineates the principles on a roadmap for rational application of precision medicine to stroke and cerebrovascular health.

  14. Core principles of evolutionary medicine

    Science.gov (United States)

    Grunspan, Daniel Z; Nesse, Randolph M; Barnes, M Elizabeth; Brownell, Sara E

    2018-01-01

    Abstract Background and objectives Evolutionary medicine is a rapidly growing field that uses the principles of evolutionary biology to better understand, prevent and treat disease, and that uses studies of disease to advance basic knowledge in evolutionary biology. Over-arching principles of evolutionary medicine have been described in publications, but our study is the first to systematically elicit core principles from a diverse panel of experts in evolutionary medicine. These principles should be useful to advance recent recommendations made by The Association of American Medical Colleges and the Howard Hughes Medical Institute to make evolutionary thinking a core competency for pre-medical education. Methodology The Delphi method was used to elicit and validate a list of core principles for evolutionary medicine. The study included four surveys administered in sequence to 56 expert panelists. The initial open-ended survey created a list of possible core principles; the three subsequent surveys winnowed the list and assessed the accuracy and importance of each principle. Results Fourteen core principles elicited at least 80% of the panelists to agree or strongly agree that they were important core principles for evolutionary medicine. These principles overlapped with concepts discussed in other articles on key concepts in evolutionary medicine. Conclusions and implications This set of core principles will be helpful for researchers and instructors in evolutionary medicine. We recommend that evolutionary medicine instructors use the list of core principles to construct learning goals. Evolutionary medicine is a young field, so this list of core principles will likely change as the field develops further. PMID:29493660

  15. Understanding molecular simulation from algorithms to applications

    CERN Document Server

    Frenkel, Daan

    2001-01-01

    Understanding Molecular Simulation: From Algorithms to Applications explains the physics behind the "recipes" of molecular simulation for materials science. Computer simulators are continuously confronted with questions concerning the choice of a particular technique for a given application. A wide variety of tools exist, so the choice of technique requires a good understanding of the basic principles. More importantly, such understanding may greatly improve the efficiency of a simulation program. The implementation of simulation methods is illustrated in pseudocodes and their practic…
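
    As an illustration of the kind of method such pseudocodes describe, here is a minimal velocity-Verlet integration step in Python. This is a generic textbook sketch, not code from the book; the function name and the `forces` callback are illustrative assumptions.

        def velocity_verlet_step(x, v, f, dt, m, forces):
            # Advance positions using current velocities and forces.
            x = [xi + vi * dt + 0.5 * (fi / m) * dt * dt
                 for xi, vi, fi in zip(x, v, f)]
            # Recompute forces at the new positions.
            f_new = forces(x)
            # Advance velocities with the average of old and new forces.
            v = [vi + 0.5 * ((fi + fni) / m) * dt
                 for vi, fi, fni in zip(v, f, f_new)]
            return x, v, f_new

    The scheme is time-reversible and conserves energy well over long runs, which is why texts of this kind present it as the default molecular dynamics integrator.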

  16. Social signals and algorithmic trading of Bitcoin

    OpenAIRE

    Garcia, David; Schweitzer, Frank

    2015-01-01

    The availability of data on digital traces is growing to unprecedented sizes, but inferring actionable knowledge from large-scale data is far from trivial. This is especially important for computational finance, where digital traces of human behavior offer a great potential to drive trading strategies. We contribute to this by providing a consistent approach that integrates various data sources in the design of algorithmic traders. This allows us to derive insights into the principles be...

  17. Approximate iterative algorithms

    CERN Document Server

    Almudevar, Anthony Louis

    2014-01-01

    Iterative algorithms often rely on approximate evaluation techniques, which may include statistical estimation, computer simulation or functional approximation. This volume presents methods for the study of approximate iterative algorithms, providing tools for the derivation of error bounds and convergence rates, and for the optimal design of such algorithms. Techniques of functional analysis are used to derive analytical relationships between approximation methods and convergence properties for general classes of algorithms. This work provides the necessary background in functional analysis a…
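
    A representative result of this type, stated here from standard fixed-point theory rather than from the book itself: if T is a contraction with modulus L < 1 and each iterate is computed only approximately, with \|x_{k+1} - T(x_k)\| \le \epsilon, then

        \limsup_{k \to \infty} \|x_k - x^*\| \;\le\; \frac{\epsilon}{1 - L},

    where x^* is the exact fixed point; the per-step approximation error enters the asymptotic bound amplified by 1/(1 - L).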

  18. Autonomous Star Tracker Algorithms

    DEFF Research Database (Denmark)

    Betto, Maurizio; Jørgensen, John Leif; Kilsgaard, Søren

    1998-01-01

    Proposal, in response to an ESA R.f.P., to design algorithms for autonomous star tracker operations. The proposal also included the development of a star tracker breadboard to test the algorithms' performances.

  19. A Simple Demonstration of Atomic and Molecular Orbitals Using Circular Magnets

    Science.gov (United States)

    Chakraborty, Maharudra; Mukhopadhyay, Subrata; Das, Ranendu Sekhar

    2014-01-01

    A quite simple and inexpensive technique is described here to represent the approximate shapes of atomic orbitals and the molecular orbitals formed by them following the principles of the linear combination of atomic orbitals (LCAO) method. Molecular orbitals of a few simple molecules can also be pictorially represented. Instructors can employ the…

  20. An improved Landauer principle with finite-size corrections

    International Nuclear Information System (INIS)

    Reeb, David; Wolf, Michael M

    2014-01-01

    Landauer's principle relates entropy decrease and heat dissipation during logically irreversible processes. Most theoretical justifications of Landauer's principle either use thermodynamic reasoning or rely on specific models based on arguable assumptions. Here, we aim at a general and minimal setup to formulate Landauer's principle in precise terms. We provide a simple and rigorous proof of an improved version of the principle, which is formulated in terms of an equality rather than an inequality. The proof is based on quantum statistical mechanics concepts rather than on thermodynamic argumentation. From this equality version, we obtain explicit improvements of Landauer's bound that depend on the effective size of the thermal reservoir and reduce to Landauer's bound only for infinite-sized reservoirs. (paper)
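
    For orientation, the classical inequality reads \beta Q \ge \Delta S: the heat Q dissipated into a reservoir at inverse temperature \beta is bounded below by the entropy decrease \Delta S of the system, giving the familiar k_B T \ln 2 per erased bit. The equality version described here takes, schematically, the form

        \beta Q \;=\; \Delta S \;+\; I(S' : R') \;+\; D(\rho'_R \,\|\, \rho_R),

    where the mutual-information and relative-entropy terms are both non-negative and vanish only in idealised limits such as an infinite reservoir; the precise statement and notation should be taken from the paper itself.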

  1. A verified LLL algorithm

    NARCIS (Netherlands)

    Divasón, Jose; Joosten, Sebastiaan; Thiemann, René; Yamada, Akihisa

    2018-01-01

    The Lenstra-Lenstra-Lovász basis reduction algorithm, also known as the LLL algorithm, is an algorithm to find a basis with short, nearly orthogonal vectors of an integer lattice. Thereby, it can also be seen as an approximation to solve the shortest vector problem (SVP), which is an NP-hard problem.
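
    For readers unfamiliar with the algorithm being verified, the following is a minimal, unverified textbook sketch of LLL reduction in Python, using exact rational arithmetic and the classical parameter delta = 3/4. For clarity it recomputes the Gram-Schmidt data from scratch after every change, which real implementations avoid.

        from fractions import Fraction

        def dot(u, v):
            return sum(x * y for x, y in zip(u, v))

        def gso(b):
            # Gram-Schmidt orthogonalisation (without normalisation),
            # returning the orthogonal vectors and the mu coefficients.
            n = len(b)
            ortho = []
            mu = [[Fraction(0)] * n for _ in range(n)]
            for i in range(n):
                v = list(b[i])
                for j in range(i):
                    mu[i][j] = dot(b[i], ortho[j]) / dot(ortho[j], ortho[j])
                    v = [x - mu[i][j] * y for x, y in zip(v, ortho[j])]
                ortho.append(v)
            return ortho, mu

        def lll(basis, delta=Fraction(3, 4)):
            b = [[Fraction(x) for x in v] for v in basis]
            n = len(b)
            ortho, mu = gso(b)
            k = 1
            while k < n:
                # Size-reduce b_k against b_{k-1}, ..., b_0.
                for j in range(k - 1, -1, -1):
                    q = round(mu[k][j])
                    if q:
                        b[k] = [x - q * y for x, y in zip(b[k], b[j])]
                        ortho, mu = gso(b)
                # Lovasz condition: accept b_k, or swap with b_{k-1} and backtrack.
                lhs = dot(ortho[k], ortho[k])
                rhs = (delta - mu[k][k - 1] ** 2) * dot(ortho[k - 1], ortho[k - 1])
                if lhs >= rhs:
                    k += 1
                else:
                    b[k - 1], b[k] = b[k], b[k - 1]
                    ortho, mu = gso(b)
                    k = max(k - 1, 1)
            return [[int(x) for x in v] for v in b]

    For example, lll([[1, 1, 1], [-1, 0, 2], [3, 5, 6]]) returns a basis of the same lattice whose vectors are short and nearly orthogonal.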

  2. A simple solution to type specialization

    DEFF Research Database (Denmark)

    Danvy, Olivier

    1998-01-01

    Partial evaluation specializes terms, but traditionally this specialization does not apply to the type of these terms. As a result, specializing, e.g., an interpreter written in a typed language, which requires a “universal” type to encode expressible values, yields residual programs with type tags all over. Neil Jones has stated that getting rid of these type tags was an open problem, despite possible solutions such as Torben Mogensen's “constructor specialization.” To solve this problem, John Hughes has proposed a new paradigm for partial evaluation, “Type Specialization”, based on type inference instead of being based on symbolic interpretation. Type Specialization is very elegant in principle but it also appears non-trivial in practice. Stating the problem in terms of types instead of in terms of type encodings suggests a very simple type-directed solution, namely, to use a projection…

  4. Simple Resonance Hierarchy for Surmounting Quantum Uncertainty

    International Nuclear Information System (INIS)

    Amoroso, Richard L.

    2010-01-01

    For a hundred years, violating or surmounting the Quantum Uncertainty Principle has remained a Holy Grail of both theoretical and empirical physics. Utilizing an operationally completed form of Quantum Theory cast in a string-theoretic Higher Dimensional (HD) form of Dirac covariant polarized vacuum with a complex Einstein energy-dependent spacetime metric, M4 ± C4, with sufficient degrees of freedom to be causally free of the local quantum state, we present a simple empirical model for ontologically surmounting the phenomenology of uncertainty through a Sagnac Effect RF-pulsed Laser Oscillated Vacuum Energy Resonance hierarchy cast within an extended form of a Wheeler-Feynman-Cramer Transactional Calabi-Yau mirror-symmetric spacetime backcloth.

  5. First principles simulations

    International Nuclear Information System (INIS)

    Palummo, M.; Reining, L.; Ballone, P.

    1993-01-01

    In this paper we outline the major features of the "ab initio" simulation scheme of Car and Parrinello, focusing on the physical ideas and computational details at the basis of its efficiency and success. We briefly review the main applications of the method. We discuss the limitations of the standard scheme, as well as recent developments proposed in order to extend the reach of the method. Moreover, we consider two specific subjects in more detail. First, we describe a simple improvement (Gradient Corrections) on the basic approximation of the "ab initio" simulation, i.e. the Local Density Approximation. These corrections can be easily and efficiently included in the Car-Parrinello code, bringing computed structural and cohesive properties significantly closer to their experimental values. Finally, we discuss the choice of the pseudopotential, with special attention to the possibilities and limitations of the last generation of soft pseudopotentials. (orig.)
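
    For reference, the central object of the Car-Parrinello scheme, stated here from standard references rather than from this paper, is the fictitious Lagrangian

        \mathcal{L}_{\mathrm{CP}} = \mu \sum_i \int |\dot{\psi}_i(\mathbf{r})|^2 \, d\mathbf{r}
            + \frac{1}{2} \sum_I M_I \dot{\mathbf{R}}_I^2
            - E[\{\psi_i\}, \{\mathbf{R}_I\}]
            + \sum_{ij} \Lambda_{ij} \Big( \int \psi_i^* \psi_j \, d\mathbf{r} - \delta_{ij} \Big),

    in which the electronic orbitals \psi_i are propagated as classical fields with a fictitious mass \mu alongside the nuclear coordinates \mathbf{R}_I, the Lagrange multipliers \Lambda_{ij} enforcing orthonormality.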

  6. Principles of correlation counting

    International Nuclear Information System (INIS)

    Mueller, J.W.

    1975-01-01

    A review is given of the various applications which have been made of correlation techniques in the field of nuclear physics, in particular for absolute counting. Whereas in most cases the usual coincidence method will be preferable for its simplicity, correlation counting may be the only possible approach in cases where the two radiations of the cascade cannot be well separated or where there is a long-lived intermediate state. The measurement of half-lives and of count rates of spurious pulses is also briefly discussed. The various experimental situations lead to different ways in which the correlation method is best applied (covariance technique with one or with two detectors, application of correlation functions, etc.). Formulae are given for some simple model cases, neglecting dead-time corrections.
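
    The simplest such model case, the classical beta-gamma coincidence relation quoted here for orientation, shows why these techniques permit absolute counting. With source activity A and detector efficiencies \varepsilon_\beta and \varepsilon_\gamma,

        N_\beta = A \varepsilon_\beta, \qquad
        N_\gamma = A \varepsilon_\gamma, \qquad
        N_c = A \varepsilon_\beta \varepsilon_\gamma
        \quad\Longrightarrow\quad
        A = \frac{N_\beta N_\gamma}{N_c},

    so the activity follows from the three measured rates alone, the unknown efficiencies cancelling.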

  7. Principles of visual attention

    DEFF Research Database (Denmark)

    Bundesen, Claus; Habekost, Thomas

    The nature of attention is one of the oldest and most central problems in psychology. A huge amount of research has been produced on this subject in the last half century, especially on attention in the visual modality, but a general explanation has remained elusive. Many still view attention research as a field that is fundamentally fragmented. This book takes a different perspective and presents a unified theory of visual attention: the TVA model. The TVA model explains the many aspects of visual attention by just two mechanisms for selection of information: filtering and pigeonholing. These mechanisms are described in a set of simple equations, which allow TVA to mathematically model a large number of classical results in the attention literature. The theory explains psychological and neuroscientific findings by the same equations; TVA is a complete theory of visual attention, linking mind…
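
    The best known of these equations is TVA's rate equation, quoted here in its standard form: the rate v(x, i) at which object x is encoded as belonging to category i is

        v(x, i) \;=\; \eta(x, i)\, \beta_i\, \frac{w_x}{\sum_{z \in S} w_z},

    where \eta(x, i) is the sensory evidence, \beta_i the decision bias implementing pigeonholing, and the attentional weights w_x, normalised over all objects z in the visual field S, implementing filtering.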

  8. Advances in Cryogenic Principles

    Science.gov (United States)

    Barron, R. F.

    During the past 50 years, the use of digital computers has significantly influenced the design and analysis of cryogenic systems. At the time when the first Cryogenic Engineering Conference was held, thermodynamic data were presented in graphical or tabular form (the "steam table" format), whereas thermodynamic data for cryogenic system design are computer-generated today. The thermal analysis of cryogenic systems in the 1950s involved analytical solutions, graphical solutions, and relatively simple finite-difference approaches. These approaches have been supplanted by finite-element numerical programs, which readily solve complicated thermal problems that could not be solved easily using the methods of the 1950s. In distillation column design, the use of the McCabe-Thiele graphical method for determining the number of theoretical plates has been replaced by numerical methods that allow consideration of several different components in the feed and product streams.

  9. Archimedes' principle in general coordinates

    International Nuclear Information System (INIS)

    Ridgely, Charles T

    2010-01-01

    Archimedes' principle is well known to state that a body submerged in a fluid is buoyed up by a force equal to the weight of the fluid displaced by the body. Herein, Archimedes' principle is derived from first principles by using conservation of the stress-energy-momentum tensor in general coordinates. The resulting expression for the force is applied in Schwarzschild coordinates and in rotating coordinates. Using Schwarzschild coordinates for the case of a spherical mass suspended within a perfect fluid leads to the familiar expression of Archimedes' principle. Using rotating coordinates produces an expression for a centrifugal buoyancy force that agrees with accepted theory. It is then argued that Archimedes' principle ought to be applicable to non-gravitational phenomena, as well. Conservation of the energy-momentum tensor is then applied to electromagnetic phenomena. It is shown that a charged body submerged in a charged medium experiences a buoyancy force in accordance with an electromagnetic analogue of Archimedes' principle.

  10. Archimedes' principle in general coordinates

    Science.gov (United States)

    Ridgely, Charles T.

    2010-05-01

    Archimedes' principle is well known to state that a body submerged in a fluid is buoyed up by a force equal to the weight of the fluid displaced by the body. Herein, Archimedes' principle is derived from first principles by using conservation of the stress-energy-momentum tensor in general coordinates. The resulting expression for the force is applied in Schwarzschild coordinates and in rotating coordinates. Using Schwarzschild coordinates for the case of a spherical mass suspended within a perfect fluid leads to the familiar expression of Archimedes' principle. Using rotating coordinates produces an expression for a centrifugal buoyancy force that agrees with accepted theory. It is then argued that Archimedes' principle ought to be applicable to non-gravitational phenomena, as well. Conservation of the energy-momentum tensor is then applied to electromagnetic phenomena. It is shown that a charged body submerged in a charged medium experiences a buoyancy force in accordance with an electromagnetic analogue of Archimedes' principle.
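
    For reference, the familiar expression recovered in the weak-field Schwarzschild case is the textbook form of Archimedes' principle,

        F_{\mathrm{buoyancy}} = \rho_{\mathrm{fluid}}\, V\, g,

    the weight of the displaced fluid of density \rho_{\mathrm{fluid}} and volume V in a gravitational field g.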

  11. Executive Financial Reporting: Seven Principles to Use in Developing Effective Reports.

    Science.gov (United States)

    Jenkins, William A.; Fischer, Mary

    1991-01-01

    Higher education institution business officers need to follow principles of presentation, judgment, and measurement in developing effective executive financial reports. Principles include (1) keep the statement simple; (2) be consistent in reporting from year to year; (3) determine user needs and interests; (4) limit data; (5) provide trend lines;…

  12. Updated treatment algorithm of pulmonary arterial hypertension.

    Science.gov (United States)

    Galiè, Nazzareno; Corris, Paul A; Frost, Adaani; Girgis, Reda E; Granton, John; Jing, Zhi Cheng; Klepetko, Walter; McGoon, Michael D; McLaughlin, Vallerie V; Preston, Ioana R; Rubin, Lewis J; Sandoval, Julio; Seeger, Werner; Keogh, Anne

    2013-12-24

    The demands on a pulmonary arterial hypertension (PAH) treatment algorithm are multiple and in some ways conflicting. The treatment algorithm usually includes different types of recommendations with varying degrees of scientific evidence. In addition, the algorithm is required to be comprehensive but not too complex, informative yet simple and straightforward. The types of information in the treatment algorithm are heterogeneous, including clinical, hemodynamic, medical, interventional, pharmacological and regulatory recommendations. Stakeholders (or users) including physicians from various specialties and with variable expertise in PAH, nurses, patients and patients' associations, healthcare providers, regulatory agencies and industry are often interested in the PAH treatment algorithm for different reasons. These are the considerable challenges faced when proposing appropriate updates to the current evidence-based treatment algorithm. The current treatment algorithm may be divided into 3 main areas: 1) general measures, supportive therapy, referral strategy, acute vasoreactivity testing and chronic treatment with calcium channel blockers; 2) initial therapy with approved PAH drugs; and 3) clinical response to the initial therapy, combination therapy, balloon atrial septostomy, and lung transplantation. All three sections will be revisited, highlighting information newly available in the past 5 years and proposing updates where appropriate. The European Society of Cardiology grades of recommendation and levels of evidence will be adopted to rank the proposed treatments. Copyright © 2013 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.

  13. Nature-inspired optimization algorithms

    CERN Document Server

    Yang, Xin-She

    2014-01-01

    Nature-Inspired Optimization Algorithms provides a systematic introduction to all major nature-inspired algorithms for optimization. The book's unified approach, balancing algorithm introduction, theoretical background and practical implementation, complements extensive literature with well-chosen case studies to illustrate how these algorithms work. Topics include particle swarm optimization, ant and bee algorithms, simulated annealing, cuckoo search, firefly algorithm, bat algorithm, flower algorithm, harmony search, algorithm analysis, constraint handling, hybrid methods, parameter tuning
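
    As a concrete illustration of one member of this family, here is a minimal particle swarm optimisation sketch in Python. It is a generic textbook variant, not code from the book; the inertia weight and acceleration coefficients are conventional defaults.

        import random

        def pso(f, dim, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
            # Minimise f over the box [lo, hi]^dim with a basic particle swarm.
            lo, hi = bounds
            pos = [[random.uniform(lo, hi) for _ in range(dim)]
                   for _ in range(n_particles)]
            vel = [[0.0] * dim for _ in range(n_particles)]
            pbest = [p[:] for p in pos]
            pbest_val = [f(p) for p in pos]
            g = min(range(n_particles), key=lambda i: pbest_val[i])
            gbest, gbest_val = pbest[g][:], pbest_val[g]
            for _ in range(iters):
                for i in range(n_particles):
                    for d in range(dim):
                        r1, r2 = random.random(), random.random()
                        # Inertia plus pulls towards personal and global bests.
                        vel[i][d] = (w * vel[i][d]
                                     + c1 * r1 * (pbest[i][d] - pos[i][d])
                                     + c2 * r2 * (gbest[d] - pos[i][d]))
                        pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
                    val = f(pos[i])
                    if val < pbest_val[i]:
                        pbest[i], pbest_val[i] = pos[i][:], val
                        if val < gbest_val:
                            gbest, gbest_val = pos[i][:], val
            return gbest, gbest_val

        # Example: minimise the sphere function in 5 dimensions.
        best, value = pso(lambda x: sum(t * t for t in x), dim=5, bounds=(-5.0, 5.0))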

  14. Pedagogical Principles in Online Teaching

    DEFF Research Database (Denmark)

    Beckmann, Suzanne C.; Uth Thomsen, Thyra; von Wallpach, Sylvia

    … of the seven pedagogical principles that govern the teaching at our university. We also present a case study that illustrates how both opportunities and challenges were met in two “first-mover” fully online courses during Fall 2014. The experiences from this case study are discussed in terms of the extent to which they met the pedagogical principles, and observations unrelated to the pedagogical principles are shared.

  15. Principle extremum of full action

    Directory of Open Access Journals (Sweden)

    Solomon I. Khmelnik

    2011-10-01

    A new variational principle, extremum of full action, is proposed, which extends the Lagrange formalism to dissipative systems. It is shown that this principle is applicable in electrical engineering, electrodynamics, mechanics and hydrodynamics, taking into account the friction forces. The proposed variational principle may be considered as a new formalism used as a universal method of deriving physical equations, and also as a method for solving these equations.
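
    For context, the Lagrange formalism that such a principle extends yields the Euler-Lagrange equations, quoted here in their standard form:

        \frac{d}{dt} \frac{\partial L}{\partial \dot{q}_i} - \frac{\partial L}{\partial q_i} = 0.

    These follow from stationarity of the action \int L \, dt and, unmodified, do not accommodate friction forces; the proposed principle is intended to remove exactly that restriction.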

  16. A Parallel Prefix Algorithm for Almost Toeplitz Tridiagonal Systems

    Science.gov (United States)

    Sun, Xian-He; Joslin, Ronald D.

    1995-01-01

    A compact scheme is a discretization scheme that is advantageous in obtaining highly accurate solutions. However, the resulting systems from compact schemes are tridiagonal systems that are difficult to solve efficiently on parallel computers. Considering the almost symmetric Toeplitz structure, a parallel algorithm, simple parallel prefix (SPP), is proposed. The SPP algorithm requires less memory than the conventional LU decomposition and is efficient on parallel machines. It consists of a prefix communication pattern and AXPY operations. Both the computation and the communication can be truncated without degrading the accuracy when the system is diagonally dominant. A formal accuracy study has been conducted to provide a simple truncation formula. Experimental results have been measured on a MasPar MP-1 SIMD machine and on a Cray 2 vector machine. Experimental results show that the simple parallel prefix algorithm is a good algorithm for symmetric, almost symmetric Toeplitz tridiagonal systems and for the compact scheme on high-performance computers.
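
    For orientation, the serial baseline that such parallel algorithms are measured against is the Thomas algorithm, the LU-style tridiagonal solve mentioned above. The sketch below is that baseline, not the SPP algorithm itself; the 1-4-1 Toeplitz system in the example is typical of compact schemes but is an illustrative assumption.

        def thomas(a, b, c, d):
            # Serial Thomas algorithm for a tridiagonal system.
            # a: sub-diagonal (a[0] unused), b: main diagonal,
            # c: super-diagonal (c[-1] unused), d: right-hand side.
            n = len(b)
            cp, dp = [0.0] * n, [0.0] * n
            cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
            for i in range(1, n):
                m = b[i] - a[i] * cp[i - 1]
                cp[i] = c[i] / m
                dp[i] = (d[i] - a[i] * dp[i - 1]) / m
            x = [0.0] * n
            x[-1] = dp[-1]
            for i in range(n - 2, -1, -1):
                x[i] = dp[i] - cp[i] * x[i + 1]
            return x

        # Example: a diagonally dominant 1-4-1 Toeplitz system.
        n = 8
        x = thomas([1.0] * n, [4.0] * n, [1.0] * n, [6.0] * n)

    The forward sweep is an inherently sequential recurrence, which is precisely what prefix-based reformulations such as SPP recast as parallel operations.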

  17. The iceberg principles

    CERN Document Server

    Spencer-Devlin, Marni

    2013-01-01

    The Iceberg Principles connect spirituality and science in a way that proves that the energy, which is the substance of the Universe, really is Love - not sweet, syrupy, candy-and-roses kind of love but the most powerful force in the Universe. Love without expression is meaningless. This is why the Big Bang was the only logical outcome. Love had to become reflected in dimensionality. With the Big Bang a 4:96 ratio was created between the dimensional and non-dimensional realms. This ratio between visibility and invisibility, the ratio of an iceberg, also applies to human beings. Only four percent of who we are is visible. Our physical DNA describes us but it does not define us. What defines us are our characteristics, our gifts, and talents - the spiritual DNA. This is invisible but makes up ninety-six percent of who we are. Our talents are not accidental; our life purpose is to express them. Just as the Universe emerges into dimensionality, constantly creating galaxies at millions of miles a minute, we are al...

  18. Principles of Bioremediation Assessment

    Science.gov (United States)

    Madsen, E. L.

    2001-12-01

    Although microorganisms have successfully and spontaneously maintained the biosphere since its inception, industrialized societies now produce undesirable chemical compounds at rates that outpace naturally occurring microbial detoxification processes. This presentation provides an overview of both the complexities of contaminated sites and methodological limitations in environmental microbiology that impede the documentation of biodegradation processes in the field. An essential step toward attaining reliable bioremediation technologies is the development of criteria which prove that microorganisms in contaminated field sites are truly active in metabolizing contaminants of interest. These criteria, which rely upon genetic, biochemical, physiological, and ecological principles and apply to both in situ and ex situ bioremediation strategies include: (i) internal conservative tracers; (ii) added conservative tracers; (iii) added radioactive tracers; (iv) added isotopic tracers; (v) stable isotopic fractionation patterns; (vi) detection of intermediary metabolites; (vii) replicated field plots; (viii) microbial metabolic adaptation; (ix) molecular biological indicators; (x) gradients of coreactants and/or products; (xi) in situ rates of respiration; (xii) mass balances of contaminants, coreactants, and products; and (xiii) computer modeling that incorporates transport and reactive stoichiometries of electron donors and acceptors. The ideal goal is achieving a quantitative understanding of the geochemistry, hydrogeology, and physiology of complex real-world systems.

  19. Neutron diffraction principles

    International Nuclear Information System (INIS)

    Granada, Jose R.

    1998-01-01

    The neutron as a research tool contributes at present to the understanding and development of almost all aspects of basic and applied science, despite the relative inaccessibility of neutron sources and the fact that even the most intense sources still provide relatively weak neutron beams. The initial discovery of these potentialities, and the first works that converted neutronic techniques into the powerful experimental tool they are today, were recognized by the award of the Nobel Prize in Physics 1994 to Professors B. Brockhouse and C. Shull. Unfortunately, these tools have been exploited neither in our country nor in the Latin American region, with the exception of very limited applications in Materials Science. Although the theoretical principles of neutron scattering techniques have been treated in texts and review works, the aim of this work is to present a compact set of expressions, oriented to sustain and explain the basic forms of most frequent use in the interpretation of experimental results. The formulation, mostly based on the initial chapters of the Ph.D. Thesis of G.J. Cuello (Instituto Balseiro, 1996), only considers nuclear scattering of neutrons for reasons of length, but it must be taken into account that experiments designed for the study of the magnetic properties of materials currently play a role of importance equal to those…
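
    The most basic expression of the kind collected in such a formulation is Bragg's law for diffraction from lattice planes of spacing d,

        n\lambda = 2 d \sin\theta,

    relating the neutron wavelength \lambda to the scattering angle \theta at which constructive interference occurs.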

  20. Great Lakes Literacy Principles

    Science.gov (United States)

    Fortner, Rosanne W.; Manzo, Lyndsey

    2011-03-01

    Lakes Superior, Huron, Michigan, Ontario, and Erie together form North America's Great Lakes, a region that contains 20% of the world's fresh surface water and is home to roughly one quarter of the U.S. population (Figure 1). Supporting a $4 billion sport fishing industry, plus $16 billion annually in boating, 1.5 million U.S. jobs, and $62 billion in annual wages directly, the Great Lakes form the backbone of a regional economy that is vital to the United States as a whole (see http://www.miseagrant.umich.edu/downloads/economy/11-708-Great-Lakes-Jobs.pdf). Yet the grandeur and importance of this freshwater resource are little understood, not only by people in the rest of the country but also by many in the region itself. To help address this lack of knowledge, the Centers for Ocean Sciences Education Excellence (COSEE) Great Lakes, supported by the U.S. National Science Foundation and the National Oceanic and Atmospheric Administration, developed literacy principles for the Great Lakes to serve as a guide for education of students and the public. These “Great Lakes Literacy Principles” represent an understanding of the Great Lakes' influences on society and society's influences on the Great Lakes.