WorldWideScience

Sample records for evolutionary computation techniques

  1. Evolutionary computation techniques a comparative perspective

    CERN Document Server

    Cuevas, Erik; Oliva, Diego

    2017-01-01

    This book compares the performance of various evolutionary computation (EC) techniques when they are faced with complex optimization problems extracted from different engineering domains. Particularly focusing on recently developed algorithms, it is designed so that each chapter can be read independently. Several comparisons among EC techniques have been reported in the literature; however, they all suffer from one limitation: their conclusions are based on the performance of popular evolutionary approaches over a set of synthetic functions with exact solutions and well-known behaviors, without considering the application context or including recent developments. In each chapter, a complex engineering optimization problem is posed, and then a particular EC technique is presented as the best choice, according to its search characteristics. Lastly, a set of experiments is conducted in order to compare its performance to other popular EC methods.

  2. Evolutionary Computation Techniques for Predicting Atmospheric Corrosion

    Directory of Open Access Journals (Sweden)

    Amine Marref

    2013-01-01

    Full Text Available Corrosion occurs in many engineering structures such as bridges, pipelines, and refineries; it destroys materials gradually and thus shortens their lifespan. It is therefore crucial to assess the structural integrity of engineering structures which are approaching or exceeding their designed lifespan in order to ensure their correct functioning, for example their carrying ability and safety. An understanding of corrosion and an ability to predict the corrosion rate of a material in a particular environment play a vital role in evaluating the residual life of the material. In this paper we investigate the use of genetic programming and genetic algorithms in the derivation of corrosion-rate expressions for steel and zinc. Genetic programming is used to automatically evolve corrosion-rate expressions, while a genetic algorithm is used to evolve the parameters of an already engineered corrosion-rate expression. We show that both evolutionary techniques yield corrosion-rate expressions with good accuracy.
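
    As a hedged illustration of the second approach described in this record (a genetic algorithm evolving the parameters of an already engineered corrosion-rate expression), the sketch below fits a power-law expression C(t) = A·t^n to a small synthetic data set. The expression form, the data and all settings are hypothetical placeholders, not taken from the paper.

import random

# Hypothetical "measured" corrosion data: (exposure time in years, corrosion loss).
# The power-law expression C(t) = A * t**n is assumed purely for illustration.
DATA = [(1, 22.0), (2, 30.5), (4, 41.8), (8, 57.0), (16, 78.3)]

def mse(params):
    a, n = params
    return sum((a * t**n - c) ** 2 for t, c in DATA) / len(DATA)

def random_individual():
    return [random.uniform(0.0, 100.0), random.uniform(0.0, 2.0)]

def mutate(ind, sigma=(2.0, 0.05)):
    return [g + random.gauss(0.0, s) for g, s in zip(ind, sigma)]

def crossover(p1, p2):
    w = random.random()
    return [w * a + (1.0 - w) * b for a, b in zip(p1, p2)]

def tournament(pop, k=3):
    return min(random.sample(pop, k), key=mse)

def evolve(pop_size=50, generations=200):
    pop = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        offspring = [mutate(crossover(tournament(pop), tournament(pop)))
                     for _ in range(pop_size)]
        # Elitist survivor selection: keep the best individuals seen so far.
        pop = sorted(pop + offspring, key=mse)[:pop_size]
    return pop[0]

if __name__ == "__main__":
    best = evolve()
    print("A = %.3f, n = %.3f, MSE = %.4f" % (best[0], best[1], mse(best)))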

  3. Practical Applications of Evolutionary Computation to Financial Engineering Robust Techniques for Forecasting, Trading and Hedging

    CERN Document Server

    Iba, Hitoshi

    2012-01-01

    “Practical Applications of Evolutionary Computation to Financial Engineering” presents state-of-the-art techniques in financial engineering using recent results in machine learning and evolutionary computation. This book bridges the gap between academics in computer science and traders, and explains the basic ideas of the proposed systems and the financial problems in ways that can be understood by readers without previous knowledge of either field. To cement the ideas discussed in the book, software packages are offered that implement the systems described within. The book is structured so that each chapter can be read independently from the others. Chapters 1 and 2 describe evolutionary computation. The third chapter is an introduction to financial engineering problems for readers who are unfamiliar with this area. The following chapters each deal, in turn, with a different problem in the financial engineering field, describing each problem in detail and focusing on solutions based on evolutio...

  4. Controller Design of DFIG Based Wind Turbine by Using Evolutionary Soft Computational Techniques

    Directory of Open Access Journals (Sweden)

    O. P. Bharti

    2017-06-01

    Full Text Available This manuscript illustrates the controller design for a doubly fed induction generator (DFIG) based variable speed wind turbine by using a bio-inspired scheme. The methodology exploits two proficient swarm-intelligence-based evolutionary soft computational procedures: the particle swarm optimization (PSO) and bacterial foraging optimization (BFO) techniques, which are employed to design the controller intended for the small damping plant of the DFIG. A wind energy overview and the DFIG operating principle, along with the equivalent circuit model, are adequately discussed in this paper. The controller design for the DFIG based WECS using PSO and BFO is described comparatively in detail. The responses of the DFIG system regarding terminal voltage, current, active-reactive power, and DC-link voltage improve slightly with the evolutionary soft computational procedures. Lastly, the obtained output is compared with a standard technique for performance improvement of the DFIG based wind energy conversion system.
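
    A minimal sketch of the PSO side of the approach described above, assuming a toy first-order plant driven by a PI controller in place of the DFIG model used in the paper; the swarm searches for the gains (Kp, Ki) that minimize a step-response cost. All names, bounds and settings are illustrative assumptions.

import random

def step_response_cost(kp, ki, dt=0.01, t_end=5.0):
    """Integral-squared-error of a unit step response for a PI controller
    driving a simple first-order surrogate plant (tau*dy/dt = -y + u).
    The DFIG dynamics of the paper are replaced by this toy plant."""
    tau, y, integ, cost = 0.5, 0.0, 0.0, 0.0
    for _ in range(int(t_end / dt)):
        err = 1.0 - y
        integ += err * dt
        u = kp * err + ki * integ
        y += dt * (-y + u) / tau
        cost += err * err * dt
    return cost

def pso(n_particles=20, iters=100, bounds=((0.0, 10.0), (0.0, 10.0))):
    dim = len(bounds)
    pos = [[random.uniform(*bounds[d]) for d in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_cost = [step_response_cost(*p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_cost[i])
    gbest, gbest_cost = pbest[g][:], pbest_cost[g]
    w, c1, c2 = 0.7, 1.5, 1.5          # inertia and acceleration coefficients
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
            cost = step_response_cost(*pos[i])
            if cost < pbest_cost[i]:
                pbest[i], pbest_cost[i] = pos[i][:], cost
                if cost < gbest_cost:
                    gbest, gbest_cost = pos[i][:], cost
    return gbest, gbest_cost

if __name__ == "__main__":
    (kp, ki), cost = pso()
    print("Kp = %.3f, Ki = %.3f, ISE = %.4f" % (kp, ki, cost))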

  5. Practical advantages of evolutionary computation

    Science.gov (United States)

    Fogel, David B.

    1997-10-01

    Evolutionary computation is becoming a common technique for solving difficult, real-world problems in industry, medicine, and defense. This paper reviews some of the practical advantages to using evolutionary algorithms as compared with classic methods of optimization or artificial intelligence. Specific advantages include the flexibility of the procedures, as well as their ability to self-adapt the search for optimum solutions on the fly. As desktop computers increase in speed, the application of evolutionary algorithms will become routine.

  6. REAL TIME PULVERISED COAL FLOW SOFT SENSOR FOR THERMAL POWER PLANTS USING EVOLUTIONARY COMPUTATION TECHNIQUES

    Directory of Open Access Journals (Sweden)

    B. Raja Singh

    2015-01-01

    Full Text Available The pulverised coal preparation system (coal mills) is the heart of coal-fired power plants. The complex nature of a milling process, together with the complex interactions between coal quality and mill conditions, leads to immense difficulties in obtaining an effective mathematical model of the milling process. In this paper, vertical spindle coal mills (bowl mills), which are widely used in coal-fired power plants, are considered for the model development, and the pulverised fuel flow rate is computed using the model. For the steady-state coal mill model development, plant measurements such as air-flow rate, differential pressure across the mill, etc., are considered as inputs/outputs. The mathematical model is derived from an analysis of energy, heat and mass balances. An evolutionary computation technique is adopted to identify the unknown model parameters using on-line plant data. Validation results indicate that this model is accurate enough to represent the whole process of steady-state coal mill dynamics. The coal mill model has been implemented on-line in a 210 MW thermal power plant and the results obtained are compared with plant data. The model is found to be accurate and robust enough to work well in power plants for system monitoring. Therefore, the model can be used for on-line monitoring, fault detection, and control to improve the efficiency of combustion.
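
    The record does not name the specific evolutionary technique used to identify the unknown model parameters from on-line plant data. The sketch below uses SciPy's differential evolution as a stand-in, fitting the coefficients of a hypothetical linear steady-state relation to synthetic "plant" samples; the model form, variable names and bounds are illustrative, not the energy/mass-balance model of the paper.

import numpy as np
from scipy.optimize import differential_evolution

# Synthetic stand-in for logged plant data: primary-air flow and mill differential
# pressure as inputs, pulverised-fuel flow as the target. The linear steady-state
# form pf = k1*air + k2*dp + k0 is a hypothetical placeholder.
rng = np.random.default_rng(0)
air = rng.uniform(40.0, 80.0, 200)          # t/h
dp = rng.uniform(2.0, 6.0, 200)             # kPa
true_k = (0.35, 4.2, -1.0)
pf = true_k[0] * air + true_k[1] * dp + true_k[2] + rng.normal(0.0, 0.3, 200)

def sse(k):
    """Sum of squared errors between model prediction and 'measured' fuel flow."""
    pred = k[0] * air + k[1] * dp + k[2]
    return float(np.sum((pred - pf) ** 2))

result = differential_evolution(sse, bounds=[(-1, 1), (0, 10), (-5, 5)], seed=1)
print("identified parameters:", result.x, "SSE:", result.fun)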

  7. Applications of Evolutionary Computation

    NARCIS (Netherlands)

    Mora, Antonio M.; Squillero, Giovanni; Di Chio, C.; Agapitos, Alexandros; Cagnoni, Stefano; Cotta, Carlos; Fernández De Vega, F.; Di Caro, G. A.; Drechsler, R.; Ekárt, A.; Esparcia-Alcázar, Anna I.; Farooq, M.; Langdon, W. B.; Merelo-Guervós, J. J.; Preuss, M.; Richter, O.-M. H.; Silva, Sara; Simões, A.; Tarantino, Ernesto; Tettamanzi, Andrea G. B.; Togelius, J.; Urquhart, Neil; Uyar, A. S.; Yannakakis, G. N.; Smith, Stephen L.; Caserta, Marco; Ramirez, Adriana; Voß, Stefan; Burelli, Paolo; Jan, Mathieu; Matthias, M.; De Falco, Ivanoe; Cioppa, Antonio Della; Diwold, Konrad; Sim, Kevin; Haasdijk, Evert; Zhang, Mengjie; Eiben, A. E.; Glette, Kyrre; Rohlfshagen, Philipp; Schaefer, Robert

    2015-01-01

    The application of genetic and evolutionary computation to problems in medicine has increased rapidly over the past five years, but there are specific issues and challenges that distinguish it from other real-world applications. Obtaining reliable and coherent patient data, establishing the clinical

  8. Part E: Evolutionary Computation

    DEFF Research Database (Denmark)

    2015-01-01

    of Computational Intelligence. First, comprehensive surveys of genetic algorithms, genetic programming, evolution strategies, parallel evolutionary algorithms are presented, which are readable and constructive so that a large audience might find them useful and – to some extent – ready to use. Some more general...... kinds of evolutionary algorithms, have been prudently analyzed. This analysis was followed by a thorough analysis of various issues involved in stochastic local search algorithms. An interesting survey of various technological and industrial applications in mechanical engineering and design has been...... topics like the estimation of distribution algorithms, indicator-based selection, etc., are also discussed. An important problem, from a theoretical and practical point of view, of learning classifier systems is presented in depth. Multiobjective evolutionary algorithms, which constitute one of the most...

  9. Evolutionary Computing for Intelligent Power System Optimization and Control

    DEFF Research Database (Denmark)

    This new book focuses on how evolutionary computing techniques benefit engineering research and development tasks by converting practical problems of growing complexities into simple formulations, thus largely reducing development efforts. This book begins with an overview of optimization theory and modern evolutionary computing techniques, and goes on to cover specific applications of evolutionary computing to power system optimization and control problems.

  10. Unscented Sampling Techniques For Evolutionary Computation With Applications To Astrodynamic Optimization

    Science.gov (United States)

    2016-09-01

    ...factors that can cause the variations in trajectory computation time. First of all, these cases are initially computed using the guess-free mode of DIDO... Goldberg [91]. This concept essentially states that fundamental building blocks, or lower-order schemata, are pieced together by the genetic algorithms ... in Section 3.13.2. While this idea is very straightforward and logical, Goldberg also later points out that there are deceptive problems where these...

  11. Evolutionary computation for reinforcement learning

    NARCIS (Netherlands)

    Whiteson, S.; Wiering, M.; van Otterlo, M.

    2012-01-01

    Algorithms for evolutionary computation, which simulate the process of natural selection to solve optimization problems, are an effective tool for discovering high-performing reinforcement-learning policies. Because they can automatically find good representations, handle continuous action spaces,

  12. Study of natural circulation for the design of a research reactor using computational fluid dynamics and evolutionary computation techniques

    International Nuclear Information System (INIS)

    Oliveira, Andre Felipe da Silva de

    2012-01-01

    Safety is one of the most important and desirable characteristics in a nuclear plant. Natural circulation cooling systems are noted for providing passive safety. These systems can be used as a mechanism for removing the residual heat from the reactor, or even as the main cooling system for heated sections, such as the core. In this work, a computational fluid dynamics (CFD) code called CFX is used to simulate the process of natural circulation in a research reactor pool after its shutdown. The physical model studied is similar to the Open Pool Australian Light water reactor (OPAL), and contains the core, cooling pool, reflecting tank, circulation pipes and chimney. For best computing performance, the core region was modeled as a porous medium, whose parameters were obtained from a separate, detailed CFD analysis. This work also aims to study the viability of using a Differential Evolution algorithm to optimize the physical and operational parameters that, obeying the laws of similarity, lead to a test section at a reduced scale of the reactor pool.

  13. Markov Networks in Evolutionary Computation

    CERN Document Server

    Shakya, Siddhartha

    2012-01-01

    Markov networks and other probabilistic graphical models have recently received an upsurge in attention from the evolutionary computation community, particularly in the area of estimation of distribution algorithms (EDAs). EDAs have arisen as one of the most successful experiences in the application of machine learning methods in optimization, mainly due to their efficiency in solving complex real-world optimization problems and their suitability for theoretical analysis. This book focuses on the different steps involved in the conception, implementation and application of EDAs that use Markov networks, and undirected models in general. It can serve as a general introduction to EDAs but also covers an important current void in the study of these algorithms by explaining the specificities and benefits of modeling optimization problems by means of undirected probabilistic models. All major developments to date in the progressive introduction of Markov network based EDAs are reviewed in the book. Hot current researc...

  14. Applications of evolutionary computation in image processing and pattern recognition

    CERN Document Server

    Cuevas, Erik; Perez-Cisneros, Marco

    2016-01-01

    This book presents the use of efficient Evolutionary Computation (EC) algorithms for solving diverse real-world image processing and pattern recognition problems. It provides an overview of the different aspects of evolutionary methods in order to enable the reader to reach a global understanding of the field and to conduct studies on specific evolutionary techniques that are related to applications in image processing and pattern recognition. It explains the basic ideas of the proposed applications in a way that can also be understood by readers outside of the field. Image processing and pattern recognition practitioners who are not evolutionary computation researchers will appreciate the discussed techniques beyond simple theoretical tools, since they have been adapted to solve significant problems that commonly arise in such areas. On the other hand, members of the evolutionary computation community can learn the way in which image processing and pattern recognition problems can be translated into an...

  15. Algorithmic Mechanism Design of Evolutionary Computation.

    Science.gov (United States)

    Pei, Yan

    2015-01-01

    We consider algorithmic design, enhancement, and improvement of evolutionary computation as a mechanism design problem. All individuals or several groups of individuals can be considered as self-interested agents. The individuals in evolutionary computation can manipulate parameter settings and operations by satisfying their own preferences, which are defined by an evolutionary computation algorithm designer, rather than by following a fixed algorithm rule. Evolutionary computation algorithm designers or self-adaptive methods should construct proper rules and mechanisms for all agents (individuals) to conduct their evolutionary behaviour correctly in order to achieve the desired and preset objective(s). As a case study, we propose a formal framework for parameter setting, strategy selection, and algorithmic design of evolutionary computation by considering the Nash strategy equilibrium of a mechanism design in the search process. The evaluation results demonstrate the efficiency of the framework. This principle can be implemented in any evolutionary computation algorithm that needs to consider strategy selection issues in its optimization process. The final objective of our work is to treat evolutionary computation design as an algorithmic mechanism design problem and establish its fundamental aspects by taking this perspective. This paper is the first step towards achieving this objective by implementing a strategy equilibrium solution (such as a Nash equilibrium) in an evolutionary computation algorithm.

  16. Evolutionary Computing Methods for Spectral Retrieval

    Science.gov (United States)

    Terrile, Richard; Fink, Wolfgang; Huntsberger, Terrance; Lee, Seugwon; Tisdale, Edwin; VonAllmen, Paul; Tinetti, Geivanna

    2009-01-01

    A methodology for processing spectral images to retrieve information on underlying physical, chemical, and/or biological phenomena is based on evolutionary and related computational methods implemented in software. In a typical case, the solution (the information that one seeks to retrieve) consists of parameters of a mathematical model that represents one or more of the phenomena of interest. The methodology was developed for the initial purpose of retrieving the desired information from spectral image data acquired by remote-sensing instruments aimed at planets (including the Earth). Examples of information desired in such applications include trace gas concentrations, temperature profiles, surface types, day/night fractions, cloud/aerosol fractions, seasons, and viewing angles. The methodology is also potentially useful for retrieving information on chemical and/or biological hazards in terrestrial settings. In this methodology, one utilizes an iterative process that minimizes a fitness function indicative of the degree of dissimilarity between observed and synthetic spectral and angular data. The evolutionary computing methods that lie at the heart of this process yield a population of solutions (sets of the desired parameters) within an accuracy represented by a fitness-function value specified by the user. The evolutionary computing methods (ECM) used in this methodology are Genetic Algorithms and Simulated Annealing, both of which are well-established optimization techniques and have also been described in previous NASA Tech Briefs articles. These are embedded in a conceptual framework, represented in the architecture of the implementing software, that enables automatic retrieval of spectral and angular data and analysis of the retrieved solutions for uniqueness.
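
    A hedged sketch of the retrieval idea described above: simulated annealing (one of the two ECM named in the record) minimizing a fitness function that measures the dissimilarity between an "observed" spectrum and a synthetic one. The Gaussian-absorption-line forward model and all parameters are illustrative stand-ins for the radiative-transfer models used in practice.

import math
import random

WAVELENGTHS = [i * 0.01 for i in range(200)]          # arbitrary units

def synthetic_spectrum(depth, width, center=1.0):
    """Forward model: flat continuum with one Gaussian absorption line.
    A stand-in for the physical models implied by the article."""
    return [1.0 - depth * math.exp(-((w - center) / width) ** 2) for w in WAVELENGTHS]

# "Observed" data generated from known parameters plus noise.
random.seed(3)
OBSERVED = [y + random.gauss(0.0, 0.01) for y in synthetic_spectrum(0.4, 0.15)]

def fitness(params):
    """Dissimilarity between observed and synthetic spectra (sum of squares)."""
    model = synthetic_spectrum(*params)
    return sum((m - o) ** 2 for m, o in zip(model, OBSERVED))

def simulated_annealing(start, steps=5000, t0=1.0):
    current, f_cur = list(start), fitness(start)
    best, f_best = current[:], f_cur
    for k in range(steps):
        temp = t0 * (1.0 - k / steps) + 1e-6
        cand = [max(1e-3, p + random.gauss(0.0, 0.02)) for p in current]
        f_cand = fitness(cand)
        # Accept improvements always, worse moves with a temperature-dependent probability.
        if f_cand < f_cur or random.random() < math.exp(-(f_cand - f_cur) / temp):
            current, f_cur = cand, f_cand
            if f_cur < f_best:
                best, f_best = current[:], f_cur
    return best, f_best

if __name__ == "__main__":
    (depth, width), f = simulated_annealing([0.1, 0.05])
    print("retrieved depth=%.3f width=%.3f (fitness %.5f)" % (depth, width, f))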

  17. Prospective Algorithms for Quantum Evolutionary Computation

    OpenAIRE

    Sofge, Donald A.

    2008-01-01

    This effort examines the intersection of the emerging field of quantum computing and the more established field of evolutionary computation. The goal is to understand what benefits quantum computing might offer to computational intelligence and how computational intelligence paradigms might be implemented as quantum programs to be run on a future quantum computer. We critically examine proposed algorithms and methods for implementing computational intelligence paradigms, primarily focused on ...

  18. Evolutionary Computation and Its Applications in Neural and Fuzzy Systems

    Directory of Open Access Journals (Sweden)

    Biaobiao Zhang

    2011-01-01

    Full Text Available Neural networks and fuzzy systems are two soft-computing paradigms for system modelling. Adapting a neural or fuzzy system requires solving two optimization problems: structural optimization and parametric optimization. Structural optimization is a discrete optimization problem which is very hard to solve using conventional optimization techniques. Parametric optimization can be solved using conventional optimization techniques, but the solution may easily be trapped at a bad local optimum. Evolutionary computation is a general-purpose stochastic global optimization approach under the universally accepted neo-Darwinian paradigm, which is a combination of the classical Darwinian evolutionary theory, the selectionism of Weismann, and the genetics of Mendel. Evolutionary algorithms are a major approach to adaptation and optimization. In this paper, we first introduce evolutionary algorithms with emphasis on genetic algorithms and evolutionary strategies. Other evolutionary algorithms such as genetic programming, evolutionary programming, particle swarm optimization, immune algorithms, and ant colony optimization are also described. Some topics pertaining to evolutionary algorithms are also discussed, and a comparison between evolutionary algorithms and simulated annealing is made. Finally, the application of EAs to the learning of neural networks as well as to the structural and parametric adaptation of fuzzy systems is also detailed.
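
    To make the parametric-optimization case concrete, the sketch below uses a simple (mu + lambda)-style evolution strategy to evolve the weights of a tiny fixed-structure neural network on the XOR task. This is an illustrative sketch only, under assumed settings; the structural-optimization case (evolving the topology itself) is not covered.

import math
import random

# XOR data: a toy stand-in for the parametric-optimization case discussed above.
DATA = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
N_WEIGHTS = 9   # 2-2-1 network: 4 hidden weights + 2 hidden biases + 2 output weights + 1 bias

def forward(w, x):
    h = [math.tanh(w[0] * x[0] + w[1] * x[1] + w[2]),
         math.tanh(w[3] * x[0] + w[4] * x[1] + w[5])]
    return 1.0 / (1.0 + math.exp(-(w[6] * h[0] + w[7] * h[1] + w[8])))

def loss(w):
    return sum((forward(w, x) - y) ** 2 for x, y in DATA)

def evolution_strategy(mu=10, lam=40, generations=300, sigma=0.3):
    parents = [[random.gauss(0, 1) for _ in range(N_WEIGHTS)] for _ in range(mu)]
    for _ in range(generations):
        offspring = [[g + random.gauss(0, sigma) for g in random.choice(parents)]
                     for _ in range(lam)]
        # (mu + lambda) selection: the best of parents and offspring survive.
        parents = sorted(parents + offspring, key=loss)[:mu]
    return parents[0]

if __name__ == "__main__":
    best = evolution_strategy()
    for x, y in DATA:
        print(x, "->", round(forward(best, x), 3), "target", y)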

  19. International Conference of Intelligence Computation and Evolutionary Computation ICEC 2012

    CERN Document Server

    Intelligence Computation and Evolutionary Computation

    2013-01-01

    The 2012 International Conference of Intelligence Computation and Evolutionary Computation (ICEC 2012) was held on July 7, 2012 in Wuhan, China. The conference was sponsored by the Information Technology & Industrial Engineering Research Center. ICEC 2012 is a forum for the presentation of new research results in intelligent computation and evolutionary computation. Cross-fertilization of intelligent computation, evolutionary computation, evolvable hardware and newly emerging technologies is strongly encouraged. The forum aims to bring together researchers, developers, and users from around the world, in both industry and academia, for sharing state-of-the-art results, for exploring new areas of research and development, and to discuss emerging issues facing intelligent computation and evolutionary computation.

  20. Evolutionary computation in zoology and ecology.

    Science.gov (United States)

    Boone, Randall B

    2017-12-01

    Evolutionary computational methods have adopted attributes of natural selection and evolution to solve problems in computer science, engineering, and other fields. The method is growing in use in zoology and ecology. Evolutionary principles may be merged with an agent-based modeling perspective to have individual animals or other agents compete. Four main categories are discussed: genetic algorithms, evolutionary programming, genetic programming, and evolutionary strategies. In evolutionary computation, a population is represented in a way that allows an objective function relevant to the problem of interest to be assessed. The poorest-performing members are removed from the population, and the remaining members reproduce and may be mutated. The fitness of the members is again assessed, and the cycle continues until a stopping condition is met. Case studies include optimizing egg shape given different clutch sizes, mate selection, migration of wildebeest, birds, and elk, vulture foraging behavior, algal bloom prediction, and species richness given energy constraints. Other case studies simulate the evolution of species and a means to project shifts in species ranges in response to a changing climate that includes competition and phenotypic plasticity. This introduction concludes by citing other uses of evolutionary computation and a review of the flexibility of the methods. For example, representing species' niche spaces subject to selective pressure allows studies on cladistics, the taxon cycle, neutral versus niche paradigms, fundamental versus realized niches, community structure and order of colonization, invasiveness, and responses to a changing climate.
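
    The cycle spelled out above (represent a population, assess an objective function, remove the poorest performers, let the survivors reproduce with mutation, repeat until a stopping condition is met) can be written as a short generic loop. The one-dimensional objective below is a hypothetical stand-in for the domain-specific fitness measures of the case studies.

import random

def objective(x):
    """Hypothetical fitness to maximize; stands in for e.g. an egg-shape
    or foraging-behaviour score from the case studies above."""
    return -(x - 2.0) ** 2 + 3.0

def evolutionary_cycle(pop_size=30, cull_fraction=0.5, generations=100,
                       mutation_sigma=0.2, target_fitness=2.999):
    population = [random.uniform(-10.0, 10.0) for _ in range(pop_size)]
    best = population[0]
    for _ in range(generations):
        # Assess fitness and remove the poorest-performing members.
        population.sort(key=objective, reverse=True)
        survivors = population[: int(pop_size * cull_fraction)]
        # Remaining members reproduce; offspring may be mutated.
        offspring = [random.choice(survivors) + random.gauss(0.0, mutation_sigma)
                     for _ in range(pop_size - len(survivors))]
        population = survivors + offspring
        best = max(population, key=objective)
        if objective(best) >= target_fitness:       # stopping condition
            break
    return best, objective(best)

print(evolutionary_cycle())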

  1. Integrated evolutionary computation neural network quality controller for automated systems

    Energy Technology Data Exchange (ETDEWEB)

    Patro, S.; Kolarik, W.J. [Texas Tech Univ., Lubbock, TX (United States). Dept. of Industrial Engineering

    1999-06-01

    With increasing competition in the global market, more and more stringent quality standards and specifications are being demanded at lower costs. Manufacturing applications of computing power are becoming more common. The application of neural networks to identification and control of dynamic processes has been discussed. The limitations of using neural networks for control purposes have been pointed out, and a different technique, evolutionary computation, has been discussed. The results of identifying and controlling an unstable, dynamic process using evolutionary computation methods have been presented. A framework for an integrated system, using both neural networks and evolutionary computation, has been proposed to identify the process and then control the product quality in a dynamic, multivariable system, in real time.

  2. Massively parallel evolutionary computation on GPGPUs

    CERN Document Server

    Tsutsui, Shigeyoshi

    2013-01-01

    Evolutionary algorithms (EAs) are metaheuristics that learn from natural collective behavior and are applied to solve optimization problems in domains such as scheduling, engineering, bioinformatics, and finance. Such applications demand acceptable solutions with high-speed execution using finite computational resources. Therefore, there have been many attempts to develop platforms for running parallel EAs using multicore machines, massively parallel cluster machines, or grid computing environments. Recent advances in general-purpose computing on graphics processing units (GPGPU) have opened u

  3. Comparison of evolutionary computation algorithms for solving bi ...

    Indian Academy of Sciences (India)

    ...failure probability. Multiobjective Evolutionary Computation algorithms (MOEAs) are well suited for multiobjective task scheduling in a heterogeneous environment. The two multiobjective evolutionary algorithms, the Multiobjective Genetic Algorithm (MOGA) and Multiobjective Evolutionary Programming (MOEP), with...

  4. Function Follows Performance in Evolutionary Computational Processing

    DEFF Research Database (Denmark)

    Pasold, Anke; Foged, Isak Worre

    2011-01-01

    As the title ‘Function Follows Performance in Evolutionary Computational Processing’ suggests, this paper explores the potentials of employing multiple design and evaluation criteria within one processing model in order to account for a number of performative parameters desired within varied...

  5. Advances of evolutionary computation methods and operators

    CERN Document Server

    Cuevas, Erik; Oliva Navarro, Diego Alberto

    2016-01-01

    The goal of this book is to present advances that discuss alternative Evolutionary Computation (EC) developments and non-conventional operators which have proved to be effective in the solution of several complex problems. The book has been structured so that each chapter can be read independently from the others. The book contains nine chapters with the following themes: 1) Introduction, 2) the Social Spider Optimization (SSO), 3) the States of Matter Search (SMS), 4) the collective animal behavior (CAB) algorithm, 5) the Allostatic Optimization (AO) method, 6) the Locust Search (LS) algorithm, 7) the Adaptive Population with Reduced Evaluations (APRE) method, 8) the multimodal CAB, 9) the constrained SSO method.

  6. Optimizing a reconfigurable material via evolutionary computation

    Science.gov (United States)

    Wilken, Sam; Miskin, Marc Z.; Jaeger, Heinrich M.

    2015-08-01

    Rapid prototyping by combining evolutionary computation with simulations is becoming a powerful tool for solving complex design problems in materials science. This method of optimization operates in a virtual design space that simulates potential material behaviors and after completion needs to be validated by experiment. However, in principle an evolutionary optimizer can also operate on an actual physical structure or laboratory experiment directly, provided the relevant material parameters can be accessed by the optimizer and information about the material's performance can be updated by direct measurements. Here we provide a proof of concept of such direct, physical optimization by showing how a reconfigurable, highly nonlinear material can be tuned to respond to impact. We report on an entirely computer controlled laboratory experiment in which a 6 × 6 grid of electromagnets creates a magnetic field pattern that tunes the local rigidity of a concentrated suspension of ferrofluid and iron filings. A genetic algorithm is implemented and tasked to find field patterns that minimize the force transmitted through the suspension. Searching within a space of roughly 10^10 possible configurations, after testing only 1500 independent trials the algorithm identifies an optimized configuration of layered rigid and compliant regions.
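
    In the experiment described above the fitness of each field pattern is measured physically (the force transmitted through the suspension). The sketch below keeps the GA structure (36-bit configurations for a 6 × 6 grid and a budget of roughly 1500 trials) but replaces the laboratory measurement with a dummy scoring function so the loop is runnable; everything inside that function is invented for illustration.

import random

GRID = 36  # 6 x 6 electromagnets, each on (1) or off (0)

def transmitted_force(config):
    """Stand-in for the laboratory measurement: in the actual experiment this
    value comes from an impact test on the tunable suspension. Here a dummy
    score penalizes active magnets, more strongly near the grid centre, so
    that the GA loop has something to minimize."""
    return sum(bit * (1.0 + ((i % 6 - 2.5) ** 2 + (i // 6 - 2.5) ** 2) ** -0.5)
               for i, bit in enumerate(config))

def crossover(a, b):
    cut = random.randrange(1, GRID)
    return a[:cut] + b[cut:]

def mutate(config, rate=0.03):
    return [1 - bit if random.random() < rate else bit for bit in config]

def genetic_algorithm(pop_size=30, trials_budget=1500):
    pop = [[random.randint(0, 1) for _ in range(GRID)] for _ in range(pop_size)]
    trials = pop_size                                    # initial population evaluations
    while trials < trials_budget:
        pop.sort(key=transmitted_force)                  # minimize transmitted force
        parents = pop[: pop_size // 2]
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
        trials += len(children)
    return min(pop, key=transmitted_force)

best = genetic_algorithm()
print("best field pattern:", best, "score:", round(transmitted_force(best), 3))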

  7. Evolutionary Based Solutions for Green Computing

    CERN Document Server

    Kołodziej, Joanna; Li, Juan; Zomaya, Albert

    2013-01-01

    Today’s highly parameterized large-scale distributed computing systems may be composed of a large number of various components (computers, databases, etc.) and must provide a wide range of services. The users of such systems, located at different (geographical or managerial) network clusters, may have limited access to the system’s services and resources, and different, often conflicting, expectations and requirements. Moreover, the information and data processed in such dynamic environments may be incomplete, imprecise, fragmentary, and overloading. All of the above-mentioned issues require intelligent scalable methodologies for the management of the whole complex structure, which unfortunately may increase the energy consumption of such systems. This book, in its eight chapters, addresses the fundamental issues related to energy usage and optimal low-cost system design in high-performance "green computing" systems. The recent evolutionary and general metaheuristic-based solutions ...

  8. Regulatory RNA design through evolutionary computation and strand displacement.

    Science.gov (United States)

    Rostain, William; Landrain, Thomas E; Rodrigo, Guillermo; Jaramillo, Alfonso

    2015-01-01

    The discovery and study of a vast number of regulatory RNAs in all kingdoms of life over the past decades has allowed the design of new synthetic RNAs that can regulate gene expression in vivo. Riboregulators, in particular, have been used to activate or repress gene expression. However, to accelerate and scale up the design process, synthetic biologists require computer-assisted design tools, without which riboregulator engineering will remain a case-by-case design process requiring expert attention. Recently, the design of RNA circuits by evolutionary computation and adapting strand displacement techniques from nanotechnology has proven to be suited to the automated generation of DNA sequences implementing regulatory RNA systems in bacteria. Herein, we present our method to carry out such evolutionary design and how to use it to create various types of riboregulators, allowing the systematic de novo design of genetic control systems in synthetic biology.

  9. Computational intelligence synergies of fuzzy logic, neural networks and evolutionary computing

    CERN Document Server

    Siddique, Nazmul

    2013-01-01

    Computational Intelligence: Synergies of Fuzzy Logic, Neural Networks and Evolutionary Computing presents an introduction to some of the cutting edge technological paradigms under the umbrella of computational intelligence. Computational intelligence schemes are investigated with the development of a suitable framework for fuzzy logic, neural networks and evolutionary computing, neuro-fuzzy systems, evolutionary-fuzzy systems and evolutionary neural systems. Applications to linear and non-linear systems are discussed with examples. Key features: Covers all the aspect

  10. Parallel evolutionary computation in bioinformatics applications.

    Science.gov (United States)

    Pinho, Jorge; Sobral, João Luis; Rocha, Miguel

    2013-05-01

    A large number of optimization problems within the field of Bioinformatics require methods able to handle their inherent complexity (e.g. NP-hard problems) and also demand increased computational efforts. In this context, the use of parallel architectures is a necessity. In this work, we propose ParJECoLi, a Java based library that offers a large set of metaheuristic methods (such as Evolutionary Algorithms) and also addresses the issue of its efficient execution on a wide range of parallel architectures. The proposed approach focuses on ease of use, making the adaptation to distinct parallel environments (multicore, cluster, grid) transparent to the user. Indeed, this work shows how the development of the optimization library can proceed independently of its adaptation for several architectures, making use of Aspect-Oriented Programming. The pluggable nature of parallelism related modules allows the user to easily configure its environment, adding parallelism modules to the base source code when needed. The performance of the platform is validated with two case studies within biological model optimization. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  11. Evolutionary optimization technique for site layout planning

    KAUST Repository

    El Ansary, Ayman M.; Shalaby, Mohamed

    2014-01-01

    ...a set of design requirements. The developed technique is based on a genetic algorithm which explores the search space for possible solutions. This study considers two-dimensional site planning problems. However, it can be extended to solve three-dimensional cases. A...

  12. Evolutionary optimization technique for site layout planning

    KAUST Repository

    El Ansary, Ayman M.

    2014-02-01

    Solving the site layout planning problem is a challenging task. It requires an iterative approach to satisfy design requirements (e.g. energy efficiency, skyview, daylight, roads network, visual privacy, and clear access to favorite views). These design requirements vary from one project to another based on location and client preferences. In the Gulf region, the most important socio-cultural factor is visual privacy in indoor space. Hence, most of the residential houses in this region are surrounded by high fences to provide privacy, which has a direct impact on other requirements (e.g. daylight and direction to a favorite view). This paper introduces a novel technique to optimally locate and orient residential buildings to satisfy a set of design requirements. The developed technique is based on a genetic algorithm which explores the search space for possible solutions. This study considers two-dimensional site planning problems. However, it can be extended to solve three-dimensional cases. A case study is presented to demonstrate the efficiency of this technique in solving the site layout planning of simple residential dwellings. © 2013 Elsevier B.V. All rights reserved.

  13. Conversion Rate Optimization through Evolutionary Computation

    OpenAIRE

    Miikkulainen, Risto; Iscoe, Neil; Shagrin, Aaron; Cordell, Ron; Nazari, Sam; Schoolland, Cory; Brundage, Myles; Epstein, Jonathan; Dean, Randy; Lamba, Gurmeet

    2017-01-01

    Conversion optimization means designing a web interface so that as many users as possible take a desired action on it, such as registering or purchasing. Such design is usually done by hand, testing one change at a time through A/B testing, or a limited number of combinations through multivariate testing, making it possible to evaluate only a small fraction of designs in a vast design space. This paper describes Sentient Ascend, an automatic conversion optimization system that uses evolutionary op...

  14. Computer Assisted Audit Techniques

    Directory of Open Access Journals (Sweden)

    Eugenia Iancu

    2007-01-01

    Full Text Available From the modern point of view, audit takes into account especially the information systems, representing mainly the examination performed by a professional as regards the manner of developing an activity by means of comparing it to the quality criteria specific to this activity. Taking this very general definition of auditing as a reference point, it must be emphasized that the best known segment of auditing is the financial audit, whose evolution has paralleled that of accountancy. The present-day phase of development of the financial audit has as its main trait the internationalization of the accounting profession. Worldwide, there are multinational companies that offer services in the financial auditing, taxation and consultancy domain. The auditors, natural persons and audit companies, take part in the work of the national and international authorities for setting out norms in the accountancy and auditing domain. The computer assisted audit techniques can be classified in several ways according to the approaches used by the auditor. The most well-known techniques are comprised in the following categories: test data techniques, integrated test, parallel simulation, reviewing the program logic, programs developed upon request, generalized audit software, utility programs and expert systems.

  15. Evolutionary Cell Computing: From Protocells to Self-Organized Computing

    Science.gov (United States)

    Colombano, Silvano; New, Michael H.; Pohorille, Andrew; Scargle, Jeffrey; Stassinopoulos, Dimitris; Pearson, Mark; Warren, James

    2000-01-01

    On the path from inanimate to animate matter, a key step was the self-organization of molecules into protocells - the earliest ancestors of contemporary cells. Studies of the properties of protocells and the mechanisms by which they maintained themselves and reproduced are an important part of astrobiology. These studies also have the potential to greatly impact research in nanotechnology and computer science. Previous studies of protocells have focussed on self-replication. In these systems, Darwinian evolution occurs through a series of small alterations to functional molecules whose identities are stored. Protocells, however, may have been incapable of such storage. We hypothesize that under such conditions, the replication of functions and their interrelationships, rather than the precise identities of the functional molecules, is sufficient for survival and evolution. This process is called non-genomic evolution. Recent breakthroughs in experimental protein chemistry have opened the gates for experimental tests of non-genomic evolution. On the basis of these achievements, we have developed a stochastic model for examining the evolutionary potential of non-genomic systems. In this model, the formation and destruction (hydrolysis) of bonds joining amino acids in proteins occur through catalyzed, albeit possibly inefficient, pathways. Each protein can act as a substrate for polymerization or hydrolysis, or as a catalyst of these chemical reactions. When a protein is hydrolyzed to form two new proteins, or two proteins are joined into a single protein, the catalytic abilities of the product proteins are related to the catalytic abilities of the reactants. We will demonstrate that the catalytic capabilities of such a system can increase. Its evolutionary potential is dependent upon the competition between the formation of bond-forming and bond-cutting catalysts. The degree to which hydrolysis preferentially affects bonds in less efficient, and therefore less well

  16. Evolutionary computing in Nuclear Engineering Institute/CNEN-Brazil

    International Nuclear Information System (INIS)

    Pereira, Claudio M.N.A.; Lapa, Celso M.F.; Lapa, Nelbia da Silva; Mol, Antonio C.

    2000-01-01

    This paper discusses the importance of evolutionary computation (EC) for nuclear engineering and the development of this area at the Instituto de Engenharia Nuclear (IEN) over recent years. The applications carried out at this institute by the EC technical group are briefly described, for example: nuclear reactor core design optimization, preventive maintenance scheduling optimization, and nuclear reactor transient identification. A novel computational tool for the implementation of genetic algorithms, which was developed at this institute and applied in those works, is also presented. Some results are presented and the gains obtained with evolutionary computation are discussed. (author)

  17. 10th International Conference on Genetic and Evolutionary Computing

    CERN Document Server

    Lin, Jerry; Wang, Chia-Hung; Jiang, Xin

    2017-01-01

    This book gathers papers presented at the 10th International Conference on Genetic and Evolutionary Computing (ICGEC 2016). The conference was co-sponsored by Springer, Fujian University of Technology in China, the University of Computer Studies in Yangon, University of Miyazaki in Japan, National Kaohsiung University of Applied Sciences in Taiwan, Taiwan Association for Web Intelligence Consortium, and VSB-Technical University of Ostrava, Czech Republic. The ICGEC 2016, which was held from November 7 to 9, 2016 in Fuzhou City, China, was intended as an international forum for researchers and professionals in all areas of genetic and evolutionary computing.

  18. From evolutionary computation to the evolution of things

    NARCIS (Netherlands)

    Eiben, A.E.; Smith, J.E.

    2015-01-01

    Evolution has provided a source of inspiration for algorithm designers since the birth of computers. The resulting field, evolutionary computation, has been successful in solving engineering tasks ranging in outlook from the molecular to the astronomical. Today, the field is entering a new phase as

  19. Robust and Flexible Scheduling with Evolutionary Computation

    DEFF Research Database (Denmark)

    Jensen, Mikkel T.

    Over the last ten years, there have been numerous applications of evolutionary algorithms to a variety of scheduling problems. Like most other research on heuristic scheduling, the primary aim of the research has been on deterministic formulations of the problems. This is in contrast to real-world scheduling problems, which are usually not deterministic. Usually, at the time the schedule is made, some information about the problem and processing environment is available, but this information is uncertain and likely to change during schedule execution. Changes frequently encountered in scheduling environments include machine breakdowns, uncertain processing times, workers getting sick, materials being delayed and the appearance of new jobs. These possible environmental changes mean that a schedule which was optimal for the information available at the time of scheduling can end up being highly...

  20. Coevolution of Artificial Agents Using Evolutionary Computation in Bargaining Game

    Directory of Open Access Journals (Sweden)

    Sangwook Lee

    2015-01-01

    Full Text Available Analysis of the bargaining game using evolutionary computation is an essential issue in the field of game theory. This paper investigates the interaction and coevolutionary process among heterogeneous artificial agents using evolutionary computation (EC) in the bargaining game. In particular, the game performance with regard to payoff through the interaction and coevolution of agents is studied. We present three kinds of EC based agents (EC-agents) participating in the bargaining game: genetic algorithm (GA), particle swarm optimization (PSO), and differential evolution (DE). The agents’ performance with regard to changing conditions is compared. From the simulation results it is found that the PSO-agent is superior to the other agents.

  1. Machine learning and evolutionary techniques in interplanetary trajectory design

    OpenAIRE

    Izzo, Dario; Sprague, Christopher; Tailor, Dharmesh

    2018-01-01

    After providing a brief historical overview on the synergies between artificial intelligence research, in the areas of evolutionary computations and machine learning, and the optimal design of interplanetary trajectories, we propose and study the use of deep artificial neural networks to represent, on-board, the optimal guidance profile of an interplanetary mission. The results, limited to the chosen test case of an Earth-Mars orbital transfer, extend the findings made previously for landing ...

  2. Soft computing integrating evolutionary, neural, and fuzzy systems

    CERN Document Server

    Tettamanzi, Andrea

    2001-01-01

    Soft computing encompasses various computational methodologies, which, unlike conventional algorithms, are tolerant of imprecision, uncertainty, and partial truth. Soft computing technologies offer adaptability as a characteristic feature and thus permit the tracking of a problem through a changing environment. Besides some recent developments in areas like rough sets and probabilistic networks, fuzzy logic, evolutionary algorithms, and artificial neural networks are core ingredients of soft computing, which are all bio-inspired and can easily be combined synergetically. This book presents a well-balanced integration of fuzzy logic, evolutionary computing, and neural information processing. The three constituents are introduced to the reader systematically and brought together in differentiated combinations step by step. The text was developed from courses given by the authors and offers numerous illustrations as

  3. Evolutionary Computation Methods and their applications in Statistics

    Directory of Open Access Journals (Sweden)

    Francesco Battaglia

    2013-05-01

    Full Text Available A brief discussion of the genesis of evolutionary computation methods, their relationship to artificial intelligence, and the contribution of genetics and Darwin’s theory of natural evolution is provided. Then, the main evolutionary computation methods are illustrated: evolution strategies, genetic algorithms, estimation of distribution algorithms, differential evolution, and a brief description of some evolutionary behavior methods such as ant colony and particle swarm optimization. We also discuss the role of the genetic algorithm for multivariate probability distribution random generation, rather than as a function optimizer. Finally, some relevant applications of genetic algorithm to statistical problems are reviewed: selection of variables in regression, time series model building, outlier identification, cluster analysis, design of experiments.
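
    One of the statistical applications listed above is variable selection in regression. The sketch below encodes each candidate subset of predictors as a bit string and lets a genetic algorithm minimize the AIC of the corresponding least-squares fit on a synthetic data set; the data, the scoring choice and the GA settings are illustrative assumptions, not taken from the article.

import numpy as np

rng = np.random.default_rng(42)
n, p = 120, 10
X = rng.normal(size=(n, p))
# Only the first three predictors truly matter in this synthetic example.
y = X[:, 0] * 2.0 - X[:, 1] * 1.5 + X[:, 2] * 0.8 + rng.normal(scale=1.0, size=n)

def aic(mask):
    """Akaike information criterion of an OLS fit using the selected columns."""
    cols = np.flatnonzero(mask)
    if cols.size == 0:
        return np.inf
    Xs = np.column_stack([np.ones(n), X[:, cols]])
    beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    rss = float(np.sum((y - Xs @ beta) ** 2))
    return n * np.log(rss / n) + 2 * (cols.size + 1)

def ga_select(pop_size=40, generations=60, mut_rate=0.05):
    pop = rng.integers(0, 2, size=(pop_size, p))
    for _ in range(generations):
        scores = np.array([aic(ind) for ind in pop])
        parents = pop[np.argsort(scores)[: pop_size // 2]]
        # One-point crossover plus bit-flip mutation.
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, p)
            child = np.concatenate([a[:cut], b[cut:]])
            flip = rng.random(p) < mut_rate
            child[flip] = 1 - child[flip]
            children.append(child)
        pop = np.vstack([parents, np.array(children)])
    scores = np.array([aic(ind) for ind in pop])
    return pop[int(np.argmin(scores))]

print("selected predictors:", np.flatnonzero(ga_select()))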

  4. Interior spatial layout with soft objectives using evolutionary computation

    NARCIS (Netherlands)

    Chatzikonstantinou, I.; Bengisu, E.

    2016-01-01

    This paper presents the design problem of furniture arrangement in a residential interior living space, and addresses it by means of evolutionary computation. Interior arrangement is an important and interesting problem that occurs commonly when designing living spaces. It entails determining the

  5. MEGA X: Molecular Evolutionary Genetics Analysis across Computing Platforms.

    Science.gov (United States)

    Kumar, Sudhir; Stecher, Glen; Li, Michael; Knyaz, Christina; Tamura, Koichiro

    2018-06-01

    The Molecular Evolutionary Genetics Analysis (Mega) software implements many analytical methods and tools for phylogenomics and phylomedicine. Here, we report a transformation of Mega to enable cross-platform use on Microsoft Windows and Linux operating systems. Mega X does not require virtualization or emulation software and provides a uniform user experience across platforms. Mega X has additionally been upgraded to use multiple computing cores for many molecular evolutionary analyses. Mega X is available in two interfaces (graphical and command line) and can be downloaded from www.megasoftware.net free of charge.

  6. Biomimetic design processes in architecture: morphogenetic and evolutionary computational design

    International Nuclear Information System (INIS)

    Menges, Achim

    2012-01-01

    Design computation has profound impact on architectural design methods. This paper explains how computational design enables the development of biomimetic design processes specific to architecture, and how they need to be significantly different from established biomimetic processes in engineering disciplines. The paper first explains the fundamental difference between computer-aided and computational design in architecture, as the understanding of this distinction is of critical importance for the research presented. Thereafter, the conceptual relation and possible transfer of principles from natural morphogenesis to design computation are introduced and the related developments of generative, feature-based, constraint-based, process-based and feedback-based computational design methods are presented. This morphogenetic design research is then related to exploratory evolutionary computation, followed by the presentation of two case studies focusing on the exemplary development of spatial envelope morphologies and urban block morphologies. (paper)

  7. International Conference on Artificial Intelligence and Evolutionary Computations in Engineering Systems

    CERN Document Server

    Bhaskar, M; Panigrahi, Bijaya; Das, Swagatam

    2016-01-01

    The book is a collection of high-quality peer-reviewed research papers presented at the first International Conference on Artificial Intelligence and Evolutionary Computations in Engineering Systems (ICAIECES-2015) held at Velammal Engineering College (VEC), Chennai, India during 22 – 23 April 2015. The book discusses a wide variety of industrial, engineering and scientific applications of the emerging techniques. Researchers from academia and industry present their original work and exchange ideas, information, techniques and applications in the field of Communication, Computing and Power Technologies.

  8. Statistical physics and computational methods for evolutionary game theory

    CERN Document Server

    Javarone, Marco Alberto

    2018-01-01

    This book presents an introduction to Evolutionary Game Theory (EGT) which is an emerging field in the area of complex systems attracting the attention of researchers from disparate scientific communities. EGT allows one to represent and study several complex phenomena, such as the emergence of cooperation in social systems, the role of conformity in shaping the equilibrium of a population, and the dynamics in biological and ecological systems. Since EGT models belong to the area of complex systems, statistical physics constitutes a fundamental ingredient for investigating their behavior. At the same time, the complexity of some EGT models, such as those realized by means of agent-based methods, often require the implementation of numerical simulations. Therefore, beyond providing an introduction to EGT, this book gives a brief overview of the main statistical physics tools (such as phase transitions and the Ising model) and computational strategies for simulating evolutionary games (such as Monte Carlo algor...

  9. A cyber kill chain based taxonomy of banking Trojans for evolutionary computational intelligence

    OpenAIRE

    Kiwia, D; Dehghantanha, A; Choo, K-KR; Slaughter, J

    2017-01-01

    Malware such as banking Trojans are popular with financially-motivated cybercriminals. Detection of banking Trojans remains a challenging task, due to the constant evolution of techniques used to obfuscate and circumvent existing detection and security solutions. Having a malware taxonomy can facilitate the design of mitigation strategies such as those based on evolutionary computational intelligence. Specifically, in this paper, we propose a cyber kill chain based taxonomy of banking Trojans...

  10. Safety management in NPPs using an evolutionary algorithm technique

    International Nuclear Information System (INIS)

    Mishra, Alok; Patwardhan, Anand; Verma, A.K.

    2007-01-01

    The general goal of safety management in Nuclear Power Plants (NPPs) is to make requirements and activities more risk effective and less costly. The technical specification and maintenance (TS and M) activities in a plant are associated with controlling risk or with satisfying requirements, and are candidates to be evaluated for their resource effectiveness in risk-informed applications. Accordingly, the risk-based analysis of technical specifications (RBTS) is being considered in evaluating current TS. The multi-objective optimization of the TS and M requirements of an NPP based on risk and cost gives the Pareto-optimal solutions, from which the utility can pick the decision variables suiting its interest. In this paper, a multi-objective evolutionary algorithm technique has been used to make a trade-off between risk and cost, both at the system level and at the plant level, for loss of coolant accident (LOCA) and main steam line break (MSLB) initiating events.

  11. 9th International Conference on Genetic and Evolutionary Computing

    CERN Document Server

    Lin, Jerry; Pan, Jeng-Shyang; Tin, Pyke; Yokota, Mitsuhiro; Genetic and Evolutionary Computing

    2016-01-01

    This volume of Advances in Intelligent Systems and Computing contains accepted papers presented at ICGEC 2015, the 9th International Conference on Genetic and Evolutionary Computing. The conference this year was technically co-sponsored by the Ministry of Science and Technology, Myanmar, the University of Computer Studies, Yangon, the University of Miyazaki in Japan, Kaohsiung University of Applied Science in Taiwan, Fujian University of Technology in China and VSB-Technical University of Ostrava. ICGEC 2015 was held from 26-28 August 2015 in Yangon, Myanmar. Yangon, the most multiethnic and cosmopolitan city in Myanmar, is the main gateway to the country. Despite being the commercial capital of Myanmar, Yangon is a city engulfed by its rich history and culture, an integration of ancient traditions and spiritual heritage. The stunning SHWEDAGON Pagoda is the centerpiece of Yangon city, which itself is famous for the best British colonial era architecture. Of particular interest in many shops of Bogyoke Aung San Market,...

  12. 8th International Conference on Genetic and Evolutionary Computing

    CERN Document Server

    Yang, Chin-Yu; Lin, Chun-Wei; Pan, Jeng-Shyang; Snasel, Vaclav; Abraham, Ajith

    2015-01-01

    This volume of Advances in Intelligent Systems and Computing contains accepted papers presented at ICGEC 2014, the 8th International Conference on Genetic and Evolutionary Computing. The conference this year was technically co-sponsored by Nanchang Institute of Technology in China, Kaohsiung University of Applied Science in Taiwan, and VSB-Technical University of Ostrava. ICGEC 2014 was held from 18-20 October 2014 in Nanchang, China. Nanchang is the capital of Jiangxi Province in southeastern China, located in the north-central portion of the province. As it is bounded on the west by the Jiuling Mountains, and on the east by Poyang Lake, it is famous for its scenery, rich history and cultural sites. Because of its central location relative to the Yangtze and Pearl River Delta regions, it is a major railroad hub in Southern China. The conference is intended as an international forum for researchers and professionals in all areas of genetic and evolutionary computing.

  13. 7th International Conference on Genetic and Evolutionary Computing

    CERN Document Server

    Krömer, Pavel; Snášel, Václav

    2014-01-01

    Genetic and Evolutionary Computing This volume of Advances in Intelligent Systems and Computing contains accepted papers presented at ICGEC 2013, the 7th International Conference on Genetic and Evolutionary Computing. The conference this year was technically co-sponsored by The Waseda University in Japan, Kaohsiung University of Applied Science in Taiwan, and VSB-Technical University of Ostrava. ICGEC 2013 was held in Prague, Czech Republic. Prague is one of the most beautiful cities in the world whose magical atmosphere has been shaped over ten centuries. Places of the greatest tourist interest are on the Royal Route running from the Powder Tower through Celetna Street to Old Town Square, then across Charles Bridge through the Lesser Town up to the Hradcany Castle. One should not miss the Jewish Town, and the National Gallery with its fine collection of Czech Gothic art, collection of old European art, and a beautiful collection of French art. The conference was intended as an international forum for the res...

  14. Higher-order techniques in computational electromagnetics

    CERN Document Server

    Graglia, Roberto D

    2016-01-01

    Higher-Order Techniques in Computational Electromagnetics explains 'high-order' techniques that can significantly improve the accuracy and reliability, and reduce the computational cost, of computational techniques for high-frequency electromagnetics, such as antennas, microwave devices and radar scattering applications.

  15. Recent advances in swarm intelligence and evolutionary computation

    CERN Document Server

    2015-01-01

    This timely review volume summarizes the state-of-the-art developments in nature-inspired algorithms and applications with the emphasis on swarm intelligence and bio-inspired computation. Topics include the analysis and overview of swarm intelligence and evolutionary computation, hybrid metaheuristic algorithms, bat algorithm, discrete cuckoo search, firefly algorithm, particle swarm optimization, and harmony search as well as convergent hybridization. Application case studies have focused on the dehydration of fruits and vegetables by the firefly algorithm and goal programming, feature selection by the binary flower pollination algorithm, job shop scheduling, single row facility layout optimization, training of feed-forward neural networks, damage and stiffness identification, synthesis of cross-ambiguity functions by the bat algorithm, web document clustering, truss analysis, water distribution networks, sustainable building designs and others. As a timely review, this book can serve as an ideal reference f...

  16. International Conference on Artificial Intelligence and Evolutionary Computations in Engineering Systems

    CERN Document Server

    Vijayakumar, K; Panigrahi, Bijaya; Das, Swagatam

    2017-01-01

    The volume is a collection of high-quality peer-reviewed research papers presented at the International Conference on Artificial Intelligence and Evolutionary Computation in Engineering Systems (ICAIECES 2016) held at SRM University, Chennai, Tamil Nadu, India. This conference is an international forum for industry professionals and researchers to deliberate and state their research findings, discuss the latest advancements and explore the future directions in the emerging areas of engineering and technology. The book presents original work and novel ideas, information, techniques and applications in the fields of communication, computing and power technologies.

  17. Solution of Fractional Order System of Bagley-Torvik Equation Using Evolutionary Computational Intelligence

    Directory of Open Access Journals (Sweden)

    Muhammad Asif Zahoor Raja

    2011-01-01

    Full Text Available A stochastic technique has been developed for the solution of the fractional-order system represented by the Bagley-Torvik equation. The mathematical model of the equation was developed with the help of feed-forward artificial neural networks. The training of the networks was carried out with evolutionary computational intelligence based on a genetic algorithm hybridized with a pattern search technique. The designed scheme was successfully applied to different forms of the equation. Results are compared with standard approximate analytic and stochastic numerical solvers and with exact solutions.
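    The record above combines a genetic algorithm with a pattern search refinement. Below is a minimal, hedged sketch of that hybrid idea only: a real-coded genetic algorithm followed by a compass (Hooke-Jeeves-style) pattern search, applied to a placeholder least-squares residual. The neural-network trial solution and the fractional-order residual of the actual paper are not reproduced, and all function and parameter names are illustrative.

```python
# A minimal sketch (not the paper's implementation): a real-coded genetic
# algorithm for global search, followed by a compass/pattern search that
# refines the best individual.  The objective below is a placeholder
# least-squares residual; the paper minimizes the residual of an ANN trial
# solution of the Bagley-Torvik equation instead.
import random

def residual(x):
    # hypothetical placeholder objective: sum of squared deviations from 1.0
    return sum((xi - 1.0) ** 2 for xi in x)

def genetic_search(dim=6, pop_size=40, gens=100, bounds=(-5.0, 5.0)):
    lo, hi = bounds
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        scored = sorted(pop, key=residual)
        parents = scored[: pop_size // 2]                     # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            w = random.random()
            child = [w * ai + (1 - w) * bi for ai, bi in zip(a, b)]      # blend crossover
            child = [min(hi, max(lo, c + random.gauss(0, 0.1))) for c in child]  # mutation
            children.append(child)
        pop = parents + children
    return min(pop, key=residual)

def pattern_search(x, step=0.5, tol=1e-6):
    # compass search: probe +/- step along each coordinate, shrink step on failure
    best, fbest = list(x), residual(x)
    while step > tol:
        improved = False
        for i in range(len(best)):
            for d in (step, -step):
                trial = list(best)
                trial[i] += d
                ft = residual(trial)
                if ft < fbest:
                    best, fbest, improved = trial, ft, True
        if not improved:
            step *= 0.5
    return best, fbest

if __name__ == "__main__":
    x_ga = genetic_search()
    x_opt, f_opt = pattern_search(x_ga)
    print(f_opt)
```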

  18. Computer animation algorithms and techniques

    CERN Document Server

    Parent, Rick

    2012-01-01

    Driven by the demands of research and the entertainment industry, the techniques of animation are pushed to render increasingly complex objects with ever-greater life-like appearance and motion. This rapid progression of knowledge and technique impacts professional developers, as well as students. Developers must maintain their understanding of conceptual foundations, while their animation tools become ever more complex and specialized. The second edition of Rick Parent's Computer Animation is an excellent resource for the designers who must meet this challenge. The first edition establ

  19. Protein 3D structure computed from evolutionary sequence variation.

    Directory of Open Access Journals (Sweden)

    Debora S Marks

    Full Text Available The evolutionary trajectory of a protein through sequence space is constrained by its function. Collections of sequence homologs record the outcomes of millions of evolutionary experiments in which the protein evolves according to these constraints. Deciphering the evolutionary record held in these sequences and exploiting it for predictive and engineering purposes presents a formidable challenge. The potential benefit of solving this challenge is amplified by the advent of inexpensive high-throughput genomic sequencing. In this paper we ask whether we can infer evolutionary constraints from a set of sequence homologs of a protein. The challenge is to distinguish true co-evolution couplings from the noisy set of observed correlations. We address this challenge using a maximum entropy model of the protein sequence, constrained by the statistics of the multiple sequence alignment, to infer residue pair couplings. Surprisingly, we find that the strength of these inferred couplings is an excellent predictor of residue-residue proximity in folded structures. Indeed, the top-scoring residue couplings are sufficiently accurate and well-distributed to define the 3D protein fold with remarkable accuracy. We quantify this observation by computing, from sequence alone, all-atom 3D structures of fifteen test proteins from different fold classes, ranging in size from 50 to 260 residues, including a G-protein coupled receptor. These blinded inferences are de novo, i.e., they do not use homology modeling or sequence-similar fragments from known structures. The co-evolution signals provide sufficient information to determine accurate 3D protein structure to 2.7-4.8 Å Cα-RMSD error relative to the observed structure, over at least two-thirds of the protein (method called EVfold, details at http://EVfold.org). This discovery provides insight into essential interactions constraining protein evolution and will facilitate a comprehensive survey of the universe of

  20. Fundamentals of computational intelligence neural networks, fuzzy systems, and evolutionary computation

    CERN Document Server

    Keller, James M; Fogel, David B

    2016-01-01

    This book covers the three fundamental topics that form the basis of computational intelligence: neural networks, fuzzy systems, and evolutionary computation. The text focuses on inspiration, design, theory, and practical aspects of implementing procedures to solve real-world problems. While other books in the three fields that comprise computational intelligence are written by specialists in one discipline, this book is co-written by a current or former Editor-in-Chief of IEEE Transactions on Neural Networks and Learning Systems, a former Editor-in-Chief of IEEE Transactions on Fuzzy Systems, and the founding Editor-in-Chief of IEEE Transactions on Evolutionary Computation. The coverage across the three topics is both uniform and consistent in style and notation. Discusses single-layer and multilayer neural networks, radial-basis function networks, and recurrent neural networks. Covers fuzzy set theory, fuzzy relations, fuzzy logic inference, fuzzy clustering and classification, fuzzy measures and fuzz...

  1. Optimization and Assessment of Wavelet Packet Decompositions with Evolutionary Computation

    Directory of Open Access Journals (Sweden)

    Schell Thomas

    2003-01-01

    Full Text Available In image compression, the wavelet transformation is a state-of-the-art component. Recently, wavelet packet decomposition has received considerable interest. A popular approach for wavelet packet decomposition is the near-best-basis algorithm using nonadditive cost functions. In contrast to additive cost functions, the wavelet packet decomposition of the near-best-basis algorithm is only suboptimal. We apply methods from the field of evolutionary computation (EC) to test the quality of the near-best-basis results. We observe a phenomenon: the results of the near-best-basis algorithm are inferior in terms of cost-function optimization but are superior in terms of rate/distortion performance compared to EC methods.
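    For orientation, the sketch below illustrates the classical additive-cost best-basis recursion that the near-best-basis algorithm of this record is compared against. It assumes Haar filters and a Shannon-like additive entropy cost; the nonadditive cost functions and the EC search of the paper are not reproduced.

```python
# A minimal sketch (assumptions: Haar filters, additive entropy cost) of
# best-basis selection over a wavelet packet tree.  The record's
# near-best-basis algorithm uses *nonadditive* cost functions; this only
# illustrates the additive-cost baseline it is compared to.
import math

def haar_split(x):
    s = math.sqrt(2.0)
    low = [(x[i] + x[i + 1]) / s for i in range(0, len(x) - 1, 2)]
    high = [(x[i] - x[i + 1]) / s for i in range(0, len(x) - 1, 2)]
    return low, high

def cost(x):
    # additive "entropy" cost: -sum x_i^2 log x_i^2  (0 log 0 := 0)
    return -sum(v * v * math.log(v * v) for v in x if v != 0.0)

def best_basis(x, level, path=""):
    """Return (best_cost, list_of_node_paths) for the packet rooted at x."""
    if level == 0 or len(x) < 2:
        return cost(x), [path]
    low, high = haar_split(x)
    c_low, p_low = best_basis(low, level - 1, path + "a")
    c_high, p_high = best_basis(high, level - 1, path + "d")
    c_here = cost(x)
    if c_low + c_high < c_here:          # children are cheaper: keep splitting
        return c_low + c_high, p_low + p_high
    return c_here, [path]                # this node belongs to the best basis

if __name__ == "__main__":
    signal = [math.sin(0.3 * n) + 0.1 * math.cos(2.1 * n) for n in range(64)]
    total_cost, basis = best_basis(signal, level=3)
    print(total_cost, basis)
```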

  2. An evolutionary outlook of air traffic flow management techniques

    Science.gov (United States)

    Kistan, Trevor; Gardi, Alessandro; Sabatini, Roberto; Ramasamy, Subramanian; Batuwangala, Eranga

    2017-01-01

    In recent years Air Traffic Flow Management (ATFM) has become pertinent even in regions without sustained overload conditions caused by dense traffic operations. Increasing traffic volumes in the face of constrained resources has created peak congestion at specific locations and times in many areas of the world. Increased environmental awareness and economic drivers have combined to create a resurgent interest in ATFM as evidenced by a spate of recent ATFM conferences and workshops mediated by official bodies such as ICAO, IATA, CANSO the FAA and Eurocontrol. Significant ATFM acquisitions in the last 5 years include South Africa, Australia and India. Singapore, Thailand and Korea are all expected to procure ATFM systems within a year while China is expected to develop a bespoke system. Asia-Pacific nations are particularly pro-active given the traffic growth projections for the region (by 2050 half of all air traffic will be to, from or within the Asia-Pacific region). National authorities now have access to recently published international standards to guide the development of national and regional operational concepts for ATFM, geared to Communications, Navigation, Surveillance/Air Traffic Management and Avionics (CNS+A) evolutions. This paper critically reviews the field to determine which ATFM research and development efforts hold the best promise for practical technological implementations, offering clear benefits both in terms of enhanced safety and efficiency in times of growing air traffic. An evolutionary approach is adopted starting from an ontology of current ATFM techniques and proceeding to identify the technological and regulatory evolutions required in the future CNS+A context, as the aviation industry moves forward with a clearer understanding of emerging operational needs, the geo-political realities of regional collaboration and the impending needs of global harmonisation.

  3. Crowd Computing as a Cooperation Problem: An Evolutionary Approach

    Science.gov (United States)

    Christoforou, Evgenia; Fernández Anta, Antonio; Georgiou, Chryssis; Mosteiro, Miguel A.; Sánchez, Angel

    2013-05-01

    Cooperation is one of the socio-economic issues that has received the most attention from the physics community. The problem has been mostly considered by studying games such as the Prisoner's Dilemma or the Public Goods Game. Here, we take a step forward by studying cooperation in the context of crowd computing. We introduce a model loosely based on Principal-agent theory in which people (workers) contribute to the solution of a distributed problem by computing answers and reporting to the problem proposer (master). To go beyond classical approaches involving the concept of Nash equilibrium, we work on an evolutionary framework in which both the master and the workers update their behavior through reinforcement learning. Using a Markov chain approach, we show theoretically that under certain, not very restrictive, conditions, the master can ensure the reliability of the answer resulting from the process. Then, we study the model by numerical simulations, finding that convergence, meaning that the system reaches a point in which it always produces reliable answers, may in general be much faster than the upper bounds given by the theoretical calculation. We also discuss the effects of the master's level of tolerance to defectors, about which the theory does not provide information. The discussion shows that the system works even with very large tolerances. We conclude with a discussion of our results and possible directions to carry this research further.

  4. An evolutionary computing framework toward object extraction from satellite images

    Directory of Open Access Journals (Sweden)

    P.V. Arun

    2013-12-01

    Full Text Available Image interpretation domains have witnessed the application of many intelligent methodologies over the past decade; however, the effective use of evolutionary computing techniques for feature detection has been less explored. In this paper, we critically analyze the possibility of using cellular neural networks for accurate feature detection. Contextual knowledge has been effectively represented by incorporating spectral and spatial aspects using an adaptive kernel strategy. The developed methodology has been compared with traditional approaches in an object-based context, and the investigations revealed that considerable success has been achieved with the procedure. Intelligent interpretation, automatic interpolation, and effective contextual representations are the features of the system.

  5. An evolutionary computation approach to examine functional brain plasticity

    Directory of Open Access Journals (Sweden)

    Arnab eRoy

    2016-04-01

    Full Text Available One common research goal in systems neuroscience is to understand how the functional relationship between a pair of regions of interest (ROIs) evolves over time. Examining neural connectivity in this way is well-suited for the study of developmental processes, learning, and even recovery or treatment designs in response to injury. For most fMRI based studies, the strength of the functional relationship between two ROIs is defined as the correlation between the average signals representing each region. The drawback to this approach is that much information is lost due to averaging heterogeneous voxels, and therefore functional relationships within an ROI-pair that evolve at a spatial scale much finer than the ROIs remain undetected. To address this shortcoming, we introduce a novel evolutionary computation (EC) based voxel-level procedure to examine functional plasticity between an investigator-defined ROI-pair by simultaneously using subject-specific BOLD-fMRI data collected from two sessions separated by a finite duration of time. This data-driven procedure detects a sub-region composed of spatially connected voxels from each ROI (a so-called sub-regional-pair) such that the pair shows a significant gain/loss of functional relationship strength across the two time points. The procedure is recursive and iteratively finds all statistically significant sub-regional-pairs within the ROIs. Using this approach, we examine functional plasticity between the default mode network (DMN) and the executive control network (ECN) during recovery from traumatic brain injury (TBI); the study includes 14 TBI and 12 healthy control subjects. We demonstrate that the EC based procedure is able to detect functional plasticity where a traditional averaging based approach fails. The subject-specific plasticity estimates obtained using the EC procedure are highly consistent across multiple runs. Group-level analyses using these plasticity estimates showed an increase in

  6. Power system dynamic state estimation using prediction based evolutionary technique

    International Nuclear Information System (INIS)

    Basetti, Vedik; Chandel, Ashwani K.; Chandel, Rajeevan

    2016-01-01

    In this paper, a new robust LWS (least winsorized square) estimator is proposed for dynamic state estimation of a power system. One of the main advantages of this estimator is that it has an inbuilt bad data rejection property and is less sensitive to bad data measurements. In the proposed approach, Brown's double exponential smoothing technique has been utilised for its reliable performance at the prediction step. The state estimation problem is solved as an optimisation problem using a new jDE-self adaptive differential evolution with prediction based population re-initialisation technique at the filtering step. This new stochastic search technique has been embedded with different state scenarios using the predicted state. The effectiveness of the proposed LWS technique is validated under different conditions, namely normal operation, bad data, sudden load change, and loss of transmission line conditions on three different IEEE test bus systems. The performance of the proposed approach is compared with the conventional extended Kalman filter. On the basis of various performance indices, the results thus obtained show that the proposed technique increases the accuracy and robustness of power system dynamic state estimation performance. - Highlights: • To estimate the states of the power system under dynamic environment. • The performance of the EKF method is degraded during anomaly conditions. • The proposed method remains robust towards anomalies. • The proposed method provides precise state estimates even in the presence of anomalies. • The results show that prediction accuracy is enhanced by using the proposed model.
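    The record above uses Brown's double exponential smoothing at the prediction step. Below is a minimal sketch of that smoothing/forecast step only; the robust LWS filtering and the jDE search of the paper are not shown, and the sample data and parameter values are illustrative.

```python
# A minimal sketch of the prediction step only: Brown's double exponential
# smoothing, which forecasts the next state value from its own history.
def brown_double_smoothing(series, alpha=0.5, horizon=1):
    s1 = s2 = series[0]                      # common initialization choice
    for x in series:
        s1 = alpha * x + (1 - alpha) * s1    # first smoothing pass
        s2 = alpha * s1 + (1 - alpha) * s2   # second smoothing pass
    a = 2 * s1 - s2                          # level estimate
    b = alpha / (1 - alpha) * (s1 - s2)      # trend estimate
    return a + horizon * b                   # m-step-ahead forecast

if __name__ == "__main__":
    history = [1.00, 1.02, 1.05, 1.04, 1.07, 1.10]   # illustrative state history
    print(brown_double_smoothing(history, alpha=0.6))
```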

  7. EVOLVE : a Bridge between Probability, Set Oriented Numerics, and Evolutionary Computation II

    CERN Document Server

    Coello, Carlos; Tantar, Alexandru-Adrian; Tantar, Emilia; Bouvry, Pascal; Moral, Pierre; Legrand, Pierrick; EVOLVE 2012

    2013-01-01

    This book comprises a selection of papers from EVOLVE 2012, held in Mexico City, Mexico. The aim of EVOLVE is to build a bridge between probability, set oriented numerics and evolutionary computing, so as to identify new common and challenging research aspects. The conference is also intended to foster a growing interest in robust and efficient methods with a sound theoretical background. EVOLVE is intended to unify theory-inspired methods and cutting-edge techniques ensuring performance guarantee factors. By gathering researchers with different backgrounds, a unified view and vocabulary can emerge where the theoretical advancements may echo in different domains. In summary, EVOLVE focuses on challenging aspects arising at the passage from theory to new paradigms and aims to provide a unified view while raising questions related to reliability, performance guarantees and modeling. The papers of EVOLVE 2012 make a contribution to this goal.

  8. Constrained Optimization Based on Hybrid Evolutionary Algorithm and Adaptive Constraint-Handling Technique

    DEFF Research Database (Denmark)

    Wang, Yong; Cai, Zixing; Zhou, Yuren

    2009-01-01

    A novel approach to deal with numerical and engineering constrained optimization problems, which incorporates a hybrid evolutionary algorithm and an adaptive constraint-handling technique, is presented in this paper. The hybrid evolutionary algorithm simultaneously uses simplex crossover and two mutation operators to generate the offspring population. Additionally, the adaptive constraint-handling technique consists of three main situations. In detail, at each situation, one constraint-handling mechanism is designed based on the current population state. Experiments on 13 benchmark test functions and four well-known constrained design problems verify the effectiveness and efficiency of the proposed method. The experimental results show that integrating the hybrid evolutionary algorithm with the adaptive constraint-handling technique is beneficial, and the proposed method achieves competitive...
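    The record above names simplex crossover as the main recombination operator. Below is a hedged sketch of a simplified simplex crossover (SPX) variant: parents are expanded about their centroid and an offspring is sampled as a random convex combination of the expanded vertices. The paper's exact SPX sampling scheme, its mutation operators, and the adaptive constraint handling are not reproduced.

```python
# A minimal sketch (simplified variant) of the simplex crossover (SPX) idea:
# expand k parent vectors about their centroid by a factor epsilon and sample
# an offspring inside the expanded simplex.
import random

def simplex_crossover(parents, epsilon=1.5):
    dim = len(parents[0])
    k = len(parents)
    centroid = [sum(p[i] for p in parents) / k for i in range(dim)]
    expanded = [[centroid[i] + epsilon * (p[i] - centroid[i]) for i in range(dim)]
                for p in parents]
    # random convex-combination weights (normalized exponentials, i.e. Dirichlet(1))
    w = [random.expovariate(1.0) for _ in range(k)]
    total = sum(w)
    w = [wi / total for wi in w]
    return [sum(w[j] * expanded[j][i] for j in range(k)) for i in range(dim)]

if __name__ == "__main__":
    mom = [0.0, 1.0, 2.0]
    dad = [1.0, 0.0, 2.0]
    kid = [2.0, 2.0, 0.0]
    print(simplex_crossover([mom, dad, kid]))
```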

  9. Grid computing techniques and applications

    CERN Document Server

    Wilkinson, Barry

    2009-01-01

    ''… the most outstanding aspect of this book is its excellent structure: it is as though we have been given a map to help us move around this technology from the base to the summit … I highly recommend this book …''Jose Lloret, Computing Reviews, March 2010

  10. Soft computing techniques in engineering applications

    CERN Document Server

    Zhong, Baojiang

    2014-01-01

    Soft computing techniques, which are based on the information processing of biological systems, are now widely used in the areas of pattern recognition, prediction and planning, as well as acting on the environment. Strictly speaking, soft computing is not a homogeneous body of concepts and techniques; rather, it is an amalgamation of distinct methods that conform to its guiding principle. At present, the main aim of soft computing is to exploit the tolerance for imprecision and uncertainty to achieve tractability, robustness and low solution cost. The principal constituents of soft computing techniques are probabilistic reasoning, fuzzy logic, neuro-computing, genetic algorithms, belief networks, chaotic systems, as well as learning theory. This book covers contributions from various authors to demonstrate the use of soft computing techniques in various applications of engineering.

  11. Computing the Quartet Distance Between Evolutionary Trees in Time O(n log n)

    DEFF Research Database (Denmark)

    Brodal, Gerth Sølfting; Fagerberg, Rolf; Pedersen, Christian Nørgaard Storm

    2003-01-01

    Evolutionary trees describing the relationship for a set of species are central in evolutionary biology, and quantifying differences between evolutionary trees is therefore an important task. The quartet distance is a distance measure between trees previously proposed by Estabrook, McMorris, and Meacham. The quartet distance between two unrooted evolutionary trees is the number of quartet topology differences between the two trees, where a quartet topology is the topological subtree induced by four species. In this paper we present an algorithm for computing the quartet distance between two unrooted evolutionary trees of n species, where all internal nodes have degree three, in time O(n log n). The previous best algorithm for the problem uses time O(n²).
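    The O(n log n) algorithm of this record is involved; as a point of reference, the sketch below is a brute-force O(n^4) computation of the quartet distance, reading each quartet's topology off the four-point condition on path lengths. Trees are represented here as simple adjacency dictionaries with unit edge lengths, which is an assumption of this sketch, not the paper's data structure.

```python
# A brute-force O(n^4) reference implementation (not the paper's O(n log n)
# algorithm): for each quartet {a,b,c,d} the induced topology is the pairing
# with the smallest sum of path lengths (four-point condition), and the two
# trees are compared quartet by quartet.  Leaves are the degree-one vertices.
from collections import deque
from itertools import combinations

def bfs_dist(adj, src):
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def quartet_topology(d, a, b, c, e):
    sums = {(a, b): d[a][b] + d[c][e],      # topology ab|ce
            (a, c): d[a][c] + d[b][e],      # topology ac|be
            (a, e): d[a][e] + d[b][c]}      # topology ae|bc
    return min(sums, key=sums.get)

def quartet_distance(adj1, adj2):
    leaves = sorted(v for v in adj1 if len(adj1[v]) == 1)
    d1 = {u: bfs_dist(adj1, u) for u in leaves}
    d2 = {u: bfs_dist(adj2, u) for u in leaves}
    return sum(1 for a, b, c, e in combinations(leaves, 4)
               if quartet_topology(d1, a, b, c, e) != quartet_topology(d2, a, b, c, e))

if __name__ == "__main__":
    # two 5-leaf unrooted trees (internal nodes x, y, z) differing in one split
    t1 = {'A': ['x'], 'B': ['x'], 'C': ['y'], 'D': ['z'], 'E': ['z'],
          'x': ['A', 'B', 'y'], 'y': ['x', 'C', 'z'], 'z': ['y', 'D', 'E']}
    t2 = {'A': ['x'], 'C': ['x'], 'B': ['y'], 'D': ['z'], 'E': ['z'],
          'x': ['A', 'C', 'y'], 'y': ['x', 'B', 'z'], 'z': ['y', 'D', 'E']}
    print(quartet_distance(t1, t2))
```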

  12. Simplified Drift Analysis for Proving Lower Bounds in Evolutionary Computation

    DEFF Research Database (Denmark)

    Oliveto, Pietro S.; Witt, Carsten

    2011-01-01

    Drift analysis is a powerful tool used to bound the optimization time of evolutionary algorithms (EAs). Various previous works apply a drift theorem going back to Hajek in order to show exponential lower bounds on the optimization time of EAs. However, this drift theorem is tedious to read and to apply since it requires two bounds on the moment-generating (exponential) function of the drift. A recent work identifies a specialization of this drift theorem that is much easier to apply. Nevertheless, it is not as simple and not as general as possible. The present paper picks up Hajek's line...
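    For orientation, the block below gives a rough paraphrase of the kind of simplified negative-drift statement this line of work is concerned with; it is not the paper's exact theorem, and the precise conditions and constants should be taken from the original.

```latex
% A rough paraphrase (not the paper's exact statement) of a simplified
% negative-drift theorem; consult the original for precise conditions.
Let $(X_t)_{t \ge 0}$ be real-valued random variables describing the state of
a search process, and let $a < b$. Suppose there are constants
$\varepsilon, \delta, r > 0$ such that, whenever $a < X_t < b$,
\begin{align*}
  \mathbb{E}\bigl[X_{t+1} - X_t \mid X_t\bigr] &\ge \varepsilon
  && \text{(constant drift away from the target)}\\
  \Pr\bigl[\,|X_{t+1} - X_t| \ge j \mid X_t\,\bigr] &\le \frac{r}{(1+\delta)^{j}}
  \quad \text{for all } j \ge 1
  && \text{(exponentially decaying jump lengths)}
\end{align*}
Then the first hitting time $T$ of states $\le a$, when started at $X_0 \ge b$,
satisfies $\Pr\bigl[T \le 2^{c(b-a)}\bigr] = 2^{-\Omega(b-a)}$ for some constant
$c > 0$; that is, the optimization time is exponential in $b-a$ with
overwhelming probability.
```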

  13. A hybrid finite element analysis and evolutionary computation method for the design of lightweight lattice components with optimized strut diameter

    DEFF Research Database (Denmark)

    Salonitis, Konstantinos; Chantzis, Dimitrios; Kappatos, Vasileios

    2017-01-01

    approaches or with the use of topology optimization methodologies. An optimization approach utilizing multipurpose optimization algorithms has not been proposed yet. This paper presents a novel user-friendly method for the design optimization of lattice components towards weight minimization, which combines finite element analysis and evolutionary computation. The proposed method utilizes the cell homogenization technique in order to reduce the computational cost of the finite element analysis and a genetic algorithm in order to search for the most lightweight lattice configuration. A bracket consisting...

  14. Basic emotions and adaptation. A computational and evolutionary model.

    Science.gov (United States)

    Pacella, Daniela; Ponticorvo, Michela; Gigliotta, Onofrio; Miglino, Orazio

    2017-01-01

    The core principles of the evolutionary theories of emotions declare that affective states represent crucial drives for action selection in the environment and regulate the behavior and adaptation of natural agents in ancestrally recurrent situations. While many different studies used autonomous artificial agents to simulate emotional responses and the way these patterns can affect decision-making, few are the approaches that tried to analyze the evolutionary emergence of affective behaviors directly from the specific adaptive problems posed by the ancestral environment. A model of the evolution of affective behaviors is presented using simulated artificial agents equipped with neural networks and physically inspired by the architecture of the iCub humanoid robot. We use genetic algorithms to train populations of virtual robots across generations, and investigate the spontaneous emergence of basic emotional behaviors in different experimental conditions. In particular, we focus on studying the emotion of fear, therefore the environment explored by the artificial agents can contain stimuli that are safe or dangerous to pick. The simulated task is based on classical conditioning and the agents are asked to learn a strategy to recognize whether the environment is safe or represents a threat to their lives and select the correct action to perform in absence of any visual cues. The simulated agents have special input units in their neural structure whose activation keeps track of their actual "sensations" based on the outcome of past behavior. We train five different neural network architectures and then test the best ranked individuals comparing their performances and analyzing the unit activations in each individual's life cycle. We show that the agents, regardless of the presence of recurrent connections, spontaneously evolved the ability to cope with a potentially dangerous environment by collecting information about the environment and then switching their behavior

  15. Statistical and Computational Techniques in Manufacturing

    CERN Document Server

    2012-01-01

    In recent years, interest in developing statistical and computational techniques for applied manufacturing engineering has increased. Today, due to the great complexity of manufacturing engineering and the high number of parameters used, conventional approaches are no longer sufficient. Therefore, in manufacturing, statistical and computational techniques have found several applications, namely, modelling and simulation of manufacturing processes, optimization of manufacturing parameters, monitoring and control, computer-aided process planning, etc. The present book aims to provide recent information on statistical and computational techniques applied in manufacturing engineering. The content is suitable for final undergraduate engineering courses or as a subject on manufacturing at the postgraduate level. This book serves as a useful reference for academics, statistical and computational science researchers, mechanical, manufacturing and industrial engineers, and professionals in industries related to manu...

  16. A security mechanism based on evolutionary game in fog computing.

    Science.gov (United States)

    Sun, Yan; Lin, Fuhong; Zhang, Nan

    2018-02-01

    Fog computing is a distributed computing paradigm at the edge of the network and requires cooperation of users and sharing of resources. When users in fog computing open their resources, their devices are easily intercepted and attacked because they are accessed through wireless networks and present an extensive geographical distribution. In this study, a credible third party was introduced to supervise the behavior of users and protect the security of user cooperation. A fog computing security mechanism based on the human nervous system is proposed, and the strategy for a stable system evolution is calculated. The MATLAB simulation results show that the proposed mechanism can reduce the number of attack behaviors effectively and stimulate users to cooperate in application tasks positively.
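    To illustrate the kind of evolutionary-game reasoning involved in records 16 and 17, the sketch below runs replicator dynamics for a population of nodes choosing between cooperating (sharing resources) and defecting, with a supervising third party that penalizes detected defectors. All payoff and penalty values are hypothetical and are not taken from the paper.

```python
# A minimal sketch (hypothetical payoffs, not the paper's model): replicator
# dynamics for cooperate/defect in a fog-like setting where a supervising
# third party imposes a penalty on detected defectors.  The fraction of
# cooperators x grows when cooperation earns above-average payoff.
def replicator(x0=0.2, benefit=3.0, cost=1.0, penalty=2.0,
               detect=0.6, dt=0.01, steps=5000):
    x = x0
    for _ in range(steps):
        payoff_c = benefit * x - cost                  # cooperators pay the sharing cost
        payoff_d = benefit * x - detect * penalty      # defectors risk the expected penalty
        avg = x * payoff_c + (1 - x) * payoff_d
        x += dt * x * (payoff_c - avg)                 # replicator equation
        x = min(1.0, max(0.0, x))
    return x

if __name__ == "__main__":
    # with a strong enough expected penalty cooperation dominates; otherwise it dies out
    print(replicator(penalty=2.0), replicator(penalty=0.5))
```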

  17. A security mechanism based on evolutionary game in fog computing

    Directory of Open Access Journals (Sweden)

    Yan Sun

    2018-02-01

    Full Text Available Fog computing is a distributed computing paradigm at the edge of the network and requires cooperation of users and sharing of resources. When users in fog computing open their resources, their devices are easily intercepted and attacked because they are accessed through wireless networks and present an extensive geographical distribution. In this study, a credible third party was introduced to supervise the behavior of users and protect the security of user cooperation. A fog computing security mechanism based on the human nervous system is proposed, and the strategy for a stable system evolution is calculated. The MATLAB simulation results show that the proposed mechanism can reduce the number of attack behaviors effectively and stimulate users to cooperate in application tasks positively.

  18. Basic emotions and adaptation. A computational and evolutionary model.

    Directory of Open Access Journals (Sweden)

    Daniela Pacella

    Full Text Available The core principles of the evolutionary theories of emotions declare that affective states represent crucial drives for action selection in the environment and regulate the behavior and adaptation of natural agents in ancestrally recurrent situations. While many different studies used autonomous artificial agents to simulate emotional responses and the way these patterns can affect decision-making, few are the approaches that tried to analyze the evolutionary emergence of affective behaviors directly from the specific adaptive problems posed by the ancestral environment. A model of the evolution of affective behaviors is presented using simulated artificial agents equipped with neural networks and physically inspired by the architecture of the iCub humanoid robot. We use genetic algorithms to train populations of virtual robots across generations, and investigate the spontaneous emergence of basic emotional behaviors in different experimental conditions. In particular, we focus on studying the emotion of fear, therefore the environment explored by the artificial agents can contain stimuli that are safe or dangerous to pick. The simulated task is based on classical conditioning and the agents are asked to learn a strategy to recognize whether the environment is safe or represents a threat to their lives and select the correct action to perform in absence of any visual cues. The simulated agents have special input units in their neural structure whose activation keeps track of their actual "sensations" based on the outcome of past behavior. We train five different neural network architectures and then test the best ranked individuals comparing their performances and analyzing the unit activations in each individual's life cycle. We show that the agents, regardless of the presence of recurrent connections, spontaneously evolved the ability to cope with a potentially dangerous environment by collecting information about the environment and then

  19. Design of a computation tool for neutron spectrometry and dosimetry through evolutionary neural networks

    International Nuclear Information System (INIS)

    Ortiz R, J. M.; Vega C, H. R.; Martinez B, M. R.; Gallego, E.

    2009-10-01

    Neutron dosimetry is one of the most complicated tasks of radiation protection, because it is a complex technique that is highly dependent on neutron energy. One of the first devices used to perform neutron spectrometry is the Bonner sphere spectrometric system, which continues to be one of the most commonly used spectrometers. This system has disadvantages such as the weight of its components, the low resolution of the spectrum, and a long, drawn-out procedure for spectrum reconstruction that requires an expert user and a reconstruction code such as BUNKIE, SAND, etc. These codes are based on iterative reconstruction algorithms whose greatest inconvenience is that an initial spectrum as close as possible to the desired spectrum must be provided to the system. Consequently, researchers have mentioned the need to develop alternative measurement techniques to improve existing monitoring systems for workers. Among these alternative techniques, several reconstruction procedures based on artificial intelligence have been reported, such as genetic algorithms, artificial neural networks and hybrid systems of evolutionary artificial neural networks using genetic algorithms. However, the use of these techniques in the nuclear science area is not free of problems, so it has been suggested that more research be conducted in such a way as to solve these disadvantages. Because they are emerging technologies, there are no tools for the analysis of results, so in this paper we first present the design of a computational tool that allows the analysis of the neutron spectra and equivalent doses obtained through the hybrid technology of neural networks and genetic algorithms. This tool provides a friendly, intuitive and easy-to-operate graphical user environment. The speed of operation is high, executing the analysis in a few seconds, and the tool may store and/or print the obtained information for

  20. A sense of life: computational and experimental investigations with models of biochemical and evolutionary processes.

    Science.gov (United States)

    Mishra, Bud; Daruwala, Raoul-Sam; Zhou, Yi; Ugel, Nadia; Policriti, Alberto; Antoniotti, Marco; Paxia, Salvatore; Rejali, Marc; Rudra, Archisman; Cherepinsky, Vera; Silver, Naomi; Casey, William; Piazza, Carla; Simeoni, Marta; Barbano, Paolo; Spivak, Marina; Feng, Jiawu; Gill, Ofer; Venkatesh, Mysore; Cheng, Fang; Sun, Bing; Ioniata, Iuliana; Anantharaman, Thomas; Hubbard, E Jane Albert; Pnueli, Amir; Harel, David; Chandru, Vijay; Hariharan, Ramesh; Wigler, Michael; Park, Frank; Lin, Shih-Chieh; Lazebnik, Yuri; Winkler, Franz; Cantor, Charles R; Carbone, Alessandra; Gromov, Mikhael

    2003-01-01

    We collaborate in a research program aimed at creating a rigorous framework, experimental infrastructure, and computational environment for understanding, experimenting with, manipulating, and modifying a diverse set of fundamental biological processes at multiple scales and spatio-temporal modes. The novelty of our research is based on an approach that (i) requires coevolution of experimental science and theoretical techniques and (ii) exploits a certain universality in biology guided by a parsimonious model of evolutionary mechanisms operating at the genomic level and manifesting at the proteomic, transcriptomic, phylogenic, and other higher levels. Our current program in "systems biology" endeavors to marry large-scale biological experiments with the tools to ponder and reason about large, complex, and subtle natural systems. To achieve this ambitious goal, ideas and concepts are combined from many different fields: biological experimentation, applied mathematical modeling, computational reasoning schemes, and large-scale numerical and symbolic simulations. From a biological viewpoint, the basic issues are many: (i) understanding common and shared structural motifs among biological processes; (ii) modeling biological noise due to interactions among a small number of key molecules or loss of synchrony; (iii) explaining the robustness of these systems in spite of such noise; and (iv) cataloging multistatic behavior and adaptation exhibited by many biological processes.

  1. Computational Modeling of Teaching and Learning through Application of Evolutionary Algorithms

    Directory of Open Access Journals (Sweden)

    Richard Lamb

    2015-09-01

    Full Text Available Within the mind, there are a myriad of ideas that make sense within the bounds of everyday experience, but are not reflective of how the world actually exists; this is particularly true in the domain of science. Classroom learning with teacher explanation is a bridge through which these naive understandings can be brought in line with scientific reality. The purpose of this paper is to examine how the application of a Multiobjective Evolutionary Algorithm (MOEA) can work in concert with an existing computational model to effectively model critical thinking in the science classroom. An evolutionary algorithm is an algorithm that iteratively optimizes machine learning based computational models. The research question is, does the application of an evolutionary algorithm provide a means to optimize the Student Task and Cognition Model (STAC-M), and does the optimized model sufficiently represent and predict teaching and learning outcomes in the science classroom? Within this computational study, the authors outline and simulate the effect of teaching on the ability of a “virtual” student to solve a Piagetian task. Using the Student Task and Cognition Model (STAC-M), a computational model of student cognitive processing in science class developed in 2013, the authors complete a computational experiment which examines the role of cognitive retraining on student learning. Comparison of the STAC-M and the STAC-M with inclusion of the Multiobjective Evolutionary Algorithm shows greater success in solving the Piagetian science tasks after cognitive retraining with the Multiobjective Evolutionary Algorithm. This illustrates the potential uses of cognitive and neuropsychological computational modeling in educational research. The authors also outline the limitations and assumptions of computational modeling.

  2. Designing a parallel evolutionary algorithm for inferring gene networks on the cloud computing environment.

    Science.gov (United States)

    Lee, Wei-Po; Hsiao, Yu-Ting; Hwang, Wei-Che

    2014-01-16

    To improve the tedious task of reconstructing gene networks by experimentally testing the possible interactions between genes, it has become a trend to adopt an automated reverse engineering procedure instead. Some evolutionary algorithms have been suggested for deriving network parameters. However, to infer large networks by an evolutionary algorithm, it is necessary to address two important issues: premature convergence and high computational cost. To tackle the former problem and to enhance the performance of traditional evolutionary algorithms, it is advisable to use parallel-model evolutionary algorithms. To overcome the latter and to speed up the computation, it is advocated to adopt the mechanism of cloud computing as a promising solution: the most popular approach is the MapReduce programming model, a fault-tolerant framework for implementing parallel algorithms for inferring large gene networks. This work presents a practical framework to infer large gene networks, by developing and parallelizing a hybrid GA-PSO optimization method. Our parallel method is extended to work with the Hadoop MapReduce programming model and is executed in different cloud computing environments. To evaluate the proposed approach, we use the well-known open-source software GeneNetWeaver to create several yeast S. cerevisiae sub-networks and use them to produce gene profiles. Experiments have been conducted and the results have been analyzed. They show that our parallel approach can be successfully used to infer networks with desired behaviors and the computation time can be largely reduced. Parallel population-based algorithms can effectively determine network parameters and they perform better than the widely-used sequential algorithms in gene network inference. These parallel algorithms can be distributed to the cloud computing environment to speed up the computation. By coupling the parallel model population-based optimization method and the parallel computational framework, high
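    The record above parallelizes fitness evaluation over a MapReduce-style framework. The sketch below shows only that parallelization idea, using Python's multiprocessing pool as a stand-in for Hadoop MapReduce; the GA-PSO operators and the gene-network model itself are not reproduced, and the fitness function is a hypothetical placeholder.

```python
# A minimal sketch of the parallelization idea only: candidate parameter
# vectors are evaluated ("mapped") in parallel and the results reduced to the
# best candidate.  multiprocessing.Pool is a stand-in for the Hadoop
# MapReduce framework used in the paper.
import random
from multiprocessing import Pool

def fitness(candidate):
    # hypothetical placeholder: squared error of parameters against a target value
    return sum((c - 0.5) ** 2 for c in candidate)

def evaluate_population_in_parallel(population, workers=4):
    with Pool(processes=workers) as pool:
        scores = pool.map(fitness, population)     # "map" step
    return min(zip(scores, population))            # "reduce" step: best (score, candidate)

if __name__ == "__main__":
    pop = [[random.random() for _ in range(10)] for _ in range(64)]
    best_score, best = evaluate_population_in_parallel(pop)
    print(best_score)
```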

  3. New coding technique for computer generated holograms.

    Science.gov (United States)

    Haskell, R. E.; Culver, B. C.

    1972-01-01

    A coding technique is developed for recording computer generated holograms on a computer controlled CRT in which each resolution cell contains two beam spots of equal size and equal intensity. This provides a binary hologram in which only the position of the two dots is varied from cell to cell. The amplitude associated with each resolution cell is controlled by selectively diffracting unwanted light into a higher diffraction order. The recording of the holograms is fast and simple.

  4. Strength Pareto Evolutionary Algorithm using Self-Organizing Data Analysis Techniques

    Directory of Open Access Journals (Sweden)

    Ionut Balan

    2015-03-01

    Full Text Available Multiobjective optimization is widely used in problem solving across a variety of areas. To solve such problems, a set of algorithms has been developed, most of them based on evolutionary techniques. One algorithm from this class that gives quite good results is SPEA2, the method on which the algorithm proposed in this paper is based. The results in this paper are obtained by running the two algorithms on a flow-shop problem.
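    For context, the sketch below shows the objective side of such a flow-shop study: computing the makespan (completion time of the last job on the last machine) of a permutation schedule. The SPEA2-based search machinery of the record is not reproduced, and the processing times are illustrative only.

```python
# A minimal sketch of the permutation flow-shop objective: the makespan of a
# given job order, computed with the standard completion-time recurrence.
def makespan(permutation, proc_times):
    """proc_times[j][m] = processing time of job j on machine m."""
    machines = len(proc_times[0])
    finish = [0.0] * machines                     # rolling completion time per machine
    for job in permutation:
        for m in range(machines):
            start = finish[m] if m == 0 else max(finish[m], finish[m - 1])
            finish[m] = start + proc_times[job][m]
    return finish[-1]

if __name__ == "__main__":
    times = [[3, 2, 4], [1, 5, 2], [4, 1, 3]]     # 3 jobs x 3 machines (illustrative)
    print(makespan([0, 1, 2], times), makespan([1, 0, 2], times))
```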

  5. Epigenetic Tracking, a Method to Generate Arbitrary Shapes By Using Evolutionary-Developmental Techniques

    OpenAIRE

    Fontana, Alessandro

    2008-01-01

    This paper describes an Artificial Embryology method (called ``Epigenetic Tracking'') to generate predefined, arbitrarily shaped 2-dimensional arrays of cells by means of evolutionary techniques. It is based on a model of development, whose key features are: i) the distinction between ``normal'' and ``driver'' cells, the latter being able to receive guidance from the genome, ii) the implementation of the proliferation/apoptosis events in such a way that many cells are created/deleted at once, ...

  6. Computational techniques of the simplex method

    CERN Document Server

    Maros, István

    2003-01-01

    Computational Techniques of the Simplex Method is a systematic treatment focused on the computational issues of the simplex method. It provides a comprehensive coverage of the most important and successful algorithmic and implementation techniques of the simplex method. It is a unique source of essential, never discussed details of algorithmic elements and their implementation. On the basis of the book the reader will be able to create a highly advanced implementation of the simplex method which, in turn, can be used directly or as a building block in other solution algorithms.

  7. Computational intelligence techniques in health care

    CERN Document Server

    Zhou, Wengang; Satheesh, P

    2016-01-01

    This book presents research on emerging computational intelligence techniques and tools, with a particular focus on new trends and applications in health care. Healthcare is a multi-faceted domain, which incorporates advanced decision-making, remote monitoring, healthcare logistics, operational excellence and modern information systems. In recent years, the use of computational intelligence methods to address the scale and the complexity of the problems in healthcare has been investigated. This book discusses various computational intelligence methods that are implemented in applications in different areas of healthcare. It includes contributions by practitioners, technology developers and solution providers.

  8. Investigating the Multi-memetic Mind Evolutionary Computation Algorithm Efficiency

    Directory of Open Access Journals (Sweden)

    M. K. Sakharov

    2017-01-01

    Full Text Available In solving practically significant global optimization problems, the objective function is often of high dimensionality and computational complexity and has a nontrivial landscape as well. Studies show that a single optimization method is often not enough to solve such problems efficiently; hybridization of several optimization methods is necessary. One of the most promising contemporary trends in this field is memetic algorithms (MA), which can be viewed as a combination of a population-based search for a global optimum and procedures for local refinement of solutions (memes), combined so as to provide synergy. Since there are relatively few theoretical studies concerning which MA configuration is advisable for black-box optimization problems, many researchers tend toward adaptive algorithms, which select the most efficient local optimization methods for particular domains of the search space. This article proposes a multi-memetic modification of the simple SMEC algorithm using random hyper-heuristics. The software implementation and the memes used (the Nelder-Mead method, the method of random hyper-sphere surface search, and the Hooke-Jeeves method) are presented, and a comparative study of the efficiency of the proposed algorithm is conducted depending on the set and the number of memes. The study has been carried out using the Rastrigin, Rosenbrock, and Zakharov multidimensional test functions, with computational experiments for all possible combinations of memes and for each meme individually. According to the results of the study, conducted with the multi-start method, the combinations of memes comprising the Hooke-Jeeves method were the most successful. These results reflect the rapid convergence of that method to a local optimum in comparison with the other memes, since all methods perform at most a fixed number of iterations. The analysis of the average number of iterations shows that using the most efficient sets of memes allows us to find the optimal
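    The record above selects memes with a random hyper-heuristic. Below is a hedged sketch of that idea only: at each refinement step a local-search meme is drawn at random from a pool and applied to the current solution. Two simple memes are shown, a compass search in the spirit of Hooke-Jeeves and a random search on a hypersphere surface; the Nelder-Mead meme and the SMEC population dynamics of the paper are not reproduced, and the Rastrigin objective is used only as an example.

```python
# A minimal sketch of a random hyper-heuristic over local-search memes.
import math
import random

def rastrigin(x):
    return 10 * len(x) + sum(v * v - 10 * math.cos(2 * math.pi * v) for v in x)

def compass_meme(x, f, step=0.1):
    # Hooke-Jeeves-style exploratory move: probe +/- step along each coordinate
    best, fb = list(x), f(x)
    for i in range(len(x)):
        for d in (step, -step):
            trial = list(best)
            trial[i] += d
            ft = f(trial)
            if ft < fb:
                best, fb = trial, ft
    return best

def hypersphere_meme(x, f, radius=0.2, samples=20):
    # random search on the surface of a hypersphere around the current point
    best, fb = list(x), f(x)
    for _ in range(samples):
        direction = [random.gauss(0, 1) for _ in x]
        norm = math.sqrt(sum(d * d for d in direction))
        trial = [xi + radius * d / norm for xi, d in zip(x, direction)]
        ft = f(trial)
        if ft < fb:
            best, fb = trial, ft
    return best

def multi_memetic_refine(x, f, iterations=200):
    memes = [compass_meme, hypersphere_meme]
    for _ in range(iterations):
        meme = random.choice(memes)     # random hyper-heuristic: pick a meme at random
        x = meme(x, f)
    return x

if __name__ == "__main__":
    start = [random.uniform(-2, 2) for _ in range(4)]
    print(rastrigin(multi_memetic_refine(start, rastrigin)))
```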

  9. Fuzzy Controller Design Using Evolutionary Techniques for Twin Rotor MIMO System: A Comparative Study

    Directory of Open Access Journals (Sweden)

    H. A. Hashim

    2015-01-01

    Full Text Available This paper presents a comparative study of fuzzy controller design for the twin rotor multi-input multi-output (MIMO) system (TRMS) considering the most promising evolutionary techniques. These are the gravitational search algorithm (GSA), particle swarm optimization (PSO), artificial bee colony (ABC), and differential evolution (DE). In this study, the gains of four fuzzy proportional derivative (PD) controllers for the TRMS have been optimized using the considered techniques. The optimization techniques are developed to identify the optimal control parameters for system stability enhancement, to cancel high nonlinearities in the model, to reduce the coupling effect, and to drive TRMS pitch and yaw angles into the desired tracking trajectory efficiently and accurately. The most effective technique in terms of system response due to different disturbances has been investigated. In this work, it is observed that GSA is the most effective technique in terms of solution quality and convergence speed.
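    To make the tuning loop concrete, the sketch below uses one of the techniques named in the record (PSO) to tune the gains of a plain, non-fuzzy PD controller regulating a toy second-order plant to a unit step. The TRMS dynamics, the fuzzy rule base and the GSA/ABC/DE comparisons are not reproduced; the plant model and all parameter values are assumptions of this sketch.

```python
# A minimal sketch: particle swarm optimization of PD gains (Kp, Kd) that
# minimize the integrated absolute tracking error of a toy damped plant.
import random

def step_response_cost(gains, dt=0.01, steps=600):
    kp, kd = gains
    pos, vel, cost = 0.0, 0.0, 0.0
    for _ in range(steps):
        error = 1.0 - pos                       # unit step reference
        u = kp * error - kd * vel               # PD control law
        acc = u - 0.5 * vel                     # toy plant: damped double integrator
        vel += acc * dt
        pos += vel * dt
        cost += abs(error) * dt                 # integral of absolute error
    return cost

def pso(obj, dim=2, particles=20, iters=100, lo=0.0, hi=20.0):
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(particles)]
    vel = [[0.0] * dim for _ in range(particles)]
    pbest = [list(p) for p in pos]
    pbest_f = [obj(p) for p in pos]
    g = min(range(particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = list(pbest[g]), pbest_f[g]
    for _ in range(iters):
        for i in range(particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            f = obj(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = list(pos[i]), f
                if f < gbest_f:
                    gbest, gbest_f = list(pos[i]), f
    return gbest, gbest_f

if __name__ == "__main__":
    gains, cost = pso(step_response_cost)
    print(gains, cost)
```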

  10. Comparison of radiographic technique by computer simulation

    International Nuclear Information System (INIS)

    Brochi, M.A.C.; Ghilardi Neto, T.

    1989-01-01

    A computational algorithm to compare radiographic techniques (kVp, mAs and filters) is developed, based on fixing the parameters that define the image, such as optical density and contrast. As a demonstration, the results were applied to a chest radiograph. (author) [pt]

  11. Application of computer technique in SMCAMS

    International Nuclear Information System (INIS)

    Lu Deming

    2001-01-01

    A series of applications of computer techniques in SMCAMS physics design and magnetic field measurement is described, including digital calculation of electromagnetic fields, beam dynamics, calculation of beam injection and extraction, and mapping and shaping of the magnetic field

  12. Approximate Computing Techniques for Iterative Graph Algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Panyala, Ajay R.; Subasi, Omer; Halappanavar, Mahantesh; Kalyanaraman, Anantharaman; Chavarria Miranda, Daniel G.; Krishnamoorthy, Sriram

    2017-12-18

    Approximate computing enables processing of large-scale graphs by trading off quality for performance. Approximate computing techniques have become critical not only due to the emergence of parallel architectures but also due to the availability of large-scale datasets enabling data-driven discovery. Using two prototypical graph algorithms, PageRank and community detection, we present several approximate computing heuristics to scale the performance with minimal loss of accuracy. We present several heuristics, including loop perforation, data caching, incomplete graph coloring and synchronization, and evaluate their efficiency. We demonstrate performance improvements of up to 83% for PageRank and up to 450x for community detection, with low impact on accuracy for both algorithms. We expect the proposed approximate techniques will enable scalable graph analytics on data of importance to several applications in science and their subsequent adoption to scale similar graph algorithms.
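    The sketch below illustrates one of the heuristics named in the record, loop perforation, applied to PageRank: in each iteration a fixed fraction of edge updates is skipped, trading a small loss of accuracy for less work. The toy graph, the perforation rate and the rescaling step are illustrative assumptions, not the paper's implementation.

```python
# A minimal sketch of loop perforation for PageRank: skip a random fraction
# of edge contributions per iteration and rescale to compensate.
import random

def pagerank_perforated(edges, n, skip_rate=0.3, damping=0.85, iters=50):
    out_deg = [0] * n
    for u, _ in edges:
        out_deg[u] += 1
    rank = [1.0 / n] * n
    for _ in range(iters):
        contrib = [0.0] * n
        for u, v in edges:
            if random.random() < skip_rate:      # loop perforation: skip this update
                continue
            contrib[v] += rank[u] / out_deg[u]
        scale = 1.0 / (1.0 - skip_rate)          # heuristic rescaling for skipped edges
        rank = [(1 - damping) / n + damping * scale * c for c in contrib]
    return rank

if __name__ == "__main__":
    graph = [(0, 1), (1, 2), (2, 0), (2, 1), (3, 2), (0, 3)]
    print(pagerank_perforated(graph, n=4, skip_rate=0.0))   # exact baseline
    print(pagerank_perforated(graph, n=4, skip_rate=0.3))   # perforated, approximate
```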

  13. Practical techniques for pediatric computed tomography

    International Nuclear Information System (INIS)

    Fitz, C.R.; Harwood-Nash, D.C.; Kirks, D.R.; Kaufman, R.A.; Berger, P.E.; Kuhn, J.P.; Siegel, M.J.

    1983-01-01

    Dr. Donald Kirks has assembled this section on Practical Techniques for Pediatric Computed Tomography. The material is based on a presentation in the Special Interest session at the 25th Annual Meeting of the Society for Pediatric Radiology in New Orleans, Louisiana, USA in 1982. Meticulous attention to detail and technique is required to ensure an optimal CT examination. CT techniques specifically applicable to infants and children have not been disseminated in the radiology literature and in this respect it may rightly be observed that ''the child is not a small adult''. What follows is a ''cookbook'' prepared by seven participants and it is printed in Pediatric Radiology, in outline form, as a statement of individual preferences for pediatric CT techniques. This outline gives concise explanation of techniques and permits prompt dissemination of information. (orig.)

  14. Operator support system using computational intelligence techniques

    Energy Technology Data Exchange (ETDEWEB)

    Bueno, Elaine Inacio, E-mail: ebueno@ifsp.edu.br [Instituto Federal de Educacao, Ciencia e Tecnologia de Sao Paulo (IFSP), Sao Paulo, SP (Brazil); Pereira, Iraci Martinez, E-mail: martinez@ipen.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)

    2015-07-01

    Computational Intelligence Systems have been widely applied in Monitoring and Fault Detection Systems in several processes and in different kinds of applications. These systems use interdependent components ordered in modules. It is a typical behavior of such systems to ensure early detection and diagnosis of faults. Monitoring and Fault Detection Techniques can be divided into two categories: estimative and pattern recognition methods. The estimative methods use a mathematical model which describes the process behavior. The pattern recognition methods use a database to describe the process. In this work, an operator support system using Computational Intelligence Techniques was developed. This system shows the information obtained by different CI techniques in order to help operators make decisions in real time and guide them in the fault diagnosis before the normal alarm limits are reached. (author)

  15. Operator support system using computational intelligence techniques

    International Nuclear Information System (INIS)

    Bueno, Elaine Inacio; Pereira, Iraci Martinez

    2015-01-01

    Computational Intelligence Systems have been widely applied in Monitoring and Fault Detection Systems in several processes and in different kinds of applications. These systems use interdependent components ordered in modules. It is a typical behavior of such systems to ensure early detection and diagnosis of faults. Monitoring and Fault Detection Techniques can be divided into two categories: estimative and pattern recognition methods. The estimative methods use a mathematical model which describes the process behavior. The pattern recognition methods use a database to describe the process. In this work, an operator support system using Computational Intelligence Techniques was developed. This system shows the information obtained by different CI techniques in order to help operators make decisions in real time and guide them in the fault diagnosis before the normal alarm limits are reached. (author)

  16. Computed Radiography: An Innovative Inspection Technique

    International Nuclear Information System (INIS)

    Klein, William A.; Councill, Donald L.

    2002-01-01

    Florida Power and Light Company's (FPL) Nuclear Division combined two diverse technologies to create an innovative inspection technique, Computed Radiography, that improves personnel safety and unit reliability while reducing inspection costs. This technique was pioneered in the medical field and applied in the Nuclear Division initially to detect piping degradation due to flow-accelerated corrosion. Component degradation can be detected by this additional technique. This approach permits FPL to reduce inspection costs, perform on line examinations (no generation curtailment), and to maintain or improve both personnel safety and unit reliability. Computed Radiography is a very versatile tool capable of other uses: - improving the external corrosion program by permitting inspections underneath insulation, and - diagnosing system and component problems such as valve positions, without the need to shutdown or disassemble the component. (authors)

  17. An Encoding Technique for Multiobjective Evolutionary Algorithms Applied to Power Distribution System Reconfiguration

    Directory of Open Access Journals (Sweden)

    J. L. Guardado

    2014-01-01

    Full Text Available Network reconfiguration is an alternative to reduce power losses and optimize the operation of power distribution systems. In this paper, an encoding scheme for evolutionary algorithms is proposed in order to search efficiently for the Pareto-optimal solutions during the reconfiguration of power distribution systems considering multiobjective optimization. The encoding scheme is based on the edge window decoder (EWD) technique, which was embedded in the Strength Pareto Evolutionary Algorithm 2 (SPEA2) and the Nondominated Sorting Genetic Algorithm II (NSGA-II). The effectiveness of the encoding scheme was proved by solving a test problem for which the true Pareto-optimal solutions are known in advance. In order to prove the practicability of the encoding scheme, a real distribution system was used to find the near Pareto-optimal solutions for different objective functions to optimize.

  18. An encoding technique for multiobjective evolutionary algorithms applied to power distribution system reconfiguration.

    Science.gov (United States)

    Guardado, J L; Rivas-Davalos, F; Torres, J; Maximov, S; Melgoza, E

    2014-01-01

    Network reconfiguration is an alternative to reduce power losses and optimize the operation of power distribution systems. In this paper, an encoding scheme for evolutionary algorithms is proposed in order to search efficiently for the Pareto-optimal solutions during the reconfiguration of power distribution systems considering multiobjective optimization. The encoding scheme is based on the edge window decoder (EWD) technique, which was embedded in the Strength Pareto Evolutionary Algorithm 2 (SPEA2) and the Nondominated Sorting Genetic Algorithm II (NSGA-II). The effectiveness of the encoding scheme was proved by solving a test problem for which the true Pareto-optimal solutions are known in advance. In order to prove the practicability of the encoding scheme, a real distribution system was used to find the near Pareto-optimal solutions for different objective functions to optimize.

  19. Computational Intelligence Techniques for New Product Design

    CERN Document Server

    Chan, Kit Yan; Dillon, Tharam S

    2012-01-01

    Applying computational intelligence to product design is a fast-growing and promising research area in computer science and industrial engineering. However, there is currently a lack of books that discuss this research area. This book discusses a wide range of computational intelligence techniques for implementation in product design. It covers common issues in product design, from identification of customer requirements, determination of the importance of customer requirements, determination of optimal design attributes, relating design attributes and customer satisfaction, integration of marketing aspects into product design, and affective product design, to quality control of new products. Approaches for refinement of computational intelligence are discussed in order to address different issues in product design. Case studies of product design, in terms of the development of real-world new products, are included in order to illustrate the design procedures, as well as the effectiveness of the com...

  20. New computing techniques in physics research

    International Nuclear Information System (INIS)

    Becks, Karl-Heinz; Perret-Gallix, Denis

    1994-01-01

    New techniques were highlighted by the ''Third International Workshop on Software Engineering, Artificial Intelligence and Expert Systems for High Energy and Nuclear Physics'' in Oberammergau, Bavaria, Germany, from October 4 to 8. It was the third workshop in the series; the first was held in Lyon in 1990 and the second at a France-Telecom site near La Londe les Maures in 1992. This series of workshops covers a broad spectrum of problems. New, highly sophisticated experiments demand new computing techniques, in hardware as well as in software. Software Engineering Techniques could in principle satisfy the needs of forthcoming accelerator experiments. The growing complexity of detector systems demands new techniques in experimental error diagnosis and repair suggestions; Expert Systems seem to offer a way of assisting the experimental crew during data-taking

  1. Artificial Intelligence, Evolutionary Computing and Metaheuristics In the Footsteps of Alan Turing

    CERN Document Server

    2013-01-01

    Alan Turing pioneered many research areas such as artificial intelligence, computability, heuristics and pattern formation. Nowadays, in the information age, it is hard to imagine how the world would be without computers and the Internet. Without Turing's work, especially the core concept of the Turing Machine at the heart of every computer, mobile phone and microchip today, so many things on which we are so dependent would be impossible. 2012 is the Alan Turing year -- a centenary celebration of the life and work of Alan Turing. To celebrate Turing's legacy and follow in the footsteps of this brilliant mind, we take this golden opportunity to review the latest developments in the areas of artificial intelligence, evolutionary computation and metaheuristics, all of which can be traced back to Turing's pioneering work. Topics include the Turing test, the Turing machine, artificial intelligence, cryptography, software testing, image processing, neural networks, and nature-inspired algorithms such as the bat algorithm and cuckoo sear...

  2. Computer technique for evaluating collimator performance

    International Nuclear Information System (INIS)

    Rollo, F.D.

    1975-01-01

    A computer program has been developed to theoretically evaluate the overall performance of collimators used with radioisotope scanners and γ cameras. The first step of the program involves the determination of the line spread function (LSF) and geometrical efficiency from the fundamental parameters of the collimator being evaluated. The working equations can be applied to any plane of interest. The resulting LSF is passed to subroutine programs which compute the corresponding modulation transfer function and contrast efficiency functions. The latter function is then combined with appropriate geometrical efficiency data to determine the performance index function. The overall computer program allows one to predict, from the physical parameters of the collimator alone, how well the collimator will reproduce various sized spherical voids of activity in the image plane. The collimator performance program can be used to compare the performance of various collimator types, to study the effects of source depth on collimator performance, and to assist in the design of collimators. The theory of the collimator performance equation is discussed, a comparison between the experimental and theoretical LSF values is made, and examples of the application of the technique are presented
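
    The second stage of such a program, going from the line spread function to the modulation transfer function, is essentially a normalized Fourier transform. A minimal numpy sketch is given below; the Gaussian LSF is only a stand-in for the collimator-specific LSF the program derives:

      import numpy as np

      def mtf_from_lsf(lsf: np.ndarray, dx_mm: float):
          """Return spatial frequencies (cycles/mm) and the MTF, i.e. the magnitude
          of the Fourier transform of the LSF normalized to 1 at zero frequency."""
          spectrum = np.abs(np.fft.rfft(lsf))
          freqs = np.fft.rfftfreq(lsf.size, d=dx_mm)
          return freqs, spectrum / spectrum[0]

      # Stand-in LSF: a Gaussian with 8 mm FWHM sampled every 0.5 mm
      dx = 0.5
      x = np.arange(-40.0, 40.0, dx)
      lsf = np.exp(-0.5 * (x / (8.0 / 2.355)) ** 2)

      freqs, mtf = mtf_from_lsf(lsf, dx)
      print(freqs[:3], mtf[:3])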

  3. Artificial intelligence in peer review: How can evolutionary computation support journal editors?

    Science.gov (United States)

    Mrowinski, Maciej J; Fronczak, Piotr; Fronczak, Agata; Ausloos, Marcel; Nedic, Olgica

    2017-01-01

    With the volume of manuscripts submitted for publication growing every year, the deficiencies of peer review (e.g. long review times) are becoming more apparent. Editorial strategies, sets of guidelines designed to speed up the process and reduce editors' workloads, are treated as trade secrets by publishing houses and are not shared publicly. To improve the effectiveness of their strategies, editors in small publishing groups are faced with undertaking an iterative trial-and-error approach. We show that Cartesian Genetic Programming, a nature-inspired evolutionary algorithm, can dramatically improve editorial strategies. The artificially evolved strategy reduced the duration of the peer review process by 30%, without increasing the pool of reviewers (in comparison to a typical human-developed strategy). Evolutionary computation has typically been used in technological processes or biological ecosystems. Our results demonstrate that genetic programs can improve real-world social systems that are usually much harder to understand and control than physical systems.

  4. Artificial intelligence in peer review: How can evolutionary computation support journal editors?

    Directory of Open Access Journals (Sweden)

    Maciej J Mrowinski

    Full Text Available With the volume of manuscripts submitted for publication growing every year, the deficiencies of peer review (e.g. long review times) are becoming more apparent. Editorial strategies, sets of guidelines designed to speed up the process and reduce editors' workloads, are treated as trade secrets by publishing houses and are not shared publicly. To improve the effectiveness of their strategies, editors in small publishing groups are faced with undertaking an iterative trial-and-error approach. We show that Cartesian Genetic Programming, a nature-inspired evolutionary algorithm, can dramatically improve editorial strategies. The artificially evolved strategy reduced the duration of the peer review process by 30%, without increasing the pool of reviewers (in comparison to a typical human-developed strategy). Evolutionary computation has typically been used in technological processes or biological ecosystems. Our results demonstrate that genetic programs can improve real-world social systems that are usually much harder to understand and control than physical systems.
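
    As a flavour of the representation behind Cartesian Genetic Programming, the sketch below builds, evaluates and mutates a single-row CGP genome in Python. The function set, node count and toy inputs are arbitrary illustrative choices, and no claim is made that they resemble the configuration used in the peer-review study:

      import random
      import operator

      # Illustrative function set; real CGP applications choose domain-specific primitives.
      FUNCS = [operator.add, operator.sub, operator.mul, max]
      N_INPUTS, N_NODES = 2, 6

      def random_genome():
          """Each node gene is (function index, input index a, input index b);
          a node may only read from the program inputs or earlier nodes."""
          genome = []
          for node in range(N_NODES):
              max_src = N_INPUTS + node
              genome.append((random.randrange(len(FUNCS)),
                             random.randrange(max_src),
                             random.randrange(max_src)))
          output_gene = random.randrange(N_INPUTS + N_NODES)
          return genome, output_gene

      def evaluate(genome, output_gene, inputs):
          values = list(inputs)
          for func_idx, a, b in genome:
              values.append(FUNCS[func_idx](values[a], values[b]))
          return values[output_gene]

      def mutate(genome, output_gene, rate=0.2):
          """Point mutation, as typically used in a (1 + lambda) CGP search."""
          new = []
          for node, (f, a, b) in enumerate(genome):
              max_src = N_INPUTS + node
              if random.random() < rate:
                  f = random.randrange(len(FUNCS))
              if random.random() < rate:
                  a = random.randrange(max_src)
              if random.random() < rate:
                  b = random.randrange(max_src)
              new.append((f, a, b))
          if random.random() < rate:
              output_gene = random.randrange(N_INPUTS + N_NODES)
          return new, output_gene

      genome, out = random_genome()
      print(evaluate(genome, out, (3.0, 4.0)))
      print(evaluate(*mutate(genome, out), (3.0, 4.0)))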

  5. Genomicus 2018: karyotype evolutionary trees and on-the-fly synteny computing.

    Science.gov (United States)

    Nguyen, Nga Thi Thuy; Vincens, Pierre; Roest Crollius, Hugues; Louis, Alexandra

    2018-01-04

    Since 2010, the Genomicus web server is available online at http://genomicus.biologie.ens.fr/genomicus. This graphical browser provides access to comparative genomic analyses in four different phyla (Vertebrate, Plants, Fungi, and non vertebrate Metazoans). Users can analyse genomic information from extant species, as well as ancestral gene content and gene order for vertebrates and flowering plants, in an integrated evolutionary context. New analyses and visualization tools have recently been implemented in Genomicus Vertebrate. Karyotype structures from several genomes can now be compared along an evolutionary pathway (Multi-KaryotypeView), and synteny blocks can be computed and visualized between any two genomes (PhylDiagView). © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  6. Bayer Digester Optimization Studies using Computer Techniques

    Science.gov (United States)

    Kotte, Jan J.; Schleider, Victor H.

    Theoretically required heat transfer performance by the multistaged flash heat reclaim system of a high pressure Bayer digester unit is determined for various conditions of discharge temperature, excess flash vapor and indirect steam addition. Solution of simultaneous heat balances around the digester vessels and the heat reclaim system yields the magnitude of available heat for representation of each case on a temperature-enthalpy diagram, where graphical fit of the number of flash stages fixes the heater requirements. Both the heat balances and the trial-and-error graphical solution are adapted to solution by digital computer techniques.

  7. Measuring techniques in emission computed tomography

    International Nuclear Information System (INIS)

    Jordan, K.; Knoop, B.

    1988-01-01

    The chapter reviews the historical development of emission computed tomography and its basic principles, proceeds to SPECT, PET and special techniques of emission tomography, and concludes with a comprehensive discussion of the mathematical fundamentals of reconstruction and quantitative activity determination in vivo, dealing with the Radon transformation and the projection slice theorem; methods of image reconstruction such as analytical and algebraic methods; and limiting conditions in real systems such as a limited number of measured data, noise enhancement, absorption, stray radiation, and random coincidence. (orig./HP) With 111 figs., 6 tabs [de

  8. Mathematics in computed tomography and related techniques

    International Nuclear Information System (INIS)

    Sawicka, B.

    1992-01-01

    The mathematical basis of computed tomography (CT) was formulated in 1917 by Radon. His theorem states that the 2-D function f(x,y) can be determined at all points from a complete set of its line integrals. Modern methods of image reconstruction include three approaches: algebraic reconstruction techniques with simultaneous iterative reconstruction or simultaneous algebraic reconstruction; convolution back projection; and the Fourier transform method. There is no one best approach. Because the experimental data do not strictly satisfy theoretical models, a number of effects have to be taken into account; in particular, the problems of beam geometry, finite beam dimensions and distribution, beam scattering, and the radiation source spectrum. Tomography with truncated data is of interest, employing mathematical approximations to compensate for the unmeasured projection data. Mathematical techniques in image processing and data analysis are also extensively used. 13 refs
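
    For reference, Radon's line-integral transform and the projection (Fourier) slice theorem underlying the Fourier transform method can be written as follows (standard textbook definitions, not notation reproduced from the record):

      \[
        p_\theta(s) \;=\; \int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty}
            f(x,y)\,\delta(x\cos\theta + y\sin\theta - s)\,dx\,dy ,
      \]
      \[
        \hat{p}_\theta(\omega) \;=\; \int_{-\infty}^{\infty} p_\theta(s)\,e^{-i\omega s}\,ds
            \;=\; \hat{f}(\omega\cos\theta,\ \omega\sin\theta),
      \]

    i.e. the 1-D Fourier transform of a projection at angle θ is a central slice of the 2-D Fourier transform of the object, which is the property exploited both by the Fourier transform method and, after filtering, by convolution back projection.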

  9. Multi-objective optimization of HVAC system with an evolutionary computation algorithm

    International Nuclear Information System (INIS)

    Kusiak, Andrew; Tang, Fan; Xu, Guanglin

    2011-01-01

    A data-mining approach for the optimization of a HVAC (heating, ventilation, and air conditioning) system is presented. A predictive model of the HVAC system is derived by data-mining algorithms, using a dataset collected from an experiment conducted at a research facility. To minimize the energy while maintaining the corresponding IAQ (indoor air quality) within a user-defined range, a multi-objective optimization model is developed. The solutions of this model are set points of the control system derived with an evolutionary computation algorithm. The controllable input variables - supply air temperature and supply air duct static pressure set points - are generated to reduce the energy use. The results produced by the evolutionary computation algorithm show that the control strategy saves energy by optimizing operations of an HVAC system. -- Highlights: → A data-mining approach for the optimization of a heating, ventilation, and air conditioning (HVAC) system is presented. → The data used in the project has been collected from an experiment conducted at an energy research facility. → The approach presented in the paper leads to accomplishing significant energy savings without compromising the indoor air quality. → The energy savings are accomplished by computing set points for the supply air temperature and the supply air duct static pressure.

  10. Assessment of traffic noise levels in urban areas using different soft computing techniques.

    Science.gov (United States)

    Tomić, J; Bogojević, N; Pljakić, M; Šumarac-Pavlović, D

    2016-10-01

    Available traffic noise prediction models are usually based on regression analysis of experimental data, and this paper presents the application of soft computing techniques to traffic noise prediction. Two mathematical models are proposed and their predictions are compared to data collected by traffic noise monitoring in urban areas, as well as to the predictions of commonly used traffic noise models. The results show that the application of evolutionary algorithms and neural networks may improve both the development process and the accuracy of traffic noise prediction.

  11. Discovering Unique, Low-Energy Transition States Using Evolutionary Molecular Memetic Computing

    DEFF Research Database (Denmark)

    Ellabaan, Mostafa M Hashim; Ong, Y.S.; Handoko, S.D.

    2013-01-01

    In the last few decades, identification of transition states has experienced significant growth in research interests from various scientific communities. As per the transition states theory, reaction paths and landscape analysis as well as many thermodynamic properties of biochemical systems can be accurately identified through the transition states. Transition states describe the paths of molecular systems in transiting across stable states. In this article, we present the discovery of unique, low-energy transition states and showcase the efficacy of their identification using the memetic computing paradigm under a Molecular Memetic Computing (MMC) framework. In essence, the MMC is equipped with the tree-based representation of non-cyclic molecules and the covalent-bond-driven evolutionary operators, in addition to the typical backbone of memetic algorithms. Herein, we employ genetic algorithm

  12. Parallel computing techniques for rotorcraft aerodynamics

    Science.gov (United States)

    Ekici, Kivanc

    The modification of unsteady three-dimensional Navier-Stokes codes for application on massively parallel and distributed computing environments is investigated. The Euler/Navier-Stokes code TURNS (Transonic Unsteady Rotor Navier-Stokes) was chosen as a test bed because of its wide use by universities and industry. For the efficient implementation of TURNS on parallel computing systems, two algorithmic changes are developed. First, modifications to the implicit operator, the Lower-Upper Symmetric Gauss-Seidel (LU-SGS) scheme originally used in TURNS, are performed. Second, an inexact Newton method, coupled with a Krylov subspace iterative method (Newton-Krylov method), is applied. Both techniques had been tried previously for the Euler equations mode of the code; in this work, we extend the methods to the Navier-Stokes mode. Several new implicit operators were tried because of convergence problems of traditional operators with the high cell aspect ratio (CAR) grids needed for viscous calculations on structured grids. Promising results for both Euler and Navier-Stokes cases are presented for these operators. For the efficient implementation of Newton-Krylov methods in the Navier-Stokes mode of TURNS, efficient preconditioners must be used. The parallel implicit operators used in the previous step are employed as preconditioners and the results are compared. The Message Passing Interface (MPI) protocol has been used because of its portability to various parallel architectures. It should be noted that the proposed methodology is general and can be applied to several other CFD codes (e.g. OVERFLOW).
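
    The core of a Jacobian-free Newton-Krylov iteration of the kind mentioned above can be sketched in a few lines of Python with SciPy, using finite-difference Jacobian-vector products inside GMRES. The two-equation residual below is purely illustrative and is unrelated to the TURNS flow equations:

      import numpy as np
      from scipy.sparse.linalg import LinearOperator, gmres

      def residual(u):
          """Illustrative nonlinear system F(u) = 0 (not the Navier-Stokes residual)."""
          return np.array([u[0]**2 + u[1] - 3.0,
                           u[0] + u[1]**2 - 5.0])

      def jfnk(u, tol=1e-10, max_newton=20, eps=1e-7):
          """Inexact Newton: each step solves J du = -F with GMRES, where the
          Jacobian-vector product is approximated by J v ~ (F(u + eps v) - F(u)) / eps."""
          for _ in range(max_newton):
              F = residual(u)
              if np.linalg.norm(F) < tol:
                  break
              jv = lambda v: (residual(u + eps * v) - F) / eps
              J = LinearOperator((u.size, u.size), matvec=jv)
              du, _ = gmres(J, -F, atol=1e-12)
              u = u + du
          return u

      print(jfnk(np.array([1.0, 1.0])))   # converges to approximately (1, 2)

    In a production CFD code the unpreconditioned GMRES call above would be replaced by a preconditioned one, which is exactly where the parallel implicit operators discussed in the record come in.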

  13. A heuristic ranking approach on capacity benefit margin determination using Pareto-based evolutionary programming technique.

    Science.gov (United States)

    Othman, Muhammad Murtadha; Abd Rahman, Nurulazmi; Musirin, Ismail; Fotuhi-Firuzabad, Mahmud; Rajabi-Ghahnavieh, Abbas

    2015-01-01

    This paper introduces a novel multiobjective approach for capacity benefit margin (CBM) assessment taking into account tie-line reliability of interconnected systems. CBM is the imperative information utilized as a reference by the load-serving entities (LSE) to estimate a certain margin of transfer capability so that a reliable access to generation through interconnected system could be attained. A new Pareto-based evolutionary programming (EP) technique is used to perform a simultaneous determination of CBM for all areas of the interconnected system. The selection of CBM at the Pareto optimal front is proposed to be performed by referring to a heuristic ranking index that takes into account system loss of load expectation (LOLE) in various conditions. Eventually, the power transfer based available transfer capability (ATC) is determined by considering the firm and nonfirm transfers of CBM. A comprehensive set of numerical studies are conducted on the modified IEEE-RTS79 and the performance of the proposed method is numerically investigated in detail. The main advantage of the proposed technique is in terms of flexibility offered to an independent system operator in selecting an appropriate solution of CBM simultaneously for all areas.

  14. A Heuristic Ranking Approach on Capacity Benefit Margin Determination Using Pareto-Based Evolutionary Programming Technique

    Directory of Open Access Journals (Sweden)

    Muhammad Murtadha Othman

    2015-01-01

    Full Text Available This paper introduces a novel multiobjective approach for capacity benefit margin (CBM) assessment taking into account tie-line reliability of interconnected systems. CBM is the imperative information utilized as a reference by the load-serving entities (LSE) to estimate a certain margin of transfer capability so that a reliable access to generation through interconnected system could be attained. A new Pareto-based evolutionary programming (EP) technique is used to perform a simultaneous determination of CBM for all areas of the interconnected system. The selection of CBM at the Pareto optimal front is proposed to be performed by referring to a heuristic ranking index that takes into account system loss of load expectation (LOLE) in various conditions. Eventually, the power transfer based available transfer capability (ATC) is determined by considering the firm and nonfirm transfers of CBM. A comprehensive set of numerical studies are conducted on the modified IEEE-RTS79 and the performance of the proposed method is numerically investigated in detail. The main advantage of the proposed technique is in terms of flexibility offered to an independent system operator in selecting an appropriate solution of CBM simultaneously for all areas.
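
    The evolutionary programming backbone of such an approach (Gaussian mutation of real-valued candidates, no recombination, survivor selection from parents plus offspring) can be illustrated with a minimal sketch. The sphere function below merely stands in for the CBM/LOLE objective, which would require a full reliability model of the interconnected system:

      import random

      def objective(x):
          """Stand-in cost to minimize (sphere function), not a CBM/LOLE model."""
          return sum(v * v for v in x)

      def evolve(dim=4, mu=20, generations=200, sigma=0.3):
          """Minimal (mu + mu) evolutionary programming loop."""
          pop = [[random.uniform(-5.0, 5.0) for _ in range(dim)] for _ in range(mu)]
          for _ in range(generations):
              offspring = [[v + random.gauss(0.0, sigma) for v in ind] for ind in pop]
              pool = pop + offspring
              pool.sort(key=objective)   # classical EP uses stochastic tournaments;
              pop = pool[:mu]            # plain truncation keeps the sketch short
          return pop[0]

      best = evolve()
      print(best, objective(best))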

  15. New computing techniques in physics research

    International Nuclear Information System (INIS)

    Perret-Gallix, D.; Wojcik, W.

    1990-01-01

    These proceedings relate in a pragmatic way the use of methods and techniques of software engineering and artificial intelligence in high energy and nuclear physics. Such fundamental research can only be done through the design, building and running of equipment and systems that are among the most complex ever undertaken by mankind. The use of these new methods is mandatory in such an environment. However, their proper integration in these real applications raises some unsolved problems, whose solution, beyond the research field, will lead to a better understanding of some fundamental aspects of software engineering and artificial intelligence. Here is a sample of the subjects covered in the proceedings: software engineering in a multi-user, multi-version, multi-system environment; project management; software validation and quality control; data structure and management; object-oriented languages; multi-language applications; interactive data analysis; expert systems for diagnosis; expert systems for real-time applications; neural networks for pattern recognition; and symbolic manipulation for automatic computation of complex processes

  16. Computational Evolutionary Methodology for Knowledge Discovery and Forecasting in Epidemiology and Medicine

    International Nuclear Information System (INIS)

    Rao, Dhananjai M.; Chernyakhovsky, Alexander; Rao, Victoria

    2008-01-01

    Humanity is facing an increasing number of highly virulent and communicable diseases such as avian influenza. Researchers believe that avian influenza has potential to evolve into one of the deadliest pandemics. Combating these diseases requires in-depth knowledge of their epidemiology. An effective methodology for discovering epidemiological knowledge is to utilize a descriptive, evolutionary, ecological model and use bio-simulations to study and analyze it. These types of bio-simulations fall under the category of computational evolutionary methods because the individual entities participating in the simulation are permitted to evolve in a natural manner by reacting to changes in the simulated ecosystem. This work describes the application of the aforementioned methodology to discover epidemiological knowledge about avian influenza using a novel eco-modeling and bio-simulation environment called SEARUMS. The mathematical principles underlying SEARUMS, its design, and the procedure for using SEARUMS are discussed. The bio-simulations and multi-faceted case studies conducted using SEARUMS elucidate its ability to pinpoint timelines, epicenters, and socio-economic impacts of avian influenza. This knowledge is invaluable for proactive deployment of countermeasures in order to minimize negative socioeconomic impacts, combat the disease, and avert a pandemic

  17. Towards a Population Dynamics Theory for Evolutionary Computing: Learning from Biological Population Dynamics in Nature

    Science.gov (United States)

    Ma, Zhanshan (Sam)

    In evolutionary computing (EC), population size is one of the critical parameters that a researcher has to deal with. Hence, it was no surprise that the pioneers of EC, such as De Jong (1975) and Holland (1975), had already studied population sizing from the very beginning of EC. What is perhaps surprising is that more than three decades later, we still largely depend on experience or an ad-hoc trial-and-error approach to set the population size. For example, in a recent monograph, Eiben and Smith (2003) indicated: "In almost all EC applications, the population size is constant and does not change during the evolutionary search." Despite enormous research on this issue in recent years, we still lack a well accepted theory for population sizing. In this paper, I propose to develop a population dynamics theory for EC with inspiration from the population dynamics theory of biological populations in nature. Essentially, the EC population is considered as a dynamic system over time (generations) and space (search space or fitness landscape), similar to the spatial and temporal dynamics of biological populations in nature. With this conceptual mapping, I propose to 'transplant' the biological population dynamics theory to EC via three steps: (i) experimentally test the feasibility—whether or not emulating natural population dynamics improves the EC performance; (ii) comparatively study the underlying mechanisms—why there are improvements, primarily via statistical modeling analysis; (iii) conduct theoretical analysis with theoretical models such as percolation theory and extended evolutionary game theory that are generally applicable to both EC and natural populations. This article is a summary of a series of studies we have performed to achieve the general goal [27][30]-[32]. In the following, I start with an extremely brief introduction on the theory and models of natural population dynamics (Sections 1 & 2). In Sections 4 to 6, I briefly discuss three

  18. Introducing E-Learning in a Norwegian Service Company with Participatory Design and Evolutionary Prototyping Techniques

    OpenAIRE

    Mørch, Anders I.; Engen, Bård Ketil; Hansen Åsand, Hege-René; Brynhildsen, Camilla; Tødenes, Ida

    2004-01-01

    Over a 2-year period, we have participated in the introduction of e-learning in a Norwegian service company, a gas station division of an oil company. This company has an advanced computer network infrastructure for communication and information sharing, but the primary task of the employees is serving customers. We identify some challenges to introducing e-learning in this kind of environment. A primary emphasis has been on using participatory design techniques during the planning stages and...

  19. Industrial Applications of Evolutionary Algorithms

    CERN Document Server

    Sanchez, Ernesto; Tonda, Alberto

    2012-01-01

    This book is intended as a reference both for experienced users of evolutionary algorithms and for researchers that are beginning to approach these fascinating optimization techniques. Experienced users will find interesting details of real-world problems, and advice on solving issues related to fitness computation, modeling and setting appropriate parameters to reach optimal solutions. Beginners will find a thorough introduction to evolutionary computation, and a complete presentation of all evolutionary algorithms exploited to solve different problems. The book could fill the gap between the

  20. Techniques and environments for big data analysis parallel, cloud, and grid computing

    CERN Document Server

    Dehuri, Satchidananda; Kim, Euiwhan; Wang, Gi-Name

    2016-01-01

    This volume is aimed at a wide range of readers and researchers in the area of Big Data, presenting the recent advances in the field of Big Data Analysis, as well as the techniques and tools used to analyze it. The book includes 10 distinct chapters providing a concise introduction to Big Data Analysis and recent Techniques and Environments for Big Data Analysis. It gives insight into how the expensive fitness evaluation of evolutionary learning can play a vital role in big data analysis by adopting Parallel, Grid, and Cloud computing environments.

  1. On techniques of ATR lattice computation

    International Nuclear Information System (INIS)

    1997-08-01

    Lattice computation determines the average nuclear constants of a unit fuel lattice, which are required for computing core nuclear characteristics such as the core power distribution and reactivity characteristics. The main nuclear constants are the infinite multiplication factor, the neutron migration area, the cross sections for diffusion computation, the local power distribution and the isotope composition. For the lattice computation code, WIMS-ATR is used, which is based on the WIMS-D code developed in the U.K.; to improve the accuracy of the analysis, it was extended with a heavy water scattering cross section whose temperature dependence follows the Honeck model. For the computation of neutron absorption by control rods, the LOIEL BLUE code is used. The extrapolation distance of the neutron flux on control rod surfaces is computed using the THERMOS and DTF codes, and the lattice constants of adjoining lattices are computed using the WIMS-ATR code. For the WIMS-ATR code, the computation flow and nuclear data library are explained, and for the LOIEL BLUE code, the computation flow. The local power distribution in fuel assemblies determined by the WIMS-ATR code was verified against measured data, and the results are reported. (K.I.)

  2. Methods and experimental techniques in computer engineering

    CERN Document Server

    Schiaffonati, Viola

    2014-01-01

    Computing and science reveal a synergic relationship. On the one hand, it is widely evident that computing plays an important role in the scientific endeavor. On the other hand, the role of scientific method in computing is getting increasingly important, especially in providing ways to experimentally evaluate the properties of complex computing systems. This book critically presents these issues from a unitary conceptual and methodological perspective by addressing specific case studies at the intersection between computing and science. The book originates from, and collects the experience of, a course for PhD students in Information Engineering held at the Politecnico di Milano. Following the structure of the course, the book features contributions from some researchers who are working at the intersection between computing and science.

  3. New Information Dispersal Techniques for Trustworthy Computing

    Science.gov (United States)

    Parakh, Abhishek

    2011-01-01

    Information dispersal algorithms (IDA) are used for distributed data storage because they simultaneously provide security, reliability and space efficiency, constituting a trustworthy computing framework for many critical applications, such as cloud computing, in the information society. In the most general sense, this is achieved by dividing data…

  4. Teachers of Advertising Media Courses Describe Techniques, Show Computer Applications.

    Science.gov (United States)

    Lancaster, Kent M.; Martin, Thomas C.

    1989-01-01

    Reports on a survey of university advertising media teachers regarding textbooks and instructional aids used, teaching techniques, computer applications, student placement, instructor background, and faculty publishing. (SR)

  5. Intention recognition, commitment and their roles in the evolution of cooperation from artificial intelligence techniques to evolutionary game theory models

    CERN Document Server

    Han, The Anh

    2013-01-01

    This original and timely monograph describes a unique self-contained excursion that reveals to the readers the roles of two basic cognitive abilities, i.e. intention recognition and arranging commitments, in the evolution of cooperative behavior. This book analyses intention recognition, an important ability that helps agents predict others’ behavior, in its artificial intelligence and evolutionary computational modeling aspects, and proposes a novel intention recognition method. Furthermore, the book presents a new framework for intention-based decision making and illustrates several ways in which an ability to recognize intentions of others can enhance a decision making process. By employing the new intention recognition method and the tools of evolutionary game theory, this book introduces computational models demonstrating that intention recognition promotes the emergence of cooperation within populations of self-regarding agents. Finally, the book describes how commitment provides a pathway to the evol...

  6. Cloud Computing Techniques for Space Mission Design

    Science.gov (United States)

    Arrieta, Juan; Senent, Juan

    2014-01-01

    The overarching objective of space mission design is to tackle complex problems producing better results, and faster. In developing the methods and tools to fulfill this objective, the user interacts with the different layers of a computing system.

  7. Bringing Advanced Computational Techniques to Energy Research

    Energy Technology Data Exchange (ETDEWEB)

    Mitchell, Julie C

    2012-11-17

    Please find attached our final technical report for the BACTER Institute award. BACTER was created as a graduate and postdoctoral training program for the advancement of computational biology applied to questions of relevance to bioenergy research.

  8. Soft Computing Techniques in Vision Science

    CERN Document Server

    Yang, Yeon-Mo

    2012-01-01

    This Special Edited Volume presents a unique approach towards computational solutions for the emerging field of study called Vision Science. Optics, Ophthalmology, and Optical Science have pursued an odyssey of optimizing configurations of optical systems, surveillance cameras and other nano-optical devices under the metaphor of Nano Science and Technology. Still, these systems fall short in their computational aspects when measured against the pinnacle of the human vision system. In this edited volume much attention has been given to addressing the coupling issues between Computational Science and Vision Studies. It is a comprehensive collection of research works addressing various related areas of Vision Science such as visual perception and the visual system, cognitive psychology, neuroscience, psychophysics and ophthalmology, linguistic relativity, color vision etc. This issue carries some of the latest developments in the form of research articles and presentations. The volume is rich in content, with technical tools ...

  9. Computer Architecture Techniques for Power-Efficiency

    CERN Document Server

    Kaxiras, Stefanos

    2008-01-01

    In the last few years, power dissipation has become an important design constraint, on par with performance, in the design of new computer systems. Whereas in the past, the primary job of the computer architect was to translate improvements in operating frequency and transistor count into performance, now power efficiency must be taken into account at every step of the design process. While for some time, architects have been successful in delivering 40% to 50% annual improvement in processor performance, costs that were previously brushed aside eventually caught up. The most critical of these

  10. Evolutionary Statistical Procedures

    CERN Document Server

    Baragona, Roberto; Poli, Irene

    2011-01-01

    This proposed text appears to be a good introduction to evolutionary computation for use in applied statistics research. The authors draw from a vast base of knowledge about the current literature in both the design of evolutionary algorithms and statistical techniques. Modern statistical research is on the threshold of solving increasingly complex problems in high dimensions, and the generalization of its methodology to parameters whose estimators do not follow mathematically simple distributions is underway. Many of these challenges involve optimizing functions for which analytic solutions a

  11. Inclusion of the fitness sharing technique in an evolutionary algorithm to analyze the fitness landscape of the genetic code adaptability.

    Science.gov (United States)

    Santos, José; Monteagudo, Ángel

    2017-03-27

    The canonical code, although prevailing in complex genomes, is not universal. The canonical genetic code has been shown to be superior in robustness compared to random codes, but it is not clearly determined how it evolved towards its current form. The error minimization theory considers the minimization of the adverse effects of point mutations as the main selection factor in the evolution of the code. We have used simulated evolution in a computer to search for optimized codes, which helps to obtain information about the optimization level of the canonical code in its evolution. A genetic algorithm searches for efficient codes in a fitness landscape that corresponds to the adaptability of possible hypothetical genetic codes. The lower the effects of errors or mutations in the codon bases of a hypothetical code, the more efficient or optimal is that code. The inclusion of the fitness sharing technique in the evolutionary algorithm allows the extent to which the canonical genetic code is in an area corresponding to a deep local minimum to be easily determined, even in the high dimensional spaces considered. The analyses show that the canonical code is not in a deep local minimum and that the fitness landscape is not a multimodal fitness landscape with deep and separated peaks. Moreover, the canonical code is clearly far away from the areas of higher fitness in the landscape. Given the non-presence of deep local minima in the landscape, although the code could evolve and different forces could shape its structure, the fitness landscape nature considered in the error minimization theory does not explain why the canonical code ended its evolution in a location which is not an area of a localized deep minimum of the huge fitness landscape.
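
    The fitness sharing ingredient itself is a textbook technique: an individual's raw fitness is divided by its niche count so that crowded regions of the landscape are derated. The sketch below shows that calculation on a toy one-dimensional population; in the study above the distance would instead be defined between hypothetical genetic codes, and the raw fitness by their robustness to mutation:

      import math

      def shared_fitness(population, raw_fitness, distance, sigma_share=1.0, alpha=1.0):
          """Classical fitness sharing (maximization): divide raw fitness by the
          niche count computed with a triangular sharing function."""
          shared = []
          for i, ind_i in enumerate(population):
              niche_count = 0.0
              for ind_j in population:
                  d = distance(ind_i, ind_j)
                  if d < sigma_share:
                      niche_count += 1.0 - (d / sigma_share) ** alpha
              shared.append(raw_fitness[i] / niche_count)   # niche_count >= 1 (self term)
          return shared

      # Toy example: individuals are points on a line, three of them crowded together
      pop = [0.10, 0.15, 0.20, 2.00]
      fit = [math.sin(3.0 * x) + 1.5 for x in pop]
      print(shared_fitness(pop, fit, distance=lambda a, b: abs(a - b), sigma_share=0.5))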

  12. Exploiting Analytics Techniques in CMS Computing Monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Bonacorsi, D. [Bologna U.; Kuznetsov, V. [Cornell U.; Magini, N. [Fermilab; Repečka, A. [Vilnius U.; Vaandering, E. [Fermilab

    2017-11-22

    The CMS experiment has collected an enormous volume of metadata about its computing operations in its monitoring systems, describing its experience in operating all of the CMS workflows on all of the Worldwide LHC Computing Grid Tiers. Data mining efforts on all this information have rarely been undertaken, but are of crucial importance for a better understanding of how CMS achieved successful operations and for reaching an adequate and adaptive model of CMS operations, in order to allow detailed optimizations and eventually a prediction of system behaviour. These data are now streamed into the CERN Hadoop data cluster for further analysis. Specific sets of information (e.g. data on how many replicas of datasets CMS wrote on disks at WLCG Tiers, data on which datasets were primarily requested for analysis, etc.) were collected on Hadoop and processed with MapReduce applications profiting from the parallelization on the Hadoop cluster. We present the implementation of new monitoring applications on Hadoop, and discuss the new possibilities in CMS computing monitoring introduced with the ability to quickly process big data sets from multiple sources, looking forward to a predictive modeling of the system.

  13. Exploiting Genomic Knowledge in Optimising Molecular Breeding Programmes: Algorithms from Evolutionary Computing

    Science.gov (United States)

    O'Hagan, Steve; Knowles, Joshua; Kell, Douglas B.

    2012-01-01

    Comparatively few studies have addressed directly the question of quantifying the benefits to be had from using molecular genetic markers in experimental breeding programmes (e.g. for improved crops and livestock), nor the question of which organisms should be mated with each other to best effect. We argue that this requires in silico modelling, an approach for which there is a large literature in the field of evolutionary computation (EC), but which has not really been applied in this way to experimental breeding programmes. EC seeks to optimise measurable outcomes (phenotypic fitnesses) by optimising in silico the mutation, recombination and selection regimes that are used. We review some of the approaches from EC, and compare experimentally, using a biologically relevant in silico landscape, some algorithms that have knowledge of where they are in the (genotypic) search space (G-algorithms) with some (albeit well-tuned ones) that do not (F-algorithms). For the present kinds of landscapes, F- and G-algorithms were broadly comparable in quality and effectiveness, although we recognise that the G-algorithms were not equipped with any ‘prior knowledge’ of epistatic pathway interactions. This use of algorithms based on machine learning has important implications for the optimisation of experimental breeding programmes in the post-genomic era when we shall potentially have access to the full genome sequence of every organism in a breeding population. The non-proprietary code that we have used is made freely available (via Supplementary information). PMID:23185279

  14. Computational optimization techniques applied to microgrids planning

    DEFF Research Database (Denmark)

    Gamarra, Carlos; Guerrero, Josep M.

    2015-01-01

    Microgrids are expected to become part of the next electric power system evolution, not only in rural and remote areas but also in urban communities. Since microgrids are expected to coexist with traditional power grids (such as district heating does with traditional heating systems), their planning process must address economic feasibility, as a long-term stability guarantee. Planning a microgrid is a complex process due to existing alternatives, goals, constraints and uncertainties. Usually planning goals conflict with each other and, as a consequence, different optimization problems appear along the planning process. In this context, the technical literature about optimization techniques applied to microgrid planning has been reviewed and guidelines for innovative planning methodologies focused on economic feasibility can be defined. Finally, some trending techniques and new

  15. The Security Challenges in the IoT Enabled Cyber-Physical Systems and Opportunities for Evolutionary Computing & Other Computational Intelligence

    OpenAIRE

    He, H.; Maple, C.; Watson, T.; Tiwari, A.; Mehnen, J.; Jin, Y.; Gabrys, Bogdan

    2016-01-01

    Internet of Things (IoT) has given rise to the fourth industrial revolution (Industrie 4.0), and it brings great benefits by connecting people, processes and data. However, cybersecurity has become a critical challenge in IoT enabled cyber-physical systems, from connected supply chains and the Big Data produced by huge numbers of IoT devices, to industrial control systems. Evolutionary computation combined with other computational intelligence will play an important role for cybersecurity, such as ...

  16. Transport modeling and advanced computer techniques

    International Nuclear Information System (INIS)

    Wiley, J.C.; Ross, D.W.; Miner, W.H. Jr.

    1988-11-01

    A workshop was held at the University of Texas in June 1988 to consider the current state of transport codes and whether improved user interfaces would make the codes more usable and accessible to the fusion community. Also considered was the possibility that a software standard could be devised to ease the exchange of routines between groups. It was noted that two of the major obstacles to exchanging routines now are the variety of geometrical representations and choices of units. While the workshop formulated no standards, it was generally agreed that good software engineering would aid in the exchange of routines, and that a continued exchange of ideas between groups would be worthwhile. It seems that before we begin to discuss software standards, we should review the current state of computer technology --- both hardware and software --- to see what influence recent advances might have on our software goals. This is done in this paper

  17. Probability, statistics, and associated computing techniques

    International Nuclear Information System (INIS)

    James, F.

    1983-01-01

    This chapter attempts to explore the extent to which it is possible for the experimental physicist to find optimal statistical techniques to provide a unique and unambiguous quantitative measure of the significance of raw data. Discusses statistics as the inverse of probability; normal theory of parameter estimation; normal theory (Gaussian measurements); the universality of the Gaussian distribution; real-life resolution functions; combination and propagation of uncertainties; the sum or difference of 2 variables; local theory, or the propagation of small errors; error on the ratio of 2 discrete variables; the propagation of large errors; confidence intervals; classical theory; Bayesian theory; use of the likelihood function; the second derivative of the log-likelihood function; multiparameter confidence intervals; the method of MINOS; least squares; the Gauss-Markov theorem; maximum likelihood for uniform error distribution; the Chebyshev fit; the parameter uncertainties; the efficiency of the Chebyshev estimator; error symmetrization; robustness vs. efficiency; testing of hypotheses (e.g., the Neyman-Pearson test); goodness-of-fit; distribution-free tests; comparing two one-dimensional distributions; comparing multidimensional distributions; and permutation tests for comparing two point sets

  18. An Artificial Immune System-Inspired Multiobjective Evolutionary Algorithm with Application to the Detection of Distributed Computer Network Intrusions

    Science.gov (United States)

    2007-03-01


  19. THE COMPUTATIONAL INTELLIGENCE TECHNIQUES FOR PREDICTIONS - ARTIFICIAL NEURAL NETWORKS

    OpenAIRE

    Mary Violeta Bar

    2014-01-01

    Computational intelligence techniques are used for problems which cannot be solved by traditional techniques, when there are insufficient data to develop a model of the problem, or when the data contain errors. Computational intelligence, as Bezdek called it (Bezdek, 1992), aims at the modeling of biological intelligence. Artificial Neural Networks (ANNs) have been applied to an increasing number of real world problems of considerable complexity. Their most important advantage is solving problems that are too c...

  20. Numerical Computational Technique for Scattering from Underwater Objects

    OpenAIRE

    T. Ratna Mani; Raj Kumar; Odamapally Vijay Kumar

    2013-01-01

    This paper presents a computational technique for mono-static and bi-static scattering from underwater objects of different shapes, such as submarines. The scattering has been computed using the finite element time domain (FETD) method, based on the superposition of reflections from the different elements reaching the receiver at a particular instant in time. The results calculated by this method have been verified against published results based on the ramp response technique. An in-depth parametric s...

  1. Optimum topology for radial networks by using evolutionary computer programming; Topologia optima de redes radiais utilizando programacao evolucionaria

    Energy Technology Data Exchange (ETDEWEB)

    Pinto, Joao Luis [Instituto de Engenhariade Sistemas e Computadores (INESC), Porto (Portugal). E-mail: jpinto@duque.inescn.pt; Proenca, Luis Miguel [Instituto Superior de Linguas e Administracao (ISLA), Gaia (Portugal). E-mail: lproenca@inescn.pt

    1999-07-01

    This paper describes the use of Evolutionary Programming techniques for determining the topology of radial electric networks, considering investment costs and losses. The work aims to demonstrate the ease of coding and implementation, as well as the parallelism implicit in the method, which yields outstanding performance levels. As a test example, a network with 43 buses and 75 alternative lines has been used, and an implementation of the algorithm on an object-oriented platform is described.

  2. Future trends in power plant process computer techniques

    International Nuclear Information System (INIS)

    Dettloff, K.

    1975-01-01

    The development of new concepts in process computer techniques has advanced in great steps, in three areas: hardware, software, and the application concept. In hardware, new computers with new peripherals, e.g. colour layer equipment, have been developed. In software, a decisive step has been made in the area of 'automation software'. Through these components, progress has also been made on incorporating the process computer into the structure of the overall power plant control system. (orig./LH) [de

  3. Open Issues in Evolutionary Robotics.

    Science.gov (United States)

    Silva, Fernando; Duarte, Miguel; Correia, Luís; Oliveira, Sancho Moura; Christensen, Anders Lyhne

    2016-01-01

    One of the long-term goals in evolutionary robotics is to be able to automatically synthesize controllers for real autonomous robots based only on a task specification. While a number of studies have shown the applicability of evolutionary robotics techniques for the synthesis of behavioral control, researchers have consistently been faced with a number of issues preventing the widespread adoption of evolutionary robotics for engineering purposes. In this article, we review and discuss the open issues in evolutionary robotics. First, we analyze the benefits and challenges of simulation-based evolution and subsequent deployment of controllers versus evolution on real robotic hardware. Second, we discuss specific evolutionary computation issues that have plagued evolutionary robotics: (1) the bootstrap problem, (2) deception, and (3) the role of genomic encoding and genotype-phenotype mapping in the evolution of controllers for complex tasks. Finally, we address the absence of standard research practices in the field. We also discuss promising avenues of research. Our underlying motivation is the reduction of the current gap between evolutionary robotics and mainstream robotics, and the establishment of evolutionary robotics as a canonical approach for the engineering of autonomous robots.

  4. Multi-Detector Computed Tomography Imaging Techniques in Arterial Injuries

    Directory of Open Access Journals (Sweden)

    Cameron Adler

    2018-04-01

    Full Text Available Cross-sectional imaging has become a critical aspect in the evaluation of arterial injuries. In particular, angiography using computed tomography (CT) is the imaging modality of choice. A variety of techniques and options are available when evaluating for arterial injuries. Techniques involve the contrast bolus, various phases of contrast enhancement, multiplanar reconstruction, volume rendering, and maximum intensity projection. After the images are rendered, a variety of features may be seen that diagnose the injury. This article provides a general overview of the techniques, important findings, and pitfalls in cross-sectional imaging of arterial injuries, particularly in relation to computed tomography. In addition, the future directions of computed tomography, including a few techniques in the process of development, are also discussed.

  5. A comparative analysis of soft computing techniques for gene prediction.

    Science.gov (United States)

    Goel, Neelam; Singh, Shailendra; Aseri, Trilok Chand

    2013-07-01

    The rapid growth of genomic sequence data for both human and nonhuman species has made analyzing these sequences, especially predicting genes in them, very important, and this is currently the focus of many research efforts. Besides its scientific interest to the molecular biology and genomics community, gene prediction is of considerable importance in human health and medicine. A variety of gene prediction techniques have been developed for eukaryotes over the past few years. This article reviews and analyzes the application of certain soft computing techniques to gene prediction. First, the problem of gene prediction and its challenges are described. These are followed by different soft computing techniques along with their application to gene prediction. In addition, a comparative analysis of different soft computing techniques for gene prediction is given. Finally, some limitations of the current research activities and future research directions are provided. Copyright © 2013 Elsevier Inc. All rights reserved.

  6. An evolutionary algorithm technique for intelligence, surveillance, and reconnaissance plan optimization

    Science.gov (United States)

    Langton, John T.; Caroli, Joseph A.; Rosenberg, Brad

    2008-04-01

    To support an Effects Based Approach to Operations (EBAO), Intelligence, Surveillance, and Reconnaissance (ISR) planners must optimize collection plans within an evolving battlespace. A need exists for a decision support tool that allows ISR planners to rapidly generate and rehearse high-performing ISR plans that balance multiple objectives and constraints to address dynamic collection requirements for assessment. To meet this need we have designed an evolutionary algorithm (EA)-based "Integrated ISR Plan Analysis and Rehearsal System" (I2PARS) to support Effects-based Assessment (EBA). I2PARS supports ISR mission planning and dynamic replanning to coordinate assets and optimize their routes, allocation and tasking. It uses an evolutionary algorithm to address the large parametric space of route-finding problems which is sometimes discontinuous in the ISR domain because of conflicting objectives such as minimizing asset utilization yet maximizing ISR coverage. EAs are uniquely suited for generating solutions in dynamic environments and also allow user feedback. They are therefore ideal for "streaming optimization" and dynamic replanning of ISR mission plans. I2PARS uses the Non-dominated Sorting Genetic Algorithm (NSGA-II) to automatically generate a diverse set of high performing collection plans given multiple objectives, constraints, and assets. Intended end users of I2PARS include ISR planners in the Combined Air Operations Centers and Joint Intelligence Centers. Here we show the feasibility of applying the NSGA-II algorithm and EAs in general to the ISR planning domain. Unique genetic representations and operators for optimization within the ISR domain are presented along with multi-objective optimization criteria for ISR planning. Promising results of the I2PARS architecture design, early software prototype, and limited domain testing of the new algorithm are discussed. We also present plans for future research and development, as well as technology
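
    One NSGA-II ingredient that matters here, because planners want a diverse set of collection plans rather than a single one, is the crowding distance used for secondary selection. A minimal Python version is sketched below; the (asset-hours used, uncovered collection requests) objective pairs are hypothetical values invented for illustration:

      def crowding_distance(front):
          """NSGA-II crowding distance for the objective vectors of one Pareto front;
          boundary solutions receive an infinite distance so they are always kept."""
          n = len(front)
          if n == 0:
              return []
          n_obj = len(front[0])
          dist = [0.0] * n
          for k in range(n_obj):
              order = sorted(range(n), key=lambda i: front[i][k])
              dist[order[0]] = dist[order[-1]] = float("inf")
              span = front[order[-1]][k] - front[order[0]][k]
              if span == 0:
                  continue
              for rank in range(1, n - 1):
                  dist[order[rank]] += (front[order[rank + 1]][k]
                                        - front[order[rank - 1]][k]) / span
          return dist

      # Hypothetical (asset-hours used, uncovered collection requests) pairs
      front = [(10.0, 7.0), (14.0, 4.0), (20.0, 2.0), (27.0, 1.0)]
      print(crowding_distance(front))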

  7. High Efficiency Computation of the Variances of Structural Evolutionary Random Responses

    Directory of Open Access Journals (Sweden)

    J.H. Lin

    2000-01-01

    Full Text Available For structures subjected to stationary or evolutionary white/colored random noise, their various response variances satisfy algebraic or differential Lyapunov equations. The solution of these Lyapunov equations used to be very difficult. A precise integration method is proposed in the present paper, which solves such Lyapunov equations accurately and very efficiently.
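
    Written for a generic linear state-space model \dot{x}(t) = A x(t) + w(t), with w(t) a white-noise vector of intensity Q(t) and P the response covariance (generic notation, since the record does not give the paper's own), the Lyapunov equations referred to above are

      \[
        A P + P A^{T} + Q = 0
        \qquad \text{(stationary excitation: algebraic Lyapunov equation)},
      \]
      \[
        \dot{P}(t) = A\,P(t) + P(t)\,A^{T} + Q(t)
        \qquad \text{(evolutionary excitation: differential Lyapunov equation)},
      \]

    and it is this family of equations that the precise integration method is designed to solve efficiently.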

  8. [Cardiac computed tomography: new applications of an evolving technique].

    Science.gov (United States)

    Martín, María; Corros, Cecilia; Calvo, Juan; Mesa, Alicia; García-Campos, Ana; Rodríguez, María Luisa; Barreiro, Manuel; Rozado, José; Colunga, Santiago; de la Hera, Jesús M; Morís, César; Luyando, Luis H

    2015-01-01

    During the last few years we have witnessed an increasing development of imaging techniques applied in Cardiology. Among them, cardiac computed tomography is an emerging and evolving technique. With the current possibility of very low radiation studies, the applications have expanded and go beyond coronary angiography. In the present article we review the technical developments of cardiac computed tomography and its new applications. Copyright © 2014 Instituto Nacional de Cardiología Ignacio Chávez. Published by Masson Doyma México S.A. All rights reserved.

  9. Modeling with data tools and techniques for scientific computing

    CERN Document Server

    Klemens, Ben

    2009-01-01

    Modeling with Data fully explains how to execute computationally intensive analyses on very large data sets, showing readers how to determine the best methods for solving a variety of different problems, how to create and debug statistical models, and how to run an analysis and evaluate the results. Ben Klemens introduces a set of open and unlimited tools, and uses them to demonstrate data management, analysis, and simulation techniques essential for dealing with large data sets and computationally intensive procedures. He then demonstrates how to easily apply these tools to the many threads of statistical technique, including classical, Bayesian, maximum likelihood, and Monte Carlo methods

  10. Mono and multi-objective optimization techniques applied to a large range of industrial test cases using Metamodel assisted Evolutionary Algorithms

    Science.gov (United States)

    Fourment, Lionel; Ducloux, Richard; Marie, Stéphane; Ejday, Mohsen; Monnereau, Dominique; Massé, Thomas; Montmitonnet, Pierre

    2010-06-01

    The use of material processing numerical simulation allows a strategy of trial and error to improve virtual processes without incurring material costs or interrupting production, and therefore saves a lot of money, but it requires user time to analyze the results, adjust the operating conditions and restart the simulation. Automatic optimization is the perfect complement to simulation. An Evolutionary Algorithm coupled with metamodelling makes it possible to obtain industrially relevant results on a very large range of applications within a few tens of simulations and without any specific knowledge of automatic optimization techniques. Ten industrial partners have been selected to cover the different areas of the mechanical forging industry and to provide different examples of forming simulation tools, demonstrating that industrially relevant results can indeed be obtained under these conditions. The large computational time is handled by a metamodel approach, which interpolates the objective function over the entire parameter space from the exact function values at a reduced number of "master points". Two algorithms are used: an evolution strategy combined with a Kriging metamodel, and a genetic algorithm combined with a Meshless Finite Difference Method. The latter approach is extended to multi-objective optimization. The set of solutions, which corresponds to the best possible compromises between the different objectives, is then computed in the same way. The population-based approach allows using the parallel capabilities of the utilized computer with high efficiency. An optimization module, fully embedded within the Forge2009 IHM, makes it possible to cover all the defined examples, and the use of new multi-core hardware to compute several simulations at the same time reduces the needed time dramatically. The presented examples
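
    The metamodel-assisted loop can be illustrated in a few lines of numpy: a Gaussian-kernel interpolator (a simplified, fixed-hyperparameter stand-in for the Kriging model mentioned above) pre-screens a cloud of cheap candidate points so that only one expensive evaluation is spent per generation. The objective function and all parameter values below are invented for illustration and do not come from the forging applications:

      import numpy as np

      def expensive(x):
          """Stand-in for an expensive forming simulation (cheap analytic function)."""
          return float(np.sum((x - 0.3) ** 2) + 0.1 * np.sin(10.0 * x[0]))

      def fit_surrogate(X, y, length=0.5, nugget=1e-8):
          """Gaussian-kernel interpolator through the 'master points' (X, y)."""
          K = np.exp(-0.5 * ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1) / length**2)
          alpha = np.linalg.solve(K + nugget * np.eye(len(X)), y)
          return lambda x: float(np.exp(-0.5 * ((X - x) ** 2).sum(-1) / length**2) @ alpha)

      rng = np.random.default_rng(0)
      X = rng.uniform(0.0, 1.0, size=(8, 2))          # initial design of experiments
      y = np.array([expensive(x) for x in X])

      for generation in range(15):
          surrogate = fit_surrogate(X, y)
          parent = X[np.argmin(y)]
          candidates = parent + rng.normal(0.0, 0.1, size=(30, 2))   # cheap offspring
          best = candidates[np.argmin([surrogate(c) for c in candidates])]
          X = np.vstack([X, best])            # one expensive simulation per generation
          y = np.append(y, expensive(best))

      print(X[np.argmin(y)], y.min())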

  11. Cloud computing and digital media fundamentals, techniques, and applications

    CERN Document Server

    Li, Kuan-Ching; Shih, Timothy K

    2014-01-01

    Cloud Computing and Digital Media: Fundamentals, Techniques, and Applications presents the fundamentals of cloud and media infrastructure, novel technologies that integrate digital media with cloud computing, and real-world applications that exemplify the potential of cloud computing for next-generation digital media. It brings together technologies for media/data communication, elastic media/data storage, security, authentication, cross-network media/data fusion, interdevice media interaction/reaction, data centers, PaaS, SaaS, and more. The book covers resource optimization for multimedia clo...

  12. Computed tomography of the llama head: technique and normal anatomy

    International Nuclear Information System (INIS)

    Hathcock, J.T.; Pugh, D.G.; Cartee, R.E.; Hammond, L.

    1996-01-01

    Computed tomography was performed on the head of 6 normal adult llamas. The animals were under general anesthesia and positioned in dorsal recumbency on the scanning table. The area scanned was from the external occipital protuberance to the rostral portion of the nasal passage, and the images are presented in both a bone window and a soft tissue window to allow evaluation and identification of the anatomy of the head. Computed tomography of the llama head can be accomplished by most computed tomography scanners utilizing a technique similar to that used in small animals with minor modification of the scanning table

  13. Techniques involving extreme environment, nondestructive techniques, computer methods in metals research, and data analysis

    International Nuclear Information System (INIS)

    Bunshah, R.F.

    1976-01-01

    A number of different techniques which range over several different aspects of materials research are covered in this volume. They are concerned with property evaluation at 4.0 K and below, surface characterization, coating techniques, techniques for the fabrication of composite materials, computer methods, data evaluation and analysis, statistical design of experiments, and non-destructive test techniques. Topics covered in this part include internal friction measurements; nondestructive testing techniques; statistical design of experiments and regression analysis in metallurgical research; and measurement of surfaces of engineering materials

  14. At the crossroads of evolutionary computation and music: self-programming synthesizers, swarm orchestras and the origins of melody.

    Science.gov (United States)

    Miranda, Eduardo Reck

    2004-01-01

    This paper introduces three approaches to using Evolutionary Computation (EC) in Music (namely, engineering, creative and musicological approaches) and discusses examples of representative systems that have been developed within the last decade, with emphasis on more recent and innovative works. We begin by reviewing engineering applications of EC in Music Technology, such as Genetic Algorithm and Cellular Automata sound synthesis, followed by an introduction to applications where EC has been used to generate musical compositions. Next, we introduce ongoing research into EC models to study the origins of music and detail our own research work on modelling the evolution of melody. Copyright 2004 Massachusetts Institute of Technology

  15. Optimal design of a spherical parallel manipulator based on kinetostatic performance using evolutionary techniques

    Energy Technology Data Exchange (ETDEWEB)

    Daneshmand, Morteza [University of Tartu, Tartu (Estonia); Saadatzi, Mohammad Hossein [Colorado School of Mines, Golden (United States); Kaloorazi, Mohammad Hadi [École de Technologie Supérieure, Montréal (Canada); Masouleh, Mehdi Tale [University of Tehran, Tehran (Iran, Islamic Republic of); Anbarjafari, Gholamreza [Hasan Kalyoncu University, Gaziantep (Turkey)]

    2016-03-15

    This study aims to provide an optimal design for a spherical parallel manipulator (SPM), namely the Agile Eye. This aim is approached by investigating kinetostatic performance and workspace and searching for the most promising design. Previously recommended designs are examined to determine whether they provide acceptable kinetostatic performance and workspace. Optimal designs are provided according to different kinetostatic performance indices, especially kinematic sensitivity. The optimization process is based on the concept of the genetic algorithm. A single-objective process is implemented following the guidelines of an evolutionary algorithm called differential evolution, and a multi-objective procedure is then provided following the reasoning of the nondominated sorting genetic algorithm-II. This process results in several sets of Pareto points representing compromises between the kinetostatic performance indices and the workspace. The various kinetostatic performance indices and the results of the optimization algorithms are elaborated. The conclusions comment on the obtained set of designs and their ability to provide a well-conditioned workspace and acceptable kinetostatic performance for the SPM under study, and can be extended to other types of SPMs.

  16. A Multi Agent System for Flow-Based Intrusion Detection Using Reputation and Evolutionary Computation

    Science.gov (United States)

    2011-03-01

    A pertinent example of the application of Evolutionary Algorithms to pattern recognition comes from Radtke et al., who propose a multi-objective memetic algorithm for intelligent feature extraction.

  17. Evolutionary optimization of neural networks with heterogeneous computation: study and implementation

    OpenAIRE

    FE, JORGE DEOLINDO; Aliaga Varea, Ramón José; Gadea Gironés, Rafael

    2015-01-01

    In the optimization of artificial neural networks (ANNs) via evolutionary algorithms and the implementation of the necessary training for the objective function, there is often a trade-off between efficiency and flexibility. Pure software solutions on general-purpose processors tend to be slow because they do not take advantage of the inherent parallelism, whereas hardware realizations usually rely on optimizations that reduce the range of applicable network topologies, or they...

  18. Solving Linear Equations by Classical Jacobi-SR Based Hybrid Evolutionary Algorithm with Uniform Adaptation Technique

    OpenAIRE

    Jamali, R. M. Jalal Uddin; Hashem, M. M. A.; Hasan, M. Mahfuz; Rahman, Md. Bazlar

    2013-01-01

    Solving a set of simultaneous linear equations is probably the most important topic in numerical methods. For solving linear equations, iterative methods are preferred over direct methods, especially when the coefficient matrix is sparse. The rate of convergence of an iterative method is increased by using the Successive Relaxation (SR) technique, but the SR technique is very sensitive to the relaxation factor ω. Recently, hybridization of classical Gauss-Seidel based successive relaxation t...
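
    As a hedged illustration of the idea (not the authors' Jacobi-SR hybrid with uniform adaptation), the sketch below uses a tiny real-coded genetic algorithm to search for the relaxation factor ω in (0, 2) that makes plain SOR converge fastest on a small diagonally dominant test system; the test system, population sizes and genetic operators are all arbitrary choices.

```python
# Minimal sketch: a real-coded GA tunes the SOR relaxation factor omega.
import numpy as np

rng = np.random.default_rng(1)
n = 50
# Tridiagonal, diagonally dominant test system A x = b.
A = np.diag(4.0 * np.ones(n)) + np.diag(-np.ones(n - 1), 1) + np.diag(-np.ones(n - 1), -1)
b = rng.standard_normal(n)

def sor_iterations(omega, tol=1e-8, max_iter=500):
    # Fitness = number of SOR sweeps needed to reach the residual tolerance.
    x = np.zeros(n)
    for k in range(max_iter):
        for i in range(n):
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (1 - omega) * x[i] + omega * (b[i] - sigma) / A[i, i]
        if np.linalg.norm(b - A @ x) < tol:
            return k + 1
    return max_iter  # did not converge within the budget

pop = rng.uniform(0.1, 1.9, 20)                         # candidate omegas
for gen in range(15):
    fitness = np.array([sor_iterations(w) for w in pop])
    parents = pop[np.argsort(fitness)[:10]]             # truncation selection
    children = 0.5 * (parents + rng.permutation(parents))   # arithmetic crossover
    children += 0.05 * rng.standard_normal(children.size)   # Gaussian mutation
    pop = np.clip(np.concatenate([parents, children]), 0.05, 1.95)

best = min(pop, key=sor_iterations)
print("best omega ~", round(best, 3), "iterations:", sor_iterations(best))
```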

  19. APPLYING ARTIFICIAL INTELLIGENCE TECHNIQUES TO HUMAN-COMPUTER INTERFACES

    DEFF Research Database (Denmark)

    Sonnenwald, Diane H.

    1988-01-01

    A description is given of UIMS (User Interface Management System), a system using a variety of artificial intelligence techniques to build knowledge-based user interfaces combining functionality and information from a variety of computer systems that maintain, test, and configure customer telephone and data networks. Three artificial intelligence (AI) techniques used in UIMS are discussed, namely, frame representation, object-oriented programming languages, and rule-based systems. The UIMS architecture is presented, and the structure of the UIMS is explained in terms of the AI techniques.

  20. Computer Tomography: A Novel Diagnostic Technique used in Horses

    African Journals Online (AJOL)

    In Veterinary Medicine, Computer Tomography (CT scan) is used more often in dogs and cats than in large animals due to their small size and ease of manipulation. This paper, however, illustrates the use of the technique in horses. CT scan was used in the diagnosis of two conditions of the head and limbs, namely alveolar ...

  1. A survey of energy saving techniques for mobile computers

    NARCIS (Netherlands)

    Smit, Gerardus Johannes Maria; Havinga, Paul J.M.

    1997-01-01

    Portable products such as pagers, cordless and digital cellular telephones, personal audio equipment, and laptop computers are increasingly being used. Because these applications are battery powered, reducing power consumption is vital. In this report we first give a survey of techniques for

  2. Fusion of neural computing and PLS techniques for load estimation

    Energy Technology Data Exchange (ETDEWEB)

    Lu, M.; Xue, H.; Cheng, X. [Northwestern Polytechnical Univ., Xi' an (China); Zhang, W. [Xi' an Inst. of Post and Telecommunication, Xi' an (China)

    2007-07-01

    A method to predict the electric load of a power system in real time was presented. The method is based on neurocomputing and partial least squares (PLS). Short-term load forecasts for power systems are generally determined by conventional statistical methods and Computational Intelligence (CI) techniques such as neural computing. However, statistical modeling methods often require the input of questionable distributional assumptions, and neural computing has weaknesses, particularly in determining topology. In order to overcome the problems associated with conventional techniques, the authors developed a CI hybrid model based on neural computation and PLS techniques. The theoretical foundation for the designed CI hybrid model was presented along with its application to a power system. The hybrid model is suitable for nonlinear modeling and latent structure extraction, and it can automatically determine the optimal topology to maximize generalization. The CI hybrid model provides faster convergence and better prediction results compared to the abductive networks model because it incorporates a load conversion technique as well as new transfer functions. In order to demonstrate the effectiveness of the hybrid model, load forecasting was performed on a data set obtained from the Puget Sound Power and Light Company. Compared with the abductive networks model, the CI hybrid model reduced the forecast error by 32.37 per cent on workdays and by an average of 27.18 per cent on weekends. It was concluded that the CI hybrid model has a more powerful predictive ability. 7 refs., 1 tab., 3 figs.

  3. Visualization of Minkowski operations by computer graphics techniques

    NARCIS (Netherlands)

    Roerdink, J.B.T.M.; Blaauwgeers, G.S.M.; Serra, J; Soille, P

    1994-01-01

    We consider the problem of visualizing 3D objects defined as a Minkowski addition or subtraction of elementary objects. It is shown that such visualizations can be obtained by using techniques from computer graphics such as ray tracing and Constructive Solid Geometry. Applications of the method are

  4. Bone tissue engineering scaffolding: computer-aided scaffolding techniques.

    Science.gov (United States)

    Thavornyutikarn, Boonlom; Chantarapanich, Nattapon; Sitthiseripratip, Kriskrai; Thouas, George A; Chen, Qizhi

    Tissue engineering is essentially a technique for imitating nature. Natural tissues consist of three components: cells, signalling systems (e.g. growth factors) and extracellular matrix (ECM). The ECM forms a scaffold for its cells. Hence, the engineered tissue construct is an artificial scaffold populated with living cells and signalling molecules. A huge effort has been invested in bone tissue engineering, in which a highly porous scaffold plays a critical role in guiding bone and vascular tissue growth and regeneration in three dimensions. In the last two decades, numerous scaffolding techniques have been developed to fabricate highly interconnective, porous scaffolds for bone tissue engineering applications. This review provides an update on the progress of foaming technology of biomaterials, with special attention focused on computer-aided manufacturing (CAM) techniques. The article starts with a brief introduction to tissue engineering (Bone tissue engineering and scaffolds) and scaffolding materials (Biomaterials used in bone tissue engineering). After a brief review of conventional scaffolding techniques (Conventional scaffolding techniques), a number of CAM techniques are reviewed in great detail. For each technique, the structure and mechanical integrity of the fabricated scaffolds are discussed in detail. Finally, the advantages and disadvantages of these techniques are compared (Comparison of scaffolding techniques) and summarised (Summary).

  5. Prediction of Software Reliability using Bio Inspired Soft Computing Techniques.

    Science.gov (United States)

    Diwaker, Chander; Tomar, Pradeep; Poonia, Ramesh C; Singh, Vijander

    2018-04-10

    Many models have been developed for predicting software reliability. These models are restricted to particular methodologies and a limited number of parameters, while a range of techniques and methodologies may be used for reliability prediction. There is a need to focus on parameter selection when estimating reliability, since the reliability of a system may increase or decrease depending on the parameters used; it is therefore necessary to identify the factors that most heavily affect system reliability. Reusability is now widely used across many areas of research and is the basis of Component-Based Systems (CBS): cost, time and human skill can be saved using Component-Based Software Engineering (CBSE) concepts, and CBSE metrics may be used to assess which techniques are more suitable for estimating system reliability. Soft computing is used for small as well as large-scale problems where it is difficult to find accurate results due to uncertainty or randomness, and many possibilities exist for applying soft computing techniques to problems in medicine: clinical medicine makes significant use of fuzzy-logic and neural-network methodologies, basic medical science most frequently uses neural-network-genetic-algorithm hybrids, and there is considerable interest among medical scientists in applying soft computing methodologies in genetics, physiology, radiology, cardiology and neurology. CBSE encourages users to reuse past and existing software when building new products, providing quality while saving time, memory space, and money. This paper focuses on the assessment of commonly used soft computing techniques such as Genetic Algorithms (GA), Neural Networks (NN), Fuzzy Logic, Support Vector Machines (SVM), Ant Colony Optimization (ACO), Particle Swarm Optimization (PSO), and Artificial Bee Colony (ABC). This paper presents the working of soft computing ...

  6. MEGA-CC: computing core of molecular evolutionary genetics analysis program for automated and iterative data analysis.

    Science.gov (United States)

    Kumar, Sudhir; Stecher, Glen; Peterson, Daniel; Tamura, Koichiro

    2012-10-15

    There is a growing need in the research community to apply the molecular evolutionary genetics analysis (MEGA) software tool for batch processing a large number of datasets and to integrate it into analysis workflows. Therefore, we now make available the computing core of the MEGA software as a stand-alone executable (MEGA-CC), along with an analysis prototyper (MEGA-Proto). MEGA-CC provides users with access to all the computational analyses available through MEGA's graphical user interface version. This includes methods for multiple sequence alignment, substitution model selection, evolutionary distance estimation, phylogeny inference, substitution rate and pattern estimation, tests of natural selection and ancestral sequence inference. Additionally, we have upgraded the source code for phylogenetic analysis using the maximum likelihood methods for parallel execution on multiple processors and cores. Here, we describe MEGA-CC and outline the steps for using MEGA-CC in tandem with MEGA-Proto for iterative and automated data analysis. http://www.megasoftware.net/.

  7. A review of metaheuristic scheduling techniques in cloud computing

    Directory of Open Access Journals (Sweden)

    Mala Kalra

    2015-11-01

    Full Text Available Cloud computing has become a buzzword in the area of high performance distributed computing as it provides on-demand access to a shared pool of resources over the Internet in a self-service, dynamically scalable and metered manner. Cloud computing is still in its infancy, so to reap its full benefits, much research is required across a broad array of topics. One of the important research issues which needs attention for efficient performance is scheduling. The goal of scheduling is to map tasks to appropriate resources so as to optimize one or more objectives. Scheduling in cloud computing belongs to a category of problems known as NP-hard problems due to the large solution space, so it takes a long time to find an optimal solution, and no algorithms are known that produce optimal solutions within polynomial time. In a cloud environment, it is therefore preferable to find a suboptimal solution in a short period of time, and metaheuristic-based techniques have been shown to achieve near-optimal solutions within reasonable time for such problems. In this paper, we provide an extensive survey and comparative analysis of various scheduling algorithms for cloud and grid environments based on three popular metaheuristic techniques: Ant Colony Optimization (ACO), Genetic Algorithm (GA) and Particle Swarm Optimization (PSO), and two novel techniques: League Championship Algorithm (LCA) and the BAT algorithm.
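
    To make the GA branch of such surveys concrete, the following minimal sketch (not taken from any of the surveyed papers) evolves a task-to-VM assignment that minimises makespan; the task lengths, VM speeds and GA settings are synthetic assumptions.

```python
# Illustrative GA for independent-task scheduling on virtual machines.
import numpy as np

rng = np.random.default_rng(7)
n_tasks, n_vms = 40, 5
task_len = rng.uniform(10, 100, n_tasks)   # task lengths, e.g. million instructions (synthetic)
vm_speed = rng.uniform(5, 20, n_vms)       # VM speeds, e.g. MIPS (synthetic)

def makespan(assign):
    # assign[i] = index of the VM that runs task i; makespan = busiest VM's load.
    load = np.zeros(n_vms)
    for t, v in enumerate(assign):
        load[v] += task_len[t] / vm_speed[v]
    return load.max()

pop = rng.integers(0, n_vms, (30, n_tasks))             # random initial schedules
for gen in range(100):
    fit = np.array([makespan(ind) for ind in pop])
    elite = pop[np.argsort(fit)[:10]]                   # keep the 10 best schedules
    children = []
    while len(children) < 20:
        p1, p2 = elite[rng.integers(0, 10, 2)]
        cut = rng.integers(1, n_tasks)                  # one-point crossover
        child = np.concatenate([p1[:cut], p2[cut:]])
        mask = rng.random(n_tasks) < 0.02               # mutation: reassign a task
        child[mask] = rng.integers(0, n_vms, mask.sum())
        children.append(child)
    pop = np.vstack([elite, children])

print("best makespan found:", min(makespan(ind) for ind in pop))
```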

  8. Application of Neural Network Optimized by Mind Evolutionary Computation in Building Energy Prediction

    Science.gov (United States)

    Song, Chen; Zhong-Cheng, Wu; Hong, Lv

    2018-03-01

    Building energy forecasting plays an important role in energy management and planning. Using the mind evolutionary algorithm to find optimal network weights and thresholds for a BP neural network can overcome the tendency of the BP network to become trapped in local minima. The optimized network is used for time-series prediction and for same-month forecasting, giving two predictive values; these two predictive values are then fed into a neural network to obtain the final forecast value. The effectiveness of the method was verified by experiments with the energy consumption of three buildings in Hefei.

  9. Training Software in Artificial-Intelligence Computing Techniques

    Science.gov (United States)

    Howard, Ayanna; Rogstad, Eric; Chalfant, Eugene

    2005-01-01

    The Artificial Intelligence (AI) Toolkit is a computer program for training scientists, engineers, and university students in three soft-computing techniques (fuzzy logic, neural networks, and genetic algorithms) used in artificial-intelligence applications. The program provides an easily understandable tutorial interface, including an interactive graphical component through which the user can gain hands-on experience in soft-computing techniques applied to realistic example problems. The tutorial provides step-by-step instructions on the workings of soft-computing technology, whereas the hands-on examples allow interaction and reinforcement of the techniques explained throughout the tutorial. In the fuzzy-logic example, a user can interact with a robot and an obstacle course to verify how fuzzy logic is used to command a rover traverse from an arbitrary start to the goal location. For the genetic-algorithm example, the problem is to determine the minimum-length path for visiting a user-chosen set of planets in the solar system. For the neural-network example, the problem is to decide, on the basis of input data on physical characteristics, whether a person is a man, woman, or child. The AI Toolkit is compatible with the Windows 95, 98, ME, NT 4.0, 2000, and XP operating systems. A computer having a processor speed of at least 300 MHz and random-access memory of at least 56 MB is recommended for optimal performance. The program can be run on a slower computer having less memory, but some functions may not be executed properly.

  10. New Flutter Analysis Technique for Time-Domain Computational Aeroelasticity

    Science.gov (United States)

    Pak, Chan-Gi; Lung, Shun-Fat

    2017-01-01

    A new time-domain approach for computing flutter speed is presented. Based on the time-history result of aeroelastic simulation, the unknown unsteady aerodynamics model is estimated using a system identification technique. The full aeroelastic model is generated via coupling the estimated unsteady aerodynamic model with the known linear structure model. The critical dynamic pressure is computed and used in the subsequent simulation until the convergence of the critical dynamic pressure is achieved. The proposed method is applied to a benchmark cantilevered rectangular wing.

  11. Three-dimensional integrated CAE system applying computer graphic technique

    International Nuclear Information System (INIS)

    Kato, Toshisada; Tanaka, Kazuo; Akitomo, Norio; Obata, Tokayasu.

    1991-01-01

    A three-dimensional CAE system for nuclear power plant design is presented. This system utilizes high-speed computer graphic techniques for the plant design review, and an integrated engineering database for handling the large amount of nuclear power plant engineering data in a unified data format. Applying this system makes it possible to construct a nuclear power plant using only computer data from the basic design phase to the manufacturing phase, and it increases the productivity and reliability of the nuclear power plants. (author)

  12. Electricity demand and spot price forecasting using evolutionary computation combined with chaotic nonlinear dynamic model

    International Nuclear Information System (INIS)

    Unsihuay-Vila, C.; Zambroni de Souza, A.C.; Marangon-Lima, J.W.; Balestrassi, P.P.

    2010-01-01

    This paper proposes a new hybrid approach based on nonlinear chaotic dynamics and evolutionary strategies to forecast electricity loads and prices. The main idea is to develop a new training or identification stage in a nonlinear chaotic-dynamics-based predictor. In the training stage, five optimal parameters for the chaos-based predictor are searched through an optimization model based on an evolutionary strategy. The objective function of the optimization model is the minimization of the mismatch between the multi-step-ahead forecasts of the predictor and the observed data, as is done in identification problems. The first contribution of this paper is that the proposed approach is capable of capturing the complex dynamics of the demand and price time series considered, resulting in more accurate forecasting. The second contribution is that the proposed approach runs in an on-line manner, i.e. the optimal set of parameters and the prediction are obtained automatically, so it can be used for prediction in real time; this is an advantage over other models, where the choice of input parameters is carried out off-line, following qualitative, experience-based recipes. A case study of load and price forecasting is presented using data from New England, Alberta, and Spain. A comparison with other methods such as autoregressive integrated moving average (ARIMA) and artificial neural networks (ANN) is shown. The results show that the proposed approach provides more accurate and effective forecasting than the ARIMA and ANN methods. (author)
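
    A minimal sketch of the identification stage described above, under strong simplifying assumptions: a (μ+λ) evolution strategy searches the parameters of a simple quadratic one-step predictor so that its multi-step-ahead forecasts match a synthetic logistic-map series. The chaotic nonlinear predictor and market data of the paper are not reproduced; all names and settings below are illustrative.

```python
# (mu + lambda) evolution strategy fitting predictor parameters by
# minimising the multi-step-ahead forecast error on a synthetic series.
import numpy as np

rng = np.random.default_rng(3)

# Synthetic "observed" series from the logistic map x' = r x (1 - x), r = 3.9.
x = np.empty(300); x[0] = 0.4
for t in range(299):
    x[t + 1] = 3.9 * x[t] * (1 - x[t])

def multi_step_error(params, horizon=5):
    p0, p1, p2 = params
    err = 0.0
    for t in range(len(x) - horizon):
        z = x[t]
        for h in range(horizon):                 # iterate the predictor forward
            z = min(max(p0 + p1 * z + p2 * z * z, -1e6), 1e6)  # clip to avoid overflow
        err += (z - x[t + horizon]) ** 2
    return err / (len(x) - horizon)

mu, lam, sigma = 5, 20, 0.3
parents = rng.uniform(-4, 4, (mu, 3))
for gen in range(60):
    children = parents[rng.integers(0, mu, lam)] + sigma * rng.standard_normal((lam, 3))
    pool = np.vstack([parents, children])        # (mu + lambda) selection
    scores = np.array([multi_step_error(p) for p in pool])
    parents = pool[np.argsort(scores)[:mu]]
    sigma *= 0.97                                # slow step-size decay

# The generating values of the synthetic series are (0, 3.9, -3.9).
print("best parameters found:", np.round(parents[0], 3))
```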

  13. Computer-Assisted Technique for Surgical Tooth Extraction

    Directory of Open Access Journals (Sweden)

    Hosamuddin Hamza

    2016-01-01

    Full Text Available Introduction. Surgical tooth extraction is a common procedure in dentistry. However, numerous extraction cases show a high level of difficulty in practice. This difficulty is usually related to inadequate visualization, improper instrumentation, or other factors related to the targeted tooth (e.g., ankylosis or the presence of a bony undercut). Methods. In this work, the author presents a new technique for surgical tooth extraction based on 3D imaging, computer planning, and a new concept of computer-assisted manufacturing. Results. The outcome of this work is a surgical guide made by 3D printing of plastics and CNC machining of metals (a hybrid outcome). In addition, the conventional surgical cutting tools (surgical burs) are modified with a number of stoppers adjusted to avoid any excessive drilling that could harm bone or other vital structures. Conclusion. The present outcome could provide a minimally invasive technique to overcome the routine complications facing dental surgeons in surgical extraction procedures.

  14. Experimental data processing techniques by a personal computer

    International Nuclear Information System (INIS)

    Matsuura, Kiyokata; Tsuda, Kenzo; Abe, Yoshihiko; Kojima, Tsuyoshi; Nishikawa, Akira; Shimura, Hitoshi; Hyodo, Hiromi; Yamagishi, Shigeru.

    1989-01-01

    A personal computer (16-bit, about 1 MB of memory) can be used at low cost for experimental data processing. This report surveys important techniques for A/D and D/A conversion and for the display, storage and transfer of experimental data. The items to be considered in the software are also discussed. Practical programs written in BASIC and Assembler language are given as examples. We present some techniques for achieving faster processing in BASIC and show that a system composed of BASIC and Assembler is useful in practical experiments. System performance, such as processing speed and flexibility in setting operating conditions, depends strongly on the programming language. We have tested processing speed with several typical programming languages: BASIC (interpreter), C, FORTRAN and Assembler. As for calculation, FORTRAN gives the best performance, comparable to or better than Assembler even on a personal computer. (author)

  15. International Conference on Soft Computing Techniques and Engineering Application

    CERN Document Server

    Li, Xiaolong

    2014-01-01

    The main objective of ICSCTEA 2013 is to provide a platform for researchers, engineers and academicians from all over the world to present their research results and development activities in soft computing techniques and engineering application. This conference provides opportunities for them to exchange new ideas and application experiences face to face, to establish business or research relations and to find global partners for future collaboration.

  16. Applications of soft computing in time series forecasting simulation and modeling techniques

    CERN Document Server

    Singh, Pritpal

    2016-01-01

    This book reports on an in-depth study of fuzzy time series (FTS) modeling. It reviews and summarizes previous research work in FTS modeling and also provides a brief introduction to other soft-computing techniques, such as artificial neural networks (ANNs), rough sets (RS) and evolutionary computing (EC), focusing on how these techniques can be integrated into different phases of the FTS modeling approach. In particular, the book describes novel methods resulting from the hybridization of FTS modeling approaches with neural networks and particle swarm optimization. It also demonstrates how a new ANN-based model can be successfully applied in the context of predicting Indian summer monsoon rainfall. Thanks to its easy-to-read style and the clear explanations of the models, the book can be used as a concise yet comprehensive reference guide to fuzzy time series modeling, and will be valuable not only for graduate students, but also for researchers and professionals working for academic, business and governmen...

  17. Development of a computational technique to measure cartilage contact area.

    Science.gov (United States)

    Willing, Ryan; Lapner, Michael; Lalone, Emily A; King, Graham J W; Johnson, James A

    2014-03-21

    Computational measurement of joint contact distributions offers the benefit of non-invasive measurements of joint contact without the use of interpositional sensors or casting materials. This paper describes a technique for indirectly measuring joint contact based on overlapping of articular cartilage computer models derived from CT images and positioned using in vitro motion capture data. The accuracy of this technique when using the physiological nonuniform cartilage thickness distribution, or simplified uniform cartilage thickness distributions, is quantified through comparison with direct measurements of contact area made using a casting technique. The efficacy of using indirect contact measurement techniques for measuring the changes in contact area resulting from hemiarthroplasty at the elbow is also quantified. Using the physiological nonuniform cartilage thickness distribution reliably measured contact area (ICC=0.727), but not better than the assumed bone specific uniform cartilage thicknesses (ICC=0.673). When a contact pattern agreement score (s(agree)) was used to assess the accuracy of cartilage contact measurements made using physiological nonuniform or simplified uniform cartilage thickness distributions in terms of size, shape and location, their accuracies were not significantly different (p>0.05). The results of this study demonstrate that cartilage contact can be measured indirectly based on the overlapping of cartilage contact models. However, the results also suggest that in some situations, inter-bone distance measurement and an assumed cartilage thickness may suffice for predicting joint contact patterns. Copyright © 2014 Elsevier Ltd. All rights reserved.

  18. Jet-images: computer vision inspired techniques for jet tagging

    Energy Technology Data Exchange (ETDEWEB)

    Cogan, Josh; Kagan, Michael; Strauss, Emanuel; Schwarztman, Ariel [SLAC National Accelerator Laboratory,Menlo Park, CA 94028 (United States)

    2015-02-18

    We introduce a novel approach to jet tagging and classification through the use of techniques inspired by computer vision. Drawing parallels to the problem of facial recognition in images, we define a jet-image using calorimeter towers as the elements of the image and establish jet-image preprocessing methods. For the jet-image processing step, we develop a discriminant for classifying the jet-images derived using Fisher discriminant analysis. The effectiveness of the technique is shown within the context of identifying boosted hadronic W boson decays with respect to a background of quark- and gluon-initiated jets. Using Monte Carlo simulation, we demonstrate that the performance of this technique introduces additional discriminating power over other substructure approaches, and gives significant insight into the internal structure of jets.

  19. Jet-images: computer vision inspired techniques for jet tagging

    International Nuclear Information System (INIS)

    Cogan, Josh; Kagan, Michael; Strauss, Emanuel; Schwarztman, Ariel

    2015-01-01

    We introduce a novel approach to jet tagging and classification through the use of techniques inspired by computer vision. Drawing parallels to the problem of facial recognition in images, we define a jet-image using calorimeter towers as the elements of the image and establish jet-image preprocessing methods. For the jet-image processing step, we develop a discriminant for classifying the jet-images derived using Fisher discriminant analysis. The effectiveness of the technique is shown within the context of identifying boosted hadronic W boson decays with respect to a background of quark- and gluon-initiated jets. Using Monte Carlo simulation, we demonstrate that the performance of this technique introduces additional discriminating power over other substructure approaches, and gives significant insight into the internal structure of jets.
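
    The discriminant step can be sketched in a few lines: a Fisher linear discriminant is computed on flattened toy images (noisy 8x8 grids with class-dependent prong positions standing in for calorimeter towers). The jet-image preprocessing and Monte Carlo samples of the paper are not reproduced; everything below is synthetic.

```python
# Fisher linear discriminant on flattened toy "jet images".
import numpy as np

rng = np.random.default_rng(5)

def toy_images(n, offset):
    # 8x8 images: a bright core plus a second prong whose position differs by class.
    imgs = rng.exponential(0.05, (n, 8, 8))
    imgs[:, 3, 3] += 1.0
    imgs[:, 3 + offset, 3 + offset] += 0.5
    return imgs.reshape(n, -1)

signal = toy_images(500, offset=2)       # e.g. boosted-W-like topology (toy)
background = toy_images(500, offset=1)   # e.g. QCD-jet-like topology (toy)

# Fisher discriminant: w = Sw^-1 (m_s - m_b), Sw = within-class scatter.
m_s, m_b = signal.mean(0), background.mean(0)
Sw = np.cov(signal, rowvar=False) + np.cov(background, rowvar=False)
w = np.linalg.solve(Sw + 1e-6 * np.eye(64), m_s - m_b)

scores_s, scores_b = signal @ w, background @ w
cut = 0.5 * (scores_s.mean() + scores_b.mean())
print(f"signal efficiency {(scores_s > cut).mean():.2f}, "
      f"background rejection {(scores_b <= cut).mean():.2f}")
```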

  20. Determining flexor-tendon repair techniques via soft computing

    Science.gov (United States)

    Johnson, M.; Firoozbakhsh, K.; Moniem, M.; Jamshidi, M.

    2001-01-01

    An SC-based multi-objective decision-making method for determining the optimal flexor-tendon repair technique from experimental and clinical survey data, and with variable circumstances, was presented. Results were compared with those from the Taguchi method. Using the Taguchi method results in the need to perform ad-hoc decisions when the outcomes for individual objectives are contradictory to a particular preference or circumstance, whereas the SC-based multi-objective technique provides a rigorous straightforward computational process in which changing preferences and importance of differing objectives are easily accommodated. Also, adding more objectives is straightforward and easily accomplished. The use of fuzzy-set representations of information categories provides insight into their performance throughout the range of their universe of discourse. The ability of the technique to provide a "best" medical decision given a particular physician, hospital, patient, situation, and other criteria was also demonstrated.

  1. Development of computer-aided auto-ranging technique for a computed radiography system

    International Nuclear Information System (INIS)

    Ishida, M.; Shimura, K.; Nakajima, N.; Kato, H.

    1988-01-01

    For a computed radiography system, the authors developed a computer-aided autoranging technique in which the clinically useful image data are automatically mapped to the available display range. The preread image data are inspected to determine the location of collimation. A histogram of the pixels inside the collimation is evaluated regarding characteristic values such as maxima and minima, and then the optimal density and contrast are derived for the display image. The effect of the autoranging technique was investigated at several hospitals in Japan. The average rate of films lost due to undesirable density or contrast was about 0.5%
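
    A hedged sketch of the general auto-ranging idea follows: the histogram of pixels inside the collimated field provides robust minimum/maximum characteristic values, which are then mapped onto the display range. The percentile choices and the 12-bit input depth are assumptions, not parameters from the paper.

```python
# Histogram-based auto-ranging: map a robust window of the collimated field
# onto an 8-bit display range.
import numpy as np

def auto_range(preread, collimation_mask, display_max=255):
    inside = preread[collimation_mask]
    # Characteristic values from the histogram: robust minimum and maximum.
    lo, hi = np.percentile(inside, [1.0, 99.0])
    mapped = (preread.astype(float) - lo) / max(hi - lo, 1e-6)
    return np.clip(mapped * display_max, 0, display_max).astype(np.uint8)

# Toy 12-bit preread image with a centred collimated field.
rng = np.random.default_rng(2)
img = rng.integers(0, 4096, (256, 256))
mask = np.zeros_like(img, dtype=bool)
mask[32:224, 32:224] = True
display = auto_range(img, mask)
print("display range:", display.min(), "-", display.max())
```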

  2. A computer code to simulate X-ray imaging techniques

    International Nuclear Information System (INIS)

    Duvauchelle, Philippe; Freud, Nicolas; Kaftandjian, Valerie; Babot, Daniel

    2000-01-01

    A computer code was developed to simulate the operation of radiographic, radioscopic or tomographic devices. The simulation is based on ray-tracing techniques and on the X-ray attenuation law. The use of computer-aided drawing (CAD) models enables simulations to be carried out with complex three-dimensional (3D) objects and the geometry of every component of the imaging chain, from the source to the detector, can be defined. Geometric unsharpness, for example, can be easily taken into account, even in complex configurations. Automatic translations or rotations of the object can be performed to simulate radioscopic or tomographic image acquisition. Simulations can be carried out with monochromatic or polychromatic beam spectra. This feature enables, for example, the beam hardening phenomenon to be dealt with or dual energy imaging techniques to be studied. The simulation principle is completely deterministic and consequently the computed images present no photon noise. Nevertheless, the variance of the signal associated with each pixel of the detector can be determined, which enables contrast-to-noise ratio (CNR) maps to be computed, in order to predict quantitatively the detectability of defects in the inspected object. The CNR is a relevant indicator for optimizing the experimental parameters. This paper provides several examples of simulated images that illustrate some of the rich possibilities offered by our software. Depending on the simulation type, the computation time order of magnitude can vary from 0.1 s (simple radiographic projection) up to several hours (3D tomography) on a PC, with a 400 MHz microprocessor. Our simulation tool proves to be useful in developing new specific applications, in choosing the most suitable components when designing a new testing chain, and in saving time by reducing the number of experimental tests

  3. A computer code to simulate X-ray imaging techniques

    Energy Technology Data Exchange (ETDEWEB)

    Duvauchelle, Philippe E-mail: philippe.duvauchelle@insa-lyon.fr; Freud, Nicolas; Kaftandjian, Valerie; Babot, Daniel

    2000-09-01

    A computer code was developed to simulate the operation of radiographic, radioscopic or tomographic devices. The simulation is based on ray-tracing techniques and on the X-ray attenuation law. The use of computer-aided drawing (CAD) models enables simulations to be carried out with complex three-dimensional (3D) objects and the geometry of every component of the imaging chain, from the source to the detector, can be defined. Geometric unsharpness, for example, can be easily taken into account, even in complex configurations. Automatic translations or rotations of the object can be performed to simulate radioscopic or tomographic image acquisition. Simulations can be carried out with monochromatic or polychromatic beam spectra. This feature enables, for example, the beam hardening phenomenon to be dealt with or dual energy imaging techniques to be studied. The simulation principle is completely deterministic and consequently the computed images present no photon noise. Nevertheless, the variance of the signal associated with each pixel of the detector can be determined, which enables contrast-to-noise ratio (CNR) maps to be computed, in order to predict quantitatively the detectability of defects in the inspected object. The CNR is a relevant indicator for optimizing the experimental parameters. This paper provides several examples of simulated images that illustrate some of the rich possibilities offered by our software. Depending on the simulation type, the computation time order of magnitude can vary from 0.1 s (simple radiographic projection) up to several hours (3D tomography) on a PC, with a 400 MHz microprocessor. Our simulation tool proves to be useful in developing new specific applications, in choosing the most suitable components when designing a new testing chain, and in saving time by reducing the number of experimental tests.
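
    The physical core of such a simulator can be sketched compactly for the simplest case, a monochromatic parallel beam with no geometric unsharpness: the attenuation coefficient is integrated along each ray through a voxel phantom and converted to detector intensity with the Beer-Lambert law, I = I0 exp(-∫ μ dl). The phantom, attenuation values and geometry below are synthetic and far simpler than what the CAD-based code supports.

```python
# Parallel-beam radiographic projection of a voxel phantom via the Beer-Lambert law.
import numpy as np

voxel_mm = 0.5
mu_water, mu_steel = 0.02, 0.12        # attenuation coefficients in 1/mm (illustrative only)

# 128^3 phantom: a water cube containing a steel sphere with a small void ("defect").
n = 128
phantom = np.full((n, n, n), mu_water)
z, y, x = np.ogrid[:n, :n, :n]
sphere = (x - 64) ** 2 + (y - 64) ** 2 + (z - 64) ** 2 < 30 ** 2
defect = (x - 70) ** 2 + (y - 64) ** 2 + (z - 64) ** 2 < 6 ** 2
phantom[sphere] = mu_steel
phantom[defect] = 0.0

# Each ray travels along z; its path integral is the sum of mu over crossed voxels.
path_integral = phantom.sum(axis=0) * voxel_mm
image = 1.0 * np.exp(-path_integral)   # I0 = 1.0

print("transmission behind defect:     ", image[64, 70])
print("transmission behind solid steel:", image[64, 58])
```

    The two printed pixels show the expected contrast: the ray crossing the void is attenuated less than the ray through solid material, which is the kind of detectability question the CNR maps in the paper quantify.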

  4. Computational methods using weighed-extreme learning machine to predict protein self-interactions with protein evolutionary information.

    Science.gov (United States)

    An, Ji-Yong; Zhang, Lei; Zhou, Yong; Zhao, Yu-Jun; Wang, Da-Fu

    2017-08-18

    Self-interacting proteins (SIPs) are important for their biological activity owing to the inherent interaction amongst their secondary structures or domains. However, due to the limitations of experimental self-interaction detection, one major challenge in the study of SIP prediction is how to exploit computational approaches for SIP detection based on the evolutionary information contained in protein sequences. In this work, we present a novel computational approach named WELM-LAG, which combines the Weighed-Extreme Learning Machine (WELM) classifier with Local Average Group (LAG) features to predict SIPs from protein sequences. The major improvement of our method lies in presenting an effective feature extraction method used to represent candidate self-interacting proteins by exploring the evolutionary information embedded in the PSI-BLAST-constructed position-specific scoring matrix (PSSM), and then employing a reliable and robust WELM classifier to carry out classification. In addition, the Principal Component Analysis (PCA) approach is used to reduce the impact of noise. The WELM-LAG method gave very high average accuracies of 92.94 and 96.74% on yeast and human datasets, respectively. Meanwhile, we compared it with the state-of-the-art support vector machine (SVM) classifier and other existing methods on the human and yeast datasets, respectively. Comparative results indicated that our approach is very promising and may provide a cost-effective alternative for predicting SIPs. In addition, we developed a freely available web server called WELM-LAG-SIPs to predict SIPs. The web server is available at http://219.219.62.123:8888/WELMLAG/ .

  5. Application of computational fluid dynamics and surrogate-coupled evolutionary computing to enhance centrifugal-pump performance

    Directory of Open Access Journals (Sweden)

    Sayed Ahmed Imran Bellary

    2016-01-01

    Full Text Available To reduce the total design and optimization time, numerical analysis with surrogate-based approaches is being used in turbomachinery optimization. In this work, multiple surrogates are coupled with an evolutionary genetic algorithm to find the Pareto optimal fronts (PoFs) of two centrifugal pumps with different specifications in order to enhance their performance. The two pumps were a centrifugal pump commonly used in industry (Case I) and an electrical submersible pump used in the petroleum industry (Case II). The objectives are to enhance the head and efficiency of the pumps at specific flow rates. Surrogates such as response surface approximation (RSA), Kriging (KRG), neural networks and weighted-average surrogates (WASs) were used to determine the PoFs. To obtain the objective function values and to understand the flow physics, Reynolds-averaged Navier-Stokes equations were solved. It is found that the WAS performs better for both objectives than any individual surrogate. The best individual surrogates, or the best predicted-error-sum-of-squares (PRESS) surrogates (BPS) obtained from cross-validation (CV) error estimation, produced better PoFs but were still unable to compete with the WAS. The surrogate with the highest CV error produced the worst PoFs. The performance improvement in this study is due to the change in flow pattern in the impeller passage of the pumps.
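
    The weighted-average surrogate idea can be illustrated with a small sketch: each individual surrogate (here only a quadratic response surface and a Gaussian RBF stand in for RSA, Kriging and neural networks) is weighted by the inverse of its leave-one-out PRESS error. The objective is a synthetic function, not the RANS-evaluated pump performance.

```python
# Weighted-average surrogate (WAS) with PRESS-based weights.
import numpy as np

rng = np.random.default_rng(4)

def truth(x):                      # stand-in for a CFD-evaluated objective
    return np.sin(3 * x[:, 0]) + 0.5 * x[:, 1] ** 2

X = rng.uniform(-1, 1, (25, 2))
y = truth(X)

def fit_rsa(Xt, yt):               # quadratic response surface approximation
    P = np.column_stack([np.ones(len(Xt)), Xt, Xt ** 2, Xt[:, :1] * Xt[:, 1:]])
    beta, *_ = np.linalg.lstsq(P, yt, rcond=None)
    return lambda Xq: np.column_stack(
        [np.ones(len(Xq)), Xq, Xq ** 2, Xq[:, :1] * Xq[:, 1:]]) @ beta

def fit_rbf(Xt, yt, eps=2.0):      # Gaussian radial-basis interpolation
    D = np.linalg.norm(Xt[:, None] - Xt[None], axis=2)
    w = np.linalg.solve(np.exp(-(eps * D) ** 2) + 1e-9 * np.eye(len(Xt)), yt)
    return lambda Xq: np.exp(-(eps * np.linalg.norm(
        Xq[:, None] - Xt[None], axis=2)) ** 2) @ w

def press(fit):                    # leave-one-out error sum of squares
    errs = []
    for i in range(len(X)):
        keep = np.arange(len(X)) != i
        model = fit(X[keep], y[keep])
        errs.append((model(X[i:i + 1])[0] - y[i]) ** 2)
    return np.sum(errs)

fits = [fit_rsa, fit_rbf]
weights = np.array([1.0 / press(f) for f in fits])
weights /= weights.sum()           # weights proportional to inverse PRESS

Xq = rng.uniform(-1, 1, (200, 2))
was_pred = sum(w * f(X, y)(Xq) for w, f in zip(weights, fits))
print("WAS RMSE on test points:", np.sqrt(np.mean((was_pred - truth(Xq)) ** 2)))
```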

  6. Computational intelligence techniques for biological data mining: An overview

    Science.gov (United States)

    Faye, Ibrahima; Iqbal, Muhammad Javed; Said, Abas Md; Samir, Brahim Belhaouari

    2014-10-01

    Computational techniques have been successfully utilized for highly accurate analysis and modeling of the multifaceted raw biological data gathered from various genome sequencing projects. These techniques are proving much more effective at overcoming the limitations of traditional in-vitro experiments on the constantly increasing sequence data. The most critical problems that have caught the attention of researchers include, but are not limited to: accurate structure and function prediction of unknown proteins, protein subcellular localization prediction, finding protein-protein interactions, protein fold recognition, analysis of microarray gene expression data, etc. To solve these problems, various classification and clustering techniques using machine learning have been extensively used in the published literature. These techniques include neural network algorithms, genetic algorithms, fuzzy ARTMAP, K-Means, K-NN, SVM, rough set classifiers, decision trees and HMM-based algorithms. Major difficulties in applying the above algorithms include the limitations of previous feature encoding and selection methods in extracting the best features, increasing classification accuracy, and decreasing the running time overheads of the learning algorithms. This research is potentially useful in drug design and in the diagnosis of some diseases. This paper presents a concise overview of the well-known protein classification techniques.

  7. GRAVTool, a Package to Compute Geoid Model by Remove-Compute-Restore Technique

    Science.gov (United States)

    Marotta, G. S.; Blitzkow, D.; Vidotti, R. M.

    2015-12-01

    Currently, there are several methods to determine geoid models. They can be based on terrestrial gravity data, geopotential coefficients, astro-geodetic data or a combination of them. Among the techniques to compute a precise geoid model, the Remove-Compute-Restore (RCR) technique has been widely applied. It considers short, medium and long wavelengths derived from altitude data provided by Digital Terrain Models (DTM), terrestrial gravity data and global geopotential coefficients, respectively. In order to apply this technique, it is necessary to create procedures that compute gravity anomalies and geoid models by the integration of different wavelengths, and that adjust these models to one local vertical datum. This research presents a package called GRAVTool, based on MATLAB software, to compute local geoid models by the RCR technique, and its application to a study area. The study area comprises the Federal District of Brazil, with ~6000 km², undulating relief and heights varying from 600 m to 1340 m, located between the coordinates 48.25ºW, 15.45ºS and 47.33ºW, 16.06ºS. The results of the numerical example for the study area show the local geoid model computed by the GRAVTool package using 1377 terrestrial gravity observations, SRTM data with 3 arc second resolution, and geopotential coefficients of the EIGEN-6C4 model to degree 360. The accuracy of the computed model (σ = ±0.071 m, RMS = 0.069 m, maximum = 0.178 m and minimum = -0.123 m) matches the uncertainty (σ = ±0.073 m) of 21 randomly spaced points where the geoid was determined by geometric levelling supported by GNSS positioning. The results were also better than those achieved by the official Brazilian regional geoid model (σ = ±0.099 m, RMS = 0.208 m, maximum = 0.419 m and minimum = -0.040 m).

  8. Experimental and Computational Techniques in Soft Condensed Matter Physics

    Science.gov (United States)

    Olafsen, Jeffrey

    2010-09-01

    1. Microscopy of soft materials Eric R. Weeks; 2. Computational methods to study jammed Systems Carl F. Schrek and Corey S. O'Hern; 3. Soft random solids: particulate gels, compressed emulsions and hybrid materials Anthony D. Dinsmore; 4. Langmuir monolayers Michael Dennin; 5. Computer modeling of granular rheology Leonardo E. Silbert; 6. Rheological and microrheological measurements of soft condensed matter John R. de Bruyn and Felix K. Oppong; 7. Particle-based measurement techniques for soft matter Nicholas T. Ouellette; 8. Cellular automata models of granular flow G. William Baxter; 9. Photoelastic materials Brian Utter; 10. Image acquisition and analysis in soft condensed matter Jeffrey S. Olafsen; 11. Structure and patterns in bacterial colonies Nicholas C. Darnton.

  9. Phase behavior of multicomponent membranes: Experimental and computational techniques

    DEFF Research Database (Denmark)

    Bagatolli, Luis; Kumar, P.B. Sunil

    2009-01-01

    Recent developments in biology seem to indicate that the fluid mosaic model of the membrane proposed by Singer and Nicolson, with the lipid bilayer functioning only as a medium to support the protein machinery, may be too simple to be realistic. Many protein functions are now known to depend on the composition of the membranes. The current increase in interest in domain formation in multicomponent membranes also stems from experiments demonstrating liquid ordered-liquid disordered coexistence in mixtures of lipids and cholesterol, and from the success of several computational models in predicting their behavior. This review includes basic foundations on membrane model systems and experimental approaches applied in the membrane research area, stressing recent advances in the experimental and computational techniques.

  10. Soft computing techniques toward modeling the water supplies of Cyprus.

    Science.gov (United States)

    Iliadis, L; Maris, F; Tachos, S

    2011-10-01

    This research effort aims at the application of soft computing techniques to water resources management. More specifically, the target is the development of reliable soft computing models capable of estimating the water supply for the case of the "Germasogeia" mountainous watersheds in Cyprus. Initially, ε-Regression Support Vector Machine (ε-RSVM) and fuzzy weighted ε-RSVM models have been developed that accept five input parameters. At the same time, reliable artificial neural networks have been developed to perform the same job. The 5-fold cross-validation approach has been employed in order to eliminate bad local behaviors and to produce a more representative training data set. Thus, the fuzzy weighted Support Vector Regression (SVR) combined with the fuzzy partition has been employed in an effort to enhance the quality of the results. Several rational and reliable models have been produced that can enhance the efficiency of water policy designers. Copyright © 2011 Elsevier Ltd. All rights reserved.
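
    As a hedged sketch of this modelling setup, the snippet below uses scikit-learn's ε-SVR inside 5-fold cross-validation as a stand-in for the ε-RSVM models; the five inputs and the water-supply target are synthetic placeholders, not the Germasogeia data.

```python
# epsilon-SVR with 5-fold cross-validation on synthetic hydro-meteorological inputs.
import numpy as np
from sklearn.model_selection import KFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(6)
X = rng.uniform(0, 1, (120, 5))                 # five input parameters (synthetic)
y = 2.0 * X[:, 0] + np.sin(3 * X[:, 1]) + 0.1 * rng.standard_normal(120)

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.05))
scores = cross_val_score(model, X, y,
                         cv=KFold(n_splits=5, shuffle=True, random_state=0),
                         scoring="neg_root_mean_squared_error")
print("5-fold RMSE:", -scores.mean())
```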

  11. A Computer Based Moire Technique To Measure Very Small Displacements

    Science.gov (United States)

    Sciammarella, Cesar A.; Amadshahi, Mansour A.; Subbaraman, B.

    1987-02-01

    The accuracy that can be achieved in the measurement of very small displacements in techniques such as moiré, holography and speckle is limited by the noise inherent to the optical devices utilized. To reduce the noise-to-signal ratio, the moiré method can be utilized. Two systems of carrier fringes are introduced: an initial system before the load is applied and a final system when the load is applied. The moiré pattern of these two systems contains the sought displacement information, and the noise common to the two patterns is eliminated. The whole process is performed by a computer on digitized versions of the patterns. Examples of application are given.

  12. APPLICATION OF OBJECT ORIENTED PROGRAMMING TECHNIQUES IN FRONT END COMPUTERS

    International Nuclear Information System (INIS)

    SKELLY, J.F.

    1997-01-01

    The Front End Computer (FEC) environment imposes special demands on software, beyond real time performance and robustness. FEC software must manage a diverse inventory of devices with individualistic timing requirements and hardware interfaces. It must implement network services which export device access to the control system at large, interpreting a uniform network communications protocol into the specific control requirements of the individual devices. Object oriented languages provide programming techniques which neatly address these challenges, and also offer benefits in terms of maintainability and flexibility. Applications are discussed which exhibit the use of inheritance, multiple inheritance and inheritance trees, and polymorphism to address the needs of FEC software

  13. Computer vision techniques for the diagnosis of skin cancer

    CERN Document Server

    Celebi, M

    2014-01-01

    The goal of this volume is to summarize the state-of-the-art in the utilization of computer vision techniques in the diagnosis of skin cancer. Malignant melanoma is one of the most rapidly increasing cancers in the world. Early diagnosis is particularly important since melanoma can be cured with a simple excision if detected early. In recent years, dermoscopy has proved valuable in visualizing the morphological structures in pigmented lesions. However, it has also been shown that dermoscopy is difficult to learn and subjective. Newer technologies such as infrared imaging, multispectral imaging, and confocal microscopy, have recently come to the forefront in providing greater diagnostic accuracy. These imaging technologies presented in this book can serve as an adjunct to physicians and  provide automated skin cancer screening. Although computerized techniques cannot as yet provide a definitive diagnosis, they can be used to improve biopsy decision-making as well as early melanoma detection, especially for pa...

  14. Template matching techniques in computer vision theory and practice

    CERN Document Server

    Brunelli, Roberto

    2009-01-01

    The detection and recognition of objects in images is a key research topic in the computer vision community. Within this area, face recognition and interpretation has attracted increasing attention owing to the possibility of unveiling human perception mechanisms, and for the development of practical biometric systems. This book and the accompanying website focus on template matching, a subset of object recognition techniques of wide applicability, which has proved to be particularly effective for face recognition applications. Using examples from face processing tasks throughout the book to illustrate more general object recognition approaches, Roberto Brunelli: examines the basics of digital image formation, highlighting points critical to the task of template matching; presents basic and advanced template matching techniques, targeting grey-level images, shapes and point sets; discusses recent pattern classification paradigms from a template matching perspective; illustrates the development of a real fac...

  15. Genetic characterization and evolutionary inference of TNF-α through computational analysis

    Directory of Open Access Journals (Sweden)

    Gauri Awasthi

    Full Text Available TNF-α is an important human cytokine that imparts dualism in malaria pathogenicity. At high dosages, TNF-α is believed to provoke pathogenicity in cerebral malaria, while at lower dosages TNF-α is protective against severe human malaria. In order to understand the human TNF-α gene and to ascertain the evolutionary aspects of its dualistic nature in malaria pathogenicity, we characterized this gene in detail in six different mammalian taxa. The avian taxon Gallus gallus was also included in our study; as TNF-α is not present in birds, a tandemly placed duplicate of TNF-α (LT-α or TNF-β) was included. A comparative study was made of nucleotide length variations, intron and exon size and number variations, differential compositions of coding to non-coding bases, etc., to look for similarities/dissimilarities in the TNF-α gene across all seven taxa. A phylogenetic analysis revealed the pattern found in other genes, as humans, chimpanzees and rhesus monkeys were placed in a single clade, and rats and mice in another; the chicken was in a clearly separate branch. We further focused on these three primate taxa and aligned the amino acid sequences; there were small differences between humans and chimpanzees, and both were more different from the rhesus monkey. Further, comparison of coding and non-coding nucleotide length variations and coding to non-coding nucleotide ratios between TNF-α and TNF-β among these three mammalian taxa provided a first-hand indication of the role of the TNF-α gene, but not of TNF-β, in the dualistic nature of TNF-α in malaria pathogenicity.

  16. Neuro-evolutionary computing paradigm for Painlevé equation-II in nonlinear optics

    Science.gov (United States)

    Ahmad, Iftikhar; Ahmad, Sufyan; Awais, Muhammad; Ul Islam Ahmad, Siraj; Asif Zahoor Raja, Muhammad

    2018-05-01

    The aim of this study is to investigate the numerical treatment of the Painlevé equation-II arising in physical models of nonlinear optics through artificial intelligence procedures, by incorporating a single-layer neural network structure optimized with genetic algorithms, sequential quadratic programming and active-set techniques. We constructed a mathematical model for the nonlinear Painlevé equation-II with the help of networks by defining an error-based cost function in the mean-square sense. The performance of the proposed technique is validated through statistical analyses by means of the one-way ANOVA test conducted on a dataset generated by a large number of independent runs.
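
    A sketch under stated assumptions: a single-hidden-layer tanh network serves as a trial solution u(x) of the Painlevé II equation u'' = 2u³ + xu + α, its equation residual is minimised at collocation points, and SciPy's differential evolution is used as a stand-in for the paper's GA/SQP/active-set hybrid. The initial conditions and all hyperparameters are illustrative choices, not taken from the study.

```python
# Neural-form trial solution of Painleve II optimized by differential evolution.
import numpy as np
from scipy.optimize import differential_evolution

alpha, m = 1.0, 6                         # equation parameter, hidden neurons
xc = np.linspace(0.0, 1.0, 40)            # collocation points

def unpack(p):
    return p[:m], p[m:2*m], p[2*m:3*m], p[3*m]

def trial(p, x):
    w1, b1, w2, b2 = unpack(p)
    t = np.tanh(np.outer(x, w1) + b1)             # shape (len(x), m)
    u = t @ w2 + b2
    du = ((1 - t**2) * w1) @ w2                   # analytic u'
    d2u = (-2 * t * (1 - t**2) * w1**2) @ w2      # analytic u''
    return u, du, d2u

def cost(p):
    u, du, d2u = trial(p, xc)
    residual = d2u - 2*u**3 - xc*u - alpha        # Painleve II residual
    u0, du0, _ = trial(p, np.array([0.0]))        # illustrative ICs: u(0)=1, u'(0)=0
    return np.mean(residual**2) + 10*(u0[0] - 1.0)**2 + 10*du0[0]**2

bounds = [(-3, 3)] * (3*m + 1)
result = differential_evolution(cost, bounds, maxiter=300, seed=0, tol=1e-8)
print("cost (residual + IC penalty) of the best trial solution:", result.fun)
```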

  17. Soft Computing Techniques for the Protein Folding Problem on High Performance Computing Architectures.

    Science.gov (United States)

    Llanes, Antonio; Muñoz, Andrés; Bueno-Crespo, Andrés; García-Valverde, Teresa; Sánchez, Antonia; Arcas-Túnez, Francisco; Pérez-Sánchez, Horacio; Cecilia, José M

    2016-01-01

    The protein-folding problem has been extensively studied during the last fifty years. Understanding the dynamics of the global shape of a protein and its influence on biological function can help us to discover new and more effective drugs for diseases of pharmacological relevance. Different computational approaches have been developed by different researchers to foresee the three-dimensional arrangement of the atoms of proteins from their sequences. However, the computational complexity of this problem makes the search for new models, novel algorithmic strategies and hardware platforms that provide solutions in a reasonable time frame mandatory. In this review we present past and recent trends regarding protein folding simulations from both perspectives, hardware and software. Of particular interest to us are both the use of inexact solutions to this computationally hard problem and the hardware platforms that have been used for running this kind of soft computing technique.

  18. A computational technique for turbulent flow of wastewater sludge.

    Science.gov (United States)

    Bechtel, Tom B

    2005-01-01

    A computational fluid dynamics (CFD) technique applied to the turbulent flow of wastewater sludge in horizontal, smooth-wall, circular pipes is presented. The technique uses the Crank-Nicolson finite difference method in conjunction with the variable secant method, an algorithm for determining the pressure gradient of the flow. A simple algebraic turbulence model is used. A Bingham-plastic rheological model is used to describe the shear stress/shear rate relationship for the wastewater sludge. The method computes the velocity gradient and head loss, given a fixed volumetric flow, pipe size, and solids concentration. Solids concentrations ranging from 3 to 10% (by weight) and nominal pipe sizes from 0.15 m (6 in.) to 0.36 m (14 in.) are studied. Comparison of the CFD results for water to established values serves to validate the numerical method. The head loss results are presented in terms of a head loss ratio, R(hl), which is the ratio of sludge head loss to water head loss. An empirical equation relating R(hl) to pipe velocity and solids concentration, derived from the results of the CFD calculations, is presented. The results are compared with published values of R(hl) for solids concentrations of 3 and 6%. A new expression for the Fanning friction factor for wastewater sludge flow is also presented.
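
    The record names the ingredients (a Bingham-plastic rheology and a secant iteration on the pressure gradient) without giving code. A minimal sketch of the pressure-gradient iteration is shown below; it substitutes the laminar Buckingham-Reiner relation for the paper's turbulent finite-difference model, and the yield stress, plastic viscosity, pipe radius and target flow are assumed values.

```python
# Hedged sketch: secant iteration on the pressure gradient of a Bingham-plastic
# pipe flow until a target volumetric flow is matched (laminar stand-in model).
import math

tau_y, mu_p = 5.0, 0.05      # assumed yield stress [Pa] and plastic viscosity [Pa.s]
R, Q_target = 0.15, 0.03     # assumed pipe radius [m] and target flow [m^3/s]

def flow_rate(dpdx):
    """Buckingham-Reiner laminar flow rate for pressure gradient dpdx [Pa/m]."""
    tau_w = dpdx * R / 2.0                       # wall shear stress
    if tau_w <= tau_y:                           # below the yield stress: no flow
        return 0.0
    ratio = tau_y / tau_w
    return (math.pi * R**4 * dpdx / (8.0 * mu_p)
            * (1.0 - 4.0 * ratio / 3.0 + ratio**4 / 3.0))

def secant_pressure_gradient(q_target, g0=100.0, g1=1000.0, tol=1e-8):
    f0, f1 = flow_rate(g0) - q_target, flow_rate(g1) - q_target
    for _ in range(100):
        g2 = g1 - f1 * (g1 - g0) / (f1 - f0)     # secant update
        if abs(g2 - g1) < tol:
            return g2
        g0, f0, g1, f1 = g1, f1, g2, flow_rate(g2) - q_target
    return g1

dpdx = secant_pressure_gradient(Q_target)
print(f"pressure gradient ~ {dpdx:.1f} Pa/m, flow = {flow_rate(dpdx):.4f} m^3/s")
```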

  19. Computational techniques for inelastic analysis and numerical experiments

    International Nuclear Information System (INIS)

    Yamada, Y.

    1977-01-01

    A number of formulations have been proposed for inelastic analysis, particularly for the thermal elastic-plastic creep analysis of nuclear reactor components. In the elastic-plastic regime, which principally concerns time-independent behavior, numerical techniques based on the finite element method have been well exploited and computations have become routine work. With respect to problems in which time-dependent behavior is significant, it is desirable to incorporate a procedure that works with the mechanical model formulation as well as with the equation-of-state methods proposed so far. A computer program should also take into account the strain-dependent and/or time-dependent micro-structural changes which often occur during the operation of structural components at increasingly high temperatures for long periods of time. Special considerations are crucial if the analysis is to be extended to the large strain regime where geometric nonlinearities predominate. The present paper introduces a rational updated formulation and a computer program under development that take into account the various requisites stated above. (Auth.)

  20. Development of computational technique for labeling magnetic flux-surfaces

    International Nuclear Information System (INIS)

    Nunami, Masanori; Kanno, Ryutaro; Satake, Shinsuke; Hayashi, Takaya; Takamaru, Hisanori

    2006-03-01

    In recent Large Helical Device (LHD) experiments, radial profiles of ion temperature, electric field, etc. are measured in the m/n=1/1 magnetic island produced by island control coils, where m is the poloidal mode number and n the toroidal mode number. When plasma transport in these radial profiles is analyzed numerically, an average over a magnetic flux-surface in the island is a very useful concept for understanding the transport. For such averaging, a proper labeling of the flux-surfaces is necessary. In general, it is not easy to label the flux-surfaces in a magnetic field with an island, compared with the case of a magnetic field configuration having nested flux-surfaces. In the present paper, we have developed a new computational technique to label the magnetic flux-surfaces. The technique is constructed around an optimization algorithm known as simulated annealing. The flux-surfaces are discerned by using two labels: one is the classification of the magnetic field structure, i.e., core, island, ergodic, and outside regions, and the other is the value of the toroidal magnetic flux. We have applied the technique to an LHD configuration with the m/n=1/1 island, and successfully obtained the discrimination of the magnetic field structure. (author)
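
    The record names simulated annealing as the optimization kernel without further detail; a generic sketch of that kernel is shown below. The cooling schedule, the neighbour move and the toy "smooth label" cost are illustrative assumptions, not the LHD implementation.

```python
# Hedged sketch: a generic simulated-annealing loop, here driving a candidate
# labeling toward a smooth target by minimizing a squared-error cost.
import math, random

random.seed(0)

def simulated_annealing(cost, neighbour, state, t0=1.0, cooling=0.995, steps=5000):
    """Accept better states always, worse states with Boltzmann probability."""
    best, best_cost = state, cost(state)
    current, current_cost, t = state, best_cost, t0
    for _ in range(steps):
        cand = neighbour(current)
        c = cost(cand)
        if c < current_cost or random.random() < math.exp(-(c - current_cost) / t):
            current, current_cost = cand, c
            if c < best_cost:
                best, best_cost = cand, c
        t *= cooling                               # geometric cooling schedule
    return best, best_cost

# toy usage: recover a smooth "flux label" profile (purely illustrative)
target = [math.sin(0.1 * i) for i in range(50)]
cost = lambda s: sum((a - b) ** 2 for a, b in zip(s, target))
neighbour = lambda s: [v + random.gauss(0, 0.05) if random.random() < 0.1 else v for v in s]
labels, final_cost = simulated_annealing(cost, neighbour, [0.0] * 50)
print("final cost:", round(final_cost, 4))
```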

  1. Computer vision techniques for rotorcraft low-altitude flight

    Science.gov (United States)

    Sridhar, Banavar; Cheng, Victor H. L.

    1988-01-01

    A description is given of research that applies techniques from computer vision to the automation of rotorcraft navigation. The effort emphasizes the development of a methodology for detecting the ranges to obstacles in the region of interest based on the maximum utilization of passive sensors. The range map derived from the obstacle detection approach can be used as obstacle data for obstacle avoidance in an automatic guidance system and as an advisory display to the pilot. The lack of suitable flight imagery data, however, presents a problem in the verification of concepts for obstacle detection. This problem is being addressed by the development of an adequate flight database and by preprocessing of currently available flight imagery. Some comments are made on future work and how research in this area relates to the guidance of other autonomous vehicles.

  2. Iterative reconstruction techniques for computed tomography Part 1: Technical principles

    International Nuclear Information System (INIS)

    Willemink, Martin J.; Jong, Pim A. de; Leiner, Tim; Nievelstein, Rutger A.J.; Schilham, Arnold M.R.; Heer, Linda M. de; Budde, Ricardo P.J.

    2013-01-01

    To explain the technical principles of and differences between commercially available iterative reconstruction (IR) algorithms for computed tomography (CT) in non-mathematical terms for radiologists and clinicians. Technical details of the different proprietary IR techniques were distilled from available scientific articles and manufacturers' white papers and were verified by the manufacturers. Clinical results were obtained from a literature search spanning January 2006 to January 2012, including only original research papers concerning IR for CT. IR for CT iteratively reduces noise and artefacts in either image space or raw data, or both. Reported dose reductions ranged from 23% to 76% compared to locally used default filtered back-projection (FBP) settings, with similar noise, artefacts, and subjective and objective image quality. IR has the potential to reduce the radiation dose while preserving image quality. Disadvantages of IR include a blotchy image appearance and longer computational time. Future studies need to address differences between IR algorithms for clinical low-dose CT. • Iterative reconstruction technology for CT is presented in non-mathematical terms. (orig.)

  3. Computer-aided auscultation learning system for nursing technique instruction.

    Science.gov (United States)

    Hou, Chun-Ju; Chen, Yen-Ting; Hu, Ling-Chen; Chuang, Chih-Chieh; Chiu, Yu-Hsien; Tsai, Ming-Shih

    2008-01-01

    Pulmonary auscultation is a physical assessment skill learned by nursing students for examining the respiratory system. Generally, a mannequin equipped with a sound simulator is used to teach auscultation techniques to groups via classroom demonstration. However, nursing students cannot readily duplicate this learning environment for self-study. The advancement of electronic and digital signal processing technologies facilitates simulating this learning environment. This study aims to develop a computer-aided auscultation learning system for assisting teachers and nursing students in auscultation teaching and learning. This system provides teachers with signal recording and processing of lung sounds, and provides immediate playback of lung sounds for students. A graphical user interface allows teachers to control the measuring device, draw lung sound waveforms, highlight lung sound segments of interest, and include descriptive text. Effects on learning lung sound auscultation were evaluated to verify the feasibility of the system. Fifteen nursing students voluntarily participated in the repeated experiment. The results of a paired t test showed that the auscultative abilities of the students were significantly improved by using the computer-aided auscultation learning system.

  4. Electrostatic afocal-zoom lens design using computer optimization technique

    Energy Technology Data Exchange (ETDEWEB)

    Sise, Omer, E-mail: omersise@gmail.com

    2014-12-15

    Highlights: • We describe the detailed design of a five-element electrostatic afocal-zoom lens. • The simplex optimization is used to optimize lens voltages. • The method can be applied to multi-element electrostatic lenses. - Abstract: Electron optics is the key to the successful operation of electron collision experiments, where well-designed electrostatic lenses are needed to drive the electron beam before and after the collision. In this work, the imaging properties and aberration analysis of an electrostatic afocal-zoom lens design were investigated using a computer optimization technique. We have found a whole new range of voltage combinations that had gone unnoticed until now. A full range of voltage ratios and spherical and chromatic aberration coefficients were systematically analyzed over a range of magnifications between 0.3 and 3.2. The grid-shadow evaluation was also employed to show the effect of spherical aberration. The technique is found to be useful for searching for the optimal configuration in a multi-element lens system.
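
    The "computer optimization technique" in the highlights is the simplex method; a minimal Nelder-Mead search over two free lens-voltage ratios is sketched below. The surrogate cost function and its optimum are invented for illustration and do not come from the paper; a derivative-free simplex search is a natural fit here because ray-traced aberration figures of merit rarely have convenient gradients.

```python
# Hedged sketch: Nelder-Mead simplex search over two lens-voltage ratios,
# minimizing a made-up surrogate for the aberration figure of merit.
import numpy as np
from scipy.optimize import minimize

def aberration_cost(v):
    """Illustrative surrogate: grows away from an assumed optimal ratio pair."""
    v2, v3 = v
    return (v2 - 1.8) ** 2 + 0.5 * (v3 - 0.6) ** 2 + 0.1 * (v2 * v3 - 1.0) ** 2

result = minimize(aberration_cost, x0=[1.0, 1.0], method="Nelder-Mead",
                  options={"xatol": 1e-6, "fatol": 1e-6})
print("optimal voltage ratios:", np.round(result.x, 3), "cost:", round(result.fun, 6))
```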

  5. An AmI-Based Software Architecture Enabling Evolutionary Computation in Blended Commerce: The Shopping Plan Application

    Directory of Open Access Journals (Sweden)

    Giuseppe D’Aniello

    2015-01-01

    Full Text Available This work describes an approach to synergistically exploit ambient intelligence technologies, mobile devices, and evolutionary computation in order to support blended commerce or ubiquitous commerce scenarios. The work proposes a software architecture consisting of three main components: linked data for e-commerce, cloud-based services, and mobile apps. The three components implement a scenario where a shopping mall is presented as an intelligent environment in which customers use the NFC capabilities of their smartphones in order to handle e-coupons produced, suggested, and consumed by this environment. The main function of the intelligent environment is to help customers define shopping plans, which minimize the overall shopping cost by looking for the best prices, discounts, and coupons. The paper proposes a genetic algorithm to find suboptimal solutions for the shopping plan problem in a highly dynamic context, where the final cost of a product for an individual customer depends on his previous purchases. In particular, the work provides details on the Shopping Plan software prototype and some experimental results showing the overall performance of the genetic algorithm.
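
    As a rough illustration of the kind of genetic algorithm described (not the Shopping Plan prototype itself), the sketch below assigns each product on a list to one of several shops so that the total cost is minimized. The prices, the coupon rule, the population size and the genetic operators are all made-up assumptions.

```python
# Hedged sketch: a tiny genetic algorithm for a shopping-plan-style assignment,
# with one-point crossover, random mutation and truncation selection.
import random

random.seed(1)
n_products, n_shops = 8, 4
price = [[random.uniform(5, 20) for _ in range(n_shops)] for _ in range(n_products)]

def cost(plan):
    total = sum(price[p][s] for p, s in enumerate(plan))
    for s in range(n_shops):                       # assumed coupon rule:
        bought = [price[p][s] for p, shop in enumerate(plan) if shop == s]
        if len(bought) >= 3:                       # 5% off for 3+ items from one shop
            total -= 0.05 * sum(bought)
    return total

def evolve(pop_size=40, generations=100, p_mut=0.1):
    pop = [[random.randrange(n_shops) for _ in range(n_products)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)
        parents = pop[: pop_size // 2]             # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_products)  # one-point crossover
            child = [random.randrange(n_shops) if random.random() < p_mut else g
                     for g in a[:cut] + b[cut:]]   # mutation
            children.append(child)
        pop = parents + children
    return min(pop, key=cost)

best = evolve()
print("best plan:", best, "cost:", round(cost(best), 2))
```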

  6. Computer-aided classification of lung nodules on computed tomography images via deep learning technique

    Directory of Open Access Journals (Sweden)

    Hua KL

    2015-08-01

    Full Text Available Kai-Lung Hua,1 Che-Hao Hsu,1 Shintami Chusnul Hidayati,1 Wen-Huang Cheng,2 Yu-Jen Chen3 1Department of Computer Science and Information Engineering, National Taiwan University of Science and Technology, 2Research Center for Information Technology Innovation, Academia Sinica, 3Department of Radiation Oncology, MacKay Memorial Hospital, Taipei, Taiwan Abstract: Lung cancer has a poor prognosis when not diagnosed early and unresectable lesions are present. The management of small lung nodules noted on computed tomography scan is controversial due to uncertain tumor characteristics. A conventional computer-aided diagnosis (CAD) scheme requires several image processing and pattern recognition steps to accomplish a quantitative tumor differentiation result. In such an ad hoc image analysis pipeline, every step depends heavily on the performance of the previous step. Accordingly, tuning of classification performance in a conventional CAD scheme is very complicated and arduous. Deep learning techniques, on the other hand, have the intrinsic advantage of automatic feature exploitation and seamless tuning of performance. In this study, we attempted to simplify the image analysis pipeline of conventional CAD with deep learning techniques. Specifically, we introduced models of a deep belief network and a convolutional neural network in the context of nodule classification in computed tomography images. Two baseline methods with feature computing steps were implemented for comparison. The experimental results suggest that deep learning methods could achieve better discriminative results and hold promise in the CAD application domain. Keywords: nodule classification, deep learning, deep belief network, convolutional neural network

  7. Continuous analog of multiplicative algebraic reconstruction technique for computed tomography

    Science.gov (United States)

    Tateishi, Kiyoko; Yamaguchi, Yusaku; Abou Al-Ola, Omar M.; Kojima, Takeshi; Yoshinaga, Tetsuya

    2016-03-01

    We propose a hybrid dynamical system as a continuous analog to the block-iterative multiplicative algebraic reconstruction technique (BI-MART), which is a well-known iterative image reconstruction algorithm for computed tomography. The hybrid system is described by a switched nonlinear system with a piecewise smooth vector field or differential equation and, for consistent inverse problems, the convergence of non-negatively constrained solutions to a globally stable equilibrium is guaranteed by the Lyapunov theorem. Namely, we can prove theoretically that a weighted Kullback-Leibler divergence measure can be a common Lyapunov function for the switched system. We show that discretizing the differential equation by using the first-order approximation (Euler's method) based on the geometric multiplicative calculus leads to the same iterative formula as the BI-MART, with the scaling parameter acting as the time-step of the numerical discretization. The present paper is the first to reveal that this kind of iterative image reconstruction algorithm can be constructed by discretizing a continuous-time dynamical system for solving tomographic inverse problems. Iterative algorithms obtained by discretizing the continuous-time system not only with the Euler method but also with lower-order Runge-Kutta methods can be used for image reconstruction. A numerical example showing the characteristics of the discretized iterative methods is presented.
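
    For readers unfamiliar with the discrete algorithm being generalized, the sketch below runs a plain block-iterative MART update on a tiny made-up consistent system; the matrix, the data, the block split and the relaxation/time-step parameter are illustrative assumptions, not the paper's switched dynamical system.

```python
# Hedged sketch: block-iterative MART, x <- x * exp(step * A_b^T log(b_b / A_b x)),
# applied to a small consistent non-negative system.
import numpy as np

rng = np.random.default_rng(0)
A = rng.uniform(0.0, 1.0, size=(6, 4))       # toy projection matrix
x_true = np.array([1.0, 2.0, 0.5, 1.5])
b = A @ x_true                                # consistent projection data

def bi_mart(A, b, n_iter=200, step=0.5, n_blocks=2):
    x = np.ones(A.shape[1])                   # positive initial image
    blocks = np.array_split(np.arange(A.shape[0]), n_blocks)
    for _ in range(n_iter):
        for rows in blocks:                   # one multiplicative update per block
            ratio = b[rows] / (A[rows] @ x)
            x = x * np.exp(step * (A[rows].T @ np.log(ratio)))
    return x

print("reconstruction:", np.round(bi_mart(A, b), 3), "truth:", x_true)
```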

  8. Computer processing of the scintigraphic image using digital filtering techniques

    International Nuclear Information System (INIS)

    Matsuo, Michimasa

    1976-01-01

    The theory of digital filtering was studied as a method for the computer processing of scintigraphic images. The characteristics and design techniques of finite impulse response (FIR) digital filters with linear phase were examined using the z-transform. The conventional data processing method, smoothing, could be recognized as one kind of linear-phase FIR low-pass digital filtering. Ten representative FIR low-pass digital filters with various cut-off frequencies were scrutinized in the frequency domain in one and two dimensions. These filters were applied to phantom studies with cold targets, using a Scinticamera-Minicomputer on-line System. These studies revealed that the resultant images had a direct connection with the magnitude response of the filter, that is, they could be estimated fairly well from the frequency response of the digital filter used. The filter that was estimated from the phantom studies to be optimal for liver scintigrams using 198Au-colloid was successfully applied in clinical use for detecting true cold lesions and, at the same time, for eliminating spurious images. (J.P.N.)
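
    A modern equivalent of the linear-phase FIR low-pass smoothing described above is easy to sketch; the filter length, the cut-off frequency and the Poisson-noise toy image below are assumptions for illustration only, not the 1976 system.

```python
# Hedged sketch: a separable linear-phase FIR low-pass filter applied to a
# noisy toy "scintigram".
import numpy as np
from scipy.signal import firwin, convolve2d

rng = np.random.default_rng(0)
image = rng.poisson(lam=20, size=(64, 64)).astype(float)   # noisy count image

taps = firwin(numtaps=9, cutoff=0.25)     # 1-D linear-phase FIR low-pass design
kernel = np.outer(taps, taps)             # separable 2-D low-pass kernel

smoothed = convolve2d(image, kernel, mode="same", boundary="symm")
print("pixel std before/after:", round(float(image.std()), 2), round(float(smoothed.std()), 2))
```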

  9. A computational technique to measure fracture callus in radiographs.

    Science.gov (United States)

    Lujan, Trevor J; Madey, Steven M; Fitzpatrick, Dan C; Byrd, Gregory D; Sanderson, Jason M; Bottlang, Michael

    2010-03-03

    Callus formation occurs in the presence of secondary bone healing and has relevance to the fracture's mechanical environment. An objective image processing algorithm was developed to standardize the quantitative measurement of periosteal callus area in plain radiographs of long bone fractures. Algorithm accuracy and sensitivity were evaluated using surrogate models. For algorithm validation, callus formation on clinical radiographs was measured manually by orthopaedic surgeons and compared to non-clinicians using the algorithm. The algorithm measured the projected area of surrogate calluses with less than 5% error. However, error will increase when analyzing very small areas of callus and when using radiographs with low image resolution (i.e. 100 pixels per inch). The callus size extracted by the algorithm correlated well to the callus size outlined by the surgeons (R2=0.94, p<0.001). Furthermore, compared to clinician results, the algorithm yielded results with five times less inter-observer variance. This computational technique provides a reliable and efficient method to quantify secondary bone healing response. Copyright 2009 Elsevier Ltd. All rights reserved.

  10. Projection computation based on pixel in simultaneous algebraic reconstruction technique

    International Nuclear Information System (INIS)

    Wang Xu; Chen Zhiqiang; Xiong Hua; Zhang Li

    2005-01-01

    SART is an important algorithm for image reconstruction, in which the projection computation takes more than half of the reconstruction time. An efficient way to compute the projection coefficient matrix, together with memory optimization, is presented in this paper. Unlike the usual method, projection lines are located based on every pixel, and the subsequent projection coefficient computation can make use of these results. The correlation between projection lines and pixels can be used to optimize the computation. (authors)
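
    The record gives no formulas, but the flavour of a per-pixel projection computation can be sketched for a parallel-beam geometry as follows; the linear-interpolation weighting, the phantom and the detector size are illustrative assumptions rather than the authors' scheme.

```python
# Hedged sketch: pixel-driven forward projection, in which each pixel centre is
# mapped onto the detector and its value spread over the two nearest bins.
import numpy as np

def pixel_driven_projection(image, angles, n_bins):
    ny, nx = image.shape
    ys, xs = np.mgrid[0:ny, 0:nx]
    xs = xs - (nx - 1) / 2.0                  # pixel centres about the origin
    ys = ys - (ny - 1) / 2.0
    sino = np.zeros((len(angles), n_bins))
    centre = (n_bins - 1) / 2.0
    for k, theta in enumerate(angles):
        t = xs * np.cos(theta) + ys * np.sin(theta) + centre   # detector coordinate
        lo = np.floor(t).astype(int)
        w = t - lo                            # linear-interpolation weight
        for bins, weight in ((lo, 1.0 - w), (lo + 1, w)):
            valid = (bins >= 0) & (bins < n_bins)
            np.add.at(sino[k], bins[valid], (weight * image)[valid])
    return sino

phantom = np.zeros((32, 32))
phantom[12:20, 12:20] = 1.0
sino = pixel_driven_projection(phantom, np.linspace(0, np.pi, 45, endpoint=False), 47)
print("sinogram shape:", sino.shape, "max bin value:", round(float(sino.max()), 2))
```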

  11. Efficient technique for computational design of thermoelectric materials

    Science.gov (United States)

    Núñez-Valdez, Maribel; Allahyari, Zahed; Fan, Tao; Oganov, Artem R.

    2018-01-01

    Efficient thermoelectric materials are highly desirable, and the quest for finding them has intensified as they could be promising alternatives to fossil energy sources. Here we present a general first-principles approach to predict, in multicomponent systems, efficient thermoelectric compounds. The method combines a robust evolutionary algorithm, a Pareto multiobjective optimization, density functional theory and a Boltzmann semi-classical calculation of thermoelectric efficiency. To test the performance and reliability of our overall framework, we use the well-known system Bi2Te3-Sb2Te3.
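
    At its core, the Pareto multiobjective step mentioned above reduces to extracting non-dominated candidates; a small sketch of that filter is given below. The two objectives (for instance, negative power factor and lattice thermal conductivity, both to be minimized) and the random scores are placeholders, not outputs of the first-principles workflow.

```python
# Hedged sketch: keep the Pareto-optimal (non-dominated) candidates when both
# objective columns are to be minimized.
import numpy as np

rng = np.random.default_rng(2)
scores = rng.uniform(0.0, 1.0, size=(30, 2))   # made-up (objective1, objective2)

def pareto_front(points):
    """Indices of points not dominated by any other point."""
    keep = []
    for i, p in enumerate(points):
        dominated = np.any(np.all(points <= p, axis=1) & np.any(points < p, axis=1))
        if not dominated:
            keep.append(i)
    return keep

print("Pareto-optimal candidate indices:", pareto_front(scores))
```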

  12. The analysis of gastric function using computational techniques

    International Nuclear Information System (INIS)

    Young, Paul

    2002-01-01

    The work presented in this thesis was carried out at the Magnetic Resonance Centre, Department of Physics and Astronomy, University of Nottingham, between October 1996 and June 2000. This thesis describes the application of computerised techniques to the analysis of gastric function, in relation to Magnetic Resonance Imaging data. The implementation of a computer program enabling the measurement of motility in the lower stomach is described in Chapter 6. This method allowed the dimensional reduction of multi-slice image data sets into a 'Motility Plot', from which the motility parameters - the frequency, velocity and depth of contractions - could be measured. The technique was found to be simple, accurate and involved substantial time savings, when compared to manual analysis. The program was subsequently used in the measurement of motility in three separate studies, described in Chapter 7. In Study 1, four different meal types of varying viscosity and nutrient value were consumed by 12 volunteers. The aim of the study was (i) to assess the feasibility of using the motility program in a volunteer study and (ii) to determine the effects of the meals on motility. The results showed that the parameters were remarkably consistent between the 4 meals. However, for each meal, velocity and percentage occlusion were found to increase as contractions propagated along the antrum. The first clinical application of the motility program was carried out in Study 2. Motility from three patients was measured, after they had been referred to the Magnetic Resonance Centre with gastric problems. The results showed that one of the patients displayed an irregular motility, compared to the results of the volunteer study. This result had not been observed using other investigative techniques. In Study 3, motility was measured in Low Viscosity and High Viscosity liquid/solid meals, with the solid particulate consisting of agar beads of varying breakdown strength. The results showed that

  13. A review of computer-aided design/computer-aided manufacture techniques for removable denture fabrication

    Science.gov (United States)

    Bilgin, Mehmet Selim; Baytaroğlu, Ebru Nur; Erdem, Ali; Dilber, Erhan

    2016-01-01

    The aim of this review was to investigate the usage of computer-aided design/computer-aided manufacture (CAD/CAM), such as milling and rapid prototyping (RP) technologies, for removable denture fabrication. An electronic search was conducted in the PubMed/MEDLINE, ScienceDirect, Google Scholar, and Web of Science databases. Databases were searched from 1987 to 2014. The search was performed using a variety of keywords including CAD/CAM, complete/partial dentures, RP, rapid manufacturing, digitally designed, milled, computerized, and machined. The identified developments (in chronological order), techniques, advantages, and disadvantages of CAD/CAM and RP for removable denture fabrication are summarized. The initial keyword search returned 78 publications. The abstracts of these 78 articles were screened for relevance to the main topic, and 52 publications were selected for detailed reading. The full texts of these articles were obtained and examined in detail. In total, 40 articles that discussed the techniques, advantages, and disadvantages of CAD/CAM and RP for removable denture fabrication were incorporated in this review, and 16 of these papers are summarized in a table. Following review of all relevant publications, it can be concluded that current innovations and technological developments of CAD/CAM and RP allow the digital planning and manufacturing of removable dentures from start to finish. According to the literature reviewed, CAD/CAM techniques and supportive maxillomandibular relationship transfer devices are developing rapidly. In the near future, fabricating removable dentures may become largely a matter of medical informatics rather than of technical staff and manual procedures. However, the methods still have several limitations for now. PMID:27095912

  14. Computer vision techniques for rotorcraft low altitude flight

    Science.gov (United States)

    Sridhar, Banavar

    1990-01-01

    Rotorcraft operating in high-threat environments fly close to the earth's surface to utilize surrounding terrain, vegetation, or manmade objects to minimize the risk of being detected by an enemy. Increasing levels of concealment are achieved by adopting different tactics during low-altitude flight. Rotorcraft employ three tactics during low-altitude flight: low-level, contour, and nap-of-the-earth (NOE). The key feature distinguishing the NOE mode from the other two modes is that the whole rotorcraft, including the main rotor, is below tree-top level whenever possible. This leads to the use of lateral maneuvers for avoiding obstacles, which in fact constitutes the means for concealment. The piloting of the rotorcraft is at best a very demanding task, and the pilot will need help from onboard automation tools in order to devote more time to mission-related activities. The development of an automation tool which has the potential to detect obstacles in the rotorcraft flight path, warn the crew, and interact with the guidance system to avoid detected obstacles presents challenging problems. Research is described which applies techniques from computer vision to the automation of rotorcraft navigation. The effort emphasizes the development of a methodology for detecting the ranges to obstacles in the region of interest based on the maximum utilization of passive sensors. The range map derived from the obstacle-detection approach can be used as obstacle data for obstacle avoidance in an automatic guidance system and as an advisory display to the pilot. The lack of suitable flight imagery data presents a problem in the verification of concepts for obstacle detection. This problem is being addressed by the development of an adequate flight database and by preprocessing of currently available flight imagery. The presentation concludes with some comments on future work and how research in this area relates to the guidance of other autonomous vehicles.

  15. Computer technique for correction of nonhomogeneous distribution in radiologic images

    International Nuclear Information System (INIS)

    Florian, Rogerio V.; Frere, Annie F.; Schiable, Homero; Marques, Paulo M.A.; Marques, Marcio A.

    1996-01-01

    An image processing technique that compensates for the 'heel' effect in medical images is presented. The technique is reported to improve structure detection by making the background homogeneous, and it can be used with any radiologic system

  16. Neutron visual sensing techniques making good use of computer science

    International Nuclear Information System (INIS)

    Kureta, Masatoshi

    2009-01-01

    Neutron visual sensing is one of the nondestructive visualization and image-sensing techniques. In this article, some advanced neutron visual sensing techniques are introduced. The most up-to-date high-speed neutron radiography, neutron 3D CT, high-speed scanning neutron 3D/4D CT and multi-beam neutron 4D CT techniques are covered, together with some fundamental application results. Oil flow in a car engine was visualized by the high-speed neutron radiography technique to clarify previously unknown phenomena. 4D visualization of painted sand in a sand glass was reported as a demonstration of the high-speed scanning neutron 4D CT technique. The purpose of developing these techniques is to clarify unknown phenomena and to measure void fraction, velocity, etc., at high speed or in 3D/4D for many industrial applications. (author)

  17. [Development of computer aided forming techniques in manufacturing scaffolds for bone tissue engineering].

    Science.gov (United States)

    Wei, Xuelei; Dong, Fuhui

    2011-12-01

    To review recent advances in the research and application of computer-aided forming techniques for constructing bone tissue engineering scaffolds. The literature concerning computer-aided forming techniques for constructing bone tissue engineering scaffolds in recent years was reviewed extensively and summarized. Several studies over the last decade have focused on computer-aided forming techniques for bone scaffold construction using various scaffold materials, based on computer-aided design (CAD) and bone scaffold rapid prototyping (RP). CAD includes medical CAD, STL, and reverse design. Reverse design can fully simulate normal bone tissue and could be very useful for CAD. RP techniques include fused deposition modeling, three-dimensional printing, selective laser sintering, three-dimensional bioplotting, and low-temperature deposition manufacturing. These techniques provide a new way to construct bone tissue engineering scaffolds with complex internal structures. With the rapid development of molding and forming techniques, computer-aided forming techniques are expected to provide ideal bone tissue engineering scaffolds.

  18. Computation techniques and computer programs to analyze Stirling cycle engines using characteristic dynamic energy equations

    Science.gov (United States)

    Larson, V. H.

    1982-01-01

    The basic equations that are used to describe the physical phenomena in a Stirling cycle engine are the general energy equations and the equations for the conservation of mass and momentum. These equations, together with the equation of state, an analytical expression for the gas velocity, and an equation for mesh temperature, are used in this computer study of Stirling cycle characteristics. The partial differential equations describing the physical phenomena that occur in a Stirling cycle engine are of the hyperbolic type. The hyperbolic equations have real characteristic lines. By utilizing appropriate points along these curved lines, the partial differential equations can be reduced to ordinary differential equations. These equations are solved numerically using a fourth-fifth order Runge-Kutta integration technique.
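
    The final step, integrating the resulting ordinary differential equations with a fourth-fifth order Runge-Kutta scheme, is sketched below using SciPy's RK45 integrator on a toy three-state system; the dynamics, initial state and tolerances are illustrative assumptions, not the report's characteristic equations.

```python
# Hedged sketch: adaptive fourth-fifth order Runge-Kutta integration of a small
# ODE system (a toy piston/gas-temperature model, not the Stirling equations).
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y):
    x, v, T = y        # assumed toy state: position, velocity, gas temperature
    return [v, -x + 0.1 * (T - 1.0), -(T - 1.0) - 0.2 * v]

sol = solve_ivp(rhs, t_span=(0.0, 20.0), y0=[1.0, 0.0, 1.2],
                method="RK45", rtol=1e-6, atol=1e-9)
print("adaptive steps taken:", sol.t.size, "final state:", np.round(sol.y[:, -1], 4))
```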

  19. Analysis of Piezoelectric Structural Sensors with Emergent Computing Techniques

    Science.gov (United States)

    Ramers, Douglas L.

    2005-01-01

    The purpose of this project was to try to interpret the results of some tests that were performed earlier this year and to demonstrate a possible use of emergence in computing to solve IVHM problems. The test data used were collected with piezoelectric sensors to detect mechanical changes in structures. The project team included Dr. Doug Ramers and Dr. Abdul Jallob of the Summer Faculty Fellowship Program, Arnaldo Colon-Lopez, a student intern from the University of Puerto Rico of Turabo, and John Lassister and Bob Engberg of the Structural and Dynamics Test Group. The tests were performed by Bob Engberg to compare the performance of two types of piezoelectric (piezo) sensors, Pb(Zr(sub 1-x)Ti(sub x))O3, which we will label PZT, and Pb(Zn(sub 1/3)Nb(sub 2/3))O3-PbTiO3, which we will label SCP. The tests were conducted under varying temperature and pressure conditions. One set of tests was done by varying water pressure inside an aluminum liner covered with carbon-fiber composite layers (a cylindrical "bottle" with domed ends) and the other by varying temperatures down to cryogenic levels on some specially prepared composite panels. This report discusses the data from the pressure study. The study of the temperature results was not completed in time for this report. The particular sensing done with these piezo sensors is accomplished by the sensor generating a controlled vibration that is transmitted into the structure to which the sensor is attached, and the same sensor then responding to the induced vibration of the structure. There is a relationship between the mechanical impedance of the structure and the resulting electrical impedance produced in the piezo sensor. The impedance is also a function of the excitation frequency. Changes in the real part of the impedance signature relative to an original reference signature indicate a change in the coupled structure that could be the result of damage or strain. The water pressure tests were conducted by

  20. Statistical analysis and definition of blockages-prediction formulae for the wastewater network of Oslo by evolutionary computing.

    Science.gov (United States)

    Ugarelli, Rita; Kristensen, Stig Morten; Røstum, Jon; Saegrov, Sveinung; Di Federico, Vittorio

    2009-01-01

    Oslo Vann og Avløpsetaten (Oslo VAV), the water/wastewater utility in the Norwegian capital city of Oslo, is assessing future strategies for the selection of the most reliable materials for wastewater networks, taking into account not only the technical performance of materials but also their performance with regard to the operational condition of the system. The research project undertaken by the SINTEF Group, the largest research organisation in Scandinavia, NTNU (Norges Teknisk-Naturvitenskapelige Universitet) and Oslo VAV adopts several approaches to understand the reasons for failures that may impact flow capacity, by analysing historical data for blockages in Oslo. The aim of the study was to understand whether there is a relationship between the performance of the pipeline and a number of specific attributes such as age, material, and diameter, to name a few. This paper presents the characteristics of the available data set and discusses the results obtained by two different approaches: a traditional statistical analysis, in which the pipes are segregated into classes, each with the same explanatory variables, and an Evolutionary Polynomial Regression (EPR) model, developed by the Technical University of Bari and the University of Exeter, used to identify the possible influence of pipe attributes on the total number of predicted blockages in a period of time. Starting from a detailed analysis of the available data for the blockage events, the most important variables are identified and a classification scheme is adopted. From the statistical analysis, it can be stated that age, size and function do seem to have a marked influence on the proneness of a pipeline to blockages, but, for the reduced sample available, it is difficult to say which variable is the most influential. If we look at the total number of blockages, the oldest class seems to be the most prone to blockages; but looking at blockage rates (number of blockages per km per year), it is the youngest class that shows the highest blockage rate

  1. Computer-assisted techniques to evaluate fringe patterns

    Science.gov (United States)

    Sciammarella, Cesar A.; Bhat, Gopalakrishna K.

    1992-01-01

    Strain measurement using interferometry requires an efficient way to extract the desired information from interferometric fringes. The availability of digital image processing systems makes it possible to use digital techniques for the analysis of fringes. In the past, there have been several developments in the area of one-dimensional and two-dimensional fringe analysis techniques, including the carrier fringe method (spatial heterodyning) and the phase stepping (quasi-heterodyning) technique. This paper presents some new developments in the area of two-dimensional fringe analysis, including a phase stepping technique supplemented by the carrier fringe method and a two-dimensional Fourier transform method to obtain the strain directly from the discontinuous phase contour map.
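
    The two-dimensional Fourier-transform route mentioned above can be sketched on a synthetic carrier-fringe pattern: isolate the carrier lobe in the spectrum, inverse-transform, and take the phase. The fringe count, the filter width and the Gaussian test phase below are illustrative assumptions, not the authors' processing chain.

```python
# Hedged sketch: 2-D Fourier-transform fringe analysis of a synthetic
# carrier-fringe interferogram.
import numpy as np

N = 128
y, x = np.mgrid[0:N, 0:N]
phase_true = 3.0 * np.exp(-((x - N / 2) ** 2 + (y - N / 2) ** 2) / (2 * 20.0 ** 2))
carrier = 2 * np.pi * 10 * x / N                      # 10 carrier fringes across the field
pattern = 1.0 + np.cos(carrier + phase_true)          # synthetic interferogram

spectrum = np.fft.fftshift(np.fft.fft2(pattern))
cx = N // 2 + 10                                      # centre column of the carrier lobe
window = np.zeros_like(spectrum)
window[:, cx - 5:cx + 6] = 1.0                        # crude band-pass around the lobe
filtered = np.fft.ifft2(np.fft.ifftshift(spectrum * window))

wrapped = np.angle(filtered * np.exp(-1j * carrier))  # remove carrier, keep object phase
print("recovered peak phase:", round(float(wrapped.max()), 2), "rad (true peak 3.0 rad)")
```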

  2. Large-scale computing techniques for complex system simulations

    CERN Document Server

    Dubitzky, Werner; Schott, Bernard

    2012-01-01

    Complex systems modeling and simulation approaches are being adopted in a growing number of sectors, including finance, economics, biology, astronomy, and many more. Technologies ranging from distributed computing to specialized hardware are explored and developed to address the computational requirements arising in complex systems simulations. The aim of this book is to present a representative overview of contemporary large-scale computing technologies in the context of complex systems simulations applications. The intention is to identify new research directions in this field and

  3. Seismic activity prediction using computational intelligence techniques in northern Pakistan

    Science.gov (United States)

    Asim, Khawaja M.; Awais, Muhammad; Martínez-Álvarez, F.; Iqbal, Talat

    2017-10-01

    Earthquake prediction study is carried out for the region of northern Pakistan. The prediction methodology includes an interdisciplinary interaction of seismology and computational intelligence. Eight seismic parameters are computed based upon past earthquakes. The predictive ability of these eight seismic parameters is evaluated in terms of information gain, which leads to the selection of six parameters to be used in prediction. Multiple computationally intelligent models have been developed for earthquake prediction using the selected seismic parameters. These models include a feed-forward neural network, a recurrent neural network, random forest, a multilayer perceptron, a radial basis neural network, and a support vector machine. The performance of every prediction model is evaluated, and McNemar's statistical test is applied to assess the statistical significance of the computational methodologies. The feed-forward neural network shows statistically significant predictions, with an accuracy of 75% and a positive predictive value of 78%, in the context of northern Pakistan.
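
    The workflow in the abstract (rank parameters by information gain, keep the most informative ones, train a feed-forward network) can be sketched with scikit-learn; the synthetic data, the number of samples and the network size below are assumptions for illustration, not the authors' catalogue or model.

```python
# Hedged sketch: mutual-information feature ranking followed by a small
# feed-forward (MLP) classifier on synthetic "seismic parameters".
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))                          # 8 made-up seismic parameters
y = (X[:, 0] + 0.5 * X[:, 2] - X[:, 5] + rng.normal(0, 0.5, 500) > 0).astype(int)

gain = mutual_info_classif(X, y, random_state=0)       # information-gain proxy
top6 = np.argsort(gain)[-6:]                           # keep the six most informative
X_sel = X[:, top6]

X_tr, X_te, y_tr, y_te = train_test_split(X_sel, y, test_size=0.3, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0).fit(X_tr, y_tr)
print("selected parameters:", sorted(top6.tolist()),
      "test accuracy:", round(clf.score(X_te, y_te), 2))
```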

  4. Microwave integrated circuit mask design, using computer aided microfilm techniques

    Energy Technology Data Exchange (ETDEWEB)

    Reymond, J.M.; Batliwala, E.R.; Ajose, S.O.

    1977-01-01

    This paper examines the possibility of using a computer interfaced with a precision film C.R.T. information retrieval system, to produce photomasks suitable for the production of microwave integrated circuits.

  5. Advanced computer graphics techniques as applied to the nuclear industry

    International Nuclear Information System (INIS)

    Thomas, J.J.; Koontz, A.S.

    1985-08-01

    Computer graphics is a rapidly advancing technological area in computer science. This is being motivated by increased hardware capability coupled with reduced hardware costs. This paper will cover six topics in computer graphics, with examples forecasting how each of these capabilities could be used in the nuclear industry. These topics are: (1) Image Realism with Surfaces and Transparency; (2) Computer Graphics Motion; (3) Graphics Resolution Issues and Examples; (4) Iconic Interaction; (5) Graphic Workstations; and (6) Data Fusion - illustrating data coming from numerous sources, for display through high dimensional, greater than 3-D, graphics. All topics will be discussed using extensive examples with slides, video tapes, and movies. Illustrations have been omitted from the paper due to the complexity of color reproduction. 11 refs., 2 figs., 3 tabs

  6. Application of computer technique in the reconstruction of Chinese ancient buildings

    Science.gov (United States)

    Li, Deren; Yang, Jie; Zhu, Yixuan

    2003-01-01

    This paper offers an introduction to the computer-based assembly and simulation of ancient buildings. Pioneering research work was carried out by surveying and mapping investigators, who described ancient Chinese timber buildings with 3D frame graphs produced by computers. However, users can only appreciate the structural layers and the assembly process of these buildings if the frame graphs are processed further by computer. This can be implemented with computer simulation techniques, which display the raw data on the screen of a computer and manage them interactively by combining technologies from computer graphics and image processing, multimedia technology, artificial intelligence, highly parallel real-time computation and human behavior science. This paper presents the implementation procedure of the simulation of large-sized wooden buildings, as well as the 3D dynamic assembly of these buildings, in the 3DS MAX environment. The results of the computer simulation are also shown in the paper.

  7. New technique for determining unavailability of computer controlled safety systems

    International Nuclear Information System (INIS)

    Fryer, M.O.; Bruske, S.Z.

    1984-04-01

    The availability of a safety system for a fusion reactor is determined. A fusion reactor processes tritium and requires an Emergency Tritium Cleanup (ETC) system for accidental tritium releases. The ETC is computer controlled and, because of its complexity, is an excellent candidate for this analysis. The ETC system unavailability, for preliminary untested software, is calculated based on different assumptions about operator response. These assumptions are: (a) the operator shuts down the system after the first indication of plant failure; (b) the operator shuts down the system after following optimized failure verification procedures; or (c) the operator is taken out of the decision process, and the computer uses the optimized failure verification procedures

  8. Security Techniques for protecting data in Cloud Computing

    OpenAIRE

    Maddineni, Venkata Sravan Kumar; Ragi, Shivashanker

    2012-01-01

    Context: In the past few years, there has been rapid progress in Cloud Computing. With an increasing number of companies resorting to using resources in the Cloud, there is a necessity for protecting the data of various users using centralized resources. Some major challenges being faced by Cloud Computing are to secure, protect and process the data which is the property of the user. Aims and Objectives: The main aim of this research is to understand the security threats and ident...

  9. Applications of NLP Techniques to Computer-Assisted Authoring of Test Items for Elementary Chinese

    Science.gov (United States)

    Liu, Chao-Lin; Lin, Jen-Hsiang; Wang, Yu-Chun

    2010-01-01

    The authors report an implemented environment for computer-assisted authoring of test items and provide a brief discussion about the applications of NLP techniques for computer assisted language learning. Test items can serve as a tool for language learners to examine their competence in the target language. The authors apply techniques for…

  10. Computational techniques in tribology and material science at the atomic level

    Science.gov (United States)

    Ferrante, J.; Bozzolo, G. H.

    1992-01-01

    Computations in tribology and material science at the atomic level present considerable difficulties. Computational techniques ranging from first-principles to semi-empirical, and their limitations, are discussed. Example calculations of metallic surface energies using semi-empirical techniques are presented. Finally, the application of the methods to the calculation of adhesion and friction is presented.

  11. Software Engineering Techniques for Computer-Aided Learning.

    Science.gov (United States)

    Ibrahim, Bertrand

    1989-01-01

    Describes the process for developing tutorials for computer-aided learning (CAL) using a programming language rather than an authoring system. The workstation used is described, the use of graphics is discussed, the role of a local area network (LAN) is explained, and future plans are discussed. (five references) (LRW)

  12. The development of a computer technique for the investigation of reactor lattice parameters

    International Nuclear Information System (INIS)

    Joubert, W.R.

    1982-01-01

    An integrated computer technique was developed whereby all the computer programmes needed to calculate reactor lattice parameters from basic neutron data could be combined in one system. The theory of the computer programmes is explained in detail. Results are given and compared with experimental values as well as with those calculated with a standard system

  13. The development of computer industry and applications of its relevant techniques in nuclear research laboratories

    International Nuclear Information System (INIS)

    Dai Guiliang

    1988-01-01

    The increasing needs for computers in the area of nuclear science and technology are described. The current status of commercially available computer products of different scales on the world market is briefly reviewed. A survey of some notable techniques is given from the viewpoint of computer applications in nuclear science research laboratories

  14. Evolutionary Nephrology.

    Science.gov (United States)

    Chevalier, Robert L

    2017-05-01

    Progressive kidney disease follows nephron loss, hyperfiltration, and incomplete repair, a process described as "maladaptive." In the past 20 years, a new discipline has emerged that expands research horizons: evolutionary medicine. In contrast to physiologic (homeostatic) adaptation, evolutionary adaptation is the result of reproductive success that reflects natural selection. Evolutionary explanations for physiologically maladaptive responses can emerge from mismatch of the phenotype with environment or evolutionary tradeoffs. Evolutionary adaptation to a terrestrial environment resulted in a vulnerable energy-consuming renal tubule and a hypoxic, hyperosmolar microenvironment. Natural selection favors successful energy investment strategy: energy is allocated to maintenance of nephron integrity through reproductive years, but this declines with increasing senescence after ~40 years of age. Risk factors for chronic kidney disease include restricted fetal growth or preterm birth (life history tradeoff resulting in fewer nephrons), evolutionary selection for APOL1 mutations (that provide resistance to trypanosome infection, a tradeoff), and modern life experience (Western diet mismatch leading to diabetes and hypertension). Current advances in genomics, epigenetics, and developmental biology have revealed proximate causes of kidney disease, but attempts to slow kidney disease remain elusive. Evolutionary medicine provides a complementary approach by addressing ultimate causes of kidney disease. Marked variation in nephron number at birth, nephron heterogeneity, and changing susceptibility to kidney injury throughout life history are the result of evolutionary processes. Combined application of molecular genetics, evolutionary developmental biology (evo-devo), developmental programming and life history theory may yield new strategies for prevention and treatment of chronic kidney disease.

  15. The practical use of computer graphics techniques for site characterization

    International Nuclear Information System (INIS)

    Tencer, B.; Newell, J.C.

    1982-01-01

    In this paper the authors describe the approach utilized by Roy F. Weston, Inc. (WESTON) to analyze and characterize data relative to a specific site, and the computerized graphical techniques developed to display site characterization data. These techniques reduce massive amounts of tabular data to a limited number of graphics easily understood by both the public and policy-level decision makers. First, they describe the general design of the system; then the application of this system to a low-level rad site, followed by a description of an application to an uncontrolled hazardous waste site

  16. Application of computational intelligence techniques for load shedding in power systems: A review

    International Nuclear Information System (INIS)

    Laghari, J.A.; Mokhlis, H.; Bakar, A.H.A.; Mohamad, Hasmaini

    2013-01-01

    Highlights: • The power system blackout history of the last two decades is presented. • Conventional load shedding techniques, their types and limitations are presented. • Applications of intelligent techniques in load shedding are presented. • Intelligent techniques include ANN, fuzzy logic, ANFIS, genetic algorithm and PSO. • The discussion and comparison between these techniques are provided. - Abstract: Recent blackouts around the world question the reliability of conventional and adaptive load shedding techniques in avoiding such power outages. To address this issue, reliable techniques are required to provide fast and accurate load shedding to prevent collapse in the power system. Computational intelligence techniques, due to their robustness and flexibility in dealing with complex non-linear systems, could be an option in addressing this problem. Computational intelligence includes techniques like artificial neural networks, genetic algorithms, fuzzy logic control, adaptive neuro-fuzzy inference system, and particle swarm optimization. Research in these techniques is being undertaken in order to discover means for more efficient and reliable load shedding. This paper provides an overview of these techniques as applied to load shedding in a power system. This paper also compares the advantages of computational intelligence techniques over conventional load shedding techniques. Finally, this paper discusses the limitation of computational intelligence techniques, which restricts their usage in load shedding in real time

  17. A technique for computing bowing reactivity feedback in LMFBR's

    International Nuclear Information System (INIS)

    Finck, P.J.

    1987-01-01

    During normal or accidental transients occurring in an LMFBR core, the assemblies and their support structure are subjected to significant thermal gradients which induce differential thermal expansion of the walls of the hexcans and differential displacement of the assembly support structure. These displacements, combined with the creep and swelling of structural materials, remain quite small, but the resulting reactivity changes constitute a significant component of the reactivity feedback coefficients used in safety analyses. It would be prohibitive to compute the reactivity changes due to all transients. Thus, the usual practice is to generate reactivity gradient tables. The purpose of the work presented here is twofold: to develop and validate an efficient and accurate scheme for computing these reactivity tables, and to qualify this scheme

  18. Low Power system Design techniques for mobile computers

    NARCIS (Netherlands)

    Havinga, Paul J.M.; Smit, Gerardus Johannes Maria

    1997-01-01

    Portable products are being used increasingly. Because these systems are battery powered, reducing power consumption is vital. In this report we give the properties of low power design and techniques to exploit them on the architecture of the system. We focus on: minimizing capacitance, avoiding

  19. Optimizing Nuclear Reactor Operation Using Soft Computing Techniques

    NARCIS (Netherlands)

    Entzinger, J.O.; Ruan, D.; Kahraman, Cengiz

    2006-01-01

    The strict safety regulations for nuclear reactor control make it difficult to implement new control techniques such as fuzzy logic control (FLC). FLC, however, can provide very desirable advantages over classical control, like robustness, adaptation and the capability to include human experience into

  20. An appraisal of computational techniques for transient heat conduction equation

    International Nuclear Information System (INIS)

    Kant, T.

    1983-01-01

    A semi-discretization procedure in which the "space" dimension is discretized by the finite element method is emphasized for transient problems. This standard methodology transforms the space-time partial differential equation (PDE) system into a set of ordinary differential equations (ODE) in time. Existing methods for transient heat conduction calculations are then reviewed. The existence of two general classes of time integration schemes, implicit and explicit, is noted. The numerical stability characteristics of these two classes are elucidated. Implicit methods are noted to be numerically stable, permitting large time steps, but the cost per step is high. On the other hand, explicit schemes are inexpensive per step, but a small step size is required. The low computational cost of explicit schemes makes them very attractive for nonlinear problems. However, numerical stability considerations requiring the use of very small time steps stand in the way of their general adoption. The effectiveness of the fourth-order Runge-Kutta-Gill explicit integrator is then numerically evaluated. Finally, we discuss some very recent work on the development of computational algorithms which not only achieve unconditional stability, high accuracy and convergence but involve computations on matrix equations of elements only. This development is considered to be very significant in the light of our experience gained with simple heat conduction calculations. We conclude that such algorithms have the potential for further development leading to economical methods for the general transient analysis of complex physical systems. (orig.)
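
    The stability limit driving the implicit/explicit trade-off discussed above is easy to demonstrate: for one-dimensional conduction, an explicit (FTCS) step is stable only when dt <= dx**2/(2*alpha). The sketch below uses assumed material and grid values purely for illustration.

```python
# Hedged sketch: explicit FTCS time stepping for 1-D transient conduction,
# with the time step chosen at half the explicit stability limit.
import numpy as np

alpha, L, nx = 1e-4, 1.0, 51                 # assumed diffusivity [m^2/s], length [m], grid
dx = L / (nx - 1)
dt = 0.5 * dx**2 / (2 * alpha)               # half the stability limit dx^2 / (2*alpha)

T = np.zeros(nx)
T[0] = 100.0                                 # fixed temperature at the left boundary
for _ in range(2000):
    T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
    T[-1] = T[-2]                            # insulated right end

print("time simulated:", round(2000 * dt, 1), "s; mid-plane temperature:", round(T[nx // 2], 2))
```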

  1. Computational techniques in gamma-ray skyshine analysis

    International Nuclear Information System (INIS)

    George, D.L.

    1988-12-01

    Two computer codes were developed to analyze gamma-ray skyshine, the scattering of gamma photons by air molecules. A review of previous gamma-ray skyshine studies discusses several Monte Carlo codes, programs using a single-scatter model, and the MicroSkyshine program for microcomputers. A benchmark gamma-ray skyshine experiment performed at Kansas State University is also described. A single-scatter numerical model was presented which traces photons from the source to their first scatter, then applies a buildup factor along a direct path from the scattering point to a detector. The FORTRAN code SKY, developed with this model before the present study, was modified to use Gauss quadrature, recent photon attenuation data and a more accurate buildup approximation. The resulting code, SILOGP, computes response from a point photon source on the axis of a silo, with and without concrete shielding over the opening. Another program, WALLGP, was developed using the same model to compute response from a point gamma source behind a perfectly absorbing wall, with and without shielding overhead. 29 refs., 48 figs., 13 tabs
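
    As a rough illustration of the Gauss-quadrature machinery the record mentions (not the SILOGP or WALLGP codes themselves), the sketch below evaluates a single-scatter-style angular integral with Gauss-Legendre nodes; the attenuation coefficient, the path geometry and the crude linear buildup factor are invented for the example.

```python
# Hedged sketch: Gauss-Legendre quadrature of an attenuation-times-buildup
# integrand over an assumed emission cone.
import numpy as np

mu = 0.0075                                   # assumed air attenuation coefficient [1/m]

def integrand(theta):
    path = 200.0 / np.cos(theta)              # assumed slant path length [m]
    buildup = 1.0 + mu * path                 # crude linear buildup factor
    return buildup * np.exp(-mu * path) * np.sin(theta)

# map Gauss-Legendre nodes from [-1, 1] to the cone [0, 60 degrees]
nodes, weights = np.polynomial.legendre.leggauss(16)
a, b = 0.0, np.radians(60.0)
theta = 0.5 * (b - a) * nodes + 0.5 * (b + a)
integral = 0.5 * (b - a) * np.sum(weights * integrand(theta))
print("relative detector response (arbitrary units):", round(float(integral), 5))
```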

  2. Virtual reality in medicine-computer graphics and interaction techniques.

    Science.gov (United States)

    Haubner, M; Krapichler, C; Lösch, A; Englmeier, K H; van Eimeren, W

    1997-03-01

    This paper describes several new visualization and interaction techniques that enable the use of virtual environments for routine medical purposes. A new volume-rendering method supports shaded and transparent visualization of medical image sequences in real-time with an interactive threshold definition. Based on these rendering algorithms two complementary segmentation approaches offer an intuitive assistance for a wide range of requirements in diagnosis and therapy planning. In addition, a hierarchical data representation for geometric surface descriptions guarantees an optimal use of available hardware resources and prevents inaccurate visualization. The combination of the presented techniques empowers the improved human-machine interface of virtual reality to support every interactive task in medical three-dimensional (3-D) image processing, from visualization of unsegmented data volumes up to the simulation of surgical procedures.

  3. Securing the Cloud Cloud Computer Security Techniques and Tactics

    CERN Document Server

    Winkler, Vic (JR)

    2011-01-01

    As companies turn to cloud computing technology to streamline and save money, security is a fundamental concern. Loss of certain control and lack of trust make this transition difficult unless you know how to handle it. Securing the Cloud discusses making the move to the cloud while securing your piece of it! The cloud offers flexibility, adaptability, scalability, and, in the case of security, resilience. This book details the strengths and weaknesses of securing your company's information with different cloud approaches. Attacks can focus on your infrastructure, communications network, data, o

  4. Techniques for animation of CFD results. [computational fluid dynamics

    Science.gov (United States)

    Horowitz, Jay; Hanson, Jeffery C.

    1992-01-01

    Video animation is becoming increasingly vital to the computational fluid dynamics researcher, not just for presentation, but for recording and comparing dynamic visualizations that are beyond the current capabilities of even the most powerful graphic workstation. To meet these needs, Lewis Research Center has recently established a facility to provide users with easy access to advanced video animation capabilities. However, producing animation that is both visually effective and scientifically accurate involves various technological and aesthetic considerations that must be understood both by the researcher and those supporting the visualization process. These considerations include: scan conversion, color conversion, and spatial ambiguities.

  5. Computational techniques used in the development of coprocessing flowsheets

    International Nuclear Information System (INIS)

    Groenier, W.S.; Mitchell, A.D.; Jubin, R.T.

    1979-01-01

    The computer program SEPHIS, developed to aid in determining optimum solvent extraction conditions for the reprocessing of nuclear power reactor fuels by the Purex method, is described. The program employs a combination of approximate mathematical equilibrium expressions and a transient, stagewise-process calculational method to allow stage and product-stream concentrations to be predicted with accuracy and reliability. The possible applications to inventory control for nuclear material safeguards, nuclear criticality analysis, and process analysis and control are of special interest. The method is also applicable to other countercurrent liquid-liquid solvent extraction processes having known chemical kinetics that may involve multiple solutes and are performed in conventional contacting equipment

  6. A textbook of computer based numerical and statistical techniques

    CERN Document Server

    Jaiswal, AK

    2009-01-01

    About the Book: Application of Numerical Analysis has become an integral part of the life of all modern engineers and scientists. The contents of this book cover both introductory topics and more advanced topics such as partial differential equations. This book is different from many other books in a number of ways. Salient Features: Mathematical derivation of each method is given to build the students' understanding of numerical analysis. A variety of solved examples are given. Computer programs for almost all numerical methods discussed have been presented in `C` langu

  7. An Efficient Computational Technique for Fractal Vehicular Traffic Flow

    Directory of Open Access Journals (Sweden)

    Devendra Kumar

    2018-04-01

    Full Text Available In this work, we examine a fractal vehicular traffic flow problem. The partial differential equations describing a fractal vehicular traffic flow are solved with the aid of the local fractional homotopy perturbation Sumudu transform scheme and the local fractional reduced differential transform method. Some illustrative examples are taken to describe the success of the suggested techniques. The results derived with the aid of the suggested schemes reveal that the present schemes are very efficient for obtaining the non-differentiable solution to the fractal vehicular traffic flow problem.

  8. Techniques for grid manipulation and adaptation. [computational fluid dynamics

    Science.gov (United States)

    Choo, Yung K.; Eisemann, Peter R.; Lee, Ki D.

    1992-01-01

    Two approaches have been taken to provide systematic grid manipulation for improved grid quality. One is the control point form (CPF) of algebraic grid generation. It provides explicit control of the physical grid shape and grid spacing through the movement of the control points. It works well in the interactive computer graphics environment and hence can be a good candidate for integration with other emerging technologies. The other approach is grid adaptation using a numerical mapping between the physical space and a parametric space. Grid adaptation is achieved by modifying the mapping functions through the effects of grid control sources. The adaptation process can be repeated in a cyclic manner if satisfactory results are not achieved after a single application.
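
    The control point form itself is not reproduced here, but the following sketch shows the simpler algebraic construction it builds on: transfinite interpolation, where interior grid points are blended from four boundary curves. The channel geometry and grid dimensions are invented for the example; adding movable interior control points, as CPF does, would modify the blending functions.

```python
import numpy as np

def transfinite_grid(bottom, top, left, right):
    """Algebraic (transfinite) interpolation of interior grid points
    from four boundary curves, each given as an (n, 2) array of x,y points."""
    ni, nj = bottom.shape[0], left.shape[0]
    u = np.linspace(0.0, 1.0, ni)[:, None]   # parameter along i (bottom/top)
    v = np.linspace(0.0, 1.0, nj)[None, :]   # parameter along j (left/right)
    grid = np.zeros((ni, nj, 2))
    for d in range(2):
        grid[:, :, d] = ((1 - v) * bottom[:, None, d] + v * top[:, None, d]
                         + (1 - u) * left[None, :, d] + u * right[None, :, d]
                         - (1 - u) * (1 - v) * bottom[0, d] - u * v * top[-1, d]
                         - u * (1 - v) * bottom[-1, d] - (1 - u) * v * top[0, d])
    return grid

# Example: a channel whose lower wall has a gentle bump.
ni, nj = 41, 21
s = np.linspace(0.0, 1.0, ni)
bottom = np.stack([s, 0.1 * np.exp(-50 * (s - 0.5) ** 2)], axis=1)
top    = np.stack([s, np.ones_like(s)], axis=1)
t = np.linspace(0.0, 1.0, nj)
left   = np.stack([np.zeros_like(t), t], axis=1)
right  = np.stack([np.ones_like(t), t], axis=1)

grid = transfinite_grid(bottom, top, left, right)
print(grid.shape)  # (41, 21, 2) array of x,y grid-point coordinates
```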

  9. Computer tomography as a diagnostic technique in psychiatry

    Energy Technology Data Exchange (ETDEWEB)

    Strobl, G.; Reisner, T.; Zeiler, K. (Vienna Univ. (Austria). Psychiatrische Klinik; Vienna Univ. (Austria). Neurologische Klinik)

    1980-01-01

    CT findings in 516 hospitalized psychiatric patients are presented. The patients were classified in 9 groups according to a modified ICD classification, and the type and incidence of pathological findings - almost exclusively degenerative processes of the brain - were registered. Diffuse cerebral atrophies are most frequent in the groups alcoholism and alcohol psychoses (44.0%) and psychoses and mental disturbances accompanying physical diseases. In schizophrenics (almost exclusively residual and defect states) and in patients with affective psychosis, diffuse cerebral atrophies are much less frequent (11.3% and 9.2%) than stated in earlier publications. Neurosis, changes in personality, or abnormal behaviour are hardly ever accompanied by cerebral atrophy. Problems encountered in the attempt to establish objective criteria for a diagnosis of cerebral atrophy on the basis of CT pictures are discussed. The computed tomograph does not permit conclusions on the etiology of diffuse atrophic processes.

  10. Modeling Techniques for a Computational Efficient Dynamic Turbofan Engine Model

    Directory of Open Access Journals (Sweden)

    Rory A. Roberts

    2014-01-01

    Full Text Available A transient two-stream engine model has been developed. Individual component models developed exclusively in MATLAB/Simulink, including the fan, high pressure compressor, combustor, high pressure turbine, low pressure turbine, plenum volumes, and exit nozzle, have been combined to investigate the behavior of a turbofan two-stream engine. Special attention has been paid to the development of transient capabilities throughout the model, increasing the fidelity of the physics model, eliminating algebraic constraints, and reducing simulation time by enabling the use of advanced numerical solvers. Reducing computation time is paramount for conducting future aircraft system-level design trade studies and optimization. The new engine model is simulated for a fuel perturbation and a specified mission while tracking critical parameters. These results, as well as the simulation times, are presented. The new approach significantly reduces the simulation time.

  11. Computer tomography as a diagnostic technique in psychiatry

    International Nuclear Information System (INIS)

    Strobl, G.; Reisner, T.; Zeiler, K.; Vienna Univ.

    1980-01-01

    CT findings in 516 hospitalized psychiatric patients are presented. The patients were classified in 9 groups according to a modified ICD classification, and the type and incidence of pathological findings - almost exclusively degenerative processes of the brain - were registered. Diffuse cerebral atrophies are most frequent in the groups alcoholism and alcohol psychoses (44.0%) and psychoses and mental disturbances accompanying physical diseases. In schizophrenics (almost exclusively residual and defect states) and in patients with affective psychosis, diffuse cerebral atrophies are much less frequent (11.3% and 9.2%) than stated in earlier publications. Neurosis, changes in personality, or abnormal behaviour are hardly ever accompanied by cerebral atrophy. Problems encountered in the attempt to establish objective criteria for a diagnosis of cerebral atrophy on the basis of CT pictures are discussed. The computed tomograph does not permit conclusions on the etiology of diffuse atrophic processes. (orig.) [de

  12. Cloud Computing-An Ultimate Technique to Minimize Computing cost for Developing Countries

    OpenAIRE

    Narendra Kumar; Shikha Jain

    2012-01-01

    The presented paper deals with how remotely managed computing and IT resources can be beneficial in developing countries like India and other countries of the Asian subcontinent. This paper not only defines the architectures and functionalities of cloud computing but also strongly indicates the current demand for cloud computing to achieve organizational and personal levels of IT support at very minimal cost with high-class flexibility. The power of the cloud can be used to reduce the cost of IT - r...

  13. 3D computational mechanics elucidate the evolutionary implications of orbit position and size diversity of early amphibians.

    Directory of Open Access Journals (Sweden)

    Jordi Marcé-Nogué

    Full Text Available For the first time in vertebrate palaeontology, the potential of joining Finite Element Analysis (FEA) and Parametrical Analysis (PA) is used to shed new light on two different cranial parameters from the orbits to evaluate their biomechanical role and evolutionary patterns. The early tetrapod group of Stereospondyls, one of the largest groups of Temnospondyls, is used as a case study because orbit position and size vary hugely among the members of this group. An adult skull of Edingerella madagascariensis was analysed using two different cases of boundary and loading conditions in order to quantify stress and deformation response under a bilateral bite and during skull raising. Firstly, the variation of the original geometry of its orbits was introduced in the models producing new FEA results, allowing the exploration of the ecomorphology, feeding strategy and evolutionary patterns of these top predators. Secondly, the quantitative results were analysed in order to check if the orbit size and position were correlated with different stress patterns. These results revealed that in most of the cases the stress distribution is not affected by changes in the size and position of the orbit. This finding supports the high mechanical plasticity of this group during the Triassic period. The absence of mechanical constraints regarding the orbit probably promoted the ecomorphological diversity acknowledged for this group, as well as its ecological niche differentiation in the terrestrial Triassic ecosystems in clades such as lydekkerinids, trematosaurs, capitosaurs or metoposaurs.

  14. 3D Computational Mechanics Elucidate the Evolutionary Implications of Orbit Position and Size Diversity of Early Amphibians

    Science.gov (United States)

    Marcé-Nogué, Jordi; Fortuny, Josep; De Esteban-Trivigno, Soledad; Sánchez, Montserrat; Gil, Lluís; Galobart, Àngel

    2015-01-01

    For the first time in vertebrate palaeontology, the potential of joining Finite Element Analysis (FEA) and Parametrical Analysis (PA) is used to shed new light on two different cranial parameters from the orbits to evaluate their biomechanical role and evolutionary patterns. The early tetrapod group of Stereospondyls, one of the largest groups of Temnospondyls, is used as a case study because orbit position and size vary hugely among the members of this group. An adult skull of Edingerella madagascariensis was analysed using two different cases of boundary and loading conditions in order to quantify stress and deformation response under a bilateral bite and during skull raising. Firstly, the variation of the original geometry of its orbits was introduced in the models producing new FEA results, allowing the exploration of the ecomorphology, feeding strategy and evolutionary patterns of these top predators. Secondly, the quantitative results were analysed in order to check if the orbit size and position were correlated with different stress patterns. These results revealed that in most of the cases the stress distribution is not affected by changes in the size and position of the orbit. This finding supports the high mechanical plasticity of this group during the Triassic period. The absence of mechanical constraints regarding the orbit probably promoted the ecomorphological diversity acknowledged for this group, as well as its ecological niche differentiation in the terrestrial Triassic ecosystems in clades such as lydekkerinids, trematosaurs, capitosaurs or metoposaurs. PMID:26107295

  15. Computational Intelligence based techniques for islanding detection of distributed generation in distribution network: A review

    International Nuclear Information System (INIS)

    Laghari, J.A.; Mokhlis, H.; Karimi, M.; Bakar, A.H.A.; Mohamad, Hasmaini

    2014-01-01

    Highlights: • Unintentional and intentional islanding, their causes, and solutions are presented. • Remote, passive, active and hybrid islanding detection techniques are discussed. • The limitations of these techniques in accurately detecting islanding are discussed. • The ability of computational intelligence techniques to detect islanding is discussed. • A review of ANN, fuzzy logic control, ANFIS and decision tree techniques is provided. - Abstract: Accurate and fast islanding detection of distributed generation is highly important for its successful operation in distribution networks. Up to now, various islanding detection techniques based on communication, passive, active and hybrid methods have been proposed. However, each technique suffers from certain demerits that cause inaccuracies in islanding detection. Computational intelligence based techniques, due to their robustness and flexibility in dealing with complex nonlinear systems, are an option that might solve this problem. This paper aims to provide a comprehensive review of computational intelligence based techniques applied for islanding detection of distributed generation. Moreover, the paper compares the accuracies of computational intelligence based techniques with those of existing techniques to provide helpful information for industry and utility researchers to determine the best method for their respective system
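
    As a toy illustration of the decision-tree branch of these computational intelligence techniques, the sketch below trains a classifier on synthetic, hypothetical features (frequency deviation, voltage deviation and rate of change of frequency at the point of common coupling). The feature distributions and class separation are invented; a real study would derive them from simulated or measured network events.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 500

# Hypothetical features measured at the point of common coupling:
# frequency deviation (Hz), voltage deviation (pu), rate of change of frequency (Hz/s).
grid_connected = np.column_stack([
    rng.normal(0.0, 0.02, n), rng.normal(0.0, 0.01, n), rng.normal(0.0, 0.05, n)])
islanded = np.column_stack([
    rng.normal(0.4, 0.2, n), rng.normal(0.08, 0.04, n), rng.normal(1.5, 0.7, n)])

X = np.vstack([grid_connected, islanded])
y = np.array([0] * n + [1] * n)          # 0 = grid connected, 1 = islanded

clf = DecisionTreeClassifier(max_depth=3).fit(X, y)
print("training accuracy:", clf.score(X, y))
print("prediction for a mild disturbance:", clf.predict([[0.05, 0.02, 0.2]]))
```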

  16. Evolutionary Nephrology

    Directory of Open Access Journals (Sweden)

    Robert L. Chevalier

    2017-05-01

    Full Text Available Progressive kidney disease follows nephron loss, hyperfiltration, and incomplete repair, a process described as “maladaptive.” In the past 20 years, a new discipline has emerged that expands research horizons: evolutionary medicine. In contrast to physiologic (homeostatic) adaptation, evolutionary adaptation is the result of reproductive success that reflects natural selection. Evolutionary explanations for physiologically maladaptive responses can emerge from mismatch of the phenotype with environment or from evolutionary tradeoffs. Evolutionary adaptation to a terrestrial environment resulted in a vulnerable energy-consuming renal tubule and a hypoxic, hyperosmolar microenvironment. Natural selection favors successful energy investment strategy: energy is allocated to maintenance of nephron integrity through reproductive years, but this declines with increasing senescence after ∼40 years of age. Risk factors for chronic kidney disease include restricted fetal growth or preterm birth (a life history tradeoff resulting in fewer nephrons), evolutionary selection for APOL1 mutations (which provide resistance to trypanosome infection, a tradeoff), and modern life experience (a Western diet mismatch leading to diabetes and hypertension). Current advances in genomics, epigenetics, and developmental biology have revealed proximate causes of kidney disease, but attempts to slow kidney disease remain elusive. Evolutionary medicine provides a complementary approach by addressing ultimate causes of kidney disease. Marked variation in nephron number at birth, nephron heterogeneity, and changing susceptibility to kidney injury throughout the life history are the result of evolutionary processes. Combined application of molecular genetics, evolutionary developmental biology (evo-devo), developmental programming, and life history theory may yield new strategies for prevention and treatment of chronic kidney disease.

  17. Industrial radiography with Ir-192 using computed radiographic technique

    International Nuclear Information System (INIS)

    Ngernvijit, Narippawaj; Punnachaiya, Suvit; Chankow, Nares; Sukbumperng, Ampai; Thong-Aram, Decho

    2003-01-01

    The aim of this research is to study the utilization of a low activity Ir-192 gamma source for industrial radiographic testing using the Computed Radiography (CR) system. Because the photostimulable imaging plate (IP) used in CR is much more radiation sensitive than type II film with a lead foil intensifying screen, the exposure time with CR can be significantly reduced. For a short-lived gamma-ray source like Ir-192, the exposure time must be proportionally increased as the source decays, until it is no longer practical, particularly for thick specimens. Generally, when the source decays to an activity of about 5 Ci or less, it will be returned to the manufacturer as radioactive waste. In this research, the optimum conditions for radiography of a 20 mm thick welded steel sample with 2.4 Ci of Ir-192 were investigated using the CR system with a high resolution imaging plate, i.e. type Bas-SR of the Fuji Film Co. Ltd. The IP was sandwiched between a pair of 0.25 mm thick Pb intensifying screens. Low energy scattered radiation was filtered by placing another Pb sheet with a thickness of 3 mm under the cassette. It was found that the CR image could give a contrast sensitivity of 2.5% using only a 3-minute exposure time, which was comparable to the image taken on type II film with a Pb intensifying screen using an exposure time of 45 minutes

  18. Computational modelling of the HyperVapotron cooling technique

    Energy Technology Data Exchange (ETDEWEB)

    Milnes, Joseph, E-mail: Joe.Milnes@ccfe.ac.uk [Euratom/CCFE Fusion Association, Culham Science Centre, Abingdon, Oxon, OX14 3DB (United Kingdom); Burns, Alan [School of Process Material and Environmental Engineering, CFD Centre, University of Leeds, Leeds, LS2 9JT (United Kingdom); ANSYS UK, Milton Park, Oxfordshire (United Kingdom); Drikakis, Dimitris [Department of Engineering Physics, Cranfield University, Cranfield, MK43 0AL (United Kingdom)

    2012-09-15

    Highlights: • The heat transfer mechanisms within a HyperVapotron are examined. • A multiphase, CFD model is developed. • Modelling choices for turbulence and wall boiling are evaluated. • Considerable improvements in accuracy are found compared to standard boiling models. • The model should enable significant virtual prototyping to be performed. - Abstract: Efficient heat transfer technologies are essential for magnetically confined fusion reactors; this applies to both the current generation of experimental reactors as well as future power plants. A number of High Heat Flux devices have therefore been developed specifically for this application. One of the most promising candidates is the HyperVapotron, a water cooled device which relies on internal fins and boiling heat transfer to maximise the heat transfer capability. Over the past 30 years, numerous variations of the HyperVapotron have been built and tested at fusion research centres around the globe resulting in devices that can now sustain heat fluxes in the region of 20-30 MW/m² in steady state. Until recently, there had been few attempts to model or understand the internal heat transfer mechanisms responsible for this exceptional performance with the result that design improvements have been traditionally sought experimentally which is both inefficient and costly. This paper presents the successful attempt to develop an engineering model of the HyperVapotron device using customisation of commercial Computational Fluid Dynamics software. To establish the most appropriate modelling choices, in-depth studies were performed examining the turbulence models (within the Reynolds Averaged Navier Stokes framework), near wall methods, grid resolution and boiling submodels. Comparing the CFD solutions with HyperVapotron experimental data suggests that a RANS-based, multiphase

  19. Evolutionary thinking

    Science.gov (United States)

    Hunt, Tam

    2014-01-01

    Evolution as an idea has a lengthy history, even though the idea of evolution is generally associated with Darwin today. Rebecca Stott provides an engaging and thoughtful overview of this history of evolutionary thinking in her 2013 book, Darwin's Ghosts: The Secret History of Evolution. Since Darwin, the debate over evolution—both how it takes place and, in a long war of words with religiously-oriented thinkers, whether it takes place—has been sustained and heated. A growing share of this debate is now devoted to examining how evolutionary thinking affects areas outside of biology. How do our lives change when we recognize that all is in flux? What can we learn about life more generally if we study change instead of stasis? Carter Phipps’ book, Evolutionaries: Unlocking the Spiritual and Cultural Potential of Science's Greatest Idea, delves deep into this relatively new development. Phipps generally takes as a given the validity of the Modern Synthesis of evolutionary biology. His story takes us into, as the subtitle suggests, the spiritual and cultural implications of evolutionary thinking. Can religion and evolution be reconciled? Can evolutionary thinking lead to a new type of spirituality? Is our culture already being changed in ways that we don't realize by evolutionary thinking? These are all important questions and Phipps' book is a great introduction to this discussion. Phipps is an author, journalist, and contributor to the emerging “integral” or “evolutionary” cultural movement that combines the insights of Integral Philosophy, evolutionary science, developmental psychology, and the social sciences. He has served as the Executive Editor of EnlightenNext magazine (no longer published) and more recently is the co-founder of the Institute for Cultural Evolution, a public policy think tank addressing the cultural roots of America's political challenges. What follows is an email interview with Phipps. PMID:26478766

  20. Evolutionary Demography

    DEFF Research Database (Denmark)

    Levitis, Daniel

    2015-01-01

    Demography is the quantitative study of population processes, while evolution is a population process that influences all aspects of biological organisms, including their demography. Demographic traits common to all human populations are the products of biological evolution or the interaction of biological and cultural evolution. Demographic variation within and among human populations is influenced by our biology, and therefore by natural selection and our evolutionary background. Demographic methods are necessary for studying populations of other species, and for quantifying evolutionary fitness...

  1. Comparative Analysis Between Computed and Conventional Inferior Alveolar Nerve Block Techniques.

    Science.gov (United States)

    Araújo, Gabriela Madeira; Barbalho, Jimmy Charles Melo; Dias, Tasiana Guedes de Souza; Santos, Thiago de Santana; Vasconcellos, Ricardo José de Holanda; de Morais, Hécio Henrique Araújo

    2015-11-01

    The aim of this randomized, double-blind, controlled trial was to compare the computed and conventional inferior alveolar nerve block techniques in symmetrically positioned inferior third molars. Both computed and conventional anesthetic techniques were performed in 29 healthy patients (58 surgeries) aged between 18 and 40 years. The anesthetic of choice was 2% lidocaine with 1:200,000 epinephrine. The Visual Analogue Scale assessed the pain variable after anesthetic infiltration. Patient satisfaction was evaluated using the Likert Scale. Heart and respiratory rates, mean time to perform the technique, and the need for additional anesthesia were also evaluated. Pain variable means were higher for the conventional technique than for the computed technique, 3.45 ± 2.73 and 2.86 ± 1.96, respectively, but no statistically significant differences were found (P > 0.05). Patient satisfaction showed no statistically significant differences. The average runtimes of the computed and conventional techniques were 3.85 and 1.61 minutes, respectively, showing statistically significant differences (P < 0.001). The computed anesthetic technique showed lower mean pain perception, but did not show statistically significant differences when contrasted to the conventional technique.

  2. Random sampling technique for ultra-fast computations of molecular opacities for exoplanet atmospheres

    Science.gov (United States)

    Min, M.

    2017-10-01

    Context. Opacities of molecules in exoplanet atmospheres rely on increasingly detailed line-lists for these molecules. The line lists available today contain, for many species, up to several billion lines. Computation of the spectral line profile created by pressure and temperature broadening, the Voigt profile, for all of these lines is becoming a computational challenge. Aims: We aim to create a method to compute the Voigt profile in a way that automatically focusses the computation time on the strongest lines, while still maintaining the continuum contribution of the high number of weaker lines. Methods: Here, we outline a statistical line sampling technique that samples the Voigt profile quickly and with high accuracy. The number of samples is adjusted to the strength of the line and the local spectral line density. This automatically provides high accuracy line shapes for strong lines or lines that are spectrally isolated. The line sampling technique automatically preserves the integrated line opacity for all lines, thereby also providing the continuum opacity created by the large number of weak lines at very low computational cost. Results: The line sampling technique is tested for accuracy when computing line spectra and correlated-k tables. Extremely fast computations (≈3.5 × 10⁵ lines per second per core on a standard current day desktop computer) with high accuracy (≤1% almost everywhere) are obtained. A detailed recipe on how to perform the computations is given.
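
    The paper's exact sampling scheme is not reproduced here, but the following sketch captures the idea under simple assumptions: each line receives a number of Monte Carlo samples proportional to its strength, a Voigt deviate is drawn as the sum of a Gaussian and a Cauchy deviate, and every sample deposits an equal share of the line's integrated opacity into a wavenumber bin. The line positions, strengths and widths are invented for the example.

```python
import numpy as np

def sample_line_opacity(grid, centers, strengths, sigma, gamma,
                        samples_per_unit_strength=2000, rng=None):
    """Monte Carlo deposition of Voigt-broadened lines onto a wavenumber grid.

    Each line receives a number of samples proportional to its strength (with
    a small floor for very weak lines).  A Voigt deviate is the sum of a
    Gaussian and a Cauchy (Lorentzian) deviate, so binning the samples
    reproduces the Voigt shape while conserving each line's integrated opacity
    (apart from samples falling outside the grid)."""
    rng = rng or np.random.default_rng(0)
    opacity = np.zeros(grid.size - 1)              # histogram bins between grid edges
    for nu0, s in zip(centers, strengths):
        n = max(4, int(samples_per_unit_strength * s))
        nu = nu0 + rng.normal(0.0, sigma, n) + gamma * rng.standard_cauchy(n)
        hist, _ = np.histogram(nu, bins=grid)
        opacity += s / n * hist                    # each sample carries s/n opacity
    return opacity / np.diff(grid)                 # opacity per unit wavenumber

grid = np.linspace(0.0, 100.0, 2001)
centers = np.array([30.0, 50.0, 70.0])
strengths = np.array([1.0, 0.05, 0.2])
kappa = sample_line_opacity(grid, centers, strengths, sigma=0.1, gamma=0.05)
print("integrated opacity:", round(float(np.sum(kappa * np.diff(grid))), 3),
      "vs input total", float(strengths.sum()))
```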

  3. Teaching Computer Ergonomic Techniques: Practices and Perceptions of Secondary and Postsecondary Business Educators.

    Science.gov (United States)

    Alexander, Melody W.; Arp, Larry W.

    1997-01-01

    A survey of 260 secondary and 251 postsecondary business educators found the former more likely to think computer ergonomic techniques should be taught in elementary school and to address the hazards of improper use. Both groups stated that over half of the students they observe do not use good techniques and agreed that students need continual…

  4. X-ray computer tomography, ultrasound and vibrational spectroscopic evaluation techniques of polymer gel dosimeters

    International Nuclear Information System (INIS)

    Baldock, Clive

    2004-01-01

    Since Gore et al published their paper on Fricke gel dosimetry, the predominant method of evaluation of both Fricke and polymer gel dosimeters has been magnetic resonance imaging (MRI). More recently optical computer tomography (CT) has also been a favourable evaluation method. Other techniques have been explored and developed as potential evaluation techniques in gel dosimetry. This paper reviews these other developments

  5. Instrumentation, computer software and experimental techniques used in low-frequency internal friction studies at WNRE

    International Nuclear Information System (INIS)

    Sprugmann, K.W.; Ritchie, I.G.

    1980-04-01

    A detailed and comprehensive account of the equipment, computer programs and experimental methods developed at the Whiteshell Nuclear Research Establishment for the study of low-frequency internal friction is presented. Part I describes the mechanical apparatus, electronic instrumentation and computer software, while Part II describes in detail the laboratory techniques and the various types of experiments performed, together with data reduction and analysis. Experimental procedures for the study of internal friction as a function of temperature, strain amplitude or time are described. Computer control of these experiments using the free-decay technique is outlined. In addition, a pendulum constant-amplitude drive system is described. (auth)

  6. [Clinical analysis of 12 cases of orthognathic surgery with digital computer-assisted technique].

    Science.gov (United States)

    Tan, Xin-ying; Hu, Min; Liu, Chang-kui; Liu, Hua-wei; Liu, San-xia; Tao, Ye

    2014-06-01

    The aim of this study was to investigate the effect of the digital computer-assisted technique in orthognathic surgery. Twelve patients with jaw malformations were treated in our department between January 2008 and December 2011. With the help of CT and a three-dimensional reconstruction technique, the 12 patients underwent surgical treatment and the results were evaluated after surgery. The digital computer-assisted technique could clearly show the status of the jaw deformity and assist virtual surgery. After surgery, all patients were satisfied with the results. Digital orthognathic surgery can improve the predictability of the surgical procedure, facilitate communication with patients, shorten operative time, and reduce patients' pain.

  7. Computational characterization of HPGe detectors usable for a wide variety of source geometries by using Monte Carlo simulation and a multi-objective evolutionary algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Guerra, J.G., E-mail: jglezg2002@gmail.es [Departamento de Física, Universidad de Las Palmas de Gran Canaria, 35001 Las Palmas de Gran Canaria (Spain); Rubiano, J.G. [Departamento de Física, Universidad de Las Palmas de Gran Canaria, 35001 Las Palmas de Gran Canaria (Spain); Instituto Universitario de Estudios Ambientales y Recursos Naturales, Universidad de Las Palmas de Gran Canaria, 35001 Las Palmas de Gran Canaria (Spain); Winter, G. [Instituto Universitario de Sistemas Inteligentes y Aplicaciones Numéricas en la Ingeniería, Universidad de Las Palmas de Gran Canaria, 35001 Las Palmas de Gran Canaria (Spain); Guerra, A.G.; Alonso, H.; Arnedo, M.A.; Tejera, A.; Martel, P. [Departamento de Física, Universidad de Las Palmas de Gran Canaria, 35001 Las Palmas de Gran Canaria (Spain); Instituto Universitario de Estudios Ambientales y Recursos Naturales, Universidad de Las Palmas de Gran Canaria, 35001 Las Palmas de Gran Canaria (Spain); Bolivar, J.P. [Departamento de Física Aplicada, Universidad de Huelva, 21071 Huelva (Spain)

    2017-06-21

    In this work, we have developed a computational methodology for characterizing HPGe detectors by implementing in parallel a multi-objective evolutionary algorithm, together with a Monte Carlo simulation code. The evolutionary algorithm is used for searching the geometrical parameters of a model of the detector by minimizing the differences between the efficiencies calculated by Monte Carlo simulation and two reference sets of Full Energy Peak Efficiencies (FEPEs) corresponding to two given sample geometries, a beaker of small diameter laid over the detector window and a beaker of large capacity which wraps the detector. This methodology is a generalization of a previously published work, which was limited to beakers placed over the window of the detector with a diameter equal to or smaller than the crystal diameter, so that the crystal mount cap (which surrounds the lateral surface of the crystal) was not considered in the detector model. The generalization has been accomplished not only by including such a mount cap in the model, but also by using multi-objective optimization instead of mono-objective, with the aim of building a model sufficiently accurate for a wider variety of beakers commonly used for the measurement of environmental samples by gamma spectrometry, like for instance Marinellis, Petris, or any other beaker with a diameter larger than the crystal diameter, for which part of the detected radiation has to pass through the mount cap. The proposed methodology has been applied to an HPGe XtRa detector, providing a model of the detector which has been successfully verified for different source-detector geometries and materials and experimentally validated using CRMs. - Highlights: • A computational method for characterizing HPGe detectors has been generalized. • The new version is usable for a wider range of sample geometries. • It starts from reference FEPEs obtained through a standard calibration procedure. • A model of an HPGe XtRa detector has been
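
    A minimal sketch of the optimization loop is given below, assuming Python and replacing the Monte Carlo transport calculation with a mock efficiency function and the real calibration data with hypothetical reference efficiencies. It only illustrates how a small (mu + lambda) evolutionary search with Pareto dominance can fit geometrical parameters to two reference geometries at once; it is not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(1)

# Reference Full Energy Peak Efficiencies for two sample geometries
# (hypothetical numbers standing in for a real calibration).
ref_small_beaker = np.array([0.12, 0.09, 0.06, 0.04])
ref_large_beaker = np.array([0.08, 0.06, 0.04, 0.025])

def mock_mc_efficiencies(params, geometry):
    """Placeholder for the Monte Carlo simulation of the detector model.
    `params` = (crystal radius, crystal length, dead-layer thickness) in cm."""
    r, l, dead = params
    base = r * l / (30.0 + 100.0 * dead)
    scale = 1.0 if geometry == "small" else 0.65
    return scale * base * np.array([1.0, 0.75, 0.5, 0.33])

def objectives(params):
    """Two objectives: mean relative discrepancy for each reference geometry."""
    f1 = np.mean(np.abs(mock_mc_efficiencies(params, "small") - ref_small_beaker)
                 / ref_small_beaker)
    f2 = np.mean(np.abs(mock_mc_efficiencies(params, "large") - ref_large_beaker)
                 / ref_large_beaker)
    return np.array([f1, f2])

def dominates(a, b):
    return np.all(a <= b) and np.any(a < b)

# (mu + lambda) evolution strategy keeping a small non-dominated set each generation.
pop = rng.uniform([2.0, 3.0, 0.0], [4.0, 8.0, 0.1], size=(20, 3))
for _ in range(200):
    children = pop + rng.normal(0.0, 0.05, pop.shape)      # Gaussian mutation
    union = np.vstack([pop, children])
    scores = np.array([objectives(p) for p in union])
    nondominated = [i for i in range(len(union))
                    if not any(dominates(scores[j], scores[i]) for j in range(len(union)))]
    keep = (nondominated + list(rng.permutation(len(union))))[:20]
    pop = union[keep]

best = min(pop, key=lambda p: objectives(p).sum())
print("best geometry (radius, length, dead layer):", np.round(best, 3))
```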

  8. Computational characterization of HPGe detectors usable for a wide variety of source geometries by using Monte Carlo simulation and a multi-objective evolutionary algorithm

    International Nuclear Information System (INIS)

    Guerra, J.G.; Rubiano, J.G.; Winter, G.; Guerra, A.G.; Alonso, H.; Arnedo, M.A.; Tejera, A.; Martel, P.; Bolivar, J.P.

    2017-01-01

    In this work, we have developed a computational methodology for characterizing HPGe detectors by implementing in parallel a multi-objective evolutionary algorithm, together with a Monte Carlo simulation code. The evolutionary algorithm is used for searching the geometrical parameters of a model of the detector by minimizing the differences between the efficiencies calculated by Monte Carlo simulation and two reference sets of Full Energy Peak Efficiencies (FEPEs) corresponding to two given sample geometries, a beaker of small diameter laid over the detector window and a beaker of large capacity which wraps the detector. This methodology is a generalization of a previously published work, which was limited to beakers placed over the window of the detector with a diameter equal to or smaller than the crystal diameter, so that the crystal mount cap (which surrounds the lateral surface of the crystal) was not considered in the detector model. The generalization has been accomplished not only by including such a mount cap in the model, but also by using multi-objective optimization instead of mono-objective, with the aim of building a model sufficiently accurate for a wider variety of beakers commonly used for the measurement of environmental samples by gamma spectrometry, like for instance Marinellis, Petris, or any other beaker with a diameter larger than the crystal diameter, for which part of the detected radiation has to pass through the mount cap. The proposed methodology has been applied to an HPGe XtRa detector, providing a model of the detector which has been successfully verified for different source-detector geometries and materials and experimentally validated using CRMs. - Highlights: • A computational method for characterizing HPGe detectors has been generalized. • The new version is usable for a wider range of sample geometries. • It starts from reference FEPEs obtained through a standard calibration procedure. • A model of an HPGe XtRa detector has been

  9. On the theories, techniques, and computer codes used in numerical reactor criticality and burnup calculations

    International Nuclear Information System (INIS)

    El-Osery, I.A.

    1981-01-01

    The purpose of this paper is to discuss the theories, techniques and computer codes that are frequently used in numerical reactor criticality and burnup calculations. It is part of an integrated nuclear reactor calculation scheme conducted by the Reactors Department, Inshas Nuclear Research Centre. The crucial part of numerical reactor criticality and burnup calculations is the determination of the neutron flux distribution, which can be obtained in principle as a solution of the Boltzmann transport equation. Numerical methods used for solving transport equations are discussed. Emphasis is placed on numerical techniques based on multigroup diffusion theory. These numerical techniques include nodal, modal, and finite difference ones. The most commonly known computer codes utilizing these techniques are reviewed. Some of the main computer codes that have already been developed at the Reactors Department and are related to numerical reactor criticality and burnup calculations are also presented
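
    As a minimal illustration of the finite-difference diffusion techniques mentioned, the sketch below solves a one-group, slab-geometry criticality problem by power iteration. The cross sections and slab width are invented for the example; none of the cited codes are reproduced.

```python
import numpy as np

def k_effective(width=100.0, n=200, D=1.0, sigma_a=0.07, nu_sigma_f=0.075):
    """One-group, slab-geometry diffusion criticality solved by power iteration.

    Solves -D phi'' + Sigma_a phi = (1/k) nu Sigma_f phi with zero-flux
    boundaries on a uniform finite-difference mesh."""
    h = width / (n + 1)
    # Tridiagonal destruction operator: -D d2/dx2 + Sigma_a
    main = np.full(n, 2.0 * D / h**2 + sigma_a)
    off = np.full(n - 1, -D / h**2)
    A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

    phi = np.ones(n)
    k = 1.0
    for _ in range(200):
        source = nu_sigma_f * phi / k        # fission source from previous iterate
        phi_new = np.linalg.solve(A, source)
        k *= phi_new.sum() / phi.sum()       # update the eigenvalue estimate
        phi = phi_new
    return k, phi / phi.max()

k, flux = k_effective()
print("k-effective ≈", round(k, 4))
```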

  10. High-efficiency photorealistic computer-generated holograms based on the backward ray-tracing technique

    Science.gov (United States)

    Wang, Yuan; Chen, Zhidong; Sang, Xinzhu; Li, Hui; Zhao, Linmin

    2018-03-01

    Holographic displays can provide the complete optical wave field of a three-dimensional (3D) scene, including the depth perception. However, it often takes a long computation time to produce traditional computer-generated holograms (CGHs) without more complex and photorealistic rendering. The backward ray-tracing technique is able to render photorealistic high-quality images and noticeably reduces the computation time thanks to its high degree of parallelism. Here, a high-efficiency photorealistic computer-generated hologram method is presented based on the ray-tracing technique. Rays are launched and traced in parallel under different illuminations and circumstances. Experimental results demonstrate the effectiveness of the proposed method. Compared with the traditional point cloud CGH, the computation time is decreased to 24 s to reconstruct a 3D object of 100 × 100 rays with continuous depth change.
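
    For context, the point cloud CGH used as the baseline in this comparison can be sketched in a few lines: each object point contributes a spherical wave to the hologram plane, and the superposed field is encoded by interference with a reference wave. The wavelength, pixel pitch, resolution and object points below are invented; the backward ray-tracing renderer itself is not reproduced.

```python
import numpy as np

def point_cloud_hologram(points, amplitudes, wavelength=532e-9,
                         pitch=8e-6, res=(512, 512)):
    """Superpose spherical waves from 3D object points on the hologram plane
    (the classical point-cloud CGH used as the baseline for comparison)."""
    k = 2 * np.pi / wavelength
    ny, nx = res
    x = (np.arange(nx) - nx / 2) * pitch
    y = (np.arange(ny) - ny / 2) * pitch
    X, Y = np.meshgrid(x, y)
    field = np.zeros(res, dtype=complex)
    for (px, py, pz), a in zip(points, amplitudes):
        r = np.sqrt((X - px) ** 2 + (Y - py) ** 2 + pz ** 2)
        field += a / r * np.exp(1j * k * r)      # spherical wave from each point
    # Encode as an amplitude hologram by interference with an on-axis reference.
    hologram = np.abs(field + np.abs(field).max()) ** 2
    return hologram

points = [(0.0, 0.0, 0.1), (0.5e-3, 0.2e-3, 0.12)]   # object points in metres
holo = point_cloud_hologram(points, amplitudes=[1.0, 0.8])
print(holo.shape, holo.dtype)
```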

  11. Design of a computation tool for neutron spectrometry and dosimetry through evolutionary neural networks; Diseno de una herramienta de computo para la espectrometria y dosimetria de neutrones por medio de redes neuronales evolutivas

    Energy Technology Data Exchange (ETDEWEB)

    Ortiz R, J. M.; Vega C, H. R. [Universidad Autonoma de Zacatecas, Unidad Academica de Ingenieria Electrica, Av. Ramon Lopez Velarde No. 801, Col. Centro, Zacatecas (Mexico); Martinez B, M. R. [Universidad Autonoma de Zacatecas, Unidad Academica de Estudios Nucleares, Av. Ramon Lopez Velarde No. 801, Col. Centro, Zacatecas (Mexico); Gallego, E. [Universidad Politecnica de Madrid, Departamento de Ingenieria Nuclear, Jose Gutierrez Abascal No. 2, E-28006 Madrid (Spain)], e-mail: morvymmyahoo@com.mx

    2009-10-15

    Neutron dosimetry is one of the most complicated tasks of radiation protection, because it is a complex technique that is highly dependent on neutron energy. One of the first devices used to perform neutron spectrometry is the Bonner sphere spectrometric system, which continues to be one of the most commonly used spectrometers. This system has disadvantages such as the weight of its components, the low resolution of the spectrum, and a long, drawn-out procedure for spectrum reconstruction that requires an expert user in system management and the use of a reconstruction code such as BUNKIE, SAND, etc. These codes are based on an iterative reconstruction algorithm whose greatest inconvenience is that, for the spectrum reconstruction, the system must be provided with an initial spectrum as close as possible to the spectrum to be obtained. Consequently, researchers have mentioned the need to develop alternative measurement techniques to improve existing monitoring systems for workers. Among these alternative techniques, several reconstruction procedures based on artificial intelligence have been reported, such as genetic algorithms, artificial neural networks and hybrid systems of evolutionary artificial neural networks using genetic algorithms. However, the use of these techniques in the nuclear science area is not free of problems, so it has been suggested that more research be conducted to overcome these disadvantages. Because they are emerging technologies, there are no tools for analysing the results, so in this paper we first present the design of a computation tool that allows analysis of the neutron spectra and equivalent doses obtained through the hybrid technology of neural networks and genetic algorithms. This tool provides a graphical user environment that is friendly, intuitive and easy to operate. The speed of program operation is high, executing the analysis in a few seconds, and it can store and/or print the obtained information for
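
    To make the hybrid idea concrete, the sketch below evolves the weights of a tiny feed-forward network with a genetic algorithm so that it maps simulated Bonner-sphere count rates back to a coarse spectrum. Everything here is synthetic and assumed for illustration - the response matrix, network size and GA settings - and it is unrelated to the actual tool described in the record.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "Bonner sphere" response matrix: 7 sphere count rates from 5 spectrum bins.
R = rng.uniform(0.1, 1.0, size=(7, 5))
spectra = rng.dirichlet(np.ones(5), size=200)        # training spectra (normalised)
counts = spectra @ R.T                                # corresponding count-rate vectors

def unpack(w):
    """Weights of a 7-8-5 feed-forward network flattened into one 109-element vector."""
    W1 = w[:56].reshape(8, 7); b1 = w[56:64]
    W2 = w[64:104].reshape(5, 8); b2 = w[104:109]
    return W1, b1, W2, b2

def predict(w, x):
    W1, b1, W2, b2 = unpack(w)
    h = np.tanh(x @ W1.T + b1)
    return h @ W2.T + b2

def fitness(w):
    return -np.mean((predict(w, counts) - spectra) ** 2)   # higher is better

# Simple generational GA over the network weights.
pop = rng.normal(0.0, 0.5, size=(60, 109))
for _ in range(300):
    scores = np.array([fitness(w) for w in pop])
    parents = pop[np.argsort(scores)[::-1][:20]]            # truncation selection
    children = []
    for _ in range(40):
        a, b = parents[rng.integers(20)], parents[rng.integers(20)]
        mask = rng.random(109) < 0.5                        # uniform crossover
        children.append(np.where(mask, a, b) + rng.normal(0.0, 0.05, 109))  # mutation
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(w) for w in pop])]
print("training MSE of evolved network:", round(-fitness(best), 5))
```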

  12. Evolutionary Expectations

    DEFF Research Database (Denmark)

    Nash, Ulrik William

    2014-01-01

    The concept of evolutionary expectations descends from cue learning psychology, synthesizing ideas on rational expectations with ideas on bounded rationality, to provide support for these ideas simultaneously. Evolutionary expectations are rational, but within cognitive bounds. Moreover, they are correlated among people who share environments because these individuals satisfice within their cognitive bounds by using cues in order of validity, as opposed to using cues arbitrarily. Any difference in expectations thereby arises from differences in cognitive ability, because two individuals with identical cognitive bounds will perceive business opportunities identically. In addition, because cues provide information about latent causal structures of the environment, changes in causality must be accompanied by changes in cognitive representations if adaptation is to be maintained.

  13. [Evolutionary medicine].

    Science.gov (United States)

    Wjst, M

    2013-12-01

    Evolutionary medicine allows new insights into long-standing medical problems. Are we really "stone agers in the fast lane"? This insight might have enormous consequences and will allow new answers that could never be provided by traditional anthropology. Only now is this made possible, using data from molecular medicine and systems biology. Thereby evolutionary medicine takes a leap from a merely theoretical discipline to practical fields - reproductive, nutritional and preventive medicine, as well as microbiology, immunology and psychiatry. Evolutionary medicine is not another "just so story" but a serious candidate for the medical curriculum, providing a universal understanding of health and disease based on our biological origin. © Georg Thieme Verlag KG Stuttgart · New York.

  14. Evolutionary Awareness

    Directory of Open Access Journals (Sweden)

    Gregory Gorelik

    2014-10-01

    Full Text Available In this article, we advance the concept of “evolutionary awareness,” a metacognitive framework that examines human thought and emotion from a naturalistic, evolutionary perspective. We begin by discussing the evolution and current functioning of the moral foundations on which our framework rests. Next, we discuss the possible applications of such an evolutionarily-informed ethical framework to several domains of human behavior, namely: sexual maturation, mate attraction, intrasexual competition, culture, and the separation between various academic disciplines. Finally, we discuss ways in which an evolutionary awareness can inform our cross-generational activities—which we refer to as “intergenerational extended phenotypes”—by helping us to construct a better future for ourselves, for other sentient beings, and for our environment.

  15. Computational analyses of an evolutionary arms race between mammalian immunity mediated by immunoglobulin A and its subversion by bacterial pathogens.

    Directory of Open Access Journals (Sweden)

    Ana Pinheiro

    Full Text Available IgA is the predominant immunoglobulin isotype in mucosal tissues and external secretions, playing important roles both in defense against pathogens and in maintenance of commensal microbiota. Considering the complexity of its interactions with the surrounding environment, IgA is a likely target for diversifying or positive selection. To investigate this possibility, the action of natural selection on IgA was examined in depth with six different methods: CODEML from the PAML package and the SLAC, FEL, REL, MEME and FUBAR methods implemented in the Datamonkey webserver. In considering just primate IgA, these analyses show that diversifying selection targeted five positions of the Cα1 and Cα2 domains of IgA. Extending the analysis to include other mammals identified 18 positively selected sites: ten in Cα1, five in Cα2 and three in Cα3. All but one of these positions display variation in polarity and charge. Their structural locations suggest they indirectly influence the conformation of sites on IgA that are critical for interaction with host IgA receptors and also with proteins produced by mucosal pathogens that prevent their elimination by IgA-mediated effector mechanisms. Demonstrating the plasticity of IgA in the evolution of different groups of mammals, only two of the eighteen selected positions in all mammals are included in the five selected positions in primates. That IgA residues subject to positive selection impact sites targeted both by host receptors and subversive pathogen ligands highlights the evolutionary arms race playing out between mammals and pathogens, and further emphasizes the importance of IgA in protection against mucosal pathogens.

  16. PREFACE: 14th International Workshop on Advanced Computing and Analysis Techniques in Physics Research (ACAT 2011)

    Science.gov (United States)

    Teodorescu, Liliana; Britton, David; Glover, Nigel; Heinrich, Gudrun; Lauret, Jérôme; Naumann, Axel; Speer, Thomas; Teixeira-Dias, Pedro

    2012-06-01

    ACAT2011 This volume of Journal of Physics: Conference Series is dedicated to scientific contributions presented at the 14th International Workshop on Advanced Computing and Analysis Techniques in Physics Research (ACAT 2011) which took place on 5-7 September 2011 at Brunel University, UK. The workshop series, which began in 1990 in Lyon, France, brings together computer science researchers and practitioners, and researchers from particle physics and related fields in order to explore and confront the boundaries of computing and of automatic data analysis and theoretical calculation techniques. It is a forum for the exchange of ideas among the fields, exploring and promoting cutting-edge computing, data analysis and theoretical calculation techniques in fundamental physics research. This year's edition of the workshop brought together over 100 participants from all over the world. 14 invited speakers presented key topics on computing ecosystems, cloud computing, multivariate data analysis, symbolic and automatic theoretical calculations as well as computing and data analysis challenges in astrophysics, bioinformatics and musicology. Over 80 other talks and posters presented state-of-the art developments in the areas of the workshop's three tracks: Computing Technologies, Data Analysis Algorithms and Tools, and Computational Techniques in Theoretical Physics. Panel and round table discussions on data management and multivariate data analysis uncovered new ideas and collaboration opportunities in the respective areas. This edition of ACAT was generously sponsored by the Science and Technology Facility Council (STFC), the Institute for Particle Physics Phenomenology (IPPP) at Durham University, Brookhaven National Laboratory in the USA and Dell. We would like to thank all the participants of the workshop for the high level of their scientific contributions and for the enthusiastic participation in all its activities which were, ultimately, the key factors in the

  17. Arthroscopic Latarjet Techniques: Graft and Fixation Positioning Assessed With 2-Dimensional Computed Tomography Is Not Equivalent With Standard Open Technique.

    Science.gov (United States)

    Neyton, Lionel; Barth, Johannes; Nourissat, Geoffroy; Métais, Pierre; Boileau, Pascal; Walch, Gilles; Lafosse, Laurent

    2018-05-19

    To analyze graft and fixation (screw and EndoButton) positioning after the arthroscopic Latarjet technique with 2-dimensional computed tomography (CT) and to compare it with the open technique. We performed a retrospective multicenter study (March 2013 to June 2014). The inclusion criteria included patients with recurrent anterior instability treated with the Latarjet procedure. The exclusion criterion was the absence of a postoperative CT scan. The positions of the hardware, the positions of the grafts in the axial and sagittal planes, and the dispersion of values (variability) were compared. The study included 208 patients (79 treated with open technique, 87 treated with arthroscopic Latarjet technique with screw fixation [arthro-screw], and 42 treated with arthroscopic Latarjet technique with EndoButton fixation [arthro-EndoButton]). The angulation of the screws was different in the open group versus the arthro-screw group (superior, 10.3° ± 0.7° vs 16.9° ± 1.0° [P open inferior screws (P = .003). In the axial plane (level of equator), the arthroscopic techniques resulted in lateral positions (arthro-screw, 1.5 ± 0.3 mm lateral [P open technique (0.9 ± 0.2 mm medial). At the level of 25% of the glenoid height, the arthroscopic techniques resulted in lateral positions (arthro-screw, 0.3 ± 0.3 mm lateral [P open technique (1.0 ± 0.2 mm medial). Higher variability was observed in the arthro-screw group. In the sagittal plane, the arthro-screw technique resulted in higher positions (55% ± 3% of graft below equator) and the arthro-EndoButton technique resulted in lower positions (82% ± 3%, P open technique (71% ± 2%). Variability was not different. This study shows that the position of the fixation devices and position of the bone graft with the arthroscopic techniques are statistically significantly different from those with the open technique with 2-dimensional CT assessment. In the sagittal plane, the arthro-screw technique provides the highest

  18. Evolutionary robotics – A review

    Indian Academy of Sciences (India)

    ... a need for a technique by which the robot is able to acquire new behaviours automatically ... Evolutionary robotics is a comparatively new field of robotics research, which seems to ...

  19. Evolutionary robotics

    Indian Academy of Sciences (India)

    In evolutionary robotics, a suitable robot control system is developed automatically through evolution due to the interactions between the robot and its environment. It is a complicated task, as the robot and the environment constitute a highly dynamical system. Several methods have been tried by various investigators to ...

  20. Plant process computer replacements - techniques to limit installation schedules and costs

    International Nuclear Information System (INIS)

    Baker, M.D.; Olson, J.L.

    1992-01-01

    Plant process computer systems, a standard fixture in all nuclear power plants, are used to monitor and display important plant process parameters. Scanning thousands of field sensors and alarming out-of-limit values, these computer systems are heavily relied on by control room operators. The original nuclear steam supply system (NSSS) vendor for the power plant often supplied the plant process computer. Designed using sixties and seventies technology, a plant's original process computer has been obsolete for some time. Driven by increased maintenance costs and new US Nuclear Regulatory Commission regulations such as NUREG-0737, Suppl. 1, many utilities have replaced their process computers with more modern computer systems. Given that computer systems are by their nature prone to rapid obsolescence, this replacement cycle will likely repeat. A process computer replacement project can be a significant capital expenditure and must be performed during a scheduled refueling outage. The object of the installation process is to install a working system on schedule. Experience gained by supervising several computer replacement installations has taught lessons that, if applied, will shorten the schedule and limit the risk of costly delays. Examples illustrating this technique are given. This paper and these examples deal only with the installation process and assume that the replacement computer system has been adequately designed, and development and factory tested

  1. Inductive reasoning and forecasting of population dynamics of Cylindrospermopsis raciborskii in three sub-tropical reservoirs by evolutionary computation.

    Science.gov (United States)

    Recknagel, Friedrich; Orr, Philip T; Cao, Hongqing

    2014-01-01

    Seven-day-ahead forecasting models of Cylindrospermopsis raciborskii in three warm-monomictic and mesotrophic reservoirs in south-east Queensland have been developed by means of water quality data from 1999 to 2010 and the hybrid evolutionary algorithm HEA. The resulting models, using either all measured variables as inputs or only electronically measurable variables as inputs, accurately forecasted the timing of overgrowth of C. raciborskii and matched well the high and low magnitudes of observed bloom events, with 0.45 ≤ r² ≤ 0.61 and 0.4 ≤ r² ≤ 0.57, respectively. The models also revealed relationships and thresholds triggering bloom events that provide valuable information on synergism between water quality conditions and population dynamics of C. raciborskii. Best performing models based on using all measured variables as inputs indicated electrical conductivity (EC) within the range of 206-280 mS m⁻¹ as the threshold above which fast growth and high abundances of C. raciborskii have been observed for the three lakes. Best models based on electronically measurable variables for Lakes Wivenhoe and Somerset indicated a water temperature (WT) range of 25.5-32.7°C within which fast growth and high abundances of C. raciborskii can be expected. By contrast, the model for Lake Samsonvale highlighted a turbidity (TURB) level of 4.8 NTU as an indicator for mass developments of C. raciborskii. Experiments with online measured water quality data of Lake Wivenhoe from 2007 to 2010 resulted in predictive models with 0.61 ≤ r² ≤ 0.65, whereby again similar levels of EC and WT have been discovered as thresholds for outgrowth of C. raciborskii. The highest validity of r² = 0.75 for an in situ data-based model has been achieved after considering time lags for EC by 7 days and dissolved oxygen by 1 day. These time lags have been discovered by a systematic screening of all possible combinations of time lags between 0 and 10 days for all electronically measurable variables. The so
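
    The time-lag screening step can be illustrated independently of HEA. The sketch below generates synthetic daily series, builds 7-day-ahead linear models for every combination of 0-10 day lags on two inputs, and reports the combination with the best r². The series, coefficients and lags are invented, and ordinary least squares stands in for the evolved model structure.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)
days, horizon = 400, 7                      # 7-day-ahead forecasts

# Synthetic daily series standing in for electronically measurable variables.
ec = 240 + 20 * np.sin(np.arange(days) / 30) + rng.normal(0, 8, days)      # conductivity
wt = 27 + 4 * np.sin(np.arange(days) / 60 + 1) + rng.normal(0, 1.5, days)  # water temperature
# Abundance is driven by EC 14 days earlier and WT 8 days earlier, so with a
# 7-day horizon the screening should typically recover lags close to 7 and 1.
abundance = 0.1 * np.roll(ec, 14) + 0.8 * np.roll(wt, 8) + rng.normal(0, 0.5, days)

def r_squared(lag_ec, lag_wt):
    """Fit a linear 7-day-ahead model using the two lagged inputs and return r^2."""
    start = max(lag_ec, lag_wt) + horizon
    y = abundance[start:]
    X = np.column_stack([ec[start - horizon - lag_ec: days - horizon - lag_ec],
                         wt[start - horizon - lag_wt: days - horizon - lag_wt],
                         np.ones(days - start)])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return 1 - (y - X @ coef).var() / y.var()

best = max(product(range(11), repeat=2), key=lambda lags: r_squared(*lags))
print("best (EC lag, WT lag):", best, " r^2 =", round(r_squared(*best), 3))
```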

  2. Computational characterization of HPGe detectors usable for a wide variety of source geometries by using Monte Carlo simulation and a multi-objective evolutionary algorithm

    Science.gov (United States)

    Guerra, J. G.; Rubiano, J. G.; Winter, G.; Guerra, A. G.; Alonso, H.; Arnedo, M. A.; Tejera, A.; Martel, P.; Bolivar, J. P.

    2017-06-01

    In this work, we have developed a computational methodology for characterizing HPGe detectors by implementing in parallel a multi-objective evolutionary algorithm, together with a Monte Carlo simulation code. The evolutionary algorithm is used for searching the geometrical parameters of a model of the detector by minimizing the differences between the efficiencies calculated by Monte Carlo simulation and two reference sets of Full Energy Peak Efficiencies (FEPEs) corresponding to two given sample geometries, a beaker of small diameter laid over the detector window and a beaker of large capacity which wraps the detector. This methodology is a generalization of a previously published work, which was limited to beakers placed over the window of the detector with a diameter equal to or smaller than the crystal diameter, so that the crystal mount cap (which surrounds the lateral surface of the crystal) was not considered in the detector model. The generalization has been accomplished not only by including such a mount cap in the model, but also by using multi-objective optimization instead of mono-objective, with the aim of building a model sufficiently accurate for a wider variety of beakers commonly used for the measurement of environmental samples by gamma spectrometry, like for instance Marinellis, Petris, or any other beaker with a diameter larger than the crystal diameter, for which part of the detected radiation has to pass through the mount cap. The proposed methodology has been applied to an HPGe XtRa detector, providing a model of the detector which has been successfully verified for different source-detector geometries and materials and experimentally validated using CRMs.

  3. Costs incurred by applying computer-aided design/computer-aided manufacturing techniques for the reconstruction of maxillofacial defects.

    Science.gov (United States)

    Rustemeyer, Jan; Melenberg, Alex; Sari-Rieger, Aynur

    2014-12-01

    This study aims to evaluate the additional costs incurred by using a computer-aided design/computer-aided manufacturing (CAD/CAM) technique for reconstructing maxillofacial defects by analyzing typical cases. The medical charts of 11 consecutive patients who were subjected to the CAD/CAM technique were considered, and invoices from the companies providing the CAD/CAM devices were reviewed for every case. The number of devices used was significantly correlated with cost (r = 0.880; p costs were found between cases in which prebent reconstruction plates were used (€3346.00 ± €29.00) and cases in which they were not (€2534.22 ± €264.48; p costs of two, three and four devices, even when ignoring the cost of reconstruction plates. Additional fees provided by statutory health insurance covered a mean of 171.5% ± 25.6% of the cost of the CAD/CAM devices. Since the additional fees provide financial compensation, we believe that the CAD/CAM technique is suited for wide application and not restricted to complex cases. Where additional fees/funds are not available, the CAD/CAM technique might be unprofitable, so the decision whether or not to use it remains a case-to-case decision with respect to cost versus benefit. Copyright © 2014 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.

  4. Computer Aided Measurement Laser (CAML): technique to quantify post-mastectomy lymphoedema

    International Nuclear Information System (INIS)

    Trombetta, Chiara; Abundo, Paolo; Felici, Antonella; Ljoka, Concetta; Foti, Calogero; Cori, Sandro Di; Rosato, Nicola

    2012-01-01

    Lymphoedema can be a side effect of cancer treatment. Even though several methods for assessing lymphoedema are used in clinical practice, an objective quantification of lymphoedema has been problematic. The aim of the study was to determine the objectivity, reliability and repeatability of the computer aided measurement laser (CAML) technique. The CAML technique is based on computer aided design (CAD) methods and requires an infrared laser scanner. Measurements are scanned and the information describing the size and shape of the limb allows the model to be designed using the CAD software. The objectivity and repeatability were established at the outset using a phantom. Subsequently, a group of subjects presenting with post-breast cancer lymphoedema was evaluated, using the contralateral limb as a control. Results confirmed that in clinical settings the CAML technique is easy to perform, rapid and provides meaningful data for assessing lymphoedema. Future research will include a comparison of upper limb CAML technique between healthy subjects and patients with known lymphoedema.
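
    CAML reconstructs the limb surface in CAD software, which is not reproduced here. As a much simpler, classical stand-in, the truncated-cone (frustum) method below estimates limb volume from circumferences taken at fixed intervals; the circumference values and segment length are invented for the example.

```python
import numpy as np

def truncated_cone_volume(circumferences, segment_length=4.0):
    """Estimate limb volume (cm^3) from circumferences (cm) measured at fixed
    intervals along the limb, treating each segment as a frustum."""
    c = np.asarray(circumferences, dtype=float)
    c1, c2 = c[:-1], c[1:]
    # Frustum volume from circumferences: V = h (C1^2 + C1 C2 + C2^2) / (12 pi)
    return float(np.sum(segment_length * (c1**2 + c1 * c2 + c2**2) / (12 * np.pi)))

affected = [38.5, 37.0, 35.2, 33.8, 31.0, 28.4, 26.0]   # hypothetical circumferences
healthy  = [35.0, 34.0, 32.5, 31.2, 29.0, 27.0, 25.0]
print("volume difference (cm^3):",
      round(truncated_cone_volume(affected) - truncated_cone_volume(healthy), 1))
```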

  5. PREFACE: 15th International Workshop on Advanced Computing and Analysis Techniques in Physics Research (ACAT2013)

    Science.gov (United States)

    Wang, Jianxiong

    2014-06-01

    This volume of Journal of Physics: Conference Series is dedicated to scientific contributions presented at the 15th International Workshop on Advanced Computing and Analysis Techniques in Physics Research (ACAT 2013) which took place on 16-21 May 2013 at the Institute of High Energy Physics, Chinese Academy of Sciences, Beijing, China. The workshop series brings together computer science researchers and practitioners, and researchers from particle physics and related fields to explore and confront the boundaries of computing and of automatic data analysis and theoretical calculation techniques. This year's edition of the workshop brought together over 120 participants from all over the world. 18 invited speakers presented key topics on the universe in the computer, computing in Earth sciences, multivariate data analysis, automated computation in quantum field theory, as well as computing and data analysis challenges in many fields. Over 70 other talks and posters presented state-of-the-art developments in the areas of the workshop's three tracks: Computing Technologies, Data Analysis Algorithms and Tools, and Computational Techniques in Theoretical Physics. The round table discussions on open source, knowledge sharing and scientific collaboration stimulated reflection on these issues in the respective areas. ACAT 2013 was generously sponsored by the Chinese Academy of Sciences (CAS), the National Natural Science Foundation of China (NSFC), Brookhaven National Laboratory in the USA (BNL), Peking University (PKU), the Theoretical Physics Center for Science Facilities of CAS (TPCSF-CAS) and Sugon. We would like to thank all the participants for their scientific contributions and for their enthusiastic participation in all the activities of the workshop. Further information on ACAT 2013 can be found at http://acat2013.ihep.ac.cn. Professor Jianxiong Wang Institute of High Energy Physics Chinese Academy of Science Details of committees and sponsors are available in the PDF

  6. 16th International workshop on Advanced Computing and Analysis Techniques in physics (ACAT)

    CERN Document Server

    Lokajicek, M; Tumova, N

    2015-01-01

    16th International workshop on Advanced Computing and Analysis Techniques in physics (ACAT). The ACAT workshop series, formerly AIHENP (Artificial Intelligence in High Energy and Nuclear Physics), was created back in 1990. Its main purpose is to gather researchers involved with computing in physics research, from both the physics and computer science sides, and give them a chance to communicate with each other. It has established bridges between physics and computer science research, facilitating the advances in our understanding of the Universe at its smallest and largest scales. With the Large Hadron Collider and many astronomy and astrophysics experiments collecting larger and larger amounts of data, such bridges are needed now more than ever. The 16th edition of ACAT aims to bring related researchers together, once more, to explore and confront the boundaries of computing, automatic data analysis and theoretical calculation technologies. It will create a forum for exchanging ideas among the fields an...

  7. PREFACE: 16th International workshop on Advanced Computing and Analysis Techniques in physics research (ACAT2014)

    Science.gov (United States)

    Fiala, L.; Lokajicek, M.; Tumova, N.

    2015-05-01

    This volume of the IOP Conference Series is dedicated to scientific contributions presented at the 16th International Workshop on Advanced Computing and Analysis Techniques in Physics Research (ACAT 2014), whose motto this year was "bridging disciplines". The conference took place on September 1-5, 2014, at the Faculty of Civil Engineering, Czech Technical University in Prague, Czech Republic. The 16th edition of ACAT explored the boundaries of computing system architectures, data analysis algorithmics, automatic calculations, and theoretical calculation technologies. It provided a forum for confronting and exchanging ideas among these fields, where new approaches in computing technologies for scientific research were explored and promoted. This year's edition of the workshop brought together over 140 participants from all over the world. The workshop's 16 invited speakers presented key topics on advanced computing and analysis techniques in physics. During the workshop, 60 talks and 40 posters were presented in three tracks: Computing Technology for Physics Research, Data Analysis - Algorithms and Tools, and Computations in Theoretical Physics: Techniques and Methods. The round table enabled discussions on expanding software, knowledge sharing and scientific collaboration in the respective areas. ACAT 2014 was generously sponsored by Western Digital, Brookhaven National Laboratory, Hewlett Packard, DataDirect Networks, M Computers, Bright Computing, Huawei and PDV-Systemhaus. Special appreciation goes to the track liaisons Lorenzo Moneta, Axel Naumann and Grigory Rubtsov for their work on the scientific program and the publication preparation. ACAT's IACC would also like to express its gratitude to all referees for their work in making sure the contributions are published in the proceedings. Our thanks extend to the conference liaisons Andrei Kataev and Jerome Lauret who worked with the local contacts and made this conference possible as well as to the program

  8. Computational modeling of Repeat1 region of INI1/hSNF5: An evolutionary link with ubiquitin

    Science.gov (United States)

    Bhutoria, Savita

    2016-01-01

    Abstract The structure of a protein can be very informative of its function. However, determining protein structures experimentally can often be very challenging. Computational methods have been used successfully in modeling structures with sufficient accuracy. Here we have used computational tools to predict the structure of an evolutionarily conserved and functionally significant domain of Integrase interactor (INI)1/hSNF5 protein. INI1 is a component of the chromatin remodeling SWI/SNF complex, a tumor suppressor and is involved in many protein‐protein interactions. It belongs to the SNF5 family of proteins that contain two conserved repeat (Rpt) domains. The Rpt1 domain of INI1 binds to HIV‐1 Integrase, and acts as a dominant negative mutant to inhibit viral replication. The Rpt1 domain also interacts with the oncogene c‐MYC and modulates its transcriptional activity. We carried out an ab initio modeling of a segment of INI1 protein containing the Rpt1 domain. The structural model suggested the presence of a compact and well-defined ββαα topology as the core structure in the Rpt1 domain of INI1. This topology in Rpt1 was similar to the PFU domain of Phospholipase A2 Activating Protein, PLAA. Interestingly, the PFU domain shares similarity with ubiquitin and has ubiquitin binding activity. Because of the structural similarity between the Rpt1 domain of INI1 and the PFU domain of PLAA, we propose that the Rpt1 domain of INI1 may participate in ubiquitin recognition or in binding ubiquitin or ubiquitin-related proteins. This modeling study may shed light on the mode of interaction of the Rpt1 domain of INI1 and is likely to facilitate future functional studies of INI1. PMID:27261671

  9. A new technique for on-line and off-line high speed computation

    International Nuclear Information System (INIS)

    Hartouni, E.P.; Jensen, D.A.; Klima, B.; Kreisler, M.N.; Rabin, M.S.Z.; Uribe, J.; Gottschalk, E.; Gara, A.; Knapp, B.C.

    1989-01-01

    A new technique for both on-line and off-line computation has been developed. With this technique, a reconstruction analysis in Elementary Particle Physics, otherwise prohibitively long, has been accomplished. It will be used on-line in an upcoming Fermilab experiment to reconstruct more than 100,000 events per second and to trigger on the basis of that information. The technique delivers 40 Giga operations per second, has a bandwidth on the order of Gigabytes per second and has a modest cost. An overview of the program, details of the system, and performance measurements are presented in this paper

  10. Introduction to Evolutionary Algorithms

    CERN Document Server

    Yu, Xinjie

    2010-01-01

    Evolutionary algorithms (EAs) are becoming increasingly attractive for researchers from various disciplines, such as operations research, computer science, industrial engineering, electrical engineering, social science, economics, etc. This book presents an insightful, comprehensive, and up-to-date treatment of EAs, such as genetic algorithms, differential evolution, evolution strategy, constraint optimization, multimodal optimization, multiobjective optimization, combinatorial optimization, evolvable hardware, estimation of distribution algorithms, ant colony optimization, particle swarm opti

  11. Domain Immersion Technique And Free Surface Computations Applied To Extrusion And Mixing Processes

    Science.gov (United States)

    Valette, Rudy; Vergnes, Bruno; Basset, Olivier; Coupez, Thierry

    2007-04-01

    This work focuses on the development of numerical techniques devoted to the simulation of mixing processes of complex fluids such as twin-screw extrusion or batch mixing. In mixing process simulation, the absence of symmetry of the moving boundaries (the screws or the rotors) implies that their rigid body motion has to be taken into account by using a special treatment. We therefore use a mesh immersion technique (MIT), which consists in using a P1+/P1-based (MINI-element) mixed finite element method for solving the velocity-pressure problem and then solving the problem in the whole barrel cavity by imposing a rigid motion (rotation) on the nodes located inside the so-called immersed domain, each subdomain (screw, rotor) being represented by a surface CAD mesh (or its mathematical equation in simple cases). The independent meshes are immersed into a unique background computational mesh by computing the distance function to their boundaries. Intersections of meshes are accounted for, allowing a fill factor to be computed and used as in the VOF methodology. This technique, combined with the use of parallel computing, makes it possible to compute the time-dependent flow of generalized Newtonian fluids, including yield stress fluids, in a complex system such as a twin-screw extruder, including moving free surfaces, which are treated by a "level set" and Hamilton-Jacobi method.

  12. Prediction of scour caused by 2D horizontal jets using soft computing techniques

    Directory of Open Access Journals (Sweden)

    Masoud Karbasi

    2017-12-01

    Full Text Available This paper presents the application of five soft-computing techniques, artificial neural networks, support vector regression, gene expression programming, the group method of data handling (GMDH) neural network and the adaptive-network-based fuzzy inference system, to predict the maximum scour hole depth downstream of a sluice gate. The input parameters affecting the scour depth are the sediment size and its gradation, apron length, sluice gate opening, jet Froude number and the tailwater depth. Six non-dimensional parameters were derived to define a functional relationship between the input and output variables. Published data from experimental studies were used. The results of the soft-computing techniques were compared with empirical and regression based equations. The results obtained from the soft-computing techniques are superior to those of empirical and regression based equations. Comparison of the soft-computing techniques showed that the accuracy of the ANN model is higher than that of the other models (RMSE = 0.869). A new GEP based equation was proposed.
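
    As an illustration of how such a comparison can be set up (this is not the paper's code; the data, inputs and network size are invented), the sketch below trains a small neural-network regressor on synthetic non-dimensional parameters and reports its RMSE against a linear-regression baseline, assuming scikit-learn is available.

        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.linear_model import LinearRegression
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import mean_squared_error

        rng = np.random.default_rng(0)

        # Synthetic stand-in for the six non-dimensional input parameters.
        X = rng.uniform(0.1, 5.0, size=(200, 6))
        # Invented relation used only to generate example scour-depth targets.
        y = 0.8 * X[:, 0] ** 0.7 * X[:, 2] ** 0.3 / (1.0 + X[:, 4]) \
            + rng.normal(0, 0.05, 200)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

        ann = MLPRegressor(hidden_layer_sizes=(10, 10), max_iter=5000,
                           random_state=1).fit(X_tr, y_tr)
        mlr = LinearRegression().fit(X_tr, y_tr)

        for name, model in [("ANN", ann), ("regression", mlr)]:
            rmse = mean_squared_error(y_te, model.predict(X_te)) ** 0.5
            print(f"{name}: RMSE = {rmse:.3f}")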

  13. Development of a technique for three-dimensional image reconstruction from emission computed tomograms (ECT)

    International Nuclear Information System (INIS)

    Gerischer, R.

    1987-01-01

    The described technique for three-dimensional image reconstruction from ECT sections is based on a simple procedure, which can be carried out with the aid of any standard-type computer used in nuclear medicine and requires no sophisticated arithmetic approach. (TRV) [de

  14. Evolutionary institutionalism.

    Science.gov (United States)

    Fürstenberg, Dr Kai

    Institutions are hard to define and hard to study. Long prominent in political science have been two theories: Rational Choice Institutionalism (RCI) and Historical Institutionalism (HI). Arising from the life sciences is now a third: Evolutionary Institutionalism (EI). Comparative strengths and weaknesses of these three theories warrant review, and the value-to-be-added by expanding the third beyond Darwinian evolutionary theory deserves consideration. Should evolutionary institutionalism expand to accommodate new understanding in ecology, such as might apply to the emergence of stability, and in genetics, such as might apply to political behavior? Core arguments are reviewed for each theory with more detailed exposition of the third, EI. Particular attention is paid to EI's gene-institution analogy; to variation, selection, and retention of institutional traits; to endogeneity and exogeneity; to agency and structure; and to ecosystem effects, institutional stability, and empirical limitations in behavioral genetics. RCI, HI, and EI are distinct but complementary. Institutional change, while amenable to rational-choice analysis and, retrospectively, to critical-juncture and path-dependency analysis, is also, and importantly, ecological. Stability, like change, is an emergent property of institutions, which tend to stabilize after change in a manner analogous to allopatric speciation. EI is more than metaphorically biological in that institutional behaviors are driven by human behaviors whose evolution long preceded the appearance of institutions themselves.

  15. Now and next-generation sequencing techniques: future of sequence analysis using cloud computing.

    Science.gov (United States)

    Thakur, Radhe Shyam; Bandopadhyay, Rajib; Chaudhary, Bratati; Chatterjee, Sourav

    2012-01-01

    Advances in the field of sequencing techniques have resulted in the greatly accelerated production of huge sequence datasets. This presents immediate challenges in database maintenance at datacenters. It provides additional computational challenges in data mining and sequence analysis. Together these represent a significant overburden on traditional stand-alone computer resources, and to reach effective conclusions quickly and efficiently, the virtualization of the resources and computation on a pay-as-you-go concept (together termed "cloud computing") has recently appeared. The collective resources of the datacenter, including both hardware and software, can be available publicly, being then termed a public cloud, the resources being provided in a virtual mode to the clients who pay according to the resources they employ. Examples of public companies providing these resources include Amazon, Google, and Joyent. The computational workload is shifted to the provider, which also implements required hardware and software upgrades over time. A virtual environment is created in the cloud corresponding to the computational and data storage needs of the user via the internet. The task is then performed, the results transmitted to the user, and the environment finally deleted after all tasks are completed. In this discussion, we focus on the basics of cloud computing, and go on to analyze the prerequisites and overall working of clouds. Finally, the applications of cloud computing in biological systems, particularly in comparative genomics, genome informatics, and SNP detection are discussed with reference to traditional workflows.

  16. Secure Scientific Applications Scheduling Technique for Cloud Computing Environment Using Global League Championship Algorithm

    Science.gov (United States)

    Abdulhamid, Shafi’i Muhammad; Abd Latiff, Muhammad Shafie; Abdul-Salaam, Gaddafi; Hussain Madni, Syed Hamid

    2016-01-01

    Cloud computing system is a huge cluster of interconnected servers residing in a datacenter and dynamically provisioned to clients on-demand via a front-end interface. Scientific applications scheduling in the cloud computing environment is identified as an NP-hard problem due to the dynamic nature of heterogeneous resources. Recently, a number of metaheuristic optimization schemes have been applied to address the challenges of applications scheduling in the cloud system, without much emphasis on the issue of secure global scheduling. In this paper, a scientific applications scheduling technique using the Global League Championship Algorithm (GBLCA) optimization technique is first presented for global task scheduling in the cloud environment. The experiment is carried out using the CloudSim simulator. The experimental results show that the proposed GBLCA technique produced a remarkable performance improvement rate on the makespan that ranges from 14.44% to 46.41%. It also shows a significant reduction in the time taken to securely schedule applications, as parametrically measured in terms of the response time. In view of the experimental results, the proposed technique provides a better-quality scheduling solution that is suitable for scientific applications task execution in the Cloud Computing environment than the MinMin, MaxMin, Genetic Algorithm (GA) and Ant Colony Optimization (ACO) scheduling techniques. PMID:27384239

  17. Secure Scientific Applications Scheduling Technique for Cloud Computing Environment Using Global League Championship Algorithm.

    Science.gov (United States)

    Abdulhamid, Shafi'i Muhammad; Abd Latiff, Muhammad Shafie; Abdul-Salaam, Gaddafi; Hussain Madni, Syed Hamid

    2016-01-01

    Cloud computing system is a huge cluster of interconnected servers residing in a datacenter and dynamically provisioned to clients on-demand via a front-end interface. Scientific applications scheduling in the cloud computing environment is identified as an NP-hard problem due to the dynamic nature of heterogeneous resources. Recently, a number of metaheuristic optimization schemes have been applied to address the challenges of applications scheduling in the cloud system, without much emphasis on the issue of secure global scheduling. In this paper, a scientific applications scheduling technique using the Global League Championship Algorithm (GBLCA) optimization technique is first presented for global task scheduling in the cloud environment. The experiment is carried out using the CloudSim simulator. The experimental results show that the proposed GBLCA technique produced a remarkable performance improvement rate on the makespan that ranges from 14.44% to 46.41%. It also shows a significant reduction in the time taken to securely schedule applications, as parametrically measured in terms of the response time. In view of the experimental results, the proposed technique provides a better-quality scheduling solution that is suitable for scientific applications task execution in the Cloud Computing environment than the MinMin, MaxMin, Genetic Algorithm (GA) and Ant Colony Optimization (ACO) scheduling techniques.
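
    Independently of the metaheuristic used, the two metrics quoted above are easy to compute once tasks have been assigned to virtual machines. The sketch below evaluates the makespan and mean response time of a given assignment; the task lengths, VM speeds and the random schedule are invented stand-ins for the output of a scheduler such as GBLCA, GA or ACO, and this is not the CloudSim experiment itself.

        import random

        def evaluate_schedule(assignment, task_lengths, vm_speeds):
            """Makespan and mean response time of a schedule: assignment[i] is the
            VM chosen for task i; tasks queued on one VM run first-come, first-served."""
            finish = [0.0] * len(vm_speeds)      # running finish time per VM
            response = []
            for task, vm in enumerate(assignment):
                finish[vm] += task_lengths[task] / vm_speeds[vm]
                response.append(finish[vm])      # completion time of this task
            return max(finish), sum(response) / len(response)

        if __name__ == "__main__":
            random.seed(0)
            lengths = [random.uniform(50, 500) for _ in range(40)]       # task sizes (MI)
            speeds = [500, 750, 1000, 1250]                              # VM speeds (MIPS)
            schedule = [random.randrange(len(speeds)) for _ in lengths]  # e.g. from a scheduler
            makespan, mean_rt = evaluate_schedule(schedule, lengths, speeds)
            print(f"makespan = {makespan:.2f} s, mean response time = {mean_rt:.2f} s")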

  18. 17th International Workshop on Advanced Computing and Analysis Techniques in Physics Research (ACAT 2016)

    International Nuclear Information System (INIS)

    2016-01-01

    Preface The 2016 edition of the International Workshop on Advanced Computing and Analysis Techniques in Physics Research took place on January 18-22, 2016, at the Universidad Técnica Federico Santa Maria (UTFSM) in Valparaiso, Chile. The present volume of IOP Conference Series is devoted to the selected scientific contributions presented at the workshop. In order to guarantee the scientific quality of the Proceedings, all papers were thoroughly peer-reviewed by an ad-hoc Editorial Committee with the help of many careful reviewers. The ACAT Workshop series has a long tradition starting in 1990 (Lyon, France), and takes place at intervals of a year and a half. Formerly these workshops were known under the name AIHENP (Artificial Intelligence for High Energy and Nuclear Physics). Each edition brings together experimental and theoretical physicists and computer scientists/experts from particle and nuclear physics, astronomy and astrophysics in order to exchange knowledge and experience in computing and data analysis in physics. Three tracks cover the main topics: Computing technology: languages and system architectures. Data analysis: algorithms and tools. Theoretical Physics: techniques and methods. Although most contributions and discussions are related to particle physics and computing, other fields like condensed matter physics, earth physics and biophysics are often addressed, in the hope of sharing our approaches and visions. The workshop created a forum for exchanging ideas among fields, exploring and promoting cutting-edge computing technologies and debating hot topics. (paper)

  19. Dual-scan technique for the customization of zirconia computer-aided design/computer-aided manufacturing frameworks.

    Science.gov (United States)

    Andreiuolo, Rafael Ferrone; Sabrosa, Carlos Eduardo; Dias, Katia Regina H Cervantes

    2013-09-01

    The use of bi-layered all-ceramic crowns has grown continuously since the introduction of computer-aided design/computer-aided manufacturing (CAD/CAM) zirconia cores. Unfortunately, despite the outstanding mechanical properties of zirconia, problems related to porcelain cracking or chipping remain. One of the reasons for this is that ceramic copings are usually milled to uniform thicknesses of 0.3-0.6 mm around the whole tooth preparation. This may not provide uniform thickness or appropriate support for the veneering porcelain. To prevent these problems, the dual-scan technique demonstrated here offers an alternative that allows the restorative team to customize zirconia CAD/CAM frameworks with adequate porcelain thickness and support in a simple manner.

  20. Hardware for soft computing and soft computing for hardware

    CERN Document Server

    Nedjah, Nadia

    2014-01-01

    Single and Multi-Objective Evolutionary Computation (MOEA), Genetic Algorithms (GAs), Artificial Neural Networks (ANNs), Fuzzy Controllers (FCs), Particle Swarm Optimization (PSO) and Ant Colony Optimization (ACO) are becoming omnipresent in almost every intelligent system design. Unfortunately, the application of the majority of these techniques is complex and so requires a huge computational effort to yield useful and practical results. Therefore, dedicated hardware for evolutionary, neural and fuzzy computation is a key issue for designers. With the spread of reconfigurable hardware such as FPGAs, digital as well as analog hardware implementations of such computation become cost-effective. The idea behind this book is to offer a variety of hardware designs for soft computing techniques that can be embedded in any final product. It also introduces the successful application of soft computing techniques to solve many hard problems encountered during the design of embedded hardware. Reconfigurable em...

  1. Rayleigh’s quotient–based damage detection algorithm: Theoretical concepts, computational techniques, and field implementation strategies

    DEFF Research Database (Denmark)

    NJOMO WANDJI, Wilfried

    2017-01-01

    Three levels of damage assessment are targeted: existence, location, and severity. The proposed algorithm is analytically developed from the dynamics theory and the virtual energy principle. Some computational techniques are proposed for carrying out computations, including discretization, integration, derivation, and suitable...
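
    For readers unfamiliar with the quantity at the heart of the algorithm, the Rayleigh quotient of a vector phi with respect to the stiffness and mass matrices is R(phi) = (phi^T K phi) / (phi^T M phi), and a shift of this quotient between a reference state and the current state can indicate damage. The sketch below is a generic illustration on a small spring-mass chain (assuming NumPy/SciPy), not the author's implementation.

        import numpy as np
        from scipy.linalg import eigh

        def chain_matrices(springs, mass=1.0):
            """K and M for a fixed-free spring-mass chain: spring i connects mass i
            to mass i-1 (spring 0 connects mass 0 to the support)."""
            n = len(springs)
            K = np.zeros((n, n))
            for i, k in enumerate(springs):
                K[i, i] += k
                if i > 0:
                    K[i - 1, i - 1] += k
                    K[i - 1, i] -= k
                    K[i, i - 1] -= k
            return K, mass * np.eye(n)

        def rayleigh_quotient(phi, K, M):
            return float(phi @ K @ phi) / float(phi @ M @ phi)

        if __name__ == "__main__":
            K0, M = chain_matrices([1000.0] * 5)                             # reference state
            Kd, _ = chain_matrices([1000.0, 1000.0, 700.0, 1000.0, 1000.0])  # 30% loss, 3rd spring
            _, modes = eigh(K0, M)
            phi1 = modes[:, 0]                  # first mode of the reference structure
            print("Rayleigh quotient, reference:", rayleigh_quotient(phi1, K0, M))
            print("Rayleigh quotient, damaged  :", rayleigh_quotient(phi1, Kd, M))
            # A drop in the quotient signals the existence of damage.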

  2. A Computer Program for Simplifying Incompletely Specified Sequential Machines Using the Paull and Unger Technique

    Science.gov (United States)

    Ebersole, M. M.; Lecoq, P. E.

    1968-01-01

    This report presents a description of a computer program mechanized to perform the Paull and Unger process of simplifying incompletely specified sequential machines. An understanding of the process, as given in Ref. 3, is a prerequisite to the use of the techniques presented in this report. This process has specific application in the design of asynchronous digital machines and was used in the design of operational support equipment for the Mariner 1966 central computer and sequencer. A typical sequential machine design problem is presented to show where the Paull and Unger process has application. A description of the Paull and Unger process together with a description of the computer algorithms used to develop the program mechanization are presented. Several examples are used to clarify the Paull and Unger process and the computer algorithms. Program flow diagrams, program listings, and program user operating procedures are included as appendixes.

  3. Quality-of-service sensitivity to bio-inspired/evolutionary computational methods for intrusion detection in wireless ad hoc multimedia sensor networks

    Science.gov (United States)

    Hortos, William S.

    2012-06-01

    In the author's previous work, a cross-layer protocol approach to wireless sensor network (WSN) intrusion detection and identification is created with multiple bio-inspired/evolutionary computational methods applied to the functions of the protocol layers, a single method to each layer, to improve the intrusion-detection performance of the protocol over that of one method applied to only a single layer's functions. The WSN cross-layer protocol design embeds GAs, anti-phase synchronization, ACO, and a trust model based on quantized data reputation at the physical, MAC, network, and application layers, respectively. The construct neglects to assess the net effect of the combined bio-inspired methods on the quality-of-service (QoS) performance for "normal" data streams, that is, streams without intrusions. Analytic expressions of throughput, delay, and jitter, coupled with simulation results for WSNs free of intrusion attacks, are the basis for sensitivity analyses of QoS metrics for normal traffic to the bio-inspired methods.

  4. A Model for the Acceptance of Cloud Computing Technology Using DEMATEL Technique and System Dynamics Approach

    Directory of Open Access Journals (Sweden)

    seyyed mohammad zargar

    2018-03-01

    Full Text Available Cloud computing is a new method to provide computing resources and increase computing power in organizations. Despite its many benefits, it has not been universally adopted because of obstacles, including security issues, that have become a concern for IT managers in organizations. In this paper, the general definition of cloud computing is presented. In addition, having reviewed previous studies, the researchers identified variables that affect technology acceptance and, especially, the acceptance of cloud computing technology. Then, using the DEMATEL technique, the degree to which each variable influences, and is influenced by, the others was determined. The researchers also designed a model to show the existing dynamics in cloud computing technology using the system dynamics approach. The validity of the model was confirmed through standard evaluation methods for dynamic models using the VENSIM software. Finally, based on different conditions of the proposed model, a variety of scenarios were designed, and the implementation of these scenarios was simulated within the proposed model. The results showed that any increase in data security, government support and user training can lead to an increase in the adoption and use of cloud computing technology.
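
    For reference, the core of the DEMATEL technique reduces to a few matrix operations: the expert direct-influence matrix is normalized and the total-relation matrix is obtained as T = D(I - D)^-1, from which the prominence (R + C) and relation (R - C) of each factor follow. The sketch below uses an invented 4x4 influence matrix, not the paper's data.

        import numpy as np

        # Invented expert ratings of how strongly factor i influences factor j
        # (0 = none ... 4 = very high); the factors could be, e.g., security,
        # cost, government support and user training.
        A = np.array([[0, 3, 2, 1],
                      [2, 0, 1, 3],
                      [3, 1, 0, 2],
                      [1, 2, 3, 0]], dtype=float)

        # Normalize by the largest row/column sum, then compute the
        # total-relation matrix T = D (I - D)^-1.
        s = max(A.sum(axis=1).max(), A.sum(axis=0).max())
        D = A / s
        T = D @ np.linalg.inv(np.eye(len(A)) - D)

        R = T.sum(axis=1)     # total influence exerted by each factor
        C = T.sum(axis=0)     # total influence received by each factor
        for i, (prominence, relation) in enumerate(zip(R + C, R - C)):
            role = "cause" if relation > 0 else "effect"
            print(f"factor {i}: prominence = {prominence:.2f}, relation = {relation:+.2f} ({role})")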

  5. Recent advances in 3D computed tomography techniques for simulation and navigation in hepatobiliary pancreatic surgery.

    Science.gov (United States)

    Uchida, Masafumi

    2014-04-01

    A few years ago it could take several hours to complete a 3D image using a 3D workstation. Thanks to advances in computer science, obtaining results of interest now requires only a few minutes. Many recent 3D workstations or multimedia computers are equipped with onboard 3D virtual patient modeling software, which enables patient-specific preoperative assessment and virtual planning, navigation, and tool positioning. Although medical 3D imaging can now be conducted using various modalities, including computed tomography (CT), magnetic resonance imaging (MRI), positron emission tomography (PET), and ultrasonography (US) among others, the highest quality images are obtained using CT data, and CT images are now the most commonly used source of data for 3D simulation and navigation images. If the 2D source image is bad, no amount of 3D image manipulation in software will provide a quality 3D image. In this exhibition, the recent advances in CT imaging techniques and 3D visualization of hepatobiliary and pancreatic abnormalities are featured, including scan and image reconstruction techniques, contrast-enhanced techniques, new applications of advanced CT scan techniques, and new virtual reality simulation and navigation imaging. © 2014 Japanese Society of Hepato-Biliary-Pancreatic Surgery.

  6. Application of Soft Computing Techniques and Multiple Regression Models for CBR prediction of Soils

    Directory of Open Access Journals (Sweden)

    Fatimah Khaleel Ibrahim

    2017-08-01

    Full Text Available Soft computing techniques such as Artificial Neural Networks (ANN) have improved prediction capability and have found real application in geotechnical engineering. The aim of this research is to utilize a soft computing technique and Multiple Regression Models (MLR) for forecasting the California bearing ratio (CBR) of soil from its index properties. The CBR of soil can be predicted from various soil characterization parameters with the assistance of MLR and ANN methods. The database was collected in the laboratory by conducting tests on 86 soil samples gathered from different projects in Basrah districts. Data obtained from the experimental results were used in the regression models and in the soft computing technique using an artificial neural network. The liquid limit, plasticity index, modified compaction and CBR tests were carried out. In this work, different ANN and MLR models were formulated with different collections of inputs in order to recognize their significance in the prediction of CBR. The strengths of the developed models were examined in terms of the regression coefficient (R2), relative error (RE%) and mean square error (MSE) values. The results show that all the proposed ANN models perform better than the MLR model, and the ANN model with all input parameters reveals better outcomes than the other ANN models.
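
    As a minimal illustration of the MLR side of such a study (the index properties, sample values and coefficients below are invented, not the paper's measurements), the sketch fits a multiple linear regression of CBR on a few index properties and reports the R2, MSE and relative-error measures mentioned above, assuming scikit-learn is available.

        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.metrics import r2_score, mean_squared_error

        rng = np.random.default_rng(42)

        # Invented index properties for 86 samples: liquid limit (%), plasticity
        # index (%) and maximum dry density (g/cm3).
        n = 86
        LL = rng.uniform(25, 60, n)
        PI = rng.uniform(5, 35, n)
        MDD = rng.uniform(1.6, 2.1, n)
        X = np.column_stack([LL, PI, MDD])

        # Invented relation used only to generate example CBR values.
        cbr = 5.0 + 30.0 * (MDD - 1.5) - 0.1 * PI - 0.03 * LL + rng.normal(0, 0.8, n)

        model = LinearRegression().fit(X, cbr)
        pred = model.predict(X)

        r2 = r2_score(cbr, pred)
        mse = mean_squared_error(cbr, pred)
        re = np.mean(np.abs((cbr - pred) / cbr)) * 100.0     # relative error, %
        print(f"R2 = {r2:.3f}, MSE = {mse:.3f}, RE% = {re:.1f}")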

  7. Combining Acceleration Techniques for Low-Dose X-Ray Cone Beam Computed Tomography Image Reconstruction.

    Science.gov (United States)

    Huang, Hsuan-Ming; Hsiao, Ing-Tsung

    2017-01-01

    Over the past decade, image quality in low-dose computed tomography has been greatly improved by various compressive sensing- (CS-) based reconstruction methods. However, these methods have some disadvantages including high computational cost and slow convergence rate. Many different speed-up techniques for CS-based reconstruction algorithms have been developed. The purpose of this paper is to propose a fast reconstruction framework that combines a CS-based reconstruction algorithm with several speed-up techniques. First, total difference minimization (TDM) was implemented using the soft-threshold filtering (STF). Second, we combined TDM-STF with the ordered subsets transmission (OSTR) algorithm for accelerating the convergence. To further speed up the convergence of the proposed method, we applied the power factor and the fast iterative shrinkage thresholding algorithm to OSTR and TDM-STF, respectively. Results obtained from simulation and phantom studies showed that many speed-up techniques could be combined to greatly improve the convergence speed of a CS-based reconstruction algorithm. More importantly, the increased computation time (≤10%) was minor as compared to the acceleration provided by the proposed method. In this paper, we have presented a CS-based reconstruction framework that combines several acceleration techniques. Both simulation and phantom studies provide evidence that the proposed method has the potential to satisfy the requirement of fast image reconstruction in practical CT.
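
    The shrinkage operation at the heart of soft-threshold filtering has the closed form shrink(x, t) = sign(x) * max(|x| - t, 0), and the fast iterative shrinkage thresholding algorithm (FISTA) mentioned above simply adds a momentum step to the basic iteration. The sketch below applies FISTA to a toy sparse-recovery problem; it is a generic illustration of these two ingredients, not the paper's CT reconstruction pipeline.

        import numpy as np

        def soft_threshold(x, t):
            """Shrinkage operator used in soft-threshold filtering."""
            return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

        def fista(A, b, lam, iters=200):
            """FISTA for min_x 0.5*||Ax - b||^2 + lam*||x||_1 (toy problem)."""
            L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
            x = np.zeros(A.shape[1])
            y, t = x.copy(), 1.0
            for _ in range(iters):
                grad = A.T @ (A @ y - b)
                x_new = soft_threshold(y - grad / L, lam / L)
                t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
                y = x_new + ((t - 1) / t_new) * (x_new - x)   # momentum (acceleration) step
                x, t = x_new, t_new
            return x

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            n, m, k = 200, 80, 8                     # signal size, measurements, sparsity
            x_true = np.zeros(n)
            x_true[rng.choice(n, k, replace=False)] = rng.normal(0, 1, k)
            A = rng.normal(0, 1 / np.sqrt(m), (m, n))
            b = A @ x_true + rng.normal(0, 0.01, m)
            x_hat = fista(A, b, lam=0.02)
            print("relative recovery error:",
                  float(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)))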

  8. A novel technique for presurgical nasoalveolar molding using computer-aided reverse engineering and rapid prototyping.

    Science.gov (United States)

    Yu, Quan; Gong, Xin; Wang, Guo-Min; Yu, Zhe-Yuan; Qian, Yu-Fen; Shen, Gang

    2011-01-01

    To establish a new method of presurgical nasoalveolar molding (NAM) using computer-aided reverse engineering and rapid prototyping technique in infants with unilateral cleft lip and palate (UCLP). Five infants (2 males and 3 females with mean age of 1.2 w) with complete UCLP were recruited. All patients were subjected to NAM before the cleft lip repair. The upper denture casts were recorded using a three-dimensional laser scanner within 2 weeks after birth in UCLP infants. A digital model was constructed and analyzed to simulate the NAM procedure with reverse engineering software. The digital geometrical data were exported to print the solid model with rapid prototyping system. The whole set of appliances was fabricated based on these solid models. Laser scanning and digital model construction simplified the NAM procedure and estimated the treatment objective. The appliances were fabricated based on the rapid prototyping technique, and for each patient, the complete set of appliances could be obtained at one time. By the end of presurgical NAM treatment, the cleft was narrowed, and the malformation of nasoalveolar segments was aligned normally. We have developed a novel technique of presurgical NAM based on a computer-aided design. The accurate digital denture model of UCLP infants could be obtained with laser scanning. The treatment design and appliance fabrication could be simplified with a computer-aided reverse engineering and rapid prototyping technique.

  9. Molluscan Evolutionary Genomics

    Energy Technology Data Exchange (ETDEWEB)

    Simison, W. Brian; Boore, Jeffrey L.

    2005-12-01

    In the last 20 years there have been dramatic advances in techniques of high-throughput DNA sequencing, most recently accelerated by the Human Genome Project, a program that has determined the three billion base pair code on which we are based. Now this tremendous capability is being directed at other genome targets that are being sampled across the broad range of life. This opens up opportunities as never before for evolutionary and organismal biologists to address questions of both processes and patterns of organismal change. We stand at the dawn of a new 'modern synthesis' period, paralleling that of the early 20th century when the fledgling field of genetics first identified the underlying basis for Darwin's theory. We must now unite the efforts of systematists, paleontologists, mathematicians, computer programmers, molecular biologists, developmental biologists, and others in the pursuit of discovering what genomics can teach us about the diversity of life. Genome-level sampling for mollusks to date has mostly been limited to mitochondrial genomes and it is likely that these will continue to provide the best targets for broad phylogenetic sampling in the near future. However, we are just beginning to see an inroad into complete nuclear genome sequencing, with several mollusks and other eutrochozoans having been selected for work about to begin. Here, we provide an overview of the state of molluscan mitochondrial genomics, highlight a few of the discoveries from this research, outline the promise of broadening this dataset, describe upcoming projects to sequence whole mollusk nuclear genomes, and challenge the community to prepare for making the best use of these data.

  10. Unified commutation-pruning technique for efficient computation of composite DFTs

    Science.gov (United States)

    Castro-Palazuelos, David E.; Medina-Melendrez, Modesto Gpe.; Torres-Roman, Deni L.; Shkvarko, Yuriy V.

    2015-12-01

    An efficient computation of a composite length discrete Fourier transform (DFT), as well as a fast Fourier transform (FFT) of both time and space data sequences in uncertain (non-sparse or sparse) computational scenarios, requires specific processing algorithms. Traditional algorithms typically employ some pruning methods without any commutations, which prevents them from attaining the potential computational efficiency. In this paper, we propose an alternative unified approach with automatic commutations between three computational modalities aimed at efficient computations of the pruned DFTs adapted for variable composite lengths of the non-sparse input-output data. The first modality is an implementation of the direct computation of a composite length DFT, the second one employs the second-order recursive filtering method, and the third one performs the new pruned decomposed transform. The pruned decomposed transform algorithm performs the decimation in time or space (DIT) in the data acquisition domain and, then, decimation in frequency (DIF). The unified combination of these three algorithms is addressed as the DFTCOMM technique. Based on the treatment of the combinational-type hypothesis-testing optimization problem of preferable allocations between all feasible commuting-pruning modalities, we have found the global optimal solution to the pruning problem that always requires fewer or, at most, the same number of arithmetic operations as the other feasible modalities. The DFTCOMM method outperforms the existing competing pruning techniques in the sense of attainable savings in the number of required arithmetic operations, requiring fewer or at most the same number of arithmetic operations for its execution than any of the competing pruning methods reported in the literature. Finally, we provide the comparison of the DFTCOMM with the recently developed sparse fast Fourier transform (SFFT) algorithmic family. We feature that, in the sensing scenarios with
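
    To illustrate what output pruning means in this setting (a generic sketch, unrelated to the DFTCOMM implementation), the function below computes only a requested subset of DFT bins by direct evaluation, which costs O(N x number of bins) operations instead of the O(N^2) of a full direct DFT.

        import numpy as np

        def pruned_dft(x, bins):
            """Compute only the requested DFT bins of x by direct evaluation.
            Cost is O(N * len(bins)) instead of O(N^2) for a full direct DFT,
            or O(N log N) for an FFT that still produces every bin."""
            x = np.asarray(x, dtype=complex)
            n = np.arange(x.size)
            return np.array([np.sum(x * np.exp(-2j * np.pi * k * n / x.size))
                             for k in bins])

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            x = rng.normal(size=1024)
            wanted = [0, 3, 17, 511]                  # only four output bins are needed
            pruned = pruned_dft(x, wanted)
            full = np.fft.fft(x)[wanted]              # reference: full FFT, then select
            print("max abs difference:", float(np.max(np.abs(pruned - full))))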

  11. Quality comparison between DEF-10 digital image from simulation technique and Computed Radiography (CR) technique in industrial radiography

    International Nuclear Information System (INIS)

    Siti Nur Syatirah Ismail

    2012-01-01

    The study was conducted to compare the digital image quality of DEF-10 obtained from the simulation and computed radiography (CR) techniques. The sample used is DEF-10 steel with a thickness of 15.28 mm. In this study, the sample is exposed to radiation from an X-ray machine (ISOVOLT Titan E) with certain parameters. The parameters used in this study, such as current, voltage, exposure time and distance, are specified. The current and distance are fixed at 3 mA and 700 mm respectively, while the applied voltage varies at 140, 160, 180 and 200 kV. The exposure time is reduced at rates of 0, 20, 40, 60 and 80 % for each sample exposure. The simulated digital images were produced with the aRTist software, whereas the computed radiography images were produced from an imaging plate. Both sets of images were then compared qualitatively (sensitivity) and quantitatively (Signal-to-Noise Ratio, SNR; Basic Spatial Resolution, SRb; and LOP size) using the Isee software. Radiographic sensitivity is indicated by the Image Quality Indicator (IQI), that is, the ability of the CR system and the aRTist software to identify the wire-type IQI when the exposure time is reduced by up to 80% according to the exposure chart (D7; ISOVOLT Titan E). The thinnest wire resolved in both the simulated and CR radiographs was wire number 7, rather than wire number 8 as required by the standard. In the quantitative comparison, this study shows that the SNR values decrease with reduced exposure time. The SRb values increase for simulation and decrease for CR when the exposure time decreases, and good image quality can still be achieved at an 80% reduction in exposure time. The high SNR and SRb values produced good image quality for the CR and simulation techniques respectively. (author)

  12. Parallel Evolutionary Optimization for Neuromorphic Network Training

    Energy Technology Data Exchange (ETDEWEB)

    Schuman, Catherine D [ORNL; Disney, Adam [University of Tennessee (UT); Singh, Susheela [North Carolina State University (NCSU), Raleigh; Bruer, Grant [University of Tennessee (UT); Mitchell, John Parker [University of Tennessee (UT); Klibisz, Aleksander [University of Tennessee (UT); Plank, James [University of Tennessee (UT)

    2016-01-01

    One of the key impediments to the success of current neuromorphic computing architectures is the issue of how best to program them. Evolutionary optimization (EO) is one promising programming technique; in particular, its wide applicability makes it especially attractive for neuromorphic architectures, which can have many different characteristics. In this paper, we explore different facets of EO on a spiking neuromorphic computing model called DANNA. We focus on the performance of EO in the design of our DANNA simulator, and on how to structure EO on both multicore and massively parallel computing systems. We evaluate how our parallel methods impact the performance of EO on Titan, the U.S.'s largest open science supercomputer, and BOB, a Beowulf-style cluster of Raspberry Pi's. We also focus on how to improve the EO by evaluating commonality in higher performing neural networks, and present the results of a study that evaluates the EO performed by Titan.
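
    A common way to structure EO on a multicore machine, in the spirit of (though far simpler than) the DANNA study, is to parallelize the fitness evaluations, since each candidate can be scored independently. The sketch below distributes an invented fitness function over a process pool; the population model and parameters are illustrative only.

        import multiprocessing as mp
        import random

        def fitness(candidate):
            """Stand-in for an expensive network simulation returning a score."""
            return -sum((x - 0.5) ** 2 for x in candidate)

        def mutate(candidate, sigma=0.1):
            return [min(max(x + random.gauss(0, sigma), 0.0), 1.0) for x in candidate]

        def evolve(pop_size=32, dims=10, generations=20, workers=4):
            random.seed(0)
            population = [[random.random() for _ in range(dims)] for _ in range(pop_size)]
            with mp.Pool(processes=workers) as pool:
                for _ in range(generations):
                    scores = pool.map(fitness, population)   # evaluations run in parallel
                    ranked = [c for _, c in sorted(zip(scores, population), reverse=True)]
                    parents = ranked[: pop_size // 4]
                    population = parents + [mutate(random.choice(parents))
                                            for _ in range(pop_size - len(parents))]
                scores = pool.map(fitness, population)
            return max(zip(scores, population))

        if __name__ == "__main__":
            best_score, best_candidate = evolve()
            print("best fitness:", best_score)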

  13. Automated computation of femoral angles in dogs from three-dimensional computed tomography reconstructions: Comparison with manual techniques.

    Science.gov (United States)

    Longo, F; Nicetto, T; Banzato, T; Savio, G; Drigo, M; Meneghello, R; Concheri, G; Isola, M

    2018-02-01

    The aim of this ex vivo study was to test a novel three-dimensional (3D) automated computer-aided design (CAD) method (aCAD) for the computation of femoral angles in dogs from 3D reconstructions of computed tomography (CT) images. The repeatability and reproducibility of three methodologies (manual radiography, manual CT reconstructions and the aCAD method) were evaluated for the measurement of three femoral angles: (1) anatomical lateral distal femoral angle (aLDFA); (2) femoral neck angle (FNA); and (3) femoral torsion angle (FTA). Femoral angles of 22 femurs obtained from 16 cadavers were measured by three blinded observers. Measurements were repeated three times by each observer for each diagnostic technique. Femoral angle measurements were analysed using a mixed effects linear model for repeated measures to determine the levels of intra-observer agreement (repeatability) and inter-observer agreement (reproducibility). Repeatability and reproducibility of measurements using the aCAD method were excellent (intra-class coefficients, ICCs≥0.98) for all three angles assessed. Manual radiography and CT exhibited excellent agreement for the aLDFA measurement (ICCs≥0.90). However, FNA repeatability and reproducibility were poor for the manual techniques. The 3D aCAD method provided the highest repeatability and reproducibility among the tested methodologies. Copyright © 2017 Elsevier Ltd. All rights reserved.

  14. Head and neck computed tomography virtual endoscopy: evaluation of a new imaging technique.

    Science.gov (United States)

    Gallivan, R P; Nguyen, T H; Armstrong, W B

    1999-10-01

    To evaluate a new radiographic imaging technique: computed tomography virtual endoscopy (CTVE) for head and neck tumors. Twenty-one patients presenting with head and neck masses who underwent axial computed tomography (CT) scan with contrast were evaluated by CTVE. Comparisons were made with video-recorded images and operative records to evaluate the potential utility of this new imaging technique. Twenty-one patients with aerodigestive head and neck tumors were evaluated by CTVE. One patient had a nasal cylindrical cell papilloma; the remainder, squamous cell carcinomas distributed throughout the upper aerodigestive tract. Patients underwent complete head and neck examination, flexible laryngoscopy, axial CT with contrast, CTVE, and in most cases, operative endoscopy. Available clinical and radiographic evaluations were compared and correlated to CTVE findings. CTVE accurately demonstrated abnormalities caused by intraluminal tumor, but where there was apposition of normal tissue against tumor, inaccurate depictions of surface contour occurred. Contour resolution was limited, and mucosal irregularity could not be defined. There was very good overall correlation between virtual images, flexible laryngoscopic findings, rigid endoscopy, and operative evaluation in cases where oncological resections were performed. CTVE appears to be most accurate in evaluation of subglottic and nasopharyngeal anatomy in our series of patients. CTVE is a new radiographic technique that provides surface-contour details. The technique is undergoing rapid technical evolution, and although the image quality is limited in situations where there is apposition of tissue folds, there are a number of potential applications for this new imaging technique.

  15. Artifact Elimination Technique in Tomogram of X-ray Computed Tomography

    International Nuclear Information System (INIS)

    Rasif Mohd Zain

    2015-01-01

    Artifacts in tomograms are common problems in X-ray computed tomography. Artifacts appear in the tomogram due to noise, beam hardening, and scattered radiation. The study was carried out using a CdTe Timepix detector. A new technique has been developed to eliminate the artifacts, addressing both hardware and software. The hardware setup involved the careful alignment of all the components of the system and the introduction of a beam collimator. Meanwhile, the software development dealt with flat-field correction, noise filtering and the data projection algorithm. The results show that the developed technique produces good quality images and eliminates the artifacts. (author)

  16. Shaded computer graphic techniques for visualizing and interpreting analytic fluid flow models

    Science.gov (United States)

    Parke, F. I.

    1981-01-01

    Mathematical models which predict the behavior of fluid flow in different experiments are simulated using digital computers. The simulations predict values of parameters of the fluid flow (pressure, temperature and velocity vector) at many points in the fluid. Visualization of the spatial variation in the value of these parameters is important to comprehend and check the data generated, to identify the regions of interest in the flow, and for effectively communicating information about the flow to others. State-of-the-art imaging techniques developed in the field of three-dimensional shaded computer graphics are applied to the visualization of fluid flow. The use of an imaging technique known as 'SCAN' for visualizing fluid flow is studied and the results are presented.

  17. Computational reduction techniques for numerical vibro-acoustic analysis of hearing aids

    DEFF Research Database (Denmark)

    Creixell Mediante, Ester

    In this thesis, several challenges encountered in the process of modelling and optimizing hearing aids are addressed. Firstly, a strategy for modelling the contacts between plastic parts for harmonic analysis is developed. Irregularities in the contact surfaces, inherent to the manufacturing process of the parts....... Secondly, the applicability of Model Order Reduction (MOR) techniques to lower the computational complexity of hearing aid vibro-acoustic models is studied. For fine frequency response calculation and optimization, which require solving the numerical model repeatedly, a computational challenge...... is encountered due to the large number of Degrees of Freedom (DOFs) needed to represent the complexity of the hearing aid system accurately. In this context, several MOR techniques are discussed, and an adaptive reduction method for vibro-acoustic optimization problems is developed as a main contribution. Lastly...
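
    As background on what such a reduction does (a generic sketch with invented matrices, not the thesis' adaptive method), a reduced-order model can be built by projecting the full system matrices onto a small basis V, so that a frequency sweep solves r x r systems instead of N x N ones:

        import numpy as np

        def full_sweep(K, M, f, freqs):
            """Full-order harmonic response: solve (K - w^2 M) u = f at each frequency."""
            return [np.linalg.solve(K - w ** 2 * M, f) for w in freqs]

        def reduced_sweep(K, M, f, freqs, V):
            """Galerkin-projected sweep: the same solves in an r-dimensional subspace."""
            Kr, Mr, fr = V.T @ K @ V, V.T @ M @ V, V.T @ f
            return [V @ np.linalg.solve(Kr - w ** 2 * Mr, fr) for w in freqs]

        if __name__ == "__main__":
            rng = np.random.default_rng(1)
            N, r = 300, 20                            # full and reduced model sizes
            A = rng.normal(size=(N, N))
            K = A @ A.T + N * np.eye(N)               # synthetic SPD "stiffness"
            M = np.eye(N)                             # lumped "mass"
            f = np.zeros(N)
            f[0] = 1.0                                # harmonic load on one DOF
            freqs = np.linspace(0.5, 12.0, 50)        # angular frequencies (rad/s)

            # Moment-matching (Krylov) basis about w = 0: span{K^-1 f, (K^-1 M) K^-1 f, ...}
            vecs = [np.linalg.solve(K, f)]
            for _ in range(r - 1):
                vecs.append(np.linalg.solve(K, M @ vecs[-1]))
            V, _ = np.linalg.qr(np.column_stack(vecs))

            u_full = full_sweep(K, M, f, freqs)
            u_red = reduced_sweep(K, M, f, freqs, V)
            err = max(np.linalg.norm(a - b) / np.linalg.norm(a)
                      for a, b in zip(u_full, u_red))
            print(f"worst relative error over the sweep with r = {r}: {err:.1e}")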

  18. Finite-element-model updating using computational intelligence techniques applications to structural dynamics

    CERN Document Server

    Marwala, Tshilidzi

    2010-01-01

    Finite element models (FEMs) are widely used to understand the dynamic behaviour of various systems. FEM updating allows FEMs to be tuned better to reflect measured data and may be conducted using two different statistical frameworks: the maximum likelihood approach and Bayesian approaches. Finite Element Model Updating Using Computational Intelligence Techniques applies both strategies to the field of structural mechanics, an area vital for aerospace, civil and mechanical engineering. Vibration data is used for the updating process. Following an introduction a number of computational intelligence techniques to facilitate the updating process are proposed; they include: • multi-layer perceptron neural networks for real-time FEM updating; • particle swarm and genetic-algorithm-based optimization methods to accommodate the demands of global versus local optimization models; • simulated annealing to put the methodologies into a sound statistical basis; and • response surface methods and expectation m...

  19. Advanced technique for computing fuel combustion properties in pulverized-fuel fired boilers

    Energy Technology Data Exchange (ETDEWEB)

    Kotler, V.R. (Vsesoyuznyi Teplotekhnicheskii Institut (Russian Federation))

    1992-03-01

    Reviews foreign technical reports on advanced techniques for computing fuel combustion properties in pulverized-fuel fired boilers and analyzes a technique developed by Combustion Engineering, Inc. (USA). Characteristics of 25 fuel types, including 19 grades of coal, are listed along with a diagram of an installation with a drop tube furnace. Characteristics include burn-out intensity curves obtained using thermogravimetric analysis for high-volatile bituminous, semi-bituminous and coking coal. The patented LFP-SKM mathematical model is used to model combustion of a particular fuel under given conditions. The model allows for fuel particle size, air surplus, load, flame height, and portion of air supplied as tertiary blast. Good agreement between computational and experimental data was observed. The method is employed in designing new boilers as well as converting operating boilers to alternative types of fuel. 3 refs.

  20. Integration of computational modeling and experimental techniques to design fuel surrogates

    DEFF Research Database (Denmark)

    Choudhury, H.A.; Intikhab, S.; Kalakul, Sawitree

    2017-01-01

    performance. A simplified alternative is to develop surrogate fuels that have fewer compounds and emulate certain important desired physical properties of the target fuels. Six gasoline blends were formulated through a computer aided model based technique “Mixed Integer Non-Linear Programming” (MINLP...... Virtual Process-Product Design Laboratory (VPPD-Lab) are applied onto the defined compositions of the surrogate gasoline. The aim is to primarily verify the defined composition of gasoline by means of VPPD-Lab. ρ, η and RVP are calculated with more accuracy and constraints such as distillation curve...... and flash point on the blend design are also considered. A post-design experiment-based verification step is proposed to further improve and fine-tune the “best” selected gasoline blends following the computation work. Here, advanced experimental techniques are used to measure the RVP, ρ, η, RON...

  1. Computer vision techniques applied to the quality control of ceramic plates

    OpenAIRE

    Silveira, Joaquim; Ferreira, Manuel João Oliveira; Santos, Cristina; Martins, Teresa

    2009-01-01

    This paper presents a system, based on computer vision techniques, that detects and quantifies different types of defects in ceramic plates. It was developed in collaboration with the industrial ceramic sector and consequently it was focused on the defects that are considered most quality-depreciating by the Portuguese industry. They are of three main types: cracks, granules and relief surface. For each type the development was specific as far as image processing techn...

  2. Computer literacy enhancement in the Teaching Hospital Olomouc. Part I: project management techniques. Short communication.

    Science.gov (United States)

    Sedlár, Drahomír; Potomková, Jarmila; Rehorová, Jarmila; Seckár, Pavel; Sukopová, Vera

    2003-11-01

    Information explosion and globalization make great demands on keeping pace with the new trends in the healthcare sector. The contemporary level of computer and information literacy among most health care professionals in the Teaching Hospital Olomouc (Czech Republic) is not satisfactory for efficient exploitation of modern information technology in diagnostics, therapy and nursing. The present contribution describes the application of two basic problem solving techniques (brainstorming, SWOT analysis) to develop a project aimed at information literacy enhancement.

  3. Auditors’ Usage of Computer Assisted Audit Tools and Techniques: Empirical Evidence from Nigeria

    OpenAIRE

    Appah Ebimobowei; G.N. Ogbonna; Zuokemefa P. Enebraye

    2013-01-01

    This study examines the use of computer assisted audit tools and techniques in audit practice in the Niger Delta of Nigeria. To achieve this objective, data was collected from primary and secondary sources. The secondary sources were from scholarly books and journals while the primary source involved a well structured questionnaire of three sections of thirty seven items with an average reliability of 0.838. The data collected from the questionnaire were analyzed using relevant descriptive statist...

  4. A Simple Technique for Securing Data at Rest Stored in a Computing Cloud

    Science.gov (United States)

    Sedayao, Jeff; Su, Steven; Ma, Xiaohao; Jiang, Minghao; Miao, Kai

    "Cloud Computing" offers many potential benefits, including cost savings, the ability to deploy applications and services quickly, and the ease of scaling those application and services once they are deployed. A key barrier for enterprise adoption is the confidentiality of data stored on Cloud Computing Infrastructure. Our simple technique implemented with Open Source software solves this problem by using public key encryption to render stored data at rest unreadable by unauthorized personnel, including system administrators of the cloud computing service on which the data is stored. We validate our approach on a network measurement system implemented on PlanetLab. We then use it on a service where confidentiality is critical - a scanning application that validates external firewall implementations.

  5. Technique and results of the spinal computed tomography in the diagnosis of cervical disc disease

    International Nuclear Information System (INIS)

    Artmann, H.; Salbeck, R.; Grau, H.

    1985-01-01

    We describe a technique for patient positioning with traction of the arms during cervical spinal computed tomography, which allows the shoulders to be drawn downwards by about one to three cervical segments. With this method the quality of the images can be improved in 96% of cases in the cervical segment 6/7 and in 81% in the cervicothoracic segment 7/1, to such a degree that a reliable assessment of the soft tissues in the spinal canal becomes possible. The diagnostic reliability of computed tomography for cervical disc herniation is thus improved, reducing the need for myelography. The results of 396 cervical spinal computed tomographies are presented. (orig.) [de

  6. Integrated State Estimation and Contingency Analysis Software Implementation using High Performance Computing Techniques

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Yousu; Glaesemann, Kurt R.; Rice, Mark J.; Huang, Zhenyu

    2015-12-31

    Power system simulation tools are traditionally developed in sequential mode and codes are optimized for single core computing only. However, the increasing complexity in the power grid models requires more intensive computation. The traditional simulation tools will soon not be able to meet the grid operation requirements. Therefore, power system simulation tools need to evolve accordingly to provide faster and better results for grid operations. This paper presents an integrated state estimation and contingency analysis software implementation using high performance computing techniques. The software is able to solve large size state estimation problems within one second and achieve a near-linear speedup of 9,800 with 10,000 cores for contingency analysis application. The performance evaluation is presented to show its effectiveness.

  7. Research on integrated simulation of fluid-structure system by computation science techniques

    International Nuclear Information System (INIS)

    Yamaguchi, Akira

    1996-01-01

    At the Power Reactor and Nuclear Fuel Development Corporation, research on the integrated simulation of fluid-structure systems by computational science techniques has been carried out. Its aim is to substitute computational science techniques for the verification of plant systems, which has so far depended on large scale experiments, thereby reducing development costs and attaining the optimization of FBR systems. For this purpose, it is necessary to establish the technology for integrally and accurately analyzing complicated phenomena (simulation technology), the technology for applying it to large scale problems (speed-up technology), and the technology for assuring the reliability of the results of analysis when simulation technology is used for the licensing and approval of FBRs (verification technology). The simulation of fluid-structure interaction, heat flow simulation in spaces with complicated geometry, and the related technologies are explained. As applications of computational science techniques, the elucidation of phenomena by numerical experiment and numerical simulation as a substitute for tests are discussed. (K.I.)

  8. Measurement of mesothelioma on thoracic CT scans: A comparison of manual and computer-assisted techniques

    International Nuclear Information System (INIS)

    Armato, Samuel G. III; Oxnard, Geoffrey R.; MacMahon, Heber; Vogelzang, Nicholas J.; Kindler, Hedy L.; Kocherginsky, Masha; Starkey, Adam

    2004-01-01

    Our purpose in this study was to evaluate the variability of manual mesothelioma tumor thickness measurements in computed tomography (CT) scans and to assess the relative performance of six computerized measurement algorithms. The CT scans of 22 patients with malignant pleural mesothelioma were collected. In each scan, an initial observer identified up to three sites in each of three CT sections at which tumor thickness measurements were to be made. At each site, five observers manually measured tumor thickness through a computer interface. Three observers repeated these measurements during three separate sessions. Inter- and intra-observer variability in the manual measurement of tumor thickness was assessed. Six automated measurement algorithms were developed based on the geometric relationship between a specified measurement site and the automatically extracted lung regions. Computer-generated measurements were compared with manual measurements. The tumor thickness measurements of different observers were highly correlated (r≥0.99); however, the 95% limits of agreement for the relative inter-observer difference spanned a range of 30%. Tumor thickness measurements generated by the computer algorithms also correlated highly with the average of observer measurements (r≥0.93). We have developed computerized techniques for the measurement of mesothelioma tumor thickness in CT scans. These techniques achieved varying levels of agreement with measurements made by human observers.

  9. Full parallax three-dimensional computer generated hologram with occlusion effect using ray casting technique

    International Nuclear Information System (INIS)

    Zhang, Hao; Tan, Qiaofeng; Jin, Guofan

    2013-01-01

    Holographic display is capable of reconstructing the whole optical wave field of a three-dimensional (3D) scene. It is the only one among all the 3D display techniques that can produce all the depth cues. With the development of computing technology and spatial light modulators, computer generated holograms (CGHs) can now be used to produce dynamic 3D images of synthetic objects. Hologram computation becomes highly complicated and demanding when it is employed to produce real 3D images. Here we present a novel algorithm for generating a full parallax 3D CGH with occlusion effect, which is an important property of 3D perception but has often been neglected in fully computed hologram synthesis. The ray casting technique, which is widely used in computer graphics, is introduced to handle the occlusion issue of CGH computation. Horizontally and vertically distributed rays are projected from each hologram sample to the 3D objects to obtain the complex amplitude distribution. The occlusion issue is handled by performing ray casting calculations for all the hologram samples. The proposed algorithm places no restriction on, and makes no approximation to, the 3D objects, and hence it can produce reconstructed images with correct shading effects and no visible artifacts. A programmable graphics processing unit (GPU) is used to perform the calculation in parallel; this is possible because each hologram sample corresponds to an independent operation. To demonstrate the performance of our proposed algorithm, an optical experiment is performed to reconstruct the 3D scene using a phase-only spatial light modulator. We can easily perceive the accommodation cue by focusing our eyes on different depths of the scene, and the motion parallax cue with occlusion effect by moving our eyes around. The experimental result confirms that the CGHs produced by our algorithm can successfully reconstruct 3D images with all the depth cues.
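
    The per-sample accumulation at the heart of such algorithms can be sketched as follows; this hedged NumPy example sums spherical waves from a hypothetical point cloud at every hologram pixel (the wavelength, pixel pitch and object points are assumptions, and the per-ray occlusion test described in the record is omitted for brevity).

```python
import numpy as np

wavelength = 532e-9                    # assumed green laser
k = 2 * np.pi / wavelength
pitch = 8e-6                           # assumed SLM pixel pitch

# Hypothetical self-luminous object points: (x, y, z, amplitude).
points = np.array([[0.0, 0.0, 0.10, 1.0],
                   [1e-3, 5e-4, 0.12, 0.8],
                   [-8e-4, -2e-4, 0.11, 0.6]])

ny, nx = 256, 256
xs = (np.arange(nx) - nx / 2) * pitch
ys = (np.arange(ny) - ny / 2) * pitch
X, Y = np.meshgrid(xs, ys)

# Accumulate the complex amplitude at every hologram sample independently;
# this per-sample independence is what maps naturally onto a GPU.
field = np.zeros((ny, nx), dtype=np.complex128)
for px, py, pz, amp in points:
    r = np.sqrt((X - px) ** 2 + (Y - py) ** 2 + pz ** 2)
    field += amp / r * np.exp(1j * k * r)

phase_hologram = np.angle(field)       # phase-only CGH for an SLM
```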

  10. Exploring Tradeoffs in Demand-Side and Supply-Side Management of Urban Water Resources Using Agent-Based Modeling and Evolutionary Computation

    Directory of Open Access Journals (Sweden)

    Lufthansa Kanta

    2015-11-01

    Full Text Available Urban water supply systems may be managed through supply-side and demand-side strategies, which focus on water source expansion and demand reductions, respectively. Supply-side strategies bear infrastructure and energy costs, while demand-side strategies bear costs of implementation and inconvenience to consumers. To evaluate the performance of demand-side strategies, the participation and water use adaptations of consumers should be simulated. In this study, a Complex Adaptive Systems (CAS) framework is developed to simulate consumer agents that change their consumption to affect the withdrawal from the water supply system, which, in turn, influences operational policies and long-term resource planning. Agent-based models are encoded to represent consumers and a policy maker agent and are coupled with water resources system simulation models. The CAS framework is coupled with an evolutionary computation-based multi-objective methodology to explore tradeoffs in cost, inconvenience to consumers, and environmental impacts for both supply-side and demand-side strategies. Decisions are identified to specify storage levels in a reservoir that trigger: (1) increases in the volume of water pumped through inter-basin transfers from an external reservoir; and (2) drought stages, which restrict the volume of water that is allowed for residential outdoor uses. The proposed methodology is demonstrated for the Arlington, Texas, water supply system to identify non-dominated strategies for a historic drought decade. Results demonstrate that pumping costs associated with maximizing environmental reliability exceed pumping costs associated with minimizing restrictions on consumer water use.
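
    The "non-dominated strategies" mentioned above are simply those not beaten in every objective by some other strategy. A minimal, hedged Python sketch of that filter (with invented cost, inconvenience and environmental-impact values) is:

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective and better in at least one
    (all objectives are minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    return [s for s in solutions
            if not any(dominates(other, s) for other in solutions if other is not s)]

# Hypothetical strategies: (pumping cost, consumer inconvenience, environmental impact)
strategies = [(3.2, 0.10, 0.30), (2.1, 0.40, 0.35), (2.1, 0.40, 0.20), (4.0, 0.05, 0.25)]
print(pareto_front(strategies))   # the second strategy is dominated and dropped
```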

  11. Evolutionary constrained optimization

    CERN Document Server

    Deb, Kalyanmoy

    2015-01-01

    This book makes available a self-contained collection of modern research addressing the general constrained optimization problems using evolutionary algorithms. Broadly, the topics covered include constraint handling for single and multi-objective optimizations; penalty function based methodology; multi-objective based methodology; new constraint handling mechanisms; hybrid methodology; scaling issues in constrained optimization; design of scalable test problems; parameter adaptation in constrained optimization; handling of integer, discrete and mixed variables in addition to continuous variables; application of constraint handling techniques to real-world problems; and constrained optimization in dynamic environments. There is also a separate chapter on hybrid optimization, which is gaining lots of popularity nowadays due to its capability of bridging the gap between evolutionary and classical optimization. The material in the book is useful to researchers, novices, and experts alike. The book will also be useful...

  12. Solving Multi-Pollutant Emission Dispatch Problem Using Computational Intelligence Technique

    Directory of Open Access Journals (Sweden)

    Nur Azzammudin Rahmat

    2016-06-01

    Full Text Available Economic dispatch is a crucial process conducted by utilities to determine the right amount of power to be generated and distributed to consumers. During the process, the utilities also consider the pollutant emissions that are a consequence of fossil-fuel consumption. Fossil fuels include petroleum, coal, and natural gas; each has its unique chemical composition of pollutants, i.e. sulphur oxides (SOx), nitrogen oxides (NOx) and carbon oxides (COx). This paper presents a multi-pollutant emission dispatch problem solved using a computational intelligence technique. In this study, a novel emission dispatch technique is formulated to determine the pollutant levels. It utilizes a pre-developed optimization technique termed differential evolution immunized ant colony optimization (DEIANT) for the emission dispatch problem. The optimization results indicated a high COx level, regardless of the type of fossil fuel consumed.
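
    As an illustration of how pollutant levels can be evaluated for a candidate dispatch (a hedged sketch only; the quadratic emission coefficients below are invented and this is not the DEIANT optimizer itself):

```python
# Hypothetical emission curves E(P) = a + b*P + c*P^2 (kg/h, P in MW),
# one coefficient tuple per generating unit and per pollutant.
EMISSION_COEFFS = {
    "SOx": [(10.0, 0.30, 0.0020), (12.0, 0.25, 0.0018)],
    "NOx": [(8.0, 0.20, 0.0015), (9.5, 0.22, 0.0012)],
    "COx": [(30.0, 0.90, 0.0050), (28.0, 0.85, 0.0046)],
}

def total_emissions(dispatch_mw):
    """Sum each pollutant over all generating units for a candidate dispatch."""
    totals = {}
    for pollutant, coeffs in EMISSION_COEFFS.items():
        totals[pollutant] = sum(a + b * p + c * p * p
                                for (a, b, c), p in zip(coeffs, dispatch_mw))
    return totals

print(total_emissions([180.0, 220.0]))   # candidate output of two units, in MW
```

    An optimizer such as DEIANT would then search over the dispatch vector to trade these emission totals off against cost and demand constraints.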

  13. Now And Next Generation Sequencing Techniques: Future of Sequence Analysis using Cloud Computing

    Directory of Open Access Journals (Sweden)

    Radhe Shyam Thakur

    2012-12-01

    Full Text Available Advances in sequencing techniques have resulted in huge amounts of sequence data being produced at an ever faster rate, and it is becoming cumbersome for data centers to maintain the databases. Data mining and sequence analysis approaches need to scan the databases several times to reach any useful conclusion. To cope with this burden on computing resources and to reach effective conclusions quickly, the virtualization of resources and computation on a pay-as-you-go basis was introduced and termed cloud computing. A datacenter's hardware and software are collectively known as a cloud, which, when made available publicly, is termed a public cloud. The datacenter's resources are provided in a virtual mode to clients via a service provider such as Amazon, Google or Joyent, which charges in a pay-as-you-go manner. The workload is shifted to the provider, which maintains the required hardware and software upgrades and manages them in the virtual mode. Basically, a virtual environment is created according to the needs of the user by obtaining permission from the datacenter via the Internet, the task is performed, and the environment is deleted after the task is over. In this discussion, we focus on the basics of cloud computing, the prerequisites, and the overall working of clouds. Furthermore, applications of cloud computing in biological systems, especially in comparative genomics, genome informatics and SNP detection, are briefly discussed with reference to traditional workflows.

  14. Acceleration of FDTD mode solver by high-performance computing techniques.

    Science.gov (United States)

    Han, Lin; Xi, Yanping; Huang, Wei-Ping

    2010-06-21

    A two-dimensional (2D) compact finite-difference time-domain (FDTD) mode solver is developed based on wave equation formalism in combination with the matrix pencil method (MPM). The method is validated for calculation of both real guided and complex leaky modes of typical optical waveguides against the benchmark finite-difference (FD) eigenmode solver. By taking advantage of the inherent parallel nature of the FDTD algorithm, the mode solver is implemented on graphics processing units (GPUs) using the compute unified device architecture (CUDA). It is demonstrated that the high-performance computing technique leads to significant acceleration of the FDTD mode solver, with more than a 30-fold improvement in computational efficiency in comparison with the conventional FDTD mode solver running on the CPU of a standard desktop computer. The computational efficiency of the accelerated FDTD method is of the same order of magnitude as the standard finite-difference eigenmode solver and yet requires much less memory (e.g., less than 10%). Therefore, the new method may serve as an efficient, accurate and robust tool for mode calculation of optical waveguides even when conventional eigenvalue mode solvers are no longer applicable due to memory limitations.
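
    To give a flavour of why FDTD maps so well onto GPUs, the hedged NumPy sketch below implements a plain 2-D scalar leap-frog update in which every cell is updated independently at each time step; it is not the compact wave-equation mode solver of the record, and the grid size, source frequency and normalization are assumptions.

```python
import numpy as np

nx = ny = 200
c, dx = 3e8, 1e-6
dt = 0.99 * dx / (c * np.sqrt(2))          # Courant-stable time step

ez = np.zeros((nx, ny))
hx = np.zeros((nx, ny))
hy = np.zeros((nx, ny))

for step in range(500):
    # Update the magnetic-field components from the curl of Ez (mu absorbed into H).
    hx[:, :-1] -= dt / dx * (ez[:, 1:] - ez[:, :-1])
    hy[:-1, :] += dt / dx * (ez[1:, :] - ez[:-1, :])
    # Update Ez from the curl of H on interior cells (boundaries stay zero, i.e. PEC).
    ez[1:-1, 1:-1] += (c ** 2) * dt / dx * (
        (hy[1:-1, 1:-1] - hy[:-2, 1:-1]) - (hx[1:-1, 1:-1] - hx[1:-1, :-2]))
    ez[nx // 2, ny // 2] += np.sin(2 * np.pi * 2e13 * step * dt)   # soft point source
```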

  15. Quasistatic zooming of FDTD E-field computations: the impact of down-scaling techniques

    Energy Technology Data Exchange (ETDEWEB)

    Van de Kamer, J.B.; Kroeze, H.; De Leeuw, A.A.C.; Lagendijk, J.J.W. [Department of Radiotherapy, University Medical Center Utrecht, Heidelberglaan 100, 3584 CX, Utrecht (Netherlands)

    2001-05-01

    Due to current computer limitations, regional hyperthermia treatment planning (HTP) is practically limited to a resolution of 1 cm, whereas a millimetre resolution is desired. Using the centimetre resolution E-vector-field distribution, computed with, for example, the finite-difference time-domain (FDTD) method and the millimetre resolution patient anatomy it is possible to obtain a millimetre resolution SAR distribution in a volume of interest (VOI) by means of quasistatic zooming. To compute the required low-resolution E-vector-field distribution, a low-resolution dielectric geometry is needed which is constructed by down-scaling the millimetre resolution dielectric geometry. In this study we have investigated which down-scaling technique results in a dielectric geometry that yields the best low-resolution E-vector-field distribution as input for quasistatic zooming. A segmented 2 mm resolution CT data set of a patient has been down-scaled to 1 cm resolution using three different techniques: 'winner-takes-all', 'volumetric averaging' and 'anisotropic volumetric averaging'. The E-vector-field distributions computed for those low-resolution dielectric geometries have been used as input for quasistatic zooming. The resulting zoomed-resolution SAR distributions were compared with a reference: the 2 mm resolution SAR distribution computed with the FDTD method. The E-vector-field distribution for both a simple phantom and the complex partial patient geometry down-scaled using 'anisotropic volumetric averaging' resulted in zoomed-resolution SAR distributions that best approximate the corresponding high-resolution SAR distribution (correlation 97, 96% and absolute averaged difference 6, 14% respectively). (author)
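
    The two isotropic down-scaling options compared above can be illustrated with a short, hedged NumPy sketch (block size, tissue labels and property values are placeholders; the anisotropic variant is not reproduced): 'winner-takes-all' keeps the most frequent tissue label per coarse voxel, while volumetric averaging keeps the mean dielectric property.

```python
import numpy as np

def downscale_winner_takes_all(labels, factor=5):
    """Assign each coarse voxel the most frequent tissue label in its block."""
    nx, ny, nz = (s // factor for s in labels.shape)
    coarse = np.empty((nx, ny, nz), dtype=labels.dtype)
    for i in range(nx):
        for j in range(ny):
            for k in range(nz):
                block = labels[i*factor:(i+1)*factor,
                               j*factor:(j+1)*factor,
                               k*factor:(k+1)*factor]
                coarse[i, j, k] = np.bincount(block.ravel()).argmax()
    return coarse

def downscale_volumetric_average(prop, factor=5):
    """Assign each coarse voxel the mean dielectric property of its block."""
    nx, ny, nz = (s // factor for s in prop.shape)
    blocks = prop[:nx*factor, :ny*factor, :nz*factor].reshape(
        nx, factor, ny, factor, nz, factor)
    return blocks.mean(axis=(1, 3, 5))

labels = np.random.randint(0, 4, size=(20, 20, 20))   # e.g. 2 mm tissue labels
sigma = np.random.rand(20, 20, 20)                     # e.g. conductivity per voxel
coarse_labels = downscale_winner_takes_all(labels)     # 1 cm geometry (factor 5)
coarse_sigma = downscale_volumetric_average(sigma)
```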

  16. Spore: Spawning Evolutionary Misconceptions?

    Science.gov (United States)

    Bean, Thomas E.; Sinatra, Gale M.; Schrader, P. G.

    2010-10-01

    The use of computer simulations as educational tools may afford the means to develop understanding of evolution as a natural, emergent, and decentralized process. However, special consideration of developmental constraints on learning may be necessary when using these technologies. Specifically, the essentialist (biological forms possess an immutable essence), teleological (assignment of purpose to living things and/or parts of living things that may not be purposeful), and intentionality (assumption that events are caused by an intelligent agent) biases may be reinforced through the use of computer simulations, rather than addressed with instruction. We examine the video game Spore for its depiction of evolutionary content and its potential to reinforce these cognitive biases. In particular, we discuss three pedagogical strategies to mitigate weaknesses of Spore and other computer simulations: directly targeting misconceptions through refutational approaches, targeting specific principles of scientific inquiry, and directly addressing issues related to models as cognitive tools.

  17. Precision of lumbar intervertebral measurements: does a computer-assisted technique improve reliability?

    Science.gov (United States)

    Pearson, Adam M; Spratt, Kevin F; Genuario, James; McGough, William; Kosman, Katherine; Lurie, Jon; Sengupta, Dilip K

    2011-04-01

    Comparison of intra- and interobserver reliability of digitized manual and computer-assisted intervertebral motion measurements and classification of "instability." To determine if computer-assisted measurement of lumbar intervertebral motion on flexion-extension radiographs improves reliability compared with digitized manual measurements. Many studies have questioned the reliability of manual intervertebral measurements, although few have compared the reliability of computer-assisted and manual measurements on lumbar flexion-extension radiographs. Intervertebral rotation, anterior-posterior (AP) translation, and change in anterior and posterior disc height were measured with a digitized manual technique by three physicians and by three other observers using computer-assisted quantitative motion analysis (QMA) software. Each observer measured 30 sets of digital flexion-extension radiographs (L1-S1) twice. Shrout-Fleiss intraclass correlation coefficients for intra- and interobserver reliabilities were computed. The stability of each level was also classified (instability defined as >4 mm AP translation or >10° rotation), and the intra- and interobserver reliabilities of the two methods were compared using adjusted percent agreement (APA). Intraobserver reliability intraclass correlation coefficients were substantially higher for the QMA technique than for the digitized manual technique across all measurements: rotation 0.997 versus 0.870, AP translation 0.959 versus 0.557, change in anterior disc height 0.962 versus 0.770, and change in posterior disc height 0.951 versus 0.283. The same pattern was observed for interobserver reliability (rotation 0.962 vs. 0.693, AP translation 0.862 vs. 0.151, change in anterior disc height 0.862 vs. 0.373, and change in posterior disc height 0.730 vs. 0.300). The QMA technique was also more reliable for the classification of "instability." Intraobserver APAs ranged from 87% to 97% for QMA versus 60% to 73% for digitized manual
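
    The instability rule quoted above is easy to express directly; the hedged Python sketch below applies it and computes a simple percent agreement between two observers (the measurement values are invented and the study's adjusted percent agreement statistic is not reproduced).

```python
def is_unstable(rotation_deg: float, ap_translation_mm: float) -> bool:
    """Instability as defined in the record: >4 mm AP translation or >10 deg rotation."""
    return ap_translation_mm > 4.0 or rotation_deg > 10.0

def percent_agreement(labels_a, labels_b) -> float:
    """Plain agreement between two observers' stable/unstable classifications."""
    matches = sum(a == b for a, b in zip(labels_a, labels_b))
    return 100.0 * matches / len(labels_a)

# Invented (rotation in degrees, AP translation in mm) measurements for five levels.
obs1 = [is_unstable(r, t) for r, t in [(3, 1.2), (12, 2.0), (8, 4.5), (2, 0.5), (11, 3.9)]]
obs2 = [is_unstable(r, t) for r, t in [(4, 1.0), (11, 2.2), (7, 3.8), (2, 0.6), (9, 4.4)]]
print(percent_agreement(obs1, obs2))
```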

  18. Contributions of computational chemistry and biophysical techniques to fragment-based drug discovery.

    Science.gov (United States)

    Gozalbes, Rafael; Carbajo, Rodrigo J; Pineda-Lucena, Antonio

    2010-01-01

    In the last decade, fragment-based drug discovery (FBDD) has evolved from a novel approach in the search of new hits to a valuable alternative to the high-throughput screening (HTS) campaigns of many pharmaceutical companies. The increasing relevance of FBDD in the drug discovery universe has been concomitant with an implementation of the biophysical techniques used for the detection of weak inhibitors, e.g. NMR, X-ray crystallography or surface plasmon resonance (SPR). At the same time, computational approaches have also been progressively incorporated into the FBDD process and nowadays several computational tools are available. These stretch from the filtering of huge chemical databases in order to build fragment-focused libraries comprising compounds with adequate physicochemical properties, to more evolved models based on different in silico methods such as docking, pharmacophore modelling, QSAR and virtual screening. In this paper we will review the parallel evolution and complementarities of biophysical techniques and computational methods, providing some representative examples of drug discovery success stories by using FBDD.
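
    The database-filtering step mentioned above is often a simple property screen. The hedged RDKit sketch below applies a common "rule of three"-style filter to build a fragment-focused library; the thresholds are a widely used convention rather than values taken from this record, and the SMILES strings are arbitrary examples.

```python
from rdkit import Chem
from rdkit.Chem import Crippen, Descriptors, Lipinski

def passes_rule_of_three(smiles: str) -> bool:
    """Keep fragments with MW <= 300, cLogP <= 3, and <= 3 H-bond donors/acceptors."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return False
    return (Descriptors.MolWt(mol) <= 300
            and Crippen.MolLogP(mol) <= 3
            and Lipinski.NumHDonors(mol) <= 3
            and Lipinski.NumHAcceptors(mol) <= 3)

library = ["c1ccccc1O", "CC(=O)Nc1ccc(O)cc1", "CCCCCCCCCCCCCCCC(=O)O"]  # example SMILES
fragments = [s for s in library if passes_rule_of_three(s)]
print(fragments)   # the long-chain fatty acid fails the filter
```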

  19. A neuro-fuzzy computing technique for modeling hydrological time series

    Science.gov (United States)

    Nayak, P. C.; Sudheer, K. P.; Rangan, D. M.; Ramasastri, K. S.

    2004-05-01

    Intelligent computing tools such as artificial neural network (ANN) and fuzzy logic approaches are proven to be efficient when applied individually to a variety of problems. Recently there has been a growing interest in combining both these approaches, and as a result, neuro-fuzzy computing techniques have evolved. This approach has been tested and evaluated in the field of signal processing and related areas, but researchers have only begun evaluating the potential of this neuro-fuzzy hybrid approach in hydrologic modeling studies. This paper presents the application of an adaptive neuro fuzzy inference system (ANFIS) to hydrologic time series modeling, and is illustrated by an application to model the river flow of Baitarani River in Orissa state, India. An introduction to the ANFIS modeling approach is also presented. The advantage of the method is that it does not require the model structure to be known a priori, in contrast to most of the time series modeling techniques. The results showed that the ANFIS forecasted flow series preserves the statistical properties of the original flow series. The model showed good performance in terms of various statistical indices. The results are highly promising, and a comparative analysis suggests that the proposed modeling approach outperforms ANNs and other traditional time series models in terms of computational speed, forecast errors, efficiency, peak flow estimation etc. It was observed that the ANFIS model preserves the potential of the ANN approach fully, and eases the model building process.

  20. Comparison of the pain levels of computer-controlled and conventional anesthesia techniques in prosthodontic treatment

    Directory of Open Access Journals (Sweden)

    Murat Yenisey

    2009-10-01

    Full Text Available OBJECTIVE: The objective of this study was to compare the pain levels on opposite sides of the maxilla at needle insertion, during delivery of local anesthetic solution, and at tooth preparation for both the conventional and the anterior middle superior alveolar (AMSA) technique with the Wand computer-controlled local anesthesia application. MATERIAL AND METHODS: Pain scores of 16 patients were evaluated with a 5-point verbal rating scale (VRS) and data were analyzed nonparametrically. Pain differences at needle insertion, during delivery of local anesthetic, and at tooth preparation, for conventional versus the Wand technique, were analyzed using the Mann-Whitney U test (p=0.01). RESULTS: The Wand technique had a lower pain level compared to conventional injection for needle insertion (p0.05. CONCLUSIONS: The AMSA technique using the Wand is recommended for prosthodontic treatment because it reduces pain during needle insertion and during delivery of local anaesthetic. However, these two techniques have the same pain levels for tooth preparation.

  1. Micro-computed tomography and bond strength analysis of different root canal filling techniques

    Directory of Open Access Journals (Sweden)

    Juliane Nhata

    2014-01-01

    Full Text Available Introduction: The aim of this study was to evaluate the quality and bond strength of three root filling techniques (lateral compaction, continuous wave of condensation and Tagger's Hybrid technique [THT]) using micro-computed tomography (micro-CT) images and push-out tests, respectively. Materials and Methods: Thirty mandibular incisors were prepared using the same protocol and randomly divided into three groups (n = 10): lateral condensation technique (LCT), continuous wave of condensation technique (CWCT), and THT. All specimens were filled with gutta-percha (GP) cones and AH Plus sealer. Five specimens of each group were randomly chosen for micro-CT analysis, and all of them were sectioned into 1 mm slices and subjected to push-out tests. Results: Micro-CT analysis revealed fewer empty spaces when GP was heated within the root canals in CWCT and THT than in LCT. Push-out tests showed that LCT and THT had a significantly higher displacement resistance (P < 0.05) when compared to CWCT. Bond strength was lower in the apical and middle thirds than in the coronal thirds. Conclusions: It can be concluded that LCT and THT were associated with higher bond strengths to intraradicular dentine than CWCT. However, LCT was associated with more empty voids than the other techniques.

  2. Evolutionary design assistants for architecture

    Directory of Open Access Journals (Sweden)

    N. Onur Sönmez

    2015-04-01

    Full Text Available In its parallel pursuit of increased competitiveness for design offices and more pleasurable and easier workflows for designers, artificial design intelligence is a technical, intellectual, and political challenge. While human-machine cooperation has become commonplace through Computer Aided Design (CAD) tools, improved collaboration and better support appear possible only through an endeavor into a kind of artificial design intelligence that is more sensitive to the human perception of affairs. Considered as part of the broader field of Computational Design studies, the research program of this quest can be called Artificial / Autonomous / Automated Design (AD). The currently available level of Artificial Intelligence (AI) for design is limited, and a viable aim for current AD would be to develop design assistants that are capable of producing drafts for various design tasks. Thus, the overall aim of this thesis is the development of approaches, techniques, and tools towards artificial design assistants that offer a capability for generating drafts for sub-tasks within design processes. The main technology explored for this aim is Evolutionary Computation (EC), and the target design domain is architecture. The two connected research questions of the study concern, first, the investigation of the ways to develop an architectural design assistant, and secondly, the utilization of EC for the development of such assistants. While developing approaches, techniques, and computational tools for such an assistant, the study also carries out a broad theoretical investigation into the main problems, challenges, and requirements towards such assistants on a rather general level. Therefore, the research is shaped as a parallel investigation of three main threads interwoven along several levels, moving from a more general level to specific applications. The three research threads comprise, first, theoretical discussions and speculations with regard to both

  3. Quantitative comparison of commercial and non-commercial metal artifact reduction techniques in computed tomography.

    Directory of Open Access Journals (Sweden)

    Dirk Wagenaar

    Full Text Available Typical streak artifacts known as metal artifacts occur in the presence of strongly attenuating materials in computed tomography (CT). Recently, vendors have started offering metal artifact reduction (MAR) techniques. In addition, a MAR technique called the metal deletion technique (MDT) is freely available and able to reduce metal artifacts using reconstructed images. Although a comparison of the MDT to other MAR techniques exists, a comparison of commercially available MAR techniques is lacking. The aim of this study was therefore to quantify the difference in effectiveness of the currently available MAR techniques of different scanners and the MDT technique. Three vendors were asked to use their preferred CT scanner for applying their MAR techniques. The scans were performed on a Philips Brilliance ICT 256 (S1), a GE Discovery CT 750 HD (S2) and a Siemens Somatom Definition AS Open (S3). The scans were made using an anthropomorphic head and neck phantom (Kyoto Kagaku, Japan). Three amalgam dental implants were constructed and inserted between the phantom's teeth. The average absolute error (AAE) was calculated for all reconstructions in the proximity of the amalgam implants. The commercial techniques reduced the AAE by 22.0±1.6%, 16.2±2.6% and 3.3±0.7% for S1 to S3, respectively. After applying the MDT to uncorrected scans of each scanner, the AAE was reduced by 26.1±2.3%, 27.9±1.0% and 28.8±0.5%, respectively. The difference in efficiency between the commercial techniques and the MDT was statistically significant for S2 (p=0.004) and S3 (p<0.001), but not for S1 (p=0.63). The effectiveness of MAR differs between vendors. S1 performed slightly better than S2, and both performed better than S3. Furthermore, for our phantom and outcome measure, the MDT was more effective than the commercial MAR technique on all scanners.
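
    The average absolute error used as the outcome measure above can be computed with a few lines of NumPy; the hedged sketch below uses placeholder images and a hand-drawn region of interest standing in for "the proximity of the amalgam implants".

```python
import numpy as np

def average_absolute_error(corrected_hu, reference_hu, roi_mask):
    """Mean absolute difference in CT numbers, restricted to voxels near the implants."""
    return np.abs(corrected_hu - reference_hu)[roi_mask].mean()

# Placeholder images (in HU) and a mask marking the neighbourhood of the implants.
reference = np.zeros((256, 256))
corrected = reference + np.random.normal(0.0, 25.0, size=reference.shape)
mask = np.zeros_like(reference, dtype=bool)
mask[100:156, 100:156] = True

print(f"AAE = {average_absolute_error(corrected, reference, mask):.1f} HU")
```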

  4. A Wireless Sensor Network with Soft Computing Localization Techniques for Track Cycling Applications.

    Science.gov (United States)

    Gharghan, Sadik Kamel; Nordin, Rosdiadee; Ismail, Mahamod

    2016-08-06

    In this paper, we propose two soft computing localization techniques for wireless sensor networks (WSNs). The two techniques, based on the Adaptive Neuro-Fuzzy Inference System (ANFIS) and the Artificial Neural Network (ANN), focus on a range-based localization method which relies on the measurement of the received signal strength indicator (RSSI) from the three ZigBee anchor nodes distributed throughout the track cycling field. The soft computing techniques aim to estimate the distance between bicycles moving on the cycle track for outdoor and indoor velodromes. In the first approach the ANFIS was considered, whereas in the second approach the ANN was hybridized individually with three optimization algorithms, namely Particle Swarm Optimization (PSO), Gravitational Search Algorithm (GSA), and Backtracking Search Algorithm (BSA). The results revealed that the hybrid GSA-ANN outperforms the other methods adopted in this paper in terms of localization accuracy and distance estimation accuracy. The hybrid GSA-ANN achieves a mean absolute distance estimation error of 0.02 m and 0.2 m for outdoor and indoor velodromes, respectively.
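
    The range-based step underlying both models is the conversion of RSSI into distance; a hedged log-distance path-loss sketch is shown below (the reference RSSI at 1 m and the path-loss exponent are assumptions, and the ANFIS/ANN refinement of the record is not reproduced).

```python
def rssi_to_distance(rssi_dbm: float, rssi_at_1m: float = -45.0, n: float = 2.2) -> float:
    """Log-distance path-loss model: d = 10 ** ((RSSI_1m - RSSI) / (10 * n))."""
    return 10 ** ((rssi_at_1m - rssi_dbm) / (10.0 * n))

# Estimated distances from the bicycle node to three ZigBee anchors.
readings = {"anchor_a": -61.0, "anchor_b": -55.0, "anchor_c": -70.0}
print({anchor: round(rssi_to_distance(r), 2) for anchor, r in readings.items()})
```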

  5. A New Computational Technique for the Generation of Optimised Aircraft Trajectories

    Science.gov (United States)

    Chircop, Kenneth; Gardi, Alessandro; Zammit-Mangion, David; Sabatini, Roberto

    2017-12-01

    A new computational technique based on Pseudospectral Discretisation (PSD) and adaptive bisection ɛ-constraint methods is proposed to solve multi-objective aircraft trajectory optimisation problems formulated as nonlinear optimal control problems. This technique is applicable to a variety of next-generation avionics and Air Traffic Management (ATM) Decision Support Systems (DSS) for strategic and tactical replanning operations. These include the future Flight Management Systems (FMS) and the 4-Dimensional Trajectory (4DT) planning and intent negotiation/validation tools envisaged by SESAR and NextGen for a global implementation. In particular, after describing the PSD method, the adaptive bisection ɛ-constraint method is presented to allow an efficient solution of problems in which two or multiple performance indices are to be minimized simultaneously. Initial simulation case studies were performed adopting suitable aircraft dynamics models and addressing a classical vertical trajectory optimisation problem with two objectives simultaneously. Subsequently, a more advanced 4DT simulation case study is presented with a focus on representative ATM optimisation objectives in the Terminal Manoeuvring Area (TMA). The simulation results are analysed in-depth and corroborated by flight performance analysis, supporting the validity of the proposed computational techniques.
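
    The ɛ-constraint idea can be illustrated with a toy SciPy sketch: one objective is minimized while the other is bounded by a sweep of ɛ values, tracing an approximate trade-off front. This is a hedged example with made-up quadratic objectives; the adaptive bisection of ɛ and the aircraft dynamics models of the record are not reproduced.

```python
import numpy as np
from scipy.optimize import minimize

# Toy objectives over a 2-D design variable x (stand-ins for, e.g., time and fuel).
f_time = lambda x: (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2
f_fuel = lambda x: (x[0] - 3.0) ** 2 + (x[1] - 1.0) ** 2

pareto_points = []
for eps in np.linspace(1.0, 8.0, 8):               # sweep of epsilon bounds on "fuel"
    res = minimize(f_time, x0=[0.0, 0.0], method="SLSQP",
                   constraints=[{"type": "ineq", "fun": lambda x, e=eps: e - f_fuel(x)}])
    if res.success:
        pareto_points.append((f_time(res.x), f_fuel(res.x)))

print(pareto_points)                               # approximate trade-off front
```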

  6. Data mining technique for a secure electronic payment transaction using MJk-RSA in mobile computing

    Science.gov (United States)

    G. V., Ramesh Babu; Narayana, G.; Sulaiman, A.; Padmavathamma, M.

    2012-04-01

    Due to the evolution of Electronic Learning (E-Learning), one can easily obtain desired information on a computer or mobile system connected to the Internet. Currently, E-Learning materials are easily accessible on desktop computer systems, but in the future most of this information will also be available on small digital devices such as mobile phones and PDAs. Most E-Learning materials are paid for, and the customer has to pay the entire amount through a credit/debit card system. Therefore, it is very important to study the security of credit/debit card numbers. The present paper is an attempt in this direction: a security technique is presented to secure the credit/debit card numbers supplied over the Internet to access E-Learning materials or to make any kind of purchase through the Internet. A well-known method, the Data Cube Technique, is used to design the security model of the credit/debit card system. The major objective of this paper is to design a practical electronic payment protocol offering the safest and most secure mode of transaction. This technique may reduce fake transactions, which are above 20% at the global level.

  7. Application of parallel computing techniques to a large-scale reservoir simulation

    International Nuclear Information System (INIS)

    Zhang, Keni; Wu, Yu-Shu; Ding, Chris; Pruess, Karsten

    2001-01-01

    Even with the continual advances made in both computational algorithms and computer hardware used in reservoir modeling studies, large-scale simulation of fluid and heat flow in heterogeneous reservoirs remains a challenge. The problem commonly arises from intensive computational requirement for detailed modeling investigations of real-world reservoirs. This paper presents the application of a massive parallel-computing version of the TOUGH2 code developed for performing large-scale field simulations. As an application example, the parallelized TOUGH2 code is applied to develop a three-dimensional unsaturated-zone numerical model simulating flow of moisture, gas, and heat in the unsaturated zone of Yucca Mountain, Nevada, a potential repository for high-level radioactive waste. The modeling approach employs refined spatial discretization to represent the heterogeneous fractured tuffs of the system, using more than a million 3-D gridblocks. The problem of two-phase flow and heat transfer within the model domain leads to a total of 3,226,566 linear equations to be solved per Newton iteration. The simulation is conducted on a Cray T3E-900, a distributed-memory massively parallel computer. Simulation results indicate that the parallel computing technique, as implemented in the TOUGH2 code, is very efficient. The reliability and accuracy of the model results have been demonstrated by comparing them to those of small-scale (coarse-grid) models. These comparisons show that simulation results obtained with the refined grid provide more detailed predictions of the future flow conditions at the site, aiding in the assessment of proposed repository performance

  8. Early phase drug discovery: cheminformatics and computational techniques in identifying lead series.

    Science.gov (United States)

    Duffy, Bryan C; Zhu, Lei; Decornez, Hélène; Kitchen, Douglas B

    2012-09-15

    Early drug discovery processes rely on hit finding procedures followed by extensive experimental confirmation in order to select high priority hit series which then undergo further scrutiny in hit-to-lead studies. The experimental cost and the risk associated with poor selection of lead series can be greatly reduced by the use of many different computational and cheminformatic techniques to sort and prioritize compounds. We describe the steps in typical hit identification and hit-to-lead programs and then describe how cheminformatic analysis assists this process. In particular, scaffold analysis, clustering and property calculations assist in the design of high-throughput screening libraries, the early analysis of hits and then organizing compounds into series for their progression from hits to leads. Additionally, these computational tools can be used in virtual screening to design hit-finding libraries and as procedures to help with early SAR exploration. Copyright © 2012 Elsevier Ltd. All rights reserved.

  9. Ubiquitous Green Computing Techniques for High Demand Applications in Smart Environments

    Directory of Open Access Journals (Sweden)

    Jose M. Moya

    2012-08-01

    Full Text Available Ubiquitous sensor network deployments, such as those found in Smart cities and Ambient intelligence applications, impose constantly increasing computational demands in order to process data and offer services to users. The nature of these applications implies the usage of data centers. Research has paid much attention to the energy consumption of the sensor nodes in WSN infrastructures. However, supercomputing facilities are the ones presenting a higher economic and environmental impact due to their very high power consumption. The latter problem, however, has been disregarded in the field of smart environment services. This paper proposes an energy-minimization workload assignment technique, based on heterogeneity and application-awareness, that redistributes low-demand computational tasks from high-performance facilities to idle nodes with low and medium resources in the WSN infrastructure. These non-optimal allocation policies reduce the energy consumed by the whole infrastructure and the total execution time.

  10. Ubiquitous green computing techniques for high demand applications in Smart environments.

    Science.gov (United States)

    Zapater, Marina; Sanchez, Cesar; Ayala, Jose L; Moya, Jose M; Risco-Martín, José L

    2012-01-01

    Ubiquitous sensor network deployments, such as those found in Smart cities and Ambient intelligence applications, impose constantly increasing computational demands in order to process data and offer services to users. The nature of these applications implies the usage of data centers. Research has paid much attention to the energy consumption of the sensor nodes in WSN infrastructures. However, supercomputing facilities are the ones presenting a higher economic and environmental impact due to their very high power consumption. The latter problem, however, has been disregarded in the field of smart environment services. This paper proposes an energy-minimization workload assignment technique, based on heterogeneity and application-awareness, that redistributes low-demand computational tasks from high-performance facilities to idle nodes with low and medium resources in the WSN infrastructure. These non-optimal allocation policies reduce the energy consumed by the whole infrastructure and the total execution time.
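
    A heavily simplified, hedged sketch of the workload-assignment idea in the two records above is given below: each task is placed on the lowest-power node that can still accommodate it, so low-demand tasks drift away from the data center. The node catalogue, power figures and capacity model are invented, and load balancing over time is ignored.

```python
# Invented node catalogue: (name, spare capacity in MIPS, power draw in watts).
NODES = [("sensor_hub_1", 200, 2.5), ("gateway_1", 800, 6.0), ("datacenter", 50_000, 900.0)]

def assign(tasks):
    """Greedy, heterogeneity-aware placement: cheapest node that still fits the task."""
    placement = {}
    for name, demand_mips in tasks:
        feasible = [(power, node) for node, capacity, power in NODES if capacity >= demand_mips]
        placement[name] = min(feasible)[1]          # lowest-power feasible node
    return placement

tasks = [("aggregate_readings", 150), ("reencode_video", 20_000), ("detect_anomaly", 600)]
print(assign(tasks))   # only the heavy task ends up on the data center
```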

  11. Computer aided production of manufacturing CAMAC-wired boards by the multiwire-technique

    Energy Technology Data Exchange (ETDEWEB)

    Martini, M; Brehmer, W

    1975-10-01

    The multiwire-technique is a computer-controlled wiring method for the manufacturing of circuit boards with insulated conductors. The technical data for production are dimensional drawings of the board and a list of all points which are to be connected. The listing must be in absolute co-ordinates and include a list of all soldering points for component parts and a reproducible print pattern for inscription. For this wiring method, a CAMAC standard board, a layout plan with alphanumeric symbols, and a computer program which produces the essential technical data were developed. A description of the alphanumeric symbols, the quality of the program, the recognition and checking of these symbols, and the produced technical data is presented. (auth)

  12. Evaluation of user input methods for manipulating a tablet personal computer in sterile techniques.

    Science.gov (United States)

    Yamada, Akira; Komatsu, Daisuke; Suzuki, Takeshi; Kurozumi, Masahiro; Fujinaga, Yasunari; Ueda, Kazuhiko; Kadoya, Masumi

    2017-02-01

    To determine a quick and accurate user input method for manipulating tablet personal computers (PCs) in sterile techniques. We evaluated three different manipulation methods, (1) Computer mouse and sterile system drape, (2) Fingers and sterile system drape, and (3) Digitizer stylus and sterile ultrasound probe cover with a pinhole, in terms of the central processing unit (CPU) performance, manipulation performance, and contactlessness. A significant decrease in CPU score ([Formula: see text]) and an increase in CPU temperature ([Formula: see text]) were observed when a system drape was used. The respective mean times taken to select a target image from an image series (ST) and the mean times for measuring points on an image (MT) were [Formula: see text] and [Formula: see text] s for the computer mouse method, [Formula: see text] and [Formula: see text] s for the finger method, and [Formula: see text] and [Formula: see text] s for the digitizer stylus method, respectively. The ST for the finger method was significantly longer than for the digitizer stylus method ([Formula: see text]). The MT for the computer mouse method was significantly longer than for the digitizer stylus method ([Formula: see text]). The mean success rate for measuring points on an image was significantly lower for the finger method when the diameter of the target was equal to or smaller than 8 mm than for the other methods. No significant difference in the adenosine triphosphate amount at the surface of the tablet PC was observed before, during, or after manipulation via the digitizer stylus method while wearing starch-powdered sterile gloves ([Formula: see text]). Quick and accurate manipulation of tablet PCs in sterile techniques without CPU load is feasible using a digitizer stylus and sterile ultrasound probe cover with a pinhole.

  13. Automatic domain updating technique for improving computational efficiency of 2-D flood-inundation simulation

    Science.gov (United States)

    Tanaka, T.; Tachikawa, Y.; Ichikawa, Y.; Yorozu, K.

    2017-12-01

    Flood is one of the most hazardous disasters and causes serious damage to people and property around the world. To prevent or mitigate flood damage through early warning systems and/or river management planning, numerical modelling of flood-inundation processes is essential. In the literature, flood-inundation models have been extensively developed and improved to achieve flood flow simulation with complex topography at high resolution. With increasing demands on flood-inundation modelling, its computational burden is now one of the key issues. Improvements to the computational efficiency of the full shallow water equations have been made from various perspectives, such as approximations of the momentum equations, parallelization techniques, and coarsening approaches. To complement these techniques and further improve the computational efficiency of flood-inundation simulations, this study proposes an Automatic Domain Updating (ADU) method for 2-D flood-inundation simulation. The ADU method traces the wet and dry interface and automatically updates the simulation domain in response to the progress and recession of flood propagation. The updating algorithm is as follows: first, the simulation cells potentially flooded at the initial stage (such as floodplains near river channels) are registered; then, whenever a registered cell becomes flooded, its surrounding cells are registered as well. The time for this additional process is kept small by checking only cells at the wet and dry interface, and the computation time is reduced by skipping the processing of non-flooded areas. This algorithm is easily applied to any type of 2-D flood-inundation model. The proposed ADU method is implemented in the 2-D local inertial equations for the Yodo River basin, Japan. Case studies for two flood events show that the simulation finishes in a two to ten times shorter computation time while giving the same results as the simulation without the ADU method.
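
    The registration rule described above (start from potentially flooded cells, and whenever a registered cell becomes wet, register its neighbours) can be sketched as follows; this is a hedged toy in which the hydraulic routing is a placeholder, not the local inertial solver of the record.

```python
import numpy as np

ny, nx = 100, 100
depth = np.zeros((ny, nx))
depth[50, 0] = 1.0                        # inflow boundary cell starts wet

active = {(50, 0)}                        # cells currently registered for computation

def neighbours(i, j):
    return [(i + di, j + dj) for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if 0 <= i + di < ny and 0 <= j + dj < nx]

for step in range(200):
    newly_wet = set()
    for (i, j) in list(active):
        if depth[i, j] > 0.0:             # only wet cells trigger registration of neighbours
            for (ni, nj) in neighbours(i, j):
                depth[ni, nj] += 0.1 * depth[i, j] / 4.0   # placeholder "routing"
                if (ni, nj) not in active:
                    newly_wet.add((ni, nj))
            depth[i, j] *= 0.9
    active |= newly_wet                   # the domain grows only along the wet/dry interface

print(len(active), "of", nx * ny, "cells ever entered the computation")
```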

  14. Metal Artifacts Reduction of Pedicle Screws on Spine Computed Tomography Images Using Variable Thresholding Technique

    International Nuclear Information System (INIS)

    Kaewlek, T.; Koolpiruck, D.; Thongvigitmanee, S.; Mongkolsuk, M.; Chiewvit, P.; Thammakittiphan, S.

    2012-01-01

    Metal artifacts are one of the significant problems in computed tomography (CT). The streak lines and air gaps arise from metal implants of orthopedic patients, such as prosthesis, dental bucket, and pedicle screws, and cause incorrect diagnosis and local treatment planning. A common technique to suppress artifacts is window adjustment, but the artifacts still remain in the images. To improve the detail of spine CT images, a variable thresholding technique is proposed in this paper. Three medical cases of spine CT images, categorized by the severity of artifacts (screw heads, one full screw, and two full screws), were investigated. Metal regions were segmented by k-means clustering and then transformed into the sinogram domain. The metal sinogram was identified by the variable thresholding method, and the affected values were replaced by new estimates obtained from linear interpolation. The modified sinogram was reconstructed by the filtered back-projection algorithm, and the metal region was added back to the modified reconstructed image to produce the final image. The image quality of the proposed technique, the automatic thresholding (Kalender) technique, and the window adjustment technique was compared in terms of noise and signal-to-noise ratio (SNR). The proposed method can reduce metal artifacts between pedicle screws. After processing by our proposed technique, noise in the modified images is reduced (screw heads 121.15 to 73.83, one full screw 160.88 to 94.04, and two full screws 199.73 to 110.05 from the initial image) and SNR is increased (screw heads 0.87 to 1.88, one full screw 1.54 to 2.82, and two full screws 0.32 to 0.41 from the initial image). The variable thresholding technique can identify a suitable boundary for restoring the missing data. The efficiency of the metal artifact reduction is demonstrated for the cases of partial and full pedicle screws. Our technique can improve the detail of spine CT images better than the automatic thresholding (Kalender) technique, and
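
    A compact sinogram-interpolation workflow in the spirit of the pipeline above is sketched below with scikit-image (a hedged toy: a fixed threshold stands in for the k-means and variable-thresholding steps, and the phantom is synthetic).

```python
import numpy as np
from skimage.transform import radon, iradon

def simple_mar(image, metal_threshold=2.0):
    """Sinogram-interpolation MAR: bridge the metal trace by linear interpolation."""
    theta = np.linspace(0.0, 180.0, 180, endpoint=False)
    metal = image > metal_threshold                      # crude metal segmentation
    sino = radon(image, theta=theta)
    metal_trace = radon(metal.astype(float), theta=theta) > 1e-6

    corrected = sino.copy()
    for j in range(sino.shape[1]):                       # interpolate within each projection
        bad = metal_trace[:, j]
        if bad.any() and (~bad).any():
            idx = np.arange(sino.shape[0])
            corrected[bad, j] = np.interp(idx[bad], idx[~bad], sino[~bad, j])

    recon = iradon(corrected, theta=theta)               # filtered back-projection
    recon[metal] = image[metal]                          # paste the metal region back
    return recon

phantom = np.zeros((128, 128))
phantom[40:90, 40:90] = 1.0                              # soft-tissue block
phantom[60:64, 60:64] = 10.0                             # metal insert (e.g. a screw)
corrected_image = simple_mar(phantom)
```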

  15. A 3D edge detection technique for surface extraction in computed tomography for dimensional metrology applications

    DEFF Research Database (Denmark)

    Yagüe-Fabra, J.A.; Ontiveros, S.; Jiménez, R.

    2013-01-01

    Many factors influence the measurement uncertainty when using computed tomography for dimensional metrology applications. One of the most critical steps is the surface extraction phase. An incorrect determination of the surface may significantly increase the measurement uncertainty. This paper...... presents an edge detection method for the surface extraction based on a 3D Canny algorithm with sub-voxel resolution. The advantages of this method are shown in comparison with the most commonly used technique nowadays, i.e. the local threshold definition. Both methods are applied to reference standards...

  16. A singularity extraction technique for computation of antenna aperture fields from singular plane wave spectra

    DEFF Research Database (Denmark)

    Cappellin, Cecilia; Breinbjerg, Olav; Frandsen, Aksel

    2008-01-01

    An effective technique for extracting the singularity of plane wave spectra in the computation of antenna aperture fields is proposed. The singular spectrum is first factorized into a product of a finite function and a singular function. The finite function is inverse Fourier transformed...... numerically using the Inverse Fast Fourier Transform, while the singular function is inverse Fourier transformed analytically, using the Weyl-identity, and the two resulting spatial functions are then convolved to produce the antenna aperture field. This article formulates the theory of the singularity...

  17. A computer graphics display technique for the examination of aircraft design data

    Science.gov (United States)

    Talcott, N. A., Jr.

    1981-01-01

    An interactive computer graphics technique has been developed for quickly sorting and interpreting large amounts of aerodynamic data. It utilizes a graphic representation rather than numbers. The geometry package represents the vehicle as a set of panels. These panels are ordered in groups of ascending values (e.g., equilibrium temperatures). The groups are then displayed successively on a CRT building up to the complete vehicle. A zoom feature allows for displaying only the panels with values between certain limits. The addition of color allows a one-time display thus eliminating the need for a display build up.

  18. A technique for integrating remote minicomputers into a general computer's file system

    CERN Document Server

    Russell, R D

    1976-01-01

    This paper describes a simple technique for interfacing remote minicomputers used for real-time data acquisition into the file system of a central computer. Developed as part of the ORION system at CERN, this 'File Manager' subsystem enables a program in the minicomputer to access and manipulate files of any type as if they resided on a storage device attached to the minicomputer. Yet, completely transparent to the program, the files are accessed from disks on the central system via high-speed data links, with response times comparable to local storage devices. (6 refs).

  19. Shielding calculations using computer techniques; Calculo de blindajes mediante tecnicas de computacion

    Energy Technology Data Exchange (ETDEWEB)

    Gonzalez Portilla, M. I.; Marquez, J.

    2011-07-01

    Radiological protection aims to limit the ionizing radiation received by people and equipment, which on numerous occasions requires protection shields. Although analytical formulas exist to characterize these shields for certain configurations, the design may be very intensive in numerical calculations; therefore, the most efficient way to design the shields is by means of computer programs that calculate dose and dose rates. In the present article we review the codes most frequently used to perform these calculations, and the techniques used by such codes. (Author) 13 refs.

  20. Development of a Fast Fluid-Structure Coupling Technique for Wind Turbine Computations

    DEFF Research Database (Denmark)

    Sessarego, Matias; Ramos García, Néstor; Shen, Wen Zhong

    2015-01-01

    Fluid-structure interaction simulations are routinely used in the wind energy industry to evaluate the aerodynamic and structural dynamic performance of wind turbines. Most aero-elastic codes in modern times implement a blade element momentum technique to model the rotor aerodynamics and a modal......, multi-body, or finite-element approach to model the turbine structural dynamics. The present paper describes a novel fluid-structure coupling technique which combines a threedimensional viscous-inviscid solver for horizontal-axis wind-turbine aerodynamics, called MIRAS, and the structural dynamics model...... used in the aero-elastic code FLEX5. The new code, MIRASFLEX, in general shows good agreement with the standard aero-elastic codes FLEX5 and FAST for various test cases. The structural model in MIRAS-FLEX acts to reduce the aerodynamic load computed by MIRAS, particularly near the tip and at high wind...

  1. Predicting the Pullout Capacity of Small Ground Anchors Using Nonlinear Integrated Computing Techniques

    Directory of Open Access Journals (Sweden)

    Mosbeh R. Kaloop

    2017-01-01

    Full Text Available This study investigates predicting the pullout capacity of small ground anchors using nonlinear computing techniques. Input-output prediction models based on the nonlinear Hammerstein-Wiener (NHW) approach and on delay inputs for the adaptive neuro-fuzzy inference system (DANFIS) are developed and utilized to predict the pullout capacity. The results of the developed models are compared with previous studies that used artificial neural networks and least square support vector machine techniques for the same case study. In situ data collection and statistical performance measures are used to evaluate the models' performance. Results show that the developed models enhance the precision of predicting the pullout capacity when compared with previous studies. Also, the DANFIS model performance is proven to be better than that of the other models used to detect the pullout capacity of ground anchors.

  2. CASAD -- Computer-Aided Sonography of Abdominal Diseases - the concept of joint technique impact

    Directory of Open Access Journals (Sweden)

    T. Deserno

    2010-03-01

    Full Text Available The ultrasound image is the primary input information for every ultrasonic examination. Since both knowledge-based decision support and content-based image retrieval techniques have their own restrictions when used in ultrasound image analysis, the combination of these techniques looks promising for covering the restrictions of one with the advances of the other. In this work we have focused on implementing the proposed combination in the framework of the CASAD (Computer-Aided Sonography of Abdominal Diseases) system, supplying the ultrasound examiner with a diagnostic-assistant tool based on a data warehouse of standard referenced images. This warehouse serves to manifest the diagnosis when the sonographer specifies the pathology and then looks through corresponding images to verify his opinion, and to suggest a second opinion by automatic analysis of the annotations of relevant images retrieved from the repository using content-based image retrieval.

  3. Multislice Spiral Computed Tomography of the Heart: Technique, Current Applications, and Perspective

    International Nuclear Information System (INIS)

    Mahnken, Andreas H.; Wildberger, Joachim E.; Koos, Ralf; Guenther, Rolf W.

    2005-01-01

    Multislice spiral computed tomography (MSCT) is a rapidly evolving, noninvasive technique for cardiac imaging. Knowledge of the principle of electrocardiogram-gated MSCT and its limitations in clinical routine are needed to optimize image quality. Therefore, the basic technical principle including essentials of image postprocessing is described. Cardiac MSCT imaging was initially focused on coronary calcium scoring, MSCT coronary angiography, and analysis of left ventricular function. Recent studies also evaluated the ability of cardiac MSCT to visualize myocardial infarction and assess valvular morphology. In combination with experimental approaches toward the assessment of aortic valve function and myocardial viability, cardiac MSCT holds the potential for a comprehensive examination of the heart using one single examination technique

  4. A technique for transferring a patient's smile line to a cone beam computed tomography (CBCT) image.

    Science.gov (United States)

    Bidra, Avinash S

    2014-08-01

    Fixed implant-supported prosthodontic treatment for patients requiring a gingival prosthesis often demands that bone and implant levels be apical to the patient's maximum smile line. This is to avoid the display of the prosthesis-tissue junction (the junction between the gingival prosthesis and natural soft tissues) and prevent esthetic failures. Recording a patient's lip position during maximum smile is invaluable for the treatment planning process. This article presents a simple technique for clinically recording and transferring the patient's maximum smile line to cone beam computed tomography (CBCT) images for analysis. The technique can help clinicians accurately determine the need for and amount of bone reduction required with respect to the maximum smile line and place implants in optimal positions. Copyright © 2014 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.

  5. Computed tomography assessment of the efficiency of different techniques for removal of root canal filling material

    International Nuclear Information System (INIS)

    Dall'agnol, Cristina; Barletta, Fernando Branco; Hartmann, Mateus Silveira Martins

    2008-01-01

    This study evaluated the efficiency of different techniques for removal of filling material from root canals, using computed tomography (CT). Sixty mesial roots from extracted human mandibular molars were used. Root canals were filled and, after 6 months, the teeth were randomly assigned to 3 groups, according to the root-filling removal technique: Group A - hand instrumentation with K-type files; Group B - reciprocating instrumentation with engine-driven K-type files; and Group C rotary instrumentation with engine-driven ProTaper system. CT scans were used to assess the volume of filling material inside the root canals before and after the removal procedure. In both moments, the area of filling material was outlined by an experienced radiologist and the volume of filling material was automatically calculated by the CT software program. Based on the volume of initial and residual filling material of each specimen, the percentage of filling material removed from the root canals by the different techniques was calculated. Data were analyzed statistically by ANOVA and chi-square test for linear trend (α=0.05). No statistically significant difference (p=0.36) was found among the groups regarding the percent means of removed filling material. The analysis of the association between the percentage of filling material removal (high or low) and the proposed techniques by chi-square test showed statistically significant difference (p=0.015), as most cases in group B (reciprocating technique) presented less than 50% of filling material removed (low percent removal). In conclusion, none of the techniques evaluated in this study was effective in providing complete removal of filling material from the root canals. (author)

  6. Computed tomography assessment of the efficiency of different techniques for removal of root canal filling material

    Energy Technology Data Exchange (ETDEWEB)

    Dall' agnol, Cristina; Barletta, Fernando Branco [Lutheran University of Brazil, Canoas, RS (Brazil). Dental School. Dept. of Dentistry and Endodontics]. E-mail: fbarletta@terra.com.br; Hartmann, Mateus Silveira Martins [Uninga Dental School, Passo Fundo, RS (Brazil). Postgraduate Program in Dentistry

    2008-07-01

    This study evaluated the efficiency of different techniques for removal of filling material from root canals, using computed tomography (CT). Sixty mesial roots from extracted human mandibular molars were used. Root canals were filled and, after 6 months, the teeth were randomly assigned to 3 groups, according to the root-filling removal technique: Group A - hand instrumentation with K-type files; Group B - reciprocating instrumentation with engine-driven K-type files; and Group C - rotary instrumentation with the engine-driven ProTaper system. CT scans were used to assess the volume of filling material inside the root canals before and after the removal procedure. At both time points, the area of filling material was outlined by an experienced radiologist and the volume of filling material was automatically calculated by the CT software program. Based on the volume of initial and residual filling material of each specimen, the percentage of filling material removed from the root canals by the different techniques was calculated. Data were analyzed statistically by ANOVA and the chi-square test for linear trend (α=0.05). No statistically significant difference (p=0.36) was found among the groups regarding the percent means of removed filling material. The analysis of the association between the percentage of filling material removal (high or low) and the proposed techniques by chi-square test showed a statistically significant difference (p=0.015), as most cases in group B (reciprocating technique) presented less than 50% of filling material removed (low percent removal). In conclusion, none of the techniques evaluated in this study was effective in providing complete removal of filling material from the root canals. (author)

  7. A Hybrid Chaotic Quantum Evolutionary Algorithm

    DEFF Research Database (Denmark)

    Cai, Y.; Zhang, M.; Cai, H.

    2010-01-01

    A hybrid chaotic quantum evolutionary algorithm is proposed to reduce the amount of computation, speed up convergence and restrain premature phenomena of the quantum evolutionary algorithm. The proposed algorithm adopts a chaotic initialization method to generate the initial population, which will form a pe...... tests. The presented algorithm is applied to urban traffic signal timing optimization and the effect is satisfactory....

  8. Computational modelling of the mechanics of trabecular bone and marrow using fluid structure interaction techniques.

    Science.gov (United States)

    Birmingham, E; Grogan, J A; Niebur, G L; McNamara, L M; McHugh, P E

    2013-04-01

    Bone marrow found within the porous structure of trabecular bone provides a specialized environment for numerous cell types, including mesenchymal stem cells (MSCs). Studies have sought to characterize the mechanical environment imposed on MSCs; however, a particular challenge is that marrow displays the characteristics of a fluid while surrounded by bone that is subject to deformation, and previous experimental and computational studies have been unable to fully capture the resulting complex mechanical environment. The objective of this study was to develop a fluid structure interaction (FSI) model of trabecular bone and marrow to predict the mechanical environment of MSCs in vivo and to examine how this environment changes during osteoporosis. An idealized repeating unit was used to compare FSI techniques to a computational fluid dynamics-only approach. These techniques were used to determine the effect of lower bone mass and different marrow viscosities, representative of osteoporosis, on the shear stress generated within bone marrow. The results show that shear stresses generated within bone marrow under physiological loading conditions are within the range known to stimulate a mechanobiological response in MSCs in vitro. Additionally, lower bone mass leads to an increase in the shear stress generated within the marrow, while a decrease in bone marrow viscosity reduces this generated shear stress.

  9. NNLO computational techniques: The cases H→γγ and H→gg

    Science.gov (United States)

    Actis, Stefano; Passarino, Giampiero; Sturm, Christian; Uccirati, Sandro

    2009-04-01

    A large set of techniques needed to compute decay rates at the two-loop level is derived and systematized. The main emphasis of the paper is on the two Standard Model decays H→γγ and H→gg. The techniques, however, have a much wider range of application: they give practical examples of general rules for two-loop renormalization; they introduce simple recipes for handling internal unstable particles in two-loop processes; they illustrate simple procedures for the extraction of collinear logarithms from the amplitude. The latter is particularly relevant to show cancellations, e.g. cancellation of collinear divergencies. Furthermore, the paper deals with the proper treatment of non-enhanced two-loop QCD and electroweak contributions to different physical (pseudo-)observables, showing how they can be transformed in a way that allows for a stable numerical integration. Numerical results for the two-loop percentage corrections to H→γγ,gg are presented and discussed. When applied to the process pp→gg+X→H+X, the results show that the electroweak scaling factor for the cross section is between -4% and +6% in the range 100 GeV ≤ M_H ≤ 500 GeV, without incongruent large effects around the physical electroweak thresholds, thereby showing that only a complete implementation of the computational scheme keeps two-loop corrections under control.

  10. High performance optical encryption based on computational ghost imaging with QR code and compressive sensing technique

    Science.gov (United States)

    Zhao, Shengmei; Wang, Le; Liang, Wenqiang; Cheng, Weiwen; Gong, Longyan

    2015-10-01

    In this paper, we propose a high performance optical encryption (OE) scheme based on computational ghost imaging (GI) with QR code and compressive sensing (CS) techniques, named the QR-CGI-OE scheme. N random phase screens, generated by Alice, serve as a secret key and are shared with the authorized user, Bob. The information is first encoded by Alice with a QR code, and the QR-coded image is then encrypted with the aid of a computational ghost imaging optical system. Here, the measurement results from the GI optical system's bucket detector constitute the encrypted information and are transmitted to Bob. With the key, Bob decrypts the encrypted information to obtain the QR-coded image using GI and CS techniques, and further recovers the information by QR decoding. The experimental and numerically simulated results show that authorized users can completely recover the original image, whereas eavesdroppers cannot acquire any information about the image even when the eavesdropping ratio (ER) is up to 60% at the given number of measurements. For the proposed scheme, the number of bits sent from Alice to Bob is reduced considerably and the robustness is enhanced significantly. Meanwhile, the number of measurements in the GI system is reduced and the quality of the reconstructed QR-coded image is improved.
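
    A rough Python sketch of the correlation step at the heart of computational ghost imaging is given below. It is not the authors' QR-CGI-OE implementation: the QR encoding/decoding and the compressive-sensing recovery are omitted, and all array sizes and patterns are made up for illustration.

      # Basic computational ghost imaging: recover an object by correlating
      # bucket-detector values with the known random patterns (the shared key).
      import numpy as np

      rng = np.random.default_rng(0)
      N, M = 32, 3000                                # image side, number of patterns (assumed)
      obj = np.zeros((N, N)); obj[8:24, 8:24] = 1.0  # toy object standing in for a QR-coded image
      patterns = rng.random((M, N, N))               # key: M random screens shared with Bob
      bucket = (patterns * obj).sum(axis=(1, 2))     # single-pixel (bucket) measurements

      # Correlation reconstruction: <B * I(x,y)> - <B><I(x,y)>
      recon = (bucket[:, None, None] * patterns).mean(axis=0) - bucket.mean() * patterns.mean(axis=0)
      recon = (recon - recon.min()) / (np.ptp(recon) + 1e-12)   # normalize for display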

  11. An innovative privacy preserving technique for incremental datasets on cloud computing.

    Science.gov (United States)

    Aldeen, Yousra Abdul Alsahib S; Salleh, Mazleena; Aljeroudi, Yazan

    2016-08-01

    Cloud computing (CC) is a magnificent service-based delivery with gigantic computer processing power and data storage across connected communications channels. It imparted overwhelming technological impetus in the internet (web) mediated IT industry, where users can easily share private data for further analysis and mining. Furthermore, user-affable CC services enable users to deploy sundry applications economically. Meanwhile, simple data sharing has invited various phishing attacks and malware-assisted security threats. Some privacy-sensitive applications, like health services on the cloud, that are built with several economic and operational benefits necessitate enhanced security. Thus, absolute cyberspace security and mitigation against phishing attacks became mandatory to protect overall data privacy. Typically, diverse application datasets are anonymized with better privacy to owners without providing all secrecy requirements to the newly added records. Some proposed techniques emphasized this issue by re-anonymizing the datasets from scratch. The utmost privacy protection over incremental datasets on CC is far from being achieved. Certainly, the distribution of huge dataset volumes across multiple storage nodes limits privacy preservation. In this view, we propose a new anonymization technique to attain better privacy protection with high data utility over distributed and incremental datasets on CC. The proficiency of data privacy preservation and improved confidentiality requirements is demonstrated through performance evaluation. Copyright © 2016 Elsevier Inc. All rights reserved.

  12. Evaluation of computer-based NDE techniques and regional support of inspection activities

    International Nuclear Information System (INIS)

    Taylor, T.T.; Kurtz, R.J.; Heasler, P.G.; Doctor, S.R.

    1991-01-01

    This paper describes the technical progress during fiscal year 1990 for the program entitled 'Evaluation of Computer-Based nondestructive evaluation (NDE) Techniques and Regional Support of Inspection Activities.' Highlights of the technical progress include: development of a seminar to provide basic knowledge required to review and evaluate computer-based systems; review of a typical computer-based field procedure to determine compliance with applicable codes, ambiguities in procedure guidance, and overall effectiveness and utility; design and fabrication of a series of three test blocks for NRC staff use for training or audit of UT systems; technical assistance in reviewing (1) San Onofre ten year reactor pressure vessel inservice inspection activities and (2) the capability of a proposed phased array inspection of the feedwater nozzle at Oyster Creek; completion of design calculations to determine the feasibility and significance of various sizes of mockup assemblies that could be used to evaluate the effectiveness of eddy current examinations performed on steam generators; and discussion of initial mockup design features and methods for fabricating flaws in steam generator tubes

  13. Review on the applications of the very high speed computing technique to atomic energy field

    International Nuclear Information System (INIS)

    Hoshino, Tsutomu

    1981-01-01

    The demand for calculation in the atomic energy field is enormous; the physical and technological knowledge obtained by experiments is summarized into mathematical models and accumulated as computer programs for design, safety analysis and operational management. These calculation code systems are classified into reactor physics, reactor technology, operational management and nuclear fusion. In this paper, the demand for calculation speed in the diffusion and transport of neutrons, shielding, technological safety, core control and particle simulation is explained as typical calculations. These calculations are divided into two models: the fluid model, which regards physical systems as a continuum, and the particle model, which regards physical systems as composed of a finite number of particles. The speed of present computers is too slow, and a capability 1000 to 10000 times that of present general-purpose machines is desirable. The calculation techniques of pipeline systems and parallel processor systems are described. As an example of a practical system, the computer network OCTOPUS at the Lawrence Livermore Laboratory is shown. Also, the CHI system at UCLA is introduced. (Kako, I.)

  14. An enhanced technique for mobile cloudlet offloading with reduced computation using compression in the cloud

    Science.gov (United States)

    Moro, A. C.; Nadesh, R. K.

    2017-11-01

    The cloud computing paradigm has transformed the way we do business in today's world. Services on the cloud have come a long way since just providing basic storage or software on demand. One of the fastest growing factors in this is mobile cloud computing. With the option of offloading now available, mobile users can offload entire applications onto cloudlets. With the problems regarding availability and the limited storage capacity of these mobile cloudlets, it becomes difficult for the mobile user to decide when to use local memory and when to use the cloudlets. Hence, we look at a fast algorithm that decides whether the mobile user should go to a cloudlet or rely on local memory, based on an offloading probability. We have partially implemented the algorithm, which decides whether the task can be carried out locally or given to a cloudlet. As it becomes a burden on mobile devices to perform the complete computation, we look to offload this onto a cloud in our paper. Further, we use a file compression technique before sending the file onto the cloud to further reduce the load.
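
    The decision logic described above can be illustrated with a small hedged sketch: choose between local execution and offloading from rough time estimates, and compress the payload before it is sent to the cloud. The thresholds, speeds and cycle counts below are hypothetical and are not taken from the paper.

      import zlib

      def should_offload(task_bytes, local_mips, remote_mips, bandwidth_bps, cycles_per_byte=1000):
          # Compare a rough local execution time with remote compute + transfer time.
          cycles = task_bytes * cycles_per_byte
          t_local = cycles / (local_mips * 1e6)
          t_remote = cycles / (remote_mips * 1e6) + 8 * task_bytes / bandwidth_bps
          return t_remote < t_local

      payload = b"sensor log " * 10000
      if should_offload(len(payload), local_mips=500, remote_mips=8000, bandwidth_bps=5e6):
          compressed = zlib.compress(payload, level=6)     # shrink the upload before sending
          print("offload:", len(payload), "->", len(compressed), "bytes")
      else:
          print("run locally")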

  15. Learning-based computing techniques in geoid modeling for precise height transformation

    Science.gov (United States)

    Erol, B.; Erol, S.

    2013-03-01

    Precise determination of the local geoid is of particular importance for establishing height control in geodetic GNSS applications, since the classical leveling technique is too laborious. A geoid model can be accurately obtained employing properly distributed benchmarks having GNSS and leveling observations using an appropriate computing algorithm. Besides the classical multivariable polynomial regression equations (MPRE), this study attempts an evaluation of learning-based computing algorithms: artificial neural networks (ANNs), the adaptive network-based fuzzy inference system (ANFIS) and especially the wavelet neural network (WNN) approach in geoid surface approximation. These algorithms were developed parallel to advances in computer technologies and recently have been used for solving complex nonlinear problems in many applications. However, they are rather new in dealing with the precise modeling problem of the Earth gravity field. In the scope of the study, these methods were applied to Istanbul GPS Triangulation Network data. The performances of the methods were assessed considering the validation results of the geoid models at the observation points. In conclusion, the ANFIS and WNN revealed higher prediction accuracies compared to the ANN and MPRE methods. Besides the prediction capabilities, these methods were also compared and discussed from the practical point of view in the conclusions.
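
    The classical MPRE baseline named above can be sketched in a few lines: geoid undulations N = h_GNSS - H_levelling at benchmarks are fitted with a low-order bivariate polynomial and then predicted at new GNSS points for height transformation. The data, polynomial degree and coordinates below are synthetic placeholders, not the Istanbul network data.

      import numpy as np

      def design_matrix(x, y):
          # 2nd-order bivariate polynomial terms: 1, x, y, x^2, x*y, y^2
          return np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])

      rng = np.random.default_rng(1)
      x, y = rng.uniform(0, 50, 200), rng.uniform(0, 50, 200)               # local coordinates (km)
      N_obs = 36.0 + 0.02 * x - 0.015 * y + 1e-4 * x * y + rng.normal(0, 0.02, 200)

      coef, *_ = np.linalg.lstsq(design_matrix(x, y), N_obs, rcond=None)    # fit the geoid surface

      # Height transformation at a new GNSS point: H = h_ellipsoidal - N_predicted
      xq, yq, h_gnss = 12.3, 40.1, 250.00
      N_pred = design_matrix(np.array([xq]), np.array([yq])) @ coef
      print("orthometric height ~", h_gnss - N_pred[0])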

  16. Signal Amplification Technique (SAT): an approach for improving resolution and reducing image noise in computed tomography

    International Nuclear Information System (INIS)

    Phelps, M.E.; Huang, S.C.; Hoffman, E.J.; Plummer, D.; Carson, R.

    1981-01-01

    Spatial resolution improvements in computed tomography (CT) have been limited by the large and unique error propagation properties of this technique. The desire to provide maximum image resolution has resulted in the use of reconstruction filter functions designed to produce tomographic images with resolution as close as possible to the intrinsic detector resolution. Thus, many CT systems produce images with excessive noise with the system resolution determined by the detector resolution rather than the reconstruction algorithm. CT is a rigorous mathematical technique which applies an increasing amplification to increasing spatial frequencies in the measured data. This mathematical approach to spatial frequency amplification cannot distinguish between signal and noise and therefore both are amplified equally. We report here a method in which tomographic resolution is improved by using very small detectors to selectively amplify the signal and not noise. Thus, this approach is referred to as the signal amplification technique (SAT). SAT can provide dramatic improvements in image resolution without increases in statistical noise or dose because increases in the cutoff frequency of the reconstruction algorithm are not required to improve image resolution. Alternatively, in cases where image counts are low, such as in rapid dynamic or receptor studies, statistical noise can be reduced by lowering the cutoff frequency while still maintaining the best possible image resolution. A possible system design for a positron CT system with SAT is described

  17. Nano-computed tomography. Technique and applications; Nanocomputertomografie. Technik und Applikationen

    Energy Technology Data Exchange (ETDEWEB)

    Kampschulte, M.; Sender, J.; Litzlbauer, H.D.; Althoehn, U.; Schwab, J.D.; Alejandre-Lafont, E.; Martels, G.; Krombach, G.A. [University Hospital Giessen (Germany). Dept. of Diagnostic and Interventional Radiology; Langheinirch, A.C. [BG Trauma Hospital Frankfurt/Main (Germany). Dept. of Diagnostic and Interventional Radiology

    2016-02-15

    Nano-computed tomography (nano-CT) is an emerging, high-resolution cross-sectional imaging technique and represents a technical advancement of the established micro-CT technology. Based on the application of a transmission target X-ray tube, the focal spot size can be decreased down to diameters less than 400 nanometers (nm). Together with specific detectors and examination protocols, a superior spatial resolution up to 400 nm (10 % MTF) can be achieved, thereby exceeding the resolution capacity of typical micro-CT systems. The technical concept of nano-CT imaging as well as the basics of specimen preparation are demonstrated exemplarily. Characteristics of atherosclerotic plaques (intraplaque hemorrhage and calcifications) in a murine model of atherosclerosis (ApoE(-/-)/LDLR(-/-) double knockout mouse) are demonstrated in the context of superior spatial resolution in comparison to micro-CT. Furthermore, this article presents the application of nano-CT for imaging cerebral microcirculation (murine), lung structures (porcine), and trabecular microstructure (ovine) in contrast to micro-CT imaging. This review shows the potential of nano-CT as a radiological method in biomedical basic research and discusses the application of experimental, high resolution CT techniques in consideration of other high resolution cross-sectional imaging techniques.

  18. Fundamentals of natural computing basic concepts, algorithms, and applications

    CERN Document Server

    de Castro, Leandro Nunes

    2006-01-01

    Introduction; A Small Sample of Ideas; The Philosophy of Natural Computing; The Three Branches: A Brief Overview; When to Use Natural Computing Approaches; Conceptualization; General Concepts. PART I - COMPUTING INSPIRED BY NATURE: Evolutionary Computing; Problem Solving as a Search Task; Hill Climbing and Simulated Annealing; Evolutionary Biology; Evolutionary Computing; The Other Main Evolutionary Algorithms; From Evolutionary Biology to Computing; Scope of Evolutionary Computing; Neurocomputing; The Nervous System; Artif

  19. Computational intelligence techniques for comparative genomics dedicated to Prof. Allam Appa Rao on the occasion of his 65th birthday

    CERN Document Server

    Gunjan, Vinit

    2015-01-01

    This Brief highlights informatics and related techniques for computer science professionals, engineers, medical doctors, bioinformatics researchers and other interdisciplinary researchers. Chapters include the bioinformatics of diabetes and several computational algorithms and statistical analysis approaches to effectively study the disorders and their possible causes, along with medical applications.

  20. Method-independent, Computationally Frugal Convergence Testing for Sensitivity Analysis Techniques

    Science.gov (United States)

    Mai, J.; Tolson, B.

    2017-12-01

    The increasing complexity and runtime of environmental models lead to the current situation that the calibration of all model parameters or the estimation of all of their uncertainty is often computationally infeasible. Hence, techniques to determine the sensitivity of model parameters are used to identify the most important parameters. All subsequent model calibrations or uncertainty estimation procedures then focus only on these subsets of parameters and are hence less computationally demanding. While the examination of the convergence of calibration and uncertainty methods is state-of-the-art, the convergence of the sensitivity methods is usually not checked. If anything, bootstrapping of the sensitivity results is used to determine the reliability of the estimated indexes. Bootstrapping, however, might as well become computationally expensive in the case of large model outputs and a high number of bootstraps. We, therefore, present a Model Variable Augmentation (MVA) approach to check the convergence of sensitivity indexes without performing any additional model run. This technique is method- and model-independent. It can be applied either during the sensitivity analysis (SA) or afterwards. The latter case enables the checking of already processed sensitivity indexes. To demonstrate the method's independency of the convergence testing method, we applied it to two widely used, global SA methods: the screening method known as the Morris method or Elementary Effects (Morris 1991) and the variance-based Sobol' method (Sobol' 1993). The new convergence testing method is first scrutinized using 12 analytical benchmark functions (Cuntz & Mai et al. 2015) where the true indexes of the aforementioned methods are known. This proof of principle shows that the method reliably determines the uncertainty of the SA results when different budgets are used for the SA. The results show that the new frugal method is able to test the convergence and therefore the reliability of SA results in an
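
    For readers unfamiliar with the screening method named above, the following sketch computes Morris-style elementary effects with a simplified one-at-a-time design (the MVA convergence test itself is not reproduced, and the toy model and trajectory design are illustrative only).

      import numpy as np

      def toy_model(p):                                 # stands in for the environmental model
          return p[0] ** 2 + 2.0 * p[1] + 0.1 * p[1] * p[2]

      def morris_mu_star(model, dim, r=20, delta=0.1, seed=0):
          rng = np.random.default_rng(seed)
          ee = np.zeros((r, dim))
          for t in range(r):
              base = rng.uniform(0, 1 - delta, dim)     # base point in the unit hypercube
              f0 = model(base)
              for i in range(dim):
                  pert = base.copy()
                  pert[i] += delta                      # one-at-a-time step
                  ee[t, i] = (model(pert) - f0) / delta # elementary effect of parameter i
          return np.abs(ee).mean(axis=0)                # mu*: larger means more influential

      print(morris_mu_star(toy_model, dim=3))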

  1. Patient size and x-ray technique factors in head computed tomography examinations. I. Radiation doses

    International Nuclear Information System (INIS)

    Huda, Walter; Lieberman, Kristin A.; Chang, Jack; Roskopf, Marsha L.

    2004-01-01

    We investigated how patient age, size and composition, together with the choice of x-ray technique factors, affect radiation doses in head computed tomography (CT) examinations. Head size dimensions, cross-sectional areas, and mean Hounsfield unit (HU) values were obtained from head CT images of 127 patients. For radiation dosimetry purposes patients were modeled as uniform cylinders of water. Dose computations were performed for 18x7 mm sections, scanned at a constant 340 mAs, for x-ray tube voltages ranging from 80 to 140 kV. Values of mean section dose, energy imparted, and effective dose were computed for patients ranging from the newborn to adults. There was a rapid growth of head size over the first two years, followed by a more modest increase of head size until the age of 18 or so. Newborns have a mean HU value of about 50 that monotonically increases with age over the first two decades of life. Average adult A-P and lateral dimensions were 186±8 mm and 147±8 mm, respectively, with an average HU value of 209±40. An infant head was found to be equivalent to a water cylinder with a radius of ∼60 mm, whereas an adult head had an equivalent radius 50% greater. Adult male head dimensions are about 5% larger than female head dimensions, and their average x-ray attenuation is ∼20 HU greater. For adult examinations performed at 120 kV, typical values were 32 mGy for the mean section dose, 105 mJ for the total energy imparted, and 0.64 mSv for the effective dose. Increasing the x-ray tube voltage from 80 to 140 kV increases patient doses by about a factor of 5. For the same technique factors, mean section doses in infants are 35% higher than in adults. Energy imparted for adults is 50% higher than for infants, but infant effective doses are four times higher than for adults. CT doses need to take into account patient age, head size, and composition as well as the selected x-ray technique factors.

  2. Application of computer generated color graphic techniques to the processing and display of three dimensional fluid dynamic data

    Science.gov (United States)

    Anderson, B. H.; Putt, C. W.; Giamati, C. C.

    1981-01-01

    Color coding techniques used in the processing of remote sensing imagery were adapted and applied to the fluid dynamics problems associated with turbofan mixer nozzles. The computer generated color graphics were found to be useful in reconstructing the measured flow field from low resolution experimental data to give more physical meaning to this information and in scanning and interpreting the large volume of computer generated data from the three dimensional viscous computer code used in the analysis.

  3. Validation of an Improved Computer-Assisted Technique for Mining Free-Text Electronic Medical Records.

    Science.gov (United States)

    Duz, Marco; Marshall, John F; Parkin, Tim

    2017-06-29

    The use of electronic medical records (EMRs) offers opportunity for clinical epidemiological research. With large EMR databases, automated analysis processes are necessary but require thorough validation before they can be routinely used. The aim of this study was to validate a computer-assisted technique using commercially available content analysis software (SimStat-WordStat v.6 (SS/WS), Provalis Research) for mining free-text EMRs. The dataset used for the validation process included life-long EMRs from 335 patients (17,563 rows of data), selected at random from a larger dataset (141,543 patients, ~2.6 million rows of data) and obtained from 10 equine veterinary practices in the United Kingdom. The ability of the computer-assisted technique to detect rows of data (cases) of colic, renal failure, right dorsal colitis, and non-steroidal anti-inflammatory drug (NSAID) use in the population was compared with manual classification. The first step of the computer-assisted analysis process was the definition of inclusion dictionaries to identify cases, including terms identifying a condition of interest. Words in inclusion dictionaries were selected from the list of all words in the dataset obtained in SS/WS. The second step consisted of defining an exclusion dictionary, including combinations of words to remove cases erroneously classified by the inclusion dictionary alone. The third step was the definition of a reinclusion dictionary to reinclude cases that had been erroneously classified by the exclusion dictionary. Finally, cases obtained by the exclusion dictionary were removed from cases obtained by the inclusion dictionary, and cases from the reinclusion dictionary were subsequently reincluded using R v3.0.2 (R Foundation for Statistical Computing, Vienna, Austria). Manual analysis was performed as a separate process by a single experienced clinician reading through the dataset once and classifying each row of data based on the interpretation of the free text.
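
    The three-dictionary logic described in the abstract (inclusion, exclusion, re-inclusion) can be sketched generically in Python; the study itself used SimStat-WordStat and R, and the terms below are illustrative rather than the validated dictionaries from the paper.

      INCLUDE   = {"colic", "colicky"}                       # flags candidate rows
      EXCLUDE   = {"no signs of colic", "colic ruled out"}   # removes false positives
      REINCLUDE = {"recurrent colic episode"}                # restores wrongly excluded rows

      def classify_row(text):
          t = text.lower()
          included = any(term in t for term in INCLUDE)
          excluded = any(phrase in t for phrase in EXCLUDE)
          restored = any(phrase in t for phrase in REINCLUDE)
          return included and (not excluded or restored)

      rows = ["Presented with colic, treated with flunixin.",
              "Examined; no signs of colic today.",
              "Recurrent colic episode despite management."]
      print([classify_row(r) for r in rows])                 # [True, False, True]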

  4. A Dataset and a Technique for Generalized Nuclear Segmentation for Computational Pathology.

    Science.gov (United States)

    Kumar, Neeraj; Verma, Ruchika; Sharma, Sanuj; Bhargava, Surabhi; Vahadane, Abhishek; Sethi, Amit

    2017-07-01

    Nuclear segmentation in digital microscopic tissue images can enable extraction of high-quality features for nuclear morphometrics and other analysis in computational pathology. Conventional image processing techniques, such as Otsu thresholding and watershed segmentation, do not work effectively on challenging cases, such as chromatin-sparse and crowded nuclei. In contrast, machine learning-based segmentation can generalize across various nuclear appearances. However, training machine learning algorithms requires data sets of images, in which a vast number of nuclei have been annotated. Publicly accessible and annotated data sets, along with widely agreed upon metrics to compare techniques, have catalyzed tremendous innovation and progress on other image classification problems, particularly in object recognition. Inspired by their success, we introduce a large publicly accessible data set of hematoxylin and eosin (H&E)-stained tissue images with more than 21000 painstakingly annotated nuclear boundaries, whose quality was validated by a medical doctor. Because our data set is taken from multiple hospitals and includes a diversity of nuclear appearances from several patients, disease states, and organs, techniques trained on it are likely to generalize well and work right out-of-the-box on other H&E-stained images. We also propose a new metric to evaluate nuclear segmentation results that penalizes object- and pixel-level errors in a unified manner, unlike previous metrics that penalize only one type of error. We also propose a segmentation technique based on deep learning that lays a special emphasis on identifying the nuclear boundaries, including those between the touching or overlapping nuclei, and works well on a diverse set of test images.

  5. Regularization Techniques for ECG Imaging during Atrial Fibrillation: a Computational Study

    Directory of Open Access Journals (Sweden)

    Carlos Figuera

    2016-10-01

    Full Text Available The inverse problem of electrocardiography is usually analyzed during stationary rhythms. However, the performance of the regularization methods under fibrillatory conditions has not been fully studied. In this work, we assessed different regularization techniques during atrial fibrillation (AF for estimating four target parameters, namely, epicardial potentials, dominant frequency (DF, phase maps, and singularity point (SP location. We use a realistic mathematical model of atria and torso anatomy with three different electrical activity patterns (i.e. sinus rhythm, simple AF and complex AF. Body surface potentials (BSP were simulated using Boundary Element Method and corrupted with white Gaussian noise of different powers. Noisy BSPs were used to obtain the epicardial potentials on the atrial surface, using fourteen different regularization techniques. DF, phase maps and SP location were computed from estimated epicardial potentials. Inverse solutions were evaluated using a set of performance metrics adapted to each clinical target. For the case of SP location, an assessment methodology based on the spatial mass function of the SP location and four spatial error metrics was proposed. The role of the regularization parameter for Tikhonov-based methods, and the effect of noise level and imperfections in the knowledge of the transfer matrix were also addressed. Results showed that the Bayes maximum-a-posteriori method clearly outperforms the rest of the techniques but requires a priori information about the epicardial potentials. Among the purely non-invasive techniques, Tikhonov-based methods performed as well as more complex techniques in realistic fibrillatory conditions, with a slight gain between 0.02 and 0.2 in terms of the correlation coefficient. Also, the use of a constant regularization parameter may be advisable since the performance was similar to that obtained with a variable parameter (indeed there was no difference for the zero
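
    As a minimal illustration of the Tikhonov-based family compared in the study, the sketch below solves a zero-order Tikhonov problem for a generic linear model b = A x + noise (a synthetic stand-in for the BEM transfer matrix and body-surface potentials; the actual geometries, noise model and the Bayesian method are not reproduced).

      import numpy as np

      rng = np.random.default_rng(0)
      n_torso, n_epi = 120, 60
      A = rng.normal(size=(n_torso, n_epi))             # stands in for the transfer matrix
      x_true = np.sin(np.linspace(0, 4 * np.pi, n_epi)) # synthetic epicardial potentials
      b = A @ x_true + 0.05 * rng.normal(size=n_torso)  # noisy body-surface potentials

      def tikhonov(A, b, lam):
          # x = argmin ||A x - b||^2 + lam^2 ||x||^2
          return np.linalg.solve(A.T @ A + lam**2 * np.eye(A.shape[1]), A.T @ b)

      for lam in (1e-3, 1e-1, 1.0):                     # regularization parameter trade-off
          x_hat = tikhonov(A, b, lam)
          print(lam, np.corrcoef(x_true, x_hat)[0, 1])  # correlation coefficient, as in the study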

  6. A practical technique for benefit-cost analysis of computer-aided design and drafting systems

    International Nuclear Information System (INIS)

    Shah, R.R.; Yan, G.

    1979-03-01

    Analyses of the benefits and costs associated with the operation of Computer-Aided Design and Drafting Systems (CADDS) are needed to derive economic justification for acquiring new systems, as well as to evaluate the performance of existing installations. In practice, however, such analyses are difficult to perform since most technical and economic advantages of CADDS are "irreducibles", i.e. cannot be readily translated into monetary terms. In this paper, a practical technique for economic analysis of CADDS in a drawing office environment is presented. A "worst case" approach is taken since increase in productivity of existing manpower is the only benefit considered, while all foreseen costs are taken into account. Methods of estimating benefits and costs are described. The procedure for performing the analysis is illustrated by a case study based on the drawing office activities at Atomic Energy of Canada Limited. (auth)

  7. Revealing −1 Programmed Ribosomal Frameshifting Mechanisms by Single-Molecule Techniques and Computational Methods

    Directory of Open Access Journals (Sweden)

    Kai-Chun Chang

    2012-01-01

    Full Text Available Programmed ribosomal frameshifting (PRF serves as an intrinsic translational regulation mechanism employed by some viruses to control the ratio between structural and enzymatic proteins. Most viral mRNAs which use PRF adapt an H-type pseudoknot to stimulate −1 PRF. The relationship between the thermodynamic stability and the frameshifting efficiency of pseudoknots has not been fully understood. Recently, single-molecule force spectroscopy has revealed that the frequency of −1 PRF correlates with the unwinding forces required for disrupting pseudoknots, and that some of the unwinding work dissipates irreversibly due to the torsional restraint of pseudoknots. Complementary to single-molecule techniques, computational modeling provides insights into global motions of the ribosome, whose structural transitions during frameshifting have not yet been elucidated in atomic detail. Taken together, recent advances in biophysical tools may help to develop antiviral therapies that target the ubiquitous −1 PRF mechanism among viruses.

  8. [Adverse Effect Predictions Based on Computational Toxicology Techniques and Large-scale Databases].

    Science.gov (United States)

    Uesawa, Yoshihiro

    2018-01-01

    Understanding the features of chemical structures related to the adverse effects of drugs is useful for identifying potential adverse effects of new drugs. This can be based on the limited information available from post-marketing surveillance, assessment of the potential toxicities of metabolites and illegal drugs with unclear characteristics, screening of lead compounds at the drug discovery stage, and identification of leads for the discovery of new pharmacological mechanisms. The present paper describes techniques used in computational toxicology to investigate the content of large-scale spontaneous report databases of adverse effects, illustrated with examples. Furthermore, volcano plotting, a new visualization method for clarifying the relationships between drugs and adverse effects via comprehensive analyses, is introduced. These analyses may produce a great amount of data that can be applied to drug repositioning.
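
    The volcano-plot idea mentioned above can be sketched as follows: each drug/adverse-event pair is placed by an effect size (here a log reporting odds ratio from a 2x2 contingency table) against -log10 of its p-value. The counts are synthetic; no real spontaneous-report data are used.

      import numpy as np
      import matplotlib.pyplot as plt
      from scipy.stats import fisher_exact

      rng = np.random.default_rng(3)
      log_or, neg_log_p = [], []
      for _ in range(200):                              # 200 hypothetical drug/event pairs
          a = rng.integers(1, 50)                       # reports with drug and event
          b = rng.integers(50, 500)                     # drug, no event
          c = rng.integers(1, 50)                       # event, other drugs
          d = rng.integers(500, 5000)                   # neither
          odds_ratio, p = fisher_exact([[a, b], [c, d]])
          log_or.append(np.log(odds_ratio))
          neg_log_p.append(-np.log10(p))

      plt.scatter(log_or, neg_log_p, s=10)
      plt.axhline(-np.log10(0.05), linestyle="--")      # nominal significance line
      plt.xlabel("log reporting odds ratio")
      plt.ylabel("-log10 p-value")
      plt.title("Volcano plot (synthetic counts)")
      plt.show()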

  9. Understanding refraction contrast using a comparison of absorption and refraction computed tomographic techniques

    Science.gov (United States)

    Wiebe, S.; Rhoades, G.; Wei, Z.; Rosenberg, A.; Belev, G.; Chapman, D.

    2013-05-01

    Refraction x-ray contrast is an imaging modality used primarily in a research setting at synchrotron facilities which have a biomedical imaging research program. The most common method for exploiting refraction contrast is a technique called Diffraction Enhanced Imaging (DEI). The DEI apparatus allows the detection of refraction between two materials and produces a unique "edge enhanced" contrast appearance, very different from the traditional absorption x-ray imaging used in clinical radiology. In this paper we aim to explain the features of x-ray refraction contrast in terms a typical clinical radiologist would understand. We then discuss what needs to be considered in the interpretation of the refraction image. Finally, we present a discussion of the limitations of planar refraction imaging and the potential of DEI computed tomography. This is an original work that has not been submitted to any other source for publication. The authors have no commercial interests or conflicts of interest to disclose.

  10. Quantification of ventilated facade efficiency by using computational fluid mechanics techniques

    International Nuclear Information System (INIS)

    Mora Perez, M.; Lopez Patino, G.; Bengochea Escribano, M. A.; Lopez Jimenez, P. A.

    2011-01-01

    In some countries, summer over-heating is a big problem in a building's energy balance. Ventilated facades are a useful tool when applied to building design, especially in bioclimatic building design. A ventilated facade is a complex, multi-layer structural solution that enables dry installation of the covering elements. The objective of this paper is to quantify the improvement in the building's thermal efficiency when this sort of facade is installed. These improvements are due to the convection produced in the air gap of the facade. This convection depends on the air movement inside the gap and the heat transmission in this motion. These quantities are mathematically modelled by Computational Fluid Dynamics (CFD) techniques using a commercial code: STAR CCM+. The proposed method allows an assessment of the energy potential of the ventilated facade and its capacity for cooling. (Author) 23 refs.

  11. An algebraic iterative reconstruction technique for differential X-ray phase-contrast computed tomography.

    Science.gov (United States)

    Fu, Jian; Schleede, Simone; Tan, Renbo; Chen, Liyuan; Bech, Martin; Achterhold, Klaus; Gifford, Martin; Loewen, Rod; Ruth, Ronald; Pfeiffer, Franz

    2013-09-01

    Iterative reconstruction has a wide spectrum of proven advantages in the field of conventional X-ray absorption-based computed tomography (CT). In this paper, we report on an algebraic iterative reconstruction technique for grating-based differential phase-contrast CT (DPC-CT). Due to the differential nature of DPC-CT projections, a differential operator and a smoothing operator are added to the iterative reconstruction, compared to the one commonly used for absorption-based CT data. This work comprises a numerical study of the algorithm and its experimental verification using a dataset measured at a two-grating interferometer setup. Since the algorithm is easy to implement and allows for the extension to various regularization possibilities, we expect a significant impact of the method for improving future medical and industrial DPC-CT applications. Copyright © 2012. Published by Elsevier GmbH.
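
    A generic algebraic reconstruction (Kaczmarz-type) iteration for a linear tomography model p = W f is sketched below. The DPC-CT algorithm described above additionally incorporates a differential operator and a smoothing operator into the update, which this hedged sketch omits; the system matrix here is random rather than a real projection geometry.

      import numpy as np

      def art(W, p, n_sweeps=20, relax=0.25):
          f = np.zeros(W.shape[1])
          row_norm2 = (W * W).sum(axis=1) + 1e-12
          for _ in range(n_sweeps):
              for i in range(W.shape[0]):               # one update per measured ray
                  resid = p[i] - W[i] @ f
                  f = f + relax * resid / row_norm2[i] * W[i]
          return f

      rng = np.random.default_rng(0)
      W = rng.random((80, 40))                          # toy system matrix (rays x pixels)
      f_true = rng.random(40)
      p = W @ f_true                                    # noiseless projections
      print(np.linalg.norm(art(W, p) - f_true) / np.linalg.norm(f_true))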

  12. Techniques for computing reactivity changes caused by fuel axial expansion in LMR's

    International Nuclear Information System (INIS)

    Khalil, H.

    1988-01-01

    An evaluation is made of the accuracy of methods used to compute reactivity changes caused by axial fuel relocation in fast reactors. Results are presented to demonstrate the validity of assumptions commonly made, such as linearity of reactivity with fuel elongation, additivity of local reactivity contributions, and the adequacy of standard perturbation techniques. Accurate prediction of the reactivity loss caused by axial swelling of metallic fuel is shown to require proper representation of the burnup dependence of the expansion reactivity. Some accuracy limitations in the methods used in transient analyses, which are based on the use of fuel worth tables, are identified, and efficient ways to improve accuracy are described. Implementation of these corrections produced expansion reactivity estimates within 5% of a higher-order method for a metal-fueled FFTF core representation. 18 refs., 3 figs., 3 tabs

  13. Computational efficiency improvement with Wigner rotation technique in studying atoms in intense few-cycle circularly polarized pulses

    International Nuclear Information System (INIS)

    Yuan, Minghu; Feng, Liqiang; Lü, Rui; Chu, Tianshu

    2014-01-01

    We show that by introducing Wigner rotation technique into the solution of time-dependent Schrödinger equation in length gauge, computational efficiency can be greatly improved in describing atoms in intense few-cycle circularly polarized laser pulses. The methodology with Wigner rotation technique underlying our openMP parallel computational code for circularly polarized laser pulses is described. Results of test calculations to investigate the scaling property of the computational code with the number of the electronic angular basis function l as well as the strong field phenomena are presented and discussed for the hydrogen atom

  14. APPLICATION OF SOFT COMPUTING TECHNIQUES FOR PREDICTING COOLING TIME REQUIRED DROPPING INITIAL TEMPERATURE OF MASS CONCRETE

    Directory of Open Access Journals (Sweden)

    Santosh Bhattarai

    2017-07-01

    Full Text Available Minimizing thermal cracks in mass concrete at an early age can be achieved by removing the hydration heat as quickly as possible within the initial cooling period, before the next lift is placed. Knowing the time needed to remove the hydration heat within the initial cooling period helps in making an effective and efficient decision on the temperature control plan in advance. The thermal properties of the concrete, the water cooling parameters and the construction parameters are the most influential factors involved in the process, and the relationships between these parameters are non-linear, complicated and not well understood. Some attempts have been made to understand and formulate the relationship taking account of the thermal properties of concrete and cooling water parameters. Thus, in this study, an effort has been made to formulate the relationship taking account of the thermal properties of concrete, water cooling parameters and a construction parameter, with the help of two soft computing techniques, namely genetic programming (GP software "Eureqa") and an artificial neural network (ANN). Relationships were developed from data available from a recently constructed high concrete double-curvature arch dam. The values of R for the relationship between the predicted and real cooling times from the GP and ANN models are 0.8822 and 0.9146, respectively. The relative impact of the input parameters on the target parameter was evaluated through sensitivity analysis, and the results reveal that the construction parameter influences the target parameter significantly. Furthermore, during the testing phase of the proposed models with an independent set of data, the absolute and relative errors were significantly low, which indicates that the prediction power of the employed soft computing techniques is satisfactory compared to the measured data.
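
    A generic ANN regression sketch in the spirit of the study is shown below (the paper used Eureqa for GP and a separate ANN model). The features are synthetic stand-ins for thermal, water-cooling and construction parameters, and the target is a fabricated cooling time, so the numbers mean nothing beyond illustrating the workflow.

      import numpy as np
      from sklearn.model_selection import train_test_split
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(7)
      n = 400
      X = np.column_stack([
          rng.uniform(2.0, 3.5, n),    # thermal conductivity-like feature
          rng.uniform(10.0, 20.0, n),  # cooling water inlet temperature
          rng.uniform(0.5, 2.0, n),    # pipe flow rate
          rng.uniform(1.5, 3.0, n),    # lift thickness (construction parameter)
      ])
      y = 5 + 2.1 * X[:, 3] ** 2 - 0.3 * X[:, 2] + 0.15 * (30 - X[:, 1]) + rng.normal(0, 0.3, n)

      Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
      model = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000, random_state=0)
      model.fit(Xtr, ytr)
      print("R on the test set:", np.corrcoef(yte, model.predict(Xte))[0, 1])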

  15. Wheeze sound analysis using computer-based techniques: a systematic review.

    Science.gov (United States)

    Ghulam Nabi, Fizza; Sundaraj, Kenneth; Chee Kiang, Lam; Palaniappan, Rajkumar; Sundaraj, Sebastian

    2017-10-31

    Wheezes are high-pitched continuous respiratory acoustic sounds which are produced as a result of airway obstruction. Computer-based analyses of wheeze signals have been extensively used for parametric analysis, spectral analysis, identification of airway obstruction, feature extraction and disease or pathology classification. While this area is currently an active field of research, the available literature has not yet been reviewed. This systematic review identified articles describing wheeze analyses using computer-based techniques in the SCOPUS, IEEE Xplore, ACM, PubMed, Springer and Elsevier electronic databases. After a set of selection criteria was applied, 41 articles were selected for detailed analysis. The findings reveal that 1) computerized wheeze analysis can be used for the identification of disease severity level or pathology, 2) further research is required to achieve acceptable rates of identification of the degree of airway obstruction with normal breathing, 3) analysis using combinations of features and on subgroups of the respiratory cycle has provided a pathway to classify various diseases or pathology that stem from airway obstruction.

  16. A review on computational fluid dynamic simulation techniques for Darrieus vertical axis wind turbines

    International Nuclear Information System (INIS)

    Ghasemian, Masoud; Ashrafi, Z. Najafian; Sedaghat, Ahmad

    2017-01-01

    Highlights: • A review on CFD simulation technique for Darrieus wind turbines is provided. • Recommendations and guidelines toward reliable and accurate simulations are presented. • Different progresses in CFD simulation of Darrieus wind turbines are addressed. - Abstract: The global warming threats, the presence of policies on support of renewable energies, and the desire for clean smart cities are the major drives for most recent researches on developing small wind turbines in urban environments. VAWTs (vertical axis wind turbines) are most appealing for energy harvesting in the urban environment. This is attributed due to structural simplicity, wind direction independency, no yaw mechanism required, withstand high turbulence winds, cost effectiveness, easier maintenance, and lower noise emission of VAWTs. This paper reviews recent published works on CFD (computational fluid dynamic) simulations of Darrieus VAWTs. Recommendations and guidelines are presented for turbulence modeling, spatial and temporal discretization, numerical schemes and algorithms, and computational domain size. The operating and geometrical parameters such as tip speed ratio, wind speed, solidity, blade number and blade shapes are fully investigated. The purpose is to address different progresses in simulations areas such as blade profile modification and optimization, wind turbine performance augmentation using guide vanes, wind turbine wake interaction in wind farms, wind turbine aerodynamic noise reduction, dynamic stall control, self-starting characteristics, and effects of unsteady and skewed wind conditions.

  17. The use of automatic programming techniques for fault tolerant computing systems

    Science.gov (United States)

    Wild, C.

    1985-01-01

    It is conjectured that the production of software for ultra-reliable computing systems such as required by Space Station, aircraft, nuclear power plants and the like will require a high degree of automation as well as fault tolerance. In this paper, the relationship between automatic programming techniques and fault tolerant computing systems is explored. Initial efforts in the automatic synthesis of code from assertions to be used for error detection as well as the automatic generation of assertions and test cases from abstract data type specifications is outlined. Speculation on the ability to generate truly diverse designs capable of recovery from errors by exploring alternate paths in the program synthesis tree is discussed. Some initial thoughts on the use of knowledge based systems for the global detection of abnormal behavior using expectations and the goal-directed reconfiguration of resources to meet critical mission objectives are given. One of the sources of information for these systems would be the knowledge captured during the automatic programming process.

  18. Simplified techniques of cerebral angiography using a mobile X-ray unit and computed radiography

    International Nuclear Information System (INIS)

    Gondo, Gakuji; Ishiwata, Yusuke; Yamashita, Toshinori; Iida, Takashi; Moro, Yutaka

    1989-01-01

    Simplified techniques of cerebral angiography using a mobile X-ray unit and computed radiography (CR) are discussed. Computed radiography is a digital radiography system in which an imaging plate is used as an X-ray detector and a final image is displayed on the film. In the angiograms performed with CR, the spatial frequency components can be enhanced for the easy analysis of fine blood vessels. Computed radiography has an automatic sensitivity and a latitude-setting mechanism, thus serving as an 'automatic camera.' This mechanism is useful for radiography with a mobile X-ray unit in hospital wards, intensive care units, or operating rooms where the appropriate setting of exposure conditions is difficult. We applied this mechanism to direct percutaneous carotid angiography and intravenous digital subtraction angiography with a mobile X-ray unit. Direct percutaneous carotid angiography using CR and a mobile X-ray unit were taken after the manual injection of a small amount of a contrast material through a fine needle. We performed direct percutaneous carotid angiography with this method 68 times on 25 cases from August 1986 to December 1987. Of the 68 angiograms, 61 were evaluated as good, compared with conventional angiography. Though the remaining seven were evaluated as poor, they were still diagnostically effective. This method is found useful for carotid angiography in emergency rooms, intensive care units, or operating rooms. Cerebral venography using CR and a mobile X-ray unit was done after the manual injection of a contrast material through the bilateral cubital veins. The cerebral venous system could be visualized from 16 to 24 seconds after the beginning of the injection of the contrast material. We performed cerebral venography with this method 14 times on six cases. These venograms were better than conventional angiograms in all cases. This method may be useful in managing patients suffering from cerebral venous thrombosis. (J.P.N.)

  19. Brain-computer interface: changes in performance using virtual reality techniques.

    Science.gov (United States)

    Ron-Angevin, Ricardo; Díaz-Estrella, Antonio

    2009-01-09

    The ability to control electroencephalographic (EEG) signals when different mental tasks are carried out would provide a method of communication for people with serious motor function problems. This system is known as a brain-computer interface (BCI). Due to the difficulty of controlling one's own EEG signals, a suitable training protocol is required to motivate subjects, as it is necessary to provide some type of visual feedback allowing subjects to see their progress. Conventional systems of feedback are based on simple visual presentations, such as a horizontal bar extension. However, virtual reality is a powerful tool with graphical possibilities to improve BCI-feedback presentation. The objective of the study is to explore the advantages of the use of feedback based on virtual reality techniques compared to conventional systems of feedback. Sixteen untrained subjects, divided into two groups, participated in the experiment. A group of subjects was trained using a BCI system, which uses conventional feedback (bar extension), and another group was trained using a BCI system, which submits subjects to a more familiar environment, such as controlling a car to avoid obstacles. The obtained results suggest that EEG behaviour can be modified via feedback presentation. Significant differences in classification error rates between both interfaces were obtained during the feedback period, confirming that an interface based on virtual reality techniques can improve the feedback control, specifically for untrained subjects.

  20. Efficient computation of the elastography inverse problem by combining variational mesh adaption and a clustering technique

    International Nuclear Information System (INIS)

    Arnold, Alexander; Bruhns, Otto T; Reichling, Stefan; Mosler, Joern

    2010-01-01

    This paper is concerned with an efficient implementation suitable for the elastography inverse problem. More precisely, the novel algorithm allows us to compute the unknown stiffness distribution in soft tissue by means of the measured displacement field by considerably reducing the numerical cost compared to previous approaches. This is realized by combining and further elaborating variational mesh adaption with a clustering technique similar to those known from digital image compression. Within the variational mesh adaption, the underlying finite element discretization is only locally refined if this leads to a considerable improvement of the numerical solution. Additionally, the numerical complexity is reduced by the aforementioned clustering technique, in which the parameters describing the stiffness of the respective soft tissue are sorted according to a predefined number of intervals. By doing so, the number of unknowns associated with the elastography inverse problem can be chosen explicitly. A positive side effect of this method is the reduction of artificial noise in the data (smoothing of the solution). The performance and the rate of convergence of the resulting numerical formulation are critically analyzed by numerical examples.
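
    The clustering step described above can be illustrated with a small sketch: per-element stiffness values are binned into a predefined number of intervals so that the inverse problem only has to identify one stiffness value per cluster. The stiffness field below is synthetic and the quantile-based binning is one possible choice, not necessarily the authors'.

      import numpy as np

      rng = np.random.default_rng(0)
      stiffness = rng.lognormal(mean=np.log(1.0e4), sigma=0.4, size=5000)   # Pa, one value per element

      n_clusters = 8
      edges = np.quantile(stiffness, np.linspace(0, 1, n_clusters + 1))     # interval bounds
      labels = np.digitize(stiffness, edges[1:-1])                          # cluster index per element

      # Each cluster is represented by a single unknown (here simply its mean),
      # which also smooths artificial noise in the recovered stiffness field.
      cluster_values = np.array([stiffness[labels == k].mean() for k in range(n_clusters)])
      quantized = cluster_values[labels]
      print(n_clusters, "unknowns instead of", stiffness.size)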

  1. a Holistic Approach for Inspection of Civil Infrastructures Based on Computer Vision Techniques

    Science.gov (United States)

    Stentoumis, C.; Protopapadakis, E.; Doulamis, A.; Doulamis, N.

    2016-06-01

    In this work, the 2D recognition and 3D modelling of concrete tunnel cracks through visual cues is examined. At present, the structural integrity inspection of large-scale infrastructures is mainly performed through visual observations by human inspectors, who identify structural defects, rate them and then categorize their severity. The described approach targets minimum human intervention, for autonomous inspection of civil infrastructures. The shortfalls of existing approaches in crack assessment are addressed by proposing a novel detection scheme. Although efforts have been made in the field, synergies among the proposed techniques are still missing. The holistic approach of this paper exploits state-of-the-art techniques of pattern recognition and stereo matching in order to build accurate 3D crack models. The innovation lies in the hybrid approach for the CNN detector initialization, and the use of the modified census transformation for stereo matching along with a binary fusion of two state-of-the-art optimization schemes. The described approach manages to deal with images of harsh radiometry, along with severe radiometric differences in the stereo pair. The effectiveness of this workflow is evaluated on a real dataset gathered in highway and railway tunnels. What is promising is that the computer vision workflow described in this work can be transferred, with adaptations of course, to other infrastructure such as pipelines, bridges and large industrial facilities that are in need of continuous state assessment during their operational life cycle.
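
    For readers unfamiliar with census-based matching costs, the sketch below implements the plain census transform and a per-pixel Hamming cost; the paper's modified census transform, CNN-initialized detector and optimization fusion are not reproduced, and the toy stereo pair is synthetic.

      import numpy as np

      def census_transform(img, win=5):
          # Encode each pixel by comparing it with its (win*win - 1) neighbours.
          r = win // 2
          h, w = img.shape
          codes = np.zeros((h, w), dtype=np.uint64)
          padded = np.pad(img, r, mode="edge")
          for dy in range(-r, r + 1):
              for dx in range(-r, r + 1):
                  if dy == 0 and dx == 0:
                      continue
                  neighbour = padded[r + dy:r + dy + h, r + dx:r + dx + w]
                  codes = (codes << np.uint64(1)) | (neighbour < img).astype(np.uint64)
          return codes

      def hamming(a, b):
          # Per-pixel Hamming distance between two census code maps.
          x = a ^ b
          dist = np.zeros(x.shape, dtype=np.uint8)
          while np.any(x):
              dist += (x & np.uint64(1)).astype(np.uint8)
              x = x >> np.uint64(1)
          return dist

      left = np.random.default_rng(0).random((64, 64)).astype(np.float32)
      right = np.roll(left, 3, axis=1)                  # toy stereo pair with 3 px disparity
      cost = hamming(census_transform(left), census_transform(np.roll(right, -3, axis=1)))
      print("mean matching cost at the true disparity:", cost.mean())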

  2. A New Heuristic Anonymization Technique for Privacy Preserved Datasets Publication on Cloud Computing

    Science.gov (United States)

    Aldeen Yousra, S.; Mazleena, Salleh

    2018-05-01

    Recent advancement in Information and Communication Technologies (ICT) has created strong demand for cloud services that share users' private data. Data from various organizations are a vital information source for analysis and research. Generally, this sensitive or private data involves medical, census, voter registration, social network, and customer services. The primary concern of cloud service providers in data publishing is to hide the sensitive information of individuals. One of the cloud services that fulfills these confidentiality concerns is Privacy Preserving Data Mining (PPDM). The PPDM service in Cloud Computing (CC) enables data publishing with minimized distortion and absolute privacy. In this method, datasets are anonymized via generalization to accomplish the privacy requirements. However, the well-known privacy preserving data mining technique called K-anonymity suffers from several limitations. To surmount those shortcomings, a new heuristic anonymization framework is proposed for preserving the privacy of sensitive datasets when publishing on the cloud. The advantages of the K-anonymity, L-diversity and (α, k)-anonymity methods for efficient information utilization and privacy protection are emphasized. Experimental results revealed the superiority and outperformance of the developed technique over the K-anonymity, L-diversity, and (α, k)-anonymity measures.
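
    The k-anonymity and l-diversity notions named above can be made concrete with a short check over a generalized toy table (the proposed heuristic framework itself is not reproduced; the records and quasi-identifiers below are illustrative).

      from collections import defaultdict

      # Each row: generalized quasi-identifiers (age range, ZIP prefix) plus a sensitive value.
      rows = [
          ("30-39", "476**", "flu"),
          ("30-39", "476**", "cancer"),
          ("30-39", "476**", "flu"),
          ("40-49", "479**", "diabetes"),
          ("40-49", "479**", "flu"),
      ]

      def is_k_anonymous(rows, k):
          groups = defaultdict(int)
          for *qi, _sensitive in rows:
              groups[tuple(qi)] += 1
          return all(count >= k for count in groups.values())     # every QI group has >= k rows

      def is_l_diverse(rows, l):
          groups = defaultdict(set)
          for *qi, sensitive in rows:
              groups[tuple(qi)].add(sensitive)
          return all(len(vals) >= l for vals in groups.values())  # >= l distinct sensitive values

      print(is_k_anonymous(rows, k=2), is_l_diverse(rows, l=2))   # True True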

  3. A HOLISTIC APPROACH FOR INSPECTION OF CIVIL INFRASTRUCTURES BASED ON COMPUTER VISION TECHNIQUES

    Directory of Open Access Journals (Sweden)

    C. Stentoumis

    2016-06-01

    Full Text Available In this work, the 2D recognition and 3D modelling of concrete tunnel cracks through visual cues is examined. At present, the structural integrity inspection of large-scale infrastructures is mainly performed through visual observations by human inspectors, who identify structural defects, rate them and then categorize their severity. The described approach targets minimum human intervention, for autonomous inspection of civil infrastructures. The shortfalls of existing approaches in crack assessment are addressed by proposing a novel detection scheme. Although efforts have been made in the field, synergies among the proposed techniques are still missing. The holistic approach of this paper exploits state-of-the-art techniques of pattern recognition and stereo matching in order to build accurate 3D crack models. The innovation lies in the hybrid approach for the CNN detector initialization, and the use of the modified census transformation for stereo matching along with a binary fusion of two state-of-the-art optimization schemes. The described approach manages to deal with images of harsh radiometry, along with severe radiometric differences in the stereo pair. The effectiveness of this workflow is evaluated on a real dataset gathered in highway and railway tunnels. What is promising is that the computer vision workflow described in this work can be transferred, with adaptations of course, to other infrastructure such as pipelines, bridges and large industrial facilities that are in need of continuous state assessment during their operational life cycle.

  4. Computed tomography automatic exposure control techniques in 18F-FDG oncology PET-CT scanning.

    Science.gov (United States)

    Iball, Gareth R; Tout, Deborah

    2014-04-01

    Computed tomography (CT) automatic exposure control (AEC) systems are now used in all modern PET-CT scanners. A collaborative study was undertaken to compare AEC techniques of the three major PET-CT manufacturers for fluorine-18 fluorodeoxyglucose half-body oncology imaging. An audit of 70 patients was performed for half-body CT scans taken on a GE Discovery 690, Philips Gemini TF and Siemens Biograph mCT (all 64-slice CT). Patient demographic and dose information was recorded and image noise was calculated as the SD of Hounsfield units in the liver. A direct comparison of the AEC systems was made by scanning a Rando phantom on all three systems for a range of AEC settings. The variation in dose and image quality with patient weight was significantly different for all three systems, with the GE system showing the largest variation in dose with weight and Philips the least. Image noise varied with patient weight in Philips and Siemens systems but was constant for all weights in GE. The z-axis mA profiles from the Rando phantom demonstrate that these differences are caused by the nature of the tube current modulation techniques applied. The mA profiles varied considerably according to the AEC settings used. CT AEC techniques from the three manufacturers yield significantly different tube current modulation patterns and hence deliver different doses and levels of image quality across a range of patient weights. Users should be aware of how their system works and of steps that could be taken to optimize imaging protocols.

  5. Evolutionary engineering for industrial microbiology.

    Science.gov (United States)

    Vanee, Niti; Fisher, Adam B; Fong, Stephen S

    2012-01-01

    Superficially, evolutionary engineering is a paradoxical field that balances competing interests. In natural settings, evolution iteratively selects and enriches subpopulations that are best adapted to a particular ecological niche using random processes such as genetic mutation. In engineering, desired approaches utilize rational, prospective design to address targeted problems. When considering the details of evolutionary and engineering processes, more commonality can be found. Engineering relies on detailed knowledge of the problem parameters and design properties in order to predict design outcomes that would be an optimized solution. When detailed knowledge of a system is lacking, engineers often employ algorithmic search strategies to identify empirical solutions. Evolution epitomizes this iterative optimization by continuously diversifying design options from a parental design, and then selecting the progeny designs that represent satisfactory solutions. In this chapter, the technique of applying the natural principles of evolution to engineer microbes for industrial applications is discussed to highlight the challenges and principles of evolutionary engineering.

  6. Estimating true evolutionary distances under the DCJ model.

    Science.gov (United States)

    Lin, Yu; Moret, Bernard M E

    2008-07-01

    Modern techniques can yield the ordering and strandedness of genes on each chromosome of a genome; such data already exists for hundreds of organisms. The evolutionary mechanisms through which the set of the genes of an organism is altered and reordered are of great interest to systematists, evolutionary biologists, comparative genomicists and biomedical researchers. Perhaps the most basic concept in this area is that of evolutionary distance between two genomes: under a given model of genomic evolution, how many events most likely took place to account for the difference between the two genomes? We present a method to estimate the true evolutionary distance between two genomes under the 'double-cut-and-join' (DCJ) model of genome rearrangement, a model under which a single multichromosomal operation accounts for all genomic rearrangement events: inversion, transposition, translocation, block interchange and chromosomal fusion and fission. Our method relies on a simple structural characterization of a genome pair and is both analytically and computationally tractable. We provide analytical results to describe the asymptotic behavior of genomes under the DCJ model, as well as experimental results on a wide variety of genome structures to exemplify the very high accuracy (and low variance) of our estimator. Our results provide a tool for accurate phylogenetic reconstruction from multichromosomal gene rearrangement data as well as a theoretical basis for refinements of the DCJ model to account for biological constraints. All of our software is available in source form under GPL at http://lcbb.epfl.ch.
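
    For context (a standard result, not quoted from the abstract), the minimum DCJ distance that such estimators start from is computed from the adjacency graph of the two genomes as

    ```latex
    d_{\mathrm{DCJ}} = N - \left( C + \frac{I}{2} \right)
    ```

    where N is the number of shared markers (genes), C the number of cycles, and I the number of odd-length paths in the adjacency graph; the estimator described above then corrects this minimum towards the true (typically larger) number of rearrangement events.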

  7. Attractive evolutionary equilibria

    NARCIS (Netherlands)

    Joosten, Reinoud A.M.G.; Roorda, Berend

    2011-01-01

    We present attractiveness, a refinement criterion for evolutionary equilibria. Equilibria surviving this criterion are robust to small perturbations of the underlying payoff system or the dynamics at hand. Furthermore, certain attractive equilibria are equivalent to others for certain evolutionary

  8. Examination of China's performance and thematic evolution in quantum cryptography research using quantitative and computational techniques.

    Directory of Open Access Journals (Sweden)

    Nicholas V Olijnyk

    Full Text Available This study performed two phases of analysis to shed light on the performance and thematic evolution of China's quantum cryptography (QC) research. First, large-scale research publication metadata derived from QC research published from 2001-2017 was used to examine the research performance of China relative to that of global peers using established quantitative and qualitative measures. Second, this study identified the thematic evolution of China's QC research using co-word cluster network analysis, a computational science mapping technique. The results from the first phase indicate that over the past 17 years, China's performance has evolved dramatically, placing it in a leading position. Among the most significant findings is the exponential rate at which all of China's performance indicators (i.e., Publication Frequency, citation score, H-index) are growing. China's H-index (a normalized indicator) has surpassed all other countries' over the last several years. The second phase of analysis shows how China's main research focus has shifted among several QC themes, including quantum-key-distribution, photon-optical communication, network protocols, and quantum entanglement with an emphasis on applied research. Several themes were observed across time periods (e.g., photons, quantum-key-distribution, secret-messages, quantum-optics, quantum-signatures); some themes disappeared over time (e.g., computer-networks, attack-strategies, bell-state, polarization-state), while others emerged more recently (e.g., quantum-entanglement, decoy-state, unitary-operation). Findings from the first phase of analysis provide empirical evidence that China has emerged as the global driving force in QC. Considering China is the premier driving force in global QC research, findings from the second phase of analysis provide an understanding of China's QC research themes, which can provide clarity into how QC technologies might take shape. QC and science and technology

  9. Examination of China's performance and thematic evolution in quantum cryptography research using quantitative and computational techniques.

    Science.gov (United States)

    Olijnyk, Nicholas V

    2018-01-01

    This study performed two phases of analysis to shed light on the performance and thematic evolution of China's quantum cryptography (QC) research. First, large-scale research publication metadata derived from QC research published from 2001-2017 was used to examine the research performance of China relative to that of global peers using established quantitative and qualitative measures. Second, this study identified the thematic evolution of China's QC research using co-word cluster network analysis, a computational science mapping technique. The results from the first phase indicate that over the past 17 years, China's performance has evolved dramatically, placing it in a leading position. Among the most significant findings is the exponential rate at which all of China's performance indicators (i.e., Publication Frequency, citation score, H-index) are growing. China's H-index (a normalized indicator) has surpassed all other countries' over the last several years. The second phase of analysis shows how China's main research focus has shifted among several QC themes, including quantum-key-distribution, photon-optical communication, network protocols, and quantum entanglement with an emphasis on applied research. Several themes were observed across time periods (e.g., photons, quantum-key-distribution, secret-messages, quantum-optics, quantum-signatures); some themes disappeared over time (e.g., computer-networks, attack-strategies, bell-state, polarization-state), while others emerged more recently (e.g., quantum-entanglement, decoy-state, unitary-operation). Findings from the first phase of analysis provide empirical evidence that China has emerged as the global driving force in QC. Considering China is the premier driving force in global QC research, findings from the second phase of analysis provide an understanding of China's QC research themes, which can provide clarity into how QC technologies might take shape. QC and science and technology policy researchers

  10. Prediction of monthly regional groundwater levels through hybrid soft-computing techniques

    Science.gov (United States)

    Chang, Fi-John; Chang, Li-Chiu; Huang, Chien-Wei; Kao, I.-Feng

    2016-10-01

    Groundwater systems are intrinsically heterogeneous with dynamic temporal-spatial patterns, which causes great difficulty in quantifying their complex processes, while reliable predictions of regional groundwater levels are commonly needed for managing water resources to ensure proper service of water demands within a region. In this study, we proposed a novel and flexible soft-computing technique that could effectively extract the complex high-dimensional input-output patterns of basin-wide groundwater-aquifer systems in an adaptive manner. The soft-computing models combined the Self-Organizing Map (SOM) and the Nonlinear Autoregressive with Exogenous Inputs (NARX) network for predicting monthly regional groundwater levels based on hydrologic forcing data. The SOM could effectively classify the temporal-spatial patterns of regional groundwater levels, the NARX could accurately predict the mean of regional groundwater levels for adjusting the selected SOM, and Kriging was used to interpolate the predictions of the adjusted SOM onto finer grids of locations, so that a prediction of the monthly regional groundwater level map could be obtained. The Zhuoshui River basin in Taiwan was the study case, and its monthly data sets collected from 203 groundwater stations, 32 rainfall stations and 6 flow stations from 2000 to 2013 were used for modelling purposes. The results demonstrated that the hybrid SOM-NARX model could reliably and suitably predict monthly basin-wide groundwater levels with high correlations (R2 > 0.9 in both training and testing cases). The proposed methodology represents a milestone in modelling regional environmental issues and offers an insightful and promising way to predict monthly basin-wide groundwater levels, which is beneficial to authorities for sustainable water resources management.
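
    A minimal sketch of the SOM classification stage is shown below; it uses the third-party minisom package and synthetic data shaped like the study's 203-station monthly records, so the package choice, data, and map size are all assumptions rather than the authors' implementation.

    ```python
    import numpy as np
    from minisom import MiniSom  # assumed third-party package

    # Synthetic stand-in data: one row per month, one column per groundwater station.
    rng = np.random.default_rng(0)
    monthly_patterns = rng.normal(size=(168, 203))  # ~14 years x 12 months, 203 stations

    # Train a small self-organizing map to cluster the spatial patterns.
    som = MiniSom(4, 4, input_len=203, sigma=1.0, learning_rate=0.5, random_seed=0)
    som.train_random(monthly_patterns, num_iteration=5000)

    # Each month is assigned to its best-matching SOM node (a spatial pattern class);
    # a separate NARX-type model would then predict the regional mean level used to
    # adjust the selected node before Kriging interpolation onto a finer grid.
    classes = [som.winner(p) for p in monthly_patterns]
    print(classes[:5])
    ```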

  11. Time-Domain Techniques for Computation and Reconstruction of One-Dimensional Profiles

    Directory of Open Access Journals (Sweden)

    M. Rahman

    2005-01-01

    Full Text Available This paper presents a time-domain technique to compute the electromagnetic fields and to reconstruct the permittivity profile within a one-dimensional medium of finite length. The medium is characterized by a permittivity as well as a conductivity profile which vary only with depth. The discussed scattering problem is thus one-dimensional. The modeling tool is divided into two different schemes, named the forward solver and the inverse solver. The task of the forward solver is to compute the internal fields of the specimen, which is performed by a Green's function approach. When a known electromagnetic wave is incident normally on the medium, the resulting electromagnetic field within the medium can be calculated by constructing a Green's operator. This operator maps the incident field on either side of the medium to the field at an arbitrary observation point. It is a matrix of integral operators with kernels satisfying known partial differential equations. The reflection and transmission behavior of the medium is also determined from the boundary values of the Green's operator. The inverse solver is responsible for solving an inverse scattering problem by reconstructing the permittivity profile of the medium. Though it is possible to use several algorithms to solve this problem, the invariant embedding method, also known as the layer-stripping method, has been implemented here due to the advantage that it requires only a finite time trace of reflection data. Here only one round trip of reflection data is used, where one round trip is defined by the time required by the pulse to propagate through the medium and back again. The inversion process begins by retrieving the reflection kernel from the reflected wave data by simply using a deconvolution technique. The rest of the task can easily be performed by applying a numerical approach to determine the different profile parameters. Both the solvers have been found to have the

  12. Evolutionary Stable Strategy

    Indian Academy of Sciences (India)

    Evolutionary Stable Strategy: Application of Nash Equilibrium in Biology. General Article. Resonance – Journal of Science Education, Volume 21, Issue 9, September 2016, pp 803- ... Keywords: Evolutionary game theory, evolutionary stable state, conflict, cooperation, biological games.

  13. Evaluation of two iterative techniques for reducing metal artifacts in computed tomography.

    Science.gov (United States)

    Boas, F Edward; Fleischmann, Dominik

    2011-06-01

    To evaluate two methods for reducing metal artifacts in computed tomography (CT)--the metal deletion technique (MDT) and the selective algebraic reconstruction technique (SART)--and compare these methods with filtered back projection (FBP) and linear interpolation (LI). The institutional review board approved this retrospective HIPAA-compliant study; informed patient consent was waived. Simulated projection data were calculated for a phantom that contained water, soft tissue, bone, and iron. Clinical projection data were obtained retrospectively from 11 consecutively identified CT scans with metal streak artifacts, with a total of 178 sections containing metal. Each scan was reconstructed using FBP, LI, SART, and MDT. The simulated scans were evaluated quantitatively by calculating the average error in Hounsfield units for each pixel compared with the original phantom. Two radiologists who were blinded to the reconstruction algorithms used qualitatively evaluated the clinical scans, ranking the overall severity of artifacts for each algorithm. P values for comparisons of the image quality ranks were calculated from the binomial distribution. The simulations showed that MDT reduces artifacts due to photon starvation, beam hardening, and motion and does not introduce new streaks between metal and bone. MDT had the lowest average error (76% less than FBP, 42% less than LI, 17% less than SART). Blinded comparison of the clinical scans revealed that MDT had the best image quality 100% of the time (95% confidence interval: 72%, 100%). LI had the second best image quality, and SART and FBP had the worst image quality. On images from two CT scans, as compared with images generated by the scanner, MDT revealed information of potential clinical importance. For a wide range of scans, MDT yields reduced metal streak artifacts and better-quality images than does FBP, LI, or SART. http://radiology.rsna.org/lookup/suppl/doi:10.1148/radiol.11101782/-/DC1. RSNA, 2011
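
    As a point of reference for the LI method compared above, the sketch below shows a generic linear-interpolation correction of metal-affected sinogram bins; it is not the authors' implementation, and the array shapes are assumptions.

    ```python
    import numpy as np

    def interpolate_metal_trace(sinogram, metal_mask):
        """For each projection view, replace detector bins flagged as metal with a
        linear interpolation between the nearest unaffected bins (classic LI MAR).
        sinogram and metal_mask are (num_views, num_bins) arrays."""
        corrected = sinogram.astype(float).copy()
        bins = np.arange(sinogram.shape[1])
        for view in range(sinogram.shape[0]):
            bad = metal_mask[view]
            if bad.any() and (~bad).any():
                corrected[view, bad] = np.interp(bins[bad], bins[~bad],
                                                 sinogram[view, ~bad])
        return corrected
    ```

    The corrected sinogram is then reconstructed with filtered back projection; MDT, broadly speaking, instead deletes the metal trace and iteratively refills it using forward projections of the evolving reconstruction.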

  14. A comparative analysis among computational intelligence techniques for dissolved oxygen prediction in Delaware River

    Directory of Open Access Journals (Sweden)

    Ehsan Olyaie

    2017-05-01

    Full Text Available Most of the water quality models previously developed and used in dissolved oxygen (DO) prediction are complex. Moreover, reliable data available to develop/calibrate new DO models is scarce. Therefore, there is a need to study and develop models that can handle easily measurable parameters of a particular site, even with short record lengths. In recent decades, computational intelligence techniques, as effective approaches for predicting complicated and significant indicators of the state of aquatic ecosystems such as DO, have created a great change in predictions. In this study, three different AI methods were used for DO prediction in the Delaware River at Trenton, USA: (1) two types of artificial neural networks (ANN), namely the multilayer perceptron (MLP) and the radial basis function (RBF) network; (2) an advancement of genetic programming, namely linear genetic programming (LGP); and (3) a support vector machine (SVM) technique. For evaluating the performance of the proposed models, the root mean square error (RMSE), Nash–Sutcliffe efficiency coefficient (NS), mean absolute relative error (MARE) and correlation coefficient (R) statistics were used to choose the best predictive model. The comparison of estimation accuracies of the various intelligence models illustrated that the SVM was able to develop the most accurate model for DO estimation in comparison to the other models. It was also found that the LGP model performs better than both ANN models. For example, the determination coefficient was 0.99 for the best SVM model, while it was 0.96, 0.91 and 0.81 for the best LGP, MLP and RBF models, respectively. In general, the results indicated that an SVM model could be employed satisfactorily in DO estimation.
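
    The evaluation statistics named above have standard definitions; a minimal sketch (assuming NumPy arrays of observed and simulated DO values) is:

    ```python
    import numpy as np

    def rmse(obs, sim):
        obs, sim = np.asarray(obs, float), np.asarray(sim, float)
        return float(np.sqrt(np.mean((obs - sim) ** 2)))

    def nash_sutcliffe(obs, sim):
        """NS = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2); 1 is a perfect fit."""
        obs, sim = np.asarray(obs, float), np.asarray(sim, float)
        return float(1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2))
    ```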

  15. STUDY OF IMAGE SEGMENTATION TECHNIQUES ON RETINAL IMAGES FOR HEALTH CARE MANAGEMENT WITH FAST COMPUTING

    Directory of Open Access Journals (Sweden)

    Srikanth Prabhu

    2012-02-01

    Full Text Available The role of segmentation in image processing is to separate the foreground from the background. In this process, the features become clearly visible when appropriate filters are applied to the image. In this paper, emphasis has been laid on the segmentation of biometric retinal images to filter out the vessels explicitly for evaluating bifurcation points and features for diabetic retinopathy. Segmentation of images is performed by calculating ridges or by morphology. Ridges are those areas in the images where there is sharp contrast in features. Morphology targets the features using structuring elements. Structuring elements are of different shapes, such as a disk or a line, which are used for extracting features of those shapes. When segmentation was performed on retinal images, problems were encountered during the image pre-processing stage. Edge detection techniques have also been deployed to find the contours of the retinal images. After segmentation had been performed, it was seen that artifacts of the retinal images were minimal when the ridge-based segmentation technique was deployed. In the field of health care management, image segmentation has an important role to play, as it determines whether a person is healthy or has a disease, especially diabetes. During the process of segmentation, diseased features are classified as diseased ones or as artifacts. The problem arises when artifacts are classified as diseased ones. This results in misclassification, which is discussed in the analysis section. We have achieved fast computing with better performance, in terms of speed for non-repeating features, when compared to repeating features.
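
    A rough sketch of the ridge-plus-morphology pipeline described above is given below; it relies on scikit-image, and the filter choice (Frangi vesselness), threshold, and structuring-element sizes are illustrative assumptions rather than the paper's exact settings.

    ```python
    from skimage import io, color, filters, morphology

    def extract_vessels(path):
        """Enhance ridge-like (vessel) structures, threshold, then clean the mask
        with morphological opening using a disk structuring element."""
        gray = color.rgb2gray(io.imread(path))
        ridges = filters.frangi(gray)                        # ridge/vesselness response
        mask = ridges > filters.threshold_otsu(ridges)       # global threshold
        mask = morphology.opening(mask, morphology.disk(1))  # suppress small artifacts
        return morphology.remove_small_objects(mask, min_size=50)
    ```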

  16. Patient size and x-ray technique factors in head computed tomography examinations. II. Image quality

    International Nuclear Information System (INIS)

    Huda, Walter; Lieberman, Kristin A.; Chang, Jack; Roskopf, Marsha L.

    2004-01-01

    We investigated how patient head characteristics, as well as the choice of x-ray technique factors, affect lesion contrast and noise values in computed tomography (CT) images. Head sizes and mean Hounsfield unit (HU) values were obtained from head CT images for five classes of patients ranging from the newborn to adults. X-ray spectra with tube voltages ranging from 80 to 140 kV were used to compute the average photon energy, and energy fluence, transmitted through the heads of patients of varying size. Image contrast, and the corresponding contrast to noise ratios (CNRs), were determined for lesions of fat, muscle, and iodine relative to a uniform water background. Maintaining a constant image CNR for each lesion, the patient energy imparted was also computed to identify the x-ray tube voltage that minimized the radiation dose. For adults, increasing the tube voltage from 80 to 140 kV changed the iodine HU from 2.62x10^5 to 1.27x10^5, the fat HU from -138 to -108, and the muscle HU from 37.1 to 33.0. Increasing the x-ray tube voltage from 80 to 140 kV increased the percentage energy fluence transmission by up to a factor of 2. For a fixed x-ray tube voltage, the percentage transmitted energy fluence in adults was more than a factor of 4 lower than for newborns. For adults, increasing the x-ray tube voltage from 80 to 140 kV improved the CNR for muscle lesions by 130%, for fat lesions by a factor of 2, and for iodine lesions by 25%. As the size of the patient increased from newborn to adults, lesion CNR was reduced by about a factor of 2. The mAs value can be reduced by 80% when scanning newborns while maintaining the same lesion CNR as for adults. Maintaining the CNR of an iodine lesion at a constant level, use of 140 kV increases the energy imparted to an adult patient by nearly a factor of 3.5 in comparison to 80 kV. For fat and muscle lesions, raising the x-ray tube voltage from 80 to 140 kV at a constant CNR increased the patient dose by 37% and 7
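
    For reference (standard definition, not taken from the abstract), the contrast-to-noise ratio used throughout the comparison above is typically computed as

    ```latex
    \mathrm{CNR} = \frac{\left| \overline{\mathrm{HU}}_{\mathrm{lesion}} - \overline{\mathrm{HU}}_{\mathrm{background}} \right|}{\sigma_{\mathrm{background}}}
    ```

    so any change in tube voltage that raises lesion contrast faster than image noise improves the CNR.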

  17. A New Screening Methodology for Improved Oil Recovery Processes Using Soft-Computing Techniques

    Science.gov (United States)

    Parada, Claudia; Ertekin, Turgay

    2010-05-01

    The first stage of production of any oil reservoir involves oil displacement by natural drive mechanisms such as solution gas drive, gas cap drive and gravity drainage. Typically, improved oil recovery (IOR) methods are applied to oil reservoirs that have been depleted naturally. In more recent years, IOR techniques are applied to reservoirs even before their natural energy drive is exhausted by primary depletion. Descriptive screening criteria for IOR methods are used to select the appropriate recovery technique according to the fluid and rock properties. This methodology helps in assessing the most suitable recovery process for field deployment of a candidate reservoir. However, the already published screening guidelines neither provide information about the expected reservoir performance nor suggest a set of project design parameters, which can be used towards the optimization of the process. In this study, artificial neural networks (ANN) are used to build a high-performance neuro-simulation tool for screening different improved oil recovery techniques: miscible injection (CO2 and N2), waterflooding and steam injection processes. The simulation tool consists of proxy models that implement a multilayer cascade feedforward back propagation network algorithm. The tool is intended to narrow the ranges of possible scenarios to be modeled using conventional simulation, reducing the extensive time and energy spent in dynamic reservoir modeling. A commercial reservoir simulator is used to generate the data to train and validate the artificial neural networks. The proxy models are built considering four different well patterns with different well operating conditions as the field design parameters. Different expert systems are developed for each well pattern. The screening networks predict oil production rate and cumulative oil production profiles for a given set of rock and fluid properties, and design parameters. The results of this study show that the networks are

  18. Algorithms for computing parsimonious evolutionary scenarios for genome evolution, the last universal common ancestor and dominance of horizontal gene transfer in the evolution of prokaryotes

    Directory of Open Access Journals (Sweden)

    Galperin Michael Y

    2003-01-01

    Full Text Available Abstract Background Comparative analysis of sequenced genomes reveals numerous instances of apparent horizontal gene transfer (HGT), at least in prokaryotes, and indicates that lineage-specific gene loss might have been even more common in evolution. This complicates the notion of a species tree, which needs to be re-interpreted as a prevailing evolutionary trend, rather than the full depiction of evolution, and makes reconstruction of ancestral genomes a non-trivial task. Results We addressed the problem of constructing parsimonious scenarios for individual sets of orthologous genes given a species tree. The orthologous sets were taken from the database of Clusters of Orthologous Groups of proteins (COGs). We show that the phyletic patterns (patterns of presence-absence in completely sequenced genomes) of almost 90% of the COGs are inconsistent with the hypothetical species tree. Algorithms were developed to reconcile the phyletic patterns with the species tree by postulating gene loss, COG emergence and HGT (the latter two classes of events were collectively treated as gene gains). We prove that each of these algorithms produces a parsimonious evolutionary scenario, which can be represented as mapping of loss and gain events on the species tree. The distribution of the evolutionary events among the tree nodes substantially depends on the underlying assumptions of the reconciliation algorithm, e.g. whether or not independent gene gains (gain after loss after gain) are permitted. Biological considerations suggest that, on average, gene loss might be a more likely event than gene gain. Therefore different gain penalties were used and the resulting series of reconstructed gene sets for the last universal common ancestor (LUCA) of the extant life forms were analysed. The number of genes in the reconstructed LUCA gene sets grows as the gain penalty increases. However, qualitative examination of the LUCA versions reconstructed with different gain penalties
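
    The reconciliation idea described above can be illustrated with a minimal Sankoff-style dynamic program for a single phyletic pattern on a rooted binary species tree; the penalty values, tree encoding, and example are hypothetical and much simpler than the article's algorithms.

    ```python
    # State 1 = gene present, 0 = absent. A gain (0 -> 1) costs `gain_penalty`,
    # a loss (1 -> 0) costs 1. Leaves are genome names; internal nodes are pairs.
    def min_scenario_cost(tree, leaf_states, gain_penalty=2.0):
        def rec(node):
            if isinstance(node, str):                       # leaf
                s = leaf_states[node]
                return {s: 0.0, 1 - s: float("inf")}
            left, right = node                              # internal node
            child_costs = [rec(left), rec(right)]
            cost = {}
            for parent in (0, 1):
                total = 0.0
                for child in child_costs:
                    options = []
                    for s in (0, 1):
                        change = (gain_penalty if (parent, s) == (0, 1)
                                  else 1.0 if (parent, s) == (1, 0) else 0.0)
                        options.append(child[s] + change)
                    total += min(options)
                cost[parent] = total
            return cost
        return min(rec(tree).values())

    # Hypothetical tree ((A,B),C) with the gene present only in A and C:
    print(min_scenario_cost((("A", "B"), "C"), {"A": 1, "B": 0, "C": 1}))  # 1.0 (one loss in B)
    ```

    Raising the gain penalty, as in the study, pushes the optimum toward scenarios with an earlier (e.g., LUCA-level) gene origin followed by losses.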

  19. The comparison of bolus tracking and test bolus techniques for computed tomography thoracic angiography in healthy beagles

    Directory of Open Access Journals (Sweden)

    Nicolette Cassel

    2013-05-01

    Full Text Available Computed tomography thoracic angiography studies were performed on five adult beagles using the bolus tracking (BT) technique and the test bolus (TB) technique, which were performed at least two weeks apart. For the BT technique, 2 mL/kg of 300 mgI/mL iodinated contrast agent was injected intravenously. Scans were initiated when the contrast in the aorta reached 150 Hounsfield units (HU). For the TB technique, the dogs received a test dose of 15% of 2 mL/kg of 300 mgI/mL iodinated contrast agent, followed by a series of low dose sequential scans. The full dose of the contrast agent was then administered and the scans were conducted at optimal times as identified from time attenuation curves. Mean attenuation in HU was measured in the aorta (Ao) and right caudal pulmonary artery (rCPA). Additional observations included the study duration, milliAmpere (mA), computed tomography dose index volume (CTDI[vol]) and dose length product (DLP). The attenuation in the Ao (BT = 660.52 HU ± 138.49 HU, TB = 469.82 HU ± 199.52 HU, p = 0.13) and in the rCPA (BT = 606.34 HU ± 143.37 HU, TB = 413.72 HU ± 174.99 HU, p = 0.28) did not differ significantly between the two techniques. The BT technique was conducted in a significantly shorter time period than the TB technique (p = 0.03). The mean mA for the BT technique was significantly lower than the TB technique (p = 0.03), as was the mean CTDI(vol) (p = 0.001). The mean DLP did not differ significantly between the two techniques (p = 0.17). No preference was given to either technique when evaluating the Ao or rCPA but the BT technique was shown to be shorter in duration and resulted in less DLP than the TB technique.

  20. Soft Computing Technique and Conventional Controller for Conical Tank Level Control

    Directory of Open Access Journals (Sweden)

    Sudharsana Vijayan

    2016-03-01

    Full Text Available In many process industries the control of liquid level is mandatory, but the control of nonlinear processes is difficult. Many process industries use conical tanks because their nonlinear shape provides better drainage for solid mixtures, slurries and viscous liquids. Control of the conical tank level is therefore a challenging task due to its non-linearity and continually varying cross-section. This arises from the relationship between the controlled variable (level) and the manipulated variable (flow rate), which follows a square-root law. The main objective is to implement a suitable controller for the conical tank system to maintain the desired level. System identification of the nonlinear process is done using black-box modelling, and the process is found to be a first order plus dead time (FOPDT) model. In this paper it is proposed to obtain the mathematical model of a conical tank system, to study the system using its block diagram, and then to compare a soft-computing technique (a fuzzy controller) with a conventional controller.
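
    As a rough illustration of the square-root nonlinearity mentioned above (not the paper's identified FOPDT model), the level dynamics of a conical tank can be written as dh/dt = (q_in − k·√h)/A(h), with the cross-section A(h) growing with the level; all parameter values below are hypothetical.

    ```python
    import numpy as np

    def simulate_conical_tank(q_in, h0=0.1, R=0.5, H=1.0, k=0.02, dt=0.1, steps=600):
        """Euler simulation of dh/dt = (q_in - k*sqrt(h)) / A(h),
        where A(h) = pi * (R*h/H)**2 is the level-dependent cross-section."""
        h = h0
        for _ in range(steps):
            area = np.pi * (R * h / H) ** 2
            h = max(h + dt * (q_in - k * np.sqrt(h)) / area, 1e-6)
        return h

    # Steady state is reached when q_in = k*sqrt(h), i.e. h = (q_in/k)**2 = 0.25 here.
    print(round(simulate_conical_tank(q_in=0.01), 3))
    ```

    Because both the outflow term and the cross-section depend on the level, the process gain changes with the operating point, which is what motivates comparing a fuzzy controller with a fixed-parameter conventional one.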

  1. Evaluating Statistical Process Control (SPC) techniques and computing the uncertainty of force calibrations

    Science.gov (United States)

    Navard, Sharon E.

    1989-01-01

    In recent years there has been a push within NASA to use statistical techniques to improve the quality of production. Two areas where statistics are used are in establishing product and process quality control of flight hardware and in evaluating the uncertainty of calibration of instruments. The Flight Systems Quality Engineering branch is responsible for developing and assuring the quality of all flight hardware; the statistical process control methods employed are reviewed and evaluated. The Measurement Standards and Calibration Laboratory performs the calibration of all instruments used on-site at JSC as well as those used by all off-site contractors. These calibrations must be performed in such a way as to be traceable to national standards maintained by the National Institute of Standards and Technology, and they must meet a four-to-one ratio of the instrument specifications to calibrating standard uncertainty. In some instances this ratio is not met, and in these cases it is desirable to compute the exact uncertainty of the calibration and determine ways of reducing it. A particular example where this problem is encountered is with a machine which does automatic calibrations of force. The process of force calibration using the United Force Machine is described in detail. The sources of error are identified and quantified when possible. Suggestions for improvement are made.

  2. Evaluation of efficacy of metal artefact reduction technique using contrast media in Computed Tomography

    Science.gov (United States)

    Yusob, Diana; Zukhi, Jihan; Aziz Tajuddin, Abd; Zainon, Rafidah

    2017-05-01

    The aim of this study was to evaluate the efficacy of metal artefact reduction using contrast media in Computed Tomography (CT) imaging. A water-based abdomen phantom of diameter 32 cm (adult body size) was fabricated using polymethyl methacrylate (PMMA) material. Three different contrast agents (iodine, barium and gadolinium) were filled into small PMMA tubes and placed inside the water-based PMMA adult abdomen phantom. An orthopedic metal screw was placed in each small PMMA tube separately. Two types of orthopedic metal screws (stainless steel and titanium alloy) were scanned separately. The orthopedic metal screws were scanned with single-energy CT at 120 kV and dual-energy CT with fast kV-switching between 80 kV and 140 kV. The scan modes were set automatically using the current modulation care4Dose setting, and the scans were acquired at different pitch and slice thickness values. The use of the contrast media technique on orthopedic metal screws was optimised at a pitch of 0.60 and a slice thickness of 5.0 mm. The use of contrast media can reduce metal streaking artefacts on CT images, enhance the CT images surrounding the implants, and has potential use in improving diagnostic performance in patients with severe metallic artefacts. These results are valuable for imaging protocol optimisation in clinical applications.

  3. Epileptic seizure predictors based on computational intelligence techniques: a comparative study with 278 patients.

    Science.gov (United States)

    Alexandre Teixeira, César; Direito, Bruno; Bandarabadi, Mojtaba; Le Van Quyen, Michel; Valderrama, Mario; Schelter, Bjoern; Schulze-Bonhage, Andreas; Navarro, Vincent; Sales, Francisco; Dourado, António

    2014-05-01

    The ability of computational intelligence methods to predict epileptic seizures is evaluated in long-term EEG recordings of 278 patients suffering from pharmaco-resistant partial epilepsy, also known as refractory epilepsy. This extensive study in seizure prediction considers the 278 patients from the European Epilepsy Database, collected in three epilepsy centres: Hôpital de la Pitié-Salpêtrière, Paris, France; Universitätsklinikum Freiburg, Germany; and Centro Hospitalar e Universitário de Coimbra, Portugal. For a considerable number of patients it was possible to find a patient-specific predictor with acceptable performance, for example predictors that anticipate at least half of the seizures with a false alarm rate of no more than 1 in 6 h (0.15 h⁻¹). We observed that the epileptic focus localization, data sampling frequency, testing duration, number of seizures in testing, type of machine learning, and preictal time significantly influence the prediction performance. The results support an optimistic view of the feasibility of a patient-specific prospective alarming system, based on machine learning techniques that combine several univariate (single-channel) electroencephalogram features. We envisage that this work will serve as benchmark data of considerable value for future studies based on the European Epilepsy Database. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  4. Incomplete projection reconstruction of computed tomography based on the modified discrete algebraic reconstruction technique

    Science.gov (United States)

    Yang, Fuqiang; Zhang, Dinghua; Huang, Kuidong; Gao, Zongzhao; Yang, YaFei

    2018-02-01

    Based on the discrete algebraic reconstruction technique (DART), this study aims to develop and test a new, improved algorithm applied to incomplete projection data to generate high-quality reconstruction images by reducing the artifacts and noise in computed tomography. For the incomplete projections, an augmented Lagrangian method based on compressed sensing is first used in the initial reconstruction for the segmentation step of DART, to obtain higher contrast between boundary and non-boundary pixels. Then, the block-matching 3D filtering operator is used to suppress the noise and to improve the gray-level distribution of the reconstructed image. Finally, simulation studies on a polychromatic spectrum were performed to test the performance of the new algorithm. The results show a significant improvement in the signal-to-noise ratios (SNRs) and average gradients (AGs) of the images reconstructed from incomplete data. The SNRs and AGs of the images reconstructed by DART-ALBM were on average 30%-40% and 10% higher, respectively, than those of images reconstructed by DART algorithms. Since the improved DART-ALBM algorithm is more robust for limited-view reconstruction, not only making the image edges clear but also improving the gray-level distribution of non-boundary pixels, it has the potential to improve image quality from incomplete or sparse projections.

  5. Fuzzy classification for strawberry diseases-infection using machine vision and soft-computing techniques

    Science.gov (United States)

    Altıparmak, Hamit; Al Shahadat, Mohamad; Kiani, Ehsan; Dimililer, Kamil

    2018-04-01

    Robotic agriculture requires smart and practical techniques to substitute machine intelligence for human intelligence. The strawberry is an important Mediterranean crop, and enhancing its productivity requires modern, machine-based methods. Whereas a human identifies disease-infected leaves by eye, the machine should also be capable of vision-based disease identification. The objective of this paper is to practically verify the applicability of a new computer-vision method for discriminating between healthy and disease-infected strawberry leaves which does not require a neural network or time-consuming training. The proposed method was tested under outdoor lighting conditions using a regular DSLR camera without any particular lens. Since the type and degree of disease infection are approximated as a human brain would, a fuzzy decision maker classifies the leaves over the images captured on-site, with properties similar to those of human vision. Optimizing the fuzzy parameters for a typical strawberry production area at a summer mid-day in Cyprus produced 96% accuracy for segmented iron deficiency and 93% accuracy for segmented disease infection, using a typical human instant classification approximation as the benchmark, yielding higher accuracy than a human eye identifier. The fuzzy-based classifier provides an approximate result for decision making on the leaf status, i.e., whether it is healthy or not.
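
    A toy version of the fuzzy decision step described above might use triangular membership functions over a colour feature of the segmented leaf; the hue ranges, rule set, and defuzzification below are purely illustrative assumptions and not the paper's parameters.

    ```python
    import numpy as np

    def tri(x, a, b, c):
        """Triangular membership function rising from a to b and falling to c."""
        return float(np.clip(min((x - a) / (b - a), (c - x) / (c - b)), 0.0, 1.0))

    def leaf_health_score(mean_hue):
        """Toy fuzzy rules on mean hue (OpenCV 0-180 scale): greenish hues support
        'healthy', yellowish hues support 'deficient'; defuzzify by weighted average."""
        healthy = tri(mean_hue, 35, 60, 85)
        deficient = tri(mean_hue, 15, 30, 45)
        return (healthy * 1.0 + deficient * 0.0) / (healthy + deficient + 1e-9)

    print(leaf_health_score(55.0), leaf_health_score(28.0))  # ~1.0 (healthy), ~0.0 (deficient)
    ```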

  6. Monitoring the Microgravity Environment Quality On-Board the International Space Station Using Soft Computing Techniques

    Science.gov (United States)

    Jules, Kenol; Lin, Paul P.

    2001-01-01

    This paper presents an artificial intelligence monitoring system developed by the NASA Glenn Principal Investigator Microgravity Services project to help the principal investigator teams identify the primary vibratory disturbance sources that are active, at any moment in time, on-board the International Space Station, which might impact the microgravity environment their experiments are exposed to. From the Principal Investigator Microgravity Services' web site, the principal investigator teams can monitor via a graphical display, in near real time, which event(s) is/are on, such as crew activities, pumps, fans, centrifuges, compressor, crew exercise, platform structural modes, etc., and decide whether or not to run their experiments based on the acceleration environment associated with a specific event. This monitoring system is focused primarily on detecting the vibratory disturbance sources, but could be used as well to detect some of the transient disturbance sources, depending on the events duration. The system has built-in capability to detect both known and unknown vibratory disturbance sources. Several soft computing techniques such as Kohonen's Self-Organizing Feature Map, Learning Vector Quantization, Back-Propagation Neural Networks, and Fuzzy Logic were used to design the system.

  7. Early detection and identification of anomalies in chemical regime based on computational intelligence techniques

    International Nuclear Information System (INIS)

    Figedy, Stefan; Smiesko, Ivan

    2012-01-01

    This article provides brief information about the fundamental features of a newly developed diagnostic system for the early detection and identification of anomalies arising in the water chemistry regime of the primary and secondary circuits of the VVER-440 reactor. This system, called SACHER (System of Analysis of CHEmical Regime), was installed within the major modernization project at the NPP-V2 Bohunice in the Slovak Republic. The SACHER system has been fully developed in the MATLAB environment. It is based on computational intelligence techniques and inserts various intelligent data processing modules for clustering, diagnosis, future prediction, signal validation, etc., into the overall chemical information system. The application of SACHER would essentially assist chemists in identifying the current situation regarding anomalies arising in the primary and secondary circuit water chemistry. The system is to be used for diagnostics and data handling; however, it is not intended to fully replace the presence of experienced chemists who decide upon corrective actions. (author)

  8. Measurement of liver and spleen volume by computed tomography using point counting technique

    International Nuclear Information System (INIS)

    Matsuda, Yoshiro; Sato, Hiroyuki; Nei, Jinichi; Takada, Akira

    1982-01-01

    We devised a new method for measurement of liver and spleen volume by computed tomography using point counting technique. This method is very simple and applicable to any kind of CT scanner. The volumes of the livers and spleens estimated by this method were significantly correlated with the weights of the corresponding organs measured on autopsy or surgical operation, indicating clinical usefulness of this method. Hepatic and splenic volumes were estimated by this method in 43 patients with chronic liver disease and 9 subjects with non-hepatobiliary disease. The mean hepatic volume in non-alcoholic liver cirrhosis was significantly smaller than those in non-hepatobiliary disease and other chronic liver diseases. The mean hepatic volume in alcoholic cirrhosis and alcoholic fibrosis tended to be slightly larger than that in non-hepatobiliary disease. The mean splenic volume in liver cirrhosis was significantly larger than those in non-hepatobiliary disease and other chronic liver diseases. However, there was no significant difference of the mean splenic volume between alcoholic and non-alcoholic cirrhosis. Significantly positive correlation between hepatic and splenic volumes was found in alcoholic cirrhosis, but not in non-alcoholic cirrhosis. These results indicate that estimation of hepatic and splenic volumes by this method is useful for the analysis of the pathophysiological condition of chronic liver diseases. (author)
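
    A minimal sketch of the point-counting estimate described above: a regular grid of test points is overlaid on each CT slice, and the organ volume is the total number of points falling inside the organ multiplied by the area represented by one point and by the slice spacing. The grid spacing and counts below are hypothetical.

    ```python
    def volume_by_point_counting(points_per_slice, grid_spacing_cm, slice_spacing_cm):
        """points_per_slice: number of grid points inside the organ on each slice."""
        area_per_point = grid_spacing_cm ** 2                            # cm^2 per test point
        return sum(points_per_slice) * area_per_point * slice_spacing_cm  # cm^3

    # Hypothetical liver outline counted on 10 slices with a 1 cm grid and 1 cm spacing:
    print(volume_by_point_counting([55, 80, 96, 110, 118, 115, 102, 85, 60, 30], 1.0, 1.0))
    # -> 851.0 cm^3
    ```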

  9. New evaluation methods for conceptual design selection using computational intelligence techniques

    Energy Technology Data Exchange (ETDEWEB)

    Huang, Hong Zhong; Liu, Yu; Li, Yanfeng; Wang, Zhonglai [University of Electronic Science and Technology of China, Chengdu (China); Xue, Lihua [Higher Education Press, Beijing (China)

    2013-03-15

    The conceptual design selection, which aims at choosing the best or most desirable design scheme among several candidates for the subsequent detailed design stage, oftentimes requires a set of tools to conduct design evaluation. Using computational intelligence techniques, such as fuzzy logic, neural network, genetic algorithm, and physical programming, several design evaluation methods are put forth in this paper to realize the conceptual design selection under different scenarios. Depending on whether an evaluation criterion can be quantified or not, the linear physical programming (LPP) model and the RAOGA-based fuzzy neural network (FNN) model can be utilized to evaluate design alternatives in conceptual design stage. Furthermore, on the basis of Vanegas and Labib's work, a multi-level conceptual design evaluation model based on the new fuzzy weighted average (NFWA) and the fuzzy compromise decision-making method is developed to solve the design evaluation problem consisting of many hierarchical criteria. The effectiveness of the proposed methods is demonstrated via several illustrative examples.

  10. New evaluation methods for conceptual design selection using computational intelligence techniques

    International Nuclear Information System (INIS)

    Huang, Hong Zhong; Liu, Yu; Li, Yanfeng; Wang, Zhonglai; Xue, Lihua

    2013-01-01

    The conceptual design selection, which aims at choosing the best or most desirable design scheme among several candidates for the subsequent detailed design stage, oftentimes requires a set of tools to conduct design evaluation. Using computational intelligence techniques, such as fuzzy logic, neural network, genetic algorithm, and physical programming, several design evaluation methods are put forth in this paper to realize the conceptual design selection under different scenarios. Depending on whether an evaluation criterion can be quantified or not, the linear physical programming (LPP) model and the RAOGA-based fuzzy neural network (FNN) model can be utilized to evaluate design alternatives in conceptual design stage. Furthermore, on the basis of Vanegas and Labib's work, a multi-level conceptual design evaluation model based on the new fuzzy weighted average (NFWA) and the fuzzy compromise decision-making method is developed to solve the design evaluation problem consisting of many hierarchical criteria. The effectiveness of the proposed methods is demonstrated via several illustrative examples.

  11. Three-dimensional demonstration of liver and spleen by computer graphics technique

    International Nuclear Information System (INIS)

    Kashiwagi, Toru; Azuma, Masayoshi; Katayama, Kazuhiro; Yoshioka, Hiroaki; Ishizu, Hiromi; Mitsutani, Natsuki; Koizumi, Takao; Takayama, Ichiro

    1987-01-01

    A three-dimensional demonstration system for the liver and spleen has been developed using computer graphics techniques. Three-dimensional models were constructed from CT images of the organ surface. The three-dimensional images were displayed as wire-frame and/or solid models on a color CRT. The anatomical surface of the liver and spleen could be viewed realistically from any direction. In liver cirrhosis, atrophy of the right lobe, hypertrophy of the left lobe and splenomegaly were displayed vividly. The liver and a hepatoma were displayed as wire-frame and solid models, respectively, on the same image. This combined display clarified the intrahepatic location of the hepatoma together with the configuration of the liver and hepatoma. Furthermore, a superimposed display of the three-dimensional models and the celiac angiogram enabled us to understand the location and configuration of lesions more easily than the original CT data or angiogram alone. Therefore, it is expected that this system will be clinically useful for the noninvasive evaluation of patho-morphological changes of the liver and spleen. (author)

  12. Design and manufacturing of patient-specific orthodontic appliances by computer-aided engineering techniques.

    Science.gov (United States)

    Barone, Sandro; Neri, Paolo; Paoli, Alessandro; Razionale, Armando Viviano

    2018-01-01

    Orthodontic treatments are usually performed using fixed brackets or removable oral appliances, which are traditionally made from alginate impressions and wax registrations. Among removable devices, eruption guidance appliances are used for early orthodontic treatments in order to intercept and prevent malocclusion problems. Commercially available eruption guidance appliances, however, are symmetric devices produced in a few standard sizes. For this reason, they are not able to meet all of a specific patient's needs, since actual dental anatomies present various geometries and asymmetric conditions. In this article, a computer-aided design-based methodology for the design and manufacturing of patient-specific eruption guidance appliances is presented. The proposed approach is based on the digitalization of several steps of the overall process: from the digital reconstruction of patients' anatomies to the manufacturing of customized appliances. A finite element model has been developed to evaluate the temporomandibular joint disk stress levels caused by using symmetric eruption guidance appliances under different teeth misalignment conditions. The developed model can then be used to guide the design of a patient-specific appliance with the aim of reducing patient discomfort. For this purpose, two different customization levels are proposed in order to address both arch-level and single-tooth misalignment issues. A low-cost manufacturing process, based on an additive manufacturing technique, is finally presented and discussed.

  13. [Key points for esthetic rehabilitation of anterior teeth using chair-side computer aided design and computer aided manufacture technique].

    Science.gov (United States)

    Yang, J; Feng, H L

    2018-04-09

    With the rapid development of chair-side computer aided design and computer aided manufacture (CAD/CAM) technology, its accuracy and operability have been greatly improved in recent years. Chair-side CAD/CAM systems can produce all kinds of indirect restorations and have the advantages of rapid, accurate and stable production. They have become a future development direction of stomatology. This paper describes the clinical application of chair-side CAD/CAM technology for anterior aesthetic restorations from the aspects of shade and shape.

  14. Evolutionary molecular medicine.

    Science.gov (United States)

    Nesse, Randolph M; Ganten, Detlev; Gregory, T Ryan; Omenn, Gilbert S

    2012-05-01

    Evolution has long provided a foundation for population genetics, but some major advances in evolutionary biology from the twentieth century that provide foundations for evolutionary medicine are only now being applied in molecular medicine. They include the need for both proximate and evolutionary explanations, kin selection, evolutionary models for cooperation, competition between alleles, co-evolution, and new strategies for tracing phylogenies and identifying signals of selection. Recent advances in genomics are transforming evolutionary biology in ways that create even more opportunities for progress at its interfaces with genetics, medicine, and public health. This article reviews 15 evolutionary principles and their applications in molecular medicine in hopes that readers will use them and related principles to speed the development of evolutionary molecular medicine.

  15. Estimation of Postmortem Interval Using the Radiological Techniques, Computed Tomography: A Pilot Study

    Directory of Open Access Journals (Sweden)

    Jiulin Wang

    2017-01-01

    Full Text Available Estimation of the postmortem interval (PMI) has been an important and difficult subject in forensic research. It is a primary task of forensic work and can help guide field investigations. With the development of computed tomography (CT) technology, CT imaging techniques are now being more frequently applied to the field of forensic medicine. This study used CT imaging techniques to observe area changes in different tissues and organs of rabbits after death and the changing pattern of the average CT values in the organs. The study analyzed the relationship between the CT values of different organs and the PMI with the imaging software Max Viewer and obtained multiparameter nonlinear regression equations for the different organs, providing an objective and accurate method and reference information for the estimation of PMI in forensic medicine. In forensic science, PMI refers to the time interval between the discovery or inspection of a corpse and the time of death. CT, magnetic resonance imaging, and other imaging techniques have become important means of clinical examination over the years. Although some scholars in our country have used modern radiological techniques in various fields of forensic science, such as estimation of injury time, personal identification of bodies, analysis of the cause of death, determination of the causes of injury, and identification of foreign substances in bodies, there are only a few studies on the estimation of the time of death. We observed the subtle changes in adult rabbits after death, the shape and size of tissues and organs, and the relationships between adjacent organs in three-dimensional space, in an effort to develop a new method for the estimation of PMI. The bodies of the dead rabbits were stored at a room temperature of 20°C, in a sealed condition, and protected from exposure to flesh flies. The dead rabbits were randomly divided into a comparison group and an experimental group. The whole

  16. Evolutionary Sound Synthesis Controlled by Gestural Data

    Directory of Open Access Journals (Sweden)

    Jose Fornari

    2011-05-01

    Full Text Available This article focuses on the interdisciplinary research involving Computer Music and Generative Visual Art. We describe the implementation of two interactive artistic systems based on principles of Gestural Data (WILSON, 2002) retrieval and self-organization (MORONI, 2003), to control an Evolutionary Sound Synthesis method (ESSynth). The first implementation uses, as gestural data, image mapping of handmade drawings. The second one uses gestural data from dynamic body movements of dance. The resulting computer output is generated by an interactive system implemented in Pure Data (PD). This system uses principles of Evolutionary Computation (EC), which yields the generation of a synthetic adaptive population of sound objects. Considering that music could be seen as “organized sound”, the contribution of our study is to develop a system that aims to generate "self-organized sound" – a method that uses evolutionary computation to bridge between gesture, sound and music.
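
    As a toy illustration of the evolutionary loop that drives such a system (not the ESSynth algorithm itself), a population of sound objects reduced to single frequencies can be evolved toward a gesture-derived target; the representation, fitness, and parameters are all hypothetical.

    ```python
    import random

    def evolve_sound_objects(target_freq, generations=50, pop_size=16, mutation=20.0):
        """Tiny (mu + lambda)-style loop: fitness rewards closeness of an object's
        frequency to a target frequency derived from gestural data."""
        population = [random.uniform(100.0, 2000.0) for _ in range(pop_size)]
        fitness = lambda f: -abs(f - target_freq)
        for _ in range(generations):
            parents = sorted(population, key=fitness, reverse=True)[:pop_size // 2]
            children = [p + random.gauss(0.0, mutation) for p in parents]
            population = parents + children
        return max(population, key=fitness)

    print(round(evolve_sound_objects(target_freq=440.0), 1))  # converges near 440 Hz
    ```

    In an interactive setting, the target (and hence the fitness landscape) would be updated continuously from the gestural data, so the population keeps adapting while the piece is performed.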

  17. Success rates for computed tomography-guided musculoskeletal biopsies performed using a low-dose technique

    International Nuclear Information System (INIS)

    Motamedi, Kambiz; Levine, Benjamin D.; Seeger, Leanne L.; McNitt-Gray, Michael F.

    2014-01-01

    To evaluate the success rate of a low-dose (50 % mAs reduction) computed tomography (CT) biopsy technique. This protocol was adopted based on other successful reduced-CT radiation dose protocols in our department, which were implemented in conjunction with quality improvement projects. The technique included a scout view and initial localizing scan with standard dose. Additional scans obtained for further guidance or needle adjustment were acquired by reducing the tube current-time product (mAs) by 50 %. The radiology billing data were searched for CT-guided musculoskeletal procedures performed over a period of 8 months following the initial implementation of the protocol. These were reviewed for the type of procedure and compliance with the implemented protocol. The compliant CT-guided biopsy cases were then retrospectively reviewed for patient demographics, tumor pathology, and lesion size. Pathology results were compared to the ultimate diagnoses and were categorized as diagnostic, accurate, or successful. Of 92 CT-guided procedures performed during this period, two were excluded as they were not biopsies (one joint injection and one drainage), 19 were excluded due to non-compliance (operators neglected to follow the protocol), and four were excluded due to lack of available follow-up in our electronic medical records. A total of 67 compliant biopsies were performed in 63 patients (two had two biopsies, and one had three biopsies). There were 32 males and 31 females with an average age of 50 (range, 15-84 years). Of the 67 biopsies, five were non-diagnostic and inaccurate and thus unsuccessful (7 %); five were diagnostic but inaccurate and thus unsuccessful (7 %); 57 were diagnostic and accurate thus successful (85 %). These results were comparable with results published in the radiology literature. The success rate of CT-guided biopsies using a low-dose protocol is comparable to published rates for conventional dose biopsies. The implemented low-dose protocol

  18. Success rates for computed tomography-guided musculoskeletal biopsies performed using a low-dose technique

    Energy Technology Data Exchange (ETDEWEB)

    Motamedi, Kambiz; Levine, Benjamin D.; Seeger, Leanne L.; McNitt-Gray, Michael F. [UCLA Health System, Radiology, Los Angeles, CA (United States)

    2014-11-15

    To evaluate the success rate of a low-dose (50 % mAs reduction) computed tomography (CT) biopsy technique. This protocol was adopted based on other successful reduced-CT radiation dose protocols in our department, which were implemented in conjunction with quality improvement projects. The technique included a scout view and initial localizing scan with standard dose. Additional scans obtained for further guidance or needle adjustment were acquired by reducing the tube current-time product (mAs) by 50 %. The radiology billing data were searched for CT-guided musculoskeletal procedures performed over a period of 8 months following the initial implementation of the protocol. These were reviewed for the type of procedure and compliance with the implemented protocol. The compliant CT-guided biopsy cases were then retrospectively reviewed for patient demographics, tumor pathology, and lesion size. Pathology results were compared to the ultimate diagnoses and were categorized as diagnostic, accurate, or successful. Of 92 CT-guided procedures performed during this period, two were excluded as they were not biopsies (one joint injection and one drainage), 19 were excluded due to non-compliance (operators neglected to follow the protocol), and four were excluded due to lack of available follow-up in our electronic medical records. A total of 67 compliant biopsies were performed in 63 patients (two had two biopsies, and one had three biopsies). There were 32 males and 31 females with an average age of 50 (range, 15-84 years). Of the 67 biopsies, five were non-diagnostic and inaccurate and thus unsuccessful (7 %); five were diagnostic but inaccurate and thus unsuccessful (7 %); 57 were diagnostic and accurate thus successful (85 %). These results were comparable with results published in the radiology literature. The success rate of CT-guided biopsies using a low-dose protocol is comparable to published rates for conventional dose biopsies. The implemented low-dose protocol

  19. Avoiding Local Optima with Interactive Evolutionary Robotics

    Science.gov (United States)

    2012-07-09

    The main bottleneck in evolutionary robotics has traditionally been the time required to evolve robot controllers. However, with the continued acceleration in computational resources, the ... Placing a target object at the top of a flight of stairs selects for climbing; suspending the robot and the target object above the ground and creating rungs between the two will ...

  20. Comparative study on computed orthopantomography and film radiographic techniques in the radiography of temporomandibular joint

    International Nuclear Information System (INIS)

    Chen Tao; Ning Lixia; Liu Yuai; Li Ningyi; Chen Feng

    2007-01-01

    Objective: To compare computed orthopantomography (COPT) with Schüller radiography (SR), film orthopantomography (FOPT) and other traditional radiographic techniques in the radiography of the temporomandibular joint (TMJ). Methods: Ninety-eight cases were randomly divided into 3 groups, and the open and closed positions of the TMJs of both sides were examined with SR, FOPT, and COPT, respectively. The satisfactory rates of the X-ray pictures were statistically analyzed with the Pearson chi-square test in SPSS 10.0, and the satisfactory rates were compared between the groups with the q test. Results: One hundred and forty-four of the open and closed positions of 144 TMJ pictures of the COPT group, 128 of 128 of the FOPT group, and 6 of 120 of the SR group were satisfactory in the mandible ramus of the TMJ, with satisfactory rates of 100%, 100%, and 5%, respectively (P < 0.01); … respectively between the FOPT and COPT groups, and this difference was not statistically significant. The exposure was as follows: COPT, 99-113 mAs; FOPT, 210-225 mAs; and SR, 48-75 mAs. Therefore, COPT and FOPT were superior to SR in the pictures of the mandible ramus, coronoid process, and incisure, but inferior in the joint space pictures. The satisfactory rates of the condylar process and articular tubercle were the same in the 3 groups. The exposure of the FOPT group was greater than that of the COPT and SR groups. Conclusion: COPT is superior to SR and FOPT in TMJ radiography, and should be applied widely in the clinic. (authors)

  1. Computer-controlled pneumatic pressure algometry--a new technique for quantitative sensory testing.

    Science.gov (United States)

    Polianskis, R; Graven-Nielsen, T; Arendt-Nielsen, L

    2001-01-01

    Hand-held pressure algometry usually assesses pressure-pain detection thresholds and provides little information on pressure-pain stimulus-response function. In this article, a cuff pressure algometry for advanced pressure-pain function evaluation is proposed. The experimental set-up consisted of a pneumatic tourniquet cuff, a computer-controlled air compressor and an electronic visual analogue scale (VAS) for constant pain intensity rating. Twelve healthy volunteers were included in the study. In the first part, hand-held algometry and cuff algometry were performed over the gastrocnemius muscle with constant compression rate. In the second part, the cuff algometry was performed with different compression rates to evaluate the influence of the compression rate on pain thresholds and other psychophysical data. Pressure-pain detection threshold (PDT), pain tolerance threshold (PTT), pain intensity, PDT-PTT time and other psychophysical variables were evaluated. Pressure-pain detection thresholds recorded over the gastrocnemius muscle with a hand-held and with a cuff algometer were 482 +/- 19 kPa and 26 +/- 1.6 kPa, respectively. Pressure and pain intensities were correlated during cuff algometry. During increasing cuff compression, the subjective pain tolerance limit on VAS was 5.6 +/- 0.95 cm. There was a direct correlation between the number of compressions, the compression rate and pain thresholds. The cuff algometry technique is appropriate for pressure-pain stimulus-response studies. Cuff algometry allowed quantification of psychophysical response to the change of stimulus configuration. Copyright 2001 European Federation of Chapters of the International Association for the Study of Pain.

  2. Validation of a low dose simulation technique for computed tomography images.

    Directory of Open Access Journals (Sweden)

    Daniela Muenzel

    Full Text Available PURPOSE: Evaluation of a new software tool for generation of simulated low-dose computed tomography (CT) images from an original higher dose scan. MATERIALS AND METHODS: Original CT scan data (100 mAs, 80 mAs, 60 mAs, 40 mAs, 20 mAs, 10 mAs; 100 kV) of a swine were acquired (approved by the regional governmental commission for animal protection). Simulations of CT acquisition with a lower dose (simulated 10-80 mAs) were calculated using a low-dose simulation algorithm. The simulations were compared to the originals of the same dose level with regard to density values and image noise. Four radiologists assessed the realistic visual appearance of the simulated images. RESULTS: Image characteristics of simulated low dose scans were similar to the originals. Mean overall discrepancy of image noise and CT values was -1.2% (range -9% to 3.2%) and -0.2% (range -8.2% to 3.2%), respectively (p>0.05). Confidence intervals of discrepancies ranged between 0.9-10.2 HU (noise) and 1.9-13.4 HU (CT values), without significant differences (p>0.05). Subjective observer evaluation of image appearance showed no visually detectable difference. CONCLUSION: Simulated low dose images showed excellent agreement with the originals concerning image noise, CT density values, and subjective assessment of the visual appearance of the simulated images. An authentic low-dose simulation opens up opportunity with regard to staff education, protocol optimization and introduction of new techniques.
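    The validated tool works on the scanner's raw data; purely as an illustration of the underlying idea, the sketch below shows an image-domain approximation that assumes quantum noise scales with the inverse square root of the tube current-time product. The noise figure and the stand-in image are assumptions of this sketch, not part of the published algorithm.

      import numpy as np

      def simulate_low_dose(image_hu, mas_original, mas_target, sigma_original=10.0):
          # Noise (in HU) is assumed proportional to 1/sqrt(mAs); sigma_original is
          # a placeholder for the measured noise of the original scan.
          sigma_target = sigma_original * np.sqrt(mas_original / mas_target)
          sigma_added = np.sqrt(max(sigma_target**2 - sigma_original**2, 0.0))
          return image_hu + np.random.normal(0.0, sigma_added, size=image_hu.shape)

      # Example: approximate a 20 mAs scan from a 100 mAs acquisition.
      original = np.random.normal(50.0, 10.0, size=(256, 256))   # stand-in image in HU
      simulated = simulate_low_dose(original, mas_original=100, mas_target=20)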

  3. Development of scan analysis techniques employing a small computer. Progress report, August 1, 1974--July 31, 1975

    International Nuclear Information System (INIS)

    Kuhl, D.E.

    1975-01-01

    Progress is reported in the development of equipment and counting techniques for transverse section scanning of the brain following the administration of radiopharmaceuticals to evaluate regional blood flow. The scanning instrument has an array of 32 scintillation detectors that surround the head and scan data are analyzed using a small computer. (U.S.)

  4. Assessing Market Development and Innovation Project Management Factors Using the PICEA-g Hybrid Evolutionary Multi-Criteria Decision Technique. The Calcimine Company Case Study

    Directory of Open Access Journals (Sweden)

    S. Ghaffari

    2017-12-01

    Full Text Available Project management includes the consideration of complex decision modes used in modern decision support techniques. The aim of this paper is to prioritize such factors and evaluate their effects on project management and optimal control; their effect is evaluated within the framework of a statistical hypothesis. A new algorithm, "IPICEA-g", is proposed for the assessment. A questionnaire distributed among 56 employees of the CALCIMINE Company is used for data collection. The t-test, the two-sentence test, the ANP method, FUZZY SEAMATEL and the IPICEA-g hybrid algorithm are employed for the data analysis. Results are further discussed and conclusions are drawn.

  5. Exploiting stock data: a survey of state of the art computational techniques aimed at producing beliefs regarding investment portfolios

    Directory of Open Access Journals (Sweden)

    Mario Linares Vásquez

    2008-01-01

    Full Text Available Selecting an investment portfolio has inspired several models aimed at optimising the set of securities which an investor may select according to a number of specific decision criteria such as risk, expected return and planning horizon. The classical approach has been developed for supporting the two stages of portfolio selection and is supported by disciplines such as econometrics, technical analysis and corporative finance. However, with the emerging field of computational finance, new and interesting techniques have arisen in line with the need for the automatic processing of vast volumes of information. This paper surveys such new techniques which belong to the body of knowledge concerning computing and systems engineering, focusing on techniques particularly aimed at producing beliefs regarding investment portfolios.

  6. The State of Software for Evolutionary Biology.

    Science.gov (United States)

    Darriba, Diego; Flouri, Tomáš; Stamatakis, Alexandros

    2018-05-01

    With Next Generation Sequencing data being routinely used, evolutionary biology is transforming into a computational science. Thus, researchers have to rely on a growing number of increasingly complex software tools. All widely used core tools in the field have grown considerably, in terms of the number of features as well as lines of code and consequently, also with respect to software complexity. A topic that has received little attention is the software engineering quality of widely used core analysis tools. Software developers appear to rarely assess the quality of their code, and this can have potential negative consequences for end-users. To this end, we assessed the code quality of 16 highly cited and compute-intensive tools mainly written in C/C++ (e.g., MrBayes, MAFFT, SweepFinder, etc.) and JAVA (BEAST) from the broader area of evolutionary biology that are being routinely used in current data analysis pipelines. Because the software engineering quality of the tools we analyzed is rather unsatisfying, we provide a list of best practices for improving the quality of existing tools and list techniques that can be deployed for developing reliable, high quality scientific software from scratch. Finally, we also discuss journal as well as science policy and, more importantly, funding issues that need to be addressed for improving software engineering quality as well as ensuring support for developing new and maintaining existing software. Our intention is to raise the awareness of the community regarding software engineering quality issues and to emphasize the substantial lack of funding for scientific software development.

  7. Computational Techniques for Model Predictive Control of Large-Scale Systems with Continuous-Valued and Discrete-Valued Inputs

    Directory of Open Access Journals (Sweden)

    Koichi Kobayashi

    2013-01-01

    Full Text Available We propose computational techniques for model predictive control of large-scale systems with both continuous-valued control inputs and discrete-valued control inputs, which are a class of hybrid systems. In the proposed method, we introduce the notion of virtual control inputs, which are obtained by relaxing discrete-valued control inputs to continuous variables. In online computation, first, we find continuous-valued control inputs and virtual control inputs minimizing a cost function. Next, using the obtained virtual control inputs, only discrete-valued control inputs at the current time are computed in each subsystem. In addition, we also discuss the effect of quantization errors. Finally, the effectiveness of the proposed method is shown by a numerical example. The proposed method enables us to reduce and decentralize the computation load.
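    As a toy illustration of the virtual-control-input idea only (not the authors' formulation), the sketch below relaxes a binary input to the interval [0, 1], solves a one-step relaxed problem with SciPy, and then rounds the virtual input back to a discrete value; the system matrices and cost weights are arbitrary assumptions.

      import numpy as np
      from scipy.optimize import minimize

      # Toy one-step model: x_next = A x + B_c u_c + B_d u_d, with a continuous
      # input u_c and a binary input u_d in {0, 1}.
      A, B_c, B_d = np.array([[0.9]]), np.array([0.5]), np.array([1.0])
      x = np.array([2.0])

      def cost(z):
          u_c, v = z                       # v is the relaxed ("virtual") discrete input
          x_next = A @ x + B_c * u_c + B_d * v
          return float(x_next @ x_next + 0.1 * u_c**2 + 0.1 * v**2)

      # Step 1: optimize jointly over the continuous input and the virtual input.
      res = minimize(cost, x0=[0.0, 0.5], bounds=[(-1.0, 1.0), (0.0, 1.0)])
      u_c_opt, v_opt = res.x

      # Step 2: recover a discrete input from the virtual one (simple rounding here).
      u_d_opt = round(v_opt)
      print(u_c_opt, v_opt, u_d_opt)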

  8. A representation-theoretic approach to the calculation of evolutionary distance in bacteria

    Science.gov (United States)

    Sumner, Jeremy G.; Jarvis, Peter D.; Francis, Andrew R.

    2017-08-01

    In the context of bacteria and models of their evolution under genome rearrangement, we explore a novel application of group representation theory to the inference of evolutionary history. Our contribution is to show, in a very general maximum likelihood setting, how to use elementary matrix algebra to sidestep intractable combinatorial computations and convert the problem into one of eigenvalue estimation amenable to standard numerical approximation techniques.

  9. Remembering the evolutionary Freud.

    Science.gov (United States)

    Young, Allan

    2006-03-01

    Throughout his career as a writer, Sigmund Freud maintained an interest in the evolutionary origins of the human mind and its neurotic and psychotic disorders. In common with many writers then and now, he believed that the evolutionary past is conserved in the mind and the brain. Today the "evolutionary Freud" is nearly forgotten. Even among Freudians, he is regarded to be a red herring, relevant only to the extent that he diverts attention from the enduring achievements of the authentic Freud. There are three ways to explain these attitudes. First, the evolutionary Freud's key work is the "Overview of the Transference Neurosis" (1915). But it was published at an inopportune moment, forty years after the author's death, during the so-called "Freud wars." Second, Freud eventually lost interest in the "Overview" and the prospect of a comprehensive evolutionary theory of psychopathology. The publication of The Ego and the Id (1923), introducing Freud's structural theory of the psyche, marked the point of no return. Finally, Freud's evolutionary theory is simply not credible. It is based on just-so stories and a thoroughly discredited evolutionary mechanism, Lamarckian use-inheritance. Explanations one and two are probably correct but also uninteresting. Explanation number three assumes that there is a fundamental difference between Freud's evolutionary narratives (not credible) and the evolutionary accounts of psychopathology that currently circulate in psychiatry and mainstream journals (credible). The assumption is mistaken but worth investigating.

  10. [The application of computer aided design and computer aided engineering technique in separation of Pygopagus conjoined twins].

    Science.gov (United States)

    Zhang, Zhi-cheng; Sun, Tian-sheng; Li, Fang; Tang, Guo-lin

    2009-05-19

    To explore the effect of CAD- and CAE-related techniques in the separation of Pygopagus conjoined twins. CT images of Pygopagus conjoined twins were obtained and reconstructed in three dimensions with the Mimics software. 3D entity models of the skin and spine of the conjoined twins were made by a fast plastic technique and equipment according to the 3D data model. The circumference and area of the fused and independent dural sacs were measured with the AutoCAD software. The entity model is a faithful reflection of the skin and spine of the Pygopagus twins. It was used in the procedures of discussion, sham operation, skin flap design and informed consent. In the MRI measurements, the circumference and area of the fused dural sac were larger than those of the independent dural sacs; that is to say, the dural sac defect could be repaired by direct suture. The intraoperative findings matched the imaging measurements. The application of CAD and CAE in preoperative planning greatly contributed to the successful separation of the Pygopagus conjoined twins.

  11. Draft of diagnostic techniques for primary coolant circuit facilities using control computer

    International Nuclear Information System (INIS)

    Suchy, R.; Procka, V.; Murin, V.; Rybarova, D.

    A method is proposed for the in-service, on-line diagnostics of selected primary circuit components by means of a control computer. Computer processing will involve the measurements of neutron flux, the pressure difference in the pumps and in the core, and the vibrations of primary circuit mechanical parts. (H.S.)

  12. Virtual photons in imaginary time: Computing exact Casimir forces via standard numerical electromagnetism techniques

    NARCIS (Netherlands)

    Rodriguez, A.; Ibanescu, M.; Iannuzzi, D.; Joannopoulos, J. D.; Johnson, S.T.

    2007-01-01

    We describe a numerical method to compute Casimir forces in arbitrary geometries, for arbitrary dielectric and metallic materials, with arbitrary accuracy (given sufficient computational resources). Our approach, based on well-established integration of the mean stress tensor evaluated via the

  13. Using Animation to Support the Teaching of Computer Game Development Techniques

    Science.gov (United States)

    Taylor, Mark John; Pountney, David C.; Baskett, M.

    2008-01-01

    In this paper, we examine the potential use of animation for supporting the teaching of some of the mathematical concepts that underlie computer games development activities, such as vector and matrix algebra. An experiment was conducted with a group of UK undergraduate computing students to compare the perceived usefulness of animated and static…

  14. Computational methods for molecular structure determination: theory and technique. NRCC Proceedings No. 8

    International Nuclear Information System (INIS)

    1979-01-01

    Goal of this workshop was to provide an introduction to the use of state-of-the-art computer codes for the semi-empirical and ab initio computation of the electronic structure and geometry of small and large molecules. The workshop consisted of 15 lectures on the theoretical foundations of the codes, followed by laboratory sessions which utilized these codes

  15. Linking Computer Algebra Systems and Paper-and-Pencil Techniques To Support the Teaching of Mathematics.

    Science.gov (United States)

    van Herwaarden, Onno A.; Gielen, Joseph L. W.

    2002-01-01

    Focuses on students showing a lack of conceptual insight while using computer algebra systems (CAS) in the setting of an elementary calculus and linear algebra course for first year university students in social sciences. The use of a computer algebra environment has been incorporated into a more traditional course but with special attention on…

  16. Time series analysis of reference crop evapotranspiration using soft computing techniques for Ganjam District, Odisha, India

    Science.gov (United States)

    Patra, S. R.

    2017-12-01

    minimization principle. The reliability of these computational models was analysed in light of the simulation results, and it was found that the SVM model produces the best results among the three. Future research should be directed towards extending the validation data set and checking the validity of our results in different areas with hybrid intelligence techniques.
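    The abstract above compares three soft-computing models, with an SVM performing best. The sketch below shows one hedged way such a support vector regression could be set up on lagged ET0 values with scikit-learn; the lag length, kernel settings and synthetic data are assumptions for illustration, not the study's actual configuration.

      import numpy as np
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVR

      def make_lagged(series, n_lags=12):
          # Predict ET0 at time t from the previous n_lags observations.
          X = np.array([series[i - n_lags:i] for i in range(n_lags, len(series))])
          return X, np.asarray(series[n_lags:])

      # Stand-in monthly reference evapotranspiration series (mm/day).
      rng = np.random.default_rng(0)
      et0 = 4.0 + 2.0 * np.sin(np.arange(240) * 2 * np.pi / 12) + rng.normal(0, 0.3, 240)

      X, y = make_lagged(et0)
      split = int(0.8 * len(X))
      model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
      model.fit(X[:split], y[:split])
      rmse = np.sqrt(np.mean((model.predict(X[split:]) - y[split:]) ** 2))
      print(f"test RMSE: {rmse:.3f} mm/day")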

  17. Effect of various veneering techniques on mechanical strength of computer-controlled zirconia framework designs.

    Science.gov (United States)

    Kanat, Burcu; Cömlekoğlu, Erhan M; Dündar-Çömlekoğlu, Mine; Hakan Sen, Bilge; Ozcan, Mutlu; Ali Güngör, Mehmet

    2014-08-01

    The objectives of this study were to evaluate the fracture resistance (FR), flexural strength (FS), and shear bond strength (SBS) of zirconia framework material veneered with different methods and to assess the stress distributions using finite element analysis (FEA). Zirconia frameworks fabricated in the forms of crowns for FR, bars for FS, and disks for SBS (N = 90, n = 10) were veneered with either (a) file splitting (CAD-on) (CD), (b) layering (L), or (c) overpressing (P) methods. For crown specimens, stainless steel dies (N = 30; 1 mm chamfer) were scanned using the labside contrast spray. A bilayered design was produced for CD, whereas a reduced design (1 mm) was used for L and P to support the veneer by computer-aided design and manufacturing. For bar (1.5 × 5 × 25 mm³) and disk (2.5 mm diameter, 2.5 mm height) specimens, zirconia blocks were sectioned under water cooling with a low-speed diamond saw and sintered. To prepare the suprastructures in the appropriate shapes for the three mechanical tests, nano-fluorapatite ceramic was layered and fired for L, fluorapatite-ceramic was pressed for P, and the milled lithium-disilicate ceramics were fused with zirconia by a thixotropic glass ceramic for CD and then sintered for crystallization of veneering ceramic. Crowns were then cemented to the metal dies. All specimens were stored at 37°C, 100% humidity for 48 hours. Mechanical tests were performed, and data were statistically analyzed (ANOVA, Tukey's, α = 0.05). Stereomicroscopy and scanning electron microscopy (SEM) were used to evaluate the failure modes and surface structure. FEA modeling of the crowns was obtained. Mean FR values (N ± SD) of CD (4408 ± 608) and L (4323 ± 462) were higher than P (2507 ± 594) (p < 0.05). … mechanical tests, whereas a layering technique increased the FR when an anatomical core design was employed. File splitting (CAD-on) or layering veneering ceramic on zirconia with a reduced framework design may reduce ceramic chipping

  18. Efficient Proximity Computation Techniques Using ZIP Code Data for Smart Cities †

    Directory of Open Access Journals (Sweden)

    Muhammad Harist Murdani

    2018-03-01

    Full Text Available In this paper, we are interested in computing ZIP code proximity from two perspectives, proximity between two ZIP codes (Ad-Hoc) and neighborhood proximity (Top-K). Such a computation can be used for ZIP code-based target marketing as one of the smart city applications. A naïve approach to this computation is the usage of the distance between ZIP codes. We redefine a distance metric combining the centroid distance with the intersecting road network between ZIP codes by using a weighted sum method. Furthermore, we prove that the results of our combined approach conform to the characteristics of distance measurement. We have proposed a general and heuristic approach for computing Ad-Hoc proximity, while for computing Top-K proximity, we have proposed a general approach only. Our experimental results indicate that our approaches are verifiable and effective in reducing the execution time and search space.

  19. Efficient Proximity Computation Techniques Using ZIP Code Data for Smart Cities †.

    Science.gov (United States)

    Murdani, Muhammad Harist; Kwon, Joonho; Choi, Yoon-Ho; Hong, Bonghee

    2018-03-24

    In this paper, we are interested in computing ZIP code proximity from two perspectives, proximity between two ZIP codes ( Ad-Hoc ) and neighborhood proximity ( Top-K ). Such a computation can be used for ZIP code-based target marketing as one of the smart city applications. A naïve approach to this computation is the usage of the distance between ZIP codes. We redefine a distance metric combining the centroid distance with the intersecting road network between ZIP codes by using a weighted sum method. Furthermore, we prove that the results of our combined approach conform to the characteristics of distance measurement. We have proposed a general and heuristic approach for computing Ad-Hoc proximity, while for computing Top-K proximity, we have proposed a general approach only. Our experimental results indicate that our approaches are verifiable and effective in reducing the execution time and search space.
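    A minimal sketch of the weighted-sum idea described above: the combined distance blends the centroid distance with an inverted road-connectivity term, so that ZIP codes sharing many road segments count as closer. The weight, the normalisation of the road term and the toy data are assumptions of this sketch, not the paper's exact metric.

      import math

      def centroid_distance(a, b):
          # Euclidean distance between ZIP-code centroids (projected coordinates).
          return math.hypot(a[0] - b[0], a[1] - b[1])

      def combined_distance(zip_a, zip_b, centroids, shared_roads, w=0.7, max_roads=10):
          # shared_roads[(a, b)] = number of road segments crossing the common boundary;
          # more shared roads means the areas behave as 'closer', so the term is inverted.
          d = centroid_distance(centroids[zip_a], centroids[zip_b])
          roads = shared_roads.get((zip_a, zip_b), shared_roads.get((zip_b, zip_a), 0))
          road_term = 1.0 - min(roads, max_roads) / max_roads
          return w * d + (1.0 - w) * road_term

      # Ad-Hoc proximity between two ZIP codes, and Top-K neighbours of one ZIP code.
      centroids = {"48201": (0.0, 0.0), "48202": (1.0, 0.5), "48203": (3.0, 2.0)}
      roads = {("48201", "48202"): 6, ("48202", "48203"): 2}
      print(combined_distance("48201", "48202", centroids, roads))
      top_k = sorted((z for z in centroids if z != "48201"),
                     key=lambda z: combined_distance("48201", z, centroids, roads))[:2]
      print(top_k)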

  20. GRAVTool, Advances on the Package to Compute Geoid Model path by the Remove-Compute-Restore Technique, Following Helmert's Condensation Method

    Science.gov (United States)

    Marotta, G. S.

    2017-12-01

    Currently, there are several methods to determine geoid models. They can be based on terrestrial gravity data, geopotential coefficients, astrogeodetic data or a combination of them. Among the techniques to compute a precise geoid model, the Remove-Compute-Restore (RCR) technique has been widely applied. It considers short, medium and long wavelengths derived from altitude data provided by Digital Terrain Models (DTM), terrestrial gravity data and a Global Geopotential Model (GGM), respectively. In order to apply this technique, it is necessary to create procedures that compute gravity anomalies and geoid models by the integration of the different wavelengths, and that adjust these models to a local vertical datum. This research presents the advances on the package called GRAVTool to compute geoid models by the RCR technique, following Helmert's condensation method, and its application in a study area. The study area comprises the Federal District of Brazil, with 6000 km², wavy relief and heights varying from 600 m to 1340 m, located between the coordinates 48.25ºW, 15.45ºS and 47.33ºW, 16.06ºS. The results of the numerical example show that the geoid model computed by the GRAVTool package, after analysis of the density, DTM and GGM values, is more adequate to the reference values used in the study area. The accuracy of the computed model (σ = ±0.058 m, RMS = 0.067 m, maximum = 0.124 m and minimum = -0.155 m), using a density value of 2.702 ± 0.024 g/cm³, the DTM SRTM Void Filled 3 arc-second and the GGM EIGEN-6C4 up to degree and order 250, matches the uncertainty (σ = ±0.073 m) of 26 randomly spaced points where the geoid was determined by the geometrical levelling technique supported by GNSS positioning. The results were also better than those achieved by the Brazilian official regional geoid model (σ = ±0.076 m, RMS = 0.098 m, maximum = 0.320 m and minimum = -0.061 m).
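    As a schematic of the remove-compute-restore bookkeeping only (the GRAVTool package performs the actual Stokes integration and Helmert condensation), the sketch below removes the GGM and terrain contributions from the observed gravity anomalies, computes a residual geoid, and restores the removed components; every function body and value here is a placeholder assumption.

      import numpy as np

      def remove_compute_restore(dg_obs, dg_ggm, dg_terrain, n_ggm, n_indirect, stokes):
          # dg_obs:      observed free-air gravity anomalies
          # dg_ggm:      long-wavelength anomalies synthesised from the GGM
          # dg_terrain:  short-wavelength terrain (Helmert/RTM) effect from the DTM
          # n_ggm:       long-wavelength geoid heights from the GGM
          # n_indirect:  indirect effect of Helmert's condensation on the geoid
          # stokes:      callable performing the Stokes integration of residual anomalies
          dg_res = dg_obs - dg_ggm - dg_terrain          # remove
          n_res = stokes(dg_res)                         # compute
          return n_ggm + n_res + n_indirect              # restore

      # Toy call with a placeholder Stokes operator (NOT a real integration).
      fake_stokes = lambda dg: 0.01 * dg
      grid = np.zeros((4, 4))
      geoid = remove_compute_restore(grid + 30.0, grid + 28.0, grid + 1.0,
                                     grid + 780.0, grid + 0.02, fake_stokes)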

  1. An observer study comparing spot imaging regions selected by radiologists and a computer for an automated stereo spot mammography technique

    International Nuclear Information System (INIS)

    Goodsitt, Mitchell M.; Chan, Heang-Ping; Lydick, Justin T.; Gandra, Chaitanya R.; Chen, Nelson G.; Helvie, Mark A.; Bailey, Janet E.; Roubidoux, Marilyn A.; Paramagul, Chintana; Blane, Caroline E.; Sahiner, Berkman; Petrick, Nicholas A.

    2004-01-01

    We are developing an automated stereo spot mammography technique for improved imaging of suspicious dense regions within digital mammograms. The technique entails the acquisition of a full-field digital mammogram, automated detection of a suspicious dense region within that mammogram by a computer aided detection (CAD) program, and acquisition of a stereo pair of images with automated collimation to the suspicious region. The latter stereo spot image is obtained within seconds of the original full-field mammogram, without releasing the compression paddle. The spot image is viewed on a stereo video display. A critical element of this technique is the automated detection of suspicious regions for spot imaging. We performed an observer study to compare the suspicious regions selected by radiologists with those selected by a CAD program developed at the University of Michigan. True regions of interest (TROIs) were separately determined by one of the radiologists who reviewed the original mammograms, biopsy images, and histology results. We compared the radiologist and computer-selected regions of interest (ROIs) to the TROIs. Both the radiologists and the computer were allowed to select up to 3 regions in each of 200 images (mixture of 100 CC and 100 MLO views). We computed overlap indices (the overlap index is defined as the ratio of the area of intersection to the area of interest) to quantify the agreement between the selected regions in each image. The averages of the largest overlap indices per image for the 5 radiologist-to-computer comparisons were directly related to the average number of regions per image traced by the radiologists (about 50% for 1 region/image, 84% for 2 regions/image and 96% for 3 regions/image). The average of the overlap indices with all of the TROIs was 73% for CAD and 76.8%+/-10.0% for the radiologists. This study indicates that the CAD determined ROIs could potentially be useful for a screening technique that includes stereo spot
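    The overlap index used in the study is defined as the ratio of the area of intersection to the area of interest; a minimal sketch for axis-aligned rectangular ROIs is given below, where the rectangle representation is an assumption made only for illustration.

      def overlap_index(roi, reference):
          # Ratio of the intersection area to the area of the reference region.
          # Each region is (x_min, y_min, x_max, y_max) in image coordinates.
          ix = max(0.0, min(roi[2], reference[2]) - max(roi[0], reference[0]))
          iy = max(0.0, min(roi[3], reference[3]) - max(roi[1], reference[1]))
          ref_area = (reference[2] - reference[0]) * (reference[3] - reference[1])
          return (ix * iy) / ref_area if ref_area > 0 else 0.0

      # Example: computer-selected ROI versus the true ROI (TROI).
      print(overlap_index(roi=(10, 10, 60, 60), reference=(20, 20, 70, 70)))  # 0.64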

  2. Clinical Evaluation of a Dual-Side Readout Technique Computed Radiography System in Chest Radiography of Premature Neonates

    International Nuclear Information System (INIS)

    Carlander, A.; Hansson, J.; Soederberg, J.; Steneryd, K.; Baath, M.

    2008-01-01

    Background: Recently, the dual-side readout technique has been introduced in computed radiography, leading to an increase in detective quantum efficiency (DQE) compared with the single-side readout technique. Purpose: To evaluate if the increase in DQE with the dual-side readout technique results in a higher clinical image quality in chest radiography of premature neonates at no increase in radiation dose. Material and Methods: Twenty-four chest radiographs of premature neonates were collected from both a single-side readout technique system and a double-side readout technique system. The images were processed in the same image-processing station in order for the comparison to be only dependent on the difference in readout technique. Five radiologists rated the fulfillment of four image quality criteria, which were based on important anatomical landmarks. The given ratings were analyzed using visual grading characteristics (VGC) analysis. Results: The VGC analysis showed that the reproduction of the carina with the main bronchi and the thoracic vertebrae behind the heart was better with the dual-side readout technique, whereas no significant difference for the reproduction of the central vessels or the peripheral vessels could be observed. Conclusions: The results indicate that the higher DQE of the dual-side readout technique leads to higher clinical image quality in chest radiography of premature neonates at no increase in radiation dose. Keywords: Digital radiography; lung; observer performance; pediatrics; thorax
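    Visual grading characteristics (VGC) analysis compares the cumulative distributions of the ordinal ratings given to the two systems, much like an ROC curve. A minimal non-parametric sketch follows; the 4-step scale, the toy ratings and the trapezoidal area estimate are assumptions of this illustration, not the study's data.

      import numpy as np

      def vgc_curve(ratings_new, ratings_ref, scale=(1, 2, 3, 4)):
          # Empirical VGC curve: proportions of ratings >= each scale step for the
          # new system (y) against the reference system (x). An area of 0.5 means
          # the two systems are rated equivalently.
          thresholds = sorted(scale, reverse=True)
          x = [0.0] + [float(np.mean(np.asarray(ratings_ref) >= t)) for t in thresholds]
          y = [0.0] + [float(np.mean(np.asarray(ratings_new) >= t)) for t in thresholds]
          return np.array(x), np.array(y)

      # Toy ratings on a 4-step scale: dual-side (new) versus single-side (ref) readout.
      new = [4, 3, 4, 3, 3, 4, 2, 3]
      ref = [3, 2, 3, 3, 2, 3, 2, 2]
      x, y = vgc_curve(new, ref)
      auc_vgc = float(np.sum((x[1:] - x[:-1]) * (y[1:] + y[:-1]) / 2.0))  # trapezoid rule
      print(f"AUC_VGC = {auc_vgc:.2f}")   # > 0.5 favours the new system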

  3. Phylogenetic inference with weighted codon evolutionary distances.

    Science.gov (United States)

    Criscuolo, Alexis; Michel, Christian J

    2009-04-01

    We develop a new approach to estimate a matrix of pairwise evolutionary distances from a codon-based alignment based on a codon evolutionary model. The method first computes a standard distance matrix for each of the three codon positions. Then these three distance matrices are weighted according to an estimate of the global evolutionary rate of each codon position and averaged into a unique distance matrix. Using a large set of both real and simulated codon-based alignments of nucleotide sequences, we show that this approach leads to distance matrices that have a significantly better treelikeness compared to those obtained by standard nucleotide evolutionary distances. We also propose an alternative weighting to eliminate the part of the noise often associated with some codon positions, particularly the third position, which is known to induce a fast evolutionary rate. Simulation results show that fast distance-based tree reconstruction algorithms on distance matrices based on this codon position weighting can lead to phylogenetic trees that are at least as accurate as, if not better, than those inferred by maximum likelihood. Finally, a well-known multigene dataset composed of eight yeast species and 106 codon-based alignments is reanalyzed and shows that our codon evolutionary distances allow building a phylogenetic tree which is similar to those obtained by non-distance-based methods (e.g., maximum parsimony and maximum likelihood) and also significantly improved compared to standard nucleotide evolutionary distance estimates.
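    A minimal sketch of the averaging step described above: three per-position distance matrices are combined with weights derived from each codon position's estimated global evolutionary rate. The matrices and weights below are placeholders; computing the per-position distances themselves would require a nucleotide substitution model.

      import numpy as np

      def weighted_codon_distance(d_pos1, d_pos2, d_pos3, weights):
          # Combine three codon-position distance matrices into one; a weight of 0
          # (e.g., for a noisy third position) eliminates that position entirely.
          w = np.asarray(weights, dtype=float)
          w = w / w.sum()
          return w[0] * d_pos1 + w[1] * d_pos2 + w[2] * d_pos3

      # Toy 3-taxon example in which the fast-evolving third position is down-weighted.
      d1 = np.array([[0.00, 0.10, 0.12], [0.10, 0.00, 0.08], [0.12, 0.08, 0.00]])
      d2 = np.array([[0.00, 0.07, 0.09], [0.07, 0.00, 0.06], [0.09, 0.06, 0.00]])
      d3 = np.array([[0.00, 0.55, 0.60], [0.55, 0.00, 0.50], [0.60, 0.50, 0.00]])
      print(weighted_codon_distance(d1, d2, d3, weights=[1.0, 0.8, 0.3]))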

  4. Image-processing techniques used in the computer-aided detection of radiographic lesions in anatomic background

    International Nuclear Information System (INIS)

    Giger, M.L.; Doi, K.; MacMahon, H.; Yin, F.F.

    1988-01-01

    The authors developed feature-extraction techniques for use in the computer-aided detection of pulmonary nodules in digital chest images. Use of such a computer-aided detection scheme, which would alert radiologists to the locations of suspected lung nodules, is expected to reduce the number of false-negative diagnoses. False-negative diagnoses (i.e., misses) are a current problem in chest radiology with "miss-rates" as high as 30%. This may be due to the camouflaging effect of surrounding anatomic background on the nodule, or to the subjective and varying decision criteria used by radiologists.

  5. Rehabilitation of patients with motor disabilities using computer vision based techniques

    Directory of Open Access Journals (Sweden)

    Alejandro Reyes-Amaro

    2012-05-01

    Full Text Available In this paper we present details about the implementation of computer-vision-based applications for the rehabilitation of patients with motor disabilities. The applications are conceived as serious games, where the computer-patient interaction during play contributes to the development of different motor skills. The use of computer vision methods allows the automatic guidance of the patient’s movements, making constant specialized supervision unnecessary. The hardware requirements are limited to low-cost devices such as ordinary webcams and netbooks.

  6. A Computationally-Efficient, Multi-Mechanism Based Framework for the Comprehensive Modeling of the Evolutionary Behavior of Shape Memory Alloys

    Science.gov (United States)

    Saleeb, Atef F.; Vaidyanathan, Raj

    2016-01-01

    The report summarizes the accomplishments made during the 4-year duration of the project. Here, the major emphasis is placed on the different tasks performed by the two research teams; i.e., the modeling activities by the University of Akron (UA) team and the experimental and neutron diffraction studies conducted by the University of Central Florida (UCF) team, during this 4-year period. Further technical details are given in the upcoming sections by UA and UCF for each of the milestones/years (together with the corresponding figures and captions). The project mainly involved the development, validation, and application of a general theoretical model that is capable of capturing the nonlinear hysteretic responses, including pseudoelasticity, shape memory effect, rate-dependency, multi-axiality, and asymmetry in the tension versus compression response of shape memory alloys. Among the targeted goals for the SMA model was its ability to account for the evolutionary character of the response (including transient and long term behavior under sustained cycles) for both conventional and high temperature (HT) SMAs, as well as being able to simulate some of the devices which exploit these unique material systems. This required extensive (uniaxial and multi-axial) experiments to guide us in calibrating and characterizing the model. Moreover, since the model is formulated on the theoretical notion of internal state variables (ISVs), neutron diffraction experiments were needed to establish the linkage between the micromechanical changes and these ISVs. In addition, the design of the model should allow easy implementation in large-scale finite element applications to study the behavior of devices making use of these SMA materials under different loading controls. A summary of the activities and progress/achievements made during this period is given below in detail for the University of Akron (Section 2.0) and the University of Central Florida (Section 3.0).

  7. Attractive evolutionary equilibria

    OpenAIRE

    Roorda, Berend; Joosten, Reinoud

    2011-01-01

    We present attractiveness, a refinement criterion for evolutionary equilibria. Equilibria surviving this criterion are robust to small perturbations of the underlying payoff system or the dynamics at hand. Furthermore, certain attractive equilibria are equivalent to others for certain evolutionary dynamics. For instance, each attractive evolutionarily stable strategy is an attractive evolutionarily stable equilibrium for certain barycentric ray-projection dynamics, and vice versa.

  8. Evolutionary Robotics: What, Why, and Where to

    Directory of Open Access Journals (Sweden)

    Stephane Doncieux

    2015-03-01

    Full Text Available Evolutionary robotics applies the selection, variation, and heredity principles of natural evolution to the design of robots with embodied intelligence. It can be considered as a subfield of robotics that aims to create more robust and adaptive robots. A pivotal feature of the evolutionary approach is that it considers the whole robot at once, and enables the exploitation of robot features in a holistic manner. Evolutionary robotics can also be seen as an innovative approach to the study of evolution based on a new kind of experimentalism. The use of robots as a substrate can help address questions that are difficult, if not impossible, to investigate through computer simulations or biological studies. In this paper we consider the main achievements of evolutionary robotics, focusing particularly on its contributions to both engineering and biology. We briefly elaborate on methodological issues, review some of the most interesting findings, and discuss important open issues and promising avenues for future work.

  9. Mean-Potential Law in Evolutionary Games

    Science.gov (United States)

    Nałęcz-Jawecki, Paweł; Miękisz, Jacek

    2018-01-01

    The Letter presents a novel way to connect random walks, stochastic differential equations, and evolutionary game theory. We introduce a new concept of a potential function for discrete-space stochastic systems. It is based on a correspondence between one-dimensional stochastic differential equations and random walks, which may be exact not only in the continuous limit but also in finite-state spaces. Our method is useful for computation of fixation probabilities in discrete stochastic dynamical systems with two absorbing states. We apply it to evolutionary games, formulating two simple and intuitive criteria for evolutionary stability of pure Nash equilibria in finite populations. In particular, we show that the 1/3 law of evolutionary games, introduced by Nowak et al. [Nature, 2004], follows from a more general mean-potential law.
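    For context, the sketch below computes the fixation probability in a generic birth-death chain with two absorbing states via the standard product formula, which is the kind of quantity the mean-potential method targets; the constant transition ratio used in the example is an arbitrary assumption, not the Letter's model.

      import numpy as np

      def fixation_probability(t_plus, t_minus, i=1):
          # Probability of absorption at state N (rather than 0) starting from state i,
          # where t_plus[j], t_minus[j] are the probabilities of moving j -> j+1 and
          # j -> j-1 for the interior states j = 1 .. N-1 (standard product formula).
          gamma = np.asarray(t_minus, dtype=float) / np.asarray(t_plus, dtype=float)
          cumprod = np.cumprod(gamma)
          return (1.0 + cumprod[: i - 1].sum()) / (1.0 + cumprod.sum())

      # Example: a Moran-like process with N = 10 in which gamma_j = 1/r at every
      # interior state for a constant fitness ratio r (an illustrative assumption).
      N, r = 10, 1.1
      t_plus = [r / (r + 1.0)] * (N - 1)
      t_minus = [1.0 / (r + 1.0)] * (N - 1)
      print(fixation_probability(t_plus, t_minus, i=1))   # ~ (1 - 1/r) / (1 - r**-N)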

  10. Computer techniques for experimental work in GDR nuclear power plants with WWER

    International Nuclear Information System (INIS)

    Stemmler, G.

    1985-01-01

    Nuclear power plant units with WWER are being increasingly equipped with high-performance, programmable process control computers. There are, however, essential reasons for further advancing the development of computer-aided measuring systems, in particular for experimental work. A special structure of such systems, which is based on the division into relatively rigid data registration and primary handling and into further processing by advanced programming language, has proved useful in the GDR. (author)

  11. A SURVEY ON LOAD BALANCING IN CLOUD COMPUTING USING ARTIFICIAL INTELLIGENCE TECHNIQUES

    OpenAIRE

    Amandeep Kaur; Pooja Nagpal

    2016-01-01

    Since its inception, the cloud computing paradigm has gained the widespread popularity in the industry and academia. The economical, scalable, expedient, ubiquitous, and on-demand access to shared resources are some of the characteristics of the cloud that have resulted in shifting the business processes to the cloud. The cloud computing attracts the attention of research community due to its potential to provide tremendous benefits to the industry and the community. But with the increasing d...

  12. Optimal reliability allocation for large software projects through soft computing techniques

    DEFF Research Database (Denmark)

    Madsen, Henrik; Albeanu, Grigore; Popentiu-Vladicescu, Florin

    2012-01-01

    or maximizing the system reliability subject to budget constraints. These kinds of optimization problems were considered in both deterministic and stochastic frameworks in the literature. Recently, the intuitionistic-fuzzy optimization approach was considered as a successful soft computing modelling approach. ... Firstly, a review on existing soft computing approaches to optimization is given. The main section extends the results considering self-organizing migrating algorithms for solving intuitionistic-fuzzy optimization problems attached to complex fault-tolerant software architectures which proved

  13. Polymorphic Evolutionary Games.

    Science.gov (United States)

    Fishman, Michael A

    2016-06-07

    In this paper, I present an analytical framework for polymorphic evolutionary games suitable for explicitly modeling evolutionary processes in diploid populations with sexual reproduction. The principal aspect of the proposed approach is adding diploid genetics cum sexual recombination to a traditional evolutionary game, and switching from phenotypes to haplotypes as the new game's pure strategies. Here, the relevant pure strategy's payoffs are derived by summing the payoffs of all the phenotypes capable of producing gametes containing that particular haplotype, weighted by the pertinent probabilities. The resulting game is structurally identical to the familiar Evolutionary Games with non-linear pure strategy payoffs (Hofbauer and Sigmund, 1998. Cambridge University Press), and can be analyzed in terms of an established analytical framework for such games. These results can then be translated into the terms of genotypic, and whence phenotypic, evolutionary stability pertinent to the original game. Copyright © 2016 Elsevier Ltd. All rights reserved.

  14. An effective chaos-geometric computational approach to analysis and prediction of evolutionary dynamics of the environmental systems: Atmospheric pollution dynamics

    Science.gov (United States)

    Buyadzhi, V. V.; Glushkov, A. V.; Khetselius, O. Yu; Bunyakova, Yu Ya; Florko, T. A.; Agayar, E. V.; Solyanikova, E. P.

    2017-10-01

    The present paper concerns the results of a computational study of the dynamics of atmospheric pollutant concentrations (nitrogen dioxide, sulphur compounds, etc.) in the atmosphere of industrial cities (Odessa), using dynamical systems and chaos theory methods. Chaotic behaviour in the nitrogen dioxide and sulphurous anhydride concentration time series at several sites of the city of Odessa is numerically investigated. As usual, to reconstruct the corresponding attractor, the time delay and the embedding dimension are needed. The former is determined by the methods of the autocorrelation function and of average mutual information, and the latter is calculated by means of a correlation dimension method and the algorithm of false nearest neighbours. Further, the spectrum of Lyapunov exponents, the Kaplan-Yorke dimension and the Kolmogorov entropy are computed. The existence of low-dimensional chaos in the time series of the atmospheric pollutant concentrations has been found.
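    As a hedged illustration of the attractor-reconstruction step described above (not the authors' code), the sketch below builds a time-delay embedding of a pollutant concentration series, choosing the delay from the first drop of the autocorrelation function below 1/e; average mutual information, false nearest neighbours and the Lyapunov spectrum would follow the same pattern but are omitted for brevity, and the input series is synthetic.

      import numpy as np

      def autocorrelation_delay(x, max_lag=100):
          # Smallest lag at which the autocorrelation drops below 1/e (a common rule).
          x = np.asarray(x, dtype=float) - np.mean(x)
          acf = np.correlate(x, x, mode="full")[len(x) - 1:]
          acf = acf / acf[0]
          below = np.where(acf[: max_lag + 1] < 1.0 / np.e)[0]
          return int(below[0]) if below.size else max_lag

      def delay_embed(x, dim, tau):
          # Embed a scalar series into dim-dimensional delay vectors with delay tau.
          n = len(x) - (dim - 1) * tau
          return np.column_stack([x[i * tau: i * tau + n] for i in range(dim)])

      # Stand-in NO2 concentration series (real input would be the measured data).
      t = np.arange(2000)
      series = 40 + 10 * np.sin(0.07 * t) + np.random.default_rng(1).normal(0, 1.5, t.size)
      tau = autocorrelation_delay(series)
      attractor = delay_embed(series, dim=3, tau=tau)
      print(tau, attractor.shape)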

  15. Development of optimized techniques and requirements for computer enhancement of structural weld radiographs. Volume 1: Technical report

    Science.gov (United States)

    Adams, J. R.; Hawley, S. W.; Peterson, G. R.; Salinger, S. S.; Workman, R. A.

    1971-01-01

    A hardware and software specification covering requirements for the computer enhancement of structural weld radiographs was considered. Three scanning systems were used to digitize more than 15 weld radiographs. The performance of these systems was evaluated by determining modulation transfer functions and noise characteristics. Enhancement techniques were developed and applied to the digitized radiographs. The scanning parameters of spot size and spacing and film density were studied to optimize the information content of the digital representation of the image.

  16. An Improved Evolutionary Programming with Voting and Elitist Dispersal Scheme

    Science.gov (United States)

    Maity, Sayan; Gunjan, Kumar; Das, Swagatam

    Although initially conceived for evolving finite state machines, Evolutionary Programming (EP), in its present form, is largely used as a powerful real-parameter optimizer. For function optimization, EP mainly relies on its mutation operators. Over the past few years several mutation operators have been proposed to improve the performance of EP on a wide variety of numerical benchmarks. However, unlike in real-coded GAs, there has been no fitness-induced bias in parent selection for mutation in EP: the i-th population member is selected deterministically for mutation and creation of the i-th offspring in each generation. In this article we present an improved EP variant called Evolutionary Programming with Voting and Elitist Dispersal (EPVE). The scheme encompasses a voting process which not only gives importance to the best solutions but also considers those solutions which are converging fast. By introducing the Elitist Dispersal Scheme we maintain elitism by keeping the most promising solutions intact while the other solutions are perturbed so that they can escape local minima. By applying these two techniques we are able to explore regions that have not been explored so far and that may contain optima. Comparison with the recent and best-known versions of EP over 25 benchmark functions from the CEC (Congress on Evolutionary Computation) 2005 test suite for real-parameter optimization reflects the superiority of the new scheme in terms of final accuracy, speed, and robustness.
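    For readers unfamiliar with the baseline that the article modifies, a minimal sketch of classical self-adaptive EP mutation with (mu + mu) tournament selection is given below; the voting and elitist dispersal components of EPVE are not reproduced here, and all constants are illustrative.

      import numpy as np

      rng = np.random.default_rng(42)

      def sphere(x):
          return float(np.sum(x**2))       # illustrative benchmark objective

      def ep_step(pop, sigmas, fitness, q=10):
          # One generation of classical self-adaptive EP (no voting, no dispersal).
          n, d = pop.shape
          tau = 1.0 / np.sqrt(2.0 * np.sqrt(d))
          # Parent i deterministically creates offspring i by Gaussian mutation.
          child_sigmas = sigmas * np.exp(tau * rng.normal(size=(n, d)))
          children = pop + child_sigmas * rng.normal(size=(n, d))
          # (mu + mu) selection by tournaments against q random opponents.
          union = np.vstack([pop, children])
          union_sig = np.vstack([sigmas, child_sigmas])
          scores = np.array([fitness(x) for x in union])
          wins = np.array([np.sum(scores[i] <= scores[rng.integers(0, 2 * n, q)])
                           for i in range(2 * n)])
          best = np.argsort(-wins)[:n]
          return union[best], union_sig[best]

      pop = rng.uniform(-5.0, 5.0, size=(30, 10))
      sigmas = np.full((30, 10), 3.0)
      for _ in range(200):
          pop, sigmas = ep_step(pop, sigmas, sphere)
      print(min(sphere(x) for x in pop))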

  17. Survey on Security Issues in Cloud Computing and Associated Mitigation Techniques

    Science.gov (United States)

    Bhadauria, Rohit; Sanyal, Sugata

    2012-06-01

    Cloud Computing holds the potential to eliminate the requirements for setting up high-cost computing infrastructure for the IT-based solutions and services that the industry uses. It promises to provide a flexible IT architecture, accessible through the internet from lightweight portable devices. This would allow a multi-fold increase in the capacity or capabilities of existing and new software. In a cloud computing environment, the entire data reside over a set of networked resources, enabling the data to be accessed through virtual machines. Since these data centers may lie in any corner of the world, beyond the reach and control of users, there are multifarious security and privacy challenges that need to be understood and taken care of. Also, one can never deny the possibility of a server breakdown, which has been witnessed rather often in recent times. There are various issues that need to be dealt with regarding security and privacy in a cloud computing scenario. This extensive survey paper aims to elaborate and analyze the numerous unresolved issues threatening cloud computing adoption and diffusion and affecting the various stakeholders linked to it.

  18. Preparing systems engineering and computing science students in disciplined methods, quantitative, and advanced statistical techniques to improve process performance

    Science.gov (United States)

    McCray, Wilmon Wil L., Jr.

    The research was prompted by a need to conduct a study that assesses the process improvement, quality management and analytical techniques taught to students in undergraduate and graduate systems engineering and computing science (e.g., software engineering, computer science, and information technology) degree programs at U.S. colleges and universities during their academic training that can be applied to quantitatively manage processes for performance. Everyone involved in executing repeatable processes in the software and systems development lifecycle needs to become familiar with the concepts of quantitative management, statistical thinking and process improvement methods, and with how they relate to process performance. Organizations are starting to embrace the de facto Software Engineering Institute (SEI) Capability Maturity Model Integration (CMMI®) models as process improvement frameworks to improve business process performance. High maturity process areas in the CMMI model imply the use of analytical, statistical and quantitative management techniques, and of process performance modeling, to identify and eliminate sources of variation, continually improve process performance, reduce cost and predict future outcomes. The research study identifies and provides a detailed discussion of the gap analysis findings on the process improvement and quantitative analysis techniques taught in U.S. university systems engineering and computing science degree programs, the gaps that exist in the literature, and a comparison analysis which identifies the gaps that exist between the SEI's "healthy ingredients" of a process performance model and the courses taught in U.S. university degree programs. The research also heightens awareness that academicians have conducted little research on applicable statistics and quantitative techniques that can be used to demonstrate high maturity as implied in the CMMI models. The research also includes a Monte Carlo simulation optimization

  19. Data mining techniques used to analyze students’ opinions about computerization in the educational system

    Directory of Open Access Journals (Sweden)

    Nicoleta PETCU

    2015-06-01

    Full Text Available Both the educational and the research process, together with institutional management, are unthinkable without information technologies. Through them one can harness the work capacity and creativity of both students and professors. The aim of this paper is to present the results of a quantitative study regarding: the scope of computer use, the importance of using computers, faculty activities that involve computer usage, the number of hours students work with them at university, Internet and website usage, e-learning platforms, investments in technology in the faculty, and access to computers and other IT resources. The major conclusions of this research allow us to propose strategies for increasing the quality, efficiency and transparency of didactic, scientific, administrative and communication processes.

  20. Computed simulation of radiographies of pipes - validation of techniques for wall thickness measurements

    International Nuclear Information System (INIS)

    Bellon, C.; Tillack, G.R.; Nockemann, C.; Wenzel, L.

    1995-01-01

    A macroscopic model of radiographic NDE methods and applications is given. A computer-aided approach for determination of wall thickness from radiographs is presented, guaranteeing high accuracy and reproducibility of wall thickness determination by means of projection radiography. The algorithm was applied to computed simulations of radiographies. The simulation thus offers an effective means for testing such automated wall thickness determination as a function of imaging conditions, pipe geometries, coatings, and media tracking, and likewise is a tool for validation and optimization of the method. (orig.) [de
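    A hedged sketch of one way wall thickness can be read from a digitised radiograph of a tangentially imaged pipe wall: locate the steepest rising and falling edges of a line profile across the projected wall and convert the pixel distance to millimetres. The edge criterion, the pixel size and the synthetic profile are assumptions of this illustration, not the validated algorithm.

      import numpy as np

      def wall_thickness_from_profile(profile, pixel_size_mm):
          # Distance between the steepest falling and rising edges of the wall's
          # attenuation plateau, converted from pixels to millimetres.
          grad = np.gradient(np.asarray(profile, dtype=float))
          outer_edge = int(np.argmin(grad))
          inner_edge = int(np.argmax(grad))
          return abs(inner_edge - outer_edge) * pixel_size_mm

      # Synthetic profile: an intensity drop where the wall is projected tangentially
      # (a real profile would come from the digitised film).
      x = np.arange(200)
      profile = 1000.0 - 300.0 * ((x > 80) & (x < 120))
      print(wall_thickness_from_profile(profile, pixel_size_mm=0.1))   # ~ 4 mm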

  1. Application of computer techniques to charpy impact testing of irradiated pressure vessel steels

    International Nuclear Information System (INIS)

    Landow, M.P.; Fromm, E.O.; Perrin, J.S.

    1982-01-01

    A Rockwell AIM 65 microcomputer has been modified to control a remote Charpy V-notch impact test machine. It controls not only the handling and testing of the specimen but also the transfer and storage of instrumented Charpy test data. A system of electrical solenoid-activated pneumatic cylinders and switches provides the interface between the computer and the test apparatus. A command language has been designed that allows the operator to command checkout, test procedures, and data storage via the computer. Automatic compliance with ASTM test procedures is built into the program

  2. Emission computer tomography on a Dodewaard mixed oxide fuel pin. Comparative PIE work with non-destructive and destructive techniques

    Energy Technology Data Exchange (ETDEWEB)

    Buurveld, H.A.; Dassel, G.

    1993-12-01

    A nondestructive technique as well as a destructive PIE technique have been used to verify the results obtained with a newly developed gamma-emission computer tomography (GECT) system. Multi-isotope scanning (MIS), electron probe micro analysis (EPMA) and GECT were used on a mixed oxide (MOX) fuel rod from the Dodewaard reactor with an average burnup of 24 MWd/kg fuel. GECT shows migration of Cs to the periphery of the fuel pellets and to radial cracks and pores in the fuel, whereas MIS shows Cs migration to the pellet interfaces. The EPMA technique appeared not to be useful for showing the migration of Cs, but it clearly shows the distribution of fission products from Pu. (orig./HP)

  3. On three-dimensional nuclear thermo-hydraulic computation techniques for ATR

    International Nuclear Information System (INIS)

    1997-08-01

    The three-dimensional coupled nuclear/thermo-hydraulic core computation code LAYMON-2A is used for the calculation of the power distribution and the control rod reactivity value of the ATR. This code possesses various functions required for planning core operation, such as the search function for the critical boric acid concentration, and can perform various simulation calculations such as core burnup calculations. Further, the three-dimensional analysis code for xenon dynamic characteristics in the core, LAYMON-2C, in which the dynamic characteristic equations of xenon-samarium were incorporated into the LAYMON-2A code, can take into account the change over time of the xenon-samarium concentration accompanying changes of power level and power distribution, and it is used for the analysis of the spatial oscillation characteristics of power and the regional power control characteristics due to xenon in the core. For LAYMON-2A, the computation flow, the power distribution and thermo-hydraulic computation models, and the criticality search function are explained. For LAYMON-2C, the computation flow is described. A comparison of the values calculated with the LAYMON-2A code and the operation data of the Fugen reactor is reported. (K.I.)

  4. BUILD-IT : a computer vision-based interaction technique for a planning tool

    NARCIS (Netherlands)

    Rauterberg, G.W.M.; Fjeld, M.; Krueger, H.; Bichsel, M.; Leonhardt, U.; Meier, M.; Thimbleby, H.; O'Conaill, B.; Thomas, P.J.

    1997-01-01

    Shows a method that goes beyond the established approaches of human-computer interaction. We first bring a serious critique of traditional interface types, showing their major drawbacks and limitations. Promising alternatives are offered by virtual (or immersive) reality (VR) and by augmented

  5. The study of radiographic technique with low exposure using computed panoramic tomography

    International Nuclear Information System (INIS)

    Saito, Yasuhiro

    1987-01-01

    A new imaging system for the dental field that combines recent advances in both electronics and computer technologies was developed. This new imaging system is a computed panoramic tomography process based on a newly developed laser-scan system. In this study a quantitative image evaluation was performed comparing anatomical landmarks in computed panoramic tomography at a low exposure (LPT) and in conventional panoramic tomography at a routine exposure (CPT), and the following results were obtained: 1. The diagnostic value of the CPT decreased with decreasing exposure, particularly with regard to the normal anatomical landmarks of such microstructural parts as the periodontal space, lamina dura and the enamel-dentin border. 2. The LPT had a high diagnostic value for all normal anatomical landmarks, averaging about twice as valuable diagnostically as the CPT. 3. The visual diagnostic value of the periodontal space, lamina dura, enamel-dentin border and the anatomical morphology of the teeth on the LPT was only slightly dependent on the spatial frequency enhancement rank. 4. The LPT formed images with almost the same range of density as the CPT. 5. Computed panoramic tomographs taken at a low exposure revealed more information on the trabecular bone pattern in the image than conventional panoramic tomographs taken under routine conditions in the visual spatial frequency range (0.1 - 5.0 cycle/mm). (author) 67 refs

  6. Virtual photons in imaginary time: Computing exact Casimir forces via standard numerical electromagnetism techniques

    International Nuclear Information System (INIS)

    Rodriguez, Alejandro; Ibanescu, Mihai; Joannopoulos, J. D.; Johnson, Steven G.; Iannuzzi, Davide

    2007-01-01

    We describe a numerical method to compute Casimir forces in arbitrary geometries, for arbitrary dielectric and metallic materials, with arbitrary accuracy (given sufficient computational resources). Our approach, based on well-established integration of the mean stress tensor evaluated via the fluctuation-dissipation theorem, is designed to directly exploit fast methods developed for classical computational electromagnetism, since it only involves repeated evaluation of the Green's function for imaginary frequencies (equivalently, real frequencies in imaginary time). We develop the approach by systematically examining various formulations of Casimir forces from the previous decades and evaluating them according to their suitability for numerical computation. We illustrate our approach with a simple finite-difference frequency-domain implementation, test it for known geometries such as a cylinder and a plate, and apply it to new geometries. In particular, we show that a pistonlike geometry of two squares sliding between metal walls, in both two and three dimensions with both perfect and realistic metallic materials, exhibits a surprising nonmonotonic "lateral" force from the walls.
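
    The computational kernel the abstract describes is the repeated solution of the Green's function at imaginary frequency, where the electromagnetic operator becomes real and positive-definite. The sketch below shows a minimal 1-D scalar analogue of such a solve with centered finite differences; the geometry and material values are assumed, and it is not the authors' full stress-tensor FDFD implementation.

```python
import numpy as np

def green_imag_freq(eps, xi, dx, source_index):
    """Discrete 1-D scalar Green's function at imaginary frequency w = i*xi.

    Solves (-d^2/dx^2 + xi^2 * eps(x)) G = delta(x - x_source) with centered
    finite differences and Dirichlet boundaries. At imaginary frequency the
    operator is real and positive-definite, which is what makes the
    Wick-rotated Casimir computation numerically well behaved.
    """
    n = len(eps)
    main = 2.0 / dx**2 + xi**2 * eps             # diagonal of the FD operator
    off = -1.0 / dx**2 * np.ones(n - 1)          # off-diagonal couplings
    A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    rhs = np.zeros(n)
    rhs[source_index] = 1.0 / dx                 # discrete delta function
    return np.linalg.solve(A, rhs)

# Example: vacuum gap between two dielectric slabs (toy geometry, assumed values).
n, dx = 200, 0.05
eps = np.ones(n)
eps[:40] = 10.0       # left slab
eps[-40:] = 10.0      # right slab
G = green_imag_freq(eps, xi=1.0, dx=dx, source_index=n // 2)
print("G at the source point:", G[n // 2])
```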

  7. Computer-Based Techniques for Collection of Pulmonary Function Variables during Rest and Exercise.

    Science.gov (United States)

    1991-03-01

    routinely included in experimental protocols involving hyper- and hypobaric excursions. Unfortunately, the full potential of those tests is often not...for a Pulmonary Function data acquisition system that has proven useful in the hyperbaric research laboratory. It illustrates how computers can

  8. Role of computer techniques for knowledge propagation about nuclear energetics safety

    International Nuclear Information System (INIS)

    Osachkin, V.S.

    1996-01-01

    The development of nuclear power engineering depends on the levels of nuclear, radiological and ecological safety. To ensure the approval of such levels by the community, it is necessary to spread knowledge on the safety of nuclear engineering in understandable forms. New computer technologies may play an important role in the safety education of the public and in upgrading the qualification of personnel. The progress in computer network development makes it possible to use, besides e-mail and BBS, the Internet system for remote education. As an example, a computer course on Atomic Energy and its safety is presented. This course, currently written in Russian, consists of 6 parts, namely: physical basis of the utilization of nuclear energy; technical basis of the uses of nuclear energy; nuclear reactors and their systems; safety principles, goals and nuclear safety regulation; the environmental impact of the use of nuclear power; severe accident consequences and scenarios

  9. Optimization of the cumulative risk assessment of pesticides and biocides using computational techniques: Pilot project

    DEFF Research Database (Denmark)

    Jonsdottir, Svava Osk; Reffstrup, Trine Klein; Petersen, Annette

    This pilot project is intended as the first step in developing a computational strategy to assist in refining methods for higher tier cumulative and aggregate risk assessment of exposure to mixtures of pesticides and biocides. For this purpose, physiologically based toxicokinetic (PBTK) models were...
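
    The abstract is truncated, but for orientation the sketch below shows the kind of perfusion-limited compartment equation that generic PBTK models are built from. The structure and parameter values are illustrative assumptions, not the models developed in the pilot project.

```python
# Generic perfusion-limited tissue compartment used as a building block in
# PBTK/PBPK models:  V_t * dC_t/dt = Q_t * (C_arterial - C_t / P_t).
# Structure and parameter values here are illustrative assumptions only.
def simulate_compartment(c_arterial, v_t, q_t, p_t, dt, n_steps):
    """Explicit Euler integration of one perfusion-limited tissue compartment."""
    c_t = 0.0
    for _ in range(n_steps):
        c_t += dt * q_t * (c_arterial - c_t / p_t) / v_t
    return c_t

# Assumed liver-like parameters: volume 1.8 L, blood flow 87 L/h, tissue:blood
# partition coefficient 3, constant arterial input concentration of 1.
final = simulate_compartment(c_arterial=1.0, v_t=1.8, q_t=87.0, p_t=3.0,
                             dt=0.001, n_steps=20000)
print("tissue concentration after 20 h:", round(final, 3))
```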

  10. A review of Computational Intelligence techniques in coral reef-related applications

    NARCIS (Netherlands)

    Salcedo-Sanz, S.; Cuadra, L.; Vermeij, M.J.A.

    Studies on coral reefs increasingly combine aspects of science and technology to understand the complex dynamics and processes that shape these benthic ecosystems. Recently, the use of advanced computational algorithms has entered coral reef science as new powerful tools that help solve complex

  11. Evolutionary optimization of production materials workflow processes

    DEFF Research Database (Denmark)

    Herbert, Luke Thomas; Hansen, Zaza Nadja Lee; Jacobsen, Peter

    2014-01-01

    We present an evolutionary optimisation technique for stochastic production processes, which is able to find improved production materials workflow processes with respect to arbitrary combinations of numerical quantities associated with the production process. Working from a core fragment...... of the BPMN language, we employ an evolutionary algorithm where stochastic model checking is used as a fitness function to determine the degree of improvement of candidate processes derived from the original process through mutation and cross-over operations. We illustrate this technique using a case study...
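
    As a rough illustration of the mutate/cross-over/evaluate loop described above, the sketch below shows a generic evolutionary optimiser over fixed-length candidate encodings. The fitness function is a stand-in placeholder; in the paper, fitness is obtained by stochastic model checking of candidate BPMN process fragments, which is not reproduced here.

```python
import random

def fitness(candidate):
    # Placeholder objective (assumed): prefer candidates whose genes sum to a target.
    return -abs(sum(candidate) - 42)

def mutate(candidate, rate=0.1):
    return [gene + random.choice([-1, 1]) if random.random() < rate else gene
            for gene in candidate]

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def evolve(pop_size=30, genes=8, generations=100):
    population = [[random.randint(0, 10) for _ in range(genes)] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]                      # truncation selection
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=fitness)

best = evolve()
print("best candidate:", best, "fitness:", fitness(best))
```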

  12. EvoluCode: Evolutionary Barcodes as a Unifying Framework for Multilevel Evolutionary Data.

    Science.gov (United States)

    Linard, Benjamin; Nguyen, Ngoc Hoan; Prosdocimi, Francisco; Poch, Olivier; Thompson, Julie D

    2012-01-01

    Evolutionary systems biology aims to uncover the general trends and principles governing the evolution of biological networks. An essential part of this process is the reconstruction and analysis of the evolutionary histories of these complex, dynamic networks. Unfortunately, the methodologies for representing and exploiting such complex evolutionary histories in large scale studies are currently limited. Here, we propose a new formalism, called EvoluCode (Evolutionary barCode), which allows the integration of different evolutionary parameters (e.g., sequence conservation, orthology, synteny, …) in a unifying format and facilitates the multilevel analysis and visualization of complex evolutionary histories at the genome scale. The advantages of the approach are demonstrated by constructing barcodes representing the evolution of the complete human proteome. Two large-scale studies are then described: (i) the mapping and visualization of the barcodes on the human chromosomes and (ii) automatic clustering of the barcodes to highlight protein subsets sharing similar evolutionary histories and their functional analysis. The methodologies developed here open the way to the efficient application of other data mining and knowledge extraction techniques in evolutionary systems biology studies. A database containing all EvoluCode data is available at: http://lbgi.igbmc.fr/barcodes.
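
    To make the barcode-clustering idea concrete, the sketch below treats each protein's barcode as a fixed-length numeric vector of evolutionary parameters and groups proteins with similar vectors. The parameter names, values, and the use of k-means are illustrative assumptions, not the EvoluCode format or the clustering used in the paper.

```python
import numpy as np
from sklearn.cluster import KMeans

# Toy "barcode" table: one row per protein, one column per evolutionary parameter.
# Parameter names and values are invented for illustration; EvoluCode integrates
# many such parameters per genomic level into its own barcode format.
proteins = ["P1", "P2", "P3", "P4", "P5", "P6"]
barcodes = np.array([
    # conservation, orthologs_fraction, synteny_score
    [0.95, 0.90, 0.85],
    [0.92, 0.88, 0.80],
    [0.40, 0.35, 0.20],
    [0.38, 0.30, 0.25],
    [0.70, 0.65, 0.55],
    [0.72, 0.60, 0.50],
])

# Cluster proteins whose barcodes (evolutionary histories) look similar.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(barcodes)
for name, label in zip(proteins, labels):
    print(name, "-> cluster", label)
```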

  13. Selective evolutionary generation systems: Theory and applications

    Science.gov (United States)

    Menezes, Amor A.

    This dissertation is devoted to the problem of behavior design, which is a generalization of the standard global optimization problem: instead of generating the optimizer, the generalization produces, on the space of candidate optimizers, a probability density function referred to as the behavior. The generalization depends on a parameter, the level of selectivity, such that as this parameter tends to infinity, the behavior becomes a delta function at the location of the global optimizer. The motivation for this generalization is that traditional off-line global optimization is non-resilient and non-opportunistic. That is, traditional global optimization is unresponsive to perturbations of the objective function. On-line optimization methods that are more resilient and opportunistic than their off-line counterparts typically consist of the computationally expensive sequential repetition of off-line techniques. A novel approach to inexpensive resilience and opportunism is to utilize the theory of Selective Evolutionary Generation Systems (SECS), which sequentially and probabilistically selects a candidate optimizer based on the ratio of the fitness values of two candidates and the level of selectivity. Using time-homogeneous, irreducible, ergodic Markov chains to model a sequence of local, and hence inexpensive, dynamic transitions, this dissertation proves that such transitions result in behavior that is called rational; such behavior is desirable because it can lead to both efficient search for an optimizer as well as resilient and opportunistic behavior. The dissertation also identifies system-theoretic properties of the proposed scheme, including equilibria, their stability and their optimality. Moreover, this dissertation demonstrates that the canonical genetic algorithm with fitness proportional selection and the (1+1) evolutionary strategy are particular cases of the scheme. Applications in three areas illustrate the versatility of the SECS theory: flight
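
    One plausible reading of the pairwise, ratio-based selection described above is sketched below: a candidate produced by a local transition replaces the incumbent with a probability determined by the ratio of their fitness values raised to the level of selectivity, so that selection becomes greedier as that parameter grows. The acceptance rule and the test objective are assumptions for illustration, not the dissertation's exact definitions.

```python
import math
import random

def fitness(x):
    # Assumed bimodal test objective on [-6, 6] (higher is better, strictly positive).
    return math.exp(-x * x) + 0.8 * math.exp(-(x - 3.0) ** 2)

def select(f_incumbent, f_candidate, selectivity):
    """Accept the candidate with probability f_c^k / (f_c^k + f_i^k), computed in
    log space for numerical safety, where k is the level of selectivity. This
    acceptance rule is an illustrative assumption, not the dissertation's exact
    definition."""
    t = min(selectivity * (math.log(f_incumbent) - math.log(f_candidate)), 700.0)
    return random.random() < 1.0 / (1.0 + math.exp(t))

def run(selectivity, steps=5000, step_size=0.5):
    x = random.uniform(-5.0, 5.0)
    for _ in range(steps):
        y = min(6.0, max(-6.0, x + random.gauss(0.0, step_size)))   # local, inexpensive transition
        if select(fitness(x), fitness(y), selectivity):
            x = y
    return x

# Low selectivity behaves like a random walk; high selectivity concentrates the
# behaviour near the global optimizer (x = 0 for this objective).
for k in (0.0, 1.0, 10.0, 100.0):
    print(f"selectivity {k:6.1f} -> final x = {run(k): .3f}")
```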

  14. A Computational Study on the Magnetic Resonance Coupling Technique for Wireless Power Transfer

    Directory of Open Access Journals (Sweden)

    Zakaria N.A.

    2017-01-01

    Full Text Available Non-radiative wireless power transfer (WPT) using the magnetic resonance coupling (MRC) technique has recently been a topic of discussion among researchers. The technique targets mid-range wireless power transmission, where the trade-off between distance and efficiency is central. The efficiency of a WPT system varies as the coupling distance between the two coils changes, which makes it a decisive issue for highly efficient power transfer. This paper presents case studies on the relationship between operating range and the efficiency of the MRC technique. Demonstrative WPT systems operating at two different frequencies are designed in order to verify performance. The resonance frequencies used are below 100 MHz, over a coupling range of 10 cm to 20 cm.
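
    For context on the distance-efficiency trade-off discussed above, the sketch below evaluates the widely used figure-of-merit expression for the maximum link efficiency of a two-coil resonant system, eta_max = x^2 / (1 + sqrt(1 + x^2))^2 with x = k*sqrt(Q1*Q2), under the common approximation that the coupling coefficient k falls off roughly with the cube of coil separation. The coil parameters are assumed values, not those of the paper's prototypes.

```python
import math

def max_link_efficiency(k, q1, q2):
    """Maximum efficiency of a two-coil resonant link under optimal loading:
    eta_max = x^2 / (1 + sqrt(1 + x^2))^2, with x = k * sqrt(Q1 * Q2)."""
    x2 = (k ** 2) * q1 * q2
    return x2 / (1.0 + math.sqrt(1.0 + x2)) ** 2

# Assumed coil parameters (not from the paper): Q1 = Q2 = 200, and a coupling
# coefficient that decays roughly with the cube of the coil separation.
q1 = q2 = 200.0
k0, d0 = 0.10, 0.10          # k = 0.10 at a 10 cm reference separation (assumed)
for d_cm in (10, 12, 14, 16, 18, 20):
    k = k0 * (d0 / (d_cm / 100.0)) ** 3
    print(f"d = {d_cm:2d} cm  k = {k:.4f}  eta_max = {max_link_efficiency(k, q1, q2):.2%}")
```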

  15. Outcomes of Orbital Floor Reconstruction After Extensive Maxillectomy Using the Computer-Assisted Fabricated Individual Titanium Mesh Technique.

    Science.gov (United States)

    Zhang, Wen-Bo; Mao, Chi; Liu, Xiao-Jing; Guo, Chuan-Bin; Yu, Guang-Yan; Peng, Xin

    2015-10-01

    Orbital floor defects after extensive maxillectomy can cause severe esthetic and functional deformities. Orbital floor reconstruction using the computer-assisted fabricated individual titanium mesh technique is a promising method. This study evaluated the application and clinical outcomes of this technique. This retrospective study included 10 patients with orbital floor defects after maxillectomy performed from 2012 through 2014. A 3-dimensional individual stereo model based on mirror images of the unaffected orbit was obtained to fabricate an anatomically adapted titanium mesh using computer-assisted design and manufacturing. The titanium mesh was inserted into the defect using computer navigation. The postoperative globe projection and orbital volume were measured and the incidence of postoperative complications was evaluated. The average postoperative globe projection was 15.91 ± 1.80 mm on the affected side and 16.24 ± 2.24 mm on the unaffected side (P = .505), and the average postoperative orbital volume was 26.01 ± 1.28 and 25.57 ± 1.89 mL, respectively (P = .312). The mean mesh depth was 25.11 ± 2.13 mm. The mean follow-up period was 23.4 ± 7.7 months (12 to 34 months). Of the 10 patients, 9 did not develop diplopia or a decrease in visual acuity and ocular motility. Titanium mesh exposure was not observed in any patient. All patients were satisfied with their postoperative facial symmetry. Orbital floor reconstruction after extensive maxillectomy with an individual titanium mesh fabricated using computer-assisted techniques can preserve globe projection and orbital volume, resulting in successful clinical outcomes. Copyright © 2015 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.
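
    The "mirror images of the unaffected orbit" step mentioned above amounts to reflecting the healthy-side surface geometry across the midsagittal plane to obtain a reconstruction target for the individual mesh. The sketch below shows that reflection for a toy point cloud of surface vertices; it is a geometric illustration only, not the clinical CAD/CAM and navigation workflow.

```python
import numpy as np

def mirror_across_midsagittal(vertices, plane_x=0.0):
    """Reflect surface vertices across the midsagittal plane x = plane_x.

    In mirror-image planning, the unaffected orbit's surface is reflected onto
    the defect side to serve as the target shape for the individual titanium
    mesh. Here the plane is assumed to be x = 0 in a head-aligned frame.
    """
    mirrored = vertices.copy()
    mirrored[:, 0] = 2.0 * plane_x - mirrored[:, 0]   # flip the left-right axis
    return mirrored

# Toy vertex cloud (assumed coordinates, millimetres) sampled from the unaffected orbital floor.
unaffected = np.array([
    [18.0, 42.0, 10.0],
    [22.0, 45.0, 11.5],
    [25.0, 40.0,  9.0],
])
print(mirror_across_midsagittal(unaffected))
```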

  16. Technique of application of contrast media in computed tomography of the heart

    International Nuclear Information System (INIS)

    Heuser, L.; Friedmann, G.

    1982-01-01

    Cardiac imaging by means of CT requires the administration of intravenous contrast medium, which can be applied by infusion or by rapid bolus injection. Contrast infusion is easier to perform and yields opacification of all cardiac cavities. With the bolus technique, selective enhancement of individual cardiac chambers can be obtained, which provides increased image quality and better resolution of cardiac structures. Both techniques are described, and the results of 221 examinations are analysed with special attention to image quality, technical effort and contrast medium side effects. (orig.) [de

  17. SALP (Sensitivity Analysis by List Processing), a computer assisted technique for binary systems reliability analysis

    International Nuclear Information System (INIS)

    Astolfi, M.; Mancini, G.; Volta, G.; Van Den Muyzenberg, C.L.; Contini, S.; Garribba, S.

    1978-01-01

    A computerized technique is described which allows various complex situations encountered in safety and reliability assessment to be modelled by AND, OR, NOT binary trees. Through the use of list processing, numerical and non-numerical types of information are handled together. By proper marking of gates and primary events, stand-by systems, common-cause failures and multiphase systems can be analyzed. The basic algorithms used in this technique are shown in detail. An application to a stand-by and multiphase system is then illustrated.
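
    As a toy illustration of the AND/OR/NOT tree representation mentioned above (not SALP's list-processing algorithms, gate marking, or multiphase handling), the sketch below evaluates the top-event probability of a small fault tree from independent primary-event probabilities.

```python
from dataclasses import dataclass
from typing import List

# Minimal AND/OR/NOT fault-tree evaluation assuming independent primary events.

@dataclass
class Event:
    probability: float
    def prob(self) -> float:
        return self.probability

@dataclass
class Gate:
    kind: str                  # "AND", "OR" or "NOT"
    children: List[object]
    def prob(self) -> float:
        ps = [child.prob() for child in self.children]
        if self.kind == "AND":
            result = 1.0
            for p in ps:
                result *= p
            return result
        if self.kind == "OR":
            result = 1.0
            for p in ps:
                result *= (1.0 - p)
            return 1.0 - result
        if self.kind == "NOT":
            return 1.0 - ps[0]
        raise ValueError(f"unknown gate kind: {self.kind}")

# Toy system (assumed): the top event occurs if both pumps fail OR the controller fails.
pump_a, pump_b, controller = Event(0.05), Event(0.05), Event(0.01)
top = Gate("OR", [Gate("AND", [pump_a, pump_b]), controller])
print("top-event probability:", round(top.prob(), 6))
```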

  18. International Conference on Computer, Communication and Computational Sciences

    CERN Document Server

    Mishra, Krishn; Tiwari, Shailesh; Singh, Vivek

    2017-01-01

    The exchange of information and innovative ideas is necessary to accelerate the development of technology. With the advent of technology, intelligent and soft computing techniques came into existence with a wide scope of implementation in the engineering sciences. Keeping this ideology in mind, this book includes insights that reflect the ‘Advances in Computer and Computational Sciences’ from upcoming researchers and leading academicians across the globe. It contains high-quality peer-reviewed papers of the ‘International Conference on Computer, Communication and Computational Sciences’ (ICCCCS 2016), held during 12-13 August 2016 in Ajmer, India. These papers are arranged in the form of chapters. The content of the book is divided into two volumes that cover a variety of topics such as intelligent hardware and software design, advanced communications, power and energy optimization, intelligent techniques used in the internet of things, intelligent image processing, advanced software engineering, evolutionary and ...

  19. Multi-objective evolutionary algorithms for fuzzy classification in survival prediction.

    Science.gov (United States)

    Jiménez, Fernando; Sánchez, Gracia; Juárez, José M

    2014-03-01

    This paper presents a novel rule-based fuzzy classification methodology for survival/mortality prediction in severely burnt patients. Due to the ethical aspects involved in this medical scenario, physicians tend not to accept a computer-based evaluation unless they understand why and how such a recommendation is given. Therefore, any fuzzy classifier model must be both accurate and interpretable. The proposed methodology is a three-step process: (1) multi-objective constrained optimization over a patient data set, using Pareto-based elitist multi-objective evolutionary algorithms to maximize accuracy and minimize the complexity (number of rules) of classifiers, subject to interpretability constraints; this step produces a set of alternative (Pareto) classifiers; (2) linguistic labeling, which assigns a linguistic label to each fuzzy set of the classifiers; this step is essential to the interpretability of the classifiers; (3) decision making, whereby a classifier is chosen, if it is satisfactory, according to the preferences of the decision maker. If no classifier is satisfactory for the decision maker, the process starts again in step (1) with a different input parameter set. The performance of three multi-objective evolutionary algorithms, the niched pre-selection multi-objective algorithm, the elitist Pareto-based multi-objective evolutionary algorithm for diversity reinforcement (ENORA) and the non-dominated sorting genetic algorithm (NSGA-II), was tested using a patient data set from an intensive care burn unit and a data set from a standard machine learning repository. The results are compared using the hypervolume multi-objective metric. In addition, the results have been compared with other non-evolutionary techniques and validated with a multi-objective cross-validation technique. Our proposal improves the classification rate obtained by other non-evolutionary techniques (decision trees, artificial neural networks, Naive Bayes, and case
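
    Step (1) above produces a set of Pareto (non-dominated) classifiers trading off accuracy against rule count. The sketch below shows only that non-dominated filtering step for two-objective candidates; the candidate values are made up, and the full evolutionary machinery of ENORA/NSGA-II is not reproduced.

```python
# Non-dominated (Pareto) filtering of candidate classifiers described by two
# objectives: maximize accuracy, minimize the number of fuzzy rules. Candidate
# values are invented for illustration.

def dominates(a, b):
    """True if candidate a is at least as good as b in both objectives and
    strictly better in at least one (higher accuracy, fewer rules)."""
    acc_a, rules_a = a["accuracy"], a["rules"]
    acc_b, rules_b = b["accuracy"], b["rules"]
    return (acc_a >= acc_b and rules_a <= rules_b) and (acc_a > acc_b or rules_a < rules_b)

def pareto_front(candidates):
    return [c for c in candidates
            if not any(dominates(other, c) for other in candidates if other is not c)]

candidates = [
    {"name": "C1", "accuracy": 0.91, "rules": 12},
    {"name": "C2", "accuracy": 0.89, "rules": 6},
    {"name": "C3", "accuracy": 0.85, "rules": 4},
    {"name": "C4", "accuracy": 0.84, "rules": 9},   # dominated by C2
    {"name": "C5", "accuracy": 0.91, "rules": 15},  # dominated by C1
]
for c in pareto_front(candidates):
    print(c["name"], c["accuracy"], c["rules"])
```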

  20. Cone beam computed tomography in veterinary dentistry: description and standardization of the technique

    International Nuclear Information System (INIS)

    Roza, Marcello R.; Silva, Luiz A.F.; Fioravanti, Maria C. S.; Barriviera, Mauricio

    2009-01-01

    Eleven dogs and four cats with buccodental alterations, treated at the Centro Veterinario do Gama in Brasilia, DF, Brazil, were submitted to cone beam computed tomography. The examinations were carried out on an i-CAT tomograph, using an image acquisition height of six centimeters, a scan time of 40 seconds, a 0.2 voxel setting, 120 kilovolts and 46.72 milliampere-seconds. The ideal positioning of the animal for the examination was also determined in this study and proved to be fundamental for a successful examination, which required a simple and safe anesthetic protocol owing to the relatively short time needed to obtain the images. Several alterations and diseases were identified with accurate imaging, demonstrating that cone beam computed tomography is a safe, accessible and feasible imaging method which could be included in routine small animal dentistry diagnosis. (author)