WorldWideScience

Sample records for SLAM algorithm applied

  1. SLAM algorithm applied to robotics assistance for navigation in unknown environments

    Directory of Open Access Journals (Sweden)

    Lobo Pereira Fernando

    2010-02-01

    Full Text Available Abstract Background The combination of robotic tools with assistance technology opens a largely unexplored area of applications and advantages for disabled or elderly people in their daily tasks. Autonomous motorized wheelchair navigation inside an environment, behaviour-based control of orthopaedic arms or learning a user's preferences from a friendly interface are some examples of this new field. In this paper, a Simultaneous Localization and Mapping (SLAM) algorithm is implemented to allow environmental learning by a mobile robot while its navigation is governed by electromyographic signals. The entire system is part autonomous and part user-decision dependent (semi-autonomous). The environmental learning executed by the SLAM algorithm and the low-level behaviour-based reactions of the mobile robot are autonomous robotic tasks, whereas the mobile robot navigation inside an environment is commanded by a Muscle-Computer Interface (MCI). Methods In this paper, a sequential Extended Kalman Filter (EKF) feature-based SLAM algorithm is implemented. The features correspond to lines and corners (concave and convex) of the environment. From the SLAM architecture, a global metric map of the environment is derived. The electromyographic signals that command the robot's movements can be adapted to the patient's disabilities. For mobile robot navigation purposes, five commands were obtained from the MCI: turn to the left, turn to the right, stop, start and exit. A kinematic controller to control the mobile robot was implemented. A low-level behaviour strategy was also implemented to avoid the robot's collisions with the environment and moving agents. Results The entire system was tested on a population of seven volunteers: three elderly, two below-elbow amputees and two young normally limbed patients. The experiments were performed within a closed low-dynamic environment. 
Subjects took an average time of 35 minutes to navigate the environment and to learn how
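The sequential EKF feature-based SLAM cycle described in this record can be sketched as one predict/correct step for a planar robot with point landmarks. This is a minimal textbook-style illustration, not the authors' implementation; the unicycle motion model, state layout and function name are all hypothetical:

```python
import numpy as np

def ekf_slam_step(x, P, u, z, landmark_idx, Q, R, dt=0.1):
    """One predict/correct cycle of a planar feature-based EKF-SLAM.

    x: joint state [rx, ry, rtheta, l1x, l1y, ...]; P: its covariance.
    u: control (v, w); z: range-bearing measurement of one known landmark.
    Q: 3x3 process noise on the robot pose; R: 2x2 measurement noise.
    """
    n = len(x)
    # --- Prediction: unicycle motion model acts on the robot sub-state ---
    v, w = u
    th = x[2]
    x = x.copy()
    x[0] += v * dt * np.cos(th)
    x[1] += v * dt * np.sin(th)
    x[2] += w * dt
    F = np.eye(n)
    F[0, 2] = -v * dt * np.sin(th)
    F[1, 2] = v * dt * np.cos(th)
    P = F @ P @ F.T
    P[:3, :3] += Q
    # --- Correction: range-bearing observation of landmark landmark_idx ---
    i = 3 + 2 * landmark_idx
    dx, dy = x[i] - x[0], x[i + 1] - x[1]
    q = dx * dx + dy * dy
    sq = np.sqrt(q)
    z_hat = np.array([sq, np.arctan2(dy, dx) - x[2]])
    H = np.zeros((2, n))
    H[0, [0, 1, i, i + 1]] = [-dx / sq, -dy / sq, dx / sq, dy / sq]
    H[1, [0, 1, 2, i, i + 1]] = [dy / q, -dx / q, -1.0, -dy / q, dx / q]
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    innov = z - z_hat
    innov[1] = (innov[1] + np.pi) % (2 * np.pi) - np.pi   # wrap bearing
    x = x + K @ innov
    P = (np.eye(n) - K @ H) @ P
    return x, P
```

With line and corner features, as in the paper, only the measurement model and Jacobian H would change; the predict/correct structure stays the same.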

  2. AEKF-SLAM: A New Algorithm for Robotic Underwater Navigation

    Directory of Open Access Journals (Sweden)

    Xin Yuan

    2017-05-01

    Full Text Available In this work, we focus on key topics related to underwater Simultaneous Localization and Mapping (SLAM) applications. A detailed review of major studies in the literature and our proposed solutions for addressing the problem are presented. The main goal of this paper is to enhance the accuracy and robustness of SLAM-based navigation for underwater robotics at low computational cost. Therefore, we present a new method called AEKF-SLAM that employs an Augmented Extended Kalman Filter (AEKF)-based SLAM algorithm. The AEKF-based SLAM approach stores the robot poses and map landmarks in a single state vector, while estimating the state parameters via a recursive and iterative estimation-update process. Here, the prediction and update stages (which exist as well in the conventional EKF) are complemented by a newly proposed augmentation stage. Applied to underwater robot navigation, AEKF-SLAM has been compared with the classic and popular FastSLAM 2.0 algorithm. In the dense loop mapping and line mapping experiments, it shows much better performance in map management with respect to landmark addition and removal, which avoids the long-term accumulation of errors and clutter in the created map. Additionally, the underwater robot achieves more precise and efficient self-localization and mapping of the surrounding landmarks with much lower processing times. Altogether, the presented AEKF-SLAM method achieves reliable map revisiting and consistent map updating on loop closure.
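The augmentation stage that distinguishes the AEKF from a conventional EKF can be sketched as follows: a newly observed landmark is initialized from a range-bearing measurement and appended to the joint state vector and covariance. This is a generic textbook-style sketch under simplified planar assumptions, not the paper's exact formulation:

```python
import numpy as np

def augment_state(x, P, z, R):
    """Augmentation stage: initialize a new landmark from a range-bearing
    measurement z = (r, b) and append it to the joint state/covariance."""
    rx, ry, th = x[0], x[1], x[2]
    r, b = z
    # New landmark position in the world frame
    lx = rx + r * np.cos(th + b)
    ly = ry + r * np.sin(th + b)
    # Jacobians of the initialization w.r.t. robot pose and measurement
    Gx = np.array([[1.0, 0.0, -r * np.sin(th + b)],
                   [0.0, 1.0,  r * np.cos(th + b)]])
    Gz = np.array([[np.cos(th + b), -r * np.sin(th + b)],
                   [np.sin(th + b),  r * np.cos(th + b)]])
    n = len(x)
    x_aug = np.concatenate([x, [lx, ly]])
    P_aug = np.zeros((n + 2, n + 2))
    P_aug[:n, :n] = P
    Prr = P[:3, :3]                       # robot-pose block
    P_aug[n:, n:] = Gx @ Prr @ Gx.T + Gz @ R @ Gz.T
    P_aug[n:, :3] = Gx @ Prr              # cross-covariance with the robot
    P_aug[:3, n:] = (Gx @ Prr).T
    if n > 3:                             # cross-covariance with old landmarks
        P_aug[n:, 3:n] = Gx @ P[:3, 3:n]
        P_aug[3:n, n:] = P_aug[n:, 3:n].T
    return x_aug, P_aug
```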

  3. Analysis of Different Feature Selection Criteria Based on a Covariance Convergence Perspective for a SLAM Algorithm

    Science.gov (United States)

    Auat Cheein, Fernando A.; Carelli, Ricardo

    2011-01-01

    This paper introduces several non-arbitrary feature selection techniques for a Simultaneous Localization and Mapping (SLAM) algorithm. The feature selection criteria are based on determining the most significant features from a SLAM convergence perspective. The SLAM algorithm implemented in this work is a sequential EKF (Extended Kalman Filter) SLAM. The feature selection criteria are applied in the correction stage of the SLAM algorithm, restricting it to correcting the SLAM estimate with only the most significant features. This restriction also decreases the processing time of the SLAM. Several experiments with a mobile robot are shown, concerning map reconstruction and a comparison of the performance of the different proposed techniques. The experiments were carried out in an outdoor environment composed of trees, although the results shown herein are not restricted to a specific type of feature. PMID:22346568

  4. Research of cartographer laser SLAM algorithm

    Science.gov (United States)

    Xu, Bo; Liu, Zhengjun; Fu, Yiran; Zhang, Changsai

    2017-11-01

    Because indoor spaces are relatively closed and small, total stations, GPS and close-range photogrammetry have difficulty accomplishing fast and accurate indoor three-dimensional reconstruction. LIDAR SLAM technology does not rely on a priori knowledge of the external environment: using only a portable lidar, an IMU, an odometer and other on-board sensors, it builds a map of the environment independently, which solves this problem well. This paper analyzes the Google Cartographer laser SLAM algorithm in terms of point cloud matching and closed-loop detection. Finally, the algorithm is demonstrated in the 3D visualization tool RViz, from data acquisition and processing to creation of the environment map, completing the SLAM pipeline and realizing indoor three-dimensional space reconstruction

  5. Applying FastSLAM to Articulated Rovers

    Science.gov (United States)

    Hewitt, Robert Alexander

    This thesis presents the navigation algorithms designed for use on Kapvik, a 30 kg planetary micro-rover built for the Canadian Space Agency; the simulations used to test the algorithm; and novel techniques for terrain classification using Kapvik's LIDAR (Light Detection And Ranging) sensor. Kapvik implements a six-wheeled, skid-steered, rocker-bogie mobility system. This warrants a more complicated kinematic model for navigation than a typical 4-wheel differential drive system. The design of a 3D navigation algorithm is presented that includes nonlinear Kalman filtering and Simultaneous Localization and Mapping (SLAM). A neural network for terrain classification is used to improve navigation performance. Simulation is used to train the neural network and validate the navigation algorithms. Real world tests of the terrain classification algorithm validate the use of simulation for training and the improvement to SLAM through the reduction of extraneous LIDAR measurements in each scan.

  6. Feature Selection Criteria for Real Time EKF-SLAM Algorithm

    Directory of Open Access Journals (Sweden)

    Fernando Auat Cheein

    2010-02-01

    Full Text Available This paper presents a selection procedure for environment features for the correction stage of a SLAM (Simultaneous Localization and Mapping) algorithm based on an Extended Kalman Filter (EKF). This approach decreases the computational time of the correction stage, which allows for real- and constant-time implementations of the SLAM. The selection procedure consists in choosing the features the SLAM system state covariance is most sensitive to. The entire system is implemented on a mobile robot equipped with a laser range sensor. The features extracted from the environment correspond to lines and corners. Experimental results of the real-time SLAM algorithm and an analysis of the processing time consumed by the SLAM with the proposed feature selection procedure are shown. A comparison between the proposed feature selection approach and the classical sequential EKF-SLAM, along with an entropy-based feature selection approach, is also performed.
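A minimal sketch of the idea of keeping only the features the state covariance is most sensitive to: rank candidate features by the covariance-trace reduction a correction with each would produce, then correct with the top-ranked few. This is illustrative only; the paper's actual criteria may differ:

```python
import numpy as np

def select_features(P, H_list, R, k=2):
    """Rank candidate features by how much a correction with each one
    would shrink the state covariance trace, and keep the best k."""
    scores = []
    for H in H_list:
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        P_post = P - K @ S @ K.T          # trace reduction = tr(K S K^T)
        scores.append(np.trace(P) - np.trace(P_post))
    order = np.argsort(scores)[::-1]      # most informative first
    return list(order[:k])
```

Correcting with only the selected rows keeps the update O(k) in the number of features, which is where the constant-time behaviour comes from.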

  7. Multirobot FastSLAM Algorithm Based on Landmark Consistency Correction

    Directory of Open Access Journals (Sweden)

    Shi-Ming Chen

    2014-01-01

    Full Text Available Considering the influence of uncertain map information on the multirobot SLAM problem, a multirobot FastSLAM algorithm based on landmark consistency correction is proposed. Firstly, an electromagnetism-like mechanism is introduced into the resampling procedure of single-robot FastSLAM: each sampling particle is regarded as a charged electron, and the attraction-repulsion mechanism of an electromagnetic field is used to simulate interaction forces between the particles and improve their distribution. Secondly, when multiple robots observe the same landmarks, every robot is regarded as one node and a Kalman-Consensus Filter is proposed to update the landmark information, which further improves the accuracy of localization and mapping. Finally, simulation results show that the algorithm is suitable and effective.
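The electromagnetism-like attraction-repulsion idea can be sketched as follows, assuming particles are attracted toward better-weighted neighbours and repelled from worse ones. This is a toy 2D illustration, not the paper's exact force law:

```python
import numpy as np

def em_adjust(particles, weights, step=0.05):
    """Electromagnetism-like adjustment: treat each particle as a charged
    point; it is attracted toward better-weighted neighbours and repelled
    from worse ones, which spreads out a degenerate particle set."""
    n = len(particles)
    out = particles.copy()
    for i in range(n):
        force = np.zeros_like(particles[i])
        for j in range(n):
            if i == j:
                continue
            d = particles[j] - particles[i]
            dist = np.linalg.norm(d) + 1e-9
            if weights[j] > weights[i]:   # attraction toward better particle
                force += d * (weights[j] - weights[i]) / dist**2
            else:                         # repulsion from worse particle
                force -= d * (weights[i] - weights[j]) / dist**2
        out[i] = particles[i] + step * force
    return out
```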

  8. A Laser-SLAM Algorithm for Indoor Mobile Mapping

    Science.gov (United States)

    Zhang, Wenjun; Zhang, Qiao; Sun, Kai; Guo, Sheng

    2016-06-01

    A novel Laser-SLAM algorithm is presented for mobile mapping of real indoor environments. SLAM algorithms can be divided into two classes: Bayes filter-based and graph optimization-based. The former often finds it difficult to guarantee consistency and accuracy in large-scale environment mapping because of the error that accumulates during incremental mapping. Graph optimization-based SLAM methods often assume predetermined landmarks, which are difficult to obtain when mapping an unknown environment, and the optimized result can differ considerably from the real data because the constraints are too few. This paper designs a sub-map method that can map accurately without predetermined landmarks and avoids the already-drawn map affecting the agent's localization. The tree structure of the sub-maps can be indexed quickly and reduces the amount of memory consumed during mapping. The algorithm combines Bayes-based and graph optimization-based SLAM: it creates virtual landmarks automatically by associating data between sub-maps for graph optimization. Graph optimization then guarantees consistency and accuracy in large-scale environment mapping and improves the reasonableness and reliability of the optimized results. Experimental results obtained with a laser sensor (UTM-30LX) in office buildings and shopping centres show that the proposed algorithm can produce 2D maps within 10 cm precision for indoor environments ranging from several hundred to 12,000 square metres.

  9. An EKF-SLAM algorithm with consistency properties

    OpenAIRE

    Barrau, Axel; Bonnabel, Silvere

    2015-01-01

    In this paper we address the inconsistency of the EKF-based SLAM algorithm that stems from the non-observability of the origin and orientation of the global reference frame. We prove, on the non-linear two-dimensional problem with observed point landmarks, that this type of inconsistency is remedied using the Invariant EKF, a recently introduced variant of the EKF meant to account for the symmetries of the state space. Extensive Monte-Carlo runs illustrate the theoretical results.

  10. Algorithm of Particle Data Association for SLAM Based on Improved Ant Algorithm

    Directory of Open Access Journals (Sweden)

    KeKe Gen

    2015-01-01

    Full Text Available The article considers a data association algorithm for simultaneous localization and mapping (SLAM) used in determining the route of unmanned aerial vehicles (UAVs). Currently, such vehicles are already widely used, but are mainly controlled by a remote operator. An urgent task is to develop a control system that allows for autonomous flight. The SLAM algorithm, which makes it possible to estimate the location, speed and flight parameters of the vehicle and the coordinates of landmarks and obstacles in an unknown environment, is one of the key technologies for achieving truly autonomous UAV flight. The aim of this work is to study the possibility of solving this problem by using an improved ant algorithm. The data association step of a SLAM algorithm establishes a matching between the set of observed landmarks and the landmarks in the state vector. The ant algorithm is a widely used optimization algorithm with positive feedback and the ability to search in parallel, so it is suitable for solving the data association problem in SLAM. However, the traditional ant algorithm easily falls into local optima in the process of finding routes. Random perturbations are added when updating the global pheromone to avoid local optima, and setting pheromone limits on the route can increase the search space with a reasonable amount of computation for finding the optimal route. The paper proposes a local data association algorithm for SLAM based on an improved ant algorithm. To increase the speed of calculation, local data association is used instead of global data association. The first stage of the algorithm determines the targets in the matching space and the observed landmarks that can be associated by the criterion of individual compatibility (IC). The second stage determines the matched landmarks and their coordinates using the improved ant algorithm. Simulation results confirm the efficiency and
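The individual compatibility (IC) criterion used in the first stage is commonly implemented as a chi-square gate on the Mahalanobis distance of the innovation; a sketch (the gate value 5.99 is roughly the 95% quantile for 2-DOF measurements; function names are illustrative):

```python
import numpy as np

def individually_compatible(z, z_hat, S, gate=5.99):
    """Individual compatibility (IC) test: observation z is a feasible match
    for predicted landmark observation z_hat if the squared Mahalanobis
    distance of the innovation falls inside the chi-square gate."""
    innov = z - z_hat
    return float(innov @ np.linalg.solve(S, innov)) < gate

def candidate_matches(z, predictions, S_list):
    """First association stage: indices of all landmarks passing the IC test."""
    return [i for i, (zh, S) in enumerate(zip(predictions, S_list))
            if individually_compatible(z, zh, S)]
```

The second, ant-algorithm stage would then search among these IC-feasible candidates rather than the full landmark set.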

  11. ON CONSTRUCTION OF A RELIABLE GROUND TRUTH FOR EVALUATION OF VISUAL SLAM ALGORITHMS

    Directory of Open Access Journals (Sweden)

    Jan Bayer

    2016-11-01

    Full Text Available In this work we consider the problem of evaluating the localization accuracy of visual Simultaneous Localization and Mapping (SLAM) techniques. Quantitative evaluation of SLAM algorithm performance is usually done using the established metrics of relative pose error and absolute trajectory error, which require a precise and reliable ground truth. Such a ground truth is usually hard to obtain, since it requires an expensive external localization system. In this work we propose to use the SLAM algorithm itself to construct a reliable ground truth by offline frame-by-frame processing. The generated ground truth is suitable for evaluating different SLAM systems, as well as for tuning the parametrization of the on-line SLAM. The presented practical experimental results indicate the feasibility of the proposed approach.
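The two metrics mentioned can be sketched as follows for 2D position sequences. This is simplified: the standard absolute trajectory error also applies a rigid-body (Horn/Umeyama) alignment, here reduced to translation-only centring for brevity:

```python
import numpy as np

def absolute_trajectory_error(est, gt):
    """RMSE of position differences after translation-only alignment
    (full ATE aligns with a rigid-body SE(2)/SE(3) transform)."""
    est = est - est.mean(axis=0)
    gt = gt - gt.mean(axis=0)
    return np.sqrt(np.mean(np.sum((est - gt) ** 2, axis=1)))

def relative_pose_error(est, gt, delta=1):
    """RMSE of relative-motion (drift) differences over a fixed step delta."""
    d_est = est[delta:] - est[:-delta]
    d_gt = gt[delta:] - gt[:-delta]
    return np.sqrt(np.mean(np.sum((d_est - d_gt) ** 2, axis=1)))
```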

  12. A Fast Map Merging Algorithm in the Field of Multirobot SLAM

    Directory of Open Access Journals (Sweden)

    Yanli Liu

    2013-01-01

    Full Text Available In recent years, research on single-robot simultaneous localization and mapping (SLAM) has achieved great success. However, multirobot SLAM faces many challenging problems, including unknown robot poses, unshared maps, and unstable communication. In this paper, a map merging algorithm based on virtual robot motion is proposed for multirobot SLAM. A thinning algorithm is used to construct the skeleton of the grid map's empty area, and a mobile robot is simulated in one map. The simulated data are used as an information source in the other map to perform partial-map Monte Carlo localization; if localization succeeds, the relative pose hypotheses between the two maps can be computed easily. We verify these hypotheses using the rendezvous technique and use them as initial values to optimize the estimation with a heuristic random search algorithm.

  13. Delayed Monocular SLAM Approach Applied to Unmanned Aerial Vehicles.

    Science.gov (United States)

    Munguia, Rodrigo; Urzua, Sarquis; Grau, Antoni

    2016-01-01

    In recent years, many researchers have addressed the issue of making Unmanned Aerial Vehicles (UAVs) more and more autonomous. In this context, the state estimation of the vehicle position is a fundamental necessity for any application involving autonomy. However, the problem of position estimation cannot be solved in some scenarios, even when a GPS signal is available, for instance, in applications requiring precision manoeuvres in a complex environment. Therefore, additional sensory information should be integrated into the system in order to improve accuracy and robustness. In this work, a novel vision-based simultaneous localization and mapping (SLAM) method with application to unmanned aerial vehicles is proposed. One of the contributions of this work is the design and development of a novel technique for estimating feature depth which is based on a stochastic technique of triangulation. In the proposed method the camera is mounted over a servo-controlled gimbal that counteracts the changes in attitude of the quadcopter. This assumption simplifies the overall problem and focuses it on the position estimation of the aerial vehicle. Also, the tracking of visual features is made easier by the stabilized video. Another contribution of this work is to demonstrate that integrating very noisy GPS measurements into the system for an initial short period of time is enough to initialize the metric scale. The performance of the proposed method is validated by means of experiments with real data carried out in unstructured outdoor environments. A comparative study shows that, when compared with related methods, the proposed approach performs better in terms of accuracy and computational time.
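The depth-from-triangulation idea underlying the stochastic technique can be illustrated with plain planar triangulation by the law of sines. This is a deterministic toy version; the paper's stochastic variant additionally propagates the uncertainty of such estimates:

```python
import numpy as np

def triangulate_depth(baseline, alpha, beta):
    """Planar triangulation of feature depth from two bearing-only views.

    baseline: distance between the two camera centres; alpha, beta: angles
    between the baseline and the viewing rays at each centre (radians).
    Returns the distance from the first camera to the feature."""
    gamma = np.pi - alpha - beta          # parallax angle at the feature
    return baseline * np.sin(beta) / np.sin(gamma)
```

As the parallax angle gamma shrinks (distant features or short baselines), the estimate becomes ill-conditioned, which is exactly why a stochastic treatment of many such hypotheses is attractive.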

  14. Delayed Monocular SLAM Approach Applied to Unmanned Aerial Vehicles.

    Directory of Open Access Journals (Sweden)

    Rodrigo Munguia

    Full Text Available In recent years, many researchers have addressed the issue of making Unmanned Aerial Vehicles (UAVs) more and more autonomous. In this context, the state estimation of the vehicle position is a fundamental necessity for any application involving autonomy. However, the problem of position estimation cannot be solved in some scenarios, even when a GPS signal is available, for instance, in applications requiring precision manoeuvres in a complex environment. Therefore, additional sensory information should be integrated into the system in order to improve accuracy and robustness. In this work, a novel vision-based simultaneous localization and mapping (SLAM) method with application to unmanned aerial vehicles is proposed. One of the contributions of this work is the design and development of a novel technique for estimating feature depth which is based on a stochastic technique of triangulation. In the proposed method the camera is mounted over a servo-controlled gimbal that counteracts the changes in attitude of the quadcopter. This assumption simplifies the overall problem and focuses it on the position estimation of the aerial vehicle. Also, the tracking of visual features is made easier by the stabilized video. Another contribution of this work is to demonstrate that integrating very noisy GPS measurements into the system for an initial short period of time is enough to initialize the metric scale. The performance of the proposed method is validated by means of experiments with real data carried out in unstructured outdoor environments. A comparative study shows that, when compared with related methods, the proposed approach performs better in terms of accuracy and computational time.

  15. A Comparison of SLAM Algorithms Based on a Graph of Relations

    OpenAIRE

    Burgard, W.; Stachniss, C.; Grisetti, G.; Steder, B.; Kümmerle, R.; Dornhege, C.; Ruhnke, M.; Kleiner, Alexander; Tardós, Juan D.

    2009-01-01

    In this paper, we address the problem of creating an objective benchmark for comparing SLAM approaches. We propose a framework for analyzing the results of SLAM approaches based on a metric for measuring the error of the corrected trajectory. The metric uses only relative relations between poses and does not rely on a global reference frame. The idea is related to graph-based SLAM approaches, namely to consider the energy that is needed to deform the trajectory estimated by a SLAM approach in...

  16. MonoSLAM: real-time single camera SLAM.

    Science.gov (United States)

    Davison, Andrew J; Reid, Ian D; Molton, Nicholas D; Stasse, Olivier

    2007-06-01

    We present a real-time algorithm which can recover the 3D trajectory of a monocular camera, moving rapidly through a previously unknown scene. Our system, which we dub MonoSLAM, is the first successful application of the SLAM methodology from mobile robotics to the "pure vision" domain of a single uncontrolled camera, achieving real time but drift-free performance inaccessible to Structure from Motion approaches. The core of the approach is the online creation of a sparse but persistent map of natural landmarks within a probabilistic framework. Our key novel contributions include an active approach to mapping and measurement, the use of a general motion model for smooth camera movement, and solutions for monocular feature initialization and feature orientation estimation. Together, these add up to an extremely efficient and robust algorithm which runs at 30 Hz with standard PC and camera hardware. This work extends the range of robotic systems in which SLAM can be usefully applied, but also opens up new areas. We present applications of MonoSLAM to real-time 3D localization and mapping for a high-performance full-size humanoid robot and live augmented reality with a hand-held camera.
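MonoSLAM's "general motion model for smooth camera movement" is essentially a constant-velocity model in which unknown accelerations enter as process noise. A simplified position-velocity sketch (orientation and angular velocity omitted; names and the noise parameter are hypothetical):

```python
import numpy as np

def predict_camera_state(x, dt, accel_noise=0.5):
    """'Smooth motion' prediction in the spirit of a constant-velocity model:
    the camera keeps its linear velocity, and unknown accelerations enter as
    process noise. State here is a simplified [p(3), v(3)] vector."""
    p, v = x[:3], x[3:6]
    x_new = np.concatenate([p + v * dt, v])
    # Noise grows with dt: acceleration impulses a*dt perturb v,
    # and a*dt^2/2 perturb p.
    q_v = (accel_noise * dt) ** 2
    q_p = (accel_noise * dt ** 2 / 2) ** 2
    Q = np.diag([q_p] * 3 + [q_v] * 3)
    return x_new, Q
```

Keeping the process noise moderate encodes the "smooth movement" prior that lets MonoSLAM track rapid but physically plausible hand-held motion.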

  17. Line-based monocular graph SLAM algorithm

    Institute of Scientific and Technical Information of China (English)

    董蕊芳; 柳长安; 杨国田; 程瑞营

    2017-01-01

    A new line-based 6-DOF monocular graph simultaneous localization and mapping (SLAM) algorithm is proposed. First, straight lines are used as features instead of points, because a map consisting of a sparse set of 3D points cannot describe the structure of the surrounding world. Secondly, most previous line-based SLAM algorithms are filtering-based solutions, which suffer from inconsistency when applied to the inherently non-linear SLAM problem; in contrast, a graph-based solution is used here to improve localization accuracy and the consistency of mapping. Thirdly, a special line representation is exploited that combines Plücker coordinates with the Cayley representation: the Plücker coordinates are used for the 3D line projection function, and the Cayley representation is used to update the line parameters during non-linear optimization. Finally, the simulation experiment shows that the proposed algorithm outperforms odometry and EKF-based SLAM in pose estimation: the sum of squared errors (SSE) and root-mean-square error (RMSE) of the proposed method are 2.5% and 10.5% of those of odometry, and 22.4% and 33% of those of EKF-based SLAM, with a reprojection error of only 45.5 pixels. The real-image experiment yields an SSE of 958 cm² and an RMSE of 3.9413 cm for pose estimation. It can therefore be concluded that the proposed algorithm is effective and accurate.

  18. Seismo-Lineament Analysis Method (SLAM) Applied to the South Napa Earthquake

    Science.gov (United States)

    Worrell, V. E.; Cronin, V. S.

    2014-12-01

    We used the seismo-lineament analysis method (SLAM; http://bearspace.baylor.edu/Vince_Cronin/www/SLAM/) to "predict" the location of the fault that produced the M 6.0 South Napa earthquake of 24 August 2014, using hypocenter and focal mechanism data from NCEDC (http://www.ncedc.org/ncedc/catalog-search.html) and a digital elevation model from the USGS National Elevation Dataset (http://viewer.nationalmap.gov/viewer/). The ground-surface trace of the causative fault (i.e., the Browns Valley strand of the West Napa fault zone; Bryant, 2000, 1982) and virtually all of the ground-rupture sites reported by the USGS and California Geological Survey (http://www.eqclearinghouse.org/2014-08-24-south-napa/) were located within the north-striking seismo-lineament. We also used moment tensors published online by the USGS and GCMT (http://comcat.cr.usgs.gov/earthquakes/eventpage/nc72282711#scientific_moment-tensor) as inputs to SLAM and found that their northwest-striking seismo-lineaments correlated spatially with the causative fault. We concluded that SLAM could have been used as soon as these mechanism solutions were available to help direct the search for the trace of the causative fault and possible rupture-related damage. We then considered whether the seismogenic fault could have been identified using SLAM prior to the 24 August event, based on the focal mechanisms of smaller prior earthquakes reported by the NCEDC or ISC (http://www.isc.ac.uk). Seismo-lineaments from three M~3.5 events from 1990 and 2012, located in the Vallejo-Crockett area, correlate spatially with the Napa County Airport strand of the West Napa fault and extend along strike toward the Browns Valley strand (Bryant, 2000, 1982). Hence, we might have used focal mechanisms from smaller earthquakes to establish that the West Napa fault is likely seismogenic prior to the South Napa earthquake. Early recognition that a fault with a mapped ground-surface trace is seismogenic, based on smaller earthquakes

  19. A novel combined SLAM based on RBPF-SLAM and EIF-SLAM for mobile system sensing in a large scale environment.

    Science.gov (United States)

    He, Bo; Zhang, Shujing; Yan, Tianhong; Zhang, Tao; Liang, Yan; Zhang, Hongjin

    2011-01-01

    Mobile autonomous systems are very important for marine scientific investigation and military applications. Many algorithms have been studied to deal with the computational efficiency problem posed by large-scale simultaneous localization and mapping (SLAM) and with its related accuracy and consistency. Among these methods, submap-based SLAM is one of the more effective ones. By combining the strengths of two popular mapping algorithms, the Rao-Blackwellised particle filter (RBPF) and the extended information filter (EIF), this paper presents a combined SLAM, an efficient submap-based solution to the SLAM problem in a large-scale environment. RBPF-SLAM is used to produce local maps, which are periodically fused into an EIF-SLAM algorithm. RBPF-SLAM avoids linearization of the robot model during operation and provides robust data association, while EIF-SLAM improves the overall computational speed and avoids the tendency of RBPF-SLAM to be over-confident. In order to further improve the computational speed in a real-time environment, a binary-tree-based decision-making strategy is introduced. Simulation experiments show that the proposed combined SLAM algorithm significantly outperforms currently existing algorithms in terms of accuracy and consistency, as well as computing efficiency. Finally, the combined SLAM algorithm is experimentally validated in a real environment using the Victoria Park dataset.

  20. Deconvolution algorithms applied in ultrasonics

    International Nuclear Information System (INIS)

    Perrot, P.

    1993-12-01

    In a complete system for the acquisition and processing of ultrasonic signals, it is often necessary at some stage to use processing tools to remove the influence of the different elements of the system. By this means, the final quality of the signals in terms of resolution is improved. Two main characteristics of ultrasonic signals make this task difficult. Firstly, the signals generated by transducers are very often non-minimum phase, and classical deconvolution algorithms are unable to deal with such characteristics. Secondly, depending on the medium, the shape of the propagating pulse evolves, so the spatial invariance assumption often used in classical deconvolution algorithms is rarely valid. Many classical algorithms, parametric and non-parametric, have been investigated: Wiener-type filters, adaptive predictive techniques, the Oldenburg technique in the frequency domain, and minimum variance deconvolution. All the algorithms were first tested on simulated data, and one specific experimental set-up was also analysed; simulated and real data have been produced. This set-up demonstrated the benefit of applying deconvolution in terms of the achieved resolution. (author). 32 figs., 29 refs
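Of the investigated families, the Wiener-type approach is the easiest to sketch: in the frequency domain the recorded trace is divided by the pulse spectrum, regularized by a noise-to-signal term. This is a generic sketch of the technique, not the report's specific implementation:

```python
import numpy as np

def wiener_deconvolve(y, h, snr=100.0):
    """Frequency-domain Wiener deconvolution: recover the reflectivity
    sequence from a trace y recorded through a pulse h, using 1/snr as
    the regularizing noise-to-signal ratio."""
    n = len(y)
    H = np.fft.fft(h, n)
    Y = np.fft.fft(y, n)
    G = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)   # Wiener inverse filter
    return np.real(np.fft.ifft(G * Y))
```

The regularization is what keeps the division stable where the pulse spectrum is weak; it does not, however, address the non-minimum-phase and spatial-variance issues the abstract highlights.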

  1. A Novel Combined SLAM Based on RBPF-SLAM and EIF-SLAM for Mobile System Sensing in a Large Scale Environment

    Directory of Open Access Journals (Sweden)

    Hongjin Zhang

    2011-10-01

    Full Text Available Mobile autonomous systems are very important for marine scientific investigation and military applications. Many algorithms have been studied to deal with the computational efficiency problem posed by large-scale Simultaneous Localization and Mapping (SLAM) and with its related accuracy and consistency. Among these methods, submap-based SLAM is one of the more effective ones. By combining the strengths of two popular mapping algorithms, the Rao-Blackwellised particle filter (RBPF) and the extended information filter (EIF), this paper presents a Combined SLAM, an efficient submap-based solution to the SLAM problem in a large-scale environment. RBPF-SLAM is used to produce local maps, which are periodically fused into an EIF-SLAM algorithm. RBPF-SLAM avoids linearization of the robot model during operation and provides robust data association, while EIF-SLAM improves the overall computational speed and avoids the tendency of RBPF-SLAM to be over-confident. In order to further improve the computational speed in a real-time environment, a binary-tree-based decision-making strategy is introduced. Simulation experiments show that the proposed Combined SLAM algorithm significantly outperforms currently existing algorithms in terms of accuracy and consistency, as well as computing efficiency. Finally, the Combined SLAM algorithm is experimentally validated in a real environment using the Victoria Park dataset.

  2. Distributed SLAM

    Science.gov (United States)

    Binns, Lewis A.; Valachis, Dimitris; Anderson, Sean; Gough, David W.; Nicholson, David; Greenway, Phil

    2002-07-01

    Previously, we have developed techniques for Simultaneous Localization and Map Building based on the augmented state Kalman filter. Here we report the results of experiments conducted over multiple vehicles each equipped with a laser range finder for sensing the external environment, and a laser tracking system to provide highly accurate ground truth. The goal is simultaneously to build a map of an unknown environment and to use that map to navigate a vehicle that otherwise would have no way of knowing its location, and to distribute this process over several vehicles. We have constructed an on-line, distributed implementation to demonstrate the principle. In this paper we describe the system architecture, the nature of the experimental set up, and the results obtained. These are compared with the estimated ground truth. We show that distributed SLAM has a clear advantage in the sense that it offers a potential super-linear speed-up over single vehicle SLAM. In particular, we explore the time taken to achieve a given quality of map, and consider the repeatability and accuracy of the method. Finally, we discuss some practical implementation issues.

  3. Distributed SLAM Using Improved Particle Filter for Mobile Robot Localization

    Directory of Open Access Journals (Sweden)

    Fujun Pei

    2014-01-01

    Full Text Available The distributed SLAM system has similar estimation performance to a centralized particle filter while requiring only one-fifth of the computation time. However, particle impoverishment is inevitable because of the random particle prediction and resampling applied in the generic particle filter, especially in the SLAM problem, which involves a large number of dimensions. In this paper, the particle filter used in distributed SLAM was improved in two aspects. First, we improved the importance function of the local filters. Adaptive values were used to replace a set of constants in the computation of the importance function, which improved the robustness of the particle filter. Second, an information fusion method was proposed that mixes the innovation method and the effective-particle-number method, combining the advantages of both. This paper also extends the previously known convergence results for the particle filter to prove that the improved particle filter converges to the optimal filter in mean square as the number of particles goes to infinity. The experimental results show that the proposed algorithm improves the ability of the DPF-SLAM system to isolate faults and gives the system better tolerance and robustness.
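The effective-particle-number criterion mentioned in this abstract is commonly computed as N_eff = 1 / Σᵢ wᵢ² over the normalized importance weights. The sketch below shows this standard computation and a resampling trigger; it is an illustration of the general criterion, not the paper's exact fusion method, and the threshold ratio is an assumed parameter.

```python
import numpy as np

def effective_particle_count(weights):
    """N_eff = 1 / sum(w_i^2) for normalized importance weights."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                      # normalize defensively
    return 1.0 / np.sum(w ** 2)

def should_resample(weights, threshold_ratio=0.5):
    """Resample when N_eff drops below a fraction of the particle count."""
    n = len(weights)
    return effective_particle_count(weights) < threshold_ratio * n

# Uniform weights give N_eff equal to the particle count (no degeneracy),
# while one dominant weight collapses N_eff toward 1 (impoverishment).
uniform = [0.25, 0.25, 0.25, 0.25]
skewed = [0.97, 0.01, 0.01, 0.01]
```

For the uniform case above N_eff is 4 and no resampling is triggered; for the skewed case N_eff is close to 1, signalling the degeneracy the paper's adaptive scheme is designed to counter.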

  4. Bearing-only SLAM: comparison between probabilistic and deterministic methods

    OpenAIRE

    Joly , Cyril; Rives , Patrick

    2008-01-01

    This work deals with the problem of simultaneous localization and mapping (SLAM). Classical methods for solving the SLAM problem are based on the Extended Kalman Filter (EKF-SLAM) or particle filter (FastSLAM). These kinds of algorithms allow on-line solving but could be inconsistent. In this report, the above-mentioned algorithms are not studied but global ones. Global approaches need all measurements from the initial step to the final step in order to compute the trajectory of the robot and...

  5. Situations in Construction of 3D Mapping for Slam

    OpenAIRE

    Nguyen Hoang Thuy Trang; Shydlouski Stanislav

    2018-01-01

    Nowadays, the simultaneous localization and mapping (SLAM) approach has become one of the most advanced engineering methods used by mobile robots to build maps of unknown or inaccessible spaces. It updates the map of an area while simultaneously tracking the robot's current location and distance travelled. The motivation behind writing this paper is mainly to help us better understand SLAM and the current state of SLAM research in the world. Through this, we find the optimal algorithm for moving robots in three dimensi...

  6. Situations in Construction of 3D Mapping for Slam

    Directory of Open Access Journals (Sweden)

    Nguyen Hoang Thuy Trang

    2018-01-01

    Full Text Available Nowadays, the simultaneous localization and mapping (SLAM) approach has become one of the most advanced engineering methods used by mobile robots to build maps of unknown or inaccessible spaces. It updates the map of an area while simultaneously tracking the robot's current location and distance travelled. The motivation behind writing this paper is mainly to help us better understand SLAM and the current state of SLAM research in the world. Through this, we find the optimal algorithm for moving robots in three dimensions.

  7. Visual SLAM Using Variance Grid Maps

    Science.gov (United States)

    Howard, Andrew B.; Marks, Tim K.

    2011-01-01

    An algorithm denoted Gamma-SLAM performs further processing, in real time, of preprocessed digitized images acquired by a stereoscopic pair of electronic cameras aboard an off-road robotic ground vehicle to build accurate maps of the terrain and determine the location of the vehicle with respect to the maps. Part of the name of the algorithm reflects the fact that the process of building the maps and determining the location with respect to them is denoted simultaneous localization and mapping (SLAM). Most prior real-time SLAM algorithms have been limited in applicability to (1) systems equipped with scanning laser range finders as the primary sensors in (2) indoor environments (or relatively simply structured outdoor environments). The few prior vision-based SLAM algorithms have been feature-based and not suitable for real-time applications and, hence, not suitable for autonomous navigation on irregularly structured terrain. The Gamma-SLAM algorithm incorporates two key innovations: Visual odometry (in contradistinction to wheel odometry) is used to estimate the motion of the vehicle. An elevation variance map (in contradistinction to an occupancy or an elevation map) is used to represent the terrain. The Gamma-SLAM algorithm makes use of a Rao-Blackwellized particle filter (RBPF) from Bayesian estimation theory for maintaining a distribution over poses and maps. The core idea of the RBPF approach is that the SLAM problem can be factored into two parts: (1) finding the distribution over robot trajectories, and (2) finding the map conditioned on any given trajectory. The factorization involves the use of a particle filter in which each particle encodes both a possible trajectory and a map conditioned on that trajectory. The base estimate of the trajectory is derived from visual odometry, and the map conditioned on that trajectory is a Cartesian grid of elevation variances. 
In comparison with traditional occupancy or elevation grid maps, the grid elevation variance
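The Rao-Blackwellized factorization described in this abstract, in which each particle carries a full trajectory hypothesis plus a map conditioned on that trajectory, can be sketched as follows. The class layout and the pluggable motion/measurement functions are illustrative assumptions, not Gamma-SLAM's actual code.

```python
class Particle:
    """One RBPF hypothesis: a trajectory plus a map conditioned on it."""
    def __init__(self):
        self.trajectory = [(0.0, 0.0, 0.0)]   # (x, y, heading) poses
        self.grid = {}                         # cell -> map statistics
        self.weight = 1.0

def rbpf_step(particles, odometry, propose, update_map, likelihood):
    """One SLAM update: sample a new pose per particle, weight it by the
    measurement likelihood under that particle's own map, then normalize."""
    for p in particles:
        pose = propose(p.trajectory[-1], odometry)  # sample motion model
        p.trajectory.append(pose)
        p.weight *= likelihood(p.grid, pose)        # map-conditioned weight
        update_map(p.grid, pose)                    # refine this particle's map
    total = sum(p.weight for p in particles)
    for p in particles:
        p.weight /= total
    return particles
```

In Gamma-SLAM's setting, `propose` would be driven by visual odometry and `grid` would hold per-cell elevation variances rather than occupancy values.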

  8. Environment exploration and SLAM experiment research based on ROS

    Science.gov (United States)

    Li, Zhize; Zheng, Wei

    2017-11-01

    Robots need to obtain information about the surrounding environment by means of map learning. SLAM and navigation based on mobile robots are developing rapidly. ROS (Robot Operating System) is widely used in the field of robotics because of its convenient code reuse and open source nature. Numerous excellent SLAM and navigation algorithms have been ported to ROS packages. hector_slam is one of them; it can build occupancy grid maps online quickly while requiring few computational resources. These characteristics make an embedded handheld mapping system possible. Similarly, hector_navigation also does well in the navigation field: it can perform path planning and environment exploration by itself using only an environmental sensor. Combining hector_navigation with hector_slam can realize low cost environment exploration, path planning and SLAM at the same time.

  9. Benchmark of 6D SLAM (6D Simultaneous Localisation and Mapping) Algorithms with Robotic Mobile Mapping Systems

    Directory of Open Access Journals (Sweden)

    Bedkowski Janusz

    2017-09-01

    Full Text Available This work concerns the study of 6D SLAM algorithms with an application to robotic mobile mapping systems. The architecture of the 6D SLAM algorithm is designed for evaluation of different data registration strategies. The algorithm is composed of an iterative registration component, for which ICP (Iterative Closest Point), point-to-projection ICP, ICP with semantic discrimination of points, LS3D (Least Square Surface Matching) or NDT (Normal Distribution Transform) can be chosen. Loop closing is based on LUM and LS3D. The main research goal was to investigate whether the semantic discrimination of measured points improves the accuracy of the final map, especially in demanding scenarios such as multi-level maps (e.g., climbing stairs). Parallel implementations of the nearest-neighbourhood search (point to point, point to projection, and semantic discrimination of points) are used. The 6D SLAM framework is based on modified 3DTK and PCL open source libraries and parallel programming techniques using NVIDIA CUDA. The paper presents experiments demonstrating the advantages of the proposed approach in relation to practical applications. The major added value of the presented research is the qualitative and quantitative evaluation based on realistic scenarios, including ground truth data obtained by geodetic survey. The research novelty from the mobile robotics point of view is the evaluation of the LS3D algorithm, well known in geodesy.
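The point-to-point ICP variant named above alternates nearest-neighbour matching with a closed-form rigid alignment of the matched pairs (the SVD/Kabsch solution). A compact sketch, assuming a small initial misalignment and using brute-force matching rather than the paper's CUDA-parallel search:

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst
    (point-to-point, closed form via SVD)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)            # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                 # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp_point_to_point(src, dst, iterations=20):
    """Iterate: match each source point to its nearest target point,
    then solve for the best rigid alignment of the matches."""
    cur = src.copy()
    for _ in range(iterations):
        d = np.linalg.norm(cur[:, None, :] - dst[None, :, :], axis=2)
        matched = dst[d.argmin(axis=1)]      # brute-force nearest neighbours
        R, t = best_rigid_transform(cur, matched)
        cur = cur @ R.T + t
    return cur
```

The semantic-discrimination variant studied in the paper restricts matching to points of the same class (e.g. floor to floor, wall to wall); that would replace the single `argmin` above with one search per class.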

  10. An Indoor Slam Method Based on Kinect and Multi-Feature Extended Information Filter

    Science.gov (United States)

    Chang, M.; Kang, Z.

    2017-09-01

    Based on the framework of ORB-SLAM, in this paper the transformation parameters between adjacent Kinect image frames are computed using ORB keypoints, from which an a priori information matrix and information vector are calculated. The motion update of a multi-feature extended information filter is then realized. From the point cloud formed by the depth image, the ICP algorithm is used to extract point features of the scene and to build an observation model, while calculating the a posteriori information matrix and information vector and weakening the influence of error accumulation in the positioning process. Furthermore, this paper applies the ORB-SLAM framework to realize autonomous positioning in real time in an unknown indoor environment. Finally, Lidar was used to collect data in the scene in order to evaluate the positioning accuracy of the method put forward in this paper.

  11. AN INDOOR SLAM METHOD BASED ON KINECT AND MULTI-FEATURE EXTENDED INFORMATION FILTER

    Directory of Open Access Journals (Sweden)

    M. Chang

    2017-09-01

    Full Text Available Based on the framework of ORB-SLAM, in this paper the transformation parameters between adjacent Kinect image frames are computed using ORB keypoints, from which an a priori information matrix and information vector are calculated. The motion update of a multi-feature extended information filter is then realized. From the point cloud formed by the depth image, the ICP algorithm is used to extract point features of the scene and to build an observation model, while calculating the a posteriori information matrix and information vector and weakening the influence of error accumulation in the positioning process. Furthermore, this paper applies the ORB-SLAM framework to realize autonomous positioning in real time in an unknown indoor environment. Finally, Lidar was used to collect data in the scene in order to evaluate the positioning accuracy of the method put forward in this paper.
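The information matrix/vector bookkeeping referred to in this abstract follows the standard extended information filter measurement update. Below is a generic sketch of that update (not the paper's multi-feature formulation): the filter adds Hᵀ R⁻¹ H to the information matrix and Hᵀ R⁻¹ (z − h(μ) + H μ) to the information vector, and the mean is recovered by solving Ω μ = ξ.

```python
import numpy as np

def eif_measurement_update(Omega, xi, H, R, z, z_pred, mu):
    """Standard EIF measurement update in information form:
    Omega <- Omega + H^T R^-1 H
    xi    <- xi + H^T R^-1 (z - z_pred + H mu)
    where z_pred = h(mu) is the (possibly nonlinear) measurement prediction
    and H its Jacobian at the linearization point mu."""
    Rinv = np.linalg.inv(R)
    Omega_new = Omega + H.T @ Rinv @ H
    xi_new = xi + H.T @ Rinv @ (z - z_pred + H @ mu)
    return Omega_new, xi_new

def recover_state(Omega, xi):
    """Recover the mean from the information form: mu = Omega^-1 xi."""
    return np.linalg.solve(Omega, xi)
```

With identity prior information, identity H and R, and a direct measurement z, this reduces to averaging the prior mean and the measurement, which matches the equivalent Kalman update.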

  12. HEURISTIC OPTIMIZATION AND ALGORITHM TUNING APPLIED TO SORPTIVE BARRIER DESIGN

    Science.gov (United States)

    While heuristic optimization is applied in environmental applications, ad-hoc algorithm configuration is typical. We use a multi-layer sorptive barrier design problem as a benchmark for an algorithm-tuning procedure, as applied to three heuristics (genetic algorithms, simulated ...

  13. A Highest Order Hypothesis Compatibility Test for Monocular SLAM

    Directory of Open Access Journals (Sweden)

    Edmundo Guerra

    2013-08-01

    Full Text Available Simultaneous Location and Mapping (SLAM) is a key problem to solve in order to build truly autonomous mobile robots. SLAM with a single camera, or monocular SLAM, is probably one of the most complex SLAM variants, based entirely on a bearing-only sensor working over six DOF. The monocular SLAM method developed in this work is based on Delayed Inverse-Depth (DI-D) feature initialization, with the contribution of a new data association batch validation technique, the Highest Order Hypothesis Compatibility Test (HOHCT). The delayed inverse-depth technique is used to initialize new features in the system and defines a single hypothesis for the initial depth of each feature using a stochastic triangulation technique. The introduced HOHCT method is based on the evaluation of statistically compatible hypotheses and a search algorithm designed to exploit the strengths of the delayed inverse-depth technique to achieve good performance results. This work presents the HOHCT with a detailed formulation of the monocular DI-D SLAM problem. The performance of the proposed HOHCT is validated with experimental results, in both indoor and outdoor environments, while its costs are compared with other popular approaches.

  14. Application of kinect sensors for SLAM and DATMO

    CSIR Research Space (South Africa)

    Pancham, A

    2011-10-01

    Full Text Available This work involves the development of algorithms for the implementation of multiple Kinect sensors for SLAM and DATMO. The algorithms will allow the mobile robot to navigate in a dynamic environment and simultaneously create a map of the environment...

  15. Monocular Vision SLAM for Indoor Aerial Vehicles

    Directory of Open Access Journals (Sweden)

    Koray Çelik

    2013-01-01

    Full Text Available This paper presents a novel indoor navigation and ranging strategy using a monocular camera. By exploiting the architectural orthogonality of indoor environments, we introduce a new method to estimate range and vehicle states from a monocular camera for vision-based SLAM. The navigation strategy assumes an indoor or indoor-like manmade environment whose layout is previously unknown, GPS-denied, and representable via energy-based feature points and straight architectural lines. We experimentally validate the proposed algorithms on a fully self-contained micro aerial vehicle (MAV) with sophisticated on-board image processing and SLAM capabilities. Building and enabling such a small aerial vehicle to fly in tight corridors is a significant technological challenge, especially in the absence of GPS signals and with limited sensing options. Experimental results show that the system is limited only by the capabilities of the camera and environmental entropy.

  16. Numerical prediction of slamming loads

    DEFF Research Database (Denmark)

    Seng, Sopheak; Jensen, Jørgen J; Pedersen, Preben T

    2012-01-01

    It is important to include the contribution of the slamming-induced response in the structural design of large vessels with a significant bow flare. At the same time it is a challenge to develop rational tools to determine the slamming-induced loads and the prediction of their occurrence. Today i...

  17. A Bioinspired Neural Model Based Extended Kalman Filter for Robot SLAM

    Directory of Open Access Journals (Sweden)

    Jianjun Ni

    2014-01-01

    Full Text Available The robot simultaneous localization and mapping (SLAM) problem is a very important and challenging issue in the robotics field. The main tasks of SLAM include reducing the localization error and the estimation error of the landmarks and improving the robustness and accuracy of the algorithms. The extended Kalman filter (EKF) based method is one of the most popular methods for SLAM. However, the accuracy of the EKF based SLAM algorithm will be reduced when the noise model is inaccurate. To solve this problem, a novel bioinspired neural model based SLAM approach is proposed in this paper. In the proposed approach, an adaptive EKF based SLAM structure is proposed, and a bioinspired neural model is used to adjust the weights of system noise and observation noise adaptively, which can guarantee the stability of the filter and the accuracy of the SLAM algorithm. The proposed approach can deal with the SLAM problem in various situations, for example, when the noise is in abnormal conditions. Finally, simulation experiments are carried out to validate and demonstrate the efficiency of the proposed approach.
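The abstract does not detail the bioinspired neural model itself. As a generic illustration of the underlying idea of adapting noise weights, the sketch below inflates the observation-noise covariance when the innovation is statistically inconsistent with the filter's prediction; the inflation rule and all names here are assumptions, not the paper's model.

```python
import numpy as np

def ekf_update_adaptive(mu, P, z, h, H, R, alpha=0.3):
    """EKF measurement update where R is inflated when the innovation is
    inconsistent with the predicted covariance (a simple stand-in for the
    paper's bioinspired weight adaptation)."""
    y = z - h(mu)                           # innovation
    S = H @ P @ H.T + R                     # predicted innovation covariance
    # Normalized innovation squared; values well above dim(z) suggest
    # the assumed observation noise is too optimistic.
    nis = float(y.T @ np.linalg.inv(S) @ y)
    if nis > len(z):
        R = R * (1 + alpha * (nis / len(z) - 1))   # inflate noise weight
        S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    mu_new = mu + K @ y
    P_new = (np.eye(len(mu)) - K @ H) @ P
    return mu_new, P_new, R
```

A small innovation leaves R unchanged and the filter behaves like a standard EKF; a large innovation increases R, which lowers the gain and keeps the filter stable, the property the paper attributes to its adaptive structure.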

  18. Performing poetry slam

    DEFF Research Database (Denmark)

    Schweppenhäuser, Jakob; Pedersen, Birgitte Stougaard

    2017-01-01

    – namely the contemporary Western literary poetry reading and a literary network, on the one side, and, on the other side, the rap battle connected to hip hop culture (other genres, such as stand-up comedy, could also have been drawn into the discussion, but in order to clarify our argument we have... chosen to keep focus on the two mentioned). The article builds on a generalised perspective negotiating poetry slam as an aesthetic and cultural phenomenon in between hip hop culture and literary culture, but it also includes a close reading/listening aspect deriving from a specific example, namely

  19. Geometric projection filter: an efficient solution to the SLAM problem

    Science.gov (United States)

    Newman, Paul M.; Durrant-Whyte, Hugh F.

    2001-10-01

    This paper is concerned with the simultaneous localization and map building (SLAM) problem. The SLAM problem asks if it is possible for an autonomous vehicle to start in an unknown location in an unknown environment and then to incrementally build a map of this environment while simultaneously using this map to compute absolute vehicle location. Conventional approaches to this problem are plagued with a prohibitively large increase in computation with the size of the environment. This paper offers a new solution to the SLAM problem that is both consistent and computationally feasible. The proposed algorithm builds a map expressing the relationships between landmarks which is then transformed into landmark locations. Experimental results are presented employing the new algorithm on a subsea vehicle using a scanning sonar sensor.

  20. Review of ship slamming loads and responses

    Science.gov (United States)

    Wang, Shan; Guedes Soares, C.

    2017-12-01

    The paper presents an overview of studies of slamming on ship structures. This work focuses on hull slamming, which is one of the most important types of slamming problem to be considered in the ship design process and the assessment of ship safety. There are three main research aspects related to the hull slamming phenomenon: a) where and how often a slamming event occurs, b) slamming load prediction, and c) structural response due to slamming loads. The approaches used in each aspect are reviewed and commented on, together with the presentation of some typical results. The methodology, which combines seakeeping analysis and slamming load prediction, is discussed for the global analysis of the hull slamming of a ship in waves. Some physical phenomena during the slamming event are also discussed. Recommendations for future research and development are made.

  1. Gradient algorithm applied to laboratory quantum control

    International Nuclear Information System (INIS)

    Roslund, Jonathan; Rabitz, Herschel

    2009-01-01

    The exploration of a quantum control landscape, which is the physical observable as a function of the control variables, is fundamental for understanding the ability to perform observable optimization in the laboratory. For high control variable dimensions, trajectory-based methods provide a means for performing such systematic explorations by exploiting the measured gradient of the observable with respect to the control variables. This paper presents a practical, robust, easily implemented statistical method for obtaining the gradient on a general quantum control landscape in the presence of noise. In order to demonstrate the method's utility, the experimentally measured gradient is utilized as input in steepest-ascent trajectories on the landscapes of three model quantum control problems: spectrally filtered and integrated second harmonic generation as well as excitation of atomic rubidium. The gradient algorithm achieves efficiency gains of up to approximately three times that of the standard genetic algorithm and, as such, is a promising tool for meeting quantum control optimization goals as well as landscape analyses. The landscape trajectories directed by the gradient should aid in the continued investigation and understanding of controlled quantum phenomena.
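The paper's statistical gradient-estimation method is not spelled out in the abstract. One simple stand-in for a measured gradient is to average repeated central-difference evaluations of the noisy observable and follow the estimate uphill; the sketch below illustrates that idea on a toy landscape, with all parameter names assumed.

```python
import numpy as np

def estimate_gradient(f, x, eps=1e-2, samples=20):
    """Central-difference gradient of a noisy objective, averaged over
    repeated measurements to suppress noise."""
    grad = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = eps
        diffs = [(f(x + e) - f(x - e)) / (2 * eps) for _ in range(samples)]
        grad[i] = float(np.mean(diffs))
    return grad

def steepest_ascent(f, x0, step=0.1, iters=200):
    """Follow the measured gradient uphill on the control landscape."""
    x = np.array(x0, dtype=float)
    for _ in range(iters):
        x += step * estimate_gradient(f, x)
    return x
```

Averaging over `samples` repeated measurements plays the role of the paper's statistical noise suppression; in the laboratory setting each evaluation of `f` would be a pulse-shaped experiment rather than a function call.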

  2. Multiobjective Genetic Algorithm applied to dengue control.

    Science.gov (United States)

    Florentino, Helenice O; Cantane, Daniela R; Santos, Fernando L P; Bannwart, Bettina F

    2014-12-01

    Dengue fever is an infectious disease caused by a virus of the Flaviviridae family and transmitted to people by the mosquito Aedes aegypti. This disease has been a global public health problem because a single mosquito can infect up to 300 people and between 50 and 100 million people are infected annually on all continents. Thus, dengue fever is currently a subject of research, whether in the search for vaccines and treatments for the disease or for efficient and economical forms of mosquito control. The current study examines techniques of multiobjective optimization to assist in solving problems involving the control of the mosquito that transmits dengue fever. The population dynamics of the mosquito are studied in order to understand the epidemic phenomenon and suggest strategies of multiobjective programming for mosquito control. A Multiobjective Genetic Algorithm (MGA_DENGUE) is proposed to solve the optimization model treated here, and we discuss the computational results obtained from the application of this technique. Copyright © 2014 Elsevier Inc. All rights reserved.
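As background for the abstract above, the basic mechanics of a genetic algorithm (selection, crossover, mutation over a population) can be sketched as follows. This is a generic real-coded, single-objective GA for illustration only; it is not the MGA_DENGUE formulation, and all parameter choices are assumptions.

```python
import random

def genetic_algorithm(fitness, n_genes, pop_size=40, generations=60,
                      mutation_rate=0.1, rng=None):
    """Minimal real-coded GA: tournament selection, one-point crossover,
    Gaussian mutation; maximizes the given fitness function."""
    rng = rng or random.Random(0)
    pop = [[rng.uniform(-1, 1) for _ in range(n_genes)]
           for _ in range(pop_size)]
    for _ in range(generations):
        def tournament():
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        children = []
        while len(children) < pop_size:
            p1, p2 = tournament(), tournament()
            cut = rng.randrange(1, n_genes) if n_genes > 1 else 0
            child = p1[:cut] + p2[cut:]                 # one-point crossover
            child = [g + rng.gauss(0, 0.1) if rng.random() < mutation_rate
                     else g for g in child]             # Gaussian mutation
            children.append(child)
        pop = children
    return max(pop, key=fitness)
```

A multiobjective variant such as MGA_DENGUE would replace the scalar fitness comparison with Pareto-dominance ranking over several objectives (e.g. mosquito population versus control cost).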

  3. Bio-inspired algorithms applied to molecular docking simulations.

    Science.gov (United States)

    Heberlé, G; de Azevedo, W F

    2011-01-01

    Nature as a source of inspiration has been shown to have a great beneficial impact on the development of new computational methodologies. In this scenario, analyses of the interactions between a protein target and a ligand can be simulated by biologically inspired algorithms (BIAs). These algorithms mimic biological systems to create new paradigms for computation, such as neural networks, evolutionary computing, and swarm intelligence. This review provides a description of the main concepts behind BIAs applied to molecular docking simulations. Special attention is devoted to evolutionary algorithms, guided-directed evolutionary algorithms, and Lamarckian genetic algorithms. Recent applications of these methodologies to protein targets identified in the Mycobacterium tuberculosis genome are described.

  4. Cooperative Airborne Inertial-SLAM for Improved Platform and Feature/Target Localisation

    National Research Council Canada - National Science Library

    Sukkarieh, Salah; Bryson, Mitch

    2008-01-01

    .... The benefit of using the SLAM algorithm is that it can determine the accuracy of both platform and target locations, both of which improve as a function of feature/target revisitation or sharing...

  5. Performance Analysis of the Microsoft Kinect Sensor for 2D Simultaneous Localization and Mapping (SLAM) Techniques

    Directory of Open Access Journals (Sweden)

    Kamarulzaman Kamarudin

    2014-12-01

    Full Text Available This paper presents a performance analysis of two open-source, laser scanner-based Simultaneous Localization and Mapping (SLAM) techniques (i.e., Gmapping and Hector SLAM) using a Microsoft Kinect to replace the laser sensor. Furthermore, the paper proposes a new system integration approach whereby a Linux virtual machine is used to run the open source SLAM algorithms. The experiments were conducted in two different environments: a small room with no features and a typical office corridor with desks and chairs. Using the data logged from real-time experiments, each SLAM technique was simulated and tested with different parameter settings. The results show that the system is able to achieve real-time SLAM operation. The system implementation offers a simple and reliable way to compare the performance of Windows-based SLAM algorithms with the algorithms typically implemented in a Robot Operating System (ROS). The results also indicate that certain modifications to the default laser scanner-based parameters are able to improve the map accuracy. However, the limited field of view and range of the Kinect’s depth sensor often cause the map to be inaccurate, especially in featureless areas; therefore the Kinect sensor is not a direct replacement for a laser scanner, but rather offers a feasible alternative for 2D SLAM tasks.

  6. Evolutionary algorithms applied to Landau-gauge fixing

    International Nuclear Information System (INIS)

    Markham, J.F.

    1998-01-01

    Current algorithms used to put a lattice gauge configuration into Landau gauge either suffer from the problem of critical slowing-down or involve an additional computational expense to overcome it. Evolutionary Algorithms (EAs), which have been widely applied to other global optimisation problems, may be of use in gauge fixing. Also, being global, they should not suffer from critical slowing-down as do local gradient based algorithms. We apply EAs and also a Steepest Descent (SD) based method to the problem of Landau gauge fixing and compare their performance. (authors)

  7. Using external sensors in solution of SLAM task

    Science.gov (United States)

    Provkov, V. S.; Starodubtsev, I. S.

    2018-05-01

    This article describes the SLAM and PTAM spatial-orientation algorithms and their respective strengths and weaknesses. Based on the SLAM method, a method was developed that uses an RGBD camera and additional sensors: an accelerometer, a gyroscope, and a magnetometer. The investigated orientation methods have their advantages when moving along a straight trajectory or when rotating a moving platform. As a result of experiments and a weighted linear combination of the positions obtained from the RGBD camera and the nine-axis sensor, it became possible to improve the accuracy of the original algorithm even using a constant as the weight function. In the future, it is planned to develop an algorithm for the dynamic construction of the weight function, from which a further increase in the accuracy of the algorithm is expected.

  8. FastSLAM Using Compressed Occupancy Grids

    Directory of Open Access Journals (Sweden)

    Christopher Cain

    2016-01-01

    Full Text Available Robotic vehicles working in unknown environments require the ability to determine their location while learning about the obstacles located around them. In this paper a method of solving the SLAM problem that makes use of compressed occupancy grids is presented. The presented approach is an extension of the FastSLAM algorithm which stores a compressed form of the occupancy grid to reduce the amount of memory required to store the set of occupancy grids maintained by the particle filter. The performance of the algorithm is presented using experimental results obtained with a small inexpensive ground vehicle equipped with a LiDAR, a compass, and a downward-facing camera that provides the vehicle with visual odometry measurements. The presented results demonstrate that although with our approach the occupancy grid maintained by each particle uses only 40% of the data needed to store the uncompressed occupancy grid, we can still achieve almost identical results to the approach where each particle stores the full occupancy grid.
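The abstract does not specify the compression scheme used. Run-length encoding is one plausible realization, since occupancy grids are dominated by long runs of identical (mostly unknown or free) cells; a minimal sketch:

```python
def rle_compress(cells):
    """Run-length encode a flattened occupancy grid (e.g. 0 = free,
    1 = occupied, -1 = unknown): store (value, run_length) pairs."""
    runs = []
    for v in cells:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1                 # extend the current run
        else:
            runs.append([v, 1])              # start a new run
    return runs

def rle_decompress(runs):
    """Invert rle_compress, restoring the flattened grid."""
    out = []
    for v, n in runs:
        out.extend([v] * n)
    return out
```

For a grid that is mostly unexplored, the run list is far smaller than the raw cell list, which is the kind of memory saving per particle the paper reports.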

  9. Distributed Monocular SLAM for Indoor Map Building

    OpenAIRE

    Ruwan Egodagamage; Mihran Tuceryan

    2017-01-01

    Utilization and generation of indoor maps are critical elements in accurate indoor tracking. Simultaneous Localization and Mapping (SLAM) is one of the main techniques for such map generation. In SLAM an agent generates a map of an unknown environment while estimating its location in it. Ubiquitous cameras lead to monocular visual SLAM, where a camera is the only sensing device for the SLAM process. In modern applications, multiple mobile agents may be involved in the generation of such maps,...

  10. Grand slam on cancer.

    Science.gov (United States)

    Gartrell, Nanette

    2014-01-01

    A winner of 59 Grand Slam championships including a record 9 Wimbledon singles titles, Martina Navratilova is the most successful woman tennis player of the modern era. Martina was inducted into the International Tennis Hall of Fame, named "Tour Player of the Year" seven times by the Women's Tennis Association, declared "Female Athlete of the Year" by the Associated Press, and ranked one of the "Top Forty Athletes of All-Time" by Sports Illustrated. Equally accomplished off the court, Martina is an author, philanthropist, TV commentator, and activist who has dedicated her life to educating people about prejudice and stereotypes. After coming out as a lesbian in 1981, Martina became a tireless advocate of equal rights for lesbian, gay, bisexual, and transgender (LGBT) people, and she has contributed generously to the LGBT community. Martina is the author of seven books, including most recently Shape Your Self: My 6-Step Diet and Fitness Plan to Achieve the Best Shape of your Life, an inspiring guide to healthy living and personal fitness. Martina was diagnosed with breast cancer in 2010.

  11. SLAM in a van

    Science.gov (United States)

    Binns, Lewis A.; Valachis, Dimitris; Anderson, Sean; Gough, David W.; Nicholson, David; Greenway, Phil

    2002-07-01

    We have developed techniques for Simultaneous Localization and Map Building based on the augmented state Kalman filter, and demonstrated this in real time using laboratory robots. Here we report the results of experiments conducted outdoors in an unstructured, unknown, representative environment, using a van equipped with a laser range finder for sensing the external environment, and GPS to provide an estimate of ground truth. The goal is simultaneously to build a map of an unknown environment and to use that map to navigate a vehicle that otherwise would have no way of knowing its location. In this paper we describe the system architecture, the nature of the experimental set up, and the results obtained. These are compared with the estimated ground truth. We show that SLAM is both feasible and useful in real environments. In particular, we explore its repeatability and accuracy, and discuss some practical implementation issues. Finally, we look at the way forward for a real implementation on ground and air vehicles operating in very demanding, harsh environments.

  12. Literature review of SLAM and DATMO

    CSIR Research Space (South Africa)

    Pancham, A

    2011-11-01

    Full Text Available Section IV compares the different techniques. Section V concludes the paper, and Section VI describes the intended application. II. SLAM AND DATMO A. SLAM and DATMO processes SLAM and DATMO provide a basis for the development of driverless cars...

  13. SLAM: a sodium-limestone concrete ablation model

    International Nuclear Information System (INIS)

    Suo-Anttila, A.J.

    1983-12-01

    SLAM is a three-region model, containing a pool (sodium and reaction debris) region, a dry (boundary layer and dehydrated concrete) region, and a wet (hydrated concrete) region. The model includes a solution of the mass, momentum, and energy equations in each region. A chemical kinetics model is included to provide heat sources due to chemical reactions between the sodium and the concrete. Both isolated-model and integrated whole-code evaluations have been made, with good results. The chemical kinetics and water migration models were evaluated separately, with good results. Several small and large-scale sodium-limestone concrete experiments were simulated, with reasonable agreement between SLAM and the experimental results. The SLAM code was applied to investigate the effects of mixing, pool temperature, pool depth and fluidization. All these phenomena were found to be of significance in the predicted response of the sodium-concrete interaction. Pool fluidization is predicted to be the most important variable in large scale interactions.

  14. EVALUATING CONTINUOUS-TIME SLAM USING A PREDEFINED TRAJECTORY PROVIDED BY A ROBOTIC ARM

    Directory of Open Access Journals (Sweden)

    B. Koch

    2017-09-01

    Full Text Available Recently published approaches to SLAM algorithms process laser sensor measurements and output a map as a point cloud of the environment. Often the actual precision of the map remains unclear, since SLAM algorithms apply local improvements to the resulting map. Unfortunately, it is not trivial to compare the performance of SLAM algorithms objectively, especially without an accurate ground truth. This paper presents a novel benchmarking technique that allows comparison of a precise map generated with an accurate ground truth trajectory against a map with a manipulated trajectory which was distorted by different forms of noise. The accurate ground truth is acquired by mounting a laser scanner on an industrial robotic arm. The robotic arm is moved on a predefined path while the position and orientation of the end-effector tool are monitored. During this process the 2D profile measurements of the laser scanner are recorded in six degrees of freedom and afterwards used to generate a precise point cloud of the test environment. For benchmarking, an offline continuous-time SLAM algorithm is subsequently applied to remove the inserted distortions. Finally, it is shown that the manipulated point cloud can be restored to its previous state and is slightly improved compared to the original version, since small errors that arose from imprecise assumptions, sensor noise and calibration errors are removed as well.

  15. Evaluating Continuous-Time Slam Using a Predefined Trajectory Provided by a Robotic Arm

    Science.gov (United States)

    Koch, B.; Leblebici, R.; Martell, A.; Jörissen, S.; Schilling, K.; Nüchter, A.

    2017-09-01

    Recently published approaches to SLAM process laser sensor measurements and output a map as a point cloud of the environment. Often the actual precision of the map remains unclear, since SLAM algorithms apply local improvements to the resulting map. Unfortunately, it is not trivial to compare the performance of SLAM algorithms objectively, especially without an accurate ground truth. This paper presents a novel benchmarking technique that compares a precise map generated with an accurate ground truth trajectory to a map whose trajectory was distorted by different forms of noise. The accurate ground truth is acquired by mounting a laser scanner on an industrial robotic arm. The robotic arm is moved along a predefined path while the position and orientation of the end-effector tool are monitored. During this process the 2D profile measurements of the laser scanner are recorded in six degrees of freedom and afterwards used to generate a precise point cloud of the test environment. For benchmarking, an offline continuous-time SLAM algorithm is subsequently applied to remove the inserted distortions. Finally, it is shown that the manipulated point cloud can be restored to its previous state and is even slightly improved compared to the original version, since small errors introduced by imprecise assumptions, sensor noise and calibration errors are removed as well.

  16. Genetic algorithms applied to nuclear reactor design optimization

    International Nuclear Information System (INIS)

    Pereira, C.M.N.A.; Schirru, R.; Martinez, A.S.

    2000-01-01

    A genetic algorithm is a powerful search technique that simulates natural evolution in order to fit a population of computational structures to the solution of an optimization problem. This technique presents several advantages over classical ones such as linear programming based techniques, often used in nuclear engineering optimization problems. However, genetic algorithms demand some extra computational cost. Nowadays, due to the fast computers available, the use of genetic algorithms has increased and its practical application has become a reality. In nuclear engineering there are many difficult optimization problems related to nuclear reactor design. Genetic algorithm is a suitable technique to face such kind of problems. This chapter presents applications of genetic algorithms for nuclear reactor core design optimization. A genetic algorithm has been designed to optimize the nuclear reactor cell parameters, such as array pitch, isotopic enrichment, dimensions and cells materials. Some advantages of this genetic algorithm implementation over a classical method based on linear programming are revealed through the application of both techniques to a simple optimization problem. In order to emphasize the suitability of genetic algorithms for design optimization, the technique was successfully applied to a more complex problem, where the classical method is not suitable. Results and comments about the applications are also presented. (orig.)
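
The cell-parameter search described above can be sketched as a toy genetic algorithm over two variables (pitch and enrichment). The fitness function below is an arbitrary stand-in, not a real cell calculation, and the parameter ranges and GA settings are invented for illustration.

```python
import random

def fitness(ind):
    # Hypothetical stand-in for a cell calculation: a smooth peak at
    # pitch = 1.3 cm, enrichment = 3.5 % (NOT real reactor physics).
    pitch, enrich = ind
    return -((pitch - 1.3) ** 2 + (enrich - 3.5) ** 2)

def ga(pop_size=40, generations=60, seed=1):
    rng = random.Random(seed)
    pop = [(rng.uniform(1.0, 2.0), rng.uniform(1.0, 5.0)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]                    # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = [(u + v) / 2 for u, v in zip(a, b)]  # arithmetic crossover
            if rng.random() < 0.2:                       # small Gaussian mutation
                child[rng.randrange(2)] += rng.gauss(0, 0.05)
            children.append(tuple(child))
        pop = parents + children                         # elitist replacement
    return max(pop, key=fitness)

best = ga()
print(best)
```

In a real application the fitness call would be replaced by the cell physics code, which is where virtually all of the computational cost mentioned in the abstract comes from.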

  17. Integration of IMU and Velodyne LiDAR sensor in an ICP-SLAM framework

    OpenAIRE

    Zhang, Erik

    2016-01-01

    Simultaneous localization and mapping (SLAM) of an unknown environment is a critical step for many autonomous processes. For this work, we propose a solution which does not rely on storing descriptors of the environment and performing descriptor filtering. Unlike most SLAM-based methods, this work operates on general sparse point clouds, with the underlying generalized ICP (GICP) algorithm used for point cloud registration. This thesis presents a modified GICP method and an investigation of how and i...

  18. Swarm, genetic and evolutionary programming algorithms applied to multiuser detection

    Directory of Open Access Journals (Sweden)

    Paul Jean Etienne Jeszensky

    2005-02-01

    Full Text Available In this paper, the particle swarm optimization technique, recently published in the literature, is analyzed, evaluated and compared as applied to Direct Sequence/Code Division Multiple Access (DS/CDMA) systems with multiuser detection (MuD). The efficiency of the swarm algorithm applied to DS-CDMA multiuser detection (Swarm-MuD) is compared through the tradeoff between performance and computational complexity, with the complexity expressed in terms of the number of operations necessary to reach the performance obtained through the optimum detector, the Maximum Likelihood detector (ML). The comparison is carried out among the genetic algorithm, evolutionary programming with cloning and the swarm algorithm under the same simulation basis. Additionally, a heuristics-based MuD complexity analysis through the number of computational operations is proposed. Finally, an analysis is carried out of the input parameters of the swarm algorithm in an attempt to find the optimum (or near-optimum) parameters for the algorithm applied to the MuD problem.
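
The particle swarm update underlying Swarm-MuD can be sketched in a few lines. The sphere objective below stands in for the DS/CDMA likelihood, and the coefficients w, c1 and c2 are common illustrative defaults, not the tuned parameters the paper searches for.

```python
import random

def pso(objective, dim=4, swarm=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=3):
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(swarm)]
    vel = [[0.0] * dim for _ in range(swarm)]
    pbest = [p[:] for p in pos]              # each particle's best-seen position
    gbest = min(pbest, key=objective)[:]     # swarm-wide best position
    for _ in range(iters):
        for i in range(swarm):
            for d in range(dim):
                # inertia + pull toward personal best + pull toward global best
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if objective(pos[i]) < objective(pbest[i]):
                pbest[i] = pos[i][:]
                if objective(pbest[i]) < objective(gbest):
                    gbest = pbest[i][:]
    return gbest

sphere = lambda x: sum(v * v for v in x)
print(pso(sphere))
```

The complexity accounting the abstract proposes amounts to counting how many objective evaluations (here swarm × iters) are needed before the swarm matches the ML detector's performance.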

  19. Parameterless evolutionary algorithm applied to the nuclear reload problem

    International Nuclear Information System (INIS)

    Caldas, Gustavo Henrique Flores; Schirru, Roberto

    2008-01-01

    In this work, an evolutionary algorithm with no parameters, called FPBIL (parameter-free PBIL), is developed based on PBIL (population-based incremental learning). The analysis reveals how the parameters of PBIL can be replaced by self-adaptive mechanisms that emerge from the radically different way in which the evolution is processed. Despite its advantages, FPBIL remains compact and relatively modest in its use of computational resources. FPBIL is then applied to the nuclear reload problem. The experimental results are compared to those of other works and corroborate the superiority of the new algorithm.
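
PBIL's central idea, replacing the population with a probability vector that is nudged toward the best sample each generation, can be sketched on a toy one-max problem. The learning rate and sizes below are the classic fixed parameters that FPBIL replaces with self-adaptive mechanisms; they are illustrative, not the paper's values.

```python
import random

def pbil(bits=20, samples=30, generations=80, lr=0.1, seed=5):
    rng = random.Random(seed)
    prob = [0.5] * bits  # one sampling probability per bit, instead of a population
    for _ in range(generations):
        pop = [[1 if rng.random() < p else 0 for p in prob] for _ in range(samples)]
        best = max(pop, key=sum)                     # fitness = number of ones
        prob = [p + lr * (b - p) for p, b in zip(prob, best)]  # nudge toward best
    return prob

final = pbil()
print([round(p, 2) for p in final])
```

After enough generations the probability vector saturates toward the all-ones optimum; in FPBIL the learning rate itself is derived from the state of the probability vector rather than fixed in advance.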

  20. Localisation accuracy of semi-dense monocular SLAM

    Science.gov (United States)

    Schreve, Kristiaan; du Plessies, Pieter G.; Rätsch, Matthias

    2017-06-01

    Understanding the factors that influence the accuracy of visual SLAM algorithms is very important for the future development of these algorithms, yet so far very few studies have done this. In this paper, a simulation model is presented and used to investigate the effect of the number of scene points tracked, the effect of the baseline length in triangulation and the influence of image point location uncertainty. It is shown that the latter is very critical, while the others also play important roles. Experiments with a well-known semi-dense visual SLAM approach, used in a monocular visual odometry mode, are also presented. The experiments show that not including sensor bias and scale factor uncertainty is very detrimental to the accuracy of the simulation results.
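
The baseline effect studied above can be illustrated with the standard stereo relation Z = fB/d: for a fixed disparity error, the recovered depth error shrinks as the baseline grows. All numbers below are illustrative, not values from the paper.

```python
# Stereo triangulation sketch: Z = f*B/d, so a fixed disparity error maps to a
# depth error that shrinks as the baseline B grows. All values are assumed.
f = 700.0          # focal length in pixels
Z = 10.0           # true depth in metres
pixel_error = 0.5  # disparity uncertainty in pixels

errors = {}
for B in (0.1, 0.3, 0.9):
    d = f * B / Z                        # ideal disparity at this baseline
    Z_noisy = f * B / (d + pixel_error)  # depth recovered from a biased disparity
    errors[B] = abs(Z - Z_noisy)
    print(f"baseline {B:.1f} m -> depth error {errors[B]:.3f} m")
```

Tripling the baseline cuts the depth error roughly threefold here, which is why short-baseline monocular triangulation is so sensitive to image point location uncertainty.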

  1. A SLAM based on auxiliary marginalised particle filter and differential evolution

    Science.gov (United States)

    Havangi, R.; Nekoui, M. A.; Teshnehlab, M.; Taghirad, H. D.

    2014-09-01

    FastSLAM is a framework for simultaneous localisation and mapping (SLAM) using a Rao-Blackwellised particle filter. In FastSLAM, a particle filter is used for the robot pose (position and orientation) estimation, and a parametric filter (i.e. EKF or UKF) is used for the estimation of the feature locations. In the long term, however, FastSLAM is an inconsistent algorithm. In this paper, a new approach to SLAM based on a hybrid auxiliary marginalised particle filter and differential evolution (DE) is proposed. In the proposed algorithm, the robot pose is estimated with an auxiliary marginal particle filter that operates directly on the marginal distribution, and hence avoids performing importance sampling on a space of growing dimension. In addition, the static map is considered as a set of parameters that are learned using DE. Compared to other algorithms, the proposed algorithm remains consistent for longer time periods and also improves the estimation accuracy. Simulations and experimental results indicate that the proposed algorithm is effective.

  2. Clustered features for use in stereo vision SLAM

    CSIR Research Space (South Africa)

    Joubert, D

    2010-07-01

    Full Text Available SLAM, or simultaneous localization and mapping, is a key component in the development of truly independent robots. Vision-based SLAM utilising stereo vision is a promising approach to SLAM but it is computationally expensive and difficult...

  3. Applied economic model development algorithm for electronics company

    Directory of Open Access Journals (Sweden)

    Mikhailov I.

    2017-01-01

    Full Text Available The purpose of this paper is to report on experience gained in creating methods and algorithms that simplify the development of applied decision support systems. It reports on an algorithm that is the result of two years of research and has more than a year of practical verification. In the business of testing electronic components, the time of contract conclusion is the point at which the greatest managerial mistakes are made: at this stage it is difficult to achieve a realistic assessment of the time limit and the wage fund for future work. The creation of an estimating model is a possible way to solve this problem, and the article presents an algorithm for creating such models. The algorithm is illustrated by the development of an analytical model that serves to estimate the amount of work. The paper lists the algorithm's stages and explains their meaning in relation to the participants' goals. The implementation of the algorithm has made possible a twofold acceleration of the development of these models and the fulfilment of management's requirements. The resulting models have produced a significant economic effect, and a new set of tasks was identified for further theoretical study.

  4. Visual SLAM and Moving-object Detection for a Small-size Humanoid Robot

    Directory of Open Access Journals (Sweden)

    Yin-Tien Wang

    2010-09-01

    Full Text Available In this paper, a novel moving object detection (MOD) algorithm is developed and integrated with robot visual Simultaneous Localization and Mapping (vSLAM). The moving object is assumed to be a rigid body, and its coordinate system in space is represented by a position vector and a rotation matrix. The MOD algorithm is composed of the detection of image features, the initialization of image features, and the calculation of object coordinates. Experiments were conducted on a small-size humanoid robot, and the results show that the proposed algorithm performs efficiently for robot visual SLAM and moving object detection.

  5. Fuzzy model predictive control algorithm applied in nuclear power plant

    International Nuclear Information System (INIS)

    Zuheir, Ahmad

    2006-01-01

    The aim of this paper is to design a predictive controller based on a fuzzy model. The Takagi-Sugeno fuzzy model with an Adaptive B-splines neuro-fuzzy implementation is used and incorporated as a predictor in a predictive controller. An optimization approach with a simplified gradient technique is used to calculate predictions of the future control actions. In this approach, adaptation of the fuzzy model using dynamic process information is carried out to build the predictive controller. The easy description of the fuzzy model and the easy computation of the gradient sector during the optimization procedure are the main advantages of the computation algorithm. The algorithm is applied to the control of a U-tube steam generation unit (UTSG) used for electricity generation. (author)

  6. Multi-Objective Optimization of Grillages Applying the Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Darius Mačiūnas

    2012-01-01

    Full Text Available The article analyzes the optimization of grillage-type foundations, seeking the least possible reactive forces in the poles for a given number of poles and the smallest possible absolute values of the bending moments in the connecting beams of the grillage. We therefore suggest minimizing a compromise objective function that combines the maximum reactive force arising in all poles and the maximum absolute bending moment in the connecting beams, each component entering with a given weight. The design variables are the pole positions under the connecting beams. The optimization task is solved by applying an algorithm containing all the initial data of the problem. Reactive forces and bending moments are calculated using an original finite-element program, which is integrated into the optimization algorithm on the “black-box” principle: the finite-element program sends back the corresponding value of the objective function. Numerical experiments revealed the optimal number of points at which to compute bending moments. The obtained results show a certain ratio of weights in the objective function at which the contributions of reactive forces and bending moments to the objective function are equivalent. This solution can serve as a pilot project for more detailed design. Article in Lithuanian
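
The compromise objective described above can be sketched directly. The weights, reactions and moments below are made-up placeholders for what the finite-element "black box" would return for one candidate pole layout.

```python
# Weighted compromise objective: worst pole reaction plus worst absolute
# bending moment, each with a given weight. All numbers are illustrative.
def compromise_objective(reactions, moments, w_force=0.6, w_moment=0.4):
    return w_force * max(reactions) + w_moment * max(abs(m) for m in moments)

reactions = [120.0, 95.0, 130.0, 110.0]  # kN per pole (placeholder FE output)
moments = [-40.0, 25.0, -55.0, 30.0]     # kN*m along beams (placeholder FE output)
print(compromise_objective(reactions, moments))  # -> 100.0
```

An optimizer would call this function once per candidate set of pole positions, with the two weights tuned until neither component dominates, as the abstract describes.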

  7. Resolution enhancement of slam using transverse wave

    International Nuclear Information System (INIS)

    Ko, Dae Sik; Moon, Gun; Kim, Young H.

    1997-01-01

    We studied the resolution enhancement of a novel scanning laser acoustic microscope (SLAM) using transverse waves. Mode conversion of the ultrasonic wave takes place at the liquid-solid interface, and some energy of the insonifying longitudinal waves in the water converts to transverse wave energy within the solid specimen. The resolution of SLAM depends on the size of the detecting laser spot and the wavelength of the insonifying ultrasonic waves. Since the wavelength of the transverse wave is shorter than that of the longitudinal wave, we are able to achieve higher resolution by using transverse waves. In order to operate SLAM in the transverse wave mode, we made a wedge for changing the incident angle. Our experimental results with a model 2140 SLAM and an aluminum specimen showed higher contrast of the SLAM image in the transverse wave mode than in the longitudinal wave mode.
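
The resolution argument rests on the relation wavelength = c / f: at a fixed frequency, the slower transverse wave has the shorter wavelength. The calculation below uses textbook wave speeds for aluminium and a typical 100 MHz operating frequency; both are assumptions, not values from the paper.

```python
# Wavelength comparison for the two wave modes in aluminium (textbook speeds).
freq = 100e6                                             # insonification frequency, Hz
speeds = {"longitudinal": 6320.0, "transverse": 3130.0}  # wave speeds, m/s

wavelengths = {name: c / freq for name, c in speeds.items()}
for name, lam in wavelengths.items():
    print(f"{name}: wavelength = {lam * 1e6:.1f} um")
```

Under these assumptions the transverse wavelength is roughly half the longitudinal one, which is the source of the resolution gain.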

  8. Genetic algorithms applied to nonlinear and complex domains; TOPICAL

    International Nuclear Information System (INIS)

    Barash, D; Woodin, A E

    1999-01-01

    The dissertation, titled 'Genetic Algorithms Applied to Nonlinear and Complex Domains', describes and then applies a new class of powerful search algorithms (GAs) to certain domains. GAs are capable of solving complex and nonlinear problems in which many parameters interact to produce a final result, such as the optimization of the laser pulse in the interaction of an atom with an intense laser field. GAs can very efficiently locate the global maximum by searching parameter space in problems which are unsuitable for a search using traditional methods. In particular, the dissertation contains new scientific findings in two areas. First, the dissertation examines the interaction of an ultra-intense short laser pulse with atoms. GAs are used to find the optimal frequency for stabilizing atoms in the ionization process. This leads to a new theoretical formulation to explain what is happening during the ionization process and how the electron responds to finite (real-life) laser pulse shapes. It is shown that the dynamics of the process can be very sensitive to the ramp of the pulse at high frequencies. The new theory also uses a novel concept (known as the (t,t') method) to numerically solve the time-dependent Schrödinger equation. Second, the dissertation examines the use of GAs in modeling decision-making problems. It compares GAs with traditional techniques for solving a class of problems known as Markov Decision Processes. The conclusion of the dissertation gives a clear idea of where GAs are applicable, especially in the physical sciences, in problems which are nonlinear and complex, i.e. difficult to analyze by other means.

  9. Genetic algorithms applied to nonlinear and complex domains

    International Nuclear Information System (INIS)

    Barash, D; Woodin, A E

    1999-01-01

    The dissertation, titled 'Genetic Algorithms Applied to Nonlinear and Complex Domains', describes and then applies a new class of powerful search algorithms (GAs) to certain domains. GAs are capable of solving complex and nonlinear problems in which many parameters interact to produce a final result, such as the optimization of the laser pulse in the interaction of an atom with an intense laser field. GAs can very efficiently locate the global maximum by searching parameter space in problems which are unsuitable for a search using traditional methods. In particular, the dissertation contains new scientific findings in two areas. First, the dissertation examines the interaction of an ultra-intense short laser pulse with atoms. GAs are used to find the optimal frequency for stabilizing atoms in the ionization process. This leads to a new theoretical formulation to explain what is happening during the ionization process and how the electron responds to finite (real-life) laser pulse shapes. It is shown that the dynamics of the process can be very sensitive to the ramp of the pulse at high frequencies. The new theory also uses a novel concept (known as the (t,t') method) to numerically solve the time-dependent Schrödinger equation. Second, the dissertation examines the use of GAs in modeling decision-making problems. It compares GAs with traditional techniques for solving a class of problems known as Markov Decision Processes. The conclusion of the dissertation gives a clear idea of where GAs are applicable, especially in the physical sciences, in problems which are nonlinear and complex, i.e. difficult to analyze by other means.

  10. Applying Kitaev's algorithm in an ion trap quantum computer

    International Nuclear Information System (INIS)

    Travaglione, B.; Milburn, G.J.

    2000-01-01

    Full text: Kitaev's algorithm is a method of estimating eigenvalues associated with an operator. Shor's factoring algorithm, which enables a quantum computer to crack RSA encryption codes, is a specific example of Kitaev's algorithm. It has been proposed that the algorithm can also be used to generate eigenstates. We extend this proposal for small quantum systems, identifying the conditions under which the algorithm can successfully generate eigenstates. We then propose an implementation scheme based on an ion trap quantum computer. This scheme allows us to illustrate a simple example, in which the algorithm effectively generates eigenstates.

  11. Improved Bat Algorithm Applied to Multilevel Image Thresholding

    Directory of Open Access Journals (Sweden)

    Adis Alihodzic

    2014-01-01

    Full Text Available Multilevel image thresholding is a very important image processing technique that is used as a basis for image segmentation and further higher-level processing. However, the required computational time for exhaustive search grows exponentially with the number of desired thresholds. Swarm intelligence metaheuristics are well known as successful and efficient optimization methods for intractable problems. In this paper, we adjusted one of the latest swarm intelligence algorithms, the bat algorithm, for the multilevel image thresholding problem. The results of testing on standard benchmark images show that the bat algorithm is comparable with other state-of-the-art algorithms. We then improved the standard bat algorithm with elements from the differential evolution and artificial bee colony algorithms. Our proposed improved bat algorithm proved to be better than five other state-of-the-art algorithms, improving the quality of results in all cases and significantly improving convergence speed.
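
The standard bat algorithm the authors start from can be sketched in a few lines. The 1-D objective below is a toy stand-in for the thresholding criterion, and the loudness/pulse-rate machinery of the full algorithm is reduced here to a fixed local-walk probability for brevity.

```python
import random

def bat(objective, n=25, iters=150, fmin=0.0, fmax=2.0, seed=7):
    rng = random.Random(seed)
    x = [rng.uniform(-5, 5) for _ in range(n)]
    v = [0.0] * n
    best = min(x, key=objective)
    for _ in range(iters):
        for i in range(n):
            freq = fmin + (fmax - fmin) * rng.random()  # random pulse frequency
            v[i] += (x[i] - best) * freq                # velocity update toward best
            cand = x[i] + v[i]
            if rng.random() < 0.5:                      # local random walk near best
                cand = best + 0.01 * rng.gauss(0, 1)
            if objective(cand) < objective(x[i]):       # greedy acceptance
                x[i] = cand
                if objective(cand) < objective(best):
                    best = cand
    return best

print(bat(lambda t: (t - 1.7) ** 2))
```

The paper's improvement replaces parts of this update with mutation ideas from differential evolution and the artificial bee colony algorithm; in the thresholding application, the objective would score a candidate threshold vector against the image histogram.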

  12. Continuous firefly algorithm applied to PWR core pattern enhancement

    Energy Technology Data Exchange (ETDEWEB)

    Poursalehi, N., E-mail: npsalehi@yahoo.com [Engineering Department, Shahid Beheshti University, G.C., P.O. Box 1983963113, Tehran (Iran, Islamic Republic of); Zolfaghari, A.; Minuchehr, A.; Moghaddam, H.K. [Engineering Department, Shahid Beheshti University, G.C., P.O. Box 1983963113, Tehran (Iran, Islamic Republic of)

    2013-05-15

    Highlights: ► Numerical results indicate the reliability of CFA for the nuclear reactor LPO. ► The major advantages of CFA are its light computational cost and fast convergence. ► Our experiments demonstrate the ability of CFA to obtain the near optimal loading pattern. -- Abstract: In this research, a new meta-heuristic optimization strategy, the firefly algorithm, is developed for the nuclear reactor loading pattern optimization problem. The two main goals in reactor core fuel management optimization are maximizing the core multiplication factor (K_eff) in order to extract the maximum cycle energy, and minimizing the power peaking factor due to safety constraints. In this work, we define a multi-objective fitness function according to the above goals for core fuel arrangement enhancement. In order to evaluate and demonstrate the ability of the continuous firefly algorithm (CFA) to find the near optimal loading pattern, we developed the CFA nodal expansion code (CFANEC) for the fuel management operation. This code consists of two main modules: the CFA optimization program and a core analysis code implementing the nodal expansion method to calculate with coarse meshes of fuel-assembly dimensions. First, CFA is applied to the Foxholes test case with continuous variables in order to validate CFA, and then to a KWU PWR using a decoding strategy for discrete variables. Results indicate the efficiency and relatively fast convergence of CFA in obtaining a near optimal loading pattern with respect to the considered fitness function. Finally, our experience confirms that the CFA is easy to implement and reliable.
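
A bare-bones version of the continuous firefly update, attraction weighted by exp(-gamma*r^2) plus a decaying random step, can be sketched on a stand-in objective. The real fitness combining K_eff and the power peaking factor is not reproduced here; all parameters are illustrative.

```python
import random, math

def firefly(objective, n=20, iters=100, beta0=1.0, gamma=1.0, alpha=0.1, seed=9):
    rng = random.Random(seed)
    x = [rng.uniform(-3, 3) for _ in range(n)]
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if objective(x[j]) < objective(x[i]):  # firefly j is "brighter"
                    r2 = (x[i] - x[j]) ** 2
                    # attraction falls off with distance; plus a random step
                    x[i] += (beta0 * math.exp(-gamma * r2) * (x[j] - x[i])
                             + alpha * (rng.random() - 0.5))
        alpha *= 0.97                                  # cool the random step
    return min(x, key=objective)

print(firefly(lambda t: (t - 0.8) ** 2))
```

In the paper's setting each position would encode a loading pattern (with the decoding strategy mapping continuous values to discrete assembly placements), and each brightness evaluation would be a full nodal core calculation.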

  13. Continuous firefly algorithm applied to PWR core pattern enhancement

    International Nuclear Information System (INIS)

    Poursalehi, N.; Zolfaghari, A.; Minuchehr, A.; Moghaddam, H.K.

    2013-01-01

    Highlights: ► Numerical results indicate the reliability of CFA for the nuclear reactor LPO. ► The major advantages of CFA are its light computational cost and fast convergence. ► Our experiments demonstrate the ability of CFA to obtain the near optimal loading pattern. -- Abstract: In this research, a new meta-heuristic optimization strategy, the firefly algorithm, is developed for the nuclear reactor loading pattern optimization problem. The two main goals in reactor core fuel management optimization are maximizing the core multiplication factor (K_eff) in order to extract the maximum cycle energy, and minimizing the power peaking factor due to safety constraints. In this work, we define a multi-objective fitness function according to the above goals for core fuel arrangement enhancement. In order to evaluate and demonstrate the ability of the continuous firefly algorithm (CFA) to find the near optimal loading pattern, we developed the CFA nodal expansion code (CFANEC) for the fuel management operation. This code consists of two main modules: the CFA optimization program and a core analysis code implementing the nodal expansion method to calculate with coarse meshes of fuel-assembly dimensions. First, CFA is applied to the Foxholes test case with continuous variables in order to validate CFA, and then to a KWU PWR using a decoding strategy for discrete variables. Results indicate the efficiency and relatively fast convergence of CFA in obtaining a near optimal loading pattern with respect to the considered fitness function. Finally, our experience confirms that the CFA is easy to implement and reliable.

  14. IMLS-SLAM: scan-to-model matching based on 3D data

    OpenAIRE

    Deschaud, Jean-Emmanuel

    2018-01-01

    The Simultaneous Localization And Mapping (SLAM) problem has been well studied in the robotics community, especially using mono or stereo cameras or depth sensors. 3D depth sensors, such as Velodyne LiDAR, have proved over the last 10 years to be very useful for perceiving the environment in autonomous driving, but few methods exist that directly use these 3D data for odometry. We present a new low-drift SLAM algorithm based only on 3D LiDAR data. Our method relies on a scan-to-model matching framew...

  15. Slamming Simulations in a Conditional Wave

    DEFF Research Database (Denmark)

    Seng, Sopheak; Jensen, Jørgen Juncher

    2012-01-01

    A study of slamming events in conditional waves is presented in this paper. The ship is sailing in head sea and the motion is solved for under the assumption of rigid body motion constrained to two degrees of freedom, i.e. heave and pitch. Based on a time domain non-linear strip theory most probable...... surface NS/VOF CFD simulations under the same wave conditions. In moderate seas with no occurrence of slamming, the structural responses predicted by the methods agree well. When slamming occurs the strip theory overpredicts the VBM, but the peak values of the VBM occur at approximately the same time as predicted...... by the CFD method, implying the possibility of using the more accurate CFD results to improve the estimation of slamming loads in the strip theory through a rational correction coefficient....

  16. Multi-Sensor SLAM Approach for Robot Navigation

    Directory of Open Access Journals (Sweden)

    Sid Ahmed BERRABAH

    2010-12-01

    Full Text Available To be able to operate and act successfully, a robot needs to know at any time where it is; that is, the robot has to find its location relative to the environment. This contribution presents an approach to increasing the accuracy of mobile robot positioning in large outdoor environments based on data fusion from different sensors: camera, GPS, inertial navigation system (INS), and wheel encoders. The fusion is done in a Simultaneous Localization and Mapping (SLAM) approach. The paper gives an overview of the proposed algorithm and discusses the obtained results.

  17. Applying Biomimetic Algorithms for Extra-Terrestrial Habitat Generation

    Science.gov (United States)

    Birge, Brian

    2012-01-01

    The objective is to simulate and optimize distributed cooperation among a network of robots tasked with cooperative excavation on an extra-terrestrial surface, and additionally to examine the concept of directed Emergence among a group of limited artificially intelligent agents. Emergence is the concept of achieving complex results from very simple rules or interactions. For example, in a termite mound no individual termite carries a blueprint of the home in a global sense, but their interactions, based strictly on local desires, create a complex superstructure. Applying this Emergence concept to a simulation of cooperative agents (robots) allows an examination of whether a non-directed group strategy can achieve specific results. Specifically, the simulation will be a testbed to evaluate population-based robotic exploration and cooperative strategies, leveraging the evolutionary teamwork approach in the face of uncertainty about the environment and partial loss of sensors. Checking against a cost function and 'social' constraints will optimize cooperation when excavating a simulated tunnel. Agents will act locally with non-local results. The rules by which the simulated robots interact will be reduced to the simplest possible for the desired result, leveraging Emergence. Sensor malfunction and line-of-sight issues will be incorporated into the simulation. This approach falls under Swarm Robotics, a subset of robot control concerned with finding ways to control large groups of robots. Swarm Robotics often takes biologically inspired approaches; research draws on the observation of social insects as well as data from herding, schooling, and flocking animals. Biomimetic algorithms applied to manned space exploration are the method under consideration for further study.

  18. Slam!

    Science.gov (United States)

    2006-01-01

    2 August 2006 This Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) image shows an impact crater on the martian northern plains. This crater is roughly the size of the famous Meteor Crater in Arizona on the North American continent. Location near: 43.0oN, 231.7oW Image width: 3 km (1.9 mi) Illumination from: lower left Season: Northern Spring

  19. Slam estimation in dynamic outdoor environments

    OpenAIRE

    Lu, Zheyuan; Hu, Zhencheng; Uchimura, Keiichi; コ, シンテイ; ウチムラ, ケイイチ; 胡, 振程; 内村, 圭一

    2010-01-01

    This paper describes and compares three different approaches to estimating simultaneous localization and mapping (SLAM) in dynamic outdoor environments. SLAM has been intensively researched in recent years in the fields of robotics and intelligent vehicles, and many approaches have been proposed, including occupancy grid mapping methods (Bayesian, Dempster-Shafer and fuzzy logic) and localization estimation methods (edge or point feature based direct scan matching techniques, probabilistic likelihood, EK...

  20. SLAMM: Visual monocular SLAM with continuous mapping using multiple maps.

    Directory of Open Access Journals (Sweden)

    Hayyan Afeef Daoud

    Full Text Available This paper presents the concept of Simultaneous Localization and Multi-Mapping (SLAMM). It is a system that ensures continuous mapping and information preservation despite failures in tracking due to corrupted frames or sensor malfunction, making it suitable for real-world applications. It works with single or multiple robots. In a single-robot scenario the algorithm generates a new map at the time of tracking failure and later merges the maps at the event of loop closure. Similarly, maps generated by multiple robots are merged without prior knowledge of their relative poses, which makes this algorithm flexible. The system works in real time at frame-rate speed. The proposed approach was tested on the KITTI and TUM RGB-D public datasets and showed superior results compared to the state of the art in calibrated visual monocular keyframe-based SLAM. The mean tracking time is around 22 milliseconds. The initialization is twice as fast as in ORB-SLAM, and the retrieved map can preserve up to 90 percent more information, depending on tracking loss and loop closure events. For the benefit of the community, the source code, along with a framework to be run with the Bebop drone, is made available at https://github.com/hdaoud/ORBSLAMM.
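
The map-merging step at loop closure can be sketched as a rigid-body change of frame followed by concatenation: once a loop closure yields the transform (R, t) between two maps' frames, one cloud is re-expressed in the other's frame. The transform and landmark coordinates below are invented for illustration.

```python
import numpy as np

def merge_maps(map_a, map_b, R, t):
    """Express map_b in map_a's frame and stack the two clouds."""
    return np.vstack([map_a, map_b @ R.T + t])

theta = np.pi / 6                               # assumed relative rotation
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
t = np.array([2.0, -1.0])                       # assumed relative translation

map_a = np.array([[0.0, 0.0], [1.0, 0.0]])      # landmarks in map A's frame
map_b = (map_a - t) @ R                         # same landmarks as seen in map B
merged = merge_maps(map_a, map_b, R, t)
print(merged)
```

In a monocular system the transform would additionally carry a scale factor, since the relative scale between independently initialized maps is unknown.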

  1. Orientation estimation algorithm applied to high-spin projectiles

    International Nuclear Information System (INIS)

    Long, D F; Lin, J; Zhang, X M; Li, J

    2014-01-01

    High-spin projectiles are low-cost military weapons. Accurate orientation information is critical to the performance of a high-spin projectile's control system. However, orientation estimators have not been well translated from flight vehicles, since they are too expensive, lack launch robustness, do not fit within the allotted space, or are too application-specific. This paper presents an orientation estimation algorithm specific to these projectiles. The orientation estimator uses an integrated filter to combine feedback from a three-axis magnetometer, two single-axis gyros and a GPS receiver. As a new feature of this algorithm, the magnetometer feedback estimates the roll angular rate of the projectile. The algorithm also incorporates online sensor error parameter estimation, performed simultaneously with the projectile attitude estimation. The second part of the paper deals with the verification of the proposed orientation algorithm through numerical simulation and experimental tests. Simulations and experiments demonstrate that the orientation estimator can effectively estimate the attitude of high-spin projectiles. Moreover, online sensor calibration significantly enhances the estimation performance of the algorithm. (paper)
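
One way to picture the magnetometer roll-rate idea: for a spinning body the two transverse magnetometer channels see the Earth's field as near-sinusoidal signals, so the roll angle is the atan2 of the two channels and the roll rate its unwrapped derivative. The sampling rate, spin rate and noise-free channels below are synthetic assumptions, not the paper's filter.

```python
import math

dt = 0.001                      # assumed 1 kHz magnetometer sampling
true_rate = 2 * math.pi * 150   # assumed 150 rev/s spin, in rad/s

# Ideal transverse magnetometer channels over one second of flight.
channels = [(math.cos(true_rate * k * dt), math.sin(true_rate * k * dt))
            for k in range(1000)]

angles = [math.atan2(by, bx) for bx, by in channels]
rates = []
for a0, a1 in zip(angles, angles[1:]):
    d = (a1 - a0 + math.pi) % (2 * math.pi) - math.pi  # unwrap across +/- pi
    rates.append(d / dt)

est = sum(rates) / len(rates)
print(f"estimated roll rate: {est:.1f} rad/s (true {true_rate:.1f})")
```

Note that the unwrapping step only works while the per-sample roll increment stays below pi, which constrains the required magnetometer sampling rate relative to the spin rate.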

  2. Orientation estimation algorithm applied to high-spin projectiles

    Science.gov (United States)

    Long, D. F.; Lin, J.; Zhang, X. M.; Li, J.

    2014-06-01

    High-spin projectiles are low-cost military weapons. Accurate orientation information is critical to the performance of a high-spin projectile's control system. However, orientation estimators have not been well translated from flight vehicles, since they are too expensive, lack launch robustness, do not fit within the allotted space, or are too application specific. This paper presents an orientation estimation algorithm designed specifically for these projectiles. The orientation estimator uses an integrated filter to combine feedback from a three-axis magnetometer, two single-axis gyros and a GPS receiver. As a new feature of this algorithm, the magnetometer feedback is used to estimate the projectile's roll angular rate. The algorithm also incorporates online sensor error parameter estimation, performed simultaneously with the projectile attitude estimation. The second part of the paper deals with the verification of the proposed orientation algorithm through numerical simulation and experimental tests. Simulations and experiments demonstrate that the orientation estimator can effectively estimate the attitude of high-spin projectiles. Moreover, online sensor calibration significantly enhances the estimation performance of the algorithm.

  3. Convergence and Consistency Analysis for A 3D Invariant-EKF SLAM

    OpenAIRE

    Zhang, Teng; Wu, Kanzhi; Song, Jingwei; Huang, Shoudong; Dissanayake, Gamini

    2017-01-01

    In this paper, we investigate the convergence and consistency properties of a Right-Invariant Extended Kalman Filter (RI-EKF) based Simultaneous Localization and Mapping (SLAM) algorithm. Basic convergence properties of this algorithm are proven. These proofs do not require the restrictive assumption that the Jacobians of the motion and observation models be evaluated at the ground truth. It is also shown that the output of the RI-EKF is invariant under any stochastic rigid body transformation...

  4. Distributed Monocular SLAM for Indoor Map Building

    Directory of Open Access Journals (Sweden)

    Ruwan Egodagamage

    2017-01-01

    Full Text Available Utilization and generation of indoor maps are critical elements in accurate indoor tracking. Simultaneous Localization and Mapping (SLAM) is one of the main techniques for such map generation. In SLAM an agent generates a map of an unknown environment while estimating its location in it. Ubiquitous cameras lead to monocular visual SLAM, where a camera is the only sensing device for the SLAM process. In modern applications, multiple mobile agents may be involved in the generation of such maps, thus requiring a distributed computational framework. Each agent can generate its own local map, which can then be combined into a map covering a larger area. By doing so, the agents can cover a given environment faster than a single agent. Furthermore, they can interact with each other in the same environment, making this framework more practical, especially for collaborative applications such as augmented reality. One of the main challenges of distributed SLAM is identifying overlapping maps, especially when the relative starting positions of the agents are unknown. In this paper, we propose a system with multiple monocular agents, with unknown relative starting positions, that generates a semidense global map of the environment.

  5. Differential Evolution algorithm applied to FSW model calibration

    Science.gov (United States)

    Idagawa, H. S.; Santos, T. F. A.; Ramirez, A. J.

    2014-03-01

    Friction Stir Welding (FSW) is a solid state welding process that can be modelled using a Computational Fluid Dynamics (CFD) approach. These models use adjustable parameters to control the heat transfer and the heat input to the weld. These parameters are used to calibrate the model and they are generally determined using the conventional trial and error approach. Since this method is not very efficient, we used the Differential Evolution (DE) algorithm to successfully determine these parameters. In order to improve the success rate and to reduce the computational cost of the method, this work studied different characteristics of the DE algorithm, such as the evolution strategy, the objective function, the mutation scaling factor and the crossover rate. The DE algorithm was tested using a friction stir weld performed on a UNS S32205 Duplex Stainless Steel.
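The calibration loop the abstract describes can be illustrated with a standard DE/rand/1/bin iteration. The sketch below is hypothetical: a closed-form toy function stands in for the CFD model, and the two parameters (a heat-transfer coefficient and a heat-input fraction) are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the CFD model: peak temperature as a function of two
# adjustable parameters (heat-transfer coefficient h, heat-input fraction eta).
# The real FSW model would require a full CFD run per evaluation.
def model(h, eta):
    return 900.0 * eta / (1.0 + 0.01 * h)

measured = model(50.0, 0.8)  # pretend thermocouple reading; "truth" is (50, 0.8)

def cost(p):
    return (model(*p) - measured) ** 2

# DE/rand/1/bin with typical settings (F = 0.8, CR = 0.9)
bounds = np.array([[1.0, 200.0], [0.1, 1.0]])
NP, F, CR = 20, 0.8, 0.9
pop = rng.uniform(bounds[:, 0], bounds[:, 1], size=(NP, 2))
for _ in range(200):
    for i in range(NP):
        a, b, c = pop[rng.choice([j for j in range(NP) if j != i], 3, replace=False)]
        mutant = np.clip(a + F * (b - c), bounds[:, 0], bounds[:, 1])
        cross = rng.random(2) < CR               # binomial crossover mask
        cross[rng.integers(2)] = True            # force at least one mutant gene
        trial = np.where(cross, mutant, pop[i])
        if cost(trial) <= cost(pop[i]):          # greedy one-to-one selection
            pop[i] = trial

best = min(pop, key=cost)
```

The evolution strategy, mutation factor and crossover rate are exactly the kinds of settings the paper reports tuning to reduce the number of expensive model evaluations.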

  6. Applying genetic algorithms for programming manufacturing cell tasks

    Directory of Open Access Journals (Sweden)

    Efredy Delgado

    2005-05-01

    Full Text Available This work was aimed at developing computational intelligence for scheduling a manufacturing cell's tasks, based mainly on genetic algorithms. The manufacturing cell was modelled as a production line; the makespan was calculated using heuristics adapted from several genetic algorithm libraries, implemented in C++ Builder. Several problems dealing with small, medium and large lists of jobs and machines were solved. The results were compared with other heuristics. The approach developed here seems promising for future research concerning scheduling manufacturing cell tasks involving mixed batches.
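The production-line model and a GA of the kind the abstract describes can be sketched as follows. The instance data, population size and operators here are invented for illustration; the makespan function is the standard permutation flow-shop recursion.

```python
import itertools
import random

# Processing times: rows = jobs, cols = machines (a small made-up instance)
P = [[3, 6, 3], [11, 1, 2], [7, 9, 3], [10, 3, 1], [8, 5, 4]]

def makespan(order, P):
    """Completion time of the last job on the last machine for a
    permutation flow shop (the production-line model in the abstract)."""
    m = len(P[0])
    finish = [0] * m
    for j in order:
        for k in range(m):
            start = max(finish[k], finish[k - 1] if k else 0)
            finish[k] = start + P[j][k]
    return finish[-1]

random.seed(1)
n = len(P)

def ga(pop_size=30, gens=100):
    """Minimal elitist GA over job permutations with swap mutation."""
    pop = [random.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda o: makespan(o, P))
        elite = pop[: pop_size // 2]
        children = []
        for parent in elite:
            child = parent[:]
            i, j = random.sample(range(n), 2)  # swap mutation
            child[i], child[j] = child[j], child[i]
            children.append(child)
        pop = elite + children
    return min(pop, key=lambda o: makespan(o, P))

best = ga()
# Small enough to verify by brute force (5! = 120 permutations)
optimum = min(itertools.permutations(range(n)), key=lambda o: makespan(o, P))
```

On instances this small the GA result can be checked against exhaustive enumeration; the paper's point is that the GA scales to job lists where enumeration is infeasible.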

  7. Optimising a shaft's geometry by applying genetic algorithms

    Directory of Open Access Journals (Sweden)

    María Alejandra Guzmán

    2005-05-01

    Full Text Available Many engineering design tasks involve optimising several conflicting goals; these types of problem are known as Multiobjective Optimisation Problems (MOPs). Evolutionary techniques have proved to be an effective tool for finding solutions to MOPs during the last decade; variations on the basic genetic algorithm have been proposed by different researchers for rapidly finding optimal solutions to MOPs. The NSGA (Non-dominated Sorting Genetic Algorithm) has been implemented in this paper for finding an optimal design for a shaft subjected to cyclic loads, the conflicting goals being minimum weight and minimum lateral deflection.
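The core of NSGA is sorting candidate designs into successive Pareto fronts. A minimal sketch of that step, using invented (weight, deflection) pairs for hypothetical shaft designs, both objectives minimized:

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective and strictly
    better in at least one (both objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_sort(points):
    """Split points into successive Pareto fronts, as in NSGA."""
    remaining = list(range(len(points)))
    fronts = []
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(points[j], points[i]) for j in remaining)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts

# Hypothetical shaft designs evaluated as (weight, lateral deflection)
designs = [(2.0, 0.9), (3.0, 0.4), (2.5, 0.6), (4.0, 0.5), (2.0, 1.2)]
fronts = non_dominated_sort(designs)
```

The first front contains the trade-off designs no other design beats on both goals; NSGA then applies selection pressure front by front.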

  8. Parallel preconditioned conjugate gradient algorithm applied to neutron diffusion problem

    International Nuclear Information System (INIS)

    Majumdar, A.; Martin, W.R.

    1992-01-01

    Numerical solution of the neutron diffusion problem requires solving a linear system of equations Ax = b, where A is an n x n symmetric positive definite (SPD) matrix and x and b are vectors with n components. The preconditioned conjugate gradient (PCG) algorithm is an efficient iterative method for solving such a linear system. In this paper, the authors describe the implementation of a parallel PCG algorithm on a shared-memory machine (BBN TC2000) and in a distributed workstation (IBM RS6000) environment created with the Parallel Virtual Machine (PVM) parallelization software.
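A serial sketch of the PCG iteration the abstract refers to, applied to a made-up 1-D diffusion-like tridiagonal SPD system with a simple Jacobi (diagonal) preconditioner; the parallel versions in the paper distribute exactly these matrix-vector and inner products.

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-10, max_iter=200):
    """Preconditioned conjugate gradient for an SPD matrix A.
    M_inv applies the inverse preconditioner (here Jacobi: divide by diag(A))."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# 1-D diffusion-like SPD system: tridiagonal [-1, 2+sigma, -1]
# (sigma plays the role of an absorption term; values are illustrative)
n, sigma = 50, 0.1
A = (np.diag((2 + sigma) * np.ones(n))
     + np.diag(-np.ones(n - 1), 1)
     + np.diag(-np.ones(n - 1), -1))
b = np.ones(n)
d = np.diag(A)
x = pcg(A, b, lambda r: r / d)
```

The Jacobi preconditioner is the simplest choice; the structure of the loop is unchanged for the stronger preconditioners typically used in production diffusion codes.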

  9. Performance evaluation of the EM algorithm applied to radiographic images

    International Nuclear Information System (INIS)

    Brailean, J.C.; Giger, M.L.; Chen, C.T.; Sullivan, B.J.

    1990-01-01

    In this paper the authors evaluate the expectation maximization (EM) algorithm, both qualitatively and quantitatively, as a technique for enhancing radiographic images. Previous studies have qualitatively shown the usefulness of the EM algorithm but have failed to quantify and compare its performance with those of other image processing techniques. Recent studies by Loo et al., Ishida et al., and Giger et al. have explained improvements in image quality quantitatively in terms of a signal-to-noise ratio (SNR) derived from signal detection theory. In this study, we take a similar approach in quantifying the effect of the EM algorithm on the detection of simulated low-contrast square objects superimposed on radiographic mottle. The SNRs of the original and processed images are calculated taking into account both the human visual system response and the screen-film transfer function, as well as a noise component internal to the eye-brain system. The EM algorithm was also implemented on digital screen-film images of test patterns and clinical mammograms.
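For Poisson-distributed image data, the EM iteration takes the familiar Richardson-Lucy multiplicative form. The 1-D sketch below is illustrative only (a low-contrast square object, a made-up 3-tap PSF, no noise), not the paper's implementation:

```python
import numpy as np

def em_richardson_lucy(observed, psf, iterations=500):
    """EM iteration for Poisson image data (Richardson-Lucy form):
    x <- x * H^T(y / Hx), where H is convolution with the PSF."""
    x = np.full_like(observed, observed.mean())  # flat nonnegative start
    psf_flipped = psf[::-1]                      # H^T is correlation with the PSF
    for _ in range(iterations):
        blurred = np.convolve(x, psf, mode="same")
        ratio = observed / np.maximum(blurred, 1e-12)
        x = x * np.convolve(ratio, psf_flipped, mode="same")
    return x

# Simulated low-contrast square object on a uniform background, blurred by the PSF
psf = np.array([0.25, 0.5, 0.25])
truth = np.ones(64)
truth[28:36] += 0.5                               # the square object
observed = np.convolve(truth, psf, mode="same")   # noiseless blur for the sketch
restored = em_richardson_lucy(observed, psf)
```

The multiplicative update keeps the estimate nonnegative and monotonically improves the fit of the reblurred estimate to the data, which is the property the SNR-based evaluation in the paper quantifies.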

  10. The PBIL algorithm applied to a nuclear reactor design optimization

    Energy Technology Data Exchange (ETDEWEB)

    Machado, Marcelo D.; Medeiros, Jose A.C.C.; Lima, Alan M.M. de; Schirru, Roberto [Instituto Alberto Luiz Coimbra de Pos-Graduacao e Pesquisa de Engenharia (COPPE/UFRJ-RJ), Rio de Janeiro, RJ (Brazil). Programa de Engenharia Nuclear. Lab. de Monitoracao de Processos]. E-mails: marcelo@lmp.ufrj.br; canedo@lmp.ufrj.br; alan@lmp.ufrj.br; schirru@lmp.ufrj.br

    2007-07-01

    The Population-Based Incremental Learning (PBIL) algorithm is a method that combines the mechanism of the genetic algorithm with simple competitive learning, creating an important tool for the optimization of numeric functions and combinatorial problems. PBIL works with a set of solutions to the problem, called the population, whose objective is to create a probability vector, containing real values in each position, that, when used in a decoding procedure, yields individuals representing the best solutions to the function to be optimized. In this work a new form of learning for the PBIL algorithm is developed, aimed at reducing the time required for the optimization process. This new algorithm is applied to nuclear reactor design optimization. The optimization problem consists of adjusting several reactor cell parameters, such as dimensions, enrichment and materials, in order to minimize the average peak factor in a 3-enrichment-zone reactor, subject to some restrictions. The computational code HAMMER is used in this optimization, and the results are compared with those of other artificial intelligence optimization methods. (author)
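The probability-vector mechanism described above can be sketched in a few lines. This is the textbook PBIL loop on a toy bit-string objective, not the authors' modified learning rule or the HAMMER-coupled reactor objective:

```python
import random

random.seed(0)

def pbil(fitness, n_bits, pop_size=40, lr=0.1, generations=120):
    """Population-Based Incremental Learning: sample a population from a
    probability vector, then shift the vector toward the best sample."""
    prob = [0.5] * n_bits                 # start with maximum uncertainty
    best, best_fit = None, float("-inf")
    for _ in range(generations):
        pop = [[1 if random.random() < p else 0 for p in prob]
               for _ in range(pop_size)]
        pop.sort(key=fitness, reverse=True)
        leader = pop[0]
        if fitness(leader) > best_fit:
            best, best_fit = leader, fitness(leader)
        # move each probability toward the corresponding bit of the leader
        prob = [(1 - lr) * p + lr * bit for p, bit in zip(prob, leader)]
    return best

# Toy stand-in for the decoded reactor objective: maximize the number of ones
best = pbil(lambda bits: sum(bits), n_bits=20)
```

In the reactor application each bit string would be decoded into cell dimensions, enrichments and materials before evaluation; the learning rate `lr` is the knob the paper's new learning scheme effectively retunes.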

  11. The PBIL algorithm applied to a nuclear reactor design optimization

    International Nuclear Information System (INIS)

    Machado, Marcelo D.; Medeiros, Jose A.C.C.; Lima, Alan M.M. de; Schirru, Roberto

    2007-01-01

    The Population-Based Incremental Learning (PBIL) algorithm is a method that combines the mechanism of the genetic algorithm with simple competitive learning, creating an important tool for the optimization of numeric functions and combinatorial problems. PBIL works with a set of solutions to the problem, called the population, whose objective is to create a probability vector, containing real values in each position, that, when used in a decoding procedure, yields individuals representing the best solutions to the function to be optimized. In this work a new form of learning for the PBIL algorithm is developed, aimed at reducing the time required for the optimization process. This new algorithm is applied to nuclear reactor design optimization. The optimization problem consists of adjusting several reactor cell parameters, such as dimensions, enrichment and materials, in order to minimize the average peak factor in a 3-enrichment-zone reactor, subject to some restrictions. The computational code HAMMER is used in this optimization, and the results are compared with those of other artificial intelligence optimization methods. (author)

  12. Genetic algorithms applied to the nuclear power plant operation

    International Nuclear Information System (INIS)

    Schirru, R.; Martinez, A.S.; Pereira, C.M.N.A.

    2000-01-01

    Nuclear power plant operation often involves very important human decisions, such as the actions to be taken after a nuclear accident/transient, or finding the best core reload pattern, a complex combinatorial optimization problem which requires expert knowledge. Due to the complexity involved in the decisions to be taken, computerized systems have been intensely explored in order to aid the operator. Following hardware advances, soft computing has improved and, nowadays, intelligent technologies such as genetic algorithms, neural networks and fuzzy systems are being used to support operator decisions. In this chapter two main problems are explored: transient diagnosis and nuclear core refueling. Solutions to these kinds of problems, based on genetic algorithms, are described. A genetic algorithm was designed to optimize the nuclear fuel reload of the Angra-1 nuclear power plant. Results compared to those obtained by an expert reveal a gain in the burn-up cycle. Two other genetic algorithm approaches were used to optimize real-time diagnosis systems. The first one learns partitions in the time series that represent the transients, generating a set of classification centroids. The other involves the optimization of an adaptive vector quantization neural network. Results are shown and commented on. (orig.)

  13. An Improved Crow Search Algorithm Applied to Energy Problems

    Directory of Open Access Journals (Sweden)

    Primitivo Díaz

    2018-03-01

    Full Text Available The efficient use of energy in electrical systems has become a relevant topic due to its environmental impact. Parameter identification in induction motors and capacitor allocation in distribution networks are two representative problems that have strong implications in the massive use of energy. From an optimization perspective, both problems are considered extremely complex due to their non-linearity, discontinuity, and high multi-modality. These characteristics make them difficult to solve using standard optimization techniques. On the other hand, metaheuristic methods have been widely used as alternative optimization algorithms to solve complex engineering problems. The Crow Search Algorithm (CSA) is a recent metaheuristic method based on the intelligent group behavior of crows. Although CSA presents interesting characteristics, its search strategy faces great difficulties on highly multi-modal formulations. In this paper, an improved version of the CSA method is presented to solve complex optimization problems in energy. In the new algorithm, two features of the original CSA are modified: (I) the awareness probability (AP) and (II) the random perturbation. With such adaptations, the new approach preserves solution diversity and improves convergence to difficult, highly multi-modal optima. In order to evaluate its performance, the proposed algorithm has been tested on a set of four optimization problems involving induction motors and distribution networks. The results demonstrate the high performance of the proposed method when compared with other popular approaches.
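The baseline CSA that the paper modifies can be sketched as follows. This is the original algorithm (fixed awareness probability, uniform random relocation) on a generic multi-modal test function, not the improved variant or the actual motor/network objectives:

```python
import numpy as np

rng = np.random.default_rng(3)

def crow_search(f, bounds, n_crows=20, ap=0.1, fl=2.0, iters=300):
    """Basic Crow Search Algorithm: each crow follows a randomly chosen
    crow's memorized position, or relocates randomly with awareness
    probability ap (the parameter the improved CSA adapts)."""
    dim = len(bounds)
    lo, hi = bounds[:, 0], bounds[:, 1]
    x = rng.uniform(lo, hi, (n_crows, dim))
    mem = x.copy()                          # each crow's best-known position
    mem_f = np.array([f(m) for m in mem])
    for _ in range(iters):
        for i in range(n_crows):
            j = rng.integers(n_crows)
            if rng.random() >= ap:          # follow crow j toward its memory
                new = x[i] + fl * rng.random() * (mem[j] - x[i])
            else:                           # crow j is "aware": random relocation
                new = rng.uniform(lo, hi, dim)
            x[i] = np.clip(new, lo, hi)
            fi = f(x[i])
            if fi < mem_f[i]:               # greedy memory update
                mem[i], mem_f[i] = x[i], fi
    return mem[np.argmin(mem_f)], float(mem_f.min())

# Highly multi-modal stand-in for the energy problems: 2-D Rastrigin
def rastrigin(v):
    return 10 * len(v) + sum(vi**2 - 10 * np.cos(2 * np.pi * vi) for vi in v)

best_x, best_f = crow_search(rastrigin, np.array([[-5.12, 5.12]] * 2))
```

On functions like this the fixed `ap` and unstructured perturbation are exactly where the basic CSA struggles, which motivates the two modifications the paper introduces.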

  14. Applied algorithm in the liner inspection of solid rocket motors

    Science.gov (United States)

    Hoffmann, Luiz Felipe Simões; Bizarria, Francisco Carlos Parquet; Bizarria, José Walter Parquet

    2018-03-01

    In rocket motors, the bonding between the solid propellant and thermal insulation is accomplished by a thin adhesive layer, known as liner. The liner application method involves a complex sequence of tasks, which includes in its final stage, the surface integrity inspection. Nowadays in Brazil, an expert carries out a thorough visual inspection to detect defects on the liner surface that may compromise the propellant interface bonding. Therefore, this paper proposes an algorithm that uses the photometric stereo technique and the K-nearest neighbor (KNN) classifier to assist the expert in the surface inspection. Photometric stereo allows the surface information recovery of the test images, while the KNN method enables image pixels classification into two classes: non-defect and defect. Tests performed on a computer vision based prototype validate the algorithm. The positive results suggest that the algorithm is feasible and when implemented in a real scenario, will be able to help the expert in detecting defective areas on the liner surface.

  15. ANTQ evolutionary algorithm applied to nuclear fuel reload problem

    International Nuclear Information System (INIS)

    Machado, Liana; Schirru, Roberto

    2000-01-01

    Nuclear fuel reload optimization is an NP-complete combinatorial optimization problem whose aim is to find the fuel-rod configuration that maximizes burnup or minimizes the power peak factor. For decades this problem was solved exclusively using an expert's knowledge. Since the eighties, however, there have been efforts to automate fuel reload. The first relevant effort used Simulated Annealing, but more recent publications show the Genetic Algorithm's (GA) efficiency on this problem. Following this direction, our aim is to optimize nuclear fuel reload using Ant-Q, a reinforcement learning algorithm based on the Cellular Computing paradigm. Ant-Q's results on the Travelling Salesman Problem, which is conceptually similar to fuel reload, are better than the GA's. Ant-Q was tested on fuel reload by simulating the first-cycle in-out reload of Biblis, a 193-fuel-element PWR. Comparing Ant-Q's results with the GA's, it can be seen that even without a local heuristic, this evolutionary algorithm can be used to solve the nuclear fuel reload problem. (author)

  16. A Highest Order Hypothesis Compatibility Test for Monocular SLAM

    OpenAIRE

    Edmundo Guerra; Rodrigo Munguia; Yolanda Bolea; Antoni Grau

    2013-01-01

    Simultaneous Localization and Mapping (SLAM) is a key problem to solve in order to build truly autonomous mobile robots. SLAM with a single camera, or monocular SLAM, is probably one of the most complex SLAM variants, based entirely on a bearing-only sensor working over six DOF. The monocular SLAM method developed in this work is based on Delayed Inverse-Depth (DI-D) Feature Initialization, with the contribution of a new data association batch validation technique, the Highest Order Hyp...

  17. Multibeam 3D Underwater SLAM with Probabilistic Registration

    Directory of Open Access Journals (Sweden)

    Albert Palomer

    2016-04-01

    Full Text Available This paper describes a pose-based underwater 3D Simultaneous Localization and Mapping (SLAM) method that uses a multibeam echosounder to produce highly consistent underwater maps. The proposed algorithm compounds swath profiles of the seafloor with dead-reckoning localization to build surface patches (i.e., point clouds). An Iterative Closest Point (ICP) with a probabilistic implementation is then used to register the point clouds, taking into account their uncertainties. The registration process is divided into two steps: (1) point-to-point association for coarse registration and (2) point-to-plane association for fine registration. The point clouds of the surfaces to be registered are sub-sampled in order to decrease both the computation time and the potential for falling into local minima during registration. In addition, a heuristic is used to decrease the complexity of the association step of the ICP from O(n²) to O(n). The performance of the SLAM framework is tested using two real-world datasets: first, a 2.5D bathymetric dataset obtained with the usual down-looking multibeam sonar configuration, and second, a full 3D underwater dataset acquired with a multibeam sonar mounted on a pan-and-tilt unit.
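The coarse, point-to-point stage of ICP can be sketched as below. This is a deterministic 2-D illustration on synthetic points, not the paper's probabilistic, uncertainty-weighted implementation; the brute-force nearest-neighbour association shown here is exactly the O(n²) step the paper's heuristic reduces to O(n).

```python
import numpy as np

rng = np.random.default_rng(0)

def icp_point_to_point(src, dst, iters=30):
    """Coarse point-to-point ICP: nearest-neighbour association followed by
    a closed-form (SVD/Kabsch) rigid alignment, repeated until settled."""
    R_total, t_total = np.eye(2), np.zeros(2)
    cur = src.copy()
    for _ in range(iters):
        # associate each source point with its nearest destination point (O(n^2))
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[d2.argmin(axis=1)]
        # closed-form rigid transform from the current associations
        mu_s, mu_d = cur.mean(0), matched.mean(0)
        H = (cur - mu_s).T @ (matched - mu_d)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:     # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_d - R @ mu_s
        cur = cur @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total, cur

# Synthetic "swath" of seafloor points and a rotated, translated copy of it
dst = rng.uniform(0, 10, (200, 2))
theta = 0.1
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
src = (dst - dst.mean(0)) @ R_true.T + dst.mean(0) + np.array([0.3, -0.2])
R_est, t_est, aligned = icp_point_to_point(src, dst)
```

The fine point-to-plane stage replaces `matched` with the foot of each point on a locally fitted plane, which converges faster on smooth seafloor patches.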

  18. Validation of Underwater Sensor Package Using Feature Based SLAM

    Directory of Open Access Journals (Sweden)

    Christopher Cain

    2016-03-01

    Full Text Available Robotic vehicles working in new, unexplored environments must be able to locate themselves in the environment while constructing a picture of the objects in the environment that could act as obstacles that would prevent the vehicles from completing their desired tasks. In enclosed environments, underwater range sensors based off of acoustics suffer performance issues due to reflections. Additionally, their relatively high cost make them less than ideal for usage on low cost vehicles designed to be used underwater. In this paper we propose a sensor package composed of a downward facing camera, which is used to perform feature tracking based visual odometry, and a custom vision-based two dimensional rangefinder that can be used on low cost underwater unmanned vehicles. In order to examine the performance of this sensor package in a SLAM framework, experimental tests are performed using an unmanned ground vehicle and two feature based SLAM algorithms, the extended Kalman filter based approach and the Rao-Blackwellized, particle filter based approach, to validate the sensor package.

  19. Validation of Underwater Sensor Package Using Feature Based SLAM

    Science.gov (United States)

    Cain, Christopher; Leonessa, Alexander

    2016-01-01

    Robotic vehicles working in new, unexplored environments must be able to locate themselves in the environment while constructing a picture of the objects in the environment that could act as obstacles preventing the vehicles from completing their desired tasks. In enclosed environments, underwater range sensors based on acoustics suffer performance issues due to reflections. Additionally, their relatively high cost makes them less than ideal for use on low-cost vehicles designed to operate underwater. In this paper we propose a sensor package composed of a downward-facing camera, which is used to perform feature-tracking-based visual odometry, and a custom vision-based two-dimensional rangefinder that can be used on low-cost underwater unmanned vehicles. In order to examine the performance of this sensor package in a SLAM framework, experimental tests are performed using an unmanned ground vehicle and two feature-based SLAM algorithms, the extended Kalman filter based approach and the Rao-Blackwellized particle filter based approach, to validate the sensor package. PMID:26999142

  20. Applying Planning Algorithms to Argue in Cooperative Work

    Science.gov (United States)

    Monteserin, Ariel; Schiaffino, Silvia; Amandi, Analía

    Negotiation is typically used in cooperative work scenarios for solving conflicts. Anticipating possible arguments in this negotiation step is a key factor, since it lets us make decisions about our participation in the cooperation process. In this context, we present a novel application of planning algorithms to argument generation, where the actions of a plan represent the arguments that a person might use during the argumentation process. In this way, we can plan how to persuade the other participants in cooperative work into reaching an expected agreement in terms of our interests. This approach is advantageous because anticipated argumentative solutions can be tested in advance.

  1. Semantic data association for planar features in outdoor 6D-SLAM using lidar

    Science.gov (United States)

    Ulas, C.; Temeltas, H.

    2013-05-01

    Simultaneous Localization and Mapping (SLAM) is a fundamental problem for autonomous systems in GPS (Global Positioning System) denied environments. Traditional probabilistic SLAM methods use point features as landmarks and hold all the feature positions in their state vector in addition to the robot pose. The bottleneck of point-feature based SLAM methods is the data association problem, which is mostly based on a statistical measure. Data association performance is critical for a robust SLAM method, since all the filtering strategies are applied after a correspondence is established. For point features, two different but very close landmarks in the same scene might be confused when making the correspondence decision if only their positions and error covariance matrices are taken into account. Instead of point features, planar features can be considered as an alternative landmark model in the SLAM problem, providing more consistent data association. Planes contain rich information for the solution of the data association problem and can be distinguished more easily than point features. In addition, planar maps are very compact, since an environment has only a limited number of planar structures. The planar features do not have to be large structures like building walls or roofs; small plane segments can also be used as landmarks, such as billboards, traffic posts and parts of bridges in urban areas. In this paper, a probabilistic plane-feature extraction method from 3D LiDAR data and a data association method based on the extracted semantic information of the planar features are introduced. The experimental results show that the semantic data association provides very satisfactory results in outdoor 6D-SLAM.

  2. Genetic Algorithm Applied to the Eigenvalue Equalization Filtered-x LMS Algorithm (EE-FXLMS)

    Directory of Open Access Journals (Sweden)

    Stephan P. Lovstedt

    2008-01-01

    Full Text Available The FXLMS algorithm, used extensively in active noise control (ANC, exhibits frequency-dependent convergence behavior. This leads to degraded performance for time-varying tonal noise and noise with multiple stationary tones. Previous work by the authors proposed the eigenvalue equalization filtered-x least mean squares (EE-FXLMS algorithm. For that algorithm, magnitude coefficients of the secondary path transfer function are modified to decrease variation in the eigenvalues of the filtered-x autocorrelation matrix, while preserving the phase, giving faster convergence and increasing overall attenuation. This paper revisits the EE-FXLMS algorithm, using a genetic algorithm to find magnitude coefficients that give the least variation in eigenvalues. This method overcomes some of the problems with implementing the EE-FXLMS algorithm arising from finite resolution of sampled systems. Experimental control results using the original secondary path model, and a modified secondary path model for both the previous implementation of EE-FXLMS and the genetic algorithm implementation are compared.
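The underlying FXLMS loop that both the EE-FXLMS and genetic-algorithm variants build on can be sketched as below. Everything here is synthetic and illustrative: the primary and secondary path coefficients, filter length and step size are invented, the secondary-path model is assumed perfect, and the eigenvalue-equalization modification itself is not implemented.

```python
import numpy as np

rng = np.random.default_rng(2)

# Invented secondary path S(z) and its (assumed perfect) model S_hat
S = np.array([0.8, 0.3, -0.1])
S_hat = S.copy()

L = 16            # adaptive control filter length
w = np.zeros(L)   # control filter weights
mu = 0.01         # step size

n_samples = 4000
t = np.arange(n_samples)
x = np.sin(2 * np.pi * 0.05 * t) + 0.1 * rng.standard_normal(n_samples)  # tonal ref
d = np.convolve(x, [0.9, 0.4], mode="full")[:n_samples]  # noise at the error mic

x_buf = np.zeros(L)         # reference history for the control filter
fx_buf = np.zeros(L)        # filtered-reference history for the weight update
s_buf = np.zeros(len(S))    # recent control outputs passing through S(z)
e = np.zeros(n_samples)
for n in range(n_samples):
    x_buf = np.roll(x_buf, 1); x_buf[0] = x[n]
    y = w @ x_buf                                # anti-noise output
    s_buf = np.roll(s_buf, 1); s_buf[0] = y
    e[n] = d[n] - S @ s_buf                      # residual at the error mic
    fx = S_hat @ x_buf[: len(S_hat)]             # reference filtered through S_hat
    fx_buf = np.roll(fx_buf, 1); fx_buf[0] = fx
    w += mu * e[n] * fx_buf                      # FXLMS weight update
```

EE-FXLMS modifies only the magnitude response of `S_hat` (preserving its phase) to flatten the eigenvalues of the filtered-x autocorrelation matrix; the genetic algorithm in this paper searches for those magnitude coefficients directly.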

  3. Automated microaneurysm detection algorithms applied to diabetic retinopathy retinal images

    Directory of Open Access Journals (Sweden)

    Akara Sopharak

    2013-07-01

    Full Text Available Diabetic retinopathy is the commonest cause of blindness in working-age people. It is characterised and graded by the development of retinal microaneurysms, haemorrhages and exudates. The damage caused by diabetic retinopathy can be prevented if it is treated in its early stages. Therefore, automated early detection can limit the severity of the disease, improve the follow-up management of diabetic patients and assist ophthalmologists in investigating and treating the disease more efficiently. This review focuses on microaneurysm detection as the earliest clinically localised characteristic of diabetic retinopathy, a frequently observed complication in both Type 1 and Type 2 diabetes. Algorithms used for microaneurysm detection from retinal images are reviewed. A number of features used to extract microaneurysms are summarised. Furthermore, a comparative analysis of reported methods used to automatically detect microaneurysms is presented and discussed. The performance of the methods and their complexity are also discussed.

  4. Slamming Simulations in a Conditional Wave

    DEFF Research Database (Denmark)

    Seng, Sopheak; Jensen, Jørgen Juncher

    2012-01-01

    A study of slamming events in conditional waves is presented in this paper. The ship is sailing in head sea and the motion is solved under the assumption of rigid-body motion constrained to two degrees of freedom, i.e. heave and pitch. Based on a time-domain non-linear strip theory the most probable...

  5. Visual EKF-SLAM from Heterogeneous Landmarks.

    Science.gov (United States)

    Esparza-Jiménez, Jorge Othón; Devy, Michel; Gordillo, José L

    2016-04-07

    Many applications require the localization of a moving object, e.g., a robot, using sensory data acquired from embedded devices. Simultaneous localization and mapping from vision performs both the spatial and temporal fusion of these data on a map when a camera moves in an unknown environment. Such a SLAM process executes two interleaved functions: the front-end detects and tracks features from images, while the back-end interprets features as landmark observations and estimates both the landmarks and the robot positions with respect to a selected reference frame. This paper describes a complete visual SLAM solution, combining both point and line landmarks on a single map. The proposed method has an impact on both the back-end and the front-end. The contributions comprise the use of heterogeneous landmark-based EKF-SLAM (the management of a map composed of both point and line landmarks); from this perspective, a comparison between landmark parametrizations and an evaluation of how the heterogeneity improves the accuracy of camera localization; the development of a front-end active-search process for linear landmarks integrated into SLAM; and the experimentation methodology.

  6. Apply lightweight recognition algorithms in optical music recognition

    Science.gov (United States)

    Pham, Viet-Khoi; Nguyen, Hai-Dang; Nguyen-Khac, Tung-Anh; Tran, Minh-Triet

    2015-02-01

    The problems of digitalization and transformation of musical scores into a machine-readable format need to be solved, since solutions help people to enjoy music, to learn music, to conserve music sheets, and even to assist music composers. However, the results of existing methods still require improvements for higher accuracy. Therefore, the authors propose lightweight algorithms for Optical Music Recognition to help people recognize and automatically play musical scores. In our proposal, after removing staff lines and extracting symbols, each music symbol is represented as a grid of identical M ∗ N cells, and the features are extracted and classified with multiple lightweight SVM classifiers. Through experiments, the authors find that a size of 10 ∗ 12 cells yields the highest precision value. Experimental results on a dataset consisting of 4929 music symbols taken from 18 modern music sheets in the Synthetic Score Database show that the proposed method is able to classify printed musical scores with accuracy up to 99.56%.

  7. Applying Intelligent Algorithms to Automate the Identification of Error Factors.

    Science.gov (United States)

    Jin, Haizhe; Qu, Qingxing; Munechika, Masahiko; Sano, Masataka; Kajihara, Chisato; Duffy, Vincent G; Chen, Han

    2018-05-03

    Medical errors are the manifestation of defects occurring in medical processes. Extracting and identifying defects as medical error factors from these processes is an effective approach to preventing medical errors. However, it is a difficult and time-consuming task and requires an analyst with a professional medical background. The issues of identifying a method to extract medical error factors and of reducing the extraction difficulty need to be resolved. In this research, a systematic methodology to extract and identify error factors in the medical administration process was proposed. The design of the error report, the extraction of the error factors, and the identification of the error factors were analyzed. Based on 624 medical error cases across four medical institutes in Japan and China, 19 error-related items and their levels were extracted; these were found to be closely related to 12 error factors. The relational model between the error-related items and error factors was established based on a genetic algorithm (GA)-back-propagation neural network (BPNN) model. Additionally, compared to BPNN, partial least squares regression and support vector regression, GA-BPNN exhibited a higher overall prediction accuracy, being able to promptly identify the error factors from the error-related items. The combination of error-related items, their different levels, and the GA-BPNN model was proposed as an error-factor identification technology, which could automatically identify medical error factors.

  8. A TRMM-Calibrated Infrared Rainfall Algorithm Applied Over Brazil

    Science.gov (United States)

    Negri, A. J.; Xu, L.; Adler, R. F.; Einaudi, Franco (Technical Monitor)

    2000-01-01

    The development of a satellite infrared technique for estimating convective and stratiform rainfall and its application in studying the diurnal variability of rainfall in Amazonia are presented. The Convective-Stratiform Technique, calibrated by coincident, physically retrieved rain rates from the Tropical Rainfall Measuring Mission (TRMM) Microwave Imager (TMI), is applied during January to April 1999 over northern South America. The diurnal cycle of rainfall, as well as the division between convective and stratiform rainfall, is presented. Results compare well (with a one-hour lag) with the diurnal cycle derived from Tropical Ocean-Global Atmosphere (TOGA) radar-estimated rainfall in Rondonia. The satellite estimates reveal that convective rain constitutes, in the mean, 24% of the rain area while accounting for 67% of the rain volume. The effects of geography (rivers, lakes, coasts) and topography on the diurnal cycle of convection are examined. In particular, the Amazon River, downstream of Manaus, is shown to both enhance early morning rainfall and inhibit afternoon convection. Monthly estimates from this technique, dubbed CST/TMI, are verified over a dense rain gage network in the state of Ceara, in northeast Brazil. The CST/TMI showed a high bias equal to +33% of the gage mean, indicating that the TMI estimates alone are possibly also high. The root mean square difference (after removal of the bias) equaled 36.6% of the gage mean. The correlation coefficient was 0.77 based on 72 station-months.
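
    The verification statistics quoted above (bias, bias-removed RMS difference and correlation coefficient, the first two expressed as percentages of the gage mean) can be computed as in this small sketch; the function name and the toy numbers are illustrative only.

```python
import numpy as np

def verify(est, gage):
    """Return (bias %, bias-removed RMS %, correlation) of an estimate
    against gage observations, percentages relative to the gage mean."""
    est, gage = np.asarray(est, float), np.asarray(gage, float)
    gmean = gage.mean()
    bias = (est - gage).mean()                      # mean difference
    rms = np.sqrt(((est - gage - bias) ** 2).mean())  # RMS after bias removal
    r = np.corrcoef(est, gage)[0, 1]                # correlation coefficient
    return 100 * bias / gmean, 100 * rms / gmean, r

b, rms, r = verify([4.0, 6.0, 8.0], [3.0, 5.0, 7.0])
```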

  9. Adaptive Monocular Visual-Inertial SLAM for Real-Time Augmented Reality Applications in Mobile Devices.

    Science.gov (United States)

    Piao, Jin-Chun; Kim, Shin-Dug

    2017-11-07

    Simultaneous localization and mapping (SLAM) is emerging as a prominent issue in computer vision and next-generation core technology for robots, autonomous navigation and augmented reality. In augmented reality applications, fast camera pose estimation and true scale are important. In this paper, we present an adaptive monocular visual-inertial SLAM method for real-time augmented reality applications in mobile devices. First, the SLAM system is implemented based on the visual-inertial odometry method that combines data from a mobile device camera and inertial measurement unit sensor. Second, we present an optical-flow-based fast visual odometry method for real-time camera pose estimation. Finally, an adaptive monocular visual-inertial SLAM is implemented by presenting an adaptive execution module that dynamically selects visual-inertial odometry or optical-flow-based fast visual odometry. Experimental results show that the average translation root-mean-square error of keyframe trajectory is approximately 0.0617 m with the EuRoC dataset. The average tracking time is reduced by 7.8%, 12.9%, and 18.8% when different level-set adaptive policies are applied. Moreover, we conducted experiments with real mobile device sensors, and the results demonstrate the effectiveness of performance improvement using the proposed method.

  10. Adaptive Monocular Visual–Inertial SLAM for Real-Time Augmented Reality Applications in Mobile Devices

    Directory of Open Access Journals (Sweden)

    Jin-Chun Piao

    2017-11-01

    Full Text Available Simultaneous localization and mapping (SLAM) is emerging as a prominent issue in computer vision and next-generation core technology for robots, autonomous navigation and augmented reality. In augmented reality applications, fast camera pose estimation and true scale are important. In this paper, we present an adaptive monocular visual–inertial SLAM method for real-time augmented reality applications in mobile devices. First, the SLAM system is implemented based on the visual–inertial odometry method that combines data from a mobile device camera and inertial measurement unit sensor. Second, we present an optical-flow-based fast visual odometry method for real-time camera pose estimation. Finally, an adaptive monocular visual–inertial SLAM is implemented by presenting an adaptive execution module that dynamically selects visual–inertial odometry or optical-flow-based fast visual odometry. Experimental results show that the average translation root-mean-square error of keyframe trajectory is approximately 0.0617 m with the EuRoC dataset. The average tracking time is reduced by 7.8%, 12.9%, and 18.8% when different level-set adaptive policies are applied. Moreover, we conducted experiments with real mobile device sensors, and the results demonstrate the effectiveness of performance improvement using the proposed method.

  11. Adaptive Monocular Visual–Inertial SLAM for Real-Time Augmented Reality Applications in Mobile Devices

    Science.gov (United States)

    Piao, Jin-Chun; Kim, Shin-Dug

    2017-01-01

    Simultaneous localization and mapping (SLAM) is emerging as a prominent issue in computer vision and next-generation core technology for robots, autonomous navigation and augmented reality. In augmented reality applications, fast camera pose estimation and true scale are important. In this paper, we present an adaptive monocular visual–inertial SLAM method for real-time augmented reality applications in mobile devices. First, the SLAM system is implemented based on the visual–inertial odometry method that combines data from a mobile device camera and inertial measurement unit sensor. Second, we present an optical-flow-based fast visual odometry method for real-time camera pose estimation. Finally, an adaptive monocular visual–inertial SLAM is implemented by presenting an adaptive execution module that dynamically selects visual–inertial odometry or optical-flow-based fast visual odometry. Experimental results show that the average translation root-mean-square error of keyframe trajectory is approximately 0.0617 m with the EuRoC dataset. The average tracking time is reduced by 7.8%, 12.9%, and 18.8% when different level-set adaptive policies are applied. Moreover, we conducted experiments with real mobile device sensors, and the results demonstrate the effectiveness of performance improvement using the proposed method. PMID:29112143

  12. The Great Deluge Algorithm applied to a nuclear reactor core design optimization problem

    International Nuclear Information System (INIS)

    Sacco, Wagner F.; Oliveira, Cassiano R.E. de

    2005-01-01

    The Great Deluge Algorithm (GDA) is a local search algorithm introduced by Dueck. It is an analogy with a flood: the 'water level' rises continuously and the proposed solution must lie above the 'surface' in order to survive. The crucial parameter is the 'rain speed', which controls convergence of the algorithm similarly to Simulated Annealing's annealing schedule. This algorithm is applied to the reactor core design optimization problem, which consists in adjusting several reactor cell parameters, such as dimensions, enrichment and materials, in order to minimize the average peak-factor in a 3-enrichment-zone reactor, considering restrictions on the average thermal flux, criticality and sub-moderation. This problem was previously attacked by the canonical genetic algorithm (GA) and by a Niching Genetic Algorithm (NGA). NGAs were designed to force the genetic algorithm to maintain a heterogeneous population throughout the evolutionary process, avoiding the phenomenon known as genetic drift, where all the individuals converge to a single solution. The results obtained by the Great Deluge Algorithm are compared to those obtained by both algorithms mentioned above. The three algorithms are submitted to the same computational effort and GDA reaches the best results, showing its potential for other applications in the nuclear engineering field as, for instance, the nuclear core reload optimization problem. One of the great advantages of this algorithm over the GA is that it does not require special operators for discrete optimization. (author)
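
    A minimal sketch of the Great Deluge idea on a toy 1-D quadratic (a minimizing variant, so the "water level" falls instead of rises); the rain speed, neighborhood and objective are illustrative, not the reactor-design settings.

```python
import random

def great_deluge(cost, neighbor, x0, rain_speed=0.01, iters=2000, seed=1):
    """Minimizing variant of Dueck's Great Deluge: the water level starts
    at the initial cost and falls steadily; a candidate is accepted only
    while its cost lies below the current level."""
    rng = random.Random(seed)
    x, level = x0, cost(x0)
    best, best_cost = x0, cost(x0)
    for _ in range(iters):
        cand = neighbor(x, rng)
        c = cost(cand)
        if c <= level:              # the solution survives the flood
            x = cand
            if c < best_cost:
                best, best_cost = cand, c
        level -= rain_speed         # the water level keeps dropping
    return best, best_cost

# toy 1-D objective standing in for the core design fitness
quad = lambda x: (x - 3.0) ** 2
step = lambda x, rng: x + rng.uniform(-0.5, 0.5)
xb, cb = great_deluge(quad, step, x0=0.0)
```

The rain speed plays the role the abstract describes: like an annealing schedule, it trades exploration time against convergence.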

  13. A novel visual-inertial monocular SLAM

    Science.gov (United States)

    Yue, Xiaofeng; Zhang, Wenjuan; Xu, Li; Liu, JiangGuo

    2018-02-01

    With the development of sensors and the computer vision research community, cameras, which are accurate, compact, well understood and, most importantly, cheap and ubiquitous today, have gradually moved to the center of robot localization. Simultaneous localization and mapping (SLAM) using visual features obtains motion information from image acquisition equipment and reconstructs the structure of an unknown environment. We provide an analysis of bio-inspired flight in insects, employing a novel technique based on SLAM, and combine visual and inertial measurements to achieve high accuracy and robustness. We present a novel tightly-coupled visual-inertial simultaneous localization and mapping system that makes a new attempt to address two challenges: the initialization problem and the calibration problem. Experimental results and analysis show that the proposed approach yields a more accurate quantitative simulation of insect navigation, reaching centimeter-level positioning accuracy.

  14. Three main paradigms of simultaneous localization and mapping (SLAM) problem

    Science.gov (United States)

    Imani, Vandad; Haataja, Keijo; Toivanen, Pekka

    2018-04-01

    Simultaneous Localization and Mapping (SLAM) is one of the most challenging research areas within computer and machine vision for automated scene commentary and explanation, and it has been a developing research area in robotics in recent years. By utilizing SLAM, a robot can estimate its position at distinct points in time, which indicates the trajectory of the robot, while also generating a map of the environment. SLAM's unique trait is estimating the location of the robot while building a map, and it is effective in various types of environment: indoor, outdoor, aerial, underwater, underground and space. Several approaches have been investigated to apply the SLAM technique in these distinct environments. The purpose of this paper is to provide an accurate, perceptive review of the history of SLAM relying on laser/ultrasonic sensors and cameras as perception input data. In addition, we mainly focus on three paradigms of the SLAM problem with their pros and cons. In future work, intelligent methods and some new ideas will be applied to visual SLAM to estimate the motion of an intelligent underwater robot and build a feature map of the marine environment.

  15. Real-time slicing algorithm for Stereolithography (STL) CAD model applied in additive manufacturing industry

    Science.gov (United States)

    Adnan, F. A.; Romlay, F. R. M.; Shafiq, M.

    2018-04-01

    Owing to the advent of Industry 4.0, the need to further evaluate processes applied in additive manufacturing, particularly the computational process of slicing, is non-trivial. This paper evaluates a real-time slicing algorithm for slicing an STL-formatted computer-aided design (CAD) model. A line-plane intersection equation is applied to perform the slicing procedure at any given height. The application of this algorithm has been found to provide better computational time regardless of the number of facets in the STL model. The performance of this algorithm is evaluated by comparing the computational times for different geometries.
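
    The line-plane intersection step can be sketched as below for a single STL facet; degenerate cases (a vertex lying exactly on the slicing plane) are ignored for brevity, and the function name is hypothetical.

```python
def slice_triangle(tri, z):
    """Return the (x, y) points where the plane z = const cuts a triangle
    given as three (x, y, z) vertices; a proper cut yields two points."""
    pts = []
    for i in range(3):
        (x1, y1, z1), (x2, y2, z2) = tri[i], tri[(i + 1) % 3]
        if (z1 - z) * (z2 - z) < 0:            # edge crosses the plane
            t = (z - z1) / (z2 - z1)           # parametric line-plane intersection
            pts.append((x1 + t * (x2 - x1), y1 + t * (y2 - y1)))
    return pts

# one facet with its base at z = 0 and two vertices at z = 2, cut at z = 1
tri = [(0.0, 0.0, 0.0), (1.0, 0.0, 2.0), (0.0, 1.0, 2.0)]
seg = slice_triangle(tri, 1.0)
```

Collecting such segments over all facets at a given height, and chaining them into closed contours, yields one slice of the part.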

  16. SLAM - Based Approach to Dynamic Ship Positioning

    Directory of Open Access Journals (Sweden)

    Krzysztof Wrobel

    2014-03-01

    Full Text Available Dynamically positioned vessels, used by the offshore industry, rely not only on satellite navigation but also on other positioning systems, often referred to as reference systems. Most of them use multiple technical devices located outside the vessel, which creates some problems with their accessibility and performance. In this paper, a basic concept of a reference system independent of any external device is presented, based on hydroacoustics and the Simultaneous Localization and Mapping (SLAM) method. A theoretical analysis of its operability is also performed.

  17. Towards Informative Path Planning for Acoustic SLAM

    OpenAIRE

    Evers, C; Moore, A; Naylor, P

    2016-01-01

    Acoustic scene mapping is a challenging task as microphone arrays can often localize sound sources only in terms of their directions. Spatial diversity can be exploited constructively to infer source-sensor range when using microphone arrays installed on moving platforms, such as robots. As the absolute location of a moving robot is often unknown in practice, Acoustic Simultaneous Localization And Mapping (a-SLAM) is required in order to localize the moving robot's positions and jointly map t...

  18. Examining applying high performance genetic data feature selection and classification algorithms for colon cancer diagnosis.

    Science.gov (United States)

    Al-Rajab, Murad; Lu, Joan; Xu, Qiang

    2017-07-01

    This paper examines the accuracy and efficiency (time complexity) of high performance genetic data feature selection and classification algorithms for colon cancer diagnosis. The need for this research derives from the urgent and increasing need for accurate and efficient algorithms. Colon cancer is a leading cause of death worldwide, hence it is vitally important for the cancer tissues to be expertly identified and classified in a rapid and timely manner, to assure both a fast detection of the disease and to expedite the drug discovery process. In this research, a three-phase approach was proposed and implemented: Phases One and Two examined the feature selection algorithms and classification algorithms employed separately, and Phase Three examined the performance of the combination of these. It was found from Phase One that the Particle Swarm Optimization (PSO) algorithm performed best with the colon dataset as a feature selection (29 genes selected) and from Phase Two that the Support Vector Machine (SVM) algorithm outperformed other classifications, with an accuracy of almost 86%. It was also found from Phase Three that the combined use of PSO and SVM surpassed other algorithms in accuracy and performance, and was faster in terms of time analysis (94%). It is concluded that applying feature selection algorithms prior to classification algorithms results in better accuracy than when the latter are applied alone. This conclusion is important and significant to industry and society. Copyright © 2017 Elsevier B.V. All rights reserved.
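
    The wrapper idea above (a PSO searching over feature subsets that a classifier then scores) can be sketched with a binary PSO. Here a fixed per-feature relevance score stands in for the SVM's cross-validated accuracy, so everything below is an illustrative toy, not the paper's pipeline; all names and constants are assumptions.

```python
import math, random

def bpso_select(fitness, n_feat, n_particles=20, iters=60, seed=3):
    """Binary PSO with the sigmoid velocity transfer: each bit of a
    particle says whether the corresponding feature is selected."""
    rng = random.Random(seed)
    X = [[rng.random() < 0.5 for _ in range(n_feat)] for _ in range(n_particles)]
    V = [[0.0] * n_feat for _ in range(n_particles)]
    P = [x[:] for x in X]                       # personal best positions
    pf = [fitness(x) for x in X]
    g = max(range(n_particles), key=lambda i: pf[i])
    gbest, gfit = P[g][:], pf[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(n_feat):
                r1, r2 = rng.random(), rng.random()
                V[i][d] = (0.7 * V[i][d]
                           + 1.5 * r1 * (P[i][d] - X[i][d])
                           + 1.5 * r2 * (gbest[d] - X[i][d]))
                # resample the bit with sigmoid(velocity) probability
                X[i][d] = rng.random() < 1.0 / (1.0 + math.exp(-V[i][d]))
            f = fitness(X[i])
            if f > pf[i]:
                P[i], pf[i] = X[i][:], f
                if f > gfit:
                    gbest, gfit = X[i][:], f
    return gbest, gfit

# toy fitness: fixed per-feature relevance minus a cost per selected feature
scores = [0.9, 0.8, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05]
fit = lambda m: sum(s for s, b in zip(scores, m) if b) - 0.1 * sum(m)
mask, best = bpso_select(fit, n_feat=8)
```

In the paper's setting, `fit` would instead train and validate an SVM on the selected gene subset.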

  19. H-SLAM: Rao-Blackwellized Particle Filter SLAM Using Hilbert Maps

    Directory of Open Access Journals (Sweden)

    Guillem Vallicrosa

    2018-05-01

    Full Text Available Occupancy Grid maps provide a probabilistic representation of space which is important for a variety of robotic applications like path planning and autonomous manipulation. In this paper, a SLAM (Simultaneous Localization and Mapping) framework capable of obtaining this representation online is presented. The H-SLAM (Hilbert Maps SLAM) is based on the Hilbert Map representation and uses a Particle Filter to represent the robot state. Hilbert Maps offer a continuous probabilistic representation with a small memory footprint. We present a series of experimental results carried out both in simulation and with real AUVs (Autonomous Underwater Vehicles). These results demonstrate that our approach is able to represent the environment more consistently while running online.

  20. Hybrid nested sampling algorithm for Bayesian model selection applied to inverse subsurface flow problems

    International Nuclear Information System (INIS)

    Elsheikh, Ahmed H.; Wheeler, Mary F.; Hoteit, Ibrahim

    2014-01-01

    A Hybrid Nested Sampling (HNS) algorithm is proposed for efficient Bayesian model calibration and prior model selection. The proposed algorithm combines the Nested Sampling (NS) algorithm, Hybrid Monte Carlo (HMC) sampling and gradient estimation using the Stochastic Ensemble Method (SEM). NS is an efficient sampling algorithm that can be used for Bayesian calibration and estimating the Bayesian evidence for prior model selection. Nested sampling has the advantage of computational feasibility. Within the nested sampling algorithm, a constrained sampling step is performed. For this step, we utilize HMC to reduce the correlation between successive sampled states. HMC relies on the gradient of the logarithm of the posterior distribution, which we estimate using a stochastic ensemble method based on an ensemble of directional derivatives. SEM only requires forward model runs; the simulator is used as a black box and no adjoint code is needed. The developed HNS algorithm is successfully applied for Bayesian calibration and prior model selection of several nonlinear subsurface flow problems.
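
    A bare-bones nested sampling loop conveys the evidence-estimation idea on a 1-D toy problem. For simplicity the constrained step uses rejection sampling from the prior rather than the paper's HMC-with-SEM gradients, the final live-point correction is omitted, and all names and settings are illustrative assumptions.

```python
import math, random

def logaddexp(a, b):
    if a == -math.inf:
        return b
    m = max(a, b)
    return m + math.log(math.exp(a - m) + math.exp(b - m))

def nested_sampling(loglike, prior_sample, n_live=50, n_iter=300, seed=11):
    """Keep n_live prior samples; repeatedly discard the worst-likelihood
    point, credit it with the shrinking prior-volume shell, and replace it
    by a prior draw constrained to exceed the discarded likelihood."""
    rng = random.Random(seed)
    live = [prior_sample(rng) for _ in range(n_live)]
    ll = [loglike(x) for x in live]
    log_z = -math.inf
    discarded = []
    for i in range(n_iter):
        worst = min(range(n_live), key=lambda k: ll[k])
        l_star = ll[worst]
        discarded.append(l_star)
        # shell weight: the prior volume shrinks by exp(-1/n_live) per step
        log_w = -i / n_live + math.log(1.0 - math.exp(-1.0 / n_live))
        log_z = logaddexp(log_z, l_star + log_w)
        while True:                 # constrained step (rejection, not HMC)
            x = prior_sample(rng)
            lx = loglike(x)
            if lx > l_star:
                break
        live[worst], ll[worst] = x, lx
    return log_z, discarded

# toy problem: uniform prior on [0, 1], Gaussian log-likelihood peaked at 0.5
loglike = lambda x: -((x - 0.5) ** 2) / (2 * 0.1 ** 2)
log_z, discarded = nested_sampling(loglike, lambda rng: rng.random())
```

For this toy the analytic log-evidence is ln(0.1·√(2π)) ≈ −1.38, which the estimate should approach as the number of live points grows.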

  1. Hybrid nested sampling algorithm for Bayesian model selection applied to inverse subsurface flow problems

    Energy Technology Data Exchange (ETDEWEB)

    Elsheikh, Ahmed H., E-mail: aelsheikh@ices.utexas.edu [Institute for Computational Engineering and Sciences (ICES), University of Texas at Austin, TX (United States); Institute of Petroleum Engineering, Heriot-Watt University, Edinburgh EH14 4AS (United Kingdom); Wheeler, Mary F. [Institute for Computational Engineering and Sciences (ICES), University of Texas at Austin, TX (United States); Hoteit, Ibrahim [Department of Earth Sciences and Engineering, King Abdullah University of Science and Technology (KAUST), Thuwal (Saudi Arabia)

    2014-02-01

    A Hybrid Nested Sampling (HNS) algorithm is proposed for efficient Bayesian model calibration and prior model selection. The proposed algorithm combines the Nested Sampling (NS) algorithm, Hybrid Monte Carlo (HMC) sampling and gradient estimation using the Stochastic Ensemble Method (SEM). NS is an efficient sampling algorithm that can be used for Bayesian calibration and estimating the Bayesian evidence for prior model selection. Nested sampling has the advantage of computational feasibility. Within the nested sampling algorithm, a constrained sampling step is performed. For this step, we utilize HMC to reduce the correlation between successive sampled states. HMC relies on the gradient of the logarithm of the posterior distribution, which we estimate using a stochastic ensemble method based on an ensemble of directional derivatives. SEM only requires forward model runs; the simulator is used as a black box and no adjoint code is needed. The developed HNS algorithm is successfully applied for Bayesian calibration and prior model selection of several nonlinear subsurface flow problems.

  2. An approach to robot SLAM based on incremental appearance learning with omnidirectional vision

    Science.gov (United States)

    Wu, Hua; Qin, Shi-Yin

    2011-03-01

    Localisation and mapping with an omnidirectional camera becomes more difficult as the landmark appearances change dramatically in the omnidirectional image. With conventional techniques, it is difficult to match the features of the landmark with the template. We present a novel robot simultaneous localisation and mapping (SLAM) algorithm with an omnidirectional camera, which uses incremental landmark appearance learning to provide posterior probability distribution for estimating the robot pose under a particle filtering framework. The major contribution of our work is to represent the posterior estimation of the robot pose by incremental probabilistic principal component analysis, which can be naturally incorporated into the particle filtering algorithm for robot SLAM. Moreover, the innovative method of this article allows the adoption of the severe distorted landmark appearances viewed with omnidirectional camera for robot SLAM. The experimental results demonstrate that the localisation error is less than 1 cm in an indoor environment using five landmarks, and the location of the landmark appearances can be estimated within 5 pixels deviation from the ground truth in the omnidirectional image at a fairly fast speed.

  3. Power to the People! Meta-algorithmic modelling in applied data science

    NARCIS (Netherlands)

    Spruit, M.; Jagesar, R.

    2016-01-01

    This position paper first defines the research field of applied data science at the intersection of domain expertise, data mining, and engineering capabilities, with particular attention to analytical applications. We then propose a meta-algorithmic approach for applied data science with societal

  4. On randomized algorithms for numerical solution of applied Fredholm integral equations of the second kind

    Science.gov (United States)

    Voytishek, Anton V.; Shipilov, Nikolay M.

    2017-11-01

    In this paper, a systematization of numerical (computer-implemented) randomized functional algorithms for approximating the solution of a Fredholm integral equation of the second kind is carried out. Three types of such algorithms are distinguished: the projection, the mesh and the projection-mesh methods. The possibilities of using these algorithms for solving practically important problems are investigated in detail. The disadvantages of the mesh algorithms, related to the necessity of calculating values of the kernels of the integral equations at fixed points, are identified. In practice, these kernels have integrable singularities, and calculation of their values is impossible. Thus, for applied problems involving Fredholm integral equations of the second kind, it is expedient to use not the mesh but the projection and projection-mesh randomized algorithms.
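
    The flavor of a randomized treatment of a second-kind Fredholm equation can be shown with the classical von Neumann-Ulam random walk on a degenerate toy kernel k(x, y) ≡ 1 on [0, 1], where φ = f + λ∫₀¹φ(y)dy with constant f has the analytic constant solution φ = f/(1 − λ). This is only an illustration of the randomized idea, not one of the projection or mesh algorithms the paper systematizes.

```python
import random

def fredholm_mc(f, lam, n_walks=20000, seed=7):
    """Monte Carlo estimate of phi for phi = f + lam * mean(phi); the
    constant kernel k = 1 makes spatial bookkeeping unnecessary. Each
    walk survives a step with probability q, its weight scaled by lam/q."""
    rng = random.Random(seed)
    q = 0.5
    total = 0.0
    for _ in range(n_walks):
        w, score = 1.0, 0.0
        while True:
            score += w * f          # accumulate the Neumann-series term
            if rng.random() >= q:   # absorbed: terminate this walk
                break
            w *= lam / q            # unbiased weight for the next term
        total += score
    return total / n_walks

est = fredholm_mc(f=1.0, lam=0.5)   # analytic solution: 1 / (1 - 0.5) = 2
```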

  5. Expeditious 3D Poisson-Vlasov algorithm applied to ion extraction from a plasma

    International Nuclear Information System (INIS)

    Whealton, J.H.; McGaffey, R.W.; Meszaros, P.S.

    1983-01-01

    A new 3D Poisson-Vlasov algorithm is under development which differs from a previous algorithm, referenced in this paper, in two respects: the mesh lines are Cartesian, and the Poisson equation is solved iteratively. The resulting algorithm has been used to examine the same boundary value problem as considered in the earlier algorithm, except that the number of nodes is 2 times greater. The same physical results were obtained, except that the computational time was reduced by a factor of 60 and the memory requirement by a factor of 10. At present, this algorithm restricts Neumann boundary conditions to orthogonal planes lying along mesh lines; no such restriction applies to Dirichlet boundaries. In the resulting emittance diagram, points lying on the y = 0 line start on the axis of symmetry and those near the y = 1 line start near the slot end
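
    Solving the Poisson equation iteratively on a Cartesian mesh, as the abstract describes, can be illustrated with plain Jacobi relaxation in 2D; the paper does not specify its iteration scheme, so this is a generic sketch with illustrative sizes.

```python
import numpy as np

def jacobi_poisson(u, rhs, iters=500):
    """Plain Jacobi relaxation for -(u_xx + u_yy) = rhs on a unit-spaced
    Cartesian grid; Dirichlet values are held fixed in u's border."""
    for _ in range(iters):
        # the right-hand side is evaluated from the previous iterate
        u[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1]
                                + u[1:-1, :-2] + u[1:-1, 2:]
                                + rhs[1:-1, 1:-1])
    return u

n = 10
x = np.linspace(0.0, 1.0, n)
u = np.zeros((n, n))
u[0, :] = u[-1, :] = x          # boundary condition u = x on top/bottom
u[:, 0], u[:, -1] = 0.0, 1.0    # consistent left/right boundaries
u = jacobi_poisson(u, np.zeros((n, n)))
```

With zero source and these boundaries the exact solution is the linear field u(x, y) = x, so the iterate should converge to x along every row.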

  6. Using Symmetrical Regions of Interest to Improve Visual SLAM

    NARCIS (Netherlands)

    Kootstra, Geert; Schomaker, Lambertus

    2009-01-01

    Simultaneous Localization and Mapping (SLAM) based on visual information is a challenging problem. One of the main problems with visual SLAM is to find good quality landmarks, that can be detected despite noise and small changes in viewpoint. Many approaches use SIFT interest points as visual

  7. PSO-Based Algorithm Applied to Quadcopter Micro Air Vehicle Controller Design

    Directory of Open Access Journals (Sweden)

    Huu-Khoa Tran

    2016-09-01

    Full Text Available Due to the rapid development of science and technology in recent times, many effective controllers have been designed and applied successfully to complicated systems. The significant task of controller design is to determine optimized control gains in a short period of time. With this purpose in mind, a combination of the particle swarm optimization (PSO)-based algorithm and the evolutionary programming (EP) algorithm is introduced in this article. The benefit of this integrated algorithm is the creation of new best parameters for control design schemes. The proposed controller designs are then demonstrated to have the best performance for nonlinear micro air vehicle models.
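
    A sketch of PSO-based gain tuning on a toy first-order plant with a PI controller; the plant, bounds and PSO constants are illustrative assumptions, not the quadcopter model or the PSO-EP hybrid of the paper.

```python
import random

def step_cost(gains, dt=0.05, steps=100):
    """Integral-squared-error of a PI-controlled first-order plant
    (x' = -x + u) tracking a unit step, via forward-Euler simulation."""
    kp, ki = gains
    x = integ = cost = 0.0
    for _ in range(steps):
        e = 1.0 - x
        integ += e * dt
        u = kp * e + ki * integ
        x += dt * (-x + u)
        cost += e * e * dt
    return cost

def pso(cost, bounds, n=20, iters=40, seed=5):
    """Standard global-best PSO minimizing cost over box-bounded gains."""
    rng = random.Random(seed)
    dim = len(bounds)
    X = [[rng.uniform(*bounds[d]) for d in range(dim)] for _ in range(n)]
    V = [[0.0] * dim for _ in range(n)]
    P = [x[:] for x in X]
    pc = [cost(x) for x in X]
    g = min(range(n), key=lambda i: pc[i])
    gbest, gcost = P[g][:], pc[g]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                V[i][d] = (0.7 * V[i][d] + 1.5 * r1 * (P[i][d] - X[i][d])
                           + 1.5 * r2 * (gbest[d] - X[i][d]))
                X[i][d] = min(max(X[i][d] + V[i][d], bounds[d][0]), bounds[d][1])
            c = cost(X[i])
            if c < pc[i]:
                P[i], pc[i] = X[i][:], c
                if c < gcost:
                    gbest, gcost = X[i][:], c
    return gbest, gcost

gains, J = pso(step_cost, [(0.0, 5.0), (0.0, 5.0)])
```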

  8. The gravitational attraction algorithm: a new metaheuristic applied to a nuclear reactor core design optimization problem

    International Nuclear Information System (INIS)

    Sacco, Wagner F.; Oliveira, Cassiano R.E. de

    2005-01-01

    A new metaheuristic called the 'Gravitational Attraction Algorithm' (GAA) is introduced in this article. It is an analogy with the gravitational force field, where a body attracts another proportionally to both masses and inversely to their distance. The GAA is a population-based algorithm where, first of all, the solutions are clustered using the Fuzzy Clustering Means (FCM) algorithm. Following that, the gravitational forces of the individuals in relation to each cluster are evaluated, and each individual or solution is displaced to the cluster with the greatest attractive force. Once inside this cluster, the solution receives small stochastic variations, performing a local exploration. Then the solutions are crossed over and the process starts all over again. The parameters required by the GAA are the 'diversity factor', which is used to create random diversity in a fashion similar to the genetic algorithm's mutation, and the number of clusters for the FCM. GAA is applied to the reactor core design optimization problem, which consists in adjusting several reactor cell parameters in order to minimize the average peak-factor in a 3-enrichment-zone reactor, considering operational restrictions. This problem was previously attacked by the canonical genetic algorithm (GA) and by a Niching Genetic Algorithm (NGA). The new metaheuristic is then compared to those two algorithms. The three algorithms are submitted to the same computational effort and GAA reaches the best results, showing its potential for other applications in the nuclear engineering field as, for instance, the nuclear core reload optimization problem. (author)

  9. Performances of the New Real Time Tsunami Detection Algorithm applied to tide gauges data

    Science.gov (United States)

    Chierici, F.; Embriaco, D.; Morucci, S.

    2017-12-01

    Real-time tsunami detection algorithms play a key role in any tsunami early warning system. We have developed a new algorithm for tsunami detection (TDA) based on real-time tide removal and real-time band-pass filtering of seabed pressure time series acquired by Bottom Pressure Recorders. The TDA greatly increases the tsunami detection probability, shortens the detection delay and enhances detection reliability with respect to the most widely used tsunami detection algorithm, while containing the computational cost. The algorithm is designed to be used also in autonomous early warning systems, with a set of input parameters and procedures which can be reconfigured in real time. We have also developed a methodology based on Monte Carlo simulations to test tsunami detection algorithms. Algorithm performance is estimated by defining and evaluating statistical parameters, namely the detection probability and the detection delay, which are functions of the tsunami amplitude and wavelength, and the rate of false alarms. In this work we present the performance of the TDA applied to tide gauge data. We have adapted the new tsunami detection algorithm and the Monte Carlo test methodology to tide gauges. Sea level data acquired by coastal tide gauges in different locations and environmental conditions have been used in order to consider real working scenarios in the test. We also present an application of the algorithm to the tsunami generated by the Tohoku earthquake on March 11th, 2011, using data recorded by several tide gauges scattered over the Pacific area.
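
    The detection idea can be caricatured on synthetic data: here a causal running mean stands in for the TDA's real-time tide removal and band-pass filter, and a fixed threshold flags the anomaly. Window length, threshold and the synthetic series are illustrative assumptions, not the published parameters.

```python
import numpy as np

def detect(p, n_win=120, thresh=0.1):
    """Return the index of the first sample whose deviation from a causal
    running mean (a crude tide estimate) exceeds thresh metres."""
    c = np.concatenate(([0.0], np.cumsum(p)))
    for i in range(len(p)):
        j = max(0, i - n_win + 1)
        tide = (c[i + 1] - c[j]) / (i + 1 - j)   # running-mean tide estimate
        if abs(p[i] - tide) > thresh:
            return i
    return None

dt = 15.0                                  # one sample every 15 s
t = np.arange(0, 4 * 3600, dt)             # 4 h of synthetic sea level
level = 5e-5 * t                           # slowly rising tide (linear)
level[(t >= 7200) & (t < 7800)] += 0.3     # 0.3 m tsunami-like step at t = 2 h
idx = detect(level)
```

The slow tidal trend stays within the running-mean tolerance, while the abrupt 0.3 m step is flagged on its first sample (index 480, i.e. t = 2 h).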

  10. Behavior Analysis of Novel Wearable Indoor Mapping System Based on 3D-SLAM.

    Science.gov (United States)

    Lagüela, Susana; Dorado, Iago; Gesto, Manuel; Arias, Pedro; González-Aguilera, Diego; Lorenzo, Henrique

    2018-03-02

    This paper presents a Wearable Prototype for indoor mapping developed by the University of Vigo. The system is based on a Velodyne LiDAR, acquiring points with 16 rays for a simplistic, low-density 3D representation of reality. With this, a Simultaneous Localization and Mapping (3D-SLAM) method is developed for the mapping and generation of 3D point clouds of scenarios deprived of GNSS signal. The quality of the presented system is validated through comparison with a commercial indoor mapping system, Zeb-Revo, from the company GeoSLAM, and with a terrestrial LiDAR, Faro Focus 3D X330. The first is considered a relative reference among mobile systems and is chosen because it uses the same mapping principle: SLAM techniques based on the Robot Operating System (ROS). The second is taken as ground truth for determining the final accuracy of the system with respect to reality. Results show that the accuracy of the system is mainly determined by the accuracy of the sensor, with little increment in the error introduced by the mapping algorithm.

  11. Behavior Analysis of Novel Wearable Indoor Mapping System Based on 3D-SLAM

    Directory of Open Access Journals (Sweden)

    Susana Lagüela

    2018-03-01

    Full Text Available This paper presents a Wearable Prototype for indoor mapping developed by the University of Vigo. The system is based on a Velodyne LiDAR, acquiring points with 16 rays for a simplistic, low-density 3D representation of reality. With this, a Simultaneous Localization and Mapping (3D-SLAM) method is developed for the mapping and generation of 3D point clouds of scenarios deprived of GNSS signal. The quality of the presented system is validated through comparison with a commercial indoor mapping system, Zeb-Revo, from the company GeoSLAM, and with a terrestrial LiDAR, Faro Focus 3D X330. The first is considered a relative reference among mobile systems and is chosen because it uses the same mapping principle: SLAM techniques based on the Robot Operating System (ROS). The second is taken as ground truth for determining the final accuracy of the system with respect to reality. Results show that the accuracy of the system is mainly determined by the accuracy of the sensor, with little increment in the error introduced by the mapping algorithm.

  12. A Multi-Sensorial Simultaneous Localization and Mapping (SLAM) System for Low-Cost Micro Aerial Vehicles in GPS-Denied Environments.

    Science.gov (United States)

    López, Elena; García, Sergio; Barea, Rafael; Bergasa, Luis M; Molinos, Eduardo J; Arroyo, Roberto; Romera, Eduardo; Pardo, Samuel

    2017-04-08

    One of the main challenges of aerial robot navigation in indoor or GPS-denied environments is position estimation using only the available onboard sensors. This paper presents a Simultaneous Localization and Mapping (SLAM) system that remotely calculates the pose and environment map of different low-cost commercial aerial platforms, whose onboard computing capacity is usually limited. The proposed system adapts to the sensory configuration of the aerial robot by integrating different state-of-the-art SLAM methods based on vision, laser and/or inertial measurements using an Extended Kalman Filter (EKF). To do this, a minimum onboard sensory configuration is assumed, consisting of a monocular camera, an Inertial Measurement Unit (IMU) and an altimeter. It improves the results of well-known monocular visual SLAM methods (LSD-SLAM and ORB-SLAM are tested and compared in this work) by solving scale ambiguity and providing additional information to the EKF. When payload and computational capabilities permit, a 2D laser sensor can be easily incorporated into the SLAM system, obtaining a local 2.5D map and a footprint estimation of the robot position that improves the 6D pose estimation through the EKF. We present experimental results with two different commercial platforms, and validate the system by applying it to their position control.

  13. Neural Network Blind Equalization Algorithm Applied in Medical CT Image Restoration

    Directory of Open Access Journals (Sweden)

    Yunshan Sun

    2013-01-01

    Full Text Available A new algorithm for iterative blind image restoration is presented in this paper. The method extends blind equalization from the one-dimensional signal case to images. A neural network blind equalization algorithm is derived and used in conjunction with Zigzag coding to restore the original image. As a result, the effect of the point spread function (PSF) can be removed by the proposed algorithm, which helps eliminate intersymbol interference (ISI). To obtain an estimate of the original image, the method optimizes a constant modulus blind equalization cost function applied to the grayscale CT image using the conjugate gradient method. A convergence analysis verifies the feasibility of the method theoretically; meanwhile, simulation results and evaluations with recent image quality metrics demonstrate the effectiveness of the proposed method.
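
    To make the constant modulus idea concrete, here is a minimal sketch of a constant-modulus blind equalizer in one dimension, trained by stochastic gradient descent rather than the paper's neural-network/conjugate-gradient formulation; the channel, step size and tap count are illustrative assumptions.

```python
import random

random.seed(1)
# BPSK source passed through a short FIR channel that introduces ISI.
src = [random.choice([-1.0, 1.0]) for _ in range(4000)]
h = [1.0, 0.4]
rx = [sum(h[k] * src[n - k] for k in range(len(h)) if n - k >= 0)
      for n in range(len(src))]

# Constant modulus algorithm: minimise E[(|y|^2 - R2)^2] over the
# equalizer taps w by stochastic gradient descent (R2 = 1 for BPSK).
L, mu, R2 = 5, 0.01, 1.0
w = [0.0] * L
w[0] = 1.0                                 # centre-spike initialisation

def equalize(w, n):
    x = rx[n - L + 1:n + 1][::-1]          # tap-delay-line window
    return x, sum(wi * xi for wi, xi in zip(w, x))

for n in range(L, len(rx)):
    x, y = equalize(w, n)
    err = y * (R2 - y * y)                 # CMA error term
    w = [wi + mu * err * xi for wi, xi in zip(w, x)]

# After convergence the equalized outputs cluster near the +-1 modulus.
residual = sum((abs(equalize(w, n)[1]) - 1.0) ** 2
               for n in range(len(rx) - 500, len(rx))) / 500
print(residual)   # much smaller than the unequalized value of about 0.16
```

    The same cost, applied to pixel sequences obtained by Zigzag scanning, is what the paper minimizes to undo the PSF blindly.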

  14. Self-adaptive global best harmony search algorithm applied to reactor core fuel management optimization

    International Nuclear Information System (INIS)

    Poursalehi, N.; Zolfaghari, A.; Minuchehr, A.; Valavi, K.

    2013-01-01

    Highlights: • SGHS enhanced the convergence rate of LPO using some improvements in comparison to basic HS and GHS. • The SGHS optimization algorithm obtained, on average, better fitness than the basic HS and GHS algorithms. • The outcome of the SGHS implementation in LPO reveals its flexibility, efficiency and reliability. - Abstract: The aim of this work is to apply the newly developed optimization algorithm, Self-adaptive Global best Harmony Search (SGHS), to PWR fuel management optimization. The SGHS algorithm includes some modifications relative to the basic Harmony Search (HS) and Global-best Harmony Search (GHS) algorithms, such as the dynamic adjustment of parameters. To demonstrate the ability of SGHS to find an optimal configuration of fuel assemblies, the basic HS and GHS algorithms have also been developed and investigated. For this purpose, the Self-adaptive Global best Harmony Search Nodal Expansion package (SGHSNE) has been developed, implementing the HS, GHS and SGHS optimization algorithms for the fuel management operation of nuclear reactor cores. This package uses a developed average-current nodal expansion code which solves the multigroup diffusion equation by employing the first and second orders of the Nodal Expansion Method (NEM) for two-dimensional hexagonal and rectangular geometries, respectively, with one node per fuel assembly. Loading pattern optimization was performed using the SGHSNE package for some test cases to demonstrate the SGHS algorithm's capability of converging to a near-optimal loading pattern. Results indicate that the convergence rate and reliability of the SGHS method are quite promising and that, practically, SGHS improves the quality of loading pattern optimization results relative to the HS and GHS algorithms. As a result, it has the potential to be used in other nuclear engineering optimization problems.
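
    As background, the basic harmony-search loop that SGHS builds on can be sketched as follows. This toy version minimises a sphere function with fixed HMCR/PAR/bandwidth, whereas the SGHS of the paper adapts those parameters dynamically during the run; all parameter values here are illustrative assumptions.

```python
import random

# Basic Harmony Search minimising the sphere function f(x) = sum(x_i^2).
def harmony_search(f, dim=5, hms=20, hmcr=0.9, par=0.3, bw=0.1,
                   lo=-5.0, hi=5.0, iters=20000, seed=42):
    rng = random.Random(seed)
    memory = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(hms)]
    scores = [f(h) for h in memory]
    for _ in range(iters):
        new = []
        for d in range(dim):
            if rng.random() < hmcr:                 # memory consideration
                v = memory[rng.randrange(hms)][d]
                if rng.random() < par:              # pitch adjustment
                    v += rng.uniform(-bw, bw)
            else:                                   # random selection
                v = rng.uniform(lo, hi)
            new.append(min(hi, max(lo, v)))
        s = f(new)
        worst = max(range(hms), key=lambda i: scores[i])
        if s < scores[worst]:                       # replace worst harmony
            memory[worst], scores[worst] = new, s
    return min(scores)

best = harmony_search(lambda x: sum(v * v for v in x))
print(best)   # near 0
```

    In the loading-pattern setting the continuous variables above are replaced by discrete fuel-assembly placements, and each fitness evaluation is one run of the nodal diffusion solver.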

  15. Slamming: Recent Progress in the Evaluation of Impact Pressures

    Science.gov (United States)

    Dias, Frédéric; Ghidaglia, Jean-Michel

    2018-01-01

    Slamming, the violent impact between a liquid and a solid, has long been known to be important in the ship hydrodynamics community. More recently, applications ranging from the transport of liquefied natural gas (LNG) in LNG carriers to the harvesting of wave energy with oscillating wave surge converters have led to renewed interest in the topic. The main reason for this renewed interest is that the extreme impact pressures generated during slamming can affect the integrity of the structures involved. Slamming fluid mechanics is challenging to describe, as much from an experimental viewpoint as from a numerical one, because of the large span of spatial and temporal scales involved. Even the physical mechanisms of slamming are challenging: What physical phenomena must be included in slamming models? An important issue concerns the practical modeling of slamming: Are there any simple models available? Are numerical models viable? What are the consequences for the design of structures? This article describes the loading processes involved in slamming, offers state-of-the-art results, and highlights unresolved issues worthy of further research.

  16. An Intuitive Dominant Test Algorithm of CP-nets Applied on Wireless Sensor Network

    Directory of Open Access Journals (Sweden)

    Liu Zhaowei

    2014-07-01

    Full Text Available A wireless sensor network consists of spatially distributed autonomous sensors and can be viewed as a multi-agent system in which each node is a single agent. Conditional Preference networks (CP-nets) are a qualitative tool for representing ceteris paribus ("all other things being equal") preference statements and have recently been a research hotspot in artificial intelligence. However, the algorithm and complexity of the strong dominance test with respect to binary-valued CP-nets have not been solved, and few researchers have addressed applications to other domains. In this paper, the strong dominance test and the application of CP-nets are studied in detail. Firstly, by constructing the induced graph of a CP-net and studying its properties, we conclude that the strong dominance test on binary-valued CP-nets is essentially a single-source shortest path problem, so it can be solved by an improved Dijkstra's algorithm. Secondly, we apply the above algorithm to the completeness of a wireless sensor network and design a completeness-judging algorithm based on the strong dominance test. Thirdly, we apply the algorithm to the routing problem on a wireless sensor network. Finally, we point out some interesting directions for future work.
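
    Since the record reduces the strong dominance test to a single-source shortest path computation on the induced graph, a standard Dijkstra sketch captures the computational core; the example graph below is hypothetical, not taken from the paper.

```python
import heapq

# Dijkstra's single-source shortest path, the primitive to which the
# record reduces the strong dominance test on a CP-net's induced graph.
def dijkstra(graph, source):
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float('inf')):
            continue                      # stale queue entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Hypothetical induced graph: nodes are outcomes, edges are worsening flips.
g = {'a': [('b', 1), ('c', 4)],
     'b': [('c', 2), ('d', 5)],
     'c': [('d', 1)]}
print(dijkstra(g, 'a'))   # {'a': 0, 'b': 1, 'c': 3, 'd': 4}
```

    In the CP-net setting, reachability of one outcome from another through such a graph is what certifies dominance.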

  17. Hybrid nested sampling algorithm for Bayesian model selection applied to inverse subsurface flow problems

    KAUST Repository

    Elsheikh, Ahmed H.

    2014-02-01

    A Hybrid Nested Sampling (HNS) algorithm is proposed for efficient Bayesian model calibration and prior model selection. The proposed algorithm combines the Nested Sampling (NS) algorithm, Hybrid Monte Carlo (HMC) sampling and gradient estimation using the Stochastic Ensemble Method (SEM). NS is an efficient sampling algorithm that can be used for Bayesian calibration and for estimating the Bayesian evidence for prior model selection, and it has the advantage of computational feasibility. Within the nested sampling algorithm, a constrained sampling step is performed. For this step, we utilize HMC to reduce the correlation between successively sampled states. HMC relies on the gradient of the logarithm of the posterior distribution, which we estimate using a stochastic ensemble method based on an ensemble of directional derivatives. SEM requires only forward model runs, so the simulator is used as a black box and no adjoint code is needed. The developed HNS algorithm is successfully applied to Bayesian calibration and prior model selection for several nonlinear subsurface flow problems. © 2013 Elsevier Inc.
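
    A minimal nested-sampling sketch illustrates the evidence estimate at the heart of the method. Here the likelihood-constrained step is done by naive rejection sampling from the prior (the paper's contribution is precisely to replace this step with HMC); the one-dimensional Gaussian problem and all constants are illustrative assumptions, with analytic evidence Z ≈ 0.1.

```python
import math, random

# Nested sampling estimate of the evidence Z = integral of L(x) p(x) dx
# for a standard-normal likelihood under a uniform prior on [-5, 5]
# (analytically Z is about 0.1).
random.seed(3)

def loglike(x):
    return -0.5 * x * x - 0.5 * math.log(2 * math.pi)

n_live = 100
prior = lambda: random.uniform(-5.0, 5.0)
live = [prior() for _ in range(n_live)]
logl = [loglike(x) for x in live]

z, x_prev = 0.0, 1.0
for i in range(1, 701):
    worst = min(range(n_live), key=lambda j: logl[j])
    x_i = math.exp(-i / n_live)              # estimated remaining prior volume
    z += math.exp(logl[worst]) * (x_prev - x_i)
    x_prev = x_i
    threshold = logl[worst]
    while True:                              # likelihood-constrained prior draw
        cand = prior()
        if loglike(cand) > threshold:
            break
    live[worst], logl[worst] = cand, loglike(cand)

z += x_prev * sum(math.exp(l) for l in logl) / n_live   # leftover mass
print(z)   # close to the analytic value of about 0.1
```

    The rejection step above becomes hopelessly inefficient in high dimensions, which is exactly why the HNS algorithm substitutes gradient-guided HMC moves.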

  18. Short Large-Amplitude Magnetic Structures (SLAMS) at Venus

    Science.gov (United States)

    Collinson, G. A.; Wilson, L. B.; Sibeck, D. G.; Shane, N.; Zhang, T. L.; Moore, T. E.; Coates, A. J.; Barabash, S.

    2012-01-01

    We present the first observation of magnetic fluctuations consistent with Short Large-Amplitude Magnetic Structures (SLAMS) in the foreshock of the planet Venus. Three monolithic magnetic field spikes were observed by Venus Express on 11 April 2009. The structures were approximately 1.5 to 11 s in duration, had magnetic compression ratios between approximately 3 and 6, and exhibited elliptical polarization. These characteristics are consistent with the SLAMS observed at Earth, Jupiter, and Comet Giacobini-Zinner, and we therefore hypothesize that SLAMS may be found at any celestial body with a foreshock.

  19. A Neuro-Fuzzy Multi Swarm FastSLAM Framework

    OpenAIRE

    Havangi, R.; Teshnehlab, M.; Nekoui, M. A.

    2010-01-01

    FastSLAM is a framework for simultaneous localization and mapping using a Rao-Blackwellized particle filter. In FastSLAM, a particle filter is used for estimating the mobile robot pose (position and orientation), and an Extended Kalman Filter (EKF) is used for estimating the feature locations. However, FastSLAM degenerates over time. This degeneracy is due to the fact that the particle set estimating the pose of the robot loses its diversity. One of the main reasons for losing particle diversity in FastSLA...

  20. A Fast and Robust Feature-Based Scan-Matching Method in 3D SLAM and the Effect of Sampling Strategies

    Directory of Open Access Journals (Sweden)

    Cihan Ulas

    2013-11-01

    Full Text Available Simultaneous localization and mapping (SLAM) plays an important role in fully autonomous systems when a GNSS (global navigation satellite system) is not available. Studies in both 2D indoor and 3D outdoor SLAM are based on the appearance of environments and utilize scan-matching methods to find the rigid-body transformation parameters between two consecutive scans. In this study, a fast and robust scan-matching method based on feature extraction is introduced. Since the method is based on the matching of certain geometric structures, such as plane segments, the outliers and noise in the point cloud are considerably suppressed. Therefore, the proposed scan-matching algorithm is more robust than conventional methods. Moreover, the registration time and the number of iterations are significantly reduced, since the number of matching points is efficiently decreased. As a scan-matching framework, an improved version of the normal distribution transform (NDT) is used. The probability density functions (PDFs) of the reference scan are generated as in the traditional NDT, and the feature extraction, based on stochastic plane detection, is applied only to the input scan. Using an experimental dataset from an outdoor environment, a university campus, we obtained satisfactory performance results. Moreover, the feature extraction part of the algorithm can be considered a special sampling strategy for scan matching and is compared to other sampling strategies, such as random sampling and grid-based sampling, the latter of which was first used in the NDT. Thus, this study also shows the effect of subsampling on the performance of the NDT.
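
    The NDT representation mentioned above can be sketched quickly: the reference scan is voxelised and each occupied cell stores the mean and covariance of its points, giving the per-cell Gaussians that the matcher later evaluates. The wall-like test scan and all parameters below are illustrative assumptions.

```python
import numpy as np

# Build the NDT representation of a reference scan: each occupied grid
# cell stores the mean and covariance of the points falling inside it.
def ndt_cells(points, cell_size=1.0, min_pts=3):
    cells = {}
    for p, key in zip(points, np.floor(points / cell_size).astype(int)):
        cells.setdefault(tuple(key), []).append(p)
    stats = {}
    for key, pts in cells.items():
        if len(pts) < min_pts:
            continue                   # too few points for a stable covariance
        pts = np.asarray(pts)
        mu = pts.mean(axis=0)
        cov = np.cov(pts.T) + 1e-6 * np.eye(pts.shape[1])   # regularised
        stats[key] = (mu, cov)
    return stats

rng = np.random.default_rng(0)
# Hypothetical 2D scan: a noisy wall segment along y = 2.
pts = np.column_stack([rng.uniform(0.0, 3.0, 200),
                       2.0 + 0.05 * rng.standard_normal(200)])
stats = ndt_cells(pts, cell_size=1.0)
print(len(stats), "occupied cells")    # a handful of cells along the wall
```

    Scan matching then scores a candidate transformation by evaluating the transformed input points under these Gaussians; the paper's variant feeds only plane-segment features of the input scan into that scoring step.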

  1. Bottom Slamming on Heaving Point Absorber Wave Energy Devices

    DEFF Research Database (Denmark)

    De Backer, Griet; Vantorre, Marc; Frigaard, Peter

    2010-01-01

    Oscillating point absorber buoys may rise out of the water and be subjected to bottom slamming upon re-entering the water. Numerical simulations are performed to estimate the power absorption, the impact velocities and the corresponding slamming forces for various slamming constraints. Three buoy shapes are considered: a hemisphere and two conical shapes with deadrise angles of 30° and 45°, each with a waterline diameter of 5 m. The simulations indicate that the risk of rising out of the water is largely dependent on the buoy draft and sea state. Although associated with power losses, emergence occurrence probabilities can be significantly reduced by adapting the control parameters. The magnitude of the slamming load is severely influenced by the buoy shape: the ratio between the peak impact load on the hemisphere and that on the 45° cone is approximately 2, whereas the power absorption is only 4...

  2. Signaling lymphocytic activation molecules Slam and cancers: friends or foes?

    Science.gov (United States)

    Fouquet, Gregory; Marcq, Ingrid; Debuysscher, Véronique; Bayry, Jagadeesh; Rabbind Singh, Amrathlal; Bengrine, Abderrahmane; Nguyen-Khac, Eric; Naassila, Mickael; Bouhlal, Hicham

    2018-03-23

    Signaling Lymphocytic Activation Molecule (SLAM) family receptors were initially described in immune cells. These receptors recruit both activating and inhibitory SH2-domain-containing proteins through their Immunoreceptor Tyrosine-based Switch Motifs (ITSMs). Accumulating evidence suggests that the members of this family are intimately involved in different physiological and pathophysiological events, such as the regulation of immune responses and the entry pathways of certain viruses. Recently, other functions of SLAM, principally in the pathophysiology of neoplastic transformation, have also been deciphered. These new findings may prompt SLAM to be considered as new tumor markers, diagnostic tools or potential therapeutic targets for controlling tumor progression. In this review, we summarize the major observations describing the implications and features of SLAM in oncology and discuss the therapeutic potential attributed to these molecules.

  3. Creating spatial awareness in unmanned ground robots using SLAM

    Indian Academy of Sciences (India)

    -SLAM assumes that at each observation, the robot observes at least two landmarks f1 and .... eight further divisions of its volume, it allows multi-resolution planning. .... The graphical model perspective for probabilistic inference is used. In the ...

  4. EKF - SLAM based navigation system for an AUV

    CSIR Research Space (South Africa)

    Matsebe, O

    2008-11-01

    Full Text Available SLAM is the process by which a robot builds a map of the environment and concurrently localises itself within that map. Solving this problem will render the robot truly autonomous...

  5. A reverse engineering algorithm for neural networks, applied to the subthalamopallidal network of basal ganglia.

    Science.gov (United States)

    Floares, Alexandru George

    2008-01-01

    Modeling neural networks with ordinary differential equations systems is a sensible approach, but also very difficult. This paper describes a new algorithm based on linear genetic programming which can be used to reverse engineer neural networks. The RODES algorithm automatically discovers the structure of the network, including neural connections, their signs and strengths, estimates its parameters, and can even be used to identify the biophysical mechanisms involved. The algorithm is tested on simulated time series data, generated using a realistic model of the subthalamopallidal network of basal ganglia. The resulting ODE system is highly accurate, and results are obtained in a matter of minutes. This is because the problem of reverse engineering a system of coupled differential equations is reduced to one of reverse engineering individual algebraic equations. The algorithm allows the incorporation of common domain knowledge to restrict the solution space. To our knowledge, this is the first time a realistic reverse engineering algorithm based on linear genetic programming has been applied to neural networks.

  6. Underground localization using dual magnetic field sequence measurement and pose graph SLAM for directional drilling

    International Nuclear Information System (INIS)

    Park, Byeolteo; Myung, Hyun

    2014-01-01

    With the development of unconventional gas extraction, directional drilling technology has become more advanced. Underground localization is the key technique in directional drilling for real-time path following and system control. However, there are problems such as vibration, disconnection from external infrastructure, and magnetic field distortion. Conventional methods cannot solve these problems in real time or in varied environments. In this paper, a novel underground localization algorithm is introduced that uses re-measurement of the magnetic field sequence together with pose graph SLAM (simultaneous localization and mapping). The proposed algorithm exploits the property of the drilling system that the body passes along its previously traversed path. By comparing the recorded measurements from one magnetic sensor with the current re-measurements from another magnetic sensor, the proposed algorithm predicts the pose of the drilling system. The performance of the algorithm is validated through simulations and experiments. (paper)

  7. Underground localization using dual magnetic field sequence measurement and pose graph SLAM for directional drilling

    Science.gov (United States)

    Park, Byeolteo; Myung, Hyun

    2014-12-01

    With the development of unconventional gas extraction, directional drilling technology has become more advanced. Underground localization is the key technique in directional drilling for real-time path following and system control. However, there are problems such as vibration, disconnection from external infrastructure, and magnetic field distortion. Conventional methods cannot solve these problems in real time or in varied environments. In this paper, a novel underground localization algorithm is introduced that uses re-measurement of the magnetic field sequence together with pose graph SLAM (simultaneous localization and mapping). The proposed algorithm exploits the property of the drilling system that the body passes along its previously traversed path. By comparing the recorded measurements from one magnetic sensor with the current re-measurements from another magnetic sensor, the proposed algorithm predicts the pose of the drilling system. The performance of the algorithm is validated through simulations and experiments.
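
    The core re-measurement idea, comparing the sequence recorded by one magnetic sensor with the later re-measurement by another sensor on the same path, can be sketched as a one-dimensional sequence alignment; the synthetic field profile, noise levels and lag below are illustrative assumptions, not the paper's data.

```python
import random

# Recover the along-path offset between a recorded magnetic-field
# sequence and its re-measurement by sum-of-squared-difference matching.
def best_offset(recorded, remeasured, max_lag):
    def ssd(lag):
        pairs = zip(recorded[lag:], remeasured)
        return sum((a - b) ** 2 for a, b in pairs)
    return min(range(max_lag + 1), key=ssd)

random.seed(7)
field = [random.gauss(0, 1) for _ in range(300)]     # field profile along path
noise = lambda: random.gauss(0, 0.05)
recorded = [f + noise() for f in field]
true_lag = 40                                        # sensor separation in samples
remeasured = [field[i + true_lag] + noise() for i in range(200)]
print(best_offset(recorded, remeasured, 80))   # recovers the lag, 40
```

    In the full method, such matched offsets become loop-closure-like constraints in the pose graph, which the SLAM back end then optimizes.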

  8. Two Measures for Enhancing Data Association Performance in SLAM

    Directory of Open Access Journals (Sweden)

    Wu Zhou

    2014-01-01

    Full Text Available Data association is one of the key problems in the SLAM community, and data association failures may cause SLAM results to diverge. Data association performance in SLAM is affected both by the data association method and by the sensor information. Two measures for handling sensor information are introduced herein to enhance data association performance in SLAM. For the first measure, a truncating strategy that uses a limited set of features, rather than all matched features, is employed for the observation update; these features are selected according to an information variable. The truncating strategy lowers the effect of falsely matched features. For the other measure, a special rejection mechanism is designed to discard suspect observations: when the predicted robot pose differs markedly from the updated robot pose, all sensor information observed at that moment is discarded. The rejection mechanism aims at eliminating accidental sensor information. Experimental results indicate that the introduced measures perform well in improving the stability of data association in SLAM, making them valuable for real SLAM applications.
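
    As context for these measures, a common way to validate individual pairings in SLAM data association is an innovation gate: a chi-square test on the Mahalanobis distance of the innovation. The sketch below illustrates that standard gate only; the paper's information variable and pose-based rejection mechanism are not reproduced, and the numbers are illustrative.

```python
import numpy as np

# Standard innovation gating used in SLAM data association: accept a
# feature-observation pairing only if the Mahalanobis distance of the
# innovation falls inside a chi-square gate.
def gate(z, z_pred, S, gate_threshold=5.99):   # 95% gate for 2-D innovations
    nu = z - z_pred                            # innovation
    d2 = float(nu @ np.linalg.inv(S) @ nu)     # squared Mahalanobis distance
    return d2 <= gate_threshold

S = np.array([[0.10, 0.01],
              [0.01, 0.05]])                   # innovation covariance
z_pred = np.array([2.0, 1.0])
print(gate(np.array([2.1, 0.9]), z_pred, S))   # True: within the gate
print(gate(np.array([3.5, 2.0]), z_pred, S))   # False: outlier match
```

    The paper's two measures act one level above this gate, deciding which gated matches to trust for the update and when to discard a whole scan.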

  9. A Fast Robot Identification and Mapping Algorithm Based on Kinect Sensor

    Directory of Open Access Journals (Sweden)

    Liang Zhang

    2015-08-01

    Full Text Available The Internet of Things (IoT) is driving innovation in an ever-growing set of application domains, such as intelligent processing for autonomous robots. For an autonomous robot, one grand challenge is how to sense its surrounding environment effectively. Simultaneous Localization and Mapping with an RGB-D Kinect camera sensor on the robot, called RGB-D SLAM, has been developed for this purpose, but some technical challenges must be addressed. Firstly, the efficiency of the algorithm cannot satisfy real-time requirements; secondly, the accuracy of the algorithm is unacceptable. In order to address these challenges, this paper proposes a set of novel improvements as follows. Firstly, the Oriented FAST and Rotated BRIEF (ORB) method is used for feature detection and descriptor extraction. Secondly, a bidirectional Fast Library for Approximate Nearest Neighbors (FLANN) k-Nearest Neighbor (KNN) algorithm is applied to feature matching. Then, an improved RANdom SAmple Consensus (RANSAC) estimation method is adopted for the motion transformation. In the meantime, high-precision Generalized Iterative Closest Point (GICP) is utilized to register the point clouds in the motion transformation optimization. To improve the accuracy of SLAM, the reduced dynamic covariance scaling (DCS) algorithm is formulated as a global optimization problem under the g2o framework. The effectiveness of the improved algorithm has been verified by testing on standard data and comparing with the ground truth obtained from Freiburg University's datasets. A Dr Robot X80 equipped with a Kinect camera is also deployed in a building corridor to verify the correctness of the improved RGB-D SLAM algorithm. These experiments show that the proposed algorithm achieves higher processing speed and better accuracy.

  10. A Fast Robot Identification and Mapping Algorithm Based on Kinect Sensor.

    Science.gov (United States)

    Zhang, Liang; Shen, Peiyi; Zhu, Guangming; Wei, Wei; Song, Houbing

    2015-08-14

    The Internet of Things (IoT) is driving innovation in an ever-growing set of application domains, such as intelligent processing for autonomous robots. For an autonomous robot, one grand challenge is how to sense its surrounding environment effectively. Simultaneous Localization and Mapping with an RGB-D Kinect camera sensor on the robot, called RGB-D SLAM, has been developed for this purpose, but some technical challenges must be addressed. Firstly, the efficiency of the algorithm cannot satisfy real-time requirements; secondly, the accuracy of the algorithm is unacceptable. In order to address these challenges, this paper proposes a set of novel improvements as follows. Firstly, the Oriented FAST and Rotated BRIEF (ORB) method is used for feature detection and descriptor extraction. Secondly, a bidirectional Fast Library for Approximate Nearest Neighbors (FLANN) k-Nearest Neighbor (KNN) algorithm is applied to feature matching. Then, an improved RANdom SAmple Consensus (RANSAC) estimation method is adopted for the motion transformation. In the meantime, high-precision Generalized Iterative Closest Point (GICP) is utilized to register the point clouds in the motion transformation optimization. To improve the accuracy of SLAM, the reduced dynamic covariance scaling (DCS) algorithm is formulated as a global optimization problem under the g2o framework. The effectiveness of the improved algorithm has been verified by testing on standard data and comparing with the ground truth obtained from Freiburg University's datasets. A Dr Robot X80 equipped with a Kinect camera is also deployed in a building corridor to verify the correctness of the improved RGB-D SLAM algorithm. These experiments show that the proposed algorithm achieves higher processing speed and better accuracy.
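
    The matching stage (binary descriptors, KNN search, and a bidirectional check) can be illustrated without any library: the sketch below matches toy 16-bit binary descriptors by Hamming distance with Lowe's ratio test and a mutual-best (bidirectional) criterion. It is an illustration of the idea only; the paper itself uses full ORB descriptors with FLANN.

```python
# Binary descriptors are compared by Hamming distance, each query keeps
# its two nearest neighbours (KNN), a ratio test discards ambiguous
# matches, and the bidirectional check keeps only mutually-best pairs.
def hamming(a, b):
    return bin(a ^ b).count('1')

def knn2(query, train):
    order = sorted(range(len(train)), key=lambda j: hamming(query, train[j]))
    return order[0], order[1]              # indices of 2 nearest neighbours

def match(desc1, desc2, ratio=0.75):
    matches = []
    for i, q in enumerate(desc1):
        j, j2 = knn2(q, desc2)
        if hamming(q, desc2[j]) < ratio * hamming(q, desc2[j2]):
            # Bidirectional check: i must also be the best match for j.
            k, _ = knn2(desc2[j], desc1)
            if k == i:
                matches.append((i, j))
    return matches

# Hypothetical 16-bit descriptors; desc2 are noisy copies of desc1.
desc1 = [0b1010101010101010, 0b1111000011110000, 0b0000111100001111]
desc2 = [0b1010101010101011,                     # desc1[0], 1 bit flipped
         0b1111000011110001,                     # desc1[1], 1 bit flipped
         0b0000111100001110]                     # desc1[2], 1 bit flipped
print(match(desc1, desc2))   # [(0, 0), (1, 1), (2, 2)]
```

    The surviving correspondences are what the RANSAC stage then uses to estimate the inter-frame motion.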

  11. A Comparative Study of Improved Artificial Bee Colony Algorithms Applied to Multilevel Image Thresholding

    Directory of Open Access Journals (Sweden)

    Kanjana Charansiriphaisan

    2013-01-01

    Full Text Available Multilevel thresholding is a highly useful tool for image segmentation. Otsu's method, a common exhaustive search for finding optimal thresholds, involves a high computational cost. There has been much recent research into meta-heuristic search in the optimization literature. This paper analyses and discusses a family of artificial bee colony algorithms, namely the standard ABC, ABC/best/1, ABC/best/2, IABC/best/1, IABC/rand/1 and CABC, together with some particle swarm optimization-based algorithms, for searching for multilevel thresholds. The strategy by which an onlooker bee selects an employed bee was modified to serve our purposes. The metrics used to compare the algorithms are the maximum number of function calls, the success rate and the success performance, with ranking performed by Friedman ranks. The experimental results showed that IABC/best/1 outperformed the other techniques when all of them were applied to multilevel image thresholding. Furthermore, the experiments confirmed that IABC/best/1 is a simple, general and high-performance algorithm.
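
    For reference, the objective that all of these swarm algorithms optimize is Otsu's between-class variance. An exhaustive two-threshold version over a toy histogram looks as follows; the meta-heuristics exist precisely because this search grows combinatorially with the number of thresholds. The histogram is illustrative.

```python
from itertools import combinations

# Exhaustive two-threshold Otsu: maximise the between-class variance
# over all (t1, t2) threshold pairs.
def otsu_multilevel(hist, k=2):
    total = sum(hist)
    prob = [h / total for h in hist]
    mu_total = sum(i * p for i, p in enumerate(prob))

    def between_class_variance(ths):
        bounds = [0] + list(ths) + [len(hist)]
        var = 0.0
        for lo, hi in zip(bounds, bounds[1:]):
            w = sum(prob[lo:hi])            # class probability mass
            if w == 0:
                continue
            mu = sum(i * prob[i] for i in range(lo, hi)) / w
            var += w * (mu - mu_total) ** 2
        return var

    return max(combinations(range(1, len(hist)), k),
               key=between_class_variance)

# Toy 16-bin histogram with three clear modes around bins 2, 8 and 13.
hist = [1, 8, 20, 8, 1, 0, 1, 9, 22, 9, 1, 0, 7, 18, 7, 1]
print(otsu_multilevel(hist, k=2))   # thresholds in the gaps between modes
```

    An ABC variant replaces the exhaustive `combinations` scan with a population of candidate threshold vectors that are perturbed and selected by this same fitness.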

  12. Performance Evaluation of Various STL File Mesh Refining Algorithms Applied for FDM-RP Process

    Science.gov (United States)

    Ledalla, Siva Rama Krishna; Tirupathi, Balaji; Sriram, Venkatesh

    2018-06-01

    Layered manufacturing machines use the stereolithography (STL) file to build parts. When a curved surface is converted from a computer-aided design (CAD) file to STL, geometric distortion and chordal error result. Parts manufactured with this file might not satisfy geometric dimensioning and tolerancing requirements because of the approximated geometry. Current algorithms built into CAD packages have export options to globally reduce this distortion, which leads to an increase in file size and pre-processing time. In this work, different mesh subdivision algorithms are applied to the STL file of a part with complex geometric features using MeshLab software. The mesh subdivision algorithms considered in this work are the modified butterfly subdivision technique, the Loop subdivision technique and the general triangular midpoint subdivision technique. A comparative study is made with respect to volume and build time using the above techniques. It is found that the triangular midpoint subdivision algorithm is the most suitable for the geometry under consideration. Only the wheel-cap part is then manufactured on a Stratasys MOJO FDM machine, and its surface roughness is measured on a Talysurf surface roughness tester.
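
    Of the three techniques, the triangular midpoint scheme is the simplest to sketch: each facet is split at its edge midpoints into four coplanar triangles, refining the mesh without moving any vertices (butterfly and Loop schemes additionally reposition vertices to smooth the surface). The single-triangle example is illustrative.

```python
# Triangular midpoint (1-to-4) subdivision of an STL-style triangle soup.
def midpoint(a, b):
    return tuple((x + y) / 2.0 for x, y in zip(a, b))

def subdivide(triangles):
    out = []
    for a, b, c in triangles:
        ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
        # One corner triangle per original vertex, plus the central one.
        out += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
    return out

tri = [((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0))]
once = subdivide(tri)
twice = subdivide(once)
print(len(once), len(twice))   # 4 16
```

    Because the vertices are untouched, the facet count quadruples per pass while the enclosed geometry, and hence the part volume, is preserved exactly.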

  13. Volume reconstruction optimization for tomo-PIV algorithms applied to experimental data

    Science.gov (United States)

    Martins, Fabio J. W. A.; Foucaut, Jean-Marc; Thomas, Lionel; Azevedo, Luis F. A.; Stanislas, Michel

    2015-08-01

    Tomographic PIV is a three-component volumetric velocity measurement technique based on the tomographic reconstruction of a particle distribution imaged by multiple camera views. In essence, the performance and accuracy of this technique are highly dependent on the parametric adjustment and the reconstruction algorithm used. Although synthetic data have been widely employed to optimize experiments, the resulting reconstructed volumes might not have optimal quality. The purpose of the present study is to offer quality indicators that can be applied to data samples in order to improve the quality of the velocity results obtained by the tomo-PIV technique. The proposed methodology can potentially lead to a significant reduction in the time required to optimize a tomo-PIV reconstruction, while also leading to better-quality velocity results. Tomo-PIV data provided by a six-camera turbulent boundary-layer experiment were used to optimize the reconstruction algorithms according to this methodology. Velocity statistics obtained by the optimized BIMART, SMART and MART algorithms were compared with hot-wire anemometer data, and velocity measurement uncertainties were computed. Results indicated that the BIMART and SMART algorithms produced reconstructed volumes of quality equivalent to standard MART, with the benefit of reduced computational time.

  14. Deconvolution algorithms applied in ultrasonics; Methodes de deconvolution en echographie ultrasonore

    Energy Technology Data Exchange (ETDEWEB)

    Perrot, P

    1993-12-01

    In a complete system for the acquisition and processing of ultrasonic signals, it is often necessary at some stage to use processing tools to remove the influence of the different elements of that system. By that means, the final quality of the signals, in terms of resolution, is improved. Two main characteristics of ultrasonic signals make this task difficult. Firstly, the signals generated by transducers are very often non-minimum phase, and classical deconvolution algorithms are unable to deal with this characteristic. Secondly, depending on the medium, the shape of the propagating pulse evolves, so the spatial invariance assumption often used in classical deconvolution algorithms is rarely valid. Many classical algorithms, parametric and non-parametric, have been investigated: Wiener-type methods, adaptive predictive techniques, the Oldenburg technique in the frequency domain, and minimum variance deconvolution. All the algorithms were first tested on simulated data, and one specific experimental set-up was also analysed, for which simulated and real data were produced. This set-up demonstrated the value of applying deconvolution, in terms of the achieved resolution. (author). 32 figs., 29 refs.
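
    Among the methods listed, the Wiener-type approach is easy to sketch in the frequency domain: X_hat = conj(H)·Y / (|H|² + q), where the regularisation q trades resolution against noise amplification. The pulse, reflector positions and noise level below are illustrative assumptions, not the report's data.

```python
import numpy as np

# Frequency-domain Wiener deconvolution of a simulated ultrasonic trace.
def wiener_deconvolve(y, h, q=1e-2):
    n = len(y)
    H = np.fft.fft(h, n)
    Y = np.fft.fft(y)
    X = np.conj(H) * Y / (np.abs(H) ** 2 + q)   # regularised inverse filter
    return np.real(np.fft.ifft(X))

rng = np.random.default_rng(0)
x = np.zeros(256)
x[[60, 140]] = [1.0, 0.7]                       # sparse reflectors
t = np.arange(21)
pulse = np.exp(-0.5 * ((t - 10) / 2.5) ** 2)    # smooth echo pulse
y = np.convolve(x, pulse)[:256] + 0.01 * rng.standard_normal(256)
x_hat = wiener_deconvolve(y, pulse, q=1e-2)
print(int(np.argmax(x_hat)))   # strongest recovered reflector near index 60
```

    The report's central difficulty appears here directly: if the pulse is non-minimum phase or changes with depth, a single fixed `h` no longer matches the data and this simple scheme degrades.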

  15. Volume reconstruction optimization for tomo-PIV algorithms applied to experimental data

    International Nuclear Information System (INIS)

    Martins, Fabio J W A; Foucaut, Jean-Marc; Stanislas, Michel; Thomas, Lionel; Azevedo, Luis F A

    2015-01-01

    Tomographic PIV is a three-component volumetric velocity measurement technique based on the tomographic reconstruction of a particle distribution imaged by multiple camera views. In essence, the performance and accuracy of this technique are highly dependent on the parametric adjustment and the reconstruction algorithm used. Although synthetic data have been widely employed to optimize experiments, the resulting reconstructed volumes might not have optimal quality. The purpose of the present study is to offer quality indicators that can be applied to data samples in order to improve the quality of the velocity results obtained by the tomo-PIV technique. The proposed methodology can potentially lead to a significant reduction in the time required to optimize a tomo-PIV reconstruction, while also leading to better-quality velocity results. Tomo-PIV data provided by a six-camera turbulent boundary-layer experiment were used to optimize the reconstruction algorithms according to this methodology. Velocity statistics obtained by the optimized BIMART, SMART and MART algorithms were compared with hot-wire anemometer data, and velocity measurement uncertainties were computed. Results indicated that the BIMART and SMART algorithms produced reconstructed volumes of quality equivalent to standard MART, with the benefit of reduced computational time. (paper)

  16. A necessary condition for applying MUSIC algorithm in limited-view inverse scattering problem

    International Nuclear Information System (INIS)

    Park, Taehoon; Park, Won-Kwang

    2015-01-01

    It is well known from various numerical simulation results that the MUltiple SIgnal Classification (MUSIC) algorithm can be applied to limited-view inverse scattering problems; however, the application has been somewhat heuristic. In this contribution, we identify a necessary condition for MUSIC imaging of a collection of small, perfectly conducting cracks. This is based on the fact that the MUSIC imaging functional can be represented as an infinite series of Bessel functions of integer order of the first kind. Numerical experiments with noisy synthetic data support our investigation. (paper)

  17. APPLYING ARTIFICIAL NEURAL NETWORK OPTIMIZED BY FIREWORKS ALGORITHM FOR STOCK PRICE ESTIMATION

    Directory of Open Access Journals (Sweden)

    Khuat Thanh Tung

    2016-04-01

    Full Text Available Stock prediction is the task of determining the future value of a company's stock traded on an exchange. It plays a crucial role in raising the profit gained by firms and investors. Over the past few years, many methods have been developed, with much of the effort focused on machine learning frameworks, which have achieved promising results. In this paper, an approach based on an Artificial Neural Network (ANN) optimized by the Fireworks algorithm, with data preprocessing by the Haar wavelet, is applied to estimate stock prices. The system was trained and tested with real data from various companies collected from Yahoo Finance. The obtained results are encouraging.
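
    The Haar preprocessing step is simple enough to sketch directly: one transform level splits the series into scaled pairwise averages (approximation) and differences (detail), and zeroing the detail before inverting yields a smoothed series. The price series below is illustrative.

```python
import math

# One level of the Haar wavelet transform and its inverse.
def haar_level(x):
    assert len(x) % 2 == 0
    s = math.sqrt(2.0)
    approx = [(a + b) / s for a, b in zip(x[::2], x[1::2])]
    detail = [(a - b) / s for a, b in zip(x[::2], x[1::2])]
    return approx, detail

def haar_inverse(approx, detail):
    s = math.sqrt(2.0)
    x = []
    for a, d in zip(approx, detail):
        x += [(a + d) / s, (a - d) / s]
    return x

prices = [10.0, 10.2, 10.1, 10.5, 10.4, 10.3, 10.6, 10.8]
approx, detail = haar_level(prices)
# Zeroing the detail coefficients gives a denoised reconstruction.
smoothed = haar_inverse(approx, [0.0] * len(detail))
print([round(v, 2) for v in smoothed])
# [10.1, 10.1, 10.3, 10.3, 10.35, 10.35, 10.7, 10.7]
```

    Feeding such a smoothed series to the network suppresses high-frequency noise in the raw prices before the Fireworks-optimized ANN fits the trend.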

  18. A necessary condition for applying MUSIC algorithm in limited-view inverse scattering problem

    Science.gov (United States)

    Park, Taehoon; Park, Won-Kwang

    2015-09-01

    Various results of numerical simulations have shown that the MUltiple SIgnal Classification (MUSIC) algorithm can be applied to limited-view inverse scattering problems. However, the application has remained somewhat heuristic. In this contribution, we identify a necessary condition of MUSIC for imaging a collection of small, perfectly conducting cracks. This is based on the fact that the MUSIC imaging functional can be represented as an infinite series of integer-order Bessel functions of the first kind. Numerical experiments with noisy synthetic data support our investigation.

  19. Intelligent simulated annealing algorithm applied to the optimization of the main magnet for magnetic resonance imaging machine

    International Nuclear Information System (INIS)

    Sanchez Lopez, Hector

    2001-01-01

    This work describes an alternative Simulated Annealing algorithm applied to the design of the main magnet for a Magnetic Resonance Imaging machine. The algorithm uses a probabilistic radial basis neural network to classify the possible solutions before the objective function evaluation. This procedure reduces by up to 50% the number of iterations required to achieve the global maximum, compared with the standard SA algorithm. The algorithm was applied to design a 0.1050 Tesla four-coil resistive magnet, which produces a magnetic field 2.13 times more uniform than the solution given by SA. (author)
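For context, the standard simulated-annealing skeleton the paper modifies looks as follows; the paper's variant additionally screens candidate solutions with a neural-network classifier before the (expensive) objective evaluation. The quadratic objective and all parameters below are purely illustrative:

```python
import math
import random

def anneal(objective, x0, t0=1.0, cooling=0.95, steps=2000, seed=1):
    """Plain SA: accept improvements always, worse moves with Boltzmann probability."""
    random.seed(seed)
    x, fx, t = x0, objective(x0), t0
    best, fbest = x, fx
    for _ in range(steps):
        cand = x + random.uniform(-0.5, 0.5)       # random neighbour
        fc = objective(cand)
        if fc < fx or random.random() < math.exp((fx - fc) / max(t, 1e-12)):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling                               # geometric cooling schedule
    return best, fbest

best, fbest = anneal(lambda x: (x - 3.0) ** 2, x0=0.0)
print(best)  # approaches the optimum at x = 3
```

Screening candidates before calling `objective` is where the reported ~50% iteration saving comes from: rejected candidates never pay the evaluation cost.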

  20. Specific algorithm method of scoring the Clock Drawing Test applied in cognitively normal elderly

    Directory of Open Access Journals (Sweden)

    Liana Chaves Mendes-Santos

    Full Text Available The Clock Drawing Test (CDT) is an inexpensive, fast and easily administered measure of cognitive function, especially in the elderly. This instrument is a popular clinical tool widely used in screening for cognitive disorders and dementia. The CDT can be applied in different ways, and scoring procedures also vary. OBJECTIVE: The aims of this study were to analyze the performance of the elderly on the CDT and to evaluate the inter-rater reliability of the CDT scored by a specific algorithm method adapted from Sunderland et al. (1989). METHODS: We analyzed the CDT of 100 cognitively normal elderly aged 60 years or older. The CDT ("free-drawn") and the Mini-Mental State Examination (MMSE) were administered to all participants. Six independent examiners scored the CDT of 30 participants to evaluate inter-rater reliability. RESULTS AND CONCLUSION: A score of 5 on the proposed algorithm ("Numbers in reverse order or concentrated"), equivalent to 5 points on the original Sunderland scale, was the most frequent (53.5%). The specific algorithm method used had high inter-rater reliability (p<0.01), and the mean score ranged from 5.06 to 5.96. The high frequency of an overall score of 5 points may suggest the need to create more nuanced evaluation criteria, which are sensitive to differences in levels of impairment in visuoconstructive and executive abilities during aging.

  1. Algorithms

    Indian Academy of Sciences (India)

    polynomial) division have been found in Vedic Mathematics which are dated much before Euclid's algorithm. A programming language is used to describe an algorithm for execution on a computer. An algorithm expressed using a programming.

  2. Loose fusion based on SLAM and IMU for indoor environment

    Science.gov (United States)

    Zhu, Haijiang; Wang, Zhicheng; Zhou, Jinglin; Wang, Xuejing

    2018-04-01

    The simultaneous localization and mapping (SLAM) method based on the RGB-D sensor has been widely researched in recent years. However, the accuracy of RGB-D SLAM relies heavily on corresponding feature points, and the position can be lost in scenes with sparse textures. Therefore, plenty of fusion methods combining RGB-D information with inertial measurement unit (IMU) data have been investigated to improve the accuracy of SLAM systems. However, these fusion methods usually do not take into account the number of matched feature points; the pose estimated from RGB-D information may not be accurate when the number of correct matches is too small. Thus, considering the impact of matches on the SLAM system and the problem of lost position in scenes with few textures, a loose fusion method combining RGB-D with IMU is proposed in this paper. We design a loose fusion strategy based on the RGB-D camera information and IMU data, which utilizes the IMU data for pose estimation when the corresponding point matches are few, and the RGB-D information when there are many matches. The final pose is optimized by the General Graph Optimization (g2o) framework to reduce error. The experimental results show that the proposed method outperforms the RGB-D-only method and can continue working stably in indoor environments with sparse textures.
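The core of the loose fusion strategy described above can be sketched as a simple per-frame switch; the function names and the match threshold are illustrative assumptions, not the paper's implementation:

```python
# Sketch: trust the visual pose when enough feature matches survive,
# otherwise fall back to IMU dead reckoning. Threshold is an assumption.
MIN_MATCHES = 30

def fuse_pose(num_matches, visual_pose, imu_pose):
    """Pick the pose source for the current frame."""
    if num_matches >= MIN_MATCHES:
        return visual_pose, "rgbd"
    return imu_pose, "imu"   # sparse texture: trust inertial prediction

pose, source = fuse_pose(num_matches=8,
                         visual_pose=(0.0,) * 6,
                         imu_pose=(0.1,) * 6)
print(source)  # "imu"
```

In the full system, whichever pose is selected would then enter the g2o pose graph as a constraint and be refined jointly with earlier frames.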

  3. A methodology for the geometric design of heat recovery steam generators applying genetic algorithms

    International Nuclear Information System (INIS)

    Durán, M. Dolores; Valdés, Manuel; Rovira, Antonio; Rincón, E.

    2013-01-01

    This paper shows how the geometric design of heat recovery steam generators (HRSG) can be achieved. The method calculates the product of the overall heat transfer coefficient (U) by the area of the heat exchange surface (A) as a function of certain thermodynamic design parameters of the HRSG. A genetic algorithm is then applied to determine the best set of geometric parameters which comply with the desired UA product and, at the same time, result in a small heat exchange area and low pressure losses in the HRSG. In order to test this method, the design was applied to the HRSG of an existing plant and the results obtained were compared with the real exchange area of the steam generator. The findings show that the methodology is sound and offers reliable results even for complex HRSG designs. -- Highlights: ► The paper shows a methodology for the geometric design of heat recovery steam generators. ► Calculates product of the overall heat transfer coefficient by heat exchange area as a function of certain HRSG thermodynamic design parameters. ► It is a complement for the thermoeconomic optimization method. ► Genetic algorithms are used for solving the optimization problem

  4. Active filtering applied to radiographic images unfolded by the Richardson-Lucy algorithm

    International Nuclear Information System (INIS)

    Almeida, Gevaldo L. de; Silvani, Maria Ines; Lopes, Ricardo T.

    2011-01-01

    Degradation of images caused by systematic uncertainties can be reduced when one knows the features of the spoiling agent. Typical uncertainties of this kind arise in radiographic images due to the non-zero resolution of the detector used to acquire them, and from the non-punctual character of the source employed in the acquisition, or from the beam divergence when extended sources are used. Both features blur the image, which, instead of a single point, exhibits a spot with a vanishing edge, thus reproducing the point spread function (PSF) of the system. Once this spoiling function is known, an inverse problem approach, involving inversion of matrices, can then be used to retrieve the original image. As these matrices are generally ill-conditioned, due to statistical fluctuation and truncation errors, iterative procedures should be applied, such as the Richardson-Lucy algorithm. This algorithm has been applied in this work to unfold radiographic images acquired by transmission of thermal neutrons and gamma-rays. After this procedure, the resulting images undergo an active filtering which fairly improves their final quality at a negligible cost in terms of processing time. The filter ruling the process is based on the matrix of the correction factors for the last iteration of the deconvolution procedure. Synthetic images degraded with a known PSF and subjected to the same treatment have been used as a benchmark to evaluate the soundness of the developed active filtering procedure. The deconvolution and filtering algorithms have been incorporated into a Fortran program, written to deal with real images, generate the synthetic ones and display both. (author)
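The Richardson-Lucy iteration referred to above multiplies the current estimate by the back-projected ratio of measured to re-blurred data. A minimal one-dimensional sketch, with a synthetic PSF and point source standing in for the radiographic data:

```python
import numpy as np

def richardson_lucy(d, psf, iters=50):
    """1D Richardson-Lucy deconvolution of data d with known PSF."""
    u = np.full_like(d, d.mean())            # flat positive initial estimate
    psf_m = psf[::-1]                        # mirrored PSF for back-projection
    for _ in range(iters):
        blur = np.convolve(u, psf, mode="same")
        ratio = d / np.maximum(blur, 1e-12)  # guard against division by zero
        u = u * np.convolve(ratio, psf_m, mode="same")
    return u

psf = np.array([0.25, 0.5, 0.25])            # known blurring kernel
truth = np.zeros(21)
truth[10] = 1.0                              # point source
data = np.convolve(truth, psf, mode="same")  # degraded "radiograph"
restored = richardson_lucy(data, psf)
print(int(np.argmax(restored)))  # 10: the point source is recovered
```

The multiplicative form keeps the estimate non-negative, which is why the method tolerates the ill-conditioning mentioned in the abstract better than direct matrix inversion.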

  5. Hybrid nested sampling algorithm for Bayesian model selection applied to inverse subsurface flow problems

    KAUST Repository

    Elsheikh, Ahmed H.; Wheeler, Mary Fanett; Hoteit, Ibrahim

    2014-01-01

    A Hybrid Nested Sampling (HNS) algorithm is proposed for efficient Bayesian model calibration and prior model selection. The proposed algorithm combines, Nested Sampling (NS) algorithm, Hybrid Monte Carlo (HMC) sampling and gradient estimation using

  6. Rotational and frictional dynamics of the slamming of a door

    Science.gov (United States)

    Klein, Pascal; Müller, Andreas; Gröber, Sebastian; Molz, Alexander; Kuhn, Jochen

    2017-01-01

    A theoretical and experimental investigation of the rotational dynamics, including friction, of a slamming door is presented. Based on existing work regarding different damping models for rotational and oscillatory motions, we examine different forms for the (angular) velocity dependence (ω^n, n = 0, 1, 2) of the frictional force. An analytic solution is given when all three friction terms are present and several solutions for specific cases known from the literature are reproduced. The motion of a door is investigated experimentally using a smartphone, and the data are compared with the theoretical results. A laboratory experiment under more controlled conditions is conducted to gain a deeper understanding of the movement of a slammed door. Our findings provide quantitative evidence that damping models involving quadratic air drag are most appropriate for the slamming of a door. Examining this everyday example of a physical phenomenon increases student motivation, because they can relate it to their own personal experience.
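The three-term friction model above can be integrated numerically to see whether a given push slams the door shut. The sketch below uses I·dω/dt = -(c0 + c1·ω + c2·ω²) for ω > 0 with illustrative coefficients, not the fitted values from the paper:

```python
def slam_time(omega0, I=1.0, c0=0.05, c1=0.02, c2=0.1, dt=1e-4):
    """Euler-integrate the door until it stops or sweeps past the frame (pi/2)."""
    theta, omega, t = 0.0, omega0, 0.0
    while omega > 0.0 and theta < 1.5707963:
        domega = -(c0 + c1 * omega + c2 * omega * omega) / I  # friction torque
        omega += domega * dt
        theta += omega * dt
        t += dt
    return theta, t

theta_hard, _ = slam_time(omega0=5.0)   # hard push
theta_soft, _ = slam_time(omega0=0.3)   # gentle push
print(theta_hard >= 1.57, theta_soft < 1.57)  # door slams only when pushed hard
```

With the quadratic drag term dominating at high ω, a hard push loses proportionally more energy early on, which is the signature the smartphone data can discriminate.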

  7. The Effect of Slamming Impact on Out-of-Autoclave Cured Prepregs of GFRP Composite Panels for Hulls

    OpenAIRE

    Suárez, J.C.; Townsend, P.; Sanz, E.; Ulzurrum, I. Diez de; Pinilla, P.

    2016-01-01

    This paper proposes a methodology that employs an experimental apparatus that reproduces, in pre-impregnated and cured out-of-autoclave Glass Fiber Reinforced Polymer (GFRP) panels, the phenomenon of slamming or impact on the bottom of a high-speed boat during planing. The pressure limits in the simulation are defined by employing a finite element model (FEM) that evaluates the forces applied by the cam that hits the panels in the apparatus via microdeformations obtained in the simulation. Th...

  8. Applied Swarm-based medicine: collecting decision trees for patterns of algorithms analysis.

    Science.gov (United States)

    Panje, Cédric M; Glatzer, Markus; von Rappard, Joscha; Rothermundt, Christian; Hundsberger, Thomas; Zumstein, Valentin; Plasswilm, Ludwig; Putora, Paul Martin

    2017-08-16

    The objective consensus methodology has recently been applied in consensus finding in several studies on medical decision-making among clinical experts or guidelines. The main advantages of this method are an automated analysis and comparison of treatment algorithms of the participating centers which can be performed anonymously. Based on the experience from completed consensus analyses, the main steps for the successful implementation of the objective consensus methodology were identified and discussed among the main investigators. The following steps for the successful collection and conversion of decision trees were identified and defined in detail: problem definition, population selection, draft input collection, tree conversion, criteria adaptation, problem re-evaluation, results distribution and refinement, tree finalisation, and analysis. This manuscript provides information on the main steps for successful collection of decision trees and summarizes important aspects at each point of the analysis.

  9. Aida-CMK multi-algorithm optimization kernel applied to analog IC sizing

    CERN Document Server

    Lourenço, Ricardo; Horta, Nuno

    2015-01-01

    This work addresses the research and development of an innovative optimization kernel applied to analog integrated circuit (IC) design. In particular, it describes the modifications inside the AIDA Framework, an electronic design automation framework fully developed at the Integrated Circuits Group-LX of the Instituto de Telecomunicações, Lisbon. It focuses on AIDA-CMK, enhancing AIDA-C, the circuit optimizer component of AIDA, with a new multi-objective multi-constraint optimization module that constructs a base for multiple algorithm implementations. The proposed solution implements three approaches to multi-objective multi-constraint optimization, namely, an evolutionary approach with NSGAII, a swarm intelligence approach with MOPSO and a stochastic hill climbing approach with MOSA. Moreover, the implemented structure allows easy hybridization between kernels, transforming the previous simple NSGAII optimization module into a more evolved and versatile module supporting multiple s...

  10. An improved flux-split algorithm applied to hypersonic flows in chemical equilibrium

    Science.gov (United States)

    Palmer, Grant

    1988-01-01

    An explicit, finite-difference, shock-capturing numerical algorithm is presented and applied to hypersonic flows assumed to be in thermochemical equilibrium. Real-gas chemistry is either loosely coupled to the gasdynamics by way of a Gibbs free energy minimization package or fully coupled using species mass conservation equations with finite-rate chemical reactions. A scheme is developed that maintains stability in the explicit, finite-rate formulation while allowing relatively high time steps. The codes use flux vector splitting to difference the inviscid fluxes and employ real-gas corrections to viscosity and thermal conductivity. Numerical results are compared against existing ballistic range and flight data. Flows about complex geometries are also computed.

  11. An Approach of Dynamic Object Removing for Indoor Mapping Based on UGV SLAM

    Directory of Open Access Journals (Sweden)

    Jian Tang

    2015-07-01

    Full Text Available The study of indoor mapping for Location Based Services (LBS) has become more and more popular in recent years. LiDAR SLAM based mapping seems to be a promising indoor mapping solution. However, dynamic objects such as pedestrians and indoor vehicles exist in the raw LiDAR range data and have to be removed for mapping purposes. In this paper, a new approach to dynamic object removal called Likelihood Grid Voting (LGV) is presented. It is a model-free method and takes full advantage of the high scanning rate of the LiDAR, which moves at a relatively low speed in indoor environments. In this method, a counting grid is allocated to record the occupation of map positions by laser scans. Positions with low counter values can be recognized as dynamic objects, and their point clouds are removed from the map. This work is part of the algorithms in our self-developed Unmanned Ground Vehicle (UGV) Simultaneous Localization and Mapping (SLAM) system, NAVIS. Field tests were carried out in an indoor parking area with NAVIS to evaluate the effectiveness of the proposed method. The results show that small objects such as pedestrians can be detected and removed quickly, while large objects such as cars can be detected and partly removed.
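The counting-grid idea behind LGV can be sketched in a few lines: cells that collect votes across many registered scans are static structure, cells seen only briefly are dynamic. Grid size, threshold and the toy scans below are illustrative assumptions:

```python
from collections import Counter

CELL = 0.1   # grid resolution in metres (assumed)

def to_cell(point):
    return (round(point[0] / CELL), round(point[1] / CELL))

def build_static_map(scans, min_votes):
    """Keep only cells whose occupation count reaches the vote threshold."""
    votes = Counter(to_cell(p) for scan in scans for p in scan)
    return {cell for cell, n in votes.items() if n >= min_votes}

wall = [(1.0, y / 10.0) for y in range(5)]   # seen in every scan
pedestrian = [(0.5, 0.5)]                    # present in one scan only
scans = [wall + pedestrian] + [wall] * 9     # 10 registered scans
static = build_static_map(scans, min_votes=5)
print(to_cell((0.5, 0.5)) in static)  # False: the pedestrian is removed
```

Because the LiDAR scans far faster than it (or a pedestrian) moves, static surfaces accumulate votes quickly while transient returns never reach the threshold.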

  12. Pseudolinear Model Based Solution to the SLAM Problem of Nonholonomic Mobile Robots

    Science.gov (United States)

    Pathiranage, Chandima Dedduwa; Watanabe, Keigo; Izumi, Kiyotaka

    This paper describes an improved solution to the simultaneous localization and mapping (SLAM) problem based on pseudolinear models. Accurate estimation of vehicle and landmark states is one of the key issues for successful mobile robot navigation when the configuration of the environment and the initial robot location are unknown. A state estimator that uses the nonlinearity as it comes from the original model is invaluable where high accuracy is expected. To accomplish this, a pseudolinear model based Kalman filter (PLKF) state estimator is introduced. A less error-prone vehicle process model is proposed to improve the accuracy and speed the convergence of state estimation. The evolution of vehicle motion is modeled using the vehicle frame translation derived from successive dead-reckoned poses as a control input. A measurement model with two sensor frames is proposed to improve data association. The PLKF-based SLAM algorithm is simulated using Matlab for a vehicle-landmarks system, and the results show that the proposed approach performs much more accurately than the well known extended Kalman filter (EKF).

  13. ORB-SLAM2: an Open-Source SLAM System for Monocular, Stereo and RGB-D Cameras

    OpenAIRE

    Mur-Artal, Raul; Tardos, Juan D.

    2016-01-01

    We present ORB-SLAM2, a complete SLAM system for monocular, stereo and RGB-D cameras, including map reuse, loop closing and relocalization capabilities. The system works in real-time on standard CPUs in a wide variety of environments, from small hand-held indoor sequences to drones flying in industrial environments and cars driving around a city. Our back-end, based on bundle adjustment with monocular and stereo observations, allows for accurate trajectory estimation with metric scale. Our syst...

  14. Algorithms

    Indian Academy of Sciences (India)

    to as 'divide-and-conquer'. Although there has been a large effort in realizing efficient algorithms, there are not many universally accepted algorithm design paradigms. In this article, we illustrate algorithm design techniques such as balancing, greedy strategy, dynamic programming strategy, and backtracking or traversal of ...

  15. SLAM POETRY: A SIMPLE WAY TO GET CLOSER WITH LITERATURE

    Directory of Open Access Journals (Sweden)

    Murti Ayu Wijayanti

    2017-04-01

    Full Text Available Teaching literature in a teacher training faculty, where the students are prepared to be English teachers, is always challenging, as the students think that they have nothing to do with any kind of literary work. It takes time to prepare them to learn literature. Most students think that they have no talent in literature, which certainly affects the teaching and learning process. While the lecturer is teaching, the students tend to listen while thinking of anything but literature. It therefore takes the lecturer's effort to deal with this challenge. One part of the literary curriculum that creates problems for most students is poetry. One way to encourage students to learn poetry is slam poetry. Slam poetry is a kind of poetry competition first popularized in America in the 1990s. The notion that only beautiful and rhythmic poetry is highly appreciated vanishes, since the poets write only what they understand. In slam poetry, the students create their own poetry and present it in front of their classmates, while the other students act as judges and decide the winner. This method encourages the students to learn poetry as well as appreciate it.

  16. Sampling in image space for vision based SLAM

    NARCIS (Netherlands)

    Booij, O.; Zivkovic, Z.; Kröse, B.

    2008-01-01

    Loop closing in vision based SLAM applications is a difficult task. Comparing new image data with all previous image data acquired for the map is practically impossible because of the high computational costs. This problem is part of the bigger problem to acquire local geometric constraints from

  17. A Novel Metric Online Monocular SLAM Approach for Indoor Applications

    Directory of Open Access Journals (Sweden)

    Yongfei Li

    2016-01-01

    Full Text Available Monocular SLAM has attracted more attention recently due to its flexibility and low cost. In this paper, a novel metric online direct monocular SLAM approach is proposed, which can obtain a metric reconstruction of the scene. In the proposed approach, a chessboard is utilized to provide an initial depth map and scale correction information during the SLAM process. The chessboard provides the absolute scale of the scene and serves as a bridge between the camera visual coordinate frame and the world coordinate frame. The scene is reconstructed as a series of key frames with their poses and correlative semidense depth maps, using highly accurate pose estimation achieved by direct grid point-based alignment. The estimated pose is coupled with depth map estimation calculated by filtering over a large number of pixelwise small-baseline stereo comparisons. In addition, this paper formulates the scale-drift model among key frames, and the calibration chessboard is used to correct the accumulated pose error. At the end of this paper, several indoor experiments are conducted. The results suggest that the proposed approach achieves higher reconstruction accuracy than the traditional LSD-SLAM approach, and it can also run in real time on a commonly used computer.

  18. Monocular SLAM for autonomous robots with enhanced features initialization.

    Science.gov (United States)

    Guerra, Edmundo; Munguia, Rodrigo; Grau, Antoni

    2014-04-02

    This work presents a variant approach to the monocular SLAM problem focused on exploiting the advantages of a human-robot interaction (HRI) framework. Based upon the delayed inverse-depth feature initialization SLAM (DI-D SLAM), a known monocular technique, several crucial modifications are introduced, taking advantage of data from a secondary monocular sensor, assuming that this second camera is worn by a human. The human explores an unknown environment with the robot, and when their fields of view coincide, the cameras are treated as a pseudo-calibrated stereo rig to produce depth estimations through parallax. These depth estimations are used to solve a related problem of DI-D monocular SLAM, namely, the requirement of metric scale initialization through known artificial landmarks. The same process is used to improve the performance of the technique when introducing new landmarks into the map. The convenience of the approach taken to the stereo estimation, based on SURF feature matching, is discussed. Experimental validation is provided through results from real data, showing improvements in terms of more features correctly initialized with reduced uncertainty, thus reducing scale and orientation drift. Additional discussion is provided on how a real-time implementation could take advantage of this approach.
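The pseudo-stereo depth estimate used above follows from plain triangulation: once the two views form a (pseudo-)calibrated rig, depth is Z = f·b/d for focal length f (pixels), baseline b (metres) and disparity d (pixels). The numbers below are illustrative, not from the paper:

```python
def depth_from_parallax(f_px, baseline_m, disparity_px):
    """Triangulated depth Z = f * b / d for a rectified stereo pair."""
    if disparity_px <= 0:
        raise ValueError("feature must exhibit positive parallax")
    return f_px * baseline_m / disparity_px

z = depth_from_parallax(f_px=500.0, baseline_m=0.4, disparity_px=25.0)
print(z)  # 8.0 metres
```

Small disparities blow up the depth uncertainty, which is why features are only initialized this way when the two fields of view overlap with sufficient parallax.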

  19. Visual EKF-SLAM from Heterogeneous Landmarks †

    Science.gov (United States)

    Esparza-Jiménez, Jorge Othón; Devy, Michel; Gordillo, José L.

    2016-01-01

    Many applications require the localization of a moving object, e.g., a robot, using sensory data acquired from embedded devices. Simultaneous localization and mapping from vision performs both the spatial and temporal fusion of these data on a map when a camera moves in an unknown environment. Such a SLAM process executes two interleaved functions: the front-end detects and tracks features from images, while the back-end interprets features as landmark observations and estimates both the landmarks and the robot positions with respect to a selected reference frame. This paper describes a complete visual SLAM solution, combining both point and line landmarks on a single map. The proposed method has an impact on both the back-end and the front-end. The contributions comprise the use of heterogeneous landmark-based EKF-SLAM (the management of a map composed of both point and line landmarks), a comparison between landmark parametrizations together with an evaluation of how the heterogeneity improves the accuracy of camera localization, the development of a front-end active-search process for linear landmarks integrated into SLAM, and the experimentation methodology. PMID:27070602

  20. Metaheuristic Algorithms Applied to Bioenergy Supply Chain Problems: Theory, Review, Challenges, and Future

    Directory of Open Access Journals (Sweden)

    Krystel K. Castillo-Villar

    2014-11-01

    Full Text Available Bioenergy is a new source of energy that accounts for a substantial portion of the renewable energy production in many countries. The production of bioenergy is expected to increase due to its unique advantages, such as no harmful emissions and abundance. Supply-related problems are the main obstacles precluding the increased use of biomass (which is bulky and has low energy density) to produce bioenergy. To overcome this challenge, large-scale optimization models need to be solved to enable decision makers to plan, design, and manage bioenergy supply chains. Therefore, the use of effective optimization approaches is of great importance. The traditional mathematical methods (such as linear, integer, and mixed-integer programming) frequently fail to find optimal solutions for non-convex and/or large-scale models, whereas metaheuristics are efficient approaches for finding near-optimal solutions that use less computational resources. This paper presents a comprehensive review by studying and analyzing the application of metaheuristics to solve bioenergy supply chain models as well as the exclusive challenges of the mathematical problems applied in the bioenergy supply chain field. The reviewed metaheuristics include: (1) population approaches, such as ant colony optimization (ACO), the genetic algorithm (GA), particle swarm optimization (PSO), and the bee colony algorithm (BCA); and (2) trajectory approaches, such as tabu search (TS) and simulated annealing (SA). Based on the outcomes of this literature review, the integrated design and planning of bioenergy supply chains problem has been solved primarily by implementing the GA. The production process optimization was addressed primarily by using both the GA and PSO. The supply chain network design problem was treated by utilizing the GA and ACO. The truck and task scheduling problem was solved using the SA and the TS, where the trajectory-based methods proved to outperform the population

  1. Monte Carlo evaluation of a photon pencil kernel algorithm applied to fast neutron therapy treatment planning

    Science.gov (United States)

    Söderberg, Jonas; Alm Carlsson, Gudrun; Ahnesjö, Anders

    2003-10-01

    When dedicated software is lacking, treatment planning for fast neutron therapy is sometimes performed using dose calculation algorithms designed for photon beam therapy. In this work Monte Carlo derived neutron pencil kernels in water were parametrized using the photon dose algorithm implemented in the Nucletron TMS (treatment management system) treatment planning system. A rectangular fast-neutron fluence spectrum with energies 0-40 MeV (resembling a polyethylene filtered p(41)+ Be spectrum) was used. Central axis depth doses and lateral dose distributions were calculated and compared with the corresponding dose distributions from Monte Carlo calculations for homogeneous water and heterogeneous slab phantoms. All absorbed doses were normalized to the reference dose at 10 cm depth for a field of radius 5.6 cm in a 30 × 40 × 20 cm3 water test phantom. Agreement to within 7% was found in both the lateral and the depth dose distributions. The deviations could be explained as due to differences in size between the test phantom and that used in deriving the pencil kernel (radius 200 cm, thickness 50 cm). In the heterogeneous phantom, the TMS, with a directly applied neutron pencil kernel, and Monte Carlo calculated absorbed doses agree approximately for muscle but show large deviations for media such as adipose or bone. For the latter media, agreement was substantially improved by correcting the absorbed doses calculated in TMS with the neutron kerma factor ratio and the stopping power ratio between tissue and water. The multipurpose Monte Carlo code FLUKA was used both in calculating the pencil kernel and in direct calculations of absorbed dose in the phantom.
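The pencil-kernel dose model underlying the comparison above amounts to a lateral convolution of the entrance fluence profile with the kernel. A one-dimensional sketch, with a Gaussian standing in for the Monte Carlo derived neutron kernel (all numbers illustrative):

```python
import numpy as np

x = np.linspace(-10.0, 10.0, 201)                 # lateral axis, cm
kernel = np.exp(-x**2 / (2 * 1.5**2))
kernel /= kernel.sum()                            # normalized pencil kernel
fluence = ((x > -5.6) & (x < 5.6)).astype(float)  # field of radius 5.6 cm
dose = np.convolve(fluence, kernel, mode="same")  # lateral dose profile

centre = dose[100]                                # on the beam axis (x = 0)
penumbra = dose[156]                              # at the field edge (x = 5.6)
print(centre > penumbra)  # dose falls off laterally toward the field edge
```

This superposition picture is what breaks down in heterogeneous slabs, where the abstract reports that kerma-factor and stopping-power ratio corrections are needed for adipose and bone.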

  2. Robot-Beacon Distributed Range-Only SLAM for Resource-Constrained Operation.

    Science.gov (United States)

    Torres-González, Arturo; Martínez-de Dios, Jose Ramiro; Ollero, Anibal

    2017-04-20

    This work deals with robot-sensor network cooperation where sensor nodes (beacons) are used as landmarks for Range-Only (RO) Simultaneous Localization and Mapping (SLAM). Most existing RO-SLAM techniques consider beacons as passive devices disregarding the sensing, computational and communication capabilities with which they are actually endowed. SLAM is a resource-demanding task. Besides the technological constraints of the robot and beacons, many applications impose further resource consumption limitations. This paper presents a scalable distributed RO-SLAM scheme for resource-constrained operation. It is capable of exploiting robot-beacon cooperation in order to improve SLAM accuracy while meeting a given resource consumption bound expressed as the maximum number of measurements that are integrated in SLAM per iteration. The proposed scheme combines a Sparse Extended Information Filter (SEIF) SLAM method, in which each beacon gathers and integrates robot-beacon and inter-beacon measurements, and a distributed information-driven measurement allocation tool that dynamically selects the measurements that are integrated in SLAM, balancing uncertainty improvement and resource consumption. The scheme adopts a robot-beacon distributed approach in which each beacon participates in the selection, gathering and integration in SLAM of robot-beacon and inter-beacon measurements, resulting in significant estimation accuracies, resource-consumption efficiency and scalability. It has been integrated in an octorotor Unmanned Aerial System (UAS) and evaluated in 3D SLAM outdoor experiments. The experimental results obtained show its performance and robustness and evidence its advantages over existing methods.

  3. Matrix product algorithm for stochastic dynamics on networks applied to nonequilibrium Glauber dynamics

    Science.gov (United States)

    Barthel, Thomas; De Bacco, Caterina; Franz, Silvio

    2018-01-01

    We introduce and apply an efficient method for the precise simulation of stochastic dynamical processes on locally treelike graphs. Networks with cycles are treated in the framework of the cavity method. Such models correspond, for example, to spin-glass systems, Boolean networks, neural networks, or other technological, biological, and social networks. Building upon ideas from quantum many-body theory, our approach is based on a matrix product approximation of the so-called edge messages—conditional probabilities of vertex variable trajectories. Computation costs and accuracy can be tuned by controlling the matrix dimensions of the matrix product edge messages (MPEM) in truncations. In contrast to Monte Carlo simulations, the algorithm has a better error scaling and works for both single instances as well as the thermodynamic limit. We employ it to examine prototypical nonequilibrium Glauber dynamics in the kinetic Ising model. Because of the absence of cancellation effects, observables with small expectation values can be evaluated accurately, allowing for the study of decay processes and temporal correlations.

  4. A genetic algorithm applied to a PWR turbine extraction optimization to increase cycle efficiency

    International Nuclear Information System (INIS)

    Sacco, Wagner F.; Schirru, Roberto

    2002-01-01

    In nuclear power plants, feedwater heaters are used to heat feedwater from its temperature leaving the condenser to the final feedwater temperature, using steam extracted from various stages of the turbines. The purpose of this process is to increase cycle efficiency. The determination of the optimal fraction of mass flow rate to be extracted from each stage of the turbines is a complex optimization problem. This kind of problem has been efficiently solved by means of evolutionary computation techniques, such as Genetic Algorithms (GAs). GAs, which are systems based upon principles from biological genetics, have been successfully applied to several combinatorial optimization problems in nuclear engineering, such as the nuclear fuel reload optimization problem. We introduce the use of GAs in cycle efficiency optimization by finding an optimal combination of turbine extractions. In order to demonstrate the effectiveness of our approach, we have chosen a typical PWR as a case study. The secondary side of the PWR was simulated using PEPSE, a modeling tool used to perform integrated heat balances for power plants. The results indicate that the GA is a quite promising tool for cycle efficiency optimization. (author)

  5. Laser-Based Slam with Efficient Occupancy Likelihood Map Learning for Dynamic Indoor Scenes

    Science.gov (United States)

    Li, Li; Yao, Jian; Xie, Renping; Tu, Jinge; Feng, Chen

    2016-06-01

    Location-Based Services (LBS) have attracted growing attention in recent years, especially in indoor environments. The fundamental technique underlying LBS is map building for unknown environments, also known as simultaneous localization and mapping (SLAM) in the robotics community. In this paper, we propose a novel approach for SLAM in dynamic indoor scenes based on a 2D laser scanner mounted on a mobile Unmanned Ground Vehicle (UGV), with the help of a grid-based occupancy likelihood map. Instead of applying scan matching to two adjacent scans, we propose to match the current scan against the occupancy likelihood map learned from all previous scans at multiple scales, to avoid the accumulation of matching errors. Because the points in a scan are acquired sequentially rather than simultaneously, each scan is unavoidably distorted to some extent. To compensate for the scan distortion caused by the motion of the UGV, we propose to integrate the velocity of the laser range finder (LRF) into the scan matching optimization framework. Besides, to reduce as much as possible the effect of dynamic objects, such as the walking pedestrians that often appear in indoor scenes, we propose a new occupancy likelihood map learning strategy that increases or decreases the probability of each occupancy grid cell after each scan matching. Experimental results in several challenging indoor scenes demonstrate that our proposed approach is capable of providing high-precision SLAM results.
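    The map-learning strategy in this abstract (raise or lower each grid cell's occupancy after every scan match, so transient objects like pedestrians fade from the map) is commonly implemented as a clamped log-odds update. The increment and clamp values below are assumptions for illustration, not the paper's parameters.

```python
import math

# Sketch of a clamped log-odds occupancy update: cells observed as
# occupied are incremented, cells observed as free are decremented,
# so transient objects gradually disappear from the map.

L_OCC, L_FREE = 0.85, -0.4          # log-odds increments (assumed)
L_MIN, L_MAX = -4.0, 4.0            # clamping to avoid saturation

def update_cell(logodds, observed_occupied):
    delta = L_OCC if observed_occupied else L_FREE
    return max(L_MIN, min(L_MAX, logodds + delta))

def probability(logodds):
    """Convert log-odds back to an occupancy probability."""
    return 1.0 - 1.0 / (1.0 + math.exp(logodds))

cell = 0.0                           # unknown: probability 0.5
for hit in [True, True, False, True]:
    cell = update_cell(cell, hit)
print(round(probability(cell), 3))
```

    A cell hit in most scans ends up with high occupancy probability, while one hit only once by a passing pedestrian is steadily decremented back toward free.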

  6. Genetic algorithm applied to a Soil-Vegetation-Atmosphere system: Sensitivity and uncertainty analysis

    Science.gov (United States)

    Schneider, Sébastien; Jacques, Diederik; Mallants, Dirk

    2010-05-01

    For the inversion procedure a genetic algorithm (GA) was used. Specific features such as elitism, a roulette-wheel selection operator, and the island model were implemented. Optimization was based on the water content measurements recorded at several depths. Ten scenarios were elaborated and applied to the two lysimeters in order to investigate the impact of the conceptual model, in terms of process description (mechanistic or compartmental) and geometry (number of horizons in the profile description), on the calibration accuracy. Calibration leads to good agreement with the measured water contents. The most critical parameters for improving the goodness of fit are the number of horizons and the type of process description. Best fits are found for a mechanistic model with 5 horizons, resulting in absolute differences between observed and simulated water contents of less than 0.02 cm3 cm-3 on average. Parameter estimate analysis shows that layer thicknesses are poorly constrained, whereas hydraulic parameters are much better defined.
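    The GA features named in this abstract (roulette-wheel selection with elitism) can be sketched in a few lines. The population, fitness values, and elite count below are toy data, not the lysimeter calibration itself.

```python
import random

# Sketch of roulette-wheel selection plus elitism: the best individuals
# survive unchanged, and the rest of the generation is drawn with
# probability proportional to fitness.

def roulette_select(population, fitnesses, rng):
    total = sum(fitnesses)
    pick = rng.uniform(0.0, total)
    running = 0.0
    for individual, fit in zip(population, fitnesses):
        running += fit
        if running >= pick:
            return individual
    return population[-1]

def next_generation(population, fitnesses, n_elite, rng):
    # Elitism: copy the n_elite fittest individuals unchanged.
    ranked = sorted(zip(fitnesses, population), reverse=True)
    elite = [ind for _, ind in ranked[:n_elite]]
    # Fill the rest by roulette wheel (crossover/mutation would follow).
    drawn = [roulette_select(population, fitnesses, rng)
             for _ in range(len(population) - n_elite)]
    return elite + drawn

rng = random.Random(42)
pop = ["A", "B", "C", "D"]
fit = [0.1, 0.2, 0.3, 0.4]
print(next_generation(pop, fit, n_elite=1, rng=rng))
```

    In a full GA the drawn individuals would then undergo crossover and mutation; the island model mentioned in the abstract runs several such populations in parallel with occasional migration.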

  7. vSLAM: vision-based SLAM for autonomous vehicle navigation

    Science.gov (United States)

    Goncalves, Luis; Karlsson, Niklas; Ostrowski, Jim; Di Bernardo, Enrico; Pirjanian, Paolo

    2004-09-01

    Among the numerous challenges of building autonomous/unmanned vehicles is that of reliable and autonomous localization in an unknown environment. In this paper we present a system that can efficiently and autonomously solve the robotics 'SLAM' problem, in which a robot placed in an unknown environment must simultaneously localize itself and build a map of the environment. The system is vision-based and makes use of Evolution Robotics' powerful object recognition technology. As the robot explores the environment, it continuously performs four tasks, using information from acquired images and the drive system odometry. The robot: (1) recognizes previously created 3-D visual landmarks; (2) builds new 3-D visual landmarks; (3) updates the current estimate of its location, using the map; (4) updates the landmark map. In indoor environments, the system can build a map of a 5 m by 5 m area in approximately 20 minutes, and can localize itself with an accuracy of approximately 15 cm in position and 3 degrees in orientation relative to the global reference frame of the landmark map. The same system can be adapted for outdoor, vehicular use.

  8. Algorithms

    Indian Academy of Sciences (India)

    ticians but also forms the foundation of computer science. Two ... with methods of developing algorithms for solving a variety of problems but ... applications of computers in science and engineer- ... numerical calculus are as important. We will ...

  9. Genetic algorithm for design and manufacture optimization based on numerical simulations applied to aeronautic composite parts

    International Nuclear Information System (INIS)

    Mouton, S.; Ledoux, Y.; Teissandier, D.; Sebastian, P.

    2010-01-01

    A key challenge for the future is to drastically reduce the human impact on the environment. In the aeronautic field, this challenge translates into optimizing aircraft design to decrease overall mass. This reduction leads to the optimization of every constituent part of the plane, an operation that is even more delicate when the material used is a composite. In this case, it is necessary to find a compromise between the strength, the mass and the manufacturing cost of the component. Because of these different kinds of design constraints, it is necessary to assist the engineer with a decision support system to determine feasible solutions. In this paper, an approach is proposed based on the coupling of the different key characteristics of the design process and on the consideration of the failure risk of the component. The originality of this work is that the manufacturing deviations due to the RTM process are integrated into the simulation of the assembly process. Two kinds of deviations are identified: volume impregnation (injection phase of the RTM process) and geometrical deviations (curing and cooling phases). The quantification of these deviations and the related failure risk calculation are based on finite element simulations (Pam RTM (registered) and Samcef (registered) software). The use of a genetic algorithm makes it possible to estimate the impact of the design choices and their consequences on the failure risk of the component. The main focus of the paper is the optimization of tool design. In the framework of decision support systems, the failure risk calculation is used to compare possible industrialization alternatives. The method is applied to a particular part of the airplane structure: a spar unit made of carbon fiber/epoxy composite.

  10. Imperialist Competitive Algorithm with Dynamic Parameter Adaptation Using Fuzzy Logic Applied to the Optimization of Mathematical Functions

    Directory of Open Access Journals (Sweden)

    Emer Bernal

    2017-01-01

    Full Text Available In this paper we present a method using fuzzy logic for dynamic parameter adaptation in the imperialist competitive algorithm, usually known by its acronym ICA. The ICA algorithm was initially studied in its original form to find out how it works and which parameters have the greatest effect on its results. Based on this study, several designs of fuzzy systems for dynamic adjustment of the ICA parameters are proposed. The experiments were performed on the basis of solving complex optimization problems, particularly benchmark mathematical functions. A comparison between the original imperialist competitive algorithm and our proposed fuzzy imperialist competitive algorithm was performed. In addition, the fuzzy ICA was compared with another metaheuristic using a statistical test to measure the advantage of the proposed fuzzy approach for dynamic parameter adaptation.

  11. The Patch-Levy-Based Bees Algorithm Applied to Dynamic Optimization Problems

    Directory of Open Access Journals (Sweden)

    Wasim A. Hussein

    2017-01-01

    Full Text Available Many real-world optimization problems are actually dynamic in nature. These problems change over time in terms of the objective function, decision variables, constraints, and so forth. Therefore, it is very important to study the performance of a metaheuristic algorithm in dynamic environments to assess its robustness in dealing with real-world problems. In addition, it is important to adapt existing metaheuristic algorithms to perform well in dynamic environments. This paper investigates a recently proposed version of the Bees Algorithm, called the Patch-Levy-based Bees Algorithm (PLBA), on solving dynamic problems, and adapts it to deal with such problems. The performance of the PLBA is compared with other BA versions and other state-of-the-art algorithms on a set of dynamic multimodal benchmark problems of different degrees of difficulty. The results of the experiments show that PLBA achieves better results than the other BA variants. The obtained results also indicate that PLBA significantly outperforms some of the other state-of-the-art algorithms and is competitive with the rest.

  12. A quantitative performance evaluation of the EM algorithm applied to radiographic images

    International Nuclear Information System (INIS)

    Brailean, J.C.; Sullivan, B.J.; Giger, M.L.; Chen, C.T.

    1991-01-01

    In this paper, the authors quantitatively evaluate the performance of the Expectation Maximization (EM) algorithm as a restoration technique for radiographic images. The perceived signal-to-noise ratios (SNRs) of simple radiographic patterns processed by the EM algorithm are calculated on the basis of a statistical decision theory model that includes both the observer's visual response function and a noise component internal to the eye-brain system. The relative SNR (the ratio of the processed SNR to the original SNR) is calculated and used as a metric to quantitatively compare the effects of the EM algorithm with two popular image enhancement techniques: contrast enhancement (windowing) and unsharp mask filtering.
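    For Poisson-distributed image noise, the EM restoration iteration is the Richardson-Lucy update x ← x · Hᵀ(y / Hx), where H is the blur operator. The 1-D signal and the 3-tap kernel below are illustrative assumptions; the paper's exact imaging model may differ.

```python
# Minimal 1-D sketch of the EM (Richardson-Lucy) restoration update.

def convolve(signal, kernel):
    """'Same'-size convolution with zero padding."""
    k = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, w in enumerate(kernel):
            idx = i + j - k
            if 0 <= idx < len(signal):
                acc += w * signal[idx]
        out.append(acc)
    return out

def em_restore(observed, kernel, iterations=50):
    estimate = [1.0] * len(observed)          # flat positive start
    mirrored = kernel[::-1]                   # H^T for a convolution
    for _ in range(iterations):
        blurred = convolve(estimate, kernel)
        ratio = [o / max(b, 1e-12) for o, b in zip(observed, blurred)]
        correction = convolve(ratio, mirrored)
        estimate = [e * c for e, c in zip(estimate, correction)]
    return estimate

kernel = [0.25, 0.5, 0.25]
truth = [0.0, 0.0, 4.0, 0.0, 0.0]             # a single bright spot
observed = convolve(truth, kernel)            # blurred observation
restored = em_restore(observed, kernel)
print([round(v, 2) for v in restored])
```

    On this noiseless example the iteration progressively re-concentrates the blurred intensity back into the central spot; the multiplicative update also keeps every estimate nonnegative, which is one reason EM suits radiographic data.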

  13. An evaluation of attention models for use in SLAM

    Science.gov (United States)

    Dodge, Samuel; Karam, Lina

    2013-12-01

    In this paper we study the application of visual saliency models for the simultaneous localization and mapping (SLAM) problem. We consider visual SLAM, where the location of the camera and a map of the environment can be generated using images from a single moving camera. In visual SLAM, the interest point detector is of key importance. This detector must be invariant to certain image transformations so that features can be matched across different frames. Recent work has used a model of human visual attention to detect interest points; however, it is unclear which attention model is best for this purpose. To this aim, we compare the performance of interest points from four saliency models (Itti, GBVS, RARE, and AWS) with the performance of four traditional interest point detectors (Harris, Shi-Tomasi, SIFT, and FAST). We evaluate these detectors under several different types of image transformation and find that the Itti saliency model, in general, achieves the best performance in terms of keypoint repeatability.
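    The repeatability criterion used for this comparison is typically computed as the fraction of keypoints in a reference image whose transformed position lies within a tolerance of some keypoint detected in the transformed image. The point sets and the translation below are illustrative.

```python
# Sketch of keypoint repeatability under a known image transformation.

def repeatability(ref_points, transform, detected_points, tol=2.0):
    hits = 0
    for p in ref_points:
        tx, ty = transform(p)
        # A reference keypoint is "repeated" if some detected keypoint
        # lies within `tol` pixels of its transformed position.
        if any((tx - qx) ** 2 + (ty - qy) ** 2 <= tol ** 2
               for qx, qy in detected_points):
            hits += 1
    return hits / len(ref_points)

# Example: pure translation by (5, 0); one keypoint is not re-detected.
ref = [(10, 10), (20, 15), (30, 40), (50, 5)]
shifted = lambda p: (p[0] + 5, p[1])
detected = [(15, 10), (25, 15), (35, 41)]   # (55, 5) was missed
print(repeatability(ref, shifted, detected))  # → 0.75
```

    A detector with high repeatability across rotations, scale changes, and illumination shifts gives SLAM stable landmarks to match between frames.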

  14. Branch-and-Bound algorithm applied to uncertainty quantification of a Boiling Water Reactor Station Blackout

    Energy Technology Data Exchange (ETDEWEB)

    Nielsen, Joseph, E-mail: joseph.nielsen@inl.gov [Idaho National Laboratory, 1955 N. Fremont Avenue, P.O. Box 1625, Idaho Falls, ID 83402 (United States); University of Idaho, Department of Mechanical Engineering and Nuclear Engineering Program, 1776 Science Center Drive, Idaho Falls, ID 83402-1575 (United States); Tokuhiro, Akira [University of Idaho, Department of Mechanical Engineering and Nuclear Engineering Program, 1776 Science Center Drive, Idaho Falls, ID 83402-1575 (United States); Hiromoto, Robert [University of Idaho, Department of Computer Science, 1776 Science Center Drive, Idaho Falls, ID 83402-1575 (United States); Tu, Lei [University of Idaho, Department of Mechanical Engineering and Nuclear Engineering Program, 1776 Science Center Drive, Idaho Falls, ID 83402-1575 (United States)

    2015-12-15

    state. Dynamic PRA (DPRA) methods provide a more rigorous analysis of complex dynamic systems. Unfortunately DPRA methods introduce issues associated with combinatorial explosion of states. This paper presents a methodology to address combinatorial explosion using a Branch-and-Bound algorithm applied to Dynamic Event Trees (DET), which utilize LENDIT (L – Length, E – Energy, N – Number, D – Distribution, I – Information, and T – Time) as well as a set theory to describe system, state, resource, and response (S2R2) sets to create bounding functions for the DET. The optimization of the DET in identifying high probability failure branches is extended to create a Phenomenological Identification and Ranking Table (PIRT) methodology to evaluate modeling parameters important to safety of those failure branches that have a high probability of failure. The PIRT can then be used as a tool to identify and evaluate the need for experimental validation of models that have the potential to reduce risk. In order to demonstrate this methodology, a Boiling Water Reactor (BWR) Station Blackout (SBO) case study is presented.

  15. Branch-and-Bound algorithm applied to uncertainty quantification of a Boiling Water Reactor Station Blackout

    International Nuclear Information System (INIS)

    Nielsen, Joseph; Tokuhiro, Akira; Hiromoto, Robert; Tu, Lei

    2015-01-01

    state. Dynamic PRA (DPRA) methods provide a more rigorous analysis of complex dynamic systems. Unfortunately DPRA methods introduce issues associated with combinatorial explosion of states. This paper presents a methodology to address combinatorial explosion using a Branch-and-Bound algorithm applied to Dynamic Event Trees (DET), which utilize LENDIT (L – Length, E – Energy, N – Number, D – Distribution, I – Information, and T – Time) as well as a set theory to describe system, state, resource, and response (S2R2) sets to create bounding functions for the DET. The optimization of the DET in identifying high probability failure branches is extended to create a Phenomenological Identification and Ranking Table (PIRT) methodology to evaluate modeling parameters important to safety of those failure branches that have a high probability of failure. The PIRT can then be used as a tool to identify and evaluate the need for experimental validation of models that have the potential to reduce risk. In order to demonstrate this methodology, a Boiling Water Reactor (BWR) Station Blackout (SBO) case study is presented.

  16. Hybrid Model Based on Genetic Algorithms and SVM Applied to Variable Selection within Fruit Juice Classification

    Directory of Open Access Journals (Sweden)

    C. Fernandez-Lozano

    2013-01-01

    Full Text Available Given the background of the use of neural networks in problems of apple juice classification, this paper aims at implementing a newly developed method in the field of machine learning: Support Vector Machines (SVM). Therefore, a hybrid model that combines genetic algorithms and support vector machines is suggested in such a way that, when using SVM as the fitness function of the Genetic Algorithm (GA), the most representative variables for a specific classification problem can be selected.
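    The hybrid idea in this abstract is a GA searching over binary masks of input variables, with a classifier's accuracy as the fitness. To keep the sketch self-contained, the SVM is replaced here by a stand-in fitness that rewards a known "useful" subset and penalizes mask size; the variable indices and GA settings are assumptions.

```python
import random

# Sketch of GA-based variable selection with a classifier-style fitness.

USEFUL = {0, 3, 5}                      # assumed informative variables

def fitness(mask):
    chosen = {i for i, bit in enumerate(mask) if bit}
    # Reward overlap with useful variables, penalize mask size
    # (a stand-in for cross-validated SVM accuracy).
    return len(chosen & USEFUL) - 0.1 * len(chosen)

def evolve(n_vars=8, pop_size=20, generations=40, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_vars)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # truncation selection
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_vars)
            child = a[:cut] + b[cut:]           # one-point crossover
            if rng.random() < 0.2:              # mutation
                i = rng.randrange(n_vars)
                child[i] ^= 1
            children.append(child)
        pop = parents + children
    best = max(pop, key=fitness)
    return {i for i, bit in enumerate(best) if bit}

print(evolve())
```

    In the paper's setting, `fitness` would train and cross-validate an SVM on the variables selected by the mask, so the GA converges toward the most representative variable subset.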

  17. New approaches of the potential field for QPSO algorithm applied to nuclear reactor reload problem

    International Nuclear Information System (INIS)

    Nicolau, Andressa dos Santos; Schirru, Roberto

    2015-01-01

    Recently, a quantum-inspired version of the Particle Swarm Optimization (PSO) algorithm, Quantum Particle Swarm Optimization (QPSO), was proposed. The QPSO algorithm permits all particles to have quantum behavior, where some sort of 'quantum motion' is imposed on the search process. When QPSO is tested against a set of benchmark functions, it shows superior performance compared to classical PSO: it outperforms the classical algorithm most of the time in convergence speed and achieves better values of the fitness functions. The great advantage of the QPSO algorithm is that it uses only one control parameter. The critical step of the QPSO algorithm is the choice of a suitable attractive potential field that can guarantee bound states for the particles moving in the quantum environment. In this article, one version of the QPSO algorithm was tested with two types of potential well: the delta potential well and the harmonic oscillator. The main goal of this study is to show which potential field is the most suitable for use in QPSO for the solution of the Nuclear Reactor Reload Optimization Problem, in particular for cycle 7 of a Brazilian Nuclear Power Plant. All results were compared with the performance of the classical counterpart from the literature and show that the QPSO algorithm is well situated among the best alternatives for dealing with hard optimization problems such as the NRROP. (author)
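    For the delta potential well mentioned in this abstract, the standard QPSO position update collapses each particle around a local attractor p with a characteristic length proportional to its distance from the mean best position. The sketch below is the textbook QPSO update on a toy 1-D function, not the authors' reload-problem code; parameter values are assumptions.

```python
import math
import random

# Sketch of the delta-well QPSO update: x' = p ± beta*|mbest - x|*ln(1/u),
# with local attractor p between the particle's and the global best.

def qpso_step(x, pbest, gbest, mbest, beta, rng):
    phi = rng.random()
    p = phi * pbest + (1.0 - phi) * gbest      # local attractor
    u = rng.random()
    length = beta * abs(mbest - x)
    sign = 1.0 if rng.random() < 0.5 else -1.0
    return p + sign * length * math.log(1.0 / u)

def minimize(f, n_particles=30, iters=200, lo=-10.0, hi=10.0, seed=3):
    rng = random.Random(seed)
    xs = [rng.uniform(lo, hi) for _ in range(n_particles)]
    pbests = xs[:]
    gbest = min(pbests, key=f)
    for t in range(iters):
        beta = 1.0 - 0.5 * t / iters           # contraction-expansion
        mbest = sum(pbests) / n_particles      # mean best position
        for i, x in enumerate(xs):
            xs[i] = qpso_step(x, pbests[i], gbest, mbest, beta, rng)
            if f(xs[i]) < f(pbests[i]):
                pbests[i] = xs[i]
        gbest = min(pbests, key=f)
    return gbest

print(round(minimize(lambda x: (x - 2.0) ** 2), 2))
```

    The single control parameter the abstract refers to is the contraction-expansion coefficient beta; the harmonic-oscillator well variant changes only how the characteristic length is drawn.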

  18. SVC control enhancement applying self-learning fuzzy algorithm for islanded microgrid

    Directory of Open Access Journals (Sweden)

    Hossam Gabbar

    2016-03-01

    Full Text Available Maintaining voltage stability within acceptable levels for islanded Microgrids (MGs) is a challenge due to the limited exchange of power between generation and loads. This paper proposes an algorithm to enhance the dynamic performance of islanded MGs in the presence of load disturbance using a Static VAR Compensator (SVC) with a Fuzzy Model Reference Learning Controller (FMRLC). The proposed algorithm compensates for MG nonlinearity via fuzzy membership functions and an inference mechanism embedded in both the controller and the inverse model. Hence, the MG keeps the desired performance as required at any operating condition. Furthermore, the self-learning capability of the proposed control algorithm compensates for variation of grid parameters, even with inadequate information about load dynamics. A reference model was designed to reject bus voltage disturbance with performance achievable by the proposed fuzzy controller. Three simulation scenarios are presented to investigate the effectiveness of the proposed control algorithm in improving the steady-state and transient performance of islanded MGs: the first was conducted without SVC, the second with SVC using a PID controller, and the third using the FMRLC algorithm. A comparison of the results shows the ability of the proposed control algorithm to enhance disturbance rejection due to the learning process.

  19. New approaches of the potential field for QPSO algorithm applied to nuclear reactor reload problem

    Energy Technology Data Exchange (ETDEWEB)

    Nicolau, Andressa dos Santos; Schirru, Roberto, E-mail: andressa@lmp.ufrj.br [Coordenacao dos Programas de Pos-Graduacao em Engenharia (COPPE/UFRJ), Rio de Janeiro, RJ (Brazil). Programa de Engenharia Nuclear

    2015-07-01

    Recently, a quantum-inspired version of the Particle Swarm Optimization (PSO) algorithm, Quantum Particle Swarm Optimization (QPSO), was proposed. The QPSO algorithm permits all particles to have quantum behavior, where some sort of 'quantum motion' is imposed on the search process. When QPSO is tested against a set of benchmark functions, it shows superior performance compared to classical PSO: it outperforms the classical algorithm most of the time in convergence speed and achieves better values of the fitness functions. The great advantage of the QPSO algorithm is that it uses only one control parameter. The critical step of the QPSO algorithm is the choice of a suitable attractive potential field that can guarantee bound states for the particles moving in the quantum environment. In this article, one version of the QPSO algorithm was tested with two types of potential well: the delta potential well and the harmonic oscillator. The main goal of this study is to show which potential field is the most suitable for use in QPSO for the solution of the Nuclear Reactor Reload Optimization Problem, in particular for cycle 7 of a Brazilian Nuclear Power Plant. All results were compared with the performance of the classical counterpart from the literature and show that the QPSO algorithm is well situated among the best alternatives for dealing with hard optimization problems such as the NRROP. (author)

  20. Algorithms

    Indian Academy of Sciences (India)

    algorithm design technique called 'divide-and-conquer'. One of ... Turtle graphics, September. 1996. 5. ... whole list named 'PO' is a pointer to the first element of the list; ..... Program for computing matrices X and Y and placing the result in C *).

  1. Algorithms

    Indian Academy of Sciences (India)

    algorithm that it is implicitly understood that we know how to generate the next natural ..... Explicit comparisons are made in line (1) where maximum and minimum is ... It can be shown that the function T(n) = 3/2n -2 is the solution to the above ...

  2. The fuzzy clearing approach for a niching genetic algorithm applied to a nuclear reactor core design optimization problem

    International Nuclear Information System (INIS)

    Sacco, Wagner F.; Machado, Marcelo D.; Pereira, Claudio M.N.A.; Schirru, Roberto

    2004-01-01

    This article extends previous efforts on genetic algorithms (GAs) applied to a core design optimization problem. We introduce the application of a new Niching Genetic Algorithm (NGA) to this problem and compare its performance to these previous works. The optimization problem consists in adjusting several reactor cell parameters, such as dimensions, enrichment and materials, in order to minimize the average peak-factor in a three-enrichment zone reactor, considering restrictions on the average thermal flux, criticality and sub-moderation. After exhaustive experiments we observed that our new niching method performs better than the conventional GA due to a greater exploration of the search space

  3. Comparison of vessel enhancement algorithms applied to time-of-flight MRA images for cerebrovascular segmentation.

    Science.gov (United States)

    Phellan, Renzo; Forkert, Nils D

    2017-11-01

    Vessel enhancement algorithms are often used as a preprocessing step for vessel segmentation in medical images to improve the overall segmentation accuracy. Each algorithm uses different characteristics to enhance vessels, such that the most suitable algorithm may vary for different applications. This paper presents a comparative analysis of the accuracy gains in vessel segmentation generated by the use of nine vessel enhancement algorithms: multiscale vesselness using the formulas described by Erdt (MSE), Frangi (MSF), and Sato (MSS), optimally oriented flux (OOF), the ranking orientation responses path operator (RORPO), the regularized Perona-Malik approach (RPM), vessel enhancing diffusion (VED), hybrid diffusion with continuous switch (HDCS), and the white top hat algorithm (WTH). The filters were evaluated and compared based on time-of-flight MRA datasets and corresponding manual segmentations from 5 healthy subjects and 10 patients with an arteriovenous malformation. Additionally, five synthetic angiographic datasets with corresponding ground truth segmentation were generated with three different noise levels (low, medium, and high) and also used for comparison. The parameters for each algorithm and subsequent segmentation were optimized using leave-one-out cross evaluation. The Dice coefficient, Matthews correlation coefficient, area under the ROC curve, number of connected components, and true positives were used for comparison. The results of this study suggest that vessel enhancement algorithms do not always lead to more accurate segmentation results compared to segmenting nonenhanced images directly. Multiscale vesselness algorithms, such as MSE, MSF, and MSS, proved to be robust to noise, while diffusion-based filters, such as RPM, VED, and HDCS, ranked at the top of the list in scenarios with medium or no noise. Filters that assume tubular shapes, such as MSE, MSF, MSS, OOF, RORPO, and VED, show a decrease in accuracy when considering patients with an AVM
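    The first evaluation metric listed in this abstract, the Dice coefficient, is 2|A ∩ B| / (|A| + |B|) over the sets of voxels labeled as vessel in two segmentations. The voxel coordinates below are illustrative.

```python
# Sketch of the Dice overlap between two binary segmentations,
# represented as sets of voxel coordinates.

def dice(seg_a, seg_b):
    a, b = set(seg_a), set(seg_b)
    if not a and not b:
        return 1.0          # two empty segmentations agree perfectly
    return 2.0 * len(a & b) / (len(a) + len(b))

manual = {(1, 1), (1, 2), (2, 2), (3, 2)}      # ground-truth voxels
auto = {(1, 2), (2, 2), (3, 2), (3, 3)}        # algorithm output
print(dice(manual, auto))                       # → 0.75
```

    Dice ranges from 0 (no overlap) to 1 (identical segmentations); for the thin, sparse structures of vessels it is far more informative than plain voxel accuracy, which is dominated by background.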

  4. a Fast and Flexible Method for Meta-Map Building for Icp Based Slam

    Science.gov (United States)

    Kurian, A.; Morin, K. W.

    2016-06-01

    Recent developments in LiDAR sensors make mobile mapping fast and cost effective. These sensors generate a large amount of data, which in turn improves the coverage and detail of the map. Due to the limited range of the sensor, one has to collect a series of scans to build the entire map of the environment. With good GNSS coverage, building a map is a well addressed problem. But in an indoor environment we have limited GNSS reception, and an inertial solution, if available, can quickly diverge. In such situations, simultaneous localization and mapping (SLAM) is used to generate a navigation solution and a map concurrently. SLAM using point clouds poses a number of computational challenges even with modern hardware due to the sheer amount of data. In this paper, we propose two strategies for minimizing the cost of computation and storage when a 3D point cloud is used for navigation and real-time map building. We have used the 3D point cloud generated by Leica Geosystems' Pegasus Backpack, which is equipped with Velodyne VLP-16 LiDAR scanners. To improve the speed of the conventional iterative closest point (ICP) algorithm, we propose a point cloud sub-sampling strategy which does not throw away any key features and yet significantly reduces the number of points that need to be processed and stored. To speed up the correspondence finding step, a dual kd-tree and circular buffer architecture is proposed. We show that the proposed method can run in real time and has excellent navigation accuracy characteristics.
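    The paper's sub-sampling strategy is feature-preserving; as a much simpler stand-in, the sketch below shows plain voxel-grid sub-sampling, which keeps one representative point per voxel and so bounds the number of points handed to ICP. The voxel size and points are illustrative assumptions.

```python
# Sketch of voxel-grid point cloud sub-sampling: keep the first point
# seen in each voxel, discarding the rest.

def voxel_subsample(points, voxel):
    kept = {}
    for x, y, z in points:
        key = (int(x // voxel), int(y // voxel), int(z // voxel))
        kept.setdefault(key, (x, y, z))   # first point wins per voxel
    return list(kept.values())

cloud = [(0.1, 0.1, 0.0), (0.2, 0.15, 0.0),   # same voxel: one kept
         (1.1, 0.1, 0.0), (2.3, 2.2, 0.1)]
print(voxel_subsample(cloud, voxel=1.0))
```

    A feature-preserving strategy like the paper's would additionally refuse to drop points on edges or corners, since those constrain the ICP alignment the most.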

  5. A FAST AND FLEXIBLE METHOD FOR META-MAP BUILDING FOR ICP BASED SLAM

    Directory of Open Access Journals (Sweden)

    A. Kurian

    2016-06-01

    Full Text Available Recent developments in LiDAR sensors make mobile mapping fast and cost effective. These sensors generate a large amount of data, which in turn improves the coverage and detail of the map. Due to the limited range of the sensor, one has to collect a series of scans to build the entire map of the environment. With good GNSS coverage, building a map is a well addressed problem. But in an indoor environment we have limited GNSS reception, and an inertial solution, if available, can quickly diverge. In such situations, simultaneous localization and mapping (SLAM) is used to generate a navigation solution and a map concurrently. SLAM using point clouds poses a number of computational challenges even with modern hardware due to the sheer amount of data. In this paper, we propose two strategies for minimizing the cost of computation and storage when a 3D point cloud is used for navigation and real-time map building. We have used the 3D point cloud generated by Leica Geosystems' Pegasus Backpack, which is equipped with Velodyne VLP-16 LiDAR scanners. To improve the speed of the conventional iterative closest point (ICP) algorithm, we propose a point cloud sub-sampling strategy which does not throw away any key features and yet significantly reduces the number of points that need to be processed and stored. To speed up the correspondence finding step, a dual kd-tree and circular buffer architecture is proposed. We show that the proposed method can run in real time and has excellent navigation accuracy characteristics.

  6. Comparison of Nonequilibrium Solution Algorithms Applied to Chemically Stiff Hypersonic Flows

    Science.gov (United States)

    Palmer, Grant; Venkatapathy, Ethiraj

    1995-01-01

    Three solution algorithms (explicit under-relaxation, point implicit, and lower-upper symmetric Gauss-Seidel) are used to compute the nonequilibrium flow around the Apollo 4 return capsule at the 62-km altitude point in its descent trajectory. By varying the Mach number, the efficiency and robustness of the solution algorithms were tested for different levels of chemical stiffness. The performance of the solution algorithms degraded as the Mach number and the stiffness of the flow increased. At Mach 15 and 30, the lower-upper symmetric Gauss-Seidel method produces an eight-order-of-magnitude drop in the energy residual in one-third to one-half the Cray C-90 computer time of the point implicit and explicit under-relaxation methods. The explicit under-relaxation algorithm experienced convergence difficulties at Mach 30 and above. At Mach 40, the performance of the lower-upper symmetric Gauss-Seidel algorithm deteriorates to the point that it is outperformed by the point implicit method. The effects of the viscous terms are investigated, and grid dependency questions are explored.
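    The relaxation pattern underlying the lower-upper symmetric Gauss-Seidel scheme compared above is a forward sweep followed by a backward sweep through the unknowns. The sketch below applies that symmetric sweep to a small diagonally dominant linear system, purely as an illustration of the iteration, not of the flow solver.

```python
# Sketch of symmetric Gauss-Seidel: one forward sweep, one backward
# sweep per iteration, each unknown updated in place.

def sgs_solve(A, b, iters=50):
    n = len(b)
    x = [0.0] * n
    for _ in range(iters):
        for i in list(range(n)) + list(range(n - 1, -1, -1)):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
    return x

A = [[4.0, 1.0, 0.0],
     [1.0, 4.0, 1.0],
     [0.0, 1.0, 4.0]]
b = [5.0, 6.0, 5.0]
print([round(v, 4) for v in sgs_solve(A, b)])  # → [1.0, 1.0, 1.0]
```

    In the flow solver, each "unknown" is a cell's state vector and the sweeps run over the grid, but the alternating forward/backward update is the same idea that gives the scheme its strong residual reduction on stiff systems.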

  7. An Encoding Technique for Multiobjective Evolutionary Algorithms Applied to Power Distribution System Reconfiguration

    Directory of Open Access Journals (Sweden)

    J. L. Guardado

    2014-01-01

    Full Text Available Network reconfiguration is an alternative to reduce power losses and optimize the operation of power distribution systems. In this paper, an encoding scheme for evolutionary algorithms is proposed in order to search efficiently for the Pareto-optimal solutions during the reconfiguration of power distribution systems considering multiobjective optimization. The encoding scheme is based on the edge window decoder (EWD) technique, which was embedded in the Strength Pareto Evolutionary Algorithm 2 (SPEA2) and the Nondominated Sorting Genetic Algorithm II (NSGA-II). The effectiveness of the encoding scheme was proved by solving a test problem for which the true Pareto-optimal solutions are known in advance. In order to prove the practicability of the encoding scheme, a real distribution system was used to find the near Pareto-optimal solutions for different objective functions to optimize.

  8. An encoding technique for multiobjective evolutionary algorithms applied to power distribution system reconfiguration.

    Science.gov (United States)

    Guardado, J L; Rivas-Davalos, F; Torres, J; Maximov, S; Melgoza, E

    2014-01-01

    Network reconfiguration is an alternative to reduce power losses and optimize the operation of power distribution systems. In this paper, an encoding scheme for evolutionary algorithms is proposed in order to search efficiently for the Pareto-optimal solutions during the reconfiguration of power distribution systems considering multiobjective optimization. The encoding scheme is based on the edge window decoder (EWD) technique, which was embedded in the Strength Pareto Evolutionary Algorithm 2 (SPEA2) and the Nondominated Sorting Genetic Algorithm II (NSGA-II). The effectiveness of the encoding scheme was proved by solving a test problem for which the true Pareto-optimal solutions are known in advance. In order to prove the practicability of the encoding scheme, a real distribution system was used to find the near Pareto-optimal solutions for different objective functions to optimize.

  9. New algorithm using only one variable measurement applied to a maximum power point tracker

    Energy Technology Data Exchange (ETDEWEB)

    Salas, V.; Olias, E.; Lazaro, A.; Barrado, A. [University Carlos III de Madrid (Spain). Dept. of Electronic Technology

    2005-05-01

    A novel algorithm for seeking the maximum power point of a photovoltaic (PV) array at any temperature and solar irradiation level, needing only the PV current value, is proposed. Satisfactory theoretical and experimental results are presented; they were obtained when the algorithm was included in a 100 W 24 V PV buck converter prototype, using an inexpensive microcontroller. The load of the system was a battery and a resistance. The main advantage of this new maximum power point tracker (MPPT), when compared with others, is that it only uses the measurement of the photovoltaic current, I{sub PV}. (author)
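The abstract does not give the control law, but its stated premise (a battery load clamps the output voltage, so maximizing current maximizes power) admits a simple perturb-and-observe sketch driven by the current measurement alone. The duty-cycle step, the PV model and its peak location below are all hypothetical, not the paper's design:

```python
def po_mppt_current_only(read_current, duty=0.5, step=0.01, iters=200):
    """Perturb-and-observe MPPT using only a current measurement: nudge the
    converter duty cycle, keep the direction while the current rises, and
    reverse it when the current falls.  Valid when a battery load fixes the
    output voltage, as in the prototype described above."""
    i_prev = read_current(duty)
    direction = 1
    for _ in range(iters):
        duty = min(1.0, max(0.0, duty + direction * step))
        i_now = read_current(duty)
        if i_now < i_prev:          # current dropped: reverse the perturbation
            direction = -direction
        i_prev = i_now
    return duty

# Hypothetical current-vs-duty curve with its maximum at duty = 0.62:
pv_model = lambda d: 5.0 - 40.0 * (d - 0.62) ** 2
tracked = po_mppt_current_only(pv_model)
```

In steady state the tracker oscillates within one or two steps of the maximum, which is the usual behaviour of perturb-and-observe schemes.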

  10. Algorithms

    Indian Academy of Sciences (India)

    will become clear in the next article when we discuss a simple logo like programming language. ... Rod B may be used as an auxiliary store. The problem is to find an algorithm which performs this task. ... N0 disks are moved from A to B using C as auxiliary rod. • move_disk (A, C); the (N0 + 1)th disk is moved from A to C directly ...
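The fragment above is the classic Towers-of-Hanoi recursion: move the top N0 disks aside, move the (N0 + 1)th disk directly, then re-stack. As runnable code (a sketch of the textbook algorithm, not the article's logo-like language):

```python
def hanoi(n, source, target, auxiliary, moves):
    """Move n disks from source to target, recording each move as a rod pair."""
    if n == 0:
        return
    hanoi(n - 1, source, auxiliary, target, moves)  # park the top n-1 disks on the auxiliary rod
    moves.append((source, target))                  # move the nth disk directly
    hanoi(n - 1, auxiliary, target, source, moves)  # re-stack the n-1 disks on top

moves = []
hanoi(3, 'A', 'C', 'B', moves)   # 2**3 - 1 = 7 moves in total
```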

  11. Lagrangian and hamiltonian algorithms applied to the enlarged DGL model

    International Nuclear Information System (INIS)

    Batlle, C.; Roman-Roy, N.

    1988-01-01

    We analyse a model of two interacting relativistic particles which is useful to illustrate the equivalence between the Dirac-Bergmann and the geometrical presymplectic constraint algorithms. Both the lagrangian and hamiltonian formalisms are analysed in depth, and we also find and discuss the equations of motion. (author)

  12. Estimation of the soil temperature from the AVHRR-NOAA satellite data applying split window algorithms

    International Nuclear Information System (INIS)

    Parra, J.C.; Acevedo, P.S.; Sobrino, J.A.; Morales, L.J.

    2006-01-01

    Four algorithms based on the split-window technique are applied to estimate the land surface temperature from the data provided by the Advanced Very High Resolution Radiometer (AVHRR) sensor, on board the series of satellites of the National Oceanic and Atmospheric Administration (NOAA). These algorithms include corrections for atmospheric characteristics and for the emissivity of the different land surfaces. Fourteen AVHRR-NOAA images corresponding to October 2003 and January 2004 were used. Simultaneously, soil temperature measurements were collected at the Carillanca hydro-meteorological station in the Region of La Araucanía, Chile (38 deg 41 min S; 72 deg 25 min W). Of all the algorithms used, the best results correspond to the model proposed by Sobrino and Raissouni (2000), with a mean and standard deviation of the difference between the soil temperature measured in situ and that estimated by the algorithm of -0.06 K and 2.11 K, respectively. (Author)
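The paper's fitted coefficients and emissivity terms are not given in the abstract, but the split-window idea itself is compact: the difference between the two thermal channels measures the differential atmospheric (mainly water-vapour) absorption and is used to correct the channel-4 brightness temperature. A generic sketch with purely illustrative coefficients:

```python
def split_window_lst(t4, t5, a=2.4, b=0.5):
    """Generic split-window surface temperature estimate (K) from AVHRR
    channel 4 and channel 5 brightness temperatures.  Coefficients a and b
    are illustrative placeholders; each algorithm compared in the paper
    fits its own and adds emissivity-dependent terms."""
    return t4 + a * (t4 - t5) + b

lst = split_window_lst(290.0, 288.5)   # 290.0 + 2.4 * 1.5 + 0.5 = 294.1 K
```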

  13. Searching dependency between algebraic equations: An algorithm applied to automated reasoning

    International Nuclear Information System (INIS)

    Yang Lu; Zhang Jingzhong

    1990-01-01

    An efficient computer algorithm is given to decide how many branches of the solution to a system of algebraic equations also solve another equation. As one of its applications, this can be used in practice to verify a conjecture whose hypotheses and conclusion are expressed by algebraic equations, whether those equations are reducible or irreducible. (author). 10 refs

  14. A computationally efficient depression-filling algorithm for digital elevation models, applied to proglacial lake drainage

    NARCIS (Netherlands)

    Berends, Constantijn J.; Van De Wal, Roderik S W

    2016-01-01

    Many processes govern the deglaciation of ice sheets. One of the processes that is usually ignored is the calving of ice in lakes that temporarily surround the ice sheet. In order to capture this process a "flood-fill algorithm" is needed. Here we present and evaluate several optimizations to a

  15. A hybrid niched-island genetic algorithm applied to a nuclear core optimization problem

    International Nuclear Information System (INIS)

    Pereira, Claudio M.N.A.

    2005-01-01

    Diversity maintenance is a key feature in most genetic-based optimization processes. The quest for this characteristic has motivated improvements to the original genetic algorithm (GA). The use of multiple populations (called islands) has been shown to increase diversity, delaying genetic drift. Island genetic algorithms (IGA) lead to better results; however, the drift is only delayed, not avoided. An important advantage of this approach is its simplicity and efficiency for parallel processing. Diversity can also be improved by the use of niching techniques. Niched genetic algorithms (NGA) are able to avoid genetic drift by confining evolution to niches of a single-population GA, but at increased computational cost. This work investigates the use of a hybrid niched-island genetic algorithm (NIGA) in a nuclear core optimization problem found in the literature. Computational experiments demonstrate that it is possible to take advantage of both performance enhancement due to parallelism and drift avoidance due to the use of niches. Comparative results show that the proposed NIGA is more efficient and robust than an IGA and an NGA for solving the proposed optimization problem. (author)

  16. An Efficient VQ Codebook Search Algorithm Applied to AMR-WB Speech Coding

    Directory of Open Access Journals (Sweden)

    Cheng-Yu Yeh

    2017-04-01

    Full Text Available The adaptive multi-rate wideband (AMR-WB speech codec is widely used in modern mobile communication systems for high speech quality in handheld devices. Nonetheless, a major disadvantage is that vector quantization (VQ of immittance spectral frequency (ISF coefficients takes a considerable computational load in the AMR-WB coding. Accordingly, a binary search space-structured VQ (BSS-VQ algorithm is adopted to efficiently reduce the complexity of ISF quantization in AMR-WB. This search algorithm is done through a fast locating technique combined with lookup tables, such that an input vector is efficiently assigned to a subspace where relatively few codeword searches are required to be executed. In terms of overall search performance, this work is experimentally validated as a superior search algorithm relative to a multiple triangular inequality elimination (MTIE, a TIE with dynamic and intersection mechanisms (DI-TIE, and an equal-average equal-variance equal-norm nearest neighbor search (EEENNS approach. With a full search algorithm as a benchmark for overall search load comparison, this work provides an 87% search load reduction at a threshold of quantization accuracy of 0.96, a figure far beyond 55% in the MTIE, 76% in the EEENNS approach, and 83% in the DI-TIE approach.
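The BSS-VQ lookup tables cannot be reconstructed from the abstract, but the full-search benchmark it is measured against is simple: compare the input vector against every codeword. A sketch of that baseline (toy vectors, not ISF coefficients):

```python
def full_search_vq(x, codebook):
    """Exhaustive nearest-codeword search (the benchmark above): return the
    index of the closest codeword and the squared Euclidean distance to it.
    BSS-VQ's contribution is pruning most of these comparisons by first
    assigning the input to a subspace via lookup tables."""
    best_i, best_d = -1, float('inf')
    for i, c in enumerate(codebook):
        d = sum((xi - ci) ** 2 for xi, ci in zip(x, c))
        if d < best_d:
            best_i, best_d = i, d
    return best_i, best_d

idx, dist2 = full_search_vq((0.9, 0.1), [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)])
```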

  17. Probability Analysis of the Wave-Slamming Pressure Values of the Horizontal Deck with Elastic Support

    Science.gov (United States)

    Zuo, Weiguang; Liu, Ming; Fan, Tianhui; Wang, Pengtao

    2018-06-01

    This paper presents the probability distribution of the slamming pressure from an experimental study of regular wave slamming on an elastically supported horizontal deck. The time series of the slamming pressure during the wave impact were first obtained through statistical analyses of the experimental data. The exceedance probability distribution of the maximum slamming pressure peak and its distribution parameters were analyzed, and the results show that the exceedance probability distribution of the maximum slamming pressure peak accords with the three-parameter Weibull distribution. Furthermore, the range and relationships of the distribution parameters were studied. The sum of the location parameter D and the scale parameter L was approximately equal to 1.0, and the exceedance probability was more than 36.79% when the random peak was equal to the sample average during the wave impact. The variation of the distribution parameters and slamming pressure under different model conditions was comprehensively presented, and the parameter values of the Weibull distribution of wave-slamming pressure peaks differed between test models. The parameter values were found to decrease with increased stiffness of the elastic support. The damage criterion of the structure model caused by the wave impact was initially discussed, and the structure model was destroyed when the average slamming time was greater than a certain value during the wave impact. The conclusions of the experimental study are then described.
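The 36.79% figure above is a property of the three-parameter Weibull model itself: at the point x = D + L (location plus scale) the exceedance probability is exp(-1) ≈ 0.3679 for any shape parameter, and for shape parameters above 1 the mean lies below D + L, so the exceedance at the sample average is larger than 36.79%, consistent with the abstract. A sketch (parameter values below are illustrative, not the experimental fits):

```python
import math

def weibull_exceedance(x, location, scale, shape):
    """P(X > x) for a three-parameter Weibull distribution of slamming
    pressure peaks (parameter names follow the abstract: location D, scale L)."""
    if x <= location:
        return 1.0
    return math.exp(-(((x - location) / scale) ** shape))

# With D + L = 1.0 (as observed above), the exceedance at x = 1.0 is
# exp(-1) ~ 0.3679 regardless of the shape parameter:
p = weibull_exceedance(1.0, 0.3, 0.7, 2.0)
```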

  18. A scalable hybrid multi-robot SLAM method for highly detailed maps

    NARCIS (Netherlands)

    Pfingsthorn, M.; Slamet, B.; Visser, A.

    2008-01-01

    Recent successful SLAM methods employ hybrid map representations combining the strengths of topological maps and occupancy grids. Such representations often facilitate multi-agent mapping. In this paper, a successful SLAM method is presented, which is inspired by the manifold data structure by

  19. Indoor radar SLAM: A radar application for vision and GPS denied environments

    NARCIS (Netherlands)

    Marck, J.W.; Mohamoud, A.A.; Houwen, E.H. van de; Heijster, R.M.E.M. van

    2013-01-01

    Indoor navigation, especially in unknown areas, is a real challenge. Simultaneous Localization and Mapping (SLAM) technology provides a solution. However, SLAM as currently based on optical sensors is unsuitable in vision-denied areas, which are for example encountered by first responders. Radar can

  20. An algorithm for applying flagged Sysmex XE-2100 absolute neutrophil counts in clinical practice

    DEFF Research Database (Denmark)

    Friis-Hansen, Lennart; Saelsen, Lone; Abildstrøm, Steen Z

    2008-01-01

    BACKGROUND: Even though most differential leukocyte counts are performed by automated hematology platforms, turn-around time is often prolonged as flagging of test results triggers additional confirmatory manual procedures. However, frequently only the absolute neutrophil count (ANC) is needed. We ... therefore examined if an algorithm could be developed to identify samples in which the automated ANC is valid despite flagged test results. METHODS: During a 3-wk period, a training set consisting of 1448 consecutive flagged test results from the Sysmex XE-2100 system and associated manual differential ... counts was collected. The training set was used to determine which alarms were associated with valid ANCs. The algorithm was then tested on a new set of 1371 test results collected during a later 3-wk period. RESULTS: Analysis of the training set data revealed that the ANC from test results flagged ...

  1. Algorithm applied in dialogue with Stakeholders: a case study in a business tourism sector

    Directory of Open Access Journals (Sweden)

    Ana María Gil Lafuente

    2010-12-01

    Full Text Available According to numerous scientific studies, one of the most important points in the area of business sustainability is the dialogue with stakeholders. Based on stakeholder theory, we analyze corporate sustainability and the process by which a company in the tourism sector prepares a report in accordance with the guidelines of the G3 guide - Global Reporting Initiative. Through an empirical study we seek to understand the expectations of stakeholders regarding the implementation of the contents of the sustainability report. To achieve the proposed aim we use «The Expertons Method», an algorithm that allows the aggregation of the opinions of various experts on the subject and represents an important extension of fuzzy subsets for aggregation processes. At the end of our study, we present the results of using this algorithm, the contributions made and future lines of research.

  2. The particle swarm optimization algorithm applied to nuclear systems surveillance test planning

    International Nuclear Information System (INIS)

    Siqueira, Newton Norat

    2006-12-01

    This work shows a new approach to solving availability maximization problems in electromechanical systems under periodic preventive scheduled tests. The approach uses an optimization tool called particle swarm optimization (PSO), developed by Kennedy and Eberhart (2001), integrated with a probabilistic safety analysis model. Two maintenance optimization problems are solved by the proposed technique: the first is a hypothetical electromechanical configuration and the second is a real case from a nuclear power plant (emergency diesel generators). For both problems PSO is compared to a genetic algorithm (GA). In the experiments made, PSO was able to obtain results comparable to, or even slightly better than, those obtained by the GA. Moreover, the PSO algorithm is simpler and its convergence is faster, indicating that PSO is a good alternative for solving such kinds of problems. (author)
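The availability model coupled to the optimizer is not given in the abstract, but the PSO core it cites follows Kennedy and Eberhart's velocity/position update. A minimal sketch on a toy sphere function (all parameter values are conventional defaults, not the paper's settings):

```python
import random

def pso_minimize(f, dim=2, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimal particle swarm optimizer: each particle's velocity blends
    inertia, attraction to its personal best and attraction to the global
    best.  Illustrative sketch, not the paper's availability model."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            fi = f(pos[i])
            if fi < pbest_f[i]:          # update personal and global bests
                pbest[i], pbest_f[i] = pos[i][:], fi
                if fi < gbest_f:
                    gbest, gbest_f = pos[i][:], fi
    return gbest, gbest_f

# Sphere function: global minimum 0 at the origin.
best, best_f = pso_minimize(lambda p: sum(x * x for x in p))
```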

  3. Transfusion algorithms and how they apply to blood conservation: the high-risk cardiac surgical patient.

    Science.gov (United States)

    Steiner, Marie E; Despotis, George John

    2007-02-01

    Considerable blood product support is administered to the cardiac surgery population. Due to the multifactorial etiology of bleeding in the cardiac bypass patient, blood products are frequently and empirically infused to correct bleeding, with varying success. Several studies have demonstrated the benefit of algorithm-guided transfusion in reducing blood loss, transfusion exposure, or the rate of surgical re-exploration for bleeding. Some transfusion algorithms also incorporate laboratory-based decision points in their guidelines. Despite published success with standardized transfusion practices, generalized change in blood use has not been realized, and it is evident that current laboratory-guided hemostasis measures are inadequate to define and address the bleeding etiology in these patients.

  4. Energy loss optimization of run-off-road wheels applying imperialist competitive algorithm

    Directory of Open Access Journals (Sweden)

    Hamid Taghavifar

    2014-08-01

    Full Text Available The novel imperialist competitive algorithm (ICA) has shown outstanding performance on various optimization problems, and the application of meta-heuristics has been an active research interest in reliability optimization. This paper discusses the application of a meta-heuristic evolutionary optimization method, the imperialist competitive algorithm (ICA), to the minimization of energy loss due to wheel rolling resistance in a soil bin facility equipped with a single-wheel tester. The required data were collected through various designed experiments in the controlled soil bin environment. Local and global searching of the search space indicated that the energy loss could be reduced to a minimum of 15.46 J at the optimized input configuration of wheel load 1.2 kN, tire inflation pressure 296 kPa and velocity 2 m/s. Meanwhile, genetic algorithm (GA), particle swarm optimization (PSO) and hybridized GA-PSO approaches were benchmarked among the broad spectrum of meta-heuristics to find the best-performing approach. On account of the obtained results, it was deduced that ICA can achieve the optimum configuration with superior accuracy in less computational time.

  5. Multiple Harmonics Fitting Algorithms Applied to Periodic Signals Based on Hilbert-Huang Transform

    Directory of Open Access Journals (Sweden)

    Hui Wang

    2013-01-01

    Full Text Available A new generation of multipurpose measurement equipment is transforming the role of computers in instrumentation. The new features involve mixed devices, such as various kinds of sensors, analog-to-digital and digital-to-analog converters, and digital signal processing techniques, that are able to substitute for typical discrete instruments like multimeters and analyzers. Signal-processing applications frequently use least-squares (LS) sine-fitting algorithms. Periodic signals may be interpreted as a sum of sine waves with multiple frequencies: the Fourier series. This paper describes a new sine-fitting algorithm that is able to fit a multiharmonic acquired periodic signal. By means of a “sinusoidal wave” whose amplitude and phase are both time-varying, the “triangular wave” can be reconstructed on the basis of the Hilbert-Huang transform (HHT). This method can be used to test the effective number of bits (ENOB) of an analog-to-digital converter (ADC), avoiding the trouble of selecting initial parameter values and solving nonlinear equations. The simulation results show that the algorithm is precise and efficient. With enough sampling points, even for a low-resolution signal with harmonic distortion, the root mean square (RMS) error between the samples of the original “triangular wave” and the corresponding points of the fitted “sinusoidal wave” is remarkably small. This suggests that, for any periodic signal, the ENOB of a high-resolution ADC can be tested accurately.
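The HHT-based multiharmonic fit is not reproducible from the abstract, but the classical LS sine fit it builds on is: at a known frequency the model a·sin + b·cos + c is linear in its parameters. A sketch that assumes the record covers an integer number of periods, so the basis functions are orthogonal and the fit reduces to correlations (the general case solves 3x3 normal equations instead):

```python
import math

def sine_fit(y, freq, fs):
    """Three-parameter least-squares sine fit y ~ a*sin + b*cos + c at a
    known frequency, for samples taken at rate fs.  Assumes an integer
    number of periods in the record (orthogonal basis)."""
    n = len(y)
    a = 2.0 / n * sum(yk * math.sin(2 * math.pi * freq * k / fs) for k, yk in enumerate(y))
    b = 2.0 / n * sum(yk * math.cos(2 * math.pi * freq * k / fs) for k, yk in enumerate(y))
    c = sum(y) / n                      # DC component
    return a, b, c

# Recover a 50 Hz component of amplitude 1.5, phase 0.4 rad, offset 0.2:
fs, f0, n = 1000.0, 50.0, 1000          # 1000 samples = exactly 50 periods
y = [1.5 * math.sin(2 * math.pi * f0 * k / fs + 0.4) + 0.2 for k in range(n)]
a, b, c = sine_fit(y, f0, fs)           # amplitude = hypot(a, b), phase = atan2(b, a)
```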

  6. User Activity Recognition in Smart Homes Using Pattern Clustering Applied to Temporal ANN Algorithm.

    Science.gov (United States)

    Bourobou, Serge Thomas Mickala; Yoo, Younghwan

    2015-05-21

    This paper discusses the possibility of recognizing and predicting user activities in the IoT (Internet of Things) based smart environment. The activity recognition is usually done through two steps: activity pattern clustering and activity type decision. Although many related works have been suggested, they had some limited performance because they focused only on one part between the two steps. This paper tries to find the best combination of a pattern clustering method and an activity decision algorithm among various existing works. For the first step, in order to classify so varied and complex user activities, we use a relevant and efficient unsupervised learning method called the K-pattern clustering algorithm. In the second step, the training of the smart environment for recognizing and predicting user activities inside his/her personal space is done by utilizing the artificial neural network based on Allen's temporal relations. The experimental results show that our combined method provides higher recognition accuracy for various activities, as compared with other data mining classification algorithms. Furthermore, it is more appropriate for a dynamic environment like an IoT based smart home.
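Allen's temporal relations, on which the ANN stage above is built, classify how two time intervals relate. A compact classifier for the thirteen basic relations (illustrative; the paper's network consumes these relations rather than computing them this way):

```python
def allen_relation(a, b):
    """Return the basic Allen relation between intervals a = (a1, a2) and
    b = (b1, b2), each with start < end."""
    (a1, a2), (b1, b2) = a, b
    if a2 < b1:  return 'before'
    if b2 < a1:  return 'after'
    if a2 == b1: return 'meets'
    if b2 == a1: return 'met-by'
    if a1 == b1 and a2 == b2: return 'equal'
    if a1 == b1: return 'starts' if a2 < b2 else 'started-by'
    if a2 == b2: return 'finishes' if a1 > b1 else 'finished-by'
    if b1 < a1 and a2 < b2: return 'during'
    if a1 < b1 and b2 < a2: return 'contains'
    return 'overlaps' if a1 < b1 else 'overlapped-by'
```

For example, an "in kitchen" interval (1, 3) and a "stove on" interval (2, 4) stand in the 'overlaps' relation.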

  7. User Activity Recognition in Smart Homes Using Pattern Clustering Applied to Temporal ANN Algorithm

    Directory of Open Access Journals (Sweden)

    Serge Thomas Mickala Bourobou

    2015-05-01

    Full Text Available This paper discusses the possibility of recognizing and predicting user activities in the IoT (Internet of Things) based smart environment. The activity recognition is usually done through two steps: activity pattern clustering and activity type decision. Although many related works have been suggested, they had some limited performance because they focused only on one part between the two steps. This paper tries to find the best combination of a pattern clustering method and an activity decision algorithm among various existing works. For the first step, in order to classify so varied and complex user activities, we use a relevant and efficient unsupervised learning method called the K-pattern clustering algorithm. In the second step, the training of the smart environment for recognizing and predicting user activities inside his/her personal space is done by utilizing the artificial neural network based on Allen’s temporal relations. The experimental results show that our combined method provides higher recognition accuracy for various activities, as compared with other data mining classification algorithms. Furthermore, it is more appropriate for a dynamic environment like an IoT based smart home.

  8. Optimization of spatial light distribution through genetic algorithms for vision systems applied to quality control

    International Nuclear Information System (INIS)

    Castellini, P; Cecchini, S; Stroppa, L; Paone, N

    2015-01-01

    The paper presents an adaptive illumination system for image quality enhancement in vision-based quality control systems. In particular, a spatial modulation of illumination intensity is proposed in order to improve image quality, thus compensating for different target scattering properties, local reflections and fluctuations of ambient light. The desired spatial modulation of illumination is obtained by a digital light projector, used to illuminate the scene with an arbitrary spatial distribution of light intensity, designed to improve feature extraction in the region of interest. The spatial distribution of illumination is optimized by running a genetic algorithm. An image quality estimator is used to close the feedback loop and to stop iterations once the desired image quality is reached. The technique proves particularly valuable for optimizing the spatial illumination distribution in the region of interest, with the remarkable capability of the genetic algorithm to adapt the light distribution to very different target reflectivity and ambient conditions. The final objective of the proposed technique is the improvement of the matching score in the recognition of parts through matching algorithms, hence of the diagnosis of machine vision-based quality inspections. The procedure has been validated both by a numerical model and by an experimental test, referring to a significant problem of quality control for the washing machine manufacturing industry: the recognition of a metallic clamp. Its applicability to other domains is also presented, specifically for the visual inspection of shoes with retro-reflective tape and T-shirts with paillettes. (paper)

  9. Vision Based SLAM in Dynamic Scenes

    Science.gov (United States)

    2012-12-20

    the correct relative poses between cameras at frame F. For this purpose, we detect and match SURF features between cameras in different groups, and ... all cameras in such a challenging case. For a comparison, we disabled the 'inter-camera pose estimation' and applied the 'intra-camera pose esti ...

  10. Applying a machine learning model using a locally preserving projection based feature regeneration algorithm to predict breast cancer risk

    Science.gov (United States)

    Heidari, Morteza; Zargari Khuzani, Abolfazl; Danala, Gopichandh; Mirniaharikandehei, Seyedehnafiseh; Qian, Wei; Zheng, Bin

    2018-03-01

    Both conventional and deep machine learning have been used to develop decision-support tools applied in medical imaging informatics. In order to take advantage of both conventional and deep learning approaches, this study aims to investigate the feasibility of applying a locally preserving projection (LPP) based feature regeneration algorithm to build a new machine learning classifier model to predict short-term breast cancer risk. First, a computer-aided image processing scheme was used to segment and quantify breast fibro-glandular tissue volume. Next, 44 initially computed image features related to the bilateral mammographic tissue density asymmetry were extracted. Then, an LPP-based feature combination method was applied to regenerate a new operational feature vector using a maximal variance approach. Last, a k-nearest neighborhood (KNN) algorithm based machine learning classifier using the LPP-generated new feature vectors was developed to predict breast cancer risk. A testing dataset involving negative mammograms acquired from 500 women was used. Among them, 250 were positive and 250 remained negative in the next subsequent mammography screening. Applied to this dataset, the LPP-generated feature vector reduced the number of features from 44 to 4. Using a leave-one-case-out validation method, the area under the ROC curve produced by the KNN classifier significantly increased from 0.62 to 0.68 (p < 0.05) in predicting breast cancer detected in the next subsequent mammography screening.
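The LPP feature regeneration cannot be reproduced from the abstract, but the final stage is a standard k-nearest-neighbour vote over the regenerated feature vectors. A generic sketch (toy 2-feature data standing in for the 4 LPP-generated features):

```python
from collections import Counter

def knn_predict(x, train_x, train_y, k=5):
    """k-nearest-neighbour classifier: majority vote among the k training
    cases closest to the query vector (squared Euclidean distance)."""
    order = sorted(range(len(train_x)),
                   key=lambda i: sum((p - q) ** 2 for p, q in zip(x, train_x[i])))
    votes = Counter(train_y[i] for i in order[:k])
    return votes.most_common(1)[0][0]

# Hypothetical training cases: label 1 = cancer detected at next screening.
tx = [(0, 0), (0, 1), (1, 0), (5, 5), (5, 6), (6, 5)]
ty = [0, 0, 0, 1, 1, 1]
```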

  11. Continuous grasp algorithm applied to economic dispatch problem of thermal units

    Energy Technology Data Exchange (ETDEWEB)

    Vianna Neto, Julio Xavier [Pontifical Catholic University of Parana - PUCPR, Curitiba, PR (Brazil). Undergraduate Program at Mechatronics Engineering; Bernert, Diego Luis de Andrade; Coelho, Leandro dos Santos [Pontifical Catholic University of Parana - PUCPR, Curitiba, PR (Brazil). Industrial and Systems Engineering Graduate Program, LAS/PPGEPS], e-mail: leandro.coelho@pucpr.br

    2010-07-01

    The economic dispatch problem (EDP) is one of the fundamental issues in power systems for obtaining the benefits of stability, reliability and security. Its objective is to allocate the power demand among committed generators in the most economical manner, while all physical and operational constraints are satisfied. The cost of power generation, particularly in fossil fuel plants, is very high, and economic dispatch helps in saving a significant amount of revenue. Recently, as an alternative to the conventional mathematical approaches, modern heuristic optimization techniques such as simulated annealing, evolutionary algorithms, neural networks, ant colony optimization, and tabu search have been given much attention by many researchers due to their ability to find an almost global optimal solution in EDPs. On the other hand, continuous GRASP (C-GRASP) is a stochastic local search meta-heuristic for finding cost-efficient solutions to continuous global optimization problems subject to box constraints. Like a greedy randomized adaptive search procedure (GRASP), a C-GRASP is a multi-start procedure where a starting solution for local improvement is constructed in a greedy randomized fashion. The C-GRASP algorithm is validated on a test system consisting of fifteen units, a system that takes into account spinning reserve and prohibited operating zone constraints. (author)

  12. Coarse-grained parallel genetic algorithm applied to a nuclear reactor core design optimization problem

    International Nuclear Information System (INIS)

    Pereira, Claudio M.N.A.; Lapa, Celso M.F.

    2003-01-01

    This work extends the research related to genetic algorithms (GA) in core design optimization problems, whose basic investigations were presented in previous work. Here we explore the use of the island genetic algorithm (IGA), a coarse-grained parallel GA model, comparing its performance to that obtained by the application of a traditional non-parallel GA. The optimization problem consists in adjusting several reactor cell parameters, such as dimensions, enrichment and materials, in order to minimize the average peak factor in a 3-enrichment-zone reactor, considering restrictions on the average thermal flux, criticality and sub-moderation. Our IGA implementation runs as a distributed application on a conventional local area network (LAN), avoiding the use of expensive parallel computers or architectures. After exhaustive experiments, taking more than 1500 h on 550 MHz personal computers, we observed that the IGA provided gains not only in terms of computational time, but also in the optimization outcome. Besides, we also realized that, for this kind of problem, whose fitness evaluation is itself time-consuming, the time overhead in the IGA due to communication on LANs is practically imperceptible, leading to the conclusion that the use of expensive parallel computers or architectures can be avoided
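The island model above runs separate subpopulations and periodically exchanges individuals between them. A minimal ring-migration step, with OneMax bit-string fitness as a stand-in for the reactor-cell evaluation that the paper computes by simulation:

```python
def migrate_ring(islands, n_migrants=1):
    """One ring-topology migration step for an island GA: each island sends
    copies of its best n_migrants individuals to the next island, replacing
    that island's worst.  Fitness here is the number of ones (OneMax),
    a toy stand-in for the expensive reactor-physics evaluation."""
    fitness = lambda ind: sum(ind)
    outgoing = [sorted(isl, key=fitness, reverse=True)[:n_migrants] for isl in islands]
    for i, isl in enumerate(islands):
        isl.sort(key=fitness)                              # worst individuals first
        isl[:n_migrants] = [ind[:] for ind in outgoing[(i - 1) % len(islands)]]
    return islands

# Two toy islands; after one step the best individual of island 0 spreads:
isles = [[[1, 1, 1, 1], [1, 1, 0, 0], [0, 0, 0, 0]],
         [[0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]]
migrate_ring(isles)
```

In a coarse-grained parallel run, each island evolves independently between such migration events, which is why the LAN communication overhead reported above is so small.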

  13. Azcaxalli: A system based on Ant Colony Optimization algorithms, applied to fuel reloads design in a Boiling Water Reactor

    Energy Technology Data Exchange (ETDEWEB)

    Esquivel-Estrada, Jaime, E-mail: jaime.esquivel@fi.uaemex.m [Facultad de Ingenieria, Universidad Autonoma del Estado de Mexico, Cerro de Coatepec S/N, Toluca de Lerdo, Estado de Mexico 50000 (Mexico); Instituto Nacional de Investigaciones Nucleares, Carr. Mexico Toluca S/N, Ocoyoacac, Estado de Mexico 52750 (Mexico); Ortiz-Servin, Juan Jose, E-mail: juanjose.ortiz@inin.gob.m [Instituto Nacional de Investigaciones Nucleares, Carr. Mexico Toluca S/N, Ocoyoacac, Estado de Mexico 52750 (Mexico); Castillo, Jose Alejandro; Perusquia, Raul [Instituto Nacional de Investigaciones Nucleares, Carr. Mexico Toluca S/N, Ocoyoacac, Estado de Mexico 52750 (Mexico)

    2011-01-15

    This paper presents some results of the implementation of several optimization algorithms based on ant colonies, applied to fuel reload design in a Boiling Water Reactor. The system, called Azcaxalli, is constructed from the following algorithms: Ant Colony System, Ant System, Best-Worst Ant System and MAX-MIN Ant System. Azcaxalli starts with a random fuel reload. Ants move through the reactor core channels according to the state transition rule in order to select two fuel assemblies within a 1/8 part of the reactor core and exchange their positions. This rule takes into account pheromone trails and acquired knowledge. Acquired knowledge is obtained from the load cycle values of fuel assemblies. Azcaxalli aims to maximize the cycle length while taking into account several safety parameters. Azcaxalli's objective function involves thermal limits at the end of the cycle, the cold shutdown margin at the beginning of the cycle and the neutron effective multiplication factor for a given cycle exposure. Those parameters are calculated by the CM-PRESTO code. Through the Haling principle it is possible to calculate the end of the cycle. This system was applied to an 18-month equilibrium cycle of the Laguna Verde Nuclear Power Plant in Mexico. The results show that the system obtains fuel reloads with longer cycle lengths than the original fuel reload. Azcaxalli's results are compared with genetic algorithm, tabu search and neural network results.
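The state transition rule cited above, in its generic Ant Colony System form, picks the next choice either by exploiting the best pheromone-weighted option or by biased random exploration. A sketch (generic ACS rule; the reload-specific heuristic and the CM-PRESTO-coupled evaluation are not reproduced):

```python
import random

def ant_state_transition(candidates, pheromone, heuristic,
                         alpha=1.0, beta=2.0, q0=0.9, rng=random):
    """Ant Colony System state-transition rule: with probability q0, exploit
    the candidate with the best pheromone**alpha * heuristic**beta score;
    otherwise explore by sampling proportionally to the scores."""
    scores = {c: (pheromone[c] ** alpha) * (heuristic[c] ** beta) for c in candidates}
    if rng.random() < q0:
        return max(scores, key=scores.get)              # exploitation
    total = sum(scores.values())                        # biased exploration
    r, acc = rng.uniform(0.0, total), 0.0
    for c in candidates:
        acc += scores[c]
        if acc >= r:
            return c
    return candidates[-1]
```

In Azcaxalli's setting the candidates would be fuel assembly positions, with the "acquired knowledge" (load cycle values) playing the role of the heuristic term.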

  14. Development of a multi-objective PBIL evolutionary algorithm applied to a nuclear reactor core reload optimization problem

    International Nuclear Information System (INIS)

    Machado, Marcelo D.; Schirru, Roberto

    2005-01-01

    The nuclear reactor core reload optimization problem consists in finding a pattern of partially burned-up and fresh fuels that optimizes the plant's next operation cycle. This optimization problem has been traditionally solved using an expert's knowledge, but recently artificial intelligence techniques have also been applied successfully. The artificial intelligence optimization techniques generally have a single objective. However, most real-world engineering problems, including nuclear core reload optimization, have more than one objective (multi-objective) and these objectives are usually conflicting. The aim of this work is to develop a tool to solve multi-objective problems based on the Population-Based Incremental Learning (PBIL) algorithm. The new tool is applied to solve the Angra 1 PWR core reload optimization problem with the purpose of creating a Pareto surface, so that a pattern selected from this surface can be applied for the plant's next operation cycle. (author)
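The single-objective PBIL core that the tool above extends is compact: sample a population from a probability vector, then nudge the vector toward the best sample. A sketch on a OneMax toy problem (the multi-objective Pareto-surface extension and the reload evaluation are not reproduced; all parameter values are illustrative):

```python
import random

def pbil_onemax(n_bits=20, pop=30, lr=0.1, iters=300, seed=3):
    """Population-Based Incremental Learning on OneMax: maintain P(bit_i = 1),
    sample a population from it each generation, and move the probabilities
    toward the fittest sample by the learning rate."""
    rng = random.Random(seed)
    p = [0.5] * n_bits                                   # P(bit_i = 1)
    for _ in range(iters):
        samples = [[1 if rng.random() < pi else 0 for pi in p] for _ in range(pop)]
        best = max(samples, key=sum)                     # fittest sample
        p = [(1 - lr) * pi + lr * bi for pi, bi in zip(p, best)]
    return p

probs = pbil_onemax()   # probabilities drift toward 1.0 on every bit
```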

  15. A niching genetic algorithm applied to a nuclear power plant auxiliary feedwater system surveillance tests policy optimization

    International Nuclear Information System (INIS)

    Sacco, W.F.; Lapa, Celso M.F.; Pereira, C.M.N.A.; Oliveira, C.R.E. de

    2006-01-01

    This article extends previous efforts on genetic algorithms (GAs) applied to a nuclear power plant (NPP) auxiliary feedwater system (AFWS) surveillance tests policy optimization. We introduce the application of a niching genetic algorithm (NGA) to this problem and compare its performance to previous results. The NGA maintains population diversity during the search process, thus promoting a greater exploration of the search space. The optimization problem consists in maximizing the system's average availability for a given period of time, considering realistic features such as: (i) aging effects on standby components during the tests; (ii) revealing failures in the tests implies corrective maintenance, increasing outage times; (iii) components have distinct test parameters (outage time, aging factors, etc.); and (iv) tests are not necessarily periodic. We find that the NGA performs better than the conventional GA and the island GA due to a greater exploration of the search space
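The abstract does not specify which niching mechanism the NGA uses; fitness sharing is the classic one: each individual's fitness is divided by a niche count, derating crowded regions so the population holds several niches instead of drifting to a single one. A sketch on scalar genotypes (illustrative, not the paper's availability model):

```python
def sharing(d, sigma=1.0, alpha=1.0):
    """Sharing kernel: contribution of a neighbour at distance d to the
    niche count; zero beyond the niche radius sigma."""
    return 1.0 - (d / sigma) ** alpha if d < sigma else 0.0

def shared_fitnesses(fits, xs, sigma=1.0):
    """Fitness sharing: derate each fitness by its niche count (the sum of
    sharing-kernel values over the whole population)."""
    return [f / sum(sharing(abs(x - y), sigma) for y in xs)
            for f, x in zip(fits, xs)]

# Two crowded individuals near 0.0 get derated; the isolated one at 5.0 does not:
sf = shared_fitnesses([1.0, 1.0, 1.0], [0.0, 0.1, 5.0])
```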

  16. A New Missing Data Imputation Algorithm Applied to Electrical Data Loggers

    Directory of Open Access Journals (Sweden)

    Concepción Crespo Turrado

    2015-12-01

    Full Text Available Nowadays, data collection is a key process in the study of electrical power networks when searching for harmonics and a lack of balance among phases. In this context, the lack of data for any of the main electrical variables (phase-to-neutral voltage, phase-to-phase voltage, current in each phase, and power factor) adversely affects any time series study performed. When this occurs, a data imputation process must be carried out to substitute estimated values for the missing data. This paper presents a novel missing data imputation method based on multivariate adaptive regression splines (MARS) and compares it with the well-known technique called multivariate imputation by chained equations (MICE). The results obtained demonstrate how the proposed method outperforms the MICE algorithm.
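Neither MARS nor MICE fits in a few lines, but the core idea behind regression-based imputation, predicting the variable with gaps from a correlated, complete variable and filling gaps with the predictions, can be sketched. The data values below are purely illustrative.

```python
def impute_by_regression(x, y):
    """Fill missing values (None) in y using a least-squares line fitted on
    the complete (x, y) pairs -- a toy stand-in for MARS/MICE-style imputation."""
    pairs = [(xi, yi) for xi, yi in zip(x, y) if yi is not None]
    n = len(pairs)
    mx = sum(xi for xi, _ in pairs) / n
    my = sum(yi for _, yi in pairs) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in pairs)
    sxx = sum((xi - mx) ** 2 for xi, _ in pairs)
    slope = sxy / sxx
    intercept = my - slope * mx
    return [yi if yi is not None else intercept + slope * xi
            for xi, yi in zip(x, y)]

# a data-logger channel with two gaps; the two variables correlate strongly
x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
y = [2.1, None, 6.1, 8.0, None, 12.0]
filled = impute_by_regression(x, y)
print(filled)
```

MICE generalizes this by cycling such regressions over every incomplete variable until the imputations stabilize; MARS replaces the straight line with piecewise-linear basis functions.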

  17. A simulator-independent optimization tool based on genetic algorithm applied to nuclear reactor design

    International Nuclear Information System (INIS)

    Abreu Pereira, Claudio Marcio Nascimento do; Schirru, Roberto; Martinez, Aquilino Senra

    1999-01-01

    Here we present an engineering optimization tool based on a genetic algorithm, implemented according to the method proposed in recent work that demonstrated the feasibility of this technique in nuclear reactor core design. The tool is simulator-independent in the sense that it can be customized to use most simulators whose input parameters are read from formatted text files and whose outputs are also written to text files. As nuclear reactor simulators generally use this kind of interface, the proposed tool plays an important role in nuclear reactor design. Research reactors may often use non-conventional design approaches, leading to situations in which the nuclear engineer faces new optimization problems. In such cases, a good optimization technique, together with its customizing facility and a friendly man-machine interface, can be very useful. Here, the tool is described and some advantages are outlined. (author)

  18. Double-Stage Delay Multiply and Sum Beamforming Algorithm Applied to Ultrasound Medical Imaging.

    Science.gov (United States)

    Mozaffarzadeh, Moein; Sadeghi, Masume; Mahloojifar, Ali; Orooji, Mahdi

    2018-03-01

    In ultrasound (US) imaging, delay and sum (DAS) is the most common beamformer, but it leads to low-quality images. Delay multiply and sum (DMAS) was introduced to address this problem. However, the reconstructed images using DMAS still suffer from the level of side lobes and low noise suppression. Here, a novel beamforming algorithm is introduced based on expansion of the DMAS formula. We found that there is a DAS algebra inside the expansion, and we proposed use of the DMAS instead of the DAS algebra. The introduced method, namely double-stage DMAS (DS-DMAS), is evaluated numerically and experimentally. The quantitative results indicate that DS-DMAS results in an approximately 25% lower level of side lobes compared with DMAS. Moreover, the introduced method leads to 23%, 22% and 43% improvement in signal-to-noise ratio, full width at half-maximum and contrast ratio, respectively, compared with the DMAS beamformer. Copyright © 2018. Published by Elsevier Inc.
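The DMAS combination rule referenced above, pairwise multiplication of channel samples with a signed square root to preserve dimensionality, can be sketched as follows. The channel values and the coherent/incoherent comparison are illustrative, and the per-channel delays are assumed to have been applied already.

```python
import math

def dmas(samples):
    """Delay-multiply-and-sum of already time-aligned channel samples:
    sum over all channel pairs of the signed square root of their product."""
    y = 0.0
    m = len(samples)
    for i in range(m - 1):
        for j in range(i + 1, m):
            prod = samples[i] * samples[j]
            y += math.copysign(math.sqrt(abs(prod)), prod)
    return y

aligned = [0.9, 1.1, 1.0, 0.8]    # coherent echo: all channels agree in sign
noise   = [0.9, -1.1, 1.0, -0.8]  # incoherent samples partially cancel
print(dmas(aligned), dmas(noise))
```

The DS-DMAS idea in the paper is to expand this sum, notice a DAS-like inner term, and apply the DMAS combination again to that term, hence "double-stage".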

  19. Comparison of the inversion algorithms applied to the ozone vertical profile retrieval from SCIAMACHY limb measurements

    Directory of Open Access Journals (Sweden)

    A. Rozanov

    2007-09-01

    Full Text Available This paper is devoted to an intercomparison of ozone vertical profiles retrieved from the measurements of scattered solar radiation performed by the SCIAMACHY instrument in the limb viewing geometry. Three different inversion algorithms including the prototype of the operational Level 1 to 2 processor to be operated by the European Space Agency are considered. Unlike usual validation studies, this comparison removes the uncertainties arising when comparing measurements made by different instruments probing slightly different air masses and focuses on the uncertainties specific to the modeling-retrieval problem only. The intercomparison was performed for 5 selected orbits of SCIAMACHY showing a good overall agreement of the results in the middle stratosphere, whereas considerable discrepancies were identified in the lower stratosphere and upper troposphere altitude region. Additionally, comparisons with ground-based lidar measurements are shown for selected profiles demonstrating an overall correctness of the retrievals.

  20. Comparison of algorithms for blood stain detection applied to forensic hyperspectral imagery

    Science.gov (United States)

    Yang, Jie; Messinger, David W.; Mathew, Jobin J.; Dube, Roger R.

    2016-05-01

    Blood stains are among the most important types of evidence for forensic investigation. They contain valuable DNA information, and the pattern of the stains can suggest specifics about the nature of the violence that transpired at the scene. Early detection of blood stains is particularly important since the blood reacts physically and chemically with air and materials over time. Accurate identification of blood remnants, including regions that might have been intentionally cleaned, is an important aspect of forensic investigation. Hyperspectral imaging is a potential method to detect blood stains because it is non-contact and provides substantial spectral information that can be used to identify regions in a scene with trace amounts of blood. Scenes of violent crime can be highly complex, given the range of material types and conditions in which blood stains may be found. Some stains are hard to detect by the unaided eye, especially if a conscious effort to clean the scene has occurred (we refer to these as "latent" blood stains). In this paper we present the initial results of a study of the use of hyperspectral imaging algorithms for blood detection in complex scenes. We describe a hyperspectral imaging system which generates images covering the 400 nm - 700 nm visible range with a spectral resolution of 10 nm. Three image sets of 31 wavelength bands were generated using this camera for a simulated indoor crime scene in which blood stains were placed on a T-shirt and walls. To detect blood stains in the scene, Principal Component Analysis (PCA), Subspace Reed Xiaoli Detection (SRXD), and Topological Anomaly Detection (TAD) algorithms were used. Comparison of the three hyperspectral image analysis techniques shows that TAD is most suitable for detecting blood stains and discovering latent blood stains.
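As a hedged illustration of the detector family that SRXD belongs to, a global RX detector scores each spectrum by its Mahalanobis distance from the scene background; SRXD refines this by working in a background subspace. The 31-band synthetic data below is purely illustrative.

```python
import numpy as np

def rx_scores(pixels):
    """Global RX anomaly detector: Mahalanobis distance of each pixel
    spectrum from the scene mean, using the scene covariance."""
    mu = pixels.mean(axis=0)
    cov = np.cov(pixels, rowvar=False)
    cov_inv = np.linalg.inv(cov + 1e-6 * np.eye(cov.shape[0]))  # ridge for stability
    diff = pixels - mu
    return np.einsum("ij,jk,ik->i", diff, cov_inv, diff)

rng = np.random.default_rng(0)
scene = rng.normal(0.5, 0.05, size=(500, 31))  # 31 bands, as in the paper's cubes
scene[0] += 0.4                                # one spectrally anomalous pixel
scores = rx_scores(scene)
print(int(scores.argmax()))  # the anomalous pixel ranks highest
```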

  1. A modified firefly algorithm applied to the nuclear reload problem of a pressurized water reactor

    Energy Technology Data Exchange (ETDEWEB)

    Oliveira, Iona Maghali Santos de; Schirru, Roberto, E-mail: ioliveira@con.ufrj.b, E-mail: schirru@lmp.ufrj.b [Universidade Federal do Rio de Janeiro (PEN/COPPE/UFRJ), RJ (Brazil). Coordenacao dos Programas de Pos-Graduacao de Engenharia. Programa de Engenharia Nuclear

    2011-07-01

    The Nuclear Reactor Reload Problem (NRRP) is an issue of great importance and concern in nuclear engineering. It concerns the periodic operation of replacing part of the fuel of a nuclear reactor. Traditionally, this procedure occurs after a period of operation called a cycle, or whenever the nuclear power plant is unable to continue operating at its nominal power. Studied for more than 40 years, the NRRP still remains a challenge for many optimization techniques due to its multiple objectives concerning economics, safety and reactor physics calculations. Characteristics such as non-linearity, multimodality and high dimensionality also make the NRRP a very complex optimization problem. In broad terms, it aims at finding the arrangement of fuel in the nuclear reactor core that maximizes the operating time. The primary goal is to design fuel loading patterns (LPs) so that the core produces the required energy output in an economical way, without violating safety limits. Since multiple feasible solutions can be obtained to this problem, judicious optimization is required in order to identify the most economical among them. In this sense, this paper presents a new contribution in this area and introduces a modified firefly algorithm (FA) to perform LP optimization for a pressurized water reactor. Based on the original FA introduced by Xin-She Yang in 2008, the proposed methodology seems very promising as an optimizer for the NRRP. The experiments performed, and the comparisons with some well-known, high-performing algorithms from the literature, confirm this statement. (author)
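The original FA update (Yang, 2008) that such methodologies modify moves each firefly toward every brighter one, with attractiveness decaying with squared distance, plus a shrinking random walk. A continuous toy sketch follows; the NRRP version operates on discrete loading patterns, and all parameter values here are illustrative.

```python
import math
import random

def firefly_minimize(f, dim, n=15, alpha=0.2, beta0=1.0, gamma=1.0,
                     iters=150, seed=3):
    """Standard firefly algorithm on a continuous test function (sketch)."""
    rng = random.Random(seed)
    x = [[rng.uniform(-2, 2) for _ in range(dim)] for _ in range(n)]
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if f(x[j]) < f(x[i]):       # firefly j is brighter (better)
                    r2 = sum((a - b) ** 2 for a, b in zip(x[i], x[j]))
                    beta = beta0 * math.exp(-gamma * r2)  # attractiveness
                    x[i] = [a + beta * (b - a) + alpha * (rng.random() - 0.5)
                            for a, b in zip(x[i], x[j])]
        alpha *= 0.95                       # cool the random walk
    return min(x, key=f)

# toy objective: the 2-D sphere function, minimum 0 at the origin
best = firefly_minimize(lambda v: sum(c * c for c in v), dim=2)
print(best)
```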

  2. A modified firefly algorithm applied to the nuclear reload problem of a pressurized water reactor

    International Nuclear Information System (INIS)

    Oliveira, Iona Maghali Santos de; Schirru, Roberto

    2011-01-01

    The Nuclear Reactor Reload Problem (NRRP) is an issue of great importance and concern in nuclear engineering. It concerns the periodic operation of replacing part of the fuel of a nuclear reactor. Traditionally, this procedure occurs after a period of operation called a cycle, or whenever the nuclear power plant is unable to continue operating at its nominal power. Studied for more than 40 years, the NRRP still remains a challenge for many optimization techniques due to its multiple objectives concerning economics, safety and reactor physics calculations. Characteristics such as non-linearity, multimodality and high dimensionality also make the NRRP a very complex optimization problem. In broad terms, it aims at finding the arrangement of fuel in the nuclear reactor core that maximizes the operating time. The primary goal is to design fuel loading patterns (LPs) so that the core produces the required energy output in an economical way, without violating safety limits. Since multiple feasible solutions can be obtained to this problem, judicious optimization is required in order to identify the most economical among them. In this sense, this paper presents a new contribution in this area and introduces a modified firefly algorithm (FA) to perform LP optimization for a pressurized water reactor. Based on the original FA introduced by Xin-She Yang in 2008, the proposed methodology seems very promising as an optimizer for the NRRP. The experiments performed, and the comparisons with some well-known, high-performing algorithms from the literature, confirm this statement. (author)

  3. A hybrid adaptive large neighborhood search algorithm applied to a lot-sizing problem

    DEFF Research Database (Denmark)

    Muller, Laurent Flindt; Spoorendonk, Simon

    This paper presents a hybrid of a general heuristic framework that has been successfully applied to vehicle routing problems and a general purpose MIP solver. The framework uses local search and an adaptive procedure which chooses between a set of large neighborhoods to be searched. A mixed integer...... of a solution and to investigate the feasibility of elements in such a neighborhood. The hybrid heuristic framework is applied to the multi-item capacitated lot sizing problem with dynamic lot sizes, where experiments have been conducted on a series of instances from the literature. On average the heuristic...

  4. Robust algorithms and system theory applied to the reconstruction of primary and secondary vertices

    International Nuclear Information System (INIS)

    Fruehwirth, R.; Liko, D.; Mitaroff, W.; Regler, M.

    1990-01-01

    Filter techniques from system theory have recently been applied to the estimation of track and vertex parameters. In this paper, vertex fitting by the Kalman filter method is discussed. These techniques have been applied to the identification of short-lived decay vertices in the case of high multiplicities as expected at LEP (Monte Carlo data in the DELPHI detector). In this context, the need for further robustification of the Kalman filter method is then discussed. Finally, results of an application with real data from a heavy-ion experiment (NA36) are presented. Here, the vertex fit is used to select the interaction point among possible targets.
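A minimal scalar sketch of the Kalman-filter measurement update that underlies vertex fitting, assuming each track contributes an independent position measurement; a real vertex fit works on multi-dimensional track parameters, and the numbers below are illustrative.

```python
def kalman_update(x, P, z, R):
    """One scalar Kalman measurement update: fuse the current vertex
    estimate (x, variance P) with a new track's intercept z (variance R)."""
    K = P / (P + R)           # Kalman gain
    x_new = x + K * (z - x)   # state update toward the measurement
    P_new = (1 - K) * P       # variance shrinks with every track added
    return x_new, P_new

# fuse three hypothetical track intercepts into one vertex position
x, P = 0.0, 1e6              # vague prior
for z, R in [(1.2, 0.04), (1.0, 0.09), (1.1, 0.01)]:
    x, P = kalman_update(x, P, z, R)
print(x, P)
```

The sequential form is what makes the filter attractive for vertexing: tracks can be added, or removed for robustness checks, one at a time without refitting everything.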

  5. A Mapping Method of SLAM Based on Look-Up Table

    Science.gov (United States)

    Wang, Z.; Li, J.; Wang, A.; Wang, J.

    2017-09-01

    In recent years several V-SLAM (Visual Simultaneous Localization and Mapping) approaches have appeared, showing impressive reconstructions of the world. However, these maps are built with far more than the required information. This limitation comes from processing each key-frame in full. In this paper we present, for the first time, a mapping method for visual SLAM based on a look-up table (LUT) that can improve the mapping effectively. As this method relies on extracting features in each cell the image is divided into, it can obtain a camera pose that is more representative of the whole key-frame. The tracking direction of key-frames is obtained by counting the number of parallax directions of feature points. The LUT stores, for each tracking direction, the cells needed for mapping, which reduces the redundant information in the key-frame and makes mapping more efficient. The results show that a better map with less noise is built in less than one-third of the time. We believe that the capacity of the LUT to build maps efficiently makes it a good choice for the community to investigate in scene reconstruction problems.

  6. Continuous Recording and Interobserver Agreement Algorithms Reported in The Journal of Applied Behavior Analysis (1995–2005)

    Science.gov (United States)

    Mudford, Oliver C; Taylor, Sarah Ann; Martin, Neil T

    2009-01-01

    We reviewed all research articles in 10 recent volumes of the Journal of Applied Behavior Analysis (JABA): Vol. 28(3), 1995, through Vol. 38(2), 2005. Continuous recording was used in the majority (55%) of the 168 articles reporting data on free-operant human behaviors. Three methods for reporting interobserver agreement (exact agreement, block-by-block agreement, and time-window analysis) were employed in more than 10 of the articles that reported continuous recording. Having identified these currently popular agreement computation algorithms, we explain them to assist researchers, software writers, and other consumers of JABA articles. PMID:19721737
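Of the three agreement algorithms named, exact agreement is the simplest to state: two observers' records agree on an interval only when their counts are identical, and the statistic is the percentage of agreeing intervals. A minimal sketch (the interval counts are illustrative):

```python
def exact_agreement(obs1, obs2):
    """Exact-agreement percentage over paired observation intervals:
    an interval counts as agreement only if both counts are identical."""
    if len(obs1) != len(obs2):
        raise ValueError("observers must score the same intervals")
    agreements = sum(a == b for a, b in zip(obs1, obs2))
    return 100.0 * agreements / len(obs1)

# per-interval response counts from two hypothetical observers
pct = exact_agreement([2, 0, 1, 3, 0], [2, 1, 1, 3, 0])
print(pct)  # 80.0
```

Block-by-block agreement instead scores each interval as min(a, b) / max(a, b), and time-window analysis matches individual event records within a tolerance window, so both are more forgiving than the exact method.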

  7. Continuous recording and interobserver agreement algorithms reported in the Journal of Applied Behavior Analysis (1995-2005).

    Science.gov (United States)

    Mudford, Oliver C; Taylor, Sarah Ann; Martin, Neil T

    2009-01-01

    We reviewed all research articles in 10 recent volumes of the Journal of Applied Behavior Analysis (JABA): Vol. 28(3), 1995, through Vol. 38(2), 2005. Continuous recording was used in the majority (55%) of the 168 articles reporting data on free-operant human behaviors. Three methods for reporting interobserver agreement (exact agreement, block-by-block agreement, and time-window analysis) were employed in more than 10 of the articles that reported continuous recording. Having identified these currently popular agreement computation algorithms, we explain them to assist researchers, software writers, and other consumers of JABA articles.

  8. Experimental investigation of slamming impact acted on flat bottom bodies and cumulative damage

    Directory of Open Access Journals (Sweden)

    Hyunkyoung Shin

    2018-05-01

    Full Text Available Most offshore structures including offshore wind turbines, ships, etc. suffer from the impulsive pressure loads due to slamming phenomena in rough waves. The effects of elasticity & plasticity on such slamming loads are investigated through wet free drop test results of several steel unstiffened flat bottom bodies in the rectangular water tank. Also, their cumulative deformations by consecutively repetitive free drops from 1000 mm to 2000 mm in height are measured. Keywords: Slamming phenomena, Impulsive pressure load, Wet free drop test, Flat bottom body, Cumulative damage

  9. Statistical methods applied to gamma-ray spectroscopy algorithms in nuclear security missions.

    Science.gov (United States)

    Fagan, Deborah K; Robinson, Sean M; Runkle, Robert C

    2012-10-01

    Gamma-ray spectroscopy is a critical research and development priority to a range of nuclear security missions, specifically the interdiction of special nuclear material involving the detection and identification of gamma-ray sources. We categorize existing methods by the statistical methods on which they rely and identify methods that have yet to be considered. Current methods estimate the effect of counting uncertainty but in many cases do not address larger sources of decision uncertainty, which may be significantly more complex. Thus, significantly improving algorithm performance may require greater coupling between the problem physics that drives data acquisition and statistical methods that analyze such data. Untapped statistical methods, such as Bayesian model averaging and hierarchical and empirical Bayes methods, could reduce decision uncertainty by rigorously and comprehensively incorporating all sources of uncertainty. Application of such methods should further meet the needs of nuclear security missions by improving upon the existing numerical infrastructure for which these analyses have not been conducted. Copyright © 2012 Elsevier Ltd. All rights reserved.

  10. Investigation of high burnup structures in uranium dioxide applying cellular automata: algorithms and codes

    International Nuclear Information System (INIS)

    Akishina, E.P.; Kostenko, B.F.; Ivanov, V.V.

    2003-01-01

    A new method is suggested for the study of the spatial structures that result from uranium dioxide burnup in the nuclear reactors of modern atomic plants. The method is based on representing images of these structures as the working field of a cellular automaton (CA). First, this has allowed some important quantitative characteristics of the structures to be extracted directly from micrographs of the uranium fuel surface. Second, the CA has been found to allow one to formulate easily the dynamics of the evolution of the studied structures in terms of such micrograph elements as spots, spot boundaries, cracks, etc. A relation has been found between these dynamics and some exactly solvable models of the theory of cellular automata, in particular the Ising model and the vote model. This investigation gives a detailed description of some CA algorithms which allow one to perform fuel surface image processing and to model its evolution caused by burnup or chemical etching. (author)
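The vote model mentioned above can be sketched as a synchronous majority rule on a binary grid: each cell adopts the majority state of its 3x3 neighbourhood. The grain-with-a-defect toy example below is illustrative, not the paper's actual micrograph processing.

```python
def vote_step(grid):
    """One synchronous step of the majority-vote cellular automaton on a
    binary grid (toroidal 3x3 neighbourhood, cell itself included)."""
    n, m = len(grid), len(grid[0])
    new = [[0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            ones = sum(grid[(i + di) % n][(j + dj) % m]
                       for di in (-1, 0, 1) for dj in (-1, 0, 1))
            new[i][j] = 1 if ones >= 5 else 0   # majority of the 9 cells
    return new

# an isolated defect inside a uniform "grain" is healed in one step
grid = [[1] * 5 for _ in range(5)]
grid[2][2] = 0
healed = vote_step(grid)
print(healed[2][2])  # 1
```

Iterating such a rule smooths spot boundaries and absorbs small defects, which is what makes it a plausible dynamics for the evolution of burnup structures in the images.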

  11. Globally Consistent Indoor Mapping via a Decoupling Rotation and Translation Algorithm Applied to RGB-D Camera Output

    Directory of Open Access Journals (Sweden)

    Yuan Liu

    2017-10-01

    Full Text Available This paper presents a novel RGB-D 3D reconstruction algorithm for the indoor environment. The method can produce globally-consistent 3D maps for potential GIS applications. As the consumer RGB-D camera provides a noisy depth image, the proposed algorithm decouples the rotation and translation for a more robust camera pose estimation, which makes full use of the information, but also prevents inaccuracies caused by noisy depth measurements. The uncertainty in the image depth is not only related to the camera device, but also the environment; hence, a novel uncertainty model for depth measurements was developed using Gaussian mixture applied to multi-windows. The plane features in the indoor environment contain valuable information about the global structure, which can guide the convergence of camera pose solutions, and plane and feature point constraints are incorporated in the proposed optimization framework. The proposed method was validated using publicly-available RGB-D benchmarks and obtained good quality trajectory and 3D models, which are difficult for traditional 3D reconstruction algorithms.

  12. Parallel island genetic algorithm applied to a nuclear power plant auxiliary feedwater system surveillance tests policy optimization

    International Nuclear Information System (INIS)

    Pereira, Claudio M.N.A.; Lapa, Celso M.F.

    2003-01-01

    In this work, we focus on the application of an Island Genetic Algorithm (IGA), a coarse-grained parallel genetic algorithm (PGA) model, to a Nuclear Power Plant (NPP) Auxiliary Feedwater System (AFWS) surveillance tests policy optimization. Here, the main objective is to outline, by means of comparisons, the advantages of the IGA over the simple (non-parallel) genetic algorithm (GA), which has been successfully applied in the solution of this kind of problem. The goal of the optimization is to maximize the system's average availability for a given period of time, considering realistic features such as: (i) aging effects on standby components during the tests; (ii) failures revealed by the tests imply corrective maintenance, increasing outage times; (iii) components have distinct test parameters (outage time, aging factors, etc.); and (iv) tests are not necessarily periodic. In our experiments, which were run on a cluster of eight 1-GHz personal computers, we clearly observed gains not only in computational time, which decreased linearly with the number of computers, but also in the optimization outcome.

  13. Dynamic Water Surface Detection Algorithm Applied on PROBA-V Multispectral Data

    Directory of Open Access Journals (Sweden)

    Luc Bertels

    2016-12-01

    Full Text Available Water body detection worldwide using spaceborne remote sensing is a challenging task. A global scale multi-temporal and multi-spectral image analysis method for water body detection was developed. The PROBA-V microsatellite has been fully operational since December 2013 and delivers daily near-global synthesis with a spatial resolution of 1 km and 333 m. The Red, Near-InfRared (NIR) and Short Wave InfRared (SWIR) bands of the atmospherically corrected 10-day synthesis images are first Hue, Saturation and Value (HSV) color transformed and subsequently used in a decision tree classification for water body detection. To minimize commission errors four additional data layers are used: the Normalized Difference Vegetation Index (NDVI), Water Body Potential Mask (WBPM), Permanent Glacier Mask (PGM) and Volcanic Soil Mask (VSM). Threshold values on the hue and value bands, expressed by a parabolic function, are used to detect the water bodies. Beside the water bodies layer, a quality layer, based on the water bodies occurrences, is available in the output product. The performance of the Water Bodies Detection Algorithm (WBDA) was assessed using Landsat 8 scenes over 15 regions selected worldwide. A mean Commission Error (CE) of 1.5% was obtained while a mean Omission Error (OE) of 15.4% was obtained for minimum Water Surface Ratio (WSR) = 0.5, dropping to 9.8% for minimum WSR = 0.6. Here, WSR is defined as the fraction of the PROBA-V pixel covered by water as derived from high spatial resolution images, e.g., Landsat 8. Both the CE = 1.5% and OE = 9.8% (WSR = 0.6) fall within the user requirements of 15%. The WBDA is fully operational in the Copernicus Global Land Service and products are freely available.
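The HSV-plus-threshold decision can be sketched as below. The band-to-channel mapping, the fixed bounds standing in for the paper's parabolic threshold function, and the reflectance values are all assumptions for illustration; the conversion itself uses the standard-library colorsys module.

```python
import colorsys

def looks_like_water(red, nir, swir):
    """Toy WBDA-style decision: treat (SWIR, NIR, Red) reflectances as an
    RGB triple, HSV-transform them, and threshold saturation and value.
    The channel order and thresholds are illustrative assumptions."""
    h, s, v = colorsys.rgb_to_hsv(swir, nir, red)
    # water absorbs strongly in NIR and SWIR, so value stays low while the
    # pixel remains strongly "colored" (saturated) toward the red channel
    return v < 0.25 and s > 0.3

w1 = looks_like_water(0.08, 0.03, 0.02)  # dark water-like pixel
w2 = looks_like_water(0.30, 0.45, 0.50)  # bright vegetation/soil-like pixel
print(w1, w2)  # True False
```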

  14. Enhancing State-of-the-art Multi-objective Optimization Algorithms by Applying Domain Specific Operators

    DEFF Research Database (Denmark)

    Ghoreishi, Newsha; Sørensen, Jan Corfixen; Jørgensen, Bo Nørregaard

    2015-01-01

    optimization problems where the environment does not change dynamically. For that reason, the requirement for convergence in static optimization problems is not as time-critical as for dynamic optimization problems. Most MOEAs use generic variables and operators that scale to static multi-objective optimization...... problem. The domain specific operators only encode existing knowledge about the environment. A comprehensive comparative study is provided to evaluate the results of applying the CONTROLEUM-GA compared to NSGA-II, e-NSGAII and e-MOEA. Experimental results demonstrate clear improvements in convergence time...

  15. Applying Different Independent Component Analysis Algorithms and Support Vector Regression for IT Chain Store Sales Forecasting

    Science.gov (United States)

    Dai, Wensheng

    2014-01-01

    Sales forecasting is one of the most important issues in managing information technology (IT) chain store sales since an IT chain store has many branches. Integrating feature extraction method and prediction tool, such as support vector regression (SVR), is a useful method for constructing an effective sales forecasting scheme. Independent component analysis (ICA) is a novel feature extraction technique and has been widely applied to deal with various forecasting problems. But, up to now, only the basic ICA method (i.e., temporal ICA model) was applied to sale forecasting problem. In this paper, we utilize three different ICA methods including spatial ICA (sICA), temporal ICA (tICA), and spatiotemporal ICA (stICA) to extract features from the sales data and compare their performance in sales forecasting of IT chain store. Experimental results from a real sales data show that the sales forecasting scheme by integrating stICA and SVR outperforms the comparison models in terms of forecasting error. The stICA is a promising tool for extracting effective features from branch sales data and the extracted features can improve the prediction performance of SVR for sales forecasting. PMID:25165740

  16. Applying different independent component analysis algorithms and support vector regression for IT chain store sales forecasting.

    Science.gov (United States)

    Dai, Wensheng; Wu, Jui-Yu; Lu, Chi-Jie

    2014-01-01

    Sales forecasting is one of the most important issues in managing information technology (IT) chain store sales since an IT chain store has many branches. Integrating feature extraction method and prediction tool, such as support vector regression (SVR), is a useful method for constructing an effective sales forecasting scheme. Independent component analysis (ICA) is a novel feature extraction technique and has been widely applied to deal with various forecasting problems. But, up to now, only the basic ICA method (i.e., temporal ICA model) was applied to sale forecasting problem. In this paper, we utilize three different ICA methods including spatial ICA (sICA), temporal ICA (tICA), and spatiotemporal ICA (stICA) to extract features from the sales data and compare their performance in sales forecasting of IT chain store. Experimental results from a real sales data show that the sales forecasting scheme by integrating stICA and SVR outperforms the comparison models in terms of forecasting error. The stICA is a promising tool for extracting effective features from branch sales data and the extracted features can improve the prediction performance of SVR for sales forecasting.

  17. Applying Different Independent Component Analysis Algorithms and Support Vector Regression for IT Chain Store Sales Forecasting

    Directory of Open Access Journals (Sweden)

    Wensheng Dai

    2014-01-01

    Full Text Available Sales forecasting is one of the most important issues in managing information technology (IT) chain store sales since an IT chain store has many branches. Integrating feature extraction method and prediction tool, such as support vector regression (SVR), is a useful method for constructing an effective sales forecasting scheme. Independent component analysis (ICA) is a novel feature extraction technique and has been widely applied to deal with various forecasting problems. But, up to now, only the basic ICA method (i.e., temporal ICA model) was applied to sale forecasting problem. In this paper, we utilize three different ICA methods including spatial ICA (sICA), temporal ICA (tICA), and spatiotemporal ICA (stICA) to extract features from the sales data and compare their performance in sales forecasting of IT chain store. Experimental results from a real sales data show that the sales forecasting scheme by integrating stICA and SVR outperforms the comparison models in terms of forecasting error. The stICA is a promising tool for extracting effective features from branch sales data and the extracted features can improve the prediction performance of SVR for sales forecasting.

  18. Model-based testing with UML applied to a roaming algorithm for bluetooth devices.

    Science.gov (United States)

    Dai, Zhen Ru; Grabowski, Jens; Neukirchen, Helmut; Pals, Holger

    2004-11-01

    In late 2001, the Object Management Group issued a Request for Proposal to develop a testing profile for UML 2.0. In June 2003, the work on the UML 2.0 Testing Profile was finally adopted by the OMG. Since March 2004, it has become an official standard of the OMG. The UML 2.0 Testing Profile provides support for UML based model-driven testing. This paper introduces a methodology on how to use the testing profile in order to modify and extend an existing UML design model for test issues. The application of the methodology will be explained by applying it to an existing UML Model for a Bluetooth device.

  19. Advancement of vision-based SLAM from static to dynamic environments

    CSIR Research Space (South Africa)

    Pancham, A

    2012-11-01

    Full Text Available Simultaneous Localization And Mapping (SLAM) allows a mobile robot to construct a map of an unknown, static environment and simultaneously localize itself. Real world environments, however, have dynamic objects such as people, doors that open...

  20. An iterative algorithm in computerized tomography applied to non-destructive testing

    International Nuclear Information System (INIS)

    Santos, C.A.C.

    1982-10-01

    In the present work, a mathematical model has been developed for two-dimensional image reconstruction in computerized tomography applied to non-destructive testing. The method used is the Algebraic Reconstruction Technique (ART) with additive corrections. The model consists of a discretized system formed by an NxN array of cells (pixels). The attenuation in the object of a collimated beam of gamma rays has been determined for various positions and angles of incidence (projections) in terms of the interaction of the beam with the intercepted pixels. The contribution of each pixel to beam attenuation was determined using the weight function wij. Simulated tests using standard objects, carried out with attenuation coefficients in the range 0.2 to 0.7 cm-1, were made using cell arrays of up to 25x25. Experiments were made using a gamma radiation source (241Am), a table with translational and rotational movements, and a gamma radiation detection system. Results indicate that the convergence obtained in the iterative calculations is a function of the distribution of attenuation coefficients in the pixels, of the number of angular projections, and of the number of iterations. (author)
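The additive-correction ART iteration described above is essentially the Kaczmarz method: sweep over the rays, and correct the pixel vector along each ray's weight row so that it reproduces that ray's measured attenuation. A sketch on a 2x2 phantom follows; the weights and ray geometry are illustrative, and note that four axis-aligned rays cannot uniquely pin down four pixels, though the projections themselves are reproduced.

```python
import numpy as np

def art(W, p, n_sweeps=50, relax=1.0):
    """Additive ART (Kaczmarz): for each ray with weight row w and measured
    projection p_i, correct x by relax * (p_i - w.x) / (w.w) * w."""
    x = np.zeros(W.shape[1])
    for _ in range(n_sweeps):
        for wi, pi in zip(W, p):
            x += relax * (pi - wi @ x) / (wi @ wi) * wi
    return x

# 2x2 phantom of attenuation coefficients, scanned by 4 rays
mu_true = np.array([0.2, 0.7, 0.5, 0.3])
W = np.array([[1, 1, 0, 0],    # ray through the top row
              [0, 0, 1, 1],    # bottom row
              [1, 0, 1, 0],    # left column
              [0, 1, 0, 1]],   # right column
             dtype=float)
p = W @ mu_true                # simulated attenuation measurements
mu = art(W, p)
print(np.abs(W @ mu - p).max())  # projection residual after 50 sweeps
```

This is consistent with the abstract's observation that convergence depends on the distribution of coefficients and on the number of angular projections: with more (oblique) projections the reconstruction approaches the true phantom rather than just matching the measured rays.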

  1. Elastic LiDAR Fusion: Dense Map-Centric Continuous-Time SLAM

    OpenAIRE

    Park, Chanoh; Moghadam, Peyman; Kim, Soohwan; Elfes, Alberto; Fookes, Clinton; Sridharan, Sridha

    2017-01-01

    The concept of continuous-time trajectory representation has brought increased accuracy and efficiency to multi-modal sensor fusion in modern SLAM. However, regardless of these advantages, its offline property caused by the requirement of global batch optimization is critically hindering its relevance for real-time and life-long applications. In this paper, we present a dense map-centric SLAM method based on a continuous-time trajectory to cope with this problem. The proposed system locally f...

  2. Microscopic insight into thermodynamics of conformational changes of SAP-SLAM complex in signal transduction cascade

    Science.gov (United States)

    Samanta, Sudipta; Mukherjee, Sanchita

    2017-04-01

    The signalling lymphocytic activation molecule (SLAM) family of receptors, expressed by an array of immune cells, associate with SLAM-associated protein (SAP)-related molecules, composed of a single SH2 domain architecture. SAP activates the Src-family kinase Fyn after SLAM ligation, resulting in a SLAM-SAP-Fyn complex in which SAP binds the Fyn SH3 domain without involving canonical SH3 or SH2 interactions. This demands insight into this SAP-mediated signalling cascade. Thermodynamics of the conformational changes are extracted from the histograms of dihedral angles obtained from all-atom molecular dynamics simulations of this structurally well characterized SAP-SLAM complex. The results incorporate the binding-induced thermodynamic changes of individual amino acids as well as the secondary structural elements of the protein and the solvent. Stabilization of the peptide comes partially through a strong hydrogen bonding network with the protein, while hydrophobic interactions also play a significant role as the peptide inserts itself into a hydrophobic cavity of the protein. SLAM binding widens SAP's second binding site for Fyn, which is the next step in the signal transduction cascade. The higher stabilization and lower fluctuation of specific residues of SAP in the Fyn binding site, induced by SAP-SLAM complexation, emerge as the key structural elements that trigger the recognition of SAP by the SH3 domain of Fyn. The thermodynamic quantification of the protein upon complexation not only deepens understanding of the established mode of SAP-SLAM interaction but also assists in identifying the residues of the protein responsible for alterations in its activity.

  3. AeroADL: applying the integration of the Suomi-NPP science algorithms with the Algorithm Development Library to the calibration and validation task

    Science.gov (United States)

    Houchin, J. S.

    2014-09-01

    A common problem for the off-line validation of calibration algorithms and algorithm coefficients is being able to run science data through the exact same software used for on-line calibration of that data. The Joint Polar Satellite System (JPSS) program solved part of this problem by making the Algorithm Development Library (ADL) available, which allows the operational algorithm code to be compiled and run on a desktop Linux workstation using flat file input and output. However, this solved only part of the problem, as the toolkit and methods to initiate the processing of data through the algorithms were geared specifically toward the algorithm developer, not the calibration analyst. In algorithm development mode, a limited number of sets of test data are staged for the algorithm once, and then run through the algorithm over and over as the software is developed and debugged. In calibration analyst mode, we are continually running new data sets through the algorithm, which requires significant effort to stage each of those data sets for the algorithm without additional tools. AeroADL solves this second problem by providing a set of scripts that wrap the ADL tools, providing efficient means to stage and process an input data set, to override static calibration coefficient look-up-tables (LUTs) with experimental versions of those tables, and to manage a library containing multiple versions of each of the static LUT files in such a way that the correct set of LUTs required for each algorithm is automatically provided to the algorithm without analyst effort. Using AeroADL, The Aerospace Corporation's analyst team has demonstrated the ability to quickly and efficiently perform analysis tasks for both the VIIRS and OMPS sensors with minimal training on the software tools.

  4. Canine distemper virus isolated from a monkey efficiently replicates on Vero cells expressing non-human primate SLAM receptors but not human SLAM receptor.

    Science.gov (United States)

    Feng, Na; Liu, Yuxiu; Wang, Jianzhong; Xu, Weiwei; Li, Tiansong; Wang, Tiecheng; Wang, Lei; Yu, Yicong; Wang, Hualei; Zhao, Yongkun; Yang, Songtao; Gao, Yuwei; Hu, Guixue; Xia, Xianzhu

    2016-08-02

    In 2008, an outbreak of canine distemper virus (CDV) infection in monkeys was reported in China. We isolated a CDV strain (subsequently named Monkey-BJ01-DV) from lung tissue obtained from a rhesus monkey that died in this outbreak. We evaluated the ability of this virus to replicate on Vero cells expressing SLAM receptors of dog, monkey and human origin, and compared the H gene of Monkey-BJ01-DV with those of other strains. The Monkey-BJ01-DV isolate replicated to the highest titers on Vero cells expressing dog-origin SLAM (10^(5.2±0.2) TCID50/ml) and monkey-origin SLAM (10^(5.4±0.1) TCID50/ml), but achieved markedly lower titers on human-origin SLAM cells (10^(3.3±0.3) TCID50/ml). Phylogenetic analysis of the full-length H gene showed that Monkey-BJ01-DV was closely related to other CDV strains obtained during recent CDV epidemics among species of the Canidae family in China, and these monkey CDV strains (Monkey-BJ01-DV, CYN07-dV, Monkey-KM-01) possessed a number of specific amino acid substitutions (E276V, Q392R, D435Y and I542F) compared to the H proteins of CDV strains epidemic in other animals during the same period. Our results suggest that the monkey-origin CDV H protein could possess specific substitutions adapting it to the new host. Monkey-BJ01-DV can efficiently use monkey- and dog-origin SLAM to infect and replicate in host cells, but further adaptation may be required for efficient replication in host cells expressing the human SLAM receptor.

  5. MULTIOBJECTIVE EVOLUTIONARY ALGORITHMS APPLIED TO MICROSTRIP ANTENNAS DESIGN

    Directory of Open Access Journals (Sweden)

    Juliano Rodrigues Brianeze

    2009-12-01

    This work presents three of the main evolutionary algorithms: Genetic Algorithm, Evolution Strategy and Evolutionary Programming, applied to microstrip antenna design. Efficiency tests were performed, considering the analysis of key physical and geometrical parameters, evolution type, the effects of numerical random generators, evolution operators and selection criteria. These algorithms were validated through the design of microstrip antennas based on the Resonant Cavity Method, and allow multiobjective optimizations considering bandwidth, standing wave ratio and relative material permittivity. The optimal results obtained with these optimization processes were confirmed with the CST Microwave Studio commercial package.

  6. Observer Evaluation of a Metal Artifact Reduction Algorithm Applied to Head and Neck Cone Beam Computed Tomographic Images

    Energy Technology Data Exchange (ETDEWEB)

    Korpics, Mark; Surucu, Murat; Mescioglu, Ibrahim; Alite, Fiori; Block, Alec M.; Choi, Mehee; Emami, Bahman; Harkenrider, Matthew M.; Solanki, Abhishek A.; Roeske, John C., E-mail: jroeske@lumc.edu

    2016-11-15

    Purpose and Objectives: To quantify, through an observer study, the reduction of metal artifacts from dental fillings and implants on cone beam computed tomographic (CBCT) images of patients treated for head and neck (H&N) cancer, using a projection-interpolation algorithm. Methods and Materials: An interpolation-substitution algorithm was applied to H&N CBCT images containing metal artifacts from dental fillings and implants. Image quality with respect to metal artifacts was evaluated subjectively and objectively. First, 6 independent radiation oncologists were asked to rank randomly sorted blinded images (before and after metal artifact reduction) using a 5-point rating scale (1 = severe artifacts; 5 = no artifacts). Second, the standard deviation of different regions of interest (ROI) within each image was calculated and compared with the mean rating scores. Results: The interpolation-substitution technique successfully reduced metal artifacts in 70% of the cases. From a total of 60 images from 15 H&N cancer patients undergoing image guided radiation therapy, the mean rating score on the uncorrected images was 2.3 ± 1.1, versus 3.3 ± 1.0 for the corrected images. The mean difference in ranking score between uncorrected and corrected images was 1.0 (95% confidence interval: 0.9-1.2, P<.05). The standard deviation of each ROI significantly decreased after artifact reduction (P<.01). Moreover, a negative correlation between the mean rating score for each image and the standard deviation of the oral cavity and bilateral cheeks was observed. Conclusion: The interpolation-substitution algorithm is efficient and effective for reducing metal artifacts caused by dental fillings and implants on CBCT images, as demonstrated by the statistically significant increase in observer image quality ranking and by the decrease in ROI standard deviation between uncorrected and corrected images.
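The core of a projection-interpolation scheme like the one evaluated above can be reduced to a one-line idea: detector samples flagged as metal shadow are replaced by interpolation from their uncorrupted neighbours. The mask and the toy numbers below are illustrative assumptions; how the metal trace is detected in the actual algorithm is not specified here.

```python
import numpy as np

def interpolate_metal_trace(projection, metal_mask):
    """Replace detector samples flagged as metal shadow (mask == True) with
    linear interpolation from the neighbouring uncorrupted samples."""
    corrected = projection.astype(float).copy()
    idx = np.arange(projection.size)
    corrected[metal_mask] = np.interp(idx[metal_mask],
                                      idx[~metal_mask],
                                      projection[~metal_mask])
    return corrected

row = np.array([10., 11., 12., 90., 95., 13., 14.])      # spike = metal shadow
mask = np.array([False, False, False, True, True, False, False])
corrected_row = interpolate_metal_trace(row, mask)
print(corrected_row)
```

In a full pipeline this substitution is applied per projection row before reconstruction, so the interpolated sinogram no longer carries the high-attenuation trace that causes streaks.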

  7. Canine Distemper Virus Fusion Activation: Critical Role of Residue E123 of CD150/SLAM.

    Science.gov (United States)

    Khosravi, Mojtaba; Bringolf, Fanny; Röthlisberger, Silvan; Bieringer, Maria; Schneider-Schaulies, Jürgen; Zurbriggen, Andreas; Origgi, Francesco; Plattet, Philippe

    2016-02-01

    Measles virus (MeV) and canine distemper virus (CDV) possess tetrameric attachment proteins (H) and trimeric fusion proteins, which cooperate with either SLAM or nectin 4 receptors to trigger membrane fusion for cell entry. While the MeV H-SLAM cocrystal structure revealed the binding interface, two distinct oligomeric H assemblies were also determined. In one of the conformations, two SLAM units were sandwiched between two discrete H head domains, thus spotlighting two binding interfaces ("front" and "back"). Here, we investigated the functional relevance of both interfaces in activating the CDV membrane fusion machinery. While alanine-scanning mutagenesis identified five critical regulatory residues in the front H-binding site of SLAM, the replacement of a conserved glutamate residue (E at position 123, replaced with A [E123A]) led to the most pronounced impact on fusion promotion. Intriguingly, while determination of the interaction of H with the receptor using soluble constructs revealed reduced binding for the identified SLAM mutants, no effect was recorded when physical interaction was investigated with the full-length counterparts of both molecules. Conversely, although mutagenesis of three strategically selected residues within the back H-binding site of SLAM did not substantially affect fusion triggering, nevertheless, the mutants weakened the H-SLAM interaction recorded with the membrane-anchored protein constructs. Collectively, our findings support a mode of binding between the attachment protein and the V domain of SLAM that is common to all morbilliviruses and suggest a major role of the SLAM residue E123, located at the front H-binding site, in triggering the fusion machinery. However, our data additionally support the hypothesis that other microdomain(s) of both glycoproteins (including the back H-binding site) might be required to achieve fully productive H-SLAM interactions. 
A complete understanding of the measles virus and canine distemper virus

  8. Crossover versus Mutation: A Comparative Analysis of the Evolutionary Strategy of Genetic Algorithms Applied to Combinatorial Optimization Problems

    Directory of Open Access Journals (Sweden)

    E. Osaba

    2014-01-01

    Since their first formulation, genetic algorithms (GAs) have been one of the most widely used techniques to solve combinatorial optimization problems. The basic structure of GAs is well known to the scientific community, and thanks to their easy application and good performance, GAs are the focus of a great deal of research every year. Although many studies have analyzed various aspects of GAs, few studies in the literature objectively analyze the influence of using blind crossover operators on combinatorial optimization problems. For this reason, a deep study of this influence is conducted in this paper. The study is based on a comparison of nine techniques applied to four well-known combinatorial optimization problems. Six of the techniques are GAs with different configurations, and the remaining three are evolutionary algorithms that focus exclusively on the mutation process. Finally, to make the comparison reliable, a statistical study of the results is performed using the normal-distribution z-test.
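A minimal sketch of the kind of comparison the abstract describes: the same generational loop run with a blind one-point crossover versus mutation only, on OneMax as a stand-in combinatorial problem. The problem, parameters and operators here are illustrative assumptions, not those of the study.

```python
import random

def one_max(bits):
    """Fitness: number of 1-bits (a simple stand-in combinatorial problem)."""
    return sum(bits)

def evolve(use_crossover, pop_size=30, n_bits=40, gens=60, seed=1):
    """Generational GA: binary tournament selection, optional 'blind'
    one-point crossover, per-bit mutation at rate 1/n_bits."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(gens):
        def tournament():
            a, b = rng.sample(pop, 2)
            return max(a, b, key=one_max)
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = tournament(), tournament()
            if use_crossover and rng.random() < 0.9:
                cut = rng.randrange(1, n_bits)
                child = p1[:cut] + p2[cut:]   # blind: cut point ignores fitness
            else:
                child = p1[:]
            child = [b ^ (rng.random() < 1.0 / n_bits) for b in child]
            nxt.append(child)
        pop = nxt
    return max(one_max(ind) for ind in pop)

best_cx = evolve(use_crossover=True)
best_mut = evolve(use_crossover=False)
print(best_cx, best_mut)
```

Running both configurations over many seeds and problems, and then applying a significance test to the paired results, is the shape of the statistical comparison the paper performs.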

  9. Crossover versus Mutation: A Comparative Analysis of the Evolutionary Strategy of Genetic Algorithms Applied to Combinatorial Optimization Problems

    Science.gov (United States)

    Osaba, E.; Carballedo, R.; Diaz, F.; Onieva, E.; de la Iglesia, I.; Perallos, A.

    2014-01-01

    Since their first formulation, genetic algorithms (GAs) have been one of the most widely used techniques to solve combinatorial optimization problems. The basic structure of GAs is well known to the scientific community, and thanks to their easy application and good performance, GAs are the focus of a great deal of research every year. Although many studies have analyzed various aspects of GAs, few studies in the literature objectively analyze the influence of using blind crossover operators on combinatorial optimization problems. For this reason, a deep study of this influence is conducted in this paper. The study is based on a comparison of nine techniques applied to four well-known combinatorial optimization problems. Six of the techniques are GAs with different configurations, and the remaining three are evolutionary algorithms that focus exclusively on the mutation process. Finally, to make the comparison reliable, a statistical study of the results is performed using the normal-distribution z-test. PMID:25165731

  10. A new formulation of the pseudocontinuous synthesis algorithm applied to the calculation of neutronic flux in PWR reactors

    International Nuclear Information System (INIS)

    Silva, C.F. da.

    1979-09-01

    A new formulation of the pseudocontinuous synthesis algorithm is applied to solve the static three-dimensional two-group diffusion equations. The new method avoids ambiguities regarding interface conditions, which are inherent to the differential formulation, by resorting to the finite-difference version of the differential equations involved. A considerable number of input/output options, possible core configurations and control rod positionings are implemented, resulting in a very flexible as well as economical code to compute 3D fluxes, power density and reactivities of PWR reactors with partially inserted control rods. The performance of this new code is checked against the IAEA 3D Benchmark problem, and results show that SINT3D yields comparable accuracy with much less computing time and memory than conventional 3D finite-difference codes. (Author)

  11. The Improved Locating Algorithm of Particle Filter Based on ROS Robot

    Science.gov (United States)

    Fang, Xun; Fu, Xiaoyang; Sun, Ming

    2018-03-01

    This paper analyzes the basic theory and primary algorithms of real-time locating systems and SLAM technology based on ROS robots. It proposes an improved particle filter locating algorithm that effectively reduces the time needed to match laser radar scans against the map; in addition, ultra-wideband technology directly accelerates the global efficiency of the FastSLAM algorithm, which no longer needs to search the global map. Meanwhile, re-sampling is reduced by roughly 5/6, which directly eliminates the corresponding matching work in the robot's algorithm.
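For context on the resampling step whose cost the abstract says is reduced, here is standard systematic (low-variance) resampling, a common choice in FastSLAM-style particle filters; the paper's exact variant is not specified, so this is background rather than its method.

```python
import random

def systematic_resample(particles, weights, rng=None):
    """Systematic (low-variance) resampling: draw one uniform offset, then
    take N evenly spaced pointers through the cumulative weights."""
    rng = rng or random.Random(0)
    n = len(particles)
    step = sum(weights) / n
    u = rng.uniform(0, step)
    out, cum, i = [], weights[0], 0
    for _ in range(n):
        while u > cum:
            i += 1
            cum += weights[i]
        out.append(particles[i])
        u += step
    return out

particles = ['a', 'b', 'c', 'd']
weights = [0.1, 0.1, 0.7, 0.1]   # particle 'c' dominates the belief
resampled = systematic_resample(particles, weights)
print(resampled)
```

Because only one random number is drawn, this resampler has lower variance than multinomial resampling, and skipping it on most updates (as the abstract suggests) saves the per-particle map-matching work that follows resampling.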

  12. Entering tennis men’s Grand Slams within the top-10 and its relationship with the fact of winning the tournament. [Acceder a los Grand Slams de tenis masculino desde el top-10 y su relación con el hecho de ganar el torneo].

    Directory of Open Access Journals (Sweden)

    Jaime Prieto-Bermejo

    2016-10-01

    The purpose of this study was to analyse the relationship between entering tennis men's singles Grand Slams within the top-10 ranking (i.e. as title favourites) and the fact of winning the tournament. In order to differentiate between these players in a more powerful way than just considering the ranking number, a cluster algorithm was used to classify the players into two groups depending on their number of ranking points (i.e. higher-level top-10 players vs. lower-level top-10 players). The possible winners entering the tournament outside the top-10 (if any) were also considered. The sample comprised all 92 men's singles Grand Slams played between 1990 and 2012. As expected, the majority of Grand Slams were won by players entering the tournament ranked in the top-10. However, the main result is contrary to the hypothesis that there would be significant differences in the number of titles won in favour of the players entering the tournament from the higher positions of the top-10 compared to those won by the players entering from the lower positions of the top-10. Several factors that may influence whether, and to what extent, a player is a favourite to win a Grand Slam title are presented in the discussion.

  13. Flat plate approximation in the three-dimensional slamming; Heiban kinji ni yoru sanjigen suimen shogeki keisanho ni tsuite

    Energy Technology Data Exchange (ETDEWEB)

    Toyama, Y. [Mitsui Engineering and Shipbuilding Co. Ltd., Tokyo (Japan)

    1996-12-31

    A slamming load generated by the interaction between a ship hull and the water surface is an important load in ensuring the safety of the ship. The flat plate approximation developed by Wagner is used as a two-dimensional slamming theory, but it has a drawback in handling the edges of the flat plate. Therefore, an attempt was made to expand the two-dimensional Wagner's theory to three dimensions. This paper first shows a method to calculate the water-surface slamming of an arbitrary axisymmetric body by using a circular plate approximation. The paper then proposes a method to calculate the slamming pressure distribution and slamming force for the case where the shape of the wetted surface can be approximated by an ellipse. The expansion to three dimensions clarified, to some extent, the characteristics of three-dimensional slamming. In the two-dimensional case, or for a circular cylinder, the wetted area increases rapidly in the initial stage, generating a large slamming force. In the three-dimensional case, however, since the wetted area expands both longitudinally and laterally, the slamming force tends to increase gradually. The maximum slamming pressure was found to be proportional to the square of the velocity of the water-contact boundary in the three-dimensional case, similar to the stagnation pressure on a gliding plate. 12 refs., 17 figs., 1 tab.

  14. Flat plate approximation in the three-dimensional slamming; Heiban kinji ni yoru sanjigen suimen shogeki keisanho ni tsuite

    Energy Technology Data Exchange (ETDEWEB)

    Toyama, Y [Mitsui Engineering and Shipbuilding Co. Ltd., Tokyo (Japan)

    1997-12-31

    A slamming load generated by the interaction between a ship hull and the water surface is an important load in ensuring the safety of the ship. The flat plate approximation developed by Wagner is used as a two-dimensional slamming theory, but it has a drawback in handling the edges of the flat plate. Therefore, an attempt was made to expand the two-dimensional Wagner's theory to three dimensions. This paper first shows a method to calculate the water-surface slamming of an arbitrary axisymmetric body by using a circular plate approximation. The paper then proposes a method to calculate the slamming pressure distribution and slamming force for the case where the shape of the wetted surface can be approximated by an ellipse. The expansion to three dimensions clarified, to some extent, the characteristics of three-dimensional slamming. In the two-dimensional case, or for a circular cylinder, the wetted area increases rapidly in the initial stage, generating a large slamming force. In the three-dimensional case, however, since the wetted area expands both longitudinally and laterally, the slamming force tends to increase gradually. The maximum slamming pressure was found to be proportional to the square of the velocity of the water-contact boundary in the three-dimensional case, similar to the stagnation pressure on a gliding plate. 12 refs., 17 figs., 1 tab.
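The scaling stated in the abstract (peak slamming pressure proportional to the square of the water-contact boundary velocity) can be illustrated with the classical two-dimensional Wagner estimate for a wedge, where the wetted half-width grows as c(t) = (pi/2)·V·t/tan(beta). This is standard Wagner theory used as background, not the paper's three-dimensional formulation, and the deadrise angle, speed and density are illustrative numbers.

```python
import math

def wagner_wedge(V, beta_deg, t, rho=1000.0):
    """2-D Wagner flat-plate estimate for a wedge of deadrise angle beta
    entering calm water at constant speed V: wetted half-width c(t), its
    expansion speed, and the stagnation-like peak pressure at the spray root."""
    beta = math.radians(beta_deg)
    c = 0.5 * math.pi * V * t / math.tan(beta)   # wetted half-width [m]
    cdot = 0.5 * math.pi * V / math.tan(beta)    # water-contact boundary speed [m/s]
    p_max = 0.5 * rho * cdot ** 2                # peak pressure ~ stagnation [Pa]
    return c, cdot, p_max

c1, cd1, p1 = wagner_wedge(V=2.0, beta_deg=10.0, t=0.01)
c2, cd2, p2 = wagner_wedge(V=4.0, beta_deg=10.0, t=0.01)
print(round(p2 / p1, 6))   # doubling V quadruples the peak pressure
```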

  15. An Integrated GNSS/INS/LiDAR-SLAM Positioning Method for Highly Accurate Forest Stem Mapping

    Directory of Open Access Journals (Sweden)

    Chuang Qian

    2016-12-01

    Forest mapping, one of the main components of a forest inventory, is an important driving force in the development of laser scanning. Mobile laser scanning (MLS), in which laser scanners are installed on moving platforms, has been studied as a convenient measurement method for forest mapping over the past several years. Positioning and attitude accuracies are important for forest mapping using MLS systems. Inertial Navigation Systems (INSs) and Global Navigation Satellite Systems (GNSSs) are typical and popular positioning and attitude sensors used in MLS systems. In forest environments, because of the loss of signal due to occlusion and severe multipath effects, the positioning accuracy of GNSS is severely degraded, and even that of GNSS/INS decreases considerably. Light Detection and Ranging (LiDAR)-based Simultaneous Localization and Mapping (SLAM) can achieve higher positioning accuracy in environments containing many features and is commonly implemented in GNSS-denied indoor environments. Forests differ from indoor environments in that the GNSS signal is available to some extent. Although the positioning accuracy of GNSS/INS is reduced, estimates of heading angle and velocity can remain highly accurate even with fewer satellites. GNSS/INS and the LiDAR-based SLAM technique can be effectively integrated to form a sustainable, highly accurate positioning and mapping solution for use in forests without additional hardware costs. In this study, information such as heading angles and velocities extracted from a GNSS/INS is utilized to improve the positioning accuracy of the SLAM solution, and two information-aided SLAM methods are proposed. First, a heading angle-aided SLAM (H-aided SLAM) method is proposed that supplies the heading angle from GNSS/INS to SLAM.
Field test results show that the horizontal positioning accuracy of an entire trajectory of 800 m is 0.13 m and is significantly improved (by 70% compared to that

  16. The SLAM-associated protein (SAP) regulates IFN-g expression in leprosy [La proteína asociada a SLAM (SAP) regula la expresión de IFN-g en lepra]

    Directory of Open Access Journals (Sweden)

    María F. Quiroga

    2004-10-01

    Protective immunity against Mycobacterium leprae requires IFN-g. Tuberculoid leprosy patients locally produce Th1 cytokines, while lepromatous patients produce Th2 cytokines. The signaling lymphocytic activation molecule (SLAM) and the SLAM-associated protein (SAP) participate in the differentiation process that leads to the production of specific patterns of cytokines by activated T cells. To investigate the SLAM/SAP pathway in M. leprae infection, we determined the expression of SAP, IFN-g and SLAM messenger RNA in leprosy patients. We found a direct correlation of SLAM expression with IFN-g expression, whereas the expression of SAP was inversely correlated with the expression of both SLAM and IFN-g. Therefore, our data indicate that SAP might interfere with Th1 cytokine responses, while SLAM expression may contribute to Th1 responses in leprosy. This study further suggests that the SLAM/SAP pathway might be a focal point for the therapeutic modulation of T cell cytokine responses in diseases with dysfunctional Th2 responses.

  17. A multilevel search algorithm for the maximization of submodular functions applied to the quadratic cost partition problem

    NARCIS (Netherlands)

    Goldengorin, B.; Ghosh, D.

    Maximization of submodular functions on a ground set is an NP-hard combinatorial optimization problem. Data-correcting algorithms are among the several algorithms suggested for solving this problem exactly and approximately. From the point of view of Hasse diagrams, data-correcting algorithms use
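The abstract does not detail the data-correcting algorithm itself; as general background on maximizing monotone submodular functions, the standard greedy heuristic (which achieves a 1 - 1/e approximation under a cardinality constraint) looks like this on a toy coverage function:

```python
def greedy_max_cover(sets, k):
    """Greedy maximization of a monotone submodular function (coverage):
    repeatedly add the set with the largest marginal gain."""
    covered, chosen = set(), []
    for _ in range(k):
        gains = [len(s - covered) for s in sets]
        best = max(range(len(sets)), key=gains.__getitem__)
        if gains[best] == 0:
            break                      # no remaining marginal gain
        chosen.append(best)
        covered |= sets[best]
    return chosen, covered

sets = [{1, 2, 3}, {3, 4}, {4, 5, 6, 7}, {1, 7}]
chosen, covered = greedy_max_cover(sets, 2)
print(chosen, covered)
```

Coverage is submodular because adding a set to a larger collection can never increase its marginal gain, which is exactly the diminishing-returns property the greedy rule exploits.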

  18. A Precise and Real-Time Loop-closure Detection for SLAM Using the RSOM Tree

    Directory of Open Access Journals (Sweden)

    Siyang Song

    2015-06-01

    In robotic applications of visual simultaneous localization and mapping (SLAM) techniques, loop-closure detection determines whether or not a current location has previously been visited. We present an online and incremental approach that detects loops when images come from an already visited scene and learns new information from the environment. Instead of utilizing a bag-of-words model, an attributed graph model is applied to represent images and measure the similarity between pairs of images in our method. In order to position a camera in visual environments in real time, the method retrieves images from the database through a clustering tree that we call the RSOM (recursive self-organizing feature map) tree. When a match is found between the current graph and one or more graphs in the database, a threshold is used to judge whether the loop closure is accepted or rejected. The results demonstrate the method's accuracy and real-time performance in tests on several videos collected from a digital camera fixed on vehicles in indoor and outdoor environments.

  19. Shocklets, SLAMS, and Field-Aligned Ion Beams in the Terrestrial Foreshock

    Science.gov (United States)

    Wilson, L. B.; Koval, A.; Sibeck, D. G.; Szabo, A.; Cattell, C. A.; Kasper, J. C.; Maruca, B. A.; Pulupa, M.; Salem, C. S.; Wilber, M.

    2012-01-01

    We present Wind spacecraft observations of ion distributions showing field-aligned beams (FABs) and large-amplitude magnetic fluctuations composed of a series of shocklets and short large-amplitude magnetic structures (SLAMS). The FABs are found to have T_k ≈ 80-850 eV, V_b/V_sw ≈ 1.3-2.4, T_perp,b/T_parallel,b ≈ 1-8, and n_b/n_o ≈ 0.2-11%. Saturation amplitudes for ion/ion resonant and non-resonant instabilities are too small to explain the observed SLAMS amplitudes. We show two examples where groups of SLAMS can act like a local quasi-perpendicular shock reflecting ions to produce the FABs, a scenario distinct from the more common production at the quasi-perpendicular bow shock. The SLAMS exhibit a foot-like magnetic enhancement with a leading magnetosonic whistler train, consistent with previous observations. Strong ion and electron heating are observed within the series of shocklets and SLAMS, with temperatures increasing by factors ≳5 and ≳3, respectively. Both the core and halo electron components show strong perpendicular heating inside the feature.

  20. A Variant of LSD-SLAM Capable of Processing High-Speed Low-Framerate Monocular Datasets

    Science.gov (United States)

    Schmid, S.; Fritsch, D.

    2017-11-01

    We develop a new variant of LSD-SLAM, called C-LSD-SLAM, which is capable of performing monocular tracking and mapping in high-speed, low-framerate situations such as those of the KITTI datasets. The methods used here are robust against the influence of erroneously triangulated points near the epipolar direction, which otherwise cause tracking divergence.

  1. On the performance of an artificial bee colony optimization algorithm applied to the accident diagnosis in a PWR nuclear power plant

    Energy Technology Data Exchange (ETDEWEB)

    Oliveira, Iona Maghali S. de; Schirru, Roberto; Medeiros, Jose A.C.C., E-mail: maghali@lmp.ufrj.b, E-mail: schirru@lmp.ufrj.b, E-mail: canedo@lmp.ufrj.b [Universidade Federal do Rio de Janeiro (UFRJ), RJ (Brazil). Coordenacao dos Programas de Pos-Graduacao de Engenharia. Programa de Engenharia Nuclear

    2009-07-01

    The swarm-based algorithm described in this paper is a new search algorithm capable of locating good solutions efficiently and within a reasonable running time. The work presents a population-based search algorithm that mimics the food-foraging behavior of honey bee swarms and can be regarded as belonging to the category of intelligent optimization tools. In its basic version, the algorithm performs a kind of random search combined with neighborhood search and can be used for solving multi-dimensional numeric problems. Following a description of the algorithm, this paper presents a new event classification system based exclusively on the ability of the algorithm to find the best centroid positions that correctly identify an accident in a PWR nuclear power plant, thus maximizing the number of correct classifications of transients. The simulation results show that the performance of the proposed algorithm is comparable to that of other population-based algorithms when applied to the same problem, with the advantage of employing fewer control parameters. (author)

  2. On the performance of an artificial bee colony optimization algorithm applied to the accident diagnosis in a PWR nuclear power plant

    International Nuclear Information System (INIS)

    Oliveira, Iona Maghali S. de; Schirru, Roberto; Medeiros, Jose A.C.C.

    2009-01-01

    The swarm-based algorithm described in this paper is a new search algorithm capable of locating good solutions efficiently and within a reasonable running time. The work presents a population-based search algorithm that mimics the food foraging behavior of honey bee swarms and can be regarded as belonging to the category of intelligent optimization tools. In its basic version, the algorithm performs a kind of random search combined with neighborhood search and can be used for solving multi-dimensional numeric problems. Following a description of the algorithm, this paper presents a new event classification system based exclusively on the ability of the algorithm to find the best centroid positions for correctly identifying an accident in a PWR nuclear power plant, thus maximizing the number of correct classifications of transients. The simulation results show that the performance of the proposed algorithm is comparable to that of other population-based algorithms when applied to the same problem, with the advantage of employing fewer control parameters. (author)

  3. On a Hopping-Points SVD and Hough Transform-Based Line Detection Algorithm for Robot Localization and Mapping

    Directory of Open Access Journals (Sweden)

    Abhijeet Ravankar

    2016-05-01

    Full Text Available Line detection is an important problem in computer vision, graphics and autonomous robot navigation. Lines detected using a laser range sensor (LRS) mounted on a robot can be used as features to build a map of the environment, and later to localize the robot in the map, in a process known as Simultaneous Localization and Mapping (SLAM). We propose an efficient algorithm for line detection from LRS data using a novel hopping-points Singular Value Decomposition (SVD) and Hough transform-based algorithm, in which SVD is applied to intermittent LRS points to accelerate the algorithm. A reverse-hop mechanism ensures that the end points of the line segments are accurately extracted. Line segments extracted from the proposed algorithm are used to form a map and, subsequently, LRS data points are matched with the line segments to localize the robot. The proposed algorithm eliminates the drawbacks of point-based matching algorithms like the Iterative Closest Point (ICP) algorithm, the performance of which degrades with an increasing number of points. We tested the proposed algorithm for mapping and localization in both simulated and real environments, and found it to detect lines accurately and build maps with good self-localization.
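
    A concrete illustration of the SVD line-fitting step (the paper's hopping-point and reverse-hop logic is omitted): a total-least-squares fit minimizes orthogonal point-to-line distances, and for 2-D laser points the SVD of the centered data reduces to a closed-form 2x2 eigenproblem.

```python
import math

def fit_line_tls(points):
    """Total-least-squares line through 2-D points: returns the centroid and
    the unit direction of the principal axis (largest singular direction)."""
    n = len(points)
    cx = sum(x for x, _ in points) / n
    cy = sum(y for _, y in points) / n
    sxx = sum((x - cx) ** 2 for x, _ in points)
    syy = sum((y - cy) ** 2 for _, y in points)
    sxy = sum((x - cx) * (y - cy) for x, y in points)
    # orientation of the principal axis of the centered scatter matrix
    theta = 0.5 * math.atan2(2.0 * sxy, sxx - syy)
    return (cx, cy), (math.cos(theta), math.sin(theta))

pts = [(t, 2.0 * t + 1.0) for t in range(10)]   # noiseless line y = 2x + 1
(cx, cy), (dx, dy) = fit_line_tls(pts)
```

    Applying SVD to every hop of points rather than every point is what gives the paper's method its speed advantage over per-point schemes.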

  4. [Establishment and application of a Vero cell line stably expressing raccoon dog SLAM, the cellular receptor of canine distemper virus].

    Science.gov (United States)

    Zhao, Jianjun; Yan, Ruxun; Zhang, Hailing; Zhang, Lei; Hu, Bo; Bai, Xue; Shao, Xiqun; Chai, Xiuli; Yan, Xijun; Wu, Wei

    2012-12-04

    The signaling lymphocyte activation molecule (SLAM, also known as CD150) is used as a cellular receptor by canine distemper virus (CDV). Wild-type strains of CDV can be isolated and propagated efficiently in non-lymphoid cells expressing this protein. Our aim was to establish a Vero cell line expressing raccoon dog SLAM (rSLAM) to efficiently isolate CDV from pathological samples. A eukaryotic expression plasmid, pIRES2-EGFP-rSLAMhis, containing the rSLAM gene fused with a six-histidine-coding sequence, the EGFP gene, and a neomycin resistance gene was constructed. After transfection with the plasmid, a stable cell line, Vero-rSLAM, was screened from Vero cells using the EGFP reporter and G418 resistance for identification. Three CD-positive specimens from infected foxes and raccoon dogs were inoculated onto Vero-rSLAM cells for CDV isolation. Foxes and raccoon dogs were inoculated subcutaneously with the LN(10)f1 strain at a dose of 4 x 10(2.39) TCID50 to evaluate the pathogenicity of the CDV isolates. The rSLAM fusion gene was shown to be stably transcribed and expressed in Vero-rSLAM cells by RT-PCR and immunohistochemistry assays. Three CDV strains were isolated successfully in Vero-rSLAM cells 36-48 hours after inoculation with spleen or lung specimens from foxes and raccoon dogs with distemper. By contrast, no CDV was recovered from those CD-positive specimens when Vero cells were used for virus isolation. Foxes and raccoon dogs infected with the LN(10)f1 strain all showed typical CD symptoms and high mortality (2/3 for foxes and 3/3 for raccoon dogs) within 22 days post challenge. Our results indicate that Vero-rSLAM cells stably expressing raccoon dog SLAM are highly sensitive to CDV in clinical specimens and that the CDV isolates maintain high virulence in their host animals.

  5. Resting lymphocyte transduction with measles virus glycoprotein pseudotyped lentiviral vectors relies on CD46 and SLAM

    International Nuclear Information System (INIS)

    Zhou Qi; Schneider, Irene C.; Gallet, Manuela; Kneissl, Sabrina; Buchholz, Christian J.

    2011-01-01

    The measles virus (MV) glycoproteins hemagglutinin (H) and fusion (F) were recently shown to mediate transduction of resting lymphocytes by lentiviral vectors. MV vaccine strains use CD46 or signaling lymphocyte activation molecule (SLAM) as receptor for cell entry. A panel of H protein mutants derived from vaccine strain or wild-type MVs that lost or gained CD46 or SLAM receptor usage were investigated for their ability to mediate gene transfer into unstimulated T lymphocytes. The results demonstrate that CD46 is sufficient for efficient vector particle association with unstimulated lymphocytes. For stable gene transfer into these cells, however, both MV receptors were found to be essential.

  6. MSGD: Scalable back-end for indoor magnetic field-based GraphSLAM

    OpenAIRE

    Gao, C; Harle, Robert Keith

    2017-01-01

    Simultaneous Localisation and Mapping (SLAM) systems that recover the trajectory of a robot or mobile device are characterised by a front-end and back-end. The front-end uses sensor observations to identify loop closures; the back-end optimises the estimated trajectory to be consistent with these closures. The GraphSLAM framework formulates the back-end problem as a graph-based optimisation on a pose graph. This paper describes a back-end system optimised for very dense sequence-based lo...

  7. A new design for SLAM front-end based on recursive SOM

    Science.gov (United States)

    Yang, Xuesi; Xia, Shengping

    2015-12-01

    Aiming at graph optimization-based monocular SLAM, a novel design for the front-end in single-camera SLAM is proposed, based on the recursive SOM. Pixel intensities are used directly to achieve image registration and motion estimation, which saves time compared with current appearance-based frameworks, which usually include feature extraction and matching. Once a key-frame is identified, a recursive SOM is used to perform loop-closure detection, resulting in more precise localization. An experiment on a public dataset validates the method, yielding faster and effective results.

  8. A new method for class prediction based on signed-rank algorithms applied to Affymetrix® microarray experiments

    Directory of Open Access Journals (Sweden)

    Vassal Aurélien

    2008-01-01

    Full Text Available Abstract Background The huge amount of data generated by DNA chips is a powerful basis to classify various pathologies. However, constant evolution of microarray technology makes it difficult to mix data from different chip types for class prediction of limited sample populations. Affymetrix® technology provides both a quantitative fluorescence signal and a decision (detection call: absent or present) based on signed-rank algorithms applied to several hybridization repeats of each gene, with a per-chip normalization. We developed a new prediction method for class belonging based on the detection call only from recent Affymetrix chip types. Biological data were obtained by hybridization on U133A, U133B and U133Plus 2.0 microarrays of purified normal B cells and cells from three independent groups of multiple myeloma (MM) patients. Results After a call-based data reduction step to filter out non-class-discriminative probe sets, the gene list obtained was reduced to a predictor with correction for multiple testing by iterative deletion of probe sets that sequentially improve inter-class comparisons and their significance. The error rate of the method was determined using leave-one-out and 5-fold cross-validation. It was successfully applied to (i) determine a sex predictor with the normal donor group, classifying gender with no error in all patient groups except for male MM samples with a Y chromosome deletion, (ii) predict the immunoglobulin light and heavy chains expressed by the malignant myeloma clones of the validation group and (iii) predict sex, light and heavy chain nature for every new patient. Finally, this method was shown to be powerful when compared to the popular classification method Prediction Analysis of Microarray (PAM). Conclusion This normalization-free method is routinely used for quality control and correction of collection errors in patient reports to clinicians. It can be easily extended to multiple class prediction suitable with

  9. Feasible Initial Population with Genetic Diversity for a Population-Based Algorithm Applied to the Vehicle Routing Problem with Time Windows

    Directory of Open Access Journals (Sweden)

    Marco Antonio Cruz-Chávez

    2016-01-01

    Full Text Available A stochastic algorithm for obtaining feasible initial populations for the Vehicle Routing Problem with Time Windows is presented. The theoretical formulation of the Vehicle Routing Problem with Time Windows is explained. The proposed method is primarily divided into a clustering algorithm and a two-phase algorithm. The first step is the application of a modified k-means clustering algorithm, which is proposed in this paper. The two-phase algorithm evaluates a partial solution to transform it into a feasible individual. The two-phase algorithm consists of a hybridization of four kinds of insertions which interact randomly to obtain feasible individuals. It has been proven that different kinds of insertions affect the diversity among individuals in initial populations, which is crucial for the behavior of population-based algorithms. A modification of the Hamming distance method is applied to the populations generated for the Vehicle Routing Problem with Time Windows to evaluate their diversity. Experimental tests were performed based on the Solomon benchmark instances. Experimental results show that the proposed method facilitates generation of highly diverse populations, which vary according to the type and distribution of the instances.
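
    The diversity evaluation mentioned above can be illustrated with the plain (unmodified) Hamming distance over route chromosomes; the paper's modified variant is not reproduced here, and the length normalization is an assumption for readability.

```python
def hamming(a, b):
    """Number of positions at which two equal-length chromosomes differ."""
    return sum(x != y for x, y in zip(a, b))

def population_diversity(pop):
    """Average pairwise Hamming distance, normalized to [0, 1] by
    chromosome length (a simple stand-in for the modified measure)."""
    n = len(pop)
    total = sum(hamming(pop[i], pop[j])
                for i in range(n) for j in range(i + 1, n))
    return total / (n * (n - 1) // 2 * len(pop[0]))

d_same = population_diversity([[1, 2, 3, 4], [1, 2, 3, 4]])   # clones
d_mixed = population_diversity([[1, 2, 3, 4], [4, 3, 2, 1], [2, 1, 4, 3]])
```

    A diversity near 0 signals a population of near-clones, which tends to stall a population-based search; values near 1 indicate the highly diverse starting populations the method aims for.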

  10. Evaluation of a wavelet-based compression algorithm applied to the silicon drift detectors data of the ALICE experiment at CERN

    International Nuclear Information System (INIS)

    Falchieri, Davide; Gandolfi, Enzo; Masotti, Matteo

    2004-01-01

    This paper evaluates the performance of a wavelet-based compression algorithm applied to the data produced by the silicon drift detectors of the ALICE experiment at CERN. This compression algorithm is a general-purpose lossy technique; in other words, its application could prove useful even on a wide range of other data reduction problems. In particular, the design targets relevant for our wavelet-based compression algorithm are the following: a high compression coefficient, a reconstruction error as small as possible and a very limited execution time. Interestingly, the results obtained are quite close to the ones achieved by the algorithm implemented in the first prototype of the chip CARLOS, the chip that will be used in the silicon drift detector readout chain.

  11. SU-E-T-33: A Feasibility-Seeking Algorithm Applied to Planning of Intensity Modulated Proton Therapy: A Proof of Principle Study

    International Nuclear Information System (INIS)

    Penfold, S; Casiraghi, M; Dou, T; Schulte, R; Censor, Y

    2015-01-01

    Purpose: To investigate the applicability of feasibility-seeking cyclic orthogonal projections to the field of intensity modulated proton therapy (IMPT) inverse planning. Feasibility of constraints only, as opposed to optimization of a merit function, is less demanding algorithmically and holds the promise of parallel computation with non-cyclic orthogonal projection algorithms such as string-averaging or block-iterative strategies. Methods: A virtual 2D geometry was designed containing a C-shaped planning target volume (PTV) surrounding an organ at risk (OAR). The geometry was pixelized into 1 mm pixels. Four beams containing a subset of proton pencil beams were simulated in Geant4 to provide the system matrix A, whose elements a_ij correspond to the dose delivered to pixel i by a unit intensity pencil beam j. A cyclic orthogonal projections algorithm was applied with the goal of finding a pencil beam intensity distribution that would meet the following dose requirements: D-OAR < 54 Gy and 57 Gy < D-PTV < 64.2 Gy. The cyclic algorithm was based on the concept of orthogonal projections onto half-spaces according to the Agmon-Motzkin-Schoenberg algorithm, also known as ‘ART for inequalities’. Results: The cyclic orthogonal projections algorithm resulted in less than 5% of the PTV pixels and less than 1% of the OAR pixels violating their dose constraints. Because of the abutting OAR-PTV geometry and the realistic modelling of the pencil beam penumbra, complete satisfaction of the dose objectives was not achieved, although this would be a clinically acceptable plan for a meningioma abutting the brainstem, for example. Conclusion: The cyclic orthogonal projections algorithm was demonstrated to be an effective tool for inverse IMPT planning in the 2D test geometry described. We plan to further develop this linear algorithm to be capable of incorporating dose-volume constraints into the feasibility-seeking algorithm.
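
    The ‘ART for inequalities’ step has a compact form: each dose bound defines a half-space in intensity space, and the algorithm cyclically projects the current iterate onto each violated half-space. A toy sketch on box constraints, not the Geant4-derived system matrix of the study:

```python
def project_halfspace(x, a, b):
    """Orthogonal projection of x onto the half-space {x : a.x <= b}."""
    viol = sum(ai * xi for ai, xi in zip(a, x)) - b
    if viol <= 0.0:
        return x                      # already feasible for this constraint
    norm2 = sum(ai * ai for ai in a)
    return [xi - viol * ai / norm2 for ai, xi in zip(a, x)]

def cyclic_projections(constraints, x0, sweeps=100):
    """Agmon-Motzkin-Schoenberg cyclic scheme ('ART for inequalities')."""
    x = list(x0)
    for _ in range(sweeps):
        for a, b in constraints:
            x = project_halfspace(x, a, b)
    return x

# feasibility problem: 1 <= x <= 2 and 1 <= y <= 2, written as a.x <= b
cons = [([1.0, 0.0], 2.0), ([-1.0, 0.0], -1.0),
        ([0.0, 1.0], 2.0), ([0.0, -1.0], -1.0)]
sol = cyclic_projections(cons, [5.0, -3.0])
```

    Because each projection touches only one constraint row at a time, block or string-averaging variants of the same projections parallelize naturally, which is the appeal noted in the abstract.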

  12. Improved adaptive genetic algorithm with sparsity constraint applied to thermal neutron CT reconstruction of two-phase flow

    Science.gov (United States)

    Yan, Mingfei; Hu, Huasi; Otake, Yoshie; Taketani, Atsushi; Wakabayashi, Yasuo; Yanagimachi, Shinzo; Wang, Sheng; Pan, Ziheng; Hu, Guang

    2018-05-01

    Thermal neutron computed tomography (CT) is a useful tool for visualizing two-phase flow due to its high imaging contrast and the strong penetrability of neutrons through tube walls constructed of metallic material. A novel approach for two-phase flow CT reconstruction based on an improved adaptive genetic algorithm with sparsity constraint (IAGA-SC) is proposed in this paper. In the algorithm, the neighborhood mutation operator is used to ensure the continuity of the reconstructed object. The adaptive crossover probability P_c and mutation probability P_m are improved to help the adaptive genetic algorithm (AGA) reach the global optimum. The reconstructed results for projection data obtained from Monte Carlo simulation indicate that the comprehensive performance of the IAGA-SC algorithm exceeds that of the adaptive steepest descent-projection onto convex sets (ASD-POCS) algorithm in restoring typical and complex flow regimes. It shows particular advantages in restoring simply connected flow regimes and the shapes of objects. In addition, a CT experiment on two-phase flow phantoms was conducted on an accelerator-driven neutron source to verify the performance of the developed IAGA-SC algorithm.

  13. Soft real-time EPICS extensions for fast control: A case study applied to a TCV equilibrium algorithm

    International Nuclear Information System (INIS)

    Castro, R.; Romero, J.A.; Vega, J.; Nieto, J.; Ruiz, M.; Sanz, D.; Barrera, E.; De Arcas, G.

    2014-01-01

    Highlights: • Implementation of a soft real-time control system based on EPICS technology. • High data throughput control system implementation. • GPU technology applied to fast control. • EPICS-based fast control solution. • Fast control and data acquisition in Linux. - Abstract: For new control system development, ITER distributes CODAC Core System, a software package based on Linux RedHat that includes EPICS (Experimental Physics and Industrial Control System) as its software control system solution. EPICS technology is widely used for implementing control systems in research experiments and is a very well tested technology, but it presents important shortcomings with respect to fast control requirements. To manage and process massive amounts of acquired data, EPICS requires additional functions such as data block oriented transmissions, links with speed-optimized data buffers, and synchronization mechanisms not based on system interrupts. This limitation of EPICS became clear during the development of the Fast Plant System Controller Prototype for ITER based on the PXIe platform. In this work, we present a solution that, on the one hand, is completely compatible with and based on EPICS technology, and on the other hand, extends EPICS for implementing high performance fast control systems with soft real-time characteristics. This development includes components such as data acquisition, processing, monitoring, data archiving, and data streaming (via network and shared memory). Additionally, it is important to remark that this system is compatible with multiple Graphics Processing Units (GPUs) and is able to integrate MatLab code through MatLab engine connections. It preserves EPICS modularity, enabling system modification or extension with a simple change of configuration, and finally it enables parallelization based on data distribution to different processing components. With the objective of illustrating the presented solution in an actual

  14. Applying the ACSM Preparticipation Screening Algorithm to U.S. Adults: National Health and Nutrition Examination Survey 2001-2004.

    Science.gov (United States)

    Whitfield, Geoffrey P; Riebe, Deborah; Magal, Meir; Liguori, Gary

    2017-10-01

    For most people, the benefits of physical activity far outweigh the risks. Research has suggested that exercise preparticipation questionnaires might refer an unwarranted number of adults for medical evaluation before exercise initiation, creating a potential barrier to adoption. The new American College of Sports Medicine (ACSM) prescreening algorithm relies on current exercise participation; history and symptoms of cardiovascular, metabolic, or renal disease; and desired exercise intensity to determine referral status. Our purpose was to compare the referral proportion of the ACSM algorithm to that of previous screening tools using a representative sample of U.S. adults. On the basis of responses to health questionnaires from the 2001-2004 National Health and Nutrition Examination Survey, we calculated the proportion of adults 40 yr or older who would be referred for medical clearance before exercise participation based on the ACSM algorithm. Results were stratified by age and sex and compared with previous results for the ACSM/American Heart Association Preparticipation Questionnaire and the Physical Activity Readiness Questionnaire. On the basis of the ACSM algorithm, 2.6% of adults would be referred only before beginning vigorous exercise and 54.2% of respondents would be referred before beginning any exercise. Men were more frequently referred before vigorous exercise, and women were more frequently referred before any exercise. Referral was more common with increasing age. The ACSM algorithm referred a smaller proportion of adults for preparticipation medical clearance than the previously examined questionnaires. Although additional validation is needed to determine whether the algorithm correctly identifies those at risk for cardiovascular complications, the revised ACSM algorithm referred fewer respondents than other screening tools. A lower referral proportion may mitigate an important barrier of medical clearance from exercise participation.
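
    The referral logic described above has a simple decision-tree shape, rendered below as code. This is a simplified, hypothetical reading of an algorithm of this kind (current exercise habits, known disease, symptoms, desired intensity), not the published ACSM flowchart, and it is not medical guidance.

```python
def referral_before_exercise(exercises_regularly: bool,
                             known_cv_metabolic_renal_disease: bool,
                             signs_or_symptoms: bool,
                             desired_intensity: str) -> bool:
    """Return True if medical clearance would be recommended before starting
    (or progressing to) exercise at the desired intensity.
    Hypothetical simplification -- NOT the published ACSM algorithm."""
    if signs_or_symptoms:
        return True                     # symptomatic: clearance before any exercise
    if known_cv_metabolic_renal_disease:
        if not exercises_regularly:
            return True                 # disease + inactive: clearance before any exercise
        return desired_intensity == "vigorous"  # disease + active: only for vigorous
    return False                        # asymptomatic, no known disease
```

    Keying the decision on current exercise participation and symptoms, rather than on broad risk-factor counts, is what lets an algorithm of this shape refer fewer people than older questionnaires.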

  15. Efficient data association for view based SLAM using connected dominating sets

    NARCIS (Netherlands)

    Booij, O.; Zivkovic, Z.; Kröse, B.

    2009-01-01

    Loop closing in vision based SLAM applications is a difficult task. Comparing new image data with all previously acquired image data is practically impossible because of the high computational costs. Most approaches therefore compare new data with only a subset of the old data, for example by

  16. Sufficient Condition for Estimation in Designing H∞ Filter-Based SLAM

    Directory of Open Access Journals (Sweden)

    Nur Aqilah Othman

    2015-01-01

    Full Text Available The extended Kalman filter (EKF) is often employed in determining the position of a mobile robot and landmarks in simultaneous localization and mapping (SLAM). Nonetheless, using the EKF has some disadvantages, namely, the requirement of Gaussian distributions for the state and noises, as well as the fact that it requires the smallest possible initial state covariance. This has led researchers to find alternative ways to mitigate the aforementioned shortcomings. Therefore, this study proposes an alternative technique that implements the H∞ filter in SLAM instead of the EKF. In implementing the H∞ filter in SLAM, the parameters of the filter, especially γ, need to be properly defined to prevent the finite escape time problem. Hence, this study proposes a sufficient condition for estimation purposes. Two distinct cases of initial state covariance are analysed, considering an indoor environment, to ensure that the best solution to the SLAM problem exists, along with considerations of the statistical behaviour of process and measurement noises. If the prescribed conditions are not satisfied, then the estimation would exhibit unbounded uncertainties and consequently result in erroneous inferences about the robot and landmark estimates. The simulation results have shown the reliability and consistency suggested by the theoretical analysis and our previous findings.

  17. Grotoco@SLAM: Second Language Acquisition Modeling with Simple Features, Learners and Task-wise Models

    DEFF Research Database (Denmark)

    Klerke, Sigrid; Martínez Alonso, Héctor; Plank, Barbara

    2018-01-01

    We present our submission to the 2018 Duolingo Shared Task on Second Language Acquisition Modeling (SLAM). We focus on evaluating a range of features for the task, including user-derived measures, while examining how far we can get with a simple linear classifier. Our analysis reveals that errors...

  18. Hydro-elastic response of ship structures to slamming induced whipping

    NARCIS (Netherlands)

    Tuitman, J.T.

    2010-01-01

    Slamming induced whipping can significantly increase the structural loading of ships. Although this is well-known, the whipping contribution to the structural loading is rarely taken into account when computing the structural loading. An exception are the "dynamic loading" factors found in

  19. Case note: CBB (SLAM!FM t. Minister van EZ en Radio 538)

    NARCIS (Netherlands)

    Hins, W.

    2008-01-01

    In the 1983 allocation of FM radio frequencies, SLAM!FM acquired a frequency package designated for recent niche music. Its pledge to broadcast only a small percentage of hit music was decisive in that allocation. Problems later arose over the question of what 'hit music' actually is. The

  20. A comparison of three-dimensional nonequilibrium solution algorithms applied to hypersonic flows with stiff chemical source terms

    Science.gov (United States)

    Palmer, Grant; Venkatapathy, Ethiraj

    1993-01-01

    Three solution algorithms, explicit under-relaxation, point implicit, and lower-upper symmetric Gauss-Seidel (LUSGS), are used to compute nonequilibrium flow around the Apollo 4 return capsule at 62 km altitude. By varying the Mach number, the efficiency and robustness of the solution algorithms were tested at different levels of chemical stiffness. The performance of the solution algorithms degraded as the Mach number and stiffness of the flow increased. At Mach 15, 23, and 30, the LUSGS method produces an eight-order-of-magnitude drop in the L2 norm of the energy residual in one-third to one-half the Cray C-90 computer time required by the point implicit and explicit under-relaxation methods. The explicit under-relaxation algorithm experienced convergence difficulties at Mach 23 and above. At Mach 40, the performance of the LUSGS algorithm deteriorates to the point that it is outperformed by the point implicit method. The effects of the viscous terms are investigated, and grid dependency questions are explored.
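
    Although the flow solver itself is far more involved, the implicit-versus-explicit trade-off can be illustrated on a linear system: a symmetric Gauss-Seidel sweep (the building block of LU-SGS-type schemes) drives the residual down in far fewer sweeps than a Jacobi (explicit-like) sweep. A toy sketch under that analogy, with an illustrative diagonally dominant system:

```python
def jacobi_sweep(A, b, x):
    """One explicit (Jacobi) sweep: every unknown updated from old values."""
    n = len(x)
    return [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
            for i in range(n)]

def sgs_sweep(A, b, x):
    """One symmetric Gauss-Seidel sweep: forward then backward pass,
    using the freshest values as they become available."""
    x = x[:]
    n = len(x)
    for order in (range(n), range(n - 1, -1, -1)):
        for i in order:
            x[i] = (b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
    return x

def sweeps_to_tol(sweep, A, b, tol=1e-8, max_it=1000):
    """Count sweeps until the max-norm residual falls below tol."""
    x = [0.0] * len(b)
    for k in range(1, max_it + 1):
        x = sweep(A, b, x)
        res = max(abs(b[i] - sum(A[i][j] * x[j] for j in range(len(x))))
                  for i in range(len(x)))
        if res < tol:
            return k
    return max_it

A = [[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]]
b = [2.0, 4.0, 10.0]
k_jacobi = sweeps_to_tol(jacobi_sweep, A, b)
k_sgs = sweeps_to_tol(sgs_sweep, A, b)
```

    In stiff chemistry the gap widens further, which mirrors the paper's finding that the implicit LUSGS scheme wins decisively at moderate stiffness.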

  1. Algorithmic analysis of relational learning processes in instructional technology: Some implications for basic, translational, and applied research.

    Science.gov (United States)

    McIlvane, William J; Kledaras, Joanne B; Gerard, Christophe J; Wilde, Lorin; Smelson, David

    2018-07-01

    A few noteworthy exceptions notwithstanding, quantitative analyses of relational learning are most often simple descriptive measures of study outcomes. For example, studies of stimulus equivalence have made much progress using measures such as percentage consistent with equivalence relations, discrimination ratio, and response latency. Although procedures may have ad hoc variations, they remain fairly similar across studies. Comparison studies of training variables that lead to different outcomes are few. Yet to be developed are tools designed specifically for dynamic and/or parametric analyses of relational learning processes. This paper focuses on recent studies to develop (1) quality computer-based programmed instruction for supporting relational learning in children with autism spectrum disorders and intellectual disabilities and (2) formal algorithms that permit ongoing, dynamic assessment of learner performance and procedure changes to optimize instructional efficacy and efficiency. Because these algorithms have a strong basis in evidence and in theories of stimulus control, they may also have utility for basic and translational research. We present an overview of the research program, details of algorithm features, and summary results that illustrate their possible benefits. We also argue that such algorithm development may encourage parametric research, help in integrating new research findings, and support in-depth quantitative analyses of stimulus control processes in relational learning. Such algorithms may also serve to model the control of basic behavioral processes, which is important to the design of effective programmed instruction for human learners with and without functional disabilities. Copyright © 2018 Elsevier B.V. All rights reserved.

  2. Applying a New Adaptive Genetic Algorithm to Study the Layout of Drilling Equipment in Semisubmersible Drilling Platforms

    Directory of Open Access Journals (Sweden)

    Wensheng Xiao

    2015-01-01

    Full Text Available This study proposes a new selection method, called trisection of the population, for genetic algorithm selection operations. In this new algorithm, the 2N/3 parent individuals with the highest fitness are genetically manipulated to reproduce offspring. This selection method ensures a high rate of effective population evolution and overcomes the tendency of the population to fall into local optimal solutions. Rastrigin’s test function was selected to verify the superiority of the method. Based on the characteristics of the arctangent function, adaptive methods for the genetic algorithm’s crossover and mutation probabilities were proposed. These allow individuals close to the average fitness to be operated on with a greater probability of crossover and mutation, while individuals close to the maximum fitness are not easily destroyed. This study also analyzed the equipment layout constraints and objective functions of deep-water semisubmersible drilling platforms. The improved genetic algorithm was used to solve the layout plan. Optimization results demonstrate the effectiveness of the improved algorithm and the fit of the layout plans.
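
    One plausible arctangent-shaped schedule for the adaptive crossover probability is sketched below. The exact formula and the bounds pc_min/pc_max are assumptions, not the paper's published expressions; the sketch only reproduces the stated behavior (near-average individuals get high crossover probability, near-best individuals are protected).

```python
import math

def adaptive_crossover_prob(f, f_avg, f_max, pc_min=0.5, pc_max=0.9):
    """Arctangent-shaped adaptive crossover probability (illustrative sketch):
    individuals at or below the average fitness get pc_max; as fitness
    approaches the population maximum, the probability falls smoothly
    toward pc_min, so elite individuals are less likely to be disrupted."""
    if f <= f_avg or f_max <= f_avg:
        return pc_max
    t = (f - f_avg) / (f_max - f_avg)            # 0 at average, 1 at the best
    shape = math.atan(5.0 * t) / math.atan(5.0)  # smooth ramp from 0 to 1
    return pc_max - (pc_max - pc_min) * shape
```

    The same schedule can be reused for the mutation probability with smaller bounds (e.g. 0.01 to 0.1), giving the paired adaptive P_c and P_m the abstract describes.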

  3. Initialization of a fractional order identification algorithm applied for Lithium-ion battery modeling in time domain

    Science.gov (United States)

    Nasser Eddine, Achraf; Huard, Benoît; Gabano, Jean-Denis; Poinot, Thierry

    2018-06-01

    This paper deals with the initialization of a nonlinear identification algorithm used to accurately estimate the physical parameters of a Lithium-ion battery. A Randles electric equivalent circuit is used to describe the internal impedance of the battery. The diffusion phenomenon related to this modeling is described using a fractional order method. The battery model is thus reformulated into a transfer function which can be identified through the Levenberg-Marquardt algorithm so as to ensure the algorithm's convergence to the physical parameters. An initialization method is proposed in this paper that takes into account previously acquired information about the static and dynamic system behavior. The method is validated using a noisy voltage response, while the precision of the final identification results is evaluated using the Monte-Carlo method.
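
    Levenberg-Marquardt blends gradient-descent-like and Gauss-Newton-like steps through a damping factor that grows on rejected steps and shrinks on accepted ones. A self-contained sketch on a 2-parameter exponential model, used here only as a stand-in for the battery transfer function (which is not reproduced):

```python
import math

def lm_fit(ts, ys, p0, iters=100):
    """Minimal Levenberg-Marquardt for the model y = a * exp(b * t)."""
    a, b = p0
    lam = 1e-3                                   # Marquardt damping factor

    def residuals(a, b):
        return [y - a * math.exp(b * t) for t, y in zip(ts, ys)]

    cost = sum(r * r for r in residuals(a, b))
    for _ in range(iters):
        r = residuals(a, b)
        # Jacobian of the residuals w.r.t. (a, b)
        J = [(-math.exp(b * t), -a * t * math.exp(b * t)) for t in ts]
        g00 = sum(j0 * j0 for j0, _ in J)
        g11 = sum(j1 * j1 for _, j1 in J)
        g01 = sum(j0 * j1 for j0, j1 in J)
        h0 = -sum(j0 * ri for (j0, _), ri in zip(J, r))
        h1 = -sum(j1 * ri for (_, j1), ri in zip(J, r))
        # damped normal equations: (J^T J + lam * diag(J^T J)) delta = -J^T r
        A00, A11 = g00 * (1.0 + lam), g11 * (1.0 + lam)
        det = A00 * A11 - g01 * g01
        if det == 0.0:
            break
        d0 = (h0 * A11 - g01 * h1) / det
        d1 = (A00 * h1 - g01 * h0) / det
        new_cost = sum(rr * rr for rr in residuals(a + d0, b + d1))
        if new_cost < cost:          # accept step: lean toward Gauss-Newton
            a, b, cost, lam = a + d0, b + d1, new_cost, lam / 3.0
        else:                        # reject step: lean toward gradient descent
            lam *= 3.0
    return a, b

ts = [0.1 * i for i in range(20)]
ys = [2.0 * math.exp(0.5 * t) for t in ts]       # noiseless synthetic response
a_hat, b_hat = lm_fit(ts, ys, (1.0, 0.0))
```

    The sensitivity to the starting point (1.0, 0.0) is exactly why the paper invests in a principled initialization: a poor p0 can leave LM in a local minimum far from the physical parameters.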

  4. Comparison of Dose Distributions With TG-43 and Collapsed Cone Convolution Algorithms Applied to Accelerated Partial Breast Irradiation Patient Plans

    Energy Technology Data Exchange (ETDEWEB)

    Thrower, Sara L., E-mail: slloupot@mdanderson.org [The University of Texas Graduate School of Biomedical Sciences at Houston, The University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Shaitelman, Simona F.; Bloom, Elizabeth [Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Salehpour, Mohammad; Gifford, Kent [Department of Radiation Physics, The University of Texas Graduate School of Biomedical Sciences at Houston, The University of Texas MD Anderson Cancer Center, Houston, Texas (United States)

    2016-08-01

    Purpose: To compare the treatment plans for accelerated partial breast irradiation calculated by the new commercially available collapsed cone convolution (CCC) and current standard TG-43–based algorithms for 50 patients treated at our institution with either a Strut-Adjusted Volume Implant (SAVI) or Contura device. Methods and Materials: We recalculated target coverage, volume of highly dosed normal tissue, and dose to organs at risk (ribs, skin, and lung) with each algorithm. For 1 case an artificial air pocket was added to simulate 10% nonconformance. We performed a Wilcoxon signed rank test to determine the median differences in the clinical indices V90, V95, V100, V150, V200, and highest-dosed 0.1 cm³ and 1.0 cm³ of rib, skin, and lung between the two algorithms. Results: The CCC algorithm calculated lower values on average for all dose-volume histogram parameters. Across the entire patient cohort, the median difference in the clinical indices calculated by the 2 algorithms was <10% for dose to organs at risk, <5% for target volume coverage (V90, V95, and V100), and <4 cm³ for dose to normal breast tissue (V150 and V200). No discernable difference was seen in the nonconformance case. Conclusions: We found that on average over our patient population CCC calculated (<10%) lower doses than TG-43. These results should inform clinicians as they prepare for the transition to heterogeneous dose calculation algorithms and determine whether clinical tolerance limits warrant modification.

  5. Analysis of the moderate resolution imaging spectroradiometer contextual algorithm for small fire detection, Journal of Applied Remote Sensing Vol.3

    Science.gov (United States)

    W. Wang; J.J. Qu; X. Hao; Y. Liu

    2009-01-01

    In the southeastern United States, most wildland fires are of low intensity. A substantial number of these fires cannot be detected by the MODIS contextual algorithm. To improve the accuracy of fire detection for this region, the remote-sensed characteristics of these fires have to be...

  6. Optimization the Initial Weights of Artificial Neural Networks via Genetic Algorithm Applied to Hip Bone Fracture Prediction

    Directory of Open Access Journals (Sweden)

    Yu-Tzu Chang

    2012-01-01

    Full Text Available This paper aims to find the optimal set of initial weights to enhance the accuracy of artificial neural networks (ANNs) by using genetic algorithms (GA). The sample in this study included 228 patients with a first low-trauma hip fracture and 215 patients without hip fracture, both of whom were interviewed with 78 questions. We used logistic regression to select 5 important factors (i.e., bone mineral density, experience of fracture, average hand grip strength, intake of coffee, and peak expiratory flow rate) for building artificial neural networks to predict the probabilities of hip fractures. Three-layer (one hidden layer) ANN models with back-propagation training algorithms were adopted. The purpose of this paper is to find the optimal initial weights of neural networks via genetic algorithm to improve the predictability. Area under the ROC curve (AUC) was used to assess the performance of the neural networks. The study results showed that the genetic algorithm obtained an AUC of 0.858±0.00493 on modeling data and 0.802±0.03318 on testing data. These were slightly better than the results of our previous study (0.868±0.00387 and 0.796±0.02559, respectively). Thus, this preliminary study using only a simple GA has proved effective for improving the accuracy of artificial neural networks.
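A minimal sketch of the idea, assuming NumPy: a plain GA (tournament selection, uniform crossover, Gaussian mutation, elitism) searches the weight vector of a small 5-3-1 network, scored by a rank-based AUC on synthetic data. For brevity the GA here scores the weights directly rather than using them as initial weights for subsequent back-propagation, as the paper does; the data and all GA parameters are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the study's 5 risk factors; labels depend on a
# noisy linear combination of the features (illustrative only).
X = rng.normal(size=(200, 5))
y = (X @ np.array([1.5, -1.0, 0.8, 0.3, -0.6])
     + rng.normal(scale=0.5, size=200) > 0).astype(float)

def forward(w, X):
    # Unpack a flat 22-gene chromosome into a 5-3-1 network.
    W1 = w[:15].reshape(5, 3); b1 = w[15:18]
    W2 = w[18:21].reshape(3, 1); b2 = w[21]
    h = np.tanh(X @ W1 + b1)
    return 1.0 / (1.0 + np.exp(-((h @ W2).ravel() + b2)))

def auc(scores, y):
    # Rank-based AUC: probability a positive case outranks a negative one.
    order = np.argsort(scores)
    ranks = np.empty(len(scores)); ranks[order] = np.arange(1, len(scores) + 1)
    n_pos, n_neg = y.sum(), len(y) - y.sum()
    return (ranks[y == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

pop = rng.normal(size=(40, 22))
for gen in range(30):
    fit = np.array([auc(forward(w, X), y) for w in pop])
    new = [pop[fit.argmax()].copy()]                # elitism
    while len(new) < len(pop):
        i, j = rng.integers(0, len(pop), 2)
        a = pop[i] if fit[i] > fit[j] else pop[j]   # tournament pick 1
        i, j = rng.integers(0, len(pop), 2)
        b = pop[i] if fit[i] > fit[j] else pop[j]   # tournament pick 2
        mask = rng.random(22) < 0.5                 # uniform crossover
        new.append(np.where(mask, a, b) + rng.normal(scale=0.1, size=22))
    pop = np.array(new)

best = max(pop, key=lambda w: auc(forward(w, X), y))
print("best AUC:", round(auc(forward(best, X), y), 3))
```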

  7. Assessment of two aerosol optical thickness retrieval algorithms applied to MODIS Aqua and Terra measurements in Europe

    Directory of Open Access Journals (Sweden)

    P. Glantz

    2012-07-01

    Full Text Available The aim of the present study is to validate AOT (aerosol optical thickness) and the Ångström exponent (α), obtained from MODIS (MODerate resolution Imaging Spectroradiometer) Aqua and Terra calibrated level 1 data (1 km horizontal resolution at ground) with the SAER (Satellite AErosol Retrieval) algorithm and with MODIS Collection 5 (c005) standard product retrievals (10 km horizontal resolution), against AERONET (AErosol RObotic NETwork) sun photometer observations over land surfaces in Europe. An inter-comparison of AOT at 0.469 μm obtained with the two algorithms has also been performed. The time periods investigated were chosen to enable a validation of the findings of the two algorithms for the maximal possible variation in sun elevation. The satellite retrievals were also performed with a significant variation in the satellite-viewing geometry, since Aqua and Terra passed the investigation area twice a day for several of the cases analyzed. The validation with AERONET shows that the AOT at 0.469 and 0.555 μm obtained with MODIS c005 is within the expected uncertainty of one standard deviation of the MODIS c005 retrievals (ΔAOT = ±0.05 ± 0.15 · AOT). The AOT at 0.443 μm retrieved with SAER, but with a much finer spatial resolution, also agreed reasonably well with AERONET measurements. The majority of the SAER AOT values are within the MODIS c005 expected uncertainty range, although a somewhat larger average absolute deviation occurs compared to the results obtained with the MODIS c005 algorithm. The discrepancy between AOT from SAER and AERONET is, however, substantially larger for the wavelength 488 nm. This means that the values are, to a larger extent, outside of the expected MODIS uncertainty range. In addition, both satellite retrieval algorithms are unable to estimate α accurately, although the MODIS c005 algorithm performs better. Based on the inter-comparison of the SAER and MODIS c005 algorithms, it was found that SAER on the whole is
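The expected-uncertainty envelope used in the validation above (ΔAOT = ±0.05 ± 0.15 · AOT over land) is easy to sketch; the collocation values below are illustrative, not from the study.

```python
import numpy as np

def within_modis_envelope(aot_sat, aot_aeronet):
    # MODIS c005 expected over-land uncertainty: |sat - aeronet| must not
    # exceed 0.05 + 0.15 * AOT, evaluated at the collocated AERONET value.
    bound = 0.05 + 0.15 * np.asarray(aot_aeronet)
    return np.abs(np.asarray(aot_sat) - np.asarray(aot_aeronet)) <= bound

# Hypothetical satellite/AERONET collocations (illustrative values only).
aeronet = np.array([0.10, 0.25, 0.40, 0.60])
satellite = np.array([0.12, 0.20, 0.55, 0.58])
ok = within_modis_envelope(satellite, aeronet)
print("within envelope:", ok, "-> fraction:", ok.mean())
```

The fraction of retrievals falling inside this envelope is the usual pass/fail statistic for such validations.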

  8. Effect of the signaling lymphocytic activation molecule (SLAM) in the modulation of T cells in immune response to Leishmania braziliensis in vitro

    Directory of Open Access Journals (Sweden)

    Zirlane Castelo Branco Coêlho

    2017-02-01

    Full Text Available Introduction: Signaling lymphocyte activation molecule (SLAM) is a self-ligand receptor on the surface of activated T- and B-lymphocytes, macrophages, and DCs. Studies have shown that PBMC from healthy individuals exposed to Leishmania differ in IFN-γ production. Objective: We investigated the role of the SLAM signaling pathway in PBMC from high (HP) and low (LP) IFN-γ producers exposed to L. braziliensis in vitro. Methods: PBMC from 43 healthy individuals were cultured with or without antigen, α-SLAM, rIL-12, and rIFN-γ. Cytokine production was evaluated by ELISA, and SLAM expression by flow cytometry. Results: L. braziliensis associated with rIFN-γ or rIL-12 reduced early SLAM expression but did not modify this response later in HP. α-SLAM did not alter CD3+SLAM+ expression and did not affect IFN-γ or IL-13 production in either group, but significantly increased IL-10 in HP. Leishmania associated with α-SLAM and rIL-12 increased IFN-γ in LP, as well as IL-13 in HP. The LP group presented low IFN-γ and IL-13 production and low SLAM expression. Conclusion: Collectively, these findings suggest that when PBMC from healthy individuals are sensitized with L. braziliensis in vitro, SLAM acts to modulate the Th1 response in HP individuals and induces a condition of immunosuppression in LP individuals.

  9. Demonstration of Inexact Computing Implemented in the JPEG Compression Algorithm using Probabilistic Boolean Logic applied to CMOS Components

    Science.gov (United States)

    2015-12-24

    manufacturing today (namely, the 14nm FinFET silicon CMOS technology). The JPEG algorithm is selected as a motivational example since it is widely...TIFF images of a U.S. Air Force F-16 aircraft provided by the University of Southern California Signal and Image Processing Institute (SIPI) image...silicon CMOS technology currently in high volume manufacturing today (the 14 nm FinFET silicon CMOS technology). The main contribution of this

  10. Optimization the initial weights of artificial neural networks via genetic algorithm applied to hip bone fracture prediction

    OpenAIRE

    Chang, Y-T; Lin, J; Shieh, J-S; Abbod, MF

    2012-01-01

    This paper aims to find the optimal set of initial weights to enhance the accuracy of artificial neural networks (ANNs) by using genetic algorithms (GA). The sample in this study included 228 patients with a first low-trauma hip fracture and 215 patients without hip fracture, both of whom were interviewed with 78 questions. We used logistic regression to select 5 important factors (i.e., bone mineral density, experience of fracture, average hand grip strength, intake of coffee, and peak expirat...

  11. Integrated High-Fidelity CFD/FE FSI Code Development and Benchmark Full-Scale Validation EFD for Slamming Analysis

    Science.gov (United States)

    2016-06-30

    accelerometers, an additional inertial navigation system, and LVDT's. New tests were performed in the Atlantic Ocean and further insight into slamming was...measured data, Fig. 13. The Numerette was then operated in the Atlantic Ocean. A typical strain history is shown in Fig. 14. Fast Fourier Transforms

  12. Applying Advances in GPM Radiometer Intercalibration and Algorithm Development to a Long-Term TRMM/GPM Global Precipitation Dataset

    Science.gov (United States)

    Berg, W. K.

    2016-12-01

    The Global Precipitation Mission (GPM) Core Observatory, which was launched in February of 2014, provides a number of advances for satellite monitoring of precipitation including a dual-frequency radar, high frequency channels on the GPM Microwave Imager (GMI), and coverage over middle and high latitudes. The GPM concept, however, is about producing unified precipitation retrievals from a constellation of microwave radiometers to provide approximately 3-hourly global sampling. This involves intercalibration of the input brightness temperatures from the constellation radiometers, development of an a priori precipitation database using observations from the state-of-the-art GPM radiometer and radars, and accounting for sensor differences in the retrieval algorithm in a physically consistent way. Efforts by the GPM inter-satellite calibration working group, or XCAL team, and the radiometer algorithm team to create unified precipitation retrievals from the GPM radiometer constellation were fully implemented into the current version 4 GPM precipitation products. These include precipitation estimates from a total of seven conical-scanning and six cross-track scanning radiometers as well as high spatial and temporal resolution global level 3 gridded products. Work is now underway to extend this unified constellation-based approach to the combined TRMM/GPM data record starting in late 1997. The goal is to create a long-term global precipitation dataset employing these state-of-the-art calibration and retrieval algorithm approaches. This new long-term global precipitation dataset will incorporate the physics provided by the combined GPM GMI and DPR sensors into the a priori database, extend prior TRMM constellation observations to high latitudes, and expand the available TRMM precipitation data to the full constellation of available conical and cross-track scanning radiometers. This combined TRMM/GPM precipitation data record will thus provide a high-quality high

  13. Improved quantum-inspired evolutionary algorithm with diversity information applied to economic dispatch problem with prohibited operating zones

    International Nuclear Information System (INIS)

    Vianna Neto, Julio Xavier; Andrade Bernert, Diego Luis de; Santos Coelho, Leandro dos

    2011-01-01

    The objective of the economic dispatch problem (EDP) of electric power generation, whose characteristics are complex and highly nonlinear, is to schedule the committed generating unit outputs so as to meet the required load demand at minimum operating cost while satisfying all unit and system equality and inequality constraints. Recently, as an alternative to conventional mathematical approaches, modern meta-heuristic optimization techniques have received much attention from researchers due to their ability to find a near-global optimal solution in EDPs. Research on merging evolutionary computation and quantum computation has been ongoing since the late 1990s. Inspired by quantum computation, this paper presents an improved quantum-inspired evolutionary algorithm (IQEA) based on diversity information of the population. A classical quantum-inspired evolutionary algorithm (QEA) and the IQEA were implemented and validated on a benchmark EDP with 15 thermal generators with prohibited operating zones. From the results for the benchmark problem, it is observed that the proposed IQEA approach provides promising results when compared to various methods available in the literature.
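The Q-bit representation and rotation-gate update at the core of a QEA can be sketched as follows; a toy OneMax objective (maximize the number of ones) stands in for the dispatch cost model, and the population size, rotation step, and bit count are all illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n_bits, pop_size, gens = 16, 10, 80
delta = 0.01 * np.pi                     # rotation-gate step (illustrative)

# Each Q-bit is kept as an angle theta; the amplitude of |1> is sin(theta),
# so observing bit i yields 1 with probability sin(theta_i)^2.
theta = np.full((pop_size, n_bits), np.pi / 4)
best_x, best_f = None, -1

def fitness(x):
    # Toy objective (OneMax) standing in for the dispatch cost evaluation.
    return int(x.sum())

for g in range(gens):
    # "Observe" each Q-bit individual: collapse to a classical bit string.
    probs = np.sin(theta) ** 2
    pop = (rng.random(theta.shape) < probs).astype(int)
    for x in pop:
        f = fitness(x)
        if f > best_f:
            best_f, best_x = f, x.copy()
    # Rotation gate: nudge each angle toward the best-so-far solution.
    theta += delta * np.where(best_x == 1, 1.0, -1.0)
    theta = np.clip(theta, 0.0, np.pi / 2)

print("best fitness:", best_f, "of", n_bits)
```

The IQEA of the paper additionally adapts this update using population-diversity information; that refinement is omitted here.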

  14. Improved quantum-inspired evolutionary algorithm with diversity information applied to economic dispatch problem with prohibited operating zones

    Energy Technology Data Exchange (ETDEWEB)

    Vianna Neto, Julio Xavier, E-mail: julio.neto@onda.com.b [Pontifical Catholic University of Parana, PUCPR, Undergraduate Program at Mechatronics Engineering, Imaculada Conceicao, 1155, Zip code 80215-901, Curitiba, Parana (Brazil); Andrade Bernert, Diego Luis de, E-mail: dbernert@gmail.co [Pontifical Catholic University of Parana, PUCPR, Industrial and Systems Engineering Graduate Program, LAS/PPGEPS, Imaculada Conceicao, 1155, Zip code 80215-901, Curitiba, Parana (Brazil); Santos Coelho, Leandro dos, E-mail: leandro.coelho@pucpr.b [Pontifical Catholic University of Parana, PUCPR, Industrial and Systems Engineering Graduate Program, LAS/PPGEPS, Imaculada Conceicao, 1155, Zip code 80215-901, Curitiba, Parana (Brazil)

    2011-01-15

    The objective of the economic dispatch problem (EDP) of electric power generation, whose characteristics are complex and highly nonlinear, is to schedule the committed generating unit outputs so as to meet the required load demand at minimum operating cost while satisfying all unit and system equality and inequality constraints. Recently, as an alternative to conventional mathematical approaches, modern meta-heuristic optimization techniques have received much attention from researchers due to their ability to find a near-global optimal solution in EDPs. Research on merging evolutionary computation and quantum computation has been ongoing since the late 1990s. Inspired by quantum computation, this paper presents an improved quantum-inspired evolutionary algorithm (IQEA) based on diversity information of the population. A classical quantum-inspired evolutionary algorithm (QEA) and the IQEA were implemented and validated on a benchmark EDP with 15 thermal generators with prohibited operating zones. From the results for the benchmark problem, it is observed that the proposed IQEA approach provides promising results when compared to various methods available in the literature.

  15. Improved quantum-inspired evolutionary algorithm with diversity information applied to economic dispatch problem with prohibited operating zones

    Energy Technology Data Exchange (ETDEWEB)

    Neto, Julio Xavier Vianna [Pontifical Catholic University of Parana, PUCPR, Undergraduate Program at Mechatronics Engineering, Imaculada Conceicao, 1155, Zip code 80215-901, Curitiba, Parana (Brazil); Bernert, Diego Luis de Andrade; Coelho, Leandro dos Santos [Pontifical Catholic University of Parana, PUCPR, Industrial and Systems Engineering Graduate Program, LAS/PPGEPS, Imaculada Conceicao, 1155, Zip code 80215-901, Curitiba, Parana (Brazil)

    2011-01-15

    The objective of the economic dispatch problem (EDP) of electric power generation, whose characteristics are complex and highly nonlinear, is to schedule the committed generating unit outputs so as to meet the required load demand at minimum operating cost while satisfying all unit and system equality and inequality constraints. Recently, as an alternative to conventional mathematical approaches, modern meta-heuristic optimization techniques have received much attention from researchers due to their ability to find a near-global optimal solution in EDPs. Research on merging evolutionary computation and quantum computation has been ongoing since the late 1990s. Inspired by quantum computation, this paper presents an improved quantum-inspired evolutionary algorithm (IQEA) based on diversity information of the population. A classical quantum-inspired evolutionary algorithm (QEA) and the IQEA were implemented and validated on a benchmark EDP with 15 thermal generators with prohibited operating zones. From the results for the benchmark problem, it is observed that the proposed IQEA approach provides promising results when compared to various methods available in the literature.

  16. A neural network based implementation of an MPC algorithm applied in the control systems of electromechanical plants

    Science.gov (United States)

    Marusak, Piotr M.; Kuntanapreeda, Suwat

    2018-01-01

    The paper considers the application of a neural-network-based implementation of a model predictive control (MPC) algorithm to electromechanical plants. The properties of such control plants imply that a relatively short sampling time should be used. However, in such a case, finding the control value numerically may be too time-consuming. Therefore, the current paper tests a solution based on transforming the MPC optimization problem into a set of differential equations whose solution is the same as that of the original optimization problem. This set of differential equations can be interpreted as a dynamic neural network. In such an approach, the constraints can be introduced into the optimization problem with relative ease. Moreover, the solution of the optimization problem can be obtained faster than when a standard numerical quadratic programming routine is used. However, very careful tuning of the algorithm is needed to achieve this. A DC motor and an electrohydraulic actuator are taken as illustrative examples. The feasibility and effectiveness of the proposed approach are demonstrated through numerical simulations.
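The core idea, solving the MPC quadratic program by integrating a differential equation instead of calling a QP routine, can be sketched with a projected gradient flow. The Hessian, gradient vector, and bounds below are made up; in a real MPC they would come from the plant model and horizon.

```python
import numpy as np

# Toy MPC quadratic program: minimize 0.5*u'Hu + f'u subject to |u_i| <= 1.
# (H and f would be built from the plant model and prediction horizon.)
H = np.array([[2.0, 0.5], [0.5, 1.0]])   # positive-definite Hessian
f = np.array([-1.0, -2.0])

u = np.zeros(2)
dt = 0.05                                # Euler step of the "network" dynamics
for _ in range(2000):
    # Dynamic-neural-network view: du/dt = -(Hu + f), with a projection
    # (saturation) that enforces the input constraints at every instant.
    u = np.clip(u - dt * (H @ u + f), -1.0, 1.0)

print("constrained optimum:", u)
```

The unconstrained minimizer is -H⁻¹f = (0, 2); the flow instead settles at the constrained optimum (0.25, 1), with the second input saturated at its bound.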

  17. A Semiautomated Multilayer Picking Algorithm for Ice-sheet Radar Echograms Applied to Ground-Based Near-Surface Data

    Science.gov (United States)

    Onana, Vincent De Paul; Koenig, Lora Suzanne; Ruth, Julia; Studinger, Michael; Harbeck, Jeremy P.

    2014-01-01

    Snow accumulation over an ice sheet is the sole mass input, making it a primary measurement for understanding the past, present, and future mass balance. Near-surface frequency-modulated continuous-wave (FMCW) radars image isochronous firn layers recording accumulation histories. The Semiautomated Multilayer Picking Algorithm (SAMPA) was designed and developed to trace annual accumulation layers in polar firn from both airborne and ground-based radars. The SAMPA algorithm is based on the Radon transform (RT) computed by blocks and angular orientations over a radar echogram. For each echogram block, the RT maps firn segmented-layer features into peaks, which are picked using amplitude and width threshold parameters. A backward RT is then computed for each corresponding block, mapping the peaks back into picked segmented layers. The segmented layers are then connected and smoothed to achieve a final layer pick across the echogram. Once input parameters are trained, SAMPA operates autonomously and can process hundreds of kilometers of radar data, picking more than 40 layers. SAMPA final pick results and layer numbering still require a cursory manual adjustment to correct noncontinuous picks, which are likely not annual, and to correct for inconsistency in layer numbering. Despite the manual effort to train and check SAMPA results, it is an efficient tool for picking multiple accumulation layers in polar firn, reducing time over manual digitizing efforts. The trackability of well-detected layers is greater than 90%.

  18. Mitigating check valve slamming and subsequent water hammer events for PPFS using MOC

    International Nuclear Information System (INIS)

    Tian Wenxi; Su Guanghui; Wang Gaopeng; Qiu Suizheng; Xiao Zejun

    2009-01-01

    The method of characteristics (MOC) was adopted to analyze check valve-induced water hammer behavior in a Parallel Pumps Feedwater System (PPFS) during the alternate startup process. The motion of the check valve disc was simulated using an inertial valve model. Transient parameters including the pressure oscillation, local flow velocity, and slamming of the check valve disc were obtained. The results showed that severe slamming between the valve disc and valve seat occurred during the alternate startup of the parallel pumps. The induced maximum pressure vibration amplitude is up to 5.0 MPa. A scheme of appending a damping torque to slow down the check valve closing speed was also evaluated to mitigate the water hammer, and it was numerically shown to be an effective approach.
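The MOC machinery behind such analyses can be sketched on the textbook case of a single frictionless pipe with an upstream reservoir and an instantly closing downstream valve (the paper's setup additionally couples an inertial valve model and pump dynamics). All pipe parameters below are illustrative.

```python
import numpy as np

# Single frictionless pipe, reservoir upstream, instant valve closure at t=0.
a, L, g = 1000.0, 500.0, 9.81        # wave speed (m/s), length (m), gravity
A, n = 0.01, 10                      # pipe area (m^2), number of reaches
dx = L / n; dt = dx / a              # MOC grid with Courant number 1: dx/dt = a
B = a / (g * A)                      # characteristic impedance
H0, Q0 = 50.0, 0.005                 # initial head (m) and steady flow (m^3/s)

H = np.full(n + 1, H0)
Q = np.full(n + 1, Q0)
peak = H0
for _ in range(4 * n):               # simulate one full wave period
    Hn, Qn = H.copy(), Q.copy()
    # Interior nodes: intersect the C+ (from i-1) and C- (from i+1) lines.
    for i in range(1, n):
        cp = H[i - 1] + B * Q[i - 1]         # C+ invariant
        cm = H[i + 1] - B * Q[i + 1]         # C- invariant
        Hn[i] = 0.5 * (cp + cm)
        Qn[i] = (cp - cm) / (2 * B)
    Hn[0] = H0                               # reservoir: fixed head
    Qn[0] = (H0 - (H[1] - B * Q[1])) / B     # flow from the C- line
    Qn[n] = 0.0                              # closed valve: no flow
    Hn[n] = H[n - 1] + B * Q[n - 1]          # head from the C+ line
    H, Q = Hn, Qn
    peak = max(peak, H[n])

print("Joukowsky surge a*V/g =", a * (Q0 / A) / g, "m; simulated rise =", peak - H0)
```

For the frictionless case the simulated head rise at the valve reproduces the Joukowsky surge aΔV/g exactly, a standard sanity check before adding friction, valve motion, or pump boundaries.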

  19. Cooperative Monocular-Based SLAM for Multi-UAV Systems in GPS-Denied Environments.

    Science.gov (United States)

    Trujillo, Juan-Carlos; Munguia, Rodrigo; Guerra, Edmundo; Grau, Antoni

    2018-04-26

    This work presents a cooperative monocular-based SLAM approach for multi-UAV systems that can operate in GPS-denied environments. The main contribution of the work is to show that, using visual information obtained from monocular cameras mounted onboard aerial vehicles flying in formation, the observability properties of the whole system are improved. This fact is especially notable when compared with other related visual SLAM configurations. In order to improve the observability properties, some measurements of the relative distance between the UAVs are included in the system. These relative distances are also obtained from visual information. The proposed approach is theoretically validated by means of a nonlinear observability analysis. Furthermore, an extensive set of computer simulations is presented in order to validate the proposed approach. The numerical simulation results show that the proposed system is able to provide a good position and orientation estimation of the aerial vehicles flying in formation.

  20. Mitigating check valve slamming and subsequent water hammer events for PPFS using MOC

    Institute of Scientific and Technical Information of China (English)

    TIAN Wenxi; SU Guanghui; WANG Gaopeng; QIU Suizheng; XIAO Zejun

    2009-01-01

    The method of characteristics (MOC) was adopted to analyze check valve-induced water hammer behavior in a Parallel Pumps Feedwater System (PPFS) during the alternate startup process. The motion of the check valve disc was simulated using an inertial valve model. Transient parameters including the pressure oscillation, local flow velocity, and slamming of the check valve disc were obtained. The results showed that severe slamming between the valve disc and valve seat occurred during the alternate startup of the parallel pumps. The induced maximum pressure vibration amplitude is up to 5.0 MPa. A scheme of appending a damping torque to slow down the check valve closing speed was also evaluated to mitigate the water hammer, and it was numerically shown to be an effective approach.

  1. An Efficient Ceiling-view SLAM Using Relational Constraints Between Landmarks

    Directory of Open Access Journals (Sweden)

    Hyukdoo Choi

    2014-01-01

    Full Text Available In this paper, we present a new indoor 'simultaneous localization and mapping' (SLAM) technique based on an upward-looking ceiling camera. Adapted from our previous work [17], the proposed method employs sparsely distributed line and point landmarks in an indoor environment to aid data association and reduce extended Kalman filter computation as compared with earlier techniques. Further, the proposed method exploits geometric relationships between the two types of landmarks to provide added information about the environment. This geometric information is measured with the upward-looking ceiling camera and is used as a constraint in Kalman filtering. The performance of the proposed ceiling-view (CV) SLAM is demonstrated through simulations and experiments. The proposed method performs localization and mapping more accurately than methods that use the two types of landmarks without taking into account their relative geometries.

  2. The Study of Fractional Order Controller with SLAM in the Humanoid Robot

    Directory of Open Access Journals (Sweden)

    Shuhuan Wen

    2014-01-01

    Full Text Available We present a fractional order PI controller (FOPI) combined with a SLAM method, and the proposed method is used in the simulation of navigation of the NAO humanoid robot from Aldebaran. We discretize the transfer function with the Al-Alaoui generating function and then obtain the FOPI controller by Power Series Expansion (PSE). FOPI can be used as a correction part to reduce the accumulated error of SLAM. In the FOPI controller, the parameters (Kp, Ki, and α) need to be tuned to obtain the best performance. Finally, we compare the position results without a controller, with a PI controller, and with the FOPI controller. The simulations show that the FOPI controller can reduce the error between the real position and the estimated position. The proposed method is efficient and reliable for NAO navigation.
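A minimal sketch of the fractional integral term inside such a controller. Note the paper discretizes via the Al-Alaoui operator and PSE; as a simpler stand-in, this sketch uses the Grünwald-Letnikov weights, whose coefficients follow a one-line recurrence, and checks them against the known fractional integral of a unit step, tᵅ/Γ(α+1). The gains and sample time are illustrative.

```python
import math
import numpy as np

alpha, h = 0.5, 0.01          # fractional order and sample time (illustrative)
N = 100                       # number of samples, so t = N*h = 1 s

# Gruenwald-Letnikov weights for a fractional integral of order alpha:
# c_0 = 1, c_k = c_{k-1} * (1 - (1 - alpha)/k).
c = np.empty(N + 1)
c[0] = 1.0
for k in range(1, N + 1):
    c[k] = c[k - 1] * (1.0 - (1.0 - alpha) / k)

def frac_integral(e):
    # D^{-alpha} e at the newest sample: h^alpha * sum_k c_k * e[t - k*h].
    return h ** alpha * np.dot(c[:len(e)], e[::-1])

# FOPI control law u = Kp*e + Ki * D^{-alpha} e, applied to a unit-step error.
Kp, Ki = 1.0, 0.5             # hypothetical tuned gains
e = np.ones(N + 1)
u = Kp * e[-1] + Ki * frac_integral(e)

exact = 1.0 / math.gamma(alpha + 1)   # fractional integral of a unit step at t=1
print("numeric D^-0.5 of step:", frac_integral(e), "exact:", exact)
```

The numeric value should land within about 1% of 1/Γ(1.5) ≈ 1.128 at this step size, confirming the weight recurrence before it is wired into the control loop.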

  3. Applying an animal model to quantify the uncertainties of an image-based 4D-CT algorithm

    International Nuclear Information System (INIS)

    Pierce, Greg; Battista, Jerry; Wang, Kevin; Lee, Ting-Yim

    2012-01-01

    The purpose of this paper is to use an animal model to quantify the spatial displacement uncertainties and test the fundamental assumptions of an image-based 4D-CT algorithm in vivo. Six female Landrace cross pigs were ventilated and imaged using a 64-slice CT scanner (GE Healthcare) operating in axial cine mode. The breathing amplitude pattern of the pigs was varied by periodically crimping the ventilator gas return tube during the image acquisition. The image data were used to determine the displacement uncertainties that result from matching CT images at the same respiratory phase using normalized cross correlation (NCC) as the matching criterion. Additionally, the ability to match the respiratory phase of a 4.0 cm subvolume of the thorax to a reference subvolume using only a single overlapping 2D slice from the two subvolumes was tested by varying the location of the overlapping matching image within the subvolume and examining the effect this had on the displacement relative to the reference volume. The displacement uncertainty resulting from matching two respiratory images using NCC ranged from 0.54 ± 0.10 mm per match to 0.32 ± 0.16 mm per match in the lung of the animal. The uncertainty was found to propagate in quadrature, increasing with the number of NCC matches performed. In comparison, the minimum displacement achievable if two respiratory images were matched perfectly in phase ranged from 0.77 ± 0.06 to 0.93 ± 0.06 mm in the lung. The assumption that subvolumes from separate cine scans could be matched by matching a single overlapping 2D image between the two subvolumes was validated. An in vivo animal model was developed to test an image-based 4D-CT algorithm. The uncertainties associated with using NCC to match the respiratory phase of two images were quantified, and the assumption that a 4.0 cm 3D subvolume can be matched in respiratory phase by matching a single 2D image from the 3D subvolume was validated. The work in this paper shows the image-based 4D
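The two quantitative ingredients above, NCC as a matching score and quadrature propagation of per-match uncertainty, can be sketched as follows; the images are synthetic stand-ins for respiratory-phase slices.

```python
import numpy as np

def ncc(a, b):
    # Normalized cross-correlation between two equally sized images.
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a ** 2).sum() * (b ** 2).sum()))

rng = np.random.default_rng(0)
ref = rng.normal(size=(64, 64))
# Candidate "phase" images: the reference plus increasing amounts of noise.
candidates = [ref + rng.normal(scale=s, size=ref.shape) for s in (0.1, 0.5, 2.0)]

scores = [ncc(ref, c) for c in candidates]
best = int(np.argmax(scores))
print("NCC scores:", [round(s, 3) for s in scores], "-> best match:", best)

# Per-match uncertainty propagates in quadrature (as observed in the study):
sigma_per_match = 0.54    # mm, the larger per-match value reported for lung
n_matches = 4
print("uncertainty after 4 chained matches:",
      sigma_per_match * np.sqrt(n_matches), "mm")
```

Chaining four matches therefore roughly doubles the single-match uncertainty, which is why the number of NCC links between subvolumes matters.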

  4. Improving visual SLAM by filtering outliers with the aid of optical flow

    OpenAIRE

    Özaslan, Tolga

    2011-01-01

    Ankara : The Department of Computer Engineering and the Graduate School of Engineering and Science of Bilkent University, 2011. Thesis (Master's) -- Bilkent University, 2011. Includes bibliographical references leaves 77-81. Simultaneous Localization and Mapping (SLAM) for mobile robots has been one of the challenging problems for the robotics community. Extensive study of this problem in recent years has somewhat saturated the theoretical and practical background on this to...

  5. An EM Algorithm for Double-Pareto-Lognormal Generalized Linear Model Applied to Heavy-Tailed Insurance Claims

    Directory of Open Access Journals (Sweden)

    Enrique Calderín-Ojeda

    2017-11-01

    Full Text Available Generalized linear models might not be appropriate when the probability of extreme events is higher than that implied by the normal distribution. Extending the method for estimating the parameters of a double Pareto lognormal distribution (DPLN) in Reed and Jorgensen (2004), we develop an EM algorithm for the heavy-tailed Double-Pareto-lognormal generalized linear model. The DPLN distribution is obtained as a mixture of a lognormal distribution with a double Pareto distribution. In this paper, the associated generalized linear model has the location parameter equal to a linear predictor, which is used to model insurance claim amounts for various data sets. The performance is compared with those of the generalized beta (of the second kind) and lognormal distributions.
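The mixture structure of the DPLN is easy to illustrate by simulation (the EM fitting itself is beyond a short sketch): following the Reed and Jorgensen representation, log X is a normal variate plus an asymmetric Laplace component, the latter expressible as a difference of scaled exponentials. The parameter values below are arbitrary illustrations.

```python
import numpy as np

rng = np.random.default_rng(42)

def rdpln(n, nu, tau, alpha, beta):
    # Double-Pareto-lognormal draws (Reed & Jorgensen 2004 representation):
    # log X = N(nu, tau^2) + E1/alpha - E2/beta, with E1, E2 ~ Exp(1).
    # alpha controls the heavy right (Pareto) tail, beta the left tail.
    z = rng.normal(nu, tau, n)
    e1 = rng.exponential(1.0, n)
    e2 = rng.exponential(1.0, n)
    return np.exp(z + e1 / alpha - e2 / beta)

# Simulated "claim amounts" with a heavy right tail (alpha = 2.5 > 1,
# so the mean exists but large claims are far more likely than lognormal).
claims = rdpln(50_000, nu=7.0, tau=0.5, alpha=2.5, beta=4.0)
print("mean:", claims.mean().round(1), "median:", np.median(claims).round(1))
```

The mean sits well above the median, the right-skew signature that motivates using the DPLN over a plain lognormal for claim severities.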

  6. Applying network analysis and Nebula (neighbor-edges based and unbiased leverage algorithm) to ToxCast data.

    Science.gov (United States)

    Ye, Hao; Luo, Heng; Ng, Hui Wen; Meehan, Joe; Ge, Weigong; Tong, Weida; Hong, Huixiao

    2016-01-01

    ToxCast data have been used to develop models for predicting in vivo toxicity. To predict the in vivo toxicity of a new chemical using a ToxCast data based model, its ToxCast bioactivity data are needed but not normally available. The capability of predicting ToxCast bioactivity data is necessary to fully utilize ToxCast data in the risk assessment of chemicals. We aimed to understand and elucidate the relationships between the chemicals and bioactivity data of the assays in ToxCast and to develop a network analysis based method for predicting ToxCast bioactivity data. We conducted modularity analysis on a quantitative network constructed from ToxCast data to explore the relationships between the assays and chemicals. We further developed Nebula (neighbor-edges based and unbiased leverage algorithm) for predicting ToxCast bioactivity data. Modularity analysis on the network constructed from ToxCast data yielded seven modules. Assays and chemicals in the seven modules were distinct. Leave-one-out cross-validation yielded a Q² of 0.5416, indicating ToxCast bioactivity data can be predicted by Nebula. Prediction domain analysis showed some types of ToxCast assay data could be more reliably predicted by Nebula than others. Network analysis is a promising approach to understand ToxCast data. Nebula is an effective algorithm for predicting ToxCast bioactivity data, helping fully utilize ToxCast data in the risk assessment of chemicals. Published by Elsevier Ltd.

  7. 3D noise power spectrum applied on clinical MDCT scanners: effects of reconstruction algorithms and reconstruction filters

    Science.gov (United States)

    Miéville, Frédéric A.; Bolard, Gregory; Benkreira, Mohamed; Ayestaran, Paul; Gudinchet, François; Bochud, François; Verdun, Francis R.

    2011-03-01

    The noise power spectrum (NPS) is the reference metric for understanding the noise content in computed tomography (CT) images. To evaluate the noise properties of clinical multidetector CT (MDCT) scanners, local 2D and 3D NPSs were computed for different acquisition and reconstruction parameters. A 64-slice and a 128-slice MDCT scanner were employed. Measurements were performed on a water phantom in axial and helical acquisition modes. The CT dose index was identical for both installations. The influence of parameters such as the pitch, the reconstruction filter (soft, standard, and bone), and the reconstruction algorithm (filtered back projection (FBP), adaptive statistical iterative reconstruction (ASIR)) was investigated. Images were also reconstructed in the coronal plane using a reformat process. The 2D and 3D NPS were then computed. In axial acquisition mode, the 2D axial NPS showed an important magnitude variation as a function of the z-direction when measured at the phantom center. In helical mode, a directional dependency with a lobular shape was observed while the magnitude of the NPS was kept constant. Important effects of the reconstruction filter, pitch, and reconstruction algorithm were observed in the 3D NPS results for both MDCTs. With ASIR, a reduction of the NPS magnitude and a shift of the NPS peak to the low frequency range were visible. The 2D coronal NPS obtained from the reformatted images was impacted by the interpolation when compared to the 2D coronal NPS obtained from 3D measurements. The noise properties of volumes measured with last-generation MDCTs were studied using the local 3D NPS metric. However, the impact of the non-stationary noise effect may need further investigation.
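The 2D NPS computation underlying such measurements can be sketched with the standard definition, the ensemble-averaged squared DFT of mean-subtracted noise ROIs scaled by pixel area over ROI size. Synthetic white noise is used here so the result can be checked against Parseval's theorem; the noise level and pixel size are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

def nps_2d(rois, pixel_mm):
    # 2D noise power spectrum: ensemble-averaged |DFT|^2 of mean-subtracted
    # ROIs, scaled by (dx*dy)/(Nx*Ny) (the standard NPS definition).
    nx, ny = rois[0].shape
    acc = np.zeros((nx, ny))
    for roi in rois:
        acc += np.abs(np.fft.fft2(roi - roi.mean())) ** 2
    return (pixel_mm ** 2 / (nx * ny)) * acc / len(rois)

# Sanity check on synthetic white noise: integrating the NPS over frequency
# should recover the pixel variance (Parseval's theorem).
sigma, px, n = 10.0, 0.5, 64          # noise SD (HU), pixel size (mm), ROI size
rois = [rng.normal(0.0, sigma, (n, n)) for _ in range(50)]
nps = nps_2d(rois, px)
var_back = nps.sum() / (n * px) ** 2  # sum over bins times df_x * df_y
print("recovered variance:", round(var_back, 1), "; true:", sigma ** 2)
```

For real CT noise the spectrum is shaped by the reconstruction filter rather than flat, which is exactly what the filter and ASIR comparisons above quantify.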

  8. Severe Psychosis, Drug Dependence, and Hepatitis C Related to Slamming Mephedrone

    Directory of Open Access Journals (Sweden)

    Helen Dolengevich-Segal

    2016-01-01

    Full Text Available Background. Synthetic cathinones (SCs), also known as “bath salts,” are β-ketone amphetamine compounds derived from cathinone, a psychoactive substance found in Catha edulis. Mephedrone is the most representative SC. Slamming is the term used for the intravenous injection of these substances in the context of chemsex parties, in order to enhance sex experiences. Using IV mephedrone may lead to diverse medical and psychiatric complications like psychosis, aggressive behavior, and suicide ideation. Case. We report the case of a 25-year-old man admitted into a psychiatric unit, presenting with psychotic symptoms after slamming mephedrone almost every weekend for the last 4 months. He presents paranoid delusions, intense anxiety, and visual and kinesthetic hallucinations. He also shows intense craving, compulsive drug use, general malaise, and weakness. After four weeks of admission and antipsychotic treatment, delusions completely disappear. The patient is reinfected with hepatitis C. Discussion. Psychiatric and medical conditions related to chemsex and slamming have been reported in several European cities, but not in Spain. Psychotic symptoms have been associated with mephedrone and other SCs’ consumption, with the IV route being prone to produce more severe symptomatology and addictive behavior. In the case we report, paranoid psychosis, addiction, and medical complications are described.

  9. Severe Psychosis, Drug Dependence, and Hepatitis C Related to Slamming Mephedrone

    Science.gov (United States)

    Rodríguez-Salgado, Beatriz; Sánchez-Mateos, Daniel

    2016-01-01

    Background. Synthetic cathinones (SCs), also known as “bath salts,” are β-ketone amphetamine compounds derived from cathinone, a psychoactive substance found in Catha edulis. Mephedrone is the most representative SC. Slamming is the term used for the intravenous injection of these substances in the context of chemsex parties, in order to enhance sex experiences. Using IV mephedrone may lead to diverse medical and psychiatric complications like psychosis, aggressive behavior, and suicidal ideation. Case. We report the case of a 25-year-old man admitted into a psychiatric unit, presenting with psychotic symptoms after slamming mephedrone almost every weekend for the last 4 months. He presents paranoid delusions, intense anxiety, and visual and kinesthetic hallucinations. He also shows intense craving, compulsive drug use, general malaise, and weakness. After four weeks of admission and antipsychotic treatment, delusions completely disappear. The patient is reinfected with hepatitis C. Discussion. Psychiatric and medical conditions related to chemsex and slamming have been reported in several European cities, but not in Spain. Psychotic symptoms have been associated with mephedrone and other SCs' consumption, with the IV route prone to producing more severe symptomatology and addictive behavior. In the case we report, paranoid psychosis, addiction, and medical complications are described. PMID:27247820

  10. Continuous Recording and Interobserver Agreement Algorithms Reported in the "Journal of Applied Behavior Analysis" (1995-2005)

    Science.gov (United States)

    Mudford, Oliver C.; Taylor, Sarah Ann; Martin, Neil T.

    2009-01-01

    We reviewed all research articles in 10 recent volumes of the "Journal of Applied Behavior Analysis (JABA)": Vol. 28(3), 1995, through Vol. 38(2), 2005. Continuous recording was used in the majority (55%) of the 168 articles reporting data on free-operant human behaviors. Three methods for reporting interobserver agreement (exact agreement,…
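
    The exact-agreement statistic named above can be sketched as follows, assuming interval-by-interval response counts from two observers (the function name is ours):

```python
def exact_agreement(obs1, obs2):
    """Exact-agreement interobserver agreement (IOA): the percentage of
    observation intervals in which two observers recorded exactly the same
    count of responses."""
    if len(obs1) != len(obs2):
        raise ValueError("observers must score the same number of intervals")
    agreements = sum(1 for a, b in zip(obs1, obs2) if a == b)
    return 100.0 * agreements / len(obs1)
```

    For example, observers scoring counts [2, 0, 1, 3] and [2, 1, 1, 3] agree on three of four intervals, giving 75% exact agreement.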

  11. Intelligent simulated annealing algorithm applied to the optimization of the main magnet for magnetic resonance imaging machine; Algoritmo simulated annealing inteligente aplicado a la optimizacion del iman principal de una maquina de resonancia magnetica de imagenes

    Energy Technology Data Exchange (ETDEWEB)

    Sanchez Lopez, Hector [Universidad de Oriente, Santiago de Cuba (Cuba). Centro de Biofisica Medica]. E-mail: hsanchez@cbm.uo.edu.cu

    2001-08-01

    This work describes an alternative simulated annealing algorithm applied to the design of the main magnet for a magnetic resonance imaging machine. The algorithm uses a probabilistic radial basis neural network to classify the possible solutions before the objective function evaluation. This procedure reduces by up to 50% the number of iterations required by simulated annealing to achieve the global maximum, compared with the standard SA algorithm. The algorithm was applied to design a 0.1050 Tesla four-coil resistive magnet, which produces a magnetic field 2.13 times more uniform than the solution given by SA. (author)
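
    For orientation, plain simulated annealing (without the paper's radial-basis-network classification step) can be sketched as follows; all parameter values are illustrative:

```python
import math
import random

def simulated_annealing(f, x0, t0=10.0, cooling=0.995, steps=4000, seed=0):
    """Minimise f(x) by simulated annealing with geometric cooling."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(steps):
        cand = x + rng.gauss(0.0, 1.0)          # random neighbour of x
        fc = f(cand)
        # accept downhill moves always, uphill moves with Boltzmann probability
        if fc < fx or rng.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling                            # geometric cooling schedule
    return best, fbest
```

    The paper's variant inserts a classifier before the (expensive) objective evaluation so that unpromising candidates are rejected cheaply.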

  12. Applying Probability Theory for the Quality Assessment of a Wildfire Spread Prediction Framework Based on Genetic Algorithms

    Directory of Open Access Journals (Sweden)

    Andrés Cencerrado

    2013-01-01

    Full Text Available This work presents a framework for assessing how the constraints existing at the time of attending an ongoing forest fire affect simulation results, both in terms of quality (accuracy obtained) and the time needed to make a decision. In the wildfire spread simulation and prediction area, it is essential to properly exploit the computational power offered by new computing advances. For this purpose, we rely on a two-stage prediction process to enhance the quality of traditional predictions, taking advantage of parallel computing. This strategy is based on an adjustment stage carried out by a well-known evolutionary technique: Genetic Algorithms. The core of this framework is evaluated according to the principles of probability theory. Thus, a thorough statistical study is presented, oriented towards the characterization of this adjustment technique in order to help operations managers deal with the two aspects previously mentioned: time and quality. The experimental work in this paper is based on a region in Spain which is one of the most prone to forest fires: El Cap de Creus.
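
    The adjustment stage can be illustrated with a generic genetic algorithm that tunes simulator parameters to minimise a fire-spread error measure; this is a minimal sketch with illustrative operators, not the calibrated framework of the study:

```python
import random

def genetic_adjust(error, bounds, pop_size=30, gens=60, seed=1):
    """Adjust simulator parameters with a simple GA: elitist selection,
    blend crossover and Gaussian mutation, minimising `error(params)`."""
    rng = random.Random(seed)
    lo, hi = zip(*bounds)
    pop = [[rng.uniform(l, h) for l, h in bounds] for _ in range(pop_size)]
    for _ in range(gens):
        scored = sorted(pop, key=error)              # best (lowest error) first
        elite = scored[: pop_size // 2]              # keep the better half
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            w = rng.random()
            child = [w * x + (1 - w) * y for x, y in zip(a, b)]   # blend crossover
            child = [min(h, max(l, g + rng.gauss(0, 0.1 * (h - l))))  # mutate, clip
                     for g, l, h in zip(child, lo, hi)]
            children.append(child)
        pop = elite + children
    return min(pop, key=error)
```

    In the two-stage scheme, the parameters found on a past fire interval are then fed to the prediction stage for the next interval.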

  13. An observation planning algorithm applied to multi-objective astronomical observations and its simulation in COSMOS field

    Science.gov (United States)

    Jin, Yi; Gu, Yonggang; Zhai, Chao

    2012-09-01

    Multi-object fiber spectroscopic sky surveys are now booming, such as LAMOST, already built by China; the BigBOSS project put forward by the U.S. Lawrence Berkeley National Laboratory; and the GTC (Gran Telescopio Canarias) telescope developed by the United States, Mexico and Spain. They all use or will use this approach, in which each fiber can be moved within a certain area to observe one astronomical target, so observation planning is particularly important for these sky surveys. One observation planning algorithm for multi-objective astronomical observations is developed here. It can avoid collision and interference between the fiber positioning units in the focal plane during observation in one field of view (FOV), so that the objects of interest can be observed in a limited number of rounds with maximum efficiency. The observation can also be simulated for a wide field of view through multi-FOV observation. After the observation planning is built, a simulation is made in the COSMOS field using the GTC telescope. Galaxies, stars and high-redshift LBG galaxies of interest are selected after removal of the mask areas, which may contain bright stars. A nine-FOV simulation is then completed, and the observation efficiency and fiber utilization ratio for every round are given. Furthermore, allocating a certain number of fibers to background sky, giving different weights to different objects, and how to move the FOV to improve the overall observation efficiency are discussed.
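
    A drastically simplified, hypothetical version of such planning is a greedy one-round assignment of fibers to targets within each fiber's patrol area (the paper's actual collision-avoidance logic between neighbouring positioners is more involved):

```python
import math

def assign_fibers(fibers, targets, patrol_radius, priority=None):
    """Greedy one-round plan: each fiber takes at most one target inside its
    patrol area; each target is observed by at most one fiber.  Targets with
    higher priority weights are handled first."""
    order = sorted(range(len(targets)),
                   key=lambda i: -(priority[i] if priority else 1.0))
    taken, plan = set(), {}
    for i in order:
        tx, ty = targets[i]
        for f, (fx, fy) in enumerate(fibers):
            if f in taken:
                continue
            if math.hypot(tx - fx, ty - fy) <= patrol_radius:
                plan[f] = i          # fiber f observes target i this round
                taken.add(f)
                break
    return plan  # {fiber index: target index}; unassigned targets wait a round
```

    Targets left unassigned would be carried over to later rounds, which is where the multi-round efficiency trade-off discussed above arises.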

  14. Decision making based on data analysis and optimization algorithm applied for cogeneration systems integration into a grid

    Science.gov (United States)

    Asmar, Joseph Al; Lahoud, Chawki; Brouche, Marwan

    2018-05-01

    Cogeneration and trigeneration systems can contribute to the reduction of primary energy consumption and greenhouse gas emissions in the residential and tertiary sectors, by reducing fossil fuel demand and grid losses with respect to conventional systems. Cogeneration systems are characterized by very high energy efficiency (80 to 90%) as well as lower pollution compared with conventional energy production. The integration of these systems into the energy network must simultaneously take into account their economic and environmental challenges. In this paper, a decision-making strategy is introduced, divided into two parts: the first is based on a multi-objective optimization tool with data analysis, and the second on an optimization algorithm. The power dispatching of the Lebanese electricity grid is then simulated and considered as a case study in order to prove the compatibility of the cogeneration power calculated by our decision-making technique. In addition, the thermal energy produced by the cogeneration systems whose capacity is selected by our technique is compatible with the thermal demand for district heating.

  15. Thermal-economic optimisation of a CHP gas turbine system by applying a fit-problem genetic algorithm

    Science.gov (United States)

    Ferreira, Ana C. M.; Teixeira, Senhorinha F. C. F.; Silva, Rui G.; Silva, Ângela M.

    2018-04-01

    Cogeneration allows the optimal use of primary energy sources and significant reductions in carbon emissions, and its use has great potential for applications in the residential sector. This study aims to develop a methodology for the thermal-economic optimisation of a small-scale micro gas turbine for cogeneration purposes, able to fulfil domestic energy needs with a thermal power output of 125 kW. A constrained non-linear optimisation model was built. The objective function is the maximisation of the annual worth of the combined heat and power system, representing the balance between annual incomes and expenditures, subject to physical and economic constraints. A genetic algorithm coded in the Java programming language was developed. An optimal micro gas turbine able to produce 103.5 kW of electrical power with a positive annual profit (i.e. 11,925 €/year) was identified. The investment can be recovered in 4 years and 9 months, which is less than half of the system's expected lifetime.
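
    The reported payback follows from a simple undiscounted calculation; note that the implied investment of about 11,925 × 4.75 ≈ 56,600 € is our back-calculation, not a figure given in the study:

```python
def payback_period(investment, annual_income, annual_costs=0.0):
    """Simple (undiscounted) payback period in years: the time needed for
    cumulative annual profit to recover the initial investment."""
    annual_profit = annual_income - annual_costs
    if annual_profit <= 0:
        raise ValueError("project never pays back")
    return investment / annual_profit
```

    A discounted payback or net-present-value calculation would lengthen the period somewhat, since future profits are worth less than present ones.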

  16. SLAM-based dense surface reconstruction in monocular Minimally Invasive Surgery and its application to Augmented Reality.

    Science.gov (United States)

    Chen, Long; Tang, Wen; John, Nigel W; Wan, Tao Ruan; Zhang, Jian Jun

    2018-05-01

    While Minimally Invasive Surgery (MIS) offers considerable benefits to patients, it also poses significant challenges to a surgeon's performance due to well-known issues and restrictions associated with the field of view (FOV), hand-eye misalignment and disorientation, as well as the lack of stereoscopic depth perception in monocular endoscopy. Augmented Reality (AR) technology can help to overcome these limitations by augmenting the real scene with annotations, labels, tumour measurements or even a 3D reconstruction of anatomical structures at the target surgical locations. However, previous research on using AR technology in monocular MIS surgical scenes has mainly focused on information overlay without addressing correct spatial calibration, which can lead to incorrect localization of annotations and labels, and inaccurate depth cues and tumour measurements. In this paper, we present a novel intra-operative dense surface reconstruction framework that is capable of providing geometry information from only monocular MIS videos for geometry-aware AR applications such as site measurements and depth cues. We address a number of compelling issues in augmenting a scene for a monocular MIS environment, such as drift and inaccurate planar mapping. A state-of-the-art Simultaneous Localization And Mapping (SLAM) algorithm used in robotics has been extended to deal with monocular MIS surgical scenes for reliable endoscopic camera tracking and salient point mapping. A robust global 3D surface reconstruction framework has been developed for building a dense surface using only the unorganized sparse point clouds extracted from the SLAM. The 3D surface reconstruction framework employs the Moving Least Squares (MLS) smoothing algorithm and Poisson surface reconstruction for real-time processing of the point cloud data set. Finally, the 3D geometric information of the surgical scene allows better understanding and accurate placement of AR augmentations
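
    As a one-dimensional analogue of the MLS smoothing step (the paper works on 3D point clouds), each sample can be replaced by the value of a locally weighted linear fit; the function name and kernel width are illustrative:

```python
import math

def mls_smooth(points, h=1.0):
    """Moving-least-squares smoothing of noisy (x, y) samples: each point is
    replaced by the value of a Gaussian-weighted local linear fit."""
    out = []
    for xi, _ in points:
        w = [math.exp(-((x - xi) / h) ** 2) for x, _ in points]  # weights at xi
        sw = sum(w)
        sx = sum(wi * x for wi, (x, _) in zip(w, points))
        sy = sum(wi * y for wi, (_, y) in zip(w, points))
        sxx = sum(wi * x * x for wi, (x, _) in zip(w, points))
        sxy = sum(wi * x * y for wi, (x, y) in zip(w, points))
        det = sw * sxx - sx * sx
        if abs(det) < 1e-12:                 # degenerate: fall back to mean
            out.append((xi, sy / sw))
            continue
        b = (sw * sxy - sx * sy) / det       # slope of the weighted fit
        a = (sy - b * sx) / sw               # intercept of the weighted fit
        out.append((xi, a + b * xi))
    return out
```

    In 3D, the same idea fits a local plane (or low-order polynomial) in each point's neighbourhood before the Poisson reconstruction consumes the smoothed, oriented points.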

  17. Periodic modulation-based stochastic resonance algorithm applied to quantitative analysis for weak liquid chromatography-mass spectrometry signal of granisetron in plasma

    Science.gov (United States)

    Xiang, Suyun; Wang, Wei; Xiang, Bingren; Deng, Haishan; Xie, Shaofei

    2007-05-01

    The periodic modulation-based stochastic resonance algorithm (PSRA) was used to amplify and detect the weak liquid chromatography-mass spectrometry (LC-MS) signal of granisetron in plasma. In the algorithm, stochastic resonance (SR) is achieved by introducing an external periodic force into the nonlinear system. The optimization of parameters was carried out in two steps to account for both the signal-to-noise ratio (S/N) and the peak shape of the output signal. By applying PSRA with the optimized parameters, the signal-to-noise ratio of the LC-MS peak was enhanced significantly, and the distorted peak shape that often appears in the traditional stochastic resonance algorithm was corrected by the added periodic force. Using the signals enhanced by PSRA, the method lowered the limit of detection (LOD) and limit of quantification (LOQ) of granisetron in plasma from 0.05 and 0.2 ng/mL, respectively, to 0.01 and 0.02 ng/mL, and exhibited good linearity, accuracy and precision, ensuring accurate determination of the target analyte.
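
    The idea of adding an external periodic force to a nonlinear system can be sketched with an overdamped bistable model integrated by the Euler method; this is a toy analogue of PSRA, with illustrative parameters:

```python
import math

def sr_filter(signal, dt=0.01, a=1.0, b=1.0, amp=0.3, omega=5.0):
    """Drive the overdamped bistable system
        dx/dt = a*x - b*x**3 + s(t) + amp*sin(omega*t)
    with the noisy input s(t).  The added sinusoid plays the role of PSRA's
    external periodic modulation; under the right tuning, well-to-well
    switching amplifies a weak periodic component buried in the noise."""
    x, out = 0.0, []
    for k, s in enumerate(signal):
        t = k * dt
        x += (a * x - b * x ** 3 + s + amp * math.sin(omega * t)) * dt  # Euler step
        out.append(x)
    return out
```

    In practice the system parameters and the modulation are what the two-step optimization above tunes, trading S/N against output peak shape.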

  18. Recent infection testing algorithm (RITA) applied to new HIV diagnoses in England, Wales and Northern Ireland, 2009 to 2011.

    Science.gov (United States)

    Aghaizu, A; Murphy, G; Tosswill, J; DeAngelis, D; Charlett, A; Gill, O N; Ward, H; Lattimore, S; Simmons, Rd; Delpech, V

    2014-01-16

    In 2009, Public Health England (PHE) introduced the routine application of a recent infection testing algorithm (RITA) to new HIV diagnoses, where a positive RITA result indicates likely acquisition of infection in the previous six months. Laboratories submit serum specimens to PHE for testing using the HIV 1/2gO AxSYM assay modified for the determination of HIV antibody avidity. Results are classified according to the avidity index and to data on CD₄ count, antiretroviral treatment and the presence of an AIDS-defining illness. Between 2009 and 2011, 38.4% (6,966/18,134) of new HIV diagnoses in England, Wales and Northern Ireland were tested. The demographic characteristics of those tested were similar to those of all persons with diagnosed HIV. Overall, the proportion with recent infection was 14.7% (1,022/6,966), and it was higher among men who have sex with men (MSM) (22.3%, 720/3,223) than among heterosexual men and women (7.8%, 247/3,164). Higher proportions were observed among persons aged 15-24 years compared with those aged ≥50 years (MSM: 31.2% (139/445) vs 13.6% (42/308); heterosexual men and women: 17.3% (43/249) vs 6.2% (31/501)). Among heterosexual men and women, black Africans were least likely to have recent infection compared with whites (4.8%, 90/1,892 vs 13.3%, 97/728; adjusted odds ratio: 0.6; 95% CI: 0.4-0.9). Our results indicate evidence of ongoing HIV transmission during the study period, particularly among MSM.
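
    Schematically, a RITA-style classification combines the avidity index with clinical overrides; the threshold below is purely illustrative, not the assay's calibrated cut-off:

```python
def rita_classify(avidity_index, cd4=None, on_art=False, aids=False,
                  recent_threshold=0.8):
    """Classify a new HIV diagnosis as 'recent' (likely acquired within the
    previous six months) or 'long-standing'.  Threshold and CD4 cut-off are
    illustrative placeholders for the assay-specific calibration."""
    # Clinical overrides: antiretroviral treatment, an AIDS-defining illness
    # or a very low CD4 count indicate long-standing infection regardless of
    # the antibody avidity result.
    if on_art or aids or (cd4 is not None and cd4 < 200):
        return "long-standing"
    # Low antibody avidity is characteristic of recently acquired infection.
    return "recent" if avidity_index < recent_threshold else "long-standing"
```

    The overrides matter because early treatment or advanced disease can depress antibody avidity and mimic recency.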

  19. Deriving causes of child mortality by re–analyzing national verbal autopsy data applying a standardized computer algorithm in Uganda, Rwanda and Ghana

    Directory of Open Access Journals (Sweden)

    Li Liu

    2015-06-01

    Full Text Available Background To accelerate progress toward Millennium Development Goal 4, reliable information on causes of child mortality is critical. With more national verbal autopsy (VA) studies becoming available, improving the consistency of nationally derived VA causes of child death should be considered for the purpose of global comparison. We aimed to adapt a standardized computer algorithm to re-analyze national child VA studies conducted recently in Uganda, Rwanda and Ghana, and to compare our results with those derived from physician review, in order to explore issues surrounding the application of the standardized algorithm in place of physician review. Methods and Findings We adapted the standardized computer algorithm considering the disease profiles in Uganda, Rwanda and Ghana. We then derived cause-specific mortality fractions applying the adapted algorithm and compared the results with those ascertained by physician review, examining the individual- and population-level agreement. Our results showed that the leading causes of child mortality in Uganda, Rwanda and Ghana were pneumonia (16.5–21.1%) and malaria (16.8–25.6%) among children below five years, and intrapartum-related complications (6.4–10.7%) and preterm birth complications (4.5–6.3%) among neonates. The individual-level agreement was poor to substantial across causes (kappa statistics: –0.03 to 0.83), with moderate to substantial agreement observed for injury, congenital malformation, preterm birth complications, malaria and measles. At the population level, despite fairly different cause-specific mortality fractions, the ranking of the leading causes was largely similar. Conclusions The standardized computer algorithm produced an internally consistent distribution of causes of child mortality. The results were also qualitatively comparable to those based on physician review from the perspective of public health policy. The standardized computer algorithm has the advantage of
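
    The individual-level agreement above is measured with kappa statistics; for two cause-of-death assignments (algorithm vs physician), Cohen's kappa can be computed as:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two lists of assigned causes:
    kappa = (p_observed - p_expected) / (1 - p_expected)."""
    n = len(rater_a)
    po = sum(1 for a, b in zip(rater_a, rater_b) if a == b) / n   # observed
    ca, cb = Counter(rater_a), Counter(rater_b)
    pe = sum(ca[c] * cb[c] for c in ca) / (n * n)                 # by chance
    if pe == 1.0:
        return 1.0
    return (po - pe) / (1 - pe)
```

    Values near 0 mean agreement no better than chance (the study's low end of -0.03), while values above roughly 0.6 are conventionally read as substantial.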

  20. Natural speech algorithm applied to baseline interview data can predict which patients will respond to psilocybin for treatment-resistant depression.

    Science.gov (United States)

    Carrillo, Facundo; Sigman, Mariano; Fernández Slezak, Diego; Ashton, Philip; Fitzgerald, Lily; Stroud, Jack; Nutt, David J; Carhart-Harris, Robin L

    2018-04-01

    Natural speech analytics has seen improvements over recent years, and this has opened a window for objective and quantitative diagnosis in psychiatry. Here, we used a machine learning algorithm applied to natural speech to ask whether language properties measured before psilocybin treatment for treatment-resistant depression can predict for which patients it will be effective and for which it will not. A baseline autobiographical memory interview was conducted and transcribed. Patients with treatment-resistant depression received 2 doses of psilocybin, 10 mg and 25 mg, 7 days apart. Psychological support was provided before, during and after all dosing sessions. Quantitative speech measures were applied to the interview data from 17 patients and 18 untreated age-matched healthy control subjects. A machine learning algorithm was used to classify between controls and patients and to predict treatment response. Speech analytics and machine learning successfully differentiated depressed patients from healthy controls and identified treatment responders from non-responders with 85% accuracy (75% precision). Automatic natural language analysis was used to predict effective response to treatment with psilocybin, suggesting that these tools offer a highly cost-effective facility for screening individuals for treatment suitability and sensitivity. The sample size was small and replication is required to strengthen inferences on these results. Copyright © 2018 Elsevier B.V. All rights reserved.
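
    The reported figures combine two standard classification metrics, which can be computed from predictions as follows (label names are illustrative):

```python
def accuracy_precision(y_true, y_pred, positive="responder"):
    """Accuracy (fraction of correct predictions) and precision (fraction of
    predicted positives that are truly positive) for a binary classifier."""
    correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == p == positive)
    pred_pos = sum(1 for p in y_pred if p == positive)
    accuracy = correct / len(y_true)
    precision = tp / pred_pos if pred_pos else 0.0
    return accuracy, precision
```

    With only 17 patients, such point estimates carry wide confidence intervals, which is why the abstract itself calls for replication.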

  1. Using Elman recurrent neural networks with the conjugate gradient algorithm to determine the amount of anesthetic medicine to be applied.

    Science.gov (United States)

    Güntürkün, Rüştü

    2010-08-01

    In this study, Elman recurrent neural networks trained with the conjugate gradient algorithm have been used to determine the depth of anesthesia during the maintenance stage of anesthesia and to estimate the amount of medicine to be applied at that moment. Feed-forward neural networks are also used for comparison. The conjugate gradient algorithm is compared with back propagation (BP) for training the neural networks. The applied artificial neural network is composed of three layers, namely the input layer, the hidden layer and the output layer. The nonlinear sigmoid activation function has been used in the hidden layer and the output layer. EEG data were recorded with a Nihon Kohden 9200 brand 22-channel EEG device. The international 8-channel bipolar 10-20 montage system (8 TB-b system) was used in placing the recording electrodes. EEG data were sampled once every 2 milliseconds. The artificial neural network was designed to have 60 neurons in the input layer, 30 neurons in the hidden layer and 1 neuron in the output layer. The inputs are the power spectral density (PSD) values of 10-second EEG segments corresponding to the 1-50 Hz frequency range, and the ratio of the total PSD power of the current EEG segment in that range to the total PSD power of an EEG segment taken prior to anesthesia.

  2. Two-step algorithm of generalized PAPA method applied to linear programming solution of dynamic matrix control

    International Nuclear Information System (INIS)

    Shimizu, Yoshiaki

    1991-01-01

    In recent complicated nuclear systems, there are increasing demands for highly advanced procedures for various problem-solving tasks. Among them, keen interest has been paid to man-machine communication as a way to improve both safety and economy. Many optimization methods are well suited to elaborate on these points. In this preliminary note, we are concerned with the application of linear programming (LP) for this purpose. First, we present a new, superior version of the generalized PAPA method (GEPAPA) to solve LP problems. We then examine its effectiveness when applied to derive dynamic matrix control (DMC) as the LP solution. The approach aims at the above goal through a quality control of processes that appear in the system. (author)

  3. CCS Site Optimization by Applying a Multi-objective Evolutionary Algorithm to Semi-Analytical Leakage Models

    Science.gov (United States)

    Cody, B. M.; Gonzalez-Nicolas, A.; Bau, D. A.

    2011-12-01

    Carbon capture and storage (CCS) has been proposed as a method of reducing global carbon dioxide (CO2) emissions. Although CCS has the potential to greatly retard greenhouse gas loading to the atmosphere while cleaner, more sustainable energy solutions are developed, there is a possibility that sequestered CO2 may leak and intrude into and adversely affect groundwater resources. It has been reported [1] that, while CO2 intrusion typically does not directly threaten underground drinking water resources, it may cause secondary effects, such as the mobilization of hazardous inorganic constituents present in aquifer minerals and changes in pH values. These risks must be fully understood and minimized before CCS project implementation. Combined management of project resources and leakage risk is crucial for the implementation of CCS. In this work, we present a method of: (a) minimizing the total CCS cost, the summation of major project costs with the cost associated with CO2 leakage; and (b) maximizing the mass of injected CO2, for a given proposed sequestration site. Optimization decision variables include the number of CO2 injection wells, injection rates, and injection well locations. The capital and operational costs of injection wells are directly related to injection well depth, location, injection flow rate, and injection duration. The cost of leakage is directly related to the mass of CO2 leaked through weak areas, such as abandoned oil wells, in the cap rock layers overlying the injected formation. Additional constraints on fluid overpressure caused by CO2 injection are imposed to maintain predefined effective stress levels that prevent cap rock fracturing. Here, both mass leakage and fluid overpressure are estimated using two semi-analytical models based upon work by [2,3]. A multi-objective evolutionary algorithm coupled with these semi-analytical leakage flow models is used to determine Pareto-optimal trade-off sets giving minimum total cost vs. maximum mass.
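
    The Pareto-optimal trade-off set for the two objectives (minimise total cost, maximise injected mass) is the non-dominated subset of candidate solutions, e.g.:

```python
def pareto_front(solutions):
    """Non-dominated set for (total_cost, injected_mass) pairs, where cost is
    to be minimised and mass maximised."""
    front = []
    for cost, mass in solutions:
        # a solution is dominated if another is at least as cheap AND injects
        # at least as much mass, and differs in at least one objective
        dominated = any(c <= cost and m >= mass and (c, m) != (cost, mass)
                        for c, m in solutions)
        if not dominated:
            front.append((cost, mass))
    return sorted(front)
```

    Note this naive check would discard exact duplicates (each dominates the other); real multi-objective evolutionary algorithms such as NSGA-II use a more careful dominance ranking plus diversity preservation.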

  4. How employees perceive organizational learning: construct validation of the 25-item short form of the strategic learning assessment map (SF-SLAM)

    NARCIS (Netherlands)

    Mainert, Jakob; Niepel, Christoph; Lans, T.; Greiff, Samuel

    2018-01-01

    Purpose: The Strategic Learning Assessment Map (SLAM) originally assessed organizational learning (OL) at the level of the firm by addressing managers, who rated OL in the SLAM on five dimensions of individual learning, group learning, organizational learning, feed-forward learning, and feedback

  5. SLAM, a transport model for the short term and short range, applied to the description of the dispersion of ammonia

    NARCIS (Netherlands)

    Boermans GMF; van Pul WAJ

    1993-01-01

    SLAM (Short-term Local scale Ammonia transport Model) has been developed to calculate the ammonia concentrations in a multiple source area on a short term (hour) and local scale (100 m up to 15 km). In SLAM the dispersion in the surface layer is modelled using a description given by Gryning et al.
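
    SLAM's surface-layer dispersion follows the description by Gryning et al.; as a generic stand-in, short-range dispersion from a continuous point source is often illustrated with a Gaussian plume (the coefficients below are rough neutral-stability values, not SLAM's parameterisation):

```python
import math

def plume_concentration(q, u, x, y, z, h=0.0, ay=0.08, az=0.06):
    """Gaussian plume concentration (g/m^3) at (x, y, z): x downwind, y
    crosswind, z height, for a continuous point source of strength q (g/s)
    at height h in a uniform wind u (m/s).  sigma_y = ay*x and sigma_z = az*x
    are crude linear fits for near-neutral stability."""
    if x <= 0:
        return 0.0
    sy, sz = ay * x, az * x
    lateral = math.exp(-y * y / (2 * sy * sy))
    vertical = (math.exp(-(z - h) ** 2 / (2 * sz * sz))
                + math.exp(-(z + h) ** 2 / (2 * sz * sz)))  # ground reflection
    return q / (2 * math.pi * u * sy * sz) * lateral * vertical
```

    Summing such contributions over many sources gives the multiple-source-area concentration field that SLAM targets at its 100 m to 15 km scale.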

  6. Evaluating the statistical performance of less applied algorithms in classification of worldview-3 imagery data in an urbanized landscape

    Science.gov (United States)

    Ranaie, Mehrdad; Soffianian, Alireza; Pourmanafi, Saeid; Mirghaffari, Noorollah; Tarkesh, Mostafa

    2018-03-01

    In the recent decade, analysis of remotely sensed imagery has become one of the most common and widely used procedures in environmental studies, in which supervised image classification techniques play a central role. Hence, using a high-resolution WorldView-3 image over a mixed urbanized landscape in Iran, three less commonly applied image classification methods, including bagged CART, the stochastic gradient boosting model and a neural network with feature extraction, were tested and compared with two prevalent methods: random forest and a support vector machine with a linear kernel. To do so, each method was run ten times, and three validation techniques were used to estimate the accuracy statistics, consisting of cross-validation, independent validation and validation with the total training data. Moreover, using ANOVA and the Tukey test, the statistical significance of differences between the classification methods was assessed. In general, the results showed that random forest, with a marginal difference compared to bagged CART and the stochastic gradient boosting model, is the best-performing method, although based on independent validation there was no significant difference between the performances of the classification methods. It should finally be noted that the neural network with feature extraction and the linear support vector machine had better processing speed than the others.

  7. Independent component analysis-based algorithm for automatic identification of Raman spectra applied to artistic pigments and pigment mixtures.

    Science.gov (United States)

    González-Vidal, Juan José; Pérez-Pueyo, Rosanna; Soneira, María José; Ruiz-Moreno, Sergio

    2015-03-01

    A new method has been developed to automatically identify Raman spectra, whether they correspond to single- or multicomponent spectra. The method requires no user input or judgment. There are thus no parameters to be tweaked. Furthermore, it provides a reliability factor on the resulting identification, with the aim of becoming a useful support tool for the analyst in the decision-making process. The method relies on the multivariate techniques of principal component analysis (PCA) and independent component analysis (ICA), and on some metrics. It has been developed for the application of automated spectral analysis, where the analyzed spectrum is provided by a spectrometer that has no previous knowledge of the analyzed sample, meaning that the number of components in the sample is unknown. We describe the details of this method and demonstrate its efficiency by identifying both simulated spectra and real spectra. The method has been applied to artistic pigment identification. The reliable and consistent results that were obtained make the methodology a helpful tool suitable for the identification of pigments in artwork or in paint in general.
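
    As a much-simplified stand-in for the PCA/ICA pipeline, ranking library spectra by cosine similarity conveys the idea of automatic identification with a reliability factor (the function and scoring below are ours, not the authors' method):

```python
import math

def identify_spectrum(unknown, library):
    """Rank reference spectra by cosine similarity to an unknown Raman
    spectrum; the top score doubles as a crude reliability factor."""
    def cosine(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        nu = math.sqrt(sum(a * a for a in u))
        nv = math.sqrt(sum(b * b for b in v))
        return dot / (nu * nv)
    ranked = sorted(((cosine(unknown, spec), name)
                     for name, spec in library.items()), reverse=True)
    reliability, best = ranked[0]
    return best, reliability
```

    Unlike this sketch, the ICA-based method can decompose a multicomponent spectrum into several pigments, which is what makes it suitable for pigment mixtures.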

  8. Identification of Key Residues in Virulent Canine Distemper Virus Hemagglutinin That Control CD150/SLAM-Binding Activity

    Science.gov (United States)

    Zipperle, Ljerka; Langedijk, Johannes P. M.; Örvell, Claes; Vandevelde, Marc; Zurbriggen, Andreas; Plattet, Philippe

    2010-01-01

    Morbillivirus cell entry is controlled by hemagglutinin (H), an envelope-anchored viral glycoprotein determining interaction with multiple host cell surface receptors. Subsequent to virus-receptor attachment, H is thought to transduce a signal triggering the viral fusion glycoprotein, which in turn drives virus-cell fusion activity. Cell entry through the universal morbillivirus receptor CD150/SLAM was reported to depend on two nearby microdomains located within the hemagglutinin. Here, we provide evidence that three key residues in the virulent canine distemper virus A75/17 H protein (Y525, D526, and R529), clustering at the rim of a large recessed groove created by β-propeller blades 4 and 5, control SLAM-binding activity without drastically modulating protein surface expression or SLAM-independent F triggering. PMID:20631152

  9. LERC-SLAM - THE NASA LEWIS RESEARCH CENTER SATELLITE LINK ATTENUATION MODEL PROGRAM (IBM PC VERSION)

    Science.gov (United States)

    Manning, R. M.

    1994-01-01

    The frequency and intensity of rain attenuation affecting the communication between a satellite and an earth terminal is an important consideration in planning satellite links. The NASA Lewis Research Center Satellite Link Attenuation Model Program (LeRC-SLAM) provides a static and dynamic statistical assessment of the impact of rain attenuation on a communications link established between an earth terminal and a geosynchronous satellite. The program is designed for use in the specification, design and assessment of satellite links for any terminal location in the continental United States. The basis for LeRC-SLAM is the ACTS Rain Attenuation Prediction Model, which uses a log-normal cumulative probability distribution to describe the random process of rain attenuation on satellite links. The derivation of the statistics for the rainrate process at the specified terminal location relies on long term rainfall records compiled by the U.S. Weather Service during time periods of up to 55 years in length. The theory of extreme value statistics is also utilized. The user provides 1) the longitudinal position of the satellite in geosynchronous orbit, 2) the geographical position of the earth terminal in terms of latitude and longitude, 3) the height above sea level of the terminal site, 4) the yearly average rainfall at the terminal site, and 5) the operating frequency of the communications link (within 1 to 1000 GHz, inclusive). Based on the yearly average rainfall at the terminal location, LeRC-SLAM calculates the relevant rain statistics for the site using an internal data base. The program then generates rain attenuation data for the satellite link. This data includes a description of the static (i.e., yearly) attenuation process, an evaluation of the cumulative probability distribution for attenuation effects, and an evaluation of the probability of fades below selected fade depths. In addition, LeRC-SLAM calculates the elevation and azimuth angles of the terminal
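
    The log-normal attenuation model implies a simple exceedance (fade) probability; the helper below shows only that step, not LeRC-SLAM's internal rain-statistics database or the ACTS model itself:

```python
import math

def fade_exceedance(a_db, median_db, sigma_ln):
    """Probability that rain attenuation exceeds a_db (dB), assuming the
    attenuation is log-normally distributed with the given median (dB) and
    standard deviation of the log-attenuation."""
    z = (math.log(a_db) - math.log(median_db)) / (sigma_ln * math.sqrt(2.0))
    return 0.5 * math.erfc(z)   # P(A > a) for a log-normal variate
```

    Evaluating this over a range of fade depths yields the "probability of fades below selected fade depths" curve that the program reports.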

  10. LERC-SLAM - THE NASA LEWIS RESEARCH CENTER SATELLITE LINK ATTENUATION MODEL PROGRAM (MACINTOSH VERSION)

    Science.gov (United States)

    Manning, R. M.

    1994-01-01

    The frequency and intensity of rain attenuation affecting the communication between a satellite and an earth terminal is an important consideration in planning satellite links. The NASA Lewis Research Center Satellite Link Attenuation Model Program (LeRC-SLAM) provides a static and dynamic statistical assessment of the impact of rain attenuation on a communications link established between an earth terminal and a geosynchronous satellite. The program is designed for use in the specification, design and assessment of satellite links for any terminal location in the continental United States. The basis for LeRC-SLAM is the ACTS Rain Attenuation Prediction Model, which uses a log-normal cumulative probability distribution to describe the random process of rain attenuation on satellite links. The derivation of the statistics for the rainrate process at the specified terminal location relies on long term rainfall records compiled by the U.S. Weather Service during time periods of up to 55 years in length. The theory of extreme value statistics is also utilized. The user provides 1) the longitudinal position of the satellite in geosynchronous orbit, 2) the geographical position of the earth terminal in terms of latitude and longitude, 3) the height above sea level of the terminal site, 4) the yearly average rainfall at the terminal site, and 5) the operating frequency of the communications link (within 1 to 1000 GHz, inclusive). Based on the yearly average rainfall at the terminal location, LeRC-SLAM calculates the relevant rain statistics for the site using an internal data base. The program then generates rain attenuation data for the satellite link. This data includes a description of the static (i.e., yearly) attenuation process, an evaluation of the cumulative probability distribution for attenuation effects, and an evaluation of the probability of fades below selected fade depths. In addition, LeRC-SLAM calculates the elevation and azimuth angles of the terminal

  11. AUTOMATIC CALIBRATION OF A STOCHASTIC-LAGRANGIAN TRANSPORT MODEL (SLAM)

    Science.gov (United States)

    Numerical models are a useful tool in evaluating and designing NAPL remediation systems. Traditional constitutive finite difference and finite element models are complex and expensive to apply. For this reason, this paper presents the application of a simplified stochastic-Lagran...

  12. Evaluation of a metal artifact reduction algorithm applied to post-interventional flat detector CT in comparison to pre-treatment CT in patients with acute subarachnoid haemorrhage

    Energy Technology Data Exchange (ETDEWEB)

    Mennecke, Angelika; Svergun, Stanislav; Doerfler, Arnd; Struffert, Tobias [University of Erlangen-Nuremberg, Department of Neuroradiology, Erlangen (Germany); Scholz, Bernhard [Siemens Healthcare GmbH, Forchheim (Germany); Royalty, Kevin [Siemens Medical Solutions, USA, Inc., Hoffman Estates, IL (United States)

    2017-01-15

    Metal artefacts can impair accurate diagnosis of haemorrhage using flat detector CT (FD-CT), especially after aneurysm coiling. Within this work we evaluate a prototype metal artefact reduction algorithm by comparison of the artefact-reduced and the non-artefact-reduced FD-CT images to pre-treatment FD-CT and multi-slice CT images. Twenty-five patients with acute aneurysmal subarachnoid haemorrhage (SAH) were selected retrospectively. FD-CT and multi-slice CT before endovascular treatment as well as FD-CT data sets after treatment were available for all patients. The algorithm was applied to post-treatment FD-CT. The effect of the algorithm was evaluated utilizing the pre-post concordance of a modified Fisher score, a subjective image quality assessment, the range of the Hounsfield units within three ROIs, and the pre-post slice-wise Pearson correlation. The pre-post concordance of the modified Fisher score, the subjective image quality, and the pre-post correlation of the ranges of the Hounsfield units were significantly higher for artefact-reduced than for non-artefact-reduced images. Within the metal-affected slices, the pre-post slice-wise Pearson correlation coefficient was higher for artefact-reduced than for non-artefact-reduced images. The overall diagnostic quality of the artefact-reduced images was improved and reached the level of the pre-interventional FD-CT images. The metal-unaffected parts of the image were not modified. (orig.)

  13. Evaluation of a metal artifact reduction algorithm applied to post-interventional flat detector CT in comparison to pre-treatment CT in patients with acute subarachnoid haemorrhage

    International Nuclear Information System (INIS)

    Mennecke, Angelika; Svergun, Stanislav; Doerfler, Arnd; Struffert, Tobias; Scholz, Bernhard; Royalty, Kevin

    2017-01-01

    Metal artefacts can impair accurate diagnosis of haemorrhage using flat detector CT (FD-CT), especially after aneurysm coiling. Within this work we evaluate a prototype metal artefact reduction algorithm by comparison of the artefact-reduced and the non-artefact-reduced FD-CT images to pre-treatment FD-CT and multi-slice CT images. Twenty-five patients with acute aneurysmal subarachnoid haemorrhage (SAH) were selected retrospectively. FD-CT and multi-slice CT before endovascular treatment as well as FD-CT data sets after treatment were available for all patients. The algorithm was applied to post-treatment FD-CT. The effect of the algorithm was evaluated utilizing the pre-post concordance of a modified Fisher score, a subjective image quality assessment, the range of the Hounsfield units within three ROIs, and the pre-post slice-wise Pearson correlation. The pre-post concordance of the modified Fisher score, the subjective image quality, and the pre-post correlation of the ranges of the Hounsfield units were significantly higher for artefact-reduced than for non-artefact-reduced images. Within the metal-affected slices, the pre-post slice-wise Pearson correlation coefficient was higher for artefact-reduced than for non-artefact-reduced images. The overall diagnostic quality of the artefact-reduced images was improved and reached the level of the pre-interventional FD-CT images. The metal-unaffected parts of the image were not modified. (orig.)

  14. Hydroelastic slamming of flexible wedges: Modeling and experiments from water entry to exit

    Science.gov (United States)

    Shams, Adel; Zhao, Sam; Porfiri, Maurizio

    2017-03-01

    Fluid-structure interactions during hull slamming are of great interest for the design of aircraft and marine vessels. The main objective of this paper is to establish a semi-analytical model to investigate the entire hydroelastic slamming of a wedge, from the entry to the exit phase. The structural dynamics is described through Euler-Bernoulli beam theory and the hydrodynamic loading is estimated using potential flow theory. A Galerkin method is used to obtain a reduced order modal model in closed-form, and a Newmark-type integration scheme is utilized to find an approximate solution. To benchmark the proposed semi-analytical solution, we experimentally investigate fluid-structure interactions through particle image velocimetry (PIV). PIV is used to estimate the velocity field, and the pressure is reconstructed by solving the incompressible Navier-Stokes equations from PIV data. Experimental results confirm that the flow physics and free-surface elevation during water exit are different from water entry. While water entry is characterized by positive values of the pressure field, with respect to the atmospheric pressure, the pressure field during water exit may be less than atmospheric. Experimental observations indicate that the location where the maximum pressure in the fluid is attained moves from the pile-up region to the keel, as the wedge reverses its motion from the entry to the exit stage. Comparing experimental results with semi-analytical findings, we observe that the model is successful in predicting the free-surface elevation and the overall distribution of the hydrodynamic loading on the wedge. These integrated experimental and theoretical analyses of water exit problems are expected to aid in the design of lightweight structures, which experience repeated slamming events during their operation.
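The Newmark-type integration mentioned above can be sketched for a single modal equation m*q'' + c*q' + k*q = f(t). This is a generic average-acceleration Newmark scheme under assumed parameters, not the paper's wedge model:

```python
def newmark(m, c, k, f, dt, n_steps, beta=0.25, gamma=0.5, q0=0.0, v0=0.0):
    """Average-acceleration Newmark scheme for m*q'' + c*q' + k*q = f(t).

    f is a callable returning the modal force at time t.
    Returns the displacement history [q_0, q_1, ..., q_n].
    """
    q, v = q0, v0
    a = (f(0.0) - c * v - k * q) / m            # initial acceleration
    keff = m / (beta * dt**2) + gamma * c / (beta * dt) + k
    history = [q]
    for i in range(1, n_steps + 1):
        t = i * dt
        # Effective load from the previous state (standard implicit Newmark)
        rhs = (f(t)
               + m * (q / (beta * dt**2) + v / (beta * dt)
                      + (1.0 / (2 * beta) - 1.0) * a)
               + c * (gamma * q / (beta * dt) + (gamma / beta - 1.0) * v
                      + dt * (gamma / (2 * beta) - 1.0) * a))
        q_new = rhs / keff
        a_new = ((q_new - q) / (beta * dt**2) - v / (beta * dt)
                 - (1.0 / (2 * beta) - 1.0) * a)
        v_new = v + dt * ((1.0 - gamma) * a + gamma * a_new)
        q, v, a = q_new, v_new, a_new
        history.append(q)
    return history
```

With beta = 1/4 and gamma = 1/2 the scheme is unconditionally stable and introduces no numerical damping, which makes it a common default for hydroelastic problems of this kind.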

  15. Applying Genetic Algorithms and RIA technologies to the development of Complex-VRP Tools in real-world distribution of petroleum products

    Directory of Open Access Journals (Sweden)

    Antonio Moratilla Ocaña

    2014-12-01

Full Text Available Distribution problems have generated a large body of research and development covering the VRP problem and its many variants, but few investigations examine it as an Information System, and far fewer address how it should be approached from a development and implementation point of view. This paper describes the characteristics of a real information system for fuel distribution problems at country scale, joining VRP research and development work using Genetic Algorithms with the design of a Web-based Information System. A view of the traditional workflow in this area is shown, together with the new approach on which the proposed system is based. Taking into account all the constraints in the field, the authors have developed a Web-based VRP solution using Genetic Algorithms with multiple web frameworks for each architecture layer, focusing on functionality and usability in order to minimize human error and maximize productivity. To achieve these goals, the authors use SmartGWT as a powerful Web-based RIA SPA framework with Java integration, together with multiple server frameworks and OSS-based solutions, applied to the development of a very complex VRP system for a logistics operator of petroleum products.
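The paper does not publish its GA internals; as a generic illustration of evolving vehicle routes, the sketch below optimizes a single closed delivery tour (a TSP-like special case of the VRP) with order crossover and swap mutation:

```python
import random

def route_length(route, dist):
    """Total length of a closed tour starting and ending at depot 0."""
    tour = [0] + list(route) + [0]
    return sum(dist[a][b] for a, b in zip(tour, tour[1:]))

def order_crossover(p1, p2, rng):
    """OX crossover: copy a slice of p1, fill the rest in p2's order."""
    i, j = sorted(rng.sample(range(len(p1)), 2))
    hole = set(p1[i:j + 1])
    child = [c for c in p2 if c not in hole]
    return child[:i] + p1[i:j + 1] + child[i:]

def ga_vrp(dist, pop_size=40, generations=200, seed=1):
    """Evolve a customer visiting order minimizing tour length."""
    rng = random.Random(seed)
    customers = list(range(1, len(dist)))
    pop = [rng.sample(customers, len(customers)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda r: route_length(r, dist))
        survivors = pop[:pop_size // 2]          # elitist truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            child = order_crossover(a, b, rng)
            if rng.random() < 0.2:               # swap mutation
                x, y = rng.sample(range(len(child)), 2)
                child[x], child[y] = child[y], child[x]
            children.append(child)
        pop = survivors + children
    best = min(pop, key=lambda r: route_length(r, dist))
    return best, route_length(best, dist)
```

A production complex-VRP engine would add capacities, time windows, and multiple vehicles on top of this permutation-encoding core.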

  16. Developing a Reading Concentration Monitoring System by Applying an Artificial Bee Colony Algorithm to E-Books in an Intelligent Classroom

    Directory of Open Access Journals (Sweden)

    Yueh-Min Huang

    2012-10-01

Full Text Available A growing number of educational studies apply sensors to improve student learning in real classroom settings. However, how can sensors be integrated into classrooms to help instructors find out students’ reading concentration rates and thus better increase learning effectiveness? The aim of the current study was to develop a reading concentration monitoring system for use with e-books in an intelligent classroom and to help instructors find out the students’ reading concentration rates. The proposed system uses three types of sensor technologies, namely a webcam, heartbeat sensor, and blood oxygen sensor to detect the learning behaviors of students by capturing various physiological signals. An artificial bee colony (ABC) optimization approach is applied to the data gathered from these sensors to help instructors understand their students’ reading concentration rates in a classroom learning environment. The results show that the use of the ABC algorithm in the proposed system can effectively obtain near-optimal solutions. The system has a user-friendly graphical interface, making it easy for instructors to clearly understand the reading status of their students.

  17. Developing a reading concentration monitoring system by applying an artificial bee colony algorithm to e-books in an intelligent classroom.

    Science.gov (United States)

    Hsu, Chia-Cheng; Chen, Hsin-Chin; Su, Yen-Ning; Huang, Kuo-Kuang; Huang, Yueh-Min

    2012-10-22

    A growing number of educational studies apply sensors to improve student learning in real classroom settings. However, how can sensors be integrated into classrooms to help instructors find out students' reading concentration rates and thus better increase learning effectiveness? The aim of the current study was to develop a reading concentration monitoring system for use with e-books in an intelligent classroom and to help instructors find out the students' reading concentration rates. The proposed system uses three types of sensor technologies, namely a webcam, heartbeat sensor, and blood oxygen sensor to detect the learning behaviors of students by capturing various physiological signals. An artificial bee colony (ABC) optimization approach is applied to the data gathered from these sensors to help instructors understand their students' reading concentration rates in a classroom learning environment. The results show that the use of the ABC algorithm in the proposed system can effectively obtain near-optimal solutions. The system has a user-friendly graphical interface, making it easy for instructors to clearly understand the reading status of their students.
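As a hedged illustration of the artificial bee colony metaheuristic named above (not the authors' implementation), the sketch below minimizes a nonnegative objective with the usual employed/onlooker/scout phases; the fitness transform 1/(1+f) assumes f >= 0:

```python
import random

def abc_minimize(f, bounds, n_food=10, limit=20, cycles=100, seed=0):
    """Minimal artificial bee colony (ABC) sketch for continuous minimization.
    Assumes a nonnegative objective f; bounds is a list of (lo, hi) pairs."""
    rng = random.Random(seed)
    dim = len(bounds)

    def rand_source():
        return [rng.uniform(lo, hi) for lo, hi in bounds]

    foods = [rand_source() for _ in range(n_food)]
    fits = [f(x) for x in foods]
    trials = [0] * n_food

    def neighbour(i):
        # Perturb one coordinate relative to a randomly chosen other source
        k = rng.randrange(n_food - 1)
        k = k if k < i else k + 1
        d = rng.randrange(dim)
        x = foods[i][:]
        x[d] += rng.uniform(-1.0, 1.0) * (foods[i][d] - foods[k][d])
        lo, hi = bounds[d]
        x[d] = min(max(x[d], lo), hi)
        return x

    def try_improve(i):
        cand = neighbour(i)
        fc = f(cand)
        if fc < fits[i]:                      # greedy selection
            foods[i], fits[i], trials[i] = cand, fc, 0
        else:
            trials[i] += 1

    for _ in range(cycles):
        for i in range(n_food):               # employed bee phase
            try_improve(i)
        weights = [1.0 / (1.0 + ft) for ft in fits]
        total = sum(weights)
        for _ in range(n_food):               # onlooker phase (roulette wheel)
            r, acc, chosen = rng.uniform(0.0, total), 0.0, 0
            for j, w in enumerate(weights):
                acc += w
                if acc >= r:
                    chosen = j
                    break
            try_improve(chosen)
        for i in range(n_food):               # scout phase: abandon stale sources
            if trials[i] > limit:
                foods[i] = rand_source()
                fits[i] = f(foods[i])
                trials[i] = 0
    best = min(range(n_food), key=lambda i: fits[i])
    return foods[best], fits[best]
```

In the paper's setting the "food sources" would encode candidate interpretations of the sensor data rather than points in a toy search space.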

  18. SLAM-Aided Stem Mapping for Forest Inventory with Small-Footprint Mobile LiDAR

    Directory of Open Access Journals (Sweden)

    Jian Tang

    2015-12-01

Full Text Available Accurately retrieving tree stem location distributions is a basic requirement for biomass estimation of forest inventory. Combining Inertial Measurement Units (IMU) with Global Navigation Satellite Systems (GNSS) is a commonly used positioning strategy in most Mobile Laser Scanning (MLS) systems for accurate forest mapping. Coupled with a tactical or consumer grade IMU, GNSS offers a satisfactory solution in open forest environments, for which positioning accuracy better than one decimeter can be achieved. However, for such MLS systems, positioning in a mature and dense forest is still a challenging task because of the loss of GNSS signals attenuated by thick canopy. Most often laser scanning sensors in MLS systems are used for mapping and modelling rather than positioning. In this paper, we investigate a Simultaneous Localization and Mapping (SLAM)-aided positioning solution with point clouds collected by a small-footprint LiDAR. Based on the field test data, we evaluate the potential of SLAM positioning and mapping in forest inventories. The results show that the positioning accuracy in the selected test field is improved by 38% compared to that of a traditional tactical grade IMU + GNSS positioning system in a mature forest environment and, as a result, we are able to produce an unambiguous tree distribution map.

  19. Application of real-time single camera SLAM technology for image-guided targeting in neurosurgery

    Science.gov (United States)

    Chang, Yau-Zen; Hou, Jung-Fu; Tsao, Yi Hsiang; Lee, Shih-Tseng

    2012-10-01

In this paper, we propose an application of augmented reality technology for targeting tumors or anatomical structures inside the skull. The application is a combination of the technologies of MonoSLAM (Single Camera Simultaneous Localization and Mapping) and computer graphics. A stereo vision system is developed to construct geometric data of the human face for registration with CT images. Reliability and accuracy of the application are enhanced by the use of fiduciary markers fixed to the skull. The MonoSLAM keeps track of the current location of the camera with respect to an augmented reality (AR) marker using the extended Kalman filter. The fiduciary markers provide reference when the AR marker is invisible to the camera. The relationship between the markers on the face and the augmented reality marker is obtained by a registration procedure by the stereo vision system and is updated on-line. A commercially available Android-based tablet PC equipped with a 320×240 front-facing camera was used for implementation. The system is able to provide a live view of the patient overlaid by the solid models of tumors or anatomical structures, as well as the missing part of the tool inside the skull.

  20. Combining Hector SLAM and Artificial Potential Field for Autonomous Navigation Inside a Greenhouse

    Directory of Open Access Journals (Sweden)

    El Houssein Chouaib Harik

    2018-05-01

Full Text Available The key factor for autonomous navigation is efficient perception of the surroundings, while being able to move safely from an initial to a final point. We deal in this paper with a wheeled mobile robot working in a GPS-denied environment typical for a greenhouse. The Hector Simultaneous Localization and Mapping (SLAM) approach is used in order to estimate the robot's pose using a LIght Detection And Ranging (LIDAR) sensor. Waypoint following and obstacle avoidance are ensured by means of a new artificial potential field (APF) controller presented in this paper. The combination of the Hector SLAM and the APF controller allows the mobile robot to perform periodic tasks that require autonomous navigation between predefined waypoints. It also provides the mobile robot with robustness to changing conditions that may occur inside the greenhouse, caused by the dynamics of plant development through the season. In this study, we show that the robot is safe to operate autonomously with a human presence, and that, in contrast to classical odometry methods, no calibration is needed for repositioning the robot over repetitive runs. We include here both hardware and software descriptions, as well as simulation and experimental results.
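The artificial potential field idea combines an attractive pull toward the current waypoint with repulsive pushes from nearby obstacles. The sketch below is a classic APF velocity command under assumed gains, not the paper's new controller:

```python
import math

def apf_velocity(robot, goal, obstacles,
                 k_att=1.0, k_rep=0.5, influence=1.5):
    """Classic artificial potential field velocity command.

    robot, goal : (x, y) positions
    obstacles   : list of (x, y) obstacle points, e.g. from a LIDAR scan
    Returns the commanded velocity vector (vx, vy).
    """
    # Attractive term: linear pull toward the goal
    vx = k_att * (goal[0] - robot[0])
    vy = k_att * (goal[1] - robot[1])
    # Repulsive term: push away from obstacles inside the influence radius
    for ox, oy in obstacles:
        dx, dy = robot[0] - ox, robot[1] - oy
        d = math.hypot(dx, dy)
        if 1e-9 < d < influence:
            gain = k_rep * (1.0 / d - 1.0 / influence) / d ** 2
            vx += gain * dx / d
            vy += gain * dy / d
    return vx, vy
```

Plain APF controllers are prone to local minima between symmetric obstacles; combining them with a SLAM-derived map and waypoint sequencing, as the paper does, is one way to mitigate that.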

  1. a Preliminary Work on Layout Slam for Reconstruction of Indoor Corridor Environments

    Science.gov (United States)

    Baligh Jahromi, A.; Sohn, G.; Shahbazi, M.; Kang, J.

    2017-09-01

We propose a real time indoor corridor layout estimation method based on visual Simultaneous Localization and Mapping (SLAM). The proposed method adopts the Manhattan World Assumption at indoor spaces and uses the detected single image straight line segments and their corresponding orthogonal vanishing points to improve the feature matching scheme in the adopted visual SLAM system. Using the proposed real time indoor corridor layout estimation method, the system is able to build an online sparse map of structural corner point features. The challenges presented by abrupt camera rotation in the 3D space are successfully handled through matching vanishing directions of consecutive video frames on the Gaussian sphere. Using the single-image-based indoor layout features for initializing the system permitted the proposed method to perform real time layout estimation and camera localization in indoor corridor areas. For layout structural corner point matching, we adopted features which are invariant under scale, translation, and rotation. We proposed a new feature matching cost function which considers both local and global context information. The cost function consists of a unary term, which measures pixel to pixel orientation differences of the matched corners, and a binary term, which measures the amount of angle differences between directly connected layout corner features. We have performed the experiments on real scenes at York University campus buildings and the available RAWSEEDS dataset. The results show that the proposed method performs robustly, producing very limited position and orientation errors.

  2. SLAM: a fast high volume additive manufacturing concept by impact welding; application to Ti6Al4V alloy

    NARCIS (Netherlands)

    Wentzel, C.M.; Carton, E.P.; Kloosterman, A.

    2006-01-01

Against the manufacturing requirement for both lower lead time and reduced machining time for titanium components, a new concept was conceived for assembling sheet material and other stock into semi-finished parts by (explosive) impact welding. It is believed that this concept (which we named SLAM)

  3. Map generation in unknown environments by AUKF-SLAM using line segment-type and point-type landmarks

    Science.gov (United States)

    Nishihta, Sho; Maeyama, Shoichi; Watanebe, Keigo

    2018-02-01

Recently, autonomous mobile robots that collect information at disaster sites are being developed. Since it is difficult to obtain maps in advance at disaster sites, robots capable of autonomous movement in unknown environments are required. For this objective, the robots have to build maps as well as estimate their self-location. This is called the SLAM problem. In particular, AUKF-SLAM, which uses corners in the environment as point-type landmarks, has been developed as a solution method so far. However, when the robots move in an environment like a corridor consisting of few point-type features, the accuracy of the self-location estimated by the landmarks decreases and causes distortions in the map. In this research, we propose an AUKF-SLAM which uses walls in the environment as line segment-type landmarks. We demonstrate that the robot can generate maps in unknown environments by AUKF-SLAM using both line segment-type and point-type landmarks.

  4. Vision-Based SLAM System for Unmanned Aerial Vehicles

    Directory of Open Access Journals (Sweden)

    Rodrigo Munguía

    2016-03-01

Full Text Available The present paper describes a vision-based simultaneous localization and mapping system to be applied to Unmanned Aerial Vehicles (UAVs). The main contribution of this work is to propose a novel estimator relying on an Extended Kalman Filter. The estimator is designed in order to fuse the measurements obtained from: (i) an orientation sensor (AHRS); (ii) a position sensor (GPS); and (iii) a monocular camera. The estimated state consists of the full state of the vehicle: position and orientation and their first derivatives, as well as the location of the landmarks observed by the camera. The position sensor will be used only during the initialization period in order to recover the metric scale of the world. Afterwards, the estimated map of landmarks will be used to perform a fully vision-based navigation when the position sensor is not available. Experimental results obtained with simulations and real data show the benefits of the inclusion of camera measurements into the system. In this sense the estimation of the trajectory of the vehicle is considerably improved, compared with the estimates obtained using only the measurements from the position sensor, which are commonly low-rated and highly noisy.

  5. Vision-Based SLAM System for Unmanned Aerial Vehicles.

    Science.gov (United States)

    Munguía, Rodrigo; Urzua, Sarquis; Bolea, Yolanda; Grau, Antoni

    2016-03-15

    The present paper describes a vision-based simultaneous localization and mapping system to be applied to Unmanned Aerial Vehicles (UAVs). The main contribution of this work is to propose a novel estimator relying on an Extended Kalman Filter. The estimator is designed in order to fuse the measurements obtained from: (i) an orientation sensor (AHRS); (ii) a position sensor (GPS); and (iii) a monocular camera. The estimated state consists of the full state of the vehicle: position and orientation and their first derivatives, as well as the location of the landmarks observed by the camera. The position sensor will be used only during the initialization period in order to recover the metric scale of the world. Afterwards, the estimated map of landmarks will be used to perform a fully vision-based navigation when the position sensor is not available. Experimental results obtained with simulations and real data show the benefits of the inclusion of camera measurements into the system. In this sense the estimation of the trajectory of the vehicle is considerably improved, compared with the estimates obtained using only the measurements from the position sensor, which are commonly low-rated and highly noisy.
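The Extended Kalman Filter at the core of the estimator follows the usual predict/update cycle. The sketch below is a generic EKF with a 1-D constant-velocity demo standing in for the position-sensor channel; it is not the paper's full state model:

```python
import numpy as np

def ekf_predict(x, P, F, Q):
    """EKF time update for a (locally) linear motion model."""
    x = F @ x
    P = F @ P @ F.T + Q
    return x, P

def ekf_update(x, P, z, h, H, R):
    """EKF measurement update; h is the measurement function, H its Jacobian."""
    y = z - h(x)                           # innovation
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x = x + K @ y
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# Demo: constant-velocity target observed through position-only
# measurements (a stand-in for one channel of the fusion problem).
dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])
Q = np.eye(2) * 1e-4
H = np.array([[1.0, 0.0]])
R = np.array([[0.01]])
x, P = np.array([0.0, 0.0]), np.eye(2) * 10.0
for k in range(1, 21):
    x, P = ekf_predict(x, P, F, Q)
    z = np.array([float(k)])               # noiseless position at time k
    x, P = ekf_update(x, P, z, lambda s: H @ s, H, R)
```

In the paper's estimator the state additionally carries orientation and landmark positions, and the camera measurement function is nonlinear, which is where the Jacobian H earns its keep.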

  6. Final Report- "An Algorithmic and Software Framework for Applied Partial Differential Equations (APDEC): A DOE SciDAC Integrated Software Infrastructure Center (ISIC)

    Energy Technology Data Exchange (ETDEWEB)

    Elbridge Gerry Puckett

    2008-05-13

All of the work conducted under the auspices of DE-FC02-01ER25473 was characterized by exceptionally close collaboration with researchers at the Lawrence Berkeley National Laboratory (LBNL). This included having one of my graduate students - Sarah Williams - spend the summer working with Dr. Ann Almgren, a staff scientist in the Center for Computational Sciences and Engineering (CCSE), which is a part of the National Energy Research Supercomputer Center (NERSC) at LBNL. As a result of this visit Sarah decided to work on a problem suggested by Dr. John Bell, the head of CCSE, for her PhD thesis, which she finished in June 2007. Writing a PhD thesis while working at one of the University of California (UC) managed DOE laboratories is a long established tradition at the University of California and I have always encouraged my students to consider doing this. For example, in 2000 one of my graduate students - Matthew Williams - finished his PhD thesis while working with Dr. Douglas Kothe at the Los Alamos National Laboratory (LANL). Matt is now a staff scientist in the Diagnostic Applications Group in the Applied Physics Division at LANL. Another one of my graduate students - Christopher Algieri - who was partially supported with funds from DE-FC02-01ER25473, wrote an MS thesis that analyzed and extended work published by Dr. Phil Colella and his colleagues in 1998. Dr. Colella is the head of the Applied Numerical Algorithms Group (ANAG) in the National Energy Research Supercomputer Center at LBNL and is the lead PI for the APDEC ISIC, which was comprised of several National Laboratory research groups and at least five university PIs at five different universities. Chris Algieri is now employed as a staff member in Dr. Bill Collins' research group at LBNL developing computational models for climate change research. Bill Collins was recently hired at LBNL to start and be the Head of the Climate Science Department in the Earth Sciences Division at LBNL. Prior to

  7. SLAM- and nectin-4-independent noncytolytic spread of canine distemper virus in astrocytes.

    Science.gov (United States)

    Alves, Lisa; Khosravi, Mojtaba; Avila, Mislay; Ader-Ebert, Nadine; Bringolf, Fanny; Zurbriggen, Andreas; Vandevelde, Marc; Plattet, Philippe

    2015-05-01

Measles and canine distemper viruses (MeV and CDV, respectively) first replicate in lymphatic and epithelial tissues by using SLAM and nectin-4 as entry receptors, respectively. The viruses may also invade the brain to establish persistent infections, triggering fatal complications, such as subacute sclerosing panencephalitis (SSPE) in MeV infection or chronic, multiple sclerosis-like, multifocal demyelinating lesions in the case of CDV infection. In both diseases, persistence is mediated by viral nucleocapsids that do not require packaging into particles for infectivity but are directly transmitted from cell to cell (neurons in SSPE or astrocytes in distemper encephalitis), presumably by relying on restricted microfusion events. Indeed, although morphological evidence of fusion remained undetectable, viral fusion machineries and, thus, a putative cellular receptor, were shown to contribute to persistent infections. Here, we first showed that nectin-4-dependent cell-cell fusion in Vero cells, triggered by a demyelinating CDV strain, remained extremely limited, thereby supporting a potential role of nectin-4 in mediating persistent infections in astrocytes. However, nectin-4 could not be detected in either primary cultured astrocytes or the white matter of tissue sections. In addition, a bioengineered "nectin-4-blind" recombinant CDV retained full cell-to-cell transmission efficacy in primary astrocytes. Combined with our previous report demonstrating the absence of SLAM expression in astrocytes, these findings are suggestive of the existence of a hitherto unrecognized third CDV receptor expressed by glial cells that contributes to the induction of noncytolytic cell-to-cell viral transmission in astrocytes. While persistent measles virus (MeV) infection induces SSPE in humans, persistent canine distemper virus (CDV) infection causes chronic progressive or relapsing demyelination in carnivores. Common to both central nervous system (CNS) infections is that

  8. Using SLAM to Look For the Dog Valley Fault, Truckee Area, California

    Science.gov (United States)

    Cronin, V. S.; Ashburn, J. A.; Sverdrup, K. A.

    2014-12-01

    The Truckee earthquake (9/12/1966, ML6.0) was a left-lateral event on a previously unrecognized NW-trending fault. The Prosser Creek and Boca Dams sustained damage, and the trace of the suspected causative fault passes near or through the site of the then-incomplete Stampede Dam. Another M6 earthquake occurred along the same general trend in 1948 with an epicenter in Dog Valley ~14 km to the NW of the 1966 epicenter. This trend is called the Dog Valley Fault (DVF), and its location on the ground surface is suggested by a prominent but broad zone of geomorphic lineaments near the cloud of aftershock epicenters determined for the 1966 event. Various ground effects of the 1966 event described by Kachadoorian et al. (1967) were located within this broad zone. The upper shoreface of reservoirs in the Truckee-Prosser-Martis basin are now exposed due to persistent drought. We have examined fault strands in a roadcut and exposed upper shoreface adjacent to the NE abutment of Stampede Dam. These are interpreted to be small-displacement splays associated with the DVF -- perhaps elements of the DVF damage zone. We have used the Seismo-Lineament Analysis Method (SLAM) to help us constrain the location of the DVF, based on earthquake focal mechanisms. Seismo-lineaments were computed, using recent revisions in the SLAM code (bearspace.baylor.edu/Vince_Cronin/www/SLAM/), for the 1966 main earthquake and for the better-recorded earthquakes of 7/3/1983 (M4) and 8/30/1992 (M3.2) that are inferred to have occurred along the DVF. Associated geomorphic analysis and some field reconnaissance identified a trend that might be associated with a fault, extending from the NW end of Prosser Creek Reservoir ~32° toward the Stampede Dam area. 
Triangle-strain analysis using horizontal velocities of local Plate Boundary Observatory GPS sites P146, P149, P150 and SLID indicates that the area rotates clockwise ~1-2°/Myr relative to the stable craton, as might be expected because the study area is

  9. Using a Novel Evolutionary Algorithm to More Effectively Apply Community-Driven EcoHealth Interventions in Big Data with Application to Chagas Disease

    Science.gov (United States)

    Rizzo, D. M.; Hanley, J.; Monroy, C.; Rodas, A.; Stevens, L.; Dorn, P.

    2016-12-01

algorithm to efficiently search for higher order interactions in a T. dimidiata infestation dataset that contains 1,132 houses and 61 risk factors (both nominal and ordinal), with 16% of the data missing. Our goal is to determine the risk factors that are most commonly associated with infestation to more efficiently apply EcoHealth interventions.

  10. Check valve slam caused by air intrusion in emergency cooling water system

    International Nuclear Information System (INIS)

    Martin, C.S.

    2011-01-01

Waterhammer pressures were experienced during periodic starting of Residual Heat Removal (RHR) pumps at a nuclear plant. Prior to an analytical investigation, careful analysis performed by plant engineers indicated that the spring effect of entrapped air in a heat exchanger resulted in waterhammer due to check valve slam following flow reversal. In order to determine in more detail the values of the pertinent parameters controlling this waterhammer, a hydraulic transient analysis was performed of the RHR piping system, including essential elements such as the pump, check valve, and heat exchanger. Using characteristic torque and pressure loss curves, the motion of the check valve was determined. By comparing output of the waterhammer analysis with site recordings of pump discharge pressure, the computer model was calibrated, allowing for a realistic estimate of the quantity of entrapped air in the heat exchanger. (author)
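The abstract does not give the surge formula, but a standard first-cut estimate for the pressure rise from rapid check valve closure is the Joukowsky relation, sketched below with illustrative values (not the plant's actual parameters):

```python
def joukowsky_surge(rho, wave_speed, delta_v):
    """First-cut waterhammer pressure rise for rapid valve closure:
    dp = rho * a * dv  (Joukowsky relation).

    rho        : fluid density, kg/m^3
    wave_speed : pressure-wave speed in the pipe, m/s
    delta_v    : arrested change in flow velocity, m/s
    Returns the pressure rise in Pa.
    """
    return rho * wave_speed * delta_v

# Example: water (1000 kg/m^3), wave speed 1200 m/s,
# 2 m/s of reverse flow arrested at valve slam
dp = joukowsky_surge(1000.0, 1200.0, 2.0)
```

A full transient analysis like the one described, with pump, valve, and entrapped-air dynamics, will generally predict surges below this instantaneous-closure bound.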

  11. Slam Dunk

    Science.gov (United States)

    Herek, Matthew

    2011-01-01

    There's nothing like a worldwide financial meltdown to kick-start an alumni association's career networking offerings. In 2009, the Northwestern University alumni board provided clear direction to its regional affiliates and to the full-time staff working at the Evanston, Illinois, campus: Develop ways to purposefully connect alumni with each…

  12. Can the same edge-detection algorithm be applied to on-line and off-line analysis systems? Validation of a new cinefilm-based geometric coronary measurement software

    NARCIS (Netherlands)

    J. Haase (Jürgen); C. di Mario (Carlo); P.W.J.C. Serruys (Patrick); M.M.J.M. van der Linden (Mark); D.P. Foley (David); W.J. van der Giessen (Wim)

    1993-01-01

    textabstractIn the Cardiovascular Measurement System (CMS) the edge-detection algorithm, which was primarily designed for the Philips digital cardiac imaging system (DCI), is applied to cinefilms. Comparative validation of CMS and DCI was performed in vitro and in vivo with intracoronary insertion

  13. Line Segmentation of 2d Laser Scanner Point Clouds for Indoor Slam Based on a Range of Residuals

    Science.gov (United States)

    Peter, M.; Jafri, S. R. U. N.; Vosselman, G.

    2017-09-01

    Indoor mobile laser scanning (IMLS) based on the Simultaneous Localization and Mapping (SLAM) principle proves to be the preferred method to acquire data of indoor environments at a large scale. In previous work, we proposed a backpack IMLS system containing three 2D laser scanners and an according SLAM approach. The feature-based SLAM approach solves all six degrees of freedom simultaneously and builds on the association of lines to planes. Because of the iterative character of the SLAM process, the quality and reliability of the segmentation of linear segments in the scanlines plays a crucial role in the quality of the derived poses and consequently the point clouds. The orientations of the lines resulting from the segmentation can be influenced negatively by narrow objects which are nearly coplanar with walls (like e.g. doors) which will cause the line to be tilted if those objects are not detected as separate segments. State-of-the-art methods from the robotics domain like Iterative End Point Fit and Line Tracking were found to not handle such situations well. Thus, we describe a novel segmentation method based on the comparison of a range of residuals to a range of thresholds. For the definition of the thresholds we employ the fact that the expected value for the average of residuals of n points with respect to the line is σ / √n. Our method, as shown by the experiments and the comparison to other methods, is able to deliver more accurate results than the two approaches it was tested against.
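The σ / √n threshold idea can be sketched as a simple segment-growing routine: extend the current line while the average absolute residual of its n points stays below a multiple of σ / √n. This is a simplified illustration, not the authors' exact range-of-residuals test:

```python
import math

def fit_line(points):
    """Total-least-squares line through points; returns (nx, ny, d) with
    nx*x + ny*y = d and (nx, ny) the unit normal to the line."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    sxx = sum((p[0] - mx) ** 2 for p in points)
    syy = sum((p[1] - my) ** 2 for p in points)
    sxy = sum((p[0] - mx) * (p[1] - my) for p in points)
    theta = 0.5 * math.atan2(2 * sxy, sxx - syy)   # direction of the line
    nx, ny = -math.sin(theta), math.cos(theta)     # unit normal
    return nx, ny, nx * mx + ny * my

def grow_segments(scanline, sigma, factor=3.0, min_pts=3):
    """Split an ordered scanline into straight segments: extend the current
    segment while the average absolute residual of its n points stays below
    factor * sigma / sqrt(n)."""
    segments, start = [], 0
    while start < len(scanline) - 1:
        end = start + min_pts
        while end <= len(scanline):
            pts = scanline[start:end]
            nx, ny, d = fit_line(pts)
            avg_res = sum(abs(nx * x + ny * y - d) for x, y in pts) / len(pts)
            if avg_res > factor * sigma / math.sqrt(len(pts)):
                break
            end += 1
        segments.append(scanline[start:end - 1])
        start = end - 2          # next segment starts at the shared corner point
    return segments
```

Because the threshold shrinks as the segment grows, long segments are held to a tighter average-residual standard, which is what makes narrow near-coplanar objects like doors easier to split off.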

  14. LINE SEGMENTATION OF 2D LASER SCANNER POINT CLOUDS FOR INDOOR SLAM BASED ON A RANGE OF RESIDUALS

    Directory of Open Access Journals (Sweden)

    M. Peter

    2017-09-01

    Full Text Available Indoor mobile laser scanning (IMLS) based on the Simultaneous Localization and Mapping (SLAM) principle proves to be the preferred method to acquire data of indoor environments at a large scale. In previous work, we proposed a backpack IMLS system containing three 2D laser scanners and a corresponding SLAM approach. The feature-based SLAM approach solves all six degrees of freedom simultaneously and builds on the association of lines to planes. Because of the iterative character of the SLAM process, the quality and reliability of the segmentation of linear segments in the scanlines plays a crucial role in the quality of the derived poses and consequently the point clouds. The orientations of the lines resulting from the segmentation can be influenced negatively by narrow objects which are nearly coplanar with walls (e.g., doors), causing the line to be tilted if those objects are not detected as separate segments. State-of-the-art methods from the robotics domain like Iterative End Point Fit and Line Tracking were found not to handle such situations well. Thus, we describe a novel segmentation method based on the comparison of a range of residuals to a range of thresholds. For the definition of the thresholds we employ the fact that the expected value for the average of the residuals of n points with respect to the line is σ / √n. Our method, as shown by the experiments and the comparison to other methods, is able to deliver more accurate results than the two approaches it was tested against.

  15. A negative search of acute canine distemper virus infection in DogSLAM transgenic C57BL/6 mice

    Directory of Open Access Journals (Sweden)

    Somporn Techangamsuwan

    2010-12-01

    Full Text Available Canine distemper is a highly contagious and immunosuppressive viral disease caused by canine distemper virus (CDV), an enveloped RNA virus of the family Paramyxoviridae. The susceptible host spectrum of CDV is broad and includes all families of the order Carnivora. To accomplish the infection, CDV requires expression of signaling lymphocyte activation molecule (SLAM) functioning as a cellular receptor, which is generally present in a variety of different lymphoid cell subpopulations, including immature thymocytes, primary B cells, activated T cells, memory T cells, macrophages and mature dendritic cells. The distribution of SLAM-presenting cells is in accordance with the lymphotropism and immunosuppression following morbillivirus infection. In the present study, C57BL/6 mice engrafted with the dog-specific SLAM sequence (DogSLAM) were used. The weanling (3-week-old) transgenic offspring C57BL/6 mice were infected with the CDV Snyder Hill (CDV-SH) strain via the intranasal (n=6), intracerebral (n=6) and intraperitoneal (n=5) routes. Clinical signs, hematology, histopathology, immunohistochemistry, virus isolation and RT-PCR were observed for two weeks post infection. Results showed that CDV-SH-inoculated transgenic mice displayed mild-to-moderate congestion of various organs (brain, lung, spleen, kidney, lymph node, and adrenal gland). By means of immunohistochemistry, virus isolation and RT-PCR, CDV could not be detected. Evidence of CDV infection could thus not be demonstrated in the acute phase. Whether the transgenic mouse is simply not a suitable animal model for CDV, or whether a longer incubation period is a prerequisite, needs to be clarified in a future study.

  16. Improved Data Reduction Algorithm for the Needle Probe Method Applied to In-Situ Thermal Conductivity Measurements of Lunar and Planetary Regoliths

    Science.gov (United States)

    Nagihara, S.; Hedlund, M.; Zacny, K.; Taylor, P. T.

    2013-01-01

    The needle probe method (also known as the 'hot wire' or 'line heat source' method) is widely used for in-situ thermal conductivity measurements on soils and marine sediments on the earth. Variants of this method have also been used (or planned) for measuring regolith on the surfaces of extra-terrestrial bodies (e.g., the Moon, Mars, and comets). In the near-vacuum conditions on lunar and planetary surfaces, the measurement method used on the earth cannot simply be duplicated, because the thermal conductivity of the regolith can be approximately 2 orders of magnitude lower. In addition, the planetary probes have much greater diameters, due to engineering requirements associated with robotic deployment on extra-terrestrial bodies. All of these factors mean that the planetary probes require much longer measurement times, several tens of (if not over a hundred) hours, whereas a conventional terrestrial needle probe needs only 1 to 2 minutes. The long measurement time complicates the surface operation logistics of the lander. It also negatively affects the accuracy of the thermal conductivity measurement, because the cumulative heat loss along the probe is no longer negligible. The present study improves the data reduction algorithm of the needle probe method, shortening the measurement time on planetary surfaces by an order of magnitude. The main difference between the new scheme and the conventional one is that the former uses the exact mathematical solution to the thermal model on which the needle probe measurement theory is based, while the latter uses an approximate solution that is valid only at large times. The present study demonstrates the benefit of the new data reduction technique by applying it to data from a series of needle probe experiments carried out in a vacuum chamber on JSC-1A lunar regolith simulant.
The use of the exact solution has some disadvantage, however, in requiring three additional parameters, but two of them (the diameter and the
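
The contrast between the exact line-source solution and the conventional large-time approximation can be illustrated numerically. The sketch below uses illustrative parameters only (roughly regolith-like numbers, not the paper's data, and ignoring the probe's finite diameter and contact resistance): synthetic temperature-rise data are generated from the exact solution, and conductivity is then recovered with the conventional log-slope scheme, which only becomes accurate once the logarithmic asymptote holds.

```python
import math

EULER_GAMMA = 0.5772156649015329

def exp_integral_e1(x, terms=60):
    """E1(x) via its convergent series; accurate for the small arguments
    that correspond to large measurement times."""
    total = -EULER_GAMMA - math.log(x)
    for n in range(1, terms + 1):
        total += (-1) ** (n + 1) * x ** n / (n * math.factorial(n))
    return total

def temp_rise(t, q, k, alpha, r):
    """Exact infinite line-source solution the method is built on:
    dT(r, t) = q / (4*pi*k) * E1(r^2 / (4*alpha*t))."""
    return q / (4 * math.pi * k) * exp_integral_e1(r * r / (4 * alpha * t))

def k_from_log_slope(times, temps, q):
    """Conventional large-time scheme: dT ~ (q / (4*pi*k)) * ln(t) + const,
    so k = q / (4*pi*slope), with the slope fitted against ln(t)."""
    x = [math.log(t) for t in times]
    xb = sum(x) / len(x)
    yb = sum(temps) / len(temps)
    slope = (sum((xi - xb) * (yi - yb) for xi, yi in zip(x, temps))
             / sum((xi - xb) ** 2 for xi in x))
    return q / (4 * math.pi * slope)

# Illustrative numbers only (roughly regolith-like, not the paper's data):
q, k_true, alpha, r = 2.0, 0.01, 1e-8, 0.002   # W/m, W/(m K), m^2/s, m
times = [3600.0 * h for h in range(1, 11)]     # 1-10 h of heating
temps = [temp_rise(t, q, k_true, alpha, r) for t in times]
print(k_from_log_slope(times, temps, q))       # close to k_true = 0.01
```

Fitting the exact E1 expression directly instead of the log-slope would, as the abstract argues, give the same answer from a much shorter record, at the cost of additional parameters.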

  17. Deciphering complex dynamics of water counteraction around secondary structural elements of allosteric protein complex: Case study of SAP-SLAM system in signal transduction cascade.

    Science.gov (United States)

    Samanta, Sudipta; Mukherjee, Sanchita

    2018-01-28

    The first hydration shell of a protein exhibits heterogeneous behavior owing to several attributes, chiefly local polarity and structural flexibility as revealed by solvation dynamics of secondary structural elements. We attempt to recognize the change in complex water counteraction generated due to substantial alteration in flexibility during protein complex formation. The investigation is carried out with the signaling lymphocytic activation molecule (SLAM) family of receptors, expressed by an array of immune cells, and interacting with SLAM-associated protein (SAP), composed of one SH2 domain. All-atom molecular dynamics simulations are applied to aqueous solutions of free SAP and SLAM-peptide-bound SAP. We observed that water dynamics around different secondary structural elements became highly affected as well as nicely correlated with the SLAM-peptide induced change in structural rigidity obtained by thermodynamic quantification. A few instances of dynamic features of water that contradict the change in structural flexibility are explained by means of polar residues occluded by the peptide. For βD, EFloop, and BGloop, both structural flexibility and solvent accessibility of the residues confirm the obvious contribution. Most importantly, we have quantified enhanced restriction in water dynamics around the second Fyn-binding site of the SAP due to SAP-SLAM complexation, even prior to the presence of Fyn. This observation leads to a novel argument: the more restricted water molecules induced by SLAM could offer a larger water entropic contribution during the subsequent Fyn binding and provide enhanced stability to the SAP-Fyn complex in the signaling cascade. Finally, SLAM-induced water counteraction around the second binding site of the SAP sheds light on the allosteric property of the SAP, which becomes an integral part of the underlying signal transduction mechanism.

  18. Deciphering complex dynamics of water counteraction around secondary structural elements of allosteric protein complex: Case study of SAP-SLAM system in signal transduction cascade

    Science.gov (United States)

    Samanta, Sudipta; Mukherjee, Sanchita

    2018-01-01

    The first hydration shell of a protein exhibits heterogeneous behavior owing to several attributes, chiefly local polarity and structural flexibility as revealed by solvation dynamics of secondary structural elements. We attempt to recognize the change in complex water counteraction generated due to substantial alteration in flexibility during protein complex formation. The investigation is carried out with the signaling lymphocytic activation molecule (SLAM) family of receptors, expressed by an array of immune cells, and interacting with SLAM-associated protein (SAP), composed of one SH2 domain. All-atom molecular dynamics simulations are applied to aqueous solutions of free SAP and SLAM-peptide-bound SAP. We observed that water dynamics around different secondary structural elements became highly affected as well as nicely correlated with the SLAM-peptide induced change in structural rigidity obtained by thermodynamic quantification. A few instances of dynamic features of water that contradict the change in structural flexibility are explained by means of polar residues occluded by the peptide. For βD, EFloop, and BGloop, both structural flexibility and solvent accessibility of the residues confirm the obvious contribution. Most importantly, we have quantified enhanced restriction in water dynamics around the second Fyn-binding site of the SAP due to SAP-SLAM complexation, even prior to the presence of Fyn. This observation leads to a novel argument: the more restricted water molecules induced by SLAM could offer a larger water entropic contribution during the subsequent Fyn binding and provide enhanced stability to the SAP-Fyn complex in the signaling cascade. Finally, SLAM-induced water counteraction around the second binding site of the SAP sheds light on the allosteric property of the SAP, which becomes an integral part of the underlying signal transduction mechanism.

  19. Detection of boiling by Piety's on-line PSD-pattern recognition algorithm applied to neutron noise signals in the SAPHIR reactor

    International Nuclear Information System (INIS)

    Spiekerman, G.

    1988-09-01

    A partial blockage of the cooling channels of a fuel element in a swimming pool reactor could lead to vapour generation and to burn-out. To detect such anomalies, a pattern recognition algorithm based on power spectral density (PSD) proposed by Piety was further developed and implemented on a PDP 11/23 for on-line applications. This algorithm identifies anomalies by measuring the PSD of the process signal and comparing it with a standard baseline previously formed. Up to 8 decision discriminants help to recognize spectral changes due to anomalies. In our application, to detect boiling as quickly as possible with sufficient sensitivity, Piety's algorithm was modified using overlapped Fast-Fourier-Transform processing and averaging of the PSDs over a large sample of preceding instantaneous PSDs. This processing allows high sensitivity in detecting weak disturbances without reducing response time. The algorithm was tested with simulated-boiling experiments in which nitrogen was injected into a cooling channel of a mock-up of a fuel element. Void fractions higher than 30 % in the channel can be detected. In the case of actual boiling, this limit is believed to be lower, because collapsing bubbles could give rise to stronger fluctuations. The algorithm was also tested with a boiling experiment in which the reactor coolant flow was actually reduced. The results showed that the discriminant D5 of Piety's algorithm, based on neutron noise obtained from the existing neutron chambers of the reactor control system, could sensitively recognize boiling. The detection time amounts to 7-30 s depending on the strength of the disturbances. Other events which arise during a normal reactor run, such as scrams, removal of isotope elements without scramming, or control rod movements, and which could lead to false alarms, can be distinguished from boiling. 49 refs., 104 figs., 5 tabs
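
The overlapped-FFT PSD averaging described here is essentially Welch's method. The minimal sketch below uses an invented toy discriminant (a stand-in for Piety's D5, which is not specified in the record) and invented signal parameters: a baseline PSD is formed from plain noise, and a signal with an added 30 Hz component, standing in for boiling-induced fluctuations, is flagged against it.

```python
import numpy as np

def running_psd(signal, fs, seg_len=256, overlap=0.5):
    """Averaged PSD from overlapped, Hann-windowed FFT segments
    (Welch's method), mirroring the overlapped-FFT processing with
    averaging over many instantaneous PSDs."""
    step = int(seg_len * (1 - overlap))
    win = np.hanning(seg_len)
    norm = fs * (win ** 2).sum()
    segs = []
    for start in range(0, len(signal) - seg_len + 1, step):
        seg = signal[start:start + seg_len] * win
        segs.append(np.abs(np.fft.rfft(seg)) ** 2 / norm)
    return np.fft.rfftfreq(seg_len, 1.0 / fs), np.mean(segs, axis=0)

def discriminant(psd, baseline_psd):
    """Toy Piety-style discriminant: total relative deviation of the
    current PSD from the baseline (our stand-in, not the actual D5)."""
    return float(np.sum((psd - baseline_psd) / baseline_psd))

rng = np.random.default_rng(0)
fs = 200.0
quiet = rng.normal(size=20000)                   # baseline "neutron noise"
_, base = running_psd(quiet, fs)
t = np.arange(20000) / fs
noisy = rng.normal(size=20000) + 0.5 * np.sin(2 * np.pi * 30.0 * t)
_, cur = running_psd(noisy, fs)
print(discriminant(cur, base) > 0.0)             # True: anomaly flagged
```

Averaging many overlapped segments is what buys the sensitivity to weak disturbances: the variance of each PSD bin shrinks with the number of segments, so a small spectral change stands out without lengthening the response time.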

  20. Design of problem-specific evolutionary algorithm/mixed-integer programming hybrids: two-stage stochastic integer programming applied to chemical batch scheduling

    Science.gov (United States)

    Urselmann, Maren; Emmerich, Michael T. M.; Till, Jochen; Sand, Guido; Engell, Sebastian

    2007-07-01

    Engineering optimization often deals with large, mixed-integer search spaces with a rigid structure due to the presence of a large number of constraints. Metaheuristics, such as evolutionary algorithms (EAs), are frequently suggested as solution algorithms in such cases. In order to exploit the full potential of these algorithms, it is important to choose an adequate representation of the search space and to integrate expert knowledge into the stochastic search operators, without adding unnecessary bias to the search. Moreover, hybridisation with mathematical programming techniques such as mixed-integer programming (MIP) based on a problem decomposition can be considered for improving algorithmic performance. In order to design problem-specific EAs it is desirable to have a set of design guidelines that specify properties of search operators and representations. Recently, a set of guidelines has been proposed that gives rise to so-called Metric-based EAs (MBEAs). Extended by the minimal-moves mutation, they allow for a generalization of EAs with self-adaptive mutation strength in discrete search spaces. In this article, a problem-specific EA for a process engineering task is designed, following the MBEA guidelines and minimal-moves mutation. Against the background of the application, the usefulness of the design framework is discussed, and further extensions and corrections are proposed. As a case study, a two-stage stochastic programming problem in chemical batch process scheduling is considered. The algorithm design problem can be viewed as the choice of a hierarchical decision structure, where on different layers of the decision process symmetries and similarities can be exploited for the design of minimal moves. After a discussion of the design approach and its instantiation for the case study, the resulting problem-specific EA/MIP is compared to a straightforward application of a canonical EA/MIP and to a monolithic mathematical programming algorithm. In view of the

  1. On-patient see-through augmented reality based on visual SLAM.

    Science.gov (United States)

    Mahmoud, Nader; Grasa, Óscar G; Nicolau, Stéphane A; Doignon, Christophe; Soler, Luc; Marescaux, Jacques; Montiel, J M M

    2017-01-01

    An augmented reality system to visualize a 3D preoperative anatomical model on the intra-operative patient is proposed. The only hardware requirement is a commercial tablet-PC equipped with a camera. Thus, neither an external tracking device nor artificial landmarks on the patient are required. We resort to visual SLAM to provide markerless real-time tablet-PC camera localization with respect to the patient. The preoperative model is registered with respect to the patient through 4-6 anchor points. The anchors correspond to anatomical references selected on the tablet-PC screen at the beginning of the procedure. Accurate and real-time preoperative model alignment (approximately 5-mm mean FRE and TRE) was achieved, even when anchors were not visible in the current field of view. The system has been experimentally validated on human volunteers, in vivo pigs and a phantom. The proposed system can be smoothly integrated into the surgical workflow because it: (1) operates in real time, (2) requires minimal additional hardware (only a tablet-PC with a camera), (3) is robust to occlusion, and (4) requires minimal interaction from the medical staff.

  2. Simulation of an integrated age replacement and spare provisioning policy using SLAM

    International Nuclear Information System (INIS)

    Zohrul Kabir, A.B.M.; Farrash, S.H.A.

    1996-01-01

    This paper presents a SLAM simulation model for determining a jointly optimal age replacement and spare part provisioning policy. The policy, referred to as a stocking policy, is formulated by combining an age replacement policy with a continuous-review (s, S) type inventory policy, where s is the stock reorder level and S is the maximum stock level. The optimal values of the decision variables are obtained by minimizing the total cost of replacement and inventory. The simulation procedure outlined in the paper can be used to model any operating situation having either a single item or a number of identical items. Results from a number of case problems specifically constructed by a 5-factor second-order rotatory design are presented, and the effects of different cost elements, item failure characteristics and lead time characteristics are highlighted. For all case problems, optimal (s, S) policies to support the Barlow-Proschan age policy have also been determined. Simulation results clearly indicate that separate optimization of the replacement and spare provisioning policies does not ensure global optimality when the total system cost is to be minimized.
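
SLAM here is the simulation language, but the stocking-policy logic itself can be sketched in any language. The Python Monte-Carlo sketch below is our own deliberately simplified single-item model (illustrative costs, Weibull life and uniform lead-time distributions, at most one outstanding order; none of these are the paper's values): it combines age replacement at T_age with a continuous-review (s, S) spares policy and returns a cost rate that could be scanned over the decision variables.

```python
import random

def cost_rate(T_age, s, S, horizon=2e5, seed=7):
    """Monte-Carlo sketch of a stocking policy for a single item:
    age replacement at T_age combined with a continuous-review (s, S)
    spares policy.  All costs and distributions are illustrative
    assumptions, not the paper's inputs."""
    rng = random.Random(seed)
    c_fail, c_prev = 50.0, 10.0        # replacement cost: failure / planned
    c_hold, c_short = 0.02, 100.0      # holding cost per unit-time, shortage
    t, stock, cost = 0.0, S, 0.0
    order_due = None                   # arrival time of outstanding order
    while t < horizon:
        life = rng.weibullvariate(100.0, 2.0)   # item life (scale, shape)
        dt = min(life, T_age)          # failure or preventive replacement
        cost += (c_fail if life < T_age else c_prev) + c_hold * stock * dt
        t += dt
        if order_due is not None and order_due <= t:
            stock, order_due = S, None # order arrives, stock raised to S
        if stock > 0:
            stock -= 1                 # take a spare for the replacement
        else:
            cost += c_short            # shortage: emergency procurement
        if stock <= s and order_due is None:
            order_due = t + rng.uniform(20.0, 60.0)   # random lead time
    return cost / t

for T_age in (60.0, 80.0, 120.0):      # scan the age-replacement threshold
    print(T_age, round(cost_rate(T_age, s=1, S=3), 4))
```

Scanning (T_age, s, S) jointly, rather than optimizing replacement and inventory separately, is precisely the point the abstract makes about global optimality.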

  3. SLAM-seq defines direct gene-regulatory functions of the BRD4-MYC axis.

    Science.gov (United States)

    Muhar, Matthias; Ebert, Anja; Neumann, Tobias; Umkehrer, Christian; Jude, Julian; Wieshofer, Corinna; Rescheneder, Philipp; Lipp, Jesse J; Herzog, Veronika A; Reichholf, Brian; Cisneros, David A; Hoffmann, Thomas; Schlapansky, Moritz F; Bhat, Pooja; von Haeseler, Arndt; Köcher, Thomas; Obenauf, Anna C; Popow, Johannes; Ameres, Stefan L; Zuber, Johannes

    2018-05-18

    Defining direct targets of transcription factors and regulatory pathways is key to understanding their roles in physiology and disease. We combined SLAM-seq [thiol(SH)-linked alkylation for the metabolic sequencing of RNA], a method for direct quantification of newly synthesized messenger RNAs (mRNAs), with pharmacological and chemical-genetic perturbation in order to define regulatory functions of two transcriptional hubs in cancer, BRD4 and MYC, and to interrogate direct responses to BET bromodomain inhibitors (BETis). We found that BRD4 acts as general coactivator of RNA polymerase II-dependent transcription, which is broadly repressed upon high-dose BETi treatment. At doses triggering selective effects in leukemia, BETis deregulate a small set of hypersensitive targets including MYC. In contrast to BRD4, MYC primarily acts as a selective transcriptional activator controlling metabolic processes such as ribosome biogenesis and de novo purine synthesis. Our study establishes a simple and scalable strategy to identify direct transcriptional targets of any gene or pathway. Copyright © 2018 The Authors, some rights reserved; exclusive licensee American Association for the Advancement of Science. No claim to original U.S. Government Works.

  4. Slamming pressures on the bottom of a free-falling vertical wedge

    Science.gov (United States)

    Ikeda, C. M.; Judge, C. Q.

    2013-11-01

    High-speed planing boats are subjected to repeated impacts due to slamming, which can cause structural damage and injury to passengers. A first step in understanding and predicting the physics of a craft re-entering the water after becoming partially airborne is an experimental vertical drop test of a prismatic wedge (deadrise angle β = 20°; beam B = 300 mm; length L = 600 mm). The acrylic wedge was mounted to a rig allowing it to free-fall into a deep-water tank (5.2 m × 5.2 m × 4.2 m deep) from a range of drop heights. A high-speed camera (1000 fps, resolution of 1920 × 1200 pixels) was mounted above the wedge model to record the wetted surface as the wedge descended below the free surface. The pressure measurements taken with both conventional surface pressure transducers and the pressure mapping system agree within 10% of the peak pressure values (0.7 bar, typical). Supported by the Office of Naval Research.

  5. Numerical Evaluation of Dynamic Response for Flexible Composite Structures under Slamming Impact for Naval Applications

    Science.gov (United States)

    Hassoon, O. H.; Tarfaoui, M.; El Moumen, A.; Benyahia, H.; Nachtane, M.

    2018-06-01

    Deformable composite structures subjected to water-entry impact can exhibit a phenomenon called the hydroelastic effect, which can modify the fluid flow and the estimated hydrodynamic loads compared with a rigid body. This is very important for ship design engineers seeking to predict the global and the local hydrodynamic loads. This paper presents a numerical model to simulate the slamming water impact of flexible composite panels using an explicit finite element method. In order to better describe the hydroelastic influence and the role of mechanical properties, composite panels with different stiffnesses were studied under different impact velocities at a deadrise angle of 10°. In addition, an inertia effect was observed in the early stage of the impact, related to the loading rate. Simulation results indicated that the lower-stiffness panel has a stronger hydroelastic effect, which becomes more important as the deadrise angle decreases and the impact velocity increases. Finally, the simulation results were compared with the experimental data and with analytical approaches for a rigid body to describe the hydroelastic influence.

  6. Study of check valve slamming in a BWR feedwater system following a postulated pipe break

    International Nuclear Information System (INIS)

    Safwat, H.H.; Arastu, A.H.; Norman, A.

    1985-01-01

    This study deals with swing check valve slamming due to a break at relatively short distance from the valve. Under this situation, substantial flashing occurs near the valve, and the results of the study are subject to what is believed to be a conservative simplifying assumption, i.e., the hydrodynamic moment acting on the valve during the transient is represented by the resultant moment due to the pressure differential across the valve. It is believed that vapor voids forming at the valve would actually reduce the disk impact velocities in comparison to those predicted under this simplifying assumption. The technique used to represent a double-ended break through hypothetical valves may have some influence on the results, particularly for long break opening times. The study has yielded good insight into the complex problem. The study has focused on some parameters, and the reader may raise questions on the effects of other parameters. Nevertheless, the present study underlines the complexity facing analysts dealing with this transient using analytical methods. Though some experimental data are available, the authors believe that an experimental study (recognizing the complexity of the experimental setup and instrumentation) would be quite useful. It could provide answers to questions facing analysts dealing with this problem and thus avoid unnecessary conservatisms due to uncertainties in input data.

  7. The particle swarm optimization algorithm applied to nuclear systems surveillance test planning; Otimizacao aplicada ao planejamento de politicas de testes em sistemas nucleares por enxame de particulas

    Energy Technology Data Exchange (ETDEWEB)

    Siqueira, Newton Norat

    2006-12-15

    This work shows a new approach to solving availability maximization problems in electromechanical systems under periodic preventive scheduled tests. This approach uses an optimization tool called Particle Swarm Optimization (PSO), developed by Kennedy and Eberhart (2001), integrated with a probabilistic safety analysis model. Two maintenance optimization problems are solved by the proposed technique; the first is a hypothetical electromechanical configuration and the second is a real case from a nuclear power plant (Emergency Diesel Generators). For both problems, PSO was compared to a genetic algorithm (GA). In the experiments performed, PSO was able to obtain results comparable to, or even slightly better than, those obtained by GA. Moreover, the PSO algorithm is simpler and its convergence is faster, indicating that PSO is a good alternative for solving such kinds of problems. (author)
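
A minimal gbest PSO, applied to a toy periodic-test unavailability model (a one-component stand-in with invented rates, not the thesis's PSA model), illustrates the kind of search described. The test interval T trades wear-related unavailability (growing with T) against test-outage time (shrinking with T):

```python
import random

def unavailability(T):
    """Toy one-component model: average unavailability under periodic
    tests every T hours, lambda*T/2 (undetected failures) plus tau/T
    (test outage).  The numbers are illustrative assumptions."""
    lam, tau = 1e-4, 2.0               # failure rate (1/h), test duration (h)
    return lam * T / 2 + tau / T

def pso(f, lo, hi, n=20, iters=100, seed=3):
    """Minimal global-best PSO (Kennedy & Eberhart) in one dimension."""
    rng = random.Random(seed)
    x = [rng.uniform(lo, hi) for _ in range(n)]
    v = [0.0] * n
    pbest, pval = x[:], [f(xi) for xi in x]
    g = min(range(n), key=lambda i: pval[i])
    gbest, gval = pbest[g], pval[g]
    for _ in range(iters):
        for i in range(n):
            v[i] = (0.72 * v[i]                      # inertia / constriction
                    + 1.49 * rng.random() * (pbest[i] - x[i])   # cognitive
                    + 1.49 * rng.random() * (gbest - x[i]))     # social
            x[i] = min(hi, max(lo, x[i] + v[i]))     # clamp to bounds
            fx = f(x[i])
            if fx < pval[i]:
                pbest[i], pval[i] = x[i], fx
                if fx < gval:
                    gbest, gval = x[i], fx
    return gbest, gval

T_opt, u = pso(unavailability, 10.0, 2000.0)
print(round(T_opt))   # analytic optimum: T* = sqrt(2*tau/lam) = 200 h
```

The toy objective has a closed-form optimum, which makes it easy to check that the swarm converges; a real surveillance-test problem would replace `unavailability` with the full PSA model.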

  8. Genetic Algorithm with Maximum-Minimum Crossover (GA-MMC) Applied in Optimization of Radiation Pattern Control of Phased-Array Radars for Rocket Tracking Systems

    Science.gov (United States)

    Silva, Leonardo W. T.; Barros, Vitor F.; Silva, Sandro G.

    2014-01-01

    In launching operations, Rocket Tracking Systems (RTS) process the trajectory data obtained by radar sensors. In order to improve functionality and maintenance, radars can be upgraded by replacing parabolic reflector (PR) antennas with phased arrays (PAs). These arrays enable electronic control of the radiation pattern by adjusting the signal supplied to each radiating element. However, in projects of phased array radars (PARs), the modeling of the problem involves various combinations of excitation signals, producing a complex optimization problem. In this case, it is possible to calculate the problem solutions with optimization methods such as genetic algorithms (GAs). For this, the Genetic Algorithm with Maximum-Minimum Crossover (GA-MMC) method was developed to control the radiation pattern of PAs. The GA-MMC uses a reconfigurable algorithm with multiple objectives, differentiated coding and a new crossover genetic operator. This operator takes a different approach from the conventional one, because it performs the crossover of the fittest individuals with the least fit individuals in order to enhance the genetic diversity. Thus, GA-MMC was successful in more than 90% of the tests for each application, increased the fitness of the final population by more than 20% and reduced premature convergence. PMID:25196013
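
The distinguishing ingredient of GA-MMC, pairing the fittest individuals with the least fit before crossover, can be sketched as follows. This is a toy stand-in with a dummy fitness and plain one-point crossover, not the authors' radiation-pattern objective or differentiated coding:

```python
import random

def max_min_pairs(population, fitness):
    """Pair the fittest with the least fit (the 'maximum-minimum'
    pairing described for GA-MMC) instead of mating fit with fit,
    to enhance genetic diversity."""
    order = sorted(range(len(population)), key=lambda i: fitness[i],
                   reverse=True)
    half = len(order) // 2
    return [(order[i], order[-1 - i]) for i in range(half)]

def crossover(a, b, rng):
    """Plain one-point crossover on real-coded excitation vectors; the
    diversity comes from the pairing, not from the operator itself."""
    cut = rng.randrange(1, len(a))
    return a[:cut] + b[cut:], b[:cut] + a[cut:]

rng = random.Random(0)
pop = [[rng.uniform(0, 360) for _ in range(6)] for _ in range(8)]  # phases
fit = [sum(ind) for ind in pop]        # dummy stand-in fitness
pairs = max_min_pairs(pop, fit)
for i, j in pairs:
    child_a, child_b = crossover(pop[i], pop[j], rng)
print(pairs[0])   # (index of the fittest, index of the least fit)
```

Mating opposite ends of the fitness ranking keeps low-fitness genetic material in circulation, which is one common way to slow the premature convergence the abstract mentions.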

  9. Mono and multi-objective optimization techniques applied to a large range of industrial test cases using Metamodel assisted Evolutionary Algorithms

    Science.gov (United States)

    Fourment, Lionel; Ducloux, Richard; Marie, Stéphane; Ejday, Mohsen; Monnereau, Dominique; Massé, Thomas; Montmitonnet, Pierre

    2010-06-01

    The use of material processing numerical simulation allows a strategy of trial and error to improve virtual processes without incurring material costs or interrupting production, and can therefore save a lot of money, but it requires user time to analyze the results, adjust the operating conditions and restart the simulation. Automatic optimization is the perfect complement to simulation. Evolutionary Algorithms coupled with metamodelling make it possible to obtain industrially relevant results on a very large range of applications within a few tens of simulations and without any specific knowledge of automatic optimization techniques. Ten industrial partners were selected to cover the different areas of the mechanical forging industry and to provide different examples of forming simulation tools. The large computational time is handled by a metamodel approach, which interpolates the objective function over the entire parameter space from the exact function values at a reduced number of "master points". Two algorithms are used: an evolution strategy combined with a Kriging metamodel, and a genetic algorithm combined with a Meshless Finite Difference Method. The latter approach is extended to multi-objective optimization, in which the set of solutions corresponding to the best possible compromises between the different objectives is computed in the same way. The population-based approach exploits the parallel capabilities of the computer used with high efficiency. An optimization module, fully embedded within the Forge2009 IHM, makes it possible to cover all the defined examples, and the use of new multi-core hardware to compute several simulations at the same time reduces the time needed dramatically. The presented examples

  10. The South London and Maudsley NHS Foundation Trust Biomedical Research Centre (SLAM BRC) case register: development and descriptive data

    Directory of Open Access Journals (Sweden)

    Denis Mike

    2009-08-01

    Full Text Available Abstract Background Case registers have been used extensively in mental health research. Recent developments in electronic medical records, and in computer software to search and analyse these in anonymised format, have the potential to revolutionise this research tool. Methods We describe the development of the South London and Maudsley NHS Foundation Trust (SLAM) Biomedical Research Centre (BRC) Case Register Interactive Search tool (CRIS) which allows research-accessible datasets to be derived from SLAM, the largest provider of secondary mental healthcare in Europe. All clinical data, including free text, are available for analysis in the form of anonymised datasets. Development involved both the building of the system and setting in place the necessary security (with both functional and procedural elements). Results Descriptive data are presented for the Register database as of October 2008. The database at that point included 122,440 cases, 35,396 of whom were receiving active case management under the Care Programme Approach. In terms of gender and ethnicity, the database was reasonably representative of the source population. The most common assigned primary diagnoses were within the ICD mood disorders category (n = 12,756), followed by schizophrenia and related disorders (8,158), substance misuse (7,749), neuroses (7,105) and organic disorders (6,414). Conclusion The SLAM BRC Case Register represents a 'new generation' of this research design, built on a long-running system of fully electronic clinical records and allowing in-depth secondary analysis of numerical, string and free text data, whilst preserving anonymity through technical and procedural safeguards.

  11. New Optimization Algorithms in Physics

    CERN Document Server

    Hartmann, Alexander K

    2004-01-01

    Many physicists are not aware of the fact that they can solve their problems by applying optimization algorithms. Since the number of such algorithms is steadily increasing, many of the newer ones have not yet been presented comprehensively. This presentation of recently developed algorithms applied in physics, including demonstrations of how they work and related results, aims to encourage their application; the algorithms selected cover concepts and methods ranging from statistical physics to optimization problems emerging in theoretical computer science.

  12. Incorporating the Uncertainties of Nodal-Plane Orientation in the Seismo-Lineament Analysis Method (SLAM)

    Science.gov (United States)

    Cronin, V.; Sverdrup, K. A.

    2013-05-01

    The process of delineating a seismo-lineament has evolved since the first description of the Seismo-Lineament Analysis Method (SLAM) by Cronin et al. (2008, Env & Eng Geol 14(3) 199-219). SLAM is a reconnaissance tool to find the trace of the fault that produced a shallow-focus earthquake by projecting the corresponding nodal planes (NPs) upward to their intersections with the ground surface, as represented by a DEM or topographic map. A seismo-lineament is formed by the intersection of the uncertainty volume associated with a given NP and the ground surface. The ground-surface trace of the fault that produced the earthquake is likely to be within one of the two seismo-lineaments associated with the two NPs derived from the earthquake's focal mechanism solution. When no uncertainty estimate has been reported for the NP orientation, the uncertainty volume associated with a given NP is bounded by parallel planes that are [1] tangent to the ellipsoidal uncertainty volume around the focus and [2] parallel to the NP. If the ground surface is planar, the resulting seismo-lineament is bounded by parallel lines. When an uncertainty is reported for the NP orientation, the seismo-lineament resembles a bow tie, with the epicenter located adjacent to or within the "knot." Some published lists of focal mechanisms include only one NP with associated uncertainties. The NP orientation uncertainties in strike azimuth (+/- gamma), dip angle (+/- epsilon) and rake that are output from an FPFIT analysis (Reasenberg and Oppenheimer, 1985, USGS OFR 85-739) are taken to be the same for both NPs (Oppenheimer, 2013, pers com). The boundaries of the NP uncertainty volume are each composed of planes tangent to the focal uncertainty ellipsoid.
One boundary, whose nearest horizontal distance from the epicenter is greater than or equal to that of the other boundary, is formed by the set of all planes with strike azimuths equal to the reported NP strike azimuth +/- gamma, and dip angle
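The upward projection of a nodal plane can be illustrated with a small planar-geometry sketch. This is not the published SLAM code: it assumes a planar, horizontal ground surface and considers only the focal-depth uncertainty (the parallel-line case described above); the function names and example values are invented for illustration.

```python
import math

def surface_trace_offset(depth_km, dip_deg):
    """Horizontal distance from the epicenter to the up-dip surface
    trace of a nodal plane, assuming a planar horizontal ground surface."""
    return depth_km / math.tan(math.radians(dip_deg))

def seismo_lineament_band(depth_km, depth_unc_km, dip_deg):
    """Near and far bounds of the seismo-lineament when only the
    focal-depth uncertainty is considered (the parallel-line case)."""
    near = surface_trace_offset(depth_km - depth_unc_km, dip_deg)
    far = surface_trace_offset(depth_km + depth_unc_km, dip_deg)
    return near, far

# A focus 10 km deep with +/-2 km depth uncertainty and a 45-degree dip
# yields a band between roughly 8 km and 12 km from the epicenter:
print(seismo_lineament_band(10.0, 2.0, 45.0))
```

A steeper dip narrows the band and pulls it toward the epicenter, which is why dip-angle uncertainty widens the bow-tie shape described above.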

  13. Indoor Modelling from Slam-Based Laser Scanner: Door Detection to Envelope Reconstruction

    Science.gov (United States)

    Díaz-Vilariño, L.; Verbree, E.; Zlatanova, S.; Diakité, A.

    2017-09-01

    Updated and detailed indoor models are increasingly demanded for various applications such as emergency management or navigational assistance. The consolidation of new portable and mobile acquisition systems has led to a higher availability of 3D point cloud data from indoors. In this work, we explore the combined use of point clouds and trajectories from a SLAM-based laser scanner to automate the reconstruction of building interiors. The methodology starts with door detection, since doors represent transitions from one indoor space to another, which provides an initial approximation of the global configuration of the point cloud into building rooms. For this purpose, the trajectory is used to create a vertical point cloud profile in which doors are detected as local minima of vertical distances. As the point cloud and trajectory are related by time stamp, this feature is used to subdivide the point cloud into subspaces according to the location of the doors. The correspondence between subspaces and building rooms is not unambiguous: one subspace always corresponds to one room, but one room is not necessarily depicted by just one subspace, for example, in the case of a room containing several doors and in which the acquisition is performed discontinuously. The labelling problem is formulated as a combinatorial approach solved as a minimum energy optimization. Once the point cloud is subdivided into building rooms, the envelope (formed by walls, ceilings and floors) is reconstructed for each space. The connectivity between spaces is included by adding the previously detected doors to the reconstructed model. The methodology is tested on a real case study.
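The door-detection idea (local minima in a trajectory-based vertical profile) can be sketched in a few lines. The profile values, the 2.2 m door-height bound, and the function name below are hypothetical stand-ins, not the authors' implementation:

```python
def detect_doors(profile, door_max=2.2):
    """Return indices where the ceiling-height profile has a local
    minimum below door_max (metres): candidate door lintels."""
    doors = []
    for i in range(1, len(profile) - 1):
        if (profile[i] <= profile[i - 1] and
                profile[i] <= profile[i + 1] and
                profile[i] < door_max):
            doors.append(i)
    return doors

# Synthetic vertical-distance profile along the scanner trajectory (m):
# ceilings at ~2.6 m, with two dips where the scanner passes under lintels.
heights = [2.6, 2.6, 2.6, 2.1, 2.0, 2.1, 2.6, 2.6, 2.6, 2.0, 2.6]
print(detect_doors(heights))  # [4, 2]... -> [4, 9]
```

Because each profile sample carries a time stamp shared with the point cloud, the detected indices can then be mapped back to split the cloud into per-room subspaces, as the abstract describes.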

  14. INDOOR MODELLING FROM SLAM-BASED LASER SCANNER: DOOR DETECTION TO ENVELOPE RECONSTRUCTION

    Directory of Open Access Journals (Sweden)

    L. Díaz-Vilariño

    2017-09-01

    Full Text Available Updated and detailed indoor models are increasingly demanded for various applications such as emergency management or navigational assistance. The consolidation of new portable and mobile acquisition systems has led to a higher availability of 3D point cloud data from indoors. In this work, we explore the combined use of point clouds and trajectories from a SLAM-based laser scanner to automate the reconstruction of building interiors. The methodology starts with door detection, since doors represent transitions from one indoor space to another, which provides an initial approximation of the global configuration of the point cloud into building rooms. For this purpose, the trajectory is used to create a vertical point cloud profile in which doors are detected as local minima of vertical distances. As the point cloud and trajectory are related by time stamp, this feature is used to subdivide the point cloud into subspaces according to the location of the doors. The correspondence between subspaces and building rooms is not unambiguous: one subspace always corresponds to one room, but one room is not necessarily depicted by just one subspace, for example, in the case of a room containing several doors and in which the acquisition is performed discontinuously. The labelling problem is formulated as a combinatorial approach solved as a minimum energy optimization. Once the point cloud is subdivided into building rooms, the envelope (formed by walls, ceilings and floors) is reconstructed for each space. The connectivity between spaces is included by adding the previously detected doors to the reconstructed model. The methodology is tested on a real case study.

  15. Patients as teachers, medical students as filmmakers: the video slam, a pilot study.

    Science.gov (United States)

    Shapiro, Dan; Tomasa, Lynne; Koff, Nancy Alexander

    2009-09-01

    In 2006-2007 and 2007-2008, the authors pilot-tested a filmmaking project (medical students filmed patients) to assess the project's potential to teach about the challenges of living with serious chronic illness. Two cohorts of second-year medical students (N = 32) from The University of Arizona, working in groups of two or three, were paired with patients and filmed multiple home visits over eight months. Students edited their films to 7 to 10 minutes and added transitions, titles, and music. A mixed audience of students and faculty viewed the resulting 12 films in a "Video Slam." Faculty also used the films in the formal curriculum to illustrate teaching points related to chronic illness. Student filmmakers, on average, made 4.4 visits, collected 5.6 hours of film, and edited for 26.6 hours. Students reported that the project affected what they planned to cover in clinic visits, increased their plans to involve patients in care, enhanced their appreciation for patient-centered care, improved their knowledge of community resources, improved their understanding of allied health professionals' roles, and taught them about patients' innovative adaptations. Overall, students rated the project highly for its impact on their education (mean = 4.52 of 5). Student and faculty viewers of the films (N = 74) found the films compelling (mean = 4.95 of 5) and informative (mean = 4.93 of 5). The authors encountered the ethical dilemmas of deciding who controls the patients' recorded stories and navigating between patient anonymity/confidentiality and allowing patients to use their stories to teach.

  16. Prediction of Endocrine System Affectation in Fisher 344 Rats by Food Intake Exposed with Malathion, Applying Naïve Bayes Classifier and Genetic Algorithms.

    Science.gov (United States)

    Mora, Juan David Sandino; Hurtado, Darío Amaya; Sandoval, Olga Lucía Ramos

    2016-01-01

    Reported cases of uncontrolled pesticide use, and the effects produced by direct or indirect exposure, represent a high risk for human health. Therefore, this paper presents the results of the development and execution of an algorithm that predicts possible effects on the endocrine system of Fisher 344 (F344) rats caused by ingestion of malathion. The ToxRefDB database, which collects different case studies of F344 rats exposed to malathion, was consulted. The experimental data were processed using a Naïve Bayes (NB) machine-learning classifier, which was subsequently optimized using genetic algorithms (GAs). The model was executed in an application with a graphical user interface programmed in C#. There was a tendency toward greater alterations, with increasing levels in the parathyroid gland at dosages between 4 and 5 mg/kg/day, in contrast to the thyroid gland at doses between 739 and 868 mg/kg/day. Females showed greater resistance to effects on the endocrine system from the ingestion of malathion, but were more susceptible to alterations in the pituitary gland at exposure times between 3 and 6 months. The prediction model based on NB classifiers allowed analysis of all possible combinations of the studied variables, and its accuracy was improved using GAs. Except for the pituitary gland, females demonstrated better resistance to effects of increasing levels on the rest of the endocrine system glands.
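As a rough illustration of the Naïve Bayes step only (the GA optimisation and the C# interface are not reproduced), here is a minimal Gaussian NB classifier on invented dose/exposure rows; the toy data merely mimic the dose ranges mentioned above and are not ToxRefDB records:

```python
import math
from collections import defaultdict

def fit_gaussian_nb(X, y):
    """Per-class feature means/variances plus class priors."""
    by_class = defaultdict(list)
    for xi, yi in zip(X, y):
        by_class[yi].append(xi)
    model = {}
    for c, rows in by_class.items():
        n = len(rows)
        means = [sum(col) / n for col in zip(*rows)]
        vars_ = [sum((v - m) ** 2 for v in col) / n + 1e-9  # variance floor
                 for col, m in zip(zip(*rows), means)]
        model[c] = (n / len(X), means, vars_)
    return model

def predict(model, x):
    """Pick the class with the highest Gaussian log-likelihood + log prior."""
    best, best_lp = None, -math.inf
    for c, (prior, means, vars_) in model.items():
        lp = math.log(prior)
        for v, m, s2 in zip(x, means, vars_):
            lp += -0.5 * math.log(2 * math.pi * s2) - (v - m) ** 2 / (2 * s2)
        if lp > best_lp:
            best, best_lp = c, lp
    return best

# Toy rows: (dose mg/kg/day, exposure months) -> most-affected gland label.
X = [(4.5, 3), (5.0, 4), (800.0, 6), (750.0, 5)]
y = ["parathyroid", "parathyroid", "thyroid", "thyroid"]
model = fit_gaussian_nb(X, y)
print(predict(model, (4.8, 3)))  # parathyroid
```

In the paper's setup, a GA would then search over feature subsets or smoothing parameters to raise the classifier's accuracy; here the model is left untuned.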

  17. New Methodology for Optimal Flight Control Using Differential Evolution Algorithms Applied on the Cessna Citation X Business Aircraft – Part 1. Design and Optimization

    Directory of Open Access Journals (Sweden)

    Yamina BOUGHARI

    2017-06-01

    Full Text Available Setting the appropriate controllers for aircraft stability and control augmentation systems is a complicated and time-consuming task: in the Linear Quadratic Regulator method, gains are found by selecting appropriate weights, and in Proportional Integral Derivative control, by tuning gains. A trial-and-error process is usually employed for the determination of weighting matrices, which is normally a time-consuming procedure. Flight control laws were designed and optimized by combining the Differential Evolution algorithm, the Linear Quadratic Regulator method, and the Proportional Integral controller. The optimal controllers were used to reach satisfactory aircraft dynamics and safe flight operations with respect to the augmentation systems' handling qualities and design requirements for different flight conditions. Furthermore, the design and the clearance of the controllers over the flight envelope were automated using a Graphical User Interface, which offers the designer the flexibility to change the design requirements. With the aim of reducing the time and cost of the Flight Control Law design, one fitness function was used for both optimizations, with design requirements as constraints. Consequently, the complexity of the Flight Control Law design process was reduced by using the meta-heuristic algorithm.
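The Differential Evolution step can be sketched with a minimal DE/rand/1/bin minimiser. The fitness below is a hypothetical stand-in for the paper's handling-quality cost (the target gains are invented), not the actual LQR/PI tuning problem:

```python
import random

def differential_evolution(fitness, bounds, pop_size=20, f=0.7, cr=0.9,
                           generations=100, seed=1):
    """Minimal DE/rand/1/bin minimiser (illustrative only)."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    cost = [fitness(ind) for ind in pop]
    for _ in range(generations):
        for i in range(pop_size):
            # Mutation: three distinct partners, none equal to i.
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)  # force at least one crossed gene
            trial = []
            for j in range(dim):
                if rng.random() < cr or j == jrand:
                    lo, hi = bounds[j]
                    v = pop[a][j] + f * (pop[b][j] - pop[c][j])
                    trial.append(min(max(v, lo), hi))
                else:
                    trial.append(pop[i][j])
            tc = fitness(trial)
            if tc <= cost[i]:  # greedy one-to-one replacement
                pop[i], cost[i] = trial, tc
    best = min(range(pop_size), key=cost.__getitem__)
    return pop[best], cost[best]

# Hypothetical fitness: distance of candidate (kp, ki) gains from a pair
# assumed to satisfy the handling-quality requirements.
fit = lambda g: (g[0] - 2.0) ** 2 + (g[1] - 0.5) ** 2
gains, err = differential_evolution(fit, [(0.0, 10.0), (0.0, 5.0)])
print(round(gains[0], 2), round(gains[1], 2))
```

In the paper's workflow the fitness would instead simulate the closed-loop aircraft and score it against the design requirements, with constraint violations penalised.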

  18. Central composite design and genetic algorithm applied for the optimization of ultrasonic-assisted removal of malachite green by ZnO Nanorod-loaded activated carbon

    Science.gov (United States)

    Ghaedi, M.; Azad, F. Nasiri; Dashtian, K.; Hajati, S.; Goudarzi, A.; Soylak, M.

    2016-10-01

    Maximum malachite green (MG) adsorption onto ZnO nanorod-loaded activated carbon (ZnO-NR-AC) was achieved following the optimization of conditions, while the mass transfer was accelerated by ultrasound. The central composite design (CCD) and genetic algorithm (GA) were used to estimate the effect of individual variables and their mutual interactions on MG adsorption as the response, and to optimize the adsorption process. The ZnO-NR-AC surface morphology and its properties were identified via FESEM, XRD and FTIR. Investigation of the adsorption equilibrium isotherm and kinetic models revealed that the experimental data are well fitted by the Langmuir isotherm and the pseudo-second-order kinetic model, respectively. It was shown that a small amount of ZnO-NR-AC (with an adsorption capacity of 20 mg g⁻¹) is sufficient for the rapid removal of a high amount of MG dye in a short time (3.99 min).

  19. Parallel sorting algorithms

    CERN Document Server

    Akl, Selim G

    1985-01-01

    Parallel Sorting Algorithms explains how to use parallel algorithms to sort a sequence of items on a variety of parallel computers. The book reviews the sorting problem, the parallel models of computation, parallel algorithms, and the lower bounds on the parallel sorting problems. The text also presents twenty different algorithms for architectures such as linear arrays, mesh-connected computers, and cube-connected computers. Another example where an algorithm can be applied is the shared-memory SIMD (single instruction stream, multiple data stream) computer, in which the whole sequence to be sorted can fit in the
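One of the classic algorithms in this area, odd-even transposition sort for a linear array of n processors, can be sketched as follows (simulated sequentially here; on real hardware each phase's compare-exchanges would run in parallel):

```python
def odd_even_transposition_sort(a):
    """Odd-even transposition sort: n phases, each alternating between
    even-indexed and odd-indexed neighbour pairs.  On a linear array of
    n processors every pair in a phase compares-and-swaps concurrently."""
    a = list(a)
    n = len(a)
    for phase in range(n):
        start = 1 if phase % 2 else 0
        for i in range(start, n - 1, 2):  # these pairs are independent
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
    return a

print(odd_even_transposition_sort([5, 1, 4, 2, 3]))  # [1, 2, 3, 4, 5]
```

The n-phase bound is what makes the algorithm attractive on a linear array: each phase takes constant parallel time, giving O(n) total, versus O(n log n) work for a single processor.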

  20. SLAM family markers are conserved among hematopoietic stem cells from old and reconstituted mice and markedly increase their purity.

    Science.gov (United States)

    Yilmaz, Omer H; Kiel, Mark J; Morrison, Sean J

    2006-02-01

    Recent advances have increased the purity of hematopoietic stem cells (HSCs) isolated from young mouse bone marrow. However, little attention has been paid to the purity of HSCs from other contexts. Although Thy-1 low Sca-1+ Lineage- c-kit+ cells from young bone marrow are highly enriched for HSCs (1 in 5 cells gives long-term multilineage reconstitution after transplantation into irradiated mice), the same population from old, reconstituted, or cytokine-mobilized mice engrafts much less efficiently (1 in 78 to 1 in 185 cells gives long-term multilineage reconstitution). To test whether we could increase the purity of HSCs isolated from these contexts, we examined the SLAM family markers CD150 and CD48. All detectable HSCs from old, reconstituted, and cyclophosphamide/G-CSF-mobilized mice were CD150+ CD48-, just as in normal young bone marrow. Thy-1 low Sca-1+ Lineage- c-kit+ cells from old, reconstituted, or mobilized mice included mainly CD48+ and/or CD150- cells that lacked reconstituting ability. CD150+ CD48- Sca-1+ Lineage- c-kit+ cells from old, reconstituted, or mobilized mice were much more highly enriched for HSCs, with 1 in 3 to 1 in 7 cells giving long-term multilineage reconstitution. SLAM family receptor expression is conserved among HSCs from diverse contexts, and HSCs from old, reconstituted, and mobilized mice engraft relatively efficiently after transplantation when contaminating cells are eliminated.

  1. Dissection of SAP-dependent and SAP-independent SLAM family signaling in NKT cell development and humoral immunity

    Science.gov (United States)

    Cai, Chenxu; Liu, Guangao; Wang, Yuande; Du, Juan; Lin, Xin; Yang, Meixiang

    2017-01-01

    Signaling lymphocytic activation molecule (SLAM)–associated protein (SAP) mutations in X-linked lymphoproliferative disease (XLP) lead to defective NKT cell development and impaired humoral immunity. Because of the redundancy of SLAM family receptors (SFRs) and the complexity of SAP actions, how SFRs and SAP mediate these processes remains elusive. Here, we examined NKT cell development and humoral immunity in mice completely deficient in SFR. We found that SFR deficiency severely impaired NKT cell development. In contrast to SAP deficiency, SFR deficiency caused no apparent defect in follicular helper T (TFH) cell differentiation. Intriguingly, the deletion of SFRs completely rescued the severe defect in TFH cell generation caused by SAP deficiency, whereas SFR deletion had a minimal effect on the defective NKT cell development in SAP-deficient mice. These findings suggest that SAP-dependent activating SFR signaling is essential for NKT cell selection; however, SFR signaling is inhibitory in SAP-deficient TFH cells. Thus, our current study revises our understanding of the mechanisms underlying T cell defects in patients with XLP. PMID:28049627

  2. Dissection of SAP-dependent and SAP-independent SLAM family signaling in NKT cell development and humoral immunity.

    Science.gov (United States)

    Chen, Shasha; Cai, Chenxu; Li, Zehua; Liu, Guangao; Wang, Yuande; Blonska, Marzenna; Li, Dan; Du, Juan; Lin, Xin; Yang, Meixiang; Dong, Zhongjun

    2017-02-01

    Signaling lymphocytic activation molecule (SLAM)-associated protein (SAP) mutations in X-linked lymphoproliferative disease (XLP) lead to defective NKT cell development and impaired humoral immunity. Because of the redundancy of SLAM family receptors (SFRs) and the complexity of SAP actions, how SFRs and SAP mediate these processes remains elusive. Here, we examined NKT cell development and humoral immunity in mice completely deficient in SFR. We found that SFR deficiency severely impaired NKT cell development. In contrast to SAP deficiency, SFR deficiency caused no apparent defect in follicular helper T (TFH) cell differentiation. Intriguingly, the deletion of SFRs completely rescued the severe defect in TFH cell generation caused by SAP deficiency, whereas SFR deletion had a minimal effect on the defective NKT cell development in SAP-deficient mice. These findings suggest that SAP-dependent activating SFR signaling is essential for NKT cell selection; however, SFR signaling is inhibitory in SAP-deficient TFH cells. Thus, our current study revises our understanding of the mechanisms underlying T cell defects in patients with XLP. © 2017 Chen et al.

  3. Algoritmo genético aplicado a la programación en talleres de maquinado//Genetic algorithm applied to scheduling in machine shops

    Directory of Open Access Journals (Sweden)

    José Eduardo Márquez-Delgado

    2012-09-01

    Full Text Available In this work, the metaheuristic known as the genetic algorithm is applied to two typical scheduling problem variants present in a machine shop: job shop and flow shop. The minimization of the completion time of all jobs, known as the makespan, was selected as the objective to optimize in a schedule. This problem is considered hard to solve and is typical of combinatorial optimization. The results demonstrate the quality of the solutions found relative to the computation time used, when compared with classic problems reported by other authors. The proposed representation of each chromosome generates the complete universe of feasible solutions, in which global optima can be found, and it satisfies the constraints of the problem. Key words: genetic algorithm, chromosomes, flow shop, job shop, scheduling, makespan

  4. Applying genetic algorithms for calibrating a hexagonal cellular automata model for the simulation of debris flows characterised by strong inertial effects

    Science.gov (United States)

    Iovine, G.; D'Ambrosio, D.; Di Gregorio, S.

    2005-03-01

    In modelling complex a-centric phenomena which evolve through local interactions within a discrete time-space, cellular automata (CA) represent a valid alternative to standard solution methods based on differential equations. Flow-type phenomena (such as lava flows, pyroclastic flows, earth flows, and debris flows) can be viewed as a-centric dynamical systems, and they can therefore be properly investigated in CA terms. SCIDDICA S4a is the last release of a two-dimensional hexagonal CA model for simulating debris flows characterised by strong inertial effects. S4a has been obtained by progressively enriching an initial simplified model, originally derived for simulating very simple cases of slow-moving flow-type landslides. Using an empirical strategy, in S4a, the inertial character of the flowing mass is translated into CA terms by means of local rules. In particular, in the transition function of the model, the distribution of landslide debris among the cells is obtained through a double cycle of computation. In the first phase, the inertial character of the landslide debris is taken into account by considering indicators of momentum. In the second phase, any remaining debris in the central cell is distributed among the adjacent cells, according to the principle of maximum possible equilibrium. The complexities of the model and of the phenomena to be simulated suggested the need for an automated technique of evaluation for the determination of the best set of global parameters. Accordingly, the model is calibrated using a genetic algorithm and by considering the May 1998 Curti-Sarno (Southern Italy) debris flow. The boundaries of the area affected by the debris flow are simulated well with the model. Errors computed by comparing the simulations with the mapped areal extent of the actual landslide are smaller than those previously obtained without genetic algorithms. As the experiments have been realised in a sequential computing environment, they could be

  5. Algorithming the Algorithm

    DEFF Research Database (Denmark)

    Mahnke, Martina; Uprichard, Emma

    2014-01-01

    Imagine sailing across the ocean. The sun is shining, vastness all around you. And suddenly [BOOM] you’ve hit an invisible wall. Welcome to the Truman Show! Ever since Eli Pariser published his thoughts on a potential filter bubble, this movie scenario seems to have become reality, just with slight changes: it’s not the ocean, it’s the internet we’re talking about, and it’s not a TV show producer, but algorithms that constitute a sort of invisible wall. Building on this assumption, most research is trying to ‘tame the algorithmic tiger’. While this is a valuable and often inspiring approach, we...

  6. Evaluating the diagnostic utility of applying a machine learning algorithm to diffusion tensor MRI measures in individuals with major depressive disorder.

    Science.gov (United States)

    Schnyer, David M; Clasen, Peter C; Gonzalez, Christopher; Beevers, Christopher G

    2017-06-30

    Using MRI to diagnose mental disorders has been a long-term goal. Despite this, the vast majority of prior neuroimaging work has been descriptive rather than predictive. The current study applies support vector machine (SVM) learning to MRI measures of brain white matter to classify adults with Major Depressive Disorder (MDD) and healthy controls. In a precisely matched group of individuals with MDD (n = 25) and healthy controls (n = 25), SVM learning accurately (74%) classified patients and controls across a brain map of white matter fractional anisotropy (FA) values. The study revealed three main findings: 1) SVM applied to DTI-derived FA maps can accurately classify MDD vs. healthy controls; 2) prediction is strongest when only right hemisphere white matter is examined; and 3) removing FA values from a region identified by univariate contrast as significantly different between MDD and healthy controls does not change the SVM accuracy. These results indicate that SVM learning applied to neuroimaging data can classify the presence versus absence of MDD and that predictive information is distributed across brain networks rather than being highly localized. Finally, MDD group differences revealed through typical univariate contrasts do not necessarily reveal patterns that provide accurate predictive information. Copyright © 2017 Elsevier Ireland Ltd. All rights reserved.
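The classification step can be illustrated with a tiny linear SVM trained by batch subgradient descent on invented, mean-centred stand-ins for FA features. This is a sketch of the technique, not the study's pipeline (which applied cross-validated SVM to whole-brain FA maps):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def train_linear_svm(X, y, lam=0.01, lr=0.5, epochs=2000):
    """Hinge-loss linear SVM fitted by full-batch subgradient descent."""
    w, b = [0.0] * len(X[0]), 0.0
    n = len(X)
    for _ in range(epochs):
        gw = [lam * wj for wj in w]  # gradient of the L2 penalty
        gb = 0.0
        for xi, yi in zip(X, y):
            if yi * (dot(w, xi) + b) < 1:  # margin violated -> hinge active
                gw = [gwj - yi * xj / n for gwj, xj in zip(gw, xi)]
                gb -= yi / n
        w = [wj - lr * gwj for wj, gwj in zip(w, gw)]
        b -= lr * gb
    return w, b

def classify(w, b, x):
    return 1 if dot(w, x) + b >= 0 else -1

# Toy, mean-centred stand-ins for FA features (MDD = +1, control = -1).
X = [(-0.40, -0.40), (-0.30, -0.40), (0.30, 0.40), (0.40, 0.40)]
y = [-1, -1, 1, 1]
w, b = train_linear_svm(X, y)
print([classify(w, b, xi) for xi in X])  # [-1, -1, 1, 1]
```

Training accuracy on four hand-picked points proves nothing, of course; the study's 74% figure comes from held-out prediction, which is the part that matters diagnostically.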

  7. CTC-ask: a new algorithm for conversion of CT numbers to tissue parameters for Monte Carlo dose calculations applying DICOM RS knowledge

    International Nuclear Information System (INIS)

    Ottosson, Rickard O; Behrens, Claus F

    2011-01-01

    One of the building blocks in Monte Carlo (MC) treatment planning is to convert patient CT data to MC compatible phantoms, consisting of density and media matrices. The resulting dose distribution is highly influenced by the accuracy of the conversion. Two major contributing factors are precise conversion of CT number to density and proper differentiation between air and lung. Existing tools do not address this issue specifically. Moreover, their density conversion may depend on the number of media used. Differentiation between air and lung is an important task in MC treatment planning and misassignment may lead to local dose errors on the order of 10%. A novel algorithm, CTC-ask, is presented in this study. It enables locally confined constraints for the media assignment and is independent of the number of media used for the conversion of CT number to density. MC compatible phantoms were generated for two clinical cases using a CT-conversion scheme implemented in both CTC-ask and the DICOM-RT toolbox. Full MC dose calculation was subsequently conducted and the resulting dose distributions were compared. The DICOM-RT toolbox inaccurately assigned lung in 9.9% and 12.2% of the voxels located outside of the lungs for the two cases studied, respectively. This was completely avoided by CTC-ask. CTC-ask is able to reduce anatomically irrational media assignment. The CTC-ask source code can be made available upon request to the authors. (note)
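The conversion idea can be sketched as a piecewise-linear CT-number-to-density ramp plus a locally confined media assignment. The ramp breakpoints, density thresholds, and function names below are illustrative assumptions, not CTC-ask's actual calibration:

```python
def hu_to_density(hu):
    """Piecewise-linear CT-number-to-density ramp in g/cm^3
    (illustrative breakpoints, not a clinical calibration)."""
    if hu <= -1000:
        return 0.001                 # air
    if hu <= 0:
        return 1.0 + hu / 1000.0     # air-to-water ramp
    return 1.0 + hu / 1950.0         # water-to-bone ramp

def assign_medium(hu, inside_lung_contour):
    """Assign a medium independently of the density binning; the lung
    label is locally confined to voxels inside the DICOM-RT lung
    structure, mimicking CTC-ask's constraint idea."""
    density = hu_to_density(hu)
    if density < 0.05:
        return "air"
    if density < 0.9:
        return "lung" if inside_lung_contour else "soft_tissue"
    return "soft_tissue" if density < 1.2 else "bone"

print(assign_medium(-700, inside_lung_contour=True))   # lung
print(assign_medium(-700, inside_lung_contour=False))  # soft_tissue
```

The second call shows the point of the constraint: a low-density voxel outside the lung contour (e.g. bowel gas or noise) is never labelled lung, which is exactly the misassignment the abstract reports for the unconstrained toolbox.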

  8. Research on Innovating, Applying Multiple Paths Routing Technique Based on Fuzzy Logic and Genetic Algorithm for Routing Messages in Service - Oriented Routing

    Directory of Open Access Journals (Sweden)

    Nguyen Thanh Long

    2015-02-01

    Full Text Available MANET (short for Mobile Ad-Hoc Network) consists of a set of mobile network nodes, and the network configuration changes very fast. In content-based routing, the transfer of data from a source node to requesting nodes is not based on destination addresses. Therefore, it is very flexible and reliable, because the source node does not need to know the destination nodes. If we can find multiple paths that satisfy the bandwidth requirement, we can split the original message into multiple smaller messages to transmit concurrently on these paths; on the destination nodes, the separated messages are combined into the original message. Hence it can better utilize network resources, achieving higher data transfer rates, load balancing, and failover. Service-oriented routing is inherited from the model of content-based routing (CBR), combined with several advanced techniques such as multicast, multiple-path routing, and genetic algorithms to increase the data rate, and data encryption to ensure information security. Fuzzy logic is a field of logic that evaluates the accuracy of results based on the approximation of the components involved, making decisions based on many factors of relative accuracy grounded in experiment or mathematical proof. This article presents some techniques to support multiple-path routing from one network node to a set of nodes with guaranteed quality of service. Using these techniques can decrease network load and congestion and use network resources efficiently.
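The split/recombine idea can be sketched as follows; sequence-numbered chunks allow reassembly regardless of arrival order (the path identifiers and message are invented for illustration):

```python
def split_message(msg, paths):
    """Split msg into one sequence-numbered chunk per available path."""
    k = len(paths)
    size = -(-len(msg) // k)  # ceiling division
    return [(i, paths[i], msg[i * size:(i + 1) * size]) for i in range(k)]

def reassemble(chunks):
    """Destination side: order chunks by sequence number, concatenate."""
    return "".join(chunk for _, _, chunk in sorted(chunks))

chunks = split_message("SERVICE-ORIENTED-ROUTING", ["path1", "path2", "path3"])
# Even if chunks arrive out of order, the message is restored:
print(reassemble(list(reversed(chunks))))  # SERVICE-ORIENTED-ROUTING
```

A real implementation would also need per-chunk acknowledgements and retransmission over a surviving path when one path fails, which is where the failover benefit described above comes from.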

  9. Methodology for Check Valve Selection to Maintain the Integrity of Pipeline against the Check Valve Slam for the KIJANG Research Reactor

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Dayong; Yoon, Hyungi; Seo, Kyoungwoo; Kim, Seonhoon [KAERI, Daejeon (Korea, Republic of)

    2016-05-15

    Check valve slam results in water hammer and an unexpected system pressure rise in the pipeline. Sometimes the pressure rise from check valve slam exceeds the design pressure and causes rupture of the pipeline. Therefore, check valve slam significantly influences the integrity of the pipe. It is especially likely to occur at a check valve installed at the discharge of a pump when one pump trips among two or more pumps running in parallel. This study focuses on check valve selection to maintain the integrity of the PCS pipeline against check valve slam. If the design head for the KJRR PCS pipeline is higher than the sum of the static head and 11 m, any type of check valve can be installed at the discharge of the pump. However, if the design head for the KJRR PCS pipeline is lower than the sum of the static head and 11 m, installation of swing and ball check valves at the discharge of the pump must be avoided to prevent rupture of the PCS pipeline.
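The selection rule stated above can be encoded as a small helper. The rule itself (design head versus static head + 11 m) comes from the abstract, while the set of acceptable valve types beyond "swing" and "ball" is an assumption for illustration:

```python
def allowed_check_valves(design_head_m, static_head_m, margin_m=11.0):
    """Selection rule from the abstract: if the design head exceeds
    the static head + 11 m, any check-valve type is acceptable;
    otherwise swing and ball check valves must be avoided.  The
    alternative types listed here are illustrative assumptions."""
    if design_head_m > static_head_m + margin_m:
        return {"swing", "ball", "nozzle", "tilting-disc"}
    return {"nozzle", "tilting-disc"}

print(sorted(allowed_check_valves(design_head_m=40.0, static_head_m=20.0)))
print(sorted(allowed_check_valves(design_head_m=25.0, static_head_m=20.0)))
```

The first case (40 m > 20 m + 11 m) permits any type; the second falls below the margin, so the slam-prone swing and ball types are excluded.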

  10. Análisis de Detectores y Descriptores de Características Visuales en SLAM en Entornos Interiores y Exteriores//Analysis of Visual Feature Detectors and Descriptors for SLAM in Indoor and Outdoor Environments

    Directory of Open Access Journals (Sweden)

    M. Ballesta

    2010-04-01

    Full Text Available Abstract: The objective of this article is to find a visual feature extractor that can be used in a SLAM (Simultaneous Localization and Mapping) process. This feature extractor consists of the combination of a detector that extracts significant points from the environment and a local descriptor that characterizes those points. This article presents a comparison of a set of interest-point detectors and local descriptors used as visual landmarks in a SLAM process. The comparative analysis is divided into two distinct phases: detection and description. The repeatability of the detectors is evaluated, as is the invariance of the descriptors to changes in viewpoint, scale, and illumination. The experiments were performed on a set of image sequences from both indoor (office) and outdoor environments, with various image variations (illumination and position), thus representing fairly generally the typical environments of a robot. The results of this work may be useful when selecting a suitable landmark for visual SLAM, in both indoor and outdoor environments. Keywords: visual SLAM, visual landmarks, interest point detectors, local descriptors

  11. Methodology for Check Valve Selection to Maintain the Integrity of Pipeline against the Check Valve Slam for the KIJANG Research Reactor

    International Nuclear Information System (INIS)

    Kim, Dayong; Yoon, Hyungi; Seo, Kyoungwoo; Kim, Seonhoon

    2016-01-01

    Check valve slam results in water hammer and an unexpected system pressure rise in the pipeline. Sometimes the pressure rise from check valve slam exceeds the design pressure and causes rupture of the pipeline. Therefore, check valve slam significantly influences the integrity of the pipe. It is especially likely to occur at a check valve installed at the discharge of a pump when one pump trips among two or more pumps running in parallel. This study focuses on check valve selection to maintain the integrity of the PCS pipeline against check valve slam. If the design head for the KJRR PCS pipeline is higher than the sum of the static head and 11 m, any type of check valve can be installed at the discharge of the pump. However, if the design head for the KJRR PCS pipeline is lower than the sum of the static head and 11 m, installation of swing and ball check valves at the discharge of the pump must be avoided to prevent rupture of the PCS pipeline.

  12. Algebraic Algorithm Design and Local Search

    National Research Council Canada - National Science Library

    Graham, Robert

    1996-01-01

    .... Algebraic techniques have been applied successfully to algorithm synthesis by the use of algorithm theories and design tactics, an approach pioneered in the Kestrel Interactive Development System (KIDS...

  13. Simultaneous data pre-processing and SVM classification model selection based on a parallel genetic algorithm applied to spectroscopic data of olive oils.

    Science.gov (United States)

    Devos, Olivier; Downey, Gerard; Duponchel, Ludovic

    2014-04-01

    Classification is an important task in chemometrics. For several years now, support vector machines (SVMs) have proven to be powerful for infrared spectral data classification. However, such methods require optimisation of parameters in order to control the risk of overfitting and the complexity of the boundary. Furthermore, it is established that the prediction ability of classification models can be improved using pre-processing in order to remove unwanted variance in the spectra. In this paper we propose a new methodology based on a genetic algorithm (GA) for the simultaneous optimisation of SVM parameters and pre-processing (GENOPT-SVM). The method has been tested for the discrimination of the geographical origin of Italian olive oil (Ligurian and non-Ligurian) on the basis of near infrared (NIR) or mid infrared (FTIR) spectra. Different classification models (PLS-DA, SVM with mean-centred data, GENOPT-SVM) have been tested and statistically compared using McNemar's statistical test. For the two datasets, SVM with optimised pre-processing gives models with higher accuracy than those obtained with PLS-DA on pre-processed data. In the case of the NIR dataset, most of this accuracy improvement (86.3% compared with 82.8% for PLS-DA) occurred using only a single pre-processing step. For the FTIR dataset, three optimised pre-processing steps are required to obtain an SVM model with a significant accuracy improvement (82.2%) compared to the one obtained with PLS-DA (78.6%). Furthermore, this study demonstrates that even SVM models have to be developed on the basis of well-corrected spectral data in order to obtain higher classification rates. Copyright © 2013 Elsevier Ltd. All rights reserved.
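The GA side of GENOPT-SVM can be sketched as a plain genetic algorithm over binary chromosomes, where each gene switches one pre-processing step on or off. The fitness below is a hypothetical stand-in for cross-validated classification accuracy, not the paper's actual SVM evaluation:

```python
import random

def genetic_search(fitness, n_genes, pop_size=30, gens=60, pmut=0.1, seed=3):
    """Plain GA over binary chromosomes: tournament selection,
    one-point crossover, per-gene bit-flip mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_genes)]
           for _ in range(pop_size)]
    for _ in range(gens):
        new = []
        for _ in range(pop_size):
            p1 = max(rng.sample(pop, 2), key=fitness)  # tournament of 2
            p2 = max(rng.sample(pop, 2), key=fitness)
            cut = rng.randrange(1, n_genes)
            child = p1[:cut] + p2[cut:]
            child = [g ^ (rng.random() < pmut) for g in child]  # mutate
            new.append(child)
        pop = new
    return max(pop, key=fitness)

# Hypothetical stand-in for cross-validated accuracy: pre-processing
# steps 0 and 2 help, step 1 hurts, step 3 is neutral.
acc = lambda ch: 0.7 + 0.08 * ch[0] - 0.05 * ch[1] + 0.06 * ch[2]
best = genetic_search(acc, n_genes=4)
print(best)
```

A chromosome in the paper would additionally carry SVM hyperparameters alongside the pre-processing flags, so both are optimised in one run against the same fitness.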

  14. Algoritmos genéticos aplicados a la optimización de antenas Yagi-Uda Genetic algorithms applied to Yagi-Uda antenna optimization

    Directory of Open Access Journals (Sweden)

    Edgardo César De La Asunción López

    2009-07-01

    Full Text Available This paper describes an optimization process implemented using genetic algorithms. The initial population of the GA is composed of 128 chromosomes with 11 genes per chromosome. The chromosomes encode the lengths and separations of the elements of the Yagi-Uda antenna; the ranges of these genes were chosen following design standards for such antennas. Every antenna of each GA generation is analysed in order to assign a fitness value to each individual. In order to verify the obtained results, various tests were carried out, among them the construction of an optimized Yagi-Uda antenna whose electromagnetic characteristics were measured and verified.

  15. A Discussion on Uncertainty Representation and Interpretation in Model-Based Prognostics Algorithms based on Kalman Filter Estimation Applied to Prognostics of Electronics Components

    Science.gov (United States)

    Celaya, Jose R.; Saxena, Abhinav; Goebel, Kai

    2012-01-01

    This article discusses several aspects of uncertainty representation and management for model-based prognostics methodologies based on our experience with Kalman Filters when applied to prognostics for electronics components. In particular, it explores the implications of modeling remaining useful life prediction as a stochastic process and how it relates to uncertainty representation, management, and the role of prognostics in decision-making. A distinction between the interpretations of estimated remaining useful life probability density function and the true remaining useful life probability density function is explained and a cautionary argument is provided against mixing interpretations for the two while considering prognostics in making critical decisions.
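    The distinction the article draws can be made concrete with a minimal numeric sketch: a linear Kalman filter tracks a degradation level and rate, and the *estimated* remaining-useful-life is a distribution obtained by propagating the state posterior to a failure threshold. All model values (threshold, noise levels, true rate) are assumed for illustration and are not from the article.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    dt, thresh = 1.0, 50.0
    F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-rate degradation model
    H = np.array([[1.0, 0.0]])              # only the level is measured
    Q = np.diag([1e-4, 1e-4])               # process noise
    R = np.array([[1.0]])                   # measurement noise

    x = np.array([[0.0], [0.1]])            # deliberately poor initial rate guess
    P = np.eye(2) * 10.0

    level = 0.0
    for k in range(60):
        level += 0.5 * dt                   # true degradation rate is 0.5
        z = level + rng.normal(0.0, 1.0)
        x = F @ x                           # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R                 # update
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K * (z - (H @ x)[0, 0])
        P = (np.eye(2) - K @ H) @ P

    # The estimated RUL is a random variable: sample the state posterior and
    # extrapolate each sample linearly to the failure threshold.
    P = (P + P.T) / 2
    s = rng.multivariate_normal(x.ravel(), P, 5000)
    rul = (thresh - s[:, 0]) / np.maximum(s[:, 1], 1e-6)
    print(round(float(np.mean(rul)), 1), "+/-", round(float(np.std(rul)), 1))
    ```

    The printed spread is a property of the *estimate*, not of the true remaining useful life, which is exactly the interpretive distinction the article cautions about.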

  16. Recursive forgetting algorithms

    DEFF Research Database (Denmark)

    Parkum, Jens; Poulsen, Niels Kjølstad; Holst, Jan

    1992-01-01

    In the first part of the paper, a general forgetting algorithm is formulated and analysed. It contains most existing forgetting schemes as special cases. Conditions are given ensuring that the basic convergence properties will hold. In the second part of the paper, the results are applied...... to a specific algorithm with selective forgetting. Here, the forgetting is non-uniform in time and space. The theoretical analysis is supported by a simulation example demonstrating the practical performance of this algorithm...
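    A standard special case of the general scheme analysed here is recursive least squares with exponential forgetting, where data k steps old are down-weighted by λ^k so that the estimator can track parameter changes. The sketch below uses illustrative values (λ = 0.95, an abrupt parameter jump) and is not the selective-forgetting algorithm of the paper, where forgetting is non-uniform in time and space.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    lam = 0.95                  # forgetting factor: old data decay as lam**k
    theta = np.zeros(1)         # parameter estimate
    P = np.eye(1) * 100.0       # scaled inverse information matrix

    true_theta = 2.0
    est_hist = []
    for k in range(400):
        if k == 200:
            true_theta = -1.0   # abrupt parameter change the filter must track
        phi = np.array([rng.normal(0, 1)])            # regressor
        y = true_theta * phi[0] + rng.normal(0, 0.1)  # measurement
        # RLS update with exponential forgetting
        Pphi = P @ phi
        gain = Pphi / (lam + phi @ Pphi)
        theta = theta + gain * (y - phi @ theta)
        P = (P - np.outer(gain, phi @ P)) / lam
        est_hist.append(float(theta[0]))

    print(est_hist[199], est_hist[-1])
    ```

    With λ = 1 the update reduces to ordinary recursive least squares, which cannot follow the jump at step 200; the forgetting factor is what buys the tracking ability at the cost of higher estimate variance.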

  17. Trends in causes of death among children under 5 in Bangladesh, 1993-2004: an exercise applying a standardized computer algorithm to assign causes of death using verbal autopsy data

    Directory of Open Access Journals (Sweden)

    Walker Neff

    2011-08-01

    Full Text Available Abstract Background Trends in the causes of child mortality serve as important global health information to guide efforts to improve child survival. With child mortality declining in Bangladesh, the distribution of causes of death also changes. The three verbal autopsy (VA) studies conducted with the Bangladesh Demographic and Health Surveys provide a unique opportunity to study these changes in child causes of death. Methods To ensure comparability of these trends, we developed a standardized algorithm to assign causes of death using symptoms collected through the VA studies. The original algorithms applied were systematically reviewed, and key differences in cause categorization, hierarchy, case definition, and the amount of data collected were compared to inform the development of the standardized algorithm. Based primarily on the 2004 cause categorization and hierarchy, the standardized algorithm guarantees comparability of the trends by only including symptom data commonly available across all three studies. Results Between 1993 and 2004, pneumonia remained the leading cause of death in Bangladesh, contributing to 24% to 33% of deaths among children under 5. The proportion of neonatal mortality increased significantly from 36% (uncertainty range [UR]: 31%-41%) to 56% (49%-62%) during the same period. The cause-specific mortality fractions due to birth asphyxia/birth injury and prematurity/low birth weight (LBW) increased steadily, rising from 3% (2%-5%) to 13% (10%-17%) and 10% (7%-15%), respectively. The cause-specific mortality rates decreased significantly for neonatal tetanus and several postneonatal causes (tetanus: from 7 [4-11] to 2 [0.4-4] per 1,000 live births (LB); pneumonia: from 26 [20-33] to 15 [11-20] per 1,000 LB; diarrhea: from 12 [8-17] to 4 [2-7] per 1,000 LB; measles: from 5 [2-8] to 0.2 [0-0.7] per 1,000 LB; injury: from 11 [7-17] to 3 [1-5] per 1,000 LB; and malnutrition: from 9 [6-13] to 5 [2-7] per 1,000 LB). Conclusions
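    The core of such a standardized algorithm is an ordered hierarchy of case definitions: each death is assigned the first cause in the hierarchy whose symptom criteria it meets, which is what makes assignments reproducible across surveys. The sketch below is schematic; the causes, symptom names and rules are simplified illustrations, not the study's actual case definitions.

    ```python
    # Ordered (cause, rule) pairs: the first matching cause wins, so a death that
    # satisfies several case definitions is always resolved the same way.
    HIERARCHY = [
        ("neonatal tetanus", lambda s: s["age_days"] <= 28 and s["convulsions"]
                                       and s["stopped_suckling"]),
        ("measles",          lambda s: s["rash"] and s["fever"]),
        ("pneumonia",        lambda s: s["cough"] and s["fast_breathing"]),
        ("diarrhea",         lambda s: s["loose_stools"]),
    ]

    # Default symptom record; a VA interview fills in the observed fields.
    BASE = {"age_days": 400, "convulsions": False, "stopped_suckling": False,
            "rash": False, "fever": False, "cough": False,
            "fast_breathing": False, "loose_stools": False}

    def assign_cause(symptoms):
        record = {**BASE, **symptoms}
        for cause, rule in HIERARCHY:
            if rule(record):
                return cause
        return "unspecified"

    # A record meeting both the pneumonia and diarrhea definitions is assigned
    # pneumonia, the cause placed higher in the hierarchy.
    print(assign_cause({"cough": True, "fast_breathing": True, "loose_stools": True}))
    ```

    Restricting the rules to symptom items available in all three surveys, as the paper does, is what guarantees that trend comparisons reflect epidemiology rather than questionnaire changes.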

  18. Sound algorithms

    OpenAIRE

    De Götzen , Amalia; Mion , Luca; Tache , Olivier

    2007-01-01

    International audience; We use the term sound algorithms for the categories of algorithms that deal with digital sound signals. Sound algorithms appeared in the very infancy of computing. They present strong specificities that are the consequence of two dual considerations: the properties of the digital sound signal itself and its uses, and the properties of auditory perception.

  19. Genetic algorithms

    Science.gov (United States)

    Wang, Lui; Bayer, Steven E.

    1991-01-01

    Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem-solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithm concepts and applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.

  20. Smart watch RSSI localization and refinement for behavioral classification using laser-SLAM for mapping and fingerprinting.

    Science.gov (United States)

    Carlson, Jay D; Mittek, Mateusz; Parkison, Steven A; Sathler, Pedro; Bayne, David; Psota, Eric T; Perez, Lance C; Bonasera, Stephen J

    2014-01-01

    As a first step toward building a smart home behavioral monitoring system capable of classifying a wide variety of human behavior, a wireless sensor network (WSN) system is presented for RSSI localization. The low-cost, non-intrusive system uses a smart watch worn by the user to broadcast data to the WSN, where the strength of the radio signal is evaluated at each WSN node to localize the user. A method is presented that uses simultaneous localization and mapping (SLAM) for system calibration, providing automated fingerprinting associating the radio signal strength patterns to the user's location within the living space. To improve the accuracy of localization, a novel refinement technique is introduced that takes into account typical movement patterns of people within their homes. Experimental results demonstrate that the system is capable of providing accurate localization results in a typical living space.
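    The fingerprinting and refinement steps can be sketched in a few lines: localization is a nearest-neighbour lookup in signal space against the fingerprints gathered during the SLAM calibration pass, and the movement-pattern refinement rejects transitions between non-adjacent rooms. The fingerprint values, room names and adjacency below are assumptions for illustration, not the paper's data or exact method.

    ```python
    import math

    # Calibration fingerprints from the mapping pass:
    # room -> mean RSSI (dBm) observed at three WSN nodes (values assumed).
    FINGERPRINTS = {
        "kitchen": (-40, -70, -65),
        "bedroom": (-75, -45, -60),
        "living":  (-60, -62, -42),
    }
    # Which rooms are physically connected (assumed floor plan).
    ADJACENT = {"kitchen": {"living"}, "bedroom": {"living"},
                "living": {"kitchen", "bedroom"}}

    def localize(rssi):
        """Nearest fingerprint in signal space."""
        return min(FINGERPRINTS, key=lambda loc: math.dist(FINGERPRINTS[loc], rssi))

    def refine(prev, rssi):
        """Movement-pattern refinement: reject physically implausible jumps."""
        guess = localize(rssi)
        if prev is None or guess == prev or guess in ADJACENT[prev]:
            return guess
        return prev                 # keep the previous room instead of teleporting

    print(localize((-42, -68, -66)))            # nearest to the kitchen fingerprint
    print(refine("bedroom", (-42, -68, -66)))   # bedroom -> kitchen is not adjacent
    ```

    In practice the fingerprints would be dense samples over the living space rather than one vector per room, but the lookup-then-gate structure is the same.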

  1. Portfolios of quantum algorithms.

    Science.gov (United States)

    Maurer, S M; Hogg, T; Huberman, B A

    2001-12-17

    Quantum computation holds promise for the solution of many intractable problems. However, since many quantum algorithms are stochastic in nature, they can find the solution of hard problems only probabilistically. Thus the efficiency of an algorithm has to be characterized by both the expected time to completion and the associated variance. In order to minimize both the running time and its uncertainty, we show that portfolios of quantum algorithms, analogous to those of finance, can outperform single algorithms when applied to NP-complete problems such as 3-satisfiability.
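    The portfolio effect is easy to reproduce with a classical toy simulation: for a heavy-tailed runtime distribution, running two independent copies of a stochastic algorithm at half speed and stopping when the first one finishes gives a markedly lower expected completion time than a single full-speed run. The lognormal runtime model below is chosen purely for illustration.

    ```python
    import random

    random.seed(3)

    def runtime():
        # Heavy-tailed runtime typical of stochastic search: most runs are slow,
        # a lucky few finish very fast.
        return random.lognormvariate(0.0, 2.0)

    N = 20000
    single = [runtime() for _ in range(N)]
    # Two independent copies share the machine (each at half speed); the
    # portfolio completes as soon as the first copy does.
    portfolio = [2.0 * min(runtime(), runtime()) for _ in range(N)]

    mean = lambda xs: sum(xs) / len(xs)
    print(mean(single), mean(portfolio))
    ```

    Taking the minimum of two draws truncates the heavy tail, which is why the portfolio wins even though each copy runs at half speed; for light-tailed runtimes the same construction can lose.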

  2. “TORINO 1911” PROJECT: A CONTRIBUTION OF A SLAM-BASED SURVEY TO EXTENSIVE 3D HERITAGE MODELING

    Directory of Open Access Journals (Sweden)

    F. Chiabrando

    2018-05-01

    Full Text Available In the framework of the digital documentation of complex environments the advanced Geomatics researches offers integrated solution and multi-sensor strategies for the 3D accurate reconstruction of stratified structures and articulated volumes in the heritage domain. The use of handheld devices for rapid mapping, both image- and range-based, can help the production of suitable easy-to use and easy-navigable 3D model for documentation projects. These types of reality-based modelling could support, with their tailored integrated geometric and radiometric aspects, valorisation and communication projects including virtual reconstructions, interactive navigation settings, immersive reality for dissemination purposes and evoking past places and atmospheres. The aim of this research is localized within the “Torino 1911” project, led by the University of San Diego (California in cooperation with the PoliTo. The entire project is conceived for multi-scale reconstruction of the real and no longer existing structures in the whole park space of more than 400,000 m2, for a virtual and immersive visualization of the Turin 1911 International “Fabulous Exposition” event, settled in the Valentino Park. Particularly, in the presented research, a 3D metric documentation workflow is proposed and validated in order to integrate the potentialities of LiDAR mapping by handheld SLAM-based device, the ZEB REVO Real Time instrument by GeoSLAM (2017 release, instead of TLS consolidated systems. Starting from these kind of models, the crucial aspects of the trajectories performances in the 3D reconstruction and the radiometric content from imaging approaches are considered, specifically by means of compared use of common DSLR cameras and portable sensors.

  3. Sistema de informação geográfica para mapeamento da renda líquida aplicado no planejamento da agricultura irrigada Algorithm to mapping net income applied in irrigated agriculture planning

    Directory of Open Access Journals (Sweden)

    Wilson A. Silva

    2008-03-01

    Full Text Available O objetivo deste trabalho foi desenvolver um algoritmo na linguagem computacional MATLAB para aplicações em sistemas de informações geográficas, visando ao mapeamento da renda líquida maximizada de cultivos irrigados. O estudo foi desenvolvido para as culturas do maracujá, da cana-de-açúcar, do abacaxi e do mamão, em área de aproximadamente 2.500 ha, localizada no município de Campos dos Goytacazes, norte do Estado do Rio de Janeiro. Os dados de entrada do algoritmo foram informações edafoclimáticas, funções de resposta das culturas à água, dados de localização geográfica da área e índices econômicos referentes ao custo do processo produtivo. Os resultados permitiram concluir que o algoritmo desenvolvido se mostrou eficiente para o mapeamento da renda líquida de cultivos irrigados, sendo capaz de localizar áreas que apresentam maiores retornos econômicos.The objective of this work was to develop an algorithm in MATLAB computational language to be applied in geographical information systems to map net income irrigated crops to plan irrigated agriculture. The study was developed for the crops of passion fruit plant, sugarcane, pineapple and papaya, in an area of approximately 2,500 ha, at Campos dos Goytacazes, located at north of the State of Rio de Janeiro, Brazil. The algorithm input data were: information about soil, climate, crop water response functions, geographical location and economical cost indexes of the productive process. The results allowed concluding that developed algorithm was efficient to map net income of irrigated crops, been able to locate areas that present larger economical net income.

  4. Semioptimal practicable algorithmic cooling

    International Nuclear Information System (INIS)

    Elias, Yuval; Mor, Tal; Weinstein, Yossi

    2011-01-01

    Algorithmic cooling (AC) of spins applies entropy manipulation algorithms in open spin systems in order to cool spins far beyond Shannon's entropy bound. Algorithmic cooling of nuclear spins was demonstrated experimentally and may contribute to nuclear magnetic resonance spectroscopy. Several cooling algorithms were suggested in recent years, including practicable algorithmic cooling (PAC) and exhaustive AC. Practicable algorithms have simple implementations, yet their level of cooling is far from optimal; exhaustive algorithms, on the other hand, cool much better, and some even reach (asymptotically) an optimal level of cooling, but they are not practicable. We introduce here semioptimal practicable AC (SOPAC), wherein a few cycles (typically two to six) are performed at each recursive level. Two classes of SOPAC algorithms are proposed and analyzed. Both attain cooling levels significantly better than PAC and are much more efficient than the exhaustive algorithms. These algorithms are shown to bridge the gap between PAC and exhaustive AC. In addition, we calculated the number of spins required by SOPAC in order to purify qubits for quantum computation. As few as 12 and 7 spins are required (in an ideal scenario) to yield a mildly pure spin (60% polarized) from initial polarizations of 1% and 10%, respectively. In the latter case, about five more spins are sufficient to produce a highly pure spin (99.99% polarized), which could be relevant for fault-tolerant quantum computing.
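    The elementary compression step underlying these algorithms can be checked numerically: an ideal reversible 3-spin compression reorders the eight basis-state probabilities so that the four largest land on target-bit 0, boosting the target bias from ε to (3ε − ε³)/2. The recursion below is an idealized sketch that assumes a fresh triple of spins at the boosted bias is available at each level, which glosses over the spin-count bookkeeping the SOPAC analysis is actually about.

    ```python
    def compress3(eps):
        """Bias of the target spin after an ideal 3-spin compression step.

        Sorts the eight state probabilities of three spins with equal bias eps
        and packs the largest four onto target-bit = 0; for this distribution
        the result equals the textbook formula (3*eps - eps**3) / 2.
        """
        p0, p1 = (1 + eps) / 2, (1 - eps) / 2
        probs = sorted((p0**3, p0*p0*p1, p0*p0*p1, p0*p0*p1,
                        p0*p1*p1, p0*p1*p1, p0*p1*p1, p1**3), reverse=True)
        return 2 * sum(probs[:4]) - 1

    eps = 0.01
    for _ in range(5):          # idealized recursion over five levels
        eps = compress3(eps)
    print(eps)                  # roughly (3/2)**5 times the initial bias
    ```

    The near-(3/2)^k growth per recursion level is what lets a modest number of spins reach high purification, until ε approaches 1 and the ε³ term bites.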

  5. Algorithmic cryptanalysis

    CERN Document Server

    Joux, Antoine

    2009-01-01

    Illustrating the power of algorithms, Algorithmic Cryptanalysis describes algorithmic methods with cryptographically relevant examples. Focusing on both private- and public-key cryptographic algorithms, it presents each algorithm either as a textual description, in pseudo-code, or in a C code program.Divided into three parts, the book begins with a short introduction to cryptography and a background chapter on elementary number theory and algebra. It then moves on to algorithms, with each chapter in this section dedicated to a single topic and often illustrated with simple cryptographic applic

  6. An improved ASIFT algorithm for indoor panorama image matching

    Science.gov (United States)

    Fu, Han; Xie, Donghai; Zhong, Ruofei; Wu, Yu; Wu, Qiong

    2017-07-01

    The generation of 3D models for indoor objects and scenes is an attractive tool for digital city, virtual reality and SLAM purposes. Panoramic images are becoming increasingly common in such applications due to their ability to capture the complete environment in one single image with a large field of view. The extraction and matching of image feature points are important and difficult steps in three-dimensional reconstruction, and ASIFT is a state-of-the-art algorithm to implement these functions. Compared with the SIFT algorithm, the ASIFT algorithm generates more feature points and matches them with higher accuracy, even for panoramic images with obvious distortions. However, the algorithm is time-consuming because of its complex operations, and it does not perform well for some indoor scenes under poor light or without rich textures. To solve this problem, this paper proposes an improved ASIFT algorithm for indoor panoramic images: firstly, the panoramic images are projected into multiple normal perspective images. Secondly, the original ASIFT algorithm is simplified from the affine transformation of tilt and rotation of the images to the tilt affine transformation only. Finally, the results are re-projected into the panoramic image space. Experiments in different environments show that this method can not only ensure the precision of feature point extraction and matching, but also greatly reduce the computing time.
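    The first step of that pipeline, projecting the equirectangular panorama into normal perspective views, can be sketched directly: each pixel of a virtual pinhole camera is turned into a ray, rotated by the view's yaw and pitch, and looked up in the panorama by its longitude and latitude. This is a simplified nearest-neighbour illustration, not the paper's implementation.

    ```python
    import numpy as np

    def pano_to_perspective(pano, fov_deg, yaw_deg, pitch_deg, out_h, out_w):
        """Sample a pinhole view from an equirectangular panorama (nearest neighbour)."""
        H, W = pano.shape[:2]
        f = 0.5 * out_w / np.tan(np.radians(fov_deg) / 2)   # focal length in pixels
        u = np.arange(out_w) - out_w / 2 + 0.5              # pixel grid -> camera rays
        v = np.arange(out_h) - out_h / 2 + 0.5
        uu, vv = np.meshgrid(u, v)
        d = np.stack([uu, vv, np.full_like(uu, f)], axis=-1)
        d /= np.linalg.norm(d, axis=-1, keepdims=True)
        yaw, pitch = np.radians(yaw_deg), np.radians(pitch_deg)
        Ry = np.array([[np.cos(yaw), 0, np.sin(yaw)],       # rotate the view direction
                       [0, 1, 0],
                       [-np.sin(yaw), 0, np.cos(yaw)]])
        Rx = np.array([[1, 0, 0],
                       [0, np.cos(pitch), -np.sin(pitch)],
                       [0, np.sin(pitch), np.cos(pitch)]])
        d = d @ (Ry @ Rx).T
        lon = np.arctan2(d[..., 0], d[..., 2])              # longitude in [-pi, pi]
        lat = np.arcsin(np.clip(d[..., 1], -1, 1))          # latitude in [-pi/2, pi/2]
        px = ((lon / (2 * np.pi) + 0.5) * W).astype(int) % W
        py = np.clip(((lat / np.pi + 0.5) * H).astype(int), 0, H - 1)
        return pano[py, px]

    pano = np.tile(np.arange(180.0)[:, None], (1, 360))     # latitude-gradient test image
    view = pano_to_perspective(pano, 90, 0, 0, 64, 64)
    ```

    Tiling several such views around the sphere removes most of the equirectangular distortion, so the subsequent (simplified, tilt-only) ASIFT stage runs on nearly pinhole-like imagery.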

  7. Applied algebra codes, ciphers and discrete algorithms

    CERN Document Server

    Hardy, Darel W; Walker, Carol L

    2009-01-01

    This book attempts to show the power of algebra in a relatively simple setting. -Mathematical Reviews, 2010 … The book supports learning by doing. In each section we can find many examples which clarify the mathematics introduced in the section, and each section is followed by a series of exercises, of which approximately half are solved at the end of the book. Additionally, the book comes with a CD-ROM containing an interactive version of the book powered by the computer algebra system Scientific Notebook. … the mathematics in the book are developed as needed and the focus of the book lies clearly o

  8. Algorithmic mathematics

    CERN Document Server

    Hougardy, Stefan

    2016-01-01

    Algorithms play an increasingly important role in nearly all fields of mathematics. This book allows readers to develop basic mathematical abilities, in particular those concerning the design and analysis of algorithms as well as their implementation. It presents not only fundamental algorithms like the sieve of Eratosthenes, the Euclidean algorithm, sorting algorithms, algorithms on graphs, and Gaussian elimination, but also discusses elementary data structures, basic graph theory, and numerical questions. In addition, it provides an introduction to programming and demonstrates in detail how to implement algorithms in C++. This textbook is suitable for students who are new to the subject and covers a basic mathematical lecture course, complementing traditional courses on analysis and linear algebra. Both authors have given this "Algorithmic Mathematics" course at the University of Bonn several times in recent years.

  9. Total algorithms

    NARCIS (Netherlands)

    Tel, G.

    We define the notion of total algorithms for networks of processes. A total algorithm enforces that a "decision" is taken by a subset of the processes, and that participation of all processes is required to reach this decision. Total algorithms are an important building block in the design of

  10. Vision Based Autonomous Robot Navigation Algorithms and Implementations

    CERN Document Server

    Chatterjee, Amitava; Nirmal Singh, N

    2013-01-01

    This book is devoted to the theory and development of autonomous navigation of mobile robots using computer vision based sensing mechanisms. Conventional robot navigation systems, utilizing traditional sensors like ultrasonic, IR, GPS or laser sensors, suffer several drawbacks related either to the physical limitations of the sensor or to high cost. Vision sensing has emerged as a popular alternative where cameras can be used to reduce the overall cost while maintaining a high degree of intelligence, flexibility and robustness. This book includes a detailed description of several new approaches for real life vision based autonomous navigation algorithms and SLAM. It presents the concept of how subgoal based goal-driven navigation can be carried out using vision sensing. The development concept of vision based robots for path/line tracking using fuzzy logic is presented, as well as how a low-cost robot can be indigenously developed in the laboratory with microcontroller based sensor systems. The book descri...

  11. Use of SLAM and PVRL4 and identification of pro-HB-EGF as cell entry receptors for wild type phocine distemper virus.

    Directory of Open Access Journals (Sweden)

    Mary M Melia

    Full Text Available Signalling lymphocyte activation molecule (SLAM) has been identified as an immune cell receptor for the morbilliviruses, measles (MV), canine distemper (CDV), rinderpest and peste des petits ruminants (PPRV) viruses, while CD46 is a receptor for vaccine strains of MV. More recently poliovirus receptor-like 4 (PVRL4), also known as nectin 4, has been identified as a receptor for MV, CDV and PPRV on the basolateral surface of polarised epithelial cells. PVRL4 is also up-regulated by MV in human brain endothelial cells. Utilisation of PVRL4 as a receptor by phocine distemper virus (PDV) remains to be demonstrated, as well as confirmation of the use of SLAM. We have observed that unlike wild type (wt) MV or wtCDV, wtPDV strains replicate in African green monkey kidney Vero cells without prior adaptation, suggesting the use of a further receptor. We therefore examined candidate molecules, glycosaminoglycans (GAG), the tetraspan proteins, integrin β and the membrane-bound form of heparin binding epithelial growth factor (proHB-EGF), for receptor usage by wtPDV in Vero cells. We show that wtPDV replicates in Chinese hamster ovary (CHO) cells expressing SLAM and PVRL4. Similar wtPDV titres are produced in Vero and VeroSLAM cells but more limited fusion occurs in the latter. Infection of Vero cells was not inhibited by anti-CD46 antibody. Removal/disruption of GAG decreased fusion but not the titre of virus. Treatment with anti-integrin β antibody increased rather than decreased infection of Vero cells by wtPDV. However, infection was inhibited by antibody to HB-EGF and the virus replicated in CHO-proHB-EGF cells, indicating use of this molecule as a receptor. Common use of SLAM and PVRL4 by morbilliviruses increases the possibility of cross-species infection. Lack of a requirement for wtPDV adaptation to Vero cells raises the possibility of usage of proHB-EGF as a receptor in vivo, but requires further investigation.

  12. Anti-slamming bulbous bow and tunnel stern applications on a novel Deep-V catamaran for improved performance

    Directory of Open Access Journals (Sweden)

    Mehmet Atlar

    2013-06-01

    Full Text Available While displacement-type Deep-V mono hulls have superior seakeeping behaviour at speed, catamarans typically show only modest behaviour in rough seas. It is therefore a logical progression to combine the superior seakeeping performance of a displacement-type Deep-V mono-hull with the high-speed benefits of a catamaran, taking advantage of both hull forms. The displacement Deep-V catamaran concept was developed at Newcastle University, and the University's own multi-purpose research vessel, launched in 2011, pushed the design envelope still further with the successful adoption of a novel anti-slamming bulbous bow and tunnel stern for improved efficiency. This paper presents the hullform development of this unique vessel in order to understand the contribution of the novel bow and stern features to the performance of the Deep-V catamaran. The study also further validates the hull resistance by using advanced numerical analysis methods in conjunction with model tests. The numerical predictions of the hull resistance are assessed against physical model test results and show good agreement.

  13. AN ANALYSIS OF TEN YEARS OF THE FOUR GRAND SLAM MEN'S SINGLES DATA FOR LACK OF INDEPENDENCE OF SET OUTCOMES

    Directory of Open Access Journals (Sweden)

    Denny Meyer

    2006-12-01

    Full Text Available The objective of this paper is to use data from the highest level in men's tennis to assess whether there is any evidence to reject the hypothesis that the two players in a match have a constant probability of winning each set in the match. The data consist of all 4883 matches of grand slam men's singles over the 10-year period from 1995 to 2004. Each match is categorised by its sequence of wins (W) and losses (L) (in set 1, set 2, set 3, ...) relative to the eventual winner. Thus, there are ten categories of matches, from WWW to LLWWW. The methodology involves fitting several probabilistic models to the frequencies of these ten categories. One four-set category is observed to occur significantly more often than the other two. Correspondingly, a couple of the five-set categories occur more frequently than the others. This pattern is consistent when the data is split into two five-year subsets. The data provide significant statistical evidence that the probability of winning a set within a match varies from set to set. The data support the conclusion that, at the highest level of men's singles tennis, the better player (not necessarily the winner) lifts his play in certain situations at least some of the time.
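    The null model being tested can be written down directly: if one player wins each set independently with a constant probability p, every win/loss category relative to the eventual winner has a closed-form probability (either player may be the eventual winner, hence two terms per category). This sketch only computes the null distribution; the paper's contribution is showing the observed frequencies deviate from every such constant-p fit.

    ```python
    from itertools import combinations

    def categories():
        """The ten best-of-5 winner sequences, from WWW to LLWWW."""
        seqs = []
        for losses in range(3):                 # eventual winner drops 0-2 sets
            n = 3 + losses                      # winner always takes the last set
            for pos in combinations(range(n - 1), losses):
                seq = ["W"] * n
                for i in pos:
                    seq[i] = "L"
                seqs.append("".join(seq))
        return seqs

    def category_probs(p):
        """P(category) when one player wins each set independently with prob p."""
        probs = {}
        for s in categories():
            w, l = s.count("W"), s.count("L")
            # first term: the p-player wins; second: the (1-p)-player wins
            probs[s] = p**w * (1 - p)**l + (1 - p)**w * p**l
        return probs

    probs = category_probs(0.6)
    print(len(probs), sum(probs.values()))      # ten categories, total probability 1
    ```

    Fitting p to the observed 4883-match category counts (e.g. by maximum likelihood) and comparing expected with observed frequencies is then a standard goodness-of-fit exercise.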

  14. Modelling and analysis of a compensator burst after a check valve slam with the pressure surge code DYVRO mod. 3

    International Nuclear Information System (INIS)

    Neuhaus, Thorsten; Schaffrath, Andreas

    2009-01-01

    In this contribution, the analysis and calculation of a compensator burst after a pump start and check valve slam with the pressure surge code DYVRO mod. 3 are presented. The compensator burst occurred in the essential service water system (ESWS) of a pressurized water reactor (PWR) in a deviant operation mode. Because the causes were initially unknown, a systematic investigation was performed by TUV NORD SysTec GmbH and Co. KG. The following scenario was identified as most likely: because of maintenance, a heat exchanger had been shut off from the ESWS by a closed valve. Due to the hydrostatic pressure profile, air had been sucked in through this leaky closed valve, forming an air bubble. After the pump start, the water was accelerated against the closed valve, where the air bubble was compressed. The subsequent backflow resulted in a fast closing of a check valve and a pressure surge that caused the compensator burst. Calculations have been performed with the self-developed and validated pressure surge computer code DYVRO mod. 3. The present paper focusses on the modelling of the pipe system, the pump, the check valve and the behaviour of the air bubble, as well as on the simulation of the incident. The calculated maximum pressure in the ESWS is above 3 MPa, approximately four times higher than the design pressure of 0.7 MPa. This pressure increase most likely led to the abrupt compensator failure. (author)
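    The order of magnitude of such a surge can be bounded with the classical Joukowsky relation Δp = ρ·a·Δv for an instantaneous arrest of the flow. The numbers below are assumed for illustration only and are not taken from the incident analysis; they merely show how a few m/s of arrested flow readily produces a multi-MPa surge.

    ```python
    rho = 1000.0    # water density, kg/m^3
    a = 1200.0      # pressure-wave propagation speed in the pipe, m/s (assumed)
    dv = 2.5        # arrested flow velocity, m/s (assumed)

    dp = rho * a * dv           # Joukowsky pressure rise, Pa
    print(dp / 1e6, "MPa")      # -> 3.0 MPa for these assumed values
    ```

    A transient code like DYVRO is still needed because the check-valve closing time, the compressed air bubble and the pipe elasticity all modify this idealized instantaneous-closure bound.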

  15. A Hybrid Chaotic Quantum Evolutionary Algorithm

    DEFF Research Database (Denmark)

    Cai, Y.; Zhang, M.; Cai, H.

    2010-01-01

    A hybrid chaotic quantum evolutionary algorithm is proposed to reduce the amount of computation, speed up convergence and restrain premature phenomena of the quantum evolutionary algorithm. The proposed algorithm adopts a chaotic initialization method to generate the initial population, which will form a pe...... tests. The presented algorithm is applied to urban traffic signal timing optimization and the effect is satisfactory....

  16. Applying genetic algorithms to set the optimal combination of forest fire related variables and model forest fire susceptibility based on data mining models. The case of Dayu County, China.

    Science.gov (United States)

    Hong, Haoyuan; Tsangaratos, Paraskevas; Ilia, Ioanna; Liu, Junzhi; Zhu, A-Xing; Xu, Chong

    2018-07-15

    The main objective of the present study was to utilize Genetic Algorithms (GA) in order to obtain the optimal combination of forest fire related variables, and to apply data mining methods for constructing a forest fire susceptibility map. In the proposed approach, a Random Forest (RF) and a Support Vector Machine (SVM) were used to produce a forest fire susceptibility map for Dayu County, which is located in the southwest of Jiangxi Province, China. For this purpose, historic forest fires and thirteen forest fire related variables were analyzed, namely: elevation, slope angle, aspect, curvature, land use, soil cover, heat load index, normalized difference vegetation index, mean annual temperature, mean annual wind speed, mean annual rainfall, distance to river network and distance to road network. The Natural Break and the Certainty Factor methods were used to classify and weight the thirteen variables, while a multicollinearity analysis was performed to determine the correlation among the variables and decide on their usability. The optimal set of variables determined by the GA limited the number of variables to eight, excluding from the analysis aspect, land use, heat load index, distance to river network and mean annual rainfall. The performance of the forest fire models was evaluated using the area under the Receiver Operating Characteristic curve (ROC-AUC) on the validation dataset. Overall, the RF models gave higher AUC values. The results also showed that the proposed optimized models outperform the original models. Specifically, the optimized RF model gave the best results (0.8495), followed by the original RF (0.8169), while the optimized SVM gave lower values (0.7456) than the RF, although higher than the original SVM (0.7148) model. The study highlights the significance of feature selection techniques in forest fire susceptibility mapping, and shows that data mining methods can be considered a valid approach for forest fire susceptibility modeling.
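    GA-based variable selection of the kind used here encodes each candidate variable as one bit of a chromosome and evolves the bit mask under a wrapper fitness. The sketch below uses synthetic data with thirteen variables (five informative) and a crude correlation-based fitness standing in for the RF/SVM-plus-AUC evaluation of the paper; everything in it is an illustrative assumption.

    ```python
    import random
    import numpy as np

    rng = np.random.default_rng(4)
    random.seed(4)

    # Synthetic stand-in for the fire data: 13 candidate variables, of which
    # only the first five actually carry signal about the binary label.
    n, p, informative = 400, 13, 5
    X = rng.normal(size=(n, p))
    y = (X[:, :informative].sum(axis=1) + rng.normal(0, 1, n) > 0).astype(float)

    def fitness(mask):
        """Crude wrapper score: relevance of the selected sum minus a size penalty."""
        sel = [i for i in range(p) if mask[i]]
        if not sel:
            return 0.0
        return float(np.corrcoef(X[:, sel].sum(axis=1), y)[0, 1]) - 0.01 * len(sel)

    def ga(pop=30, gens=40, pm=0.1):
        P = [[random.randint(0, 1) for _ in range(p)] for _ in range(pop)]
        best = max(P, key=fitness)
        for _ in range(gens):
            nxt = [best[:]]                                  # elitism
            while len(nxt) < pop:
                pa = max(random.sample(P, 2), key=fitness)   # tournament selection
                pb = max(random.sample(P, 2), key=fitness)
                cut = random.randrange(1, p)                 # one-point crossover
                nxt.append([g ^ (random.random() < pm)       # bit-flip mutation
                            for g in pa[:cut] + pb[cut:]])
            P = nxt
            best = max(P, key=fitness)
        return best

    best = ga()
    print(best, fitness(best))
    ```

    Swapping the toy fitness for cross-validated RF or SVM AUC recovers the paper's setup; the size penalty plays the role of the parsimony pressure that pruned the thirteen variables down to eight.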

  17. Identification of Pou5f1, Sox2, and Nanog downstream target genes with statistical confidence by applying a novel algorithm to time course microarray and genome-wide chromatin immunoprecipitation data

    Directory of Open Access Journals (Sweden)

    Xin Li

    2008-06-01

    Full Text Available Abstract Background Target genes of the transcription factor (TF) Pou5f1 (Oct3/4 or Oct4), which is essential for pluripotency maintenance and self-renewal of embryonic stem (ES) cells, have previously been identified based on their response to Pou5f1 manipulation and the occurrence of chromatin-immunoprecipitation (ChIP)-binding sites in promoters. However, many responding genes with binding sites may not be direct targets, because the response may be mediated by other genes and a ChIP-binding site may not be functional in terms of transcription regulation. Results To reduce the number of false positives, we propose to separate responding genes into groups according to direction, magnitude, and time of response, and to apply the false discovery rate (FDR) criterion to each group individually. Using this novel algorithm with stringent statistical criteria (FDR Pou5f1 suppression and published ChIP data, we identified 420 tentative target genes (TTGs) for Pou5f1. The majority of TTGs (372) were down-regulated after Pou5f1 suppression, indicating that Pou5f1 functions as an activator of gene expression when it binds to promoters. Interestingly, many activated genes are potent suppressors of transcription, including polycomb genes, zinc finger TFs, chromatin remodeling factors, and suppressors of signaling. Similar analysis showed that Sox2 and Nanog also function mostly as transcription activators in cooperation with Pou5f1. Conclusion We have identified the most reliable sets of direct target genes for the key pluripotency genes Pou5f1, Sox2, and Nanog, and found that they predominantly function as activators of downstream gene expression. Thus, most genes related to cell differentiation are suppressed indirectly.
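    The statistical core of the algorithm, applying an FDR criterion separately within each response group, can be sketched with the Benjamini-Hochberg step-up procedure. The group names and p-values below are hypothetical; the point is that grouping by direction and time of response before correction changes which genes survive.

    ```python
    def benjamini_hochberg(pvals, alpha=0.05):
        """Indices of discoveries under Benjamini-Hochberg FDR control."""
        m = len(pvals)
        order = sorted(range(m), key=lambda i: pvals[i])
        k = 0
        for rank, i in enumerate(order, start=1):
            if pvals[i] <= alpha * rank / m:
                k = rank            # largest rank passing the step-up criterion
        return set(order[:k])

    # Hypothetical p-values, grouped by direction/time of response as the
    # algorithm prescribes; each group gets its own FDR correction.
    groups = {
        "down_early": [0.001, 0.004, 0.03, 0.2],
        "up_late":    [0.02, 0.5, 0.6, 0.9],
    }
    discoveries = {g: benjamini_hochberg(ps) for g, ps in groups.items()}
    print(discoveries)
    ```

    Pooling all eight p-values into one correction would dilute the strongly responding group with the weak one; per-group correction is what keeps the stringent criterion from discarding coherent responders.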

  18. Development of Kinematic 3D Laser Scanning System for Indoor Mapping and As-Built BIM Using Constrained SLAM

    Directory of Open Access Journals (Sweden)

    Jaehoon Jung

    2015-10-01

    Full Text Available The growing interest in and use of indoor mapping is driving a demand for improved data acquisition, efficiency and productivity in the era of the Building Information Model (BIM). The conventional static laser scanning method suffers from limitations on its operability in complex indoor environments, due to the presence of occlusions. Full scanning of indoor spaces without loss of information requires surveyors to change the scanner position many times, which incurs extra work for the registration of each scanned point cloud. Alternatively, the kinematic 3D laser scanning system proposed herein uses a line-feature-based Simultaneous Localization and Mapping (SLAM) technique for continuous mapping. Moreover, to reduce the uncertainty of line-feature extraction, we incorporated a constrained adjustment based on an assumption that holds for typical indoor environments: that the main structures are formed of parallel or orthogonal line features. The strength of the proposed constrained adjustment is its reduction of the uncertainties of the adjusted lines, leading to a successful data-association process. In the present study, kinematic scanning with and without the constrained adjustment was comparatively evaluated at two test sites, and the results confirmed the effectiveness of the proposed system. The accuracy of the 3D mapping result was additionally evaluated by comparison with reference points acquired by a total station: the average Euclidean distance error was 0.034 m for the seminar room and 0.043 m for the corridor, which satisfies the error tolerance for point cloud acquisition (0.051 m) according to the guidelines of the General Services Administration for BIM accuracy.
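    The parallel-or-orthogonal assumption can be illustrated in miniature: estimate one dominant direction on the 90-degree-periodic circle (by averaging angles multiplied by four) and snap every extracted line to that direction modulo 90 degrees. This is a schematic 2-D sketch of the idea, not the paper's full constrained least-squares adjustment.

    ```python
    import math

    def constrained_line_directions(angles_deg):
        """Adjust noisy 2-D line directions under a 'parallel or orthogonal'
        assumption: estimate a dominant direction, snap lines to it mod 90 deg."""
        # Average on the 90-degree circle: multiply angles by 4, take the
        # circular mean, divide back by 4.
        s = sum(math.sin(math.radians(4 * a)) for a in angles_deg)
        c = sum(math.cos(math.radians(4 * a)) for a in angles_deg)
        dominant = math.degrees(math.atan2(s, c)) / 4
        adjusted = []
        for a in angles_deg:
            k = round((a - dominant) / 90)      # nearest multiple of 90 degrees
            adjusted.append(dominant + 90 * k)
        return adjusted

    # Four noisy wall directions that should be two exactly orthogonal pairs.
    adjusted = constrained_line_directions([1.0, 89.0, -2.0, 91.0])
    print(adjusted)
    ```

    After snapping, corresponding walls are exactly 90 degrees apart, which is the reduced-uncertainty property that makes the subsequent data association reliable.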

  19. A survey of parallel multigrid algorithms

    Science.gov (United States)

    Chan, Tony F.; Tuminaro, Ray S.

    1987-01-01

    A typical multigrid algorithm applied to well-behaved linear-elliptic partial-differential equations (PDEs) is described. Criteria for designing and evaluating parallel algorithms are presented. Before evaluating the performance of some parallel multigrid algorithms, consideration is given to some theoretical complexity results for solving PDEs in parallel and for executing the multigrid algorithm. The effect of mapping and load imbalance on the parallel efficiency of the algorithm is studied.
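A typical multigrid cycle of the kind surveyed here can be sketched, in serial Python, for the 1D Poisson problem -u'' = f with Gauss-Seidel smoothing, full-weighting restriction, and linear-interpolation prolongation. This is a toy illustration of the standard V-cycle, not the survey's parallel formulation; the grid must have a power-of-two number of intervals.

```python
import math

def vcycle(u, f, nu=3):
    """One V-cycle for -u'' = f on [0, 1] with zero boundary values.
    u and f are lists of n+1 nodal values including the boundaries."""
    n = len(u) - 1
    h2 = (1.0 / n) ** 2

    def smooth(sweeps):
        # Gauss-Seidel relaxation on the interior points.
        for _ in range(sweeps):
            for i in range(1, n):
                u[i] = 0.5 * (u[i - 1] + u[i + 1] + h2 * f[i])

    smooth(nu)                       # pre-smoothing
    if n <= 2:
        return u                     # coarsest grid: smoothing solves it
    # Residual of the discrete equation (2u[i]-u[i-1]-u[i+1])/h^2 = f[i].
    r = [0.0] * (n + 1)
    for i in range(1, n):
        r[i] = f[i] - (2 * u[i] - u[i - 1] - u[i + 1]) / h2
    # Full-weighting restriction onto the grid with n/2 intervals.
    rc = [0.0] * (n // 2 + 1)
    for i in range(1, n // 2):
        rc[i] = 0.25 * r[2 * i - 1] + 0.5 * r[2 * i] + 0.25 * r[2 * i + 1]
    ec = vcycle([0.0] * (n // 2 + 1), rc, nu)    # coarse-grid correction
    # Linear-interpolation prolongation, correction, post-smoothing.
    for i in range(1, n):
        if i % 2 == 0:
            u[i] += ec[i // 2]
        else:
            u[i] += 0.5 * (ec[i // 2] + ec[i // 2 + 1])
    smooth(nu)
    return u
```

The parallelization questions discussed in the survey (mapping grids to processors, load imbalance on coarse levels) arise precisely because the coarse grids in this recursion have too few points to keep all processors busy.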

  20. Adaptive switching gravitational search algorithm: an attempt to ...

    Indian Academy of Sciences (India)

    Nor Azlina Ab Aziz

    An adaptive gravitational search algorithm (GSA) that switches between synchronous and ... genetic algorithm (GA), bat-inspired algorithm (BA) and grey wolf optimizer (GWO). ...... heuristic with applications in applied electromagnetics. Prog.

  1. Algorithmic alternatives

    International Nuclear Information System (INIS)

    Creutz, M.

    1987-11-01

    A large variety of Monte Carlo algorithms are being used for lattice gauge simulations. For purely bosonic theories, present approaches are generally adequate; nevertheless, overrelaxation techniques promise savings by a factor of about three in computer time. For fermionic fields the situation is more difficult and less clear. Algorithms which involve an extrapolation to a vanishing step size are all quite closely related. Methods which do not require such an approximation tend to require computer time which grows as the square of the volume of the system. Recent developments combining global accept/reject stages with Langevin or microcanonical updatings promise to reduce this growth to V^(4/3).

  2. Combinatorial algorithms

    CERN Document Server

    Hu, T C

    2002-01-01

    Newly enlarged, updated second edition of a valuable, widely used text presents algorithms for shortest paths, maximum flows, dynamic programming and backtracking. Also discussed are binary trees, heuristic and near optimums, matrix multiplication, and NP-complete problems. 153 black-and-white illus. 23 tables. New to this edition: Chapter 9

  3. Autodriver algorithm

    Directory of Open Access Journals (Sweden)

    Anna Bourmistrova

    2011-02-01

    Full Text Available The autodriver algorithm is an intelligent method to eliminate the need for steering by a driver on a well-defined road. The proposed method performs best on a four-wheel-steering (4WS) vehicle, though it is also applicable to two-wheel-steering (TWS) vehicles. The algorithm is based on making the actual vehicle center of rotation coincide with the road center of curvature by adjusting the kinematic center of rotation. The road center of curvature is assumed to be prior information for a given road, while the dynamic center of rotation is the output of the dynamic equations of motion of the vehicle, using steering angle and velocity measurements as inputs. We use the kinematic condition of steering to set the steering angles in such a way that the kinematic center of rotation of the vehicle sits at a desired point. At low speeds the ideal and actual paths of the vehicle are very close. With increasing forward speed, the road and tire characteristics, along with the motion dynamics of the vehicle, cause the vehicle to turn about time-varying points. By adjusting the steering angles, our algorithm controls the dynamic turning center of the vehicle so that it coincides with the road curvature center, hence keeping the vehicle on a given road autonomously. The position and orientation errors are used as feedback signals in a closed-loop control to adjust the steering angles. The application of the presented autodriver algorithm demonstrates reliable performance under different driving conditions.
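The kinematic condition of steering mentioned above can be sketched in Python: each wheel is steered so that its rolling direction is perpendicular to the line joining it to the desired center of rotation. This is a generic geometric sketch under assumed conventions (vehicle frame with x forward, y to the left), not the authors' controller; the function and parameter names are illustrative.

```python
import math

def wheel_steering_angles(wheels, icr):
    """Steering angle for each wheel of a 4WS vehicle so that the
    kinematic center of rotation sits at `icr` = (cx, cy) in the
    vehicle frame. `wheels` maps a wheel name to its (x, y) position.
    Illustrative sketch only."""
    cx, cy = icr
    angles = {}
    for name, (x, y) in wheels.items():
        rx, ry = x - cx, y - cy          # vector from ICR to the wheel
        a = math.atan2(rx, -ry)          # (rx, ry) rotated +90 degrees
        # Fold into the physical steering range (-pi/2, pi/2].
        if a > math.pi / 2:
            a -= math.pi
        elif a <= -math.pi / 2:
            a += math.pi
        angles[name] = a
    return angles
```

For a bicycle-like layout with wheelbase L and the center of rotation abeam the rear axle at distance R, this reproduces the familiar result: front steering angle atan(L/R), rear angle zero.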

  4. Multisensor data fusion algorithm development

    Energy Technology Data Exchange (ETDEWEB)

    Yocky, D.A.; Chadwick, M.D.; Goudy, S.P.; Johnson, D.K.

    1995-12-01

    This report presents a two-year LDRD research effort into multisensor data fusion. We approached the problem by addressing the available types of data, preprocessing that data, and developing fusion algorithms using that data. The report reflects these three distinct areas. First, the possible data sets for fusion are identified. Second, automated registration techniques for imagery data are analyzed. Third, two fusion techniques are presented. The first fusion algorithm is based on the two-dimensional discrete wavelet transform. Using test images, the wavelet algorithm is compared against intensity modulation and intensity-hue-saturation image fusion algorithms that are available in commercial software. The wavelet approach outperforms the other two fusion techniques by preserving spectral/spatial information more precisely. The wavelet fusion algorithm was also applied to Landsat Thematic Mapper and SPOT panchromatic imagery data. The second algorithm is based on a linear-regression technique. We analyzed the technique using the same Landsat and SPOT data.
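The wavelet fusion idea can be made concrete with a toy one-level 2x2 Haar transform in Python: low-pass (approximation) coefficients of the two images are averaged, while for each detail coefficient the one with the larger magnitude is kept, preserving edges from either source. This is a minimal stand-in for the report's discrete-wavelet-transform fusion, not its implementation; the fusion rule and block size are assumptions.

```python
def _haar2(block):
    # Forward 2x2 Haar transform: (LL, LH, HL, HH) from (a, b, c, d).
    a, b, c, d = block
    return ((a+b+c+d)/4, (a-b+c-d)/4, (a+b-c-d)/4, (a-b-c+d)/4)

def _ihaar2(co):
    # Inverse of _haar2.
    LL, LH, HL, HH = co
    return (LL+LH+HL+HH, LL-LH+HL-HH, LL+LH-HL-HH, LL-LH-HL+HH)

def haar_fuse(A, B):
    """Fuse two equally sized grayscale images (lists of rows, even
    dimensions): average the approximation coefficients, keep the
    detail coefficient of larger magnitude. Toy sketch only."""
    h, w = len(A), len(A[0])
    F = [[0.0] * w for _ in range(h)]
    for i in range(0, h, 2):
        for j in range(0, w, 2):
            ca = _haar2((A[i][j], A[i][j+1], A[i+1][j], A[i+1][j+1]))
            cb = _haar2((B[i][j], B[i][j+1], B[i+1][j], B[i+1][j+1]))
            fused = [(ca[0] + cb[0]) / 2] + [
                x if abs(x) >= abs(y) else y
                for x, y in zip(ca[1:], cb[1:])]
            (F[i][j], F[i][j+1],
             F[i+1][j], F[i+1][j+1]) = _ihaar2(fused)
    return F
```

The max-magnitude detail rule is one common choice; it is what lets the wavelet approach retain spatial detail from the sharper image while blending the overall intensity, the behavior the report credits for outperforming intensity-modulation and IHS fusion.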

  5. A Parallel Butterfly Algorithm

    KAUST Repository

    Poulson, Jack; Demanet, Laurent; Maxwell, Nicholas; Ying, Lexing

    2014-01-01

    The butterfly algorithm is a fast algorithm which approximately evaluates a discrete analogue of the integral transform (Equation Presented.) at large numbers of target points when the kernel, K(x, y), is approximately low-rank when restricted to subdomains satisfying a certain simple geometric condition. In d dimensions with O(N^d) quasi-uniformly distributed source and target points, when each appropriate submatrix of K is approximately rank-r, the running time of the algorithm is at most O(r^2 N^d log N). A parallelization of the butterfly algorithm is introduced which, assuming a message latency of α and per-process inverse bandwidth of β, executes in at most (Equation Presented.) time using p processes. This parallel algorithm was then instantiated in the form of the open-source DistButterfly library for the special case where K(x, y) = exp(iΦ(x, y)), where Φ(x, y) is a black-box, sufficiently smooth, real-valued phase function. Experiments on Blue Gene/Q demonstrate impressive strong-scaling results for important classes of phase functions: hyperbolic Radon transforms with quasi-uniform sources and an analogue of a three-dimensional generalized Radon transform were observed to strong-scale from 1 node/16 cores up to 1,024 nodes/16,384 cores with greater than 90% and 82% efficiency, respectively. © 2014 Society for Industrial and Applied Mathematics.

  7. Algorithmic Self

    DEFF Research Database (Denmark)

    Markham, Annette

    This paper takes an actor network theory approach to explore some of the ways that algorithms co-construct identity and relational meaning in contemporary use of social media. Based on intensive interviews with participants as well as activity logging and data tracking, the author presents a richly layered set of accounts to help build our understanding of how individuals relate to their devices, search systems, and social network sites. This work extends critical analyses of the power of algorithms in implicating the social self by offering narrative accounts from multiple perspectives. It also contributes an innovative method for blending actor network theory with symbolic interaction to grapple with the complexity of everyday sensemaking practices within networked global information flows.

  8. Algorithm FIRE-Feynman Integral REduction

    International Nuclear Information System (INIS)

    Smirnov, A.V.

    2008-01-01

    The recently developed algorithm FIRE performs the reduction of Feynman integrals to master integrals. It is based on a number of strategies, such as applying the Laporta algorithm, the s-bases algorithm, region-bases and integrating explicitly over loop momenta when possible. Currently it is being used in complicated three-loop calculations.

  9. Applied physics

    International Nuclear Information System (INIS)

    Anon.

    1980-01-01

    The Physics Division research program that is dedicated primarily to applied research goals involves the interaction of energetic particles with solids. This applied research is carried out in conjunction with the basic research studies from which it evolved.

  10. Navegación Autónoma Asistida Basada en SLAM para una Silla de Ruedas Robotizada en Entornos Restringidos

    Directory of Open Access Journals (Sweden)

    Fernando A. Auat Cheein

    2011-04-01

    Full Text Available Abstract: This paper presents an interface specially designed for the navigation of a robotized wheelchair within restricted environments. The interface operates in two modes: an autonomous mode and a non-autonomous mode. In the non-autonomous mode, the wheelchair is driven by means of a joystick adapted to the user's capabilities, which governs the vehicle's movement within the environment. The autonomous mode is executed when the user has to turn through a given angle within the environment. The turning strategy is carried out by a maneuvering algorithm compatible with the vehicle's kinematics and by use of a SLAM (Simultaneous Localization and Mapping) algorithm. The autonomous mode comprises two modules: a path-planning module and a control module. The path-planning module uses the map information provided by the SLAM algorithm to trace a safe path, compatible with the wheelchair, that allows the vehicle to reach the desired orientation. The control module governs the vehicle's movement along the planned path by means of a trajectory-following controller. The controller references are updated with the estimate of the wheelchair's pose within the environment obtained by the SLAM algorithm. Experimental results with a real robotized wheelchair accompany this work. Keywords: Autonomous vehicles, Biomedical systems, Robot navigation

  11. Reactive Collision Avoidance Algorithm

    Science.gov (United States)

    Scharf, Daniel; Acikmese, Behcet; Ploen, Scott; Hadaegh, Fred

    2010-01-01

    The reactive collision avoidance (RCA) algorithm allows a spacecraft to find a fuel-optimal trajectory for avoiding an arbitrary number of colliding spacecraft in real time while accounting for acceleration limits. In addition to spacecraft, the technology can be used for vehicles that can accelerate in any direction, such as helicopters and submersibles. In contrast to existing, passive algorithms that simultaneously design trajectories for a cluster of vehicles working to achieve a common goal, RCA is implemented onboard a spacecraft only when an imminent collision is detected, and then plans a collision avoidance maneuver for only that host vehicle, thus preventing a collision in an off-nominal situation that passive algorithms cannot handle. An example scenario for such a situation might be when a spacecraft in the cluster is approaching another one, but enters safe mode and begins to drift. Functionally, the RCA detects colliding spacecraft, plans an evasion trajectory by solving the Evasion Trajectory Problem (ETP), and then recovers after the collision is avoided. A direct optimization approach was used to develop the algorithm so it can run in real time. In this innovation, a parameterized class of avoidance trajectories is specified, and then the optimal trajectory is found by searching over the parameters. The class of trajectories is selected as bang-off-bang, as motivated by optimal control theory: an avoiding spacecraft first applies full acceleration in a constant direction, then coasts, and finally applies full acceleration to stop. The parameter optimization problem can be solved offline and stored as a look-up table of values; using a look-up table allows the algorithm to run in real time. Given a colliding spacecraft, the properties of the collision geometry serve as indices of the look-up table that gives the optimal trajectory. For multiple colliding spacecraft, the set of trajectories that avoid all spacecraft is rapidly searched on
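The bang-off-bang parameterization can be illustrated with a 1D toy in Python: a rest-to-rest maneuver accelerates for a burn time t1, coasts for t2, then decelerates for t1, and a grid search over (t1, t2) finds the minimum thrust-on time that covers a required clearance distance within an allowed time. This sketches the offline search that would populate a look-up table; the parameter names and the 1D setting are illustrative, not the RCA implementation.

```python
def plan_bang_off_bang(d, a_max, t_max, n=200):
    """Grid search over burn time t1 and coast time t2 for a 1D
    rest-to-rest maneuver (accelerate t1, coast t2, decelerate t1),
    minimizing total thrust-on time 2*t1 subject to covering at least
    distance d within t_max. Returns (thrust-on time, t1, t2) or None."""
    best = None
    for i in range(1, n + 1):
        t1 = (t_max / 2) * i / n
        for j in range(n + 1):
            t2 = (t_max - 2 * t1) * j / n
            if t2 < 0:
                break
            # Displacement: 0.5*a*t1^2 + (a*t1)*t2 + 0.5*a*t1^2.
            x = a_max * t1 * t1 + a_max * t1 * t2
            if x >= d:
                fuel = 2 * t1                    # thrust-on time
                if best is None or fuel < best[0]:
                    best = (fuel, t1, t2)
                break                            # longer coasts cost no less fuel
    return best
```

In the flight version the search is done once offline over collision-geometry parameters, and the stored optima are indexed at run time, which is what makes the onboard reaction fast.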

  12. Monte Carlo algorithms with absorbing Markov chains: Fast local algorithms for slow dynamics

    International Nuclear Information System (INIS)

    Novotny, M.A.

    1995-01-01

    A class of Monte Carlo algorithms which incorporate absorbing Markov chains is presented. In a particular limit, the lowest order of these algorithms reduces to the n-fold way algorithm. These algorithms are applied to study the escape from the metastable state in the two-dimensional square-lattice nearest-neighbor Ising ferromagnet in an unfavorable applied field, and the agreement with theoretical predictions is very good. It is demonstrated that the higher-order algorithms can be many orders of magnitude faster than either the traditional Monte Carlo or n-fold way algorithms
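The n-fold way limit mentioned above is a rejection-free update: spins are grouped into classes by flip rate, a class is chosen with probability proportional to its total rate, and simulated time advances by an exponential waiting time. A generic single-step sketch in Python (not the paper's higher-order absorbing-Markov-chain algorithms):

```python
import math
import random

def nfold_step(class_rates, rng=random):
    """One rejection-free (n-fold way) update: pick an event class with
    probability proportional to its rate, and return (class index,
    time increment). `class_rates` are the total flip rates per class."""
    total = sum(class_rates)
    r = rng.random() * total
    acc = 0.0
    chosen = len(class_rates) - 1        # fallback against rounding
    for k, rate in enumerate(class_rates):
        acc += rate
        if r < acc:
            chosen = k
            break
    # Exponential waiting time; 1 - random() lies in (0, 1].
    dt = -math.log(1.0 - rng.random()) / total
    return chosen, dt
```

Because every step performs a flip, the method stays efficient deep in a metastable state where ordinary Metropolis updates would be rejected almost always; the higher-order algorithms in the paper extend this idea by absorbing longer event sequences.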

  13. New Methodology for Optimal Flight Control using Differential Evolution Algorithms applied on the Cessna Citation X Business Aircraft – Part 2. Validation on Aircraft Research Flight Level D Simulator

    OpenAIRE

    Yamina BOUGHARI; Georges GHAZI; Ruxandra Mihaela BOTEZ; Florian THEEL

    2017-01-01

    In this paper the Cessna Citation X clearance criteria were evaluated for a new flight controller. The flight control laws were optimized and designed for the Cessna Citation X flight envelope by combining the Differential Evolution algorithm, the Linear Quadratic Regulator method, and the Proportional Integral controller during a previous research presented in Part 1. The optimal controllers were used to reach satisfactory aircraft dynamics and safe flight operations with respect to the augme...

  14. Neural Networks and Genetic Algorithms Applied for Implementing the Management Model “Triple A” in a Supply Chain. Case: Collection Centers of Raw Milk in the Azuay Province

    Directory of Open Access Journals (Sweden)

    Juan Pablo Bermeo M.

    2016-01-01

    Full Text Available To be successful, companies need a combination of several factors, the most important of which is supply chain management. This paper proposes the use of intelligent systems such as Artificial Neural Networks (ANN) and Genetic Algorithms, together with monitoring indicators, as support systems for implementing the Triple A management model, which focuses on Agility, Adaptability and Alignment: "agility" is the speed of response to changes in demand, "adaptability" is the ability to tailor the supply chain to market fluctuations, and "alignment" is the alignment of the chain between consumers and suppliers. The neural network was trained to work as a demand predictor and improves the "agility" of the supply chain; the genetic algorithm is used to obtain optimal pickup routes from providers, which supports the "alignment" of the suppliers' product with final customers in the supply chain; together, the neural network and the genetic algorithm help "adapt" the supply chain to variations in demand and suppliers. For the model to succeed, however, other factors are also needed, such as the use of indicators and the training of staff in administering the Triple A management model in the supply chain.
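A genetic algorithm for pickup routing of the kind described can be sketched in a few lines of Python: chromosomes are permutations of the collection centers, selection is by tournament, crossover keeps a slice of one parent and fills from the other, and mutation swaps two stops. The operators, rates, and population sizes here are illustrative assumptions; the paper does not publish its GA configuration.

```python
import random

def route_length(route, dist):
    # Closed tour: depot 0 -> pickup stops -> depot 0.
    path = [0] + list(route) + [0]
    return sum(dist[path[i]][path[i + 1]] for i in range(len(path) - 1))

def ga_route(dist, pop_size=40, gens=200, seed=1):
    """Tiny GA over permutations of stops 1..n-1 minimizing tour length.
    `dist` is an n x n symmetric distance matrix including the depot."""
    rng = random.Random(seed)
    stops = list(range(1, len(dist)))
    pop = [rng.sample(stops, len(stops)) for _ in range(pop_size)]
    best = min(pop, key=lambda r: route_length(r, dist))
    for _ in range(gens):
        nxt = []
        for _ in range(pop_size):
            # Tournament selection of two parents.
            a = min(rng.sample(pop, 3), key=lambda r: route_length(r, dist))
            b = min(rng.sample(pop, 3), key=lambda r: route_length(r, dist))
            # Order-based crossover: keep a slice of a, fill from b.
            i, j = sorted(rng.sample(range(len(stops)), 2))
            kept = a[i:j]
            child = kept + [s for s in b if s not in kept]
            if rng.random() < 0.2:                    # swap mutation
                p, q = rng.sample(range(len(stops)), 2)
                child[p], child[q] = child[q], child[p]
            nxt.append(child)
        pop = nxt
        cand = min(pop, key=lambda r: route_length(r, dist))
        if route_length(cand, dist) < route_length(best, dist):
            best = cand
    return best
```

For realistic milk-collection instances one would add capacity and time-window constraints to the fitness function; the permutation encoding and operators stay the same.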

  15. Majorization arrow in quantum-algorithm design

    International Nuclear Information System (INIS)

    Latorre, J.I.; Martin-Delgado, M.A.

    2002-01-01

    We apply majorization theory to study the quantum algorithms known so far and find that there is a majorization principle underlying the way they operate. Grover's algorithm is a neat instance of this principle where majorization works step by step until the optimal target state is found. Extensions of this situation are also found in algorithms based in quantum adiabatic evolution and the family of quantum phase-estimation algorithms, including Shor's algorithm. We state that in quantum algorithms the time arrow is a majorization arrow

  16. Comparing Online Algorithms for Bin Packing Problems

    DEFF Research Database (Denmark)

    Epstein, Leah; Favrholdt, Lene Monrad; Kohrt, Jens Svalgaard

    2012-01-01

    The relative worst-order ratio is a measure of the quality of online algorithms. In contrast to the competitive ratio, this measure compares two online algorithms directly instead of using an intermediate comparison with an optimal offline algorithm. In this paper, we apply the relative worst-order ratio to online algorithms for several common variants of the bin packing problem. We mainly consider pairs of algorithms that are not distinguished by the competitive ratio and show that the relative worst-order ratio prefers the intuitively better algorithm of each pair.
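Two classic online bin packing algorithms of the kind such measures compare can be sketched in Python: First-Fit places each item in the lowest-indexed bin with room, Worst-Fit in the emptiest open bin. These are standard textbook algorithms shown to make the setting concrete; which pairs the paper actually analyzes is not stated in this abstract.

```python
def first_fit(items, cap=1.0):
    """Online First-Fit: each item goes into the lowest-indexed bin
    with enough remaining room, opening a new bin otherwise.
    Returns the list of bin loads."""
    bins = []
    for x in items:
        for i, load in enumerate(bins):
            if load + x <= cap + 1e-9:
                bins[i] = load + x
                break
        else:
            bins.append(x)
    return bins

def worst_fit(items, cap=1.0):
    """Online Worst-Fit: each item goes into the emptiest open bin if
    it fits, otherwise into a new bin."""
    bins = []
    for x in items:
        if bins:
            i = min(range(len(bins)), key=lambda k: bins[k])
            if bins[i] + x <= cap + 1e-9:
                bins[i] += x
                continue
        bins.append(x)
    return bins
```

A direct comparison measure such as the relative worst-order ratio runs both algorithms on worst-case orderings of the same multiset of items and compares the bin counts to each other, rather than to the offline optimum.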

  17. Hybrid employment recommendation algorithm based on Spark

    Science.gov (United States)

    Li, Zuoquan; Lin, Yubei; Zhang, Xingming

    2017-08-01

    Aiming at the real-time application of collaborative filtering employment recommendation algorithm (CF), a clustering collaborative filtering recommendation algorithm (CCF) is developed, which applies hierarchical clustering to CF and narrows the query range of neighbour items. In addition, to solve the cold-start problem of content-based recommendation algorithm (CB), a content-based algorithm with users’ information (CBUI) is introduced for job recommendation. Furthermore, a hybrid recommendation algorithm (HRA) which combines CCF and CBUI algorithms is proposed, and implemented on Spark platform. The experimental results show that HRA can overcome the problems of cold start and data sparsity, and achieve good recommendation accuracy and scalability for employment recommendation.

  18. Parallel algorithms

    CERN Document Server

    Casanova, Henri; Robert, Yves

    2008-01-01

    ""…The authors of the present book, who have extensive credentials in both research and instruction in the area of parallelism, present a sound, principled treatment of parallel algorithms. … This book is very well written and extremely well designed from an instructional point of view. … The authors have created an instructive and fascinating text. The book will serve researchers as well as instructors who need a solid, readable text for a course on parallelism in computing. Indeed, for anyone who wants an understandable text from which to acquire a current, rigorous, and broad vi

  19. Algorithm 865

    DEFF Research Database (Denmark)

    Gustavson, Fred G.; Reid, John K.; Wasniewski, Jerzy

    2007-01-01

    We present subroutines for the Cholesky factorization of a positive-definite symmetric matrix and for solving corresponding sets of linear equations. They exploit cache memory by using the block hybrid format proposed by the authors in a companion article. The matrix is packed into n(n + 1)/2 real variables, and the speed is usually better than that of the LAPACK algorithm that uses full storage (n^2 variables). Included are subroutines for rearranging a matrix whose upper or lower triangular part is packed by columns to this format and for the inverse rearrangement. Also included is a kernel...
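The n(n + 1)/2 packed storage being exploited here can be illustrated with a plain column-packed Cholesky factorization in Python. This shows the storage scheme and indexing only; the TOMS routines use the authors' block hybrid format precisely because plain packed storage, as below, has poor cache behavior.

```python
import math

def packed_cholesky(a_packed, n):
    """Cholesky factorization A = L L^T with the lower triangle of A
    packed by columns in a flat list of n(n+1)/2 entries.
    Returns the packed factor L (left-looking algorithm)."""
    a = list(a_packed)

    def idx(i, j):
        # Entry (i, j), i >= j, of the column-packed lower triangle.
        return j * n - j * (j - 1) // 2 + (i - j)

    for j in range(n):
        # Subtract the contributions of the previous columns.
        for k in range(j):
            ljk = a[idx(j, k)]
            for i in range(j, n):
                a[idx(i, j)] -= a[idx(i, k)] * ljk
        d = math.sqrt(a[idx(j, j)])
        for i in range(j, n):
            a[idx(i, j)] /= d
    return a
```

For A = [[4, 2], [2, 3]], packed as [4, 2, 3], the factor is [2, 1, sqrt(2)], i.e. L = [[2, 0], [1, sqrt(2)]].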

  20. Hurricane slams gulf operations

    International Nuclear Information System (INIS)

    Anon.

    1992-01-01

    This paper reports that reports of damage by Hurricane Andrew escalated last week as operators stepped up inspections of oil and gas installations in the Gulf of Mexico. By midweek, companies operating in the gulf and South Louisiana were beginning to agree that earlier assessments of damage had only scratched the surface. Damage reports included scores of lost, toppled, or crippled platforms, pipeline ruptures, and oil slicks. By midweek the U.S. Coast Guard had received reports of 79 oil spills. Even platforms capable of resuming production were in some instances being curtailed because of damaged pipelines. Offshore service companies said another 2-4 weeks could be needed to fully assess Andrew's wrath. Lack of personnel and equipment was slowing damage assessment and repair.

  1. Visual Trajectory Based SLAM

    NARCIS (Netherlands)

    Esteban, I.

    2008-01-01

    SLAM stands for Simultaneous Localization And Mapping. It is a fundamental topic in Autonomous Systems and Robotics, as it represents one of the most basic skills that any robot requires in order to be truly autonomous. This skill will allow a robot placed in an unknown environment at an unknown

  2. Algorithms for boundary detection in radiographic images

    International Nuclear Information System (INIS)

    Gonzaga, Adilson; Franca, Celso Aparecido de

    1996-01-01

    Edge-detection techniques applied to radiographic digital images are discussed. Some algorithms have been implemented, and the results are displayed to enhance boundaries or hide details. An algorithm applied to a preprocessed, contrast-enhanced image is proposed and the results are discussed.
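A common boundary-detection building block of the kind discussed is the 3x3 Sobel operator, sketched here in plain Python as a generic example (the article's specific algorithms are not reproduced in this abstract):

```python
def sobel_magnitude(img):
    """Gradient magnitude via the 3x3 Sobel operator on a grayscale
    image given as a list of rows; border pixels are left at zero."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # Horizontal and vertical Sobel responses.
            gx = (img[y-1][x+1] + 2*img[y][x+1] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y][x-1] - img[y+1][x-1])
            gy = (img[y+1][x-1] + 2*img[y+1][x] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y-1][x] - img[y-1][x+1])
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out
```

Thresholding the magnitude image then either enhances boundaries (keep high-gradient pixels) or hides detail (suppress them), the two uses mentioned above; for noisy radiographs a smoothing pre-process is usually applied first.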

  3. Effect of normalization methods on the performance of supervised learning algorithms applied to HTSeq-FPKM-UQ data sets: 7SK RNA expression as a predictor of survival in patients with colon adenocarcinoma.

    Science.gov (United States)

    Shahriyari, Leili

    2017-11-03

    One of the main challenges in machine learning (ML) is choosing an appropriate normalization method. Here, we examine the effect of various normalization methods on analyzing FPKM upper quartile (FPKM-UQ) RNA sequencing data sets. We collect the HTSeq-FPKM-UQ files of patients with colon adenocarcinoma from the TCGA-COAD project. We compare the three most common normalization methods: scaling, standardizing using the z-score, and vector normalization, by visualizing the normalized data set and evaluating the performance of 12 supervised learning algorithms on it. Additionally, for each of these normalization methods, we use two different normalization strategies: normalizing samples (files) or normalizing features (genes). Regardless of normalization method, a support vector machine (SVM) model with the radial basis function kernel had the maximum accuracy (78%) in predicting the vital status of the patients. However, the fitting time of the SVM depended on the normalization method, and it reached its minimum when files were normalized to unit length. Furthermore, among all 12 learning algorithms and 6 normalization techniques, the Bernoulli naive Bayes model after standardizing files had the best performance in terms of maximizing accuracy and minimizing fitting time. We also investigated the effect of dimensionality-reduction methods on the performance of the supervised ML algorithms. Reducing the dimension of the data set did not increase the maximum accuracy of 78%. However, it led to the discovery of 7SK RNA gene expression as a predictor of survival in patients with colon adenocarcinoma, with an accuracy of 78%. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com
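The two normalization strategies contrasted above (per-feature versus per-sample) can be made concrete with a minimal Python sketch; these are generic textbook formulas, not the paper's pipeline, and the population standard deviation is an assumed convention here:

```python
import math

def zscore_features(X):
    """Standardize each feature (column) to zero mean and unit variance
    (population standard deviation); constant columns are left as-is."""
    cols = list(zip(*X))
    out_cols = []
    for c in cols:
        m = sum(c) / len(c)
        s = math.sqrt(sum((v - m) ** 2 for v in c) / len(c)) or 1.0
        out_cols.append([(v - m) / s for v in c])
    return [list(r) for r in zip(*out_cols)]

def unit_length_samples(X):
    """Scale each sample (row) to Euclidean length 1, the per-file
    vector normalization that minimized SVM fitting time above."""
    out = []
    for r in X:
        n = math.sqrt(sum(v * v for v in r)) or 1.0
        out.append([v / n for v in r])
    return out
```

The distinction matters because z-scoring columns changes the geometry of each gene independently, while unit-length rows equalize the overall expression magnitude of each file, which is why the two strategies can give quite different classifier behavior.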

  4. Accounting for the Effects of Surface BRDF on Satellite Cloud and Trace-Gas Retrievals: A New Approach Based on Geometry-Dependent Lambertian-Equivalent Reflectivity Applied to OMI Algorithms

    Science.gov (United States)

    Vasilkov, Alexander; Qin, Wenhan; Krotkov, Nickolay; Lamsal, Lok; Spurr, Robert; Haffner, David; Joiner, Joanna; Yang, Eun-Su; Marchenko, Sergey

    2017-01-01

    Most satellite nadir ultraviolet and visible cloud, aerosol, and trace-gas algorithms make use of climatological surface reflectivity databases. For example, cloud and NO2 retrievals for the Ozone Monitoring Instrument (OMI) use monthly gridded surface reflectivity climatologies that do not depend upon the observation geometry. In reality, reflection of incoming direct and diffuse solar light from land or ocean surfaces is sensitive to the sun-sensor geometry. This dependence is described by the bidirectional reflectance distribution function (BRDF). To account for the BRDF, we propose to use a new concept of geometry-dependent Lambertian equivalent reflectivity (LER). Implementation within the existing OMI cloud and NO2 retrieval infrastructure requires changes only to the input surface reflectivity database. The geometry-dependent LER is calculated using a vector radiative transfer model with high spatial resolution BRDF information from the Moderate Resolution Imaging Spectroradiometer (MODIS) over land and the Cox-Munk slope distribution over ocean with a contribution from water-leaving radiance. We compare the geometry-dependent and climatological LERs for two wavelengths, 354 and 466 nm, that are used in OMI cloud algorithms to derive cloud fractions. A detailed comparison of the cloud fractions and pressures derived with climatological and geometry-dependent LERs is carried out. Geometry-dependent LER and corresponding retrieved cloud products are then used as inputs to our OMI NO2 algorithm. We find that replacing the climatological OMI-based LERs with geometry-dependent LERs can increase NO2 vertical columns by up to 50% in highly polluted areas; the differences include both BRDF effects and biases between the MODIS and OMI-based surface reflectance data sets. Only minor changes to NO2 columns (within 5 %) are found over unpolluted and overcast areas.

  6. A Hybrid Algorithm for Optimizing Multi- Modal Functions

    Institute of Scientific and Technical Information of China (English)

    Li Qinghua; Yang Shida; Ruan Youlin

    2006-01-01

    A new genetic algorithm based on musical performance is presented. Its novelty is that it mimics the musical process of searching for a perfect state of harmony, which greatly increases its robustness and at the same time gives the algorithm a new interpretation. Combining the advantages of the new genetic algorithm, the simplex algorithm and tabu search, a hybrid algorithm is proposed. To verify its effectiveness, the hybrid algorithm is applied to some typical numerical function-optimization problems that are poorly solved by traditional genetic algorithms. The experimental results show that the hybrid algorithm is fast and reliable.

  7. A retrodictive stochastic simulation algorithm

    International Nuclear Information System (INIS)

    Vaughan, T.G.; Drummond, P.D.; Drummond, A.J.

    2010-01-01

    In this paper we describe a simple method for inferring the initial states of systems evolving stochastically according to master equations, given knowledge of the final states. This is achieved through the use of a retrodictive stochastic simulation algorithm which complements the usual predictive stochastic simulation approach. We demonstrate the utility of this new algorithm by applying it to example problems, including the derivation of likely ancestral states of a gene sequence given a Markovian model of genetic mutation.

  8. Autonomous algorithms for image restoration

    OpenAIRE

    Griniasty , Meir

    1994-01-01

    We describe a general theoretical framework for algorithms that adaptively tune all their parameters during the restoration of a noisy image. The adaptation procedure is based on a mean-field approach known as "deterministic annealing", and is reminiscent of the "deterministic Boltzmann machine". The algorithm is less time-consuming than its simulated-annealing alternative. We apply the theory to several architectures and compare their performances.

  9. Neutronic rebalance algorithms for SIMMER

    International Nuclear Information System (INIS)

    Soran, P.D.

    1976-05-01

    Four algorithms to solve the two-dimensional neutronic rebalance equations in SIMMER are investigated. Results of the study are presented and indicate that a matrix decomposition technique with a variable convergence criterion is the best solution algorithm in terms of accuracy and calculational speed. Rebalance numerical stability problems are examined. The results of the study can be applied to other neutron transport codes which use discrete ordinates techniques

  10. Applied Electromagnetics

    Energy Technology Data Exchange (ETDEWEB)

    Yamashita, H.; Marinova, I.; Cingoski, V. (eds.)

    2002-07-01

    These proceedings contain papers relating to the 3rd Japanese-Bulgarian-Macedonian Joint Seminar on Applied Electromagnetics. Included are the following groups: Numerical Methods I; Electrical and Mechanical System Analysis and Simulations; Inverse Problems and Optimizations; Software Methodology; Numerical Methods II; Applied Electromagnetics.

  11. Applied Electromagnetics

    International Nuclear Information System (INIS)

    Yamashita, H.; Marinova, I.; Cingoski, V.

    2002-01-01

    These proceedings contain papers relating to the 3rd Japanese-Bulgarian-Macedonian Joint Seminar on Applied Electromagnetics. Included are the following groups: Numerical Methods I; Electrical and Mechanical System Analysis and Simulations; Inverse Problems and Optimizations; Software Methodology; Numerical Methods II; Applied Electromagnetics

  12. Subcubic Control Flow Analysis Algorithms

    DEFF Research Database (Denmark)

    Midtgaard, Jan; Van Horn, David

    We give the first direct subcubic algorithm for performing control flow analysis of higher-order functional programs. Despite the long-held belief that inclusion-based flow analysis could not surpass the ``cubic bottleneck,'' we apply known set compression techniques to obtain an algorithm that runs in time O(n^3/log n) on a unit-cost random-access memory model machine. Moreover, we refine the initial flow analysis into two more precise analyses incorporating notions of reachability. We give subcubic algorithms for these more precise analyses and relate them to an existing analysis from...

  13. Partitional clustering algorithms

    CERN Document Server

    2015-01-01

    This book summarizes the state-of-the-art in partitional clustering. Clustering, the unsupervised classification of patterns into groups, is one of the most important tasks in exploratory data analysis. Primary goals of clustering include gaining insight into, classifying, and compressing data. Clustering has a long and rich history that spans a variety of scientific disciplines including anthropology, biology, medicine, psychology, statistics, mathematics, engineering, and computer science. As a result, numerous clustering algorithms have been proposed since the early 1950s. Among these algorithms, partitional (nonhierarchical) ones have found many applications, especially in engineering and computer science. This book provides coverage of consensus clustering, constrained clustering, large scale and/or high dimensional clustering, cluster validity, cluster visualization, and applications of clustering. Examines clustering as it applies to large and/or high-dimensional data sets commonly encountered in reali...

  14. Applied superconductivity

    CERN Document Server

    Newhouse, Vernon L

    1975-01-01

    Applied Superconductivity, Volume II, is part of a two-volume series on applied superconductivity. The first volume dealt with electronic applications and radiation detection, and contains a chapter on liquid helium refrigeration. The present volume discusses magnets, electromechanical applications, accelerators, and microwave and rf devices. The book opens with a chapter on high-field superconducting magnets, covering applications and magnet design. Subsequent chapters discuss superconductive machinery such as superconductive bearings and motors; rf superconducting devices; and future prospec

  15. Algorithmic chemistry

    Energy Technology Data Exchange (ETDEWEB)

    Fontana, W.

    1990-12-13

    In this paper complex adaptive systems are defined by a self-referential loop in which objects encode functions that act back on these objects. A model for this loop is presented. It uses a simple recursive formal language, derived from the lambda-calculus, to provide a semantics that maps character strings into functions that manipulate symbols on strings. The interaction between two functions, or algorithms, is defined naturally within the language through function composition, and results in the production of a new function. An iterated map acting on sets of functions and a corresponding graph representation are defined. Their properties are useful for discussing the behavior of a fixed-size ensemble of randomly interacting functions. This ``function gas'', or ``Turing gas'', is studied under various conditions, and evolves cooperative interaction patterns of considerable intricacy. These patterns adapt under the influence of perturbations consisting in the addition of new random functions to the system. Different organizations emerge depending on the availability of self-replicators.

  16. Applied mathematics

    CERN Document Server

    Logan, J David

    2013-01-01

    Praise for the Third Edition"Future mathematicians, scientists, and engineers should find the book to be an excellent introductory text for coursework or self-study as well as worth its shelf space for reference." -MAA Reviews Applied Mathematics, Fourth Edition is a thoroughly updated and revised edition on the applications of modeling and analyzing natural, social, and technological processes. The book covers a wide range of key topics in mathematical methods and modeling and highlights the connections between mathematics and the applied and nat

  17. Large scale tracking algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Hansen, Ross L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Love, Joshua Alan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Melgaard, David Kennett [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Karelitz, David B. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Pitts, Todd Alan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Zollweg, Joshua David [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Anderson, Dylan Z. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Nandy, Prabal [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Whitlow, Gary L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bender, Daniel A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Byrne, Raymond Harry [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-01-01

    Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but unfortunately will also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied towards detection algorithms also changes. For low resolution sensors, "blob" tracking is the norm. For higher resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.

  18. Neural network design with combined backpropagation and creeping random search learning algorithms applied to the determination of retained austenite in TRIP steels; Diseno de redes neuronales con aprendizaje combinado de retropropagacion y busqueda aleatoria progresiva aplicado a la determinacion de austenita retenida en aceros TRIP

    Energy Technology Data Exchange (ETDEWEB)

    Toda-Caraballo, I.; Garcia-Mateo, C.; Capdevila, C.

    2010-07-01

    At the beginning of the nineties, industrial interest in TRIP steels led to a significant increase in research and applications in this field. In this work, the flexibility of neural networks for modelling complex properties is used to tackle the problem of determining the retained austenite content in TRIP steels. By applying a combination of two learning algorithms (backpropagation and creeping random search) to the neural network, a model has been created that enables the prediction of retained austenite in low-Si / low-Al multiphase steels as a function of processing parameters. (Author). 34 refs.

  19. New Methodology for Optimal Flight Control using Differential Evolution Algorithms applied on the Cessna Citation X Business Aircraft – Part 2. Validation on Aircraft Research Flight Level D Simulator

    Directory of Open Access Journals (Sweden)

    Yamina BOUGHARI

    2017-06-01

    Full Text Available In this paper the Cessna Citation X clearance criteria were evaluated for a new flight controller. The flight control laws were optimized and designed for the Cessna Citation X flight envelope by combining the Differential Evolution algorithm, the Linear Quadratic Regulator method, and the Proportional Integral controller in previous research presented in Part 1. The optimal controllers were used to achieve satisfactory aircraft dynamics and safe flight operations with respect to the augmentation systems' handling qualities and design requirements. Furthermore, the number of controllers used to control the aircraft over its flight envelope was reduced using Linear Fractional Representation features. To validate the controller over the whole flight envelope, linear stability, eigenvalue, and handling-quality criteria, in addition to nonlinear analysis criteria, were investigated to assess the business aircraft for flight control clearance and certification. The optimized gains provide very good stability margins: the eigenvalue analysis shows that the aircraft is highly stable, very good flying qualities of the linear aircraft models are ensured across the entire flight envelope, and robustness is demonstrated with respect to uncertainties due to mass and center-of-gravity variations.

  20. Applied Enzymology.

    Science.gov (United States)

    Manoharan, Asha; Dreisbach, Joseph H.

    1988-01-01

    Describes some examples of chemical and industrial applications of enzymes. Includes a background, a discussion of structure and reactivity, enzymes as therapeutic agents, enzyme replacement, enzymes used in diagnosis, industrial applications of enzymes, and immobilizing enzymes. Concludes that applied enzymology is an important factor in…

  1. Control algorithms for single inverter dual induction motor system applied to railway traction; Commande algorithmique d'un systeme mono-onduleur bimachine asynchrone destine a la traction ferroviaire

    Energy Technology Data Exchange (ETDEWEB)

    Pena Eguiluz, R.

    2002-11-15

    The goal of this work concerns the modelling and behaviour characterisation of a single-inverter dual-induction-motor system applied to a railway traction bogie (BB36000) in order to design its control. The first part of this work is dedicated to a detailed description of the overall system. The influence of internal perturbations (motor parameter variations) and external perturbations (pantograph detachment, adherence loss, stick-slip) on the system was analysed considering field-oriented control applied to each motor of the bogie (the classical traction structure). In a second part, a novel propulsion structure is proposed. It is composed of a single pulse-width-modulated two-level voltage source inverter supplying two parallel-connected induction motors, which generate the traction force transmitted to the bogie wheels. The locomotive body represents the common load for the two motors. Several co-operative control strategies (CS) are studied: the mean CS, the double mean CS, the master-slave switched CS and the mean differential CS. In addition, an electric-mode observer structure appropriate for these different controls has been studied. These controls were validated by applying the perturbations to the models using the SABER solver. This approach is close to quasi-experimentation, because the mechanical and electrical system components were modelled in the MAST language and the sampled control was created by a C-code program in the SABER environment. The third part is dedicated to the suppression of the mechanical sensor and its accommodation in the co-operative control strategies. The partial speed reconstruction methods are: the fundamental frequency relation, the mechanical Kalman filter, the variable structure observer and MRAS. Finally, the hardware configuration of the experimental realisation is described. (author)

  2. Applied dynamics

    CERN Document Server

    Schiehlen, Werner

    2014-01-01

    Applied Dynamics is an important branch of engineering mechanics widely applied to mechanical and automotive engineering, aerospace and biomechanics as well as control engineering and mechatronics. The computational methods presented are based on common fundamentals. For this purpose analytical mechanics turns out to be very useful where D’Alembert’s principle in the Lagrangian formulation proves to be most efficient. The method of multibody systems, finite element systems and continuous systems are treated consistently. Thus, students get a much better understanding of dynamical phenomena, and engineers in design and development departments using computer codes may check the results more easily by choosing models of different complexity for vibration and stress analysis.

  3. Preliminary Use of the Seismo-Lineament Analysis Method (SLAM) to Investigate Seismogenic Faulting in the Grand Canyon Area, Northern Arizona

    Science.gov (United States)

    Cronin, V. S.; Cleveland, D. M.; Prochnow, S. J.

    2007-12-01

    This is a progress report on our application of the Seismo-Lineament Analysis Method (SLAM) to the eastern Grand Canyon area of northern Arizona. SLAM is a new integrated method for identifying potentially seismogenic faults using earthquake focal-mechanism solutions, geomorphic analysis and field work. There are two nodal planes associated with any double-couple focal-mechanism solution, one of which is thought to coincide with the fault that produced the earthquake; the slip vector is normal to the other (auxiliary) plane. When no uncertainty in the orientation of the fault-plane solution is reported, we use the reported vertical and horizontal uncertainties in the focal location to define a tabular uncertainty volume whose orientation coincides with that of the fault-plane solution. The intersection of the uncertainty volume and the ground surface (represented by the DEM) is termed a seismo-lineament. An image of the DEM surface is illuminated perpendicular to the strike of the seismo-lineament to accentuate geomorphic features within the seismo-lineament that may be related to seismogenic faulting. This evaluation of structural geomorphology is repeated for several different azimuths and elevations of illumination. A map is compiled that includes possible geomorphic indicators of faulting as well as previously mapped faults within each seismo-lineament, constituting a set of hypotheses for the possible location of seismogenic fault segments that must be evaluated through fieldwork. A fault observed in the field that is located within a seismo-lineament, and that has an orientation and slip characteristics that are statistically compatible with the fault-plane solution, is considered potentially seismogenic. We compiled a digital elevation model (DEM) of the Grand Canyon area from published data sets. We used earthquake focal-mechanism solutions produced by David Brumbaugh (2005, BSSA, v. 95, p. 1561-1566) for five M > 3.5 events reported between 1989 and 1995.

  4. Applied optics

    International Nuclear Information System (INIS)

    Orszag, A.; Antonetti, A.

    1988-01-01

    The 1988 progress report of the Applied Optics laboratory of the Polytechnic School (France) is presented. The optical fiber activities are focused on the development of an optical gyrometer containing a resonance cavity. The following domains are included in the research program: infrared laser physics, laser sources, semiconductor physics, multiple-photon ionization and nonlinear optics. Investigations in the biomedical, biological and biophysical domains are carried out. The published papers and the congress communications are listed [fr]

  5. Recent results on Howard's algorithm

    DEFF Research Database (Denmark)

    Miltersen, P.B.

    2012-01-01

    is generally recognized as fast in practice, until recently, its worst case time complexity was poorly understood. However, a surge of results since 2009 has led us to a much more satisfactory understanding of the worst case time complexity of the algorithm in the various settings in which it applies...

  6. Associative Algorithms for Computational Creativity

    Science.gov (United States)

    Varshney, Lav R.; Wang, Jun; Varshney, Kush R.

    2016-01-01

    Computational creativity, the generation of new, unimagined ideas or artifacts by a machine that are deemed creative by people, can be applied in the culinary domain to create novel and flavorful dishes. In fact, we have done so successfully using a combinatorial algorithm for recipe generation combined with statistical models for recipe ranking…

  7. Normalization based K means Clustering Algorithm

    OpenAIRE

    Virmani, Deepali; Taneja, Shweta; Malhotra, Geetika

    2015-01-01

    K-means is an effective clustering technique used to separate similar data into groups based on initial centroids of clusters. In this paper, a Normalization-based K-means clustering algorithm (N-K means) is proposed. The proposed N-K means algorithm applies normalization to the available data prior to clustering, and also calculates the initial centroids based on weights. Experimental results prove the betterment of the proposed N-K means clustering algorithm over existing...
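A minimal sketch of the normalization-before-clustering idea, assuming simple min-max scaling and a plain k-means loop (the paper's weight-based centroid initialization is not reproduced):

```python
import numpy as np

def minmax_normalize(X):
    """Scale each feature to [0, 1] so no attribute dominates the distance."""
    lo, hi = X.min(axis=0), X.max(axis=0)
    return (X - lo) / np.where(hi > lo, hi - lo, 1.0)

def kmeans(X, k, iters=100, seed=0):
    """Plain Lloyd's k-means with random initial centroids drawn from X."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest centroid.
        labels = np.argmin(((X[:, None] - centroids) ** 2).sum(axis=2), axis=1)
        # Recompute centroids; keep old one if a cluster empties out.
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return labels, centroids
```

Without the normalization step, a feature measured on a large scale (say, thousands) would swamp the Euclidean distance and effectively decide the clustering on its own.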

  8. Alternative confidence measure for local matching stereo algorithms

    CSIR Research Space (South Africa)

    Ndhlovu, T

    2009-11-01

    Full Text Available The authors present a confidence measure applied to individual disparity estimates in local matching stereo correspondence algorithms. It aims at identifying textureless areas, where most local matching algorithms fail. The confidence measure works...

  9. Evaluation of the performance of different firefly algorithms to the ...

    African Journals Online (AJOL)

    of firefly algorithms are applied to solve the nonlinear ELD problem. ... problem using those recent variants and the classical firefly algorithm for different test cases. Efficiency ...... International Journal of Machine. Learning and Computing, Vol.

  10. A novel progressively swarmed mixed integer genetic algorithm for ...

    African Journals Online (AJOL)

    MIGA) which inherits the advantages of binary and real coded Genetic Algorithm approach. The proposed algorithm is applied for the conventional generation cost minimization Optimal Power Flow (OPF) problem and for the Security ...

  11. Learning algorithms and automatic processing of languages

    International Nuclear Information System (INIS)

    Fluhr, Christian Yves Andre

    1977-01-01

    This research thesis concerns the field of artificial intelligence. It addresses learning algorithms applied to automatic processing of languages. The author first briefly describes some mechanisms of human intelligence in order to describe how these mechanisms are simulated on a computer. He outlines the specific role of learning in various manifestations of intelligence. Then, based on the Markov's algorithm theory, the author discusses the notion of learning algorithm. Two main types of learning algorithms are then addressed: firstly, an 'algorithm-teacher dialogue' type sanction-based algorithm which aims at learning how to solve grammatical ambiguities in submitted texts; secondly, an algorithm related to a document system which structures semantic data automatically obtained from a set of texts in order to be able to understand by references to any question on the content of these texts

  12. Pseudo-deterministic Algorithms

    OpenAIRE

    Goldwasser , Shafi

    2012-01-01

    In this talk we describe a new type of probabilistic algorithm which we call Bellagio algorithms: a randomized algorithm which is guaranteed to run in expected polynomial time, and to produce a correct and unique solution with high probability. These algorithms are pseudo-deterministic: they cannot be distinguished from deterministic algorithms in polynomial time by a probabilistic polynomial-time observer with black-box access to the algorithm. We show a necessary an...

  13. Boolean Queries Optimization by Genetic Algorithms

    Czech Academy of Sciences Publication Activity Database

    Húsek, Dušan; Owais, S.S.J.; Krömer, P.; Snášel, Václav

    2005-01-01

    Roč. 15, - (2005), s. 395-409 ISSN 1210-0552 R&D Projects: GA AV ČR 1ET100300414 Institutional research plan: CEZ:AV0Z10300504 Keywords : evolutionary algorithms * genetic algorithms * genetic programming * information retrieval * Boolean query Subject RIV: BB - Applied Statistics, Operational Research

  14. Parallelization of TMVA Machine Learning Algorithms

    CERN Document Server

    Hajili, Mammad

    2017-01-01

    This report reflects my work on parallelization of TMVA machine learning algorithms integrated into the ROOT Data Analysis Framework during a summer internship at CERN. The report consists of four important parts: the data set used in training and validation, the algorithms to which multiprocessing was applied, the parallelization techniques, and the changes in execution time as a function of the number of workers.

  15. Smoothed Analysis of Local Search Algorithms

    NARCIS (Netherlands)

    Manthey, Bodo; Dehne, Frank; Sack, Jörg-Rüdiger; Stege, Ulrike

    2015-01-01

    Smoothed analysis is a method for analyzing the performance of algorithms for which classical worst-case analysis fails to explain the performance observed in practice. Smoothed analysis has been applied to explain the performance of a variety of algorithms in the last years. One particular class of

  16. Effects of visualization on algorithm comprehension

    Science.gov (United States)

    Mulvey, Matthew

    Computer science students are expected to learn and apply a variety of core algorithms which are an essential part of the field. Any one of these algorithms by itself is not necessarily extremely complex, but remembering the large variety of algorithms and the differences between them is challenging. To address this challenge, we present a novel algorithm visualization tool designed to enhance students' understanding of Dijkstra's algorithm by allowing them to discover the rules of the algorithm for themselves. It is hoped that a deeper understanding of the algorithm will help students correctly select, adapt and apply the appropriate algorithm when presented with a problem to solve, and that what is learned here will be applicable to the design of other visualization tools designed to teach different algorithms. Our visualization tool is currently in the prototype stage, and this thesis discusses the pedagogical approach that informs its design, as well as the results of some initial usability testing. Finally, to clarify the direction for further development of the tool, four different variations of the prototype were implemented, and the instructional effectiveness of each was assessed by having a small sample of participants use the different versions of the prototype and then take a quiz to assess their comprehension of the algorithm.
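For reference, the algorithm such a tool visualizes can be stated compactly. A textbook priority-queue formulation of Dijkstra's algorithm in Python (the adjacency-list encoding is an assumption):

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path distances from source.

    graph: dict mapping node -> list of (neighbor, edge_weight) pairs,
    with non-negative weights (a requirement of Dijkstra's algorithm).
    """
    dist = {source: 0}
    heap = [(0, source)]          # (tentative distance, node)
    visited = set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in visited:          # stale entry; node already settled
            continue
        visited.add(u)
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):   # relax edge (u, v)
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist
```

The "discoverable rules" a learner can extract from a visualization map directly onto the loop above: settle the closest unvisited node, then relax its outgoing edges.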

  17. Can risk assessment predict suicide in secondary mental healthcare? Findings from the South London and Maudsley NHS Foundation Trust Biomedical Research Centre (SLaM BRC) Case Register.

    Science.gov (United States)

    Lopez-Morinigo, Javier-David; Fernandes, Andrea C; Shetty, Hitesh; Ayesa-Arriola, Rosa; Bari, Ashraful; Stewart, Robert; Dutta, Rina

    2018-06-02

    The predictive value of suicide risk assessment in secondary mental healthcare remains unclear. This study aimed to investigate the extent to which clinical risk assessment ratings can predict suicide among people receiving secondary mental healthcare. Retrospective inception cohort study (n = 13,758) from the South London and Maudsley NHS Foundation Trust (SLaM) (London, UK) linked with national mortality data (n = 81 suicides). Cox regression models assessed survival from the last suicide risk assessment and ROC curves evaluated the performance of risk assessment total scores. Hopelessness (RR = 2.24, 95% CI 1.05-4.80, p = 0.037) and having a significant loss (RR = 1.91, 95% CI 1.03-3.55, p = 0.041) were significantly associated with suicide in the multivariable Cox regression models. However, screening statistics for the best cut-off point (4-5) of the risk assessment total score were: sensitivity 0.65 (95% CI 0.54-0.76), specificity 0.62 (95% CI 0.62-0.63), positive predictive value 0.01 (95% CI 0.01-0.01) and negative predictive value 0.99 (95% CI 0.99-1.00). Although suicide was linked with hopelessness and having a significant loss, risk assessment performed poorly in predicting such an uncommon outcome in a large case register of patients receiving secondary mental healthcare.
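The screening statistics quoted above follow directly from a 2x2 confusion table. The sketch below recomputes them from cell counts approximately reconstructed from the reported rates and cohort size (the exact counts are assumptions, not taken from the paper):

```python
def screening_stats(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV from a 2x2 confusion table."""
    return {
        "sensitivity": tp / (tp + fn),   # P(test positive | outcome)
        "specificity": tn / (tn + fp),   # P(test negative | no outcome)
        "ppv": tp / (tp + fp),           # P(outcome | test positive)
        "npv": tn / (tn + fn),           # P(no outcome | test negative)
    }

# Cell counts approximately reconstructed from the reported rates
# (81 suicides among n = 13,758; sensitivity ~0.65, specificity ~0.62).
stats = screening_stats(tp=53, fp=5197, fn=28, tn=8480)
```

This makes the paper's point concrete: with an outcome this rare (81 of 13,758), even moderate sensitivity and specificity drive the positive predictive value down to about 1%.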

  18. Applied geodesy

    International Nuclear Information System (INIS)

    Turner, S.

    1987-01-01

    This volume is based on the proceedings of the CERN Accelerator School's course on Applied Geodesy for Particle Accelerators held in April 1986. The purpose was to record and disseminate the knowledge gained in recent years on the geodesy of accelerators and other large systems. The latest methods for positioning equipment to sub-millimetric accuracy in deep underground tunnels several tens of kilometers long are described, as well as such sophisticated techniques as the Navstar Global Positioning System and the Terrameter. Automation of better known instruments such as the gyroscope and Distinvar is also treated along with the highly evolved treatment of components in a modern accelerator. Use of the methods described can be of great benefit in many areas of research and industrial geodesy such as surveying, nautical and aeronautical engineering, astronomical radio-interferometry, metrology of large components, deformation studies, etc

  19. Applied mathematics

    International Nuclear Information System (INIS)

    Nedelec, J.C.

    1988-01-01

    The 1988 progress report of the Applied Mathematics Center (Polytechnic School, France) is presented. The research fields of the Center are scientific computation, probability and statistics, and video image synthesis. The research topics developed are: the analysis of numerical methods, the mathematical analysis of fundamental models in physics and mechanics, the numerical solution of complex models related to industrial problems, stochastic calculus and Brownian motion, stochastic partial differential equations, the identification of adaptive filtering parameters, discrete element systems, statistics, stochastic control, and image synthesis techniques for education and research programs. The published papers, the congress communications and the theses are listed [fr]

  20. Kidnapping Detection and Recognition in Previous Unknown Environment

    Directory of Open Access Journals (Sweden)

    Yang Tian

    2017-01-01

    Full Text Available An unanticipated event referred to as kidnapping makes the localization estimate incorrect. In a previously unknown environment, an incorrect localization result causes an incorrect mapping result in Simultaneous Localization and Mapping (SLAM). In this situation, the explored and unexplored areas become divided, which makes kidnapping recovery difficult. To provide sufficient information on kidnapping, a framework to judge whether kidnapping has occurred and to identify the type of kidnapping with filter-based SLAM is proposed. The framework is called double kidnapping detection and recognition (DKDR) and performs two checks, before and after the "update" process, with different metrics in real time. To explain one of the principles of DKDR, we describe a property of filter-based SLAM that corrects the mapping result of the environment using the current observations after the "update" process. Two classical filter-based SLAM algorithms, Extended Kalman Filter (EKF) SLAM and Particle Filter (PF) SLAM, are modified to show that DKDR can be simply and widely applied to existing filter-based SLAM algorithms. Furthermore, a technique to determine adapted thresholds for the metrics in real time without previous data is presented. Both simulated and experimental results demonstrate the validity and accuracy of the proposed method.
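One common filter-based check of the kind DKDR builds on compares the innovation (observation minus prediction) against its expected covariance: an abnormally large normalized innovation suggests the filter's pose estimate is wrong, as after a kidnapping. A minimal sketch, assuming a linear measurement model (the DKDR metrics themselves are not reproduced here):

```python
import numpy as np

def innovation_check(z, z_pred, H, P, R, gate=9.21):
    """Flag a possible kidnapping when the normalized innovation squared
    exceeds a chi-square gate (9.21 is roughly the 99th percentile for
    2 degrees of freedom).

    z: measurement, z_pred: predicted measurement, H: measurement Jacobian,
    P: state covariance, R: measurement noise covariance. Illustrative only.
    """
    S = H @ P @ H.T + R                      # innovation covariance
    nu = z - z_pred                          # innovation
    d2 = float(nu @ np.linalg.inv(S) @ nu)   # squared Mahalanobis distance
    return d2 > gate, d2
```

Under normal operation the innovation stays within the gate; a sudden large jump of the robot (a kidnapping) produces an observation far from the prediction and trips it, which is the cue for the detection-and-recognition stage.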