WorldWideScience

Sample records for algorithm-based automatic contiguation

  1. The Algorithm for Algorithms: An Evolutionary Algorithm Based on Automatic Designing of Genetic Operators

    Directory of Open Access Journals (Sweden)

    Dazhi Jiang

    2015-01-01

At present, a wide range of evolutionary algorithms is available to researchers and practitioners. Despite their great diversity, virtually all of these algorithms share one feature: they have been designed manually. A fundamental question is therefore: are there algorithms that can design evolutionary algorithms automatically? More precisely: can a computer construct an algorithm that generates algorithms according to the requirements of a problem? In this paper, a novel evolutionary algorithm based on the automatic design of genetic operators is presented to address these questions. The resulting algorithm not only explores solutions in the problem space, as most traditional evolutionary algorithms do, but also automatically generates genetic operators in the operator space. To verify the performance of the proposed algorithm, comprehensive experiments are conducted on 23 well-known benchmark optimization problems. The results show that the proposed algorithm can outperform the standard differential evolution algorithm in terms of convergence speed and solution accuracy, which indicates that algorithms designed automatically by computers can compete with algorithms designed by human beings.
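The two-space idea in the abstract, solutions evolving in the problem space while operators evolve in the operator space, can be sketched as a toy in Python. Here an "operator" is just a Gaussian step size and credit assignment is raw improvement; both are assumptions of this sketch, not the paper's actual operator representation:

```python
import random

def sphere(x):
    """Benchmark objective: sum of squares (minimise)."""
    return sum(v * v for v in x)

def evolve(dim=5, pop_size=20, n_ops=4, generations=200, seed=1):
    """Toy two-space search: solutions improve in the problem space while
    mutation operators (here just Gaussian step sizes) evolve in the
    operator space according to the improvement they produce."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    ops = [rng.uniform(0.01, 1.0) for _ in range(n_ops)]  # operator = step size
    credit = [0.0] * n_ops                                # improvement per operator
    for _ in range(generations):
        for i, x in enumerate(pop):
            k = rng.randrange(n_ops)
            child = [v + rng.gauss(0, ops[k]) for v in x]
            gain = sphere(x) - sphere(child)
            if gain > 0:                                  # accept only improvements
                pop[i] = child
                credit[k] += gain
        # operator-space evolution: perturb a clone of the most successful
        # operator and use it to replace the least successful one
        best = max(range(n_ops), key=credit.__getitem__)
        worst = min(range(n_ops), key=credit.__getitem__)
        ops[worst] = max(1e-3, ops[best] * rng.uniform(0.5, 1.5))
        credit = [0.0] * n_ops
    return min(sphere(x) for x in pop)

print(evolve())
```

Because children are accepted only when they improve the objective, running more generations with the same seed can only lower the best objective value.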

  2. The research of automatic speed control algorithm based on Green CBTC

    Science.gov (United States)

    Lin, Ying; Xiong, Hui; Wang, Xiaoliang; Wu, Youyou; Zhang, Chuanqi

    2017-06-01

Automatic speed control is one of the core technologies of train operation control systems. It is a typical multi-objective optimization control problem, balancing punctuality, comfort, energy saving, and precise parking. Automatic train speed control is now widely used in metro and inter-city railways, and it has been found to effectively reduce the driver's workload and improve operation quality. However, the algorithms currently in use perform poorly in terms of energy saving, sometimes worse than manual driving. To address this problem, this paper proposes an automatic speed control algorithm based on the Green CBTC system. The algorithm adjusts the operating state of the train to improve the utilization rate of regenerative braking feedback energy while still meeting the punctuality, comfort, and precise-parking targets. As a result, the energy consumption of the Green CBTC system is lower than that of a traditional CBTC system. Simulation results show that the proposed algorithm effectively reduces energy consumption by improving the utilization rate of regenerative braking feedback energy.
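The multi-objective trade-off the abstract describes (punctuality, comfort, energy, with credit for reused regenerative braking energy) can be illustrated with a simple cost function over a sampled speed profile. The point-mass model, the weights, and the regeneration efficiency are assumptions of this sketch, not the Green CBTC algorithm:

```python
def profile_cost(speeds, dt, trip_time, trip_dist,
                 w_time=1.0, w_comfort=0.2, w_energy=0.05,
                 regen_eff=0.6, mass=2.0e5):
    """Illustrative multi-objective cost for a candidate speed profile
    (speeds in m/s, sampled every dt seconds). Braking energy is credited
    back at efficiency regen_eff, so profiles whose braking energy can be
    reused score better. All weights and the point-mass model are
    assumptions of this sketch."""
    time_err = (len(speeds) * dt - trip_time) ** 2       # punctuality
    dist_err = (sum(speeds) * dt - trip_dist) ** 2       # precise parking
    accels = [(b - a) / dt for a, b in zip(speeds, speeds[1:])]
    comfort = sum(a * a for a in accels)                 # acceleration penalty
    traction = sum(mass * a * v * dt for a, v in zip(accels, speeds) if a > 0)
    regen = regen_eff * sum(-mass * a * v * dt
                            for a, v in zip(accels, speeds) if a < 0)
    energy = traction - regen                            # net energy drawn
    return w_time * (time_err + dist_err) + w_comfort * comfort + w_energy * energy
```

With a fixed profile, raising `regen_eff` lowers the cost whenever the profile contains braking, which is the mechanism the abstract exploits.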

  3. Automatic differentiation algorithms in model analysis

    NARCIS (Netherlands)

    Huiskes, M.J.

    2002-01-01

    Title: Automatic differentiation algorithms in model analysis
    Author: M.J. Huiskes
    Date: 19 March, 2002

    In this thesis automatic differentiation algorithms and derivative-based methods
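For readers unfamiliar with the topic, forward-mode automatic differentiation, one of the techniques the thesis studies, can be demonstrated with dual numbers. This sketch is independent of the thesis itself:

```python
class Dual:
    """Minimal forward-mode automatic differentiation via dual numbers:
    carry a value and its derivative through arithmetic."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # product rule: (uv)' = u'v + uv'
        return Dual(self.val * o.val, self.dot * o.val + self.val * o.dot)
    __rmul__ = __mul__

def derivative(f, x):
    """Evaluate f at x with unit derivative seed and read off f'(x)."""
    return f(Dual(x, 1.0)).dot

# d/dx (x*x + 3x) at x = 2 is 2*2 + 3 = 7
print(derivative(lambda x: x * x + 3 * x, 2.0))  # -> 7.0
```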

  4. Learning algorithms and automatic processing of languages

    International Nuclear Information System (INIS)

    Fluhr, Christian Yves Andre

    1977-01-01

This research thesis concerns the field of artificial intelligence. It addresses learning algorithms applied to the automatic processing of languages. The author first briefly describes some mechanisms of human intelligence and how these mechanisms are simulated on a computer, and outlines the specific role of learning in various manifestations of intelligence. Then, based on the theory of Markov algorithms, the author discusses the notion of a learning algorithm. Two main types of learning algorithms are addressed: first, a sanction-based algorithm of the 'algorithm-teacher dialogue' type, which aims at learning how to resolve grammatical ambiguities in submitted texts; second, an algorithm related to a document system, which automatically structures semantic data obtained from a set of texts so that any question on the content of these texts can be answered by reference to them.

  5. [Automatic Sleep Stage Classification Based on an Improved K-means Clustering Algorithm].

    Science.gov (United States)

    Xiao, Shuyuan; Wang, Bei; Zhang, Jian; Zhang, Qunfeng; Zou, Junzhong

    2016-10-01

Sleep stage scoring is a hotspot in the fields of medicine and neuroscience. Visual inspection of sleep recordings is laborious, and the results may vary between clinicians. Automatic sleep stage classification algorithms can reduce this manual workload, but limitations remain when they encounter complicated and changeable clinical cases. The purpose of this paper is to develop an automatic sleep staging algorithm based on the characteristics of actual sleep data. In the proposed improved K-means clustering algorithm, the initial centers are selected using a notion of density, avoiding the randomness of the original K-means algorithm. Meanwhile, the cluster centers are updated according to the 'three-sigma rule' during the iteration, to reduce the influence of outliers. The proposed method was tested and analyzed on overnight sleep data from healthy persons and from patients with sleep disorders after continuous positive airway pressure (CPAP) treatment. The automatic sleep stage classification results were compared with visual inspection by qualified clinicians, and the average accuracy reached 76%. Together with an analysis of the morphological diversity of the sleep data, this shows that the proposed improved K-means algorithm is feasible and valid for clinical practice.
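The two modifications described, density-based seeding and three-sigma trimming of cluster updates, can be sketched for one-dimensional data as follows. The density radius and the minimum cluster size for trimming are assumptions of this illustration, not the paper's parameters:

```python
import statistics

def density_seeds(data, k, radius):
    """Pick k initial centres by local density (neighbour count within
    `radius`), replacing the random initialisation of plain K-means."""
    scored = sorted(data, key=lambda p: -sum(1 for q in data if abs(q - p) <= radius))
    seeds = []
    for p in scored:
        if all(abs(p - s) > radius for s in seeds):
            seeds.append(p)
        if len(seeds) == k:
            break
    return seeds

def kmeans_3sigma(data, k, radius=1.0, iters=20):
    """1-D K-means with density seeding and three-sigma outlier trimming
    applied before each centre update."""
    centers = density_seeds(data, k, radius)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in data:
            clusters[min(range(k), key=lambda i: abs(p - centers[i]))].append(p)
        new = []
        for c, pts in zip(centers, clusters):
            if len(pts) > 2:
                mu, sd = statistics.mean(pts), statistics.pstdev(pts)
                # three-sigma rule: drop outliers before updating the centre
                kept = [p for p in pts if abs(p - mu) <= 3 * sd] or pts
                new.append(statistics.mean(kept))
            else:
                new.append(c)  # too few points: keep the old centre
        centers = new
    return sorted(centers)
```

On two tight clumps plus a distant outlier, the outlier is excluded by the three-sigma rule, so the centres stay on the clumps.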

  6. Image Based Hair Segmentation Algorithm for the Application of Automatic Facial Caricature Synthesis

    Directory of Open Access Journals (Sweden)

    Yehu Shen

    2014-01-01

    Full Text Available Hair is a salient feature in human face region and are one of the important cues for face analysis. Accurate detection and presentation of hair region is one of the key components for automatic synthesis of human facial caricature. In this paper, an automatic hair detection algorithm for the application of automatic synthesis of facial caricature based on a single image is proposed. Firstly, hair regions in training images are labeled manually and then the hair position prior distributions and hair color likelihood distribution function are estimated from these labels efficiently. Secondly, the energy function of the test image is constructed according to the estimated prior distributions of hair location and hair color likelihood. This energy function is further optimized according to graph cuts technique and initial hair region is obtained. Finally, K-means algorithm and image postprocessing techniques are applied to the initial hair region so that the final hair region can be segmented precisely. Experimental results show that the average processing time for each image is about 280 ms and the average hair region detection accuracy is above 90%. The proposed algorithm is applied to a facial caricature synthesis system. Experiments proved that with our proposed hair segmentation algorithm the facial caricatures are vivid and satisfying.

  7. Transmission History Based Distributed Adaptive Contention Window Adjustment Algorithm Cooperating with Automatic Rate Fallback for Wireless LANs

    Science.gov (United States)

    Ogawa, Masakatsu; Hiraguri, Takefumi; Nishimori, Kentaro; Takaya, Kazuhiro; Murakawa, Kazuo

This paper proposes and investigates a distributed adaptive contention window adjustment algorithm for wireless LANs based on the transmission history, called the transmission-history-based distributed adaptive contention window adjustment (THAW) algorithm. The objective is to reduce the transmission delay and improve the channel throughput compared to conventional algorithms. The feature of THAW is that it adaptively adjusts the initial contention window (CWinit) size in the binary exponential backoff (BEB) algorithm used in the IEEE 802.11 standard, according to the transmission history and the automatic rate fallback (ARF) algorithm, which is the most basic automatic rate control algorithm. The effect is to keep CWinit at a high value in congested states. Simulation results show that the THAW algorithm outperforms the conventional algorithms in terms of channel throughput and delay, even if the timer in the ARF is changed.
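A transmission-history-based CWinit update in the spirit of THAW might look like the following sketch. The window of eight outcomes and the failure-rate thresholds are assumptions, and the actual THAW rule (which also couples to ARF rate decisions) differs:

```python
def next_cw_init(cw_init, history, cw_min=16, cw_max=1024):
    """Sketch of a transmission-history-based CWinit update: grow CWinit
    while recent transmissions keep failing (congestion), shrink it
    slowly once they all succeed. Thresholds are illustrative."""
    recent = history[-8:]                    # last 8 transmission outcomes
    failure_rate = recent.count(False) / len(recent)
    if failure_rate > 0.25:                  # congested: keep CWinit high
        cw_init = min(cw_init * 2, cw_max)
    elif failure_rate == 0.0:                # clear channel: back off slowly
        cw_init = max(cw_init // 2, cw_min)
    return cw_init
```

A station in a congested state thus starts each backoff from a large window instead of repeatedly climbing up from CWmin.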

  8. Learning algorithms and automatic processing of languages; Algorithmes a apprentissage et traitement automatique des langues

    Energy Technology Data Exchange (ETDEWEB)

    Fluhr, Christian Yves Andre

    1977-06-15

This research thesis concerns the field of artificial intelligence. It addresses learning algorithms applied to the automatic processing of languages. The author first briefly describes some mechanisms of human intelligence and how these mechanisms are simulated on a computer, and outlines the specific role of learning in various manifestations of intelligence. Then, based on the theory of Markov algorithms, the author discusses the notion of a learning algorithm. Two main types of learning algorithms are addressed: first, a sanction-based algorithm of the 'algorithm-teacher dialogue' type, which aims at learning how to resolve grammatical ambiguities in submitted texts; second, an algorithm related to a document system, which automatically structures semantic data obtained from a set of texts so that any question on the content of these texts can be answered by reference to them.

  9. Design of an optimum computer vision-based automatic abalone (Haliotis discus hannai) grading algorithm.

    Science.gov (United States)

    Lee, Donggil; Lee, Kyounghoon; Kim, Seonghun; Yang, Yongsu

    2015-04-01

An automatic abalone grading algorithm that estimates abalone weights from 2D images on the basis of computer vision is developed and tested. The algorithm overcomes the problems experienced by conventional abalone grading methods that rely on manual sorting and mechanical automatic grading. To design an optimal algorithm, regression formulas and R² values were investigated by regressing each of total length, body width, thickness, view area, and actual volume against abalone weight. The R² value between actual volume and abalone weight was 0.999, showing a very high correlation. Consequently, to estimate abalone volumes easily from computer vision, the volumes were calculated under the assumption that abalone shapes are half-oblate ellipsoids, and a regression formula to estimate abalone volumes was derived by linear regression between the calculated and actual volumes. The final automatic abalone grading algorithm combines this volume estimation formula with the regression formula between actual volumes and abalone weights. For abalones weighing from 16.51 to 128.01 g, cross-validation of the algorithm's performance indicates a root-mean-square prediction error of 2.8 g and a worst-case error of ±8 g. © 2015 Institute of Food Technologists®
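The geometric core of the method, treating the shell as a half-oblate ellipsoid and regressing weight on estimated volume, can be sketched as follows. The calibration data here are invented for illustration; only the half-ellipsoid volume formula and the ordinary-least-squares step mirror the abstract:

```python
import math

def half_ellipsoid_volume(length, width, thickness):
    """Volume of a half-oblate ellipsoid with semi-axes L/2, W/2 and
    height equal to the shell thickness: half of (4/3)*pi*a*b*c."""
    return (2.0 / 3.0) * math.pi * (length / 2) * (width / 2) * thickness

def fit_linear(xs, ys):
    """Ordinary least squares y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# hypothetical calibration data: (length, width, thickness) in cm, weight in g
dims = [(6, 4, 1.5), (8, 5.5, 2), (10, 7, 2.5)]
vols = [half_ellipsoid_volume(*d) for d in dims]
weights = [20.0, 49.0, 98.0]
a, b = fit_linear(vols, weights)
predict = lambda L, W, T: a * half_ellipsoid_volume(L, W, T) + b
```

The grading step then just compares `predict(...)` against the weight class boundaries.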

  10. Randomized algorithms in automatic control and data mining

    CERN Document Server

    Granichin, Oleg; Toledano-Kitai, Dvora

    2015-01-01

In the fields of data mining and control, the huge amount of unstructured data and the presence of uncertainty in system descriptions have always been critical issues. The book Randomized Algorithms in Automatic Control and Data Mining introduces readers to the fundamentals of randomized algorithm applications in data mining (especially clustering) and in automatic control synthesis. The methods proposed in this book reduce the computational complexity of classical algorithms and the conservativeness of standard robust control techniques. It is shown that when a problem would otherwise require brute-force selection among options, algorithms based on random selection of alternatives offer good results with a certain probability within a restricted time and significantly reduce the volume of operations.

  11. A Cough-Based Algorithm for Automatic Diagnosis of Pertussis

    Science.gov (United States)

    Pramono, Renard Xaviero Adhi; Imtiaz, Syed Anas; Rodriguez-Villegas, Esther

    2016-01-01

Pertussis is a contagious respiratory disease which mainly affects young children and can be fatal if left untreated. The World Health Organization estimates 16 million pertussis cases annually worldwide, resulting in over 200,000 deaths. It is prevalent mainly in developing countries, where it is difficult to diagnose due to the lack of healthcare facilities and medical professionals. Hence, a low-cost, quick, and easily accessible solution is needed to provide pertussis diagnosis in such areas and contain outbreaks. In this paper we present an algorithm for the automated diagnosis of pertussis from audio signals, by analyzing cough and whoop sounds. The algorithm consists of three main blocks performing automatic cough detection, cough classification, and whooping sound detection. Each block extracts relevant features from the audio signal and classifies them using a logistic regression model. The outputs from these blocks are collated to provide a pertussis likelihood diagnosis. The performance of the proposed algorithm is evaluated using audio recordings from 38 patients. The algorithm diagnoses pertussis correctly from all audio recordings, without any false diagnoses. It can also automatically detect individual cough sounds with 92% accuracy and a PPV of 97%. The low complexity of the proposed algorithm, coupled with its high accuracy, demonstrates that it can be readily deployed on smartphones and can be extremely useful for quick identification or early screening of pertussis and for the control of infection outbreaks. PMID:27583523

  12. Fuzzy algorithm for an automatic reactor power control in a PWR

    International Nuclear Information System (INIS)

    Hah, Yung Joon; Song, In Ho; Yu, Sung Sik; Choi, Jung In; Lee, Byong Whi

    1994-01-01

A fuzzy algorithm is presented for automatic reactor power control in a pressurized water reactor. Automatic power shape control is complicated by the use of control rods, because it is highly coupled with reactivity compensation. Thus, manual shape control is usually employed, even given the limited capability for load-follow operation including frequency control. In an attempt to achieve automatic power shape control without any design modification of the core, a fuzzy power control algorithm is proposed. For the fuzzy control, the rule base is formulated for a multi-input multi-output system. The minimum operation rule and the center-of-area method are implemented in the development of the fuzzy algorithm. The fuzzy power control algorithm has been applied to Yonggwang Nuclear Unit 3. The simulation results show that fuzzy control can be adopted as a practical control strategy for automatic reactor power control of a pressurized water reactor during load-follow operation.
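The minimum operation rule and center-of-area defuzzification can be illustrated with a single-input toy controller. The membership functions, the input/output scaling, and the singleton-output simplification of center-of-area are assumptions of this sketch, not the controller applied to Yonggwang Unit 3:

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b, zero outside (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_rod_speed(power_error):
    """One-input sketch of fuzzy inference for a rod-speed command.
    With a single antecedent per rule, the minimum operation rule reduces
    to the membership degree itself; outputs are singletons, so the
    centre-of-area step reduces to a weighted average."""
    rules = [
        (tri(power_error, -2.0, -1.0, 0.0), -1.0),  # error negative -> withdraw
        (tri(power_error, -1.0,  0.0, 1.0),  0.0),  # error small    -> hold
        (tri(power_error,  0.0,  1.0, 2.0), +1.0),  # error positive -> insert
    ]
    num = sum(w * out for w, out in rules)
    den = sum(w for w, out in rules)
    return num / den if den else 0.0
```

Intermediate errors blend adjacent rules smoothly, e.g. an error of 0.5 yields a half-speed command.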

  13. ATLAAS: an automatic decision tree-based learning algorithm for advanced image segmentation in positron emission tomography.

    Science.gov (United States)

    Berthon, Beatrice; Marshall, Christopher; Evans, Mererid; Spezi, Emiliano

    2016-07-07

Accurate and reliable tumour delineation on positron emission tomography (PET) is crucial for radiotherapy treatment planning. PET automatic segmentation (PET-AS) eliminates intra- and interobserver variability, but there is currently no consensus on the optimal method to use, as different algorithms appear to perform better for different types of tumours. This work aimed to develop a predictive segmentation model, trained to automatically select and apply the best PET-AS method according to the tumour characteristics. ATLAAS, the automatic decision tree-based learning algorithm for advanced segmentation, is based on supervised machine learning using decision trees. The model includes nine PET-AS methods and was trained on 100 PET scans with known true contours. A decision tree was built for each PET-AS algorithm to predict its accuracy, quantified using the Dice similarity coefficient (DSC), according to the tumour volume, tumour peak-to-background SUV ratio, and a regional texture metric. The performance of ATLAAS was evaluated on 85 PET scans obtained from fillable and printed subresolution sandwich phantoms. ATLAAS showed excellent accuracy across a wide range of phantom data and predicted the best or near-best segmentation algorithm in 93% of cases. ATLAAS outperformed all single PET-AS methods on fillable phantom data with a DSC of 0.881, while the DSC for H&N phantom data was 0.819. DSCs higher than 0.650 were achieved in all cases. ATLAAS is an advanced automatic image segmentation algorithm based on decision tree predictive modelling, which can be trained on images with known true contours to predict the best PET-AS method when the true contour is unknown. ATLAAS provides robust and accurate image segmentation with potential applications to radiation oncology.

  14. Automatic Peak Selection by a Benjamini-Hochberg-Based Algorithm

    KAUST Repository

    Abbas, Ahmed; Kong, Xin-Bing; Liu, Zhi; Jing, Bing-Yi; Gao, Xin

    2013-01-01

A common issue in bioinformatics is that computational methods often generate a large number of predictions sorted according to certain confidence scores. A key problem is then determining how many predictions must be selected to include most of the true predictions while maintaining reasonably high precision. In nuclear magnetic resonance (NMR)-based protein structure determination, for instance, computational peak picking methods are becoming more and more common, although expert knowledge remains the method of choice for determining how many peaks among thousands of candidate peaks should be taken into consideration to capture the true peaks. Here, we propose a Benjamini-Hochberg (B-H)-based approach that automatically selects the number of peaks. We formulate the peak selection problem as a multiple testing problem. Given a candidate peak list sorted by either volume or intensity, we first convert the peaks into p-values and then apply the B-H-based algorithm to automatically select the number of peaks. The proposed approach is tested on state-of-the-art peak picking methods, including WaVPeak [1] and PICKY [2]. Compared with the traditional fixed-number-based approach, our approach returns significantly more true peaks. For instance, by combining WaVPeak or PICKY with the proposed method, the missing peak rates are on average reduced by 20% and 26%, respectively, in a benchmark set of 32 spectra extracted from eight proteins. The consensus of the B-H-selected peaks from both WaVPeak and PICKY achieves 88% recall and 83% precision, which significantly outperforms each individual method and the consensus method without the B-H algorithm. The proposed method can be used as a standard procedure for any peak picking method and straightforwardly applied to other prediction selection problems in bioinformatics. The source code, documentation and example data of the proposed method are available at http://sfb.kaust.edu.sa/pages/software.aspx. © 2013
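The selection step itself is the standard Benjamini-Hochberg step-up procedure. Given p-values already converted from peak volumes or intensities (the conversion is the paper's contribution and is not reproduced here), a minimal implementation is:

```python
def benjamini_hochberg(pvalues, alpha=0.05):
    """Standard Benjamini-Hochberg step-up rule: with p-values sorted in
    ascending order, find the largest k such that p_(k) <= alpha * k / m,
    and keep the k most confident predictions."""
    ps = sorted(pvalues)
    m = len(ps)
    k = 0
    for i, p in enumerate(ps, start=1):
        if p <= alpha * i / m:
            k = i
    return k
```

Unlike a fixed cutoff, the returned count adapts to how the confidence scores are distributed across the candidate list.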

  16. Novel Handover Optimization with a Coordinated Contiguous Carrier Aggregation Deployment Scenario in LTE-Advanced Systems

    Directory of Open Access Journals (Sweden)

    Ibraheem Shayea

    2016-01-01

The carrier aggregation (CA) technique and the Handover Parameters Optimization (HPO) function were introduced in LTE-Advanced systems to enhance system performance in terms of throughput, coverage area, and connection stability, and to reduce management complexity. Although LTE-Advanced has benefited from the CA technique, low spectral efficiency and a high ping-pong effect with high outage probabilities in conventional Carrier Aggregation Deployment Scenarios (CADSs) have become major challenges for cell-edge User Equipment (UE). Moreover, the existing HPO algorithms are not optimal at selecting appropriate handover control parameters (HCPs). This paper proposes two solutions: a Coordinated Contiguous-CADS (CC-CADS) and a Novel Handover Parameters Optimization algorithm based on the Weight Performance Function (NHPO-WPF). CC-CADS uses two contiguous component carriers (CCs) with two different beam directions. The NHPO-WPF automatically adjusts the HCPs based on the Weight Performance Function (WPF), which is evaluated as a function of the Signal-to-Interference Noise Ratio (SINR), cell load, and UE velocity. Simulation results show that CC-CADS and the NHPO-WPF algorithm each provide significant enhancements in system performance over conventional CADSs and HPO algorithms from the literature, respectively. Integrating both solutions achieves even better performance than either solution considered independently.

  17. Recursive automatic classification algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Bauman, E V; Dorofeyuk, A A

    1982-03-01

    A variational statement of the automatic classification problem is given. The dependence of the form of the optimal partition surface on the form of the classification objective functional is investigated. A recursive algorithm is proposed for maximising a functional of reasonably general form. The convergence problem is analysed in connection with the proposed algorithm. 8 references.

  18. Software design of automatic counting system for nuclear track based on mathematical morphology algorithm

    International Nuclear Information System (INIS)

    Pan Yi; Mao Wanchong

    2010-01-01

The measurement of nuclear track parameters occupies an important position in the field of nuclear technology, but the traditional manual counting method has many limitations. In recent years, DSP and digital image processing technology have been applied more and more in the nuclear field. To reduce the errors of visual measurement inherent in manual counting, this article introduces an automatic counting system for nuclear tracks based on the DM642 real-time image processing platform, which effectively removes background interference and noise points and automatically extracts nuclear track points using mathematical morphology algorithms. (authors)
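The cleaning step the abstract mentions, removing noise points with mathematical morphology before counting tracks, can be sketched with a binary erosion followed by connected-component counting. The 3×3 structuring element and 4-connectivity are assumptions of this illustration, not the DM642 system's parameters:

```python
def erode(img, ksize=3):
    """Binary erosion with a ksize x ksize square structuring element:
    a pixel survives only if its whole neighbourhood is foreground.
    This removes isolated noise points smaller than the element."""
    h, w, r = len(img), len(img[0]), ksize // 2
    out = [[0] * w for _ in range(h)]
    for y in range(r, h - r):
        for x in range(r, w - r):
            out[y][x] = int(all(img[y + dy][x + dx]
                                for dy in range(-r, r + 1)
                                for dx in range(-r, r + 1)))
    return out

def count_components(img):
    """Count 4-connected foreground blobs (candidate track points)."""
    h, w = len(img), len(img[0])
    seen = [[False] * w for _ in range(h)]
    n = 0
    for y in range(h):
        for x in range(w):
            if img[y][x] and not seen[y][x]:
                n += 1
                stack = [(y, x)]
                while stack:
                    cy, cx = stack.pop()
                    if 0 <= cy < h and 0 <= cx < w and img[cy][cx] and not seen[cy][cx]:
                        seen[cy][cx] = True
                        stack += [(cy + 1, cx), (cy - 1, cx), (cy, cx + 1), (cy, cx - 1)]
    return n
```

Counting after erosion ignores single-pixel noise that a raw component count would include.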

  19. Task mapping for non-contiguous allocations.

    Energy Technology Data Exchange (ETDEWEB)

    Leung, Vitus Joseph; Bunde, David P.; Ebbers, Johnathan; Price, Nicholas W.; Swank, Matthew.; Feer, Stefan P.; Rhodes, Zachary D.

    2013-02-01

    This paper examines task mapping algorithms for non-contiguously allocated parallel jobs. Several studies have shown that task placement affects job running time for both contiguously and non-contiguously allocated jobs. Traditionally, work on task mapping either uses a very general model where the job has an arbitrary communication pattern or assumes that jobs are allocated contiguously, making them completely isolated from each other. A middle ground between these two cases is the mapping problem for non-contiguous jobs having a specific communication pattern. We propose several task mapping algorithms for jobs with a stencil communication pattern and evaluate them using experiments and simulations. Our strategies improve the running time of a MiniApp by as much as 30% over a baseline strategy. Furthermore, this improvement increases markedly with the job size, demonstrating the importance of task mapping as systems grow toward exascale.

  20. The irace package: Iterated racing for automatic algorithm configuration

    Directory of Open Access Journals (Sweden)

    Manuel López-Ibáñez

    2016-01-01

Modern optimization algorithms typically require the setting of a large number of parameters to optimize their performance. The immediate goal of automatic algorithm configuration is to find, automatically, the best parameter settings of an optimizer. Ultimately, automatic algorithm configuration has the potential to lead to new design paradigms for optimization software. The irace package is a software package that implements a number of automatic configuration procedures. In particular, it offers iterated racing procedures, which have been used successfully to automatically configure various state-of-the-art algorithms. The iterated racing procedures implemented in irace include the iterated F-race algorithm and several extensions and improvements over it. In this paper, we describe the rationale underlying the iterated racing procedures and introduce a number of recent extensions. Among these, we introduce a restart mechanism to avoid premature convergence, the use of truncated sampling distributions to handle parameter bounds correctly, and an elitist racing procedure ensuring that the best configurations returned are also those evaluated on the highest number of training instances. We experimentally evaluate the most recent version of irace and demonstrate with a number of example applications the use and potential of irace in particular, and of automatic algorithm configuration in general.

  21. An automatic, stagnation point based algorithm for the delineation of Wellhead Protection Areas

    Science.gov (United States)

    Tosco, Tiziana; Sethi, Rajandrea; di Molfetta, Antonio

    2008-07-01

Time-related capture areas are usually delineated with the backward particle tracking method, releasing circles of equally spaced particles around each well. An accurate delineation in this way often requires both a very high number of particles and manual encirclement of the capture zone. The aim of this work is to propose an Automatic Protection Area (APA) delineation algorithm that can be coupled with any flow and particle tracking model. Computational time is reduced by using a limited number of non-equally spaced particles. The particle starting positions are determined by coupling forward particle tracking from the stagnation point with backward particle tracking from the pumping well. The pathlines are postprocessed for a completely automatic delineation of closed perimeters of time-related capture zones. The APA algorithm was tested in a two-dimensional geometry for homogeneous and nonhomogeneous aquifers, steady-state flow conditions, and single and multiple wells. Results show that the APA algorithm is robust and able to reconstruct protection areas automatically and accurately with a very small number of particles, even in complex scenarios.
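For the textbook case of a single well pumping at rate Q in uniform flow (Darcy flux q0, aquifer thickness b), the stagnation point and the steady-state capture-zone boundary have closed or implicit forms, which gives a feel for the quantities the APA algorithm computes numerically in general settings:

```python
import math

def stagnation_point(Q, q0, b):
    """Downstream distance of the stagnation point for a single well
    pumping Q in uniform flow of Darcy flux q0 through thickness b
    (classical well-in-uniform-flow solution)."""
    return Q / (2 * math.pi * b * q0)

def capture_half_width(Q, q0, b, x):
    """Half-width of the steady-state capture zone at upstream distance x,
    from the implicit boundary x = -y / tan(2*pi*b*q0*y / Q), solved by
    bisection. The half-width approaches Q / (2*b*q0) far upstream."""
    y_max = Q / (2 * b * q0)
    lo, hi = 1e-9, y_max - 1e-9
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        # boundary satisfies f(y) = x + y / tan(2*pi*b*q0*y/Q) = 0
        f = x + mid / math.tan(2 * math.pi * b * q0 * mid / Q)
        if f > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

At the well line (x = 0) the half-width is Q / (4·b·q0), half its asymptotic value, a standard check for capture-zone codes.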

  22. Algorithm for automatic analysis of electro-oculographic data.

    Science.gov (United States)

    Pettersson, Kati; Jagadeesan, Sharman; Lukander, Kristian; Henelius, Andreas; Haeggström, Edward; Müller, Kiti

    2013-10-25

Large amounts of electro-oculographic (EOG) data, recorded during electroencephalographic (EEG) measurements, go underutilized. We present an automatic, auto-calibrating algorithm that allows efficient analysis of such data sets. The auto-calibration is based on automatic threshold value estimation: amplitude threshold values for saccades and blinks are determined from features of the recorded signal. The performance of the developed algorithm was tested by analyzing 4854 saccades and 213 blinks recorded in two different conditions: a task where the eye movements were controlled (saccade task) and a task with free viewing (multitask). The results were compared with results from a video-oculography (VOG) device and with manually scored blinks. The algorithm achieved 93% detection sensitivity for blinks with a 4% false positive rate. The detection sensitivity for horizontal saccades was between 98% and 100%, and for oblique saccades between 95% and 100%. The classification sensitivity for horizontal and large oblique saccades (10 deg) was above 89%, and for vertical saccades above 82%. The durations and peak velocities of the detected horizontal saccades were similar to those in the literature. In the multitask measurement, the detection sensitivity for saccades was 97% with a 6% false positive rate. The developed algorithm enables reliable analysis of EOG data recorded both during EEG measurements and separately.
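An auto-calibrating amplitude threshold can be sketched by deriving the threshold from the signal's own robust statistics. The median-plus-MAD estimator and the factor k are assumptions of this illustration, not the paper's feature-based estimation:

```python
import statistics

def auto_threshold(signal, k=4.0):
    """Auto-calibration sketch: set an amplitude threshold from the
    signal's own statistics (median + k robust sigmas via the median
    absolute deviation) instead of a fixed value."""
    med = statistics.median(signal)
    mad = statistics.median(abs(v - med) for v in signal)
    return med + k * 1.4826 * mad  # 1.4826 scales MAD to sigma for Gaussians

def detect_events(signal, thr):
    """Return (start, end) sample indices of runs above the threshold,
    e.g. candidate blinks in a vertical EOG channel."""
    events, start = [], None
    for i, v in enumerate(signal):
        if v > thr and start is None:
            start = i
        elif v <= thr and start is not None:
            events.append((start, i))
            start = None
    if start is not None:
        events.append((start, len(signal)))
    return events
```

Because the threshold tracks the recording's own noise level, the same code works across recordings with different gains and baselines.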

  23. A Robust Vision-based Runway Detection and Tracking Algorithm for Automatic UAV Landing

    KAUST Repository

    Abu Jbara, Khaled F.

    2015-05-01

This work presents a novel real-time algorithm for runway detection and tracking applied to the automatic takeoff and landing of Unmanned Aerial Vehicles (UAVs). The algorithm is based on a combination of segmentation-based region competition and the minimization of a specific energy function to detect and identify the runway edges from streaming video data. The resulting video-based runway position estimates are updated using a Kalman filter, which can integrate other sensory information, such as position and attitude angle estimates, to allow more robust tracking of the runway under turbulence. We illustrate the performance of the proposed runway detection and tracking scheme on various experimental UAV flights conducted by the Saudi Aerospace Research Center. Results show accurate tracking of the runway edges during the landing phase under various lighting conditions, and suggest that such positional estimates would greatly improve the positional accuracy of the UAV during takeoff and landing phases. The robustness of the proposed algorithm is further validated using hardware-in-the-loop simulations with diverse takeoff and landing videos generated using a commercial flight simulator.

  4. Algorithm based on regional separation for automatic grain boundary extraction using improved mean shift method

    Science.gov (United States)

    Zhenying, Xu; Jiandong, Zhu; Qi, Zhang; Yamba, Philip

    2018-06-01

    Metallographic microscopy shows that the vast majority of metal materials are composed of many small grains; the grain size of a metal is important for determining the tensile strength, toughness, plasticity, and other mechanical properties. In order to quantitatively evaluate grain size in metals, grain boundaries must be identified in metallographic images. Based on the phenomenon of grain boundary blurring or disconnection in metallographic images, this study develops an algorithm based on regional separation for automatically extracting grain boundaries by an improved mean shift method. Experimental observation shows that the grain boundaries obtained by the proposed algorithm are highly complete and accurate. This research has practical value because the proposed algorithm is suitable for grain boundary extraction from most metallographic images.

  5. Software and hardware platform for testing of Automatic Generation Control algorithms

    Directory of Open Access Journals (Sweden)

    Vasiliev Alexey

    2017-01-01

    Full Text Available Development and implementation of new Automatic Generation Control (AGC) algorithms requires testing them on a model that adequately simulates primary energy, information and control processes. In this article, an implementation of a test platform based on HRTSim (Hybrid Real Time Simulator) and SCADA CK-2007 (which is widely used by the System Operator of Russia) is proposed. Testing AGC algorithms on a test platform based on the same SCADA system that is used in operation makes it possible to exclude errors associated with the transfer of AGC algorithms and settings from the test platform to a real power system. A power system including relay protection, automatic control systems and emergency control automatics can be accurately simulated on HRTSim. Besides the information commonly used by conventional AGC systems, HRTSim is able to provide a resemblance of Phasor Measurement Unit (PMU) measurements (information about rotor angles, magnitudes and phase angles of currents and voltages, etc.). The additional information significantly expands the number of possible AGC algorithms, so the test platform is useful in modern AGC system development. The obtained test results confirm that the proposed system is applicable for the tasks mentioned above.

  6. Automatic Circuit Design and Optimization Using Modified PSO Algorithm

    Directory of Open Access Journals (Sweden)

    Subhash Patel

    2016-04-01

    Full Text Available In this work, we have proposed modified PSO algorithm based optimizer for automatic circuit design. The performance of the modified PSO algorithm is compared with two other evolutionary algorithms namely ABC algorithm and standard PSO algorithm by designing two stage CMOS operational amplifier and bulk driven OTA in 130nm technology. The results show the robustness of the proposed algorithm. With modified PSO algorithm, the average design error for two stage op-amp is only 0.054% in contrast to 3.04% for standard PSO algorithm and 5.45% for ABC algorithm. For bulk driven OTA, average design error is 1.32% with MPSO compared to 4.70% with ABC algorithm and 5.63% with standard PSO algorithm.

  7. A DVH-guided IMRT optimization algorithm for automatic treatment planning and adaptive radiotherapy replanning

    International Nuclear Information System (INIS)

    Zarepisheh, Masoud; Li, Nan; Long, Troy; Romeijn, H. Edwin; Tian, Zhen; Jia, Xun; Jiang, Steve B.

    2014-01-01

    Purpose: To develop a novel algorithm that incorporates prior treatment knowledge into intensity modulated radiation therapy optimization to facilitate automatic treatment planning and adaptive radiotherapy (ART) replanning. Methods: The algorithm automatically creates a treatment plan guided by the DVH curves of a reference plan that contains information on the clinician-approved dose-volume trade-offs among different targets/organs and among different portions of a DVH curve for an organ. In ART, the reference plan is the initial plan for the same patient, while for automatic treatment planning the reference plan is selected from a library of clinically approved and delivered plans of previously treated patients with similar medical conditions and geometry. The proposed algorithm employs a voxel-based optimization model and navigates the large voxel-based Pareto surface. The voxel weights are iteratively adjusted to approach a plan that is similar to the reference plan in terms of the DVHs. If the reference plan is feasible but not Pareto optimal, the algorithm generates a Pareto optimal plan with DVHs better than the reference ones. If the reference plan is too restrictive for the new geometry, the algorithm generates a Pareto plan with DVHs close to the reference ones. In both cases, the new plans have similar DVH trade-offs as the reference plans. Results: The algorithm was tested using three patient cases and found to be able to automatically adjust the voxel-weighting factors in order to generate a Pareto plan with similar DVH trade-offs as the reference plan. The algorithm has also been implemented on a GPU for high efficiency. Conclusions: A novel prior-knowledge-based optimization algorithm has been developed that automatically adjusts the voxel weights and generates a clinically optimal plan at high efficiency. It is found that the new algorithm can significantly improve the plan quality and planning efficiency in ART replanning and automatic treatment planning.
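
    The iterative voxel reweighting described above can be sketched in one dimension: voxels whose dose lies above the reference DVH at the same volume level get a heavier penalty weight, pushing the next optimization pass toward the reference plan. The rank-wise comparison and the multiplicative step size below are assumptions for illustration, not the paper's update rule.

```python
# Illustrative voxel-weight update against a reference DVH (sorted doses).
def update_voxel_weights(doses, ref_doses, weights, step=0.1):
    """Increase weights of voxels overdosed relative to the sorted reference."""
    order = sorted(range(len(doses)), key=lambda i: doses[i])
    ref_sorted = sorted(ref_doses)
    new_w = list(weights)
    for rank, i in enumerate(order):
        if doses[i] > ref_sorted[rank]:      # above the reference DVH curve
            new_w[i] *= (1.0 + step)         # penalize this voxel more
    return new_w

doses = [10.0, 22.0, 35.0, 41.0]   # current plan, one organ (Gy)
ref = [12.0, 20.0, 30.0, 40.0]     # reference-plan DVH samples (Gy)
w = update_voxel_weights(doses, ref, [1.0, 1.0, 1.0, 1.0])
print(w)
```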

  8. Optimal algorithm for automatic detection of microaneurysms based on receiver operating characteristic curve

    Science.gov (United States)

    Xu, Lili; Luo, Shuqian

    2010-11-01

    Microaneurysms (MAs) are the first manifestations of diabetic retinopathy (DR) as well as an indicator of its progression. Their automatic detection plays a key role in both mass screening and monitoring and is therefore at the core of any system for computer-assisted diagnosis of DR. The algorithm basically comprises the following stages: candidate detection, aiming at extracting the patterns possibly corresponding to MAs using the mathematical-morphology black top-hat transform; feature extraction, to characterize these candidates; and classification, based on a support vector machine (SVM), to validate MAs. The choice of feature vector and of the SVM kernel function is very important to the algorithm. We use the receiver operating characteristic (ROC) curve to evaluate the discriminating performance of different feature vectors and different SVM kernel functions. The ROC analysis indicates that the quadratic-polynomial SVM with a combination of features as the input shows the best discriminating performance.
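
    The ROC-based comparison described above can be sketched with the rank-sum (Mann-Whitney) identity for AUC. The scores below are made-up stand-ins for the decision values of two candidate SVM configurations; only the evaluation idea is taken from the abstract.

```python
# AUC via the rank-sum identity: fraction of (positive, negative) pairs
# ranked correctly, counting ties as half.
def auc(scores, labels):
    """Area under the ROC curve for binary labels (1 = MA, 0 = non-MA)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 1, 0, 0, 0]
linear_scores = [0.8, 0.6, 0.4, 0.5, 0.3, 0.2]     # one candidate configuration
quadratic_scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.2]  # another candidate
print(auc(linear_scores, labels), auc(quadratic_scores, labels))
```

    The configuration with the higher AUC would be the one selected, mirroring the paper's use of ROC analysis to pick the feature vector and kernel.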

  9. Channel Access Algorithm Design for Automatic Identification System

    Institute of Scientific and Technical Information of China (English)

    Oh Sang-heon; Kim Seung-pum; Hwang Dong-hwan; Park Chan-sik; Lee Sang-jeong

    2003-01-01

    The Automatic Identification System (AIS) is maritime equipment that allows an efficient exchange of navigational data between ships and between ships and shore stations. It utilizes a channel access algorithm which can quickly resolve conflicts without any intervention from control stations. In this paper, a design of the channel access algorithm for the AIS is presented. The input/output relationship of each access algorithm module is defined by drawing the state transition diagram, dataflow diagram and flowchart based on the technical standard, ITU-R M.1371. In order to verify the designed channel access algorithm, a simulator was developed using the C/C++ programming language. The results show that the proposed channel access algorithm can properly allocate transmission slots and meet the operational performance requirements specified by the technical standard.
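
    The core idea of decentralized slot allocation can be illustrated with a toy version: each station picks a transmission slot at random from the free slots inside its selection interval, so conflicts are resolved without a master station. This is an illustration of the principle only, not the ITU-R M.1371 state machine.

```python
# Toy self-organized slot selection: pick a random free slot from an interval.
import random

def pick_slot(occupied, interval, rng):
    """Pick a random unoccupied slot index from `interval` (a range)."""
    free = [s for s in interval if s not in occupied]
    return rng.choice(free) if free else None

rng = random.Random(0)
occupied = {2, 3, 5}                       # slots already claimed by others
slot = pick_slot(occupied, range(0, 10), rng)
print(slot)                                # some free slot in [0, 10)
```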

  10. Development of the Contiguous-cells Transportation Problem

    Directory of Open Access Journals (Sweden)

    O. E. Charles-Owaba

    2015-08-01

    Full Text Available The issue of scheduling a long string of multi-period activities which have to be completed without interruption has always been an industrial challenge. The existing production/maintenance scheduling algorithms can only handle situations where activities can be split into two or more sets of activities carried out in non-contiguous sets of work periods. This study proposes a contiguous-periods production/maintenance scheduling approach using the Transportation Model. Relevant variables and parameters of the contiguous-cells scheduling problem were taken from the literature. A scheduling optimization problem was defined and solved using a contiguous-cells transportation algorithm (CCTA), which was applied in order to determine the optimal maintenance schedule of a fleet of ships at a dockyard in South-Western Nigeria. Fifteen different problems were solved. It is concluded that the contiguous-cells transportation approach to production/maintenance scheduling is feasible. The model will be a useful decision support tool for scheduling maintenance operations.
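
    The contiguity requirement above can be illustrated with a tiny scheduler: each job must occupy an unbroken run of periods in one bay. This greedy first-fit sketch only illustrates the constraint; it is not the CCTA optimization itself, and the ship data are made up.

```python
# Greedy placement of jobs into contiguous period runs (no splitting allowed).
def first_fit_contiguous(jobs, horizon, n_bays):
    """jobs: list of (name, duration); returns {name: (bay, start)} or None."""
    free_from = [0] * n_bays          # next free period in each bay
    schedule = {}
    for name, dur in sorted(jobs, key=lambda j: -j[1]):  # longest first
        bay = min(range(n_bays), key=lambda b: free_from[b])
        if free_from[bay] + dur > horizon:
            return None               # cannot place without splitting the job
        schedule[name] = (bay, free_from[bay])
        free_from[bay] += dur         # the run stays contiguous
    return schedule

ships = [("ship_a", 4), ("ship_b", 3), ("ship_c", 2), ("ship_d", 3)]
print(first_fit_contiguous(ships, horizon=8, n_bays=2))
```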

  11. Automatic feature learning using multichannel ROI based on deep structured algorithms for computerized lung cancer diagnosis.

    Science.gov (United States)

    Sun, Wenqing; Zheng, Bin; Qian, Wei

    2017-10-01

    This study aimed to analyze the ability of automatically generated features, extracted using deep structured algorithms, in lung nodule CT image diagnosis, and to compare their performance with traditional computer aided diagnosis (CADx) systems using hand-crafted features. All of the 1018 cases were acquired from the Lung Image Database Consortium (LIDC) public lung cancer database. The nodules were segmented according to four radiologists' markings, and 13,668 samples were generated by rotating every slice of the nodule images. Three multichannel ROI based deep structured algorithms were designed and implemented in this study: convolutional neural network (CNN), deep belief network (DBN), and stacked denoising autoencoder (SDAE). For comparison purposes, we also implemented a CADx system using hand-crafted features including density features, texture features and morphological features. The performance of every scheme was evaluated using a 10-fold cross-validation method and an assessment index of the area under the receiver operating characteristic curve (AUC). The highest observed AUC was 0.899±0.018, achieved by CNN, which was significantly higher than that of the traditional CADx (AUC=0.848±0.026). The result from DBN was also slightly higher than CADx, while that from SDAE was slightly lower. By visualizing the automatically generated features, we found some meaningful detectors, such as curvy-stroke detectors, in the deep structured schemes. The study results showed that deep structured algorithms with automatically generated features can achieve desirable performance in lung nodule diagnosis. With well-tuned parameters and a large enough dataset, deep learning algorithms can perform better than the current popular CADx. We believe deep learning algorithms with a similar data preprocessing procedure can be used in other medical image analysis areas as well. Copyright © 2017. Published by Elsevier Ltd.

  12. Automatic Tracking Of Remote Sensing Precipitation Data Using Genetic Algorithm Image Registration Based Automatic Morphing: September 1999 Storm Floyd Case Study

    Science.gov (United States)

    Chiu, L.; Vongsaard, J.; El-Ghazawi, T.; Weinman, J.; Yang, R.; Kafatos, M.

    Due to the poor temporal sampling by satellites, data gaps exist in satellite-derived time series of precipitation. This poses a challenge for assimilating rainfall data into forecast models. To yield a continuous time series, the classic image processing technique of digital image morphing has been used. However, the digital morphing technique was applied manually, which is time consuming. In order to avoid human intervention in the process, an automatic procedure for image morphing is needed for real-time operations. For this purpose, the Genetic Algorithm Based Image Registration Automatic Morphing (GRAM) model was developed and tested in this paper. Specifically, the automatic morphing technique was integrated with a Genetic Algorithm and the Feature Based Image Metamorphosis technique to fill in data gaps between satellite coverage. The technique was tested using NOWRAD data, which are generated from the network of NEXRAD radars. Time series of NOWRAD data from storm Floyd, which occurred over the US eastern region on September 16, 1999, at 00:00, 01:00, 02:00, 03:00, and 04:00am were used. The GRAM technique was applied to data collected at 00:00 and 04:00am. These images were also manually morphed. Images at 01:00, 02:00 and 03:00am were interpolated from the GRAM and manual morphing and compared with the original NOWRAD rain rates. The results show that the GRAM technique outperforms manual morphing. The correlation coefficients for the images generated using manual morphing are 0.905, 0.900, and 0.905 at 01:00, 02:00, and 03:00am, while the corresponding correlation coefficients based on the GRAM technique are 0.946, 0.911, and 0.913, respectively. Index terms: Remote Sensing, Image Registration, Hydrology, Genetic Algorithm, Morphing, NEXRAD
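
    The gap-filling idea above (synthesizing frames between two observed images) can be sketched, in its simplest linear form, on tiny "images" represented as flat lists of rain rates. The GRAM and manual morphing techniques additionally warp features before blending; this sketch shows only the blending step, and the data are made up.

```python
# Linear in-between frame generation: blend img_a toward img_b.
def interpolate_frames(img_a, img_b, n_between):
    """Yield n_between linearly blended frames between img_a and img_b."""
    frames = []
    for k in range(1, n_between + 1):
        t = k / (n_between + 1)      # blend fraction for this in-between frame
        frames.append([(1 - t) * a + t * b for a, b in zip(img_a, img_b)])
    return frames

frame_00h = [0.0, 2.0, 4.0, 0.0]   # rain rates observed at 00:00
frame_04h = [4.0, 2.0, 0.0, 8.0]   # rain rates observed at 04:00
mid = interpolate_frames(frame_00h, frame_04h, 3)   # 01:00, 02:00, 03:00
print(mid[1])                       # the 02:00 estimate: the halfway blend
```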

  13. An automatic fuzzy-based multi-temporal brain digital subtraction angiography image fusion algorithm using curvelet transform and content selection strategy.

    Science.gov (United States)

    Momeni, Saba; Pourghassem, Hossein

    2014-08-01

    Image fusion has recently assumed a prominent role in medical image processing and is useful in diagnosing and treating many diseases. Digital subtraction angiography is one of the imaging modalities most applicable to diagnosing brain vascular diseases and to radiosurgery of the brain. This paper proposes an automatic fuzzy-based multi-temporal fusion algorithm for 2-D digital subtraction angiography images. In this algorithm, for blood vessel map extraction, the valuable frames of the brain angiography video are automatically determined to form the digital subtraction angiography images, based on a novel definition of the vessel dispersion generated by the injected contrast material. Our proposed fusion scheme contains different fusion methods for high and low frequency contents, based on the coefficient characteristics of the wrapping second generation of the curvelet transform and a novel content selection strategy. Our proposed content selection strategy is defined based on sample correlation of the curvelet transform coefficients. In our proposed fuzzy-based fusion scheme, the selection of curvelet coefficients is optimized by applying weighted averaging and maximum selection rules to the high frequency coefficients. For low frequency coefficients, the maximum selection rule based on a local energy criterion is applied for better visual perception. Our proposed fusion algorithm is evaluated on a brain angiography image dataset consisting of one hundred 2-D internal carotid rotational angiography videos. The obtained results demonstrate the effectiveness and efficiency of our proposed fusion algorithm in comparison with common and basic fusion algorithms.

  14. Comparison Of Semi-Automatic And Automatic Slick Detection Algorithms For Jiyeh Power Station Oil Spill, Lebanon

    Science.gov (United States)

    Osmanoglu, B.; Ozkan, C.; Sunar, F.

    2013-10-01

    After air strikes on July 14 and 15, 2006, the Jiyeh Power Station started leaking oil into the eastern Mediterranean Sea. The power station is located about 30 km south of Beirut, and the slick covered about 170 km of coastline, threatening the neighboring countries Turkey and Cyprus. Due to the ongoing conflict between Israel and Lebanon, cleaning efforts could not start immediately, resulting in 12,000 to 15,000 tons of fuel oil leaking into the sea. In this paper we compare results from automatic and semi-automatic slick detection algorithms. The automatic detection method combines the probabilities calculated for each pixel from each image to obtain a joint probability, minimizing the adverse effects of the atmosphere on oil spill detection. The method can readily utilize X-, C- and L-band data where available. Furthermore, wind and wave speed observations can be used for a more accurate analysis. For this study, we utilize Envisat ASAR ScanSAR data. A probability map is generated based on the radar backscatter, the effect of wind, and a dampening value. The semi-automatic algorithm is based on supervised classification. As a classifier, an Artificial Neural Network Multilayer Perceptron (ANN MLP) is used, since it is more flexible and efficient than the conventional maximum likelihood classifier for multisource and multi-temporal data. The learning algorithm for the ANN MLP is chosen as Levenberg-Marquardt (LM). Training and test data for the supervised classification are composed from the textural information derived from the SAR images. This approach is semi-automatic because tuning the classifier parameters and composing the training data require human interaction. We point out the similarities and differences between the two methods and their results, as well as their advantages and disadvantages. Due to the lack of ground truth data, we compare the obtained results to each other, as well as to other published oil slick area assessments.

  15. Automatic Algorithm Selection for Complex Simulation Problems

    CERN Document Server

    Ewald, Roland

    2012-01-01

    To select the most suitable simulation algorithm for a given task is often difficult. This is due to intricate interactions between model features, implementation details, and runtime environment, which may strongly affect the overall performance. An automated selection of simulation algorithms supports users in setting up simulation experiments without demanding expert knowledge on simulation. Roland Ewald analyzes and discusses existing approaches to solve the algorithm selection problem in the context of simulation. He introduces a framework for automatic simulation algorithm selection and

  16. SU-F-T-352: Development of a Knowledge Based Automatic Lung IMRT Planning Algorithm with Non-Coplanar Beams

    International Nuclear Information System (INIS)

    Zhu, W; Wu, Q; Yuan, L

    2016-01-01

    Purpose: To improve the robustness of a knowledge-based automatic lung IMRT planning method and to further validate the reliability of this algorithm by utilizing it for the planning of clinical cases with non-coplanar beams. Methods: A lung IMRT planning method which automatically determines both plan optimization objectives and beam configurations with non-coplanar beams has been reported previously. A beam efficiency index map is constructed to guide beam angle selection in this algorithm. This index takes into account both the dose contributions from individual beams and the combined effect of multiple beams, which is represented by a beam separation score. We studied the effect of this beam separation score on plan quality and determined the optimal weight for this score. 14 clinical plans were re-planned with the knowledge-based algorithm. Significant dosimetric metrics for the PTV and OARs in the automatic plans are compared with those in the clinical plans by the two-sample t-test. In addition, a composite dosimetric quality index was defined to obtain the relationship between the plan quality and the beam separation score. Results: On average, we observed more than 15% reduction in the conformity index and homogeneity index for the PTV and in V40 and V60 for the heart, with an 8% and 3% increase in V5 and V20 for the lungs, respectively. The variation curve of the composite index as a function of the angle spread score shows that 0.6 is the best value for the weight of the beam separation score. Conclusion: The optimal value for the beam angle spread score in automatic lung IMRT planning is obtained. With this value, the model can produce statistically the "best" achievable plans. This method can potentially improve the quality and planning efficiency of IMRT plans with non-coplanar angles.

  17. An Automatic Web Service Composition Framework Using QoS-Based Web Service Ranking Algorithm.

    Science.gov (United States)

    Mallayya, Deivamani; Ramachandran, Baskaran; Viswanathan, Suganya

    2015-01-01

    Web service has become the technology of choice for service oriented computing to meet the interoperability demands in web applications. In the Internet era, the exponential addition of web services nominates "quality of service" as an essential parameter in discriminating among web services. In this paper, a user-preference-based web service ranking (UPWSR) algorithm is proposed to rank web services based on user preferences and the QoS aspects of the web service. When the user's request cannot be fulfilled by a single atomic service, several existing services must be composed and delivered as a composition. The proposed framework allows the user to specify local and global constraints for composite web services, which improves flexibility. The UPWSR algorithm identifies the best-fit services for each task in the user request and, by choosing the number of candidate services for each task, reduces the time to generate the composition plans. To tackle the problem of web service composition, the QoS-aware automatic web service composition (QAWSC) algorithm proposed in this paper is based on the QoS aspects of the web services and user preferences. The proposed framework allows the user to provide feedback about the composite service, which improves the reputation of the services.
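
    QoS-based ranking in the spirit of UPWSR can be sketched as a user-preference-weighted sum of normalized QoS attributes per candidate service. The attribute names, weights, and normalization below are illustrative assumptions, not the paper's scoring model.

```python
# Rank candidate services by a preference-weighted QoS score (higher = better).
def rank_services(services, weights):
    """services: {name: {attr: value}}; returns names sorted best-first."""
    def score(qos):
        return sum(weights[a] * qos[a] for a in weights)
    return sorted(services, key=lambda s: score(services[s]), reverse=True)

# QoS values already normalized to [0, 1], where 1 is best.
candidates = {
    "svcA": {"availability": 0.99, "speed": 0.60, "cost": 0.40},
    "svcB": {"availability": 0.90, "speed": 0.90, "cost": 0.70},
    "svcC": {"availability": 0.70, "speed": 0.95, "cost": 0.95},
}
prefs = {"availability": 0.5, "speed": 0.3, "cost": 0.2}   # user preferences
print(rank_services(candidates, prefs))
```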

  18. Automatic Tuning of PID Controller for a 1-D Levitation System Using a Genetic Algorithm

    DEFF Research Database (Denmark)

    Yang, Zhenyu; Pedersen, Gerulf K.m.

    2006-01-01

    The automatic PID control design for a one-dimensional magnetic levitation system is investigated. The PID controller is automatically tuned using the non-dominated sorting genetic algorithm (NSGA-II) based on a nonlinear system model. The developed controller is digitally implemented and tested...

  19. Automatic design of decision-tree induction algorithms

    CERN Document Server

    Barros, Rodrigo C; Freitas, Alex A

    2015-01-01

    Presents a detailed study of the major design components that constitute a top-down decision-tree induction algorithm, including aspects such as split criteria, stopping criteria, pruning, and the approaches for dealing with missing values. Whereas the strategy still employed nowadays is to use a 'generic' decision-tree induction algorithm regardless of the data, the authors argue for the benefits that a bias-fitting strategy could bring to decision-tree induction, in which the ultimate goal is the automatic generation of a decision-tree induction algorithm tailored to the application domain.

  20. Automatic control algorithm effects on energy production

    Science.gov (United States)

    Mcnerney, G. M.

    1981-01-01

    A computer model was developed using actual wind time series and turbine performance data to simulate the power produced by the Sandia 17-m VAWT operating in automatic control. The model was used to investigate the influence of starting algorithms on annual energy production. The results indicate that, depending on turbine and local wind characteristics, a bad choice of a control algorithm can significantly reduce overall energy production. The model can be used to select control algorithms and threshold parameters that maximize long term energy production. The results from local site and turbine characteristics were generalized to obtain general guidelines for control algorithm design.

  1. Gradient Evolution-based Support Vector Machine Algorithm for Classification

    Science.gov (United States)

    Zulvia, Ferani E.; Kuo, R. J.

    2018-03-01

    This paper proposes a classification algorithm based on support vector machine (SVM) and gradient evolution (GE) algorithms. The SVM algorithm has been widely used in classification. However, its result is significantly influenced by its parameters. Therefore, this paper aims to propose an improvement of the SVM algorithm which can find the best SVM parameters automatically. The proposed algorithm employs a GE algorithm to automatically determine the SVM parameters. The GE algorithm acts as a global optimizer in finding the best parameters, which are then used by the SVM algorithm. The proposed GE-SVM algorithm is verified using some benchmark datasets and compared with other metaheuristic-based SVM algorithms. The experimental results show that the proposed GE-SVM algorithm obtains better results than the other algorithms tested in this paper.
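
    The role of the GE component — a global search over the SVM hyperparameters — can be sketched with a plain random search as a stand-in (the abstract does not give the GE update rules). The surrogate objective below is a made-up smooth function standing in for cross-validated SVM accuracy; everything named is an assumption.

```python
# Stand-in for metaheuristic hyperparameter search: random search over
# (C, gamma) against a synthetic surrogate objective.
import random

def surrogate_accuracy(c, gamma):
    """Made-up smooth objective peaked at C = 10, gamma = 0.1."""
    return -((c - 10.0) ** 2 + 100.0 * (gamma - 0.1) ** 2)

def random_search(n=2000, seed=7):
    rng = random.Random(seed)
    best, best_val = None, float("-inf")
    for _ in range(n):
        c = rng.uniform(0.1, 100.0)        # candidate SVM cost parameter
        gamma = rng.uniform(0.001, 1.0)    # candidate kernel width parameter
        val = surrogate_accuracy(c, gamma)
        if val > best_val:
            best, best_val = (c, gamma), val
    return best

c, gamma = random_search()
print(round(c, 1), round(gamma, 2))
```

    In the real algorithm, the inner evaluation would be a cross-validated SVM fit and the candidates would follow gradient-evolution update rules rather than uniform sampling.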

  2. Automatic Correction Algorithm of Hydrology Feature Attribute in National Geographic Census

    Science.gov (United States)

    Li, C.; Guo, P.; Liu, X.

    2017-09-01

    A subset of the attributes of the hydrologic feature data in the national geographic census are unclear; the current solution to this problem was manual filling, which is inefficient and liable to mistakes. This paper therefore proposes an automatic correction algorithm for hydrologic feature attributes. Based on an analysis of the structural characteristics and topological relations, we put forward three basic principles of correction: network proximity, structural robustness and topological ductility. Based on the WJ-III map workstation, we realize the automatic correction of hydrologic features. Finally, practical data are used to validate the method. The results show that our method is highly reasonable and efficient.

  3. Towards automatic proofs of lock-free algorithms

    OpenAIRE

    Fejoz , Loïc; Merz , Stephan

    2008-01-01

    The verification of lock-free data structures has traditionally been considered difficult. We propose a formal model for describing such algorithms. The verification conditions generated from this model can often be handled by automatic theorem provers.

  4. Refining Automatically Extracted Knowledge Bases Using Crowdsourcing.

    Science.gov (United States)

    Li, Chunhua; Zhao, Pengpeng; Sheng, Victor S; Xian, Xuefeng; Wu, Jian; Cui, Zhiming

    2017-01-01

    Machine-constructed knowledge bases often contain noisy and inaccurate facts. There exists significant work in developing automated algorithms for knowledge base refinement. Automated approaches improve the quality of knowledge bases but are far from perfect. In this paper, we leverage crowdsourcing to improve the quality of automatically extracted knowledge bases. As human labelling is costly, an important research challenge is how we can use limited human resources to maximize the quality improvement for a knowledge base. To address this problem, we first introduce a concept of semantic constraints that can be used to detect potential errors and do inference among candidate facts. Then, based on semantic constraints, we propose rank-based and graph-based algorithms for crowdsourced knowledge refining, which judiciously select the most beneficial candidate facts to conduct crowdsourcing and prune unnecessary questions. Our experiments show that our method improves the quality of knowledge bases significantly and outperforms state-of-the-art automatic methods under a reasonable crowdsourcing cost.
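
    One way to read the "judiciously select the most beneficial candidate facts" step: under a fixed labelling budget, ask the crowd about the facts whose extraction confidence is most uncertain (closest to 0.5), since those labels change the knowledge base the most. The selection criterion below is an illustrative assumption, not the paper's scoring function, and the facts are made up.

```python
# Budget-constrained selection of candidate facts for crowdsourcing,
# ranked by uncertainty of the automatic extractor's confidence.
def select_for_crowdsourcing(candidates, budget):
    """candidates: list of (fact, confidence); returns `budget` most uncertain."""
    ranked = sorted(candidates, key=lambda fc: abs(fc[1] - 0.5))
    return [fact for fact, _ in ranked[:budget]]

facts = [("bornIn(Turing, London)", 0.97),
         ("capitalOf(Sydney, Australia)", 0.55),
         ("authorOf(Knuth, TAOCP)", 0.99),
         ("riverIn(Danube, Spain)", 0.42)]
print(select_for_crowdsourcing(facts, 2))   # the two least certain facts
```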

  5. Refining Automatically Extracted Knowledge Bases Using Crowdsourcing

    Directory of Open Access Journals (Sweden)

    Chunhua Li

    2017-01-01

    Full Text Available Machine-constructed knowledge bases often contain noisy and inaccurate facts. There exists significant work in developing automated algorithms for knowledge base refinement. Automated approaches improve the quality of knowledge bases but are far from perfect. In this paper, we leverage crowdsourcing to improve the quality of automatically extracted knowledge bases. As human labelling is costly, an important research challenge is how we can use limited human resources to maximize the quality improvement for a knowledge base. To address this problem, we first introduce a concept of semantic constraints that can be used to detect potential errors and do inference among candidate facts. Then, based on semantic constraints, we propose rank-based and graph-based algorithms for crowdsourced knowledge refining, which judiciously select the most beneficial candidate facts to conduct crowdsourcing and prune unnecessary questions. Our experiments show that our method improves the quality of knowledge bases significantly and outperforms state-of-the-art automatic methods under a reasonable crowdsourcing cost.

  6. Effects of image compression and degradation on an automatic diabetic retinopathy screening algorithm

    Science.gov (United States)

    Agurto, C.; Barriga, S.; Murray, V.; Pattichis, M.; Soliz, P.

    2010-03-01

    Diabetic retinopathy (DR) is one of the leading causes of blindness among adult Americans. Automatic methods for detection of the disease have been developed in recent years, most of them addressing the segmentation of bright and red lesions. In this paper we present an automatic DR screening system that does not approach the problem through the segmentation of lesions. The algorithm instead distinguishes non-diseased retinal images from those with pathology based on textural features obtained using multiscale Amplitude Modulation-Frequency Modulation (AM-FM) decompositions. The decomposition is represented as features that are the inputs to a classifier. The algorithm achieves 0.88 area under the ROC curve (AROC) for a set of 280 images from the MESSIDOR database. The algorithm is then used to analyze the effects of image compression and degradation, which will be present in most actual clinical or screening environments. Results show that the algorithm is insensitive to illumination variations, but high rates of compression and large blurring effects degrade its performance.

  7. Neuro-fuzzy system modeling based on automatic fuzzy clustering

    Institute of Scientific and Technical Information of China (English)

    Yuangang TANG; Fuchun SUN; Zengqi SUN

    2005-01-01

    A neuro-fuzzy system model based on automatic fuzzy clustering is proposed. A hybrid model identification algorithm is also developed to decide the model structure and model parameters. The algorithm mainly includes three parts: 1) automatic fuzzy C-means (AFCM), which is applied to generate fuzzy rules automatically and to fix the size of the neuro-fuzzy network, greatly reducing the complexity of system design at the price of some fitting capability; 2) recursive least squares estimation (RLSE), which is used to update the parameters of the Takagi-Sugeno model employed to describe the behavior of the system; 3) a gradient descent algorithm for the fuzzy values, following the back-propagation algorithm of neural networks. Finally, modeling the dynamical equation of a two-link manipulator with the proposed approach is illustrated to validate the feasibility of the method.
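
    The fuzzy C-means step can be sketched in one dimension (the AFCM variant above additionally chooses the number of clusters automatically; here the cluster count is fixed at 2 for brevity, and the data are made up).

```python
# Minimal 1-D fuzzy C-means: alternate membership and center updates.
def fcm(data, centers, m=2.0, iters=20):
    for _ in range(iters):
        # Membership of each point in each cluster (standard FCM update).
        u = []
        for x in data:
            d = [abs(x - v) + 1e-9 for v in centers]   # avoid division by zero
            u.append([1.0 / sum((d[i] / d[j]) ** (2 / (m - 1))
                                for j in range(len(centers)))
                      for i in range(len(centers))])
        # Center update: fuzzily weighted means of the data.
        centers = [sum(u[k][i] ** m * x for k, x in enumerate(data)) /
                   sum(u[k][i] ** m for k in range(len(data)))
                   for i in range(len(centers))]
    return centers

data = [1.0, 1.2, 0.8, 5.0, 5.2, 4.8]        # two well-separated 1-D clusters
v = fcm(data, [0.0, 6.0])
print([round(c, 2) for c in v])               # centers near the two clusters
```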

  8. Mathematical algorithm for the automatic recognition of intestinal parasites.

    Directory of Open Access Journals (Sweden)

    Alicia Alva

    Full Text Available Parasitic infections are generally diagnosed by professionals trained to recognize the morphological characteristics of the eggs in microscopic images of fecal smears. However, this laboratory diagnosis requires medical specialists, who are lacking in many of the areas where these infections are most prevalent. In response to this public health issue, we developed software based on pattern recognition analysis of microscopic digital images of fecal smears, capable of automatically recognizing and diagnosing common human intestinal parasites. To this end, we selected 229, 124, 217, and 229 objects from microscopic images of fecal smears positive for Taenia sp., Trichuris trichiura, Diphyllobothrium latum, and Fasciola hepatica, respectively. Representative photographs were selected by a parasitologist. We then implemented our algorithm in the open-source program SCILAB. The algorithm processes the image by first converting it to gray-scale, then applies a fourteen-step filtering process, and produces a skeletonized and tri-colored image. The extracted features fall into two general categories: geometric characteristics and brightness descriptors. Individual characteristics were quantified and evaluated with a logistic regression to model their ability to correctly identify each parasite separately. Subsequently, all algorithms were evaluated for false-positive cross-reactivity with the other parasites studied, excepting Taenia sp., which shares very few morphological characteristics with the others. The principal result showed that our algorithm reached sensitivities between 99.10%-100% and specificities between 98.13%-98.38% for detecting each parasite separately. We did not find any cross-positivity in the algorithms for the three parasites evaluated. In conclusion, the results demonstrated the capacity of our computer algorithm to automatically recognize and diagnose Taenia sp., Trichuris trichiura, Diphyllobothrium latum, and Fasciola hepatica.
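
    The per-parasite classifiers described above are logistic regressions over extracted features. A minimal sketch of one such one-vs-rest classifier, trained by gradient descent; the 2-D feature vectors below are invented for illustration (the real system uses geometric and brightness descriptors computed in SCILAB).

```python
import numpy as np

def train_logistic(X, y, lr=0.5, n_iter=2000):
    """Logistic regression by batch gradient descent; X is (n, d), y in {0, 1}."""
    Xb = np.hstack([np.ones((X.shape[0], 1)), X])   # prepend a bias column
    w = np.zeros(Xb.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))           # predicted probabilities
        w -= lr * Xb.T @ (p - y) / len(y)           # average-gradient step
    return w

def predict(w, X):
    """Threshold the fitted probabilities at 0.5."""
    Xb = np.hstack([np.ones((X.shape[0], 1)), X])
    return (1.0 / (1.0 + np.exp(-Xb @ w)) > 0.5).astype(int)

# hypothetical 2-D feature vectors (e.g. elongation, mean brightness);
# class 1 = target parasite, class 0 = everything else
X = np.array([[0.9, 0.8], [0.8, 0.9], [1.0, 0.7],
              [0.1, 0.2], [0.2, 0.1], [0.0, 0.3]])
y = np.array([1, 1, 1, 0, 0, 0])
w = train_logistic(X, y)
```

    One such model per parasite, each evaluated against the other species' objects, mirrors the cross-reactivity check in the abstract.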

  9. Mathematical algorithm for the automatic recognition of intestinal parasites.

    Science.gov (United States)

    Alva, Alicia; Cangalaya, Carla; Quiliano, Miguel; Krebs, Casey; Gilman, Robert H; Sheen, Patricia; Zimic, Mirko

    2017-01-01

    Parasitic infections are generally diagnosed by professionals trained to recognize the morphological characteristics of the eggs in microscopic images of fecal smears. However, this laboratory diagnosis requires medical specialists, who are lacking in many of the areas where these infections are most prevalent. In response to this public health issue, we developed software based on pattern recognition analysis of microscopic digital images of fecal smears, capable of automatically recognizing and diagnosing common human intestinal parasites. To this end, we selected 229, 124, 217, and 229 objects from microscopic images of fecal smears positive for Taenia sp., Trichuris trichiura, Diphyllobothrium latum, and Fasciola hepatica, respectively. Representative photographs were selected by a parasitologist. We then implemented our algorithm in the open-source program SCILAB. The algorithm processes the image by first converting it to gray-scale, then applies a fourteen-step filtering process, and produces a skeletonized and tri-colored image. The extracted features fall into two general categories: geometric characteristics and brightness descriptors. Individual characteristics were quantified and evaluated with a logistic regression to model their ability to correctly identify each parasite separately. Subsequently, all algorithms were evaluated for false-positive cross-reactivity with the other parasites studied, excepting Taenia sp., which shares very few morphological characteristics with the others. The principal result showed that our algorithm reached sensitivities between 99.10%-100% and specificities between 98.13%-98.38% for detecting each parasite separately. We did not find any cross-positivity in the algorithms for the three parasites evaluated. In conclusion, the results demonstrated the capacity of our computer algorithm to automatically recognize and diagnose Taenia sp., Trichuris trichiura, Diphyllobothrium latum, and Fasciola hepatica with a high

  10. A Routing Algorithm for WiFi-Based Wireless Sensor Network and the Application in Automatic Meter Reading

    Directory of Open Access Journals (Sweden)

    Li Li

    2013-01-01

    Full Text Available The Automatic Meter Reading (AMR) network for the next-generation Smart Grid is required to possess many essential functions, such as data reading and writing, intelligent power transmission, and line damage detection. However, the traditional AMR network cannot meet these requirements. With the development of low-power, low-cost WiFi sensor nodes, a new kind of wireless sensor network based on WiFi technology can be used in this application. In this paper, we have designed a new architecture of WiFi-based wireless sensor network which is suitable for the next-generation AMR system. We have also proposed a new routing algorithm called Energy Saving-Based Hybrid Wireless Mesh Protocol (E-HWMP), built on the existing HWMP algorithm, which improves the energy saving of HWMP and makes it suitable for the WiFi-based wireless sensor network. The simulation results show that the life cycle of the network is extended.

  11. Automatic computer aided analysis algorithms and system for adrenal tumors on CT images.

    Science.gov (United States)

    Chai, Hanchao; Guo, Yi; Wang, Yuanyuan; Zhou, Guohui

    2017-12-04

    An adrenal tumor will disturb the secreting function of adrenocortical cells, leading to many diseases, and different kinds of adrenal tumors require different therapeutic schedules. In practical diagnosis, judging the tumor type by reading hundreds of CT images relies heavily on the doctor's experience. This paper proposes an automatic computer-aided analysis method for adrenal tumor detection and classification. It consists of automatic segmentation algorithms, feature extraction, and classification algorithms. These algorithms were integrated into a system operated through a graphical interface built with the MATLAB Graphical User Interface (GUI) tools. The accuracy of the automatic computer-aided segmentation and classification reached 90% on 436 CT images. The experiments proved the stability and reliability of this automatic computer-aided analytic system.

  12. SU-E-J-16: Automatic Image Contrast Enhancement Based On Automatic Parameter Optimization for Radiation Therapy Setup Verification

    International Nuclear Information System (INIS)

    Qiu, J; Li, H. Harlod; Zhang, T; Yang, D; Ma, F

    2015-01-01

    Purpose: In RT patient setup 2D images, tissues often cannot be seen well due to the lack of image contrast. Contrast enhancement features provided by image reviewing software, e.g. Mosaiq and ARIA, require manual selection of the image processing filters and parameters, and are thus inefficient and cannot be automated. In this work, we developed a novel method to automatically enhance the 2D RT image contrast to allow automatic verification of patient daily setups as a prerequisite step of automatic patient safety assurance. Methods: The new method is based on the contrast limited adaptive histogram equalization (CLAHE) and high-pass filtering algorithms. The most important innovation is to automatically select the optimal parameters by optimizing the image contrast. The image processing procedure includes the following steps: 1) background and noise removal, 2) high-pass filtering by subtracting the Gaussian-smoothed result, and 3) histogram equalization using the CLAHE algorithm. Three parameters were determined through an iterative optimization based on the interior-point constrained optimization algorithm: the Gaussian smoothing weighting factor and the CLAHE algorithm block size and clip-limit parameters. The goal of the optimization is to maximize the entropy of the processed result. Results: A total of 42 RT images were processed. The results were visually evaluated by RT physicians and physicists. About 48% of the images processed by the new method were ranked as excellent. In comparison, only 29% and 18% of the images processed by the basic CLAHE algorithm and by the basic window-level adjustment process, respectively, were ranked as excellent. Conclusion: This new image contrast enhancement method is robust and automatic, and is able to significantly outperform the basic CLAHE algorithm and the manual window-level adjustment process that are currently used in clinical 2D image review software tools
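
    The central idea, choosing enhancement parameters to maximize the entropy of the processed image, can be illustrated with a much simpler stand-in than the paper's CLAHE plus interior-point pipeline: a grid search over the weighting factor of an unsharp mask (a high-pass boost), keeping whichever setting yields the highest histogram entropy. All function names, parameter ranges, and the box-blur stand-in for Gaussian smoothing below are illustrative assumptions.

```python
import numpy as np

def box_blur(img, k=3):
    """Box blur via padded sliding-window mean (stands in for Gaussian smoothing)."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def entropy(img, bins=64):
    """Shannon entropy of the gray-level histogram (the optimization target)."""
    h, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    p = h / h.sum()
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def enhance(img, weights=np.linspace(0.0, 2.0, 21)):
    """Unsharp masking img + w * (img - blur); keep the w maximizing entropy."""
    best_w, best_img, best_h = 0.0, img, entropy(img)
    for w in weights:
        cand = np.clip(img + w * (img - box_blur(img)), 0.0, 1.0)
        h = entropy(cand)
        if h > best_h:
            best_w, best_img, best_h = w, cand, h
    return best_w, best_img

# a synthetic low-contrast image, values squeezed into [0.4, 0.6]
rng = np.random.default_rng(3)
low_contrast = 0.4 + 0.2 * rng.random((32, 32))
w_best, enhanced = enhance(low_contrast)
```

    The paper's method searches a three-parameter space with a constrained optimizer instead of this one-dimensional grid, but the objective (entropy of the result) is the same.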

  13. SU-E-J-16: Automatic Image Contrast Enhancement Based On Automatic Parameter Optimization for Radiation Therapy Setup Verification

    Energy Technology Data Exchange (ETDEWEB)

    Qiu, J [Taishan Medical University, Taian, Shandong (China); Washington University in St Louis, St Louis, MO (United States); Li, H. Harlod; Zhang, T; Yang, D [Washington University in St Louis, St Louis, MO (United States); Ma, F [Taishan Medical University, Taian, Shandong (China)

    2015-06-15

    Purpose: In RT patient setup 2D images, tissues often cannot be seen well due to the lack of image contrast. Contrast enhancement features provided by image reviewing software, e.g. Mosaiq and ARIA, require manual selection of the image processing filters and parameters, and are thus inefficient and cannot be automated. In this work, we developed a novel method to automatically enhance the 2D RT image contrast to allow automatic verification of patient daily setups as a prerequisite step of automatic patient safety assurance. Methods: The new method is based on the contrast limited adaptive histogram equalization (CLAHE) and high-pass filtering algorithms. The most important innovation is to automatically select the optimal parameters by optimizing the image contrast. The image processing procedure includes the following steps: 1) background and noise removal, 2) high-pass filtering by subtracting the Gaussian-smoothed result, and 3) histogram equalization using the CLAHE algorithm. Three parameters were determined through an iterative optimization based on the interior-point constrained optimization algorithm: the Gaussian smoothing weighting factor and the CLAHE algorithm block size and clip-limit parameters. The goal of the optimization is to maximize the entropy of the processed result. Results: A total of 42 RT images were processed. The results were visually evaluated by RT physicians and physicists. About 48% of the images processed by the new method were ranked as excellent. In comparison, only 29% and 18% of the images processed by the basic CLAHE algorithm and by the basic window-level adjustment process, respectively, were ranked as excellent. Conclusion: This new image contrast enhancement method is robust and automatic, and is able to significantly outperform the basic CLAHE algorithm and the manual window-level adjustment process that are currently used in clinical 2D image review software tools.

  14. Automatic Curve Fitting Based on Radial Basis Functions and a Hierarchical Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    G. Trejo-Caballero

    2015-01-01

    Full Text Available Curve fitting is a very challenging problem that arises in a wide variety of scientific and engineering applications. Given a set of data points, possibly noisy, the goal is to build a compact representation of the curve that corresponds to the best estimate of the unknown underlying relationship between two variables. Despite the large number of methods available to tackle this problem, it remains challenging and elusive. In this paper, a new method to tackle this problem using strictly a linear combination of radial basis functions (RBFs) is proposed. To be more specific, we divide the parameter search space into linear and nonlinear parameter subspaces. We use a hierarchical genetic algorithm (HGA) to minimize a model selection criterion, which allows us to automatically and simultaneously determine the nonlinear parameters and then, by the least-squares method through Singular Value Decomposition, to compute the linear parameters. The method is fully automatic and does not require subjective parameters, for example, a smoothing factor or centre locations, to obtain a solution. In order to validate the efficacy of our approach, we perform an experimental study with several tests on benchmark smooth functions. A comparative analysis with two successful methods based on RBF networks has been included.
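
    The "linear subspace" step, computing the RBF weights by least squares via singular value decomposition once the nonlinear parameters are fixed, can be sketched as follows. Here the centers and width are simply assumed rather than evolved by the HGA, and the test function is an arbitrary smooth example.

```python
import numpy as np

def rbf_design(x, centers, width):
    """Gaussian RBF design matrix: Phi[i, j] = exp(-(x_i - c_j)^2 / (2 w^2))."""
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2.0 * width ** 2))

def fit_linear(x, y, centers, width):
    """Least-squares RBF weights via the SVD-based pseudo-inverse."""
    Phi = rbf_design(x, centers, width)
    return np.linalg.pinv(Phi) @ y       # pinv is computed through the SVD

# noisy samples of a smooth target function
rng = np.random.default_rng(1)
x = np.linspace(-1.0, 1.0, 60)
y = np.sin(3.0 * x) + 0.01 * rng.standard_normal(60)

centers = np.linspace(-1.0, 1.0, 10)     # assumed fixed for this sketch
w = fit_linear(x, y, centers, width=0.3)
fit = rbf_design(x, centers, 0.3) @ w
rmse = np.sqrt(np.mean((fit - np.sin(3.0 * x)) ** 2))
```

    In the full method, the HGA would propose many (centers, width) candidates, each scored by a model selection criterion after this cheap linear solve.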

  15. Towards Automatic Controller Design using Multi-Objective Evolutionary Algorithms

    DEFF Research Database (Denmark)

    Pedersen, Gerulf

    of evolutionary computation, a choice was made to use multi-objective algorithms for the purpose of aiding in automatic controller design. More specifically, the choice was made to use the Non-dominated Sorting Genetic Algorithm II (NSGAII), which is one of the most potent algorithms currently in use...... for automatic controller design. However, because the field of evolutionary computation is relatively unknown in the field of control engineering, this thesis also includes a comprehensive introduction to the basic field of evolutionary computation as well as a description of how the field has previously been......In order to design the controllers of tomorrow, a need has risen for tools that can aid in the design of these. A desire to use evolutionary computation as a tool to achieve that goal is what gave inspiration for the work contained in this thesis. After having studied the foundations...

  16. Development of an Algorithm for Automatic Analysis of the Impedance Spectrum Based on a Measurement Model

    Science.gov (United States)

    Kobayashi, Kiyoshi; Suzuki, Tohru S.

    2018-03-01

    A new algorithm for the automatic estimation of an equivalent circuit and the subsequent parameter optimization is developed by combining the data-mining concept and the complex least-squares method. In this algorithm, the program generates an initial equivalent-circuit model based on the sampled data and then attempts to optimize the parameters. The basic hypothesis is that the measured impedance spectrum can be reproduced by the sum of the partial impedance spectra presented by a resistor, an inductor, a resistor connected in parallel to a capacitor, and a resistor connected in parallel to an inductor. The adequacy of the model is determined by using a simple artificial-intelligence function, which is applied to the output of the Levenberg-Marquardt module. Through iterative model modification, the program finds an adequate equivalent-circuit model without any user input to the equivalent-circuit model.
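
    For the simplest of the listed building blocks, a resistor connected in parallel to a capacitor, the complex least-squares fit can even be linearized, since the admittance satisfies 1/Z = 1/R + jωC. The sketch below uses noiseless synthetic data and this linearization; it is an illustration of fitting one circuit element, not the paper's Levenberg-Marquardt pipeline over full equivalent circuits.

```python
import numpy as np

# synthetic impedance of a resistor R in parallel with a capacitor C
R_true, C_true = 100.0, 1e-6
omega = np.logspace(1, 5, 50)                 # angular frequencies (rad/s)
Z = R_true / (1.0 + 1j * omega * R_true * C_true)

# Linearization: Y = 1/Z = 1/R + j*omega*C, so the real part of the
# admittance estimates 1/R and Im(Y)/omega estimates C directly.
Y = 1.0 / Z
R_est = 1.0 / np.mean(Y.real)
C_est = np.mean(Y.imag / omega)
```

    With noisy data one would instead minimize the complex residual over all elements at once, which is where the Levenberg-Marquardt step in the paper comes in.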

  17. Automatic metal parts inspection: Use of thermographic images and anomaly detection algorithms

    Science.gov (United States)

    Benmoussat, M. S.; Guillaume, M.; Caulier, Y.; Spinnler, K.

    2013-11-01

    A fully-automatic approach based on the use of induction thermography and detection algorithms is proposed to inspect industrial metallic parts containing different surface and sub-surface anomalies, such as open cracks and open and closed notches of different sizes and depths. A practical experimental setup is developed, where lock-in and pulsed thermography (LT and PT, respectively) techniques are used to establish a dataset of thermal images for three different mockups. Data cubes are constructed by stacking up the temporal sequence of thermogram images. After reduction of the data space dimension by means of denoising and dimensionality reduction methods, anomaly detection algorithms are applied to the reduced data cubes. The dimensions of the reduced data spaces are calculated automatically according to a chosen criterion. The results show that, when reduced data cubes are used, anomaly detection algorithms originally developed for hyperspectral data, the well-known Reed and Xiaoli Yu detector (RX) and the regularized adaptive RX (RARX), give good detection performance for both surface and sub-surface defects in a non-supervised way.
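
    The RX detector referenced above scores each pixel by the Mahalanobis distance of its spectral (here, temporal) vector from the background statistics. A minimal global-RX sketch on a synthetic data cube; the cube dimensions and anomaly amplitude are made up for illustration.

```python
import numpy as np

def rx_detector(cube):
    """Global RX anomaly detector.

    cube: (rows, cols, bands) array; returns a (rows, cols) score map of
    Mahalanobis distances from the scene mean.
    """
    r, c, b = cube.shape
    X = cube.reshape(-1, b)
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False) + 1e-6 * np.eye(b)   # regularized covariance
    inv = np.linalg.inv(cov)
    D = X - mu
    scores = np.einsum("ij,jk,ik->i", D, inv, D)       # squared Mahalanobis
    return scores.reshape(r, c)

# background noise with one anomalous pixel at (4, 4)
rng = np.random.default_rng(0)
cube = rng.standard_normal((16, 16, 5))
cube[4, 4, :] += 8.0
scores = rx_detector(cube)
```

    The regularized adaptive variant (RARX) differs mainly in how the covariance is estimated and regularized, not in this basic scoring rule.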

  18. Automatic fault extraction using a modified ant-colony algorithm

    International Nuclear Information System (INIS)

    Zhao, Junsheng; Sun, Sam Zandong

    2013-01-01

    The basis of automatic fault extraction is seismic attributes, such as the coherence cube, in which a fault is usually identified by minimum values. The biggest challenge in automatic fault extraction is noise, including that in the seismic data. However, a fault has better spatial continuity in a certain direction, which makes it quite different from noise. Considering this characteristic, a modified ant-colony algorithm is introduced into automatic fault identification and tracking, where the gradient direction and direction consistency are used as constraints. Numerical model tests show that this method is feasible and effective for automatic fault extraction and noise suppression. Application to field data further illustrates its validity and superiority. (paper)

  19. Development and implementation of an automatic control algorithm for the University of Utah nuclear reactor

    International Nuclear Information System (INIS)

    Crawford, Kevan C.; Sandquist, Gary M.

    1990-01-01

    The emphasis of this work is the development and implementation of an automatic control philosophy which uses the classical operational philosophies as a foundation. Three control algorithms were derived based on various simplifying assumptions. Two of the algorithms were tested in computer simulations. After realizing the insensitivity of the system to the simplifications, the most reduced form of the algorithms was implemented on the computer control system at the University of Utah (UNEL). Since the operational philosophies have a higher priority than automatic control, they determine when automatic control may be utilized. Unlike the operational philosophies, automatic control is not concerned with component failures. The object of this philosophy is the movement of absorber rods to produce a requested power. When the current power level is compared to the requested power level, an error may be detected which will require the movement of a control rod to correct it. The automatic control philosophy adds another dimension to the classical operational philosophies. Using this philosophy, normal operator interactions with the computer are limited to run parameters such as power, period, and run time. This eliminates subjective judgements, objective judgements under pressure, and distractions to the operator, and ensures the reactor will be operated in a safe and controlled manner while providing reproducible operations

  20. Development of an automatic identification algorithm for antibiogram analysis

    OpenAIRE

    Costa, LFR; Eduardo Silva; Noronha, VT; Ivone Vaz-Moreira; Olga C Nunes; de Andrade, MM

    2015-01-01

    Routinely, diagnostic and microbiology laboratories perform antibiogram analyses, which can present some difficulties leading to misreadings and intra- and inter-reader deviations. An Automatic Identification Algorithm (AIA) has been proposed as a solution to overcome some issues associated with the disc diffusion method, which is the main goal of this work. AIA allows automatic scanning of inhibition zones obtained by antibiograms. More than 60 environmental isolates were tested using suscepti...

  1. Automatic QRS complex detection algorithm designed for a novel wearable, wireless electrocardiogram recording device

    DEFF Research Database (Denmark)

    Saadi, Dorthe Bodholt; Egstrup, Kenneth; Branebjerg, Jens

    2012-01-01

    We have designed and optimized an automatic QRS complex detection algorithm for electrocardiogram (ECG) signals recorded with the DELTA ePatch platform. The algorithm is able to automatically switch between single-channel and multi-channel analysis mode. This preliminary study includes data from ...

  2. Automatic Regionalization Algorithm for Distributed State Estimation in Power Systems: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Dexin; Yang, Liuqing; Florita, Anthony; Alam, S.M. Shafiul; Elgindy, Tarek; Hodge, Bri-Mathias

    2016-08-01

    The deregulation of the power system and the incorporation of generation from renewable energy sources necessitates faster state estimation in the smart grid. Distributed state estimation (DSE) has become a promising and scalable solution to this urgent demand. In this paper, we investigate regionalization algorithms for the power system, a necessary step before distributed state estimation can be performed. To the best of the authors' knowledge, this is the first investigation of automatic regionalization (AR). We propose three spectral-clustering-based AR algorithms. Simulations show that our proposed algorithms outperform the two investigated manual regionalization cases. With the help of the AR algorithms, we also show how the number of regions impacts the accuracy and convergence speed of the DSE and conclude that the number of regions needs to be chosen carefully to improve the convergence speed of DSEs.
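
    A spectral-clustering regionalization can be sketched in its simplest two-region form: partition the grid graph by the sign of the Fiedler vector of the graph Laplacian. The toy weight matrix below is an illustrative assumption (two tightly coupled groups of buses joined by one weak tie), not a power-system case from the paper.

```python
import numpy as np

def spectral_bipartition(W):
    """Two-way spectral partition of a weighted undirected graph.

    W: symmetric (n, n) adjacency/weight matrix (e.g. line admittances).
    Returns one boolean region label per node, taken from the sign of the
    Fiedler vector (eigenvector of the second-smallest Laplacian eigenvalue).
    """
    L = np.diag(W.sum(axis=1)) - W            # graph Laplacian
    vals, vecs = np.linalg.eigh(L)            # eigenvalues in ascending order
    fiedler = vecs[:, 1]
    return fiedler >= 0.0

# two densely connected 3-node groups joined by one weak link
W = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5)]:
    W[i, j] = W[j, i] = 1.0
W[2, 3] = W[3, 2] = 0.1                       # weak inter-region tie
labels = spectral_bipartition(W)
```

    For k regions, one would instead cluster the first k Laplacian eigenvectors (e.g. with k-means), which is the usual spectral clustering recipe.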

  3. A study and implementation of algorithm for automatic ECT result comparison

    International Nuclear Information System (INIS)

    Jang, You Hyun; Nam, Min Woo; Kim, In Chul; Joo, Kyung Mun; Kim, Jong Seog

    2012-01-01

    An automatic ECT result comparison algorithm was developed and implemented in a computer language to remove human error from the manual comparison of large amounts of data. The structures of the two ECT programs (Eddy net and ECT IDS), each of which has a unique file structure, were analyzed in order to open the files and upload the data into PC memory. The comparison algorithm was defined graphically for easy conversion into a PC programming language. The automatic result program was written in the C language, which is suitable for future code management, supports an object-oriented programming structure, and allows fast development. The program can export results to MS Excel files, which is useful for additional analysis with external software, and provides intuitive, user-friendly result visualization through color mapping that helps analysts work efficiently

  4. A study and implementation of algorithm for automatic ECT result comparison

    Energy Technology Data Exchange (ETDEWEB)

    Jang, You Hyun; Nam, Min Woo; Kim, In Chul; Joo, Kyung Mun; Kim, Jong Seog [Central Research Institute, Daejeon (Korea, Republic of)

    2012-10-15

    An automatic ECT result comparison algorithm was developed and implemented in a computer language to remove human error from the manual comparison of large amounts of data. The structures of the two ECT programs (Eddy net and ECT IDS), each of which has a unique file structure, were analyzed in order to open the files and upload the data into PC memory. The comparison algorithm was defined graphically for easy conversion into a PC programming language. The automatic result program was written in the C language, which is suitable for future code management, supports an object-oriented programming structure, and allows fast development. The program can export results to MS Excel files, which is useful for additional analysis with external software, and provides intuitive, user-friendly result visualization through color mapping that helps analysts work efficiently.

  5. Automatic Recognition Method for Optical Measuring Instruments Based on Machine Vision

    Institute of Scientific and Technical Information of China (English)

    SONG Le; LIN Yuchi; HAO Liguo

    2008-01-01

    Based on a comprehensive study of various algorithms, the automatic recognition of traditional ocular optical measuring instruments is realized. Taking a universal tool microscope (UTM) lens view image as an example, a 2-layer automatic recognition model for data reading is established after adopting a series of pre-processing algorithms. This model is an optimal combination of the correlation-based template matching method and a concurrent back-propagation (BP) neural network. Multiple complementary feature extraction is used to generate the eigenvectors of the concurrent network. In order to improve fault-tolerance capacity, rotation-invariant features based on Zernike moments are extracted from digit characters, and a 4-dimensional group of outline features is also obtained. Moreover, the operating time and reading accuracy can be adjusted dynamically by setting the threshold value. The experimental result indicates that the newly developed algorithm has optimal recognition precision and working speed. The average reading ratio can achieve 97.23%. The recognition method can automatically obtain the results of optical measuring instruments rapidly and stably without modifying their original structure, which meets the application requirements.
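
    The correlation-based template matching mentioned above can be sketched with plain normalized cross-correlation. For a self-contained example, the template here is simply a patch cut from a random image rather than a digit glyph; the brute-force loop is for clarity, not speed.

```python
import numpy as np

def match_template(image, template):
    """Normalized cross-correlation; returns the top-left (row, col) of the
    best match of `template` in `image`, plus the correlation score."""
    th, tw = template.shape
    t = template - template.mean()
    tnorm = np.sqrt((t ** 2).sum())
    best, best_pos = -np.inf, (0, 0)
    for i in range(image.shape[0] - th + 1):
        for j in range(image.shape[1] - tw + 1):
            win = image[i:i + th, j:j + tw]
            w = win - win.mean()
            denom = np.sqrt((w ** 2).sum()) * tnorm
            if denom == 0:
                continue                       # flat window: undefined score
            score = (w * t).sum() / denom
            if score > best:
                best, best_pos = score, (i, j)
    return best_pos, best

rng = np.random.default_rng(0)
image = rng.random((20, 20))
template = image[7:12, 3:8].copy()        # the patch plays the "digit" role
pos, score = match_template(image, template)
```

    In the full system, the matching stage proposes digit locations and the BP network, fed with Zernike-moment and outline features, confirms the reading.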

  6. Automatic Laser Pointer Detection Algorithm for Environment Control Device Systems Based on Template Matching and Genetic Tuning of Fuzzy Rule-Based Systems

    Directory of Open Access Journals (Sweden)

    F.

    2012-04-01

    Full Text Available In this paper we propose a new approach for laser-based environment device control systems based on the automatic design of a Fuzzy Rule-Based System for laser pointer detection. The idea is to improve on the success rate of previous approaches, decreasing as much as possible the false offs, i.e., the detection of a false laser spot (since this could lead to dangerous situations), while increasing the success rate on images containing a laser spot. To this end, we propose to analyze both the morphology and the color of a laser spot image together, thus developing a new robust algorithm. Genetic Fuzzy Systems have also been employed to improve the laser spot detection system by means of a fine tuning of the involved membership functions, thus reducing the system false offs, which is the main objective in this problem. The system presented in this paper makes use of a Fuzzy Rule-Based System adjusted by a Genetic Algorithm, which, based on laser morphology and color analysis, shows a better success rate than previous approaches.

  7. Multiobjective Optimal Algorithm for Automatic Calibration of Daily Streamflow Forecasting Model

    Directory of Open Access Journals (Sweden)

    Yi Liu

    2016-01-01

    Full Text Available A single objective function cannot describe the characteristics of a complicated hydrologic system. Consequently, it stands to reason that multiple objective functions are needed for calibration of a hydrologic model. Multiobjective algorithms based on nondominated sorting are employed to solve this multiobjective optimization problem. In this paper, a novel multiobjective optimization method based on differential evolution with adaptive Cauchy mutation and chaos searching (MODE-CMCS) is proposed to optimize a daily streamflow forecasting model. Besides, to enhance the diversity of the Pareto solutions, a more precise crowding distance assignment is presented in this paper. Furthermore, the traditional generalized spread metric (SP) is sensitive to the size of the Pareto set, so a novel diversity performance metric, which is independent of the Pareto set size, is put forward in this research. The efficacy of the new MODE-CMCS algorithm is compared with that of the nondominated sorting genetic algorithm II (NSGA-II) on a daily streamflow forecasting model based on a support vector machine (SVM). The results verify that the performance of MODE-CMCS is superior to that of NSGA-II for automatic calibration of the hydrologic model.
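
    The crowding distance assignment that MODE-CMCS refines originates in NSGA-II. The standard version, which the paper improves upon, can be sketched as follows; the four-point front is an arbitrary example.

```python
import numpy as np

def crowding_distance(F):
    """NSGA-II crowding distance for one front of objective vectors.

    F: (n, m) objective values. Boundary solutions get infinite distance;
    interior ones accumulate the normalized gap between their neighbours
    along each objective.
    """
    n, m = F.shape
    d = np.zeros(n)
    for k in range(m):
        order = np.argsort(F[:, k])
        span = F[order[-1], k] - F[order[0], k]
        d[order[0]] = d[order[-1]] = np.inf          # always keep the extremes
        if span > 0:
            gaps = (F[order[2:], k] - F[order[:-2], k]) / span
            d[order[1:-1]] += gaps
    return d

# four points on a simple 2-objective front
F = np.array([[0.0, 1.0], [0.2, 0.7], [0.6, 0.3], [1.0, 0.0]])
d = crowding_distance(F)
```

    Larger distances mark less crowded solutions, which selection then favours to preserve the spread of the Pareto front.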

  8. Semi-automatic mapping of linear-trending bedforms using 'Self-Organizing Maps' algorithm

    Science.gov (United States)

    Foroutan, M.; Zimbelman, J. R.

    2017-09-01

    Increased application of high-resolution spatial data, such as high-resolution satellite or Unmanned Aerial Vehicle (UAV) images from Earth, as well as High Resolution Imaging Science Experiment (HiRISE) images from Mars, makes it necessary to develop automated techniques capable of extracting detailed geomorphologic elements from such large data sets. Model validation by repeated images in environmental management studies, such as studies of climate-related change, as well as increasing access to high-resolution satellite images, underlines the demand for detailed automatic image-processing techniques in remote sensing. This study presents a methodology based on an unsupervised Artificial Neural Network (ANN) algorithm, known as Self-Organizing Maps (SOM), to achieve the semi-automatic extraction of linear features with small footprints on satellite images. SOM is based on competitive learning and is efficient for handling huge data sets. We applied the SOM algorithm to high-resolution satellite images of Earth and Mars (Quickbird, Worldview and HiRISE) in order to facilitate and speed up image analysis and to improve the accuracy of the results. About 98% overall accuracy and a 0.001 quantization error in the recognition of small linear-trending bedforms demonstrate a promising framework.
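
    A minimal SOM in the spirit described above, competitive learning with a shrinking Gaussian neighbourhood, can be sketched as follows. This is a generic 1-D map trained on random 3-D data; the study's imagery pipeline, map size, and training schedule are not reproduced, and the quantization error reported in the abstract is on their data, not this toy.

```python
import numpy as np

def train_som(X, grid=8, n_iter=500, lr0=0.5, sigma0=3.0, seed=0):
    """Minimal 1-D SOM: for each sample, find the best-matching unit and
    pull it and its map neighbours toward the sample, with learning rate
    and neighbourhood width decaying over time."""
    rng = np.random.default_rng(seed)
    W = rng.random((grid, X.shape[1]))
    idx = np.arange(grid)
    for t in range(n_iter):
        x = X[rng.integers(len(X))]                  # random training sample
        bmu = np.argmin(((W - x) ** 2).sum(axis=1))  # best-matching unit
        lr = lr0 * np.exp(-t / n_iter)
        sigma = sigma0 * np.exp(-3.0 * t / n_iter)   # shrinking neighbourhood
        h = np.exp(-((idx - bmu) ** 2) / (2.0 * sigma ** 2))
        W += lr * h[:, None] * (x - W)               # neighbourhood update
    return W

def quantization_error(X, W):
    """Mean distance from each sample to its best-matching unit."""
    d = np.sqrt(((X[:, None, :] - W[None, :, :]) ** 2).sum(axis=2))
    return d.min(axis=1).mean()

rng = np.random.default_rng(1)
X = rng.random((200, 3))
W = train_som(X, grid=8)
qe = quantization_error(X, W)
```

    For imagery, each pixel's feature vector (or patch) would be a sample, and the trained map's units become the classes used to label linear bedforms.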

  9. Cryptographic protocol security analysis based on bounded constructing algorithm

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    An efficient approach to analyzing cryptographic protocols is to develop automatic analysis tools based on formal methods. However, this approach has encountered the problem of high computational complexity, because protocol participants are arbitrary, their message structures are complex, and their executions are concurrent. We propose an efficient automatic verification algorithm for analyzing cryptographic protocols based on the recently proposed Cryptographic Protocol Algebra (CPA) model, in which algebraic techniques are used to simplify the description of cryptographic protocols and their executions. Redundant states generated in the analysis process are much reduced by introducing a new algebraic technique called the Universal Polynomial Equation, and the algorithm can be used to verify the correctness of protocols in an infinite state space. We have implemented an efficient automatic analysis tool for cryptographic protocols, called ACT-SPA, based on this algorithm, and used the tool to check more than 20 cryptographic protocols. The analysis results show that this tool is more efficient, and an attack instance not reported previously was found by using this tool.

  10. Progressive sampling-based Bayesian optimization for efficient and automatic machine learning model selection.

    Science.gov (United States)

    Zeng, Xueqiang; Luo, Gang

    2017-12-01

    Machine learning is broadly used for clinical data analysis. Before training a model, a machine learning algorithm must be selected. Also, the values of one or more model parameters termed hyper-parameters must be set. Selecting algorithms and hyper-parameter values requires advanced machine learning knowledge and many labor-intensive manual iterations. To lower the bar to machine learning, miscellaneous automatic selection methods for algorithms and/or hyper-parameter values have been proposed. Existing automatic selection methods are inefficient on large data sets. This poses a challenge for using machine learning in the clinical big-data era. To address the challenge, this paper presents progressive sampling-based Bayesian optimization, an efficient and automatic selection method for both algorithms and hyper-parameter values. We report an implementation of the method. We show that, compared to a state-of-the-art automatic selection method, our method can significantly reduce search time, classification error rate, and the standard deviation of the error rate due to randomization. This is major progress towards enabling fast turnaround in identifying the high-quality solutions required by many machine learning-based clinical data analysis tasks.

  11. [An automatic color correction algorithm for digital human body sections].

    Science.gov (United States)

    Zhuge, Bin; Zhou, He-qin; Tang, Lei; Lang, Wen-hui; Feng, Huan-qing

    2005-06-01

    To find a new approach to improve the uniformity of color parameters for the image data of serial sections of the human body, an automatic color correction algorithm in the RGB color space, based on a standard CMYK color chart, was proposed. The gray part of the color chart was automatically segmented from every original image, and fifteen gray values were obtained. The transformation function between the measured gray values and the standard gray values of the color chart, and the corresponding lookup table, were obtained. In RGB color space, the colors of the images were corrected according to the lookup table. The color of the original Chinese Digital Human Girl No. 1 (CDH-G1) database was corrected using the algorithm in Matlab 6.5, taking 13.475 s to process one picture on a personal computer. Using the algorithm, the color of the original database is corrected automatically and quickly, and the uniformity of the color parameters of the corrected dataset is improved.
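
    The gray-chart lookup-table correction can be sketched with a piecewise-linear LUT built from the fifteen gray patches: measured chart values map to their standard values, and every other intensity is interpolated. The measured values below are invented for illustration; the paper derives them by segmenting the chart in each image.

```python
import numpy as np

# Hypothetical measured gray-chart values (as imaged) and the standard
# values they should map to; fifteen patches as in the paper.
measured = np.linspace(30, 210, 15)
standard = np.linspace(0, 255, 15)

# Build a 256-entry lookup table by piecewise-linear interpolation of the
# measured-to-standard transformation (np.interp clamps at the endpoints).
lut = np.interp(np.arange(256), measured, standard).astype(np.uint8)

def correct(channel):
    """Apply the LUT to one 8-bit channel (R, G, or B)."""
    return lut[channel]

img = np.array([[30, 120, 210]], dtype=np.uint8)
out = correct(img)
```

    Because the LUT is indexed directly by pixel value, correcting a whole RGB image is just three array lookups, which is why the per-image cost stays low.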

  12. Differential evolution algorithm based automatic generation control for interconnected power systems with

    Directory of Open Access Journals (Sweden)

    Banaja Mohanty

    2014-09-01

    Full Text Available This paper presents the design and performance analysis of Differential Evolution (DE) algorithm based Proportional-Integral (PI) and Proportional-Integral-Derivative (PID) controllers for Automatic Generation Control (AGC) of an interconnected power system. Initially, a two-area thermal system with governor dead-band nonlinearity is considered for the design and analysis. In the proposed approach, the design problem is formulated as an optimization problem, and DE is employed to search for the optimal controller parameters. Three different objective functions are used for the design. The superiority of the proposed approach is shown by comparing the results with a recently published Craziness-based Particle Swarm Optimization (CPSO) technique for the same interconnected power system. It is noticed that the dynamic performance of the DE-optimized PI controller is better than that of the CPSO-optimized PI controller. Additionally, the controller parameters are tuned at different loading conditions so that an adaptive gain scheduling control strategy can be employed. The study is further extended to a more realistic network of a two-area six-unit system with different power generating units, such as thermal, hydro, wind and diesel units, considering boiler dynamics for the thermal plants, Generation Rate Constraint (GRC) and Governor Dead Band (GDB) nonlinearity.
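
    The DE search engine itself is standard. Below is a DE/rand/1/bin sketch minimizing a toy stand-in for the AGC cost; in the paper, the objective would be a time-domain performance index of the simulated frequency and tie-line responses, which is not reproduced here, and all control parameters (population size, F, CR) are generic defaults.

```python
import numpy as np

def differential_evolution(f, bounds, np_pop=20, F=0.8, CR=0.9,
                           n_gen=100, seed=0):
    """Classic DE/rand/1/bin minimizer over box constraints."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    d = len(lo)
    pop = lo + rng.random((np_pop, d)) * (hi - lo)
    cost = np.array([f(x) for x in pop])
    for _ in range(n_gen):
        for i in range(np_pop):
            # pick three distinct individuals other than i
            a, b, c = rng.choice([k for k in range(np_pop) if k != i],
                                 size=3, replace=False)
            mutant = np.clip(pop[a] + F * (pop[b] - pop[c]), lo, hi)
            cross = rng.random(d) < CR
            cross[rng.integers(d)] = True        # ensure at least one gene
            trial = np.where(cross, mutant, pop[i])
            tc = f(trial)
            if tc <= cost[i]:                    # greedy one-to-one selection
                pop[i], cost[i] = trial, tc
    return pop[cost.argmin()], cost.min()

# toy objective standing in for the AGC cost (e.g. an ITAE-style index):
# a shifted sphere with its minimum at (1, -2, 0.5)
target = np.array([1.0, -2.0, 0.5])
sphere = lambda x: float(((x - target) ** 2).sum())
bounds = np.array([[-5.0, 5.0]] * 3)
best, best_cost = differential_evolution(sphere, bounds)
```

    Replacing `sphere` with a controller-in-the-loop simulation cost turns this loop into the gain-tuning procedure the abstract describes.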

  13. Design of Automatic Extraction Algorithm of Knowledge Points for MOOCs

    Directory of Open Access Journals (Sweden)

    Haijian Chen

    2015-01-01

    Full Text Available In recent years, Massive Open Online Courses (MOOCs) have become very popular among college students and have a powerful impact on academic institutions. In the MOOC environment, knowledge discovery and knowledge sharing are very important, and they are currently often achieved by ontology techniques. In building an ontology, automatic extraction technology is crucial. Because general text-mining algorithms do not perform well on online course material, we designed an automatic extraction of course knowledge points (AECKP) algorithm for online courses. It includes document classification, Chinese word segmentation, and POS tagging for each document. The Vector Space Model (VSM) is used to calculate similarity, and weights are designed to optimize the TF-IDF output values; the terms with the highest scores are selected as knowledge points. Course documents of “C programming language” were selected for the experiment in this study. The results show that the proposed approach achieves satisfactory accuracy and recall rates.
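
    The TF-IDF scoring at the heart of the candidate selection can be sketched on a toy corpus (the real pipeline adds Chinese word segmentation, POS tagging and VSM-based weight optimization, which are omitted here):

```python
import math
from collections import Counter

# Three toy "course documents"; real input would be segmented course text.
docs = [
    "pointer declaration and pointer arithmetic in c",
    "array declaration and array indexing in c",
    "function definition and function call in c",
]
tf = [Counter(d.split()) for d in docs]        # term frequencies per document
df = Counter(w for c in tf for w in c)         # document frequencies
n = len(docs)

def tfidf(word, i):
    """Normalized term frequency times inverse document frequency."""
    return tf[i][word] / sum(tf[i].values()) * math.log(n / df[word])

# Rank the terms of document 0; top scorers are candidate knowledge points.
scores = {w: tfidf(w, 0) for w in tf[0]}
top = max(scores, key=scores.get)              # "pointer" dominates here
```

    Stop words shared by every document (here "and", "in", "c") get an IDF of zero and drop out automatically, which is why the distinctive term surfaces as the knowledge-point candidate.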

  14. Refining Automatically Extracted Knowledge Bases Using Crowdsourcing

    OpenAIRE

    Li, Chunhua; Zhao, Pengpeng; Sheng, Victor S.; Xian, Xuefeng; Wu, Jian; Cui, Zhiming

    2017-01-01

    Machine-constructed knowledge bases often contain noisy and inaccurate facts. There exists significant work in developing automated algorithms for knowledge base refinement. Automated approaches improve the quality of knowledge bases but are far from perfect. In this paper, we leverage crowdsourcing to improve the quality of automatically extracted knowledge bases. As human labelling is costly, an important research challenge is how we can use limited human resources to maximize the quality i...

  15. The design of control algorithm for automatic start-up model of HWRR

    International Nuclear Information System (INIS)

    Guo Wenqi

    1990-01-01

    The design of the control algorithm for the automatic start-up model of the HWRR (Heavy Water Research Reactor), the calculation of the μ value, and the application of the digital compensator are described. Finally, the flow diagrams of the automatic start-up and digital compensator programs for the HWRR are given.

  16. Semi-Automatic Anatomical Tree Matching for Landmark-Based Elastic Registration of Liver Volumes

    Directory of Open Access Journals (Sweden)

    Klaus Drechsler

    2010-01-01

    Full Text Available One promising approach to registering liver volume acquisitions is based on the branching points of the vessel trees as anatomical landmarks inherently available in the liver. Automated tree matching algorithms have been proposed to automatically find pair-wise correspondences between two vessel trees. However, to the best of our knowledge, none of the existing automatic methods is completely error free. After a review of the current literature and methodologies on the topic, we propose an efficient interaction method that can be employed to support tree matching algorithms with important pre-selected correspondences, or to manually correct wrongly matched nodes after an automatic matching. We used this method in combination with a promising automatic tree matching algorithm, also presented in this work. The proposed method was evaluated by 4 participants on a CT dataset from which multiple artificial datasets were derived.

  17. Automated Vectorization of Decision-Based Algorithms

    Science.gov (United States)

    James, Mark

    2006-01-01

    Virtually all existing vectorization algorithms are designed to analyze only the numeric properties of an algorithm and distribute those elements across multiple processors. This software advances the state of the practice because it is the only known system, at the time of this reporting, that takes high-level statements, analyzes them for their decision properties, and converts them to a form that allows them to be executed automatically in parallel. The software takes a high-level source program that describes a complex decision-based condition and rewrites it as a disjunctive set of component Boolean relations that can then be executed in parallel. This is important because parallel architectures are becoming more commonplace in conventional systems and have always been present in NASA flight systems. This technology allows one to take existing condition-based code and automatically vectorize it so that it naturally decomposes across parallel architectures.

  18. A novel algorithm for automatic localization of human eyes

    Institute of Scientific and Technical Information of China (English)

    Liang Tao (陶亮); Juanjuan Gu (顾涓涓); Zhenquan Zhuang (庄镇泉)

    2003-01-01

    Based on geometrical facial features and image segmentation, we present a novel algorithm for automatic localization of human eyes in grayscale or color still images with complex backgrounds. Firstly, a determination criterion for eye location is established from prior knowledge of geometrical facial features. Secondly, a range of threshold values that would separate eye blocks from others in a segmented face image (i.e., a binary image) is estimated. Thirdly, with the progressive increase of the threshold by an appropriate step within that range, once two eye blocks appear in the segmented image they are detected by the determination criterion of eye location. Finally, the 2D correlation coefficient is used as a symmetry similarity measure to check the validity of the two detected eyes. To avoid background interference, skin color segmentation can be applied to enhance the accuracy of eye detection. The experimental results demonstrate the high efficiency of the algorithm and a high correct-localization rate.
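
    The progressive-threshold step can be illustrated on a toy grayscale array: raise the threshold in steps until segmentation yields exactly two connected dark blocks. All pixel values and the step size below are hypothetical.

```python
# Toy 5x7 "face": two dark patches stand in for eyes, one for a mouth.
img = [
    [200, 200, 200, 200, 200, 200, 200],
    [200,  30,  30, 200,  40,  40, 200],
    [200,  30,  30, 200,  40,  40, 200],
    [200, 200, 200, 200, 200, 200, 200],
    [200, 200, 200, 120, 200, 200, 200],
]

def components(mask):
    """Count 4-connected components of True cells (iterative flood fill)."""
    seen, comps = set(), 0
    rows, cols = len(mask), len(mask[0])
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and (r, c) not in seen:
                comps += 1
                stack = [(r, c)]
                while stack:
                    y, x = stack.pop()
                    if (y, x) in seen or not (0 <= y < rows and 0 <= x < cols) \
                            or not mask[y][x]:
                        continue
                    seen.add((y, x))
                    stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return comps

def find_eye_threshold(img, step=10):
    """Raise the threshold stepwise; stop when exactly two blocks appear."""
    for t in range(0, 256, step):
        mask = [[v < t for v in row] for row in img]
        if components(mask) == 2:
            return t
    return None

t = find_eye_threshold(img)   # first threshold exposing both eye blocks
```

    The real algorithm would then apply its geometric criterion and symmetry check to the two blocks; here the sweep alone already isolates them before the darker "mouth" region joins in.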

  19. Comparison of feature and classifier algorithms for online automatic sleep staging based on a single EEG signal

    NARCIS (Netherlands)

    Radha, M.; Garcia Molina, G.; Poel, M.; Tononi, G.

    2014-01-01

    Automatic sleep staging on an online basis has recently emerged as a research topic motivated by fundamental sleep research. The aim of this paper is to find optimal signal processing methods and machine learning algorithms to achieve online sleep staging on the basis of a single EEG signal. The

  20. Modular Algorithm Testbed Suite (MATS): A Software Framework for Automatic Target Recognition

    Science.gov (United States)

    2017-01-01

    NAVAL SURFACE WARFARE CENTER PANAMA CITY DIVISION, PANAMA CITY, FL 32407-7001. Technical Report NSWC PCD TR-2017-004, 31-01-2017: Modular Algorithm Testbed Suite (MATS): A Software Framework for Automatic Target Recognition. ... a flexible platform to facilitate the development and testing of ATR algorithms. To that end, NSWC PCD has created the Modular Algorithm Testbed Suite (MATS).

  1. Automatic generation control of multi-area power systems with diverse energy sources using Teaching Learning Based Optimization algorithm

    Directory of Open Access Journals (Sweden)

    Rabindra Kumar Sahu

    2016-03-01

    Full Text Available This paper presents the design and analysis of a Proportional-Integral-Double Derivative (PIDD) controller for Automatic Generation Control (AGC) of multi-area power systems with diverse energy sources using the Teaching Learning Based Optimization (TLBO) algorithm. At first, a two-area reheat thermal power system with appropriate Generation Rate Constraint (GRC) is considered. The design problem is formulated as an optimization problem and TLBO is employed to optimize the parameters of the PIDD controller. The superiority of the proposed TLBO based PIDD controller is demonstrated by comparing the results with recently published optimization techniques such as hybrid Firefly Algorithm and Pattern Search (hFA-PS), Firefly Algorithm (FA), Bacteria Foraging Optimization Algorithm (BFOA), Genetic Algorithm (GA) and conventional Ziegler–Nichols (ZN) tuning for the same interconnected power system. The proposed approach is then extended to a two-area power system with diverse sources of generation such as thermal, hydro, wind and diesel units. The system model includes boiler dynamics, GRC and Governor Dead Band (GDB) non-linearity. It is observed from simulation results that the proposed approach provides better dynamic responses than results recently published in the literature. Further, the study is extended to a three unequal-area thermal power system with different controllers in each area, and the results are compared with an FA-optimized PID controller published for the same system. Finally, sensitivity analysis is performed by varying the system parameters and operating load conditions in the range of ±25% from their nominal values to test the robustness.

  2. Automatic boiling water reactor control rod pattern design using particle swarm optimization algorithm and local search

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Cheng-Der, E-mail: jdwang@iner.gov.tw [Nuclear Engineering Division, Institute of Nuclear Energy Research, No. 1000, Wenhua Rd., Jiaan Village, Longtan Township, Taoyuan County 32546, Taiwan, ROC (China); Lin, Chaung [National Tsing Hua University, Department of Engineering and System Science, 101, Section 2, Kuang Fu Road, Hsinchu 30013, Taiwan (China)

    2013-02-15

    Highlights: ► The PSO algorithm was adopted to automatically design a BWR CRP. ► A local search procedure was added to improve the result of the PSO algorithm. ► The results show that the obtained CRP is as good as that in the previous work. -- Abstract: This study developed a method for the automatic design of a boiling water reactor (BWR) control rod pattern (CRP) using the particle swarm optimization (PSO) algorithm. The PSO algorithm is more random than the rank-based ant system (RAS) that was used to solve the same BWR CRP design problem in previous work. In addition, a local search procedure was used to make improvements after PSO by adding the single control rod (CR) effect. The design goal was to obtain a CRP such that the thermal limits and shutdown margin satisfy the design requirements and the cycle length, which is implicitly controlled by the axial power distribution, is acceptable. The results showed that the same acceptable CRP found in the previous work could be obtained.
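
    The PSO update loop can be sketched on a continuous toy objective. Note this is only the optimizer skeleton: the actual CRP problem is discrete, and its real objective scores thermal limits, shutdown margin and axial power shape; the inertia/acceleration constants and bounds here are illustrative.

```python
import random

random.seed(1)

def pso(cost, bounds, n=15, iters=60, w=0.7, c1=1.5, c2=1.5):
    """Plain global-best PSO with position clamping to the given bounds."""
    dim = len(bounds)
    xs = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n)]
    vs = [[0.0] * dim for _ in range(n)]
    pbest = [x[:] for x in xs]                 # per-particle best positions
    gbest = min(pbest, key=cost)[:]            # swarm-wide best position
    for _ in range(iters):
        for i in range(n):
            for j in range(dim):
                vs[i][j] = (w * vs[i][j]
                            + c1 * random.random() * (pbest[i][j] - xs[i][j])
                            + c2 * random.random() * (gbest[j] - xs[i][j]))
                xs[i][j] = min(max(xs[i][j] + vs[i][j],
                                   bounds[j][0]), bounds[j][1])
            if cost(xs[i]) < cost(pbest[i]):
                pbest[i] = xs[i][:]
                if cost(pbest[i]) < cost(gbest):
                    gbest = pbest[i][:]
    return gbest

# Stand-in objective with optimum at (3, -1); a CRP design would instead
# evaluate a core simulator on the candidate rod pattern.
best = pso(lambda p: (p[0] - 3.0) ** 2 + (p[1] + 1.0) ** 2,
           [(-5.0, 5.0), (-5.0, 5.0)])
```

    A local search of the kind the study adds would then perturb `best` one control rod at a time and keep any improvement.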

  3. Automatic speech signal segmentation based on the innovation adaptive filter

    Directory of Open Access Journals (Sweden)

    Makowski Ryszard

    2014-06-01

    Full Text Available Speech segmentation is an essential stage in designing automatic speech recognition systems, and one can find several algorithms proposed in the literature. It is a difficult problem, as speech is immensely variable. The aim of the authors’ studies was to design an algorithm that could be employed at the stage of automatic speech recognition. This would make it possible to avoid some problems related to speech signal parametrization. Posing the problem in such a way requires the algorithm to be capable of working in real time. The only such algorithm was proposed by Tyagi et al. (2006), and it is a modified version of Brandt’s algorithm. The article presents a new algorithm for unsupervised automatic speech signal segmentation. It performs segmentation without access to information about the phonetic content of the utterances, relying exclusively on second-order statistics of the speech signal. The starting point for the proposed method is the time-varying Schur coefficients of an innovation adaptive filter. The Schur algorithm is known to be fast, precise, stable and capable of rapidly tracking changes in second-order signal statistics. A transition from one phoneme to another in the speech signal always indicates a change in signal statistics caused by vocal tract changes. In order to allow for the properties of human hearing, detection of inter-phoneme boundaries is performed based on statistics defined on the mel spectrum determined from the reflection coefficients. The paper presents the structure of the algorithm, defines its properties, lists parameter values, describes detection efficiency results, and compares them with those for another algorithm. The obtained segmentation results are satisfactory.

  4. Subgrouping Automata: automatic sequence subgrouping using phylogenetic tree-based optimum subgrouping algorithm.

    Science.gov (United States)

    Seo, Joo-Hyun; Park, Jihyang; Kim, Eun-Mi; Kim, Juhan; Joo, Keehyoung; Lee, Jooyoung; Kim, Byung-Gee

    2014-02-01

    Sequence subgrouping for a given sequence set can enable various informative tasks such as the functional discrimination of sequence subsets and the functional inference of unknown sequences. Because the identity threshold for sequence subgrouping may vary with the given sequence set, it is highly desirable to construct a robust subgrouping algorithm which automatically identifies an optimal identity threshold and generates subgroups for a given sequence set. To this end, an automatic sequence subgrouping method, named 'Subgrouping Automata' (SA), was constructed. First, the tree analysis module analyzes the structure of the tree and calculates all possible subgroups in each node. The sequence similarity analysis module calculates the average sequence similarity for all subgroups in each node. The representative sequence generation module finds a representative sequence using profile analysis and self-scoring for each subgroup. For all nodes, average sequence similarities are calculated, and 'Subgrouping Automata' searches for the node showing the statistically maximal increase in sequence similarity using Student's t-value. The node showing the maximum t-value, which gives the most significant difference in average sequence similarity between two adjacent nodes, is determined as the optimum subgrouping node in the phylogenetic tree. Further analysis showed that the optimum subgrouping node from SA prevents under-subgrouping and over-subgrouping. Copyright © 2013. Published by Elsevier Ltd.

  5. GASPACHO: a generic automatic solver using proximal algorithms for convex huge optimization problems

    Science.gov (United States)

    Goossens, Bart; Luong, Hiêp; Philips, Wilfried

    2017-08-01

    Many inverse problems (e.g., demosaicking, deblurring, denoising, image fusion, HDR synthesis) share various similarities: degradation operators are often modeled by a specific data fitting function, while image prior knowledge (e.g., sparsity) is incorporated by additional regularization terms. In this paper, we investigate automatic algorithmic techniques for evaluating proximal operators. These algorithmic techniques also enable efficient calculation of adjoints of linear operators in a general matrix-free setting. In particular, we study the simultaneous-direction method of multipliers (SDMM) and the parallel proximal algorithm (PPXA) solvers and show that the automatically derived implementations are well suited for both single-GPU and multi-GPU processing. We demonstrate this approach for an Electron Microscopy (EM) deconvolution problem.

  6. Automatic optimization of a nuclear reactor reload using the algorithm Ant-Q

    International Nuclear Information System (INIS)

    Machado, Liana; Schirru, Roberto

    2002-01-01

    The nuclear fuel reload optimization is an NP-complete combinatorial optimization problem. For decades this problem was solved using experts' knowledge. Since the eighties, however, there have been efforts to automate the fuel reload, and the more recent ones show the efficiency of Genetic Algorithms (GA) on this problem. Following this trend, our aim is to optimize the nuclear fuel reload using Ant-Q, an algorithm based on artificial ant theory. Ant-Q's results on the Traveling Salesman Problem, which is conceptually similar to fuel reload, are better than GA's. Ant-Q was tested in a real application, on the cycle 7 reload of Angra I. Comparing Ant-Q's result with the GA's, it can be verified that the former algorithm, even without a local heuristic, is a valid technique to solve the nuclear fuel reload problem, as its superiority over the GA on Angra I shows. (author)

  7. Automatic modulation classification principles, algorithms and applications

    CERN Document Server

    Zhu, Zhechen

    2014-01-01

    Automatic Modulation Classification (AMC) has been a key technology in many military, security, and civilian telecommunication applications for decades. In military and security applications, modulation often serves as another level of encryption; in modern civilian applications, multiple modulation types can be employed by a signal transmitter to control the data rate and link reliability. This book offers comprehensive documentation of AMC models, algorithms and implementations for successful modulation recognition. It provides an invaluable theoretical and numerical comparison of AMC algo

  8. Seizure detection algorithms based on EMG signals

    DEFF Research Database (Denmark)

    Conradsen, Isa

    Background: The currently used non-invasive seizure detection methods are not reliable. Muscle fibers are directly connected to the nerves, whereby electric signals are generated during activity. Therefore, an alarm system on electromyography (EMG) signals is a theoretical possibility. Objective...... on the amplitude of the signal. The other algorithm was based on information of the signal in the frequency domain, and it focused on synchronisation of the electrical activity in a single muscle during the seizure. Results: The amplitude-based algorithm reliably detected seizures in 2 of the patients, while...... the frequency-based algorithm was efficient for detecting the seizures in the third patient. Conclusion: Our results suggest that EMG signals could be used to develop an automatic seizure detection system. However, different patients might require different types of algorithms/approaches....

  9. Automatic detection of ECG electrode misplacement: a tale of two algorithms

    International Nuclear Information System (INIS)

    Xia, Henian; Garcia, Gabriel A; Zhao, Xiaopeng

    2012-01-01

    Artifacts in an electrocardiogram (ECG) due to electrode misplacement can lead to wrong diagnoses. Various computer methods have been developed for automatic detection of electrode misplacement. Here we reviewed and compared the performance of the two algorithms with the highest accuracies on several databases from PhysioNet. These algorithms were implemented in four models. For clean ECG records with clearly distinguishable waves, the best model produced excellent accuracies (≥ 98.4%) for all misplacements except the LA/LL interchange (87.4%). However, the accuracies were significantly lower for records with noise and arrhythmias. Moreover, when the algorithms were tested on a database that was independent of the training database, the accuracies could be poor. For the worst scenario, the best accuracies for different types of misplacements ranged from 36.1% to 78.4%. A large number of ECGs of various qualities and pathological conditions are collected every day. To improve the quality of health care, the results of this paper call for more robust and accurate algorithms for automatic detection of electrode misplacement, which should be developed and tested using a database of extensive ECG records. (paper)

  10. A comparative analysis of DBSCAN, K-means, and quadratic variation algorithms for automatic identification of swallows from swallowing accelerometry signals.

    Science.gov (United States)

    Dudik, Joshua M; Kurosu, Atsuko; Coyle, James L; Sejdić, Ervin

    2015-04-01

    Cervical auscultation with high resolution sensors is currently under consideration as a method of automatically screening for specific swallowing abnormalities. To be clinically useful without human involvement, any devices based on cervical auscultation should be able to detect specified swallowing events in an automatic manner. In this paper, we comparatively analyze the density-based spatial clustering of applications with noise algorithm (DBSCAN), a k-means based algorithm, and an algorithm based on quadratic variation as methods of differentiating periods of swallowing activity from periods of time without swallows. These algorithms utilized swallowing vibration data exclusively, and the results were compared to a gold standard measure of swallowing duration. Data were collected from 23 subjects who were actively suffering from swallowing difficulties. Comparing the performance of the DBSCAN algorithm with a proven segmentation algorithm that utilizes k-means clustering demonstrated that the DBSCAN algorithm had a higher sensitivity and correctly segmented more swallows. Comparing its performance with a threshold-based algorithm that utilized the quadratic variation of the signal showed that the DBSCAN algorithm offered no direct increase in performance. However, it offered several other benefits, including a faster run time and more consistent performance between patients. All algorithms showed noticeable deviation from the endpoints provided by a videofluoroscopy examination as well as reduced sensitivity. In summary, we showed that the DBSCAN algorithm is a viable method for detecting the occurrence of a swallowing event using cervical auscultation signals, but significant work must be done to improve its performance before it can be implemented in an unsupervised manner. Copyright © 2015 Elsevier Ltd. All rights reserved.
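
    In one dimension, the density idea behind DBSCAN reduces to a simple gap-and-count rule, sketched below on made-up burst times. This is a deliberate simplification, not the authors' implementation: points closer than `eps` form a group, and groups smaller than `min_pts` are discarded as noise.

```python
def density_groups(times, eps=0.3, min_pts=3):
    """Group sorted event times whose consecutive gaps are <= eps, then
    drop sparse groups (a 1-D simplification of DBSCAN's density rule)."""
    groups, cur = [], [times[0]]
    for t0, t1 in zip(times, times[1:]):
        if t1 - t0 <= eps:
            cur.append(t1)          # same dense burst
        else:
            groups.append(cur)      # gap too large: close the burst
            cur = [t1]
    groups.append(cur)
    return [g for g in groups if len(g) >= min_pts]

# Times (s) of high-energy vibration samples: two bursts plus one stray point.
times = [1.0, 1.1, 1.25, 1.3, 5.0, 5.2, 5.3, 5.45, 9.9]
events = density_groups(sorted(times))   # two candidate swallow events
```

    The stray sample at 9.9 s is rejected as noise exactly as DBSCAN would reject a point with too few neighbors, which is the property that makes the approach robust to isolated artifacts.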

  11. AUTOMATIC MULTILEVEL IMAGE SEGMENTATION BASED ON FUZZY REASONING

    Directory of Open Access Journals (Sweden)

    Liang Tang

    2011-05-01

    Full Text Available An automatic multilevel image segmentation method based on sup-star fuzzy reasoning (SSFR) is presented. Using the well-known sup-star fuzzy reasoning technique, the proposed algorithm combines the global statistical information implied in the histogram with the local information represented by the fuzzy sets of gray levels, and aggregates all the gray levels into several classes characterized by the local maximum values of the histogram. The presented method has the merits of determining the number of segmentation classes automatically and avoiding the calculation of segmentation thresholds. Simulated and real image segmentation experiments demonstrate that the SSFR is effective.
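
    The "class count from histogram local maxima" idea can be sketched as follows; the toy histogram is illustrative, and the fuzzy aggregation of gray levels around these peaks is omitted:

```python
def local_maxima(hist):
    """Indices of strict local maxima of a histogram; each maximum acts as
    the prototype of one segmentation class."""
    return [i for i in range(1, len(hist) - 1)
            if hist[i] > hist[i - 1] and hist[i] > hist[i + 1]]

# Toy 10-bin gray-level histogram with two modes.
hist = [1, 3, 9, 4, 2, 2, 8, 11, 6, 1]
classes = local_maxima(hist)   # two peaks -> two classes, no thresholds chosen
```

    Each pixel would then be assigned to the class whose peak its gray level is most compatible with, which is why no explicit thresholds need to be computed.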

  12. Model-Based Reasoning in Humans Becomes Automatic with Training.

    Directory of Open Access Journals (Sweden)

    Marcos Economides

    2015-09-01

    Full Text Available Model-based and model-free reinforcement learning (RL have been suggested as algorithmic realizations of goal-directed and habitual action strategies. Model-based RL is more flexible than model-free but requires sophisticated calculations using a learnt model of the world. This has led model-based RL to be identified with slow, deliberative processing, and model-free RL with fast, automatic processing. In support of this distinction, it has recently been shown that model-based reasoning is impaired by placing subjects under cognitive load--a hallmark of non-automaticity. Here, using the same task, we show that cognitive load does not impair model-based reasoning if subjects receive prior training on the task. This finding is replicated across two studies and a variety of analysis methods. Thus, task familiarity permits use of model-based reasoning in parallel with other cognitive demands. The ability to deploy model-based reasoning in an automatic, parallelizable fashion has widespread theoretical implications, particularly for the learning and execution of complex behaviors. It also suggests a range of important failure modes in psychiatric disorders.

  13. Automatic Generation of Setup for CNC Spring Coiler Based on Case-based Reasoning

    Institute of Scientific and Technical Information of China (English)

    KU Xiangchen; WANG Runxiao; LI Jishun; WANG Dongbo

    2006-01-01

    When producing special-shape springs on a CNC spring coiler, the setup of the coiler is often manual work using a trial-and-error method. As a result, the setup of the coiler consumes much time and becomes the bottleneck of the spring production process. To cope with this situation, this paper proposes an automatic setup generation system for CNC spring coilers using case-based reasoning (CBR). The core of the study contains: (1) an integrated reasoning model of the CBR system; (2) spatial shape description of special-shape springs based on features; (3) coiling case representation using a shape feature matrix; and (4) a case similarity measure algorithm. The automatic generation system has been implemented with C++ Builder 6.0 and is helpful in improving the automation and efficiency of spring coiling.

  14. Automatic Derivation of Statistical Data Analysis Algorithms: Planetary Nebulae and Beyond

    Science.gov (United States)

    Fischer, Bernd; Hajian, Arsen; Knuth, Kevin; Schumann, Johann

    2004-04-01

    AUTOBAYES is a fully automatic program synthesis system for the data analysis domain. Its input is a declarative problem description in the form of a statistical model; its output is documented and optimized C/C++ code. The synthesis process relies on the combination of three key techniques. Bayesian networks are used as a compact internal representation mechanism which enables problem decompositions and guides the algorithm derivation. Program schemas are used as independently composable building blocks for the algorithm construction; they can encapsulate advanced algorithms and data structures. A symbolic-algebraic system is used to find closed-form solutions for problems and emerging subproblems. In this paper, we describe the application of AUTOBAYES to the analysis of planetary nebulae images taken by the Hubble Space Telescope. We explain the system architecture, and present in detail the automatic derivation of the scientists' original analysis as well as a refined analysis using clustering models. This study demonstrates that AUTOBAYES is now mature enough to be applied to realistic scientific data analysis tasks.

  15. Automatic mesh refinement and parallel load balancing for Fokker-Planck-DSMC algorithm

    Science.gov (United States)

    Küchlin, Stephan; Jenny, Patrick

    2018-06-01

    Recently, a parallel Fokker-Planck-DSMC algorithm for rarefied gas flow simulation in complex domains at all Knudsen numbers was developed by the authors. Fokker-Planck-DSMC (FP-DSMC) is an augmentation of the classical DSMC algorithm, which mitigates the near-continuum deficiencies in terms of computational cost of pure DSMC. At each time step, based on a local Knudsen number criterion, the discrete DSMC collision operator is dynamically switched to the Fokker-Planck operator, which is based on the integration of continuous stochastic processes in time, and has fixed computational cost per particle, rather than per collision. In this contribution, we present an extension of the previous implementation with automatic local mesh refinement and parallel load-balancing. In particular, we show how the properties of discrete approximations to space-filling curves enable an efficient implementation. Exemplary numerical studies highlight the capabilities of the new code.

  16. Electroporation-based treatment planning for deep-seated tumors based on automatic liver segmentation of MRI images.

    Science.gov (United States)

    Pavliha, Denis; Mušič, Maja M; Serša, Gregor; Miklavčič, Damijan

    2013-01-01

    Electroporation is the phenomenon that occurs when a cell is exposed to a high electric field, which causes transient cell membrane permeabilization. A paramount electroporation-based application is electrochemotherapy, which is performed by delivering high-voltage electric pulses that enable the chemotherapeutic drug to more effectively destroy the tumor cells. Electrochemotherapy can be used for treating deep-seated metastases (e.g. in the liver, bone, brain, soft tissue) using variable-geometry long-needle electrodes. To treat deep-seated tumors, patient-specific treatment planning of the electroporation-based treatment is required. Treatment planning is based on generating a 3D model of the organ and target tissue subject to electroporation (i.e. tumor nodules). The generation of the 3D model is done by segmentation algorithms. We implemented and evaluated three automatic liver segmentation algorithms: region growing, adaptive threshold, and active contours (snakes). The algorithms were optimized using a seven-case dataset manually segmented by the radiologist as a training set, and finally validated using an additional four-case dataset that was previously not included in the optimization dataset. The presented results demonstrate that patients' medical images that were not included in the training set can be successfully segmented using our three algorithms. Besides electroporation-based treatments, these algorithms can be used in applications where automatic liver segmentation is required.

  17. Automatic J–A Model Parameter Tuning Algorithm for High Accuracy Inrush Current Simulation

    Directory of Open Access Journals (Sweden)

    Xishan Wen

    2017-04-01

    Full Text Available Inrush current simulation plays an important role in many power system tasks, such as power transformer protection. However, the accuracy of inrush current simulation can hardly be ensured. In this paper, a Jiles–Atherton (J–A) theory based model is proposed to simulate the inrush current of power transformers. The characteristics of the inrush current curve are analyzed, and the results show that the entire inrush current curve can be well characterized by the crest values of the first two cycles. With comprehensive consideration of both the features of the inrush current curve and the J–A parameters, an automatic J–A parameter estimation algorithm is proposed. The proposed algorithm obtains more reasonable J–A parameters, which improve the accuracy of simulation. Experimental results have verified the efficiency of the proposed algorithm.

  18. A Clustering-Based Automatic Transfer Function Design for Volume Visualization

    Directory of Open Access Journals (Sweden)

    Tianjin Zhang

    2016-01-01

    Full Text Available Two-dimensional transfer functions (TFs) designed based on the intensity-gradient magnitude (IGM) histogram are effective tools for the visualization and exploration of 3D volume data. However, traditional design methods usually depend on repeated trial and error. We propose a novel method for the automatic generation of transfer functions by performing the affinity propagation (AP) clustering algorithm on the IGM histogram. Compared with previous clustering algorithms employed in volume visualization, the AP clustering algorithm has a much faster convergence speed and can achieve more accurate clustering results. In order to obtain meaningful clustering results, we introduce two similarity measurements: IGM similarity and spatial similarity. These two similarity measurements can effectively bring the voxels of the same tissue together and differentiate the voxels of different tissues, so that the generated TFs can assign different optical properties to different tissues. Before performing the clustering algorithm on the IGM histogram, we propose to remove noisy voxels based on their spatial information. Our method does not require users to input the number of clusters, and the classification and visualization process is automatic and efficient. Experiments on various datasets demonstrate the effectiveness of the proposed method.
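
    One plausible way to combine the two similarity measurements is a weighted sum; the weight, normalization and distance scale below are assumptions for illustration, not values taken from the paper:

```python
def igm_similarity(a, b):
    """a, b: (intensity, gradient_magnitude) pairs normalized to [0, 1];
    similarity is 1 minus the mean absolute feature difference."""
    return 1.0 - (abs(a[0] - b[0]) + abs(a[1] - b[1])) / 2.0

def spatial_similarity(p, q, scale=10.0):
    """Similarity decaying linearly with 3D voxel distance (scale is a
    hypothetical cutoff in voxels)."""
    d = sum((x - y) ** 2 for x, y in zip(p, q)) ** 0.5
    return max(0.0, 1.0 - d / scale)

def combined(a, b, p, q, w=0.5):
    """Weighted blend fed to the clustering step (w is an assumption)."""
    return w * igm_similarity(a, b) + (1 - w) * spatial_similarity(p, q)

# Two voxels with close IGM features and adjacent positions score highly.
s = combined((0.2, 0.4), (0.25, 0.35), (1, 2, 3), (2, 2, 3))
```

    Affinity propagation consumes exactly such pairwise similarities, so voxels of one tissue (similar IGM features, nearby positions) end up in one cluster and receive the same optical properties.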

  19. Automatic Detection and Quantification of WBCs and RBCs Using Iterative Structured Circle Detection Algorithm

    Directory of Open Access Journals (Sweden)

    Yazan M. Alomari

    2014-01-01

    Full Text Available Segmentation and counting of blood cells are considered an important step that helps to extract features to diagnose specific diseases like malaria or leukemia. The manual counting of white blood cells (WBCs) and red blood cells (RBCs) in microscopic images is an extremely tedious, time-consuming, and inaccurate process. Automatic analysis will allow hematologist experts to perform faster and more accurately. The proposed method uses an iterative structured circle detection algorithm for the segmentation and counting of WBCs and RBCs. The separation of WBCs from RBCs was achieved by thresholding, and specific preprocessing steps were developed for each cell type. Counting was performed for each image using the proposed method based on modified circle detection, which automatically counted the cells. Several modifications were made to the basic (RCD) algorithm to solve the initialization problem, detect irregular circles (cells), select the optimal circle from the candidate circles, and determine the number of iterations in a fully dynamic way to enhance detection and running time. The validation method used to determine segmentation accuracy was a quantitative analysis that included Precision, Recall, and F-measurement tests. The average accuracy of the proposed method was 95.3% for RBCs and 98.4% for WBCs.
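
    As a much-simplified stand-in for the iterative circle detection described above, the threshold-then-count idea can be sketched with connected-component labeling (the synthetic disks and the threshold are illustrative; the paper's RCD-based detector also handles irregular and overlapping cells, which this sketch does not):

```python
import numpy as np
from scipy import ndimage

# Synthetic "blood smear": three bright disks on a dark background stand
# in for cells (positions and radii are made up for illustration).
img = np.zeros((100, 100))
yy, xx = np.mgrid[0:100, 0:100]
for cy, cx, r in [(25, 25, 8), (60, 70, 10), (80, 20, 6)]:
    img[(yy - cy) ** 2 + (xx - cx) ** 2 <= r ** 2] = 1.0

mask = img > 0.5                       # thresholding step
labels, n_cells = ndimage.label(mask)  # count connected bright regions
sizes = ndimage.sum(mask, labels, range(1, n_cells + 1))
```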

  20. TCSC based automatic generation control of deregulated power system using quasi-oppositional harmony search algorithm

    Directory of Open Access Journals (Sweden)

    Mahendra Nandi

    2017-08-01

    Full Text Available In this work, automatic generation control (AGC) of a deregulated power system with a thyristor controlled series compensator (TCSC) device is investigated. The objective is to study the bilateral power transaction issue with the TCSC effect. A deregulated two-area power system model having two thermal units in each control area is considered for this purpose. A quasi-oppositional harmony search (QOHS) algorithm is applied to the constrained optimization problem. Three cases, commonly studied in deregulation, are discussed to establish the effectiveness of the proposed technique. Further, a sensitivity analysis is performed by varying the test system parameters up to ±25% from their rated values. The obtained simulation plots are analytically discussed with the calculation of oscillatory modes, transient details and the studied performance indices. A Sugeno fuzzy logic control technique is also applied to the studied test system. The simulation results show that the proposed QOHS-based TCSC controller is quite effective in a deregulated environment.
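
    A minimal sketch of quasi-oppositional harmony search, the optimizer named above, applied to a toy sphere function. The harmony-memory size, bandwidth, and the exact form of the quasi-oppositional initialization are illustrative assumptions, not the paper's tuned settings:

```python
import numpy as np

def sphere(x):
    return float(np.sum(x ** 2))

def qohs(f, dim=4, lo=-10.0, hi=10.0, hms=10, hmcr=0.9, par=0.3,
         bw=0.5, iters=2000, seed=0):
    """Quasi-oppositional harmony search sketch (illustrative parameters)."""
    rng = np.random.default_rng(seed)
    # Quasi-oppositional initialization: for each random harmony, also
    # sample a quasi-opposite point between the search-space centre and
    # the opposite point, then keep the best hms harmonies of the union.
    centre = (lo + hi) / 2.0
    pop = rng.uniform(lo, hi, (hms, dim))
    opp = lo + hi - pop                       # opposite points
    qopp = rng.uniform(np.minimum(centre, opp), np.maximum(centre, opp))
    cand = np.vstack([pop, qopp])
    fit = np.array([f(x) for x in cand])
    order = np.argsort(fit)[:hms]
    hm, hm_fit = cand[order], fit[order]
    for _ in range(iters):
        new = np.empty(dim)
        for j in range(dim):
            if rng.random() < hmcr:           # memory consideration
                new[j] = hm[rng.integers(hms), j]
                if rng.random() < par:        # pitch adjustment
                    new[j] += bw * rng.uniform(-1, 1)
            else:                             # random selection
                new[j] = rng.uniform(lo, hi)
        new = np.clip(new, lo, hi)
        fv = f(new)
        worst = int(np.argmax(hm_fit))        # replace worst if improved
        if fv < hm_fit[worst]:
            hm[worst], hm_fit[worst] = new, fv
    best = int(np.argmin(hm_fit))
    return hm[best], hm_fit[best]

best_x, best_f = qohs(sphere)
```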

  1. Automatic design of decision-tree induction algorithms tailored to flexible-receptor docking data.

    Science.gov (United States)

    Barros, Rodrigo C; Winck, Ana T; Machado, Karina S; Basgalupp, Márcio P; de Carvalho, André C P L F; Ruiz, Duncan D; de Souza, Osmar Norberto

    2012-11-21

    This paper addresses the prediction of the free energy of binding of a drug candidate with the enzyme InhA associated with Mycobacterium tuberculosis. This problem arises in rational drug design, where interactions between drug candidates and target proteins are verified through molecular docking simulations. In this application, it is important not only to correctly predict the free energy of binding, but also to provide a comprehensible model that can be validated by a domain specialist. Decision-tree induction algorithms have been successfully used in drug-design-related applications, especially since decision trees are simple to understand, interpret, and validate. There are several decision-tree induction algorithms available for general use, but each one has a bias that makes it more suitable for a particular data distribution. In this article, we propose and investigate the automatic design of decision-tree induction algorithms tailored to particular drug-enzyme binding data sets. We investigate the performance of our new method for evaluating binding conformations of different drug candidates to InhA, and we analyze our findings with respect to decision-tree accuracy, comprehensibility, and biological relevance. The empirical analysis indicates that our method is capable of automatically generating decision-tree induction algorithms that significantly outperform the traditional C4.5 algorithm with respect to both accuracy and comprehensibility. In addition, we provide the biological interpretation of the rules generated by our approach, reinforcing the importance of comprehensible predictive models in this particular bioinformatics application. We conclude that automatically designing a decision-tree algorithm tailored to molecular docking data is a promising alternative for predicting the free energy of binding of a drug candidate with a flexible receptor.

  2. Automatic brightness control algorithms and their effect on fluoroscopic imaging

    International Nuclear Information System (INIS)

    Quinn, P.W.; Gagne, R.M.

    1989-01-01

    This paper reports a computer model used to investigate the effect on dose and image quality of three automatic brightness control (ABC) algorithms used in the imaging of barium during general-purpose fluoroscopy. A model incorporating all aspects of image formation - i.e., x-ray production, phantom attenuation, and energy absorption in the CsI phosphor - was driven according to each ABC algorithm as a function of patient thickness. The energy absorbed in the phosphor was kept constant, while the changes in exposure, integral dose, organ dose, and contrast were monitored.

  3. Algorithm for Automatic Generation of Curved and Compound Twills

    Institute of Scientific and Technical Information of China (English)

    WANG Mei-zhen; WANG Fu-mei; WANG Shan-yuan

    2005-01-01

    A new algorithm using matrix left-shift functions for the rapid generation of curved and compound twills is introduced in this paper. A matrix model for the generation of regular, curved and compound twill structures is established and its computational realization is elaborated. Examples of applying the algorithm to the simulation and automatic generation of curved and compound twills in fabric CAD are given.
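
    The matrix-shift idea can be illustrated in a few lines: a base interlacing row is cyclically shifted once per pick to produce a regular twill matrix (the 2/2 twill below is a standard textbook example, not taken from the paper):

```python
import numpy as np

def twill(base, shift, rows):
    """Generate a twill weave matrix by cyclically left-shifting a base
    interlacing row (1 = warp over weft, 0 = weft over warp)."""
    base = np.asarray(base)
    return np.array([np.roll(base, -i * shift) for i in range(rows)])

# A 2/2 twill on 4 ends: base row 1 1 0 0, shifted one end per pick.
w = twill([1, 1, 0, 0], shift=1, rows=4)
```

Curved twills would vary the shift per row, and compound twills would concatenate base rows; both reduce to the same row-shift primitive.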

  4. An Alternative to Chaid Segmentation Algorithm Based on Entropy.

    Directory of Open Access Journals (Sweden)

    María Purificación Galindo Villardón

    2010-07-01

    Full Text Available The CHAID (Chi-Squared Automatic Interaction Detection) tree-based segmentation technique has been found to be an effective approach for obtaining meaningful segments that are predictive of a K-category (nominal or ordinal) criterion variable. CHAID was designed to detect, in an automatic way, the interaction between several categorical or ordinal predictors in explaining a categorical response, but this may not hold when Simpson's paradox is present. This is because CHAID is a forward selection algorithm based on the marginal counts. In this paper we propose a backwards elimination algorithm that starts with the full set of predictors (or full tree) and eliminates predictors progressively. The elimination procedure is based on conditional independence contrasts using the concept of entropy. The proposed procedure is compared to CHAID.
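
    The entropy quantities behind such conditional independence contrasts can be illustrated with two small helpers (illustrative only; the paper's contrasts are over conditional distributions of the response given predictor subsets):

```python
import numpy as np

def entropy(counts):
    """Shannon entropy (bits) of a discrete distribution given counts."""
    p = np.asarray(counts, dtype=float)
    p = p[p > 0] / p.sum()
    return float(-(p * np.log2(p)).sum())

def conditional_entropy(table):
    """H(Y|X) from a contingency table with X on rows, Y on columns."""
    t = np.asarray(table, dtype=float)
    n = t.sum()
    return float(sum((row.sum() / n) * entropy(row) for row in t))

# If Y is fully determined by X, H(Y|X) = 0; a predictor whose removal
# leaves H(Y|X) unchanged explains nothing and can be eliminated.
h = entropy([50, 50])                         # a fair split: 1 bit
hc = conditional_entropy([[30, 0], [0, 70]])  # Y determined by X: 0 bits
```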

  5. CAnat: An algorithm for the automatic segmentation of anatomy of medical images

    International Nuclear Information System (INIS)

    Caon, M.; Gobert, L.; Mariusz, B.

    2011-01-01

    Full text: To develop a method to automatically categorise organs and tissues displayed in medical images. Dosimetry calculations using Monte Carlo methods require a mathematical representation of human anatomy, e.g. a voxel phantom. For a whole body, their construction involves processing several hundred images to identify each organ and tissue; the process is very time-consuming. This project is developing a Computational Anatomy (CAnat) algorithm to automatically recognise and classify the different tissues in a tomographic image. Methods: The algorithm utilizes the Statistical Region Merging (SRM) technique. The SRM depends on one estimated parameter, a measure of the statistical complexity of the image, which can be automatically adjusted to suit individual image features. This allows for automatic tuning of the coarseness of the overall segmentation as well as object-specific selection for further tasks. CAnat was tested on two CT images selected to represent different anatomical complexities. In the mid-thigh image, the tissues/regions of interest are air, fat, muscle, bone marrow and compact bone; in the pelvic image, fat, urinary bladder, anus/colon, muscle, cancellous bone, and compact bone. Segmentation results were evaluated using the Jaccard index, which is a measure of set agreement; an index of one indicates perfect agreement between CAnat and manual segmentation. The Jaccard indices for the mid-thigh CT were 0.99, 0.89, 0.97, 0.63 and 0.88, respectively, and for the pelvic CT were 0.99, 0.81, 0.77, 0.93, 0.53 and 0.76, respectively. Conclusion: The high accuracy of the preliminary segmentation results demonstrates the feasibility of the CAnat algorithm.
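
    The Jaccard index used above to score agreement between automatic and manual segmentations is straightforward to compute from boolean masks:

```python
import numpy as np

def jaccard(a, b):
    """Jaccard index |A intersect B| / |A union B| of two boolean masks."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    union = np.logical_or(a, b).sum()
    return float(np.logical_and(a, b).sum() / union) if union else 1.0

# Toy 2x3 masks: 2 pixels overlap out of 3 marked in either mask.
auto = np.array([[1, 1, 0], [1, 0, 0]])
manual = np.array([[1, 1, 0], [0, 0, 0]])
score = jaccard(auto, manual)
```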

  6. Automatic macroscopic characterization of diesel sprays by means of a new image processing algorithm

    Science.gov (United States)

    Rubio-Gómez, Guillermo; Martínez-Martínez, S.; Rua-Mojica, Luis F.; Gómez-Gordo, Pablo; de la Garza, Oscar A.

    2018-05-01

    A novel algorithm is proposed for the automatic segmentation of diesel spray images and the calculation of their macroscopic parameters. The algorithm automatically detects each spray present in an image, and therefore it is able to work with diesel injectors with a different number of nozzle holes without any modification. The main characteristic of the algorithm is that it splits each spray into three different regions and then segments each one with an individually calculated binarization threshold. Each threshold level is calculated from the analysis of a representative luminosity profile of each region. This approach makes it robust to irregular light distribution along a single spray and between different sprays of an image. Once the sprays are segmented, the macroscopic parameters of each one are calculated. The algorithm is tested with two sets of diesel spray images taken under normal and irregular illumination setups.
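
    The region-wise thresholding idea can be sketched as follows: splitting the spray axis into regions and binarizing each with its own threshold tolerates the axial luminosity fall-off that defeats a single global threshold. The `frac` parameter and taking the threshold from the profile maximum are assumptions for illustration, not the paper's exact procedure:

```python
import numpy as np

def segment_spray(img, boundaries, frac=0.3):
    """Binarize each axial region of a spray image with its own threshold,
    taken here as a fraction of that region's mean luminosity profile peak
    (illustrative stand-in for the paper's profile analysis)."""
    mask = np.zeros_like(img, dtype=bool)
    edges = [0, *boundaries, img.shape[1]]
    for lo, hi in zip(edges[:-1], edges[1:]):
        region = img[:, lo:hi]
        profile = region.mean(axis=0)          # luminosity along the axis
        mask[:, lo:hi] = region > frac * profile.max()
    return mask

# Synthetic spray: bright near the nozzle (left), dim at the tip (right).
img = np.zeros((4, 9))
img[1:3, 0:3] = 100.0   # near field
img[1:3, 3:6] = 40.0    # mid field
img[1:3, 6:9] = 10.0    # far field
mask = segment_spray(img, boundaries=[3, 6])
```

A single global threshold at 30% of the image maximum would drop the dim far field entirely, while the per-region thresholds recover the whole spray.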

  7. Robustness and precision of an automatic marker detection algorithm for online prostate daily targeting using a standard V-EPID.

    Science.gov (United States)

    Aubin, S; Beaulieu, L; Pouliot, S; Pouliot, J; Roy, R; Girouard, L M; Martel-Brisson, N; Vigneault, E; Laverdière, J

    2003-07-01

    An algorithm for the daily localization of the prostate using implanted markers and a standard video-based electronic portal imaging device (V-EPID) has been tested. Prior to planning, three gold markers were implanted in the prostate of each of seven patients. The clinical images were acquired with a BeamViewPlus 2.1 V-EPID for each field during the normal course of radiotherapy treatment and were used off-line to determine the ability of the automatic marker detection algorithm to adequately and consistently detect the markers. Clinical images were obtained with dose levels ranging from 2.5 to 75 MU. The algorithm is based on marker attenuation characterization in the portal image and spatial distribution. A total of 1182 clinical images were taken. The results show an average efficiency of 93% for markers detected individually and 85% for the group of markers. The algorithm accomplishes the detection and validation in 0.20-0.40 s. When the center of mass of the group of implanted markers is used, all displacements can be corrected to within 1.0 mm in 84% of the cases and within 1.5 mm in 97% of cases. The standard video-based EPID tested provides excellent marker detection capability even at low dose levels. The V-EPID can be used successfully with radiopaque markers and the automatic detection algorithm to track and correct the daily setup deviations due to organ motion.
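
    The setup-correction step described above, based on the center of mass of the implanted markers, reduces to a centroid difference; the coordinates below are illustrative:

```python
import numpy as np

# Planned marker positions (in the portal-image plane, units of mm) and
# the positions detected in today's image; values are made up to show a
# rigid shift of the prostate.
planned = np.array([[10.0, 20.0], [15.0, 25.0], [12.0, 30.0]])
detected = planned + np.array([1.2, -0.8])

# Daily setup correction from the shift of the markers' center of mass.
shift = detected.mean(axis=0) - planned.mean(axis=0)
```

Using the group centroid rather than individual markers makes the correction robust to a single poorly detected marker.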

  8. Semi-supervised learning based probabilistic latent semantic analysis for automatic image annotation

    Institute of Scientific and Technical Information of China (English)

    Tian Dongping

    2017-01-01

    In recent years, the multimedia annotation problem has been attracting significant research attention in the multimedia and computer vision areas, especially automatic image annotation, whose purpose is to provide an efficient and effective searching environment for users to query their images more easily. In this paper, a semi-supervised learning based probabilistic latent semantic analysis (PLSA) model for automatic image annotation is presented. Since it is often hard to obtain or create labeled images in large quantities while unlabeled ones are easier to collect, a transductive support vector machine (TSVM) is exploited to enhance the quality of the training image data. Then, because image features with different magnitudes yield different performance for automatic image annotation, a Gaussian normalization method is utilized to normalize the different features extracted from effective image regions segmented by the normalized cuts algorithm, so as to preserve the intrinsic content of the images as completely as possible. Finally, a PLSA model with asymmetric modalities is constructed based on the expectation maximization (EM) algorithm to predict a candidate set of annotations with confidence scores. Extensive experiments on the general-purpose Corel5k dataset demonstrate that the proposed model can significantly improve the performance of traditional PLSA for the task of automatic image annotation.
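
    The Gaussian normalization step mentioned above, which keeps features of very different magnitudes comparable, is a per-feature z-score:

```python
import numpy as np

def gaussian_normalize(features):
    """Zero-mean, unit-variance normalization per feature column, so that
    features of different magnitudes contribute comparably."""
    f = np.asarray(features, dtype=float)
    return (f - f.mean(axis=0)) / f.std(axis=0)

# Two features on very different scales (e.g. a color moment vs. a
# texture energy); after normalization both have mean 0 and std 1.
X = np.array([[1000.0, 0.01], [2000.0, 0.03], [3000.0, 0.02]])
Z = gaussian_normalize(X)
```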

  9. Solution Approach to Automatic Generation Control Problem Using Hybridized Gravitational Search Algorithm Optimized PID and FOPID Controllers

    Directory of Open Access Journals (Sweden)

    DAHIYA, P.

    2015-05-01

    Full Text Available This paper presents the application of a hybrid opposition-based disruption operator in the gravitational search algorithm (DOGSA) to solve the automatic generation control (AGC) problem of a four-area hydro-thermal-gas interconnected power system. The proposed DOGSA approach combines the advantages of opposition-based learning, which enhances the speed of convergence, and the disruption operator, which has the ability to further explore and exploit the search space of the standard gravitational search algorithm (GSA). The addition of these two concepts to GSA increases its flexibility for solving complex optimization problems. This paper addresses the design and performance analysis of DOGSA-based proportional integral derivative (PID) and fractional order proportional integral derivative (FOPID) controllers for the automatic generation control problem. The proposed approaches are demonstrated by comparing the results with the standard GSA, opposition learning based GSA (OGSA) and disruption based GSA (DGSA). A sensitivity analysis is also carried out to study the robustness of the DOGSA-tuned controllers in accommodating variations in operating load conditions, tie-line synchronizing coefficient, and time constants of the governor and turbine. Further, the approaches are extended to a more realistic power system model by considering physical constraints such as the thermal turbine generation rate constraint, speed governor dead band and time delay.
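
    The opposition-based learning ingredient of DOGSA can be sketched independently of GSA itself: each candidate is paired with its opposite point and only the better half of the union is kept (the objective and bounds below are illustrative):

```python
import numpy as np

def with_opposition(pop, lo, hi, f):
    """Opposition-based learning step: evaluate each candidate and its
    opposite (lo + hi - x), keep the better half of the union."""
    opp = lo + hi - pop
    union = np.vstack([pop, opp])
    fit = np.array([f(x) for x in union])
    return union[np.argsort(fit)[: len(pop)]]

f = lambda x: float(np.sum((x - 3.0) ** 2))   # toy objective, minimum at 3
rng = np.random.default_rng(1)
pop = rng.uniform(-10.0, 10.0, (20, 2))
better = with_opposition(pop, -10.0, 10.0, f)
```

By construction the opposition-augmented population is never worse than the original, which is the source of the faster convergence claimed above.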

  10. Automatic trend estimation

    CERN Document Server

    Vamoş, Călin

    2013-01-01

    Our book introduces a method to evaluate the accuracy of trend estimation algorithms under conditions similar to those encountered in real time series processing. This method is based on Monte Carlo experiments with artificial time series numerically generated by an original algorithm. The second part of the book contains several automatic algorithms for trend estimation and time series partitioning. The source codes of the computer programs implementing these original automatic algorithms are given in the appendix and will be freely available on the web. The book contains a clear statement of the conditions and the approximations under which the algorithms work, as well as the proper interpretation of their results. We illustrate the functioning of the analyzed algorithms by processing time series from astrophysics, finance, biophysics, and paleoclimatology. The numerical experiment method extensively used in our book is already in common use in computational and statistical physics.

  11. Identification of Forested Landslides Using LiDar Data, Object-based Image Analysis, and Machine Learning Algorithms

    Directory of Open Access Journals (Sweden)

    Xianju Li

    2015-07-01

    Full Text Available For identification of forested landslides, most studies focus on knowledge-based and pixel-based analysis (PBA) of LiDar data, while few studies have examined (semi-)automated methods and object-based image analysis (OBIA). Moreover, most of them are focused on soil-covered areas with gentle hillslopes. In bedrock-covered mountains with steep and rugged terrain, landslides are so difficult to identify that there is currently no research on whether combining semi-automated methods and OBIA with only LiDar derivatives could be more effective. In this study, a semi-automatic object-based landslide identification approach was developed and implemented in a forested area, the Three Gorges of China. Comparisons of OBIA and PBA, two different machine learning algorithms, and their respective sensitivity to feature selection (FS) were first investigated. Based on the classification result, the landslide inventory was finally obtained according to (1) inclusion of holes encircled by the landslide body; (2) removal of isolated segments; and (3) delineation of closed envelope curves for landslide objects by a manual digitizing operation. The proposed method achieved the following: (1) the filter features of surface roughness, applied here for the first time in calculating object features, proved useful; (2) FS improved classification accuracy and reduced the number of features; (3) the random forest algorithm achieved higher accuracy and was less sensitive to FS than a support vector machine; (4) compared to PBA, OBIA was more sensitive to FS, remarkably reduced computing time, and depicted more contiguous terrain segments; (5) based on the classification result with an overall accuracy of 89.11% ± 0.03%, the obtained inventory map was consistent with the reference landslide inventory map, with a position mismatch value of 9%. The outlined approach would be helpful for forested landslide identification in steep and rugged terrain.

  12. Parameter optimization of differential evolution algorithm for automatic playlist generation problem

    Science.gov (United States)

    Alamag, Kaye Melina Natividad B.; Addawe, Joel M.

    2017-11-01

    With the digitalization of music, music collections have grown enormously, creating a need for lists of music that filter a collection according to user preferences; this gives rise to the Automatic Playlist Generation Problem (APGP). Previous attempts to solve this problem include the use of search and optimization algorithms. If a music database is very large, the algorithm used must be able to search the lists thoroughly while taking into account the quality of the playlist given a set of user constraints. In this paper we apply an evolutionary meta-heuristic optimization algorithm, Differential Evolution (DE), using different combinations of parameter values, and select the best-performing set when used to solve four standard test functions. The performance of the proposed algorithm is then compared with a standard Genetic Algorithm (GA) and a hybrid GA with Tabu Search. Numerical simulations are carried out to show the better results obtained with the Differential Evolution approach using the optimized parameter values.
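
    A minimal DE/rand/1/bin sketch of the differential evolution scheme discussed above, minimizing a toy sphere function; the parameter values (F, CR, population size) are placeholders for the tuned set the paper selects, and a playlist-quality objective would replace the sphere:

```python
import numpy as np

def de(f, bounds, np_=20, F=0.8, CR=0.9, iters=300, seed=0):
    """Minimal DE/rand/1/bin sketch (illustrative parameter values)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    dim = len(bounds)
    pop = rng.uniform(lo, hi, (np_, dim))
    fit = np.array([f(x) for x in pop])
    for _ in range(iters):
        for i in range(np_):
            # Mutation: three distinct partners, none equal to i.
            a, b, c = pop[rng.choice([j for j in range(np_) if j != i],
                                     3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)
            # Binomial crossover with at least one mutant gene.
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True
            trial = np.where(cross, mutant, pop[i])
            ft = f(trial)
            if ft <= fit[i]:          # greedy selection
                pop[i], fit[i] = trial, ft
    best = int(np.argmin(fit))
    return pop[best], fit[best]

x, fx = de(lambda v: float(np.sum(v ** 2)), [(-5, 5)] * 3)
```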

  13. Automatic comic page image understanding based on edge segment analysis

    Science.gov (United States)

    Liu, Dong; Wang, Yongtao; Tang, Zhi; Li, Luyuan; Gao, Liangcai

    2013-12-01

    Comic page image understanding aims to analyse the layout of the comic page images by detecting the storyboards and identifying the reading order automatically. It is the key technique to produce the digital comic documents suitable for reading on mobile devices. In this paper, we propose a novel comic page image understanding method based on edge segment analysis. First, we propose an efficient edge point chaining method to extract Canny edge segments (i.e., contiguous chains of Canny edge points) from the input comic page image; second, we propose a top-down scheme to detect line segments within each obtained edge segment; third, we develop a novel method to detect the storyboards by selecting the border lines and further identify the reading order of these storyboards. The proposed method is performed on a data set consisting of 2000 comic page images from ten printed comic series. The experimental results demonstrate that the proposed method achieves satisfactory results on different comics and outperforms the existing methods.

  14. Automatic Mexico Gulf Oil Spill Detection from Radarsat-2 SAR Satellite Data Using Genetic Algorithm

    Science.gov (United States)

    Marghany, Maged

    2016-10-01

    In this work, a genetic algorithm is exploited for the automatic detection of oil spills of small and large size. The procedure is applied to arrays of RADARSAT-2 SAR ScanSAR Narrow single-beam data acquired in the Gulf of Mexico. The study shows that the genetic algorithm automatically segmented the dark spot patches corresponding to small and large oil spill pixels. This conclusion is confirmed by the receiver operating characteristic (ROC) curve and documented ground data. The ROC curve indicates that the existence of oil slick footprints can be identified with an area of 90% between the ROC curve and the no-discrimination line, which is greater than that of other surrounding environmental features. Small oil spills represented 30% of the discriminated oil spill pixels in the ROC curve. In conclusion, the genetic algorithm can be used as a tool for the automatic detection of oil spills of either small or large size, and the ScanSAR Narrow single-beam mode serves as an excellent sensor for oil spill pattern detection and surveying in the Gulf of Mexico.
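
    The ROC area reported above can be computed directly from per-pixel scores and labels via the rank identity AUC = P(score of a positive > score of a negative); the scores below are illustrative:

```python
import numpy as np

def roc_auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney rank identity."""
    scores = np.asarray(scores, float)
    labels = np.asarray(labels, bool)
    pos, neg = scores[labels], scores[~labels]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return float((greater + 0.5 * ties) / (len(pos) * len(neg)))

# Pixel "darkness" scores: oil-slick pixels (label 1) tend to score high.
scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.2, 0.1]
labels = [1, 1, 1, 0, 1, 0, 0, 0]
auc = roc_auc(scores, labels)
```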

  15. A Novel Automatic Detection System for ECG Arrhythmias Using Maximum Margin Clustering with Immune Evolutionary Algorithm

    Directory of Open Access Journals (Sweden)

    Bohui Zhu

    2013-01-01

    Full Text Available This paper presents a novel maximum margin clustering method with immune evolution (IEMMC) for automatic diagnosis of electrocardiogram (ECG) arrhythmias. This diagnostic system consists of signal processing, feature extraction, and the IEMMC algorithm for clustering of ECG arrhythmias. First, the raw ECG signal is processed by an adaptive ECG filter based on wavelet transforms, and the waveform of the ECG signal is detected; then, features are extracted from the ECG signal to cluster different types of arrhythmias by the IEMMC algorithm. Three performance evaluation indicators are used to assess the effect of the IEMMC method for ECG arrhythmias: sensitivity, specificity, and accuracy. Compared with the K-means and iterSVR algorithms, the IEMMC algorithm performs better not only in clustering results but also in terms of global search ability and convergence, which proves its effectiveness for the detection of ECG arrhythmias.
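
    The three evaluation indicators named above follow directly from the confusion counts of a clustering-versus-reference comparison (the counts below are illustrative):

```python
def clustering_metrics(tp, tn, fp, fn):
    """Sensitivity, specificity and accuracy from confusion counts."""
    sensitivity = tp / (tp + fn)   # true positive rate
    specificity = tn / (tn + fp)   # true negative rate
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    return sensitivity, specificity, accuracy

# Hypothetical counts for one arrhythmia class vs. the rest.
sens, spec, acc = clustering_metrics(tp=90, tn=85, fp=15, fn=10)
```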

  16. Modified automatic term selection v2: A faster algorithm to calculate inelastic scattering cross-sections

    Energy Technology Data Exchange (ETDEWEB)

    Rusz, Ján, E-mail: jan.rusz@fysik.uu.se

    2017-06-15

    Highlights: • New algorithm for calculating double differential scattering cross-sections. • Shown to have good convergence properties. • Outperforms the older MATS algorithm, particularly in zone axis calculations. - Abstract: We present a new algorithm for calculating inelastic scattering cross-sections for fast electrons. Compared to the previous Modified Automatic Term Selection (MATS) algorithm (Rusz et al. [18]), it has far better convergence properties in zone axis calculations and it allows the contributions of individual atoms to be identified. One can think of it as a blend of the MATS algorithm and a method described by Weickenmeier and Kohl [10].

  17. Automatic contact algorithm in DYNA3D for crashworthiness and impact problems

    International Nuclear Information System (INIS)

    Whirley, Robert G.; Engelmann, Bruce E.

    1994-01-01

    This paper presents a new approach for the automatic definition and treatment of mechanical contact in explicit non-linear finite element analysis. Automatic contact offers the benefits of significantly reduced model construction time and fewer opportunities for user error, but faces significant challenges in reliability and computational cost. Key aspects of the proposed new method include automatic identification of adjacent and opposite surfaces in the global search phase, and the use of a well-defined surface normal which allows a consistent treatment of shell intersection and corner contact conditions without ad hoc rules. The paper concludes with three examples which illustrate the performance of the newly proposed algorithm in the public DYNA3D code. (orig.)

  18. Computationally Efficient DOA Tracking Algorithm in Monostatic MIMO Radar with Automatic Association

    Directory of Open Access Journals (Sweden)

    Huaxin Yu

    2014-01-01

    Full Text Available We consider the problem of tracking the directions of arrival (DOA) of multiple moving targets in monostatic multiple-input multiple-output (MIMO) radar. A low-complexity DOA tracking algorithm for monostatic MIMO radar is proposed. The proposed algorithm obtains the DOA estimate via the difference between the previous and current covariance matrices of the reduced-dimension transformation signal, which reduces the computational complexity and realizes automatic association in DOA tracking. An error analysis and the Cramér-Rao lower bound (CRLB) of DOA tracking are derived in the paper. The proposed algorithm can be regarded both as an extension of, and an improvement on, the array-signal-processing DOA tracking algorithm of Zhang et al. (2008), and it achieves better DOA tracking performance than that algorithm. The simulation results demonstrate the effectiveness of the proposed algorithm. Our work provides technical support for the practical application of MIMO radar.

  19. Hepatic vessel segmentation for 3D planning of liver surgery: experimental evaluation of a new fully automatic algorithm.

    Science.gov (United States)

    Conversano, Francesco; Franchini, Roberto; Demitri, Christian; Massoptier, Laurent; Montagna, Francesco; Maffezzoli, Alfonso; Malvasi, Antonio; Casciaro, Sergio

    2011-04-01

    The aim of this study was to identify the optimal parameter configuration of a new algorithm for fully automatic segmentation of hepatic vessels, evaluating its accuracy in view of its use in a computer system for three-dimensional (3D) planning of liver surgery. A phantom reproduction of a human liver with vessels up to the fourth subsegment order, corresponding to a minimum diameter of 0.2 mm, was realized through stereolithography, exploiting a 3D model derived from a real human computed tomographic data set. Algorithm parameter configuration was experimentally optimized, and the maximum achievable segmentation accuracy was quantified for both single two-dimensional slices and 3D reconstruction of the vessel network, through an analytic comparison of the automatic segmentation performed on contrast-enhanced computed tomographic phantom images with actual model features. The optimal algorithm configuration resulted in a vessel detection sensitivity of 100% for vessels > 1 mm in diameter, 50% in the range 0.5 to 1 mm, and 14% in the range 0.2 to 0.5 mm. An average area overlap of 94.9% was obtained between automatically and manually segmented vessel sections, with an average difference of 0.06 mm². The average values of corresponding false-positive and false-negative ratios were 7.7% and 2.3%, respectively. A robust and accurate algorithm for automatic extraction of the hepatic vessel tree from contrast-enhanced computed tomographic volume images was proposed and experimentally assessed on a liver model, showing unprecedented sensitivity in vessel delineation. This automatic segmentation algorithm is promising for supporting liver surgery planning and for guiding intraoperative resections. Copyright © 2011 AUR. Published by Elsevier Inc. All rights reserved.

  20. Automatic Generation of Algorithms for the Statistical Analysis of Planetary Nebulae Images

    Science.gov (United States)

    Fischer, Bernd

    2004-01-01

    Analyzing data sets collected in experiments or by observations is a core scientific activity. Typically, experimental and observational data are fraught with uncertainty, and the analysis is based on a statistical model of the conjectured underlying processes. The large data volumes collected by modern instruments make computer support indispensable for this. Consequently, scientists spend significant amounts of their time on the development and refinement of data analysis programs. AutoBayes [GF+02, FS03] is a fully automatic synthesis system for generating statistical data analysis programs. Externally, it looks like a compiler: it takes an abstract problem specification and translates it into executable code. Its input is a concise description of a data analysis problem in the form of a statistical model as shown in Figure 1; its output is optimized and fully documented C/C++ code which can be linked dynamically into the Matlab and Octave environments. Internally, however, it is quite different: AutoBayes derives a customized algorithm implementing the given model using a schema-based process, and then further refines and optimizes the algorithm into code. A schema is a parameterized code template with associated semantic constraints which define and restrict the template's applicability. The schema parameters are instantiated in a problem-specific way during synthesis as AutoBayes checks the constraints against the original model or, recursively, against emerging sub-problems. The AutoBayes schema library contains problem decomposition operators (which are justified by theorems in a formal logic in the domain of Bayesian networks) as well as machine learning algorithms (e.g., EM, k-means) and numeric optimization methods (e.g., Nelder-Mead simplex, conjugate gradient). AutoBayes augments this schema-based approach with symbolic computation to derive closed-form solutions whenever possible. This is a major advantage over other statistical data analysis systems.

  1. Automatic quantification of defect size using normal templates: a comparative clinical study of three commercially available algorithms

    International Nuclear Information System (INIS)

    Sutter, J. de; Wiele, C. van de; Bondt, P. de; Dierckx, R.; D'Asseler, Y.; Backer, G. de; Rigo, P.

    2000-01-01

    Infarct size assessed by myocardial single-photon emission tomography (SPET) imaging is an important prognostic parameter after myocardial infarction (MI). We compared three commercially available automatic quantification algorithms that make use of normal templates for the evaluation of infarct extent and severity in a large population of patients with remote MI. We studied 100 consecutive patients (80 men, mean age 63±11 years, mean LVEF 47%±15%) with a remote MI who underwent a resting technetium-99m tetrofosmin gated SPET study for infarct extent and severity quantification. The quantification algorithms used for comparison were a short-axis algorithm (Cedars-Emory quantitative analysis software, CEqual), a vertical long-axis algorithm (VLAX) and a three-dimensional fitting algorithm (Perfit). Semiquantitative visual assessment of infarct extent and severity using a 20-segment model with a 5-point score, and the relation of infarct extent and severity to rest LVEF determined by quantitative gated SPET (QGS), were used as standards to compare the different algorithms. Mean infarct extent was similar for visual analysis (30%±21%) and the VLAX algorithm (25%±17%), but CEqual (15%±11%) and Perfit (5%±6%) mean infarct extents were significantly lower compared with visual analysis and the VLAX algorithm. Moreover, infarct extent determined by Perfit was significantly lower than that determined by CEqual. Correlations between automatic and visual infarct extent and severity evaluations were moderate (r=0.47, P 2 , n=32) compared with anterior infarctions and non-obese patients for all three algorithms. In this large series of post-MI patients, the infarct extent and severity results obtained with automatic quantification algorithms that make use of normal templates were not interchangeable and correlated only moderately with semiquantitative visual analysis and LVEF. (orig.)

  2. Dynamic Optimization of Feedforward Automatic Gauge Control Based on Extended Kalman Filter

    Institute of Scientific and Technical Information of China (English)

    YANG Bin-hu; YANG Wei-dong; CHEN Lian-gui; QU Lei

    2008-01-01

Automatic gauge control is an essentially nonlinear process with time delay, and stochastically varying input and process noise always influence the target gauge control accuracy. To improve the control capability of feedforward automatic gauge control, a Kalman filter was employed to filter the noise signal transferred from one stand to another. The linearized matrices needed by the Kalman filter algorithm were derived; thus, the feedforward automatic gauge control architecture was dynamically optimized. Theoretical analyses and simulation show that the proposed algorithm is reasonable and effective.
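The stand-to-stand filtering idea can be illustrated with a scalar Kalman filter. This is a minimal sketch, not the paper's implementation: the random-walk state model, the noise levels `q` and `r`, and the constant 2 mm gauge deviation are all illustrative assumptions.

```python
import random

def kalman_filter(measurements, q=1e-4, r=0.05 ** 2):
    """Scalar Kalman filter: estimate a slowly varying gauge deviation
    from noisy measurements (process noise q, measurement noise r)."""
    x, p = measurements[0], 1.0      # initial state estimate and covariance
    estimates = []
    for z in measurements:
        p = p + q                    # predict (random-walk state model)
        k = p / (p + r)              # Kalman gain
        x = x + k * (z - x)          # update with measurement z
        p = (1 - k) * p
        estimates.append(x)
    return estimates

random.seed(0)
true_value = 2.0                     # mm, assumed constant deviation
noisy = [true_value + random.gauss(0, 0.05) for _ in range(200)]
filtered = kalman_filter(noisy)
```

The filtered sequence tracks the underlying deviation with markedly lower variance than the raw measurements, which is the property the feedforward controller relies on.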

  3. A comparative study of automatic image segmentation algorithms for target tracking in MR‐IGRT

    Science.gov (United States)

    Feng, Yuan; Kawrakow, Iwan; Olsen, Jeff; Parikh, Parag J.; Noel, Camille; Wooten, Omar; Du, Dongsu; Mutic, Sasa

    2016-01-01

On‐board magnetic resonance (MR) image guidance during radiation therapy offers the potential for more accurate treatment delivery. To utilize the real‐time image information, a crucial prerequisite is the ability to successfully segment and track regions of interest (ROI). The purpose of this work is to evaluate the performance of different segmentation algorithms using motion images (4 frames per second) acquired using an MR image‐guided radiotherapy (MR‐IGRT) system. Manual contours of the kidney, bladder, duodenum, and a liver tumor by an experienced radiation oncologist were used as the ground truth for performance evaluation. Besides the manual segmentation, images were automatically segmented using thresholding, fuzzy k‐means (FKM), k‐harmonic means (KHM), and reaction‐diffusion level set evolution (RD‐LSE) algorithms, as well as the tissue tracking algorithm provided by the ViewRay treatment planning and delivery system (VR‐TPDS). The performance of the five algorithms was evaluated quantitatively by comparing with the manual segmentation using the Dice coefficient and target registration error (TRE) measured as the distance between the centroid of the manual ROI and the centroid of the automatically segmented ROI. All methods were able to successfully segment the bladder and the kidney, but only FKM, KHM, and VR‐TPDS were able to segment the liver tumor and the duodenum. The performance of the thresholding, FKM, KHM, and RD‐LSE algorithms degraded as the local image contrast decreased, whereas the performance of the VR‐TPDS method was nearly independent of local image contrast due to the reference registration algorithm. For segmenting high‐contrast images (i.e., kidney), the thresholding method provided the best speed (<1 ms) with a satisfying accuracy (Dice=0.95). When the image contrast was low, the VR‐TPDS method had the best automatic contour. Results suggest an image quality determination procedure before segmentation and

  4. A comparative study of automatic image segmentation algorithms for target tracking in MR-IGRT.

    Science.gov (United States)

    Feng, Yuan; Kawrakow, Iwan; Olsen, Jeff; Parikh, Parag J; Noel, Camille; Wooten, Omar; Du, Dongsu; Mutic, Sasa; Hu, Yanle

    2016-03-01

On-board magnetic resonance (MR) image guidance during radiation therapy offers the potential for more accurate treatment delivery. To utilize the real-time image information, a crucial prerequisite is the ability to successfully segment and track regions of interest (ROI). The purpose of this work is to evaluate the performance of different segmentation algorithms using motion images (4 frames per second) acquired using an MR image-guided radiotherapy (MR-IGRT) system. Manual contours of the kidney, bladder, duodenum, and a liver tumor by an experienced radiation oncologist were used as the ground truth for performance evaluation. Besides the manual segmentation, images were automatically segmented using thresholding, fuzzy k-means (FKM), k-harmonic means (KHM), and reaction-diffusion level set evolution (RD-LSE) algorithms, as well as the tissue tracking algorithm provided by the ViewRay treatment planning and delivery system (VR-TPDS). The performance of the five algorithms was evaluated quantitatively by comparing with the manual segmentation using the Dice coefficient and target registration error (TRE) measured as the distance between the centroid of the manual ROI and the centroid of the automatically segmented ROI. All methods were able to successfully segment the bladder and the kidney, but only FKM, KHM, and VR-TPDS were able to segment the liver tumor and the duodenum. The performance of the thresholding, FKM, KHM, and RD-LSE algorithms degraded as the local image contrast decreased, whereas the performance of the VR-TPDS method was nearly independent of local image contrast due to the reference registration algorithm. For segmenting high-contrast images (i.e., kidney), the thresholding method provided the best speed (<1 ms) with a satisfying accuracy (Dice=0.95). When the image contrast was low, the VR-TPDS method had the best automatic contour. Results suggest an image quality determination procedure before segmentation and a combination of different
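The two evaluation metrics used in this study, the Dice coefficient and the centroid-based TRE, are simple to compute once masks are represented as pixel sets. A minimal sketch, with invented 10×10 toy masks rather than real MR data:

```python
def dice_coefficient(mask_a, mask_b):
    """Dice similarity between two binary masks given as sets of
    (row, col) pixel coordinates: 2|A∩B| / (|A| + |B|)."""
    a, b = set(mask_a), set(mask_b)
    if not a and not b:
        return 1.0
    return 2.0 * len(a & b) / (len(a) + len(b))

def centroid(mask):
    """Centroid of a binary mask; the TRE is the Euclidean distance
    between the manual and automatic centroids."""
    n = len(mask)
    return (sum(r for r, _ in mask) / n, sum(c for _, c in mask) / n)

manual = {(r, c) for r in range(10) for c in range(10)}      # 10x10 ROI
auto = {(r, c) for r in range(10) for c in range(2, 12)}     # shifted by 2
print(round(dice_coefficient(manual, auto), 2))              # 80-pixel overlap
```

For the shifted mask the Dice coefficient is 2·80/(100+100) = 0.8, and the centroid distance (TRE) is exactly the 2-pixel shift.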

  5. CAS algorithm-based optimum design of PID controller in AVR system

    International Nuclear Information System (INIS)

    Zhu Hui; Li Lixiang; Zhao Ying; Guo Yu; Yang Yixian

    2009-01-01

This paper presents a novel design method for determining the optimal PID controller parameters of an automatic voltage regulator (AVR) system using the chaotic ant swarm (CAS) algorithm. In the tuning process, the CAS algorithm is iterated to give the optimal parameters of the PID controller based on a defined fitness function, where the position vector of each ant in the CAS algorithm corresponds to the parameter vector of the PID controller. The proposed CAS-PID controllers can ensure better control system performance with respect to the reference input in comparison with GA-PID controllers. Numerical simulations are provided to verify the effectiveness and feasibility of the PID controller based on the CAS algorithm.
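The core of any such tuning scheme is a fitness function that scores a candidate (Kp, Ki, Kd) vector by simulating the closed loop. The sketch below uses a first-order plant and an integral-of-absolute-error fitness; the plant, the candidate set, and the "pick the best candidate" step are illustrative assumptions standing in for the AVR model and the CAS search, which the abstract does not detail:

```python
def pid_step_fitness(kp, ki, kd, dt=0.01, t_end=5.0):
    """Integral-of-absolute-error fitness of a PID controller driving a
    toy first-order plant (tau*dy/dt = -y + u) to a unit step reference."""
    tau = 0.5
    y, integ, prev_err, iae = 0.0, 0.0, 1.0, 0.0
    for _ in range(int(t_end / dt)):
        err = 1.0 - y
        integ += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integ + kd * deriv
        y += dt * (-y + u) / tau            # Euler step of the plant
        prev_err = err
        iae += abs(err) * dt                # accumulate |error|
    return iae

# Crude stand-in for the CAS search: score a few candidates, keep the best.
candidates = [(0.5, 0.1, 0.0), (2.0, 1.0, 0.05), (5.0, 2.0, 0.1)]
best = min(candidates, key=lambda p: pid_step_fitness(*p))
```

A swarm optimizer would generate and update these candidate vectors iteratively instead of scoring a fixed list, but the fitness evaluation is the same.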

  6. Preservation of memory-based automaticity in reading for older adults.

    Science.gov (United States)

    Rawson, Katherine A; Touron, Dayna R

    2015-12-01

    Concerning age-related effects on cognitive skill acquisition, the modal finding is that older adults do not benefit from practice to the same extent as younger adults in tasks that afford a shift from slower algorithmic processing to faster memory-based processing. In contrast, Rawson and Touron (2009) demonstrated a relatively rapid shift to memory-based processing in the context of a reading task. The current research extended beyond this initial study to provide more definitive evidence for relative preservation of memory-based automaticity in reading tasks for older adults. Younger and older adults read short stories containing unfamiliar noun phrases (e.g., skunk mud) followed by disambiguating information indicating the combination's meaning (either the normatively dominant meaning or an alternative subordinate meaning). Stories were repeated across practice blocks, and then the noun phrases were presented in novel sentence frames in a transfer task. Both age groups shifted from computation to retrieval after relatively few practice trials (as evidenced by convergence of reading times for dominant and subordinate items). Most important, both age groups showed strong evidence for memory-based processing of the noun phrases in the transfer task. In contrast, older adults showed minimal shifting to retrieval in an alphabet arithmetic task, indicating that the preservation of memory-based automaticity in reading was task-specific. Discussion focuses on important implications for theories of memory-based automaticity in general and for specific theoretical accounts of age effects on memory-based automaticity, as well as fruitful directions for future research. (c) 2015 APA, all rights reserved).

  7. Analog Group Delay Equalizers Design Based on Evolutionary Algorithm

    Directory of Open Access Journals (Sweden)

    M. Laipert

    2006-04-01

Full Text Available This paper deals with a design method for an analog all-pass filter designed to equalize the group delay frequency response of an analog filter. The method is based on an evolutionary algorithm, the Differential Evolution algorithm in particular. We are able to design equalizers that achieve an equal-ripple group delay frequency response in the pass-band of the low-pass filter. The procedure works automatically without an initial estimate. The method is demonstrated on practical examples.
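Differential Evolution itself is compact enough to sketch. The minimal DE/rand/1/bin minimizer below is a toy stand-in for the equalizer design search: the sphere objective replaces the real group-delay ripple objective, and the control parameters (`np_`, `cr`, `f_w`) are conventional defaults, not values from the paper:

```python
import random

def differential_evolution(f, bounds, np_=20, cr=0.9, f_w=0.6, gens=100, seed=3):
    """Minimal DE/rand/1/bin minimizer of f over box constraints `bounds`."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(np_)]
    costs = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(np_):
            # mutate: three distinct vectors other than the target
            a, b, c = rng.sample([x for j, x in enumerate(pop) if j != i], 3)
            trial = [a[d] + f_w * (b[d] - c[d]) if rng.random() < cr else pop[i][d]
                     for d in range(dim)]
            trial = [min(max(t, lo), hi) for t, (lo, hi) in zip(trial, bounds)]
            cost = f(trial)
            if cost <= costs[i]:            # greedy selection
                pop[i], costs[i] = trial, cost
    best = min(range(np_), key=lambda i: costs[i])
    return pop[best], costs[best]

# Toy objective: sphere function standing in for group-delay ripple.
x, fx = differential_evolution(lambda v: sum(t * t for t in v),
                               bounds=[(-5.0, 5.0)] * 3)
```

In the equalizer application, each vector would encode the all-pass section parameters and `f` would evaluate the resulting group-delay ripple.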

  8. Evaluating and Improving Automatic Sleep Spindle Detection by Using Multi-Objective Evolutionary Algorithms

    Directory of Open Access Journals (Sweden)

    Min-Yin Liu

    2017-05-01

Full Text Available Sleep spindles are brief bursts of brain activity in the sigma frequency range (11–16 Hz) measured by electroencephalography (EEG), mostly during non-rapid eye movement (NREM) stage 2 sleep. These oscillations are of great biological and clinical interest because they potentially play an important role in identifying and characterizing the processes of various neurological disorders. Conventionally, sleep spindles are identified by expert sleep clinicians via visual inspection of EEG signals. The process is laborious and the results are inconsistent among different experts. To resolve the problem, numerous computerized methods have been developed to automate the process of sleep spindle identification. Still, the performance of these automated sleep spindle detection methods varies inconsistently from study to study. There are two reasons: (1) the lack of common benchmark databases, and (2) the lack of commonly accepted evaluation metrics. In this study, we focus on tackling the second problem by proposing to evaluate the performance of a spindle detector in a multi-objective optimization context and hypothesize that using the resultant Pareto fronts for deriving evaluation metrics will improve automatic sleep spindle detection. We use a popular multi-objective evolutionary algorithm (MOEA), the Strength Pareto Evolutionary Algorithm (SPEA2), to optimize six existing frequency-based sleep spindle detection algorithms: three Fourier-based, one continuous wavelet transform (CWT)-based, and two Hilbert-Huang transform (HHT)-based algorithms. We also explore three hybrid approaches. Trained and tested on the open-access DREAMS and MASS databases, two new hybrid methods combining Fourier with HHT algorithms show significant performance improvement with F1-scores of 0.726–0.737.
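The multi-objective framing can be made concrete with a small sketch: given detector operating points scored by two objectives (here sensitivity and precision, an illustrative choice), keep the non-dominated Pareto front and derive a scalar metric such as the best F1 on that front. The detector scores below are invented:

```python
def pareto_front(points):
    """Non-dominated (sensitivity, precision) pairs: a point is dropped
    if some other point is >= in both objectives (and differs)."""
    return [p for p in points
            if not any(q[0] >= p[0] and q[1] >= p[1] and q != p
                       for q in points)]

def f1(sens, prec):
    """Harmonic mean of sensitivity and precision."""
    return 2 * sens * prec / (sens + prec) if sens + prec else 0.0

# Hypothetical operating points of several detector configurations.
detectors = [(0.90, 0.55), (0.80, 0.70), (0.60, 0.85), (0.75, 0.60)]
front = pareto_front(detectors)
best_f1 = max(f1(s, p) for s, p in front)
```

SPEA2 maintains and refines such a front during optimization rather than filtering a fixed list, but the dominance test is the same.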

  9. Automatic motor task selection via a bandit algorithm for a brain-controlled button

    Science.gov (United States)

    Fruitet, Joan; Carpentier, Alexandra; Munos, Rémi; Clerc, Maureen

    2013-02-01

    Objective. Brain-computer interfaces (BCIs) based on sensorimotor rhythms use a variety of motor tasks, such as imagining moving the right or left hand, the feet or the tongue. Finding the tasks that yield best performance, specifically to each user, is a time-consuming preliminary phase to a BCI experiment. This study presents a new adaptive procedure to automatically select (online) the most promising motor task for an asynchronous brain-controlled button. Approach. We develop for this purpose an adaptive algorithm UCB-classif based on the stochastic bandit theory and design an EEG experiment to test our method. We compare (offline) the adaptive algorithm to a naïve selection strategy which uses uniformly distributed samples from each task. We also run the adaptive algorithm online to fully validate the approach. Main results. By not wasting time on inefficient tasks, and focusing on the most promising ones, this algorithm results in a faster task selection and a more efficient use of the BCI training session. More precisely, the offline analysis reveals that the use of this algorithm can reduce the time needed to select the most appropriate task by almost half without loss in precision, or alternatively, allow us to investigate twice the number of tasks within a similar time span. Online tests confirm that the method leads to an optimal task selection. Significance. This study is the first one to optimize the task selection phase by an adaptive procedure. By increasing the number of tasks that can be tested in a given time span, the proposed method could contribute to reducing ‘BCI illiteracy’.
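The stochastic-bandit view of task selection can be sketched with the classic UCB1 rule: each motor task is an arm, each trial's classification success is the reward, and the exploration bonus shrinks as a task accumulates trials. The per-task accuracies and round count below are invented for illustration; UCB-classif itself adds refinements not shown here:

```python
import math
import random

def ucb_select(counts, rewards, t):
    """Pick the arm (motor task) maximizing mean reward + UCB1 bonus."""
    for i, n in enumerate(counts):
        if n == 0:
            return i                      # try every task once first
    return max(range(len(counts)),
               key=lambda i: rewards[i] / counts[i]
               + math.sqrt(2 * math.log(t) / counts[i]))

random.seed(1)
true_rates = [0.2, 0.5, 0.9]              # hypothetical per-task accuracies
counts, rewards = [0, 0, 0], [0.0, 0.0, 0.0]
for t in range(1, 2001):
    arm = ucb_select(counts, rewards, t)
    reward = 1.0 if random.random() < true_rates[arm] else 0.0
    counts[arm] += 1
    rewards[arm] += reward
```

After enough rounds the algorithm concentrates its trials on the most promising task, which is exactly the "don't waste time on inefficient tasks" behavior described above.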

  10. Automatic boiling water reactor loading pattern design using ant colony optimization algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Wang, C.-D. [Department of Engineering and System Science, National Tsing Hua University, 101, Section 2 Kuang Fu Road, Hsinchu 30013, Taiwan (China); Nuclear Engineering Division, Institute of Nuclear Energy Research, No. 1000, Wenhua Rd., Jiaan Village, Longtan Township, Taoyuan County 32546, Taiwan (China)], E-mail: jdwang@iner.gov.tw; Lin Chaung [Department of Engineering and System Science, National Tsing Hua University, 101, Section 2 Kuang Fu Road, Hsinchu 30013, Taiwan (China)

    2009-08-15

An automatic boiling water reactor (BWR) loading pattern (LP) design methodology was developed using the rank-based ant system (RAS), which is a variant of the ant colony optimization (ACO) algorithm. To reduce design complexity, only the fuel assemblies (FAs) of one-eighth of the core positions were determined using the RAS algorithm, and the corresponding FAs were then loaded into the other parts of the core. Heuristic information was adopted to exclude the selection of inappropriate FAs, which reduces the search space and thus the computation time. When the LP was determined, the Haling cycle length, beginning of cycle (BOC) shutdown margin (SDM), and Haling end of cycle (EOC) maximum fraction of limit for critical power ratio (MFLCPR) were calculated using the SIMULATE-3 code and used to evaluate the LP for updating the pheromone of the RAS. The developed design methodology was demonstrated using FAs of a reference cycle of the BWR6 nuclear power plant. The results show that the designed LP can be obtained within reasonable computation time and has a longer cycle length than that of the original design.
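The rank-based pheromone update can be sketched on a toy assignment problem: ants pick one candidate option per core position in proportion to pheromone, and only the best-ranked ants deposit pheromone, weighted by rank. The additive cost table below is an invented stand-in for the SIMULATE-3 evaluation of a loading pattern:

```python
import random

def rank_based_ant_system(costs, n_ants=10, n_elite=3, rounds=40, seed=2):
    """Toy rank-based ant system (RAS) minimizing a separable cost:
    costs[s][o] is the (assumed) penalty of option o in slot s."""
    rng = random.Random(seed)
    n_slots, n_opts = len(costs), len(costs[0])
    tau = [[1.0] * n_opts for _ in range(n_slots)]    # pheromone trails
    best, best_cost = None, float("inf")
    for _ in range(rounds):
        ants = []
        for _ in range(n_ants):
            sol = [rng.choices(range(n_opts), weights=tau[s])[0]
                   for s in range(n_slots)]
            cost = sum(costs[s][o] for s, o in enumerate(sol))
            ants.append((cost, sol))
        ants.sort(key=lambda t: t[0])
        if ants[0][0] < best_cost:
            best_cost, best = ants[0][0], ants[0][1]
        for s in range(n_slots):                      # evaporation
            tau[s] = [0.9 * t for t in tau[s]]
        for rank, (cost, sol) in enumerate(ants[:n_elite]):
            w = n_elite - rank                        # rank weight
            for s, o in enumerate(sol):
                tau[s][o] += w / (1.0 + cost)
    return best, best_cost

# 4 slots, 3 candidate options each; the optimum picks option 0 everywhere.
toy_costs = [[1.0, 5.0, 9.0]] * 4
best, cost = rank_based_ant_system(toy_costs)
```

A real LP design would replace the additive cost with a full-core simulation and add the heuristic exclusion of inappropriate FAs mentioned above.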

  11. Adaptive and automatic red blood cell counting method based on microscopic hyperspectral imaging technology

    Science.gov (United States)

    Liu, Xi; Zhou, Mei; Qiu, Song; Sun, Li; Liu, Hongying; Li, Qingli; Wang, Yiting

    2017-12-01

Red blood cell counting, as a routine examination, plays an important role in medical diagnoses. Although automated hematology analyzers are widely used, manual microscopic examination by a hematologist or pathologist is still unavoidable, which is time-consuming and error-prone. This paper proposes a fully automatic red blood cell counting method which is based on microscopic hyperspectral imaging of blood smears and combines spatial and spectral information to achieve high precision. The acquired hyperspectral image data of the blood smear in the visible and near-infrared spectral range are first preprocessed, and then a quadratic blind linear unmixing algorithm is used to obtain endmember abundance images. Based on mathematical morphological operations and an adaptive Otsu's method, a binarization process is performed on the abundance images. Finally, the connected component labeling algorithm with magnification-based parameter setting is applied to automatically select the binary images of red blood cell cytoplasm. Experimental results show that the proposed method performs well and has potential for clinical applications.
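Two of the building blocks named above, Otsu thresholding and connected component labeling, can be sketched directly. This is a minimal illustration on a tiny invented binary image, not the paper's adaptive, magnification-aware pipeline:

```python
def otsu_threshold(hist):
    """Otsu's method: threshold of a 256-bin grayscale histogram that
    maximizes the between-class variance."""
    total = sum(hist)
    total_mean = sum(i * h for i, h in enumerate(hist)) / total
    best_t, best_var, w0, sum0 = 0, -1.0, 0, 0.0
    for t in range(256):
        w0 += hist[t]                      # class-0 weight up to bin t
        sum0 += t * hist[t]
        if w0 == 0 or w0 == total:
            continue
        m0 = sum0 / w0
        m1 = (total_mean * total - sum0) / (total - w0)
        var = w0 * (total - w0) * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def label_components(binary):
    """Count 4-connected foreground blobs (cell candidates) in a binary
    image given as a list of 0/1 rows, via iterative flood fill."""
    h, w = len(binary), len(binary[0])
    seen = [[False] * w for _ in range(h)]
    blobs = 0
    for r in range(h):
        for c in range(w):
            if binary[r][c] and not seen[r][c]:
                blobs += 1
                stack = [(r, c)]
                seen[r][c] = True
                while stack:
                    y, x = stack.pop()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and binary[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
    return blobs

img = [[0, 0, 0, 0, 0],
       [0, 1, 1, 0, 0],
       [0, 1, 1, 0, 1],
       [0, 0, 0, 0, 1],
       [0, 0, 0, 0, 0]]
print(label_components(img))  # two separate blobs
```

The real method applies these steps to unmixed abundance images and adds size filtering so that only red-blood-cell-sized components are counted.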

  12. Automatic Parking of Self-Driving CAR Based on LIDAR

    Science.gov (United States)

    Lee, B.; Wei, Y.; Guo, I. Y.

    2017-09-01

To overcome the deficiencies of ultrasonic sensors and cameras, this paper proposed a method of autonomous parking for a self-driving car using HDL-32E LiDAR. First, the 3-D point cloud data was preprocessed. Then we calculated the minimum size of the parking space according to vehicle dynamics. Second, the rapidly-exploring random tree (RRT) algorithm was improved in two aspects based on the moving characteristics of an autonomous car, and we calculated the parking path on the basis of the vehicle's dynamics and collision constraints. Besides, we used a fuzzy logic controller to control the brake and accelerator in order to keep the speed stable. Finally, experiments were conducted in an autonomous car, and the results show that the proposed automatic parking system is feasible and effective.

  13. AUTOMATIC PARKING OF SELF-DRIVING CAR BASED ON LIDAR

    Directory of Open Access Journals (Sweden)

    B. Lee

    2017-09-01

Full Text Available To overcome the deficiencies of ultrasonic sensors and cameras, this paper proposed a method of autonomous parking for a self-driving car using HDL-32E LiDAR. First, the 3-D point cloud data was preprocessed. Then we calculated the minimum size of the parking space according to vehicle dynamics. Second, the rapidly-exploring random tree (RRT) algorithm was improved in two aspects based on the moving characteristics of an autonomous car, and we calculated the parking path on the basis of the vehicle’s dynamics and collision constraints. Besides, we used a fuzzy logic controller to control the brake and accelerator in order to keep the speed stable. Finally, experiments were conducted in an autonomous car, and the results show that the proposed automatic parking system is feasible and effective.
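The baseline RRT that the authors improve on is compact enough to sketch. This toy version plans a point path in a 2-D 10×10 world with one circular obstacle; the workspace, goal bias, and step size are invented, and it ignores the vehicle kinematics and collision model that the paper adds:

```python
import math
import random

def rrt(start, goal, obstacles, step=0.5, iters=2000, seed=4):
    """Minimal 2-D RRT: extend the nearest tree node toward a random
    sample (goal-biased), rejecting points inside circular obstacles."""
    rng = random.Random(seed)
    nodes, parent = [start], {start: None}
    for _ in range(iters):
        sample = (goal if rng.random() < 0.1
                  else (rng.uniform(0, 10), rng.uniform(0, 10)))
        near = min(nodes, key=lambda n: math.dist(n, sample))
        d = math.dist(near, sample)
        if d == 0:
            continue
        new = (near[0] + step * (sample[0] - near[0]) / d,
               near[1] + step * (sample[1] - near[1]) / d)
        if any(math.dist(new, (ox, oy)) < r for ox, oy, r in obstacles):
            continue                                   # collision: discard
        nodes.append(new)
        parent[new] = near
        if math.dist(new, goal) < step:                # close enough
            path = [new]
            while parent[path[-1]] is not None:
                path.append(parent[path[-1]])
            return path[::-1]
    return None

path = rrt((1.0, 1.0), (9.0, 9.0), obstacles=[(5.0, 5.0, 1.5)])
```

A parking planner would replace the straight-line extension with kinematically feasible arcs and check the full vehicle footprint against the point cloud, which is where the paper's two improvements come in.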

  14. Automatic discrimination between safe and unsafe swallowing using a reputation-based classifier

    Directory of Open Access Journals (Sweden)

    Nikjoo Mohammad S

    2011-11-01

Full Text Available Abstract Background Swallowing accelerometry has been suggested as a potential non-invasive tool for bedside dysphagia screening. Various vibratory signal features and complementary measurement modalities have been put forth in the literature for the potential discrimination between safe and unsafe swallowing. To date, automatic classification of swallowing accelerometry has exclusively involved a single axis of vibration, although a second axis is known to contain additional information about the nature of the swallow. Furthermore, the only published attempt at automatic classification in adult patients has been based on a small sample of swallowing vibrations. Methods In this paper, a large corpus of dual-axis accelerometric signals was collected from 30 older adults (aged 65.47 ± 13.4 years, 15 male) referred to videofluoroscopic examination on the suspicion of dysphagia. We invoked a reputation-based classifier combination to automatically categorize the dual-axis accelerometric signals into safe and unsafe swallows, as labeled via videofluoroscopic review. From these participants, a total of 224 swallowing samples were obtained, 164 of which were labeled as unsafe swallows (swallows where the bolus entered the airway) and 60 as safe swallows. Three separate support vector machine (SVM) classifiers and eight different features were selected for classification. Results With selected time, frequency and information-theoretic features, the reputation-based algorithm distinguished between safe and unsafe swallowing with promising accuracy (80.48 ± 5.0%), high sensitivity (97.1 ± 2%) and modest specificity (64 ± 8.8%). Interpretation of the most discriminatory features revealed that in general, unsafe swallows had lower mean vibration amplitude and faster autocorrelation decay, suggestive of decreased hyoid excursion and compromised coordination, respectively. Further, owing to its performance-based weighting of component classifiers, the static

  15. Automatic learning algorithm for the MD-logic artificial pancreas system.

    Science.gov (United States)

    Miller, Shahar; Nimri, Revital; Atlas, Eran; Grunberg, Eli A; Phillip, Moshe

    2011-10-01

Applying real-time learning to an artificial pancreas system could effectively track the unpredictable behavior of glucose-insulin dynamics and adjust insulin treatment accordingly. We describe a novel learning algorithm and its performance when integrated into the MD-Logic Artificial Pancreas (MDLAP) system developed by the Diabetes Technology Center, Schneider Children's Medical Center of Israel, Petah Tikva, Israel. The algorithm was designed to establish an initial patient profile using open-loop data (Initial Learning Algorithm component) and then make periodic adjustments during closed-loop operation (Runtime Learning Algorithm component). The MDLAP system, integrated with the learning algorithm, was tested in seven different experiments using the University of Virginia/Padova simulator, comprising adults, adolescents, and children. The experiments included simulations using the open-loop and closed-loop control strategy under nominal and varying insulin sensitivity conditions. The learning algorithm was automatically activated at the end of the open-loop segment and after every day of the closed-loop operation. Metabolic control parameters achieved at selected time points were compared. The percentage of time glucose levels were maintained within 70-180 mg/dL for children and adolescents significantly improved when open-loop was compared with day 6 of closed-loop control; hypoglycemia was significantly reduced by approximately sevenfold, and there was a significant reduction in the Low Blood Glucose Index (P<0.001). The new algorithm was effective in characterizing the patient profiles from open-loop data and in adjusting treatment to provide better glycemic control during closed-loop control in both conditions. These findings warrant corroboratory clinical trials.

  16. KM-FCM: A fuzzy clustering optimization algorithm based on Mahalanobis distance

    Directory of Open Access Journals (Sweden)

    Zhiwen ZU

    2018-04-01

Full Text Available The traditional fuzzy clustering algorithm uses Euclidean distance as the similarity criterion, which is disadvantageous for multidimensional data processing. To address this, Mahalanobis distance is used instead of the traditional Euclidean distance, and the optimization of a fuzzy clustering algorithm based on Mahalanobis distance is studied to enhance the clustering effect and ability. The initialization is performed by a heuristic search algorithm combined with the k-means algorithm, and a validity function automatically adjusts the optimal number of clusters; on this basis, an optimization algorithm, KM-FCM, is proposed. The new algorithm is compared with the FCM, FCM-M and M-FCM algorithms on three standard data sets. The experimental results show that the KM-FCM algorithm is effective: it has higher clustering accuracy than FCM, FCM-M and M-FCM and handles high-dimensional data clustering well, it has a global optimization effect, and the number of clusters does not need to be set in advance. The new algorithm provides a reference for the optimization of fuzzy clustering algorithms based on Mahalanobis distance.
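The key ingredient, replacing Euclidean distance with the Mahalanobis form (x − m)ᵀ S⁻¹ (x − m), is easy to illustrate. The 2-D example below uses an invented diagonal inverse covariance for a cluster elongated along x; it shows why two points at equal Euclidean distance from the cluster mean get different Mahalanobis distances:

```python
def mahalanobis_sq(x, mean, cov_inv):
    """Squared Mahalanobis distance (x - m)^T S^{-1} (x - m) in 2-D,
    with cov_inv given as a 2x2 nested list."""
    dx = [x[0] - mean[0], x[1] - mean[1]]
    return (dx[0] * (cov_inv[0][0] * dx[0] + cov_inv[0][1] * dx[1])
            + dx[1] * (cov_inv[1][0] * dx[0] + cov_inv[1][1] * dx[1]))

# Cluster elongated along x: variance 4 in x, 1 in y (inverse covariance).
cov_inv = [[0.25, 0.0], [0.0, 1.0]]
mean = (0.0, 0.0)

# Two points at equal Euclidean distance 2 from the mean:
along_x = (2.0, 0.0)
along_y = (0.0, 2.0)
print(mahalanobis_sq(along_x, mean, cov_inv))  # smaller: along the spread
print(mahalanobis_sq(along_y, mean, cov_inv))  # larger: across the spread
```

In KM-FCM this distance replaces the squared Euclidean term inside the fuzzy membership and objective computations, so elongated clusters are measured in their own geometry.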

  17. Automatic bounding estimation in modified NLMS algorithm

    International Nuclear Information System (INIS)

    Shahtalebi, K.; Doost-Hoseini, A.M.

    2002-01-01

The Modified Normalized Least Mean Square algorithm, a sign form of NLMS based on set-membership (SM) theory in the class of optimal bounding ellipsoid (OBE) algorithms, requires a priori knowledge of error bounds, which is unknown in most applications. For a special but popular case of measurement noise, a simple algorithm for estimating these bounds is proposed. Simulation examples compare the performance of the algorithm with that of the Modified Normalized Least Mean Square algorithm.

  18. An improved algorithm for automatic detection of saccades in eye movement data and for calculating saccade parameters.

    Science.gov (United States)

    Behrens, F; Mackeben, M; Schröder-Preikschat, W

    2010-08-01

This paper presents a saccade-detection algorithm for eye-movement time series that builds on an earlier algorithm. It achieves substantial improvements by using an adaptive-threshold model instead of fixed thresholds and by using the eye-movement acceleration signal. This has four advantages: (1) Adaptive thresholds are calculated automatically from the preceding acceleration data for detecting the beginning of a saccade, and thresholds are modified during the saccade. (2) The monotonicity of the position signal during the saccade, together with the acceleration with respect to the thresholds, is used to reliably determine the end of the saccade. (3) This allows differentiation between saccades following the main sequence and non-main-sequence saccades. (4) Artifacts of various kinds can be detected and eliminated. The algorithm is demonstrated by applying it to human eye movement data (obtained by EOG) recorded during driving a car.  A second demonstration of the algorithm detects microsleep episodes in eye movement data.
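The adaptive-threshold idea in advantage (1) can be sketched in a few lines: differentiate position twice and flag samples whose acceleration exceeds k times the running standard deviation of the preceding window. This is a simplification of the paper's scheme (no end-of-saccade logic, no artifact handling), and the synthetic signal with one step-like "saccade" at sample 500 is invented:

```python
import math

def detect_saccade_onsets(positions, dt=0.001, k=3.0, window=200):
    """Flag acceleration samples exceeding an adaptive threshold:
    k running standard deviations of the preceding `window` samples."""
    veloc = [(positions[i + 1] - positions[i]) / dt
             for i in range(len(positions) - 1)]
    accel = [(veloc[i + 1] - veloc[i]) / dt for i in range(len(veloc) - 1)]
    onsets = []
    for i in range(window, len(accel)):
        prev = accel[i - window:i]
        mean = sum(prev) / window
        std = (sum((a - mean) ** 2 for a in prev) / window) ** 0.5
        if (std > 0 and abs(accel[i] - mean) > k * std
                and (not onsets or i - onsets[-1] > window)):
            onsets.append(i)               # refractory gap suppresses echoes
    return onsets

# Synthetic trace: low-amplitude drift plus one rapid 10-unit shift.
pos = [0.01 * math.sin(0.05 * i) for i in range(1000)]
for i in range(500, 520):
    pos[i] += (i - 499) * 0.5
for i in range(520, 1000):
    pos[i] += 10.0
onsets = detect_saccade_onsets(pos)
```

Because the threshold is derived from the local noise level, the same code works across recordings with different noise without retuning a fixed cutoff.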

  19. Automatic Derivation of Statistical Data Analysis Algorithms: Planetary Nebulae and Beyond

    OpenAIRE

    Fischer, Bernd; Knuth, Kevin; Hajian, Arsen; Schumann, Johann

    2004-01-01

    AUTOBAYES is a fully automatic program synthesis system for the data analysis domain. Its input is a declarative problem description in form of a statistical model; its output is documented and optimized C/C++ code. The synthesis process relies on the combination of three key techniques. Bayesian networks are used as a compact internal representation mechanism which enables problem decompositions and guides the algorithm derivation. Program schemas are used as independently composable buildin...

  20. Design of Low Power Algorithms for Automatic Embedded Analysis of Patch ECG Signals

    DEFF Research Database (Denmark)

    Saadi, Dorthe Bodholt

    , several different cable-free wireless patch-type ECG recorders have recently reached the market. One of these recorders is the ePatch designed by the Danish company DELTA. The extended monitoring period available with the patch recorders has demonstrated to increase the diagnostic yield of outpatient ECG....... Such algorithms could allow the real-time transmission of clinically relevant information to a central monitoring station. The first step in embedded ECG interpretation is the automatic detection of each individual heartbeat. An important part of this project was therefore to design a novel algorithm...

  1. A 1DVAR-based snowfall rate retrieval algorithm for passive microwave radiometers

    Science.gov (United States)

    Meng, Huan; Dong, Jun; Ferraro, Ralph; Yan, Banghua; Zhao, Limin; Kongoli, Cezar; Wang, Nai-Yu; Zavodsky, Bradley

    2017-06-01

    Snowfall rate retrieval from spaceborne passive microwave (PMW) radiometers has gained momentum in recent years. PMW can be so utilized because of its ability to sense in-cloud precipitation. A physically based, overland snowfall rate (SFR) algorithm has been developed using measurements from the Advanced Microwave Sounding Unit-A/Microwave Humidity Sounder sensor pair and the Advanced Technology Microwave Sounder. Currently, these instruments are aboard five polar-orbiting satellites, namely, NOAA-18, NOAA-19, Metop-A, Metop-B, and Suomi-NPP. The SFR algorithm relies on a separate snowfall detection algorithm that is composed of a satellite-based statistical model and a set of numerical weather prediction model-based filters. There are four components in the SFR algorithm itself: cloud properties retrieval, computation of ice particle terminal velocity, ice water content adjustment, and the determination of snowfall rate. The retrieval of cloud properties is the foundation of the algorithm and is accomplished using a one-dimensional variational (1DVAR) model. An existing model is adopted to derive ice particle terminal velocity. Since no measurement of cloud ice distribution is available when SFR is retrieved in near real time, such distribution is implicitly assumed by deriving an empirical function that adjusts retrieved SFR toward radar snowfall estimates. Finally, SFR is determined numerically from a complex integral. The algorithm has been validated against both radar and ground observations of snowfall events from the contiguous United States with satisfactory results. Currently, the SFR product is operationally generated at the National Oceanic and Atmospheric Administration and can be obtained from that organization.

  2. Automatic re-contouring in 4D radiotherapy

    International Nuclear Information System (INIS)

    Lu, Weiguo; Olivera, Gustavo H; Chen, Quan; Chen, Ming-Li; Ruchala, Kenneth J

    2006-01-01

    Delineating regions of interest (ROIs) on each phase of four-dimensional (4D) computed tomography (CT) images is an essential step for 4D radiotherapy. The requirement of manual phase-by-phase contouring prohibits the routine use of 4D radiotherapy. This paper develops an automatic re-contouring algorithm that combines techniques of deformable registration and surface construction. ROIs are manually contoured slice-by-slice in the reference phase image. A reference surface is constructed based on these reference contours using a triangulated surface construction technique. The deformable registration technique provides the voxel-to-voxel mapping between the reference phase and the test phase. The vertices of the reference surface are displaced in accordance with the deformation map, resulting in a deformed surface. The new contours are reconstructed by cutting the deformed surface slice-by-slice along the transversal, sagittal or coronal direction. Since both the inputs and outputs of our automatic re-contouring algorithm are contours, it is relatively easy to cope with any treatment planning system. We tested our automatic re-contouring algorithm using a deformable phantom and 4D CT images of six lung cancer patients. The proposed algorithm is validated by visual inspections and quantitative comparisons of the automatic re-contours with both the gold standard segmentations and the manual contours. Based on the automatic delineated ROIs, changes of tumour and sensitive structures during respiration are quantitatively analysed. This algorithm could also be used to re-contour daily images for treatment evaluation and adaptive radiotherapy

  3. Fully automatic algorithm for segmenting full human diaphragm in non-contrast CT Images

    Science.gov (United States)

    Karami, Elham; Gaede, Stewart; Lee, Ting-Yim; Samani, Abbas

    2015-03-01

The diaphragm is a sheet of muscle which separates the thorax from the abdomen and acts as the most important muscle of the respiratory system. As such, an accurate segmentation of the diaphragm not only provides key information for functional analysis of the respiratory system, but also can be used for locating other abdominal organs such as the liver. However, diaphragm segmentation is extremely challenging in non-contrast CT images due to the diaphragm's similar appearance to other abdominal organs. In this paper, we present a fully automatic algorithm for diaphragm segmentation in non-contrast CT images. The method is mainly based on a priori knowledge about human diaphragm anatomy. The diaphragm domes are in contact with the lungs and the heart, while its circumference runs along the lumbar vertebrae of the spine as well as the inferior border of the ribs and sternum. As such, the diaphragm can be delineated by segmentation of these organs followed by connecting relevant parts of their outlines properly. More specifically, the bottom surface of the lungs and heart, the spine borders and the ribs are delineated, leading to a set of scattered points which represent the diaphragm's geometry. Next, a B-spline filter is used to find the smoothest surface that passes through these points. This algorithm was tested on a non-contrast CT image of a lung cancer patient. The results indicate that there is an average Hausdorff distance of 2.96 mm between the automatic and manually segmented diaphragms, which implies a favourable accuracy.

  4. Improvement in PWR automatic optimization reloading methods using genetic algorithm

    International Nuclear Information System (INIS)

    Levine, S.H.; Ivanov, K.; Feltus, M.

    1996-01-01

    The objective of using automatic optimized reloading methods is to provide the Nuclear Engineer with an efficient method for reloading a nuclear reactor which results in superior core configurations that minimize fuel costs. Previous methods developed by Levine et al. required a large effort to develop the initial core loading using a priority loading scheme. Subsequent modifications to this core configuration were made using expert rules to produce the final core design. Improvements in this technique have been made by using a genetic algorithm to produce improved core reload designs for PWRs more efficiently (authors).

  5. Improvement in PWR automatic optimization reloading methods using genetic algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Levine, S H; Ivanov, K; Feltus, M [Pennsylvania State Univ., University Park, PA (United States)

    1996-12-01

    The objective of using automatic optimized reloading methods is to provide the Nuclear Engineer with an efficient method for reloading a nuclear reactor which results in superior core configurations that minimize fuel costs. Previous methods developed by Levine et al. required a large effort to develop the initial core loading using a priority loading scheme. Subsequent modifications to this core configuration were made using expert rules to produce the final core design. Improvements in this technique have been made by using a genetic algorithm to produce improved core reload designs for PWRs more efficiently (authors).

  6. Automatic segmentation of the right ventricle from cardiac MRI using a learning-based approach.

    Science.gov (United States)

    Avendi, Michael R; Kheradvar, Arash; Jafarkhani, Hamid

    2017-12-01

    This study aims to accurately segment the right ventricle (RV) from cardiac MRI using a fully automatic learning-based method. The proposed method uses deep learning algorithms, i.e., convolutional neural networks and stacked autoencoders, for automatic detection and initial segmentation of the RV chamber. The initial segmentation is then combined with deformable models to improve the accuracy and robustness of the process. We trained our algorithm using 16 cardiac MRI datasets of the MICCAI 2012 RV Segmentation Challenge database and validated our technique using the rest of the dataset (32 subjects). An average Dice metric of 82.5% along with an average Hausdorff distance of 7.85 mm were achieved for all the studied subjects. Furthermore, a high correlation and level of agreement with the ground truth contours for end-diastolic volume (0.98), end-systolic volume (0.99), and ejection fraction (0.93) were observed. Our results show that deep learning algorithms can be effectively used for automatic segmentation of the RV. Computed quantitative metrics of our method outperformed those of the existing techniques that participated in the MICCAI 2012 challenge, as reported by the challenge organizers. Magn Reson Med 78:2439-2448, 2017. © 2017 International Society for Magnetic Resonance in Medicine.

  7. Development and Beam Tests of an Automatic Algorithm for Alignment of LHC Collimators with Embedded BPMs

    CERN Document Server

    Valentino, G; Gasior, M; Mirarchi, D; Nosych, A A; Redaelli, S; Salvachua, B; Assmann, R W; Sammut, N

    2013-01-01

    Collimators with embedded Beam Position Monitor (BPM) buttons will be installed in the LHC during the upcoming long shutdown period. During the subsequent operation, the BPMs will allow the collimator jaws to be kept centered around the beam trajectory. In this manner, the best possible beam cleaning efficiency and machine protection can be provided at unprecedented higher beam energies and intensities. A collimator alignment algorithm is proposed to center the jaws automatically around the beam. The algorithm is based on successive approximation, as the BPM measurements are affected by non-linearities, which vary with the distance between opposite buttons, as well as the difference between the beam and the jaw centers. The successful test results, as well as some considerations for eventual operation in the LHC are also presented.
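The successive-approximation idea in this record (treat each non-linear BPM reading as if it were linear, correct the jaw centre by it, and iterate until the residual offset vanishes) can be sketched as below. The `tanh` response is an invented stand-in for the real non-linearity, which in the LHC case depends on jaw gap and beam-to-jaw offset.

```python
import math

def bpm_reading(beam, jaw_center):
    # Hypothetical non-linear BPM response: compresses large offsets.
    offset = beam - jaw_center
    return math.tanh(offset)

def center_jaws(beam, jaw_center=0.0, iterations=8):
    """Successive approximation: apply each reading as a linear correction
    and repeat; the residual offset shrinks toward zero on every pass."""
    for _ in range(iterations):
        jaw_center += bpm_reading(beam, jaw_center)
    return jaw_center

final = center_jaws(beam=1.5)  # converges toward the true beam position 1.5
```

Because the response is monotone and under-estimates large offsets, each correction undershoots but never overshoots, so the iteration converges without the algorithm ever needing the exact non-linearity.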

  8. WiseScaffolder: an algorithm for the semi-automatic scaffolding of Next Generation Sequencing data.

    Science.gov (United States)

    Farrant, Gregory K; Hoebeke, Mark; Partensky, Frédéric; Andres, Gwendoline; Corre, Erwan; Garczarek, Laurence

    2015-09-03

    The sequencing depth provided by high-throughput sequencing technologies has allowed a rise in the number of de novo sequenced genomes that could potentially be closed without further sequencing. However, genome scaffolding and closure require costly human supervision that often results in genomes being published as drafts. A number of automatic scaffolders were recently released, which improved the global quality of genomes published in the last few years. Yet, none of them reach the efficiency of manual scaffolding. Here, we present an innovative semi-automatic scaffolder that additionally helps with chimerae resolution and generates valuable contig maps and outputs for manual improvement of the automatic scaffolding. This software was tested on the newly sequenced marine cyanobacterium Synechococcus sp. WH8103 as well as two reference datasets used in previous studies, Rhodobacter sphaeroides and Homo sapiens chromosome 14 (http://gage.cbcb.umd.edu/). The quality of resulting scaffolds was compared to that of three other stand-alone scaffolders: SSPACE, SOPRA and SCARPA. For all three model organisms, WiseScaffolder produced better results than other scaffolders in terms of contiguity statistics (number of genome fragments, N50, LG50, etc.) and, in the case of WH8103, the reliability of the scaffolds was confirmed by whole genome alignment against a closely related reference genome. We also propose an efficient computer-assisted strategy for manual improvement of the scaffolding, using outputs generated by WiseScaffolder, as well as for genome finishing that in our hands led to the circularization of the WH8103 genome. Altogether, WiseScaffolder proved more efficient than three other scaffolders for both prokaryotic and eukaryotic genomes and is thus likely applicable to most genome projects. The scaffolding pipeline described here should be of particular interest to biologists wishing to take advantage of the high added value of complete genomes.

  9. A Multi-Scale Settlement Matching Algorithm Based on ARG

    Science.gov (United States)

    Yue, Han; Zhu, Xinyan; Chen, Di; Liu, Lingjia

    2016-06-01

    Homonymous entity matching is an important part of multi-source spatial data integration, automatic updating and change detection. Considering the low accuracy of existing matching methods in dealing with matching multi-scale settlement data, an algorithm based on Attributed Relational Graph (ARG) is proposed. The algorithm firstly divides two settlement scenes at different scales into blocks by small-scale road network and constructs local ARGs in each block. Then, ascertains candidate sets by merging procedures and obtains the optimal matching pairs by comparing the similarity of ARGs iteratively. Finally, the corresponding relations between settlements at large and small scales are identified. At the end of this article, a demonstration is presented and the results indicate that the proposed algorithm is capable of handling sophisticated cases.

  10. Bio-robots automatic navigation with graded electric reward stimulation based on Reinforcement Learning.

    Science.gov (United States)

    Zhang, Chen; Sun, Chao; Gao, Liqiang; Zheng, Nenggan; Chen, Weidong; Zheng, Xiaoxiang

    2013-01-01

    Bio-robots based on brain-computer interfaces (BCI) suffer from a failure to consider the characteristics of the animals in navigation. This paper proposes a new method for bio-robots' automatic navigation that combines a reward-generating algorithm based on Reinforcement Learning (RL) with the learning intelligence of the animals. Given the graded electrical reward, the animal, e.g. the rat, seeks the maximum reward while exploring an unknown environment. Since the rat has excellent spatial recognition, the rat-robot and the RL algorithm can converge to an optimal route by co-learning. This work offers significant inspiration for the practical development of bio-robot navigation with hybrid intelligence.
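The reward-generating side of such a scheme can be sketched with tabular Q-learning on a toy corridor. Everything here is illustrative, not the authors' system: the corridor, the progress-proportional "graded" reward with an arrival bonus, and all hyperparameters are assumptions.

```python
import random

def train(episodes=500, alpha=0.5, gamma=0.9, eps=0.1, n=5, seed=1):
    """Tabular Q-learning on a 1-D corridor with a graded reward:
    progress toward the goal state n-1 earns a proportional reward,
    and arrival earns a bonus (a stand-in for graded stimulation)."""
    random.seed(seed)
    Q = {(s, a): 0.0 for s in range(n) for a in (-1, 1)}
    for _ in range(episodes):
        s = 0
        while s != n - 1:
            if random.random() < eps:                      # explore
                a = random.choice((-1, 1))
            else:                                          # exploit
                a = max((-1, 1), key=lambda x: Q[(s, x)])
            s2 = min(max(s + a, 0), n - 1)
            reward = (s2 - s) / (n - 1) + (1.0 if s2 == n - 1 else 0.0)
            best_next = max(Q[(s2, -1)], Q[(s2, 1)])
            Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])
            s = s2
    return Q

Q = train()
greedy = [max((-1, 1), key=lambda a: Q[(s, a)]) for s in range(4)]
```

Using a progress *increment* rather than a per-state reward matters: a reward paid for merely occupying a good state can be farmed by oscillating, whereas the increment form cancels on any back-and-forth, so the learned greedy policy heads straight for the goal.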

  11. Model-based automatic generation of grasping regions

    Science.gov (United States)

    Bloss, David A.

    1993-01-01

    The problem of automatically generating stable regions for a robotic end effector on a target object, given a model of the end effector and the object is discussed. In order to generate grasping regions, an initial valid grasp transformation from the end effector to the object is obtained based on form closure requirements, and appropriate rotational and translational symmetries are associated with that transformation in order to construct a valid, continuous grasping region. The main result of this algorithm is a list of specific, valid grasp transformations of the end effector to the target object, and the appropriate combinations of translational and rotational symmetries associated with each specific transformation in order to produce a continuous grasp region.

  12. HOW MANY HIPPOS (HOMHIP: ALGORITHM FOR AUTOMATIC COUNTS OF ANIMALS WITH INFRA-RED THERMAL IMAGERY FROM UAV

    Directory of Open Access Journals (Sweden)

    S. Lhoest

    2015-08-01

    Full Text Available The common hippopotamus (Hippopotamus amphibius L.) is among the animal species endangered by multiple human pressures. Monitoring of the species is therefore essential for conservation, and the development of census protocols has to be pursued. UAV technology is considered one of the new perspectives for wildlife surveys. Indeed, this technique has many advantages, but its main drawback is the generation of a huge amount of data to handle. This study aims at developing an algorithm for the automatic counting of hippos by exploiting thermal infrared aerial images acquired from a UAV. This attempt is the first known for automatic detection of this species. Images taken at several flight heights, ranging from 38 to 155 meters above ground level, can be used as inputs of the algorithm. A Graphical User Interface has been created in order to facilitate the use of the application. Three categories of animals have been defined according to their position in the water. The mean error of automatic counts compared with manual delineations is +2.3%, which shows that the estimation is unbiased. Those results show great promise for the use of the algorithm in population monitoring after some technical improvements and the elaboration of statistically robust inventory protocols.

  13. An Adaptive Sweep-Circle Spatial Clustering Algorithm Based on Gestalt

    Directory of Open Access Journals (Sweden)

    Qingming Zhan

    2017-08-01

    Full Text Available An adaptive spatial clustering (ASC) algorithm is proposed in the present study, which employs sweep-circle techniques and a dynamic threshold setting based on Gestalt theory to detect spatial clusters. The proposed algorithm can automatically discover clusters in one pass, rather than through the modification of an initial model (for example, a minimal spanning tree, Delaunay triangulation, or Voronoi diagram). It can quickly identify arbitrarily-shaped clusters while adapting efficiently to non-homogeneous density characteristics of spatial data, without the need for prior knowledge or parameters. The proposed algorithm is also well suited to data streaming technology, with dynamic characteristics flowing in the form of spatial clustering in large data sets.

  14. IMPLEMENTATION OF INCIDENT DETECTION ALGORITHM BASED ON FUZZY LOGIC IN PTV VISSIM

    Directory of Open Access Journals (Sweden)

    Andrey Borisovich Nikolaev

    2017-05-01

    Full Text Available Traffic incident management is a major challenge in the management of movement, requiring constant attention and significant investment, as well as fast and accurate solutions in order to re-establish normal traffic conditions. Automatic control methods are becoming an important factor for the reduction of traffic congestion caused by an arising incident. In this paper, an algorithm for automatic incident detection based on fuzzy logic is implemented in the software PTV VISSIM. Nine different tests were conducted on a two-lane road segment with varying traffic conditions: the location of the road accident and the traffic load. The main conclusion of the research is that the proposed incident detection algorithm demonstrates good performance in detection time and false alarm rate.
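A fuzzy-logic incident detector of this kind can be sketched with one rule over two traffic measurements. The membership shapes, variable ranges, and the single rule are invented for illustration; the paper's rule base in PTV VISSIM is richer.

```python
def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def incident_degree(speed_drop, occupancy_rise):
    """One illustrative rule: a large speed drop AND a large occupancy rise
    upstream of a point suggest an incident (fuzzy AND realised as min)."""
    high_drop = tri(speed_drop, 20, 60, 100)    # km/h, assumed range
    high_occ = tri(occupancy_rise, 10, 40, 70)  # percentage points, assumed
    return min(high_drop, high_occ)

alarm = incident_degree(speed_drop=55, occupancy_rise=35) > 0.5
```

Thresholding the rule's firing degree (here at 0.5) trades detection time against false alarms, which is exactly the performance pair the abstract evaluates.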

  15. An Efficient Approach to Mining Maximal Contiguous Frequent Patterns from Large DNA Sequence Databases

    Directory of Open Access Journals (Sweden)

    Md. Rezaul Karim

    2012-03-01

    Full Text Available Mining interesting patterns from DNA sequences is one of the most challenging tasks in bioinformatics and computational biology. Maximal contiguous frequent patterns are preferable for expressing the function and structure of DNA sequences and hence can capture the common data characteristics among related sequences. Biologists are interested in finding frequent orderly arrangements of motifs that are responsible for similar expression of a group of genes. In order to reduce mining time and complexity, however, most existing sequence mining algorithms either focus on finding short DNA sequences or require explicit specification of sequence lengths in advance. The challenge is to find longer sequences without specifying sequence lengths in advance. In this paper, we propose an efficient approach to mining maximal contiguous frequent patterns from large DNA sequence datasets. The experimental results show that our proposed approach is memory-efficient and mines maximal contiguous frequent patterns within a reasonable time.
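The notion of a maximal contiguous frequent pattern can be illustrated with a small level-wise miner: a (k+1)-mer can only be frequent if its k-prefix is, so candidates are grown one base at a time, and only patterns not contained in a longer frequent pattern are kept. This is a generic sketch of the concept, not the authors' memory-efficient algorithm.

```python
def frequent_contiguous(seqs, min_support):
    """Mine maximal contiguous patterns with sequence-count support."""
    frequent = set()
    current = {s[i:i + 1] for s in seqs for i in range(len(s))}
    while current:
        # Keep candidates occurring (as substrings) in enough sequences.
        current = {p for p in current
                   if sum(1 for s in seqs if p in s) >= min_support}
        frequent |= current
        # Grow surviving patterns by one base on the right.
        current = {p + b for p in current for b in "ACGT"}
    # Maximal = not contained in any longer frequent pattern.
    return {p for p in frequent
            if not any(p != q and p in q for q in frequent)}

patterns = frequent_contiguous(["ACGTAC", "TACGTT", "GACGTA"], min_support=3)
```

On these three toy sequences the miner returns {"ACGT", "TA"}: every shorter frequent substring (AC, CG, GT, ACG, CGT, and the single bases) is absorbed by a longer frequent one, which is why reporting only maximal patterns avoids a flood of redundant short motifs.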

  16. An automatic scaling method for obtaining the trace and parameters from oblique ionogram based on hybrid genetic algorithm

    Science.gov (United States)

    Song, Huan; Hu, Yaogai; Jiang, Chunhua; Zhou, Chen; Zhao, Zhengyu; Zou, Xianjian

    2016-12-01

    Scaling oblique ionograms plays an important role in obtaining the ionospheric structure at the midpoint of the oblique sounding path. The paper proposes an automatic scaling method to extract the trace and parameters of oblique ionograms based on a hybrid genetic algorithm (HGA). The 10 extracted parameters come from the F2 layer and the Es layer, such as maximum observation frequency, critical frequency, and virtual height. The method adopts a quasi-parabolic (QP) model to describe the F2 layer's electron density profile, which is used to synthesize the trace. It utilizes the secant theorem, Martyn's equivalent path theorem, image processing technology, and the echoes' characteristics to determine best-fit values for seven parameters and initial values for the remaining three QP-model parameters, which define the search spaces required as input by the HGA. The HGA then searches for the three parameters' best-fit values within these spaces based on the fitness between the synthesized trace and the real trace. In order to verify the performance of the method, 240 oblique ionograms were scaled and their results compared with manual scaling results and the inversion results of the corresponding vertical ionograms. The comparison shows that the scaling results are accurate or at least adequate 60-90% of the time.

  17. A test sheet generating algorithm based on intelligent genetic algorithm and hierarchical planning

    Science.gov (United States)

    Gu, Peipei; Niu, Zhendong; Chen, Xuting; Chen, Wei

    2013-03-01

    In recent years, computer-based testing has become an effective method to evaluate students' overall learning progress so that appropriate guiding strategies can be recommended. Research has been done to develop intelligent test assembling systems which can automatically generate test sheets based on given parameters of test items. A good multisubject test sheet depends on not only the quality of the test items but also the construction of the sheet. Effective and efficient construction of test sheets according to multiple subjects and criteria is a challenging problem. In this paper, a multi-subject test sheet generation problem is formulated and a test sheet generating approach based on intelligent genetic algorithm and hierarchical planning (GAHP) is proposed to tackle this problem. The proposed approach utilizes hierarchical planning to simplify the multi-subject testing problem and adopts genetic algorithm to process the layered criteria, enabling the construction of good test sheets according to multiple test item requirements. Experiments are conducted and the results show that the proposed approach is capable of effectively generating multi-subject test sheets that meet specified requirements and achieve good performance.
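A genetic-algorithm core for test-sheet assembly can be sketched as below. This is a minimal stand-in, not the GAHP system: the item pool, the single "match a target total difficulty" criterion, and all GA settings are invented, whereas GAHP layers multiple subjects and criteria through hierarchical planning.

```python
import random

def assemble_sheet(difficulties, target, pop=30, gens=60, seed=7):
    """GA sketch: choose a subset of test items whose total difficulty
    is as close as possible to the target value."""
    random.seed(seed)
    n = len(difficulties)

    def fit(ind):  # higher is better; 0 means an exact match
        return -abs(sum(d for d, g in zip(difficulties, ind) if g) - target)

    population = [[random.randint(0, 1) for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=fit, reverse=True)
        parents = population[:pop // 2]            # elitist selection
        children = []
        while len(parents) + len(children) < pop:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n)           # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.2:              # bit-flip mutation
                child[random.randrange(n)] ^= 1
            children.append(child)
        population = parents + children
    return max(population, key=fit)

diffs = [3, 5, 2, 8, 4, 6, 1]
best = assemble_sheet(diffs, target=10)
```

In a multi-criteria setting the fitness would combine several such terms (coverage, discrimination, exposure), which is where the hierarchical planning layer of the paper's approach comes in.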

  18. TU-H-CAMPUS-JeP1-02: Fully Automatic Verification of Automatically Contoured Normal Tissues in the Head and Neck

    Energy Technology Data Exchange (ETDEWEB)

    McCarroll, R [UT MD Anderson Cancer Center, Houston, TX (United States); UT Health Science Center, Graduate School of Biomedical Sciences, Houston, TX (United States); Beadle, B; Yang, J; Zhang, L; Kisling, K; Balter, P; Stingo, F; Nelson, C; Followill, D; Court, L [UT MD Anderson Cancer Center, Houston, TX (United States); Mejia, M [University of Santo Tomas Hospital, Manila, Metro Manila (Philippines)

    2016-06-15

    Purpose: To investigate and validate the use of an independent deformable-based contouring algorithm for automatic verification of auto-contoured structures in the head and neck towards fully automated treatment planning. Methods: Two independent automatic contouring algorithms [(1) Eclipse’s Smart Segmentation followed by pixel-wise majority voting, (2) an in-house multi-atlas based method] were used to create contours of 6 normal structures of 10 head-and-neck patients. After rating by a radiation oncologist, the higher performing algorithm was selected as the primary contouring method, the other used for automatic verification of the primary. To determine the ability of the verification algorithm to detect incorrect contours, contours from the primary method were shifted from 0.5 to 2cm. Using a logit model, the structure-specific minimum detectable shift was identified. The models were then applied to a set of twenty different patients and the sensitivity and specificity of the models verified. Results: Per physician rating, the multi-atlas method (4.8/5 point scale, with 3 rated as generally acceptable for planning purposes) was selected as primary and the Eclipse-based method (3.5/5) for verification. Mean distance to agreement and true positive rate were selected as covariates in an optimized logit model. These models, when applied to a group of twenty different patients, indicated that shifts could be detected at 0.5cm (brain), 0.75cm (mandible, cord), 1cm (brainstem, cochlea), or 1.25cm (parotid), with sensitivity and specificity greater than 0.95. If sensitivity and specificity constraints are reduced to 0.9, the detectable shifts of mandible and brainstem were reduced by 0.25cm. These shifts represent additional safety margins which might be considered if auto-contours are used for automatic treatment planning without physician review. Conclusion: Automatically contoured structures can be automatically verified. This fully automated process could be used to

  19. Earthquake—explosion discrimination using genetic algorithm-based boosting approach

    Science.gov (United States)

    Orlic, Niksa; Loncaric, Sven

    2010-02-01

    An important and challenging problem in seismic data processing is to discriminate between natural seismic events such as earthquakes and artificial seismic events such as explosions. Many automatic techniques for seismogram classification have been proposed in the literature. Most of these methods take a similar approach to seismogram classification: a predefined set of features based on ad-hoc feature selection criteria is extracted from the seismogram waveform or spectral data, and these features are used for signal classification. In this paper we propose a novel approach for seismogram classification. A specially formulated genetic algorithm has been employed to automatically search for a near-optimal seismogram feature set, instead of using ad-hoc feature selection criteria. A boosting method is added to the genetic algorithm when searching for multiple features in order to improve classification performance. A learning set of seismogram data is used by the genetic algorithm to discover a near-optimal feature set. The feature set identified by the genetic algorithm is then used for seismogram classification. The described method is developed to classify seismograms into two groups, and a brief overview of the method's extension to multiple-group classification is given. For method verification, a learning set consisting of 40 local earthquake seismograms and 40 explosion seismograms was used. The method was validated on a seismogram set consisting of 60 local earthquake seismograms and 60 explosion seismograms, with a correct classification rate of 85%.

  20. Optimal gravitational search algorithm for automatic generation control of interconnected power systems

    Directory of Open Access Journals (Sweden)

    Rabindra Kumar Sahu

    2014-09-01

    Full Text Available An attempt is made for the effective application of the Gravitational Search Algorithm (GSA) to optimize PI/PIDF controller parameters in Automatic Generation Control (AGC) of interconnected power systems. Initially, comparison of several conventional objective functions reveals that ITAE yields better system performance. Then, the parameters of the GSA technique are properly tuned and the GSA control parameters are proposed. The superiority of the proposed approach is demonstrated by comparing the results of some recently published techniques such as Differential Evolution (DE), Bacteria Foraging Optimization Algorithm (BFOA) and Genetic Algorithm (GA). Additionally, sensitivity analysis is carried out that demonstrates the robustness of the optimized controller parameters to wide variations in operating loading conditions and in the time constants of the speed governor, turbine, and tie-line power. Finally, the proposed approach is extended to a more realistic power system model by considering physical constraints such as a reheat turbine, Generation Rate Constraint (GRC) and Governor Dead Band nonlinearity.

  1. A Multi-Scale Settlement Matching Algorithm Based on ARG

    Directory of Open Access Journals (Sweden)

    H. Yue

    2016-06-01

    Full Text Available Homonymous entity matching is an important part of multi-source spatial data integration, automatic updating and change detection. Considering the low accuracy of existing matching methods in dealing with matching multi-scale settlement data, an algorithm based on Attributed Relational Graph (ARG) is proposed. The algorithm firstly divides two settlement scenes at different scales into blocks by small-scale road network and constructs local ARGs in each block. Then, it ascertains candidate sets by merging procedures and obtains the optimal matching pairs by comparing the similarity of ARGs iteratively. Finally, the corresponding relations between settlements at large and small scales are identified. At the end of this article, a demonstration is presented and the results indicate that the proposed algorithm is capable of handling sophisticated cases.

  2. Method for prefetching non-contiguous data structures

    Science.gov (United States)

    Blumrich, Matthias A [Ridgefield, CT; Chen, Dong [Croton On Hudson, NY; Coteus, Paul W [Yorktown Heights, NY; Gara, Alan G [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Heidelberger, Philip [Cortlandt Manor, NY; Hoenicke, Dirk [Ossining, NY; Ohmacht, Martin [Brewster, NY; Steinmacher-Burow, Burkhard D [Mount Kisco, NY; Takken, Todd E [Mount Kisco, NY; Vranas, Pavlos M [Bedford Hills, NY

    2009-05-05

    A low latency memory system access is provided in association with a weakly-ordered multiprocessor system. Each processor in the multiprocessor shares resources, and each shared resource has an associated lock within a locking device that provides support for synchronization between the multiple processors in the multiprocessor and the orderly sharing of the resources. A processor only has permission to access a resource when it owns the lock associated with that resource, and an attempt by a processor to own a lock requires only a single load operation, rather than a traditional atomic load followed by store, such that the processor only performs a read operation and the hardware locking device performs the subsequent write operation rather than the processor. A simple prefetching scheme for non-contiguous data structures is also disclosed. A memory line is redefined so that, in addition to the normal physical memory data, every line includes a pointer that is large enough to point to any other line in the memory, wherein the pointers determine which memory line to prefetch rather than some other predictive algorithm. This enables hardware to effectively prefetch memory access patterns that are non-contiguous but repetitive.
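The pointer-per-line prefetching scheme can be modelled in a few lines: every memory line carries, besides its data, the address of the line to fetch next, so a non-contiguous but repetitive traversal always finds its next line already prefetched. This is a toy software model of the hardware idea, with a one-line prefetch buffer and invented addresses.

```python
class Memory:
    """Toy model: each line stores (data, next_address); a load returns the
    data and whether the line was already in the prefetch buffer, then the
    'hardware' follows the embedded pointer to prefetch the next line."""
    def __init__(self):
        self.lines = {}          # address -> (data, next_address)
        self.prefetched = None   # address currently in the prefetch buffer

    def store(self, addr, data, next_addr):
        self.lines[addr] = (data, next_addr)

    def load(self, addr):
        hit = self.prefetched == addr
        data, next_addr = self.lines[addr]
        self.prefetched = next_addr      # follow the embedded pointer
        return data, hit

mem = Memory()
# A linked, non-contiguous traversal order: 0x10 -> 0x80 -> 0x24 -> 0x10.
for addr, nxt in [(0x10, 0x80), (0x80, 0x24), (0x24, 0x10)]:
    mem.store(addr, f"line@{addr:#x}", nxt)
_, hit1 = mem.load(0x10)   # first touch: prefetch buffer empty, a miss
_, hit2 = mem.load(0x80)   # pointer-prefetched by the previous load: a hit
```

No access-pattern predictor is needed: the data structure itself encodes which line comes next, which is exactly why the scheme works for repetitive non-contiguous patterns.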

  3. A deep convolutional neural network-based automatic delineation strategy for multiple brain metastases stereotactic radiosurgery.

    Directory of Open Access Journals (Sweden)

    Yan Liu

    Full Text Available Accurate and automatic brain metastases target delineation is a key step for efficient and effective stereotactic radiosurgery (SRS) treatment planning. In this work, we developed a deep learning convolutional neural network (CNN) algorithm for segmenting brain metastases on contrast-enhanced T1-weighted magnetic resonance imaging (MRI) datasets. We integrated the CNN-based algorithm into an automatic brain metastases segmentation workflow and validated it on both Multimodal Brain Tumor Image Segmentation challenge (BRATS) data and clinical patients' data. Validation on BRATS data yielded average DICE coefficients (DCs) of 0.75±0.07 in the tumor core and 0.81±0.04 in the enhancing tumor, which outperformed most techniques in the 2015 BRATS challenge. Segmentation results of patient cases showed an average DC of 0.67±0.03 and achieved an area under the receiver operating characteristic curve of 0.98±0.01. The developed automatic segmentation strategy surpasses current benchmark levels and offers a promising tool for SRS treatment planning for multiple brain metastases.

  4. Automatic Fault Recognition of Photovoltaic Modules Based on Statistical Analysis of Uav Thermography

    Science.gov (United States)

    Kim, D.; Youn, J.; Kim, C.

    2017-08-01

    As a malfunctioning PV (Photovoltaic) cell has a higher temperature than adjacent normal cells, we can detect it easily with a thermal infrared sensor. However, it would be time-consuming to inspect large-scale PV power plants with a hand-held thermal infrared sensor. This paper presents an algorithm for automatically detecting defective PV panels using images captured with a thermal imaging camera from a UAV (unmanned aerial vehicle). The proposed algorithm uses statistical analysis of thermal intensity (surface temperature) characteristics of each PV module, taking the mean intensity and standard deviation of each panel as parameters for fault diagnosis. One of the characteristics of thermal infrared imaging is that the larger the distance between sensor and target, the lower the measured temperature of the object. Consequently, a global detection rule using the mean intensity of all panels in the fault detection algorithm is not applicable. Therefore, a local detection rule based on the mean intensity and standard deviation range was developed to detect defective PV modules from individual arrays automatically. The performance of the proposed algorithm was tested on three sample images; this verified a detection accuracy of defective panels of 97% or higher. In addition, as the proposed algorithm can adjust the range of threshold values for judging malfunction at the array level, the local detection rule is considered better suited for highly sensitive fault detection compared to a global detection rule.
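The local detection rule can be sketched as a per-array outlier test: within one array, flag any module whose mean thermal intensity deviates from the array mean by more than a multiple of the array's standard deviation, so the distance-dependent global temperature offset cancels out. The temperatures and the deviation factor below are invented for illustration.

```python
from statistics import mean, stdev

def faulty_modules(array_intensities, k=1.5):
    """Local detection rule: flag modules whose mean thermal intensity
    deviates from this array's mean by more than k standard deviations."""
    mu = mean(array_intensities)
    sigma = stdev(array_intensities)
    return [i for i, v in enumerate(array_intensities)
            if sigma > 0 and abs(v - mu) > k * sigma]

# One array of module surface temperatures (°C); module 3 runs hot.
hot = faulty_modules([31.2, 30.8, 31.0, 39.5, 30.9, 31.1])
```

Because the statistics are computed per array, the same rule works at any flight height: a uniformly cooler-looking distant array still exposes its hot module as a local outlier.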

  5. AUTOMATIC FAULT RECOGNITION OF PHOTOVOLTAIC MODULES BASED ON STATISTICAL ANALYSIS OF UAV THERMOGRAPHY

    Directory of Open Access Journals (Sweden)

    D. Kim

    2017-08-01

    Full Text Available As a malfunctioning PV (Photovoltaic) cell has a higher temperature than adjacent normal cells, we can detect it easily with a thermal infrared sensor. However, it would be time-consuming to inspect large-scale PV power plants with a hand-held thermal infrared sensor. This paper presents an algorithm for automatically detecting defective PV panels using images captured with a thermal imaging camera from a UAV (unmanned aerial vehicle). The proposed algorithm uses statistical analysis of thermal intensity (surface temperature) characteristics of each PV module, taking the mean intensity and standard deviation of each panel as parameters for fault diagnosis. One of the characteristics of thermal infrared imaging is that the larger the distance between sensor and target, the lower the measured temperature of the object. Consequently, a global detection rule using the mean intensity of all panels in the fault detection algorithm is not applicable. Therefore, a local detection rule based on the mean intensity and standard deviation range was developed to detect defective PV modules from individual arrays automatically. The performance of the proposed algorithm was tested on three sample images; this verified a detection accuracy of defective panels of 97% or higher. In addition, as the proposed algorithm can adjust the range of threshold values for judging malfunction at the array level, the local detection rule is considered better suited for highly sensitive fault detection compared to a global detection rule.

  6. Towards automatic music transcription: note extraction based on independent subspace analysis

    Science.gov (United States)

    Wellhausen, Jens; Hoynck, Michael

    2005-01-01

    Due to the increasing amount of music available electronically, the need for automatic search, retrieval and classification systems for music becomes more and more important. In this paper, an algorithm for automatic transcription of polyphonic piano music into MIDI data is presented, which is a very interesting basis for database applications, music analysis and music classification. The first part of the algorithm performs a note-accurate temporal audio segmentation. In the second part, the resulting segments are examined using Independent Subspace Analysis to extract sounding notes. Finally, the results are used to build a MIDI file as a new representation of the piece of music under examination.

  7. Computationally Efficient Automatic Coast Mode Target Tracking Based on Occlusion Awareness in Infrared Images.

    Science.gov (United States)

    Kim, Sohyun; Jang, Gwang-Il; Kim, Sungho; Kim, Junmo

    2018-03-27

    This paper proposes automatic coast mode tracking of centroid trackers for infrared images to overcome target occlusion. The centroid tracking method, using only the brightness information of an image, is still widely used in infrared imaging tracking systems because it is difficult to extract meaningful features from infrared images. However, centroid trackers are likely to lose the track because they are highly vulnerable to being screened by clutter or background. Coast mode, one of the tracking modes, maintains the servo slew rate at the tracking rate held right before the loss of track. The proposed automatic coast mode tracking method decides on entering coast mode by predicting target occlusion, and tries to re-lock the target and resume tracking after the blind time. The algorithm comprises three steps. The first step predicts occlusion by checking both objects with target-like brightness and objects that may screen the target despite differing brightness. The second step issues inertial tracking commands to the servo. The last step re-locks the target based on target modeling of the histogram ratio. The effectiveness of the proposed algorithm is addressed by presenting experimental results based on computer simulation with various test imagery sequences, compared to published tracking algorithms. The proposed algorithm is tested under a real environment with a naval electro-optical tracking system (EOTS) and an airborne EO/IR system.
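The centroid-tracking primitive this record builds on (an intensity-weighted mean position over pixels above a brightness threshold) can be sketched as below; the frame and threshold are invented. Returning no centroid when nothing exceeds the threshold is precisely the situation in which a tracker would fall back to coast mode.

```python
def brightness_centroid(image, threshold):
    """Centroid tracking step: intensity-weighted mean (x, y) of all pixels
    at or above threshold; None signals a lost track (coast-mode candidate)."""
    total = sx = sy = 0.0
    for y, row in enumerate(image):
        for x, v in enumerate(row):
            if v >= threshold:
                total += v
                sx += v * x
                sy += v * y
    if total == 0:
        return None
    return sx / total, sy / total

frame = [[0, 0, 0, 0],
         [0, 9, 9, 0],
         [0, 9, 9, 0],
         [0, 0, 0, 0]]
cx, cy = brightness_centroid(frame, threshold=5)
```

Because the tracker uses only brightness, any clutter above the threshold pulls the centroid off target, which is why the paper's occlusion prediction and coast-mode hand-off matter.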

  8. Computationally Efficient Automatic Coast Mode Target Tracking Based on Occlusion Awareness in Infrared Images

    Directory of Open Access Journals (Sweden)

    Sohyun Kim

    2018-03-01

    Full Text Available This paper proposes automatic coast mode tracking for centroid trackers in infrared images to overcome target occlusion. The centroid tracking method, which uses only the brightness information of an image, is still widely used in infrared imaging tracking systems because it is difficult to extract meaningful features from infrared images. However, centroid trackers are likely to lose track because they are highly vulnerable to being screened by clutter or background. Coast mode, one of the tracking modes, maintains the servo slew rate at the tracking rate held right before the loss of track. The proposed automatic coast mode tracking method decides to enter coast mode by predicting target occlusion, and tries to re-lock the target and resume tracking after the blind time. The algorithm comprises three steps. The first step predicts occlusion by checking both objects that have target-like brightness and objects that may screen the target despite having different brightness. The second step generates inertial tracking commands for the servo. The last step re-locks the target based on histogram-ratio target modeling. The effectiveness of the proposed algorithm is demonstrated by experimental results based on computer simulation with various test imagery sequences, compared to published tracking algorithms. The proposed algorithm is also tested in a real environment with a naval electro-optical tracking system (EOTS) and an airborne EO/IR system.

  9. Multi-atlas-based automatic 3D segmentation for prostate brachytherapy in transrectal ultrasound images

    Science.gov (United States)

    Nouranian, Saman; Mahdavi, S. Sara; Spadinger, Ingrid; Morris, William J.; Salcudean, S. E.; Abolmaesumi, P.

    2013-03-01

    One of the commonly used treatment methods for early-stage prostate cancer is brachytherapy. The standard of care for planning this procedure is segmentation of contours from transrectal ultrasound (TRUS) images, which closely follow the prostate boundary. This process is currently performed either manually or using semi-automatic techniques. This paper introduces a fully automatic segmentation algorithm which uses a priori knowledge of contours in a reference data set of TRUS volumes. A non-parametric deformable registration method is employed to transform the atlas prostate contours to the target image coordinates. All atlas images are sorted based on their registration results, and the highest-ranked registration results are selected for decision fusion. A Simultaneous Truth and Performance Level Estimation (STAPLE) algorithm is utilized to fuse labels from the registered atlases and produce a segmented target volume. In this experiment, 50 patient TRUS volumes are obtained and a leave-one-out study on the TRUS volumes is reported. We also compare our results with a state-of-the-art semi-automatic prostate segmentation method that has been clinically used for planning prostate brachytherapy procedures, and we show comparable accuracy and precision within a clinically acceptable runtime.
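    The rank-then-fuse step can be sketched as follows. Note the hedge: the paper fuses labels with STAPLE, whereas this toy uses a plain majority vote over the top-ranked registered atlas masks as a simplified stand-in; the ranking scores are assumed inputs.

```python
import numpy as np

def fuse_labels(atlas_masks, ranks, top_k=3):
    """Select the top_k registered atlas masks by registration score and
    fuse them with a pixel-wise majority vote (simplified stand-in for
    the STAPLE fusion used in the paper)."""
    order = np.argsort(ranks)[::-1][:top_k]          # best scores first
    selected = np.stack([atlas_masks[i] for i in order])
    return (selected.mean(axis=0) >= 0.5).astype(np.uint8)
```

    With real data, `atlas_masks` would be the deformably registered atlas label volumes and `ranks` a registration-quality metric such as post-registration image similarity.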

  10. An automatic algorithm for detecting stent endothelialization from volumetric optical coherence tomography datasets

    Energy Technology Data Exchange (ETDEWEB)

    Bonnema, Garret T; Barton, Jennifer K [College of Optical Sciences, University of Arizona, Tucson, AZ (United States); Cardinal, Kristen O'Halloran [Biomedical and General Engineering, California Polytechnic State University (United States); Williams, Stuart K [Cardiovascular Innovation Institute, University of Louisville, Louisville, KY 40292 (United States)], E-mail: barton@u.arizona.edu

    2008-06-21

    Recent research has suggested that endothelialization of vascular stents is crucial to reducing the risk of late stent thrombosis. With a resolution of approximately 10 µm, optical coherence tomography (OCT) may be an appropriate imaging modality for visualizing the vascular response to a stent and measuring the percentage of struts covered with an anti-thrombogenic cellular lining. We developed an image analysis program to locate covered and uncovered stent struts in OCT images of tissue-engineered blood vessels. The struts were found by exploiting the highly reflective and shadowing characteristics of the metallic stent material. Coverage was evaluated by comparing the luminal surface with the depth of the strut reflection. Strut coverage calculations were compared to manual assessment of OCT images and epi-fluorescence analysis of the stented grafts. Based on the manual assessment, the strut identification algorithm operated with a sensitivity of 93% and a specificity of 99%. The strut coverage algorithm was 81% sensitive and 96% specific. The present study indicates that the program can automatically determine percent cellular coverage from volumetric OCT datasets of blood vessel mimics. The program could potentially be extended to assessments of stent endothelialization in native stented arteries.

  11. Utilization of a genetic algorithm for the automatic detection of oil spill from RADARSAT-2 SAR satellite data

    International Nuclear Information System (INIS)

    Marghany, Maged

    2014-01-01

    Highlights: • An oil platform located 70 km from the coast of Louisiana sank on Thursday. • Oil spill has backscatter values of −25 dB in RADARSAT-2 SAR. • Oil spill is portrayed in SCNB mode by shallower incidence angle. • Ideal detection of oil spills in SAR images requires moderate wind speeds. • The genetic algorithm is an excellent tool for automatic detection of oil spill in RADARSAT-2 SAR data. - Abstract: In this work, a genetic algorithm is applied for the automatic detection of oil spills. The procedure is implemented using sequences from RADARSAT-2 SAR ScanSAR Narrow single-beam data acquired in the Gulf of Mexico. The study demonstrates that the implementation of crossover allows for the generation of an accurate oil spill pattern. This conclusion is confirmed by the receiver-operating characteristic (ROC) curve. The ROC curve indicates that the existence of oil slick footprints can be identified using the area between the ROC curve and the no-discrimination line of 90%, which is greater than that of other surrounding environmental features. In conclusion, the genetic algorithm can be used as a tool for the automatic detection of oil spills, and the ScanSAR Narrow single-beam mode serves as an excellent sensor for oil spill detection and survey.
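    The ROC evaluation used above can be reproduced in miniature: the area under the ROC curve equals the Mann-Whitney U statistic over detector scores. This is a generic sketch of that identity, not tied to the SAR data in the record.

```python
def roc_auc(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the probability that a positive (oil-pixel) score outranks a
    negative (background) score, counting ties as one half."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))
```

    An AUC of 0.90, as reported in the abstract, means a randomly chosen oil-slick score beats a randomly chosen background score 90% of the time.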

  12. 46 CFR 154.174 - Transverse contiguous hull structure.

    Science.gov (United States)

    2010-10-01

    ... 46 Shipping 5 2010-10-01 2010-10-01 false Transverse contiguous hull structure. 154.174 Section... Equipment Hull Structure § 154.174 Transverse contiguous hull structure. (a) The transverse contiguous hull...) The transverse contiguous hull structure of a vessel having cargo containment systems with secondary...

  13. Algorithms for the automatic identification of MARFEs and UFOs in JET database of visible camera videos

    International Nuclear Information System (INIS)

    Murari, A.; Camplani, M.; Cannas, B.; Usai, P.; Mazon, D.; Delaunay, F.

    2010-01-01

    MARFE instabilities and UFOs leave clear signatures in JET fast visible camera videos. Given the potentially harmful consequences of these events, particularly as triggers of disruptions, it would be important to have the means of detecting them automatically. In this paper, the results of various algorithms for automatically identifying MARFEs and UFOs in JET visible videos are reported. The objective is to retrieve the videos which have captured these events by exploring the whole JET database of images, as a preliminary step to the development of real-time identifiers in the future. For the detection of MARFEs, a complete identifier has been finalized, using morphological operators and Hu moments. The final algorithm manages to identify the videos with MARFEs with a success rate exceeding 80%. Due to the lack of complete statistics of examples, the UFO identifier is less developed, but a preliminary code can detect UFOs quite reliably. (authors)

  14. Cloud-Based Evaluation of Anatomical Structure Segmentation and Landmark Detection Algorithms : VISCERAL Anatomy Benchmarks

    OpenAIRE

    Jimenez-del-Toro, Oscar; Muller, Henning; Krenn, Markus; Gruenberg, Katharina; Taha, Abdel Aziz; Winterstein, Marianne; Eggel, Ivan; Foncubierta-Rodriguez, Antonio; Goksel, Orcun; Jakab, Andres; Kontokotsios, Georgios; Langs, Georg; Menze, Bjoern H.; Fernandez, Tomas Salas; Schaer, Roger

    2016-01-01

    Variations in the shape and appearance of anatomical structures in medical images are often relevant radiological signs of disease. Automatic tools can help automate parts of this manual process. A cloud-based evaluation framework is presented in this paper including results of benchmarking current state-of-the-art medical imaging algorithms for anatomical structure segmentation and landmark detection: the VISCERAL Anatomy benchmarks. The algorithms are implemented in virtual machines in the ...

  15. Algorithm of Defect Segmentation for AFP Based on Prepregs

    Directory of Open Access Journals (Sweden)

    CAI Zhiqiang

    2017-04-01

    Full Text Available In order to ensure the performance of automated fiber placement (AFP) formed parts, and exploiting the homogeneity of the prepreg surface image along the fiber direction, a defect segmentation algorithm combining gray compensation and subtraction, based on image processing technology, is proposed. The gray compensation matrix of the image is used to compensate the gray image, and the maximum error point of the image matrix is eliminated according to the characteristic that the gray error obeys a normal distribution. A standard image is established, using the allowed deviation coefficient K as the criterion for subtraction segmentation. Experiments show that the algorithm segments the two typical laying defects, bubbles and foreign objects, quickly and effectively, and provides a good theoretical basis for realizing automatic online monitoring of laying defects.
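    The compensate-subtract-threshold idea can be sketched in a few lines. This is a toy under stated assumptions: row-wise mean compensation stands in for the paper's gray compensation matrix, and a k-sigma test on the difference image stands in for the deviation coefficient K criterion.

```python
import numpy as np

def segment_defects(image, reference, k=3.0):
    """Flag defect pixels: compensate gray drift along the fiber
    direction (here: remove per-row means), subtract the standard
    (reference) image, and keep pixels deviating by more than k
    standard deviations of the difference image."""
    comp = image - image.mean(axis=1, keepdims=True) + reference.mean()
    diff = comp - reference
    return np.abs(diff) > k * diff.std()
```

    On prepreg imagery the reference would be the defect-free standard image the abstract describes, built after eliminating the maximum-error points.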

  16. Automatic Parking Based on a Bird's Eye View Vision System

    Directory of Open Access Journals (Sweden)

    Chunxiang Wang

    2014-03-01

    Full Text Available This paper aims at realizing an automatic parking method through a bird's eye view vision system. With this method, vehicles can perform robust, real-time detection and recognition of parking spaces. During the parking process, omnidirectional information about the environment is obtained using four on-board fisheye cameras around the vehicle, which are the main part of the bird's eye view vision system. To achieve this, a polynomial fisheye distortion model is first used for camera calibration. An image mosaicking method based on the Levenberg-Marquardt algorithm is used to combine the four individual fisheye camera images into one omnidirectional bird's eye view image. Secondly, features of the parking spaces are extracted with a Radon transform based method. Finally, double circular trajectory planning and a preview control strategy are utilized to realize autonomous parking. Experimental analysis shows that the proposed method achieves effective and robust real-time results in both parking space recognition and automatic parking.

  17. Fuzzy power control algorithm for a pressurized water reactor

    International Nuclear Information System (INIS)

    Hah, Y.J.; Lee, B.W.

    1994-01-01

    A fuzzy power control algorithm is presented for automatic reactor power control in a pressurized water reactor (PWR). Automatic power shape control is complicated by the use of control rods with a conventional proportional-integral-derivative controller because it is highly coupled with reactivity compensation. Thus, manual shape controls are usually employed, even for the limited capability needed for load-following operations including frequency control. In an attempt to achieve automatic power shape control without any design modifications to the core, a fuzzy power control algorithm is proposed. For the fuzzy control, the rule base is formulated for a multiple-input multiple-output system. The minimum operation rule and the center of area method are implemented for the development of the fuzzy algorithm. The fuzzy power control algorithm has been applied to Yonggwang Nuclear Unit 3. The simulation results show that fuzzy control can be adopted as a practical control strategy for automatic reactor power control of PWRs during load-following operations.
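    The two mechanisms named in the abstract, the minimum operation rule and the center of area (centroid) method, can be sketched generically. The triangular membership functions, the single-input rule table, and the discretized output universe are illustrative assumptions, not the plant's actual rule base.

```python
def triangle(x, a, b, c):
    """Triangular membership function with support (a, c) and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_rod_speed(power_error, rules, universe):
    """Mamdani inference: each rule's firing strength (its input
    membership) clips the output set via the minimum operation rule;
    the clipped sets are aggregated and defuzzified by center of area."""
    num = den = 0.0
    for in_mf, out_mf in rules:
        w = in_mf(power_error)          # rule firing strength
        for u in universe:
            m = min(w, out_mf(u))       # min rule: clip output set at w
            num += m * u
            den += m
    return num / den if den else 0.0
```

    With symmetric rules, zero power error defuzzifies to zero rod speed, and a positive error yields a positive command.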

  18. Automatic stair-climbing algorithm of the planetary wheel type mobile robot in nuclear facilities

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Byung Soo; Kim, Seung Ho; Lee, Jong Min [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1995-10-01

    A mobile robot, named KAEROT, has been developed for inspection and maintenance operations in nuclear facilities. The main feature of the locomotion system is the planetary wheel assembly with small wheels. This mechanism has been designed to be able to go over stairs and obstacles with stability. This paper presents the inverse kinematic solution that is to be operated by remote control. An automatic stair-climbing algorithm is also proposed. The proposed algorithm generates the moving paths of the small wheels and calculates the angular velocities of the 3 actuation wheels. The results of simulations and experiments are given for KAEROT performed on irregular stairs in the laboratory. It is shown that the proposed algorithm provides a lower inclination angle of the robot body and increases its stability during navigation. 14 figs., 16 refs. (Author).

  19. Automatic stair-climbing algorithm of the planetary wheel type mobile robot in nuclear facilities

    International Nuclear Information System (INIS)

    Kim, Byung Soo; Kim, Seung Ho; Lee, Jong Min

    1995-01-01

    A mobile robot, named KAEROT, has been developed for inspection and maintenance operations in nuclear facilities. The main feature of the locomotion system is the planetary wheel assembly with small wheels. This mechanism has been designed to be able to go over stairs and obstacles with stability. This paper presents the inverse kinematic solution that is to be operated by remote control. An automatic stair-climbing algorithm is also proposed. The proposed algorithm generates the moving paths of the small wheels and calculates the angular velocities of the 3 actuation wheels. The results of simulations and experiments are given for KAEROT performed on irregular stairs in the laboratory. It is shown that the proposed algorithm provides a lower inclination angle of the robot body and increases its stability during navigation. 14 figs., 16 refs. (Author)

  20. Automatically fused instructions: algorithms for the customization of the instruction set of a reconfigurable architecture

    NARCIS (Netherlands)

    Galuzzi, C.

    2009-01-01

    In this dissertation, we address the design of algorithms for the automatic identification and selection of complex application-specific instructions used to speed up the execution of applications on reconfigurable architectures. The computationally intensive portions of an application are analyzed and

  1. Comparison of the effects of model-based iterative reconstruction and filtered back projection algorithms on software measurements in pulmonary subsolid nodules.

    Science.gov (United States)

    Cohen, Julien G; Kim, Hyungjin; Park, Su Bin; van Ginneken, Bram; Ferretti, Gilbert R; Lee, Chang Hyun; Goo, Jin Mo; Park, Chang Min

    2017-08-01

    To evaluate the differences between filtered back projection (FBP) and model-based iterative reconstruction (MBIR) algorithms on semi-automatic measurements in subsolid nodules (SSNs). Unenhanced CT scans of 73 SSNs obtained using the same protocol and reconstructed with both FBP and MBIR algorithms were evaluated by two radiologists. Diameter, mean attenuation, mass and volume of whole nodules and their solid components were measured. Intra- and interobserver variability and differences between FBP and MBIR were then evaluated using the Bland-Altman method and Wilcoxon tests. Longest diameter, volume and mass of nodules and those of their solid components were significantly higher using MBIR (p < 0.05), indicating systematic differences between the two algorithms with respect to the diameter, volume and mass of nodules and their solid components. There were no significant differences in intra- or interobserver variability between FBP and MBIR (p > 0.05). Semi-automatic measurements of SSNs significantly differed between FBP and MBIR; however, the differences were within the range of measurement variability. • Intra- and interobserver reproducibility of measurements did not differ between FBP and MBIR. • Differences in SSNs' semi-automatic measurement induced by reconstruction algorithms were not clinically significant. • Semi-automatic measurement may be conducted regardless of reconstruction algorithm. • SSNs' semi-automated classification agreement (pure vs. part-solid) did not significantly differ between algorithms.
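    The Bland-Altman analysis used above for method agreement computes the mean difference (bias) and the 95% limits of agreement between two measurement methods; a minimal generic sketch:

```python
import statistics

def bland_altman_limits(a, b):
    """Bland-Altman agreement between paired measurements from two
    methods (e.g. volumes from FBP vs. MBIR reconstructions): returns
    the bias and the 95% limits of agreement (bias +/- 1.96 SD)."""
    diffs = [x - y for x, y in zip(a, b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```

    Differences between reconstruction algorithms are "within measurement variability" when the inter-method bias and limits fall inside the intra- or interobserver limits computed the same way.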

  2. Design of an optimal SMES for automatic generation control of two-area thermal power system using Cuckoo search algorithm

    Directory of Open Access Journals (Sweden)

    Sabita Chaine

    2015-05-01

    Full Text Available This work presents a methodology adopted to tune the controller parameters of a superconducting magnetic energy storage (SMES) system in the automatic generation control (AGC) of a two-area thermal power system. The gains of the integral controllers of the AGC loop, the proportional controller of the SMES loop, and the gains of the current feedback loop of the inductor in the SMES are optimized simultaneously in order to achieve the desired performance. The recently proposed intelligence-based Cuckoo search algorithm (CSA) is applied for the optimization. The sensitivity and robustness of the tuned gains, tested at different operating conditions, prove the effectiveness of fast-acting energy storage devices like SMES in damping out oscillations in a power system when their controllers are properly tuned.
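    A minimal Cuckoo-search sketch of the kind used here for gain tuning, under stated assumptions: `f` is the (hypothetical) control-performance cost over the gain vector, a Cauchy draw stands in for the usual Lévy-flight step, and a fraction `pa` of the worst nests is abandoned each generation.

```python
import random, math

def cuckoo_search(f, dim, n=15, iters=200, pa=0.25, bounds=(-5, 5)):
    """Minimize f over `dim` gains with a simplified Cuckoo search:
    heavy-tailed steps around the best nest propose new solutions,
    greedy replacement keeps improvements, worst nests are abandoned."""
    lo, hi = bounds
    nests = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    best = min(nests, key=f)
    for _ in range(iters):
        for i in range(n):
            # heavy-tailed (Cauchy) step in lieu of a true Levy flight
            step = [0.01 * (hi - lo) * math.tan(math.pi * (random.random() - 0.5))
                    for _ in range(dim)]
            cand = [min(hi, max(lo, b + s)) for b, s in zip(best, step)]
            if f(cand) < f(nests[i]):
                nests[i] = cand
        nests.sort(key=f)
        for i in range(int(n * (1 - pa)), n):   # abandon the worst nests
            nests[i] = [random.uniform(lo, hi) for _ in range(dim)]
        if f(nests[0]) < f(best):
            best = nests[0]
    return best
```

    For the AGC problem, `f` would run a simulation of the two-area system and return, e.g., an integral-of-squared-error index of the frequency and tie-line power deviations.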

  3. A Modified MinMax k-Means Algorithm Based on PSO.

    Science.gov (United States)

    Wang, Xiaoyan; Bai, Yanping

    The MinMax k-means algorithm is widely used to tackle the effect of bad initialization by minimizing the maximum intra-cluster error. Two parameters, the exponent parameter and the memory parameter, are involved in the executive process. Since different parameters lead to different clustering errors, it is crucial to choose appropriate parameters. In the original algorithm, a practical framework is given. Such a framework extends MinMax k-means to automatically adapt the exponent parameter to the data set. It has been believed that if the maximum exponent parameter has been set, then the program can reach the lowest intra-cluster error. However, our experiments show that this is not always correct. In this paper, we modify the MinMax k-means algorithm with PSO to determine the proper parameter values that allow the algorithm to attain the lowest clustering errors. The proposed clustering method is tested on some favorite data sets in several different initial situations and is compared to the k-means algorithm and the original MinMax k-means algorithm. The experimental results indicate that our proposed algorithm can reach the lowest clustering errors automatically.
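    The parameter search described here can be sketched with a plain particle swarm. This is a generic PSO sketch, not the paper's exact variant: `f` stands for the clustering error obtained by running MinMax k-means with a given exponent/memory parameter pair, which is an assumed interface.

```python
import random

def pso(f, dim, n=20, iters=100, w=0.7, c1=1.5, c2=1.5, bounds=(0, 1)):
    """Minimize f with canonical PSO: each particle is pulled toward
    its personal best and the global best with random weights."""
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=f)
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pbest[i]) < f(gbest):
                    gbest = pbest[i][:]
    return gbest
```

    In the clustering setting, each fitness evaluation is itself a full MinMax k-means run, so the particle count and iteration budget trade accuracy against runtime.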

  4. Automatic learning-based beam angle selection for thoracic IMRT

    International Nuclear Information System (INIS)

    Amit, Guy; Marshall, Andrea; Purdie, Thomas G.; Jaffray, David A.; Levinshtein, Alex; Hope, Andrew J.; Lindsay, Patricia; Pekar, Vladimir

    2015-01-01

    Purpose: The treatment of thoracic cancer using external beam radiation requires an optimal selection of the radiation beam directions to ensure effective coverage of the target volume and to avoid unnecessary treatment of normal healthy tissues. Intensity modulated radiation therapy (IMRT) planning is a lengthy process, which requires the planner to iterate between choosing beam angles, specifying dose–volume objectives and executing IMRT optimization. In thorax treatment planning, where there are no class solutions for beam placement, beam angle selection is performed manually, based on the planner’s clinical experience. The purpose of this work is to propose and study a computationally efficient framework that utilizes machine learning to automatically select treatment beam angles. Such a framework may be helpful for reducing the overall planning workload. Methods: The authors introduce an automated beam selection method, based on learning the relationships between beam angles and anatomical features. Using a large set of clinically approved IMRT plans, a random forest regression algorithm is trained to map a multitude of anatomical features into an individual beam score. An optimization scheme is then built to select and adjust the beam angles, considering the learned interbeam dependencies. The validity and quality of the automatically selected beams were evaluated using the manually selected beams from the corresponding clinical plans as the ground truth. Results: The analysis included 149 clinically approved thoracic IMRT plans. For a randomly selected test subset of 27 plans, IMRT plans were generated using automatically selected beams and compared to the clinical plans. The comparison of the predicted and the clinical beam angles demonstrated a good average correspondence between the two (angular distance 16.8° ± 10°, correlation 0.75 ± 0.2). The dose distributions of the semiautomatic and clinical plans were equivalent in terms of primary target volume

  5. Fully automatic algorithm for the analysis of vessels in the angiographic image of the eye fundus

    Directory of Open Access Journals (Sweden)

    Koprowski Robert

    2012-06-01

    Full Text Available Background: The available scientific literature contains descriptions of manual, semi-automated and automated methods for analysing angiographic images. The presented algorithms segment vessels, calculating their tortuosity or number in a given area. We describe a statistical analysis of the inclination of the vessels in the fundus as related to their distance from the center of the optic disc. Methods: The paper presents an automated method for analysing vessels found in angiographic images of the eye using a Matlab-implemented algorithm. It performs filtration and convolution operations with the suggested masks. The result is an image containing information on the location of vessels and their inclination angle in relation to the center of the optic disc. This is a new approach to the analysis of vessels whose usefulness has been confirmed in the diagnosis of hypertension. Results: The proposed algorithm analyzed and processed the images of the eye fundus using a classifier in the form of decision trees. It enabled the proper classification of healthy patients and those with hypertension. The result is a very good separation of healthy subjects from hypertensive ones: sensitivity 83%, specificity 100%, accuracy 96%. This confirms the practical usefulness of the proposed method. Conclusions: This paper presents an algorithm for the automatic analysis of morphological parameters of the fundus vessels. Such an analysis is performed during fluorescein angiography of the eye. The presented algorithm automatically calculates the global statistical features connected with both the tortuosity of vessels and their total area or number.

  6. RB Particle Filter Time Synchronization Algorithm Based on the DPM Model.

    Science.gov (United States)

    Guo, Chunsheng; Shen, Jia; Sun, Yao; Ying, Na

    2015-09-03

    Time synchronization is essential for node localization, target tracking, data fusion, and various other Wireless Sensor Network (WSN) applications. To improve the estimation accuracy of the continuous clock offset and skew of mobile nodes in WSNs, we propose a novel time synchronization algorithm, the Rao-Blackwellised (RB) particle filter time synchronization algorithm based on the Dirichlet process mixture (DPM) model. In a state-space equation with a linear substructure, the state variables are divided into linear and non-linear variables by the RB particle filter algorithm. These two sets of variables can be estimated using a Kalman filter and a particle filter, respectively, which improves the computational efficiency compared to using only the particle filter. In addition, the DPM model is used to describe the distribution of non-deterministic delays and to automatically adjust the number of Gaussian mixture model components based on the observational data. This improves the estimation accuracy of the clock offset and skew, which achieves the time synchronization. The time synchronization performance of this algorithm is also validated by computer simulations and experimental measurements. The results show that the proposed algorithm has a higher time synchronization precision than traditional time synchronization algorithms.
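    The linear substructure of the clock model, offset drifting at the skew rate, is exactly what a Kalman filter handles in a Rao-Blackwellised scheme. The sketch below covers only that linear part under assumed noise parameters; the paper's particle filter and DPM delay model for the non-linear part are not shown.

```python
import numpy as np

def clock_kalman(observations, dt, q=1e-6, r=1e-2):
    """Kalman filter for the linear clock substructure:
    state x = [offset, skew], offset_{k+1} = offset_k + skew_k * dt,
    with each observation measuring the offset."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # clock dynamics
    H = np.array([[1.0, 0.0]])              # offset is observed
    x = np.zeros(2)
    P = np.eye(2)
    Q = q * np.eye(2)
    R = np.array([[r]])
    for z in observations:
        x = F @ x                           # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R                 # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
        x = x + K @ (np.array([z]) - H @ x) # update
        P = (np.eye(2) - K @ H) @ P
    return x
```

    In the full RB particle filter, each particle carrying a non-linear delay hypothesis would run one such Kalman filter conditioned on that hypothesis.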

  7. Research on B Cell Algorithm for Learning to Rank Method Based on Parallel Strategy.

    Science.gov (United States)

    Tian, Yuling; Zhang, Hongxian

    2016-01-01

    For the purposes of information retrieval, users must find highly relevant documents from within a system (often a quite large one comprised of many individual documents) based on an input query. Ranking the documents according to their relevance within the system to meet user needs is a challenging endeavor and a hot research topic: there already exist several rank-learning methods based on machine learning techniques which can generate ranking functions automatically. This paper proposes a parallel B cell algorithm, RankBCA, for rank learning which utilizes a clonal selection mechanism based on biological immunity. The novel algorithm is compared with traditional rank-learning algorithms through experimentation and shown to outperform the others with respect to accuracy, learning time, and convergence rate; taken together, the experimental results show that the proposed algorithm effectively and rapidly identifies optimal ranking functions.

  8. [Affine transformation-based automatic registration for peripheral digital subtraction angiography (DSA)].

    Science.gov (United States)

    Kong, Gang; Dai, Dao-Qing; Zou, Lu-Min

    2008-07-01

    In order to remove the artifacts of peripheral digital subtraction angiography (DSA), an affine transformation-based automatic image registration algorithm is introduced. The whole process is described as follows: First, rectangular feature templates are constructed, centered on the Harris corners extracted in the mask, and the motion vectors of the central feature points are estimated using template matching with maximum histogram energy as the similarity measure. Then, the optimal parameters of the affine transformation are calculated with the matrix singular value decomposition (SVD) method. Finally, bilinear intensity interpolation is applied to the mask according to the resulting affine transformation. More than 30 peripheral DSA registrations were performed with the presented algorithm; as a result, motion artifacts were removed with sub-pixel precision, and the time consumption is low enough to satisfy clinical requirements. Experimental results show the efficiency and robustness of the algorithm.
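    The middle step, fitting affine parameters to the matched feature points, can be sketched as a linear least-squares problem. Here numpy's `lstsq` (itself SVD-based) stands in for the explicit SVD solution described in the abstract.

```python
import numpy as np

def estimate_affine(src, dst):
    """Least-squares 2x3 affine matrix [A|t] mapping src points to dst
    points, from the feature correspondences found by template matching."""
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    A = np.hstack([src, np.ones((len(src), 1))])  # rows [x, y, 1]
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)   # solves A @ M ~= dst
    return M.T

def apply_affine(M, pts):
    """Apply the 2x3 affine matrix to an array of 2-D points."""
    pts = np.asarray(pts, float)
    return pts @ M[:, :2].T + M[:, 2]
```

    With three or more non-collinear correspondences the fit is exact; extra matches are averaged in the least-squares sense, which suppresses individual matching errors.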

  9. Snake Model Based on Improved Genetic Algorithm in Fingerprint Image Segmentation

    Directory of Open Access Journals (Sweden)

    Mingying Zhang

    2016-12-01

    Full Text Available Automatic fingerprint identification technology is a quite mature research field in biometric identification technology. As the preprocessing step in fingerprint identification, fingerprint segmentation can improve the accuracy of fingerprint feature extraction and also reduce the time of fingerprint preprocessing, which is of great significance for improving the performance of the whole system. Based on an analysis of the commonly used methods of fingerprint segmentation, the existing segmentation algorithm is improved in this paper. The snake model is used to segment the fingerprint image, and it is improved by using the global optimization of an improved genetic algorithm. Experimental results show that the algorithm has obvious advantages both in the speed of image segmentation and in the segmentation effect.

  10. Class hierarchical test case generation algorithm based on expanded EMDPN model

    Institute of Scientific and Technical Information of China (English)

    LI Jun-yi; GONG Hong-fang; HU Ji-ping; ZOU Bei-ji; SUN Jia-guang

    2006-01-01

    A new model of event and message driven Petri network (EMDPN), based on the characteristics of class interaction for message passing between two objects, was extended. Using the EMDPN interaction graph, a class hierarchical test-case generation algorithm with cooperated paths (copaths) was proposed, which can be used to solve problems resulting from the class inheritance mechanism encountered in object-oriented software testing, such as oracle problems, message transfer errors, and unreachable statements. Finally, the testing sufficiency was analyzed with the ordered sequence testing criterion (OSC). The results indicate that the test cases generated by the newly proposed automatic copath-generation algorithm satisfy the synchronization message sequence testing criteria; therefore, the proposed copath-generation algorithm achieves a good coverage rate.

  11. Dicty_cDB: Contig-U09694-1 [Dicty_cDB

    Lifescience Database Archive (English)

    Full Text Available Contig-U09694-1 (gap included; contig length 1129; chromosome 1; contig updated 2002.9.13). Truncated record: the contig sequence and the CSM-cDNA homology (BLAST) search results for Contig-U09694-1 are omitted.

  12. Dicty_cDB: Contig-U12086-1 [Dicty_cDB

    Lifescience Database Archive (English)

    Full Text Available Contig-U12086-1 (gap included; contig length 1101; chromosome 3, chromosome length 6358359; contig updated 2002.12.18). Truncated record: the contig sequence and the CSM-cDNA homology (BLAST) search results for Contig-U12086-1 are omitted.

  13. Automatic Recognition of Chinese Personal Name Using Conditional Random Fields and Knowledge Base

    Directory of Open Access Journals (Sweden)

    Chuan Gu

    2015-01-01

    Full Text Available According to the features of Chinese personal names, we present an approach for Chinese personal name recognition based on conditional random fields (CRF) and a knowledge base in this paper. The method builds multiple features of the CRF model by adopting the Chinese character as the processing unit, selects useful features based on a selection algorithm over the knowledge base and an incremental feature template, and finally implements the automatic recognition of Chinese personal names from Chinese documents. The experimental results on an open real corpus demonstrate the effectiveness of our method, which obtains high accuracy and recall rates of recognition.

  14. Automatic detection of optic disc based on PCA and mathematical morphology.

    Science.gov (United States)

    Morales, Sandra; Naranjo, Valery; Angulo, Us; Alcaniz, Mariano

    2013-04-01

    The algorithm proposed in this paper automatically segments the optic disc from a fundus image. The goal is to facilitate the early detection of certain pathologies and to fully automate the process so as to avoid specialist intervention. The method proposed for extracting the optic disc contour is mainly based on mathematical morphology along with principal component analysis (PCA). It makes use of several operations: the generalized distance function (GDF), a variant of the watershed transformation (the stochastic watershed), and geodesic transformations. The input of the segmentation method is obtained through PCA, whose purpose is to produce the grey-scale image that best represents the original RGB image. The implemented algorithm has been validated on five public databases, obtaining promising results. The average values obtained (Jaccard and Dice coefficients of 0.8200 and 0.8932, respectively; an accuracy of 0.9947; and true positive and false positive fractions of 0.9275 and 0.0036) demonstrate that this method is a robust tool for automatic segmentation of the optic disc. Moreover, it is fairly reliable, since it works properly on databases with a large degree of variability and improves on the results of other state-of-the-art methods.
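
    The PCA step that produces the grey-scale input can be sketched as a projection of the RGB pixels onto their first principal component (a minimal illustration only; the morphological stages, GDF and stochastic watershed of the paper are not reproduced, and the rescaling to [0, 1] is an assumption for display):

```python
import numpy as np

def pca_grey(rgb):
    """Project RGB pixels onto their leading principal component to get a
    grey-scale image that preserves the most colour variance (sketch)."""
    h, w, _ = rgb.shape
    X = rgb.reshape(-1, 3).astype(float)
    X -= X.mean(axis=0)
    # Leading eigenvector of the 3x3 channel covariance matrix
    vals, vecs = np.linalg.eigh(np.cov(X, rowvar=False))
    grey = X @ vecs[:, np.argmax(vals)]
    # Rescale to [0, 1] for display
    grey = (grey - grey.min()) / (np.ptp(grey) + 1e-12)
    return grey.reshape(h, w)

rng = np.random.default_rng(0)
fundus = rng.integers(0, 256, size=(8, 8, 3))  # stand-in for a fundus image
grey = pca_grey(fundus)
```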

  15. Verification test for on-line diagnosis algorithm based on noise analysis

    International Nuclear Information System (INIS)

    Tamaoki, T.; Naito, N.; Tsunoda, T.; Sato, M.; Kameda, A.

    1980-01-01

    An on-line diagnosis algorithm was developed and its verification test was performed using a minicomputer. The algorithm identifies the plant state by analyzing various system noise patterns, such as power spectral densities and coherence functions, in a three-step procedure. Each observed noise pattern is examined using its distances from reference patterns prepared for various plant states. The plant state is then identified by synthesizing each result with an evaluation weight, which is determined automatically from the reference noise patterns prior to on-line diagnosis. The test was performed with 50 MW(th) steam generator noise data recorded under various controller parameter values. The algorithm performance was evaluated using a newly devised index. The results obtained with one kind of weight showed the algorithm's efficiency under proper selection of noise patterns; results for another kind of weight showed the robustness of the algorithm to this selection. (orig.)
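
    The pattern-distance identification step can be reduced to a nearest-reference classifier, sketched below (the reference patterns are toy data, and the evaluation weights of the paper's multi-step procedure are omitted for brevity):

```python
import math

def classify_state(pattern, references):
    """Pick the plant state whose reference noise pattern has the smallest
    Euclidean distance to the observed pattern (illustrative reduction)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(references, key=lambda state: dist(pattern, references[state]))

# Toy reference patterns, e.g. binned power spectral density values
refs = {"normal": [1.0, 0.5, 0.2], "anomaly": [2.0, 1.5, 1.0]}
state = classify_state([1.1, 0.4, 0.25], refs)
```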

  16. Development of antibiotic regimens using graph based evolutionary algorithms.

    Science.gov (United States)

    Corns, Steven M; Ashlock, Daniel A; Bryden, Kenneth M

    2013-12-01

    This paper examines the use of evolutionary algorithms in the development of antibiotic regimens given to production animals. A model is constructed that combines the lifespan of the animal and the bacteria living in the animal's gastro-intestinal tract from the early finishing stage until the animal reaches market weight. This model is used as the fitness evaluation for a set of graph based evolutionary algorithms to assess the impact of diversity control on the evolving antibiotic regimens. The graph based evolutionary algorithms have two objectives: to find an antibiotic treatment regimen that maintains the weight gain and health benefits of antibiotic use and to reduce the risk of spreading antibiotic resistant bacteria. This study examines different regimens of tylosin phosphate use on bacteria populations divided into Gram positive and Gram negative types, with a focus on Campylobacter spp. Treatment regimens were found that provided decreased antibiotic resistance relative to conventional methods while providing nearly the same benefits as conventional antibiotic regimes. By using a graph to control the information flow in the evolutionary algorithm, a variety of solutions along the Pareto front can be found automatically for this and other multi-objective problems. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  17. Automatic fuel lattice design in a boiling water reactor using a particle swarm optimization algorithm and local search

    International Nuclear Information System (INIS)

    Lin Chaung; Lin, Tung-Hsien

    2012-01-01

    Highlights: ► An automatic procedure was developed to design the radial enrichment and gadolinia (Gd) distribution of the fuel lattice. ► The method is based on a particle swarm optimization algorithm and local search. ► The design goal was to achieve the minimum local peaking factor. ► The number of fuel pins with Gd and the Gd concentration are fixed to reduce search complexity. ► Three axial sections are designed, and lattice performance is calculated using CASMO-4. - Abstract: A fuel assembly in a boiling water reactor (BWR) consists axially of five or six sections with different distributions, each of which requires a radial lattice design. In this study, an automatic procedure based on a particle swarm optimization (PSO) algorithm and local search was developed to design the radial enrichment and gadolinia (Gd) distribution of the fuel lattice. The design goals were to achieve the minimum local peaking factor (LPF) and to come as close as possible to the specified target average enrichment and target infinite multiplication factor (k∞), with the number of Gd-bearing fuel pins and the Gd concentration held fixed. Three axial sections are designed, and lattice performance is calculated using CASMO-4. Finally, the neutron cross-section library of the designed lattice is established by CMSLINK; the core status during depletion (thermal limits, cold shutdown margin and cycle length) is then calculated using SIMULATE-3 to confirm that the lattice design satisfies the design requirements.
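
    A minimal particle swarm optimizer illustrates the search engine behind such a procedure (a generic sketch minimizing a toy sphere function; the inertia and acceleration coefficients are common textbook values, and the paper's lattice-specific fitness, constraints and local search are not reproduced):

```python
import random

def pso(fitness, dim, n_particles=20, iters=100, lo=-5.0, hi=5.0):
    """Minimal particle swarm optimizer: minimize `fitness` over R^dim."""
    random.seed(0)
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [fitness(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Inertia plus pulls toward personal and global bests
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            f = fitness(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f

best, best_f = pso(lambda x: sum(v * v for v in x), dim=3)
```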

  18. Dicty_cDB: Contig-U13065-1 [Dicty_cDB

    Lifescience Database Archive (English)

    Full Text Available Contig-U13065-1 no gap 718 1 3561021 3561729 PLUS 1 1 U13065 0 0 0 0 0 0 0 0 0 1 0 0 0 0 Show Contig...-U13065-1 Contig ID Contig-U13065-1 Contig update 2002.12.18 Contig sequence >Contig-U13065-1 (Contig...-U13065-1Q) /CSM_Contig/Contig-U13065-1Q.Seq.d NNNNNNNNNNCAATCAAAGCAATCAATGGTAAATTAACTTTGTTACCATT ...TGATTCAACTCTCTCTG TTTCAAATTTACAACTTGCTTTAGATGAATCCTTTGAAGTTGATTTTGTA TTATATTAAAAATTATCA Gap no gap Contig...kny own update 2004. 6.10 Homology vs CSM-cDNA Query= Contig-U13065-1 (Contig-U13065-1Q) /CSM_Contig

  19. Dicty_cDB: Contig-U15058-1 [Dicty_cDB

    Lifescience Database Archive (English)

    Full Text Available Contig-U15058-1 no gap 1987 4 4423139 4424727 PLUS 2 4 U15058 0 0 0 0 0 0 0 0 1 0 1 0 0 0 Show Contig...-U15058-1 Contig ID Contig-U15058-1 Contig update 2004. 6.11 Contig sequence >Contig-U15058-1 (Contig...-U15058-1Q) /CSM_Contig/Contig-U15058-1Q.Seq.d AAAAAAGGTTACTCACAAAGTTAAAGAAATCAATGAAAGATTTACCACCC...ACTCAAGGGGGTAGGAGAATAAAATCAACCGATTATCCAGGCNTTAAG CGACCTTTTTCCCAAAAAAAAAAGATGTTCAGAAAAT Gap no gap Contig len...srx*atffpkkkdvq k own update 2004. 6.23 Homology vs CSM-cDNA Query= Contig-U15058-1 (Contig-U15058-1Q) /CSM_Contig/Contig

  20. Dicty_cDB: Contig-U09640-1 [Dicty_cDB

    Lifescience Database Archive (English)

    Full Text Available Contig-U09640-1 gap included 1368 2 219988 218635 MINUS 4 5 U09640 0 0 2 0 0 0 0 0 0 0 0 0 1 1 Show Contig...-U09640-1 Contig ID Contig-U09640-1 Contig update 2002. 9.13 Contig sequence >Contig-U09640-1 (Contig...-U09640-1Q) /CSM_Contig/Contig-U09640-1Q.Seq.d ACTGTTGGCCTACTGGNAAAAAATAGTGTAATAATAACCAACAAT...AACAACAACAACAAAAACAAAAACAAATTTTAATT AAATAAAATAATAATATAAAATATAATA Gap gap included Contig...ate 2004. 6.10 Homology vs CSM-cDNA Query= Contig-U09640-1 (Contig-U09640-1Q) /CSM_Contig/Contig

  1. Dicty_cDB: Contig-U14745-1 [Dicty_cDB

    Lifescience Database Archive (English)

    Full Text Available Contig-U14745-1 no gap 1780 6 3063854 3065579 PLUS 2 4 U14745 1 0 1 0 0 0 0 0 0 0 0 0 0 0 Show Contig...-U14745-1 Contig ID Contig-U14745-1 Contig update 2002.12.18 Contig sequence >Contig-U14745-1 (Contig...-U14745-1Q) /CSM_Contig/Contig-U14745-1Q.Seq.d GCGTCCGGACAATTTCAATAAAACAAATTTAAAAATAAATAATTTTTAAT...AATAAAATA ATTTAAATAAAAAAATATTTATTTTATTTTAAGATTAACAAAATAAAATA ATTTAAATAAAAAAATATTTATTTTAAAGA Gap no gap Contig...k*kniyfk own update 2004. 6.10 Homology vs CSM-cDNA Query= Contig-U14745-1 (Contig-U14745-1Q) /CSM_Contig/Contig

  2. Dicty_cDB: Contig-U03367-1 [Dicty_cDB

    Lifescience Database Archive (English)

    Full Text Available Contig-U03367-1 no gap 323 - - - - 2 1 U03367 0 0 0 0 0 0 0 0 0 0 0 0 1 1 Show Contig-U03367-1 Contig... ID Contig-U03367-1 Contig update 2001. 8.29 Contig sequence >Contig-U03367-1 (Contig-U03367-1Q) /CSM_Contig/Contig...TTGCGGGTTGGCAGGACTGTNGGNAGGCATGGNCATCGGTATNNTTGGAG ATGCTNGTGTGAGGGCGAATGCT Gap no gap Contig length 323 Chro...HLXXGLXCGLAGLXXGMXIGXXGDAXVRANA own update 2004. 6. 9 Homology vs CSM-cDNA Query= Contig-U03367-1 (Contig...-U03367-1Q) /CSM_Contig/Contig-U03367-1Q.Seq.d (323 letters) Database: CSM 6905 sequ

  3. Dicty_cDB: Contig-U16086-1 [Dicty_cDB

    Lifescience Database Archive (English)

    Full Text Available Contig-U16086-1 gap included 1018 - - - - 3 4 U16086 0 0 0 0 0 1 1 0 0 0 1 0 0 0 Show Contig-U16086-1 Contig... ID Contig-U16086-1 Contig update 2004. 6.11 Contig sequence >Contig-U16086-1 (Contig-U16086-1Q) /CSM_Contig.../Contig-U16086-1Q.Seq.d AATTTGATGAAGTAGTAGTAGAGGTAAAACATGTATCAAAACATTATAAG ATTGCAGG...ACTTGGATATAAATGAAG GTAGCTCATCAAATTTTTCAAATAATGATAATTTTAAATCGGTAGATCAA ATTACCAATGACCTTAGCCGTATTTTAT Gap gap included Contig...KSVDQI TNDLSRIL own update 2004. 6.23 Homology vs CSM-cDNA Query= Contig-U16086-1 (Contig-U16086-1Q) /CSM_Contig/Contig

  4. Dicty_cDB: Contig-U13737-1 [Dicty_cDB

    Lifescience Database Archive (English)

    Full Text Available Contig-U13737-1 no gap 672 6 1762420 1761754 MINUS 1 1 U13737 0 1 0 0 0 0 0 0 0 0 0 0 0 0 Show Contig...-U13737-1 Contig ID Contig-U13737-1 Contig update 2002.12.18 Contig sequence >Contig-U13737-1 (Contig...-U13737-1Q) /CSM_Contig/Contig-U13737-1Q.Seq.d NNNNNNNNNNAAAATTAGAAAATGGTACAATTGTTTTTAGAGATATTTCA...AGAATAGAAGGAAAATAT AGATCAATGGGGTGGCACAACA Gap no gap Contig length 672 Chromosome...gwhn own update 2004. 6.10 Homology vs CSM-cDNA Query= Contig-U13737-1 (Contig-U13737-1Q) /CSM_Contig/Contig

  5. Dicty_cDB: Contig-U06307-1 [Dicty_cDB

    Lifescience Database Archive (English)

    Full Text Available Contig-U06307-1 no gap 637 6 29174 29801 PLUS 4 5 U06307 4 0 0 0 0 0 0 0 0 0 0 0 0 0 Show Contig...-U06307-1 Contig ID Contig-U06307-1 Contig update 2002. 9.13 Contig sequence >Contig-U06307-1 (Contig...-U06307-1Q) /CSM_Contig/Contig-U06307-1Q.Seq.d CCCGCGTCCGAATGCCTCGTATTTTACACACTATGCTCCGTGTGGGTAAT TTAG...ATAGTATTTTTATTTTATT CTTTTTCTTTTAAAAATTTTTTATATTGTCAACAATATAATCAAATAAAT GTATTTAATTATCGGGTATTAAAAAAAAAAAAAAAAA Gap no gap Contig...own update 2004. 6.10 Homology vs CSM-cDNA Query= Contig-U06307-1 (Contig-U06307-1Q) /CSM_Contig/Contig-U063

  6. Dicty_cDB: Contig-U15541-1 [Dicty_cDB

    Lifescience Database Archive (English)

    Full Text Available Contig-U15541-1 gap included 2750 - - - - 634 1127 U15541 1 129 1 375 19 0 2 32 4 69 1 0 1 0 Show Contig...-U15541-1 Contig ID Contig-U15541-1 Contig update 2004. 6.11 Contig sequence >Contig-U15541-1 (Contig...-U15541-1Q) /CSM_Contig/Contig-U15541-1Q.Seq.d ATAATAAACGGTGAATACCTCGACTCCTAAATCGATGAAGACCGTAG...AAAAAT AAAAATAAAAATAAATAAATAATCATTTCATATTAATATTTTTTTTTATT TTTAAAAAAA Gap gap included Contig...ffyf*k own update 2004. 6.23 Homology vs CSM-cDNA Query= Contig-U15541-1 (Contig-U15541-1Q) /CSM_Contig/Contig

  7. Automatable algorithms to identify nonmedical opioid use using electronic data: a systematic review.

    Science.gov (United States)

    Canan, Chelsea; Polinski, Jennifer M; Alexander, G Caleb; Kowal, Mary K; Brennan, Troyen A; Shrank, William H

    2017-11-01

    Improved methods to identify nonmedical opioid use can help direct health care resources to individuals who need them. Automated algorithms that use large databases of electronic health care claims or records for surveillance are a potential means to achieve this goal. In this systematic review, we reviewed the utility, attempts at validation, and application of such algorithms to detect nonmedical opioid use. We searched PubMed and Embase for articles describing automatable algorithms that used electronic health care claims or records to identify patients or prescribers with likely nonmedical opioid use. We assessed algorithm development, validation, and performance characteristics and the settings where they were applied. Study variability precluded a meta-analysis. Of 15 included algorithms, 10 targeted patients, 2 targeted providers, 2 targeted both, and 1 identified medications with high abuse potential. Most patient-focused algorithms (67%) used prescription drug claims and/or medical claims, with diagnosis codes of substance abuse and/or dependence as the reference standard. Eleven algorithms were developed via regression modeling. Four used natural language processing, data mining, audit analysis, or factor analysis. Automated algorithms can facilitate population-level surveillance. However, there is no true gold standard for determining nonmedical opioid use. Users must recognize the implications of identifying false positives and, conversely, false negatives. Few algorithms have been applied in real-world settings. Automated algorithms may facilitate identification of patients and/or providers most likely to need more intensive screening and/or intervention for nonmedical opioid use. Additional implementation research in real-world settings would clarify their utility. © The Author 2017. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. 

  8. Automatic Multi-Level Thresholding Segmentation Based on Multi-Objective Optimization

    Directory of Open Access Journals (Sweden)

    L. DJEROU,

    2012-01-01

    Full Text Available In this paper, we present a new multi-level image thresholding technique, called Automatic Threshold based on Multi-objective Optimization (ATMO), that combines the flexibility of multi-objective fitness functions with the power of a binary particle swarm optimization algorithm (BPSO) to search for the optimum number of thresholds and, simultaneously, the optimal threshold values under three criteria: the between-class variance criterion, the minimum error criterion and the entropy criterion. Several test images are presented to compare our segmentation method, based on the multi-objective optimization approach, with Otsu's, Kapur's and Kittler's methods. Our experimental results show that the thresholding method based on multi-objective optimization is more efficient than the classical Otsu's, Kapur's and Kittler's methods.
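
    One of the three criteria, the between-class variance, is the classical Otsu objective. An exhaustive single-threshold version is sketched below (ATMO instead searches multiple thresholds with BPSO; the toy histogram is illustrative):

```python
def otsu_threshold(hist):
    """Return the threshold t maximizing between-class variance,
    assigning bins 0..t to the dark class (single-threshold Otsu)."""
    total = sum(hist)
    sum_all = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w0, sum0 = 0, 0.0
    for t in range(len(hist) - 1):
        w0 += hist[t]
        sum0 += t * hist[t]
        if w0 == 0:
            continue
        w1 = total - w0
        if w1 == 0:
            break
        m0, m1 = sum0 / w0, (sum_all - sum0) / w1
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# Toy bimodal histogram: dark peak around bin 2, bright peak around bin 7
hist = [0, 5, 20, 5, 0, 0, 4, 18, 6, 0]
t = otsu_threshold(hist)
```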

  9. Location-Based Self-Adaptive Routing Algorithm for Wireless Sensor Networks in Home Automation

    Directory of Open Access Journals (Sweden)

    Hong SeungHo

    2011-01-01

    Full Text Available The use of wireless sensor networks in home automation (WSNHA) is attractive due to their characteristics of self-organization, high sensing fidelity, low cost, and potential for rapid deployment. Although the AODVjr routing algorithm in IEEE 802.15.4/ZigBee and other routing algorithms have been designed for wireless sensor networks, not all are suitable for WSNHA. In this paper, we propose a location-based self-adaptive routing algorithm for WSNHA called WSNHA-LBAR. It confines route-discovery flooding to a cylindrical request zone, which reduces the routing overhead and mitigates broadcast-storm problems in the MAC layer. It also automatically adjusts the size of the request zone using a self-adaptive algorithm based on Bayes' theorem, which makes WSNHA-LBAR more adaptable to changes in the network state and easier to implement. Simulation results show improved network reliability as well as reduced routing overhead.
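
    The cylindrical request-zone test can be sketched as a point-to-segment distance check (pure geometry; the function name is illustrative and the Bayes-based adaptive radius of WSNHA-LBAR is not modelled here):

```python
import math

def in_request_zone(node, src, dst, radius):
    """True if `node` lies within `radius` of the src->dst segment,
    i.e. inside a 2-D cylindrical (capsule-shaped) request zone."""
    (nx, ny), (sx, sy), (dx, dy) = node, src, dst
    vx, vy = dx - sx, dy - sy
    seg_len2 = vx * vx + vy * vy
    if seg_len2 == 0:
        return math.hypot(nx - sx, ny - sy) <= radius
    # Project the node onto the segment, clamped to its endpoints
    t = max(0.0, min(1.0, ((nx - sx) * vx + (ny - sy) * vy) / seg_len2))
    px, py = sx + t * vx, sy + t * vy
    return math.hypot(nx - px, ny - py) <= radius
```

    A node forwards a route request only when this membership test passes, which is how flooding stays confined to the zone.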

  10. Automatic document navigation for digital content remastering

    Science.gov (United States)

    Lin, Xiaofan; Simske, Steven J.

    2003-12-01

    This paper presents a novel method of automatically adding navigation capabilities to re-mastered electronic books. We first analyze the need for a generic and robust system to automatically construct navigation links into re-mastered books. We then introduce the core algorithm based on text matching for building the links. The proposed method utilizes the tree-structured dictionary and directional graph of the table of contents to efficiently conduct the text matching. Information fusion further increases the robustness of the algorithm. The experimental results on the MIT Press digital library project are discussed and the key functional features of the system are illustrated. We have also investigated how the quality of the OCR engine affects the linking algorithm. In addition, the analogy between this work and Web link mining has been pointed out.
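
    The core text-matching step can be approximated with fuzzy string matching. The sketch below links table-of-contents entries to noisy OCR'd headings using Python's difflib (the sample data, the 0.8 cutoff and the `link_toc` helper are illustrative assumptions; the paper's tree- and graph-guided matching with information fusion is not modelled):

```python
import difflib

def link_toc(toc_entries, headings, cutoff=0.8):
    """Map each TOC entry to the index of its best-matching heading,
    tolerating OCR noise via fuzzy similarity."""
    links = {}
    for entry in toc_entries:
        best = difflib.get_close_matches(entry, headings, n=1, cutoff=cutoff)
        if best:
            links[entry] = headings.index(best[0])
    return links

toc = ["Chapter 1: Introduction", "Chapter 2: Methods"]
ocr = ["Chapter 1: Introductlon", "Chapter 2: Methods", "Appendix"]  # OCR noise
links = link_toc(toc, ocr)
```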

  11. An Approximate Approach to Automatic Kernel Selection.

    Science.gov (United States)

    Ding, Lizhong; Liao, Shizhong

    2016-02-02

    Kernel selection is a fundamental problem of kernel-based learning algorithms. In this paper, we propose an approximate approach to automatic kernel selection for regression from the perspective of kernel matrix approximation. We first introduce multilevel circulant matrices into automatic kernel selection, and develop two approximate kernel selection algorithms by exploiting the computational virtues of multilevel circulant matrices. The complexity of the proposed algorithms is quasi-linear in the number of data points. Then, we prove an approximation error bound to measure the effect of the approximation in kernel matrices by multilevel circulant matrices on the hypothesis and further show that the approximate hypothesis produced with multilevel circulant matrices converges to the accurate hypothesis produced with kernel matrices. Experimental evaluations on benchmark datasets demonstrate the effectiveness of approximate kernel selection.
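
    The computational virtue being exploited is that a circulant matrix-vector product reduces to FFTs, costing O(n log n) rather than O(n^2). A one-level sketch is shown below (the multilevel construction and the kernel selection criterion itself are beyond this snippet):

```python
import numpy as np

def circulant_matvec(c, x):
    """Multiply the circulant matrix with first column `c` by vector `x`
    via circular convolution in the Fourier domain: C x = ifft(fft(c)*fft(x))."""
    return np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)))

c = np.array([4.0, 1.0, 0.0, 1.0])   # first column of a symmetric circulant
x = np.array([1.0, 2.0, 3.0, 4.0])
y = circulant_matvec(c, x)            # same result as building C explicitly
```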

  12. Dicty_cDB: Contig-U06822-1 [Dicty_cDB

    Lifescience Database Archive (English)

    Full Text Available Contig-U06822-1 no gap 468 3 438742 439211 PLUS 1 1 U06822 1 0 0 0 0 0 0 0 0 0 0 0 0 0 Show Contig...-U06822-1 Contig ID Contig-U06822-1 Contig update 2001. 8.30 Contig sequence >Contig-U06822-1 (Contig...-U06822-1Q) /CSM_Contig/Contig-U06822-1Q.Seq.d ATATTATTCTATTCACTCGTAATAATACATATAAATTGATATCAATCAGA AA...TGCTATTAAGACTTTGGAGCAAAAAAC TAACAAATCAATTCAAAA Gap no gap Contig length 468 Chromosome number (1..6, M) 3 Ch...*mmlklkeikllvllrlwskkltnqfk own update 2004. 6.10 Homology vs CSM-cDNA Query= Contig-U06822-1 (Contig-U06822-1Q) /CSM_Contig/Contig

  13. Dicty_cDB: Contig-U01997-1 [Dicty_cDB

    Lifescience Database Archive (English)

    Full Text Available Contig-U01997-1 gap included 886 2 1683026 1682230 MINUS 3 4 U01997 1 0 0 0 0 0 2 0 0 0 0 0 0 0 Show Contig...-U01997-1 Contig ID Contig-U01997-1 Contig update 2001. 8.29 Contig sequence >Contig-U01997-1 (Contig-U01997-1Q) /CSM_Contig/Contig-U01997...ATTGAAATAATATTTATTTATTTTTTTAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA Gap gap included Contig...nfkvfgieiifiyffkkkkkkkkkkkkkkkkkkk own update 2004. 6. 9 Homology vs CSM-cDNA Query= Contig-U01997-1 (Contig-U01997-1Q) /CSM_Contig.../Contig-U01997-1Q.Seq.d (896 letters) Database: CSM 6905 sequences; 5,674,871 total l

  14. Dicty_cDB: Contig-U13254-1 [Dicty_cDB

    Lifescience Database Archive (English)

    Full Text Available Contig-U13254-1 no gap 575 5 203798 203233 MINUS 1 1 U13254 0 0 0 0 0 0 0 1 0 0 0 0 0 0 Show Contig...-U13254-1 Contig ID Contig-U13254-1 Contig update 2002.12.18 Contig sequence >Contig-U13254-1 (Contig...-U13254-1Q) /CSM_Contig/Contig-U13254-1Q.Seq.d AAATAATTTATTTAATTTTAAAATTAATAGATAAAAAGATGGAAATGATA A...CATTTTAACATTATTGGATAAT GTCAATGATTGGCCAANNNNNNNNN Gap no gap Contig length 575 Chromosome number (1..6, M) 5 ...2004. 6.10 Homology vs CSM-cDNA Query= Contig-U13254-1 (Contig-U13254-1Q) /CSM_Contig/Contig-U13254-1Q.Seq.d

  15. Dicty_cDB: Contig-U13891-1 [Dicty_cDB

    Lifescience Database Archive (English)

    Full Text Available Contig-U13891-1 no gap 1355 6 799802 798446 MINUS 4 4 U13891 0 0 0 0 1 1 1 0 0 0 1 0 0 0 Show Contig...-U13891-1 Contig ID Contig-U13891-1 Contig update 2002.12.18 Contig sequence >Contig-U13891-1 (Contig...-U13891-1Q) /CSM_Contig/Contig-U13891-1Q.Seq.d TTTTAAAATATTTCAAAATTAGCGAGCACGCATTCGCATATAAATATATT ...ACAAATAAAAAAAAAAAATAAAAAAAATA ATTTA Gap no gap Contig length 1355 Chromosome numb...own update 2004. 6.10 Homology vs CSM-cDNA Query= Contig-U13891-1 (Contig-U13891-1Q) /CSM_Contig/Contig-U138

  16. Dicty_cDB: Contig-U16093-1 [Dicty_cDB

    Lifescience Database Archive (English)

    Full Text Available Contig-U16093-1 gap included 1020 2 4899973 4899063 MINUS 29 31 U16093 7 0 0 0 0 2 ...18 0 0 0 0 0 2 0 Show Contig-U16093-1 Contig ID Contig-U16093-1 Contig update 2004. 6.11 Contig sequence >Contig-U16093-1 (Contig...-U16093-1Q) /CSM_Contig/Contig-U16093-1Q.Seq.d TTTTTTTTTTTTTTTTTAATTTTTTTTTTTCATAAAACTT...AAAATTAAATT Gap gap included Contig length 1020 Chromosome number (1..6, M) 2 Chr...pdate 2004. 6.23 Homology vs CSM-cDNA Query= Contig-U16093-1 (Contig-U16093-1Q) /CSM_Contig/Contig-U16093-1Q

  17. Dicty_cDB: Contig-U06384-1 [Dicty_cDB

    Lifescience Database Archive (English)

    Full Text Available Contig-U06384-1 no gap 660 5 3008439 3007779 MINUS 2 2 U06384 2 0 0 0 0 0 0 0 0 0 0 0 0 0 Show Contig...-U06384-1 Contig ID Contig-U06384-1 Contig update 2001. 8.30 Contig sequence >Contig-U06384-1 (Contig...-U06384-1Q) /CSM_Contig/Contig-U06384-1Q.Seq.d TGAAAAAATTAGAGACAACAAGTGGATCAGCACGTAAAGTATGGCGTTTA...AAATAAAAATTAATTTCC AAAAATAAAA Gap no gap Contig length 660 Chromosome number (1.....own update 2004. 6.10 Homology vs CSM-cDNA Query= Contig-U06384-1 (Contig-U06384-1Q) /CSM_Contig/Contig-U063

  18. Dicty_cDB: Contig-U12545-1 [Dicty_cDB

    Lifescience Database Archive (English)

    Full Text Available Contig-U12545-1 gap included 1165 3 3275272 3276395 PLUS 1 2 U12545 0 1 0 0 0 0 0 0 0 0 0 0 0 0 Show Contig...-U12545-1 Contig ID Contig-U12545-1 Contig update 2002.12.18 Contig sequence >Contig-U12545-1 (Contig-U12545-1Q) /CSM_Contig/Contig-U12545...CGTTCTAAATCACTCATTAAAAGATTAAAAATTAAANAAGGTAATATC TCACGACNGCTNNCTCATACACACN Gap gap included Contig length 11...vliknlskrkerkis*klyqlkriqlsl vknwlklvlnhslkd*klxkvishdxxliht own update 2004. 6.10 Homology vs CSM-cDNA Query= Contig...-U12545-1 (Contig-U12545-1Q) /CSM_Contig/Contig-U12545-1Q.Seq.d (1175 letters) Database: CSM 6905 s

  19. Dicty_cDB: Contig-U10823-1 [Dicty_cDB

    Lifescience Database Archive (English)

    Full Text Available Contig-U10823-1 gap included 1750 1 3559501 3561234 PLUS 85 124 U10823 0 5 0 30 1 0... 0 20 0 29 0 0 0 0 Show Contig-U10823-1 Contig ID Contig-U10823-1 Contig update 2002.12.18 Contig sequence >Contig-U10823-1 (Contig...-U10823-1Q) /CSM_Contig/Contig-U10823-1Q.Seq.d ACTGTTGGCCTACTGGTATTTTTGGTAGTGTGTTAAAA...CAACAAATAAAATTAAAATTA GTTATATTTTTTTTAAATTAAAAAAAAAAATAAAAAAAATAAATTATTTA TTAAATTTTT Gap gap included Contig ...4. 6.10 Homology vs CSM-cDNA Query= Contig-U10823-1 (Contig-U10823-1Q) /CSM_Contig/Contig-U10823-1Q.Seq.d (1

  20. 46 CFR 154.176 - Longitudinal contiguous hull structure.

    Science.gov (United States)

    2010-10-01

    ... 46 Shipping 5 2010-10-01 2010-10-01 false Longitudinal contiguous hull structure. 154.176 Section... Equipment Hull Structure § 154.176 Longitudinal contiguous hull structure. (a) The longitudinal contiguous hull structure of a vessel having cargo containment systems without secondary barriers must meet the...

  1. A Full-Body Layered Deformable Model for Automatic Model-Based Gait Recognition

    Science.gov (United States)

    Lu, Haiping; Plataniotis, Konstantinos N.; Venetsanopoulos, Anastasios N.

    2007-12-01

    This paper proposes a full-body layered deformable model (LDM) inspired by manually labeled silhouettes for automatic model-based gait recognition from part-level gait dynamics in monocular video sequences. The LDM is defined for the fronto-parallel gait with 22 parameters describing the human body part shapes (widths and lengths) and dynamics (positions and orientations). There are four layers in the LDM and the limbs are deformable. Algorithms for LDM-based human body pose recovery are then developed to estimate the LDM parameters from both manually labeled and automatically extracted silhouettes, where the automatic silhouette extraction is through a coarse-to-fine localization and extraction procedure. The estimated LDM parameters are used for model-based gait recognition by employing the dynamic time warping for matching and adopting the combination scheme in AdaBoost.M2. While the existing model-based gait recognition approaches focus primarily on the lower limbs, the estimated LDM parameters enable us to study full-body model-based gait recognition by utilizing the dynamics of the upper limbs, the shoulders and the head as well. In the experiments, the LDM-based gait recognition is tested on gait sequences with differences in shoe-type, surface, carrying condition and time. The results demonstrate that the recognition performance benefits from not only the lower limb dynamics, but also the dynamics of the upper limbs, the shoulders and the head. In addition, the LDM can serve as an analysis tool for studying factors affecting the gait under various conditions.
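
    The matching step is dynamic time warping; a textbook scalar-sequence version is sketched below (the paper applies DTW to sequences of 22-parameter LDM pose vectors rather than scalars, and combines the scores via AdaBoost.M2):

```python
def dtw(a, b, dist=lambda x, y: abs(x - y)):
    """Classic O(nm) dynamic time warping distance between two sequences."""
    INF = float("inf")
    n, m = len(a), len(b)
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = dist(a[i - 1], b[j - 1])
            # Extend the cheapest of the three admissible predecessors
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

# DTW absorbs the timing difference between these two gait-like signals
d = dtw([0, 1, 2, 1, 0], [0, 0, 1, 2, 1, 0])
```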

  2. Dicty_cDB: Contig-U08861-1 [Dicty_cDB

    Lifescience Database Archive (English)

    Full Text Available Contig-U08861-1 gap included 1295 5 2877914 2879217 PLUS 1 2 U08861 0 0 0 0 1 0 0 0 0 0 0 0 0 0 Show Contig...-U08861-1 Contig ID Contig-U08861-1 Contig update 2002. 9.13 Contig sequence >Contig-U08861-1 (Contig-U08861-1Q) /CSM_Contig/Contig-U08861...CACATTATAAAGTACCAAATAAGTTATTAATTTTAGAAAATA AATTCCAAAGAATGCAATGTCTAAAGTTAATAAAAAAGAATACTAAAATA TTTTC Gap gap included Contig...k**iwsryccnhcl*kkqkttnef*r i*nql*tkistl*stk*vinfrk*ipknamskvnkkey*nif own update 2004. 6.10 Homology vs CSM-cDNA Query= Contig...-U08861-1 (Contig-U08861-1Q) /CSM_Contig/Contig-U08861-1Q.Seq.d (1305 letters) Database: C

  3. Dicty_cDB: Contig-U06829-1 [Dicty_cDB

    Lifescience Database Archive (English)

    Full Text Available Contig-U06829-1 no gap 449 5 4394444 4394893 PLUS 1 1 U06829 1 0 0 0 0 0 0 0 0 0 0 0 0 0 Show Contig...-U06829-1 Contig ID Contig-U06829-1 Contig update 2001. 8.30 Contig sequence >Contig-U06829-1 (Contig...-U06829-1Q) /CSM_Contig/Contig-U06829-1Q.Seq.d GTAAAAGAATGTAATGAAAATGAAAAAATTAATTTTATAATAAAATTATT ...ATGATTTAGAATTGGTACAATTAGTTTA Gap no gap Contig length 449 Chromosome number (1..6, M) 5 Chromosome length 50...04. 6.10 Homology vs CSM-cDNA Query= Contig-U06829-1 (Contig-U06829-1Q) /CSM_Contig/Contig-U06829-1Q.Seq.d (

  4. Dicty_cDB: Contig-U12073-1 [Dicty_cDB

    Lifescience Database Archive (English)

    Full Text Available Contig-U12073-1 gap included 912 2 2118980 2119867 PLUS 4 5 U12073 0 0 0 2 0 0 0 1 0 1 0 0 0 0 Show Contig...-U12073-1 Contig ID Contig-U12073-1 Contig update 2002.12.18 Contig sequence >Contig-U12073-1 (Contig...-U12073-1Q) /CSM_Contig/Contig-U12073-1Q.Seq.d CTGTTGGCCTACTGGNAATTGAAACAATTGTTTCAGCAAATATTA...AAGA Gap gap included Contig length 912 Chromosome number (1..6, M) 2 Chromosome length 8467578 Start point ...GPXSXDY*r own update 2004. 6.10 Homology vs CSM-cDNA Query= Contig-U12073-1 (Contig-U12073-1Q) /CSM_Contig/Contig

  5. Dicty_cDB: Contig-U09615-1 [Dicty_cDB

    Lifescience Database Archive (English)

    Full Text Available Contig-U09615-1 gap included 1134 3 4459395 4458259 MINUS 1 2 U09615 0 0 1 0 0 0 0 0 0 0 0 0 0 0 Show Contig...-U09615-1 Contig ID Contig-U09615-1 Contig update 2002. 9.13 Contig sequence >Contig-U09615-1 (Contig-U09615-1Q) /CSM_Contig/Contig-U0961...TGCAAGATTAGAAAGATTAGAAAAAGATGCTATGCTAAAAATA Gap gap included Contig length 1134 Chromosome number (1..6, M) ...*wcnlyfrcre*emgkcn iefhiintrfkiwphrcidtighnvgicw**fnfecsfisleiqyrv**mgirfkyw*ww s*c*irpyfnnhafqyydyiwwskfwh*...4. 6.10 Homology vs CSM-cDNA Query= Contig-U09615-1 (Contig-U09615-1Q) /CSM_Contig/Contig-U09615-1Q.Seq.d (1

  6. Contiguity and quantum theory of measurement

    Energy Technology Data Exchange (ETDEWEB)

    Green, H.S. [Adelaide Univ., SA (Australia). Dept. of Mathematical Physics; Adelaide Univ., SA (Australia). Dept. of Physics]

    1995-12-31

    This paper presents a comprehensive treatment of the problem of measurement in microscopic physics, consistent with the indeterministic Copenhagen interpretation of quantum mechanics and information theory. It is pointed out that there are serious difficulties in reconciling the deterministic interpretations of quantum mechanics, based on the concepts of a universal wave function or hidden variables, with the principle of contiguity. Quantum mechanics is reformulated entirely in terms of observables, represented by matrices, including the statistical matrix, and the utility of information theory is illustrated by a discussion of the EPR paradox. The principle of contiguity is satisfied by all conserved quantities. A theory of the operation of macroscopic measuring devices is given in the interaction representation, and the attenuation of the indeterminacy of a microscopic observable in the process of measurement is related to observable changes of entropy. 28 refs.

  7. Contiguity and quantum theory of measurement

    International Nuclear Information System (INIS)

    Green, H.S.; Adelaide Univ., SA

    1995-01-01

    This paper presents a comprehensive treatment of the problem of measurement in microscopic physics, consistent with the indeterministic Copenhagen interpretation of quantum mechanics and information theory. It is pointed out that there are serious difficulties in reconciling the deterministic interpretations of quantum mechanics, based on the concepts of a universal wave function or hidden variables, with the principle of contiguity. Quantum mechanics is reformulated entirely in terms of observables, represented by matrices, including the statistical matrix, and the utility of information theory is illustrated by a discussion of the EPR paradox. The principle of contiguity is satisfied by all conserved quantities. A theory of the operation of macroscopic measuring devices is given in the interaction representation, and the attenuation of the indeterminacy of a microscopic observable in the process of measurement is related to observable changes of entropy. 28 refs

  8. Retrieval Algorithms for Road Surface Modelling Using Laser-Based Mobile Mapping

    Directory of Open Access Journals (Sweden)

    Antero Kukko

    2008-09-01

    Full Text Available Automated processing of the data provided by a laser-based mobile mapping system will be a necessity due to the huge amount of data produced. In the future, vehicle-based laser scanning, here called mobile mapping, should see considerable use for road environment modelling. Since the geometry of the scanning and the point density differ from airborne laser scanning, new algorithms are needed for information extraction. In this paper, we propose automatic methods for classifying road marking and kerbstone points and for modelling the road surface as a triangulated irregular network. On the basis of experimental tests, the mean classification accuracies obtained using the automatic method for lines, zebra crossings and kerbstones were 80.6%, 92.3% and 79.7%, respectively.
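
    As a toy illustration of kerbstone classification, points can be flagged where a cross-sectional height profile jumps; the rule, the 8 cm threshold and the `classify_kerb` helper below are simplifying assumptions, not the paper's actual classifier:

```python
def classify_kerb(profile, jump=0.08):
    """Return indices where consecutive heights (in metres) differ by more
    than `jump`, i.e. candidate kerbstone edges in a cross-section."""
    return [i for i in range(1, len(profile))
            if abs(profile[i] - profile[i - 1]) > jump]

# Flat road surface, a 12 cm kerb step, then raised pavement
profile = [0.00, 0.01, 0.00, 0.12, 0.13, 0.12]
kerb_idx = classify_kerb(profile)
```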

  9. The guitar chord-generating algorithm based on complex network

    Science.gov (United States)

    Ren, Tao; Wang, Yi-fan; Du, Dan; Liu, Miao-miao; Siddiqi, Awais

    2016-02-01

    This paper aims to generate chords for popular songs automatically based on complex networks. Firstly, according to the characteristics of guitar tablature, six chord networks are constructed from the popular songs of six pop singers and the properties of all networks are summarized. Analysis of these diverse chord networks reveals the accompaniment regularities and features with which chords can be generated automatically. Secondly, in line with the structure of popular songs, a two-tiered network containing a verse network and a chorus network is constructed, with which the verse and chorus can be composed separately using a random walk algorithm. Thirdly, the musical motif is taken into account when generating chords, so that poor chord progressions can be revised; this makes the accompaniment sound more melodious. Finally, a popular song is chosen for chord generation, and the newly generated accompaniment sounds better than the one written by the composer.
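    The weighted random walk over a chord-transition network can be sketched as below. The chord vocabulary and the edge weights here are illustrative assumptions, not data from the paper's tablature corpus.

```python
import random

# Hypothetical chord-transition network: edge weights stand in for
# co-occurrence counts harvested from guitar tablature.
CHORD_NETWORK = {
    "C":  {"G": 5, "Am": 3, "F": 4},
    "G":  {"C": 6, "Em": 2, "Am": 3},
    "Am": {"F": 4, "G": 3, "C": 2},
    "F":  {"C": 5, "G": 4},
    "Em": {"Am": 3, "C": 2},
}

def random_walk_chords(start, length, rng=random):
    """Generate a chord progression by a weighted random walk on the network."""
    progression = [start]
    current = start
    for _ in range(length - 1):
        neighbours = CHORD_NETWORK[current]
        # Pick the next chord with probability proportional to the edge weight.
        current = rng.choices(list(neighbours), weights=neighbours.values())[0]
        progression.append(current)
    return progression

print(random_walk_chords("C", 8))
```

In the paper's two-tiered scheme, a walk like this would be run separately on the verse network and the chorus network.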

  10. FORMATION OF THE SYNTHESIS ALGORITHMS OF THE COORDINATING CONTROL SYSTEMS BY MEANS OF THE AUTOMATIC GENERATION OF PETRI NETS

    Directory of Open Access Journals (Sweden)

    A. A. Gurskiy

    2016-09-01

    Full Text Available This article presents a coordinating control system for the drives of a robot manipulator. The purpose of the work is the development and study of new algorithms for the parametric synthesis of coordinating control systems. To achieve this aim, it is necessary to develop a system that generates the required parametric synthesis algorithms and performs the necessary procedures according to the generated algorithm. This work deals with the synthesis of Petri nets, and specifically with their automatic generation.

  11. Development and Comparative Study of Effects of Training Algorithms on Performance of Artificial Neural Network Based Analog and Digital Automatic Modulation Recognition

    Directory of Open Access Journals (Sweden)

    Jide Julius Popoola

    2015-11-01

    Full Text Available This paper proposes two new classifiers that automatically recognise twelve analog and digital modulated signals without any a priori knowledge of the modulation schemes or modulation parameters. The classifiers are developed using a pattern recognition approach. Feature keys extracted from the instantaneous amplitude, instantaneous phase and spectrum symmetry of the simulated signals are used as inputs to the artificial neural networks employed in developing the classifiers. The two classifiers are trained using the scaled conjugate gradient (SCG) and conjugate gradient (CONJGRAD) training algorithms. Sample results show good recognition performance, with an average overall recognition rate above 99.50% at signal-to-noise ratio (SNR) values of 0 dB and above for both training algorithms, and average overall recognition rates slightly above 99.00% and 96.40% at -5 dB SNR for the SCG and CONJGRAD algorithms, respectively. A comparative evaluation shows that the two training algorithms have different effects on both the response rate and the efficiency of the two artificial neural network classifiers. In addition, a comparison of the overall success recognition rates of the two classifiers developed in this study, which use a pattern recognition approach, with a classifier reported in the surveyed literature, which uses a decision-theoretic approach, shows that the classifiers developed here compare favourably in terms of accuracy and performance probability.

  12. Automatic two- and three-dimensional mesh generation based on fuzzy knowledge processing

    Science.gov (United States)

    Yagawa, G.; Yoshimura, S.; Soneda, N.; Nakao, K.

    1992-09-01

    This paper describes the development of a novel automatic FEM mesh generation algorithm based on the fuzzy knowledge processing technique. A number of local nodal patterns are stored in a nodal pattern database of the mesh generation system. These nodal patterns are determined a priori, based on theory or on the past experience of FEM analysis experts; for example, such experts can determine nodal patterns suitable for stress concentration analyses of cracks, corners, holes and so on. Each nodal pattern possesses a membership function and a procedure for node placement according to this function. For the nodal patterns of stress concentration regions, the membership function utilized in the fuzzy knowledge processing has two meanings, i.e. the “closeness” of a nodal location to each stress concentration field as well as the “nodal density”, reflecting the fact that a denser nodal pattern is required near a stress concentration field. All a user has to do in a practical mesh generation process is to choose several local nodal patterns and to designate the maximum nodal density of each pattern. After these simple operations, the system automatically places the chosen nodal patterns in the analysis domain and on its boundary, and connects them smoothly by the fuzzy knowledge processing technique. Triangular or tetrahedral elements are then generated by means of the advancing front method. The key feature of the present algorithm is the easy control of complex two- or three-dimensional nodal density distributions by means of the fuzzy knowledge processing technique. To demonstrate the fundamental performance of the algorithm, a prototype system was constructed in an object-oriented language, Smalltalk-80, on a 32-bit microcomputer, the Macintosh II, and the mesh generation of several two- and three-dimensional domains with cracks, holes and junctions is presented as an example.
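    The dual role of the membership function (closeness to a stress-concentration field and nodal density) can be sketched as below. The Gaussian form, the pattern parameters and the combination by fuzzy OR (max) are assumptions for illustration, not the system's actual functions.

```python
import math

def closeness(point, center, radius):
    """Fuzzy membership: 1 at the stress-concentration centre, decaying with distance."""
    d = math.dist(point, center)
    return math.exp(-(d / radius) ** 2)

def nodal_density(point, patterns):
    """Combine several local nodal patterns with fuzzy OR (max).

    Each pattern scales its membership by the user-designated maximum
    nodal density, so closeness and density are encoded in one function.
    """
    return max(p["max_density"] * closeness(point, p["center"], p["radius"])
               for p in patterns)

patterns = [  # hypothetical crack tip and hole
    {"center": (0.0, 0.0), "radius": 0.2, "max_density": 100.0},
    {"center": (1.0, 0.0), "radius": 0.5, "max_density": 40.0},
]
print(nodal_density((0.0, 0.0), patterns))  # densest at the crack tip
```

A mesh generator could then space nodes in inverse proportion to this density field before handing the nodes to the advancing front method.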

  13. Dicty_cDB: Contig-U04432-1 [Dicty_cDB

    Lifescience Database Archive (English)

    Full Text Available Contig-U04432-1 no gap 600 1 1520578 1521098 PLUS 1 1 U04432 0 0 0 0 0 0 1 0 0 0 0 0 0 0 Show Contig...-U04432-1 Contig ID Contig-U04432-1 Contig update 2001. 8.29 Contig sequence >Contig-U04432-1 (Contig...-U04432-1Q) /CSM_Contig/Contig-U04432-1Q.Seq.d AATTATAATCAAAACAAATTAATAAAAAAAATGATTAATAGTTTTGTCTC ...TCAACAATATGAAATTGCAAGAT TAAATGGTTATGATAATGCCCATAATTTACCAAGAGATATTAGTCAAATA Gap no gap Contig length 600 Chro...ni**fkgrnsnknyfsrymgtiessti*n ckikwl**cp*ftkry*sn own update 2004. 6.10 Homology vs CSM-cDNA Query= Contig-U04432-1 (Contig

  14. Solution to automatic generation control problem using firefly algorithm optimized I(λ)D(µ) controller.

    Science.gov (United States)

    Debbarma, Sanjoy; Saikia, Lalit Chandra; Sinha, Nidul

    2014-03-01

    The present work focuses on automatic generation control (AGC) of three unequal-area thermal systems considering reheat turbines and appropriate generation rate constraints (GRC). A fractional order (FO) controller, named the I(λ)D(µ) controller and based on the CRONE approximation, is proposed for the first time as an appropriate technique to solve the multi-area AGC problem in power systems. A recently developed metaheuristic known as the firefly algorithm (FA) is used for the simultaneous optimization of the gains and other parameters, such as the order of the integrator (λ) and of the differentiator (µ) of the I(λ)D(µ) controller and the governor speed regulation parameters (R). The dynamic responses corresponding to the optimized I(λ)D(µ) controller gains, λ, µ, and R are compared with those of classical integer order (IO) controllers such as the I, PI and PID controllers. Simulation results show that the proposed I(λ)D(µ) controller provides improved dynamic responses and outperforms the IO-based classical controllers. Further, sensitivity analysis confirms the robustness of the optimized I(λ)D(µ) controller to wide changes in system loading conditions and in the size and position of the SLP. The proposed controller also performs well, compared to the IO-based controllers, when the SLP takes place simultaneously in any two areas or in all areas. The robustness of the proposed I(λ)D(µ) controller is also tested against system parameter variations. © 2013 ISA. Published by Elsevier Ltd. All rights reserved.
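    A minimal sketch of the firefly algorithm on a toy quadratic objective, standing in for the cost function of the AGC tuning problem. The population size, attraction parameters and the objective itself are assumptions for illustration, not the paper's settings.

```python
import math
import random

def firefly_minimize(objective, bounds, n=15, iters=60,
                     beta0=1.0, gamma=0.01, alpha=0.2, seed=1):
    """Minimal firefly algorithm: dimmer fireflies move toward brighter ones.

    Here brightness is inversely related to the objective (lower cost =
    brighter), attraction decays with squared distance, and a small random
    step (alpha) keeps the search exploring.
    """
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n)]
    light = [objective(x) for x in pop]
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if light[j] < light[i]:  # j is brighter, so i moves toward j
                    r2 = sum((a - b) ** 2 for a, b in zip(pop[i], pop[j]))
                    beta = beta0 * math.exp(-gamma * r2)
                    pop[i] = [
                        min(max(xi + beta * (xj - xi)
                                + alpha * (rng.random() - 0.5), lo), hi)
                        for xi, xj, (lo, hi) in zip(pop[i], pop[j], bounds)
                    ]
                    light[i] = objective(pop[i])
    best = min(range(n), key=light.__getitem__)
    return pop[best], light[best]

# Toy stand-in for the controller-tuning cost; true optimum at (2, -1).
x, fx = firefly_minimize(lambda v: (v[0] - 2.0) ** 2 + (v[1] + 1.0) ** 2,
                         bounds=[(-5, 5), (-5, 5)])
print(x, fx)
```

In the paper's setting, the decision vector would hold the controller gains, λ, µ and R, and the objective would be a time-domain cost evaluated on the simulated three-area system.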

  15. Implementation of Automatic Clustering Algorithm and Fuzzy Time Series in Motorcycle Sales Forecasting

    Science.gov (United States)

    Rasim; Junaeti, E.; Wirantika, R.

    2018-01-01

    Accurate forecasting of product sales depends on the forecasting method used. The purpose of this research is to build a motorcycle sales forecasting application using the Fuzzy Time Series method combined with interval determination by an automatic clustering algorithm. Forecasting is done using motorcycle sales data from the last ten years, and the forecasting error is measured using the Mean Percentage Error (MPE) and Mean Absolute Percentage Error (MAPE). The one-year forecasts obtained in this study fall within good accuracy.
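    The two error measures are straightforward to compute; a minimal sketch with hypothetical sales figures (the numbers are illustrative, not the study's data):

```python
def mpe(actual, forecast):
    """Mean Percentage Error: signed, so it reveals systematic bias."""
    return 100.0 / len(actual) * sum((a - f) / a for a, f in zip(actual, forecast))

def mape(actual, forecast):
    """Mean Absolute Percentage Error: magnitude of error only."""
    return 100.0 / len(actual) * sum(abs(a - f) / a for a, f in zip(actual, forecast))

sales = [120, 135, 150, 160]       # hypothetical monthly motorcycle sales
predicted = [110, 140, 145, 170]   # hypothetical fuzzy-time-series forecasts
print(round(mpe(sales, predicted), 2), round(mape(sales, predicted), 2))  # 0.43 5.41
```

Note that MPE lets positive and negative errors cancel, which is why both measures are reported together.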

  16. Automatic segmentation of coronary angiograms based on fuzzy inferring and probabilistic tracking

    Directory of Open Access Journals (Sweden)

    Shoujun Zhou

    2010-08-01

    Full Text Available Abstract Background Segmentation of the coronary angiogram is important in computer-assisted artery motion analysis or in the reconstruction of 3D vascular structures from a single-plane or biplane angiographic system. Developing fully automated and accurate vessel segmentation algorithms is highly challenging, especially when extracting vascular structures with large variations in image intensity and noise, as well as with variable cross-sections or vascular lesions. Methods This paper presents a novel tracking method for automatic segmentation of the coronary artery tree in X-ray angiographic images, based on probabilistic vessel tracking and fuzzy structure pattern inferring. The method is composed of two main steps: preprocessing and tracking. In preprocessing, multiscale Gabor filtering and Hessian matrix analysis were used to enhance and extract vessel features from the original angiographic image, leading to a vessel feature map as well as a vessel direction map. In tracking, a seed point was first automatically detected by analyzing the vessel feature map. Subsequently, two operators, a probabilistic tracking operator (PTO) and a vessel structure pattern detector (SPD), worked together, starting from the detected seed point, to extract vessel segments or branches one at a time. The local structure pattern was inferred by a multi-feature fuzzy inferring function employed in the SPD. The identified structure pattern, such as a crossing or bifurcation, was used to control the tracking process, for example, to keep tracking the current segment or to start tracking a new one, depending on the detected pattern. Results By appropriate integration of these advanced preprocessing and tracking steps, our tracking algorithm is able to extract both vessel axis lines and edge points, as well as measure the arterial diameters in various complicated cases. For example, it can walk across gaps along the longitudinal vessel direction, manage varying vessel

  17. Dicty_cDB: Contig-U10291-1 [Dicty_cDB

    Lifescience Database Archive (English)

    Full Text Available Contig-U10291-1 no gap 932 4 3203354 3204286 PLUS 2 2 U10291 0 0 1 0 0 0 1 0 0 0 0 0 0 0 Show Contig...-U10291-1 Contig ID Contig-U10291-1 Contig update 2002. 9.13 Contig sequence >Contig-U10291-1 (Contig...-U10291-1Q) /CSM_Contig/Contig-U10291-1Q.Seq.d GTAAAGGTTTTATGTGTATATTTTTTAATGACCTTTTCGAATTAGTTTCA ...CAAAATAGATTAAATCTTAGTTACTCTCATGC TAATCAATATGTTGAGAGTTTTCCATCACAAATGTTATCAACAATTGCAA AATTCATTAGTTTCTTATTTGGTT...SLMYSL FNYIFDENGIIKSEFQDPTQRKRLSRGLSRRFMTIGILGLFTTPFIFFFLLINFFFEYAE ELKNRPGSLFSREWSPLARWEFRELNELPHYFQNRLNLSY

  18. Patent Keyword Extraction Algorithm Based on Distributed Representation for Patent Classification

    Directory of Open Access Journals (Sweden)

    Jie Hu

    2018-02-01

    Full Text Available Many text mining tasks, such as text retrieval, text summarization, and text comparison, depend on the extraction of representative keywords from the main text. Most existing keyword extraction algorithms are based on discrete bag-of-words representations of the text. In this paper, we propose a patent keyword extraction algorithm (PKEA) based on the distributed Skip-gram model for patent classification. We also develop a set of quantitative performance measures for keyword extraction evaluation based on information gain and on cross-validated Support Vector Machine (SVM) classification, which are valuable when human-annotated keywords are not available. We used a standard benchmark dataset and a homemade patent dataset to evaluate the performance of PKEA. Our patent dataset includes 2500 patents from five distinct technological fields related to autonomous cars (GPS systems, lidar systems, object recognition systems, radar systems, and vehicle control systems). We compared our method with Frequency, Term Frequency-Inverse Document Frequency (TF-IDF), TextRank and Rapid Automatic Keyword Extraction (RAKE). The experimental results show that our proposed algorithm provides a promising way to extract keywords from patent texts for patent classification.
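    The (centre, context) pair extraction that underlies Skip-gram training can be sketched as below; the window size and the sample sentence are arbitrary choices for illustration, not the paper's configuration.

```python
def skipgram_pairs(tokens, window=2):
    """Generate (centre, context) training pairs as in the Skip-gram model.

    Every token within `window` positions of the centre word becomes a
    context word; the model learns embeddings by predicting context
    from centre.
    """
    pairs = []
    for i, centre in enumerate(tokens):
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if j != i:
                pairs.append((centre, tokens[j]))
    return pairs

claim = "a lidar unit scans the road ahead of the vehicle".split()
pairs = skipgram_pairs(claim)
print(pairs[:3])  # [('a', 'lidar'), ('a', 'unit'), ('lidar', 'a')]
```

Training on such pairs yields the distributed word vectors from which PKEA-style methods score candidate keywords.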

  19. Automatic noninvasive measurement of systolic blood pressure using photoplethysmography

    Directory of Open Access Journals (Sweden)

    Glik Zehava

    2009-10-01

    Full Text Available Abstract Background Automatic measurement of arterial blood pressure is important, but the available commercial automatic blood pressure meters, mostly based on oscillometry, are of low accuracy. Methods In this study, we present a cuff-based technique for automatic measurement of systolic blood pressure, based on photoplethysmographic signals measured simultaneously in fingers of both hands. After inflating the pressure cuff at a relatively slow rate to a level above the systolic blood pressure, it is slowly deflated. The cuff pressure at which the photoplethysmographic signal reappears during deflation is taken as the systolic blood pressure. The algorithm for the detection of the photoplethysmographic signal involves: (1) determination of the time segments in which the photoplethysmographic signal distal to the cuff is expected to appear, utilizing the photoplethysmographic signal in the free hand, and (2) discrimination between random fluctuations and the photoplethysmographic pattern. The pulses detected in the time segments were identified as photoplethysmographic pulses if they met two criteria, based on the pulse waveform and on the correlation between the signal in each segment and the signal in the two neighboring segments. Results Comparison of the photoplethysmographic-based automatic technique to sphygmomanometry, the reference standard, shows that the standard deviation of their differences was 3.7 mmHg. For subjects with systolic blood pressure above 130 mmHg the standard deviation was even lower, 2.9 mmHg. These values are much lower than the 8 mmHg limit imposed by the AAMI standard for automatic blood pressure meters. Conclusion The photoplethysmographic-based technique for automatic measurement of systolic blood pressure, with the algorithm presented in this study, appears to be accurate.
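    The reappearance rule at the heart of the method reduces to a simple scan over the deflation record. This sketch assumes the per-segment pulse detection (waveform and neighbour-correlation criteria) has already been run; the pressures and the 122 mmHg threshold are hypothetical.

```python
def systolic_from_deflation(cuff_pressures, pulse_detected):
    """Return the cuff pressure at which the distal PPG pulse first reappears.

    cuff_pressures: pressures sampled during slow deflation (descending, mmHg)
    pulse_detected: for each sample, whether a pulse passing the waveform and
                    neighbour-correlation criteria was found in that segment
    """
    for pressure, seen in zip(cuff_pressures, pulse_detected):
        if seen:
            return pressure
    return None  # cuff never dropped below systolic pressure

# Hypothetical deflation from 180 mmHg; pulses reappear once cuff pressure
# drops to the systolic value (~122 mmHg here).
pressures = list(range(180, 80, -2))
detected = [p <= 122 for p in pressures]
print(systolic_from_deflation(pressures, detected))  # 122
```

The free-hand signal is what makes the per-segment detection reliable: it tells the algorithm when a pulse should arrive, so noise outside those windows can be ignored.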

  20. Optimal Design for PID Controller Based on DE Algorithm in Omnidirectional Mobile Robot

    Directory of Open Access Journals (Sweden)

    Wu Peizhang

    2017-01-01

    Full Text Available This paper introduces an omnidirectional mobile robot based on Mecanum wheels, used for conveying heavy loads in the confined space of an automatic warehousing logistics center. It then analyzes and establishes the inverse and forward kinematic models of the omnidirectional chassis. In order to improve the motion performance, the paper proposes an optimal PID controller based on the differential evolution algorithm. Finally, MATLAB simulation results show that the kinematic model of the mobile robot chassis is correct, and that the controller optimized by the DE algorithm performs better than a traditional Ziegler-Nichols-tuned PID controller. The optimized scheme is therefore reasonable and feasible and has value for engineering applications.
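    The inverse kinematics of a standard Mecanum chassis (45° rollers) can be sketched as below; the wheel radius and chassis dimensions are hypothetical values, not the paper's robot parameters.

```python
def mecanum_wheel_speeds(vx, vy, wz, lx=0.3, ly=0.25, r=0.05):
    """Inverse kinematics of a Mecanum chassis with 45-degree rollers.

    vx, vy: chassis linear velocity (m/s); wz: yaw rate (rad/s)
    lx, ly: half wheelbase and half track width (m); r: wheel radius (m)
    Returns the four wheel angular velocities (rad/s) in the order:
    front-left, front-right, rear-left, rear-right.
    """
    k = lx + ly
    return (
        (vx - vy - k * wz) / r,  # front-left
        (vx + vy + k * wz) / r,  # front-right
        (vx + vy - k * wz) / r,  # rear-left
        (vx - vy + k * wz) / r,  # rear-right
    )

# Pure sideways translation: diagonal wheel pairs spin in opposite senses.
print(mecanum_wheel_speeds(0.0, 0.5, 0.0))  # (-10.0, 10.0, 10.0, -10.0)
```

The forward model is the pseudo-inverse of this map, and a per-axis PID loop (here DE-tuned) closes the loop on vx, vy and wz.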

  1. Automated spike sorting algorithm based on Laplacian eigenmaps and k-means clustering.

    Science.gov (United States)

    Chah, E; Hok, V; Della-Chiesa, A; Miller, J J H; O'Mara, S M; Reilly, R B

    2011-02-01

    This study presents a new automatic spike sorting method based on feature extraction by Laplacian eigenmaps combined with k-means clustering. The performance of the proposed method was compared against previously reported algorithms such as principal component analysis (PCA) and amplitude-based feature extraction. Two types of classifier (namely k-means and classification expectation-maximization) were incorporated within the spike sorting algorithms in order to find a suitable classifier for the feature sets. Simulated data sets and in-vivo tetrode multichannel recordings were employed to assess the performance of the spike sorting algorithms. The results show that the proposed algorithm yields significantly improved performance, with a mean sorting accuracy of 73% and a sorting error of 10%, compared to PCA, which, combined with k-means, had a sorting accuracy of 58% and a sorting error of 10%. A correction was made to this article on 22 February 2011: the spacing of the title was amended on the abstract page. No changes were made to the article PDF and the print version was unaffected.
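    The clustering stage of such a pipeline can be sketched with a plain k-means over low-dimensional feature vectors, standing in for the Laplacian eigenmap embedding of the spike waveforms. The toy "units" below are hypothetical points in a 2-D embedding space.

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain k-means on low-dimensional feature vectors, e.g. the
    Laplacian-eigenmap embedding of extracellular spike waveforms."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # Assign each point to the nearest centroid (squared distance).
            i = min(range(k), key=lambda c: sum((a - b) ** 2
                                                for a, b in zip(p, centroids[c])))
            clusters[i].append(p)
        # Move each centroid to the mean of its cluster.
        centroids = [
            tuple(sum(dim) / len(cl) for dim in zip(*cl)) if cl else centroids[i]
            for i, cl in enumerate(clusters)
        ]
    return centroids, clusters

# Two well-separated toy "units" in a 2-D embedding space.
unit_a = [(0.0, 0.0), (0.1, 0.1), (-0.1, 0.0)]
unit_b = [(5.0, 5.0), (5.1, 4.9), (4.9, 5.1)]
centroids, clusters = kmeans(unit_a + unit_b, 2)
print(sorted(len(c) for c in clusters))  # [3, 3]
```

The point of the eigenmap step in the paper is that it produces an embedding in which such simple clustering separates units better than PCA features do.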

  2. Dicty_cDB: Contig-U16108-1 [Dicty_cDB

    Lifescience Database Archive (English)

    Full Text Available Contig-U16108-1 gap included 1456 4 1889609 1888449 MINUS 4 6 U16108 0 0 2 0 1 0 1 0 0 0 0 0 0 0 Show Contig...-U16108-1 Contig ID Contig-U16108-1 Contig update 2004. 6.11 Contig sequence >Contig-U16108-1 (Contig-U16108-1Q) /CSM_Contig/Contig-U1610...AAAATCA TAAAATCAAAAATTGTATAATTAAAATAAAAATAAAAAAAAAAACAAAAA TAAAAAAAAAAAACAA Gap gap included Contig length 1...DFLSQFYGELN QPSLNNLTENIITIDQSSFIPIGYTTITAGLNNFAYAYIPTSCKNDKSLCSIHVAFHGCL QTVATIGDNFYTKTGYNEIAETNNIIILYPQALET...---NYVNNDNIKTMFDIQSEHAFITNSFGNNCTYLGPDYINNCNFNAPWDFLSQFYGELN QPSLNNLTENIITIDQSSFIPIGYTTITAGLNNFAYAYIPTSCKNDKSLCSIHVAFHGCL QTVATIG

  3. Dicty_cDB: Contig-U13974-1 [Dicty_cDB

    Lifescience Database Archive (English)

    Full Text Available Contig-U13974-1 no gap 1782 1 1265322 1267105 PLUS 29 32 U13974 0 0 0 1 2 0 22 0 4 0 0 0 0 0 Show Contig...-U13974-1 Contig ID Contig-U13974-1 Contig update 2002.12.18 Contig sequence >Contig-U13974-1 (Contig...-U13974-1Q) /CSM_Contig/Contig-U13974-1Q.Seq.d AAGAGTTAAAACAAAAATAAAAAAATAAAATAAAAAAAAAAAATTAA...TAAAACAAATAA ACATTAAAATGATATTTAGGTTTTAAATTTAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA Gap no gap Contig...TTRKIYVYDNQNFFPIDNQGFD VDPAKRIYLNEKKTYHNYHFCMKMNTVFTYKGYEVFNFRGDDDVWVFINNKLVIDLGGLH SPIGTSVDTMTLGLTIG

  4. Dicty_cDB: Contig-U16008-1 [Dicty_cDB

    Lifescience Database Archive (English)

    Full Text Available Contig-U16008-1 gap included 1557 5 1711154 1712676 PLUS 5 8 U16008 0 0 0 0 1 1 1 0 1 0 0 0 1 0 Show Contig...-U16008-1 Contig ID Contig-U16008-1 Contig update 2004. 6.11 Contig sequence >Contig-U16008-1 (Contig-U16008-1Q) /CSM_Contig/Contig-U16008... TAAGGTTTATGATTTTTGATTTTAGATTTTATATTTTATTTATTTTAATA AAAAAAAAAAAAAAAAA Gap gap included Contig length 1557 Ch...F LIFVHGSSTIIVLGIAIINFSISRIFERSKMLPAVTWIFNLIILWTCY--- ---PFGGFGARGPPSTIGYSRHTIGGMYGGHSPGPRLHLTGYLGIEPMNGKFLN...SSTIIVLGIAIINFSISRIFERSKMLPAVTWIFNLIILWTCY--- ---PFGGFGARGPPSTIGYSRHTIGGMYGGHSPGPRLHLTGYLGIEPMNGKFLNIGRTFR L

  5. Dicty_cDB: Contig-U11342-1 [Dicty_cDB

    Lifescience Database Archive (English)

    Full Text Available Contig-U11342-1 gap included 2051 2 611517 609465 MINUS 4 7 U11342 0 2 1 1 0 0 0 0 0 0 0 0 0 0 Show Contig...-U11342-1 Contig ID Contig-U11342-1 Contig update 2002.12.18 Contig sequence >Contig-U11342-1 (Contig...-U11342-1Q) /CSM_Contig/Contig-U11342-1Q.Seq.d GTCAACATTAACATCATCATCATCATCATCACCATCTAGTAATAA...GAATTTGGTAATTTTAAAATCACTNATTAATATATTAAACAAAATTA TAAAAATAAAA Gap gap included Contig...EFFFIDRKSLLVNFP RGSICAQILKLIGNLYGSNDIIFKINTNNVSFFDGTIGANNSTNNSNSNQPMTPQQVVIK YLNPTARWKRREISNFEYLMTLNTIAGRTYN

  6. Dicty_cDB: Contig-U11195-1 [Dicty_cDB

    Lifescience Database Archive (English)

    Full Text Available Contig-U11195-1 gap included 2858 2 4308456 4311316 PLUS 16 27 U11195 0 2 0 8 1 0 0... 3 0 2 0 0 0 0 Show Contig-U11195-1 Contig ID Contig-U11195-1 Contig update 2002.12.18 Contig sequence >Contig-U11195-1 (Contig...-U11195-1Q) /CSM_Contig/Contig-U11195-1Q.Seq.d AGCATTGGAACAAATCGAATTACGTGAAAAGATACCATTGTT...TATCACCTGCTCTTTATCCTTCAAATTTAAGT AATTCAACATTGGCCCAAAGAGTTACATGGATAAATAAATTATAAATAAT GTATAAAATCATTCTCTC Gap gap included Contig... EYREKIPLLDLPWGASKPWTLVDLRDDYDEDLMVRFYNELMLPNFPVKNELEPLSNFISA LSEERRESFNPHLSEVHVLLALRWPTDSSDLQPTIGAGIIFEYFSN

  7. Dicty_cDB: Contig-U01791-1 [Dicty_cDB

    Lifescience Database Archive (English)

    Full Text Available Contig-U01791-1 no gap 527 2 7629792 7630319 PLUS 1 1 U01791 0 0 0 0 0 0 1 0 0 0 0 0 0 0 Show Contig...-U01791-1 Contig ID Contig-U01791-1 Contig update 2001. 8.29 Contig sequence >Contig-U01791-1 (Contig...-U01791-1Q) /CSM_Contig/Contig-U01791-1Q.Seq.d GTTTGATTATAATTTATATGAATGTGAAATTAGACAAGCATTATCAAATA ...TCGTTCCCTTATGATTTAAGAACAACTTT GAATAGTTACAGAAATGGTGAATTTAGTATTTATCAATAAATTTTTTTTT AAAGATTTATAATTAAAATAAAAAAAA Gap no gap Contig...SILWSIESIGSLIVSAQINDDRETMELLHRYQIPQKFLIPLF QILALIDQLEKDLSHQIELDKFTINRDYYFLKSFSNLIEPPLNCLGILKTSRPHFRIFKL VGKNMISQVLETIG

  8. Dicty_cDB: Contig-U09412-1 [Dicty_cDB

    Lifescience Database Archive (English)

    Full Text Available Contig-U09412-1 gap included 873 3 3953072 3953946 PLUS 1 2 U09412 0 0 0 0 0 0 1 0 0 0 0 0 0 0 Show Contig...-U09412-1 Contig ID Contig-U09412-1 Contig update 2002. 9.13 Contig sequence >Contig-U09412-1 (Contig...-U09412-1Q) /CSM_Contig/Contig-U09412-1Q.Seq.d ATTATCACAACTATTTTATAATAAACCAATTTTAAAGATTAAAGT...TGGTTCAATAAAAGAAATTAAATATAATTATCAATAAT AATAATAAATTAATTAATAAATTTAAATCAAAA Gap gap included Contig length 873 ...DCQCGFVSVVENNNNNNNNSDNENNENNENNENNE NNEDLEDFIPRKLLKKSSSTLQSRTYLVIYLGRRGILEIWGLKHRSREYFKTIG

  9. Dicty_cDB: Contig-U12357-1 [Dicty_cDB

    Lifescience Database Archive (English)

    Full Text Available Contig-U12357-1 gap included 1333 1 2827305 2828232 PLUS 5 6 U12357 0 1 1 2 0 0 1 0 0 0 0 0 0 0 Show Contig...-U12357-1 Contig ID Contig-U12357-1 Contig update 2002.12.18 Contig sequence >Contig-U12357-1 (Contig-U12357-1Q) /CSM_Contig/Contig-U12357...ATAAAATAAAATTTATTAATTTTCCAACT Gap gap included Contig length 1333 Chromosome numb...RYXEKKKXXXXDSXNXXXXXPXX XXLXXXXPXX--- ---QYEKMKLSGEKVDPTLDASIILGNRYLEKKKVTIGDSENYTITVPFSQILKNQKPLI IQRKTKGTL...-QYEKMKLSGEKVDPTLDASIILGNRYLEKKKVTIGDSENYTITVPFSQILKNQKPLI IQRKTKGTLYYSINLSYASLNPISKAIFNRGLNIKRTYYPVSNSNDVIY

  10. Dicty_cDB: Contig-U10996-1 [Dicty_cDB

    Lifescience Database Archive (English)

    Full Text Available Contig-U10996-1 gap included 3017 2 5488454 5485454 MINUS 41 76 U10996 0 3 0 24 1 0... 0 8 0 5 0 0 0 0 Show Contig-U10996-1 Contig ID Contig-U10996-1 Contig update 2002.12.18 Contig sequence >Contig-U10996-1 (Contig...-U10996-1Q) /CSM_Contig/Contig-U10996-1Q.Seq.d TGGCCTACTGGTAAAAAAAATTCTAATTTTATTAAAACCC...CTATTTATAATGTATTGTTAAG GCAAAAATAAAAAAAAAAGNAAAAAAA Gap gap included Contig length...LTTTA SSSQQQQQELGLAVLTIRQGYEFENIVKELLDEKKKIEIWSMKPNSKQQWELIKKGSPGN TQMFEDVLLNGNCEGSVMMALKVTREKGSIVFGISFGDATFKTIG

  11. Dicty_cDB: Contig-U12049-1 [Dicty_cDB

    Lifescience Database Archive (English)

    Full Text Available Contig-U12049-1 gap included 2563 4 3071598 3069091 MINUS 9 17 U12049 0 0 0 0 2 0 0... 1 4 1 1 0 0 0 Show Contig-U12049-1 Contig ID Contig-U12049-1 Contig update 2002.12.18 Contig sequence >Contig-U12049-1 (Contig...-U12049-1Q) /CSM_Contig/Contig-U12049-1Q.Seq.d TAATGAAGGTAGTAATAATAATATAGTTGAAGCATCAAAAGA...TATCATTTAAACTGAAAAAAGTC CAAAAGATTTATGCAATGATTGCTGCGAATATGCTGCAACTTGTTCTCAT TAAAAATAAACAAAAAAATAATA Gap gap included Contig...disngqcvyseiidcgsssienss nqesssdidittastlgstiastigstigltstttttttsqttgtpttppqtvseipisl astistspvsdegtiastiatt

  12. Automatic optimization of constants and special mathematic ensuring algorithms SKALA-micro system of RBMK-1000 reactor self-certification in operation

    International Nuclear Information System (INIS)

    Aleksandrov, S.I.; Dmitrenko, V.V.; Postnikov, V.V.; Sviridenkov, A.N.; Yurkin, G.V.; Yakunin, I.S.

    2007-01-01

    This paper addresses the improvement of the accuracy of the energy release distribution and of the safety margin of RBMK-1000 operation. The accuracy is improved through the automatic optimization of some constants used in the special software of the SKALA-micro system and through regular self-validation of the algorithm that determines the calculation error of the energy release distribution. The validation, based on regular scanning of the reactor core by a calibrating detector and on the sequential disabling of the internal detectors, is shown to give close results.

  13. Automatic text summarization

    CERN Document Server

    Torres Moreno, Juan Manuel

    2014-01-01

    This new textbook examines the motivations behind and the different algorithms for automatic document summarization (ADS), and surveys the recent state of the art. The book presents the main problems of ADS, the difficulties involved, and the solutions provided by the community, as well as recent advances, current applications and trends. The approaches covered are statistical, linguistic and symbolic. Several examples are included in order to clarify the theoretical concepts. The books currently available in the area of automatic document summarization are not recent, while powerful algorithms have been developed

  14. Dicty_cDB: Contig-U13894-1 [Dicty_cDB

    Lifescience Database Archive (English)

    Full Text Available Contig-U13894-1 no gap 1550 2 2081463 2079913 MINUS 30 31 U13894 1 0 15 0 9 1 1 0 1 1 1 0 0 0 Show Contig...-U13894-1 Contig ID Contig-U13894-1 Contig update 2002.12.18 Contig sequence >Contig-U13894-1 (Contig...-U13894-1Q) /CSM_Contig/Contig-U13894-1Q.Seq.d CTTTTTGATTGTATAATTGAAAAAAAAAAAAAAAAAAAAAAAAAAA...TAAATTAAATAATTAAAAAAAACAAAAAAATTAAGTGAAAATCAAAAAA Gap no gap Contig length 1550 Chromosome number (1..6, M) ...V*kkkkikk*k*sk*fklnn*kkqkn*vkikk own update 2004. 6.10 Homology vs CSM-cDNA Query= Contig-U13894-1 (Contig

  15. Dicty_cDB: Contig-U15462-1 [Dicty_cDB

    Lifescience Database Archive (English)

    Full Text Available Contig-U15462-1 no gap 546 4 3384206 3383661 MINUS 2 2 U15462 0 0 2 0 0 0 0 0 0 0 0 0 0 0 Show Contig...-U15462-1 Contig ID Contig-U15462-1 Contig update 2004. 6.11 Contig sequence >Contig-U15462-1 (Contig...-U15462-1Q) /CSM_Contig/Contig-U15462-1Q.Seq.d CTTTAGATTGGGGNTCAAGAAAAATATTGAAGTATTTGGTGGTGATAAGA...ATTCGATTCACTATCTTATA Gap no gap Contig length 546 Chromosome number (1..6, M) 4 Chromosome length 5430582 St...VMKLGFEVKDLITNDPKCDLFDSLS Y own update 2004. 6.23 Homology vs CSM-cDNA Query= Contig-U15462-1 (Contig

  16. Automatic differentiation bibliography

    Energy Technology Data Exchange (ETDEWEB)

    Corliss, G.F. [comp.

    1992-07-01

    This is a bibliography of work related to automatic differentiation. Automatic differentiation is a technique for the fast, accurate propagation of derivative values using the chain rule. It is neither symbolic nor numeric. Automatic differentiation is a fundamental tool for scientific computation, with applications in optimization, nonlinear equations, nonlinear least squares approximation, stiff ordinary differential equations, partial differential equations, continuation methods, and sensitivity analysis. This report is an updated version of the bibliography which originally appeared in Automatic Differentiation of Algorithms: Theory, Implementation, and Application.
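    The chain-rule propagation described above can be illustrated with a minimal forward-mode sketch using dual numbers; this is a generic illustration of the technique, not drawn from any entry in the bibliography.

```python
class Dual:
    """Forward-mode AD: carry a value and its derivative, propagated by the chain rule."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)
    __radd__ = __add__
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # Product rule: (uv)' = u'v + uv'
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)
    __rmul__ = __mul__

def derivative(f, x):
    """Evaluate f at x while seeding dx/dx = 1, and read off df/dx."""
    return f(Dual(x, 1.0)).dot

# d/dx (3x^2 + 2x) at x = 4 is 6*4 + 2 = 26, exactly, with no symbolic algebra.
print(derivative(lambda x: 3 * x * x + 2 * x, 4.0))  # 26.0
```

Unlike finite differences, the result is exact to machine precision, which is what distinguishes the technique from numeric differentiation.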

  17. Dicty_cDB: Contig-U09720-1 [Dicty_cDB

    Lifescience Database Archive (English)

    Full Text Available Contig-U09720-1 gap included 1323 2 5906974 5908260 PLUS 1 2 U09720 0 0 1 0 0 0 0 0 0 0 0 0 0 0 Show Contig...-U09720-1 Contig ID Contig-U09720-1 Contig update 2002. 9.13 Contig sequence >Contig-U09720-1 (Contig-U09720-1Q) /CSM_Contig/Contig-U09720...ATNATTATTATAAAAATTT Gap gap included Contig length 1323 Chromosome number (1..6, ...QLEAEDIVKQSQLVRNTLLSILNKLFSNY NNSNETTATTTIGQDQEKLSTLKNQREIIAQSLKIXKKL*linqxll*kf ...AEMFDIDSRNNHAIENDGRLDDA LVCSVGIALAPQSIFQSWKSMSEHKREKYFEQLEAEDIVKQSQLVRNTLLSILNKLFSNY NNSNETTATTTIGQDQEKLSTLK

  18. Dicty_cDB: Contig-U09379-1 [Dicty_cDB

    Lifescience Database Archive (English)

    Full Text Available Contig-U09379-1 gap included 899 2 1392012 1392912 PLUS 1 2 U09379 0 0 0 0 0 0 1 0 0 0 0 0 0 0 Show Contig...-U09379-1 Contig ID Contig-U09379-1 Contig update 2002. 9.13 Contig sequence >Contig-U09379-1 (Contig...-U09379-1Q) /CSM_Contig/Contig-U09379-1Q.Seq.d AAAAATTTTTTAAACTAAAAAATAAAAAAAATAAATAAAAAAAAA...TTTAAAAATAATAATAAAAGTGAATATTATAATATTAT AATCTTTTTGGTATAATTGAAAAAGATCAATAATATATTAAAATTTCCAA AAAAAAAAA Gap gap included Contig...VSVCRAYATETATIENKTQIMGKMSGAQGAGFVLGPGIGFLLNFCNFTIG--- ---INNK******sn*finykl***f*kikqphfknlkiiikvniiil*sfwyn

  19. Dicty_cDB: Contig-U15566-1 [Dicty_cDB

    Lifescience Database Archive (English)

    Full Text Available Contig-U15566-1 gap included 1830 4 3730704 3729599 MINUS 4 8 U15566 0 0 1 0 1 0 0 0 2 0 0 0 0 0 Show Contig...-U15566-1 Contig ID Contig-U15566-1 Contig update 2004. 6.11 Contig sequence >Contig-U15566-1 (Contig-U15566-1Q) /CSM_Contig/Contig-U1556...CAAGATCCAA TGGAATTTTAATAATAAATAAGAATAATAAAAAAAAAAAA Gap gap included Contig length 1830 Chromosome number (1...ITLTPSEDIEKKLKEI QDENLSNSEIWFAVKSYLEDNNLKEHLYNLVFHYTMPRIDEPVTIGLDHLGNVLVSNR*c tflvvvvvytfgcriephni*qerivlqf*...asilnhirvelsqnqipilkrsfdqillphfekc iieeqqiftnekqrknflsllpisykrqdrkipltpsediekklkeiqdenlsnseiwfa vksylednnlkehlynlvfhytmpridepvtig

  20. Dicty_cDB: Contig-U04334-1 [Dicty_cDB

    Lifescience Database Archive (English)

    Full Text Available Contig-U04334-1 no gap 399 4 3746420 3746021 MINUS 3 3 U04334 0 0 0 0 0 0 3 0 0 0 0 0 0 0 Show Contig...-U04334-1 Contig ID Contig-U04334-1 Contig update 2001. 8.29 Contig sequence >Contig-U04334-1 (Contig...-U04334-1Q) /CSM_Contig/Contig-U04334-1Q.Seq.d CAAAAAAAAAAAAGTAAAACAATAAATTATATAAAAAAAATAAAAAAAAT...CTAATTTCA AACAATATCAATAAAATGTTATATAATTACTATTAAAATGAAAAAAAAA Gap no gap Contig len...ce QKKKSKTINYIKKIKKMSIINTISKLSLSNSLKSNITIGNLNGTTVNNYTHNETSSKFTE FFYKII*qnkrwf*kvkelnkkkrkkdyiissfcklysiyfvfs

  1. Dicty_cDB: Contig-U10335-1 [Dicty_cDB

    Lifescience Database Archive (English)

    Full Text Available Contig-U10335-1 no gap 1353 2 2769724 2768368 MINUS 3 6 U10335 0 0 2 0 0 0 1 0 0 0 0 0 0 0 Show Contig...-U10335-1 Contig ID Contig-U10335-1 Contig update 2002. 9.13 Contig sequence >Contig-U10335-1 (Contig...-U10335-1Q) /CSM_Contig/Contig-U10335-1Q.Seq.d ATTTTTTTTCTAAATATATAAAAAATAATAATAATAATAATAATATAAT...AAACATAATAAAACAAAAGATAAAAATAAAA ACA Gap no gap Contig length 1353 Chromosome numb...SSLATNNNINNNKRITIPDNH SNNPDKLLEIQLINKIFDISKAFDGKSNNLVSSFQNCTNNNNNNNNNTDNNNNNNISNNN NNNNVPTLQPLSFNNRNNLVNGNISSSSSSNSSNNNIGSSNSNNVTIG

  2. Dicty_cDB: Contig-U12399-1 [Dicty_cDB

    Lifescience Database Archive (English)

    Full Text Available Contig-U12399-1 gap included 1358 3 4712677 4711450 MINUS 1 2 U12399 0 1 0 0 0 0 0 0 0 0 0 0 0 0 Show Contig...-U12399-1 Contig ID Contig-U12399-1 Contig update 2002.12.18 Contig sequence >Contig-U12399-1 (Contig-U12399-1Q) /CSM_Contig/Contig-U1239...GAAGATGATATTAGTCTGAGGAAGATATTCTTAAAGA ATTTAACAAATGTTAACA Gap gap included Contig ...*e iekkklnyl*eqkvkyqknhqkimiq*enxmks*LQIYHXFAXLIGEPIPNNDXXX--- ---XXXRHVIWKLYEEITIGLKRTISITXKRESCKSHYLANCIMH...kkklnyl*eqkvkyqknhqkimiq*enxmks*LQIYHXFAXLIGEPIPNNDXXX--- ---XXXRHVIWKLYEEITIGLKRTISITXKRESCKSHYLANCIMHVYWRL

  3. Dicty_cDB: Contig-U11404-1 [Dicty_cDB

    Lifescience Database Archive (English)

    Full Text Available Contig-U11404-1 gap included 1618 6 1729583 1727965 MINUS 11 19 U11404 0 6 1 1 0 2 ...0 1 0 0 0 0 0 0 Show Contig-U11404-1 Contig ID Contig-U11404-1 Contig update 2002.12.18 Contig sequence >Contig-U11404-1 (Contig...-U11404-1Q) /CSM_Contig/Contig-U11404-1Q.Seq.d ATTTTAAGAGTTTTAATTTTAATAACTATACTTTTAATAAA...TTTTTCTTTTGAACCAGAAAAAAAAA Gap gap included Contig length 1618 Chromosome number ...AGARMLASLATDKLSNVIYLDVSENDFGDEGVSVICDGFVGNSTIKKLILNGNFKQ SK--- ---YEKITIGLDSVFKDLILEESQAQNEASGATPIPDSPVPTRSP

  4. Dicty_cDB: Contig-U09569-1 [Dicty_cDB

    Lifescience Database Archive (English)

    Full Text Available Contig-U09569-1 gap included 1424 5 3658944 3660352 PLUS 8 14 U09569 0 0 8 0 0 0 0 0 0 0 0 0 0 0 Show Contig...-U09569-1 Contig ID Contig-U09569-1 Contig update 2002. 9.13 Contig sequence >Contig-U09569-1 (Contig-U09569-1Q) /CSM_Contig/Contig-U0956...TTAAAAA TAAAATAAATATAAAATAAAATAAAAATTAACAA Gap gap included Contig length 1424 Chromosome number (1..6, M) 5...NQTFQQKYYVNDQYYNYKNGGPIILYINGEGPVSSPPYSSDDGVVIYAQA LNCMIVTLEHRFYGESSPFSELTIENLQYLSHQQALEDLATFVVDFQSKLVGAGHIVTIG...YLSHQQALEDLATFVVDFQSKLVGAGHIVTIG GSYSGALSAWFRIKYPHITVGSIASLGVVHSILDFTAFDAYVSYA---

  5. Dicty_cDB: Contig-U15306-1 [Dicty_cDB

    Lifescience Database Archive (English)

    Full Text Available Contig-U15306-1 no gap 2452 3 3887051 3889342 PLUS 54 91 U15306 0 0 0 49 4 1 0 0 0 0 0 0 0 0 Show Contig...-U15306-1 Contig ID Contig-U15306-1 Contig update 2004. 6.11 Contig sequence >Contig-U15306-1 (Contig...-U15306-1Q) /CSM_Contig/Contig-U15306-1Q.Seq.d AAGCATAAACGGTGAATACCTCGACTCCTAAATCGATGAAGACCGTA...TTTTAGAACTTCAAAAAATAGTAC AAATTTTTTCAAATTAAGATAAAAAAAATAAAACAAAAATTAATTTAAAA CA Gap no gap Contig length 2452...*naagtgkgeegrt*hkslpywlapqvkgsvmprggqghygasrggrkhmgidfssivg qdivapisgkvvnfkgartkypmlqlypskkftefdylqmlyvhppvginmgasyqvsvg dtig

  6. Dicty_cDB: Contig-U14772-1 [Dicty_cDB

    Lifescience Database Archive (English)

    Full Text Available Contig-U14772-1 no gap 665 1 1988279 1987624 MINUS 1 1 U14772 0 1 0 0 0 0 0 0 0 0 0 0 0 0 Show Contig...-U14772-1 Contig ID Contig-U14772-1 Contig update 2002.12.18 Contig sequence >Contig-U14772-1 (Contig...-U14772-1Q) /CSM_Contig/Contig-U14772-1Q.Seq.d AAAAACAATAACCATCGTTTTTTATTTTTATTTTCAAAATATGGATTTAA...AAATTAATGAAGAAAAAA AAGTAANNNNNNNNN Gap no gap Contig length 665 Chromosome number...DADTTISFLSSQNLSQLSIIKNLVNGKTIG DKKVIVDFYDFKKVIPTPTPIPTPTPPTKTQEESNKKIKLTNEKPKEKKP

  7. Dicty_cDB: Contig-U11141-1 [Dicty_cDB

    Lifescience Database Archive (English)

    Full Text Available Contig-U11141-1 gap included 2122 2 1113359 1111236 MINUS 6 12 U11141 0 1 0 2 0 0 0... 1 0 2 0 0 0 0 Show Contig-U11141-1 Contig ID Contig-U11141-1 Contig update 2002.12.18 Contig sequence >Contig-U11141-1 (Contig...-U11141-1Q) /CSM_Contig/Contig-U11141-1Q.Seq.d AAAAAACAATCTTAAAACACACACACACTCAACACACTATCA...AAATCAAAATCAAAATCAAA ATAATAATAATTATAATAATAGCTATAATAAT Gap gap included Contig length 2122 Chromosome number ...HNYFGKVSRGIVSLSDYKYYGYLRSVHLIGYE QHEEELIKTIKSLPVGVSTLELSGHLNKIIFKEGSL--- ---DDSTIGAILNSFSSSSSRETFPRSVESLHLNI

  8. Dicty_cDB: Contig-U13202-1 [Dicty_cDB

    Lifescience Database Archive (English)

    Full Text Available Contig-U13202-1 no gap 1083 4 1301578 1302630 PLUS 41 45 U13202 8 0 13 0 0 2 16 0 2 0 0 0 0 0 Show Contig...-U13202-1 Contig ID Contig-U13202-1 Contig update 2002.12.18 Contig sequence >Contig-U13202-1 (Contig...-U13202-1Q) /CSM_Contig/Contig-U13202-1Q.Seq.d ACTGTTGGCCTACTGGGATTTTCTGCAGTAATAATAAAATCAAATA...TTTGTAATTTTAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA Gap no gap Contig len...kvgqfirvprgaqpaqtskftlmih*gvkshffsmlqpnwpncttigpvq nqarcgsllgfwvlqnqlltvlcihnnekcsikfygygyl**nlitvvkvvmpslhg

  9. Dicty_cDB: Contig-U15062-1 [Dicty_cDB

    Lifescience Database Archive (English)

    Full Text Available Contig-U15062-1 no gap 1282 3 4759691 4758480 MINUS 5 6 U15062 0 0 0 0 0 0 1 0 1 1 1 0 1 0 Show Contig...-U15062-1 Contig ID Contig-U15062-1 Contig update 2004. 6.11 Contig sequence >Contig-U15062-1 (Contig...-U15062-1Q) /CSM_Contig/Contig-U15062-1Q.Seq.d CAAATATTTAAATAAATTTAACATTATAAAAACAAAAATTAATAAAGTA...TTTTCAATAGATAATAATAAAAAAAAAAAAAAAAAAA AAAAAAAAATTATTTTAAAAATAAAAAAAAAA Gap no gap Contig length 1282 Chromos...KMSHNHNSNNNKTTTTTTNDSGSAIANGINLEKILADVKECN YNLVNSITATEAIQKEKESLENELSTKGTIGDGKRIKKLQYNISLQTETLMKTLMKLDSL SITG

  10. Dicty_cDB: Contig-U09432-1 [Dicty_cDB

    Lifescience Database Archive (English)

    Full Text Available Contig-U09432-1 gap included 993 5 741953 740957 MINUS 1 2 U09432 0 0 0 0 0 0 1 0 0 0 0 0 0 0 Show Contig...-U09432-1 Contig ID Contig-U09432-1 Contig update 2002. 9.13 Contig sequence >Contig-U09432-1 (Contig...-U09432-1Q) /CSM_Contig/Contig-U09432-1Q.Seq.d AGGAAATATTTTAATATTTTATTTTTTTTATTTTTTTTATTTATTA...TTTTGGTGGTAAATATAGATATGAAAATAAA CAAATCCAAATTTTAGTTGAATTAAATTTCACTGATACCACTCAAAAAAA AAA Gap gap included Contig...iy*sni*SVKFGICYNYAKYHLSICNHTIYPGSDNQSLYFKLSSIFDS PTILSGYAVIYNSLDQIITNGTYNLILDEDVPTIG

  11. Texture Repairing by Unified Low Rank Optimization

    Institute of Scientific and Technical Information of China (English)

    Xiao Liang; Xiang Ren; Zhengdong Zhang; Yi Ma

    2016-01-01

    In this paper, we show how to harness both low-rank and sparse structures in regular or near-regular textures for image completion. Our method is based on a unified formulation for both random and contiguous corruption. In addition to the low-rank property of the texture, the algorithm also exploits the sparsity of natural images: because a natural image is piecewise smooth, it is sparse in certain transformed domains (such as the Fourier or wavelet domain). We combine the low-rank and sparsity properties of the texture image in the proposed algorithm. Based on convex optimization, our algorithm can automatically and correctly repair the global structure of a corrupted texture, even without precise information about the regions to be completed. The algorithm integrates texture rectification and repairing into a single optimization problem. Through extensive simulations, we show that our method completes and repairs textures corrupted by errors with both random and contiguous supports better than existing low-rank matrix recovery methods. Our method demonstrates a significant advantage over local patch-based texture synthesis techniques in dealing with large corruption, non-uniform texture, and large perspective deformation.
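    The low-rank-plus-sparse decomposition at the heart of the abstract above can be sketched with a generic robust PCA solver. This is a minimal illustrative sketch via ADMM (singular-value and entrywise soft-thresholding), not the authors' unified formulation; the function names and parameter choices (`lam`, `mu`) are assumptions following common robust PCA defaults.

    ```python
    import numpy as np

    def svd_threshold(M, tau):
        """Singular-value soft-thresholding: proximal operator of the nuclear norm."""
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

    def soft_threshold(M, tau):
        """Entrywise soft-thresholding: proximal operator of the l1 norm."""
        return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

    def rpca(D, lam=None, mu=None, tol=1e-7, max_iter=500):
        """Split D into low-rank L plus sparse S by minimizing
        ||L||_* + lam * ||S||_1  subject to  L + S = D  (ADMM iterations)."""
        m, n = D.shape
        if lam is None:
            lam = 1.0 / np.sqrt(max(m, n))      # common default weight
        if mu is None:
            mu = 0.25 * m * n / (np.abs(D).sum() + 1e-12)
        L = np.zeros_like(D)
        S = np.zeros_like(D)
        Y = np.zeros_like(D)                    # scaled dual variable
        for _ in range(max_iter):
            L = svd_threshold(D - S + Y / mu, 1.0 / mu)
            S = soft_threshold(D - L + Y / mu, lam / mu)
            residual = D - L - S
            Y += mu * residual
            if np.linalg.norm(residual) <= tol * np.linalg.norm(D):
                break
        return L, S
    ```

    On a texture image, `L` would capture the repeated (low-rank) pattern and `S` the corruption, so the repaired texture is read off from `L`.
    
    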

  12. Automatic vertebral identification using surface-based registration

    Science.gov (United States)

    Herring, Jeannette L.; Dawant, Benoit M.

    2000-06-01

    This work introduces an enhancement to currently existing methods of intra-operative vertebral registration by allowing the portion of the spinal column surface that correctly matches a set of physical vertebral points to be automatically selected from several possible choices. Automatic selection is made possible by the shape variations that exist among lumbar vertebrae. In our experiments, we register vertebral points representing physical space to spinal column surfaces extracted from computed tomography images. The vertebral points are taken from the posterior elements of a single vertebra to represent the region of surgical interest. The surface is extracted using an improved version of the fully automatic marching cubes algorithm, which results in a triangulated surface that contains multiple vertebrae. We find the correct portion of the surface by registering the set of physical points to multiple surface areas, including all vertebral surfaces that potentially match the physical point set. We then compute the standard deviation of the surface error for the set of points registered to each vertebral surface that is a possible match, and the registration that corresponds to the lowest standard deviation designates the correct match. We have performed our current experiments on two plastic spine phantoms and one patient.
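    The selection rule described above — register the physical point set to each candidate vertebral surface, then keep the match whose surface-error standard deviation is lowest — can be sketched as follows. This is a simplified illustration, not the paper's pipeline: `surface_error_std` and `select_vertebra` are hypothetical names, and nearest-vertex distance stands in for true point-to-triangle distance on the marching-cubes mesh.

    ```python
    import numpy as np

    def surface_error_std(points, surface):
        """Std of each point's distance to its nearest surface vertex
        (a stand-in for true point-to-surface distance after registration)."""
        d = np.linalg.norm(points[:, None, :] - surface[None, :, :], axis=2)
        return d.min(axis=1).std()

    def select_vertebra(points, candidate_surfaces):
        """Return the index of the candidate vertebral surface whose
        registration residuals have the lowest standard deviation."""
        stds = [surface_error_std(points, s) for s in candidate_surfaces]
        return int(np.argmin(stds))
    ```

    In the full method each candidate would first be rigidly registered to the point set; here the registration step is assumed already done.
    
    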

  13. Dicty_cDB: Contig-U15005-1 [Dicty_cDB

    Lifescience Database Archive (English)

    Full Text Available Contig-U15005-1 no gap 2023 1 1509217 1507616 MINUS 2 4 U15005 0 0 0 0 1 0 1 0 0 0 0 0 0 0 Show Contig...-U15005-1 Contig ID Contig-U15005-1 Contig update 2004. 6.11 Contig sequence >Contig-U15005-1 (Contig...-U15005-1Q) /CSM_Contig/Contig-U15005-1Q.Seq.d AATTTTCTTTTCTTTTTAAAACTTAAGTACCATATGGCAGAATATACAC...ATAATAACGATATTAA Gap no gap Contig length 2023 Chromosome number (1..6, M) 1 Chro...HMAEYTHYFIQYNLTDIFYEDVNIEKYSCSICYESVYKKEIYQCKEIHWF CKTCWAESLFKKKECMICRCIVKSISELSRNRFIEQDFLNIKVNCPNSFKYIDENKNNNN KIKDLENGCKDIITIG

  14. Dicty_cDB: Contig-U10406-1 [Dicty_cDB

    Lifescience Database Archive (English)

    Full Text Available Contig-U10406-1 no gap 661 4 1621526 1620875 MINUS 1 1 U10406 0 0 1 0 0 0 0 0 0 0 0 0 0 0 Show Contig...-U10406-1 Contig ID Contig-U10406-1 Contig update 2002. 9.13 Contig sequence >Contig-U10406-1 (Contig...-U10406-1Q) /CSM_Contig/Contig-U10406-1Q.Seq.d NNNNNNNNNNATAAGTAAAAGAGTTATTGGTCCAAGATTAGATGATGACA...TACAAATAAGTAAAGTTG ATAAAGAACAT Gap no gap Contig length 661 Chromosome number (1....cid sequence XXXISKRVIGPRLDDDNNNNDNDKFNNNNKKAIGPSRIGPTIGPSIGPSRYNTNNNDSNH NSNNDDDDDSSEEDEEDTKSEWERVRNMIENNKN

  15. Dicty_cDB: Contig-U16457-1 [Dicty_cDB

    Lifescience Database Archive (English)

    Full Text Available Contig-U16457-1 no gap 1065 3 996438 997502 PLUS 6 5 U16457 0 0 1 0 1 0 2 0 0 0 0 0 1 1 Show Contig...-U16457-1 Contig ID Contig-U16457-1 Contig update 2004. 6.11 Contig sequence >Contig-U16457-1 (Contig...-U16457-1Q) /CSM_Contig/Contig-U16457-1Q.Seq.d ACAATTGGTGTTGCTGCTCTATTCGGTCTTCCAGCTATGGCACGTTCCGC A...TTTAACAAGATTGGAAGAC CAAAAAGAAAAAAAA Gap no gap Contig length 1065 Chromosome numb... Translated Amino Acid sequence TIGVAALFGLPAMARSAAMSLVFLIPFMWIVFSVHYPINSVVADICMSYNNNTGSIEQQL ANYTNPIVSEIFGTC

  16. Dicty_cDB: Contig-U04768-1 [Dicty_cDB

    Lifescience Database Archive (English)

    Full Text Available Contig-U04768-1 no gap 762 6 2607190 2606476 MINUS 3 3 U04768 1 0 0 0 0 0 2 0 0 0 0 0 0 0 Show Contig...-U04768-1 Contig ID Contig-U04768-1 Contig update 2001. 8.29 Contig sequence >Contig-U04768-1 (Contig...-U04768-1Q) /CSM_Contig/Contig-U04768-1Q.Seq.d AAAGTCTTATTTGTTTAAAAAAAAAAAAAAAAAATAAAAAACTTTATTCT...AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAA Gap no gap Contig length 762 Chromosome number (1..6, M...lknf*KMVMMHDEYISPTKLQFGFMIAVAFLG TIGVMGFCQNVFDILLGVISILSIYIGMRGVWKRKKRWLFVFMWLMMGMGFLHLVSFAVV VILHHKNPTKNTVF

  17. Dicty_cDB: Contig-U13326-1 [Dicty_cDB

    Lifescience Database Archive (English)

    Full Text Available Contig-U13326-1 no gap 240 6 1728259 1728019 MINUS 1 1 U13326 0 0 0 0 0 1 0 0 0 0 0 0 0 0 Show Contig...-U13326-1 Contig ID Contig-U13326-1 Contig update 2002.12.18 Contig sequence >Contig-U13326-1 (Contig...-U13326-1Q) /CSM_Contig/Contig-U13326-1Q.Seq.d AATGACTCAACAAATCTTGGAGAGTATGCAAAATACTTTCCAATCTATGG...CCTCGTTAAAGGTGCTGGTGC TGAATTAAGTTCTCGTGCTCATGAGTGTTTCATTAGTGCCTTGGATATTG CCTCTGATTATACCTACGAGAAAATTACCATTGGCTTGGA Gap no gap Contig...FQSMDGPTIKRLATTIQYGSKDVDEQQIHSTLVKGAGAELSSRAHECF ISALDIASDYTYEKITIGL Translated Amino Acid sequence (All Fra

  18. Dicty_cDB: Contig-U15036-1 [Dicty_cDB

    Lifescience Database Archive (English)

    Full Text Available Contig-U15036-1 no gap 3102 - - - - 16 24 U15036 0 5 1 2 0 1 1 2 3 1 0 0 0 0 Show Contig-U15036-1 Contig... ID Contig-U15036-1 Contig update 2004. 6.11 Contig sequence >Contig-U15036-1 (Contig-U15036-1Q) /CSM_Contig.../Contig-U15036-1Q.Seq.d ATCTTTTTAAAAAAAAAAAAAATAAAACAAATAAAGAAAGAAATTAAATA AATATTAATAAT...AATTTAAAATTAATTTTTAG AT Gap no gap Contig length 3102 Chromosome number (1..6, M) - Chromosome length - Star...RKKQTDAVAEIPVD NPTSTSTTTTTTTTSNATSILSAIHTSTINSNTSSHNNNQQQQQQQQTILPTQPTIINTP TPVRSSVSRSQSPLPSGNGSSIISQEKTPLSTFVLSTCRPSALVLPPGSTIG

  19. Dicty_cDB: Contig-U13455-1 [Dicty_cDB

    Lifescience Database Archive (English)

    Full Text Available Contig-U13455-1 no gap 750 2 945431 946181 PLUS 2 2 U13455 0 0 0 0 0 0 1 0 0 1 0 0 0 0 Show Contig...-U13455-1 Contig ID Contig-U13455-1 Contig update 2002.12.18 Contig sequence >Contig-U13455-1 (Contig...-U13455-1Q) /CSM_Contig/Contig-U13455-1Q.Seq.d TAATTCCAACAACATCAACAAATTCAACAACAATTACAAATGCAACAACA TA...CAATAATAATAATAATAACAATAACAATAATAATAA Gap no gap Contig length 750 Chromosome number (1..6, M) 2 Chromosome l...KMLEYIQKNPSATRPSCIQVVQQPSSKVVWKNRRLDTPFKVKVDLKAASAMA GTNLTTASVITIGIVTDHKGKLQIDSVENFTEAFNGQGLAVFQGLKMTKGTWGKE

  20. Dicty_cDB: Contig-U14400-1 [Dicty_cDB

    Lifescience Database Archive (English)

    Full Text Available Contig-U14400-1 no gap 1939 4 4053811 4055750 PLUS 5 7 U14400 0 0 2 0 0 1 0 0 1 1 0 0 0 0 Show Contig...-U14400-1 Contig ID Contig-U14400-1 Contig update 2002.12.18 Contig sequence >Contig-U14400-1 (Contig...-U14400-1Q) /CSM_Contig/Contig-U14400-1Q.Seq.d CATTACCAATAAATTTATCTGCTTCAACACCTATACCAATGACATCACCA...AGGTTTATAAAATATATTGAATCAATTTTTGATTAAA Gap no gap Contig length 1939 Chromosome number (1..6, M) 4 Chromosome...HQQQQSKTVTSSTTSTETTTTVESSTTSTTITTSTSTPIPTTITTTPTTPI NSDNSWTFTSFSPKVFKEIRRYYGVDEEFLKSQENSSGIVKFLEVQTIGRSGSFFY

  1. Dicty_cDB: Contig-U10709-1 [Dicty_cDB

    Lifescience Database Archive (English)

    Full Text Available Contig-U10709-1 gap included 1228 4 757921 759149 PLUS 2 3 U10709 0 0 0 1 1 0 0 0 0 0 0 0 0 0 Show Contig...-U10709-1 Contig ID Contig-U10709-1 Contig update 2002.12.18 Contig sequence >Contig-U10709-1 (Contig...-U10709-1Q) /CSM_Contig/Contig-U10709-1Q.Seq.d ATTAGTAACACAGACATTGGTAACACGAATTTATTACCACCATCAC...ATGTTTAGGTGATAATACTCATAGTCAA Gap gap included Contig length 1228 Chromosome number (1..6, M) 4 Chromosome le...LDIFLIQIGAAIMGSNQFIQHAINIYNLEDWFEIEPFNG SLNKSTEGTPTTTSSQPPSTPSKQTSLRNSAGTVPTTPSQSSSTIVPTLDTIGETTTTTT TTATTTT

  2. Dicty_cDB: Contig-U10837-1 [Dicty_cDB

    Lifescience Database Archive (English)

    Full Text Available Contig-U10837-1 gap included 1996 2 5280203 5282199 PLUS 8 9 U10837 0 3 0 3 1 0 0 1 0 0 0 0 0 0 Show Contig...-U10837-1 Contig ID Contig-U10837-1 Contig update 2002.12.18 Contig sequence >Contig-U10837-1 (Contig-U10837-1Q) /CSM_Contig/Contig-U10837...TCNT Gap gap included Contig length 1996 Chromosome number (1..6, M) 2 Chromosome...YSSKGYFKHLDSFLSEISVP LCESVSKSSTLVFSLLFNMLEYSTADYRYPILKILTALVKCGVNPAETKSSRVPEWFDTV TQFLNDHKTPHYIVSQAIRFIEITSGNSPTSLITIDNASLKPSKNTIG...SSRVPEWFDTV TQFLNDHKTPHYIVSQAIRFIEITSGNSPTSLITIDNASLKPSKNTIGTKKFSNKVDRGT LLAGNYFNKVLVDTVPGVRSSVNSLTKSIYSTTQI

  3. Dicty_cDB: Contig-U12765-1 [Dicty_cDB

    Lifescience Database Archive (English)

    Full Text Available Contig-U12765-1 no gap 1256 6 1467819 1466563 MINUS 3 3 U12765 0 0 0 2 0 0 0 0 0 0 1 0 0 0 Show Contig...-U12765-1 Contig ID Contig-U12765-1 Contig update 2002.12.18 Contig sequence >Contig-U12765-1 (Contig...-U12765-1Q) /CSM_Contig/Contig-U12765-1Q.Seq.d CAAAAAGGAAACACTAGTCCAGTTAGAACCCCAAATACTACTACTACTA...TATCGATTGTTCAAAGGTTTCAATGGTTGATACTAAT TTCTTA Gap no gap Contig length 1256 Chromosome number (1..6, M) 6 Chr...EYQEDLTPIFEPIFLDLIKIL STTTLTGNVFPYYKVFSRLVQFKAVSDLVGTLQCWNSPNFNGKEMERNTILGSLFSPSSA SDDGSTIKQYFSNASTMNKNTIGDA

  4. Dicty_cDB: Contig-U09480-1 [Dicty_cDB

    Lifescience Database Archive (English)

    Full Text Available Contig-U09480-1 gap included 705 5 4277527 4276817 MINUS 1 2 U09480 0 0 0 0 0 0 1 0 0 0 0 0 0 0 Show Contig...-U09480-1 Contig ID Contig-U09480-1 Contig update 2002. 9.13 Contig sequence >Contig-U09480-1 (Contig-U09480-1Q) /CSM_Contig/Contig-U09480...AAAAAAAAAA Gap gap included Contig length 705 Chromosome number (1..6, M) 5 Chromosome length 5062330 Start ...**********imaeinienpfhvntkidvntfvnqirgipngsrcdftnsvvkhf sslgynvfvchpnhavtgpyaklhcefrntkfstig...srcdftnsvvkhf sslgynvfvchpnhavtgpyaklhcefrntkfstigydvyiiargrkvtatnfgdggydn wasggh

  5. Dicty_cDB: Contig-U09345-1 [Dicty_cDB

    Lifescience Database Archive (English)

    Full Text Available Contig-U09345-1 gap included 1216 4 3361857 3360637 MINUS 4 5 U09345 1 0 1 0 0 0 2 0 0 0 0 0 0 0 Show Contig...-U09345-1 Contig ID Contig-U09345-1 Contig update 2002. 9.13 Contig sequence >Contig-U09345-1 (Contig-U09345-1Q) /CSM_Contig/Contig-U0934...AATGGTATTTTAAAAATAA Gap gap included Contig length 1216 Chromosome number (1..6, M) 4 Chromosome length 5430...ALFTSSNPKYGCSGCVQLKNQIESFSLSYEPYL NSAGFLEKPIFIVILEVDYNMEVFQTIGLNTIPHLLFIPSGSKPITQKGYAYTGFEQTSS QSISDFIYSHSKI...LLALFTSSNPKYGCSGCVQLKNQIESFSLSYEPYL NSAGFLEKPIFIVILEVDYNMEVFQTIGLNTIPHLLFIPSGSKPI

  6. Dicty_cDB: Contig-U15323-1 [Dicty_cDB

    Lifescience Database Archive (English)

    Full Text Available Contig-U15323-1 no gap 1230 2 3760829 3759661 MINUS 76 108 U15323 2 0 21 0 9 4 0 0 22 4 13 0 1 0 Show Contig...-U15323-1 Contig ID Contig-U15323-1 Contig update 2004. 6.11 Contig sequence >Contig-U15323-1 (Contig-U15323-1Q) /CSM_Contig/Contig-U1532...TAAAATTTAAGCAATCATTCCAT Gap no gap Contig length 1230 Chromosome number (1..6, M) 2 Chromosome length 846757...VGLLVFFNILYCTPLYYILFFFKMNSKFADELIATAKAIVAPGKGILAADESTNTIGAR FKKINLENNEENRRAYRELLIGTGNGVNEFIGGIILYEETLYQKMADG...MNSKFADELIATAKAIVAPGKGILAADESTNTIGAR FKKINLENNEENRRAYRELLIGTGNGVNEFIGGIILYEETLYQK

  7. Dicty_cDB: Contig-U14236-1 [Dicty_cDB

    Lifescience Database Archive (English)

    Full Text Available Contig-U14236-1 no gap 660 2 5626866 5627517 PLUS 1 1 U14236 0 0 0 0 0 0 0 0 0 1 0 0 0 0 Show Contig...-U14236-1 Contig ID Contig-U14236-1 Contig update 2002.12.18 Contig sequence >Contig-U14236-1 (Contig...-U14236-1Q) /CSM_Contig/Contig-U14236-1Q.Seq.d NNNNNNNNNNGAAAATCAAAAATTAAAAAGTAACATTACTCTATTATATG ...CAATCACTCCAATTAAA CCATAGTTTT Gap no gap Contig length 660 Chromosome number (1..6...MGSEKSPFNLKQYPSLVKIDDVS QCPKYKCLKRKSLNEWTIGLNIPAFCRESRYDCSLCYKYIECSFSDEF*tnlsalfv

  8. Modified automatic R-peak detection algorithm for patients with epilepsy using a portable electrocardiogram recorder.

    Science.gov (United States)

    Jeppesen, J; Beniczky, S; Fuglsang Frederiksen, A; Sidenius, P; Johansen, P

    2017-07-01

    Earlier studies have shown that short-term heart rate variability (HRV) analysis of the ECG seems promising for the detection of epileptic seizures. A precise and accurate automatic R-peak detection algorithm is a necessity for real-time, continuous HRV measurement in a portable ECG device. We used the portable CE-marked ePatch® heart monitor to record the ECG of 14 patients, who were enrolled in the video-EEG long-term monitoring unit for clinical workup of epilepsy. Recordings of the first 7 patients were used as the training set for the R-peak detection algorithm, and recordings of the last 7 patients (467.6 recording hours) were used to test the performance of the algorithm. We aimed to modify an existing QRS-detection algorithm into a more precise R-peak detection algorithm, to avoid the jitter that Q- and S-peaks can create in the tachogram, which causes errors in short-term HRV analysis. The proposed R-peak detection algorithm showed a high sensitivity (Se = 99.979%) and positive predictive value (P+ = 99.976%), comparable with a previously published QRS-detection algorithm for the ePatch® ECG device when tested on the same dataset. The novel R-peak detection algorithm, designed to avoid jitter, has very high sensitivity and specificity and is thus a suitable tool for robust, fast, real-time HRV analysis in patients with epilepsy, creating the possibility of real-time seizure detection for these patients.
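    The core refinement idea — snapping a coarse QRS detection to the true R-peak so that Q- and S-wave fiducials do not inject jitter into the tachogram — can be sketched as below. `refine_r_peaks` and its search window are illustrative assumptions, not the published ePatch® algorithm.

    ```python
    import numpy as np

    def refine_r_peaks(ecg, qrs_indices, fs, window_ms=80):
        """Snap each coarse QRS detection to the local maximum of the ECG
        within +/- window_ms/2, so RR intervals are measured between true
        R-peaks rather than jittery Q- or S-wave positions."""
        half = int(round(window_ms / 1000.0 * fs / 2))
        r_peaks = []
        for i in qrs_indices:
            lo = max(0, i - half)
            hi = min(len(ecg), i + half + 1)
            r_peaks.append(lo + int(np.argmax(ecg[lo:hi])))
        return np.array(r_peaks)
    ```

    The refined indices feed directly into the tachogram (successive RR intervals) used for short-term HRV analysis.
    
    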

  9. NSAMD: A new approach to discover structured contiguous substrings in sequence datasets using Next-Symbol-Array.

    Science.gov (United States)

    Pari, Abdolvahed; Baraani, Ahmad; Parseh, Saeed

    2016-10-01

    In many sequence data mining applications, the goal is to find frequent substrings. Some of these applications, such as extracting motifs from protein and DNA sequences, look for frequently occurring approximate contiguous substrings called simple motifs. By approximate we mean that some mismatches are allowed during the similarity test between substrings, which helps to discover unknown patterns. Structured motifs in DNA sequences are frequent structured contiguous substrings that contain two or more simple motifs. Several algorithms have been proposed to find simple motifs, but they suffer from problems such as low scalability, high execution time, no guarantee of finding all patterns, and poor flexibility in adapting to other applications. Flame is the only algorithm that can find all unknown structured patterns in a dataset and has solved most of these problems, but its scalability to very large sequences is still weak. In this research, a new approach named Next-Symbol-Array based Motif Discovery (NSAMD) is presented to improve scalability in extracting all unknown simple and structured patterns. To reach this goal, a new data structure called the Next-Symbol-Array is introduced. This data structure changes how NSAMD finds patterns compared with Flame and helps it find structured motifs faster. The proposed algorithm is as accurate as Flame and extracts all patterns present in the dataset. Performance comparisons show that NSAMD outperforms Flame in extracting structured motifs in both execution time (51% faster) and memory usage (more than 99%). The proposed algorithm is slower at extracting simple motifs, but its considerable improvement in memory usage (more than 99%) makes NSAMD more scalable than Flame. This advantage is particularly important in biological applications involving very large sequences. Copyright © 2016 Elsevier Ltd. All rights reserved.
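    The task NSAMD and Flame solve scalably — finding approximate contiguous substrings (simple motifs) under a mismatch tolerance — can be stated with a deliberately brute-force sketch. The function name and parameters below are hypothetical; the quadratic window-vs-window comparison is exactly what the Next-Symbol-Array structure is designed to avoid.

    ```python
    def count_approximate_motifs(sequence, length, max_mismatch, min_count):
        """Brute-force simple-motif discovery: every window of the given
        length is compared against every other window, and a window is a
        motif if enough windows match it within max_mismatch substitutions."""
        windows = [sequence[i:i + length]
                   for i in range(len(sequence) - length + 1)]

        def hamming(a, b):
            return sum(x != y for x, y in zip(a, b))

        motifs = {}
        for w in set(windows):
            support = sum(1 for v in windows if hamming(w, v) <= max_mismatch)
            if support >= min_count:
                motifs[w] = support
        return motifs
    ```

    A structured motif would then be two or more such simple motifs occurring in order with constrained gaps; this sketch covers only the simple-motif step.
    
    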

  10. PHOTOGRAMMETRIC MODEL BASED METHOD OF AUTOMATIC ORIENTATION OF SPACE CARGO SHIP RELATIVE TO THE INTERNATIONAL SPACE STATION

    Directory of Open Access Journals (Sweden)

    Y. B. Blokhinov

    2012-07-01

    Full Text Available The technical problem of creating the new Russian version of an automatic Space Cargo Ship (SCS) for the International Space Station (ISS) is inseparably connected to the development of a digital video system for automatically measuring the SCS position relative to the ISS during spacecraft docking. This paper presents a method for estimating the orientation elements based on the use of a highly detailed digital model of the ISS. The input data are digital frames from a calibrated video system and initial values of the orientation elements, which can be estimated from navigation devices or by a fast-and-rough viewpoint-dependent algorithm. The orientation elements are then refined precisely by algorithmic processing. The main idea is to solve the exterior orientation problem mainly on the basis of contour information from the frame image of the ISS instead of ground control points. The detailed digital model is used to generate raster templates of ISS nodes; the templates are used to detect and locate the nodes on the target image with the required accuracy. The process is performed for every frame, and the resulting parameters are taken as the orientation elements. A Kalman filter is used for statistical support of the estimation process and real-time pose tracking. Finally, the modeling results presented show that the proposed method can be regarded as one means of ensuring the algorithmic support of automatic spaceship docking.
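    The Kalman filter's role in the pipeline above — fusing each frame's noisy exterior-orientation estimate with a motion prediction to yield a smooth pose track — can be illustrated with a minimal constant-velocity filter over a single pose coordinate. This is a generic textbook sketch under assumed noise parameters, not the authors' system.

    ```python
    import numpy as np

    def kalman_track(measurements, dt=1.0, q=1e-3, r=1e-2):
        """Constant-velocity Kalman filter over one pose coordinate:
        state = [position, velocity]; each per-frame measurement of the
        position is fused with the predicted state."""
        F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition
        H = np.array([[1.0, 0.0]])              # we observe position only
        Q = q * np.eye(2)                       # process noise covariance
        R = np.array([[r]])                     # measurement noise covariance
        x = np.array([measurements[0], 0.0])
        P = np.eye(2)
        track = []
        for z in measurements:
            # predict step
            x = F @ x
            P = F @ P @ F.T + Q
            # update step
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            x = x + K @ (np.array([z]) - H @ x)
            P = (np.eye(2) - K @ H) @ P
            track.append(x[0])
        return np.array(track)
    ```

    In the docking application the state would carry all six orientation elements and their rates; the one-dimensional version shows the predict/update structure.
    
    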

  11. Dicty_cDB: Contig-U07545-1 [Dicty_cDB

    Lifescience Database Archive (English)

    Full Text Available Contig-U07545-1 no gap 439 3 4955441 4955098 MINUS 1 1 U07545 0 0 0 0 0 0 1 0 0 0 0 0 0 0 Show Contig...-U07545-1 Contig ID Contig-U07545-1 Contig update 2002. 5. 9 Contig sequence >Contig-U07545-1 (Contig...-U07545-1Q) /CSM_Contig/Contig-U07545-1Q.Seq.d ATATGAAATACTTAATACTTTTAATTTTCCTTTTAATAAATTCAACTTTT...ATGTTTCAGAGTCTGGTTG Gap no gap Contig length 439 Chromosome number (1..6, M) 3 Chromosome length 6358359 Sta...e MKYLILLIFLLINSTFGNIQFSKYISNSGNDNNSCGSFTSPCKTIGYSIQQIKSYEYNQY SIEILLDSGNYYSQNPINLYGLNISISAQNSNDLVQFLVPNINGT

  12. Dicty_cDB: Contig-U15359-1 [Dicty_cDB

    Lifescience Database Archive (English)

    Full Text Available Contig-U15359-1 no gap 1420 6 1334613 1333192 MINUS 3 3 U15359 0 1 0 0 1 0 0 0 0 1 0 0 0 0 Show Contig...-U15359-1 Contig ID Contig-U15359-1 Contig update 2004. 6.11 Contig sequence >Contig-U15359-1 (Contig...-U15359-1Q) /CSM_Contig/Contig-U15359-1Q.Seq.d TATAGCATCATTTGCAAAGTTTAGTTTAAAGAAAAAAGAGAAAGCGGAA...A AAAAAAACTGGAAAAATTAA Gap no gap Contig length 1420 Chromosome number (1..6, M) 6 Chromosome length 3595308...SSGF DEPSLAVMYVDRALKGASAVQTIGRLSRVSKGKNACYIVDFVNTRREISDAFGQYWRETC LKGETRKTVLELKLNRVLGKLSAIEPLANGRLEESVEYILRD

  13. Dicty_cDB: Contig-U09581-1 [Dicty_cDB

    Lifescience Database Archive (English)

    Full Text Available Contig-U09581-1 gap included 1235 1 2575525 2576764 PLUS 1 2 U09581 0 0 1 0 0 0 0 0 0 0 0 0 0 0 Show Contig...-U09581-1 Contig ID Contig-U09581-1 Contig update 2002. 9.13 Contig sequence >Contig-U09581-1 (Contig-U09581-1Q) /CSM_Contig/Contig-U09581...ATCAAAATAAATTTTTGTAACATTAATAATAAATAAN Gap gap included Contig length 1235 Chromosome number (1..6, M) 1 Chro... VFD420Z ,579,1237 Translated Amino Acid sequence KKPGVVTIKGSSFCSQPTITIGDDSCSQPILSVGNDYDSLTCNFQSNAGLSNSTLLVS...ames) Frame A: KKPGVVTIKGSSFCSQPTITIGDDSCSQPILSVGNDYDSLTCNFQSNAGLSNSTLLVSII CDTIQ

  14. Dicty_cDB: Contig-U04729-1 [Dicty_cDB

    Lifescience Database Archive (English)

    Full Text Available Contig-U04729-1 no gap 251 5 1037629 1037880 PLUS 1 1 U04729 0 0 0 0 0 0 1 0 0 0 0 0 0 0 Show Contig...-U04729-1 Contig ID Contig-U04729-1 Contig update 2001. 8.29 Contig sequence >Contig-U04729-1 (Contig...-U04729-1Q) /CSM_Contig/Contig-U04729-1Q.Seq.d TGGATTTATAACAGAGGTTATTGTAGGTGGTAAAACTTTTAGAGGAATCG ...CATTATCTAATGGG T Gap no gap Contig length 251 Chromosome number (1..6, M) 5 Chromosome length 5062330 Start ...ITEVIVGGKTFRGIVFEDLKSSNQTNNHSQNFSPNQSGTNLNNSNSNIPSSKKIKDKN ISPSSFLPTIGSTTSTSNPLSNG Translated Amino Acid seq

  15. Dicty_cDB: Contig-U06929-1 [Dicty_cDB

    Lifescience Database Archive (English)

    Full Text Available Contig-U06929-1 no gap 726 5 4252576 4251850 MINUS 1 1 U06929 1 0 0 0 0 0 0 0 0 0 0 0 0 0 Show Contig...-U06929-1 Contig ID Contig-U06929-1 Contig update 2001. 8.30 Contig sequence >Contig-U06929-1 (Contig...-U06929-1Q) /CSM_Contig/Contig-U06929-1Q.Seq.d AGTTCATTCATTTAGTCGTATGATAGTATCACCATTTATAAATCCAAAAT...TAAATTAAATAAATA Gap no gap Contig length 726 Chromosome number (1..6, M) 5 Chromosome length 5062330 Start p...PSAISNNSNNS NNNDDNRPPILGLPFLFDYKNRITRGSRFFETIHYKIVHVTSATEFGIRRISKLYGTKWQ LEIGLKHQITQSGALQCLFTHTIGQTTIFGLSFGF

  16. Dicty_cDB: Contig-U15828-1 [Dicty_cDB

    Lifescience Database Archive (English)

    Full Text Available Contig-U15828-1 gap included 1593 1 4184040 4182448 MINUS 12 19 U15828 0 0 6 0 0 0 ...0 0 2 0 4 0 0 0 Show Contig-U15828-1 Contig ID Contig-U15828-1 Contig update 2004. 6.11 Contig sequence >Contig-U15828-1 (Contig...-U15828-1Q) /CSM_Contig/Contig-U15828-1Q.Seq.d ATAAAAAAAATTAAAAAATTAAAAAAGTTATCCACCCAAGT...ACA AATATTATAACTGGTACTGCTACTGTTTCAATCCCTCAAAAAAATTTAAT TTATATTTTACCAAATTCAAATACAATTAATCAATCAACAATTACAATTA CAA Gap gap included Contig...SFNPANSDFSFSYNINTTITQPTQIYLNQDIYYPNGFTTNIITGTATVSIPQ KNLIYILPNSNTINQSTITIT own update 2004. 6.23 Homology vs CSM-cDNA Query= Contig

  17. Dicty_cDB: Contig-U15525-1 [Dicty_cDB

    Lifescience Database Archive (English)

    Full Text Available Contig-U15525-1 gap included 3361 6 202399 204109 PLUS 34 57 U15525 0 0 7 0 7 6 0 0 4 3 7 0 0 0 Show Contig...-U15525-1 Contig ID Contig-U15525-1 Contig update 2004. 6.11 Contig sequence >Contig-U15525-1 (Contig-U15525-1Q) /CSM_Contig/Contig-U15525...ATTTAATTAAATAATAATA Gap gap included Contig length 3361 Chromosome number (1..6, M) 6 Chromosome length 3595...TEATCLILSVD ETVQNNQAEQAQAGPQINNQTRQALSRVEVFKQ--- ---LDTIGIKKESGGGLGDSQFIAGAAFKRTFFYAGFEQQPKHIKNPKVLCLNIELELK...lslnsiqslpqlkqlv*ssll mkpfkiiklnklklvhklitkhvklyhg*rcss--- ---LDTIGIKKESGGGLGDSQFIAGAAFKRTFFYAGFEQQPKHIKNPKV

  18. Dicty_cDB: Contig-U01750-1 [Dicty_cDB

    Lifescience Database Archive (English)

    Full Text Available Contig-U01750-1 no gap 811 3 3337090 3336279 MINUS 2 2 U01750 1 0 0 0 0 0 1 0 0 0 0 0 0 0 Show Contig...-U01750-1 Contig ID Contig-U01750-1 Contig update 2001. 8.29 Contig sequence >Contig-U01750-1 (Contig...-U01750-1Q) /CSM_Contig/Contig-U01750-1Q.Seq.d GGAAGTTGTAATAATAAAAAAATAAAAATAAAAATAAAAAAATAAAAAAA...GAATACCAAGGTGAAAGAATTTTTCAAAAACTTCCTCAA ATCAACACAAATTTCGAAAAATTAACAATTTGGGAAAAGAAAATCGTTTC AAATCTTTATT Gap no gap Contig...crncnciwsktl*tywiyskiinpi**i*ipr *knfsktssnqhkfrkinnlgkenrfksl own update 2004. 6. 7 Homology vs CSM-cDNA Query= Contig

  19. Dicty_cDB: Contig-U09822-1 [Dicty_cDB

    Lifescience Database Archive (English)

    Full Text Available Contig-U09822-1 gap included 1255 3 5930658 5929418 MINUS 5 6 U09822 3 0 2 0 0 0 0 0 0 0 0 0 0 0 Show Contig...-U09822-1 Contig ID Contig-U09822-1 Contig update 2002. 9.13 Contig sequence >Contig-U09822-1 (Contig-U09822-1Q) /CSM_Contig/Contig-U0982...AAAAGAAAAAAAAAAAAAAAAGATTTAATTAAATAAAAAAAAA AAAAAAAAAAAAAAA Gap gap included Contig length 1255 Chromosome n...,975 est6= VSA519Z ,780,1257 Translated Amino Acid sequence QPFYLVQSMFEPIQDSSFTSIGEIISYDTIG...rfn*ikkkkkk k Frame C: QPFYLVQSMFEPIQDSSFTSIGEIISYDTIGFDGKINTAVMSSLSPSTMYFYCVGDKS

  20. Dicty_cDB: Contig-U11883-1 [Dicty_cDB

    Lifescience Database Archive (English)

    Full Text Available Contig-U11883-1 gap included 599 2 1457179 1457762 PLUS 1 2 U11883 0 0 0 0 0 0 0 0 0 1 0 0 0 0 Show Contig...-U11883-1 Contig ID Contig-U11883-1 Contig update 2002.12.18 Contig sequence >Contig-U11883-1 (Contig...-U11883-1Q) /CSM_Contig/Contig-U11883-1Q.Seq.d TACAAAATTTATATATATATATAATATTTTTAAATAATTATATTT...ATTTAGATGTATTTGGTATTCAAACATTA ACCGAACAACAAGCCTCTACAAAATTATTAACTTTTGTCATTTCAAAATC AGGTGAAAA Gap gap included Contig...ffkixn*kikkgfhvkxksflwfkxxx--- ---xxxx******************yprkyiniti*rn*kdil*ii*rne*rergtksc* nifs*kestpl*fnsxfktniilfstvfnttnvstig

  1. Dicty_cDB: Contig-U07021-1 [Dicty_cDB

    Lifescience Database Archive (English)

    Full Text Available Contig-U07021-1 no gap 601 2 3862699 3862098 MINUS 1 2 U07021 1 0 0 0 0 0 0 0 0 0 0 0 0 0 Show Contig...-U07021-1 Contig ID Contig-U07021-1 Contig update 2001. 8.30 Contig sequence >Contig-U07021-1 (Contig...-U07021-1Q) /CSM_Contig/Contig-U07021-1Q.Seq.d AAAAAAACAAAATGAATAAATTTAATATTACATCATTATTTATTATTTTA...TTTAATATATTCAGAAGGAAATTC TTATTTACAACAAAATTTCCCATTACTTTCTTANTTAAANTCCGTTAAAA T Gap no gap Contig length 601 C...QACCRTTQLFINYADNSFLDSAGFSPFGKVISGFNNTLNFYGGYGEEPDQSLIYSE GNSYLQQNFPLLSXLXSVK own update 2004. 6.10 Homology vs CSM-cDNA Query= Contig

  2. Dicty_cDB: Contig-U13680-1 [Dicty_cDB

    Lifescience Database Archive (English)

    Full Text Available Contig-U13680-1 no gap 822 5 2371965 2372786 PLUS 2 2 U13680 0 2 0 0 0 0 0 0 0 0 0 0 0 0 Show Contig...-U13680-1 Contig ID Contig-U13680-1 Contig update 2002.12.18 Contig sequence >Contig-U13680-1 (Contig...-U13680-1Q) /CSM_Contig/Contig-U13680-1Q.Seq.d AAAAAGATTCTCAAGGAATTCACCGTGTTTATACTTCTTATGGTAGAACT ...GGGAATCAATGATTTAAATATCTACCAAATTCAAAAGG AAGGTGATGTCGAGTCACATTCATTACAATCACCATCGAAATTATTATTT CATGGTTCAAGAGCATCGAATT Gap no gap Contig...**sirtinkdig*kslc*snhsidk*ffsynh*twy*ntigclingt s*kw*tcfeknqylfewynqsiisrvgeikfrifhnyst*tw*rfrcclkeyh*kfgsie

  3. Dicty_cDB: Contig-U15718-1 [Dicty_cDB

    Lifescience Database Archive (English)

    Full Text Available Contig-U15718-1 gap included 3735 6 2645446 2642451 MINUS 153 276 U15718 0 0 0 118 ...1 0 0 20 3 10 1 0 0 0 Show Contig-U15718-1 Contig ID Contig-U15718-1 Contig update 2004. 6.11 Contig sequence >Contig...-U15718-1 (Contig-U15718-1Q) /CSM_Contig/Contig-U15718-1Q.Seq.d AAATTATTAAATTGTTTATTAATTTTTTTTTTTAC...CCTG Gap gap included Contig length 3735 Chromosome number (1..6, M) 6 Chromosome length 3595308 Start point...ptqtppptqtpt nhsigvnecdccpegqycllifghercfiandggdgipeetigcpgvttgtptstdggtg hytesgtgnphlcdrhhcrsgmechvingipecl

  4. Dicty_cDB: Contig-U15573-1 [Dicty_cDB

    Lifescience Database Archive (English)

    Full Text Available Contig-U15573-1 gap included 2005 4 5020093 5018210 MINUS 13 13 U15573 0 5 0 1 1 0 ...0 0 0 1 1 0 2 2 Show Contig-U15573-1 Contig ID Contig-U15573-1 Contig update 2004. 6.11 Contig sequence >Contig-U15573-1 (Contig...-U15573-1Q) /CSM_Contig/Contig-U15573-1Q.Seq.d AGTCTTGAGCTTTTATTGGGTCAACCATTGGGTGAATATAC... AGCNTTAACNGGNAA Gap gap included Contig length 2005 Chromosome number (1..6, M) ...xxlfrsnxslxxxxxxsxnxx Frame C: s*afigstig*iyiylkrfhlfl*skryyqskw*fkifpilkqttiiyen

  5. Dicty_cDB: Contig-U01204-1 [Dicty_cDB

    Lifescience Database Archive (English)

    Full Text Available Contig-U01204-1 gap included 918 2 1928287 1927368 MINUS 2 3 U01204 0 0 0 0 0 2 0 0 0 0 0 0 0 0 Show Contig...-U01204-1 Contig ID Contig-U01204-1 Contig update 2001. 8.29 Contig sequence >Contig-U01204-1 (Contig-U01204-1Q) /CSM_Contig/Contig-U01204...AAAAATAATAA Gap gap included Contig length 918 Chromosome number (1..6, M) 2 Chromosome length 8467578 Start...LAWEVFWVGTPLFVLMASAFNQIHWALAWVLMVIILQSGFMN--- ---QHSHTIGNETIIIVMDSWVVDQIPDQVSWMEQ...fgwvlhyly*whqhsikfighwhgy*w*sfynlvl*--- ---QHSHTIGNETIIIVMDSWVVDQIPDQVSWMEQVLSDNN

  6. Dicty_cDB: Contig-U12043-1 [Dicty_cDB

    Lifescience Database Archive (English)

    Full Text Available Contig-U12043-1 gap included 1898 6 2694437 2692539 MINUS 7 13 U12043 0 6 0 0 0 0 0... 1 0 0 0 0 0 0 Show Contig-U12043-1 Contig ID Contig-U12043-1 Contig update 2002.12.18 Contig sequence >Contig-U12043-1 (Contig...-U12043-1Q) /CSM_Contig/Contig-U12043-1Q.Seq.d GAAACCATTCGTTTAAAGAAATGAAATATTTATATATATTAA...ATAAA AATAAATT Gap gap included Contig length 1898 Chromosome number (1..6, M) 6 Chromosome length 3595308 S...VPDIVSGILASKYASITLLNSGEM DLTNGITIGLLENSTSDQLFQINPILNTSLTNILVGQRFSIPFEISIKDSTISNQL

  7. Dicty_cDB: Contig-U16467-1 [Dicty_cDB

    Lifescience Database Archive (English)

    Full Text Available Contig-U16467-1 no gap 1261 2 7818565 7817305 MINUS 17 18 U16467 0 0 5 0 1 2 1 0 6 0 0 1 1 0 Show Contig...-U16467-1 Contig ID Contig-U16467-1 Contig update 2004. 6.11 Contig sequence >Contig-U16467-1 (Contig...-U16467-1Q) /CSM_Contig/Contig-U16467-1Q.Seq.d CAACAATTAACATTACTTAAATATAATATTATTATATTTTTTTTTTT...TTCAAATAAATAATTGTTTAGAAATTTCTAGAAAAAAAA AAAAAAAAAAA Gap no gap Contig length 1261 Chromosome number (1..6, M...LK833Z ,1005,1249 Translated Amino Acid sequence qqltllkyniiiffffyllplhlyhy**LKKKTLTIIKYFFQKMNKIALLFTIFFALFAI SFACDEFNPNTSTIG

  8. An algorithm developed in Matlab for the automatic selection of cut-off frequencies, in the correction of strong motion data

    Science.gov (United States)

    Sakkas, Georgios; Sakellariou, Nikolaos

    2018-05-01

Strong motion recordings are key to many earthquake engineering applications and are also fundamental for seismic design. The present study focuses on the automated correction of accelerograms, both analog and digital. The main feature of the proposed algorithm is the automatic selection of the cut-off frequencies based on a minimum spectral value in a predefined frequency bandwidth, instead of the typical signal-to-noise approach. The algorithm follows the basic steps of the correction procedure (instrument correction, baseline correction and appropriate filtering). Besides the corrected time histories, Peak Ground Acceleration, Peak Ground Velocity and Peak Ground Displacement values, the corrected Fourier spectra and the response spectra are also calculated. The algorithm is written in the Matlab environment, is fast, and can be used for batch processing or in real-time applications. In addition, it offers the option of performing a signal-to-noise ratio check and of applying causal or acausal filtering. The algorithm has been tested on six significant earthquakes of the Greek territory (Kozani-Grevena 1995, Aigio 1995, Athens 1999, Lefkada 2003 and Kefalonia 2014) with analog and digital accelerograms.
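
The cut-off selection idea described above can be sketched in a few lines. This is a hedged illustration, not the published Matlab code: the band limits, the crude FFT-bin high-pass (standing in for a proper causal or acausal filter), and the function name are all our assumptions.

```python
import numpy as np

def correct_record(acc, dt, band=(0.05, 1.0)):
    """Pick the high-pass cut-off at the minimum of the Fourier amplitude
    spectrum inside a predefined low-frequency band, then apply a crude
    acausal high-pass by zeroing the FFT bins below that cut-off.
    The band limits are illustrative defaults, not values from the paper."""
    n = len(acc)
    freqs = np.fft.rfftfreq(n, dt)
    spectrum = np.fft.rfft(acc)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    # minimum spectral amplitude in the band defines the cut-off frequency
    f_cut = freqs[in_band][np.argmin(np.abs(spectrum[in_band]))]
    spectrum[freqs < f_cut] = 0.0
    return f_cut, np.fft.irfft(spectrum, n)
```

A real implementation would replace the bin-zeroing step with the Butterworth-style causal or acausal filtering the abstract mentions.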

  9. AUTOMATA PROGRAMS CONSTRUCTION FROM SPECIFICATION WITH AN ANT COLONY OPTIMIZATION ALGORITHM BASED ON MUTATION GRAPH

    Directory of Open Access Journals (Sweden)

    Daniil S. Chivilikhin

    2014-11-01

Full Text Available The testing procedure traditionally used in software engineering cannot guarantee program correctness; therefore, verification is applied when the reliability requirements on programs are strict. Verification makes it possible to check certain properties of programs in all possible computational states; however, this process is very complex. In the model checking method, a model of the program is built (often manually) and requirements are formulated in terms of temporal logic. Such temporal properties of the model can be checked automatically. The main issue in this framework is the gap between the program and its model. The automata-based programming paradigm makes it possible to overcome this limitation. In this paradigm, program logic is represented using finite-state machines, whose advantage is that their models can be constructed automatically. The paper deals with the application of a mutation-based ant colony optimization algorithm to the problem of constructing finite-state machines from their specification, defined by test scenarios and temporal properties. The presented approach has been tested on the elevator doors control problem as well as on randomly generated data. The obtained results show that the ant colony algorithm is two to three times faster than the previously used genetic algorithm. The proposed approach can be recommended for inferring control programs for critical systems.

  10. Fast compact algorithms and software for spline smoothing

    CERN Document Server

    Weinert, Howard L

    2012-01-01

    Fast Compact Algorithms and Software for Spline Smoothing investigates algorithmic alternatives for computing cubic smoothing splines when the amount of smoothing is determined automatically by minimizing the generalized cross-validation score. These algorithms are based on Cholesky factorization, QR factorization, or the fast Fourier transform. All algorithms are implemented in MATLAB and are compared based on speed, memory use, and accuracy. An overall best algorithm is identified, which allows very large data sets to be processed quickly on a personal computer.

  11. 46 CFR 154.178 - Contiguous hull structure: Heating system.

    Science.gov (United States)

    2010-10-01

    ... 46 Shipping 5 2010-10-01 2010-10-01 false Contiguous hull structure: Heating system. 154.178... Equipment Hull Structure § 154.178 Contiguous hull structure: Heating system. The heating system for transverse and longitudinal contiguous hull structure must: (a) Be shown by a heat load calculation to have...

  12. A Novel Algorithm for Intrusion Detection Based on RASL Model Checking

    Directory of Open Access Journals (Sweden)

    Weijun Zhu

    2013-01-01

    Full Text Available The interval temporal logic (ITL model checking (MC technique enhances the power of intrusion detection systems (IDSs to detect concurrent attacks due to the strong expressive power of ITL. However, an ITL formula suffers from difficulty in the description of the time constraints between different actions in the same attack. To address this problem, we formalize a novel real-time interval temporal logic—real-time attack signature logic (RASL. Based on such a new logic, we put forward a RASL model checking algorithm. Furthermore, we use RASL formulas to describe attack signatures and employ discrete timed automata to create an audit log. As a result, RASL model checking algorithm can be used to automatically verify whether the automata satisfy the formulas, that is, whether the audit log coincides with the attack signatures. The simulation experiments show that the new approach effectively enhances the detection power of the MC-based intrusion detection methods for a number of telnet attacks, p-trace attacks, and the other sixteen types of attacks. And these experiments indicate that the new algorithm can find several types of real-time attacks, whereas the existing MC-based intrusion detection approaches cannot do that.

  13. Algorithm of automatic generation of technology process and process relations of automotive wiring harnesses

    Institute of Scientific and Technical Information of China (English)

    XU Benzhu; ZHU Jiman; LIU Xiaoping

    2012-01-01

Identifying each process and its constraint relations quickly and accurately from complex wiring harness drawings is the basis for formulating process routes. Based on knowledge of automotive wiring harnesses and the characteristics of wiring harness components, we established a model of the wiring harness graph. We then investigated an algorithm for identifying technology processes automatically, and finally we describe the relations between processes by introducing a constraint matrix, in order to lay a good foundation for harness process planning and production scheduling.

  14. Sensitivity analysis and design optimization through automatic differentiation

    International Nuclear Information System (INIS)

    Hovland, Paul D; Norris, Boyana; Strout, Michelle Mills; Bhowmick, Sanjukta; Utke, Jean

    2005-01-01

    Automatic differentiation is a technique for transforming a program or subprogram that computes a function, including arbitrarily complex simulation codes, into one that computes the derivatives of that function. We describe the implementation and application of automatic differentiation tools. We highlight recent advances in the combinatorial algorithms and compiler technology that underlie successful implementation of automatic differentiation tools. We discuss applications of automatic differentiation in design optimization and sensitivity analysis. We also describe ongoing research in the design of language-independent source transformation infrastructures for automatic differentiation algorithms
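
The core trick behind forward-mode automatic differentiation can be shown with dual numbers. This toy class is only a conceptual sketch; it is not one of the source-transformation tools the abstract surveys, and all names in it are invented for the illustration.

```python
import math

class Dual:
    """A value paired with its derivative; arithmetic propagates both."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def _lift(self, o):
        return o if isinstance(o, Dual) else Dual(o)
    def __add__(self, o):
        o = self._lift(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __mul__(self, o):
        o = self._lift(o)
        # product rule: (uv)' = u'v + uv'
        return Dual(self.val * o.val, self.dot * o.val + self.val * o.dot)
    __rmul__ = __mul__

def sin(x):
    # chain rule: d/dx sin(u) = cos(u) * u'
    return Dual(math.sin(x.val), math.cos(x.val) * x.dot)

# derivative of f(x) = x*sin(x) + 3x at x = 2, by seeding dx/dx = 1
x = Dual(2.0, 1.0)
y = x * sin(x) + 3 * x
```

Source-transformation AD tools achieve the same propagation by rewriting the program text rather than overloading operators at run time.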

  15. ATIPS: Automatic Travel Itinerary Planning System for Domestic Areas

    Science.gov (United States)

    2016-01-01

    Leisure travel has become a topic of great interest to Taiwanese residents in recent years. Most residents expect to be able to relax on a vacation during the holidays; however, the complicated procedure of travel itinerary planning is often discouraging and leads them to abandon the idea of traveling. In this paper, we design an automatic travel itinerary planning system for the domestic area (ATIPS) using an algorithm to automatically plan a domestic travel itinerary based on user intentions that allows users to minimize the process of trip planning. Simply by entering the travel time, the departure point, and the destination location, the system can automatically generate a travel itinerary. According to the results of the experiments, 70% of users were satisfied with the result of our system, and 82% of users were satisfied with the automatic user preference learning mechanism of ATIPS. Our algorithm also provides a framework for substituting modules or weights and offers a new method for travel planning. PMID:26839529

  16. ATIPS: Automatic Travel Itinerary Planning System for Domestic Areas

    Directory of Open Access Journals (Sweden)

    Hsien-Tsung Chang

    2016-01-01

    Full Text Available Leisure travel has become a topic of great interest to Taiwanese residents in recent years. Most residents expect to be able to relax on a vacation during the holidays; however, the complicated procedure of travel itinerary planning is often discouraging and leads them to abandon the idea of traveling. In this paper, we design an automatic travel itinerary planning system for the domestic area (ATIPS using an algorithm to automatically plan a domestic travel itinerary based on user intentions that allows users to minimize the process of trip planning. Simply by entering the travel time, the departure point, and the destination location, the system can automatically generate a travel itinerary. According to the results of the experiments, 70% of users were satisfied with the result of our system, and 82% of users were satisfied with the automatic user preference learning mechanism of ATIPS. Our algorithm also provides a framework for substituting modules or weights and offers a new method for travel planning.

  17. ATIPS: Automatic Travel Itinerary Planning System for Domestic Areas.

    Science.gov (United States)

    Chang, Hsien-Tsung; Chang, Yi-Ming; Tsai, Meng-Tze

    2016-01-01

    Leisure travel has become a topic of great interest to Taiwanese residents in recent years. Most residents expect to be able to relax on a vacation during the holidays; however, the complicated procedure of travel itinerary planning is often discouraging and leads them to abandon the idea of traveling. In this paper, we design an automatic travel itinerary planning system for the domestic area (ATIPS) using an algorithm to automatically plan a domestic travel itinerary based on user intentions that allows users to minimize the process of trip planning. Simply by entering the travel time, the departure point, and the destination location, the system can automatically generate a travel itinerary. According to the results of the experiments, 70% of users were satisfied with the result of our system, and 82% of users were satisfied with the automatic user preference learning mechanism of ATIPS. Our algorithm also provides a framework for substituting modules or weights and offers a new method for travel planning.

  18. Algorithm for detecting violations of traffic rules based on computer vision approaches

    Directory of Open Access Journals (Sweden)

    Ibadov Samir

    2017-01-01

Full Text Available We propose a new algorithm for the automatic detection of traffic-rule violations, aimed at improving pedestrian safety at unregulated pedestrian crossings. The algorithm proceeds in several steps: zebra crossing detection, car detection, and pedestrian detection. For car detection, we use the Faster R-CNN deep learning tool. The algorithm shows promising results in detecting violations of traffic rules.

  19. Maritime over the Horizon Sensor Integration: High Frequency Surface-Wave-Radar and Automatic Identification System Data Integration Algorithm.

    Science.gov (United States)

    Nikolic, Dejan; Stojkovic, Nikola; Lekic, Nikola

    2018-04-09

    To obtain the complete operational picture of the maritime situation in the Exclusive Economic Zone (EEZ) which lies over the horizon (OTH) requires the integration of data obtained from various sensors. These sensors include: high frequency surface-wave-radar (HFSWR), satellite automatic identification system (SAIS) and land automatic identification system (LAIS). The algorithm proposed in this paper utilizes radar tracks obtained from the network of HFSWRs, which are already processed by a multi-target tracking algorithm and associates SAIS and LAIS data to the corresponding radar tracks, thus forming an integrated data pair. During the integration process, all HFSWR targets in the vicinity of AIS data are evaluated and the one which has the highest matching factor is used for data association. On the other hand, if there is multiple AIS data in the vicinity of a single HFSWR track, the algorithm still makes only one data pair which consists of AIS and HFSWR data with the highest mutual matching factor. During the design and testing, special attention is given to the latency of AIS data, which could be very high in the EEZs of developing countries. The algorithm is designed, implemented and tested in a real working environment. The testing environment is located in the Gulf of Guinea and includes a network of HFSWRs consisting of two HFSWRs, several coastal sites with LAIS receivers and SAIS data provided by provider of SAIS data.
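
The association step can be illustrated with a greedy gate-and-score pairing. The 1/(1+d) matching factor, the 5 nautical-mile gate, and the data layout below are invented for the sketch; they are not the paper's formulation, which also weighs track kinematics and AIS latency.

```python
import math

def haversine_nm(p, q):
    """Great-circle distance in nautical miles between (lat, lon) in degrees."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 3440.065 * math.asin(math.sqrt(a))

def associate(ais_reports, radar_tracks, gate_nm=5.0):
    """Pair each AIS report with the radar track of highest matching factor
    inside a distance gate; if several AIS reports compete for one track,
    keep only the pair with the highest mutual matching factor."""
    pairs = {}
    for aid, apos in ais_reports.items():
        best, best_mf = None, 0.0
        for tid, tpos in radar_tracks.items():
            d = haversine_nm(apos, tpos)
            if d <= gate_nm:
                mf = 1.0 / (1.0 + d)      # illustrative matching factor
                if mf > best_mf:
                    best, best_mf = tid, mf
        if best is not None and (best not in pairs or best_mf > pairs[best][1]):
            pairs[best] = (aid, best_mf)
    return {tid: aid for tid, (aid, _) in pairs.items()}
```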

  20. An AK-LDMeans algorithm based on image clustering

    Science.gov (United States)

    Chen, Huimin; Li, Xingwei; Zhang, Yongbin; Chen, Nan

    2018-03-01

Clustering is an effective analytical technique for mining value from unlabeled data; its ultimate goal is to label unclassified data quickly and correctly. We use road maps from current image processing as the experimental background. In this paper, we propose an AK-LDMeans algorithm that automatically locks the value of K by designing the Kcost fold line and then uses a long-distance, high-density method to select the clustering centers, replacing the traditional selection of initial clustering centers and thereby improving the efficiency and accuracy of the traditional K-Means algorithm. The experimental results are compared with those of current clustering algorithms. The algorithm can provide an effective reference in the fields of image processing, machine vision and data mining.
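
Our reading of the two ingredients can be sketched as follows: the "Kcost fold line" is interpreted as an elbow criterion on the K-versus-cost curve, and the long-distance centre selection is approximated by a deterministic farthest-first traversal. Both interpretations are assumptions, not the authors' exact definitions.

```python
import numpy as np

def kmeans(X, k, iters=100):
    # farthest-first initialisation: a stand-in for the paper's
    # "long-distance high-density" centre selection (assumption)
    centers = [X[0]]
    for _ in range(k - 1):
        d2 = ((X[:, None, :] - np.array(centers)[None]) ** 2).sum(-1).min(1)
        centers.append(X[d2.argmax()])
    centers = np.array(centers)
    for _ in range(iters):                      # standard Lloyd iterations
        d2 = ((X[:, None, :] - centers[None]) ** 2).sum(-1)
        labels = d2.argmin(1)
        centers = np.array([X[labels == j].mean(0) if (labels == j).any()
                            else centers[j] for j in range(k)])
    cost = ((X[:, None, :] - centers[None]) ** 2).sum(-1).min(1).sum()
    return centers, cost

def lock_k(X, k_max=6):
    # "Kcost fold line": pick the K whose (K, cost) point lies farthest
    # from the chord joining the first and last points of the cost curve
    ks = np.arange(1, k_max + 1)
    costs = np.array([kmeans(X, k)[1] for k in ks])
    p1 = np.array([ks[0], costs[0]], float)
    chord = np.array([ks[-1], costs[-1]], float) - p1
    chord /= np.linalg.norm(chord)
    rel = np.stack([ks, costs], 1) - p1
    return int(ks[np.abs(rel[:, 0] * chord[1] - rel[:, 1] * chord[0]).argmax()])
```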

  1. Physics-Based Image Segmentation Using First Order Statistical Properties and Genetic Algorithm for Inductive Thermography Imaging.

    Science.gov (United States)

    Gao, Bin; Li, Xiaoqing; Woo, Wai Lok; Tian, Gui Yun

    2018-05-01

Thermographic inspection has been widely applied to non-destructive testing and evaluation, with the capabilities of rapid, contactless, large-surface-area detection. Image segmentation is considered essential for identifying and sizing defects. To attain high performance, specific physics-based models that describe defect generation and enable the precise extraction of the target region are of crucial importance. In this paper, an effective genetic first-order statistical image segmentation algorithm is proposed for quantitative crack detection. The proposed method automatically extracts valuable spatial-temporal patterns with an unsupervised feature extraction algorithm and avoids a range of issues associated with human intervention in the laborious manual selection of specific thermal video frames for processing. An internal genetic functionality is built into the proposed algorithm to automatically control the segmentation threshold and render enhanced accuracy in sizing the cracks. Eddy current pulsed thermography is implemented as a platform to demonstrate surface crack detection. Experimental tests and comparisons have been conducted to verify the efficacy of the proposed method. In addition, a global quantitative assessment index, the F-score, has been adopted to objectively evaluate the performance of different segmentation algorithms.

  2. Automatic, ECG-based detection of autonomic arousals and their association with cortical arousals, leg movements, and respiratory events in sleep

    DEFF Research Database (Denmark)

    Olsen, Mads; Schneider, Logan Douglas; Cheung, Joseph

    2018-01-01

    The current definition of sleep arousals neglects to address the diversity of arousals and their systemic cohesion. Autonomic arousals (AA) are autonomic activations often associated with cortical arousals (CA), but they may also occur in isolation in relation to a respiratory event, a leg movement...... event or spontaneously, without any other physiological associations. AA should be acknowledged as essential events to understand and explore the systemic implications of arousals. We developed an automatic AA detection algorithm based on intelligent feature selection and advanced machine learning using...... or respiratory events. This indicates that most FP constitute autonomic activations that are indistinguishable from those with cortical cohesion. The proposed algorithm provides an automatic system trained in a clinical environment, which can be utilized to analyse the systemic and clinical impacts of arousals....

  3. Dicty_cDB: Contig-U03323-1 [Dicty_cDB

    Lifescience Database Archive (English)

    Full Text Available Contig-U03323-1 no gap 533 2 4820223 4820756 PLUS 2 1 U03323 0 0 0 0 0 0 0 0 0 0 0 0 1 1 Show Contig...-U03323-1 Contig ID Contig-U03323-1 Contig update 2001. 8.29 Contig sequence >Contig-U03323-1 (Contig...-U03323-1Q) /CSM_Contig/Contig-U03323-1Q.Seq.d ACATGTGACATTACTATTGGTAAATGTCAATGTTTAAAAAATACATGGTC ...TCAATAATGGTGGTGGTGGTGGTTTAGGT GAAACCCCCAATAGTAATAGTAATAGTGGTGAACTAGTTATCCCACCAAA ATCAAATACTACATTAAATGAAGAAACAGGTGG Gap no gap Contig... Link to clone list U03323 List of clone(s) est1= FC-IC0176F ,1,534 Translated Amino Acid sequence TCDITIGKC

  4. 46 CFR 154.180 - Contiguous hull structure: Welding procedure.

    Science.gov (United States)

    2010-10-01

    ... 46 Shipping 5 2010-10-01 2010-10-01 false Contiguous hull structure: Welding procedure. 154.180... Equipment Hull Structure § 154.180 Contiguous hull structure: Welding procedure. Welding procedure tests for contiguous hull structure designed for a temperature colder than −18 °C (0 °F) must meet § 54.05-15 and...

  5. Automatic Algorithm for the Determination of the Anderson-wilkins Acuteness Score In Patients With St Elevation Myocardial Infarction

    DEFF Research Database (Denmark)

    Fakhri, Yama; Sejersten, Maria; Schoos, Mikkel Malby

    2016-01-01

    using 50 ECGs. Each ECG lead (except aVR) was manually scored according to AW-score by two independent experts (Exp1 and Exp2) and automatically by our designed algorithm (auto-score). An adjudicated manual score (Adj-score) was determined between Exp1 and Exp2. The inter-rater reliabilities (IRRs...

  6. A Causal Contiguity Effect That Persists across Time Scales

    Science.gov (United States)

    Kilic, Asli; Criss, Amy H.; Howard, Marc W.

    2013-01-01

    The contiguity effect refers to the tendency to recall an item from nearby study positions of the just recalled item. Causal models of contiguity suggest that recalled items are used as probes, causing a change in the memory state for subsequent recall attempts. Noncausal models of the contiguity effect assume the memory state is unaffected by…

  7. Algorithm for designing smart factory Industry 4.0

    Science.gov (United States)

    Gurjanov, A. V.; Zakoldaev, D. A.; Shukalov, A. V.; Zharinov, I. O.

    2018-03-01

The task of designing the production division of an Industry 4.0 item-designing company is studied. The authors propose an algorithm, based on a modified V. L. Volkovich method, that generates options for arranging production with robotized technological equipment functioning in automatic mode. The core of the algorithm is the solution of a multi-criteria optimization task with additive criteria.

  8. 33 CFR 334.635 - Hillsborough Bay and waters contiguous to MacDill Air Force Base, Fla.; restricted area.

    Science.gov (United States)

    2010-07-01

    ... 33 Navigation and Navigable Waters 3 2010-07-01 2010-07-01 false Hillsborough Bay and waters... Waters CORPS OF ENGINEERS, DEPARTMENT OF THE ARMY, DEPARTMENT OF DEFENSE DANGER ZONE AND RESTRICTED AREA REGULATIONS § 334.635 Hillsborough Bay and waters contiguous to MacDill Air Force Base, Fla.; restricted area...

  9. Loading pattern optimization using ant colony algorithm

    International Nuclear Information System (INIS)

    Hoareau, Fabrice

    2008-01-01

    Electricite de France (EDF) operates 58 nuclear power plants (NPP), of the Pressurized Water Reactor type. The loading pattern optimization of these NPP is currently done by EDF expert engineers. Within this framework, EDF R and D has developed automatic optimization tools that assist the experts. LOOP is an industrial tool, developed by EDF R and D and based on a simulated annealing algorithm. In order to improve the results of such automatic tools, new optimization methods have to be tested. Ant Colony Optimization (ACO) algorithms are recent methods that have given very good results on combinatorial optimization problems. In order to evaluate the performance of such methods on loading pattern optimization, direct comparisons between LOOP and a mock-up based on the Max-Min Ant System algorithm (a particular variant of ACO algorithms) were made on realistic test-cases. It is shown that the results obtained by the ACO mock-up are very similar to those of LOOP. Future research will consist in improving these encouraging results by using parallelization and by hybridizing the ACO algorithm with local search procedures. (author)
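
A Max-Min Ant System (the MMAS variant named above) can be illustrated on a toy routing problem. This generic sketch, with pheromone clamped to [tau_min, tau_max] and only the iteration-best ant depositing, is not EDF's loading-pattern encoding; all parameters are illustrative.

```python
import random

def mmas_tour(dist, n_ants=10, iters=200, rho=0.1, tau_min=0.01, tau_max=1.0, seed=0):
    """Toy Max-Min Ant System on a symmetric distance matrix."""
    rng = random.Random(seed)
    n = len(dist)
    tau = [[tau_max] * n for _ in range(n)]   # MMAS starts at the upper bound
    best_tour, best_len = None, float("inf")
    for _ in range(iters):
        it_best, it_len = None, float("inf")
        for _ in range(n_ants):
            tour = [rng.randrange(n)]
            while len(tour) < n:
                i = tour[-1]
                cand = [j for j in range(n) if j not in tour]
                # transition weight: pheromone times heuristic (1/distance)
                w = [tau[i][j] / (1e-9 + dist[i][j]) for j in cand]
                tour.append(rng.choices(cand, weights=w)[0])
            length = sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))
            if length < it_len:
                it_best, it_len = tour, length
        if it_len < best_len:
            best_tour, best_len = it_best, it_len
        # evaporate, let only the iteration-best ant deposit, then clamp
        for i in range(n):
            for j in range(n):
                tau[i][j] *= (1 - rho)
        for k in range(n):
            i, j = it_best[k], it_best[(k + 1) % n]
            tau[i][j] = tau[j][i] = min(tau_max, tau[i][j] + 1.0 / it_len)
        for i in range(n):
            for j in range(n):
                tau[i][j] = max(tau_min, tau[i][j])
    return best_tour, best_len
```

The clamping is what distinguishes MMAS from plain ant systems: it keeps every edge selectable and so delays premature convergence.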

  10. Dicty_cDB: Contig-U15069-1 [Dicty_cDB

    Lifescience Database Archive (English)

    Full Text Available Contig-U15069-1 no gap 1241 1 2719927 2720886 PLUS 37 43 U15069 16 2 0 0 0 5 8 0 0 1 1 0 4 0 Show Contig...-U15069-1 Contig ID Contig-U15069-1 Contig update 2004. 6.11 Contig sequence >Contig-U15069-1 (Contig...-U15069-1Q) /CSM_Contig/Contig-U15069-1Q.Seq.d TTTCAAACCAAAACATAAAATAATTAAAAATGACAACTGTTAAACCA...AAAAATAAAATAAATAAAAATAGTTTTAAA Gap no gap Contig length 1241 Chromosome number (1..6, M) 1 Chromosome length...07Z ,263,623 est42= VSJ431Z ,390,646 est43= CHB363Z ,460,1187 Translated Amino Acid sequence snqnik*lkmttvkptspenprvffditig

  11. Dicty_cDB: Contig-U12316-1 [Dicty_cDB

    Lifescience Database Archive (English)

    Full Text Available Contig-U12316-1 gap included 1238 4 1925901 1927143 PLUS 5 6 U12316 0 4 1 0 0 0 0 0 0 0 0 0 0 0 Show Contig...-U12316-1 Contig ID Contig-U12316-1 Contig update 2002.12.18 Contig sequence >Contig-U12316-1 (Contig-U12316-1Q) /CSM_Contig/Contig-U12316...GAGTTGAAGATTTAGTTTTATCAGNANGAANAAATAAGAT Gap gap included Contig length 1238 Chromosome number (1..6, M) 4 C...,915,1174 Translated Amino Acid sequence lvqhhyh*liscvivllksmv*isqvhivvhlfmfvn*qyileih*iptlknlskiftig...lip*r*rtrkttn*kiknny*itketkiqs*t*rvmmmi*vedlvls xxxnk Frame B: lvqhhyh*liscvivllksmv*isqvhivvhlfmfvn*qyileih*iptlknlskiftig

  12. Dicty_cDB: Contig-U12682-1 [Dicty_cDB

    Lifescience Database Archive (English)

    Full Text Available Contig-U12682-1 no gap 1408 4 4961739 4963050 PLUS 47 48 U12682 0 0 0 5 0 0 2 30 0 10 0 0 0 0 Show Contig...-U12682-1 Contig ID Contig-U12682-1 Contig update 2002.12.18 Contig sequence >Contig-U12682-1 (Contig...-U12682-1Q) /CSM_Contig/Contig-U12682-1Q.Seq.d AAACACATCATCCCGTTCGATCTGATAAGTAAATCGACCTCAGGCC...ATGA AACTACTG Gap no gap Contig length 1408 Chromosome number (1..6, M) 4 Chromosome length 5430582 Start po... kwniikwysyinwykswyn**fihsiklqwsy*qcke*si*yiir*ny own update 2004. 6.10 Homology vs CSM-cDNA Query= Contig

  13. A Clonal Selection Algorithm for Minimizing Distance Travel and Back Tracking of Automatic Guided Vehicles in Flexible Manufacturing System

    Science.gov (United States)

    Chawla, Viveak Kumar; Chanda, Arindam Kumar; Angra, Surjit

    2018-03-01

The flexible manufacturing system (FMS) consists of several programmable production work centers, material handling systems (MHSs), assembly stations, and automatic storage and retrieval systems. In an FMS, automatic guided vehicles (AGVs) play a vital role in material handling operations and enhance the overall performance of the FMS. To achieve a low makespan and a high throughput yield in FMS operations, it is imperative to integrate the production work center schedules with the AGV schedules. The production schedule for the work centers is generated by applying the Giffler and Thompson algorithm under four kinds of hybrid priority dispatching rules. The clonal selection algorithm (CSA) is then applied for simultaneous scheduling, to reduce the backtracking and the distance traveled by AGVs within the FMS facility. The proposed procedure is computationally tested on a benchmark FMS configuration from the literature, and the findings clearly indicate that the CSA yields the best results in comparison with the other methods from the literature.
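
The clone-mutate-select loop of a clonal selection algorithm can be sketched for a single AGV minimising travel distance over a set of stops. The depot index, clone count, and swap mutation below are illustrative choices, not the paper's configuration, which schedules AGVs jointly with the work centers.

```python
import random

def tour_length(route, dist):
    # total travel for visiting the stops in order, returning to depot 0
    stops = [0] + route + [0]
    return sum(dist[a][b] for a, b in zip(stops, stops[1:]))

def clonal_selection(dist, pop=20, iters=300, clones=5, seed=0):
    """Clonal selection sketch: the best antibodies (routes) are cloned
    and hypermutated by swapping two stops; the best of parents and
    clones survive to the next generation."""
    rng = random.Random(seed)
    n = len(dist)

    def mutate(route):
        r = route[:]
        i, j = rng.sample(range(len(r)), 2)
        r[i], r[j] = r[j], r[i]
        return r

    population = [rng.sample(range(1, n), n - 1) for _ in range(pop)]
    for _ in range(iters):
        population.sort(key=lambda r: tour_length(r, dist))
        elite = population[: pop // 2]
        offspring = [mutate(r) for r in elite for _ in range(clones)]
        population = sorted(elite + offspring,
                            key=lambda r: tour_length(r, dist))[:pop]
    return population[0], tour_length(population[0], dist)
```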

  14. A Decision-Tree-Based Algorithm for Speech/Music Classification and Segmentation

    Directory of Open Access Journals (Sweden)

    Lavner Yizhar

    2009-01-01

    Full Text Available We present an efficient algorithm for segmentation of audio signals into speech or music. The central motivation to our study is consumer audio applications, where various real-time enhancements are often applied. The algorithm consists of a learning phase and a classification phase. In the learning phase, predefined training data is used for computing various time-domain and frequency-domain features, for speech and music signals separately, and estimating the optimal speech/music thresholds, based on the probability density functions of the features. An automatic procedure is employed to select the best features for separation. In the test phase, initial classification is performed for each segment of the audio signal, using a three-stage sieve-like approach, applying both Bayesian and rule-based methods. To avoid erroneous rapid alternations in the classification, a smoothing technique is applied, averaging the decision on each segment with past segment decisions. Extensive evaluation of the algorithm, on a database of more than 12 hours of speech and more than 22 hours of music showed correct identification rates of 99.4% and 97.8%, respectively, and quick adjustment to alternating speech/music sections. In addition to its accuracy and robustness, the algorithm can be easily adapted to different audio types, and is suitable for real-time operation.
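
The threshold-then-smooth structure of the classifier can be illustrated with a single feature. The zero-crossing rate and the fixed threshold below stand in for the paper's larger feature set and its thresholds estimated from probability density functions; both are assumptions for the sketch.

```python
import numpy as np

def frame_features(x, frame=1024):
    """Per-frame zero-crossing rate, a classic speech/music separator."""
    n = len(x) // frame
    frames = x[: n * frame].reshape(n, frame)
    return (np.abs(np.diff(np.sign(frames), axis=1)) > 0).mean(axis=1)

def classify(x, threshold=0.1, win=5):
    """Threshold each frame (1 = speech-like, 0 = music-like), then smooth
    by a majority vote over past decisions to avoid the erroneous rapid
    alternations the abstract mentions. The threshold is a made-up constant."""
    zcr = frame_features(x)
    raw = (zcr > threshold).astype(int)
    return [int(np.mean(raw[max(0, i - win + 1): i + 1]) > 0.5)
            for i in range(len(raw))]
```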

  15. Automatic 2D segmentation of airways in thorax computed tomography images

    International Nuclear Information System (INIS)

    Cavalcante, Tarique da Silveira; Cortez, Paulo Cesar; Almeida, Thomaz Maia de; Felix, John Hebert da Silva; Holanda, Marcelo Alcantara

    2013-01-01

Introduction: much of the world's population is affected by pulmonary diseases such as bronchial asthma, bronchitis and bronchiectasis. Bronchial diagnosis is based on the state of the airways. In this sense, the automatic segmentation of the airways in Computed Tomography (CT) scans is a critical step in aiding the diagnosis of these diseases. Methods: this paper evaluates algorithms for automatic airway segmentation, using a Multilayer Perceptron (MLP) neural network and Lung Densities Analysis (LDA) to detect airways, along with Region Growing (RG) and the Balloon and Topology Adaptive Active Contour Methods (ACM) to segment them. Results: we obtained results in three stages: a comparative analysis of the detection algorithms MLP and LDA against a gold standard produced by three physicians with expertise in CT imaging of the chest; a comparative analysis of the segmentation algorithms ACM Balloon, ACM Topology Adaptive, MLP and RG; and an evaluation of possible combinations of segmentation and detection algorithms, resulting in a complete method for automatic 2D segmentation of the airways. Conclusion: the low incidence of false negatives and the significant reduction of false positives result in similarity coefficients and sensitivities exceeding 91% and 87%, respectively, for a combination of algorithms with satisfactory segmentation quality. (author)
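
Of the segmentation methods compared above, region growing is the simplest to sketch. The 4-connectivity and the intensity tolerance below are illustrative assumptions, not the values used in the study.

```python
import numpy as np
from collections import deque

def region_grow(img, seed, tol=50.0):
    """4-connected region growing from a seed pixel: accept neighbours whose
    intensity differs from the seed intensity by at most `tol`."""
    h, w = img.shape
    mask = np.zeros((h, w), bool)
    mask[seed] = True
    ref = float(img[seed])
    q = deque([seed])
    while q:
        y, x = q.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if (0 <= ny < h and 0 <= nx < w and not mask[ny, nx]
                    and abs(float(img[ny, nx]) - ref) <= tol):
                mask[ny, nx] = True
                q.append((ny, nx))
    return mask
```

In an airway-segmentation pipeline the seed would come from the detection stage (MLP or LDA), and the tolerance from the expected lumen densities.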

  16. Algorithm Indicating Moment of P-Wave Arrival Based on Second-Moment Characteristic

    Directory of Open Access Journals (Sweden)

    Jakub Sokolowski

    2016-01-01

Full Text Available The moment of P-wave arrival can provide much information about the nature of a seismic event. Without adequate knowledge of the onset moment, many properties of the event related to location, polarization of the P-wave, and so forth are impossible to obtain. To save the time required to indicate the P-wave arrival moment manually, one can benefit from automatic picking algorithms. In this paper two algorithms based on a method of finding a regime switch point are applied to seismic event data in order to find the P-wave arrival time. The algorithms operate on signals transformed via a basic transform rather than on raw recordings. They involve partitioning the transformed signal into two separate series, fitting a logarithm function to the first subset (which corresponds to pure noise and is therefore considered stationary) and an exponential or power function to the second subset (which corresponds to the nonstationary seismic event), and finding the point at which these functions best fit the statistic in terms of the sum of squared errors. The effectiveness of the algorithms is tested on seismic data acquired from the O/ZG "Rudna" underground copper ore mine, with moments of P-wave arrival initially picked by the broadly known STA/LTA algorithm and then corrected by seismic station specialists. The results of the proposed algorithms are compared to those obtained using STA/LTA.
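
The STA/LTA baseline named in the abstract can be sketched in a few lines. The window lengths and trigger threshold below are generic textbook choices, not the station's settings.

```python
import numpy as np

def sta_lta_pick(x, n_sta=20, n_lta=200, threshold=4.0):
    """Classic STA/LTA onset picker: return the first sample at which the
    short-term average of the squared signal exceeds `threshold` times the
    long-term average, or None if no trigger occurs."""
    e = np.asarray(x, float) ** 2
    c = np.concatenate(([0.0], np.cumsum(e)))   # prefix sums for O(1) windows
    for i in range(n_lta, len(x)):
        lta = (c[i] - c[i - n_lta]) / n_lta
        sta = (c[i] - c[i - n_sta]) / n_sta
        if lta > 0 and sta / lta > threshold:
            return i
    return None
```

The regime-switch method of the paper replaces this ratio test with curve fitting on the two sides of each candidate split point.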

  17. An optimization framework for process discovery algorithms

    NARCIS (Netherlands)

    Weijters, A.J.M.M.; Stahlbock, R.

    2011-01-01

    Today there are many process mining techniques that, based on an event log, allow for the automatic induction of a process model. The process mining algorithms that are able to deal with incomplete event logs, exceptions, and noise typically have many parameters to tune the algorithm. Therefore, the

  18. A novel validation algorithm allows for automated cell tracking and the extraction of biologically meaningful parameters.

    Directory of Open Access Journals (Sweden)

    Daniel H Rapoport

    Full Text Available Automated microscopy is currently the only method to observe complex multi-cellular processes, such as cell migration, cell cycle, and cell differentiation, non-invasively and label-free. Extracting biological information from a time-series of micrographs requires each cell to be recognized and followed through sequential microscopic snapshots. Although recent attempts to automate this process resulted in ever improving cell detection rates, manual identification of identical cells is still the most reliable technique. However, its tedious and subjective nature prevented tracking from becoming a standardized tool for the investigation of cell cultures. Here, we present a novel method to accomplish automated cell tracking with a reliability comparable to manual tracking. Previously, automated cell tracking could not rival the reliability of manual tracking because, in contrast to the human way of solving this task, none of the algorithms had an independent quality control mechanism; they lacked validation. Thus, instead of trying to improve the cell detection or tracking rates, we proceeded from the idea to automatically inspect the tracking results and accept only those of high trustworthiness, while rejecting all other results. This validation algorithm works independently of the quality of cell detection and tracking through a systematic search for tracking errors. It is based only on very general assumptions about the spatiotemporal contiguity of cell paths. While traditional tracking often aims to yield genealogic information about single cells, the natural outcome of a validated cell tracking algorithm turns out to be a set of complete, but often unconnected cell paths, i.e. records of cells from mitosis to mitosis. This is a consequence of the fact that the validation algorithm takes complete paths as the unit of rejection/acceptance.
The resulting set of complete paths can be used to automatically extract important biological parameters

  19. Spike sorting based upon machine learning algorithms (SOMA).

    Science.gov (United States)

    Horton, P M; Nicol, A U; Kendrick, K M; Feng, J F

    2007-02-15

    We have developed a spike sorting method, using a combination of various machine learning algorithms, to analyse electrophysiological data and automatically determine the number of sampled neurons from an individual electrode, and discriminate their activities. We discuss extensions to a standard unsupervised learning algorithm (Kohonen), as using a simple application of this technique would only identify a known number of clusters. Our extra techniques automatically identify the number of clusters within the dataset, and their sizes, thereby reducing the chance of misclassification. We also discuss a new pre-processing technique, which transforms the data into a higher dimensional feature space revealing separable clusters. Using principal component analysis (PCA) alone may not achieve this. Our new approach appends the features acquired using PCA with features describing the geometric shapes that constitute a spike waveform. To validate our new spike sorting approach, we have applied it to multi-electrode array datasets acquired from the rat olfactory bulb, and from the sheep infero-temporal cortex, and using simulated data. The SOMA software is available at http://www.sussex.ac.uk/Users/pmh20/spikes.

  20. ALGORITHM FOR THE AUTOMATIC ESTIMATION OF AGRICULTURAL TREE GEOMETRIC PARAMETERS USING AIRBORNE LASER SCANNING DATA

    Directory of Open Access Journals (Sweden)

    E. Hadaś

    2016-06-01

    Full Text Available The estimation of dendrometric parameters has become an important issue for agricultural planning and management. Since classical field measurements are time consuming and inefficient, Airborne Laser Scanning (ALS) data can be used for this purpose. Point clouds acquired for orchard areas allow orchard structures and geometric parameters of individual trees to be determined. In this research we propose an automatic method to determine geometric parameters of individual olive trees using ALS data. The method is based on the α-shape algorithm applied to normalized point clouds. The algorithm returns polygons representing crown shapes. For points located inside each polygon, we select the maximum height and the minimum height, and then estimate the tree height and the crown base height. We use the first two components of the Principal Component Analysis (PCA) as the estimators for crown diameters. The α-shape algorithm requires a radius parameter R to be defined. In this study we investigated how sensitive the results are to the radius size, by comparing the results obtained with various settings of R against reference values of the estimated parameters from field measurements. Our study area was an olive orchard located in the Castellon Province, Spain. We used a set of ALS data with an average density of 4 points m−2. We noticed that there was a narrow range of the R parameter, from 0.48 m to 0.80 m, for which all trees were detected and for which we obtained a high correlation coefficient (> 0.9) between estimated and measured values. We compared our estimates with field measurements. The RMSE of differences was 0.8 m for the tree height, 0.5 m for the crown base height, and 0.6 m and 0.4 m for the longer and shorter crown diameters, respectively. The accuracy obtained with the method is thus sufficient for agricultural applications.
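Using the first two principal components as crown-diameter estimators can be illustrated on 2D crown points. This is a sketch under the assumption that each diameter is taken as the data extent along a principal axis (a detail the abstract does not spell out); the closed-form eigendecomposition of a 2x2 covariance matrix is standard:

```python
import math

def crown_diameters(points):
    """Crown diameters from 2D crown points: extent of the data along the
    first two principal axes (closed-form PCA of the 2x2 covariance)."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    sxx = sum((x - mx) ** 2 for x, _ in points) / n
    syy = sum((y - my) ** 2 for _, y in points) / n
    sxy = sum((x - mx) * (y - my) for x, y in points) / n
    # rotation angle of the first principal axis of the covariance matrix
    theta = 0.5 * math.atan2(2 * sxy, sxx - syy)
    axes = [(math.cos(theta), math.sin(theta)),
            (-math.sin(theta), math.cos(theta))]
    diameters = []
    for ax, ay in axes:
        proj = [(x - mx) * ax + (y - my) * ay for x, y in points]
        diameters.append(max(proj) - min(proj))
    return sorted(diameters, reverse=True)  # longer, shorter
```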

  1. Bellman Ford algorithm - in Routing Information Protocol (RIP)

    Science.gov (United States)

    Krianto Sulaiman, Oris; Mahmud Siregar, Amir; Nasution, Khairuddin; Haramaini, Tasliyah

    2018-04-01

    A large-scale network needs routing that can handle a large number of users; one of the solutions to cope with a large-scale network is to use a routing protocol. There are two types of routing, static and dynamic: static routes are input manually by the network admin, while dynamic routes are formed automatically based on the existing network. Dynamic routing is efficient for extensive networks because routes are formed automatically. Routing Information Protocol (RIP) is one of the dynamic routing protocols; it uses the Bellman-Ford algorithm, which searches for the best path traversing the network by leveraging the cost of each link, so with the Bellman-Ford algorithm RIP can optimize existing networks.
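The Bellman-Ford computation that RIP converges to can be sketched centrally as follows (RIP itself runs it in distributed, distance-vector form, with each router exchanging its table with neighbours):

```python
def bellman_ford(nodes, links, source):
    """Shortest paths from `source`: relax every directed link
    (u, v, cost) up to |V| - 1 times until distances stabilize."""
    dist = {n: float("inf") for n in nodes}
    dist[source] = 0
    for _ in range(len(nodes) - 1):
        for u, v, cost in links:
            if dist[u] + cost < dist[v]:
                dist[v] = dist[u] + cost
    return dist
```

In RIP the cost is the hop count (one per link) with 16 treated as infinity, which this sketch does not model.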

  2. Automatic identification of otological drilling faults: an intelligent recognition algorithm.

    Science.gov (United States)

    Cao, Tianyang; Li, Xisheng; Gao, Zhiqiang; Feng, Guodong; Shen, Peng

    2010-06-01

    This article presents an intelligent recognition algorithm that can recognize milling states of the otological drill by fusing multi-sensor information. An otological drill was modified by the addition of sensors. The algorithm was designed according to features of the milling process and is composed of a characteristic curve, an adaptive filter and a rule base. The characteristic curve can weaken the impact of the unstable normal milling process and reserve the features of drilling faults. The adaptive filter is capable of suppressing interference in the characteristic curve by fusing multi-sensor information. The rule base can identify drilling faults through the filtering result data. The experiments were repeated on fresh porcine scapulas, including normal milling and two drilling faults. The algorithm has high rates of identification. This study shows that the intelligent recognition algorithm can identify drilling faults under interference conditions. (c) 2010 John Wiley & Sons, Ltd.

  3. Automatic analysis algorithm for radionuclide pulse-height data from beta-gamma coincidence systems

    International Nuclear Information System (INIS)

    Foltz Biegalski, K.M.

    2001-01-01

    There are two acceptable noble gas monitoring measurement modes for Comprehensive Nuclear-Test-Ban-Treaty (CTBT) verification purposes defined in CTBT/PC/II/WG.B/1. These include beta-gamma coincidence and high-resolution gamma-spectrometry. There are at present no commercial, off-the-shelf (COTS) applications for the analysis of β-γ coincidence data. Development of such software is in progress at the Prototype International Data Centre (PIDC) for eventual deployment at the International Data Centre (IDC). Flowcharts detailing the automatic analysis algorithm for β-γ coincidence data to be coded at the PIDC are included. The program is being written in C with Oracle databasing capabilities. (author)

  4. An airport surface surveillance solution based on fusion algorithm

    Science.gov (United States)

    Liu, Jianliang; Xu, Yang; Liang, Xuelin; Yang, Yihuang

    2017-01-01

    In this paper, we propose an airport surface surveillance solution combining Multilateration (MLAT) and Automatic Dependent Surveillance Broadcast (ADS-B). The moving target to be monitored is regarded as a linear stochastic hybrid system moving freely, and each surveillance technology is simplified as a sensor with white Gaussian noise. The dynamic model of the target and the observation model of each sensor are established in this paper. The measurements of the sensors are properly filtered by estimators to obtain the estimation results for the current time. Then, we analyze the characteristics of the two proposed fusion solutions and decide to use the scheme based on sensor estimation fusion for our surveillance solution. In the proposed fusion algorithm, the estimation error is quantified according to the output of the estimators, and the fusion weight of each sensor is calculated. The two estimation results are fused with these weights, and the position estimate of the target is computed accurately. Finally, the proposed solution and algorithm are validated by an illustrative target tracking simulation.
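The paper quantifies each estimator's error and weights the two estimates accordingly. A common concrete choice, shown here as an illustrative assumption rather than the paper's exact formula, is inverse-variance weighting of independent estimates:

```python
def fuse_estimates(estimates, variances):
    """Weighted fusion of independent sensor estimates: each weight is the
    inverse of that estimator's error variance, normalized to sum to one."""
    inv = [1.0 / v for v in variances]
    total = sum(inv)
    weights = [w / total for w in inv]
    fused = sum(w * e for w, e in zip(weights, estimates))
    return fused, weights
```

A precise sensor (small variance) thus dominates the fused position, but a noisier one still contributes.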

  5. Advanced Oil Spill Detection Algorithms For Satellite Based Maritime Environment Monitoring

    Science.gov (United States)

    Radius, Andrea; Azevedo, Rui; Sapage, Tania; Carmo, Paulo

    2013-12-01

    During the last years, the increasing occurrence of pollution and the alarming deterioration of the environmental health of the sea have led to the need for global monitoring capabilities, namely for marine environment management in terms of oil spill detection and indication of the suspected polluter. The sensitivity of Synthetic Aperture Radar (SAR) to different phenomena on the sea, especially for oil spill and vessel detection, makes it a key instrument for global pollution monitoring. The SAR performance in maritime pollution monitoring is being operationally explored by a set of service providers on behalf of the European Maritime Safety Agency (EMSA), which launched the CleanSeaNet (CSN) project - a pan-European satellite based oil monitoring service - in 2007. EDISOFT, which has been a service provider for CSN from the beginning, is continuously investing in R&D activities that will ultimately lead to better algorithms and better performance on oil spill detection from SAR imagery. This strategy is being pursued through EDISOFT's participation in the FP7 EC Sea-U project and in the Automatic Oil Spill Detection (AOSD) ESA project. The Sea-U project aims to improve the current state of oil spill detection algorithms through the maximization of informative content obtained with data fusion, the exploitation of different types of data/sensors and the development of advanced image processing, segmentation and classification techniques. The AOSD project is closely related to the operational segment, because it is focused on the automation of the oil spill detection processing chain, integrating auxiliary data, like wind information, together with image and geometry analysis techniques.
The synergy between these different objectives (R&D versus operational) allowed EDISOFT to develop oil spill detection software that combines the operational automatic aspect, obtained through dedicated integration of the processing chain in the existing open source NEST

  6. Automatic segmentation of thermal images of diabetic-at-risk feet using the snakes algorithm

    Science.gov (United States)

    Etehadtavakol, Mahnaz; Ng, E. Y. K.; Kaabouch, Naima

    2017-11-01

    Diabetes is a disease with multi-systemic problems. It is a leading cause of death, medical costs, and loss of productivity. Foot ulcers are one generally known problem of uncontrolled diabetes that can lead to amputation. Signs of foot ulcers are not always obvious; sometimes, symptoms won't even show up until the ulcer is infected. Hence, identification of pre-ulceration of the plantar surface of the foot in diabetics is beneficial. Thermography has the potential to identify regions of the plantar surface with no evidence of ulcer but still at risk. Thermography is a technique that is safe, easy, non-invasive, contact-free, and repeatable. In this study, 59 thermographic images of the plantar foot of patients with diabetic neuropathy are processed using the snakes algorithm to separate the two feet from the background automatically and to separate the right foot from the left in each image. The snakes algorithm also separates the right and left foot into different segmented clusters according to their temperatures; the hottest regions have the highest risk of ulceration for each foot. This algorithm also worked perfectly for all the current images.

  7. Automatic Derivation of Statistical Algorithms: The EM Family and Beyond

    OpenAIRE

    Gray, Alexander G.; Fischer, Bernd; Schumann, Johann; Buntine, Wray

    2003-01-01

    Machine learning has reached a point where many probabilistic methods can be understood as variations, extensions and combinations of a much smaller set of abstract themes, e.g., as different instances of the EM algorithm. This enables the systematic derivation of algorithms customized for different models. Here, we describe the AUTOBAYES system which takes a high-level statistical model specification, uses powerful symbolic techniques based on schema-based program synthesis and computer alge...

  8. AUTOMATIC DETECTION ALGORITHM OF DYNAMIC PRESSURE PULSES IN THE SOLAR WIND

    International Nuclear Information System (INIS)

    Zuo, Pingbing; Feng, Xueshang; Wang, Yi; Xie, Yanqiong; Li, Huijun; Xu, Xiaojun

    2015-01-01

    Dynamic pressure pulses (DPPs) in the solar wind are a significant phenomenon closely related to the solar-terrestrial connection and the physical processes of solar wind dynamics. In order to automatically identify DPPs from solar wind measurements, we develop a procedure with a three-step detection algorithm that is able to rapidly select DPPs from the plasma data stream, simultaneously define the transition region where large dynamic pressure variations occur, and demarcate the upstream and downstream regions by selecting the relatively quiet status before and after the abrupt change in dynamic pressure. To demonstrate the usefulness, efficiency, and accuracy of this procedure, we have applied it to the Wind observations from 1996 to 2008, successfully obtaining the DPPs. The procedure can also be applied to other solar wind spacecraft observation data sets with different time resolutions.

  9. Automatic classification of visual evoked potentials based on wavelet decomposition

    Science.gov (United States)

    Stasiakiewicz, Paweł; Dobrowolski, Andrzej P.; Tomczykiewicz, Kazimierz

    2017-04-01

    Diagnosis of the part of the visual system that is responsible for conducting the compound action potential is generally based on visual evoked potentials generated as a result of stimulation of the eye by an external light source. The condition of the patient's visual path is assessed by a set of parameters that describe the time-domain characteristic extremes called waves. The decision process is complex; therefore, the diagnosis depends significantly on the experience of the doctor. The authors developed a procedure - based on wavelet decomposition and linear discriminant analysis - that ensures automatic classification of visual evoked potentials. The algorithm makes it possible to assign an individual case to the normal or pathological class. The proposed classifier has a 96.4% sensitivity at a 10.4% probability of false alarm in a group of 220 cases, and the area under the ROC curve equals 0.96, which, from the medical point of view, is a very good result.
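The wavelet-decomposition front end of such a classifier can be sketched with an orthonormal Haar transform, using the detail-coefficient energy at each level as the feature vector fed to the discriminant. The Haar choice is an illustrative stand-in; the abstract does not state which wavelet the authors used:

```python
import math

def haar_step(signal):
    """One level of the orthonormal Haar wavelet transform."""
    h = 1.0 / math.sqrt(2)
    approx = [(signal[2 * i] + signal[2 * i + 1]) * h for i in range(len(signal) // 2)]
    detail = [(signal[2 * i] - signal[2 * i + 1]) * h for i in range(len(signal) // 2)]
    return approx, detail

def wavelet_energy_features(signal, levels=3):
    """Detail-coefficient energy per level plus the final approximation
    energy; orthonormality means the features sum to the signal energy."""
    feats, approx = [], list(signal)
    for _ in range(levels):
        approx, detail = haar_step(approx)
        feats.append(sum(d * d for d in detail))
    feats.append(sum(a * a for a in approx))
    return feats
```

Because the transform is orthonormal, the feature vector is an energy partition of the waveform (Parseval), which makes the features comparable across recordings.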

  10. An Automatic K-Means Clustering Algorithm of GPS Data Combining a Novel Niche Genetic Algorithm with Noise and Density

    Directory of Open Access Journals (Sweden)

    Xiangbing Zhou

    2017-12-01

    Full Text Available Rapidly growing Global Positioning System (GPS) data plays an important role in trajectories and their applications (e.g., GPS-enabled smart devices). In order to employ K-means to mine the better origins and destinations (OD) behind the GPS data and overcome its shortcomings, including slowness of convergence, sensitivity to initial seed selection, and getting stuck in a local optimum, this paper proposes and focuses on a novel niche genetic algorithm (NGA) with density and noise for K-means clustering (NoiseClust). In NoiseClust, an improved noise method and K-means++ are proposed to produce the initial population and capture higher quality seeds that can automatically determine the proper number of clusters, and also handle the different sizes and shapes of genes. A density-based method is presented to divide the number of niches, with the aim of maintaining population diversity. Adaptive probabilities of crossover and mutation are also employed to prevent convergence to a local optimum. Finally, the centers (the best chromosome) are obtained and then fed into K-means as initial seeds to generate even higher quality clustering results by allowing the initial seeds to readjust as needed. Experimental results based on taxi GPS data sets demonstrate that NoiseClust has high performance and effectiveness, and easily mines the city's situations in four taxi GPS data sets.
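The K-means++ seeding that NoiseClust builds on is simple to sketch for 2D GPS-like points. This is the standard K-means++ procedure, not the paper's extended noise/niche machinery:

```python
import random

def kmeans_pp_seeds(points, k, rng=None):
    """K-means++ seeding: after a first uniform pick, each further seed is
    drawn with probability proportional to its squared distance from the
    nearest seed chosen so far."""
    rng = rng or random.Random(0)
    seeds = [rng.choice(points)]
    while len(seeds) < k:
        d2 = [min((px - sx) ** 2 + (py - sy) ** 2 for sx, sy in seeds)
              for px, py in points]
        r = rng.uniform(0, sum(d2))
        acc = 0.0
        for point, weight in zip(points, d2):
            acc += weight
            if acc >= r:  # roulette-wheel selection over squared distances
                seeds.append(point)
                break
    return seeds
```

Spreading the seeds this way is what makes the subsequent K-means far less likely to start inside a single dense cluster.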

  11. Atlas-based automatic segmentation of head and neck organs at risk and nodal target volumes: a clinical validation.

    Science.gov (United States)

    Daisne, Jean-François; Blumhofer, Andreas

    2013-06-26

    Intensity modulated radiotherapy for head and neck cancer necessitates accurate definition of organs at risk (OAR) and clinical target volumes (CTV). This crucial step is time consuming and prone to inter- and intra-observer variations. Automatic segmentation by atlas deformable registration may help to reduce time and variations. We aim to test a new commercial atlas algorithm for automatic segmentation of OAR and CTV in both ideal and clinical conditions. The updated Brainlab automatic head and neck atlas segmentation was tested on 20 patients: 10 cN0-stages (ideal population) and 10 unselected N-stages (clinical population). Following manual delineation of OAR and CTV, automatic segmentation of the same set of structures was performed and afterwards manually corrected. Dice Similarity Coefficient (DSC), Average Surface Distance (ASD) and Maximal Surface Distance (MSD) were calculated for "manual to automatic" and "manual to corrected" volume comparisons. In both groups, automatic segmentation saved about 40% of the corresponding manual segmentation time. This effect was more pronounced for OAR than for CTV. The editing of the automatically obtained contours significantly improved DSC, ASD and MSD. Large distortions of normal anatomy or lack of iodine contrast were the limiting factors. The updated Brainlab atlas-based automatic segmentation tool for head and neck cancer patients is time-saving but still necessitates review and correction by an expert.
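The Dice Similarity Coefficient used in these comparisons has a one-line definition; for segmentations represented as sets of voxel coordinates:

```python
def dice(a, b):
    """Dice Similarity Coefficient between two segmentations given as sets
    of voxel coordinates: 2|A ∩ B| / (|A| + |B|); 1.0 is a perfect match."""
    return 2.0 * len(a & b) / (len(a) + len(b))
```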

  12. Automatic Microaneurysms Detection Based on Multifeature Fusion Dictionary Learning

    Directory of Open Access Journals (Sweden)

    Wei Zhou

    2017-01-01

    Full Text Available Recently, microaneurysm (MA) detection has attracted a lot of attention in the medical image processing community. Since MAs can be seen as the earliest lesions in diabetic retinopathy, their detection plays a critical role in diabetic retinopathy diagnosis. In this paper, we propose a novel MA detection approach named multifeature fusion dictionary learning (MFFDL). The proposed method consists of four steps: preprocessing, candidate extraction, multifeature dictionary learning, and classification. The novelty of our proposed approach lies in incorporating the semantic relationships among multifeatures and dictionary learning into a unified framework for automatic detection of MAs. We evaluate the proposed algorithm by comparing it with the state-of-the-art approaches and the experimental results validate the effectiveness of our algorithm.

  13. An epileptic seizures detection algorithm based on the empirical mode decomposition of EEG.

    Science.gov (United States)

    Orosco, Lorena; Laciar, Eric; Correa, Agustina Garces; Torres, Abel; Graffigna, Juan P

    2009-01-01

    Epilepsy is a neurological disorder that affects around 50 million people worldwide. Seizure detection is an important component in the diagnosis of epilepsy. In this study, the Empirical Mode Decomposition (EMD) method was used in the development of an automatic epileptic seizure detection algorithm. The algorithm first computes the Intrinsic Mode Functions (IMFs) of EEG records, then calculates the energy of each IMF and performs the detection based on an energy threshold and a minimum duration decision. The algorithm was tested on 9 invasive EEG records provided and validated by the Epilepsy Center of the University Hospital of Freiburg. In the 90 segments analyzed (39 with epileptic seizures), the sensitivity and specificity obtained with the method were 56.41% and 75.86% respectively. It can be concluded that EMD is a promising method for epileptic seizure detection in EEG records.
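The final detection stage described above (energy threshold plus minimum-duration decision) can be sketched on any 1D series standing in for an IMF; the EMD step itself, and the exact window and threshold values, are not reproduced here:

```python
def detect_events(signal, win, threshold, min_windows):
    """Energy-threshold detection with a minimum-duration decision: flag
    non-overlapping windows whose energy exceeds `threshold`, then keep
    only runs of at least `min_windows` consecutive flagged windows.
    Returns (start_sample, end_sample) pairs."""
    energies = [sum(x * x for x in signal[i:i + win])
                for i in range(0, len(signal) - win + 1, win)]
    flags = [e > threshold for e in energies]
    events, start = [], None
    for i, flag in enumerate(flags + [False]):  # sentinel closes a final run
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            if i - start >= min_windows:
                events.append((start * win, i * win))
            start = None
    return events
```

The minimum-duration rule is what suppresses isolated high-energy artifacts that would otherwise count as seizures.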

  14. Towards Automatic Music Transcription: Extraction of MIDI-Data out of Polyphonic Piano Music

    Directory of Open Access Journals (Sweden)

    Jens Wellhausen

    2005-06-01

    Full Text Available Driven by the increasing amount of music available electronically, the need for automatic search and retrieval systems for music becomes more and more important. In this paper an algorithm for automatic transcription of polyphonic piano music into MIDI data is presented, which is a very interesting basis for database applications and music analysis. The first part of the algorithm performs a note-accurate temporal audio segmentation. In the second part, the resulting segments are examined to extract the notes played. An algorithm for chord separation based on Independent Subspace Analysis is presented. Finally, the results are used to build a MIDI file.

  15. AUTOMATIC TEXTURE MAPPING OF ARCHITECTURAL AND ARCHAEOLOGICAL 3D MODELS

    Directory of Open Access Journals (Sweden)

    T. P. Kersten

    2012-07-01

    Full Text Available Today, detailed, complete and exact 3D models with photo-realistic textures are increasingly demanded for numerous applications in architecture and archaeology. Manual texture mapping of 3D models by digital photographs with software packages, such as Maxon Cinema 4D, Autodesk 3Ds Max or Maya, still requires a complex and time-consuming workflow. So, procedures for automatic texture mapping of 3D models are in demand. In this paper two automatic procedures are presented. The first procedure generates 3D surface models with textures by web services, while the second procedure textures already existing 3D models with the software tmapper. The program tmapper is based on the Multi Layer 3D image (ML3DImage) algorithm and developed in the programming language C++. The studies show that the visibility analysis using the ML3DImage algorithm is not sufficient to obtain acceptable results of automatic texture mapping. To overcome the visibility problem the Point Cloud Painter algorithm in combination with the Z-buffer procedure will be applied in the future.

  16. Automatic Texture Mapping of Architectural and Archaeological 3d Models

    Science.gov (United States)

    Kersten, T. P.; Stallmann, D.

    2012-07-01

    Today, detailed, complete and exact 3D models with photo-realistic textures are increasingly demanded for numerous applications in architecture and archaeology. Manual texture mapping of 3D models by digital photographs with software packages, such as Maxon Cinema 4D, Autodesk 3Ds Max or Maya, still requires a complex and time-consuming workflow. So, procedures for automatic texture mapping of 3D models are in demand. In this paper two automatic procedures are presented. The first procedure generates 3D surface models with textures by web services, while the second procedure textures already existing 3D models with the software tmapper. The program tmapper is based on the Multi Layer 3D image (ML3DImage) algorithm and developed in the programming language C++. The studies show that the visibility analysis using the ML3DImage algorithm is not sufficient to obtain acceptable results of automatic texture mapping. To overcome the visibility problem the Point Cloud Painter algorithm in combination with the Z-buffer procedure will be applied in the future.

  17. Automatic optimization of a nuclear reactor reload using the algorithm Ant-Q; A otimizacao automatica da recarga nuclear utilizando o algoritmo Ant-Q

    Energy Technology Data Exchange (ETDEWEB)

    Machado, Liana; Schirru, Roberto [Universidade Federal, Rio de Janeiro, RJ (Brazil). Coordenacao dos Programas de Pos-graduacao de Engenharia. Programa de Engenharia Nuclear

    2002-07-01

    The nuclear fuel reload optimization is an NP-Complete combinatorial optimization problem. For decades this problem was solved using an expert's knowledge. Since the eighties, however, there have been efforts to automate the fuel reload, and the more recent ones show the Genetic Algorithm's (GA) efficiency on this problem. Following this trend, our aim is to optimize the nuclear fuel reload using Ant-Q, an algorithm based on artificial ant colony theory. Ant-Q's results on the Traveling Salesman Problem, which is conceptually similar to fuel reload, are better than the GA's. Ant-Q was tested in a real application on the cycle 7 reload of Angra I. Comparing Ant-Q's result with the GA's, it can be verified that, even without a local heuristic, the former algorithm is a valid technique to solve the nuclear fuel reload problem, as its superiority over the GA on Angra I shows. (author)

  18. Alignment of Custom Standards by Machine Learning Algorithms

    Directory of Open Access Journals (Sweden)

    Adela Sirbu

    2010-09-01

    Full Text Available Building an efficient model for automatic alignment of terminologies would bring a significant improvement to the information retrieval process. We have developed and compared two machine learning based algorithms whose aim is to align 2 custom standards built on a 3-level taxonomy, using kNN and SVM classifiers that work on a vector representation consisting of several similarity measures. The weights utilized by the kNN were optimized with an evolutionary algorithm, while the SVM classifier's hyper-parameters were optimized with a grid search algorithm. The database used for training was semi-automatically obtained using the Coma++ tool. The performance of our aligners is shown by the results obtained on the test set.

  19. Automatic system for 3D reconstruction of the chick eye based on digital photographs.

    Science.gov (United States)

    Wong, Alexander; Genest, Reno; Chandrashekar, Naveen; Choh, Vivian; Irving, Elizabeth L

    2012-01-01

    The geometry of anatomical specimens is very complex and accurate 3D reconstruction is important for morphological studies, finite element analysis (FEA) and rapid prototyping. Although magnetic resonance imaging, computed tomography and laser scanners can be used for reconstructing biological structures, the cost of the equipment is fairly high and specialised technicians are required to operate the equipment, making such approaches limited in terms of accessibility. In this paper, a novel automatic system for 3D surface reconstruction of the chick eye from digital photographs of a serially sectioned specimen is presented as a potential cost-effective and practical alternative. The system is designed to allow for automatic detection of the external surface of the chick eye. Automatic alignment of the photographs is performed using a combination of coloured markers and an algorithm based on complex phase order likelihood that is robust to noise and illumination variations. Automatic segmentation of the external boundaries of the eye from the aligned photographs is performed using a novel level-set segmentation approach based on a complex phase order energy functional. The extracted boundaries are sampled to construct a 3D point cloud, and a combination of Delaunay triangulation and subdivision surfaces is employed to construct the final triangular mesh. Experimental results using digital photographs of the chick eye show that the proposed system is capable of producing accurate 3D reconstructions of the external surface of the eye. The 3D model geometry is similar to a real chick eye and could be used for morphological studies and FEA.

  20. Swarm intelligence-based approach for optimal design of CMOS differential amplifier and comparator circuit using a hybrid salp swarm algorithm

    Science.gov (United States)

    Asaithambi, Sasikumar; Rajappa, Muthaiah

    2018-05-01

    In this paper, an automatic design method based on a swarm intelligence approach for CMOS analog integrated circuit (IC) design is presented. The hybrid meta-heuristic optimization technique, namely the salp swarm algorithm (SSA), is applied to the optimal sizing of a CMOS differential amplifier and the comparator circuit. SSA is a nature-inspired optimization algorithm which mimics the navigating and hunting behavior of salps. The hybrid SSA is applied to optimize the circuit design parameters and to minimize the MOS transistor sizes. The proposed swarm intelligence approach was successfully implemented for the automatic design and optimization of CMOS analog ICs using Generic Process Design Kit (GPDK) 180 nm technology. The circuit design parameters and design specifications are validated through a Simulation Program with Integrated Circuit Emphasis (SPICE) simulator. To investigate the efficiency of the proposed approach, comparisons have been carried out with other simulation-based circuit design methods. The performance of the hybrid SSA based CMOS analog IC designs is better than that of the previously reported studies.
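The core SSA update rules (leader exploring around the food source, followers chaining to midpoints, as in the original salp swarm algorithm of Mirjalili et al.) can be sketched on a toy objective. This is a basic SSA, not the paper's hybrid variant, and the circuit objective is replaced by a sphere function for illustration:

```python
import math
import random

def ssa_minimize(f, dim, lb, ub, pop=20, iters=200, rng=random.Random(1)):
    """Basic salp swarm algorithm: the leader samples around the food
    source F (best solution so far) with a step that shrinks over the
    iterations; each follower moves to the midpoint of itself and its
    predecessor in the chain. Greedy update of the food source."""
    salps = [[rng.uniform(lb, ub) for _ in range(dim)] for _ in range(pop)]
    food = min(salps, key=f)[:]
    for l in range(1, iters + 1):
        c1 = 2 * math.exp(-(4 * l / iters) ** 2)  # exploration decay
        for i, s in enumerate(salps):
            for j in range(dim):
                if i == 0:  # leader
                    step = c1 * ((ub - lb) * rng.random() + lb)
                    s[j] = food[j] + step if rng.random() < 0.5 else food[j] - step
                else:       # follower: midpoint of self and predecessor
                    s[j] = (s[j] + salps[i - 1][j]) / 2
                s[j] = min(ub, max(lb, s[j]))  # clip to the search bounds
            if f(s) < f(food):
                food = s[:]
    return food
```

In the circuit-sizing setting, each dimension would be a transistor width or length and `f` the simulated performance penalty.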

  1. Multithreshold Segmentation by Using an Algorithm Based on the Behavior of Locust Swarms

    Directory of Open Access Journals (Sweden)

    Erik Cuevas

    2015-01-01

    Full Text Available As an alternative to classical techniques, the problem of image segmentation has also been handled through evolutionary methods. Recently, several algorithms based on evolutionary principles have been successfully applied to image segmentation with interesting performances. However, most of them maintain two important limitations: (1) they frequently obtain suboptimal results (misclassifications) as a consequence of an inappropriate balance between exploration and exploitation in their search strategies; (2) the number of classes is fixed and known in advance. This paper presents an algorithm for the automatic selection of pixel classes for image segmentation. The proposed method combines a novel evolutionary method with the definition of a new objective function that appropriately evaluates the segmentation quality with respect to the number of classes. The new evolutionary algorithm, called Locust Search (LS), is based on the behavior of swarms of locusts. Unlike most existing evolutionary algorithms, it explicitly avoids the concentration of individuals in the best positions, avoiding critical flaws such as premature convergence to suboptimal solutions and a limited exploration-exploitation balance. Experimental tests over several benchmark functions and images validate the efficiency of the proposed technique with regard to accuracy and robustness.
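The kind of objective such an evolutionary search optimizes can be illustrated with the classic Otsu between-class variance over a grey-level histogram. This is an illustrative choice, not the paper's objective (which also scores the number of classes), and the exhaustive loop below stands in for the evolutionary search, which replaces it on real images:

```python
from itertools import combinations

def between_class_variance(hist, thresholds):
    """Otsu-style between-class variance for class boundaries `thresholds`
    over a grey-level histogram; higher means better separated classes."""
    total = sum(hist)
    mean_all = sum(g * h for g, h in enumerate(hist)) / total
    bounds = [0] + sorted(thresholds) + [len(hist)]
    var = 0.0
    for lo, hi in zip(bounds, bounds[1:]):
        w = sum(hist[lo:hi]) / total
        if w == 0:
            continue
        mu = sum(g * hist[g] for g in range(lo, hi)) / (w * total)
        var += w * (mu - mean_all) ** 2
    return var

def best_thresholds(hist, k):
    """Brute-force search over k-threshold sets (feasible only for tiny
    histograms; an evolutionary method searches this space instead)."""
    return max(combinations(range(1, len(hist)), k),
               key=lambda t: between_class_variance(hist, t))
```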

  2. Automatic Depth Extraction from 2D Images Using a Cluster-Based Learning Framework.

    Science.gov (United States)

    Herrera, Jose L; Del-Blanco, Carlos R; Garcia, Narciso

    2018-07-01

    There has been a significant increase in the availability of 3D players and displays in recent years. Nonetheless, the amount of 3D content has not grown at a comparable rate. To alleviate this problem, many algorithms for converting images and videos from 2D to 3D have been proposed. Here, we present an automatic learning-based 2D-3D image conversion approach, based on the key hypothesis that color images with similar structure likely present a similar depth structure. The presented algorithm estimates the depth of a color query image using the prior knowledge provided by a repository of color + depth images. The algorithm clusters this database according to structural similarity, and then creates a representative of each color-depth image cluster that is used as a prior depth map. The selection of the appropriate prior depth map for a given color query image is accomplished by comparing the structural similarity in the color domain between the query image and the database. The comparison is based on a K-Nearest Neighbor framework that uses a learning procedure to build an adaptive combination of image feature descriptors. The best correspondences determine the cluster and, in turn, the associated prior depth map. Finally, this prior estimate is enhanced through a segmentation-guided filtering that obtains the final depth map estimation. This approach has been tested using two publicly available databases, and compared with several state-of-the-art algorithms in order to prove its efficiency.
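
    The nearest-neighbor prior selection can be sketched as follows. This is a toy illustration under stated assumptions, not the paper's implementation: the 4-dimensional descriptors, the plain Euclidean distance and the depth averaging stand in for the learned, adaptively weighted descriptor combination and the cluster representatives described above.

```python
import numpy as np

def knn_prior_depth(query_feat, db_feats, db_depths, k=3):
    """Average the depth maps of the k color-wise most similar database
    images to form the prior depth map for the query image."""
    dists = np.linalg.norm(db_feats - query_feat, axis=1)
    nearest = np.argsort(dists)[:k]
    return db_depths[nearest].mean(axis=0)

# Toy 4-dimensional "structure descriptors" and 2x2 depth maps (illustrative).
db_feats = np.array([[0, 0, 0, 0],
                     [1, 1, 1, 1],
                     [10, 10, 10, 10]], dtype=float)
db_depths = np.array([[[1, 1], [1, 1]],
                      [[2, 2], [2, 2]],
                      [[9, 9], [9, 9]]], dtype=float)

# The query is closest to the first two images, so their depths are averaged.
prior = knn_prior_depth(np.array([0.4, 0.4, 0.4, 0.4]), db_feats, db_depths, k=2)
print(prior[0, 0])  # 1.5
```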

  3. On Coding Non-Contiguous Letter Combinations

    Directory of Open Access Journals (Sweden)

    Frédéric Dandurand

    2011-06-01

    Full Text Available Starting from the hypothesis that printed word identification initially involves the parallel mapping of visual features onto location-specific letter identities, we analyze the type of information that would be involved in optimally mapping this location-specific orthographic code onto a location-invariant lexical code. We assume that some intermediate level of coding exists between individual letters and whole words, and that this involves the representation of letter combinations. We then investigate the nature of this intermediate level of coding given the constraints of optimality. This intermediate level of coding is expected to compress data while retaining as much information as possible about word identity. Information conveyed by letters is a function of how much they constrain word identity and how visible they are. Optimization of this coding is a combination of minimizing resources (using the most compact representations and maximizing information. We show that in a large proportion of cases, non-contiguous letter sequences contain more information than contiguous sequences, while at the same time requiring less precise coding. Moreover, we found that the best predictor of human performance in orthographic priming experiments was within-word ranking of conditional probabilities, rather than average conditional probabilities. We conclude that from an optimality perspective, readers learn to select certain contiguous and non-contiguous letter combinations as information that provides the best cue to word identity.
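
    The informativeness of contiguous versus non-contiguous letter pairs can be illustrated with a toy lexicon. This is a simplified sketch: scoring a pair as 1 / (number of words containing it) is an illustrative stand-in for the conditional-probability and visibility measures analyzed in the paper.

```python
from collections import Counter
from itertools import combinations

def letter_pairs(word):
    """All ordered letter pairs; gap 0 is contiguous, gap >= 1 non-contiguous."""
    return [(word[i], word[j], j - i - 1)
            for i, j in combinations(range(len(word)), 2)]

def pair_informativeness(lexicon):
    """A pair present in fewer words constrains word identity more strongly."""
    counts = Counter()
    for word in lexicon:
        for a, b in {(a, b) for a, b, _ in letter_pairs(word)}:
            counts[(a, b)] += 1
    return {pair: 1.0 / n for pair, n in counts.items()}

lexicon = ["cart", "card", "care", "core", "tart"]
info = pair_informativeness(lexicon)
# The non-contiguous pair c..d occurs only in "card": a perfect cue.
print(info[("c", "d")])  # 1.0
# The pair c..r occurs in four of the five words: a weak cue.
print(info[("c", "r")])  # 0.25
```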

  4. Analysis of facial expressions in parkinson's disease through video-based automatic methods.

    Science.gov (United States)

    Bandini, Andrea; Orlandi, Silvia; Escalante, Hugo Jair; Giovannelli, Fabio; Cincotta, Massimo; Reyes-Garcia, Carlos A; Vanni, Paola; Zaccara, Gaetano; Manfredi, Claudia

    2017-04-01

    The automatic analysis of facial expressions is an evolving field that finds several clinical applications. One of these applications is the study of facial bradykinesia in Parkinson's disease (PD), which is a major motor sign of this neurodegenerative illness. Facial bradykinesia consists of the reduction or loss of facial movements and emotional facial expressions, known as hypomimia. In this work we propose an automatic method for studying facial expressions in PD patients relying on video-based methods. Seventeen Parkinsonian patients and 17 healthy control subjects were asked to show basic facial expressions, upon request of the clinician and after the imitation of a visual cue on a screen. Through an existing face tracker, the Euclidean distance of the facial model from a neutral baseline was computed in order to quantify the changes in facial expressivity during the tasks. Moreover, an automatic facial expression recognition algorithm was trained in order to study how PD expressions differed from the standard expressions. Results show that control subjects exhibited on average larger distances than PD patients across the tasks, confirming that control subjects show larger movements during both posed and imitated facial expressions. Moreover, our results demonstrate that anger and disgust are the two most impaired expressions in PD patients. Contactless video-based systems can be important techniques for analyzing facial expressions also in rehabilitation, in particular speech therapy, where patients could get a definite advantage from real-time feedback about the proper facial expressions/movements to perform. Copyright © 2017 Elsevier B.V. All rights reserved.

  5. 46 CFR 154.182 - Contiguous hull structure: Production weld test.

    Science.gov (United States)

    2010-10-01

    ... 46 Shipping 5 2010-10-01 2010-10-01 false Contiguous hull structure: Production weld test. 154.182... Equipment Hull Structure § 154.182 Contiguous hull structure: Production weld test. If a portion of the contiguous hull structure is designed for a temperature colder than −34 °C (−30 °F) and is not part of the...

  6. A dual-adaptive support-based stereo matching algorithm

    Science.gov (United States)

    Zhang, Yin; Zhang, Yun

    2017-07-01

    Many stereo matching algorithms use fixed color thresholds and a rigid cross skeleton to segment supports (viz., the Cross method), which, however, does not work well across different images. To address this issue, this paper proposes a novel dual adaptive support (DAS)-based stereo matching method, which uses both appearance and shape information of a local region to segment supports automatically, and then integrates the DAS-based cost aggregation with the absolute difference plus census transform cost, scanline optimization and disparity refinement to develop a stereo matching system. The performance of the DAS method is evaluated on the Middlebury benchmark and compared with the Cross method. The results show that the average error for the DAS method is 25.06% lower than that for the Cross method, indicating that the proposed method is more accurate, has fewer parameters and is suitable for parallel computing.
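
    The census transform component of the matching cost can be sketched as follows: each pixel is encoded as a bit string of neighbor-versus-center comparisons, and the cost between two pixels is the Hamming distance between their codes. A minimal, unoptimized sketch; the paper combines this with an absolute-difference term.

```python
import numpy as np

def census_transform(img, radius=1):
    """Encode each interior pixel as a bit string: 1 where neighbor < center."""
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.uint32)
    for y in range(radius, h - radius):
        for x in range(radius, w - radius):
            center = img[y, x]
            code = 0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    if dy == 0 and dx == 0:
                        continue
                    code = (code << 1) | (1 if img[y + dy, x + dx] < center else 0)
            out[y, x] = code
    return out

def hamming_cost(c1, c2):
    """Matching cost between two census codes = number of differing bits."""
    return bin(int(c1) ^ int(c2)).count("1")

img = np.array([[1, 2, 3],
                [4, 5, 6],
                [7, 8, 9]])
code = census_transform(img)[1, 1]
# Neighbors 1, 2, 3, 4 are below the center 5; 6, 7, 8, 9 are not.
print(format(int(code), "08b"))  # 11110000
```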

  7. DeepSleepNet: A Model for Automatic Sleep Stage Scoring Based on Raw Single-Channel EEG.

    Science.gov (United States)

    Supratak, Akara; Dong, Hao; Wu, Chao; Guo, Yike

    2017-11-01

    This paper proposes a deep learning model, named DeepSleepNet, for automatic sleep stage scoring based on raw single-channel EEG. Most existing methods rely on hand-engineered features, which require prior knowledge of sleep analysis. Only a few of them encode temporal information, such as transition rules, which is important for identifying the next sleep stage, into the extracted features. In the proposed model, we utilize convolutional neural networks to extract time-invariant features, and bidirectional long short-term memory to learn transition rules among sleep stages automatically from EEG epochs. We implement a two-step training algorithm to train our model efficiently. We evaluated our model using different single-channel EEGs (F4-EOG (left), Fpz-Cz, and Pz-Oz) from two public sleep data sets that have different properties (e.g., sampling rate) and scoring standards (AASM and R&K). The results showed that our model achieved overall accuracy and macro F1-scores (MASS: 86.2% and 81.7; Sleep-EDF: 82.0% and 76.9) similar to the state-of-the-art methods (MASS: 85.9% and 80.5; Sleep-EDF: 78.9% and 73.7) on both data sets. This demonstrates that, without changing the model architecture or the training algorithm, our model can automatically learn features for sleep stage scoring from different raw single-channel EEGs from different data sets, without utilizing any hand-engineered features.

  8. An extension theory-based maximum power tracker using a particle swarm optimization algorithm

    International Nuclear Information System (INIS)

    Chao, Kuei-Hsiang

    2014-01-01

    Highlights: • We propose an adaptive maximum power point tracking (MPPT) approach for PV systems. • Transient and steady-state performance in the tracking process is improved. • The proposed MPPT can automatically tune the tracking step size along a P–V curve. • A PSO algorithm is used to determine the weighting values of extension theory. - Abstract: The aim of this work is to present an adaptive maximum power point tracking (MPPT) approach for photovoltaic (PV) power generation systems. Integrating extension theory with the conventional perturb and observe method, a maximum power point (MPP) tracker is able to automatically tune the tracking step size by way of category recognition along a P–V characteristic curve. Accordingly, the transient and steady-state performance in the tracking process is improved. Furthermore, an optimization approach based on a particle swarm optimization (PSO) algorithm is proposed to reduce the complexity of determining the weighting values. Finally, the simulated improvement in tracking performance is experimentally validated by an MPP tracker with a programmable system-on-chip (PSoC) based controller.
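
    The perturb and observe core with an adaptive step can be sketched as below. This is a minimal illustration: the toy quadratic P-V curve and the slope-proportional step rule are assumptions standing in for a real PV characteristic and for the extension-theory category recognition used in the paper to choose the step size.

```python
def pv_power(v):
    """Toy concave P-V curve with its maximum power point at v = 17 V
    (a stand-in for a real photovoltaic characteristic)."""
    return 100.0 - (v - 17.0) ** 2

def perturb_and_observe(v0=5.0, steps=200):
    """P&O hill climbing with an adaptive step: large steps while the power
    slope is steep, small steps near the MPP to reduce oscillation."""
    v = v0
    p = pv_power(v)
    step, direction = 1.0, 1.0
    for _ in range(steps):
        v_next = v + direction * step
        p_next = pv_power(v_next)
        dp = p_next - p
        if dp < 0:
            direction = -direction            # overshot the peak: reverse
        slope = abs(dp) / max(step, 1e-9)     # |dP/dV| estimate
        step = min(1.0, 0.05 * slope + 0.01)  # adaptive step size
        v, p = v_next, p_next
    return v

v_mpp = perturb_and_observe()
print(round(v_mpp, 1))
```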

  9. Successive approximation algorithm for beam-position-monitor-based LHC collimator alignment

    Science.gov (United States)

    Valentino, Gianluca; Nosych, Andriy A.; Bruce, Roderik; Gasior, Marek; Mirarchi, Daniele; Redaelli, Stefano; Salvachua, Belen; Wollmann, Daniel

    2014-02-01

    Collimators with embedded beam position monitor (BPM) button electrodes will be installed in the Large Hadron Collider (LHC) during the current long shutdown period. For the subsequent operation, BPMs will allow the collimator jaws to be kept centered around the beam orbit. In this manner, a better beam cleaning efficiency and machine protection can be provided at unprecedented higher beam energies and intensities. A collimator alignment algorithm is proposed to center the jaws automatically around the beam. The algorithm is based on successive approximation and takes into account a correction of the nonlinear BPM sensitivity to beam displacement and an asymmetry of the electronic channels processing the BPM electrode signals. A software implementation was tested with a prototype collimator in the Super Proton Synchrotron. This paper presents results of the tests along with some considerations for eventual operation in the LHC.

  10. Designing a Method for AN Automatic Earthquake Intensities Calculation System Based on Data Mining and On-Line Polls

    Science.gov (United States)

    Liendo Sanchez, A. K.; Rojas, R.

    2013-05-01

    Seismic intensities can be calculated using the Modified Mercalli Intensity (MMI) scale or the European Macroseismic Scale (EMS-98), among others, which are based on a series of qualitative aspects related to a group of subjective factors describing human perception, effects on nature or objects, and structural damage due to the occurrence of an earthquake. On-line polls allow experts to get an overview of the consequences of an earthquake without going to the affected locations. However, this can be hard work if the polls are not properly automated. Taking into account that the answers given to these polls are subjective, and that a number of them have already been classified for some past earthquakes, it is possible to use data mining techniques to automate this process and to obtain preliminary results based on the on-line polls. To achieve this goal, a predictive model has been used, employing a classifier based on supervised learning techniques such as the decision tree algorithm and a group of polls based on the MMI and EMS-98 scales. It summarizes the most important questions of the poll and recursively divides the instance space corresponding to each question (nodes), where each node splits the space depending on the possible answers. The implementation was done with Weka, a collection of machine learning algorithms for data mining tasks, using the J48 algorithm, which is an implementation of the C4.5 algorithm for decision tree models. By doing this, it was possible to obtain a preliminary model able to identify up to 4 different seismic intensities, with 73% of polls correctly classified. The error obtained is rather high; therefore, we will update the on-line poll in order to improve the results, based on just one scale, for instance the MMI. Besides, the integration of this automatic seismic intensity methodology, with a low error probability, with a basic georeferencing system will allow preliminary isoseismal maps to be generated.
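
    The core of the C4.5/J48 split selection is information gain over poll answers. A minimal sketch with a hypothetical two-question poll and made-up MMI labels (the question names, answers and classes are illustrative, not the paper's data); Weka's J48 applies this criterion recursively and with gain-ratio refinements.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label list, in bits."""
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def info_gain(rows, labels, question):
    """Information gain from splitting the polls on one question."""
    groups = {}
    for row, label in zip(rows, labels):
        groups.setdefault(row[question], []).append(label)
    n = len(labels)
    remainder = sum(len(g) / n * entropy(g) for g in groups.values())
    return entropy(labels) - remainder

# Hypothetical polls with assigned MMI intensity classes.
rows = [
    {"felt": "strong", "objects_fell": "yes"},
    {"felt": "strong", "objects_fell": "yes"},
    {"felt": "strong", "objects_fell": "no"},
    {"felt": "weak",   "objects_fell": "no"},
    {"felt": "weak",   "objects_fell": "no"},
    {"felt": "weak",   "objects_fell": "no"},
]
labels = ["VI", "VI", "IV", "III", "III", "III"]

# The question with the highest gain becomes the root split of the tree.
gains = {q: info_gain(rows, labels, q) for q in ("felt", "objects_fell")}
print(max(gains, key=gains.get))  # felt
```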

  11. Markov random field based automatic image alignment for electron tomography.

    Science.gov (United States)

    Amat, Fernando; Moussavi, Farshid; Comolli, Luis R; Elidan, Gal; Downing, Kenneth H; Horowitz, Mark

    2008-03-01

    We present a method for automatic full-precision alignment of the images in a tomographic tilt series. Full-precision automatic alignment of cryo electron microscopy images has remained a difficult challenge to date, due to the limited electron dose and low image contrast. These factors lead to a poor signal-to-noise ratio (SNR) in the images, which causes automatic feature trackers to generate errors, even with high-contrast gold particles as fiducial features. To enable fully automatic alignment for full-precision reconstructions, we frame the problem probabilistically as finding the most likely particle tracks given a set of noisy images, using contextual information to make the solution more robust to the noise in each image. To solve this maximum likelihood problem, we use Markov Random Fields (MRF) to establish the correspondence of features in alignment, and robust optimization for projection model estimation. The resulting algorithm, called Robust Alignment and Projection Estimation for Tomographic Reconstruction, or RAPTOR, has not needed any manual intervention for the difficult datasets we have tried, and has provided sub-pixel alignment as good as the manual approach by an expert user. We are able to automatically map complete and partial marker trajectories and thus obtain highly accurate image alignment. Our method has been applied to challenging cryo electron tomographic datasets with low SNR from intact bacterial cells, as well as several plastic-section and X-ray datasets.

  12. Atlas-based automatic segmentation of head and neck organs at risk and nodal target volumes: a clinical validation

    International Nuclear Information System (INIS)

    Daisne, Jean-François; Blumhofer, Andreas

    2013-01-01

    Intensity modulated radiotherapy for head and neck cancer necessitates accurate definition of organs at risk (OAR) and clinical target volumes (CTV). This crucial step is time consuming and prone to inter- and intra-observer variations. Automatic segmentation by atlas deformable registration may help to reduce time and variations. We aim to test a new commercial atlas algorithm for automatic segmentation of OAR and CTV in both ideal and clinical conditions. The updated Brainlab automatic head and neck atlas segmentation was tested on 20 patients: 10 cN0-stages (ideal population) and 10 unselected N-stages (clinical population). Following manual delineation of OAR and CTV, automatic segmentation of the same set of structures was performed and afterwards manually corrected. Dice Similarity Coefficient (DSC), Average Surface Distance (ASD) and Maximal Surface Distance (MSD) were calculated for “manual to automatic” and “manual to corrected” volume comparisons. In both groups, automatic segmentation saved about 40% of the corresponding manual segmentation time. This effect was more pronounced for OAR than for CTV. The editing of the automatically obtained contours significantly improved DSC, ASD and MSD. Large distortions of normal anatomy or lack of iodine contrast were the limiting factors. The updated Brainlab atlas-based automatic segmentation tool for head and neck cancer patients is time-saving but still necessitates review and correction by an expert.

  13. Automatic image-based analyses using a coupled quadtree-SBFEM/SCM approach

    Science.gov (United States)

    Gravenkamp, Hauke; Duczek, Sascha

    2017-10-01

    Quadtree-based domain decomposition algorithms offer an efficient option to create meshes for automatic image-based analyses. Without introducing hanging nodes the scaled boundary finite element method (SBFEM) can directly operate on such meshes by only discretizing the edges of each subdomain. However, the convergence of a numerical method that relies on a quadtree-based geometry approximation is often suboptimal due to the inaccurate representation of the boundary. To overcome this problem a combination of the SBFEM with the spectral cell method (SCM) is proposed. The basic idea is to treat each uncut quadtree cell as an SBFEM polygon, while all cut quadtree cells are computed employing the SCM. This methodology not only reduces the required number of degrees of freedom but also avoids a two-dimensional quadrature in all uncut quadtree cells. Numerical examples including static, harmonic, modal and transient analyses of complex geometries are studied, highlighting the performance of this novel approach.

  14. Development of Automatic Cluster Algorithm for Microcalcification in Digital Mammography

    International Nuclear Information System (INIS)

    Choi, Seok Yoon; Kim, Chang Soo

    2009-01-01

    Digital mammography is an efficient imaging technique for the detection and diagnosis of breast pathological disorders. Six mammographic criteria, namely the number of clusters and the number, size, extent and morphologic shape of microcalcifications, as well as the presence of a mass, were reviewed, and their correlation with pathologic diagnosis was evaluated. It is very important to find breast cancer early, when treatment can reduce deaths from breast cancer and the extent of breast incision. In breast cancer screening, mammography is typically used to view the internal organization. Clustered microcalcifications on mammography represent an important feature of breast masses, especially of intraductal carcinoma. Because microcalcifications have a high correlation with breast cancer, a cluster of microcalcifications can be very helpful for the clinician in predicting breast cancer. For this study, three steps of quantitative evaluation are proposed: DoG filtering, adaptive thresholding, and expectation maximization. Through the proposed algorithm, the number of calcifications and the length of each cluster in the distribution of microcalcifications can be measured, and these can be used as indicators for a primary automatic diagnosis of breast cancer.
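
    The first two of the three steps can be sketched in a few lines. This is a toy illustration on a synthetic patch, assuming a difference-of-Gaussians band-pass to enhance small bright spots and a mean-plus-k-sigma adaptive threshold; the expectation-maximization clustering step is not shown.

```python
import numpy as np

def gaussian_kernel(sigma):
    """Normalized 1-D Gaussian kernel with a 3-sigma radius."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    return k / k.sum()

def gaussian_blur(img, sigma):
    """Separable Gaussian blur: 1-D convolution along rows, then columns."""
    k = gaussian_kernel(sigma)
    out = np.apply_along_axis(np.convolve, 1, img, k, mode="same")
    out = np.apply_along_axis(np.convolve, 0, out, k, mode="same")
    return out

def dog_filter(img, sigma_small=1.0, sigma_large=3.0):
    """Difference of Gaussians: a band-pass that enhances small bright spots
    such as microcalcifications while suppressing smooth background."""
    return gaussian_blur(img, sigma_small) - gaussian_blur(img, sigma_large)

# Synthetic patch: flat background with one bright calcification-like spot.
img = np.zeros((31, 31))
img[15, 15] = 1.0
resp = dog_filter(img)
candidates = resp > resp.mean() + 3 * resp.std()   # adaptive threshold
print(np.unravel_index(np.argmax(resp), resp.shape))  # (15, 15)
```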

  15. Spatial Contiguity and Incidental Learning in Multimedia Environments

    Science.gov (United States)

    Paek, Seungoh; Hoffman, Daniel L.; Saravanos, Antonios

    2017-01-01

    Drawing on dual-process theories of cognitive function, the degree to which spatial contiguity influences incidental learning outcomes was examined. It was hypothesized that spatial contiguity would mediate what was learned even in the absence of an explicit learning goal. To test this hypothesis, 149 adults completed a multimedia-related task…

  16. Automatic Clustering Using FSDE-Forced Strategy Differential Evolution

    Science.gov (United States)

    Yasid, A.

    2018-01-01

    Cluster analysis is important in data mining for unsupervised data, because no adequate prior knowledge is available. One of the important tasks is defining the number of clusters without user involvement, which is known as automatic clustering. This study aims to determine the number of clusters automatically, utilizing forced strategy differential evolution (AC-FSDE). Two mutation parameters, namely a constant parameter and a variable parameter, are employed to boost differential evolution performance. Four well-known benchmark datasets were used to evaluate the algorithm. Moreover, the result is compared with other state-of-the-art automatic clustering methods. The experimental results show that AC-FSDE is better than, or competitive with, other existing automatic clustering algorithms.
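
    The differential evolution engine underlying such methods can be sketched as a standard DE/rand/1/bin loop. This is a generic illustration, not AC-FSDE itself: the constant crossover rate and the linearly decaying mutation factor F merely illustrate the abstract's "constant parameter" and "variable parameter" idea, here minimizing a sphere function rather than a clustering criterion.

```python
import numpy as np

def differential_evolution(f, bounds, pop_size=20, gens=100, seed=0):
    """DE/rand/1/bin with a constant crossover rate and a mutation factor F
    that decreases over the generations (the 'variable parameter')."""
    rng = np.random.default_rng(seed)
    dim = len(bounds)
    lo, hi = np.asarray(bounds, dtype=float).T
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fit = np.array([f(x) for x in pop])
    cr = 0.9                                   # constant parameter
    for g in range(gens):
        F = 0.9 - 0.5 * g / gens               # variable parameter: 0.9 -> 0.4
        for i in range(pop_size):
            others = [j for j in range(pop_size) if j != i]
            a, b, c = pop[rng.choice(others, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)
            mask = rng.random(dim) < cr
            mask[rng.integers(dim)] = True     # guarantee one mutated gene
            trial = np.where(mask, mutant, pop[i])
            f_trial = f(trial)
            if f_trial < fit[i]:               # greedy selection
                pop[i], fit[i] = trial, f_trial
    best = fit.argmin()
    return pop[best], float(fit[best])

# Sphere function: global minimum 0 at the origin.
x_best, f_best = differential_evolution(lambda v: float(np.dot(v, v)),
                                        bounds=[(-5.0, 5.0)] * 3)
print(round(f_best, 6))
```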

  17. Automatic segmentation of psoriasis lesions

    Science.gov (United States)

    Ning, Yang; Shi, Chenbo; Wang, Li; Shu, Chang

    2014-10-01

    The automatic segmentation of psoriatic lesions has been widely researched in recent years. It is an important step in computer-aided methods of calculating PASI scores for the assessment of lesions. Current algorithms can only handle single erythema or only deal with scaling segmentation, whereas in practice scaling and erythema are often mixed together. In order to segment the lesion area, this paper proposes an algorithm based on random forests with color and texture features. The algorithm has three steps. In the first step, polarized light is applied during imaging, exploiting the skin's Tyndall effect, to eliminate reflections, and the Lab color space is used to fit human perception. In the second step, a sliding window and its sub-windows are used to obtain texture and color features; here, an image roughness feature is defined so that scaling can be easily separated from normal skin. In the final step, random forests are used to ensure the generalization ability of the algorithm. This algorithm gives reliable segmentation results even for images with different lighting conditions and skin types. On the data set offered by Union Hospital, more than 90% of images can be segmented accurately.

  18. Thermogram breast cancer prediction approach based on Neutrosophic sets and fuzzy c-means algorithm.

    Science.gov (United States)

    Gaber, Tarek; Ismail, Gehad; Anter, Ahmed; Soliman, Mona; Ali, Mona; Semary, Noura; Hassanien, Aboul Ella; Snasel, Vaclav

    2015-08-01

    The early detection of breast cancer helps many women survive. In this paper, a CAD system classifying breast cancer thermograms into normal and abnormal is proposed. This approach consists of two main phases: automatic segmentation and classification. For the former phase, an improved segmentation approach based on both Neutrosophic sets (NS) and an optimized fast fuzzy c-means (F-FCM) algorithm is proposed. A post-segmentation process is also suggested to segment the breast parenchyma (i.e., the ROI) from thermogram images. For classification, different kernel functions of the Support Vector Machine (SVM) are used to classify the breast parenchyma into normal or abnormal cases. Using a benchmark database, the proposed CAD system was evaluated based on precision, recall, and accuracy, as well as by comparison with related work. The experimental results showed that the system is a very promising step toward automatic diagnosis of breast cancer using thermograms, as the accuracy reached 100%.
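
    The fuzzy c-means component can be sketched with the standard alternating updates (centroids from membership-weighted means, memberships from inverse relative distances). This is the textbook FCM on toy 2-D points, not the paper's optimized F-FCM with Neutrosophic preprocessing.

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, iters=100, seed=0):
    """Standard fuzzy c-means: alternate centroid and membership updates."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)          # memberships sum to 1 per point
    p = 2.0 / (m - 1.0)
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        inv = d ** -p
        U = inv / inv.sum(axis=1, keepdims=True)
    return centers, U

# Two well-separated 2-D blobs standing in for "normal" and "abnormal" pixels.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.3, size=(30, 2)),
               rng.normal(5.0, 0.3, size=(30, 2))])
centers, U = fuzzy_c_means(X, c=2)
print(np.round(np.sort(centers[:, 0])))
```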

  19. Simulating Deformations of MR Brain Images for Validation of Atlas-based Segmentation and Registration Algorithms

    OpenAIRE

    Xue, Zhong; Shen, Dinggang; Karacali, Bilge; Stern, Joshua; Rottenberg, David; Davatzikos, Christos

    2006-01-01

    Simulated deformations and images can act as the gold standard for evaluating various template-based image segmentation and registration algorithms. Traditional deformable simulation methods, such as the use of analytic deformation fields or the displacement of landmarks followed by some form of interpolation, are often unable to construct rich (complex) and/or realistic deformations of anatomical organs. This paper presents new methods aiming to automatically simulate realistic inter- and in...

  20. 46 CFR 154.172 - Contiguous steel hull structure.

    Science.gov (United States)

    2010-10-01

    ... 46 Shipping 5 2010-10-01 2010-10-01 false Contiguous steel hull structure. 154.172 Section 154.172... STANDARDS FOR SELF-PROPELLED VESSELS CARRYING BULK LIQUEFIED GASES Design, Construction and Equipment Hull Structure § 154.172 Contiguous steel hull structure. (a) Except as allowed in paragraphs (b) and (c) of this...

  1. Image-based automatic recognition of larvae

    Science.gov (United States)

    Sang, Ru; Yu, Guiying; Fan, Weijun; Guo, Tiantai

    2010-08-01

    To date, research on quarantine pest recognition has mainly focused on imagoes (adult insects). However, pests in their larval stage are latent, and larvae spread abroad easily with the circulation of agricultural and forest products. This paper presents the recognition of larvae, as new research objects, by means of machine vision, image processing and pattern recognition. More visual information is retained and the recognition rate is improved when color image segmentation is applied to images of larvae. Owing to its affine, perspective and brightness invariance, the scale-invariant feature transform (SIFT) is adopted for feature extraction. A neural network algorithm is utilized for pattern recognition, and the automatic identification of larva images is successfully achieved with satisfactory results.

  2. Automatic Motion Generation for Robotic Milling Optimizing Stiffness with Sample-Based Planning

    Directory of Open Access Journals (Sweden)

    Julian Ricardo Diaz Posada

    2017-01-01

    Full Text Available Optimal and intuitive robotic machining is still a challenge. One of the main reasons for this is the lack of robot stiffness, which also depends on the robot's position in Cartesian space. To make up for this deficiency, and with the aim of increasing robot machining accuracy, this contribution describes a solution approach for optimizing the stiffness over a desired milling path using the free degree of freedom of the machining process. The optimal motion is computed based on the semantic and mathematical interpretation of the manufacturing process, modeled in terms of its components: product, process and resource. A sample-based motion problem is configured automatically, and the transition-based rapidly-exploring random tree (T-RRT) algorithm is used to compute an optimal motion. The approach is simulated in CAM software, revealing its functionality and outlining future potential for optimal motion generation in robotic machining processes.

  3. Developing operation algorithms for vision subsystems in autonomous mobile robots

    Science.gov (United States)

    Shikhman, M. V.; Shidlovskiy, S. V.

    2018-05-01

    The paper analyzes algorithms for selecting keypoints in an image for the subsequent automatic detection of people and obstacles. The algorithm is based on the histogram of oriented gradients (HOG) and the support vector machine (SVM). The combination of these methods allows successful selection of dynamic and static objects. The algorithm can be applied in various autonomous mobile robots.
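
    The HOG descriptor at the heart of this pipeline can be sketched in numpy: per-cell histograms of gradient orientations, weighted by gradient magnitude and normalized. A minimal single-cell sketch, omitting the block normalization and the SVM classifier trained on the resulting descriptors.

```python
import numpy as np

def hog_cell_histograms(img, cell=8, bins=9):
    """Gradient orientation histograms over non-overlapping cells."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180          # unsigned orientation
    h, w = img.shape
    bin_w = 180.0 / bins
    H = np.zeros((h // cell, w // cell, bins))
    for i in range(h // cell):
        for j in range(w // cell):
            m = mag[i*cell:(i+1)*cell, j*cell:(j+1)*cell].ravel()
            a = ang[i*cell:(i+1)*cell, j*cell:(j+1)*cell].ravel()
            idx = np.minimum((a // bin_w).astype(int), bins - 1)
            for k in range(bins):
                H[i, j, k] = m[idx == k].sum()
    # L2 normalization per cell makes the descriptor robust to illumination.
    norm = np.linalg.norm(H, axis=2, keepdims=True) + 1e-9
    return H / norm

# Vertical edge: gradients point horizontally -> energy lands in the 0° bin.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
H = hog_cell_histograms(img)
print(int(np.argmax(H[0, 0])))  # 0
```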

  4. Study on distributed generation algorithm of variable precision concept lattice based on ontology heterogeneous database

    Science.gov (United States)

    WANG, Qingrong; ZHU, Changfeng

    2017-06-01

    Integration of distributed heterogeneous data sources is a key issue in big data applications. In this paper, the strategy of variable precision is introduced into the concept lattice. A one-to-one mapping between the variable precision concept lattice and the ontology concept lattice is constructed to produce a local ontology, by building a variable precision concept lattice for each subsystem. A distributed generation algorithm for variable precision concept lattices over an ontology-based heterogeneous database is then proposed, drawing on the close relationship between concept lattices and ontology construction. Finally, taking the main concept lattice generated from an existing heterogeneous database as the standard, a case study is carried out to verify the feasibility and validity of the algorithm, and the generated main concept lattice is compared with the standard concept lattice. The analysis results show that the above algorithm can automate the construction of distributed concept lattices over heterogeneous data sources.

  5. A Robust Automated Cataract Detection Algorithm Using Diagnostic Opinion Based Parameter Thresholding for Telemedicine Application

    Directory of Open Access Journals (Sweden)

    Shashwat Pathak

    2016-09-01

    Full Text Available This paper proposes and evaluates an algorithm to automatically detect cataracts from color images of adult human subjects. Currently, methods available for cataract detection are based on the use of either a fundus camera or a digital single-lens reflex (DSLR) camera, both of which are very expensive. The main motive behind this work is to develop an inexpensive, robust and convenient algorithm which, in conjunction with suitable devices, will be able to diagnose the presence of cataract from true color images of an eye. An algorithm is proposed for cataract screening based on texture features: uniformity, intensity and standard deviation. These features are first computed and mapped to diagnostic opinion by an eye expert to define the basic thresholds of the screening system, and then tested on real subjects in an eye clinic. Finally, a tele-ophthalmology model using the proposed system has been suggested, which confirms the telemedicine application of the proposed system.
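
    The three texture features and the threshold-based screening can be sketched as follows. The threshold values below are illustrative placeholders only; in the paper they are derived by mapping the computed features to expert diagnostic opinion, and the synthetic "eye" arrays are toy stand-ins for real lens images.

```python
import numpy as np

def texture_features(gray):
    """Uniformity, mean intensity and standard deviation of a grayscale image."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()
    uniformity = float(np.sum(p ** 2))   # high for flat, low-texture regions
    return uniformity, float(gray.mean()), float(gray.std())

def screen_cataract(gray, t_uniformity=0.03, t_intensity=120.0, t_std=40.0):
    """Flag an image as cataractous when it is bright, washed out and low in
    texture. Thresholds here are hypothetical, not clinically derived."""
    u, mean, std = texture_features(gray)
    return u > t_uniformity and mean > t_intensity and std < t_std

rng = np.random.default_rng(0)
clear_eye = rng.integers(0, 256, (64, 64))           # high-texture, varied
cloudy_eye = 180 + rng.integers(-10, 10, (64, 64))   # bright, uniform
print(screen_cataract(clear_eye), screen_cataract(cloudy_eye))  # False True
```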

  6. A pattern recognition approach based on DTW for automatic transient identification in nuclear power plants

    International Nuclear Information System (INIS)

    Galbally, Javier; Galbally, David

    2015-01-01

    Highlights: • Novel transient identification method for NPPs. • Low complexity. • Low training data requirements. • High accuracy. • Fully reproducible protocol carried out on a real benchmark. - Abstract: Automatic identification of transients in nuclear power plants (NPPs) allows monitoring of the fatigue damage accumulated by critical components during plant operation, and is therefore of great importance for ensuring that usage factors remain within the original design bases postulated by the plant designer. Although several schemes to address this important issue have been explored in the literature, there is still no definitive solution available. In the present work, a new method for automatic transient identification is proposed, based on the Dynamic Time Warping (DTW) algorithm, widely used in related areas such as signature or speech recognition. The novel transient identification system is evaluated on real operational data following a rigorous pattern recognition protocol. Results show the high accuracy of the proposed approach, which is combined with other attractive features such as its low complexity and its very limited training data requirements.
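
    The DTW core and its use for nearest-template classification can be sketched as follows. The template names and toy signals are hypothetical; the paper's system operates on real plant measurements, but the distance computation is the standard DTW recurrence shown here.

```python
import math

def dtw_distance(a, b):
    """Dynamic time warping distance between two numeric sequences: the
    minimum cumulative |a_i - b_j| cost over all monotonic alignments."""
    n, m = len(a), len(b)
    D = [[math.inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # insertion
                                 D[i][j - 1],      # deletion
                                 D[i - 1][j - 1])  # match
    return D[n][m]

def classify_transient(signal, templates):
    """Label of the template with the smallest DTW distance to the signal."""
    return min(templates, key=lambda name: dtw_distance(signal, templates[name]))

# Toy plant signals: a time-stretched ramp still matches the ramp template,
# which is exactly the invariance DTW provides over plain Euclidean distance.
templates = {"heatup": [0, 1, 2, 3, 4], "trip": [4, 3, 2, 1, 0]}
observed = [0, 0, 1, 1, 2, 2, 3, 3, 4, 4]
print(classify_transient(observed, templates))  # heatup
```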

  7. Knowledge-based full-automatic control system for a nuclear ship reactor

    International Nuclear Information System (INIS)

    Shimazaki, J.; Nakazawa, T.; Yabuuchi, N.

    2000-01-01

    Plant operations aboard nuclear ships require quick judgements and actions due to changing marine conditions such as wind, waves and currents. Furthermore, additional human support is not available for nuclear ship operation at sea, so advanced automatic operations are necessary to ultimately reduce the number of operators required. Therefore, an advanced automatic operating system has been developed based on operational knowledge of the nuclear ship 'Mutsu' plant. The advanced automatic operating system includes both an automatic operation system and an operator-support system which assists operators in completing actions during plant accidents, anomaly diagnosis and plant supervision. These systems are largely developed using artificial intelligence techniques such as neural networks, fuzzy logic and knowledge-based expert systems. The automatic operation system is fundamentally based on the application of operators' knowledge of both normal (start-up to rated power level) and abnormal (after scram) operations. Comparing plant behavior from start-up to rated power under automatic operation with that under manual operation of 'Mutsu', stable automatic operation was obtained, almost the same as manual operation, within all operating limits. The abnormal automatic system relieves the heavy manual workload after scram or LOCA accidents. An integrated system combining the normal and abnormal automatic systems is being developed so that both systems interact smoothly. (author)

  8. Automatic multimodal detection for long-term seizure documentation in epilepsy.

    Science.gov (United States)

    Fürbass, F; Kampusch, S; Kaniusas, E; Koren, J; Pirker, S; Hopfengärtner, R; Stefan, H; Kluge, T; Baumgartner, C

    2017-08-01

    This study investigated the sensitivity and false detection rate of a multimodal automatic seizure detection algorithm and its applicability to reduced electrode montages for long-term seizure documentation in epilepsy patients. An automatic seizure detection algorithm based on EEG, EMG, and ECG signals was developed. EEG/ECG recordings of 92 patients from two epilepsy monitoring units, including 494 seizures, were used to assess detection performance. EMG data were extracted by bandpass filtering of EEG signals. Sensitivity and false detection rate were evaluated for each signal modality and for reduced electrode montages. All focal seizures evolving to bilateral tonic-clonic (BTCS, n=50) and 89% of focal seizures (FS, n=139) were detected. Average sensitivity was 94% in temporal lobe epilepsy (TLE) patients and 74% in extratemporal lobe epilepsy (XTLE) patients. Overall detection sensitivity was 86%. The average false detection rate was 12.8 false detections per 24 h (FD/24h) for TLE and 22 FD/24h for XTLE patients. Utilization of 8 frontal and temporal electrodes reduced average sensitivity from 86% to 81%. Our automatic multimodal seizure detection algorithm shows high sensitivity with full and reduced electrode montages. Evaluation of different signal modalities and electrode montages paves the way for semi-automatic seizure documentation systems. Copyright © 2017 International Federation of Clinical Neurophysiology. Published by Elsevier B.V. All rights reserved.
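    Extracting an EMG surrogate by bandpass filtering an EEG channel can be sketched with SciPy as follows (a generic illustration; the cut-off frequencies and filter order here are our assumptions, not the paper's values):

    ```python
    import numpy as np
    from scipy.signal import butter, filtfilt

    def emg_from_eeg(eeg, fs, lo=40.0, hi=95.0, order=4):
        """Band-pass an EEG channel to keep the high-frequency (EMG-like) content.

        lo/hi are hypothetical band edges in Hz; fs is the sampling rate.
        """
        nyq = fs / 2.0
        b, a = butter(order, [lo / nyq, hi / nyq], btype="band")
        # zero-phase filtering avoids shifting seizure onsets in time
        return filtfilt(b, a, eeg)
    ```
    
    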

  9. A new preprocessing parameter estimation based on geodesic active contour model for automatic vestibular neuritis diagnosis.

    Science.gov (United States)

    Ben Slama, Amine; Mouelhi, Aymen; Sahli, Hanene; Manoubi, Sondes; Mbarek, Chiraz; Trabelsi, Hedi; Fnaiech, Farhat; Sayadi, Mounir

    2017-07-01

    The diagnosis of vestibular neuritis (VN) presents many difficulties to traditional assessment methods. This paper deals with a fully automatic VN diagnostic system based on nystagmus parameter estimation using a pupil detection algorithm. A geodesic active contour model is implemented to find an accurate segmentation region of the pupil. The novelty of the proposed algorithm is to speed up the standard segmentation by using a specific mask located on the region of interest. This allows a drastic reduction in computing time and yields high performance and accuracy in the obtained results. After this fast segmentation step, the estimated parameters are represented in the temporal and frequency domains. A principal component analysis (PCA) selection procedure is then applied to obtain a reduced number of estimated parameters, which are used to train a multi neural network (MNN). Experimental results on 90 eye-movement videos show the effectiveness and accuracy of the proposed estimation algorithm versus previous work. Copyright © 2017 Elsevier B.V. All rights reserved.

  10. Successive approximation algorithm for beam-position-monitor-based LHC collimator alignment

    Directory of Open Access Journals (Sweden)

    Gianluca Valentino

    2014-02-01

    Full Text Available Collimators with embedded beam position monitor (BPM) button electrodes will be installed in the Large Hadron Collider (LHC) during the current long shutdown period. For the subsequent operation, BPMs will allow the collimator jaws to be kept centered around the beam orbit. In this manner, a better beam cleaning efficiency and machine protection can be provided at unprecedented higher beam energies and intensities. A collimator alignment algorithm is proposed to center the jaws automatically around the beam. The algorithm is based on successive approximation and takes into account a correction of the nonlinear BPM sensitivity to beam displacement and an asymmetry of the electronic channels processing the BPM electrode signals. A software implementation was tested with a prototype collimator in the Super Proton Synchrotron. This paper presents results of the tests along with some considerations for eventual operation in the LHC.
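    The successive-approximation idea — repeatedly read the (possibly miscalibrated) BPM offset and move the jaw pair until the reading is nulled — can be illustrated schematically. The callbacks and tolerance below are hypothetical; the real system additionally linearizes the BPM response:

    ```python
    def center_jaws(read_offset, apply_correction, tol=1e-3, max_iter=20):
        """Successively null the BPM-measured beam offset between the jaws.

        read_offset():       returns the current offset reading (hypothetical callback)
        apply_correction(d): shifts the jaw pair by d (hypothetical callback)
        """
        for i in range(max_iter):
            d = read_offset()
            if abs(d) < tol:
                return i          # converged after i corrections
            apply_correction(d)
        return max_iter
    ```

    Even when the sensitivity is underestimated, each pass removes a fixed fraction of the residual offset, so the loop converges geometrically.
    
    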

  11. Study on application of adaptive fuzzy control and neural network in the automatic leveling system

    Science.gov (United States)

    Xu, Xiping; Zhao, Zizhao; Lan, Weiyong; Sha, Lei; Qian, Cheng

    2015-04-01

    This paper discusses the application of adaptive fuzzy control and the BP neural network algorithm to a large-platform automatic leveling control system. The purpose is to develop a measurement system with fast platform leveling, so that measurement equipment installed on the leveling system can reach level quickly during precision measurement work, improving the efficiency of precision measurement. The paper focuses on analysis of the automatic leveling system based on a fuzzy controller, combining the fuzzy controller with a BP neural network and using the BP algorithm to refine the empirical rules, thereby constructing an adaptive fuzzy control system. The learning rate of the BP algorithm is also adjusted at run time to accelerate convergence. The simulation results show that the proposed control method can effectively improve the leveling precision of the automatic leveling system and shorten the leveling time.

  12. A synthesis/design optimization algorithm for Rankine cycle based energy systems

    International Nuclear Information System (INIS)

    Toffolo, Andrea

    2014-01-01

    The algorithm presented in this work has been developed to search for the optimal topology and design parameters of a set of Rankine cycles forming an energy system that absorbs/releases heat at different temperature levels and converts part of the absorbed heat into electricity. This algorithm can deal with several applications in the field of energy engineering: e.g., steam cycles or bottoming cycles in combined/cogenerative plants, steam networks, low temperature organic Rankine cycles. The main purpose of this algorithm is to overcome the limitations of the search space introduced by the traditional mixed-integer programming techniques, which assume that possible solutions are derived from a single superstructure embedding them all. The algorithm presented in this work is a hybrid evolutionary/traditional optimization algorithm organized in two levels. A complex original codification of the topology and the intensive design parameters of the system is managed by the upper level evolutionary algorithm according to the criteria set by the HEATSEP method, which are used for the first time to automatically synthesize a “basic” system configuration from a set of elementary thermodynamic cycles. The lower SQP (sequential quadratic programming) algorithm optimizes the objective function(s) with respect to cycle mass flow rates only, taking into account the heat transfer feasibility constraint within the undefined heat transfer section. A challenging example of application is also presented to show the capabilities of the algorithm. - Highlights: • Energy systems based on Rankine cycles are used in many applications. • A hybrid algorithm is proposed to optimize the synthesis/design of such systems. • The topology of the candidate solutions is not limited by a superstructure. • Topology is managed by the genetic operators of the upper level algorithm. • The effectiveness of the algorithm is proved in a complex test case

  13. Data-driven automatic parking constrained control for four-wheeled mobile vehicles

    Directory of Open Access Journals (Sweden)

    Wenxu Yan

    2016-11-01

    Full Text Available In this article, a novel data-driven constrained control scheme is proposed for automatic parking systems. The design of the proposed scheme only depends on the steering angle and the orientation angle of the car, and it does not involve any model information of the car. Therefore, the proposed scheme-based automatic parking system is applicable to different kinds of cars. In order to further reduce the desired trajectory coordinate tracking errors, a coordinates compensation algorithm is also proposed. In the design procedure of the controller, a novel dynamic anti-windup compensator is used to deal with the magnitude and rate saturations of the automatic parking control input. It is theoretically proven that all the signals in the closed-loop system are uniformly ultimately bounded based on the Lyapunov stability analysis method. Finally, a simulation comparison between the proposed scheme with coordinates compensation and a proportional-integral-derivative (PID) control algorithm is given. It is shown that the proposed scheme with coordinates compensation has smaller tracking errors and more rapid responses than the PID scheme.

  14. Automatic Mapping of Forest Stands Based on Three-Dimensional Point Clouds Derived from Terrestrial Laser-Scanning

    Directory of Open Access Journals (Sweden)

    Tim Ritter

    2017-07-01

    Full Text Available Mapping of exact tree positions can be regarded as a crucial task of field work associated with forest monitoring, especially on intensive research plots. We propose a two-stage density clustering approach for the automatic mapping of tree positions, and an algorithm for automatic tree diameter estimation based on terrestrial laser-scanning (TLS) point cloud data sampled under limited sighting conditions. We show that our novel approach is able to detect tree positions in a mixed and vertically structured stand with an overall accuracy of 91.6%, and with omission and commission errors of only 5.7% and 2.7%, respectively. Moreover, we were able to reproduce the stand's diameter at breast height (DBH) distribution, and to estimate single-tree DBH with a mean average deviation of ±2.90 cm compared with tape measurements as reference.
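    Density clustering of a horizontal slice of the point cloud, with one cluster centroid per stem, can be sketched as follows. DBSCAN here stands in for the paper's two-stage clustering, and the `eps`/`min_samples` values are illustrative assumptions:

    ```python
    import numpy as np
    from sklearn.cluster import DBSCAN

    def stem_positions(points_xy, eps=0.3, min_samples=10):
        """Cluster a horizontal TLS slice; each dense cluster centroid ≈ one stem.

        points_xy: (N, 2) array of x/y coordinates from a thin height slice.
        Returns a list of cluster centroids; noise points (label -1) are ignored.
        """
        labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points_xy)
        return [points_xy[labels == k].mean(axis=0)
                for k in sorted(set(labels) - {-1})]
    ```
    
    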

  15. The problem of automatic identification of concepts

    International Nuclear Information System (INIS)

    Andreewsky, Alexandre

    1975-11-01

    This paper deals with the problem of the automatic recognition of concepts and describes an important language tool, the ''linguistic filter'', which facilitates the construction of statistical algorithms. Certain special filters, of prepositions, conjunctions, negatives, logical implication, compound words, are presented. This is followed by a detailed description of a statistical algorithm allowing recognition of pronoun referents, and finally the problem of the automatic treatment of negatives in French is discussed [fr

  16. The combination of a histogram-based clustering algorithm and support vector machine for the diagnosis of osteoporosis

    International Nuclear Information System (INIS)

    Heo, Min Suk; Kavitha, Muthu Subash; Asano, Akira; Taguchi, Akira

    2013-01-01

    To prevent low bone mineral density (BMD), that is, osteoporosis, in postmenopausal women, it is essential to diagnose osteoporosis more precisely. This study presented an automatic approach utilizing a histogram-based automatic clustering (HAC) algorithm with a support vector machine (SVM) to analyse dental panoramic radiographs (DPRs) and thus improve diagnostic accuracy by identifying postmenopausal women with low BMD or osteoporosis. We integrated our newly proposed HAC algorithm with our previously designed computer-aided diagnosis system. The extracted moment-based features (mean, variance, skewness, and kurtosis) of the mandibular cortical width were employed for the radial basis function (RBF) SVM classifier. We also compared the diagnostic efficacy of the SVM model with that of the back-propagation (BP) neural network model. In this study, DPRs and BMD measurements of 100 postmenopausal women patients (aged >50 years), with no previous record of osteoporosis, were randomly selected for inclusion. The accuracy, sensitivity, and specificity of the BMD measurements using our HAC-SVM model to identify women with low BMD were 93.0% (88.0%-98.0%), 95.8% (91.9%-99.7%) and 86.6% (79.9%-93.3%), respectively, at the lumbar spine; and 89.0% (82.9%-95.1%), 96.0% (92.2%-99.8%) and 84.0% (76.8%-91.2%), respectively, at the femoral neck. Our experimental results suggest that the proposed HAC-SVM model combination applied to DPRs could be useful to assist dentists in early diagnosis and help to reduce the morbidity and mortality associated with low BMD and osteoporosis.

  17. Automatic media-adventitia IVUS image segmentation based on sparse representation framework and dynamic directional active contour model.

    Science.gov (United States)

    Zakeri, Fahimeh Sadat; Setarehdan, Seyed Kamaledin; Norouzi, Somayye

    2017-10-01

    Segmentation of the arterial wall boundaries from intravascular ultrasound images is an important image processing task in order to quantify arterial wall characteristics such as shape, area, thickness and eccentricity. Since manual segmentation of these boundaries is a laborious and time-consuming procedure, many researchers have attempted to develop (semi-)automatic segmentation techniques as a powerful tool for educational and clinical purposes, but as yet there is no clinically approved method on the market. This paper presents a deterministic-statistical strategy for automatic media-adventitia border detection by a fourfold algorithm. First, a smoothed initial contour is extracted based on classification in the sparse representation framework, which is combined with the dynamic directional convolution vector field. Next, an active contour model is utilized for the propagation of the initial contour toward the borders of interest. Finally, the extracted contour is refined in the leakage, side branch opening and calcification regions based on the image texture patterns. The performance of the proposed algorithm is evaluated by comparing the results to borders manually traced by an expert on 312 different IVUS images obtained from four different patients. The statistical analysis of the results demonstrates the efficiency of the proposed method in media-adventitia border detection, with sufficient consistency in the leakage and calcification regions. Copyright © 2017 Elsevier Ltd. All rights reserved.

  18. Advanced metaheuristic algorithms for laser optimization

    International Nuclear Information System (INIS)

    Tomizawa, H.

    2010-01-01

    A laser is one of the most important experimental tools. In the synchrotron radiation field, lasers are widely used for experiments with pump-probe techniques. Especially for X-ray FELs, a laser has important roles as a seed light source or as a photocathode-illuminating light source to generate a high-brightness electron bunch. Control of laser pulse characteristics is required for many kinds of experiments. However, the laser must be tuned and customized for each requirement by laser experts. Automatic laser tuning therefore needs to be realized with sophisticated algorithms. A metaheuristic algorithm is one of the useful candidates for finding a solution as close to the best as possible. The metaheuristic laser tuning system is expected to save human resources and time in laser preparation. I have shown successful results with a metaheuristic algorithm based on a genetic algorithm to optimize spatial (transverse) laser profiles, and with a hill-climbing method extended with fuzzy set theory to choose one of the best laser alignments automatically for each experimental requirement. (author)
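    As a toy illustration of the hill-climbing component (not the author's tuning system; the objective function below is a stand-in for a laser-profile quality metric, and all names are ours):

    ```python
    import random

    def hill_climb(f, x0, step=0.1, iters=500):
        """Stochastic hill climbing: accept a random neighbour whenever it improves f."""
        x, fx = list(x0), f(x0)
        for _ in range(iters):
            cand = [xi + random.uniform(-step, step) for xi in x]
            fc = f(cand)
            if fc > fx:          # maximise, e.g. a beam-quality figure of merit
                x, fx = cand, fc
        return x, fx
    ```

    In a real tuning loop, `f` would be a measured quantity (profile flatness, pulse energy), which is why derivative-free metaheuristics are attractive here.
    
    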

  19. Is STAPLE algorithm confident to assess segmentation methods in PET imaging?

    International Nuclear Information System (INIS)

    Dewalle-Vignion, Anne-Sophie; Betrouni, Nacim; Vermandel, Maximilien; Baillet, Clio

    2015-01-01

    Accurate tumor segmentation in [18F]-fluorodeoxyglucose positron emission tomography is crucial for tumor response assessment and target volume definition in radiation therapy. Evaluation of segmentation methods from clinical data without ground truth is usually based on physicians’ manual delineations. In this context, the simultaneous truth and performance level estimation (STAPLE) algorithm could be useful to manage the multi-observer variability. In this paper, we evaluated how accurately this algorithm could estimate the ground truth in PET imaging. A complete evaluation study using different criteria was performed on simulated data. The STAPLE algorithm was applied to manual and automatic segmentation results. A specific configuration of the implementation provided by the Computational Radiology Laboratory was used. The consensus obtained by the STAPLE algorithm from manual delineations appeared to be more accurate than the manual delineations themselves (80% of overlap). An improvement of the accuracy was also observed when applying the STAPLE algorithm to automatic segmentation results. The STAPLE algorithm, with the configuration used in this paper, is more appropriate than manual delineations alone or automatic segmentation results alone to estimate the ground truth in PET imaging. Therefore, it might be preferred to assess the accuracy of tumor segmentation methods in PET imaging. (paper)

  20. Is STAPLE algorithm confident to assess segmentation methods in PET imaging?

    Science.gov (United States)

    Dewalle-Vignion, Anne-Sophie; Betrouni, Nacim; Baillet, Clio; Vermandel, Maximilien

    2015-12-01

    Accurate tumor segmentation in [18F]-fluorodeoxyglucose positron emission tomography is crucial for tumor response assessment and target volume definition in radiation therapy. Evaluation of segmentation methods from clinical data without ground truth is usually based on physicians’ manual delineations. In this context, the simultaneous truth and performance level estimation (STAPLE) algorithm could be useful to manage the multi-observer variability. In this paper, we evaluated how accurately this algorithm could estimate the ground truth in PET imaging. A complete evaluation study using different criteria was performed on simulated data. The STAPLE algorithm was applied to manual and automatic segmentation results. A specific configuration of the implementation provided by the Computational Radiology Laboratory was used. The consensus obtained by the STAPLE algorithm from manual delineations appeared to be more accurate than the manual delineations themselves (80% of overlap). An improvement of the accuracy was also observed when applying the STAPLE algorithm to automatic segmentation results. The STAPLE algorithm, with the configuration used in this paper, is more appropriate than manual delineations alone or automatic segmentation results alone to estimate the ground truth in PET imaging. Therefore, it might be preferred to assess the accuracy of tumor segmentation methods in PET imaging.

  1. Oocytes Polar Body Detection for Automatic Enucleation

    Directory of Open Access Journals (Sweden)

    Di Chen

    2016-02-01

    Full Text Available Enucleation is a crucial step in cloning. In order to achieve automatic blind enucleation, we should detect the polar body of the oocyte automatically. The conventional polar body detection approaches have a low success rate or low efficiency. We propose a polar body detection method based on machine learning in this paper. On one hand, an improved Histogram of Oriented Gradients (HOG) algorithm is employed to extract features of polar body images, which increases the success rate. On the other hand, a position prediction method is put forward to narrow the search range of the polar body, which improves efficiency. Experiment results show that the success rate is 96% for various types of polar bodies. Furthermore, the method is applied to an enucleation experiment and improves the degree of automation of enucleation.
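    The idea behind HOG features — weight gradient orientations by gradient magnitude and histogram them — can be shown in a simplified form (a single global histogram rather than the cell/block layout of real HOG, and not the paper's improved variant):

    ```python
    import numpy as np

    def orientation_histogram(img, bins=9):
        """Magnitude-weighted histogram of gradient orientations (unsigned, 0-180°)."""
        gy, gx = np.gradient(img.astype(float))     # vertical, horizontal gradients
        mag = np.hypot(gx, gy)
        ang = np.degrees(np.arctan2(gy, gx)) % 180.0
        hist, _ = np.histogram(ang, bins=bins, range=(0, 180), weights=mag)
        n = np.linalg.norm(hist)
        return hist / n if n > 0 else hist          # L2-normalised descriptor
    ```

    A full HOG descriptor concatenates such histograms over a grid of cells, which is what gives the feature its spatial selectivity.
    
    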

  2. Automatic SIMD vectorization of SSA-based control flow graphs

    CERN Document Server

    Karrenberg, Ralf

    2015-01-01

    Ralf Karrenberg presents Whole-Function Vectorization (WFV), an approach that allows a compiler to automatically create code that exploits data-parallelism using SIMD instructions. Data-parallel applications such as particle simulations, stock option price estimation or video decoding require the same computations to be performed on huge amounts of data. Without WFV, one processor core executes a single instance of a data-parallel function. WFV transforms the function to execute multiple instances at once using SIMD instructions. The author describes an advanced WFV algorithm that includes a v

  3. A cloud-based system for automatic glaucoma screening.

    Science.gov (United States)

    Fengshou Yin; Damon Wing Kee Wong; Ying Quan; Ai Ping Yow; Ngan Meng Tan; Gopalakrishnan, Kavitha; Beng Hai Lee; Yanwu Xu; Zhuo Zhang; Jun Cheng; Jiang Liu

    2015-08-01

    In recent years, there has been increasing interest in the use of automatic computer-based systems for the detection of eye diseases including glaucoma. However, these systems are usually standalone software with basic functions only, limiting their usage on a large scale. In this paper, we introduce an online cloud-based system for automatic glaucoma screening through the use of medical image-based pattern classification technologies. It is designed in a hybrid cloud pattern to offer both accessibility and enhanced security. Raw data, including the patient's medical condition and fundus image, and the resultant medical reports are collected and distributed through the public cloud tier. In the private cloud tier, automatic analysis and assessment of colour retinal fundus images are performed. The ubiquitous anywhere-access nature of the system through the cloud platform facilitates a more efficient and cost-effective means of glaucoma screening, allowing the disease to be detected earlier and enabling early intervention and more efficient disease management.

  4. Comprehensive eye evaluation algorithm

    Science.gov (United States)

    Agurto, C.; Nemeth, S.; Zamora, G.; Vahtel, M.; Soliz, P.; Barriga, S.

    2016-03-01

    In recent years, several research groups have developed automatic algorithms to detect diabetic retinopathy (DR) in individuals with diabetes (DM), using digital retinal images. Studies have indicated that diabetics have 1.5 times the annual risk of developing primary open angle glaucoma (POAG) as do people without DM. Moreover, DM patients have 1.8 times the risk for age-related macular degeneration (AMD). Although numerous investigators are developing automatic DR detection algorithms, there have been few successful efforts to create an automatic algorithm that can detect other ocular diseases, such as POAG and AMD. Consequently, our aim in the current study was to develop a comprehensive eye evaluation algorithm that not only detects DR in retinal images, but also automatically identifies glaucoma suspects and AMD by integrating other personal medical information with the retinal features. The proposed system is fully automatic and provides the likelihood of each of the three eye diseases. The system was evaluated in two datasets of 104 and 88 diabetic cases. For each eye, we used two non-mydriatic digital color fundus photographs (macula and optic disc centered) and, when available, information about age, duration of diabetes, cataracts, hypertension, gender, and laboratory data. Our results show that the combination of multimodal features can increase the AUC by up to 5%, 7%, and 8% in the detection of AMD, DR, and glaucoma, respectively. Marked improvement was achieved when laboratory results were combined with retinal image features.

  5. Particle swarm optimization for automatic creation of complex graphic characters

    International Nuclear Information System (INIS)

    Fister, Iztok; Perc, Matjaž; Ljubič, Karin; Kamal, Salahuddin M.; Iglesias, Andres; Fister, Iztok

    2015-01-01

    Nature-inspired algorithms are a very promising tool for solving the hardest problems in computer science and mathematics. These algorithms are typically inspired by the fascinating behavior on display in biological systems, such as bee swarms or fish schools. So far, these algorithms have been applied in many practical applications. In this paper, we present a simple particle swarm optimization, which allows automatic creation of complex two-dimensional graphic characters. The method involves constructing the base characters, optimizing the modifications of the base characters with the particle swarm optimization algorithm, and finally generating the graphic characters from the solution. We demonstrate the effectiveness of our approach with the creation of a simple snowman, but we also outline in detail how more complex characters can be created.
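    A bare-bones particle swarm optimiser of the kind the abstract describes looks like this (a generic global-best PSO; all parameter values and names are our illustrative choices, and the objective would be the character-fitness function in the paper's setting):

    ```python
    import random

    def pso(f, dim, n=20, iters=100, lo=-5.0, hi=5.0, w=0.7, c1=1.5, c2=1.5):
        """Minimal particle swarm minimiser with a global-best topology."""
        pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
        vel = [[0.0] * dim for _ in range(n)]
        pbest = [p[:] for p in pos]
        pval = [f(p) for p in pos]
        g = pbest[min(range(n), key=lambda i: pval[i])][:]   # global best
        for _ in range(iters):
            for i in range(n):
                for d in range(dim):
                    # inertia + cognitive pull (own best) + social pull (swarm best)
                    vel[i][d] = (w * vel[i][d]
                                 + c1 * random.random() * (pbest[i][d] - pos[i][d])
                                 + c2 * random.random() * (g[d] - pos[i][d]))
                    pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
                v = f(pos[i])
                if v < pval[i]:
                    pval[i], pbest[i] = v, pos[i][:]
                    if v < f(g):
                        g = pos[i][:]
        return g
    ```
    
    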

  6. Automatic gender determination from 3D digital maxillary tooth plaster models based on the random forest algorithm and discrete cosine transform.

    Science.gov (United States)

    Akkoç, Betül; Arslan, Ahmet; Kök, Hatice

    2017-05-01

    One of the first stages in the identification of an individual is gender determination. Through gender determination, the search spectrum can be reduced. In disasters such as accidents or fires, which can render identification somewhat difficult, durable teeth are an important source for identification. This study proposes a smart system that can automatically determine gender using 3D digital maxillary tooth plaster models. The study group was composed of 40 Turkish individuals (20 female, 20 male) between the ages of 21 and 24. Using the iterative closest point (ICP) algorithm, tooth models were aligned, and after the segmentation process, models were transformed into depth images. The local discrete cosine transform (DCT) was used in the process of feature extraction, and the random forest (RF) algorithm was used for the process of classification. Classification was performed using 30 different seeds for random generator values and 10-fold cross-validation. A value of 85.166% was obtained for average classification accuracy (CA) and a value of 91.75% for the area under the ROC curve (AUC). A multi-disciplinary study is performed here that includes computer sciences, medicine and dentistry. A smart system is proposed for the determination of gender from 3D digital models of maxillary tooth plaster models. This study has the capacity to extend the field of gender determination from teeth. Copyright © 2017 Elsevier B.V. All rights reserved.
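    Keeping the low-frequency corner of a 2-D DCT as a compact feature vector (which can then be fed to a random forest classifier) can be sketched as follows. This is a generic illustration; the block size `k` and the local-DCT details of the study are not reproduced here:

    ```python
    import numpy as np
    from scipy.fft import dctn

    def dct_features(depth_img, k=8):
        """Flatten the k x k low-frequency DCT corner into a feature vector.

        Low-frequency coefficients capture the coarse shape of the depth image,
        which is what discriminative classifiers typically rely on.
        """
        coeffs = dctn(depth_img.astype(float), norm="ortho")
        return coeffs[:k, :k].ravel()
    ```

    The resulting vectors would be passed to e.g. `sklearn.ensemble.RandomForestClassifier` for the male/female decision.
    
    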

  7. Automatic classification of journalistic documents on the Internet

    Directory of Open Access Journals (Sweden)

    Elias OLIVEIRA

    Full Text Available Online journalism is increasing every day. There are many news agencies, newspapers, and magazines using digital publication in the global network. Documents published online are available to users, who use search engines to find them. In order to deliver documents that are relevant to the search, they must be indexed and classified. Due to the vast number of documents published online every day, a lot of research has been carried out to find ways to facilitate automatic document classification. The objective of the present study is to describe an experimental approach for the automatic classification of journalistic documents published on the Internet using the Vector Space Model for document representation. The model was tested based on a real journalism database, using algorithms that have been widely reported in the literature. This article also describes the metrics used to assess the performance of these algorithms and their required configurations. The results obtained show the efficiency of the method used and justify further research to find ways to facilitate the automatic classification of documents.
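    A minimal vector space model classifier — documents as tf-idf vectors, labels assigned by cosine similarity to the nearest training document — can be sketched as follows. The corpus and labels are invented for illustration and are not the study's data:

    ```python
    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer

    train_docs = [
        "the team won the football match with a late goal",
        "the striker scored twice in the second half",
        "the central bank raised interest rates again",
        "stock markets fell as the bank reported losses",
    ]
    train_labels = ["sports", "sports", "finance", "finance"]

    vectorizer = TfidfVectorizer()
    X = vectorizer.fit_transform(train_docs)          # documents as tf-idf vectors

    def classify(doc):
        """Nearest-neighbour label by cosine similarity in the vector space."""
        q = vectorizer.transform([doc])
        sims = (X @ q.T).toarray().ravel()            # rows are L2-normalised → dot = cosine
        return train_labels[int(np.argmax(sims))]
    ```
    
    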

  8. Deep Learning-Based Data Forgery Detection in Automatic Generation Control

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Fengli [Univ. of Arkansas, Fayetteville, AR (United States); Li, Qinghua [Univ. of Arkansas, Fayetteville, AR (United States)

    2017-10-09

    Automatic Generation Control (AGC) is a key control system in the power grid. It is used to calculate the Area Control Error (ACE) based on frequency and tie-line power flow between balancing areas, and then adjust power generation to maintain the power system frequency in an acceptable range. However, attackers might inject malicious frequency or tie-line power flow measurements to mislead AGC into performing false generation corrections that harm power grid operation. Such attacks are hard to detect since they do not violate physical power system models. In this work, we propose algorithms based on neural networks and the Fourier transform to detect data forgery attacks in AGC. Different from the few previous works that rely on accurate load prediction to detect data forgery, our solution only uses the ACE data already available in existing AGC systems. In particular, our solution learns the normal patterns of ACE time series and detects abnormal patterns caused by artificial attacks. Evaluations on the real ACE dataset show that our methods have high detection accuracy.

  9. An analytical fuzzy-based approach to L2-gain optimal control of input-affine nonlinear systems using Newton-type algorithm

    Science.gov (United States)

    Milic, Vladimir; Kasac, Josip; Novakovic, Branko

    2015-10-01

    This paper is concerned with L2-gain optimisation of input-affine nonlinear systems controlled by an analytic fuzzy logic system. Unlike conventional fuzzy-based strategies, the non-conventional analytic fuzzy control method does not require an explicit fuzzy rule base. As the first contribution of this paper, we prove, by using the Stone-Weierstrass theorem, that the proposed fuzzy system without a rule base is a universal approximator. The second contribution of this paper is an algorithm for solving a finite-horizon minimax problem for L2-gain optimisation. The proposed algorithm consists of a recursive chain rule for first- and second-order derivatives, Newton's method, the multi-step Adams method and automatic differentiation. Finally, the results of this paper are evaluated on a second-order nonlinear system.

  10. Moving Object Tracking and Avoidance Algorithm for Differential Driving AGV Based on Laser Measurement Technology

    Directory of Open Access Journals (Sweden)

    Pandu Sandi Pratama

    2012-12-01

    Full Text Available This paper proposes an algorithm to track the obstacle position and avoid moving objects for a differential driving Automatic Guided Vehicle (AGV) system in an industrial environment. This algorithm has several abilities, such as: detecting moving objects, predicting the velocity and direction of moving objects, predicting the collision possibility and planning the avoidance maneuver. For sensing the local environment and positioning, the laser measurement system LMS-151 and the laser navigation system NAV-200 are applied. Based on the measurement results of the sensors, the stationary and moving obstacles are detected and the collision possibility is calculated. The velocity and direction of the obstacle are predicted using a Kalman filter algorithm. Collision possibility, time, and position can be calculated by comparing the AGV movement with the obstacle prediction result obtained by the Kalman filter. Finally, the avoidance maneuver using the well-known tangent Bug algorithm is decided based on the calculated data. The effectiveness of the proposed algorithm is verified using simulation and experiment. Several examples of experimental conditions are presented using a stationary obstacle and moving obstacles. The simulation and experiment results show that the AGV can detect and avoid the obstacles successfully in all experimental conditions. [Keywords— Obstacle avoidance, AGV, differential drive, laser measurement system, laser navigation system].
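    A constant-velocity Kalman filter of the kind used for obstacle velocity prediction can be sketched as follows (a generic textbook formulation, not the authors' exact implementation; the noise covariances `q` and `r` are illustrative assumptions):

    ```python
    import numpy as np

    class CVKalman:
        """Constant-velocity Kalman filter for one planar obstacle; state = (x, y, vx, vy)."""

        def __init__(self, dt, q=1e-3, r=1e-2):
            self.F = np.array([[1, 0, dt, 0],
                               [0, 1, 0, dt],
                               [0, 0, 1, 0],
                               [0, 0, 0, 1]], float)     # constant-velocity motion model
            self.H = np.array([[1, 0, 0, 0],
                               [0, 1, 0, 0]], float)     # laser measures position only
            self.Q = q * np.eye(4)
            self.R = r * np.eye(2)
            self.x = np.zeros(4)
            self.P = np.eye(4)

        def step(self, z):
            # predict
            self.x = self.F @ self.x
            self.P = self.F @ self.P @ self.F.T + self.Q
            # update with laser measurement z = (x, y)
            S = self.H @ self.P @ self.H.T + self.R
            K = self.P @ self.H.T @ np.linalg.inv(S)
            self.x = self.x + K @ (np.asarray(z, float) - self.H @ self.x)
            self.P = (np.eye(4) - K @ self.H) @ self.P
            return self.x.copy()
    ```

    The filtered velocity estimate (vx, vy) is what makes collision-time prediction possible from position-only laser measurements.
    
    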

  11. Time-domain analysis of planar microstrip devices using a generalized Yee-algorithm based on unstructured grids

    Science.gov (United States)

    Gedney, Stephen D.; Lansing, Faiza

    1993-01-01

    The generalized Yee-algorithm is presented for the temporal full-wave analysis of planar microstrip devices. This algorithm has the significant advantage over the traditional Yee-algorithm that it is based on unstructured and irregular grids. The robustness of the generalized Yee-algorithm lies in the fact that structures containing curved conductors or complex three-dimensional geometries can be modeled more accurately, and much more conveniently, using standard automatic grid generation techniques. The generalized Yee-algorithm is based on the time-marching solution of the discrete form of Maxwell's equations in their integral form. To this end, the electric and magnetic fields are discretized over a dual, irregular, and unstructured grid. The primary grid is assumed to be composed of general fitted polyhedra distributed throughout the volume. The secondary grid (or dual grid) is built up of the closed polyhedra whose edges connect the centroids of adjacent primary cells, penetrating shared faces. Faraday's law and Ampere's law are used to update the fields normal to the primary and secondary grid faces, respectively. Subsequently, a correction scheme is introduced to project the normal fields onto the grid edges. It is shown that this scheme is stable, maintains second-order accuracy, and preserves the divergenceless nature of the flux densities. Finally, for computational efficiency the algorithm is structured as a series of sparse matrix-vector multiplications. Based on this scheme, the generalized Yee-algorithm has been implemented on vector and parallel high-performance computers in a highly efficient manner.

  12. Automatic food detection in egocentric images using artificial intelligence technology

    Science.gov (United States)

    Our objective was to develop an artificial intelligence (AI)-based algorithm which can automatically detect food items from images acquired by an egocentric wearable camera for dietary assessment. To study human diet and lifestyle, large sets of egocentric images were acquired using a wearable devic...

  13. Opposition-Based Adaptive Fireworks Algorithm

    Directory of Open Access Journals (Sweden)

    Chibing Gong

    2016-07-01

    Full Text Available A fireworks algorithm (FWA) is a recent swarm intelligence algorithm that is inspired by observing fireworks explosions. An adaptive fireworks algorithm (AFWA) proposes additional adaptive amplitudes to improve the performance of the enhanced fireworks algorithm (EFWA). The purpose of this paper is to add opposition-based learning (OBL) to AFWA with the goal of further boosting performance and achieving global optimization. Twelve benchmark functions are tested using an opposition-based adaptive fireworks algorithm (OAFWA). The final results conclude that OAFWA significantly outperformed EFWA and AFWA in terms of solution accuracy. Additionally, OAFWA was compared with a bat algorithm (BA), differential evolution (DE), self-adapting control parameters in differential evolution (jDE), a firefly algorithm (FA), and a standard particle swarm optimization 2011 (SPSO2011) algorithm. The research results indicate that OAFWA ranks the highest of the six algorithms for both solution accuracy and runtime cost.
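    The core of opposition-based learning is simple: for a candidate x in [a, b], also evaluate the opposite point a + b - x and keep the better of the pair. A minimal sketch of OBL-style initialization (toy objective and bounds are assumptions, not the paper's benchmark setup):

```python
import random

def shifted_sphere(xs):
    """Toy objective with minimum at (2, 2, ..., 2)."""
    return sum((x - 2.0) ** 2 for x in xs)

def obl_init(pop_size, dim, a, b, f, seed=0):
    """Opposition-based initialization: evaluate each random candidate
    together with its opposite a + b - x, keep the fitter half."""
    rng = random.Random(seed)
    pop = [[rng.uniform(a, b) for _ in range(dim)] for _ in range(pop_size)]
    opposite = [[a + b - x for x in ind] for ind in pop]
    merged = pop + opposite
    merged.sort(key=f)           # ascending fitness (minimization)
    return merged[:pop_size]

pop = obl_init(pop_size=10, dim=5, a=0.0, b=10.0, f=shifted_sphere)
```

    In OAFWA the same opposite-point idea is applied during the search as well, not only at initialization.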

  14. AUTOMATIC LUNG NODULE DETECTION BASED ON STATISTICAL REGION MERGING AND SUPPORT VECTOR MACHINES

    Directory of Open Access Journals (Sweden)

    Elaheh Aghabalaei Khordehchi

    2017-06-01

    Full Text Available Lung cancer is one of the most common diseases in the world and can be treated if the lung nodules are detected in their early stages of growth. This study develops a new framework for computer-aided detection of pulmonary nodules through a fully-automatic analysis of Computed Tomography (CT) images. In the present work, the multi-layer CT data is fed into a pre-processing step that exploits an adaptive diffusion-based smoothing algorithm in which the parameters are automatically tuned using an adaptation technique. After multiple levels of morphological filtering, the Regions of Interest (ROIs) are extracted from the smoothed images. The Statistical Region Merging (SRM) algorithm is applied to the ROIs in order to segment each layer of the CT data. Extracted segments in consecutive layers are then analyzed in such a way that if they intersect at more than a predefined number of pixels, they are labeled with a similar index. The boundaries of the segments in adjacent layers which have the same indices are then connected together to form three-dimensional objects as the nodule candidates. After extracting four spectral, one morphological, and one textural feature from all candidates, they are finally classified into nodules and non-nodules using the Support Vector Machine (SVM) classifier. The proposed framework has been applied to two sets of lung CT images and its performance has been compared to that of nine other competing state-of-the-art methods. The considerable efficiency of the proposed approach has been proved quantitatively and validated by clinical experts as well.
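    The inter-layer linking rule described above (segments in consecutive slices sharing more than a predefined number of pixels receive the same 3D label) can be sketched as follows; the toy slice data and the overlap threshold are illustrative assumptions:

```python
import numpy as np

def link_layers(layers, min_overlap=2):
    """layers: list of 2D int arrays, 0 = background, k > 0 = segment id.
    Returns same-shaped arrays where segments that overlap in more than
    min_overlap pixels across consecutive slices share a 3D label."""
    next_label = 1
    out = []
    prev = None
    for seg in layers:
        lab = np.zeros_like(seg)
        for sid in np.unique(seg):
            if sid == 0:
                continue
            mask = seg == sid
            label = 0
            if prev is not None:
                # count overlap with each 3D label in the previous slice
                vals, counts = np.unique(prev[mask], return_counts=True)
                for v, c in zip(vals, counts):
                    if v != 0 and c > min_overlap:
                        label = int(v)
                        break
            if label == 0:          # no sufficient overlap: start a new object
                label = next_label
                next_label += 1
            lab[mask] = label
        out.append(lab)
        prev = lab
    return out
```

    Stacking the consistently labeled masks yields the three-dimensional nodule candidates that the feature extraction and SVM stage then classifies.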

  15. Operational Automatic Remote Sensing Image Understanding Systems: Beyond Geographic Object-Based and Object-Oriented Image Analysis (GEOBIA/GEOOIA. Part 2: Novel system Architecture, Information/Knowledge Representation, Algorithm Design and Implementation

    Directory of Open Access Journals (Sweden)

    Luigi Boschetti

    2012-09-01

    Full Text Available According to the literature and despite their commercial success, state-of-the-art two-stage non-iterative geographic object-based image analysis (GEOBIA) systems and three-stage iterative geographic object-oriented image analysis (GEOOIA) systems, where GEOOIA/GEOBIA, remain affected by a lack of productivity, general consensus and research. To outperform the Quality Indexes of Operativeness (OQIs) of existing GEOBIA/GEOOIA systems in compliance with the Quality Assurance Framework for Earth Observation (QA4EO) guidelines, this methodological work is split into two parts. Based on an original multi-disciplinary Strengths, Weaknesses, Opportunities and Threats (SWOT) analysis of the GEOBIA/GEOOIA approaches, the first part of this work promotes a shift of learning paradigm in the pre-attentive vision first stage of a remote sensing (RS) image understanding system (RS-IUS), from sub-symbolic statistical model-based (inductive) image segmentation to symbolic physical model-based (deductive) image preliminary classification, capable of accomplishing image sub-symbolic segmentation and image symbolic pre-classification simultaneously. In the present second part of this work, a novel hybrid (combined deductive and inductive) RS-IUS architecture featuring a symbolic deductive pre-attentive vision first stage is proposed and discussed in terms of: (a) computational theory (system design), (b) information/knowledge representation, (c) algorithm design and (d) implementation. As proof-of-concept of a symbolic physical model-based pre-attentive vision first stage, the spectral knowledge-based, operational, near real-time, multi-sensor, multi-resolution, application-independent Satellite Image Automatic Mapper™ (SIAM™) is selected from the existing literature. To the best of these authors' knowledge, this is the first time a symbolic syntactic inference system, like SIAM™, is made available to the RS community for operational use in a RS-IUS pre-attentive vision first stage

  16. Opposition-Based Adaptive Fireworks Algorithm

    OpenAIRE

    Chibing Gong

    2016-01-01

    A fireworks algorithm (FWA) is a recent swarm intelligence algorithm that is inspired by observing fireworks explosions. An adaptive fireworks algorithm (AFWA) proposes additional adaptive amplitudes to improve the performance of the enhanced fireworks algorithm (EFWA). The purpose of this paper is to add opposition-based learning (OBL) to AFWA with the goal of further boosting performance and achieving global optimization. Twelve benchmark functions are tested in use of an opposition-based a...

  17. An algorithm for determination of peak regions and baseline elimination in spectroscopic data

    International Nuclear Information System (INIS)

    Morhac, Miroslav

    2009-01-01

    In the paper we propose a new algorithm for the determination of peak-containing regions and their separation from peak-free regions. Further, based on this algorithm, we propose a new background elimination algorithm which allows a more accurate estimate of the background beneath the peaks than the algorithms known so far. The algorithm is based on a clipping operation with the window adjusted automatically to the widths of the identified peak regions. The illustrative examples presented in the paper argue in favor of the proposed algorithms.
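    The clipping operation at the heart of such background estimators replaces each point by the minimum of its value and the average of its two neighbours at distance w, so peaks are progressively clipped down to the underlying background. A minimal fixed-window sketch (the paper's contribution is precisely to adjust the window automatically, which is omitted here; the synthetic spectrum is an assumption):

```python
import numpy as np

def clip_background(spectrum, window=10, iterations=20):
    """SNIP-style clipping: y[i] <- min(y[i], (y[i-w] + y[i+w]) / 2),
    repeated over increasing windows; edges are handled by reflection."""
    y = np.asarray(spectrum, dtype=float).copy()
    for _ in range(iterations):
        for w in range(1, window + 1):
            left = np.concatenate([y[:w][::-1], y[:-w]])    # y[i - w]
            right = np.concatenate([y[w:], y[-w:][::-1]])   # y[i + w]
            y = np.minimum(y, (left + right) / 2.0)
    return y

# synthetic spectrum: flat background of 10 counts plus one Gaussian peak
x = np.arange(200)
background = np.full(200, 10.0)
peak = 100.0 * np.exp(-0.5 * ((x - 100) / 3.0) ** 2)
estimate = clip_background(background + peak)
```

    Subtracting the clipped estimate from the spectrum leaves the peaks on a near-zero baseline.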

  18. Semantic based cluster content discovery in description first clustering algorithm

    International Nuclear Information System (INIS)

    Khan, M.W.; Asif, H.M.S.

    2017-01-01

    In the field of data analytics, the grouping of similar documents in textual data is a serious problem. A lot of work has been done in this field and many algorithms have been proposed. Among them is a category of algorithms which first group the documents on the basis of similarity and then assign meaningful labels to those groups. Description-first clustering algorithms belong to the category in which a meaningful description is deduced first and then relevant documents are assigned to that description. LINGO (Label Induction Grouping Algorithm) is a description-first clustering algorithm used for the automatic grouping of documents obtained from search results. It uses LSI (Latent Semantic Indexing), an IR (Information Retrieval) technique, for the induction of meaningful cluster labels and VSM (Vector Space Model) for cluster content discovery. In this paper we present LINGO using LSI during both the cluster label induction and the cluster content discovery phases. Finally, we compare the results obtained from the algorithm when it uses VSM and when it uses latent semantic analysis during the cluster content discovery phase. (author)
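    The VSM step of cluster content discovery amounts to embedding documents and cluster labels as term vectors and assigning each document to the label with the highest cosine similarity. A toy sketch (vocabulary, labels and documents are invented for illustration; LINGO's real pipeline works on search-result snippets):

```python
import math

def tf_vector(text, vocab):
    """Raw term-frequency vector of a text over a fixed vocabulary."""
    words = text.lower().split()
    return [words.count(term) for term in vocab]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

vocab = ["nuclear", "reactor", "genome", "sequencing"]
labels = ["nuclear reactor", "genome sequencing"]       # induced cluster labels
docs = ["reactor core nuclear safety", "rapid genome sequencing pipeline"]

# assign each document to the most similar label (index into `labels`)
assignment = [
    max(range(len(labels)),
        key=lambda j: cosine(tf_vector(d, vocab), tf_vector(labels[j], vocab)))
    for d in docs
]
```

    Replacing the raw term vectors with their projections onto LSI's latent space is the variant the paper compares against.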

  19. Automatic markerless registration of point clouds with semantic-keypoint-based 4-points congruent sets

    Science.gov (United States)

    Ge, Xuming

    2017-08-01

    The coarse registration of point clouds from urban building scenes has become a key topic in applications of terrestrial laser scanning technology. Sampling-based algorithms in the random sample consensus (RANSAC) model have emerged as mainstream solutions to address coarse registration problems. In this paper, we propose a novel combined solution to automatically align two markerless point clouds from building scenes. Firstly, the method segments non-ground points from ground points. Secondly, the proposed method detects feature points from each cross section and then obtains semantic keypoints by connecting feature points with specific rules. Finally, the detected semantic keypoints from two point clouds act as inputs to a modified 4PCS algorithm. Examples are presented and the results compared with those of K-4PCS to demonstrate the main contributions of the proposed method, which are the extension of the original 4PCS to handle heavy datasets and the use of semantic keypoints to improve K-4PCS in relation to registration accuracy and computational efficiency.

  20. Development of Regenerative Braking Co-operative Control System for Automatic Transmission-based Hybrid Electric Vehicle using Electronic Wedge Brake

    OpenAIRE

    Ko, Jiweon; Ko, Sungyeon; Bak, Yongsun; Jang, Mijeong; Yoo, Byoungsoo; Cheon, Jaeseung; Kim, Hyunsoo

    2013-01-01

    This research proposes a regenerative braking co-operative control system for the automatic transmission (AT)-based hybrid electric vehicle (HEV). The brake system of the subject HEV consists of the regenerative braking and the electronic wedge brake (EWB) friction braking for the front wheel, and the hydraulic friction braking for the rear wheel. A regenerative braking co-operative control algorithm is suggested for the regenerative braking and friction braking, which distributes the braking...

  1. Towards Automatic Trunk Classification on Young Conifers

    DEFF Research Database (Denmark)

    Petri, Stig; Immerkær, John

    2009-01-01

    In the garden nursery industry providing young Nordmann firs for Christmas tree plantations, there is a rising interest in automatic classification of their products to ensure consistently high quality and reduce the cost of manual labor. This paper describes a fully automatic single-view algorit...... performance of the algorithm by incorporating color information into the data considered by the dynamic programming algorithm....

  2. Automatic generation of smart earthquake-resistant building system: Hybrid system of base-isolation and building-connection

    Directory of Open Access Journals (Sweden)

    M. Kasagi

    2016-02-01

    Full Text Available A base-isolated building may sometimes exhibit an undesirably large response to a long-duration, long-period earthquake ground motion, and a connected building system without base-isolation may show a large response to a near-fault (rather high-frequency) earthquake ground motion. To overcome both deficiencies, a new hybrid control system of base-isolation and building-connection is proposed and investigated. In this new hybrid building system, a base-isolated building is connected to a stiffer free wall with oil dampers. It has been demonstrated in preliminary research that the proposed hybrid system is effective both for near-fault (rather high-frequency) and long-duration, long-period earthquake ground motions and has sufficient redundancy and robustness for a broad range of earthquake ground motions. An automatic generation algorithm for this kind of smart structure, the base-isolation and building-connection hybrid system, is presented in this paper. It is shown that, while the proposed algorithm does not work well in a building without the connecting-damper system, it works well in the proposed smart hybrid system with the connecting-damper system.

  3. Automatized Parameterization of DFTB Using Particle Swarm Optimization.

    Science.gov (United States)

    Chou, Chien-Pin; Nishimura, Yoshifumi; Fan, Chin-Chai; Mazur, Grzegorz; Irle, Stephan; Witek, Henryk A

    2016-01-12

    We present a novel density-functional tight-binding (DFTB) parametrization toolkit developed to optimize the parameters of various DFTB models in a fully automatized fashion. The main features of the algorithm, based on the particle swarm optimization technique, are discussed, and a number of initial pilot applications of the developed methodology to molecular and solid systems are presented.
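    The particle swarm optimization technique underlying the toolkit can be sketched generically as below; the toy objective stands in for the real DFTB fitness (errors against reference data), and the swarm parameters are common textbook values, not the toolkit's:

```python
import random

def pso(f, dim, bounds, n_particles=20, iters=100, seed=1):
    """Minimal PSO: each particle tracks its personal best and is pulled
    toward both it and the global best. Returns (best position, best f)."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    w, c1, c2 = 0.7, 1.5, 1.5          # inertia, cognitive, social weights
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            fi = f(pos[i])
            if fi < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], fi
                if fi < gbest_f:
                    gbest, gbest_f = pos[i][:], fi
    return gbest, gbest_f

# toy "parameter fit": minimum at (2, 2, 2)
best, best_f = pso(lambda p: sum((x - 2.0) ** 2 for x in p), dim=3, bounds=(-10, 10))
```

    In the DFTB setting each particle is a full parameter vector and f evaluates the resulting model against reference molecular or solid-state data.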

  4. Optimization of IBF parameters based on adaptive tool-path algorithm

    Science.gov (United States)

    Deng, Wen Hui; Chen, Xian Hua; Jin, Hui Liang; Zhong, Bo; Hou, Jin; Li, An Qi

    2018-03-01

    As a kind of Computer Controlled Optical Surfacing (CCOS) technology, Ion Beam Figuring (IBF) has obvious advantages in the control of surface accuracy, surface roughness and subsurface damage. The superiority and characteristics of IBF in optical component processing are analyzed from the point of view of the removal mechanism. To obtain a more effective and automatic tool path carrying dwell-time information, a novel algorithm is proposed in this paper. Based on the removal functions measured on our IBF equipment and the adaptive tool path, optimized parameters are obtained through analysis of the residual error that would be created in the polishing process. A Φ600 mm plane reflector element was used as a simulation instance. The simulation result shows that after four combinations of processing, the surface accuracy was reduced from a PV (Peak-Valley) value of 110.22 nm and an RMS (Root Mean Square) value of 13.998 nm to 4.81 nm and 0.495 nm respectively over 98% of the aperture. The result shows that the algorithm and optimized parameters provide a good theoretical basis for high-precision processing with IBF.
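    The figuring model behind such residual-error analysis is that removed material equals the convolution of the dwell-time profile with the beam removal function. A 1-D sketch with a Gaussian beam and a naive one-pass dwell-time choice (all shapes and numbers are illustrative assumptions, not the paper's removal function or parameters):

```python
import numpy as np

x = np.linspace(-30, 30, 601)                      # scan coordinate [mm]
beam = np.exp(-0.5 * (x / 2.0) ** 2)               # beam removal function shape
beam /= beam.sum()                                 # normalise to unit removal

surface = 50.0 * np.exp(-0.5 * (x / 8.0) ** 2)     # initial surface error [nm]

# naive dwell time proportional to the local error (one Jacobi-style pass);
# removed material = dwell-time profile convolved with the beam function
dwell = surface.copy()
removed = np.convolve(dwell, beam, mode="same")
residual = surface - removed

pv = residual.max() - residual.min()               # Peak-Valley of residual
rms = np.sqrt(np.mean(residual ** 2))              # RMS of residual
```

    A real dwell-time solver iterates (or deconvolves) until PV and RMS of the residual meet the specification, which is what the paper's four processing combinations achieve.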

  5. 14 CFR 99.43 - Contiguous U.S. ADIZ.

    Science.gov (United States)

    2010-01-01

    ... 14 Aeronautics and Space 2 2010-01-01 2010-01-01 false Contiguous U.S. ADIZ. 99.43 Section 99.43 Aeronautics and Space FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF TRANSPORTATION (CONTINUED) AIR TRAFFIC... Zones § 99.43 Contiguous U.S. ADIZ. The area bounded by a line from 43°15′N, 65°55′W; 44°21′N, 67°16′W...

  6. Data-driven automatic parking constrained control for four-wheeled mobile vehicles

    OpenAIRE

    Wenxu Yan; Jing Deng; Dezhi Xu

    2016-01-01

    In this article, a novel data-driven constrained control scheme is proposed for automatic parking systems. The design of the proposed scheme only depends on the steering angle and the orientation angle of the car, and it does not involve any model information of the car. Therefore, the proposed scheme-based automatic parking system is applicable to different kinds of cars. In order to further reduce the desired trajectory coordinate tracking errors, a coordinates compensation algorithm is als...

  7. An intelligent identification algorithm for the monoclonal picking instrument

    Science.gov (United States)

    Yan, Hua; Zhang, Rongfu; Yuan, Xujun; Wang, Qun

    2017-11-01

    The traditional colony selection is mainly performed manually, which suffers from low efficiency and strong subjectivity. Therefore, it is important to develop an automatic monoclonal-picking instrument. The critical stage of automatic monoclonal-picking and intelligent optimal selection is the intelligent identification algorithm. An auto-screening algorithm based on the Support Vector Machine (SVM) is proposed in this paper, which uses supervised learning combined with colony morphological characteristics to classify colonies accurately. Furthermore, from the basic morphological features of the colony, the system can compute a series of morphological parameters step by step. By establishing a maximal-margin classifier and analyzing the growth trend of the colony, the selection of the monoclonal colony is carried out. The experimental results showed that the auto-screening algorithm could screen out the regular colonies from the others, which meets the requirements on the various parameters.
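    One of the morphological parameters such screening typically derives is circularity, 4·pi·area/perimeter², which is 1.0 for a perfect circle and smaller for ragged colonies. A toy pre-screening sketch (the threshold and the tuple format are assumptions; the paper's final decision is made by an SVM over several such features):

```python
import math

def circularity(area, perimeter):
    """4*pi*A/P^2: 1.0 for a perfect circle, < 1 for irregular shapes."""
    return 4.0 * math.pi * area / (perimeter ** 2)

def screen_colonies(colonies, threshold=0.8):
    """colonies: list of (name, area, perimeter) tuples.
    Keep only colonies regular enough to be picking candidates."""
    return [name for name, a, p in colonies
            if circularity(a, p) >= threshold]

colonies = [
    ("c1", math.pi * 10 ** 2, 2 * math.pi * 10),   # perfect circle, r = 10
    ("c2", 100.0, 60.0),                           # ragged, elongated blob
]
kept = screen_colonies(colonies)
```

    Features like this, stacked into a vector per colony, are exactly what a maximal-margin classifier separates into "pick" and "reject".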

  8. Artificial Mangrove Species Mapping Using Pléiades-1: An Evaluation of Pixel-Based and Object-Based Classifications with Selected Machine Learning Algorithms

    Directory of Open Access Journals (Sweden)

    Dezhi Wang

    2018-02-01

    Full Text Available In the dwindling natural mangrove today, mangrove reforestation projects are conducted worldwide to prevent further losses. Due to monoculture and the low survival rate of artificial mangroves, it is necessary to pay attention to mapping and monitoring them dynamically. Remote sensing techniques have been widely used to map mangrove forests due to their capacity for large-scale, accurate, efficient, and repetitive monitoring. This study evaluated the capability of 0.5-m Pléiades-1 imagery in classifying artificial mangrove species using both pixel-based and object-based classification schemes. For comparison, three machine learning algorithms—decision tree (DT), support vector machine (SVM), and random forest (RF)—were used as the classifiers in the pixel-based and object-based classification procedures. The results showed that both the pixel-based and object-based approaches could recognize the major discriminations between the four major artificial mangrove species. However, the object-based method had a better overall accuracy than the pixel-based method on average. For pixel-based image analysis, SVM produced the highest overall accuracy (79.63%); for object-based image analysis, RF achieved the highest overall accuracy (82.40%) and was also the best machine learning algorithm for classifying artificial mangroves. The patches produced by object-based image analysis approaches presented a more generalized appearance and could contiguously depict mangrove species communities. When the same machine learning algorithms were compared by McNemar's test, a statistically significant difference in overall classification accuracy between the pixel-based and object-based classifications only existed in the RF algorithm. Regarding species, the monoculture and dominant mangrove species Sonneratia apetala group 1 (SA1) as well as the partly mixed and regular-shape mangrove species Hibiscus tiliaceus (HT) could be well identified. However, for complex and easily
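    McNemar's test, as used here to compare two classifiers on the same samples, depends only on the discordant counts: b (correct only by classifier A) and c (correct only by classifier B). A sketch with invented counts (not the paper's data):

```python
# McNemar's chi-squared statistic with Yates continuity correction.
def mcnemar_chi2(b, c):
    """b, c: discordant counts from the paired contingency table."""
    if b + c == 0:
        return 0.0
    return (abs(b - c) - 1) ** 2 / (b + c)

# e.g. object-based RF vs pixel-based RF on the same validation samples
b, c = 40, 12                      # invented counts for illustration
chi2 = mcnemar_chi2(b, c)
significant = chi2 > 3.841         # chi-squared critical value, df = 1, alpha = 0.05
```

    Samples both classifiers get right (or both get wrong) do not enter the statistic, which is why the test is appropriate for paired accuracy comparisons.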

  9. Dynamic route guidance algorithm based on artificial immune system

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    To improve the performance of the K-shortest paths search in intelligent traffic guidance systems, this paper proposes an optimal search algorithm based on intelligent optimization search theory and the memory mechanism of vertebrate immune systems. This algorithm, applied to an urban traffic network model established by the node-expanding method, can conveniently realize K-shortest paths search in urban traffic guidance systems. Because of the immune memory and global parallel search ability of artificial immune systems, the K shortest paths can be found without any repeats, which clearly indicates the superiority of the algorithm over conventional ones. Not only does it exhibit better parallelism, the algorithm also prevents the premature convergence that often occurs in genetic algorithms. Thus, it is especially suitable for the real-time requirements of traffic guidance systems and other engineering optimization applications. A case study verifies the efficiency and practicability of the aforementioned algorithm.
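    For reference, the conventional baseline the immune-system approach is compared against is a Yen-style K-shortest loopless paths search (Dijkstra inside). This sketch is that classical baseline, not the immune algorithm itself; the toy graph is invented:

```python
import heapq

def dijkstra(graph, src, dst, banned_edges, banned_nodes):
    """Shortest path avoiding given edges/nodes; graph: node -> {nbr: weight}."""
    pq = [(0.0, src, [src])]
    seen = set()
    while pq:
        cost, u, path = heapq.heappop(pq)
        if u == dst:
            return cost, path
        if u in seen:
            continue
        seen.add(u)
        for v, w in graph.get(u, {}).items():
            if (u, v) in banned_edges or v in banned_nodes or v in seen:
                continue
            heapq.heappush(pq, (cost + w, v, path + [v]))
    return None

def k_shortest_paths(graph, src, dst, k):
    first = dijkstra(graph, src, dst, set(), set())
    if first is None:
        return []
    paths, candidates = [first], []
    while len(paths) < k:
        _, last = paths[-1]
        for i in range(len(last) - 1):              # each node of the last path is a spur
            root = last[:i + 1]
            # ban edges already used by paths sharing this root prefix
            banned_edges = {(p[1][i], p[1][i + 1]) for p in paths
                            if p[1][:i + 1] == root and len(p[1]) > i + 1}
            banned_nodes = set(root[:-1])           # keep paths loopless
            res = dijkstra(graph, root[-1], dst, banned_edges, banned_nodes)
            if res:
                root_cost = sum(graph[root[j]][root[j + 1]] for j in range(i))
                cand = (root_cost + res[0], root[:-1] + res[1])
                if cand not in candidates and cand not in paths:
                    candidates.append(cand)
        if not candidates:
            break
        candidates.sort()
        paths.append(candidates.pop(0))
    return paths

g = {"A": {"B": 1, "C": 2}, "B": {"C": 1, "D": 3}, "C": {"D": 1}, "D": {}}
best3 = k_shortest_paths(g, "A", "D", 3)
```

    The immune algorithm in the abstract replaces this deterministic spur-path enumeration with a population-based parallel search guided by immune memory.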

  10. Research algorithm for synthesis of double conjugation optical systems in the Gauss region

    Directory of Open Access Journals (Sweden)

    A. B. Ostrun

    2014-01-01

    Full Text Available The article focuses on the research of variable magnification optical systems of a sophisticated class - so-called double conjugation systems. When the magnification changes, they provide two pairs of fixed conjugate planes, namely object and image, as well as entrance and exit pupils. Similar systems are used in microscopy and in complex schemes where it is necessary to conform the pupils of contiguous removable optical components. Synthesis of double conjugation systems in the Gauss region is not an easy task. To ensure complete immobility of the exit pupil, the system should contain three movable components or components with variable optical power. Analysis of the literature shows that the design of double conjugation optical systems in the paraxial region has been neglected; existing methods are not completely universal or suitable for automation. Based on the foregoing, the research and development of a universal method for the automated synthesis of double conjugation systems in the Gauss region, formulated as the objective of the present work, is a genuine challenge. To achieve this goal a universal algorithm is used. It is based on the fact that the output coordinates of paraxial rays are multilinear functions of the optical surfaces and of the axial thicknesses between surfaces. This allows us to create and solve a system of multilinear equations in semi-automatic mode to achieve the chosen values of paraxial characteristics. As the basic scheme for the synthesis, a five-component system has been chosen with fixed outer components and three movable "internal" ones. The system was considered in two extreme states of the moving parts. Initial values of the axial thicknesses were taken from Hopkins' patent. The optical powers of the five components were considered unknown.
    For the calculation, a system of five equations was created, which allowed us to obtain a certain back focal length, to provide the specified focal length and a fixed position of the exit pupil at a fixed entrance pupil. The scheme

  11. Automatic Classification of Sub-Techniques in Classical Cross-Country Skiing Using a Machine Learning Algorithm on Micro-Sensor Data

    Directory of Open Access Journals (Sweden)

    Ole Marius Hoel Rindal

    2017-12-01

    Full Text Available The automatic classification of sub-techniques in classical cross-country skiing provides unique possibilities for analyzing the biomechanical aspects of outdoor skiing. This is currently possible due to the miniaturization and flexibility of wearable inertial measurement units (IMUs that allow researchers to bring the laboratory to the field. In this study, we aimed to optimize the accuracy of the automatic classification of classical cross-country skiing sub-techniques by using two IMUs attached to the skier’s arm and chest together with a machine learning algorithm. The novelty of our approach is the reliable detection of individual cycles using a gyroscope on the skier’s arm, while a neural network machine learning algorithm robustly classifies each cycle to a sub-technique using sensor data from an accelerometer on the chest. In this study, 24 datasets from 10 different participants were separated into the categories training-, validation- and test-data. Overall, we achieved a classification accuracy of 93.9% on the test-data. Furthermore, we illustrate how an accurate classification of sub-techniques can be combined with data from standard sports equipment including position, altitude, speed and heart rate measuring systems. Combining this information has the potential to provide novel insight into physiological and biomechanical aspects valuable to coaches, athletes and researchers.
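    The cycle-detection idea (the arm gyroscope signal is roughly periodic, so each positive-going zero crossing can mark the start of a movement cycle) can be sketched on a synthetic signal; the sampling rate, cycle frequency and the crossing rule are illustrative assumptions, not the paper's exact detector:

```python
import math

def detect_cycles(signal):
    """Indices where the signal crosses zero going upward: cycle starts."""
    return [i for i in range(1, len(signal))
            if signal[i - 1] < 0.0 <= signal[i]]

fs = 100.0                      # sampling rate [Hz]
cycle_hz = 1.3                  # ~poling frequency of the arm swing
signal = [math.sin(2 * math.pi * cycle_hz * i / fs) for i in range(500)]

starts = detect_cycles(signal)
cycle_len = [b - a for a, b in zip(starts, starts[1:])]   # samples per cycle
```

    Each detected cycle window is then what the neural network classifies into a sub-technique using the chest accelerometer features.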

  12. Clinical evaluation of semi-automatic open-source algorithmic software segmentation of the mandibular bone: Practical feasibility and assessment of a new course of action.

    Science.gov (United States)

    Wallner, Jürgen; Hochegger, Kerstin; Chen, Xiaojun; Mischak, Irene; Reinbacher, Knut; Pau, Mauro; Zrnc, Tomislav; Schwenzer-Zimmerer, Katja; Zemann, Wolfgang; Schmalstieg, Dieter; Egger, Jan

    2018-01-01

    Computer-assisted technologies based on algorithmic software segmentation are a topic of increasing interest in complex surgical cases. However, due to functional instability, time-consuming software processes, personnel resources or license-based financial costs, many segmentation processes are often outsourced from clinical centers to third parties and the industry. Therefore, the aim of this trial was to assess the practical feasibility of an easily available, functionally stable and license-free segmentation approach to be used in clinical practice. In this retrospective, randomized, controlled trial the accuracy and accordance of the open-source based segmentation algorithm GrowCut was assessed through comparison to the manually generated ground truth of the same anatomy using 10 CT lower jaw data-sets from the clinical routine. Assessment parameters were the segmentation time, the volume, the voxel number, the Dice Score and the Hausdorff distance. Overall semi-automatic GrowCut segmentation times were about one minute. Mean Dice Score values of over 85% and Hausdorff distances below 33.5 voxels could be achieved between the algorithmic GrowCut-based segmentations and the manually generated ground truth schemes. Statistical differences between the assessment parameters were not significant (p 0.94) for any of the comparisons made between the two groups. Completely functionally stable and time-saving segmentations with high accuracy and high positive correlation could be performed by the presented interactive open-source based approach. In the cranio-maxillofacial complex the used method could represent an algorithmic alternative for image-based segmentation in clinical practice, e.g. for surgical treatment planning or visualization of postoperative results, and offers several advantages. Due to its open-source basis the used method could be further developed by other groups or specialists. Systematic comparisons to other segmentation approaches or with a
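    The overlap metric reported in the trial, the Dice score between an algorithmic segmentation and the manual ground truth, can be sketched on toy binary masks (1 = bone voxel; the masks are invented, not trial data):

```python
import numpy as np

def dice_score(a, b):
    """Dice = 2|A intersect B| / (|A| + |B|), on binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

truth = np.zeros((10, 10), dtype=int)
truth[2:8, 2:8] = 1                       # 36 ground-truth voxels
auto = np.zeros((10, 10), dtype=int)
auto[3:8, 2:8] = 1                        # 30 voxels, all inside the truth mask

score = dice_score(truth, auto)           # 2*30 / (36 + 30)
```

    A score above 0.85, as the trial reports on average, indicates substantial voxel-level agreement; the Hausdorff distance complements it by bounding the worst boundary deviation.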

  13. Automatic geospatial information Web service composition based on ontology interface matching

    Science.gov (United States)

    Xu, Xianbin; Wu, Qunyong; Wang, Qinmin

    2008-10-01

    With Web services technology, the functions of WebGIS can be presented as a kind of geospatial information service, helping to overcome the limitations of the information-isolated situation in the geospatial information sharing field. Thus geospatial information Web service composition, which conglomerates outsourced services working in tandem to offer a value-added service, plays the key role in fully taking advantage of geospatial information services. This paper proposes an automatic geospatial information Web service composition algorithm that employs the ontology dictionary WordNet to analyze semantic distances among interfaces. By matching input/output parameters against the semantic meaning of pairs of service interfaces, a geospatial information Web service chain can be created from a number of candidate services. A practical application of the algorithm is also presented, and its results show the feasibility of the algorithm and its great promise for the emerging demand for geospatial information Web service composition.
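    Interface-based chaining means a service becomes usable once each of its inputs is semantically close to an output already produced. A toy sketch where a small hand-written similarity table stands in for WordNet-derived distances (services, terms and scores are all invented for illustration):

```python
# Toy stand-in for WordNet semantic similarity between interface terms.
SIMILARITY = {
    ("coordinates", "location"): 0.9,
    ("map", "map_image"): 0.8,
}

def sim(a, b):
    if a == b:
        return 1.0
    return max(SIMILARITY.get((a, b), 0.0), SIMILARITY.get((b, a), 0.0))

def compose(available, services, goal, threshold=0.7):
    """Greedily chain services until something close to `goal` is producible.
    services: list of (name, inputs, outputs)."""
    chain, produced = [], set(available)
    changed = True
    while changed and not any(sim(p, goal) >= threshold for p in produced):
        changed = False
        for name, inputs, outputs in services:
            if name not in chain and all(
                    any(sim(i, p) >= threshold for p in produced) for i in inputs):
                chain.append(name)
                produced.update(outputs)
                changed = True
    return chain

services = [
    ("Geocoder", ["address"], ["coordinates"]),
    ("MapRenderer", ["location"], ["map"]),
]
chain = compose(["address"], services, goal="map_image")
```

    The paper's algorithm does this matching with WordNet-based semantic distances over real geospatial service interfaces rather than a hand-written table.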

  14. Model-Based Fault Diagnosis Techniques Design Schemes, Algorithms and Tools

    CERN Document Server

    Ding, Steven X

    2013-01-01

    Guaranteeing a high system performance over a wide operating range is an important issue surrounding the design of automatic control systems with successively increasing complexity. As a key technology in the search for a solution, advanced fault detection and identification (FDI) is receiving considerable attention. This book introduces basic model-based FDI schemes, advanced analysis and design algorithms, and mathematical and control-theoretic tools. This second edition of Model-Based Fault Diagnosis Techniques contains: new material on fault isolation and identification, and fault detection in feedback control loops; extended and revised treatment of systematic threshold determination for systems with both deterministic unknown inputs and stochastic noises; addition of the continuously-stirred tank heater as a representative process-industrial benchmark; and enhanced discussion of residual evaluation in stochastic processes. Model-based Fault Diagno...

  15. Automatic Structure-Based Code Generation from Coloured Petri Nets

    DEFF Research Database (Denmark)

    Kristensen, Lars Michael; Westergaard, Michael

    2010-01-01

    Automatic code generation based on Coloured Petri Net (CPN) models is challenging because CPNs allow for the construction of abstract models that intermix control flow and data processing, making translation into conventional programming constructs difficult. We introduce Process-Partitioned CPNs...... (PP-CPNs) which is a subclass of CPNs equipped with an explicit separation of process control flow, message passing, and access to shared and local data. We show how PP-CPNs caters for a four phase structure-based automatic code generation process directed by the control flow of processes....... The viability of our approach is demonstrated by applying it to automatically generate an Erlang implementation of the Dynamic MANET On-demand (DYMO) routing protocol specified by the Internet Engineering Task Force (IETF)....

  16. Automatic computation of 2D cardiac measurements from B-mode echocardiography

    Science.gov (United States)

    Park, JinHyeong; Feng, Shaolei; Zhou, S. Kevin

    2012-03-01

    We propose a robust and fully automatic algorithm which computes the 2D echocardiography measurements recommended by the American Society of Echocardiography. The algorithm employs knowledge-based imaging technologies which learn expert knowledge from training images and expert annotations. Based on the models constructed in the learning stage, the algorithm searches for the initial locations of the landmark points for the measurements by exploiting the structure of the left ventricle, including the mitral valve and aortic valve. It employs a pseudo-anatomic M-mode image, generated by accumulating line images from the 2D parasternal long-axis view over time, to refine the measurement landmark points. Experimental results on a large volume of data show that the algorithm runs fast and is robust, with accuracy comparable to that of experts.
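
A minimal sketch of the pseudo-anatomic M-mode construction described above: sample one fixed scanline from every frame of a 2D sequence and stack the samples over time. The function name and the toy frame data are illustrative assumptions, not the paper's code.

```python
# Sketch (not the paper's implementation): build a pseudo M-mode strip by
# sampling a fixed column from each frame and stacking the samples over time.

def pseudo_m_mode(frames, column):
    """frames: list of 2D images (each a list of rows); column: x-index of the
    scanline. Returns a 2D array whose rows are time steps along the scanline."""
    return [[row[column] for row in frame] for frame in frames]

# Tiny synthetic sequence: three frames of a 2x2 image.
frames = [
    [[1, 2], [3, 4]],
    [[5, 6], [7, 8]],
    [[9, 10], [11, 12]],
]
m_mode = pseudo_m_mode(frames, column=1)  # each output row is one frame's column
```

Landmark refinement can then operate on `m_mode`, where vertical structure encodes motion over time.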

  17. Archimedean copula estimation of distribution algorithm based on artificial bee colony algorithm

    Institute of Scientific and Technical Information of China (English)

    Haidong Xu; Mingyan Jiang; Kun Xu

    2015-01-01

    The artificial bee colony (ABC) algorithm is a competitive stochastic population-based optimization algorithm. However, the ABC algorithm does not use social information and lacks knowledge of the problem structure, which leads to insufficiency in both convergence speed and searching precision. The Archimedean copula estimation of distribution algorithm (ACEDA) is a relatively simple, time-economic and multivariate correlated EDA. This paper proposes a novel hybrid algorithm based on the ABC algorithm and ACEDA, called the Archimedean copula estimation of distribution based on the artificial bee colony (ACABC) algorithm. The hybrid algorithm utilizes ACEDA to estimate the distribution model and then uses this information to help the artificial bees search more efficiently in the search space. Six benchmark functions are introduced to assess the performance of the ACABC algorithm on numerical function optimization. Experimental results show that the ACABC algorithm converges much faster with greater precision than the ABC algorithm, ACEDA and the global best (gbest)-guided ABC (GABC) algorithm in most of the experiments.

  18. Neural Bases of Automaticity

    Science.gov (United States)

    Servant, Mathieu; Cassey, Peter; Woodman, Geoffrey F.; Logan, Gordon D.

    2018-01-01

    Automaticity allows us to perform tasks in a fast, efficient, and effortless manner after sufficient practice. Theories of automaticity propose that with practice, processing transitions from being controlled by working memory to being controlled by long-term memory retrieval. Recent event-related potential (ERP) studies have sought to test this…

  19. A Plane Target Detection Algorithm in Remote Sensing Images based on Deep Learning Network Technology

    Science.gov (United States)

    Shuxin, Li; Zhilong, Zhang; Biao, Li

    2018-01-01

    The plane is an important target category among remote sensing targets, and it is of great value to detect plane targets automatically. As remote imaging technology develops continuously, the resolution of remote sensing images has become very high, providing more detailed information for detecting remote sensing targets automatically. Deep learning network technology is the most advanced technology in image target detection and recognition, and it has delivered great performance improvements in target detection and recognition for everyday scenes. We applied this technology to remote sensing target detection and proposed an algorithm with an end-to-end deep network, which can learn from remote sensing images to detect targets in new images automatically and robustly. Our experiments show that the algorithm can capture the feature information of plane targets and performs better in target detection than older methods.

  20. Enhanced TDMA Based Anti-Collision Algorithm with a Dynamic Frame Size Adjustment Strategy for Mobile RFID Readers

    Directory of Open Access Journals (Sweden)

    Kwang Cheol Shin

    2009-02-01

    In the fields of production, manufacturing and supply chain management, Radio Frequency Identification (RFID) is regarded as one of the most important technologies. Nowadays, Mobile RFID, which is often installed in carts or forklift trucks, is increasingly being applied to the search for and checkout of items in warehouses, supermarkets, libraries and other industrial fields. In using Mobile RFID, since the readers are continuously moving, they can interfere with each other when they attempt to read the tags. In this study, we suggest a Time Division Multiple Access (TDMA) based anti-collision algorithm for Mobile RFID readers. Our algorithm automatically adjusts the frame size of each reader without using manual parameters by adopting the dynamic frame size adjustment strategy when collisions occur at a reader. Through experiments on a simulated environment for Mobile RFID readers, we show that the proposed method improves the number of successful transmissions by about 228% on average, compared with Colorwave, a representative TDMA based anti-collision algorithm.

  1. Enhanced TDMA Based Anti-Collision Algorithm with a Dynamic Frame Size Adjustment Strategy for Mobile RFID Readers.

    Science.gov (United States)

    Shin, Kwang Cheol; Park, Seung Bo; Jo, Geun Sik

    2009-01-01

    In the fields of production, manufacturing and supply chain management, Radio Frequency Identification (RFID) is regarded as one of the most important technologies. Nowadays, Mobile RFID, which is often installed in carts or forklift trucks, is increasingly being applied to the search for and checkout of items in warehouses, supermarkets, libraries and other industrial fields. In using Mobile RFID, since the readers are continuously moving, they can interfere with each other when they attempt to read the tags. In this study, we suggest a Time Division Multiple Access (TDMA) based anti-collision algorithm for Mobile RFID readers. Our algorithm automatically adjusts the frame size of each reader without using manual parameters by adopting the dynamic frame size adjustment strategy when collisions occur at a reader. Through experiments on a simulated environment for Mobile RFID readers, we show that the proposed method improves the number of successful transmissions by about 228% on average, compared with Colorwave, a representative TDMA based anti-collision algorithm.
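
The abstract does not give the exact update rule, but a common framed-ALOHA-style adjustment captures the idea: grow the frame when collisions dominate and shrink it when idle slots dominate. The function name, thresholds and size bounds below are illustrative assumptions.

```python
# Illustrative dynamic frame-size adjustment in the spirit of framed-ALOHA
# anti-collision protocols (thresholds and bounds are assumptions, not the
# paper's exact strategy).

def adjust_frame_size(frame_size, collisions, idles, slots,
                      min_size=16, max_size=256):
    """Double the frame when over half the slots collide (too many tags per
    slot), halve it when over half the slots are idle, otherwise keep it."""
    if collisions > slots // 2:
        frame_size = min(frame_size * 2, max_size)
    elif idles > slots // 2:
        frame_size = max(frame_size // 2, min_size)
    return frame_size
```

For example, a 32-slot frame with 20 collisions grows to 64 slots, while one with 20 idle slots shrinks to 16.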

  2. Design of Underwater Robot Lines Based on a Hybrid Automatic Optimization Strategy

    Institute of Scientific and Technical Information of China (English)

    Wenjing Lyu; Weilin Luo

    2014-01-01

    In this paper, a hybrid automatic optimization strategy is proposed for the design of underwater robot lines. Isight is introduced as an integration platform. The construction of this platform is based on user programming and several commercial software packages, including UG6.0, GAMBIT2.4.6 and FLUENT12.0. An intelligent parameter optimization method, particle swarm optimization, is incorporated into the platform. To verify the proposed strategy, a simulation is conducted on the underwater robot model 5470, which originates from the DTRC SUBOFF project. With the automatic optimization platform, the minimal resistance is taken as the optimization goal; the wet surface area as the constraint condition; and the length of the fore-body, the maximum body radius and the after-body's minimum radius as the design variables. For the CFD calculation, the RANS equations and the standard turbulence model are used for direct numerical simulation. Analysis of the simulation results shows that the platform is highly efficient and feasible. Through the platform, a variety of schemes for the design of the lines are generated and the optimal solution is achieved. The combination of the intelligent optimization algorithm and the numerical simulation ensures a global optimal solution and improves the efficiency of searching for solutions.

  3. Multiple Sclerosis Identification Based on Fractional Fourier Entropy and a Modified Jaya Algorithm

    Directory of Open Access Journals (Sweden)

    Shui-Hua Wang

    2018-04-01

    Aim: Currently, identification of multiple sclerosis (MS) by human experts may come across the problem of “normal-appearing white matter”, which causes low sensitivity. Methods: In this study, we presented a computer vision based approach to identify MS automatically. The proposed method first extracted the fractional Fourier entropy map from a specified brain image. Afterwards, it sent the features to a multilayer perceptron trained by a proposed improved parameter-free Jaya algorithm. We used cost-sensitive learning to handle the imbalanced data problem. Results: The 10 × 10-fold cross validation showed our method yielded a sensitivity of 97.40 ± 0.60%, a specificity of 97.39 ± 0.65%, and an accuracy of 97.39 ± 0.59%. Conclusions: We validated by experiments that the proposed improved Jaya algorithm performs better than the plain Jaya algorithm and other recent bio-inspired algorithms in terms of classification performance and training speed. In addition, our method is superior to four state-of-the-art MS identification approaches.

  4. Automatic detection of artifacts in converted S3D video

    Science.gov (United States)

    Bokov, Alexander; Vatolin, Dmitriy; Zachesov, Anton; Belous, Alexander; Erofeev, Mikhail

    2014-03-01

    In this paper we present algorithms for automatically detecting issues specific to converted S3D content. When a depth-image-based rendering approach produces a stereoscopic image, the quality of the result depends on both the depth maps and the warping algorithms. The most common problem with converted S3D video is edge-sharpness mismatch. This artifact may appear owing to depth-map blurriness at semitransparent edges: after warping, the object boundary becomes sharper in one view and blurrier in the other, yielding binocular rivalry. To detect this problem we estimate the disparity map, extract boundaries with noticeable differences, and analyze edge-sharpness correspondence between views. We pay additional attention to cases involving a complex background and large occlusions. Another problem is detection of scenes that lack depth volume: we present algorithms for detecting flat scenes and scenes with flat foreground objects. To identify these problems we analyze the features of the RGB image as well as uniform areas in the depth map. Testing of our algorithms involved examining 10 Blu-ray 3D releases with converted S3D content, including Clash of the Titans, The Avengers, and The Chronicles of Narnia: The Voyage of the Dawn Treader. The algorithms we present enable improved automatic quality assessment during the production stage.

  5. Automatic contact in DYNA3D for vehicle crashworthiness

    International Nuclear Information System (INIS)

    Whirley, R.G.; Engelmann, B.E.

    1994-01-01

    This paper presents a new formulation for the automatic definition and treatment of mechanical contact in explicit, nonlinear, finite element analysis. Automatic contact offers the benefits of significantly reduced model construction time and fewer opportunities for user error, but faces significant challenges in reliability and computational cost. The authors have used a new four-step automatic contact algorithm. Key aspects of the proposed method include (1) automatic identification of adjacent and opposite surfaces in the global search phase, and (2) the use of a smoothly varying surface normal that allows consistent treatment of shell-intersection and corner contact conditions without ad hoc rules. Three examples are given to illustrate the performance of the newly proposed algorithm in the public DYNA3D code.

  6. A novel scheme for automatic nonrigid image registration using deformation invariant feature and geometric constraint

    Science.gov (United States)

    Deng, Zhipeng; Lei, Lin; Zhou, Shilin

    2015-10-01

    Automatic image registration is a vital yet challenging task, particularly for non-rigid deformation images, which are more complicated and common in remote sensing, such as distorted UAV (unmanned aerial vehicle) images or scanning images affected by flutter. Traditional non-rigid image registration methods rely on correctly matched corresponding landmarks, which usually require artificial markers. It is a rather challenging task to locate the accurate positions of the points and obtain accurate homonymy point sets. In this paper, we propose an automatic non-rigid image registration algorithm which mainly consists of three steps: To begin with, we introduce an automatic feature point extraction method based on non-linear scale space and a uniform distribution strategy to extract points that are uniformly distributed along the edges of the image. Next, we propose a hybrid point matching algorithm using the DaLI (Deformation and Light Invariant) descriptor and a local affine-invariant geometric constraint based on a triangulation constructed by the K-nearest neighbor algorithm. Based on the accurate homonymy point sets, the two images are registered using the TPS (Thin Plate Spline) model. Our method is demonstrated by three deliberately designed experiments. The first two experiments evaluate the distribution of the point sets and the correct matching rate on synthetic data and real data, respectively. The last experiment is conducted on non-rigid deformed remote sensing images, and the three experimental results demonstrate the accuracy, robustness, and efficiency of the proposed algorithm compared with other traditional methods.

  7. Depth data research of GIS based on clustering analysis algorithm

    Science.gov (United States)

    Xiong, Yan; Xu, Wenli

    2018-03-01

    GIS data have a spatial distribution. Geographic data has both spatial and attribute characteristics, and it also changes with time; the amount of data is therefore very large. Nowadays, many industries and departments use GIS. However, without a proper data analysis and mining scheme, GIS will not exert its maximum effectiveness and much of its data will be wasted. In this paper, we take the geographic information demand of a national security department as the experimental object and, combining the characteristics of GIS data and taking into account time, space, attributes and so on, apply a cluster analysis algorithm. We further study a mining scheme for depth data and obtain an algorithm model. This algorithm can automatically classify sample data and then carry out exploratory analysis. The research shows that the algorithm model and the information mining scheme can quickly find hidden depth information in the surface data of GIS, thus improving the efficiency of the security department. The algorithm can also be extended to other fields.
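
The abstract names cluster analysis without fixing a specific variant; a plain k-means over 2D spatial coordinates is a minimal, commonly used stand-in. Everything below (pure-Python k-means, toy points) is an illustrative sketch, not the paper's method.

```python
# Minimal k-means sketch for spatial records (illustrative only; the paper's
# exact clustering variant and feature set are not specified in the abstract).
import math
import random

def kmeans(points, k, iters=20, seed=0):
    rng = random.Random(seed)
    centers = rng.sample(points, k)          # initial centers from the data
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:                     # assign each point to its nearest center
            i = min(range(k), key=lambda c: math.dist(p, centers[c]))
            groups[i].append(p)
        centers = [                          # move each center to its group mean
            tuple(sum(v) / len(g) for v in zip(*g)) if g else centers[i]
            for i, g in enumerate(groups)
        ]
    return centers, groups

# Two well-separated spatial clumps.
points = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
centers, groups = kmeans(points, 2)
```

On this toy data the two clumps are recovered regardless of which points seed the centers.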

  8. A 3-Step Algorithm Using Region-Based Active Contours for Video Objects Detection

    Directory of Open Access Journals (Sweden)

    Stéphanie Jehan-Besson

    2002-06-01

    We propose a 3-step algorithm for the automatic detection of moving objects in video sequences using region-based active contours. First, we introduce a very general framework for region-based active contours with a new Eulerian method to compute the evolution equation of the active contour from a criterion including both region-based and boundary-based terms. This framework can be easily adapted to various applications thanks to the introduction of functions named descriptors of the different regions. With this new Eulerian method, based on shape optimization principles, we can easily take into account the case of descriptors depending upon features globally attached to the regions. Second, we propose a 3-step algorithm for the detection of moving objects, with a static or a mobile camera, using region-based active contours. The basic idea is to hierarchically associate temporal and spatial information. The active contour evolves with three successive sets of descriptors: a temporal one, and then two spatial ones. The third, spatial, descriptor takes advantage of the segmentation of the image into intensity-homogeneous regions. User interaction is reduced to the choice of a few parameters at the beginning of the process. Some experimental results are supplied.

  9. Successive approximation algorithm for cancellation of artifacts in DSA images

    International Nuclear Information System (INIS)

    Funakami, Raiko; Hiroshima, Kyoichi; Nishino, Junji

    2000-01-01

    In this paper, we propose an algorithm for cancellation of artifacts in DSA images. We have already proposed an automatic registration method based on the detection of local movements. When motion of the object is large, it is difficult to estimate the exact movement, and the cancellation of artifacts may therefore fail. The algorithm we propose here is based on a simple rigid model. We present the results of applying the proposed method to a series of experimental X-ray images, as well as the results of applying the algorithm as preprocessing for a registration method based on local movement. (author)

  10. A HYBRID HEURISTIC ALGORITHM FOR THE CLUSTERED TRAVELING SALESMAN PROBLEM

    Directory of Open Access Journals (Sweden)

    Mário Mestria

    2016-04-01

    This paper proposes a hybrid heuristic algorithm, based on the metaheuristics Greedy Randomized Adaptive Search Procedure, Iterated Local Search and Variable Neighborhood Descent, to solve the Clustered Traveling Salesman Problem (CTSP). The hybrid heuristic algorithm uses several variable neighborhood structures, combining intensification (using local search operators) and diversification (a constructive heuristic and a perturbation routine). In the CTSP, the vertices are partitioned into clusters and all vertices of each cluster have to be visited contiguously. The CTSP is NP-hard since it includes the well-known Traveling Salesman Problem (TSP) as a special case. Our hybrid heuristic is compared with three heuristics from the literature and an exact method. Computational experiments are reported for different classes of instances. Experimental results show that the proposed hybrid heuristic obtains competitive results within reasonable computational time.
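
The CTSP's defining constraint is that each cluster's vertices are visited contiguously. The sketch below shows only that constraint with a nearest-neighbour construction; it is not the paper's GRASP/ILS/VND hybrid, and the cluster order, function name and toy coordinates are assumptions.

```python
# Constructive CTSP sketch: visit clusters in the given order, and inside each
# cluster pick the nearest unvisited vertex, so every cluster is contiguous.
import math

def greedy_ctsp(clusters, start=(0.0, 0.0)):
    """clusters: list of lists of (x, y) points. Returns (tour, length)."""
    tour, cur, length = [], start, 0.0
    for cluster in clusters:
        todo = list(cluster)
        while todo:                                   # nearest neighbour inside cluster
            nxt = min(todo, key=lambda p: math.dist(cur, p))
            todo.remove(nxt)
            length += math.dist(cur, nxt)
            tour.append(nxt)
            cur = nxt
    return tour, length

clusters = [[(1, 0), (2, 0)], [(5, 0), (4, 0)]]
tour, length = greedy_ctsp(clusters)
```

A metaheuristic such as the paper's hybrid would then improve this initial tour with local search and perturbation moves that preserve cluster contiguity.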

  11. Developing Subdomain Allocation Algorithms Based on Spatial and Communicational Constraints to Accelerate Dust Storm Simulation

    Science.gov (United States)

    Gui, Zhipeng; Yu, Manzhu; Yang, Chaowei; Jiang, Yunfeng; Chen, Songqing; Xia, Jizhe; Huang, Qunying; Liu, Kai; Li, Zhenlong; Hassan, Mohammed Anowarul; Jin, Baoxuan

    2016-01-01

    Dust storms have serious disastrous impacts on the environment, human health, and assets. The development and application of dust storm models have contributed significantly to better understanding and predicting the distribution, intensity and structure of dust storms. However, dust storm simulation is a data- and computing-intensive process. To improve computing performance, high performance computing has been widely adopted by dividing the entire study area into multiple subdomains and allocating each subdomain to different computing nodes in a parallel fashion. Inappropriate allocation may introduce imbalanced task loads and unnecessary communications among computing nodes. Therefore, allocation is a key factor that may impact the efficiency of the parallel process. An allocation algorithm is expected to consider the computing cost and communication cost for each computing node to minimize total execution time and reduce overall communication cost for the entire simulation. This research introduces three algorithms to optimize the allocation by considering the spatial and communicational constraints: 1) an Integer Linear Programming (ILP) based algorithm from a combinatorial optimization perspective; 2) a K-Means and Kernighan-Lin combined heuristic algorithm (K&K) integrating geometric and coordinate-free methods by merging local and global partitioning; 3) an automatic seeded region growing based geometric and local partitioning algorithm (ASRG). The performance and effectiveness of the three algorithms are compared based on different factors. Further, we adopt the K&K algorithm as the demonstrated algorithm for the experiment of dust model simulation with the non-hydrostatic mesoscale model (NMM-dust) and compare the performance with the MPI default sequential allocation. The results demonstrate that the K&K method significantly improves the simulation performance with better subdomain allocation. This method can also be adopted for other relevant atmospheric and numerical…
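
A far simpler stand-in than the paper's ILP, K&K or ASRG allocators illustrates the load-balancing goal: cut the grid rows into contiguous strips whose workloads are near-equal, keeping each node's subdomain spatially contiguous. The greedy rule and names below are assumptions.

```python
# Greedy strip allocation sketch: split rows of a simulation grid into
# contiguous strips, cutting whenever the accumulated per-row workload reaches
# the per-node target (illustrative, not one of the paper's three algorithms).

def strip_allocate(row_loads, nodes):
    """Return a list of (start, end) row ranges, end exclusive, one per node."""
    target = sum(row_loads) / nodes
    strips, start, acc = [], 0, 0.0
    for i, load in enumerate(row_loads):
        acc += load
        if acc >= target and len(strips) < nodes - 1:
            strips.append((start, i + 1))   # cut here; start a new strip
            start, acc = i + 1, 0.0
    strips.append((start, len(row_loads)))  # last node takes the remainder
    return strips
```

For uniform loads the strips are equal; for skewed loads the cuts move so each node's total work stays near `total / nodes`.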

  12. Tracking Positioning Algorithm for Direction of Arrival Based on Direction Lock Loop

    Directory of Open Access Journals (Sweden)

    Xiu-Zhi Cheng

    2015-06-01

    In order to solve the problems of poor real-time performance, low accuracy and high computational complexity in the traditional process of locating and tracking the Direction of Arrival (DOA) of moving targets, this paper proposes a DOA algorithm based on the Direction Lock Loop (DILL), which adopts a lock-loop structure to estimate and locate the DOA and can adjust the direction automatically along with changes in the signal's angular position to track the signal. Meanwhile, to reduce the influence of nonlinearity and noise on its performance, a UKF filter is designed to eliminate interference in the estimated target signal and to improve the accuracy of signal tracking and the stability of the system. Simulation results prove that the algorithm can not only obtain a high-resolution DOA estimate but can also locate and track multiple mobile targets effectively with enhanced accuracy, efficiency and stability.
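
The lock-loop idea can be sketched as a first-order tracking loop: steer the bearing estimate toward each new measurement through a wrapped angular error, so the estimate follows a moving source. The gain value and wrapping convention are assumptions; the paper additionally smooths the loop with a UKF, which is omitted here.

```python
# First-order "direction lock loop" sketch (illustrative; loop gain is an
# assumption and the paper's UKF smoothing stage is omitted).

def wrap(angle_deg):
    """Wrap an angle difference into [-180, 180) degrees."""
    return (angle_deg + 180.0) % 360.0 - 180.0

def track(measurements, estimate=0.0, gain=0.5):
    """Drive the bearing estimate toward each DOA measurement in turn."""
    history = []
    for m in measurements:
        estimate += gain * wrap(m - estimate)   # wrapped error steers the loop
        history.append(estimate)
    return history
```

Fed a constant 10-degree bearing from an initial estimate of 0, the loop converges geometrically toward 10 degrees.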

  13. Stabilization Algorithms for Automatic Control of the Trajectory Movement of Quadcopter

    Directory of Open Access Journals (Sweden)

    KeKe Gen

    2015-01-01

    The article considers an automatic quadcopter routing task. The quadcopter is an unmanned aerial vehicle (UAV) which has four engines. Currently, such widely used vehicles are controlled mainly from the operator's control panel. A relevant task is to develop a quadcopter control system that enables autonomous flight. The aim of this paper is to study the possibility of solving this problem using stabilization and trajectory control algorithms. A mathematical model of the quadcopter is a fairly complicated non-linear system, which can be obtained by using the Matlab Simulink and Universal Mechanism software systems simultaneously. Comparison of the simulation results in the two software packages, i.e. Matlab, wherein the nonlinear system of equations is modeled, and UM, wherein the flight path and other parameters are calculated according to the transmitted forces and moments, may prove the correctness of the model used. Synthesis of controllers for the orientation and stabilization subsystem and the trajectory control subsystem is performed on traditional principles, in particular using PID controllers and a method based on Lyapunov functions known in the literature as "backstepping". The most appropriate controls are selected by comparing the simulation results. Responses to stepped impacts and to tracking given paths have been simulated. It has been found that the flight path of the quadcopter almost coincides with the designated route, and the changes of coordinates of the quadcopter's mass center for the two compared controllers are almost the same, but the range of deviation of the angular position for the backstepping controller is much smaller than that for the PID controller.
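
One of the two controller families compared above is the classical PID. The sketch below is a generic discrete PID driving a toy first-order altitude model; the gains, time step and plant are illustrative assumptions, not the article's quadcopter model.

```python
# Minimal discrete PID controller sketch (gains and the toy altitude plant are
# assumptions; the article's quadcopter dynamics are far richer).

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral, self.prev_err = 0.0, 0.0

    def update(self, setpoint, measured):
        err = setpoint - measured
        self.integral += err * self.dt                 # accumulated error
        deriv = (err - self.prev_err) / self.dt        # error rate
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# Drive a toy first-order altitude model toward a 1 m setpoint.
pid, alt = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.1), 0.0
for _ in range(200):
    alt += pid.update(1.0, alt) * 0.1
```

The integral term removes steady-state error, while the derivative term damps the response; backstepping would instead derive the control law from a Lyapunov function of the tracking error.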

  14. Automatic Matching of Large Scale Images and Terrestrial LIDAR Based on App Synergy of Mobile Phone

    Science.gov (United States)

    Xia, G.; Hu, C.

    2018-04-01

    The digitalization of cultural heritage based on ground laser scanning technology has been widely applied. High-precision scanning and high-resolution photography of cultural relics are the main methods of data acquisition. Reconstruction with a complete point cloud and high-resolution images requires the matching of image and point cloud, the acquisition of homonym feature points, data registration, etc. However, establishing the one-to-one correspondence between each image and its corresponding point cloud depends on inefficient manual search. The effective classification and management of a large number of images, and the matching of large images with their corresponding point clouds, are the focus of this research. In this paper, we propose automatic matching of large scale images and terrestrial LiDAR based on app synergy of a mobile phone. Firstly, we develop an app based on Android to take pictures and record related classification information. Secondly, all the images are automatically grouped using the recorded information. Thirdly, a matching algorithm is used to match the global and local images. According to the one-to-one correspondence between the global image and the point cloud reflection intensity image, the automatic matching of each image and its corresponding LiDAR point cloud is realized. Finally, the mapping relationship between the global image, the local images and the intensity image is established according to homonym feature points, so that we can build the data structure of the global image, the local images within the global image, and the point cloud corresponding to each local image, and carry out visual management and querying of the images.

  15. Algorithmic Approach to Abstracting Linear Systems by Timed Automata

    DEFF Research Database (Denmark)

    Sloth, Christoffer; Wisniewski, Rafael

    2011-01-01

    This paper proposes an LMI-based algorithm for abstracting dynamical systems by timed automata, which enables automatic formal verification of linear systems. The proposed abstraction is based on partitioning the state space of the system using positive invariant sets generated by Lyapunov functions. This partitioning ensures that the vector field of the dynamical system is transversal to all facets of the cells, which induces some desirable properties of the abstraction. The algorithm is based on identifying intersections of level sets of quadratic Lyapunov functions, and determining…

  16. Quality-aware features-based noise level estimator for block matching and three-dimensional filtering algorithm

    Science.gov (United States)

    Xu, Shaoping; Hu, Lingyan; Yang, Xiaohui

    2016-01-01

    The performance of conventional denoising algorithms is usually controlled by one or several parameters whose optimal settings depend on the contents of the processed images and the characteristics of the noise. Among these parameters, noise level is a fundamental parameter that is always assumed to be known by most of the existing denoising algorithms (so-called nonblind denoising algorithms), which largely limits the applicability of these nonblind denoising algorithms in many applications. Moreover, these nonblind algorithms do not always achieve the best denoised images in visual quality even when fed with the actual noise level parameter. To address these shortcomings, in this paper we propose a new quality-aware features-based noise level estimator (NLE), which consists of quality-aware feature extraction and optimal noise level parameter prediction. First, considering that image local contrast features convey important structural information that is closely related to image perceptual quality, we utilize the marginal statistics of two local contrast operators, i.e., the gradient magnitude and the Laplacian of Gaussian (LOG), to extract quality-aware features. The proposed quality-aware features have very low computational complexity, making them well suited for time-constrained applications. Then we propose a learning-based framework where the noise level parameter is estimated based on the quality-aware features. Based on the proposed NLE, we develop a blind block matching and three-dimensional filtering (BBM3D) denoising algorithm which is capable of effectively removing additive white Gaussian noise, even coupled with impulse noise. The noise level parameter of the BBM3D algorithm is automatically tuned according to the quality-aware features, guaranteeing the best performance. As such, the classical block matching and three-dimensional algorithm can be transformed into a blind one in an unsupervised manner. Experimental results demonstrate that the…
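
The two local-contrast operators named above can be sketched directly: a central-difference gradient magnitude and a Laplacian response per interior pixel, summarized by their means. A plain 5-point Laplacian stands in for the paper's Laplacian of Gaussian, and the regression stage that maps statistics to a noise level is omitted; all names are assumptions.

```python
# Sketch of the two local-contrast feature maps the estimator builds on:
# gradient magnitude and a Laplacian response (a 5-point Laplacian stands in
# for the Laplacian of Gaussian; the learning/prediction stage is omitted).
import math

def feature_stats(img):
    """Return (mean gradient magnitude, mean Laplacian) over interior pixels."""
    h, w = len(img), len(img[0])
    gm, lap = [], []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = img[y][x + 1] - img[y][x - 1]      # central differences
            gy = img[y + 1][x] - img[y - 1][x]
            gm.append(math.hypot(gx, gy))
            lap.append(img[y][x + 1] + img[y][x - 1] +
                       img[y + 1][x] + img[y - 1][x] - 4 * img[y][x])
    mean = lambda v: sum(v) / len(v)
    return mean(gm), mean(lap)
```

A flat image yields zero for both statistics, while noise inflates them; a learned regressor can then map such statistics to a noise-level estimate.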

  17. Synthesis of digital locomotive receiver of automatic locomotive signaling

    Directory of Open Access Journals (Sweden)

    K. V. Goncharov

    2013-02-01

    Purpose. Automatic locomotive signaling of continuous type with numeric coding (ALSN) has several disadvantages: a small number of signal indications, low noise immunity, high inertia and low functional flexibility. The search for new and more advanced methods of signal processing for automatic locomotive signaling, and the synthesis of a noise-proof digital locomotive receiver, are essential. Methodology. The proposed algorithm for detecting and identifying locomotive signaling codes is based on the mutual correlation of the received oscillation and reference signals. For selecting the threshold levels of the decision element the following criterion has been formulated: the locomotive receiver should maximize the probability of a correct decision for a given probability of dangerous errors. Findings. It has been found that the random nature of the ALSN signal amplitude does not affect the detection algorithm. However, the distribution law and the numeric characteristics of the signal amplitude affect the probability of errors, and should be considered when selecting threshold levels. According to the obtained algorithm for detecting and identifying ALSN signals, a digital locomotive receiver has been synthesized. It contains a band-pass filter, a peak limiter, a normalizing amplifier with an automatic gain control circuit, an analog-to-digital converter and a digital signal processor. Originality. The ALSN system is improved by transferring its technical means to a modern microelectronic element base, and more advanced methods of detecting and identifying locomotive signaling codes are applied. Practical value. Use of digital technology in the construction of the ALSN locomotive receiver will expand its functionality and will increase the noise immunity and operational stability of the locomotive signaling system under various destabilizing factors.
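
A correlation receiver of the kind described can be sketched as: score the received samples against each reference code template and accept the best match only if it clears a threshold. The code templates and the threshold value below are purely illustrative assumptions, not real ALSN codes.

```python
# Correlation-receiver sketch (templates and threshold are hypothetical; real
# ALSN codes are pulse sequences, not these toy waveforms).

def correlate(rx, ref):
    # Inner product of the received samples with a reference template.
    return sum(a * b for a, b in zip(rx, ref))

def detect_code(rx, templates, threshold):
    """templates: dict mapping code name -> reference waveform samples.
    Returns the best-matching code, or None if no score clears the threshold."""
    best = max(templates, key=lambda name: correlate(rx, templates[name]))
    return best if correlate(rx, templates[best]) >= threshold else None

templates = {
    "G": [1, 1, -1, -1],   # hypothetical code patterns
    "Y": [1, -1, 1, -1],
}
code = detect_code([1, 1, -1, -1], templates, threshold=3)
```

The threshold realizes the decision criterion from the abstract: it trades missed detections against the probability of a dangerous (false) identification.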

  18. Automatic Conversion of a Conceptual Model to a Standard Multi-view Web Services Definition

    Directory of Open Access Journals (Sweden)

    Anass Misbah

    2018-03-01

    Information systems are becoming more and more heterogeneous, and with this comes the need for more generic transformation algorithms and more automatic generation meta-rules. In fact, the large number of terminals, devices, operating systems, platforms and environments requires a high level of adaptation. Therefore, it is becoming more and more difficult to validate, generate and implement models, designs and code manually. Web services are one of the technologies used massively nowadays; hence, they are considered one of the technologies that most require automatic rules for validation and automation. Many previous works have dealt with Web services by proposing new concepts such as Multi-view Web services, standard WSDL implementations of Multi-view Web services and, further, generic meta-rules for the automatic generation of Multi-view Web services. In this work we propose a new way of generating Multi-view Web services, based on an engine algorithm that takes as input both an initial conceptual model and a user's matrix and then unrolls a generic algorithm to dynamically generate a validated set of points of view. This set of points of view is transformed into a standard WSDL implementation of Multi-view Web services by means of the automatic transformation meta-rules.

  19. Automatic segmentation of the lateral geniculate nucleus: Application to control and glaucoma patients.

    Science.gov (United States)

    Wang, Jieqiong; Miao, Wen; Li, Jing; Li, Meng; Zhen, Zonglei; Sabel, Bernhard; Xian, Junfang; He, Huiguang

    2015-11-30

    The lateral geniculate nucleus (LGN) is a key relay center of the visual system. Because LGN morphology is affected by different diseases, it is of interest to analyze its morphology by segmentation. However, existing LGN segmentation methods are non-automatic, inefficient and prone to experimenters' bias. To address these problems, we proposed an automatic LGN segmentation algorithm based on T1-weighted imaging. First, prior information about the LGN was used to create a prior mask. Then region growing was applied to delineate the LGN. We evaluated this automatic LGN segmentation method by (1) comparison with manually segmented LGN, (2) anatomically locating the LGN in the visual system via LGN-based tractography, and (3) application to controls and glaucoma patients. The similarity coefficients between the automatically segmented LGN and the manually segmented one are 0.72 (0.06) for the left LGN and 0.77 (0.07) for the right LGN. LGN-based tractography shows that the subcortical pathway seeded from the LGN passes through the optic tract and also reaches V1 through the optic radiation, which is consistent with the LGN location in the visual system. In addition, LGN asymmetry, as well as LGN atrophy with age, is observed in normal controls. The investigation of glaucoma effects on LGN volumes demonstrates that the bilateral LGN volumes shrink in patients. The automatic LGN segmentation is objective, efficient, valid and applicable. Experimental results proved the validity and applicability of the algorithm. Our method will speed up research on the visual system and greatly enhance studies of different vision-related diseases. Copyright © 2015 Elsevier B.V. All rights reserved.
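The two-step segmentation above (a prior mask followed by region growing) can be illustrated with a minimal sketch. The dictionary-based 2D image, 4-connectivity and fixed intensity tolerance are simplifying assumptions; the actual method operates on 3D T1-weighted volumes.

```python
from collections import deque

def region_grow(image, seed, prior_mask, tol):
    """Grow a region from `seed`, restricted to the prior mask and to
    pixels whose intensity lies within `tol` of the seed intensity.
    `image` maps (x, y) -> intensity; `prior_mask` is a set of
    coordinates allowed by the anatomical prior."""
    seed_val = image[seed]
    region, frontier = {seed}, deque([seed])
    while frontier:
        x, y = frontier.popleft()
        for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (nb in prior_mask and nb not in region
                    and nb in image and abs(image[nb] - seed_val) <= tol):
                region.add(nb)
                frontier.append(nb)
    return region
```

The prior mask plays the same role as in the paper: growth can never leak outside the anatomically plausible area, however similar the intensities are.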

  20. Global Distribution Adjustment and Nonlinear Feature Transformation for Automatic Colorization

    Directory of Open Access Journals (Sweden)

    Terumasa Aoki

    2018-01-01

    Full Text Available Automatic colorization is generally classified into two groups: propagation-based methods and reference-based methods. In reference-based automatic colorization methods, color image(s) are used as reference(s) to reconstruct the original color of a gray target image. The most important task here is to find the best matching pairs for all pixels between the reference and target images in order to transfer color information from reference to target pixels. Many attractive local feature-based image matching methods have been developed over the last two decades. Unfortunately, as far as we know, there are no optimal matching methods for automatic colorization, because the requirements for pixel matching in automatic colorization are wholly different from those for traditional image matching. To design an efficient matching algorithm for automatic colorization, clustering pixels with low computational cost and generating a descriptive feature vector are the most important challenges to be solved. In this paper, we present a novel method to address these two problems. In particular, our work concentrates on solving the second problem (designing a descriptive feature vector); namely, we discuss how to learn a descriptive texture feature using a scaled sparse texture feature combined with a nonlinear transformation to construct an optimal feature descriptor. Our experimental results show that our proposed method outperforms the state-of-the-art methods in terms of robustness of color reconstruction for automatic colorization applications.

  1. Automatic approach to deriving fuzzy slope positions

    Science.gov (United States)

    Zhu, Liang-Jun; Zhu, A.-Xing; Qin, Cheng-Zhi; Liu, Jun-Zhi

    2018-03-01

    Fuzzy characterization of slope positions is important for geographic modeling. Most of the existing fuzzy classification-based methods for fuzzy characterization require extensive user intervention in data preparation and parameter setting, which is tedious and time-consuming. This paper presents an automatic approach to overcoming these limitations in the prototype-based inference method for deriving fuzzy membership values (or similarities) to slope positions. The key contribution is a procedure for finding the typical locations and setting the fuzzy inference parameters for each slope position type. Instead of being determined entirely by users, as in the original prototype-based inference method, in the proposed approach the typical locations and fuzzy inference parameters for each slope position type are automatically determined by a rule set based on prior domain knowledge and the frequency distributions of topographic attributes. Furthermore, the preparation of topographic attributes (e.g., slope gradient, curvature, and relative position index) is automated, so the proposed automatic approach has only one necessary input, namely the gridded digital elevation model of the study area. All compute-intensive algorithms in the proposed approach were sped up by parallel computing. Two study cases demonstrate that this approach can properly, conveniently and quickly derive fuzzy slope positions.
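The prototype-based inference at the core of this approach — scoring how similar a location's topographic attributes are to those of a typical location — can be sketched as below. The Gaussian similarity form, attribute names and parameter values are illustrative assumptions; in the paper the prototypes and inference parameters are derived automatically by a rule set.

```python
import math

def membership(attrs, prototype, widths):
    """Fuzzy membership of a location to a slope-position type, taken as
    the product of Gaussian similarities between each topographic
    attribute and the prototype (typical-location) value. `widths` are
    the fuzzy inference parameters controlling per-attribute tolerance."""
    sim = 1.0
    for name, value in attrs.items():
        d = value - prototype[name]
        sim *= math.exp(-(d * d) / (2.0 * widths[name] ** 2))
    return sim
```

A location matching the prototype exactly gets membership 1; membership decays smoothly as its attributes deviate, which is what makes the resulting slope-position map fuzzy rather than crisp.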

  2. Automatic Segmentation of Vessels in In-Vivo Ultrasound Scans

    DEFF Research Database (Denmark)

    Tamimi-Sarnikowski, Philip; Brink-Kjær, Andreas; Moshavegh, Ramin

    2017-01-01

    Ultrasound has become highly popular to monitor atherosclerosis, by scanning the carotid artery. The screening involves measuring the thickness of the vessel wall and diameter of the lumen. An automatic segmentation of the vessel lumen can enable the determination of lumen diameter. This paper presents a fully automatic segmentation algorithm for robustly segmenting the vessel lumen in longitudinal B-mode ultrasound images. The automatic segmentation is performed using a combination of B-mode and power Doppler images. The proposed algorithm includes a series of preprocessing steps, and performs a vessel segmentation by use of the marker-controlled watershed transform. The ultrasound images used in the study were acquired using the bk3000 ultrasound scanner (BK Ultrasound, Herlev, Denmark) with two transducers ”8L2 Linear” and ”10L2w Wide Linear” (BK Ultrasound, Herlev, Denmark). The algorithm...

  3. Dynamic Price Vector Formation Model-Based Automatic Demand Response Strategy for PV-Assisted EV Charging Stations

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Qifang; Wang, Fei; Hodge, Bri-Mathias; Zhang, Jianhua; Li, Zhigang; Shafie-Khah, Miadreza; Catalao, Joao P. S.

    2017-11-01

    A real-time price (RTP)-based automatic demand response (ADR) strategy for a PV-assisted electric vehicle (EV) Charging Station (PVCS) without vehicle-to-grid is proposed. The charging process is modeled as a dynamic linear program, instead of the normal day-ahead and real-time regulation strategy, to capture the advantages of both global and real-time optimization. Different from conventional price forecasting algorithms, a dynamic price vector formation model is proposed, based on a clustering algorithm, to form an RTP vector for a particular day. A dynamic feasible energy demand region (DFEDR) model considering grid voltage profiles is designed to calculate the lower and upper bounds. A deduction method is proposed to deal with the unknown information of future intervals, such as the actual stochastic arrival and departure times of EVs, which makes the DFEDR model suitable for global optimization. Finally, comparative cases articulate the advantages of the developed methods, and the validity of the proposed strategy in reducing electricity costs, mitigating peak charging demand, and improving PV self-consumption is verified through simulation scenarios.

  4. An automatic algorithm for blink-artifact suppression based on iterative template matching: application to single channel recording of cortical auditory evoked potentials

    Science.gov (United States)

    Valderrama, Joaquin T.; de la Torre, Angel; Van Dun, Bram

    2018-02-01

    Objective. Artifact reduction in electroencephalogram (EEG) signals is usually necessary to carry out data analysis appropriately. Despite the large number of denoising techniques available with a multichannel setup, there is a lack of efficient algorithms that remove (not only detect) blink-artifacts from a single channel EEG, which is of interest in many clinical and research applications. This paper describes and evaluates the iterative template matching and suppression (ITMS), a new method proposed for detecting and suppressing the artifact associated with blink activity from a single channel EEG. Approach. The approach of ITMS consists of (a) an iterative process in which blink-events are detected and the blink-artifact waveform of the analyzed subject is estimated, (b) generation of a signal modeling the blink-artifact, and (c) suppression of this signal from the raw EEG. The performance of ITMS is compared with the multi-window summation of derivatives within a window (MSDW) technique using both synthesized and real EEG data. Main results. Results suggest that ITMS presents adequate performance in detecting and suppressing blink-artifacts from a single channel EEG. When applied to the analysis of cortical auditory evoked potentials (CAEPs), ITMS provides a significant quality improvement in the resulting responses; i.e., in a cohort of 30 adults, the mean correlation coefficient improved from 0.37 to 0.65 when the blink-artifacts were detected and suppressed by ITMS. Significance. ITMS is an efficient solution to the problem of denoising blink-artifacts in single-channel EEG applications, both in clinical and research fields. The proposed ITMS algorithm is stable; automatic, since it does not require human intervention; low-invasive, because the EEG segments not contaminated by blink-artifacts remain unaltered; and easy to implement, as can be observed in the Matlab script implementing the algorithm, provided as supporting material.
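The suppression idea in ITMS — estimate the subject's blink-artifact waveform from detected blink events, then subtract the modeled artifact only at those events — can be sketched as follows. This is a toy illustration assuming blink onsets are already detected; the actual algorithm iterates detection and template estimation.

```python
def estimate_template(eeg, onsets, width):
    """Average the signal over the detected blink windows to estimate
    the subject-specific blink-artifact waveform."""
    template = [0.0] * width
    for o in onsets:
        for i in range(width):
            template[i] += eeg[o + i]
    return [t / len(onsets) for t in template]

def suppress(eeg, onsets, template):
    """Subtract the modeled artifact at each blink onset; samples
    outside blink windows are left unaltered (the 'low-invasive'
    property described in the abstract)."""
    out = list(eeg)
    for o in onsets:
        for i, t in enumerate(template):
            out[o + i] -= t
    return out
```

Because only the blink windows are touched, uncontaminated EEG segments pass through unchanged, which is exactly the property that distinguishes suppression from naive filtering.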

  5. Research on the Automatic Fusion Strategy of Fixed Value Boundary Based on the Weak Coupling Condition of Grid Partition

    Science.gov (United States)

    Wang, X. Y.; Dou, J. M.; Shen, H.; Li, J.; Yang, G. S.; Fan, R. Q.; Shen, Q.

    2018-03-01

    With the continuous strengthening of power grids, the network structure is becoming more and more complicated. Open, regional data modeling is used to complete the calculation of protection settings based on the local region. At the same time, a high-precision, quasi-real-time boundary fusion technique is needed to seamlessly integrate the various regions, so as to constitute an integrated fault computing platform that can conduct transient stability analysis covering the whole network with high accuracy and in multiple modes, handle the impact of non-single and cascading faults, and build the "first line of defense" of the power grid. The boundary fusion algorithm in this paper is an automatic fusion algorithm based on the accurate boundary coupling of the interconnected power grid partitions. It takes the actual operation mode as its qualification and completes the boundary coupling of the various weakly coupled partitions in open-loop mode, improving fusion efficiency, truly reflecting the transient stability level, and effectively solving the problems of excessive data, the difficulty of partition fusion, and the lack of effective fusion due to mutually exclusive conditions. In this paper, the basic principle of the fusion process is introduced first, and then the method of boundary fusion customization is introduced through a scene description. Finally, an example is given to illustrate how the algorithm effectively implements boundary fusion after grid partition and to verify its accuracy and efficiency.

  6. Covariance-Based Measurement Selection Criterion for Gaussian-Based Algorithms

    Directory of Open Access Journals (Sweden)

    Fernando A. Auat Cheein

    2013-01-01

    Full Text Available Process modeling by means of Gaussian-based algorithms often suffers from redundant information, which usually increases the estimation computational complexity without significantly improving the estimation performance. In this article, a non-arbitrary measurement selection criterion for Gaussian-based algorithms is proposed. The measurement selection criterion is based on the determination of the most significant measurement from both an estimation convergence perspective and the covariance matrix associated with the measurement. The selection criterion is independent of the nature of the measured variable. This criterion is used in conjunction with three Gaussian-based algorithms: the EIF (Extended Information Filter), the EKF (Extended Kalman Filter) and the UKF (Unscented Kalman Filter). Nevertheless, the measurement selection criterion shown herein can also be applied to other Gaussian-based algorithms. Although this work is focused on environment modeling, the results shown herein can be applied to other Gaussian-based algorithm implementations. Mathematical descriptions and implementation results that validate the proposal are also included in this work.
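One plausible instantiation of such a covariance-based selection criterion — not necessarily the exact criterion proposed in the article — is to pick the candidate measurement whose Kalman update most reduces the trace of the state covariance. For a direct scalar observation of state component j with noise variance R, the posterior variance is p_j·R/(p_j+R), so the trace reduction is p_j²/(p_j+R):

```python
def most_significant(P_diag, measurements):
    """Given the diagonal of the state covariance `P_diag` and candidate
    measurements as (state index, noise variance R) pairs, return the
    index of the candidate whose Kalman update yields the largest
    reduction of trace(P): reduction = p_j**2 / (p_j + R)."""
    def reduction(m):
        j, R = m
        p = P_diag[j]
        return p * p / (p + R)
    return max(range(len(measurements)),
               key=lambda k: reduction(measurements[k]))
```

Note how the criterion trades off state uncertainty against sensor noise: a very precise sensor (small R) on an already well-known component can still lose to a noisier sensor on a highly uncertain one.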

  7. Towards Automatic Personalized Content Generation for Platform Games

    DEFF Research Database (Denmark)

    Shaker, Noor; Yannakakis, Georgios N.; Togelius, Julian

    2010-01-01

    In this paper, we show that personalized levels can be automatically generated for platform games. We build on previous work, where models were derived that predicted player experience based on features of level design and on playing styles. These models are constructed using preference learning...... mechanism using both algorithmic and human players. The results indicate that the adaptation mechanism effectively optimizes level design parameters for particular players....

  8. Method for the detection of flaws in a tube proximate a contiguous member

    International Nuclear Information System (INIS)

    Holt, A.E.; Wehrmeister, A.E.; Whaley, H.L.

    1979-01-01

    A method for deriving the eddy current signature of a flaw in a tube proximate a contiguous member which is obscured in a composite signature of the flaw and contiguous member comprises subtracting from the composite signature a reference eddy current signature generated by scanning a reference or facsimile tube and contiguous member. The method is particularly applicable to detecting flaws in the tubes of heat exchangers of fossil fuel and nuclear power plants to enable the detection of flaws which would otherwise be obscured by contiguous members such as support plates supporting the tubes. (U.K.)

  9. Reliable clarity automatic-evaluation method for optical remote sensing images

    Science.gov (United States)

    Qin, Bangyong; Shang, Ren; Li, Shengyang; Hei, Baoqin; Liu, Zhiwen

    2015-10-01

    Image clarity, which reflects the degree of sharpness at the edges of objects in an image, is an important quality evaluation index for optical remote sensing images. Scholars at home and abroad have done a lot of work on the estimation of image clarity. At present, common clarity-estimation methods for digital images mainly include frequency-domain function methods, statistical parametric methods, gradient function methods and edge acutance methods. The frequency-domain function method is an accurate clarity measure, but its calculation process is complicated and cannot be carried out automatically. Statistical parametric methods and gradient function methods are both sensitive to image clarity, but their results are easily affected by the complexity of the image content. The edge acutance method is an effective approach to clarity estimation, but it requires picking out the edges manually. Due to these limits in accuracy, consistency or automation, the existing methods are not applicable to the quality evaluation of optical remote sensing images. In this article, a new clarity-evaluation method, based on the principle of the edge acutance algorithm, is proposed. In the new method, an edge detection algorithm and a gradient search algorithm are adopted to automatically search for object edges in images. Moreover, the calculation algorithm for edge sharpness has been improved. The new method has been tested with several groups of optical remote sensing images. Compared with the existing automatic evaluation methods, the new method performs better in both accuracy and consistency. Thus, the new method is an effective clarity-evaluation method for optical remote sensing images.
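A widely used member of the gradient-function family mentioned above is the Tenengrad score, the mean squared Sobel gradient magnitude. It is shown here only to illustrate gradient-based clarity measures; it is not the improved edge-acutance algorithm the article proposes.

```python
def tenengrad(img):
    """Mean squared Sobel gradient magnitude over interior pixels.
    Higher values indicate sharper edges. `img` is a list of rows of
    grey-level values."""
    h, w = len(img), len(img[0])
    total, n = 0.0, 0
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = (img[y-1][x+1] + 2*img[y][x+1] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y][x-1] - img[y+1][x-1])
            gy = (img[y+1][x-1] + 2*img[y+1][x] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y-1][x] - img[y-1][x+1])
            total += gx * gx + gy * gy
            n += 1
    return total / n
```

As the abstract notes for this family, the score is sensitive to sharpness but also rises with scene complexity, which is the weakness the proposed edge-acutance method is designed to avoid.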

  10. Validation of a knowledge-based boundary detection algorithm: a multicenter study

    International Nuclear Information System (INIS)

    Groch, M.W.; Erwin, W.D.; Murphy, P.H.; Ali, A.; Moore, W.; Ford, P.; Qian Jianzhong; Barnett, C.A.; Lette, J.

    1996-01-01

    A completely operator-independent boundary detection algorithm for multigated blood pool (MGBP) studies has been evaluated at four medical centers. The knowledge-based boundary detector (KBBD) algorithm is nondeterministic, utilizing a priori domain knowledge in the form of rule sets for the localization of cardiac chambers and image features, providing a case-by-case method for the identification and boundary definition of the left ventricle (LV). The nondeterministic algorithm employs multiple processing pathways, where KBBD rules have been designed for conventional (CONV) imaging geometries (nominal 45° LAO, nonzoom) as well as for highly zoomed and/or caudally tilted (ZOOM) studies. The resultant ejection fractions (LVEF) from the KBBD program have been compared with the standard LVEF calculations in 253 total cases at four institutions, 157 utilizing CONV geometry and 96 utilizing ZOOM geometries. The criteria for success were a KBBD boundary adequately defined over the LV, as judged by an experienced observer, and the correlation of KBBD LVEFs with the standard calculation of LVEFs for the institution. The overall success rate for all institutions combined was 99.2%, with an overall correlation coefficient of r=0.95 (P<0.001). The individual success rates and EF correlations (r) for CONV and ZOOM geometries were: 98%, r=0.93 (CONV) and 100%, r=0.95 (ZOOM). The KBBD algorithm can be adapted to varying clinical situations, employing automatic processing using artificial intelligence, with performance close to that of a human operator. (orig.)

  11. A Noise-Assisted Data Analysis Method for Automatic EOG-Based Sleep Stage Classification Using Ensemble Learning.

    Science.gov (United States)

    Olesen, Alexander Neergaard; Christensen, Julie A E; Sorensen, Helge B D; Jennum, Poul J

    2016-08-01

    Reducing the number of recording modalities for sleep staging research can benefit both researchers and patients, under the condition that they provide as accurate results as conventional systems. This paper investigates the possibility of exploiting the multisource nature of the electrooculography (EOG) signals by presenting a method for automatic sleep staging using the complete ensemble empirical mode decomposition with adaptive noise algorithm, and a random forest classifier. It achieves a high overall accuracy of 82% and a Cohen's kappa of 0.74 indicating substantial agreement between automatic and manual scoring.

  12. A block matching-based registration algorithm for localization of locally advanced lung tumors

    Energy Technology Data Exchange (ETDEWEB)

    Robertson, Scott P.; Weiss, Elisabeth; Hugo, Geoffrey D., E-mail: gdhugo@vcu.edu [Department of Radiation Oncology, Virginia Commonwealth University, Richmond, Virginia, 23298 (United States)

    2014-04-15

    Purpose: To implement and evaluate a block matching-based registration (BMR) algorithm for locally advanced lung tumor localization during image-guided radiotherapy. Methods: Small (1 cm³), nonoverlapping image subvolumes (“blocks”) were automatically identified on the planning image to cover the tumor surface using a measure of the local intensity gradient. Blocks were independently and automatically registered to the on-treatment image using a rigid transform. To improve speed and robustness, registrations were performed iteratively from coarse to fine image resolution. At each resolution, all block displacements having a near-maximum similarity score were stored. From this list, a single displacement vector for each block was iteratively selected which maximized the consistency of displacement vectors across immediately neighboring blocks. These selected displacements were regularized using a median filter before proceeding to registrations at finer image resolutions. After evaluating all image resolutions, the global rigid transform of the on-treatment image was computed using a Procrustes analysis, providing the couch shift for patient setup correction. This algorithm was evaluated for 18 locally advanced lung cancer patients, each with 4–7 weekly on-treatment computed tomography scans having physician-delineated gross tumor volumes. Volume overlap (VO) and border displacement errors (BDE) were calculated relative to the nominal physician-identified targets to establish residual error after registration. Results: Implementation of multiresolution registration improved block matching accuracy by 39% compared to registration using only the full resolution images. By also considering multiple potential displacements per block, initial errors were reduced by 65%. Using the final implementation of the BMR algorithm, VO was significantly improved from 77% ± 21% (range: 0%–100%) in the initial bony alignment to 91% ± 8% (range: 56%–100%; p < 0

  13. A block matching-based registration algorithm for localization of locally advanced lung tumors

    International Nuclear Information System (INIS)

    Robertson, Scott P.; Weiss, Elisabeth; Hugo, Geoffrey D.

    2014-01-01

    Purpose: To implement and evaluate a block matching-based registration (BMR) algorithm for locally advanced lung tumor localization during image-guided radiotherapy. Methods: Small (1 cm³), nonoverlapping image subvolumes (“blocks”) were automatically identified on the planning image to cover the tumor surface using a measure of the local intensity gradient. Blocks were independently and automatically registered to the on-treatment image using a rigid transform. To improve speed and robustness, registrations were performed iteratively from coarse to fine image resolution. At each resolution, all block displacements having a near-maximum similarity score were stored. From this list, a single displacement vector for each block was iteratively selected which maximized the consistency of displacement vectors across immediately neighboring blocks. These selected displacements were regularized using a median filter before proceeding to registrations at finer image resolutions. After evaluating all image resolutions, the global rigid transform of the on-treatment image was computed using a Procrustes analysis, providing the couch shift for patient setup correction. This algorithm was evaluated for 18 locally advanced lung cancer patients, each with 4–7 weekly on-treatment computed tomography scans having physician-delineated gross tumor volumes. Volume overlap (VO) and border displacement errors (BDE) were calculated relative to the nominal physician-identified targets to establish residual error after registration. Results: Implementation of multiresolution registration improved block matching accuracy by 39% compared to registration using only the full resolution images. By also considering multiple potential displacements per block, initial errors were reduced by 65%. Using the final implementation of the BMR algorithm, VO was significantly improved from 77% ± 21% (range: 0%–100%) in the initial bony alignment to 91% ± 8% (range: 56%–100%; p < 0.001). Left
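The core matching step in the two records above — exhaustively searching for a block's best displacement under a similarity score — can be illustrated in two dimensions with a sum-of-squared-differences search. This sketch omits the multiresolution scheme, neighbor-consistency selection and median regularization of the actual BMR algorithm.

```python
def best_displacement(ref, mov, block, search):
    """Exhaustively match one block of `ref` inside `mov` using the sum
    of squared differences; return the (dy, dx) displacement with the
    lowest score. `block` = (y, x, size); `search` = maximum shift."""
    y0, x0, s = block
    patch = [row[x0:x0 + s] for row in ref[y0:y0 + s]]
    best, best_d = float("inf"), (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            ssd = 0.0
            for i in range(s):
                for j in range(s):
                    diff = patch[i][j] - mov[y0 + dy + i][x0 + dx + j]
                    ssd += diff * diff
            if ssd < best:
                best, best_d = ssd, (dy, dx)
    return best_d
```

In the full algorithm, many such per-block displacements are pooled and a single rigid (Procrustes) transform is fitted to them to obtain the couch shift.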

  14. Semantic Cuing and the Scale Insensitivity of Recency and Contiguity

    Science.gov (United States)

    Polyn, Sean M.; Erlikhman, Gennady; Kahana, Michael J.

    2012-01-01

    In recalling a set of previously experienced events, people exhibit striking effects of recency, contiguity, and similarity: Recent items tend to be recalled best and first, and items that were studied in neighboring positions or that are similar to one another in some other way tend to evoke one another during recall. Effects of recency and contiguity have most often been investigated in tasks that require people to recall random word lists. Similarity effects have most often been studied in tasks that require people to recall categorized word lists. Here we examine recency and contiguity effects in lists composed of items drawn from 3 distinct taxonomic categories and in which items from a given category are temporally separated from one another by items from other categories, all of which are tested for recall. We find evidence for long-term recency and for long-range contiguity, bolstering support for temporally sensitive models of memory and highlighting the importance of understanding the interaction between temporal and semantic information during memory search. PMID:21299330

  15. A Pressure Plate-Based Method for the Automatic Assessment of Foot Strike Patterns During Running.

    Science.gov (United States)

    Santuz, Alessandro; Ekizos, Antonis; Arampatzis, Adamantios

    2016-05-01

    The foot strike pattern (FSP, a description of how the foot touches the ground at impact) is recognized to be a predictor of both performance and injury risk. The objective of the current investigation was to validate an original foot strike pattern assessment technique based on the numerical analysis of foot pressure distribution. We analyzed the strike patterns during running of 145 healthy men and women (85 male, 60 female). The participants ran on a treadmill with an integrated pressure plate at three different speeds: preferred (shod and barefoot, 2.8 ± 0.4 m/s), faster (shod, 3.5 ± 0.6 m/s) and slower (shod, 2.3 ± 0.3 m/s). A custom-designed algorithm allowed automatic footprint recognition and FSP evaluation. Incomplete footprints were simultaneously identified and corrected by the software itself. The widely used technique of analyzing high-speed video recordings was checked for its reliability and was used to validate the numerical technique. The automatic numerical approach showed good conformity with the reference video-based technique (ICC = 0.93, p < 0.01). The great improvement in data throughput and the increased completeness of results allow the use of this software as a powerful feedback tool in a simple experimental setup.
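A classic way to classify FSP from a pressure footprint — not necessarily the authors' algorithm — is the strike index: the position of the centre of pressure in the first contact frame, expressed as a fraction of foot length (heel = 0). The frame representation below, a list of (position, pressure) samples along the foot axis, is a hypothetical simplification.

```python
def strike_pattern(first_frame, foot_length):
    """Classify the foot strike pattern from the first pressure frame at
    contact using the strike-index rule: centre-of-pressure position
    along the foot axis divided by foot length. Below 1/3 is a rearfoot
    strike, 1/3 to 2/3 midfoot, above 2/3 forefoot."""
    total = sum(p for _, p in first_frame)
    cop = sum(y * p for y, p in first_frame) / total
    index = cop / foot_length
    if index < 1 / 3:
        return "rearfoot"
    if index <= 2 / 3:
        return "midfoot"
    return "forefoot"
```

A real implementation would first segment the footprint, establish the foot axis, and handle incomplete prints, which is where the paper's automatic correction comes in.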

  16. Non-contiguous spinal injury in cervical spinal trauma: evaluation with cervical spine MRI

    International Nuclear Information System (INIS)

    Choi, Soo Jung; Shin, Myung Jin; Kim, Sung Moon; Bae, Sang Jin

    2004-01-01

    We wished to evaluate the incidence of non-contiguous spinal injury in the cervicothoracic junction (CTJ) or the upper thoracic spine on cervical spinal MR images in patients with cervical spinal injuries. Seventy-five cervical spine MR imaging studies for acute cervical spinal injury were retrospectively reviewed (58 men and 17 women, mean age: 35.3, range: 18-81 years). They were divided into three groups based on the mechanism of injury (axial compression, hyperflexion or hyperextension injury) according to the findings on the MR and CT images. On cervical spine MR images, we evaluated the presence of non-contiguous spinal injury in the CTJ or upper thoracic spine with regard to the presence of marrow contusion or fracture, ligament injury, traumatic disc herniation and spinal cord injury. Twenty-one cases (28%) showed CTJ or upper thoracic spinal injuries (C7-T5) on cervical spinal MR images that were separated from the cervical spinal injuries. Seven of the 21 cases revealed overt fractures in the CTJs or upper thoracic spines. Ligament injury in these regions was found in three cases. Traumatic disc herniation and spinal cord injury in these regions were shown in one and two cases, respectively. The incidence of non-contiguous spinal injuries in the CTJ or upper thoracic spine was higher in the axial compression injury group (35.5%) than in the hyperflexion (26.9%) or hyperextension (25%) injury groups. However, the difference was not statistically significant (p > 0.05). Cervical spinal MR revealed non-contiguous CTJ or upper thoracic spinal injuries in 28% of the patients with cervical spinal injury. The mechanism of cervical spinal injury did not significantly affect the incidence of non-contiguous CTJ or upper thoracic spinal injury.

  17. Non-contiguous spinal injury in cervical spinal trauma: evaluation with cervical spine MRI

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Soo Jung; Shin, Myung Jin; Kim, Sung Moon [University of Ulsan College of Medicine, Seoul (Korea, Republic of); Bae, Sang Jin [Sanggyepaik Hospital, Inje University, Seoul (Korea, Republic of)

    2004-12-15

    We wished to evaluate the incidence of non-contiguous spinal injury in the cervicothoracic junction (CTJ) or the upper thoracic spine on cervical spinal MR images in patients with cervical spinal injuries. Seventy-five cervical spine MR imaging studies for acute cervical spinal injury were retrospectively reviewed (58 men and 17 women, mean age: 35.3, range: 18-81 years). They were divided into three groups based on the mechanism of injury (axial compression, hyperflexion or hyperextension injury) according to the findings on the MR and CT images. On cervical spine MR images, we evaluated the presence of non-contiguous spinal injury in the CTJ or upper thoracic spine with regard to the presence of marrow contusion or fracture, ligament injury, traumatic disc herniation and spinal cord injury. Twenty-one cases (28%) showed CTJ or upper thoracic spinal injuries (C7-T5) on cervical spinal MR images that were separated from the cervical spinal injuries. Seven of the 21 cases revealed overt fractures in the CTJs or upper thoracic spines. Ligament injury in these regions was found in three cases. Traumatic disc herniation and spinal cord injury in these regions were shown in one and two cases, respectively. The incidence of non-contiguous spinal injuries in the CTJ or upper thoracic spine was higher in the axial compression injury group (35.5%) than in the hyperflexion (26.9%) or hyperextension (25%) injury groups. However, the difference was not statistically significant (p > 0.05). Cervical spinal MR revealed non-contiguous CTJ or upper thoracic spinal injuries in 28% of the patients with cervical spinal injury. The mechanism of cervical spinal injury did not significantly affect the incidence of non-contiguous CTJ or upper thoracic spinal injury.

  18. Facilitating coronary artery evaluation in MDCT using a 3D automatic vessel segmentation tool

    International Nuclear Information System (INIS)

    Fawad Khan, M.; Gurung, Jessen; Maataoui, Adel; Brehmer, Boris; Herzog, Christopher; Vogl, Thomas J.; Wesarg, Stefan; Dogan, Selami; Ackermann, Hanns; Assmus, Birgit

    2006-01-01

    The purpose of this study was to investigate a 3D coronary artery segmentation algorithm using 16-row MDCT data sets. Fifty patients underwent cardiac CT (Sensation 16, Siemens) and coronary angiography. Automatic and manual detection of coronary artery stenosis was performed. A 3D coronary artery segmentation algorithm (Fraunhofer Institute for Computer Graphics, Darmstadt) was used for automatic evaluation. All significant stenoses (>50%) in vessels >1.5 mm in diameter were recorded. Each detection tool was used by one reader who was blinded to the results of the other detection method and to the results of coronary angiography. Sensitivity and specificity were determined for automatic and manual detection, as was the time required for both CT-based evaluation methods. The overall sensitivity and specificity of the automatic and manual approaches were 93.1% vs. 95.83% and 86.1% vs. 81.9%, respectively. The time required for automatic evaluation was significantly shorter than with the manual approach, i.e., 246.04±43.17 s for the automatic approach and 526.88±45.71 s for the manual approach (P<0.0001). In 94% of the coronary artery branches, automatic detection required less time than the manual approach. Automatic coronary vessel evaluation is feasible. It reduces the time required for cardiac CT evaluation with similar sensitivity and specificity, and facilitates the evaluation of MDCT coronary angiography in a standardized fashion. (orig.)

  19. Automatic detection of AutoPEEP during controlled mechanical ventilation

    Directory of Open Access Journals (Sweden)

    Nguyen Quang-Thang

    2012-06-01

    Full Text Available Abstract Background Dynamic hyperinflation, hereafter called AutoPEEP (auto-positive end expiratory pressure) with a slight abuse of terminology, is a frequent deleterious phenomenon in patients undergoing mechanical ventilation. Although not readily quantifiable, AutoPEEP can be recognized on the expiratory portion of the flow waveform. If expiratory flow does not return to zero before the next inspiration, AutoPEEP is present. This simple detection, however, requires the eye of an expert clinician at the patient's bedside. An automatic detection of AutoPEEP should be helpful to optimize care. Methods In this paper, a platform for automatic detection of AutoPEEP based on the flow signal available on most recent mechanical ventilators is introduced. The detection algorithms are developed on the basis of robust non-parametric hypothesis tests that require no prior information on the signal distribution. In particular, two detectors are proposed: one is based on SNT (Signal Norm Testing) and the other is an extension of SNT in the sequential framework. The performance assessment was carried out on a respiratory system analog and ex vivo on various retrospectively acquired patient curves. Results The experimental results show that the proposed algorithm provides relevant AutoPEEP detection on both simulated and real data. The analysis of clinical data showed that the proposed detectors can automatically detect AutoPEEP with an accuracy of 93% and a recall (sensitivity) of 90%. Conclusions The proposed platform provides automatic early detection of AutoPEEP. Such functionality can be integrated into currently used mechanical ventilators for continuous monitoring of the patient-ventilator interface and can therefore alleviate the clinician's workload.
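    The bedside criterion described above — expiratory flow failing to return to zero before the next inspiration — can be sketched as a simple threshold test on the flow signal. This is a minimal illustration only, not the SNT-based statistical detectors proposed in the paper; the function name, flow values and tolerance are invented for the example:

```python
def detect_autopeep(flow, insp_onsets, tol=0.05):
    """Flag breaths whose end-expiratory flow has not returned to (near) zero.

    flow        : list of flow samples (L/s), negative during expiration
    insp_onsets : sample indices where each inspiration begins
    tol         : magnitude (L/s) below which flow counts as "returned to zero"
    """
    flagged = []
    for i in insp_onsets:
        end_exp_flow = flow[i - 1]  # last expiratory sample before inspiration
        flagged.append(abs(end_exp_flow) > tol)  # True => AutoPEEP suspected
    return flagged
```

    A real detector would, as the paper does, test the flow statistically over a window rather than a single sample, to be robust against sensor noise.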

  20. Application of image recognition-based automatic hyphae detection in fungal keratitis.

    Science.gov (United States)

    Wu, Xuelian; Tao, Yuan; Qiu, Qingchen; Wu, Xinyi

    2018-03-01

    The purpose of this study is to evaluate the accuracy of two methods for the diagnosis of fungal keratitis: automatic hyphae detection based on image recognition, and corneal smear examination. We evaluate the sensitivity and specificity of automatic hyphae detection based on image recognition in the diagnosis of fungal keratitis. We analyze the consistency between clinical symptoms and the density of hyphae, quantified using automatic hyphae detection based on image recognition. In our study, 56 cases with fungal keratitis (single eye) and 23 cases with bacterial keratitis were included. All cases underwent routine slit lamp biomicroscopy, corneal smear examination, microorganism culture and assessment of in vivo confocal microscopy images before starting medical treatment. We then analyzed the in vivo confocal microscopy images using automatic hyphae detection based on image recognition to evaluate its sensitivity and specificity, and compared it with corneal smear examination. The next step was to use the density index to assess the severity of infection, find the correlation with the patients' clinical symptoms and evaluate the consistency between them. The accuracy of this technology was superior to corneal smear examination (p < 0.05). The sensitivity of automatic hyphae detection based on image recognition was 89.29%, and the specificity was 95.65%. The area under the ROC curve was 0.946. The correlation coefficient between the grading of severity in fungal keratitis by automatic hyphae detection based on image recognition and the clinical grading was 0.87. Automatic hyphae detection based on image recognition showed high sensitivity and specificity and identified fungal keratitis better than corneal smear examination. This technology has advantages over conventional manual identification of confocal microscopy images.
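    The reported sensitivity and specificity follow the standard confusion-matrix definitions. As a quick illustration, the counts below are hypothetical, chosen only because they are consistent with the reported 89.29% / 95.65% rates for 56 fungal and 23 bacterial cases:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts: 50 of 56 fungal cases detected, 22 of 23 bacterial
# cases correctly ruled out.
sens, spec = sensitivity_specificity(tp=50, fn=6, tn=22, fp=1)
```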

  1. Efficient RNA structure comparison algorithms.

    Science.gov (United States)

    Arslan, Abdullah N; Anandan, Jithendar; Fry, Eric; Monschke, Keith; Ganneboina, Nitin; Bowerman, Jason

    2017-12-01

    Recently proposed relative addressing-based ([Formula: see text]) RNA secondary structure representation has important features by which an RNA structure database can be stored into a suffix array. A fast substructure search algorithm has been proposed based on binary search on this suffix array. Using this substructure search algorithm, we present a fast algorithm that finds the largest common substructure of given multiple RNA structures in [Formula: see text] format. The multiple RNA structure comparison problem is NP-hard in its general formulation. We introduced a new problem for comparing multiple RNA structures. This problem has more strict similarity definition and objective, and we propose an algorithm that solves this problem efficiently. We also develop another comparison algorithm that iteratively calls this algorithm to locate nonoverlapping large common substructures in compared RNAs. With the new resulting tools, we improved the RNASSAC website (linked from http://faculty.tamuc.edu/aarslan ). This website now also includes two drawing tools: one specialized for preparing RNA substructures that can be used as input by the search tool, and another one for automatically drawing the entire RNA structure from a given structure sequence.
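    The substructure search described above — binary search over a suffix array of the structure representation — can be sketched generically. This is not the authors' implementation or their relative-addressing format; it only illustrates how a sorted suffix array lets all occurrences of a query pattern be located with two binary searches (here via the stdlib `bisect` module, on a plain structure string):

```python
import bisect

def build_suffix_array(s):
    """Start indices of the suffixes of s, sorted lexicographically."""
    return sorted(range(len(s)), key=lambda i: s[i:])

def find_substructure(s, sa, pattern):
    """All start positions where pattern occurs in s, via binary search on sa."""
    suffixes = [s[i:] for i in sa]           # O(n^2) space; fine for a sketch
    lo = bisect.bisect_left(suffixes, pattern)
    hi = bisect.bisect_right(suffixes, pattern + "\xff")  # past all matches
    return sorted(sa[lo:hi])
```

    A production implementation would compare the pattern against suffixes in place instead of materializing them, keeping the search at O(m log n).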

  2. All-automatic swimmer tracking system based on an optimized scaled composite JTC technique

    Science.gov (United States)

    Benarab, D.; Napoléon, T.; Alfalou, A.; Verney, A.; Hellard, P.

    2016-04-01

    In this paper, an all-automatic optimized JTC-based swimmer tracking system is proposed and evaluated on a real video database drawn from national and international swimming competitions (French National Championships, Limoges 2015; FINA World Championships, Barcelona 2013 and Kazan 2015). First, we calibrate the swimming pool using the DLT (Direct Linear Transformation) algorithm. DLT calculates the homography matrix given a sufficient set of correspondence points between pixel and metric coordinates, i.e., it takes into account the dimensions of the swimming pool and the type of the swim. Once the swimming pool is calibrated, we extract the lane. Then we apply a motion detection approach to detect the swimmer globally within this lane. Next, we apply our optimized Scaled Composite JTC, which consists of creating an adapted input plane that contains the predicted region and the head reference image. The latter is generated using a composite filter of fin images chosen from the database. The dimension of this reference is scaled according to the ratio between the head's dimension and the width of the swimming lane. Finally, the proposed approach improves the performance of our previous tracking method by adding a detection module, thereby achieving an all-automatic swimmer tracking system.
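    The DLT calibration step solves for the homography mapping pixel coordinates to metric pool coordinates from point correspondences. The sketch below is a generic DLT estimator with NumPy, not the authors' calibration code; the correspondences in the usage example are invented:

```python
import numpy as np

def dlt_homography(src, dst):
    """Estimate the 3x3 homography H with dst ~ H @ src from >= 4 point pairs."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence contributes two linear constraints on h.
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # h is the right singular vector for the smallest singular value of A.
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_h(H, pt):
    """Map a 2D point through H, normalizing the homogeneous coordinate."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return x / w, y / w
```

    In practice the pixel coordinates would first be normalized (Hartley normalization) for numerical stability; that refinement is omitted here.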

  3. Automatic Sleep Scoring in Normals and in Individuals with Neurodegenerative Disorders According to New International Sleep Scoring Criteria

    DEFF Research Database (Denmark)

    Jensen, Peter S.; Sørensen, Helge Bjarup Dissing; Leonthin, Helle

    2010-01-01

    The aim of this study was to develop a fully automatic sleep scoring algorithm on the basis of a reproduction of new international sleep scoring criteria from the American Academy of Sleep Medicine. A biomedical signal processing algorithm was developed, allowing for automatic sleep depth....... Based on an observed reliability of the manual scorer of 92.5% (Cohen's Kappa: 0.87) in the normal group and 85.3% (Cohen's Kappa: 0.73) in the abnormal group, this study concluded that although the developed algorithm was capable of scoring normal sleep with an accuracy around the manual interscorer...... reliability, it failed in accurately scoring abnormal sleep as encountered for the Parkinson disease/multiple system atrophy patients....

  4. Automatic sleep scoring in normals and in individuals with neurodegenerative disorders according to new international sleep scoring criteria

    DEFF Research Database (Denmark)

    Jensen, Peter S; Sorensen, Helge B D; Jennum, Poul

    2010-01-01

    The aim of this study was to develop a fully automatic sleep scoring algorithm on the basis of a reproduction of new international sleep scoring criteria from the American Academy of Sleep Medicine. A biomedical signal processing algorithm was developed, allowing for automatic sleep depth....... Based on an observed reliability of the manual scorer of 92.5% (Cohen's Kappa: 0.87) in the normal group and 85.3% (Cohen's Kappa: 0.73) in the abnormal group, this study concluded that although the developed algorithm was capable of scoring normal sleep with an accuracy around the manual interscorer...... reliability, it failed in accurately scoring abnormal sleep as encountered for the Parkinson disease/multiple system atrophy patients....

  5. Normalization based K means Clustering Algorithm

    OpenAIRE

    Virmani, Deepali; Taneja, Shweta; Malhotra, Geetika

    2015-01-01

    K-means is an effective clustering technique used to separate similar data into groups based on initial centroids of clusters. In this paper, a Normalization-based K-means clustering algorithm (N-K means) is proposed. The proposed N-K means clustering algorithm applies normalization to the available data prior to clustering, and calculates the initial centroids based on weights. Experimental results demonstrate the improvement of the proposed N-K means clustering algorithm over existing...
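    The benefit of normalizing before clustering can be illustrated with min-max scaling followed by plain Lloyd iterations. This is a generic sketch, not the paper's weight-based centroid initialization; the data and initial centroids in the example are invented:

```python
def min_max_normalize(data):
    """Scale each feature of data (list of equal-length rows) into [0, 1]."""
    cols = list(zip(*data))
    lo = [min(c) for c in cols]
    span = [max(c) - min(c) or 1.0 for c in cols]  # guard constant columns
    return [[(v - l) / s for v, l, s in zip(row, lo, span)] for row in data]

def kmeans(data, centroids, iters=20):
    """Basic Lloyd iterations starting from the given initial centroids."""
    for _ in range(iters):
        clusters = [[] for _ in centroids]
        for p in data:
            # Assign each point to its nearest centroid (squared distance).
            j = min(range(len(centroids)),
                    key=lambda k: sum((a - b) ** 2
                                      for a, b in zip(p, centroids[k])))
            clusters[j].append(p)
        # Recompute centroids as cluster means (keep old one if empty).
        centroids = [[sum(c) / len(c) for c in zip(*cl)] if cl else cen
                     for cl, cen in zip(clusters, centroids)]
    return centroids, clusters
```

    Without normalization, a feature with a large numeric range dominates the distance computation; scaling all features into [0, 1] gives them equal weight.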

  6. Smartphone based automatic organ validation in ultrasound video.

    Science.gov (United States)

    Vaish, Pallavi; Bharath, R; Rajalakshmi, P

    2017-07-01

    Telesonography involves transmission of ultrasound video from remote areas to doctors for diagnosis. Due to the lack of trained sonographers in remote areas, ultrasound videos scanned by untrained persons often do not contain the information required by a physician. Rather than relying on standard methods for video transmission, mHealth-driven systems need to be developed for transmitting valid medical videos. To address this problem, we propose an organ validation algorithm that evaluates an ultrasound video based on its content, guiding the semi-skilled operator to acquire representative data from the patient. Advances in smartphone technology allow computationally intensive medical image processing to be performed on the device itself. In this paper we have developed an application (app) for a smartphone which can automatically detect the valid frames (with clear organ visibility) in an ultrasound video, ignore the invalid frames (with no organ visibility), and produce a compressed video. This is done by extracting GIST features from the region of interest (ROI) of each frame and then classifying the frame using an SVM classifier with a quadratic kernel. The developed application achieved an accuracy of 94.93% in classifying valid and invalid images.

  7. Automatic Image Alignment and Stitching of Medical Images with Seam Blending

    OpenAIRE

    Abhinav Kumar; Raja Sekhar Bandaru; B Madhusudan Rao; Saket Kulkarni; Nilesh Ghatpande

    2010-01-01

    This paper proposes an algorithm which automatically aligns and stitches the component medical images (fluoroscopic) with varying degrees of overlap into a single composite image. The alignment method is based on a similarity measure between the component images. As applied here the technique is intensity-based rather than feature-based. It works well in domains where feature-based methods have difficulty, yet it is more robust than traditional correlation. Component images are stitched together using...

  8. Dicty_cDB: Contig-U16279-1 [Dicty_cDB

    Lifescience Database Archive (English)

    Full Text Available (Truncated database record: BLAST alignment output for Contig-U16279-1; significant hits include Streptomyces kanamyceticus (AB254080) and Klebsiella pneumoniae (CP000964).)

  9. Novel Automatic Filter-Class Feature Selection for Machine Learning Regression

    DEFF Research Database (Denmark)

    Wollsen, Morten Gill; Hallam, John; Jørgensen, Bo Nørregaard

    2017-01-01

    With the increased focus on application of Big Data in all sectors of society, the performance of machine learning becomes essential. Efficient machine learning depends on efficient feature selection algorithms. Filter feature selection algorithms are model-free and therefore very fast, but require...... model in the feature selection process. PCA is often used in the machine learning literature and can be considered the default feature selection method. RDESF outperformed PCA in both experiments in both prediction error and computational speed. RDESF is a new step into filter-based automatic feature...
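    The defining property of a filter method — ranking features by a fast, model-free statistic rather than by training a predictor — can be sketched with absolute Pearson correlation against the regression target. This illustrates the general filter idea only; it is not the RDESF method, and the feature names and data are invented:

```python
def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def filter_select(features, target, k):
    """Names of the k features most (absolutely) correlated with the target."""
    ranked = sorted(features,
                    key=lambda name: -abs(pearson(features[name], target)))
    return ranked[:k]
```

    Because no model is fitted, the cost is one pass per feature, which is why filter methods scale to wide data sets where wrapper methods become impractical.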

  10. A new methodology for automatic detection of reference points in 3D cephalometry: A pilot study.

    Science.gov (United States)

    Ed-Dhahraouy, Mohammed; Riri, Hicham; Ezzahmouly, Manal; Bourzgui, Farid; El Moutaoukkil, Abdelmajid

    2018-04-05

    The aim of this study was to develop a new method for the automatic detection of reference points in 3D cephalometry, overcoming the limits of 2D cephalometric analyses. A specific application was designed using the C++ language for automatic and manual identification of 21 (reference) points on the craniofacial structures. Our algorithm is based on the implementation of an anatomical and geometrical network adapted to the craniofacial structure. This network was constructed based on the anatomical knowledge of the 3D cephalometric (reference) points. The proposed algorithm was tested on five CBCT images. The proposed approach for automatic 3D cephalometric identification was able to detect the 21 points with a mean error of 2.32 mm. In this pilot study, we propose an automated methodology for the identification of the 3D cephalometric (reference) points. A larger sample will be used in the future to assess the method's validity and reliability. Copyright © 2018 CEO. Published by Elsevier Masson SAS. All rights reserved.

  11. Algorithms for Cytoplasm Segmentation of Fluorescence Labelled Cells

    Directory of Open Access Journals (Sweden)

    Carolina Wählby

    2002-01-01

    Full Text Available Automatic cell segmentation has various applications in cytometry, and while the nucleus is often very distinct and easy to identify, the cytoplasm presents much more of a challenge. A new combination of image analysis algorithms for segmentation of cells imaged by fluorescence microscopy is presented. The algorithm consists of an image pre-processing step, a general segmentation and merging step followed by a segmentation quality measurement. The quality measurement consists of a statistical analysis of a number of shape descriptive features. Objects whose features differ from those of correctly segmented single cells can be further processed by a splitting step. By statistical analysis we therefore get a feedback system for separation of clustered cells. After the segmentation is completed, the quality of the final segmentation is evaluated. By training the algorithm on a representative set of training images, the algorithm is made fully automatic for subsequent images created under similar conditions. Automatic cytoplasm segmentation was tested on CHO cells stained with calcein. The fully automatic method showed between 89% and 97% correct segmentation as compared to manual segmentation.

  12. Using RGB-D sensors and evolutionary algorithms for the optimization of workstation layouts.

    Science.gov (United States)

    Diego-Mas, Jose Antonio; Poveda-Bautista, Rocio; Garzon-Leal, Diana

    2017-11-01

    RGB-D sensors can collect postural data in an automatized way. However, the application of these devices in real work environments requires overcoming problems such as lack of accuracy or body parts' occlusion. This work presents the use of RGB-D sensors and genetic algorithms for the optimization of workstation layouts. RGB-D sensors are used to capture workers' movements when they reach objects on workbenches. Collected data are then used to optimize workstation layout by means of genetic algorithms considering multiple ergonomic criteria. Results show that typical drawbacks of using RGB-D sensors for body tracking are not a problem for this application, and that the combination with intelligent algorithms can automatize the layout design process. The procedure described can be used to automatically suggest new layouts when workers or processes of production change, to adapt layouts to specific workers based on their ways to do the tasks, or to obtain layouts simultaneously optimized for several production processes. Copyright © 2017 Elsevier Ltd. All rights reserved.
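    The layout-optimization step can be sketched as a small genetic algorithm that assigns items to workbench slots so as to minimize a frequency-weighted reach cost. This is a deliberately simplified single-criterion sketch, not the authors' multi-criteria ergonomic formulation; the fitness function, frequencies and distances are invented:

```python
import random

def layout_cost(perm, freq, dist):
    """Frequency-weighted reach cost when item perm[s] occupies slot s."""
    return sum(freq[item] * dist[slot] for slot, item in enumerate(perm))

def optimize_layout(freq, dist, pop_size=30, gens=60, seed=1):
    """Evolve a permutation of items over slots with elitism + swap mutation."""
    rng = random.Random(seed)
    n = len(freq)
    # Seed the population with the identity layout plus random permutations.
    pop = [list(range(n))] + [rng.sample(range(n), n)
                              for _ in range(pop_size - 1)]
    for _ in range(gens):
        pop.sort(key=lambda p: layout_cost(p, freq, dist))
        elite = pop[: pop_size // 3]          # keep the best third unchanged
        children = []
        while len(elite) + len(children) < pop_size:
            child = rng.choice(elite)[:]
            i, j = rng.sample(range(n), 2)    # swap mutation
            child[i], child[j] = child[j], child[i]
            children.append(child)
        pop = elite + children
    return min(pop, key=lambda p: layout_cost(p, freq, dist))
```

    For this cost the optimum simply places the most frequently reached items at the closest slots; the GA framing pays off once multiple, conflicting ergonomic criteria enter the fitness function, as in the paper.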

  13. Implementation of perceptual aspects in a face recognition algorithm

    International Nuclear Information System (INIS)

    Crenna, F; Bovio, L; Rossi, G B; Zappa, E; Testa, R; Gasparetto, M

    2013-01-01

    Automatic face recognition is a biometric technique particularly appreciated in security applications. In fact, face recognition offers the opportunity to operate at a low level of invasiveness without the collaboration of the subjects under test, with face images gathered either from surveillance systems or from specific cameras located at strategic points. The automatic recognition algorithms perform a measurement, on the face images, of a set of specific characteristics of the subject and provide a recognition decision based on the measurement results. Unfortunately, several quantities may influence the measurement of the face geometry, such as its orientation, the lighting conditions, the expression and so on, affecting the recognition rate. On the other hand, human recognition of faces is a very robust process, far less influenced by the surrounding conditions. For this reason it may be interesting to insert perceptual aspects into an automatic facial-based recognition algorithm to improve its robustness. This paper presents a first study in this direction, investigating the correlation between the results of a perception experiment and the facial geometry, estimated by means of the positions of a set of reference (repère) points.

  14. Algorithm for predicting macular dysfunction based on moment invariants classification of the foveal avascular zone in functional retinal images

    Directory of Open Access Journals (Sweden)

    Angélica Moises Arthur

    2017-12-01

    Full Text Available Abstract Introduction A new method for segmenting and quantifying the macular area based on morphological alternating sequential filtering (ASF) is proposed. Previous studies show that persons with diabetes present alterations in the foveal avascular zone (FAZ) prior to the appearance of retinopathy. Thus, a proper characterization of the FAZ using a method of automatic classification and prediction is a supportive and complementary tool for medical evaluation of the macular region, and may be useful for early treatment of eye diseases in persons without diabetic retinopathy. Methods We obtained high-resolution retinal images using a non-invasive functional imaging system called the Retinal Function Imager to generate a series of combined capillary perfusion maps. We sequentially filtered the macular images by ASF to reduce their complexity. We then segmented the FAZ using the watershed transform from an automatic selection of markers. Using Hu's moment invariants as a descriptor, we can automatically classify and categorize each FAZ. Results The FAZ differences between non-diabetic volunteers and diabetic subjects were automatically distinguished by the proposed system with an accuracy of 81%. Conclusion This is an innovative method to classify the FAZ using a fully automatic algorithm for segmentation (based on morphological operators) and classification (based on a descriptor formed by Hu's moments), despite the presence of edema or other structures. This is an alternative tool for eye exams, which may contribute to the analysis and evaluation of FAZ morphology, promoting the prevention of macular impairment in diabetics without retinopathy.
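    The descriptor step rests on Hu's moment invariants, which are built from normalized central moments and are unchanged under translation of the shape. A minimal sketch of the first invariant for a set of foreground pixel coordinates (this is an illustration of the general construction, not the paper's seven-invariant pipeline):

```python
def central_moment(points, p, q):
    """Central moment mu_pq of a set of (x, y) foreground pixel coordinates."""
    n = len(points)
    xbar = sum(x for x, _ in points) / n
    ybar = sum(y for _, y in points) / n
    return sum((x - xbar) ** p * (y - ybar) ** q for x, y in points)

def hu_first(points):
    """First Hu invariant phi1 = eta20 + eta02 (translation invariant)."""
    mu00 = central_moment(points, 0, 0)  # equals the pixel count
    def eta(p, q):
        # Normalized central moment: mu_pq / mu00^(1 + (p + q) / 2)
        return central_moment(points, p, q) / mu00 ** (1 + (p + q) / 2)
    return eta(2, 0) + eta(0, 2)
```

    Because the moments are taken about the shape's centroid, translating every pixel by the same offset leaves the descriptor unchanged, which is what makes it usable for classifying a segmented FAZ regardless of where it sits in the image.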

  15. A heads-up no-limit Texas Hold'em poker player: Discretized betting models and automatically generated equilibrium-finding programs

    DEFF Research Database (Denmark)

    Gilpin, Andrew G.; Sandholm, Tuomas; Sørensen, Troels Bjerre

    2008-01-01

    choices in the game. Second, we employ potential-aware automated abstraction algorithms for identifying strategically similar situations in order to decrease the size of the game tree. Third, we develop a new technique for automatically generating the source code of an equilibrium-finding algorithm from...... an XML-based description of a game. This automatically generated program is more efficient than what would be possible with a general-purpose equilibrium-finding program. Finally, we present results from the AAAI-07 Computer Poker Competition, in which Tartanian placed second out of ten entries....

  16. Automatic paper sliceform design from 3D solid models.

    Science.gov (United States)

    Le-Nguyen, Tuong-Vu; Low, Kok-Lim; Ruiz, Conrado; Le, Sang N

    2013-11-01

    A paper sliceform or lattice-style pop-up is a form of papercraft that uses two sets of parallel paper patches slotted together to make a foldable structure. The structure can be folded flat, as well as fully opened (popped-up) to make the two sets of patches orthogonal to each other. Automatic design of paper sliceforms is still not supported by existing computational models and remains a challenge. We propose novel geometric formulations of valid paper sliceform designs that consider the stability, flat-foldability and physical realizability of the designs. Based on a set of sufficient construction conditions, we also present an automatic algorithm for generating valid sliceform designs that closely depict the given 3D solid models. By approximating the input models using a set of generalized cylinders, our method significantly reduces the search space for stable and flat-foldable sliceforms. To ensure the physical realizability of the designs, the algorithm automatically generates slots or slits on the patches such that no two cycles embedded in two different patches interlock each other. This guarantees local pairwise assemblability between patches, which is empirically shown to lead to global assemblability. Our method has been demonstrated on a number of example models, and the output designs have been successfully made into real paper sliceforms.

  17. Investigation of an automatic trim algorithm for restructurable aircraft control

    Science.gov (United States)

    Weiss, J.; Eterno, J.; Grunberg, D.; Looze, D.; Ostroff, A.

    1986-01-01

    This paper develops and solves an automatic trim problem for restructurable aircraft control. The trim solution is applied as a feed-forward control to reject measurable disturbances following control element failures. Disturbance rejection and command following performances are recovered through the automatic feedback control redesign procedure described by Looze et al. (1985). For this project the existence of a failure detection mechanism is assumed, and methods to cope with potential detection and identification inaccuracies are addressed.

  18. Automatic Tamil lyric generation based on ontological interpretation ...

    Indian Academy of Sciences (India)

    This system proposes an n-gram based approach to automatic Tamil lyric generation, by the ontological semantic interpretation of the input scene. The approach is based on identifying the semantics conveyed in the scenario, thereby making the system understand the situation and generate lyrics accordingly. The heart of ...

  19. A Type-2 Block-Component-Decomposition Based 2D AOA Estimation Algorithm for an Electromagnetic Vector Sensor Array

    Directory of Open Access Journals (Sweden)

    Yu-Fei Gao

    2017-04-01

    Full Text Available This paper investigates a two-dimensional angle of arrival (2D AOA) estimation algorithm for the electromagnetic vector sensor (EMVS) array based on Type-2 block component decomposition (BCD) tensor modeling. Such a tensor decomposition method can take full advantage of the multidimensional structural information of electromagnetic signals to accomplish blind estimation of array parameters with higher resolution. However, existing tensor decomposition methods encounter many restrictions in applications to the EMVS array, such as the strict requirement for uniqueness conditions of decomposition, the inability to handle partially-polarized signals, etc. To solve these problems, this paper investigates tensor modeling for partially-polarized signals of an L-shaped EMVS array. The 2D AOA estimation algorithm based on rank-(L1, L2, ·) BCD is developed, and the uniqueness condition of the decomposition is analyzed. By means of the estimated steering matrix, the proposed algorithm can automatically achieve angle pair-matching. Numerical experiments demonstrate that the present algorithm has the advantages of both accuracy and robustness in parameter estimation. Even under conditions of lower SNR, small angular separation and limited snapshots, the proposed algorithm still outperforms subspace methods and the canonical polyadic decomposition (CPD) method.

  20. Anomaly Detection for Aviation Safety Based on an Improved KPCA Algorithm

    Directory of Open Access Journals (Sweden)

    Xiaoyu Zhang

    2017-01-01

    Full Text Available Thousands of flight datasets must be analyzed per day for a moderately sized fleet; flight datasets are therefore very large. In this paper, an improved kernel principal component analysis (KPCA) method is proposed to search for signatures of anomalies in flight datasets through the squared prediction error statistic, in which the number of principal components and the confidence level for the confidence limit are automatically determined by an OpenMP-based K-fold cross-validation algorithm, and the parameter in the radial basis function (RBF) is optimized by a GPU-based kernel learning method. Performed on an Nvidia GeForce GTX 660, the computation of the proposed GPU-based RBF parameter is 112.9 times (average 82.6 times) faster than sequential CPU task execution. The OpenMP-based K-fold cross-validation process for training the KPCA anomaly detection model becomes 2.4 times (average 1.5 times) faster than sequential CPU task execution. Experiments show that the proposed approach can effectively detect anomalies with an accuracy of 93.57% and a false positive alarm rate of 1.11%.
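    The squared prediction error (SPE) idea — score a sample by its reconstruction residual off the principal-component subspace and flag it when the residual exceeds a control limit — can be sketched with ordinary linear PCA. This is a simplification of the paper's kernel PCA approach, and the percentile-based limit below stands in for the cross-validated confidence limit; the training data in the example are synthetic:

```python
import numpy as np

def spe_detector(train, n_components=1, q=99.0):
    """Fit PCA on the rows of train; return (spe_fn, limit).

    spe_fn(x) is the squared reconstruction residual of x off the retained
    principal subspace; limit is the q-th percentile of training SPEs.
    """
    mean = train.mean(axis=0)
    X = train - mean
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    P = vt[:n_components].T                      # principal directions (d x k)
    def spe(x):
        r = (x - mean) - P @ (P.T @ (x - mean))  # residual off the subspace
        return float(r @ r)
    limit = np.percentile([spe(row) for row in train], q)
    return spe, limit
```

    A sample whose SPE exceeds the limit does not fit the correlation structure learned from normal flights, which is exactly the anomaly signature the paper searches for (there via a kernel feature space rather than the linear one used here).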