WorldWideScience

Sample records for supervised automated algorithm

  1. Automated Quality Assessment of Structural Magnetic Resonance Brain Images Based on a Supervised Machine Learning Algorithm

    Directory of Open Access Journals (Sweden)

    Ricardo Andres Pizarro

    2016-12-01

Full Text Available High-resolution three-dimensional magnetic resonance imaging (3D-MRI) is being increasingly used to delineate morphological changes underlying neuropsychiatric disorders. Unfortunately, artifacts frequently compromise the utility of 3D-MRI, yielding irreproducible results from both type I and type II errors. It is therefore critical to screen 3D-MRIs for artifacts before use. Currently, quality assessment involves slice-wise visual inspection of 3D-MRI volumes, a procedure that is both subjective and time consuming. Automating the quality rating of 3D-MRI could improve the efficiency and reproducibility of the procedure. The present study is one of the first efforts to apply a support vector machine (SVM) algorithm to the quality assessment of structural brain images, using global and region-of-interest (ROI) automated image quality features developed in-house. SVM is a supervised machine-learning algorithm that can predict the category of test datasets based on the knowledge acquired from a learning dataset. The performance (accuracy) of the automated SVM approach was assessed by comparing the SVM-predicted quality labels to investigator-determined quality labels. The accuracy for classifying 1457 3D-MRI volumes from our database using the SVM approach is around 80%. These results are promising and illustrate the possibility of using SVM as an automated quality assessment tool for 3D-MRI.

  2. Automated Quality Assessment of Structural Magnetic Resonance Brain Images Based on a Supervised Machine Learning Algorithm.

    Science.gov (United States)

    Pizarro, Ricardo A; Cheng, Xi; Barnett, Alan; Lemaitre, Herve; Verchinski, Beth A; Goldman, Aaron L; Xiao, Ena; Luo, Qian; Berman, Karen F; Callicott, Joseph H; Weinberger, Daniel R; Mattay, Venkata S

    2016-01-01

High-resolution three-dimensional magnetic resonance imaging (3D-MRI) is being increasingly used to delineate morphological changes underlying neuropsychiatric disorders. Unfortunately, artifacts frequently compromise the utility of 3D-MRI, yielding irreproducible results from both type I and type II errors. It is therefore critical to screen 3D-MRIs for artifacts before use. Currently, quality assessment involves slice-wise visual inspection of 3D-MRI volumes, a procedure that is both subjective and time consuming. Automating the quality rating of 3D-MRI could improve the efficiency and reproducibility of the procedure. The present study is one of the first efforts to apply a support vector machine (SVM) algorithm to the quality assessment of structural brain images, using global and region-of-interest (ROI) automated image quality features developed in-house. SVM is a supervised machine-learning algorithm that can predict the category of test datasets based on the knowledge acquired from a learning dataset. The performance (accuracy) of the automated SVM approach was assessed by comparing the SVM-predicted quality labels to investigator-determined quality labels. The accuracy for classifying 1457 3D-MRI volumes from our database using the SVM approach is around 80%. These results are promising and illustrate the possibility of using SVM as an automated quality assessment tool for 3D-MRI.
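
The record does not include the feature definitions or SVM settings, so the following is only a minimal sketch of the general approach it describes: an SVM trained on per-volume quality features against rater labels, with synthetic stand-ins for both the features and the labels.

```python
# Minimal sketch of SVM-based MRI quality classification (illustrative only;
# extraction of quality features from the 3D-MRI volumes is assumed done).
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(1457, 12))    # hypothetical global + ROI quality features
y = rng.integers(0, 2, size=1457)  # investigator labels (0 = fail, 1 = pass)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(clf, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")
```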

  3. Results of Evolution Supervised by Genetic Algorithms

    Directory of Open Access Journals (Sweden)

    Lorentz JÄNTSCHI

    2010-09-01

Full Text Available The efficiency of a genetic algorithm is frequently assessed using a series of operators of evolution like crossover operators, mutation operators or other dynamic parameters. The present paper aimed to review the main results of evolution supervised by genetic algorithms used to identify solutions to hard agricultural and horticultural problems, and to discuss the results of using genetic algorithms on structure-activity relationships in terms of the behavior of evolution supervised by genetic algorithms. A genetic algorithm had been developed and implemented in order to identify the optimal solution, in terms of estimation power, of a multiple linear regression approach for structure-activity relationships. Three survival and three selection strategies (proportional, deterministic and tournament) were investigated in order to identify the best survival-selection strategy able to lead to the model with the highest estimation power. The Molecular Descriptors Family for structure characterization of a sample of 206 polychlorinated biphenyls with measured octanol-water partition coefficients was used as a case study. Evolution using different selection and survival strategies proved to create populations of genotypes living in the evolution space with different diversity and variability. Under a series of comparison criteria, these populations proved to be grouped, and the groups were shown to be statistically different from one another. Conclusions about genetic algorithm evolution according to a number of criteria were also highlighted.

  4. A supervised contextual classifier based on a region-growth algorithm

    DEFF Research Database (Denmark)

    Lira, Jorge; Maletti, Gabriela Mariel

    2002-01-01

    A supervised classification scheme to segment optical multi-spectral images has been developed. In this classifier, an automated region-growth algorithm delineates the training sets. This algorithm handles three parameters: an initial pixel seed, a window size and a threshold for each class. A su...

  5. QUEST : Eliminating online supervised learning for efficient classification algorithms

    NARCIS (Netherlands)

    Zwartjes, Ardjan; Havinga, Paul J.M.; Smit, Gerard J.M.; Hurink, Johann L.

    2016-01-01

    In this work, we introduce QUEST (QUantile Estimation after Supervised Training), an adaptive classification algorithm for Wireless Sensor Networks (WSNs) that eliminates the necessity for online supervised learning. Online processing is important for many sensor network applications. Transmitting

  6. [Algorithm for the automated processing of rheosignals].

    Science.gov (United States)

    Odinets, G S

    1988-01-01

An algorithm for rheosignal recognition on a microprocessor device with a display apparatus and with automated and manual cursor control was examined. The algorithm makes it possible to automate rheosignal registration and processing while taking their variability into account.

  7. Assessment of various supervised learning algorithms using different performance metrics

    Science.gov (United States)

    Susheel Kumar, S. M.; Laxkar, Deepak; Adhikari, Sourav; Vijayarajan, V.

    2017-11-01

Our work presents a comparison of the performance of supervised machine learning algorithms on a binary classification task. The supervised machine learning algorithms taken into consideration in the following work are Support Vector Machine (SVM), Decision Tree (DT), K Nearest Neighbour (KNN), Naïve Bayes (NB) and Random Forest (RF). This paper focuses on comparing the performance of the above-mentioned algorithms on one binary classification task by analysing metrics such as accuracy, F-measure, G-measure, precision, misclassification rate, false positive rate, true positive rate, specificity, and prevalence.
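
All of the metrics listed can be derived from a single binary confusion matrix. A small sketch with illustrative counts; the G-measure line uses one common definition (geometric mean of precision and recall), which the abstract does not spell out:

```python
# tp, fp, fn, tn are confusion-matrix counts (illustrative values only).
tp, fp, fn, tn = 80, 10, 20, 90

accuracy    = (tp + tn) / (tp + fp + fn + tn)
precision   = tp / (tp + fp)
tpr         = tp / (tp + fn)            # true positive rate (recall)
specificity = tn / (tn + fp)
fpr         = fp / (fp + tn)            # false positive rate
prevalence  = (tp + fn) / (tp + fp + fn + tn)
misclass    = 1 - accuracy              # misclassification rate
f_measure   = 2 * precision * tpr / (precision + tpr)
g_measure   = (precision * tpr) ** 0.5  # one common G-measure definition

print(f"acc={accuracy:.3f} F={f_measure:.3f} G={g_measure:.3f} FPR={fpr:.3f}")
```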

  8. Automated training for algorithms that learn from genomic data.

    Science.gov (United States)

    Cilingir, Gokcen; Broschat, Shira L

    2015-01-01

    Supervised machine learning algorithms are used by life scientists for a variety of objectives. Expert-curated public gene and protein databases are major resources for gathering data to train these algorithms. While these data resources are continuously updated, generally, these updates are not incorporated into published machine learning algorithms which thereby can become outdated soon after their introduction. In this paper, we propose a new model of operation for supervised machine learning algorithms that learn from genomic data. By defining these algorithms in a pipeline in which the training data gathering procedure and the learning process are automated, one can create a system that generates a classifier or predictor using information available from public resources. The proposed model is explained using three case studies on SignalP, MemLoci, and ApicoAP in which existing machine learning models are utilized in pipelines. Given that the vast majority of the procedures described for gathering training data can easily be automated, it is possible to transform valuable machine learning algorithms into self-evolving learners that benefit from the ever-changing data available for gene products and to develop new machine learning algorithms that are similarly capable.
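
A hedged sketch of the proposed mode of operation: re-gather current training data from a public resource, retrain, and report a cross-validated score. `fetch_labeled_records()` is a hypothetical stand-in for a database-specific query step; here it returns synthetic data so the sketch runs end to end.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def fetch_labeled_records():
    # In the proposed model this would query an expert-curated resource
    # (e.g. a public protein database) for the latest labeled examples;
    # synthetic data stands in here.
    rng = np.random.default_rng(1)
    return rng.normal(size=(200, 8)), rng.integers(0, 2, size=200)

def retrain():
    X, y = fetch_labeled_records()
    model = LogisticRegression(max_iter=1000).fit(X, y)
    score = cross_val_score(model, X, y, cv=5).mean()
    return model, score  # caller can version and publish the refreshed model

model, score = retrain()
print(f"refreshed classifier, cv accuracy: {score:.2f}")
```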

  9. QUEST: Eliminating Online Supervised Learning for Efficient Classification Algorithms

    Directory of Open Access Journals (Sweden)

    Ardjan Zwartjes

    2016-10-01

Full Text Available In this work, we introduce QUEST (QUantile Estimation after Supervised Training), an adaptive classification algorithm for Wireless Sensor Networks (WSNs) that eliminates the necessity for online supervised learning. Online processing is important for many sensor network applications. Transmitting raw sensor data puts high demands on the battery, reducing network lifetime. By merely transmitting partial results or classifications based on the sampled data, the amount of traffic on the network can be significantly reduced. Such classifications can be made by learning-based algorithms using sampled data. An important issue, however, is the training phase of these learning-based algorithms. Training a deployed sensor network requires a lot of communication and an impractical amount of human involvement. QUEST is a hybrid algorithm that combines supervised learning in a controlled environment with unsupervised learning at the location of deployment. Using the SITEX02 dataset, we demonstrate that the presented solution works with a performance penalty of less than 10% in 90% of the tests. Under some circumstances, it even outperforms a network of classifiers completely trained with supervised learning. As a result, the need for on-site supervised learning and communication for training is completely eliminated by our solution.

  10. QUEST: Eliminating Online Supervised Learning for Efficient Classification Algorithms.

    Science.gov (United States)

    Zwartjes, Ardjan; Havinga, Paul J M; Smit, Gerard J M; Hurink, Johann L

    2016-10-01

In this work, we introduce QUEST (QUantile Estimation after Supervised Training), an adaptive classification algorithm for Wireless Sensor Networks (WSNs) that eliminates the necessity for online supervised learning. Online processing is important for many sensor network applications. Transmitting raw sensor data puts high demands on the battery, reducing network lifetime. By merely transmitting partial results or classifications based on the sampled data, the amount of traffic on the network can be significantly reduced. Such classifications can be made by learning-based algorithms using sampled data. An important issue, however, is the training phase of these learning-based algorithms. Training a deployed sensor network requires a lot of communication and an impractical amount of human involvement. QUEST is a hybrid algorithm that combines supervised learning in a controlled environment with unsupervised learning at the location of deployment. Using the SITEX02 dataset, we demonstrate that the presented solution works with a performance penalty of less than 10% in 90% of the tests. Under some circumstances, it even outperforms a network of classifiers completely trained with supervised learning. As a result, the need for on-site supervised learning and communication for training is completely eliminated by our solution.
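
The abstract describes QUEST only at a high level, so the sketch below is loosely inspired by it rather than a reproduction of the published algorithm: a decision threshold learned with labels in a controlled environment is re-expressed as a quantile of the feature distribution, and that quantile is re-estimated without labels on the deployed node's own samples.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- offline, supervised phase (controlled environment) ---
train_x = np.concatenate([rng.normal(0, 1, 500), rng.normal(3, 1, 500)])
train_y = np.concatenate([np.zeros(500), np.ones(500)])
cands = np.linspace(train_x.min(), train_x.max(), 200)
acc = [((train_x > c).astype(int) == train_y).mean() for c in cands]
threshold = cands[int(np.argmax(acc))]  # supervised decision threshold ...
q = np.mean(train_x <= threshold)       # ... re-expressed as a quantile

# --- on-site, unsupervised phase (sensor responses shifted after deployment) ---
field_x = np.concatenate([rng.normal(0.5, 1, 500), rng.normal(3.5, 1, 500)])
adapted_threshold = np.quantile(field_x, q)  # re-estimated without any labels
predictions = (field_x > adapted_threshold).astype(int)
print(f"q={q:.2f}, adapted threshold={adapted_threshold:.2f}")
```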

  11. Robust Semi-Supervised Manifold Learning Algorithm for Classification

    Directory of Open Access Journals (Sweden)

    Mingxia Chen

    2018-01-01

Full Text Available In recent years, manifold learning methods have been widely used in data classification to tackle the curse-of-dimensionality problem, since they can discover the potential intrinsic low-dimensional structures of high-dimensional data. Given partially labeled data, semi-supervised manifold learning algorithms have been proposed to predict the labels of the unlabeled points, taking label information into account. However, these semi-supervised manifold learning algorithms are not robust against noisy points, especially when the labeled data contain noise. In this paper, we propose a framework for robust semi-supervised manifold learning (RSSML) to address this problem. The noise levels of the labeled points are first predicted, and a regularization term is then constructed to reduce the impact of labeled points containing noise. A new robust semi-supervised optimization model is proposed by adding the regularization term to the traditional semi-supervised optimization model. Numerical experiments are given to show the improvement and efficiency of RSSML on noisy data sets.
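
The abstract does not state RSSML's objective explicitly, so the following display is only a schematic reading of it, assuming a standard graph-Laplacian smoothness term: the noise-aware regularization can be pictured as a weighted label-fitting penalty,

```latex
\min_{F}\; \operatorname{tr}\!\left(F^{\top} L F\right)
  \;+\; \mu \sum_{i \in \mathcal{L}} s_i \,\lVert f_i - y_i \rVert^{2},
```

where L is the graph Laplacian over all points, the set indexed by the labeled points, and the weight s_i shrinks toward zero for labeled points predicted to be noisy. The exact weighting and optimization used in the paper may differ.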

  12. A Supervised Classification Algorithm for Note Onset Detection

    Directory of Open Access Journals (Sweden)

    Douglas Eck

    2007-01-01

Full Text Available This paper presents a novel approach to detecting onsets in music audio files. We use a supervised learning algorithm to classify spectrogram frames extracted from digital audio as being onsets or non-onsets. Frames classified as onsets are then treated with a simple peak-picking algorithm based on a moving average. We present two versions of this approach. The first version uses a single neural network classifier. The second version combines the predictions of several networks trained using different hyperparameters. We describe the details of the algorithm and summarize the performance of both variants on several datasets. We also examine our choice of hyperparameters by describing results of cross-validation experiments done on a custom dataset. We conclude that a supervised learning approach to note onset detection performs well and warrants further investigation.
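
The peak-picking stage lends itself to a short sketch: the classifier's per-frame onset probabilities are compared against their own moving average, and local maxima above that adaptive threshold are reported as onsets. The window length and bias are hypothetical parameters, not values from the paper.

```python
import numpy as np

def pick_onsets(p, window=10, bias=0.05):
    """p: per-frame onset probabilities from the classifier, values in [0, 1]."""
    moving_avg = np.convolve(p, np.ones(window) / window, mode="same")
    onsets = []
    for t in range(1, len(p) - 1):
        # local maximum that clears the moving average by a small bias
        if p[t] > moving_avg[t] + bias and p[t] >= p[t - 1] and p[t] > p[t + 1]:
            onsets.append(t)
    return onsets

rng = np.random.default_rng(0)
probs = np.abs(np.sin(np.linspace(0, 12, 300))) * rng.uniform(0.5, 1.0, 300)
print(pick_onsets(probs)[:10])  # frame indices of detected onsets
```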

  13. Supervised learning for the automated transcription of spacer classification from spoligotype films

    Directory of Open Access Journals (Sweden)

    Abernethy Neil

    2009-08-01

Full Text Available Background: Molecular genotyping of bacteria has revolutionized the study of tuberculosis epidemiology, yet these established laboratory techniques typically require subjective and laborious interpretation by trained professionals. In the context of a Tuberculosis Case Contact study in The Gambia we used a reverse hybridization laboratory assay called spoligotype analysis. To facilitate processing of spoligotype images we have developed tools and algorithms to automate the classification and transcription of these data directly to a database while allowing for manual editing. Results: Features extracted from each of the 1849 spots on a spoligo film were classified using two supervised learning algorithms. A graphical user interface allows manual editing of the classification, before export to a database. The application was tested on ten films of differing quality, and the results of the best classifier were compared to expert manual classification, giving a median correct classification rate of 98.1% (inter-quartile range: 97.1% to 99.2%), with an automated processing time of less than 1 minute per film. Conclusion: The software implementation offers considerable time savings over manual processing whilst allowing expert editing of the automated classification. The automatic upload of the classification to a database reduces the chances of transcription errors.

  14. A numeric comparison of variable selection algorithms for supervised learning

    International Nuclear Information System (INIS)

    Palombo, G.; Narsky, I.

    2009-01-01

Datasets in modern High Energy Physics (HEP) experiments are often described by dozens or even hundreds of input variables. Reducing a full variable set to a subset that most completely represents information about the data is therefore an important task in the analysis of HEP data. We compare various variable selection algorithms for supervised learning using several datasets, such as the imaging gamma-ray Cherenkov telescope (MAGIC) data found in the UCI repository. We use classifiers and variable selection methods implemented in the statistical package StatPatternRecognition (SPR), a free open-source C++ package developed in the HEP community (http://sourceforge.net/projects/statpatrec/). For each dataset, we select a powerful classifier and estimate its learning accuracy on variable subsets obtained by various selection algorithms. When possible, we also estimate the CPU time needed for the variable subset selection. The results of this analysis are compared with those published previously for these datasets using other statistical packages such as R and Weka. We show that the most accurate, yet slowest, method is a wrapper algorithm known as generalized sequential forward selection ('Add N Remove R') implemented in SPR.
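
A hedged sketch of the wrapper idea behind sequential forward selection (plain "Add 1" here, without the "Remove R" backtracking step of the SPR implementation), using scikit-learn in place of SPR:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

def forward_select(X, y, estimator, cv=3):
    """Greedily add the variable that most improves CV accuracy."""
    remaining, selected, best = list(range(X.shape[1])), [], -np.inf
    while remaining:
        scores = {j: cross_val_score(estimator, X[:, selected + [j]], y, cv=cv).mean()
                  for j in remaining}
        j_best = max(scores, key=scores.get)
        if scores[j_best] <= best:   # no candidate improves the score: stop
            break
        selected.append(j_best)
        remaining.remove(j_best)
        best = scores[j_best]
    return selected, best

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 10))
y = (X[:, 2] + X[:, 7] > 0).astype(int)  # only two variables actually matter
print(forward_select(X, y, DecisionTreeClassifier(max_depth=3)))
```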

  15. Objectness Supervised Merging Algorithm for Color Image Segmentation

    Directory of Open Access Journals (Sweden)

    Haifeng Sima

    2016-01-01

Full Text Available Ideal color image segmentation needs both low-level cues and high-level semantic features. This paper proposes a two-hierarchy segmentation model based on merging homogeneous superpixels. First, a region-growing strategy is designed for producing homogenous and compact superpixels in different partitions. Total variation smoothing features are adopted in the growing procedure for locating real boundaries. Before merging, we define a combined color-texture histogram feature for superpixel description, and we propose a novel objectness feature to supervise the region-merging procedure for reliable segmentation. Both color-texture histograms and objectness are computed to measure regional similarities between region pairs, and the mixed standard deviation of the union features is used to define the stopping criterion for the merging process. Experimental results on the popular benchmark dataset demonstrate the better segmentation performance of the proposed model compared to other well-known segmentation algorithms.

  16. Automated segmentation of geographic atrophy in fundus autofluorescence images using supervised pixel classification.

    Science.gov (United States)

    Hu, Zhihong; Medioni, Gerard G; Hernandez, Matthias; Sadda, Srinivas R

    2015-01-01

Geographic atrophy (GA) is a manifestation of the advanced or late stage of age-related macular degeneration (AMD). AMD is the leading cause of blindness in people over the age of 65 in the western world. The purpose of this study is to develop a fully automated supervised pixel classification approach for segmenting GA, including uni- and multifocal patches, in fundus autofluorescence (FAF) images. The image features include region-wise intensity measures, gray-level co-occurrence matrix measures, and Gaussian filter banks. A k-nearest-neighbor pixel classifier is applied to obtain a GA probability map, representing the likelihood that each image pixel belongs to GA. Sixteen randomly chosen FAF images were obtained from 16 subjects with GA. The algorithm-defined GA regions are compared with manual delineation performed by a certified image reading center grader. Eight-fold cross-validation is applied to evaluate the algorithm performance. The mean overlap ratio (OR), area correlation (Pearson's r), accuracy (ACC), true positive rate (TPR), specificity (SPC), positive predictive value (PPV), and false discovery rate (FDR) between the algorithm- and manually defined GA regions are [Formula: see text], [Formula: see text], [Formula: see text], [Formula: see text], [Formula: see text], [Formula: see text], and [Formula: see text], respectively.
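
A minimal sketch of the k-nearest-neighbor pixel classification step, with random stand-ins for the intensity, co-occurrence, and filter-bank features the paper extracts:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
train_feats = rng.normal(size=(5000, 6))      # per-pixel features (stand-ins)
train_labels = rng.integers(0, 2, size=5000)  # 1 = GA pixel per manual grading

knn = KNeighborsClassifier(n_neighbors=15).fit(train_feats, train_labels)

h, w = 64, 64
image_feats = rng.normal(size=(h * w, 6))     # features for every image pixel
prob_map = knn.predict_proba(image_feats)[:, 1].reshape(h, w)  # GA probability
ga_mask = prob_map > 0.5                      # threshold into a GA segmentation
```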

  17. Experiments on Supervised Learning Algorithms for Text Categorization

    Science.gov (United States)

    Namburu, Setu Madhavi; Tu, Haiying; Luo, Jianhui; Pattipati, Krishna R.

    2005-01-01

Modern information society is facing the challenge of handling massive volumes of online documents, news, intelligence reports, and so on. How to use the information accurately and in a timely manner becomes a major concern in many areas. While the general information may also include images and voice, we focus on the categorization of text data in this paper. We provide a brief overview of the information processing flow for text categorization, and discuss two supervised learning algorithms, viz., support vector machines (SVM) and partial least squares (PLS), which have been successfully applied in other domains, e.g., fault diagnosis [9]. While SVM has been well explored for binary classification and was reported as an efficient algorithm for text categorization, PLS has not yet been applied to text categorization. Our experiments are conducted on three datasets: the Reuters-21578 dataset about corporate mergers and acquisitions (ACQ), WebKB, and the 20-Newsgroups. Results show that the performance of PLS is comparable to SVM in text categorization. A major drawback of SVM for multi-class categorization is that it requires a voting scheme based on the results of pair-wise classification. PLS does not have this drawback and could be a better candidate for multi-class text categorization.

  18. Using an Agent-oriented Framework for Supervision, Diagnosis and Prognosis Applications in Advanced Automation Environments

    DEFF Research Database (Denmark)

    Thunem, Harald P-J; Thunem, Atoosa P-J; Lind, Morten

    2011-01-01

This paper demonstrates how a generic agent-oriented framework can be used in advanced automation environments, for systems analysis in general and supervision, diagnosis and prognosis purposes in particular. The framework’s background and main application areas are briefly described. Next, … agent-oriented supervision, diagnosis and prognosis purposes are equally explained. Finally, the paper sums up by also addressing plans for further enhancement and, in that respect, integration with other tailor-made tools for joint treatment of various modeling and analysis activities upon advanced automation environments…

  19. An evaluation of unsupervised and supervised learning algorithms for clustering landscape types in the United States

    Science.gov (United States)

    Wendel, Jochen; Buttenfield, Barbara P.; Stanislawski, Larry V.

    2016-01-01

Knowledge of landscape type can inform cartographic generalization of hydrographic features, because landscape characteristics provide an important geographic context that affects variation in channel geometry, flow pattern, and network configuration. Landscape types are characterized by expansive spatial gradients, lacking abrupt changes between adjacent classes, and by having a limited number of outliers that might confound classification. The US Geological Survey (USGS) is exploring methods to automate generalization of features in the National Hydrography Dataset (NHD), to associate specific sequences of processing operations and parameters with specific landscape characteristics, thus obviating manual selection of a unique processing strategy for every NHD watershed unit. A chronology of methods to delineate physiographic regions for the United States is described, including a recent maximum likelihood classification based on seven input variables. This research compares unsupervised and supervised algorithms applied to these seven input variables, to evaluate and possibly refine the recent classification. Evaluation metrics for unsupervised methods include the Davies–Bouldin index, the Silhouette index, and the Dunn index, as well as quantization and topographic error metrics. Cross-validation and misclassification rate analysis are used to evaluate supervised classification methods. The paper reports the comparative analysis and its impact on the selection of landscape regions. The compared solutions show problems in areas of high landscape diversity. There is some indication that additional input variables, additional classes, or more sophisticated methods can refine the existing classification.
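
Two of the internal evaluation metrics named above are available directly in scikit-learn, which makes the unsupervised evaluation loop easy to sketch (the Dunn index is not included there; synthetic data stands in for the seven input variables):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import davies_bouldin_score, silhouette_score

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 7))  # stand-in for the seven landscape variables

for k in (4, 6, 8):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    print(f"k={k}: silhouette={silhouette_score(X, labels):.3f}, "
          f"davies-bouldin={davies_bouldin_score(X, labels):.3f}")
```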

  20. Adaptive Algorithms for Automated Processing of Document Images

    Science.gov (United States)

    2011-01-01

Title of dissertation: Adaptive Algorithms for Automated Processing of Document Images. Mudit Agrawal, Doctor of Philosophy, 2011. Dissertation submitted to the Faculty of the Graduate School of the University …

  1. A Comparison of Supervised Machine Learning Algorithms and Feature Vectors for MS Lesion Segmentation Using Multimodal Structural MRI

    Science.gov (United States)

    Sweeney, Elizabeth M.; Vogelstein, Joshua T.; Cuzzocreo, Jennifer L.; Calabresi, Peter A.; Reich, Daniel S.; Crainiceanu, Ciprian M.; Shinohara, Russell T.

    2014-01-01

Machine learning is a popular method for mining and analyzing large collections of medical data. We focus on a particular problem from medical research, supervised multiple sclerosis (MS) lesion segmentation in structural magnetic resonance imaging (MRI). We examine the extent to which the choice of machine learning or classification algorithm and feature extraction function impacts the performance of lesion segmentation methods. As quantitative measures derived from structural MRI are important clinical tools for research into the pathophysiology and natural history of MS, the development of automated lesion segmentation methods is an active research field. Yet, little is known about what drives the performance of these methods. We evaluate the performance of automated MS lesion segmentation methods, which consist of a supervised classification algorithm composed with a feature extraction function. These feature extraction functions act on the observed T1-weighted (T1-w), T2-weighted (T2-w) and fluid-attenuated inversion recovery (FLAIR) MRI voxel intensities. Each MRI study has a manual lesion segmentation that we use to train and validate the supervised classification algorithms. Our main finding is that the differences in predictive performance are due more to differences in the feature vectors than to the machine learning or classification algorithms. Features that incorporate information from neighboring voxels in the brain were found to increase performance substantially. For lesion segmentation, we conclude that it is better to use simple, interpretable, and fast algorithms, such as logistic regression, linear discriminant analysis, and quadratic discriminant analysis, and to develop the features to improve performance. PMID:24781953
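
A minimal sketch of the paper's main point, assuming nothing about the authors' exact feature set: a neighborhood-pooled feature (here a simple local mean) combined with raw voxel intensity and fed to a fast, interpretable classifier such as logistic regression.

```python
import numpy as np
from scipy.ndimage import uniform_filter
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
flair = rng.normal(size=(32, 32, 32))           # stand-in for a FLAIR volume
lesion = rng.integers(0, 2, size=(32, 32, 32))  # stand-in manual segmentation

# voxel intensity plus a 3x3x3 neighborhood mean as a second feature
neigh_mean = uniform_filter(flair, size=3)
X = np.stack([flair.ravel(), neigh_mean.ravel()], axis=1)
y = lesion.ravel()

model = LogisticRegression(max_iter=1000).fit(X, y)
prob = model.predict_proba(X)[:, 1].reshape(flair.shape)  # lesion probability map
```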

  2. Classification and Diagnostic Output Prediction of Cancer Using Gene Expression Profiling and Supervised Machine Learning Algorithms

    DEFF Research Database (Denmark)

    Yoo, C.; Gernaey, Krist

    2008-01-01

… variable importance in the projection (VIP) information of the DPLS method. The power of the gene selection method and the proposed supervised hierarchical clustering method is illustrated on three microarray data sets of leukemia, breast, and colon cancer. Supervised machine learning algorithms thus enable…

  3. THE QUASIPERIODIC AUTOMATED TRANSIT SEARCH ALGORITHM

    International Nuclear Information System (INIS)

    Carter, Joshua A.; Agol, Eric

    2013-01-01

We present a new algorithm for detecting transiting extrasolar planets in time-series photometry. The Quasiperiodic Automated Transit Search (QATS) algorithm relaxes the usual assumption of strictly periodic transits by permitting a variable, but bounded, interval between successive transits. We show that this method is capable of detecting transiting planets with significant transit timing variations without any loss of significance ('smearing'), as would be incurred with traditional algorithms; however, this comes at the cost of a slightly increased stochastic background. The approximate times of transit are standard products of the QATS search. Despite the increased flexibility, we show that QATS has a run-time complexity that is comparable to traditional search codes and is comparably easy to implement. QATS is applicable to data having a nearly uninterrupted, uniform cadence and is therefore well suited to the modern class of space-based transit searches (e.g., Kepler, CoRoT). Applications of QATS include transiting planets in dynamically active multi-planet systems and transiting planets in stellar binary systems.

  4. A new supervised learning algorithm for spiking neurons.

    Science.gov (United States)

    Xu, Yan; Zeng, Xiaoqin; Zhong, Shuiming

    2013-06-01

The purpose of supervised learning with temporal encoding for spiking neurons is to make the neurons emit a specific spike train encoded by the precise firing times of spikes. If only running time is considered, supervised learning for a spiking neuron is equivalent to distinguishing the times of desired output spikes from all other times during the running process of the neuron by adjusting synaptic weights, which can be regarded as a classification problem. Based on this idea, this letter proposes a new supervised learning method for spiking neurons with temporal encoding; it first transforms the supervised learning into a classification problem and then solves the problem using the perceptron learning rule. The experimental results show that the proposed method has higher learning accuracy and efficiency than existing learning methods, so it is more powerful for solving complex and real-time problems.

  5. ALFA: an automated line fitting algorithm

    Science.gov (United States)

    Wesson, R.

    2016-03-01

    I present the automated line fitting algorithm, ALFA, a new code which can fit emission line spectra of arbitrary wavelength coverage and resolution, fully automatically. In contrast to traditional emission line fitting methods which require the identification of spectral features suspected to be emission lines, ALFA instead uses a list of lines which are expected to be present to construct a synthetic spectrum. The parameters used to construct the synthetic spectrum are optimized by means of a genetic algorithm. Uncertainties are estimated using the noise structure of the residuals. An emission line spectrum containing several hundred lines can be fitted in a few seconds using a single processor of a typical contemporary desktop or laptop PC. I show that the results are in excellent agreement with those measured manually for a number of spectra. Where discrepancies exist, the manually measured fluxes are found to be less accurate than those returned by ALFA. Together with the code NEAT, ALFA provides a powerful way to rapidly extract physical information from observations, an increasingly vital function in the era of highly multiplexed spectroscopy. The two codes can deliver a reliable and comprehensive analysis of very large data sets in a few hours with little or no user interaction.

  6. A semi-supervised classification algorithm using the TAD-derived background as training data

    Science.gov (United States)

    Fan, Lei; Ambeau, Brittany; Messinger, David W.

    2013-05-01

In general, spectral image classification algorithms fall into one of two categories: supervised and unsupervised. In unsupervised approaches, the algorithm automatically identifies clusters in the data without a priori information about those clusters (except perhaps the expected number of them). Supervised approaches require an analyst to identify training data to learn the characteristics of the clusters such that they can then classify all other pixels into one of the pre-defined groups. The classification algorithm presented here is a semi-supervised approach based on the Topological Anomaly Detection (TAD) algorithm. The TAD algorithm defines background components based on a mutual k-Nearest Neighbor graph model of the data, along with a spectral connected components analysis. Here, the largest components produced by TAD are used as regions of interest (ROIs), or training data, for a supervised classification scheme. By combining those ROIs with a Gaussian Maximum Likelihood (GML) or a Minimum Distance to the Mean (MDM) algorithm, we are able to achieve a semi-supervised classification method. We test this classification algorithm against data collected by the HyMAP sensor over the Cooke City, MT area and the University of Pavia scene.
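
The Minimum Distance to the Mean stage is simple enough to sketch directly: class means are estimated from the TAD-derived training regions (synthetic pixels stand in for them here), and every scene pixel is assigned to the nearest mean in spectral space.

```python
import numpy as np

def mdm_classify(pixels, class_means):
    """pixels: (n, bands); class_means: (k, bands) -> class index per pixel."""
    d = np.linalg.norm(pixels[:, None, :] - class_means[None, :, :], axis=2)
    return np.argmin(d, axis=1)

rng = np.random.default_rng(0)
roi_a = rng.normal(0, 1, size=(100, 5))  # training pixels from one TAD component
roi_b = rng.normal(3, 1, size=(100, 5))  # training pixels from another component
means = np.stack([roi_a.mean(axis=0), roi_b.mean(axis=0)])

scene = rng.normal(1.5, 2, size=(1000, 5))  # all remaining scene pixels
labels = mdm_classify(scene, means)
```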

  7. Automated Spirometry Quality Assurance: Supervised Learning From Multiple Experts.

    Science.gov (United States)

    Velickovski, Filip; Ceccaroni, Luigi; Marti, Robert; Burgos, Felip; Gistau, Concepcion; Alsina-Restoy, Xavier; Roca, Josep

    2018-01-01

Forced spirometry testing is gradually becoming available across different healthcare tiers, including primary care. It has been demonstrated in earlier work that commercially available spirometers are not fully able to assure the quality of individual spirometry manoeuvres. Thus, a need to expand the availability of high-quality spirometry assessment beyond specialist pulmonary centres has arisen. In this paper, we propose a method to select and optimise a classifier using supervised learning techniques by learning from previously classified forced spirometry tests from a group of experts. Such a method is able to take into account the shape of the curve as an expert would during visual inspection. We evaluated the final classifier on a dataset put aside for evaluation, yielding an area under the receiver operating characteristic curve of 0.88 and specificities of 0.91 and 0.86 for sensitivities of 0.60 and 0.82. Furthermore, other specificities and sensitivities along the receiver operating characteristic curve were close to the level of the experts when compared against each other, and better than an earlier rules-based method assessed on the same dataset. We foresee key benefits in raising diagnostic quality, saving time, reducing cost, and also improving remote care and monitoring services for patients with chronic respiratory diseases in the future if a clinical decision support system with the encapsulated classifier is integrated into the workflow of forced spirometry testing.

  8. Optimization of an NLEO-based algorithm for automated detection of spontaneous activity transients in early preterm EEG

    International Nuclear Information System (INIS)

    Palmu, Kirsi; Vanhatalo, Sampsa; Stevenson, Nathan; Wikström, Sverre; Hellström-Westas, Lena; Palva, J Matias

    2010-01-01

    We propose here a simple algorithm for automated detection of spontaneous activity transients (SATs) in early preterm electroencephalography (EEG). The parameters of the algorithm were optimized by supervised learning using a gold standard created from visual classification data obtained from three human raters. The generalization performance of the algorithm was estimated by leave-one-out cross-validation. The mean sensitivity of the optimized algorithm was 97% (range 91–100%) and specificity 95% (76–100%). The optimized algorithm makes it possible to systematically study brain state fluctuations of preterm infants. (note)
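
The abstract does not spell out the detector, but the nonlinear energy operator (NLEO) itself is standard: psi[n] = x[n]^2 - x[n-1]*x[n+1]. A hedged sketch of an NLEO-plus-threshold detector follows, with window length and threshold as the kind of free parameters the authors optimized against expert markings:

```python
import numpy as np

def nleo(x):
    """Nonlinear energy operator: psi[n] = x[n]^2 - x[n-1]*x[n+1]."""
    psi = np.zeros_like(x)
    psi[1:-1] = x[1:-1] ** 2 - x[:-2] * x[2:]
    return psi

def detect_sats(eeg, fs, win_s=1.0, thresh=0.5):
    """Smooth the NLEO output and threshold it into a boolean SAT mask."""
    energy = np.abs(nleo(eeg))
    w = int(win_s * fs)
    smoothed = np.convolve(energy, np.ones(w) / w, mode="same")
    return smoothed > thresh * smoothed.mean()

fs = 256
eeg = np.random.default_rng(0).normal(0.0, 1.0, 30 * fs)  # stand-in EEG trace
mask = detect_sats(eeg, fs)  # True where a candidate SAT is detected
```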

  9. Automated labelling of cancer textures in colorectal histopathology slides using quasi-supervised learning.

    Science.gov (United States)

    Onder, Devrim; Sarioglu, Sulen; Karacali, Bilge

    2013-04-01

Quasi-supervised learning is a statistical learning algorithm that contrasts two datasets by computing an estimate of the posterior probability of each sample in either dataset. This method has not been applied to histopathological images before. The purpose of this study is to evaluate the performance of the method in identifying colorectal tissues with or without adenocarcinoma. Light microscopic digital images from histopathological sections were obtained from 30 colorectal radical surgery materials including adenocarcinoma and non-neoplastic regions. The texture features were extracted by using local histograms and co-occurrence matrices. The quasi-supervised learning algorithm operates on two datasets, one containing samples of normal tissues labelled only indirectly, and the other containing an unlabeled collection of samples of both normal and cancer tissues. As such, the algorithm eliminates the need for manually labelled samples of normal and cancer tissues for conventional supervised learning and significantly reduces the expert intervention. Several texture feature vector datasets corresponding to different extraction parameters were tested within the proposed framework. The Independent Component Analysis dimensionality reduction approach was also identified as the one improving the labelling performance evaluated in this series. In this series, the proposed method was applied to the dataset of 22,080 vectors with dimensionality reduced from 132 to 119. Regions containing cancer tissue could be identified accurately, having false and true positive rates up to 19% and 88% respectively, without using manually labelled ground-truth datasets, in a quasi-supervised strategy. The resulting labelling performances were compared to those of a conventional powerful supervised classifier using manually labelled ground-truth data. The supervised classifier results were calculated as 3.5% and 95% for the same case. The results in this series in comparison with the benchmark…

  10. Validation of automated supervised segmentation of multibeam backscatter data from the Chatham Rise, New Zealand

    Science.gov (United States)

    Hillman, Jess I. T.; Lamarche, Geoffroy; Pallentin, Arne; Pecher, Ingo A.; Gorman, Andrew R.; Schneider von Deimling, Jens

    2018-06-01

    Using automated supervised segmentation of multibeam backscatter data to delineate seafloor substrates is a relatively novel technique. Low-frequency multibeam echosounders (MBES), such as the 12-kHz EM120, present particular difficulties since the signal can penetrate several metres into the seafloor, depending on substrate type. We present a case study illustrating how a non-targeted dataset may be used to derive information from multibeam backscatter data regarding distribution of substrate types. The results allow us to assess limitations associated with low frequency MBES where sub-bottom layering is present, and test the accuracy of automated supervised segmentation performed using SonarScope® software. This is done through comparison of predicted and observed substrate from backscatter facies-derived classes and substrate data, reinforced using quantitative statistical analysis based on a confusion matrix. We use sediment samples, video transects and sub-bottom profiles acquired on the Chatham Rise, east of New Zealand. Inferences on the substrate types are made using the Generic Seafloor Acoustic Backscatter (GSAB) model, and the extents of the backscatter classes are delineated by automated supervised segmentation. Correlating substrate data to backscatter classes revealed that backscatter amplitude may correspond to lithologies up to 4 m below the seafloor. Our results emphasise several issues related to substrate characterisation using backscatter classification, primarily because the GSAB model does not only relate to grain size and roughness properties of substrate, but also accounts for other parameters that influence backscatter. Better understanding these limitations allows us to derive first-order interpretations of sediment properties from automated supervised segmentation.

  11. A supervised framework for lesion segmentation and automated VLSM analyses in left hemispheric stroke

    Directory of Open Access Journals (Sweden)

    Dorian Pustina

    2015-05-01

Full Text Available INTRODUCTION: Voxel-based lesion-symptom mapping (VLSM) is conventionally performed using skill and knowledge of experts to manually delineate brain lesions. This process requires time, and is likely to have substantial inter-rater variability. Here, we propose a supervised machine learning framework for lesion segmentation capable of learning from a single modality and existing manual segmentations in order to delineate lesions in new patients. METHODS: Data from 60 patients with chronic stroke aphasia were utilized in the study (age: 59.7±11.5 yrs, post-stroke interval: 5±2.9 yrs, male/female ratio: 34/26). Using a single T1 image of each subject, additional features were created that provided complementary information, such as difference from template, tissue segmentation, brain asymmetries, gradient magnitude, and deviances of these images from 80 age- and gender-matched controls. These features were fed into the MRV-NRF (multi-resolution voxel-wise neighborhood random forest; Tustison et al., 2014) prediction algorithm implemented in ANTsR (Avants, 2015). The algorithm incorporates information from each voxel and its surrounding neighbors from all the above features, in a hierarchy of random forest predictions from low to high resolution. The validity of the framework was tested with a 6-fold cross validation (i.e., train from 50 subjects, predict 10). The process was repeated ten times, producing ten segmentations for each subject, from which the average solution was binarized. Predicted lesions were compared to manually defined lesions, and VLSM models were built on 4 language measures: repetition and comprehension subscores from the WAB (Kertesz, 1982), WAB-AQ, and PNT naming accuracy (Roach, Schwartz, Martin, Grewal, & Brecher, 1996). RESULTS: Manual and predicted lesion size showed high correlation (r=0.96). Compared to manual lesions, the predicted lesions had a dice overlap of 0.72 (±0.14 STD), a case-wise maximum distance (Hausdorff) of 21mm (±16

  12. An Efficient Supervised Training Algorithm for Multilayer Spiking Neural Networks.

    Science.gov (United States)

    Xie, Xiurui; Qu, Hong; Liu, Guisong; Zhang, Malu; Kurths, Jürgen

    2016-01-01

The spiking neural networks (SNNs) are the third generation of neural networks and perform remarkably well in cognitive tasks such as pattern recognition. The spike emitting and information processing mechanisms found in biological cognitive systems motivate the application of the hierarchical structure and temporal encoding mechanism in spiking neural networks, which have exhibited strong computational capability. However, the hierarchical structure and temporal encoding approach require neurons to process information serially in space and time respectively, which reduces the training efficiency significantly. For training the hierarchical SNNs, most existing methods are based on the traditional back-propagation algorithm, inheriting its drawbacks of gradient diffusion and sensitivity to parameters. To keep the powerful computation capability of the hierarchical structure and temporal encoding mechanism, but to overcome the low efficiency of the existing algorithms, a new training algorithm, the Normalized Spiking Error Back Propagation (NSEBP), is proposed in this paper. In the feedforward calculation, the output spike times are calculated by solving the quadratic function in the spike response model instead of detecting postsynaptic voltage states at all time points as in traditional algorithms. In the feedback weight modification, the computational error is propagated to previous layers by the presynaptic spike jitter instead of the gradient descent rule, which realizes layer-wise training. Furthermore, our algorithm investigates the mathematical relation between the weight variation and the voltage error change, which makes the normalization in the weight modification applicable. Adopting these strategies, our algorithm outperforms the traditional SNN multi-layer algorithms in terms of learning efficiency and parameter sensitivity, as also demonstrated by the comprehensive experimental results in this paper.

  13. Comparison of supervised machine learning algorithms for waterborne pathogen detection using mobile phone fluorescence microscopy

    Science.gov (United States)

    Ceylan Koydemir, Hatice; Feng, Steve; Liang, Kyle; Nadkarni, Rohan; Benien, Parul; Ozcan, Aydogan

    2017-06-01

    Giardia lamblia is a waterborne parasite that affects millions of people every year worldwide, causing a diarrheal illness known as giardiasis. Timely detection of the presence of the cysts of this parasite in drinking water is important to prevent the spread of the disease, especially in resource-limited settings. Here we provide extended experimental testing and evaluation of the performance and repeatability of a field-portable and cost-effective microscopy platform for automated detection and counting of Giardia cysts in water samples, including tap water, non-potable water, and pond water. This compact platform is based on our previous work, and is composed of a smartphone-based fluorescence microscope, a disposable sample processing cassette, and a custom-developed smartphone application. Our mobile phone microscope has a large field of view of 0.8 cm2 and weighs only 180 g, excluding the phone. A custom-developed smartphone application provides a user-friendly graphical interface, guiding the users to capture a fluorescence image of the sample filter membrane and analyze it automatically at our servers using an image processing algorithm and training data, consisting of >30,000 images of cysts and >100,000 images of other fluorescent particles that are captured, including, e.g. dust. The total time that it takes from sample preparation to automated cyst counting is less than an hour for each 10 ml of water sample that is tested. We compared the sensitivity and the specificity of our platform using multiple supervised classification models, including support vector machines and nearest neighbors, and demonstrated that a bootstrap aggregating (i.e. bagging) approach using raw image file format provides the best performance for automated detection of Giardia cysts. We evaluated the performance of this machine learning enabled pathogen detection device with water samples taken from different sources (e.g. tap water, non-potable water, pond water) and achieved a
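
A sketch of the kind of model comparison described, using scikit-learn classifiers on synthetic stand-in features (the paper's image features and training corpus are not reproduced here):

```python
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 16))        # stand-in cyst / non-cyst image features
y = rng.integers(0, 2, size=2000)      # 1 = Giardia cyst, 0 = other particle

models = {
    "SVM": SVC(),
    "kNN": KNeighborsClassifier(),
    "bagging": BaggingClassifier(DecisionTreeClassifier(), n_estimators=50),
}
for name, m in models.items():
    print(name, cross_val_score(m, X, y, cv=5, scoring="f1").mean())
```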

  14. Comparison of supervised machine learning algorithms for waterborne pathogen detection using mobile phone fluorescence microscopy

    KAUST Repository

    Ceylan Koydemir, Hatice

    2017-06-14

    Giardia lamblia is a waterborne parasite that affects millions of people every year worldwide, causing a diarrheal illness known as giardiasis. Timely detection of the presence of the cysts of this parasite in drinking water is important to prevent the spread of the disease, especially in resource-limited settings. Here we provide extended experimental testing and evaluation of the performance and repeatability of a field-portable and cost-effective microscopy platform for automated detection and counting of Giardia cysts in water samples, including tap water, non-potable water, and pond water. This compact platform is based on our previous work, and is composed of a smartphone-based fluorescence microscope, a disposable sample processing cassette, and a custom-developed smartphone application. Our mobile phone microscope has a large field of view of ~0.8 cm2 and weighs only ~180 g, excluding the phone. A custom-developed smartphone application provides a user-friendly graphical interface, guiding the users to capture a fluorescence image of the sample filter membrane and analyze it automatically at our servers using an image processing algorithm and training data, consisting of >30,000 images of cysts and >100,000 images of other fluorescent particles that are captured, including, e.g. dust. The total time that it takes from sample preparation to automated cyst counting is less than an hour for each 10 ml of water sample that is tested. We compared the sensitivity and the specificity of our platform using multiple supervised classification models, including support vector machines and nearest neighbors, and demonstrated that a bootstrap aggregating (i.e. bagging) approach using raw image file format provides the best performance for automated detection of Giardia cysts. We evaluated the performance of this machine learning enabled pathogen detection device with water samples taken from different sources (e.g. tap water, non-potable water, pond water) and achieved

  15. Comparison of supervised machine learning algorithms for waterborne pathogen detection using mobile phone fluorescence microscopy

    Directory of Open Access Journals (Sweden)

    Ceylan Koydemir Hatice

    2017-06-01

Full Text Available Giardia lamblia is a waterborne parasite that affects millions of people every year worldwide, causing a diarrheal illness known as giardiasis. Timely detection of the presence of the cysts of this parasite in drinking water is important to prevent the spread of the disease, especially in resource-limited settings. Here we provide extended experimental testing and evaluation of the performance and repeatability of a field-portable and cost-effective microscopy platform for automated detection and counting of Giardia cysts in water samples, including tap water, non-potable water, and pond water. This compact platform is based on our previous work, and is composed of a smartphone-based fluorescence microscope, a disposable sample processing cassette, and a custom-developed smartphone application. Our mobile phone microscope has a large field of view of ~0.8 cm2 and weighs only ~180 g, excluding the phone. A custom-developed smartphone application provides a user-friendly graphical interface, guiding the users to capture a fluorescence image of the sample filter membrane and analyze it automatically at our servers using an image processing algorithm and training data, consisting of >30,000 images of cysts and >100,000 images of other fluorescent particles that are captured, including, e.g. dust. The total time that it takes from sample preparation to automated cyst counting is less than an hour for each 10 ml of water sample that is tested. We compared the sensitivity and the specificity of our platform using multiple supervised classification models, including support vector machines and nearest neighbors, and demonstrated that a bootstrap aggregating (i.e. bagging) approach using raw image file format provides the best performance for automated detection of Giardia cysts. We evaluated the performance of this machine learning enabled pathogen detection device with water samples taken from different sources (e.g. tap water, non-potable water, pond

  16. Comparison of supervised machine learning algorithms for waterborne pathogen detection using mobile phone fluorescence microscopy

    KAUST Repository

    Ceylan Koydemir, Hatice; Feng, Steve; Liang, Kyle; Nadkarni, Rohan; Benien, Parul; Ozcan, Aydogan

    2017-01-01

    Giardia lamblia is a waterborne parasite that affects millions of people every year worldwide, causing a diarrheal illness known as giardiasis. Timely detection of the presence of the cysts of this parasite in drinking water is important to prevent the spread of the disease, especially in resource-limited settings. Here we provide extended experimental testing and evaluation of the performance and repeatability of a field-portable and cost-effective microscopy platform for automated detection and counting of Giardia cysts in water samples, including tap water, non-potable water, and pond water. This compact platform is based on our previous work, and is composed of a smartphone-based fluorescence microscope, a disposable sample processing cassette, and a custom-developed smartphone application. Our mobile phone microscope has a large field of view of ~0.8 cm2 and weighs only ~180 g, excluding the phone. A custom-developed smartphone application provides a user-friendly graphical interface, guiding the users to capture a fluorescence image of the sample filter membrane and analyze it automatically at our servers using an image processing algorithm and training data, consisting of >30,000 images of cysts and >100,000 images of other fluorescent particles that are captured, including, e.g. dust. The total time that it takes from sample preparation to automated cyst counting is less than an hour for each 10 ml of water sample that is tested. We compared the sensitivity and the specificity of our platform using multiple supervised classification models, including support vector machines and nearest neighbors, and demonstrated that a bootstrap aggregating (i.e. bagging) approach using raw image file format provides the best performance for automated detection of Giardia cysts. We evaluated the performance of this machine learning enabled pathogen detection device with water samples taken from different sources (e.g. tap water, non-potable water, pond water) and achieved

  17. Design Automation Algorithm for Soft Robots

    Data.gov (United States)

    National Aeronautics and Space Administration — The majority of design to manufacturing today is still an ad hoc and empirical process. There is a direct need for a single, automated design and fabrication...

  18. Benchmarking protein classification algorithms via supervised cross-validation

    NARCIS (Netherlands)

    Kertész-Farkas, A.; Dhir, S.; Sonego, P.; Pacurar, M.; Netoteia, S.; Nijveen, H.; Kuzniar, A.; Leunissen, J.A.M.; Kocsor, A.; Pongor, S.

    2008-01-01

    Development and testing of protein classification algorithms are hampered by the fact that the protein universe is characterized by groups vastly different in the number of members, in average protein size, similarity within group, etc. Datasets based on traditional cross-validation (k-fold,

  19. Fall detection using supervised machine learning algorithms: A comparative study

    KAUST Repository

    Zerrouki, Nabil; Harrou, Fouzi; Houacine, Amrane; Sun, Ying

    2017-01-01

Fall incidents are considered the leading cause of disability and even mortality among older adults. To address this problem, the fields of fall detection and prevention have received a lot of attention over the past years and attracted many research efforts. In the current study we present an overall performance comparison between fall detection systems using the most popular machine learning approaches: Naïve Bayes, K nearest neighbor, neural network, and support vector machine. The analysis of the classification power associated with these most widely utilized algorithms is conducted on two fall detection databases, namely FDD and URFD. Since the performance of a classification algorithm is inherently dependent on the features, we extracted and used the same features for all classifiers. The classification evaluation is conducted using different state-of-the-art statistical measures such as the overall accuracy, the F-measure coefficient, and the area under the ROC curve (AUC) value.

  20. Fall detection using supervised machine learning algorithms: A comparative study

    KAUST Repository

    Zerrouki, Nabil

    2017-01-05

Fall incidents are considered the leading cause of disability and even mortality among older adults. To address this problem, the fields of fall detection and prevention have received a lot of attention over the past years and attracted many research efforts. In the current study we present an overall performance comparison between fall detection systems using the most popular machine learning approaches: Naïve Bayes, K nearest neighbor, neural network, and support vector machine. The analysis of the classification power associated with these most widely utilized algorithms is conducted on two fall detection databases, namely FDD and URFD. Since the performance of a classification algorithm is inherently dependent on the features, we extracted and used the same features for all classifiers. The classification evaluation is conducted using different state-of-the-art statistical measures such as the overall accuracy, the F-measure coefficient, and the area under the ROC curve (AUC) value.

  1. ASSESSMENT OF PERFORMANCES OF VARIOUS MACHINE LEARNING ALGORITHMS DURING AUTOMATED EVALUATION OF DESCRIPTIVE ANSWERS

    Directory of Open Access Journals (Sweden)

    C. Sunil Kumar

    2014-07-01

    Full Text Available Automation of descriptive answer evaluation is the need of the hour because of the huge increase in the number of students enrolling each year in educational institutions and the limited staff available to spare their time for evaluations. In this paper, we use a machine learning workbench called LightSIDE to accomplish automatic evaluation and scoring of descriptive answers. We attempted to identify the best supervised machine learning algorithm for a scenario with a limited training-set sample size. We evaluated the performances of Bayes, SVM, logistic regression, random forest, decision stump and decision tree algorithms. We confirmed SVM as the best-performing algorithm, based on quantitative measurements of accuracy, kappa, training speed and prediction accuracy on the supplied test set.

  2. Novel maximum-margin training algorithms for supervised neural networks.

    Science.gov (United States)

    Ludwig, Oswaldo; Nunes, Urbano

    2010-06-01

    This paper proposes three novel training methods, two of them based on the backpropagation approach and a third one based on information theory, for multilayer perceptron (MLP) binary classifiers. Both backpropagation methods are based on the maximal-margin (MM) principle. The first one, based on the gradient descent with adaptive learning rate algorithm (GDX) and named maximum-margin GDX (MMGDX), directly increases the margin of the MLP output-layer hyperplane. The proposed method jointly optimizes both MLP layers in a single process, backpropagating the gradient of an MM-based objective function through the output and hidden layers in order to create a hidden-layer space that enables a higher margin for the output-layer hyperplane, avoiding the testing of many arbitrary kernels, as occurs in the case of support vector machine (SVM) training. The proposed MM-based objective function aims to stretch out the margin to its limit. An objective function based on the Lp-norm is also proposed in order to take into account the idea of support vectors, while overcoming the complexity involved in solving a constrained optimization problem, as is usual in SVM training. In fact, all the training methods proposed in this paper have time and space complexity O(N), while usual SVM training methods have time complexity O(N³) and space complexity O(N²), where N is the training-data-set size. The second approach, named minimization of interclass interference (MICI), has an objective function inspired by Fisher discriminant analysis. This algorithm aims to create an MLP hidden output where the patterns have a desirable statistical distribution. In both training methods, the maximum area under the ROC curve (AUC) is applied as the stop criterion. The third approach offers a robust training framework able to take the best of each proposed training method. The main idea is to compose a neural model by using neurons extracted from three other neural networks, each one previously trained by

  3. Fault Diagnosis of Supervision and Homogenization Distance Based on Local Linear Embedding Algorithm

    Directory of Open Access Journals (Sweden)

    Guangbin Wang

    2015-01-01

    Full Text Available In view of the uneven distribution of real fault samples, and the fact that the dimension-reduction effect of the locally linear embedding (LLE) algorithm is easily affected by neighboring points, an improved local linear embedding algorithm based on homogenization distance (HLLE) is developed. The method makes the overall distribution of sample points tend toward homogenization and reduces the influence of neighboring points by using homogenization distance instead of the traditional Euclidean distance. This helps to choose effective neighboring points for constructing the weight matrix for dimension reduction. Because the fault-recognition performance improvement of HLLE is limited and unstable, the paper further proposes a new local linear embedding algorithm of supervision and homogenization distance (SHLLE) by adding a supervised learning mechanism. On the basis of homogenization distance, supervised learning adds the category information of sample points, so that sample points of the same category are gathered and sample points of heterogeneous categories are scattered. This effectively improves the performance of fault diagnosis while maintaining stability. A comparison of the methods mentioned above was made by simulation experiments with rotor-system fault diagnosis, and the results show that the SHLLE algorithm has superior fault-recognition performance.

  4. Semi-supervised prediction of gene regulatory networks using machine learning algorithms.

    Science.gov (United States)

    Patel, Nihir; Wang, Jason T L

    2015-10-01

    Use of computational methods to predict gene regulatory networks (GRNs) from gene expression data is a challenging task. Many studies have been conducted using unsupervised methods to fulfill the task; however, such methods usually yield low prediction accuracies due to the lack of training data. In this article, we propose semi-supervised methods for GRN prediction by utilizing two machine learning algorithms, namely, support vector machines (SVM) and random forests (RF). The semi-supervised methods make use of unlabelled data for training. We investigated inductive and transductive learning approaches, both of which adopt an iterative procedure to obtain reliable negative training data from the unlabelled data. We then applied our semi-supervised methods to gene expression data of Escherichia coli and Saccharomyces cerevisiae, and evaluated the performance of our methods using the expression data. Our analysis indicated that the transductive learning approach outperformed the inductive learning approach for both organisms. However, there was no conclusive difference identified in the performance of SVM and RF. Experimental results also showed that the proposed semi-supervised methods performed better than existing supervised methods for both organisms.
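
    The iterative selection of reliable negatives from the unlabelled pool can be sketched as below — a simplified, hypothetical rendering of the idea with a random forest, not the authors' exact procedure or features; the gene-pair vectors are synthetic.

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier

      rng = np.random.default_rng(0)
      X_pos = rng.normal(1.0, 1.0, size=(50, 10))    # known regulatory gene pairs
      X_unl = rng.normal(0.0, 1.0, size=(500, 10))   # unlabelled gene pairs

      neg_idx = rng.choice(len(X_unl), size=50, replace=False)  # seed negatives
      for _ in range(5):                             # iterative refinement
          X = np.vstack([X_pos, X_unl[neg_idx]])
          y = np.r_[np.ones(len(X_pos)), np.zeros(len(neg_idx))]
          clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
          # keep the unlabelled pairs the model is most confident are negative
          neg_idx = np.argsort(clf.predict_proba(X_unl)[:, 1])[:50]

      print("reliable negative training pairs:", len(neg_idx))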

  5. Automated Essay Grading using Machine Learning Algorithm

    Science.gov (United States)

    Ramalingam, V. V.; Pandian, A.; Chetry, Prateek; Nigam, Himanshu

    2018-04-01

    Essays are paramount for assessing academic excellence, along with linking different ideas and the ability to recall, but they are notably time consuming when assessed manually. Manual grading takes a significant amount of an evaluator's time and hence is an expensive process. Automated grading, if proven effective, will not only reduce the assessment time but, through comparison with human scores, will also make the scores realistic. The project aims to develop an automated essay assessment system using machine learning techniques, by classifying a corpus of textual entities into a small number of discrete categories corresponding to possible grades. A linear regression technique is used to train the model, along with various other classification and clustering techniques. We train classifiers on the training set, run them over the downloaded dataset, and then measure performance on our dataset by comparing the obtained values with the dataset values. We have implemented our model using Java.
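
    A minimal sketch of the described pipeline — text features plus linear regression against human grades — is shown below in Python with scikit-learn (the study's own implementation is in Java); the essays and grades are invented placeholders.

      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.linear_model import LinearRegression
      from sklearn.pipeline import make_pipeline

      essays = [
          "The industrial revolution transformed labour and cities.",
          "Cities changed because factories needed workers and machines.",
          "I like turtles.",
      ]
      grades = [5.0, 4.0, 1.0]                    # human-assigned scores

      # Vectorise the corpus and fit a linear-regression grader.
      model = make_pipeline(TfidfVectorizer(), LinearRegression())
      model.fit(essays, grades)
      print(model.predict(["Factories reshaped urban life during the revolution."]))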

  6. Automated Vectorization of Decision-Based Algorithms

    Science.gov (United States)

    James, Mark

    2006-01-01

    Virtually all existing vectorization algorithms are designed to only analyze the numeric properties of an algorithm and distribute those elements across multiple processors. This advances the state of the practice because it is the only known system, at the time of this reporting, that takes high-level statements and analyzes them for their decision properties and converts them to a form that allows them to automatically be executed in parallel. The software takes a high-level source program that describes a complex decision-based condition and rewrites it as a disjunctive set of component Boolean relations that can then be executed in parallel. This is important because parallel architectures are becoming more commonplace in conventional systems and they have always been present in NASA flight systems. This technology allows one to take existing condition-based code and automatically vectorize it so it naturally decomposes across parallel architectures.
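
    The core rewriting step — turning a complex condition into a disjunctive set of component Boolean relations that can be evaluated independently — can be illustrated with sympy's disjunctive-normal-form conversion. The condition below is an invented example, and this sketch shows only the logical transformation, not the reported system.

      from sympy import symbols
      from sympy.logic.boolalg import to_dnf

      a, b, c, d = symbols("a b c d")
      condition = (a | b) & (c | d) & ~(a & d)  # a complex decision-based condition

      # DNF: an OR of AND-clauses; each clause can be evaluated on its own processor.
      print(to_dnf(condition, simplify=True))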

  7. Automated lesion detection on MRI scans using combined unsupervised and supervised methods

    International Nuclear Information System (INIS)

    Guo, Dazhou; Fridriksson, Julius; Fillmore, Paul; Rorden, Christopher; Yu, Hongkai; Zheng, Kang; Wang, Song

    2015-01-01

    Accurate and precise detection of brain lesions on MR images (MRI) is paramount for accurately relating lesion location to impaired behavior. In this paper, we present a novel method to automatically detect brain lesions from a T1-weighted 3D MRI. The proposed method combines the advantages of both unsupervised and supervised methods. First, unsupervised methods perform a unified segmentation normalization to warp images from the native space into a standard space and to generate probability maps for different tissue types, e.g., gray matter, white matter and fluid. This allows us to construct an initial lesion probability map by comparing the normalized MRI to healthy control subjects. Then, we perform non-rigid and reversible atlas-based registration to refine the probability maps of gray matter, white matter, external CSF, ventricle, and lesions. These probability maps are combined with the normalized MRI to construct three types of features, with which we use supervised methods to train three support vector machine (SVM) classifiers for a combined classifier. Finally, the combined classifier is used to accomplish lesion detection. We tested this method using T1-weighted MRIs from 60 in-house stroke patients. Using leave-one-out cross-validation, the proposed method achieves an average Dice coefficient of 73.1% when compared to lesion maps hand-delineated by trained neurologists. Furthermore, we tested the proposed method on the T1-weighted MRIs in the MICCAI BRATS 2012 dataset, where it achieves an average Dice coefficient of 66.5% in comparison to the expert-annotated tumor maps provided with the dataset. In addition, on these two test datasets, the proposed method shows competitive performance relative to three state-of-the-art methods, including Stamatakis et al., Seghier et al., and Sanjuan et al. In this paper, we introduced a novel automated procedure for lesion detection from T1-weighted MRIs by combining both an unsupervised and a

  8. Automated System for Teaching Computational Complexity of Algorithms Course

    Directory of Open Access Journals (Sweden)

    Vadim S. Roublev

    2017-01-01

    Full Text Available This article describes the problems of designing an automated teaching system for the “Computational complexity of algorithms” course. This system should provide students with the means to familiarize themselves with a complex mathematical apparatus and improve their mathematical thinking in the respective area. The article introduces the technique of an algorithm symbol scroll table that allows estimating lower and upper bounds of computational complexity. Further, we introduce a set of theorems that facilitate the analysis in cases when the integer rounding of algorithm parameters is involved and when analyzing the complexity of a sum. At the end, the article introduces a normal system of symbol transformations that both allows one to perform any symbol transformations and simplifies their automated validation. The article is published in the authors’ wording.

  9. An immune-inspired semi-supervised algorithm for breast cancer diagnosis.

    Science.gov (United States)

    Peng, Lingxi; Chen, Wenbin; Zhou, Wubai; Li, Fufang; Yang, Jin; Zhang, Jiandong

    2016-10-01

    Breast cancer is the most frequently diagnosed life-threatening cancer in women worldwide and the leading cause of cancer death among women. Early, accurate diagnosis can be a big plus in treating breast cancer. Researchers have approached this problem using various data mining and machine learning techniques, such as support vector machines and artificial neural networks. Computer immunology is another intelligent method, inspired by the biological immune system, which has been successfully applied in pattern recognition, combinatorial optimization, machine learning, etc. However, most of these diagnosis methods are supervised, and it is very expensive to obtain labeled data in biology and medicine. In this paper, we seamlessly integrate state-of-the-art research on life science with artificial intelligence, and propose a semi-supervised learning algorithm to reduce the need for labeled data. We use two well-known benchmark breast cancer datasets in our study, acquired from the UCI machine learning repository. Extensive experiments are conducted and evaluated on those two datasets. Our experimental results demonstrate the effectiveness and efficiency of the proposed algorithm, which proves that it is a promising automatic diagnosis method for breast cancer. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  10. A Novel Classification Algorithm Based on Incremental Semi-Supervised Support Vector Machine.

    Directory of Open Access Journals (Sweden)

    Fei Gao

    Full Text Available For current computational intelligence techniques, a major challenge is how to learn new concepts in a changing environment. Traditional learning schemes cannot adequately address this problem due to the lack of a dynamic data-selection mechanism. In this paper, inspired by the human learning process, a novel classification algorithm based on an incremental semi-supervised support vector machine (SVM) is proposed. Through the analysis of the prediction confidence of samples and the data distribution in a changing environment, a "soft-start" approach, a data-selection mechanism and a data-cleaning mechanism are designed, which complete the construction of our incremental semi-supervised learning system. Notably, owing to the design of the proposed algorithm, the computational complexity is reduced effectively. In addition, a detailed analysis is carried out for the possible appearance of new labeled samples during the learning process. The results show that our algorithm does not rely on a model of the sample distribution, has an extremely low rate of introducing wrongly semi-labeled samples, and can effectively make use of the unlabeled samples to enrich the knowledge system of the classifier and improve the accuracy rate. Moreover, our method also has outstanding generalization performance and the ability to overcome concept drift in a changing environment.
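
    The flavour of the incremental, confidence-gated update can be sketched with a linear hinge-loss model trained by partial fits — a loose analogue of the incremental semi-supervised SVM, with invented thresholds and data.

      import numpy as np
      from sklearn.linear_model import SGDClassifier

      rng = np.random.default_rng(0)
      X_lab = rng.normal(size=(100, 5))
      y_lab = (X_lab[:, 0] > 0).astype(int)

      clf = SGDClassifier(loss="hinge", random_state=0)
      clf.partial_fit(X_lab, y_lab, classes=[0, 1])  # "soft start" on labelled data

      for _ in range(10):              # batches arriving from a changing environment
          X_new = rng.normal(size=(50, 5))
          margin = clf.decision_function(X_new)
          confident = np.abs(margin) > 1.0           # data-selection mechanism
          if confident.any():                        # add semi-labelled samples only
              y_semi = (margin[confident] > 0).astype(int)
              clf.partial_fit(X_new[confident], y_semi)

      print("learned coefficients:", clf.coef_.round(2))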

  11. A Recommendation Algorithm for Automating Corollary Order Generation

    Science.gov (United States)

    Klann, Jeffrey; Schadow, Gunther; McCoy, JM

    2009-01-01

    Manual development and maintenance of decision support content is time-consuming and expensive. We explore recommendation algorithms, e-commerce data-mining tools that use collective order history to suggest purchases, to assist with this. In particular, previous work shows corollary order suggestions are amenable to automated data-mining techniques. Here, an item-based collaborative filtering algorithm augmented with association rule interestingness measures mined suggestions from 866,445 orders made in an inpatient hospital in 2007, generating 584 potential corollary orders. Our expert physician panel evaluated the top 92 and agreed 75.3% were clinically meaningful. Also, at least one felt 47.9% would be directly relevant in guideline development. This automated generation of a rough-cut of corollary orders confirms prior indications about automated tools in building decision support content. It is an important step toward computerized augmentation to decision support development, which could increase development efficiency and content quality while automatically capturing local standards. PMID:20351875
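
    In outline, such suggestions come from pairwise order statistics: for items a and b, the rule "a suggests b" is kept when its confidence P(b|a) and lift are high. A toy sketch follows; the order matrix, item names and cut-offs are invented, and real interestingness measures are richer than these two.

      import numpy as np

      items = ["warfarin", "INR test", "insulin", "glucose check"]
      orders = np.array([[1, 1, 0, 0],   # rows = patient orders, cols = items
                         [1, 1, 0, 1],
                         [0, 0, 1, 1],
                         [1, 0, 1, 1],
                         [0, 0, 1, 1]])
      n = len(orders)
      support = orders.sum(axis=0) / n
      co = (orders.T @ orders) / n       # pairwise co-occurrence frequencies

      for i, a in enumerate(items):
          for j, b in enumerate(items):
              if i != j and co[i, j] > 0:
                  confidence = co[i, j] / support[i]         # P(b | a)
                  lift = co[i, j] / (support[i] * support[j])
                  if confidence >= 0.6 and lift > 1.0:       # interestingness filter
                      print(f"{a} -> suggest {b}: conf={confidence:.2f}, lift={lift:.2f}")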

  12. Automated Detection of Microaneurysms Using Scale-Adapted Blob Analysis and Semi-Supervised Learning

    Energy Technology Data Exchange (ETDEWEB)

    Adal, Kedir M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Sidebe, Desire [Univ. of Burgundy, Dijon (France); Ali, Sharib [Univ. of Burgundy, Dijon (France); Chaum, Edward [Univ. of Tennessee, Knoxville, TN (United States); Karnowski, Thomas Paul [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Meriaudeau, Fabrice [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2014-01-07

    Despite several attempts, automated detection of microaneurysms (MAs) from digital fundus images remains an open issue, due to the subtle appearance of MAs against the surrounding tissues. In this paper, the microaneurysm detection problem is modeled as finding interest regions or blobs in an image, and an automatic local-scale selection technique is presented. Several scale-adapted region descriptors are then introduced to characterize these blob regions. A semi-supervised learning approach, which requires few manually annotated learning examples, is also proposed to train a classifier to detect true MAs. The developed system is built using only a few manually labeled and a large number of unlabeled retinal color fundus images. The performance of the overall system is evaluated on the Retinopathy Online Challenge (ROC) competition database. A competition performance measure (CPM) of 0.364 shows the competitiveness of the proposed system against state-of-the-art techniques, as well as the applicability of the proposed features to the analysis of fundus images.
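
    The blob-with-scale idea can be sketched with a Laplacian-of-Gaussian detector, which returns each candidate region together with the sigma at which it responds most strongly — a stand-in for the paper's scale-adapted descriptors, run here on a synthetic image rather than a fundus photograph.

      import numpy as np
      from skimage.feature import blob_log

      rng = np.random.default_rng(0)
      image = rng.normal(0.0, 0.05, size=(128, 128))
      image[40:44, 60:64] += 1.0             # a bright blob-like lesion (synthetic)

      # Each detected blob is (row, col, sigma); sigma encodes the local scale.
      blobs = blob_log(image, min_sigma=1, max_sigma=6, num_sigma=10, threshold=0.2)
      for r, c, sigma in blobs:
          print(f"candidate at ({r:.0f}, {c:.0f}), scale sigma = {sigma:.2f}")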

  13. Automated detection of microaneurysms using scale-adapted blob analysis and semi-supervised learning.

    Science.gov (United States)

    Adal, Kedir M; Sidibé, Désiré; Ali, Sharib; Chaum, Edward; Karnowski, Thomas P; Mériaudeau, Fabrice

    2014-04-01

    Despite several attempts, automated detection of microaneurysms (MAs) from digital fundus images remains an open issue, due to the subtle appearance of MAs against the surrounding tissues. In this paper, the microaneurysm detection problem is modeled as finding interest regions or blobs in an image, and an automatic local-scale selection technique is presented. Several scale-adapted region descriptors are introduced to characterize these blob regions. A semi-supervised learning approach, which requires few manually annotated learning examples, is also proposed to train a classifier that can detect true MAs. The developed system is built using only a few manually labeled and a large number of unlabeled retinal color fundus images. The performance of the overall system is evaluated on the Retinopathy Online Challenge (ROC) competition database. A competition performance measure (CPM) of 0.364 shows the competitiveness of the proposed system against state-of-the-art techniques, as well as the applicability of the proposed features to the analysis of fundus images. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  14. SPECIAL LIBRARIES OF FRAGMENTS OF ALGORITHMIC NETWORKS TO AUTOMATE THE DEVELOPMENT OF ALGORITHMIC MODELS

    Directory of Open Access Journals (Sweden)

    V. E. Marley

    2015-01-01

    Full Text Available Summary. The concept of algorithmic models arose from the algorithmic approach, in which the simulated object or phenomenon is represented as a process governed by the strict rules of an algorithm that describes the operation of the object. An algorithmic model is thus a formalized description, by a subject-matter specialist, of the scenario of the simulated process, whose structure corresponds to the structure of the causal and temporal relationships between events of the modeled process, together with all information necessary for its software implementation. Algorithmic networks are used to represent the structure of algorithmic models. They are typically defined as loaded finite directed graphs whose vertices are mapped to operators and whose arcs are variables bound by the operators. The language of algorithmic networks is expressive: the algorithms it can represent cover the class of arbitrary algorithms. Existing modeling-automation systems based on algorithmic networks mainly use operators working with real numbers. Although this limits their expressiveness, it is sufficient for modeling a wide class of problems related to the economy, the environment, transport and technical processes. The task of modeling the execution of schedules and network diagrams is relevant and useful. Many systems exist for computing network graphs; however, their monitoring is based on the analysis of gaps and deadlines in the graphs, without predictive analysis of schedule execution. The library described here is designed for building such predictive models: by specifying source data, one obtains a set of projections from which a single projection can be chosen and adopted as the new plan.

  15. An Algorithm to Automate Yeast Segmentation and Tracking

    Science.gov (United States)

    Doncic, Andreas; Eser, Umut; Atay, Oguzhan; Skotheim, Jan M.

    2013-01-01

    Our understanding of dynamic cellular processes has been greatly enhanced by rapid advances in quantitative fluorescence microscopy. Imaging single cells has emphasized the prevalence of phenomena that can be difficult to infer from population measurements, such as all-or-none cellular decisions, cell-to-cell variability, and oscillations. Examination of these phenomena requires segmenting and tracking individual cells over long periods of time. However, accurate segmentation and tracking of cells is difficult and is often the rate-limiting step in an experimental pipeline. Here, we present an algorithm that accomplishes fully automated segmentation and tracking of budding yeast cells within growing colonies. The algorithm incorporates prior information of yeast-specific traits, such as immobility and growth rate, to segment an image using a set of threshold values rather than one specific optimized threshold. Results from the entire set of thresholds are then used to perform a robust final segmentation. PMID:23520484
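
    The threshold-set idea reads directly in code: segment at many thresholds and let the masks vote, rather than committing to a single optimised cut-off. A minimal sketch with invented image data and an assumed majority-vote rule for the final combination:

      import numpy as np

      rng = np.random.default_rng(0)
      image = rng.normal(0.2, 0.1, size=(64, 64))
      image[20:40, 20:40] += 0.5               # a bright "cell"

      thresholds = np.linspace(0.3, 0.7, 9)    # a set of thresholds, not one optimum
      masks = np.stack([image > t for t in thresholds])

      segmentation = masks.mean(axis=0) >= 0.5 # robust final segmentation by voting
      print("segmented pixels:", int(segmentation.sum()))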

  16. An algorithm to automate yeast segmentation and tracking.

    Directory of Open Access Journals (Sweden)

    Andreas Doncic

    Full Text Available Our understanding of dynamic cellular processes has been greatly enhanced by rapid advances in quantitative fluorescence microscopy. Imaging single cells has emphasized the prevalence of phenomena that can be difficult to infer from population measurements, such as all-or-none cellular decisions, cell-to-cell variability, and oscillations. Examination of these phenomena requires segmenting and tracking individual cells over long periods of time. However, accurate segmentation and tracking of cells is difficult and is often the rate-limiting step in an experimental pipeline. Here, we present an algorithm that accomplishes fully automated segmentation and tracking of budding yeast cells within growing colonies. The algorithm incorporates prior information of yeast-specific traits, such as immobility and growth rate, to segment an image using a set of threshold values rather than one specific optimized threshold. Results from the entire set of thresholds are then used to perform a robust final segmentation.

  17. Supervised chaos genetic algorithm based state of charge determination for LiFePO4 batteries in electric vehicles

    Science.gov (United States)

    Shen, Yanqing

    2018-04-01

    LiFePO4 batteries are being rapidly adopted in electric vehicles, whose safety and functional capabilities are greatly influenced by the evaluation of the available cell capacity. This paper advances a supervised chaos genetic algorithm based state-of-charge determination method, augmented with an adaptive switching mechanism, in which a combined state-space model is employed to simulate battery dynamics. The method is validated with experimental data collected from a battery test system. Results indicate that the supervised chaos genetic algorithm based state-of-charge determination method shows great performance with low computational complexity and is little influenced by the unknown initial cell state.

  18. Sampling algorithms for validation of supervised learning models for Ising-like systems

    Science.gov (United States)

    Portman, Nataliya; Tamblyn, Isaac

    2017-12-01

    In this paper, we build and explore supervised learning models of ferromagnetic system behavior, using Monte-Carlo sampling of the spin configuration space generated by the 2D Ising model. Given the enormous size of the space of all possible Ising model realizations, the question arises as to how to choose a reasonable number of samples that will form physically meaningful and non-intersecting training and testing datasets. Here, we propose a sampling technique called "ID-MH" that uses the Metropolis-Hastings algorithm to create a Markov process across energy levels within a predefined configuration subspace. We show that application of this method retains phase transitions in both training and testing datasets and serves the purpose of validating a machine learning algorithm. For larger lattice dimensions, ID-MH is not feasible, as it requires knowledge of the complete configuration space. We therefore develop a new "block-ID" sampling strategy: it decomposes the given structure into square blocks with lattice dimension N ≤ 5 and uses ID-MH sampling of candidate blocks. Further comparison of the performance of commonly used machine learning methods such as random forests, decision trees, k nearest neighbors and artificial neural networks shows that the PCA-based decision tree regressor is the most accurate predictor of magnetizations of the Ising model. For energies, however, the accuracy of prediction is not satisfactory, highlighting the need to consider more algorithmically complex methods (e.g., deep learning).
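
    The Metropolis-Hastings kernel underlying ID-MH can be sketched in a few lines for the 2D Ising model; the energy-level bookkeeping that distinguishes ID-MH proper is omitted, and the lattice size and temperature below are illustrative.

      import numpy as np

      rng = np.random.default_rng(0)
      N, beta, steps = 16, 0.44, 200_000        # lattice size, inverse temperature
      spins = rng.choice([-1, 1], size=(N, N))

      for _ in range(steps):
          i, j = rng.integers(N, size=2)
          nb = (spins[(i + 1) % N, j] + spins[(i - 1) % N, j]
                + spins[i, (j + 1) % N] + spins[i, (j - 1) % N])
          dE = 2 * spins[i, j] * nb             # energy change of flipping (i, j)
          if dE <= 0 or rng.random() < np.exp(-beta * dE):
              spins[i, j] *= -1                 # Metropolis acceptance rule

      print("magnetization per spin:", spins.mean())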

  19. An Automated Summarization Assessment Algorithm for Identifying Summarizing Strategies.

    Directory of Open Access Journals (Sweden)

    Asad Abdi

    Full Text Available Summarization is the process of selecting important information from a source text, and summarizing strategies are the core cognitive processes in this activity. Since summarization can be important as a tool to improve comprehension, it has attracted the interest of teachers for teaching summary writing through direct instruction. To do this, they need to review and assess students' summaries, and these tasks are very time-consuming. A computer-assisted assessment can thus be used to help teachers conduct this task more effectively. This paper aims to propose an algorithm based on the combination of semantic relations between words and their syntactic composition to identify the summarizing strategies employed by students in summary writing. An innovative aspect of our algorithm lies in its ability to identify summarizing strategies at both the syntactic and semantic levels. The efficiency of the algorithm is measured in terms of Precision, Recall and F-measure. We then implemented the algorithm in an automated summarization assessment system that can be used to identify the summarizing strategies used by students in summary writing.

  20. An Automated Energy Detection Algorithm Based on Morphological Filter Processing with a Modified Watershed Transform

    Science.gov (United States)

    2018-01-01

    ARL-TR-8270, January 2018, US Army Research Laboratory: An Automated Energy Detection Algorithm Based on Morphological Filter Processing with a Modified Watershed Transform, by Kwok F Tom. Reporting period: 1 October 2016 – 30 September 2017.

  1. Semi-supervised spectral algorithms for community detection in complex networks based on equivalence of clustering methods

    Science.gov (United States)

    Ma, Xiaoke; Wang, Bingbo; Yu, Liang

    2018-01-01

    Community detection is fundamental for revealing the structure-functionality relationship in complex networks, and it involves two issues: a quantitative function for community quality, and algorithms to discover communities. Despite significant research on either of them, few attempts have been made to establish the connection between the two issues. To attack this problem, a generalized quantification function is proposed for communities in weighted networks, which provides a framework that unifies several well-known measures. Then, we prove that trace optimization of the proposed measure is equivalent to the objective functions of algorithms such as nonnegative matrix factorization, kernel K-means and spectral clustering, which serves as the theoretical foundation for designing algorithms for community detection. On the second issue, a semi-supervised spectral clustering algorithm is developed by exploiting the equivalence relation, combining nonnegative matrix factorization and spectral clustering. Different from traditional semi-supervised algorithms, the partial supervision is integrated into the objective of the spectral algorithm. Finally, through extensive experiments on both artificial and real-world networks, we demonstrate that the proposed method improves the accuracy of traditional spectral algorithms in community detection.

  2. The Automated Assessment of Postural Stability: Balance Detection Algorithm.

    Science.gov (United States)

    Napoli, Alessandro; Glass, Stephen M; Tucker, Carole; Obeid, Iyad

    2017-12-01

    Impaired balance is a common indicator of mild traumatic brain injury, concussion and musculoskeletal injury. Given the clinical relevance of such injuries, especially in military settings, it is paramount to develop more accurate and reliable on-field evaluation tools. This work presents the design and implementation of the automated assessment of postural stability (AAPS) system for on-field evaluations following concussion. The AAPS is a computer system, based on inexpensive off-the-shelf components and custom software, that aims to automatically and reliably evaluate balance deficits by replicating a known on-field clinical test, namely, the Balance Error Scoring System (BESS). The AAPS's main innovation is its balance error detection algorithm, which has been designed to acquire data from a Microsoft Kinect® sensor and convert them into clinically relevant BESS scores, using the same detection criteria defined by the original BESS test. In order to assess the AAPS balance evaluation capability, a total of 15 healthy subjects (7 male, 8 female) were required to perform the BESS test while simultaneously being tracked by a Kinect 2.0 sensor and a professional-grade motion capture system (Qualisys AB, Gothenburg, Sweden). High definition videos of the BESS trials were scored off-line by three experienced observers for reference scores. AAPS performance was assessed by comparing the AAPS automated scores to those derived by the three experienced observers. Our results show that the AAPS error detection algorithm presented here can accurately and precisely detect balance deficits, with performance levels comparable to those of experienced medical personnel. Specifically, agreement levels between the AAPS algorithm and the average human BESS scores ranging between 87.9% (single-leg on foam) and 99.8% (double-leg on firm ground) were detected. Moreover, statistically significant differences in balance scores were not detected by an ANOVA test with alpha equal to 0

  3. Automated microaneurysm detection algorithms applied to diabetic retinopathy retinal images

    Directory of Open Access Journals (Sweden)

    Akara Sopharak

    2013-07-01

    Full Text Available Diabetic retinopathy is the commonest cause of blindness in people of working age. It is characterised and graded by the development of retinal microaneurysms, haemorrhages and exudates. The damage caused by diabetic retinopathy can be prevented if it is treated in its early stages; therefore, automated early detection can limit the severity of the disease, improve the follow-up management of diabetic patients and assist ophthalmologists in investigating and treating the disease more efficiently. This review focuses on microaneurysm detection as the earliest clinically localised characteristic of diabetic retinopathy, a frequently observed complication in both Type 1 and Type 2 diabetes. Algorithms used for microaneurysm detection from retinal images are reviewed, and a number of features used to extract microaneurysms are summarised. Furthermore, a comparative analysis of reported methods used to automatically detect microaneurysms is presented and discussed, together with the performance of the methods and their complexity.

  4. Postprocessing algorithm for automated analysis of pelvic intraoperative neuromonitoring signals

    Directory of Open Access Journals (Sweden)

    Wegner Celine

    2016-09-01

    Full Text Available Two-dimensional pelvic intraoperative neuromonitoring (pIONM®) is based on electric stimulation of autonomic nerves under observation of electromyography of the internal anal sphincter (IAS) and manometry of the urinary bladder. The method provides nerve identification and verification of the nerves' functional integrity. pIONM® is currently gaining increased attention at a time when preservation of function is becoming more and more important. Ongoing technical and methodological developments in experimental and clinical settings require further analysis of the obtained signals. This work describes a postprocessing algorithm for pIONM® signals, developed for automated analysis of large amounts of recorded data. The analysis routine includes a graphical representation of the recorded signals in the time and frequency domains, as well as a quantitative evaluation by means of features calculated from the time and frequency domains. The produced plots are automatically summarized in a PowerPoint presentation, and the calculated features are entered into a standardized Excel sheet, ready for statistical analysis.

  5. MED: a new non-supervised gene prediction algorithm for bacterial and archaeal genomes

    Directory of Open Access Journals (Sweden)

    Yang Yi-Fan

    2007-03-01

    Full Text Available Abstract Background Despite remarkable success in the computational prediction of genes in Bacteria and Archaea, a lack of comprehensive understanding of prokaryotic gene structures prevents further elucidation of differences among genomes. It therefore remains interesting to develop new ab initio algorithms which not only accurately predict genes, but also facilitate comparative studies of prokaryotic genomes. Results This paper describes a new prokaryotic gene-finding algorithm based on a comprehensive statistical model of protein-coding Open Reading Frames (ORFs) and Translation Initiation Sites (TISs). The former is based on a linguistic "Entropy Density Profile" (EDP) model of coding DNA sequence, and the latter comprises several relevant features related to translation initiation. They are combined to form a so-called Multivariate Entropy Distance (MED) algorithm, MED 2.0, that incorporates several strategies in an iterative program. The iterations enable us to develop a non-supervised learning process and to obtain a set of genome-specific parameters for the gene structure before making the prediction of genes. Conclusion Results of extensive tests show that MED 2.0 achieves a competitively high performance in gene prediction for both 5' and 3' end matches, compared to the current best prokaryotic gene finders. The advantage of MED 2.0 is particularly evident for GC-rich genomes and archaeal genomes. Furthermore, the genome-specific parameters given by MED 2.0 match the current understanding of prokaryotic genomes and may serve as tools for comparative genomic studies. In particular, MED 2.0 is shown to reveal divergent translation initiation mechanisms in archaeal genomes while making a more accurate prediction of TISs compared to existing gene finders and the current GenBank annotation.

  6. A Parallel Genetic Algorithm for Automated Electronic Circuit Design

    Science.gov (United States)

    Long, Jason D.; Colombano, Silvano P.; Haith, Gary L.; Stassinopoulos, Dimitris

    2000-01-01

    Parallelized versions of genetic algorithms (GAs) are popular primarily for three reasons: the GA is an inherently parallel algorithm, typical GA applications are very compute intensive, and powerful computing platforms, especially Beowulf-style computing clusters, are becoming more affordable and easier to implement. In addition, the low communication bandwidth required allows the use of inexpensive networking hardware such as standard office ethernet. In this paper we describe a parallel GA and its use in automated high-level circuit design. Genetic algorithms are a type of trial-and-error search technique that are guided by principles of Darwinian evolution. Just as the genetic material of two living organisms can intermix to produce offspring that are better adapted to their environment, GAs expose genetic material, frequently strings of 1s and 0s, to the forces of artificial evolution: selection, mutation, recombination, etc. GAs start with a pool of randomly-generated candidate solutions which are then tested and scored with respect to their utility. Solutions are then bred by probabilistically selecting high quality parents and recombining their genetic representations to produce offspring solutions. Offspring are typically subjected to a small amount of random mutation. After a pool of offspring is produced, this process iterates until a satisfactory solution is found or an iteration limit is reached. Genetic algorithms have been applied to a wide variety of problems in many fields, including chemistry, biology, and many engineering disciplines. There are many styles of parallelism used in implementing parallel GAs. One such method is called the master-slave or processor farm approach. In this technique, slave nodes are used solely to compute fitness evaluations (the most time consuming part). The master processor collects fitness scores from the nodes and performs the genetic operators (selection, reproduction, variation, etc.). Because of dependency
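
    A toy rendering of the master-slave (processor-farm) scheme: worker processes evaluate fitness in parallel while the master performs selection, crossover and mutation. The bit-counting fitness below is a placeholder for an expensive circuit simulation, and all parameters are invented.

      import random
      from multiprocessing import Pool

      GENOME_LEN, POP_SIZE, GENERATIONS = 64, 40, 30

      def fitness(genome):                   # farmed out to the slave processes
          return sum(genome)

      def breed(parents):
          a, b = random.sample(parents, 2)
          cut = random.randrange(GENOME_LEN)          # one-point crossover
          child = a[:cut] + b[cut:]
          child[random.randrange(GENOME_LEN)] ^= 1    # point mutation
          return child

      if __name__ == "__main__":
          pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
                 for _ in range(POP_SIZE)]
          with Pool() as pool:               # the slave nodes
              for _ in range(GENERATIONS):
                  scores = pool.map(fitness, pop)     # parallel fitness evaluation
                  ranked = [g for _, g in sorted(zip(scores, pop), reverse=True)]
                  parents = ranked[:POP_SIZE // 2]    # master: selection
                  pop = parents + [breed(parents) for _ in range(POP_SIZE // 2)]
          print("best fitness:", max(map(fitness, pop)))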

  7. An Automated Algorithm to Screen Massive Training Samples for a Global Impervious Surface Classification

    Science.gov (United States)

    Tan, Bin; Brown de Colstoun, Eric; Wolfe, Robert E.; Tilton, James C.; Huang, Chengquan; Smith, Sarah E.

    2012-01-01

    An algorithm is developed to automatically screen outliers from the massive training samples for the Global Land Survey - Imperviousness Mapping Project (GLS-IMP). GLS-IMP is to produce a global 30 m spatial resolution impervious cover data set for the years 2000 and 2010 based on the Landsat Global Land Survey (GLS) data set. This unprecedented high resolution impervious cover data set is not only significant to urbanization studies but also desired by global carbon, hydrology, and energy balance research. A supervised classification method, the regression tree, is applied in this project, and a set of accurate training samples is the key to such supervised classifications. Here we developed global-scale training samples from fine resolution (about 1 m) satellite data (Quickbird and Worldview2) and then aggregated the fine resolution impervious cover maps to 30 m resolution. In order to improve the classification accuracy, the training samples should be screened before being used to train the regression tree. It is impossible to manually screen 30 m resolution training samples collected globally: in Europe alone, there are 174 training sites, the size of the sites ranges from 4.5 km by 4.5 km to 8.1 km by 3.6 km, and the number of training samples is over six million. Therefore, we developed this automated statistics-based algorithm to screen the training samples at two levels: the site level and the scene level. At the site level, all the training samples are divided into 10 groups according to the percentage of impervious surface within a sample pixel; the samples falling in each 10% interval form one group. For each group, both univariate and multivariate outliers are detected and removed. Then the screening process escalates to the scene level, where a similar screening process with a looser threshold is applied, considering the possible variance due to site differences. We do not perform the screening process across scenes because the scenes might vary due to
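
    The two screens map onto standard statistics: a z-score cut for univariate outliers and a Mahalanobis-distance cut for multivariate ones. A sketch for one 10% group, with invented data and cut-offs, is shown below.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      group = rng.normal(size=(1000, 6))     # training samples in one 10% group
      group[0] = 8.0                         # an obvious outlier

      # Univariate screen: drop any sample with |z| > 3 in some band.
      z = np.abs(stats.zscore(group, axis=0))
      group = group[(z < 3).all(axis=1)]

      # Multivariate screen: Mahalanobis distance against a chi-square cut-off.
      centred = group - group.mean(axis=0)
      cov_inv = np.linalg.inv(np.cov(group, rowvar=False))
      d2 = np.einsum("ij,jk,ik->i", centred, cov_inv, centred)
      group = group[d2 < stats.chi2.ppf(0.999, df=group.shape[1])]
      print("samples kept:", len(group))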

  8. OPERATING ALGORITHM FOR THE CONTROL SYSTEM OF AUTOMATED VEHICLE GEARBOXES

    Directory of Open Access Journals (Sweden)

    O. Smirnov

    2009-01-01

    Full Text Available The development of operating algorithms for the control systems of automated gearboxes in vehicles is considered, and the results of their practical use are illustrated with the example of a KamAZ truck.

  9. A fully automated algorithm of baseline correction based on wavelet feature points and segment interpolation

    Science.gov (United States)

    Qian, Fang; Wu, Yihui; Hao, Peng

    2017-11-01

    Baseline correction is a very important part of pre-processing. A baseline in the spectrum signal can induce uneven amplitude shifts across different wavenumbers and lead to bad results, so these amplitude shifts should be compensated before further analysis. Many algorithms are used to remove the baseline; however, fully automated baseline correction is more convenient in practical applications. A fully automated algorithm based on wavelet feature points and segment interpolation (AWFPSI) is proposed. This algorithm finds feature points through the continuous wavelet transform and estimates the baseline through segment interpolation. AWFPSI is compared with three commonly used fully automated and semi-automated algorithms, using a simulated spectrum signal, a visible spectrum signal and a Raman spectrum signal. The results show that AWFPSI gives better accuracy and has the advantage of being easy to use.
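
    The two stages can be approximated in a few lines: a continuous-wavelet-transform peak finder locates the feature points, and the baseline is interpolated through the peak-free segments. This is a loose sketch of the idea with a synthetic spectrum, not the AWFPSI implementation itself.

      import numpy as np
      from scipy.signal import find_peaks_cwt

      x = np.linspace(0, 100, 1000)
      baseline_true = 0.02 * x + 1.0                     # slowly varying baseline
      signal = (baseline_true
                + 3 * np.exp(-((x - 30) ** 2) / 4)       # two spectral peaks
                + 2 * np.exp(-((x - 70) ** 2) / 9))

      # Stage 1: wavelet-based feature (peak) points.
      peak_idx = find_peaks_cwt(signal, widths=np.arange(5, 40))

      # Stage 2: mask a window around each peak, interpolate through the rest.
      mask = np.ones_like(signal, dtype=bool)
      for p in peak_idx:
          mask[max(0, p - 50):p + 50] = False
      baseline_est = np.interp(x, x[mask], signal[mask]) # segment interpolation

      print("max baseline error:", np.abs(baseline_est - baseline_true).max())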

  10. Evaluation of an automated single-channel sleep staging algorithm

    Directory of Open Access Journals (Sweden)

    Wang Y

    2015-09-01

    Full Text Available Ying Wang,1 Kenneth A Loparo,1,2 Monica R Kelly,3 Richard F Kaplan1 1General Sleep Corporation, Euclid, OH, 2Department of Electrical Engineering and Computer Science, Case Western Reserve University, Cleveland, OH, 3Department of Psychology, University of Arizona, Tucson, AZ, USA Background: We previously published the performance evaluation of an automated electroencephalography (EEG)-based single-channel sleep–wake detection algorithm called Z-ALG, used by the Zmachine® sleep monitoring system. The objective of this paper is to evaluate the performance of a new algorithm called Z-PLUS, which further differentiates sleep as detected by Z-ALG into Light Sleep, Deep Sleep, and Rapid Eye Movement (REM) Sleep, against laboratory polysomnography (PSG) using a consensus of expert visual scorers. Methods: Single-night, in-lab PSG recordings from 99 subjects (52F/47M, 18–60 years, median age 32.7 years, including both normal sleepers and those reporting a variety of sleep complaints consistent with chronic insomnia, sleep apnea, and restless leg syndrome, as well as those taking selective serotonin reuptake inhibitor/serotonin–norepinephrine reuptake inhibitor antidepressant medications), previously evaluated using Z-ALG, were re-examined using Z-PLUS. EEG data collected from electrodes placed at the differential-mastoids (A1–A2) were processed by Z-ALG to determine wake and sleep, and the epochs detected as sleep were then further processed by Z-PLUS to differentiate Light Sleep, Deep Sleep, and REM. EEG data were visually scored by multiple certified polysomnographic technologists according to the Rechtschaffen and Kales criteria, and then combined using a majority-voting rule to create a PSG Consensus score file for each of the 99 subjects. Z-PLUS output was compared to the PSG Consensus score files for both epoch-by-epoch (eg, sensitivity, specificity, and kappa) and sleep stage-related statistics (eg, Latency to Deep Sleep, Latency to REM

  11. Diagnostic information system dynamics in the evaluation of machine learning algorithms for the supervision of energy efficiency of district heating-supplied buildings

    International Nuclear Information System (INIS)

    Kiluk, Sebastian

    2017-01-01

    Highlights: • Energy efficiency classification sustainability benefits from knowledge prediction. • Diagnostic classification can be validated with its dynamics and current data. • Diagnostic classification dynamics provides novelty extraction for knowledge update. • Data mining comparison can be performed with knowledge dynamics and uncertainty. • Diagnostic information refinement benefits from comparing classifier dynamics. - Abstract: Modern ways of exploring the diagnostic knowledge provided by data mining and machine learning raise some concern about how to evaluate the quality of the output knowledge, usually represented by information systems. Especially in district heating, the stationarity of efficiency models, and thus the relevance of the diagnostic classification system, cannot be ensured due to the impact of social, economic or technological changes, which are hard to identify or predict. Therefore, data mining and machine learning have become an attractive strategy for automatically and continuously absorbing such dynamics. This paper presents a new method for the evaluation and comparison of diagnostic information systems gathered algorithmically in district heating efficiency supervision, based on exploring the evolution of the information system and analyzing its dynamic features. The process of data mining and knowledge discovery was applied to data acquired from district heating substations' energy meters to provide the automated discovery of the diagnostic knowledge base necessary for the efficiency supervision of district heating-supplied buildings. The implemented algorithm consists of several steps of processing the billing data, including preparation, segmentation, aggregation and knowledge discovery stages, where classes of abstract models representing energy efficiency constitute an information system representing diagnostic knowledge about the energy efficiency of buildings favorably operating under similar climate conditions and

  12. Automated Test Assembly for Cognitive Diagnosis Models Using a Genetic Algorithm

    Science.gov (United States)

    Finkelman, Matthew; Kim, Wonsuk; Roussos, Louis A.

    2009-01-01

    Much recent psychometric literature has focused on cognitive diagnosis models (CDMs), a promising class of instruments used to measure the strengths and weaknesses of examinees. This article introduces a genetic algorithm to perform automated test assembly alongside CDMs. The algorithm is flexible in that it can be applied whether the goal is to…

  13. Predicting incomplete gene microarray data with the use of supervised learning algorithms

    CSIR Research Space (South Africa)

    Twala, B

    2010-10-01

    Full Text Available that prediction using supervised learning can be improved in probabilistic terms given incomplete microarray data. This imputation approach is based on the a priori probability of each value determined from the instances at that node of a decision tree (PDT...

  14. A semi-supervised segmentation algorithm as applied to k-means ...

    African Journals Online (AJOL)

    Segmentation (or partitioning) of data for the purpose of enhancing predictive modelling is a well-established practice in the banking industry. Unsupervised and supervised approaches are the two main streams of segmentation and examples exist where the application of these techniques improved the performance of ...

  15. Balancing Inverted Pendulum by Angle Sensing Using Fuzzy Logic Supervised PID Controller Optimized by Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Ashutosh K. AGARWAL

    2011-10-01

    Full Text Available Genetic algorithms are robust search techniques based on the principles of evolution. A genetic algorithm maintains a population of encoded solutions and guides the population towards the optimum solution. This important property of genetic algorithms is used in this paper to stabilize an inverted pendulum system. The paper highlights the application and stability of an inverted pendulum using a PID controller with a fuzzy-logic genetic algorithm supervisor. There are a large number of well-established search techniques in use within the information technology industry. We propose a method to control the steady-state error and overshoot of an inverted pendulum using a genetic algorithm technique.

  16. Automation of a high risk medication regime algorithm in a home health care population.

    Science.gov (United States)

    Olson, Catherine H; Dierich, Mary; Westra, Bonnie L

    2014-10-01

    Create an automated algorithm for predicting elderly patients' medication-related risks for readmission, and validate it by comparing its results with a manual analysis of the same patient population. Outcome and Assessment Information Set (OASIS) and medication data were reused from a previous, manual study of 911 patients from 15 Medicare-certified home health care agencies. The medication data were converted into standardized drug codes using APIs managed by the National Library of Medicine (NLM), and then integrated into an automated algorithm that calculates patients' high risk medication regime scores (HRMRs). A comparison of the results of the algorithm and the manual process was conducted to determine how frequently the algorithm derived HRMR scores that are predictive of readmission. HRMR scores are composed of polypharmacy (number of drugs), Potentially Inappropriate Medications (PIM) (drugs risky to the elderly), and the Medication Regimen Complexity Index (MRCI) (complex dose forms, instructions or administration). The algorithm produced polypharmacy, PIM, and MRCI scores that matched 99%, 87% and 99%, respectively, of the scores from the manual analysis. Imperfect match rates resulted from discrepancies in how drugs were classified and coded in the manual analysis vs. the automated algorithm. The HRMR rules lack clarity, resulting in clinical judgments for manual coding that were difficult to replicate in the automated analysis. The high match rates for the three measures suggest that an automated clinical tool could use patients' medication records to predict their risks of avoidable readmissions. Copyright © 2014 Elsevier Inc. All rights reserved.
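
    The three HRMR components can be sketched as simple functions of a medication list. The PIM set and the complexity proxy below are invented placeholders — the study used standardized drug codes and published PIM/MRCI criteria, which are far richer than this.

      PIM_LIST = {"diazepam", "amitriptyline", "glyburide"}  # illustrative only

      def hrmr_components(medications):
          polypharmacy = len(medications)                    # number of drugs
          pim = sum(m["name"] in PIM_LIST for m in medications)
          # Crude complexity proxy: extra points for non-oral routes and >1 dose/day.
          mrci = sum((m["doses_per_day"] > 1) + (m["route"] != "oral")
                     for m in medications)
          return polypharmacy, pim, mrci

      patient = [
          {"name": "metformin", "route": "oral", "doses_per_day": 2},
          {"name": "diazepam", "route": "oral", "doses_per_day": 1},
          {"name": "insulin", "route": "subcutaneous", "doses_per_day": 3},
      ]
      print(hrmr_components(patient))  # (polypharmacy, PIM count, complexity proxy)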

  17. Automated cell analysis tool for a genome-wide RNAi screen with support vector machine based supervised learning

    Science.gov (United States)

    Remmele, Steffen; Ritzerfeld, Julia; Nickel, Walter; Hesser, Jürgen

    2011-03-01

    RNAi-based high-throughput microscopy screens have become an important tool in the biological sciences for decrypting largely unknown biological functions of human genes. Manual analysis is impossible for such screens, since the number of image data sets can often be in the hundreds of thousands; reliable automated tools are thus required to analyse the fluorescence microscopy image data sets, which usually contain two or more reaction channels. The image analysis tool presented here is designed to analyse an RNAi screen investigating the intracellular trafficking and targeting of acylated Src kinases. In this specific screen, a data set consists of three reaction channels, and the investigated cells can appear in different phenotypes. The main issue of the image processing task is an automatic cell segmentation that has to be robust and accurate for all the different phenotypes, followed by phenotype classification. The cell segmentation is done in two steps: first the cell nuclei are segmented, and then a classifier-enhanced region growing based on the cell nuclei is used to segment the cells. The classification of the cells is realized by a support vector machine, which has to be trained manually using supervised learning. Furthermore, the tool is brightness invariant, allowing for different staining quality, and it provides a quality control that copes with typical defects during preparation and acquisition. A first version of the tool has already been successfully applied to an RNAi screen containing three hundred thousand image data sets, and the SVM-extended version is designed for additional screens.

  18. An Overview of the Automated Dispatch Controller Algorithms in the System Advisor Model (SAM)

    Energy Technology Data Exchange (ETDEWEB)

    DiOrio, Nicholas A [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2017-11-22

    Three automated dispatch modes have been added to the battery model within the System Advisor Model. These controllers have been developed to perform peak shaving in an automated fashion, providing users with a way to see the benefit of reduced demand charges without manually programming a complicated dispatch control. A flexible input option allows more advanced interaction with the automated controller. This document describes the algorithms in detail and presents brief results on their use and limitations.

  19. Costs and consequences of automated algorithms versus manual grading for the detection of referable diabetic retinopathy.

    Science.gov (United States)

    Scotland, G S; McNamee, P; Fleming, A D; Goatman, K A; Philip, S; Prescott, G J; Sharp, P F; Williams, G J; Wykes, W; Leese, G P; Olson, J A

    2010-06-01

    To assess the cost-effectiveness of an improved automated grading algorithm for diabetic retinopathy against a previously described algorithm, and in comparison with manual grading. Efficacy of the alternative algorithms was assessed using a reference graded set of images from three screening centres in Scotland (1253 cases with observable/referable retinopathy and 6333 individuals with mild or no retinopathy). Screening outcomes and grading and diagnosis costs were modelled for a cohort of 180,000 people, with prevalence of referable retinopathy at 4%. Algorithm (b), which combines image quality assessment with detection algorithms for microaneurysms (MA), blot haemorrhages and exudates, was compared with a simpler algorithm (a) (using image quality assessment and MA/dot haemorrhage (DH) detection), and the current practice of manual grading. Compared with algorithm (a), algorithm (b) would identify an additional 113 cases of referable retinopathy for an incremental cost of £68 per additional case. Compared with manual grading, automated grading would be expected to identify between 54 and 123 fewer referable cases, for a grading cost saving between £3834 and £1727 per case missed. Extrapolation modelling over a 20-year time horizon suggests manual grading would cost between £25,676 and £267,115 per additional quality-adjusted life year gained. Algorithm (b) is more cost-effective than the algorithm based on quality assessment and MA/DH detection. With respect to the value of introducing automated detection systems into screening programmes, automated grading operates within the recommended national standards in Scotland and is likely to be considered a cost-effective alternative to manual disease/no disease grading.

  20. A Solution Generator Algorithm for Decision Making based Automated Negotiation in the Construction Domain

    Directory of Open Access Journals (Sweden)

    Arazi Idrus

    2017-12-01

    Full Text Available In this paper, we present our work in progress on a proposed framework for automated negotiation in the construction domain. The proposed framework enables software agents to conduct negotiations and autonomously make value-based decisions. The framework consists of three main components: a solution generator algorithm, a negotiation algorithm, and a conflict resolution algorithm. This paper extends the discussion of the solution generator algorithm, which enables software agents to generate solutions and rank them from the 1st to the nth solution for the negotiation stage of the operation. The solution generator algorithm consists of three steps: review solutions, rank solutions, and form ranked solutions. For validation purposes, we present a scenario that utilizes the proposed algorithm to rank solutions. The validation shows that the algorithm is promising; however, it also highlights the conflict between different parties that needs further negotiation action.

  1. A novel automated spike sorting algorithm with adaptable feature extraction.

    Science.gov (United States)

    Bestel, Robert; Daus, Andreas W; Thielemann, Christiane

    2012-10-15

    To study the electrophysiological properties of neuronal networks, in vitro studies based on microelectrode arrays have become a viable tool for analysis. Although in constant progress, a challenging task still remains in this area: the development of an efficient spike sorting algorithm that allows accurate signal analysis at the single-cell level. Most sorting algorithms currently available extract only a specific feature type, such as the principal components or wavelet coefficients of the measured spike signals, in order to separate different spike shapes generated by different neurons. However, due to the great variety in the obtained spike shapes, the derivation of an optimal feature set is still a very complex issue that current algorithms struggle with. To address this problem, we propose a novel algorithm that (i) extracts a variety of geometric, wavelet and principal component-based features and (ii) automatically derives a feature subset most suitable for sorting an individual set of spike signals. The new approach evaluates the probability distribution of the obtained spike features and consequently determines the candidates most suitable for the actual spike sorting. These candidates can be formed into an individually adjusted set of spike features, allowing a separation of the various shapes present in the obtained neuronal signal by a subsequent expectation maximisation clustering algorithm. Test results with simulated data files and data obtained from chick embryonic neurons cultured on microelectrode arrays showed excellent classification results, indicating the superior performance of the described algorithm. Copyright © 2012 Elsevier B.V. All rights reserved.
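
    A condensed Python sketch of such a pipeline, assuming scikit-learn and substituting a simple variance-based feature selection for the paper's probability-distribution analysis:

        # Sketch: multi-feature extraction, crude subset selection, EM clustering.
        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.mixture import GaussianMixture

        def sort_spikes(spikes, n_units=3):
            """spikes: (n_spikes, n_samples) array of aligned waveforms."""
            geometric = np.column_stack([spikes.min(axis=1),        # trough depth
                                         spikes.max(axis=1),        # peak height
                                         np.ptp(spikes, axis=1)])   # peak-to-peak
            pca_feats = PCA(n_components=3).fit_transform(spikes)
            features = np.column_stack([geometric, pca_feats])
            # Keep the highest-variance features; a stand-in for the paper's
            # probability-distribution-based candidate selection.
            keep = np.argsort(features.var(axis=0))[-4:]
            # Gaussian mixture fitted by expectation maximisation.
            return GaussianMixture(n_components=n_units).fit_predict(features[:, keep])

        spikes = np.random.randn(200, 48)
        print(np.bincount(sort_spikes(spikes)))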

  2. Automated Photogrammetric Image Matching with Sift Algorithm and Delaunay Triangulation

    DEFF Research Database (Denmark)

    Karagiannis, Georgios; Antón Castro, Francesc/François; Mioc, Darka

    2016-01-01

    An algorithm for image matching of multi-sensor and multi-temporal satellite images is developed. The method is based on the SIFT feature detector proposed by Lowe in (Lowe, 1999). First, SIFT feature points are detected independently in two images (reference and sensed image). The features detec...... of each feature set for each image are computed. The isomorphism of the Delaunay triangulations is determined to guarantee the quality of the image matching. The algorithm is implemented in Matlab and tested on World-View 2, SPOT6 and TerraSAR-X image patches....
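
    A rough Python sketch of this matching chain, assuming OpenCV and SciPy; the triangulation-isomorphism check is only indicated by a comment:

        # Sketch: SIFT keypoint matching plus Delaunay triangulation of matches.
        import cv2
        import numpy as np
        from scipy.spatial import Delaunay

        def match_with_triangulation(img_ref, img_sensed):
            """Inputs: 8-bit grayscale images (reference and sensed)."""
            sift = cv2.SIFT_create()
            kp1, des1 = sift.detectAndCompute(img_ref, None)
            kp2, des2 = sift.detectAndCompute(img_sensed, None)
            matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
            matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
            pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
            pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
            tri1, tri2 = Delaunay(pts1), Delaunay(pts2)
            # Comparing tri1.simplices with tri2.simplices would approximate the
            # isomorphism check used to vet the quality of the matching.
            return pts1, pts2, tri1, tri2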

  3. New signal processing algorithms for automated external defibrillators

    OpenAIRE

    Irusta Zarandona, Unai

    2017-01-01

    [ES] Ventricular fibrillation (VF) is the first rhythm recorded in 40% of sudden deaths from out-of-hospital cardiac arrest (OHCA). The only effective treatment for VF is defibrillation by means of an electric shock. Outside the hospital, the shock is delivered by an automated external defibrillator (AED), which first analyzes the patient's electrocardiogram (ECG) and checks whether it presents a shockable rhythm. Survival in a case of OHCA ...

  4. AUTOMATION PROGRAM FOR RECOGNITION OF ALGORITHM SOLUTION OF MATHEMATIC TASK

    Directory of Open Access Journals (Sweden)

    Denis N. Butorin

    2014-01-01

    Full Text Available The article describes a technology for managing testing tasks in a computer program, developed for the recognition of the algorithm used to solve a mathematical task. The use of a hierarchical structure for a special set of testing questions is justified, and an implementation of the described tasks in the computer program openSEE is presented.

  5. AUTOMATION PROGRAM FOR RECOGNITION OF ALGORITHM SOLUTION OF MATHEMATIC TASK

    OpenAIRE

    Denis N. Butorin

    2014-01-01

    The article describes a technology for managing testing tasks in a computer program, developed for the recognition of the algorithm used to solve a mathematical task. The use of a hierarchical structure for a special set of testing questions is justified, and an implementation of the described tasks in the computer program openSEE is presented.

  6. A Parallel Genetic Algorithm for Automated Electronic Circuit Design

    Science.gov (United States)

    Lohn, Jason D.; Colombano, Silvano P.; Haith, Gary L.; Stassinopoulos, Dimitris; Norvig, Peter (Technical Monitor)

    2000-01-01

    We describe a parallel genetic algorithm (GA) that automatically generates circuit designs using evolutionary search. A circuit-construction programming language is introduced and we show how evolution can generate practical analog circuit designs. Our system allows circuit size (number of devices), circuit topology, and device values to be evolved. We present experimental results as applied to analog filter and amplifier design tasks.

  7. Automated detection and classification of cryptographic algorithms in binary programs through machine learning

    OpenAIRE

    Hosfelt, Diane Duros

    2015-01-01

    Threats from the internet, particularly malicious software (i.e., malware), often use cryptographic algorithms to disguise their actions and even to take control of a victim's system (as in the case of ransomware). Malware and other threats proliferate too quickly for the time-consuming traditional methods of binary analysis to be effective. By automating the detection and classification of cryptographic algorithms, we can speed program analysis and more efficiently combat malware. This thesis wil...

  8. Statistical algorithm for automated signature analysis of power spectral density data

    International Nuclear Information System (INIS)

    Piety, K.R.

    1977-01-01

    A statistical algorithm has been developed and implemented on a minicomputer system for on-line surveillance applications. Power spectral density (PSD) measurements on process signals are the performance signatures that characterize the "health" of the monitored equipment. Statistical methods provide a quantitative basis for automating the detection of anomalous conditions. The surveillance algorithm has been tested on signals from neutron sensors, proximeter probes, and accelerometers to determine its potential for monitoring nuclear reactors and rotating machinery.
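
    A hedged sketch of the general idea, assuming SciPy's Welch estimator and a simple z-score threshold in place of the report's specific statistical test:

        # Sketch: PSD "signature" per record, flag deviations from a healthy baseline.
        import numpy as np
        from scipy.signal import welch

        def psd_signature(x, fs):
            """Return frequencies and the PSD of signal x in dB."""
            f, pxx = welch(x, fs=fs, nperseg=1024)
            return f, 10 * np.log10(pxx)

        def is_anomalous(baseline_db, test_db, n_sigma=3.0):
            """baseline_db: (n_records, n_freqs) stack of healthy signatures."""
            resid = test_db - baseline_db.mean(axis=0)
            return bool(np.any(np.abs(resid) > n_sigma * baseline_db.std(axis=0)))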

  9. Development of Nuclear Power Plant Safety Evaluation Method for the Automation Algorithm Application

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Seung Geun; Seong, Poong Hyun [KAIST, Daejeon (Korea, Republic of)

    2016-10-15

    It is commonly believed that replacing human operators with automated systems would guarantee greater efficiency, lower workloads, and fewer human errors. Conventional machine learning techniques, however, are considered incapable of handling complex situations in nuclear power plants (NPPs). Because of such issues, automation has not been actively adopted, although human error probability increases drastically during abnormal situations in NPPs due to information overload, high workload, and the short time available for diagnosis. Recently, new machine learning techniques known as 'deep learning' have been actively applied to many fields, and deep learning-based artificial intelligences (AIs) are showing better performance than conventional AIs. In 2015, the deep Q-network (DQN), one of these deep learning techniques, was developed and applied to train an AI that automatically plays various Atari 2600 games, and this AI surpassed human-level play in many of them. In 2016, 'AlphaGo', developed by Google DeepMind on the basis of deep learning to play the game of Go (i.e., Baduk), defeated Se-dol Lee, the world Go champion, with a score of 4:1. As part of the effort to reduce human error in NPPs, the ultimate goal of this study is the development of an automation algorithm that can cover various situations in NPPs. As a first step, a quantitative, real-time NPP safety evaluation method is being developed in order to provide training criteria for the automation algorithm. For that purpose, the early warning score (EWS) concept from the medical field was adopted, and its applicability is investigated in this paper. In practice, full automation (i.e., fully replacing human operators) may require much more time for validation and investigation of side effects after the automation algorithm is developed, so adoption in the form of full automation will take a long time.

  10. Development of Nuclear Power Plant Safety Evaluation Method for the Automation Algorithm Application

    International Nuclear Information System (INIS)

    Kim, Seung Geun; Seong, Poong Hyun

    2016-01-01

    It is commonly believed that replacing human operators with automated systems would guarantee greater efficiency, lower workloads, and fewer human errors. Conventional machine learning techniques, however, are considered incapable of handling complex situations in nuclear power plants (NPPs). Because of such issues, automation has not been actively adopted, although human error probability increases drastically during abnormal situations in NPPs due to information overload, high workload, and the short time available for diagnosis. Recently, new machine learning techniques known as 'deep learning' have been actively applied to many fields, and deep learning-based artificial intelligences (AIs) are showing better performance than conventional AIs. In 2015, the deep Q-network (DQN), one of these deep learning techniques, was developed and applied to train an AI that automatically plays various Atari 2600 games, and this AI surpassed human-level play in many of them. In 2016, 'AlphaGo', developed by Google DeepMind on the basis of deep learning to play the game of Go (i.e., Baduk), defeated Se-dol Lee, the world Go champion, with a score of 4:1. As part of the effort to reduce human error in NPPs, the ultimate goal of this study is the development of an automation algorithm that can cover various situations in NPPs. As a first step, a quantitative, real-time NPP safety evaluation method is being developed in order to provide training criteria for the automation algorithm. For that purpose, the early warning score (EWS) concept from the medical field was adopted, and its applicability is investigated in this paper. In practice, full automation (i.e., fully replacing human operators) may require much more time for validation and investigation of side effects after the automation algorithm is developed, so adoption in the form of full automation will take a long time.

  11. Novel Approaches for Diagnosing Melanoma Skin Lesions Through Supervised and Deep Learning Algorithms.

    Science.gov (United States)

    Premaladha, J; Ravichandran, K S

    2016-04-01

    Dermoscopy is a technique used to capture images of the skin, and these images are useful for analyzing different types of skin disease. Malignant melanoma is a type of skin cancer that can be fatal. Early detection of melanoma prevents death, as clinicians can treat patients in time to increase their chances of survival. Only a few machine learning algorithms have been developed to detect melanoma from its features. This paper proposes a Computer Aided Diagnosis (CAD) system that incorporates efficient algorithms to classify and predict melanoma. Image enhancement is done using the Contrast Limited Adaptive Histogram Equalization (CLAHE) technique and a median filter. A new segmentation algorithm called Normalized Otsu's Segmentation (NOS) is implemented to segment the affected skin lesion from the normal skin, which overcomes the problem of variable illumination. Fifteen features extracted from the segmented images are fed into the proposed classification techniques, namely deep learning-based neural networks and a hybrid Adaboost-Support Vector Machine (SVM) algorithm. The proposed system was tested and validated with 992 images (malignant and benign lesions) and provides a high classification accuracy of 93%. The proposed CAD system can assist dermatologists in confirming the diagnosis and avoiding excisional biopsies.
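
    The preprocessing and segmentation front end described here can be sketched with OpenCV; plain Otsu thresholding on a normalized image stands in for the authors' NOS step, and classifier training is omitted:

        # Sketch: CLAHE + median filtering, then Otsu segmentation of the lesion.
        import cv2

        def segment_lesion(gray):
            """gray: 8-bit grayscale dermoscopy image -> binary lesion mask."""
            clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
            enhanced = cv2.medianBlur(clahe.apply(gray), 5)
            # Normalize intensities before Otsu to reduce illumination variability
            # (a simplification of Normalized Otsu's Segmentation).
            norm = cv2.normalize(enhanced, None, 0, 255, cv2.NORM_MINMAX)
            _, mask = cv2.threshold(norm, 0, 255,
                                    cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
            return mask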

  12. Separation of pulsar signals from noise using supervised machine learning algorithms

    Science.gov (United States)

    Bethapudi, S.; Desai, S.

    2018-04-01

    We evaluate the performance of four different machine learning (ML) algorithms: an Artificial Neural Network Multi-Layer Perceptron (ANN MLP), Adaboost, Gradient Boosting Classifier (GBC), and XGBoost, for the separation of pulsars from radio frequency interference (RFI) and other sources of noise, using a dataset obtained from the post-processing of a pulsar search pipeline. This dataset was previously used for the cross-validation of the SPINN-based machine learning engine, obtained from the reprocessing of the HTRU-S survey data (Morello et al., 2014). We have used the Synthetic Minority Over-sampling Technique (SMOTE) to deal with the high class imbalance in the dataset. We report a variety of quality scores from all four of these algorithms on both the non-SMOTE and SMOTE datasets. For all the above ML methods, we report high accuracy and G-mean for both the non-SMOTE and SMOTE cases. We study the feature importances using Adaboost, GBC, and XGBoost, and also with the minimum Redundancy Maximum Relevance approach, to report an algorithm-agnostic feature ranking. From these methods, we find the signal-to-noise ratio of the folded profile to be the best feature. We find that all the ML algorithms report FPRs about an order of magnitude lower than the corresponding FPRs obtained in Morello et al. (2014), for the same recall value.
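
    A minimal sketch of the imbalance handling, assuming the imbalanced-learn and scikit-learn APIs; X and y stand for the candidate features and labels:

        # Sketch: oversample the minority (pulsar) class on the training split only,
        # then fit one of the evaluated classifiers.
        from imblearn.over_sampling import SMOTE
        from sklearn.ensemble import GradientBoostingClassifier
        from sklearn.metrics import recall_score
        from sklearn.model_selection import train_test_split

        def train_with_smote(X, y):
            X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y,
                                                      test_size=0.2)
            X_res, y_res = SMOTE().fit_resample(X_tr, y_tr)   # balance classes
            clf = GradientBoostingClassifier().fit(X_res, y_res)
            return clf, recall_score(y_te, clf.predict(X_te))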

  13. An algorithm for automated layout of process description maps drawn in SBGN.

    Science.gov (United States)

    Genc, Begum; Dogrusoz, Ugur

    2016-01-01

    Evolving technology has increased the focus on genomics. The combination of today's advanced techniques with decades of molecular biology research has yielded huge amounts of pathway data. A standard, named the Systems Biology Graphical Notation (SBGN), was recently introduced to allow scientists to represent biological pathways in an unambiguous, easy-to-understand and efficient manner. Although there are a number of automated layout algorithms for various types of biological networks, currently none specializes in process description (PD) maps as defined by SBGN. We propose a new automated layout algorithm for PD maps drawn in SBGN. Our algorithm is based on a force-directed automated layout algorithm called Compound Spring Embedder (CoSE). On top of the existing force scheme, additional heuristics employing new types of forces and movement rules are defined to address SBGN-specific rules. Our algorithm is the only automatic layout algorithm that properly addresses all SBGN rules for drawing PD maps, including placement of substrates and products of process nodes on opposite sides, compact tiling of members of molecular complexes, and extensive use of nested structures (compound nodes) to properly draw cellular locations and molecular complex structures. As demonstrated experimentally, the algorithm results in significant improvements over use of a generic layout algorithm such as CoSE in addressing SBGN rules on top of commonly accepted graph drawing criteria. An implementation of our algorithm in Java is available within the ChiLay library (https://github.com/iVis-at-Bilkent/chilay). Contact: ugur@cs.bilkent.edu.tr or dogrusoz@cbio.mskcc.org. Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press.

  14. Design principles and algorithms for automated air traffic management

    Science.gov (United States)

    Erzberger, Heinz

    1995-01-01

    This paper presents design principles and algorithms for building a real-time scheduler. The primary objective of the scheduler is to assign arriving aircraft to a favorable landing runway and schedule them to land at times that minimize delays. A further objective of the scheduler is to allocate delays between high-altitude airspace far from the airport and low-altitude airspace near the airport. A method of delay allocation is described that minimizes the average operating cost in the presence of errors in controlling aircraft to a specified landing time.

  15. Validation of an automated seizure detection algorithm for term neonates

    Science.gov (United States)

    Mathieson, Sean R.; Stevenson, Nathan J.; Low, Evonne; Marnane, William P.; Rennie, Janet M.; Temko, Andrey; Lightbody, Gordon; Boylan, Geraldine B.

    2016-01-01

    Objective The objective of this study was to validate the performance of a seizure detection algorithm (SDA) developed by our group, on previously unseen, prolonged, unedited EEG recordings from 70 babies from 2 centres. Methods EEGs of 70 babies (35 seizure, 35 non-seizure) were annotated for seizures by experts as the gold standard. The SDA was tested on the EEGs at a range of sensitivity settings. Annotations from the expert and SDA were compared using event- and epoch-based metrics. The effect of seizure duration on SDA performance was also analysed. Results Between sensitivity settings of 0.5 and 0.3, the algorithm achieved seizure detection rates of 52.6–75.0%, with false detection (FD) rates of 0.04–0.36 FD/h for event-based analysis, which was deemed to be acceptable in a clinical environment. Time-based comparison of expert and SDA annotations using Cohen's Kappa Index revealed a best performing SDA threshold of 0.4 (Kappa 0.630). The SDA showed improved detection performance with longer seizures. Conclusion The SDA achieved promising performance and warrants further testing in a live clinical evaluation. Significance The SDA has the potential to improve seizure detection and provide a robust tool for comparing treatment regimens. PMID:26055336

  16. The effects of automated artifact removal algorithms on electroencephalography-based Alzheimer’s disease diagnosis

    Directory of Open Access Journals (Sweden)

    Raymundo eCassani

    2014-03-01

    Full Text Available Over the last decade, electroencephalography (EEG) has emerged as a reliable tool for the diagnosis of cortical disorders such as Alzheimer's disease (AD). EEG signals, however, are susceptible to several artifacts, such as ocular, muscular, movement, and environmental. To overcome this limitation, existing diagnostic systems commonly depend on experienced clinicians to manually select artifact-free epochs from the collected multi-channel EEG data. Manual selection, however, is a tedious and time-consuming process, rendering the diagnostic system "semi-automated". Notwithstanding, a number of EEG artifact removal algorithms have been proposed in the literature. The (dis)advantages of using such algorithms in automated AD diagnostic systems, however, have not been documented; this paper aims to fill this gap. Here, we investigate the effects of three state-of-the-art automated artifact removal (AAR) algorithms (both alone and in combination with each other) on AD diagnostic systems based on four different classes of EEG features, namely, spectral, amplitude modulation rate of change, coherence, and phase. The three AAR algorithms tested are statistical artifact rejection (SAR), blind source separation based on second order blind identification and canonical correlation analysis (BSS-SOBI-CCA), and wavelet enhanced independent component analysis (wICA). Experimental results based on 20-channel resting-awake EEG data collected from 59 participants (20 patients with mild AD, 15 with moderate-to-severe AD, and 24 age-matched healthy controls) showed the wICA algorithm alone outperforming other enhancement algorithm combinations across three tasks: diagnosis (control vs. mild vs. moderate), early detection (control vs. mild), and disease progression (mild vs. moderate), thus opening the doors for fully-automated systems that can assist clinicians with early detection of AD, as well as disease severity progression assessment.

  17. A new avenue for classification and prediction of olive cultivars using supervised and unsupervised algorithms.

    Directory of Open Access Journals (Sweden)

    Amir H Beiki

    Full Text Available Various methods have been used to identify cultivars of olive trees; here we used different bioinformatics algorithms to propose new tools to classify 10 olive cultivars based on RAPD and ISSR genetic marker datasets generated from PCR reactions. Five RAPD markers (OPA0a21, OPD16a, OP01a1, OPD16a1 and OPA0a8) and five ISSR markers (UBC841a4, UBC868a7, UBC841a14, U12BC807a and UBC810a13) were selected as the most important markers by all attribute weighting models. K-Medoids unsupervised clustering run on the SVM dataset was fully able to cluster each olive cultivar into the right class. All trees (176) induced by decision tree models were meaningful, and the UBC841a4 attribute clearly distinguished between foreign and domestic olive cultivars with 100% accuracy. Predictive machine learning algorithms (SVM and Naïve Bayes) were also able to predict the right class of olive cultivars with 100% accuracy. For the first time, our results show that data mining techniques can be effectively used to distinguish between plant cultivars, and the machine learning based systems proposed in this study can predict new olive cultivars with the best possible accuracy.

  18. Applying Intelligent Algorithms to Automate the Identification of Error Factors.

    Science.gov (United States)

    Jin, Haizhe; Qu, Qingxing; Munechika, Masahiko; Sano, Masataka; Kajihara, Chisato; Duffy, Vincent G; Chen, Han

    2018-05-03

    Medical errors are the manifestation of defects occurring in medical processes, and extracting and identifying those defects as medical error factors is an effective approach to preventing medical errors. However, it is a difficult and time-consuming task that requires an analyst with a professional medical background, so a method is needed to extract medical error factors and reduce the difficulty of extraction. In this research, a systematic methodology to extract and identify error factors in the medical administration process was proposed. The design of the error report, the extraction of the error factors, and the identification of the error factors were analyzed. Based on 624 medical error cases across four medical institutes in Japan and China, 19 error-related items and their levels were extracted; these were then closely related to 12 error factors. The relational model between the error-related items and error factors was established based on a genetic algorithm (GA)-back-propagation neural network (BPNN) model. Compared with BPNN, partial least squares regression, and support vector regression, GA-BPNN exhibited a higher overall prediction accuracy, being able to promptly identify the error factors from the error-related items. The combination of "error-related items, their different levels, and the GA-BPNN model" was proposed as an error-factor identification technology, which could automatically identify medical error factors.

  19. Phenotyping for patient safety: algorithm development for electronic health record based automated adverse event and medical error detection in neonatal intensive care.

    Science.gov (United States)

    Li, Qi; Melton, Kristin; Lingren, Todd; Kirkendall, Eric S; Hall, Eric; Zhai, Haijun; Ni, Yizhao; Kaiser, Megan; Stoutenborough, Laura; Solti, Imre

    2014-01-01

    Although electronic health records (EHRs) have the potential to provide a foundation for quality and safety algorithms, few studies have measured their impact on automated adverse event (AE) and medical error (ME) detection within the neonatal intensive care unit (NICU) environment. This paper presents two phenotyping AE and ME detection algorithms (ie, IV infiltrations, narcotic medication oversedation and dosing errors) and describes manual annotation of airway management and medication/fluid AEs from NICU EHRs. From 753 NICU patient EHRs from 2011, we developed two automatic AE/ME detection algorithms, and manually annotated 11 classes of AEs in 3263 clinical notes. Performance of the automatic AE/ME detection algorithms was compared to trigger tool and voluntary incident reporting results. AEs in clinical notes were double annotated and consensus achieved under neonatologist supervision. Sensitivity, positive predictive value (PPV), and specificity are reported. Twelve severe IV infiltrates were detected. The algorithm identified one more infiltrate than the trigger tool and eight more than incident reporting. One narcotic oversedation was detected demonstrating 100% agreement with the trigger tool. Additionally, 17 narcotic medication MEs were detected, an increase of 16 cases over voluntary incident reporting. Automated AE/ME detection algorithms provide higher sensitivity and PPV than currently used trigger tools or voluntary incident-reporting systems, including identification of potential dosing and frequency errors that current methods are unequipped to detect. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.

  20. Automated Escape Guidance Algorithms for An Escape Vehicle

    Science.gov (United States)

    Flanary, Ronald; Hammen, David; Ito, Daigoro; Rabalais, Bruce; Rishikof, Brian; Siebold, Karl

    2002-01-01

    An escape vehicle was designed to provide an emergency evacuation for crew members living on a space station. For maximum escape capability, the escape vehicle needs the ability to safely evacuate a station in a contingency scenario such as an uncontrolled (e.g., tumbling) station. This emergency escape sequence will typically be divided into three events: the first separation event (SEP1), the navigation reconstruction event, and the second separation event (SEP2). SEP1 is responsible for taking the spacecraft from its docking port to a distance greater than the maximum radius of the rotating station. The navigation reconstruction event takes place prior to the SEP2 event and establishes the orbital state to within the tolerance limits necessary for SEP2. The SEP2 event calculates and performs an avoidance burn to prevent station recontact during the next several orbits. This paper presents the tools and results for the whole separation sequence, with an emphasis on the two separation events. The first challenge includes collision avoidance during the escape sequence while the station is in an uncontrolled rotational state, with rotation rates of up to 2 degrees per second. The task of avoiding a collision may require the use of the vehicle's de-orbit propulsion system for maximum thrust and minimum dwell time within the vicinity of the station. The thrust of the propulsion system is in a single direction, and can be controlled only by the attitude of the spacecraft. Escape algorithms based on a look-up table or analytical guidance can be implemented, since the rotation rate and the angular momentum vector can be sensed onboard and a priori knowledge of the position and relative orientation is available. In addition, crew intervention has been provided for in the event of unforeseen obstacles in the escape path. The purpose of the SEP2 burn is to avoid re-contact with the station over an extended period of time. Performing this maneuver properly

  1. Automated drusen detection in retinal images using analytical modelling algorithms

    Directory of Open Access Journals (Sweden)

    Manivannan Ayyakkannu

    2011-07-01

    Full Text Available Abstract Background Drusen are common features in the ageing macula associated with exudative Age-Related Macular Degeneration (ARMD). They are visible in retinal images and their quantitative analysis is important in the follow-up of ARMD. However, their evaluation is fastidious and difficult to reproduce when performed manually. Methods This article proposes a methodology for Automatic Drusen Deposits Detection and quantification in Retinal Images (AD3RI) using digital image processing techniques. It includes an image pre-processing method to correct the uneven illumination and to normalize the intensity contrast with smoothing splines. The drusen detection uses a gradient-based segmentation algorithm that isolates drusen and provides basic drusen characterization to the modelling stage. The detected drusen are then fitted by Modified Gaussian functions, producing a model of the image that is used to evaluate the affected area. Twenty-two images were graded by eight experts, with the aid of custom-made software, and compared with AD3RI. This comparison was based both on the total area and on pixel-to-pixel analysis. The coefficient of variation, the intraclass correlation coefficient, the sensitivity, the specificity and the kappa coefficient were calculated. Results The ground truth used in this study was the experts' average grading. In order to evaluate the proposed methodology, three indicators were defined: AD3RI compared to the ground truth (A2G); each expert compared to the other experts (E2E); and a standard Global Threshold method compared to the ground truth (T2G). The results obtained for the three indicators, A2G, E2E and T2G, were: coefficient of variation 28.8%, 22.5% and 41.1%; intraclass correlation coefficient 0.92, 0.88 and 0.67; sensitivity 0.68, 0.67 and 0.74; specificity 0.96, 0.97 and 0.94; and kappa coefficient 0.58, 0.60 and 0.49, respectively. Conclusions The gradings produced by AD3RI obtained an agreement

  2. Improved automated lumen contour detection by novel multifrequency processing algorithm with current intravascular ultrasound system.

    Science.gov (United States)

    Kume, Teruyoshi; Kim, Byeong-Keuk; Waseda, Katsuhisa; Sathyanarayana, Shashidhar; Li, Wenguang; Teo, Tat-Jin; Yock, Paul G; Fitzgerald, Peter J; Honda, Yasuhiro

    2013-02-01

    The aim of this study was to evaluate a new fully automated lumen border tracing system based on a novel multifrequency processing algorithm. We developed the multifrequency processing method to enhance arterial lumen detection by exploiting the differential scattering characteristics of blood and arterial tissue. The implementation of the method can be integrated into current intravascular ultrasound (IVUS) hardware. This study was performed in vivo with conventional 40-MHz IVUS catheters (Atlantis SR Pro™, Boston Scientific Corp, Natick, MA) in 43 clinical patients with coronary artery disease. A total of 522 frames were randomly selected, and lumen areas were measured after automatically tracing lumen borders with the new tracing system and a commercially available tracing system (TraceAssist™), referred to as the "conventional tracing system." The data assessed by the two automated systems were compared with the results of manual tracings by experienced IVUS analysts. The new automated lumen measurements showed better agreement with manual lumen area tracings than those of the conventional tracing system (correlation coefficient: 0.819 vs. 0.509). When compared against manual tracings, the new algorithm also demonstrated improved systematic error (mean difference: 0.13 vs. -1.02 mm²) and random variability (standard deviation of difference: 2.21 vs. 4.02 mm²) compared with the conventional tracing system. This preliminary study showed that the novel fully automated tracing system based on the multifrequency processing algorithm can provide more accurate lumen border detection than current automated tracing systems and thus offer a more reliable quantitative evaluation of lumen geometry. Copyright © 2011 Wiley Periodicals, Inc.

  3. Supervised machine learning algorithms to diagnose stress for vehicle drivers based on physiological sensor signals.

    Science.gov (United States)

    Barua, Shaibal; Begum, Shahina; Ahmed, Mobyen Uddin

    2015-01-01

    Machine learning algorithms play an important role in computer science research. Recent advancements in sensor data collection in the clinical sciences lead to complex, heterogeneous data processing and analysis for patient diagnosis and prognosis. Diagnosis and treatment of patients based on manual analysis of these sensor data are difficult and time consuming. Therefore, the development of knowledge-based systems to support clinicians in decision-making is important. However, experimental work is necessary to compare the performance of different machine learning methods, to help select the appropriate method for a specific characteristic of a data set. This paper compares the classification performance of three popular machine learning methods, i.e., case-based reasoning, neural networks and support vector machines, in diagnosing the stress of vehicle drivers using finger temperature and heart rate variability. The experimental results show that case-based reasoning outperforms the other two methods in terms of classification accuracy. Case-based reasoning achieved 80% and 86% accuracy in classifying stress using finger temperature and heart rate variability, respectively. By contrast, both the neural network and the support vector machine achieved less than 80% accuracy using both physiological signals.

  4. Location-Based Self-Adaptive Routing Algorithm for Wireless Sensor Networks in Home Automation

    Directory of Open Access Journals (Sweden)

    Hong SeungHo

    2011-01-01

    Full Text Available The use of wireless sensor networks in home automation (WSNHA is attractive due to their characteristics of self-organization, high sensing fidelity, low cost, and potential for rapid deployment. Although the AODVjr routing algorithm in IEEE 802.15.4/ZigBee and other routing algorithms have been designed for wireless sensor networks, not all are suitable for WSNHA. In this paper, we propose a location-based self-adaptive routing algorithm for WSNHA called WSNHA-LBAR. It confines route discovery flooding to a cylindrical request zone, which reduces the routing overhead and decreases broadcast storm problems in the MAC layer. It also automatically adjusts the size of the request zone using a self-adaptive algorithm based on Bayes' theorem. This makes WSNHA-LBAR more adaptable to the changes of the network state and easier to implement. Simulation results show improved network reliability as well as reduced routing overhead.

  5. Frameworks for Performing on Cloud Automated Software Testing Using Swarm Intelligence Algorithm: Brief Survey

    Directory of Open Access Journals (Sweden)

    Mohammad Hossain

    2018-04-01

    Full Text Available This paper surveys cloud-based automated software testing frameworks that are able to perform black-box testing, white-box testing, as well as unit and integration testing as a whole. We discuss a few of the available automated software testing frameworks on the cloud. These frameworks are found to be more efficient and cost effective because they execute test suites over a distributed cloud infrastructure. One framework's effectiveness was attributed to a module that accepts manual test cases from users and prioritizes them accordingly. Software testing, in general, accounts for as much as 50% of the total effort of a software development project. To lessen this effort, one of the frameworks discussed in this paper uses swarm intelligence algorithms: the Ant Colony Algorithm for complete path coverage to minimize time, and Bee Colony Optimization (BCO) for regression testing to ensure backward compatibility.

  6. Dataset exploited for the development and validation of automated cyanobacteria quantification algorithm, ACQUA

    Directory of Open Access Journals (Sweden)

    Emanuele Gandola

    2016-09-01

    Full Text Available The estimation and quantification of potentially toxic cyanobacteria in lakes and reservoirs are often used as a proxy of risk for water intended for human consumption and recreational activities. Here, we present data sets collected from three volcanic Italian lakes (Albano, Vico, Nemi) that host filamentous cyanobacteria strains in different environments. The presented data sets were used to estimate the abundance and morphometric characteristics of potentially toxic cyanobacteria, comparing manual vs. automated estimation performed by ACQUA ("ACQUA: Automated Cyanobacterial Quantification Algorithm for toxic filamentous genera using spline curves, pattern recognition and machine learning", Gandola et al., 2016 [1]). This strategy was used to assess the algorithm's performance and to set up the denoising algorithm. Abundance and total length estimations were used for software development; to this aim we evaluated the efficiency of the statistical tools and mathematical algorithms described here. Image convolution with the Sobel filter was chosen to denoise input images from background signals; spline curves and the least squares method were then used to parameterize detected filaments and to recombine crossing and interrupted sections, aimed at performing precise abundance estimations and morphometric measurements. Keywords: Comparing data, Filamentous cyanobacteria, Algorithm, Denoising, Natural sample
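
    The Sobel denoising step can be sketched in a few lines with OpenCV (spline parameterization omitted):

        # Sketch: Sobel gradient magnitude to emphasise filament edges against
        # background signal before any spline fitting.
        import cv2
        import numpy as np

        def sobel_edge_map(gray):
            """gray: 8-bit grayscale micrograph -> 8-bit edge-magnitude image."""
            gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
            gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
            mag = np.hypot(gx, gy)
            return (255 * mag / mag.max()).astype(np.uint8)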

  7. Shockwave-Based Automated Vehicle Longitudinal Control Algorithm for Nonrecurrent Congestion Mitigation

    Directory of Open Access Journals (Sweden)

    Liuhui Zhao

    2017-01-01

    Full Text Available A shockwave-based speed harmonization algorithm for the longitudinal movement of automated vehicles is presented in this paper. In the advent of the Connected/Automated Vehicle (C/AV) environment, the proposed algorithm can be applied to capture instantaneous shockwaves constructed from vehicular speed profiles shared by individual equipped vehicles. With a continuous wavelet transform (CWT) method, the algorithm detects abnormal speed drops in real time and optimizes speed to prevent the shockwave from propagating to the upstream traffic. A traffic simulation model is calibrated to evaluate the applicability and efficiency of the proposed algorithm. Based on 100% C/AV market penetration, the simulation results show that the CWT-based algorithm accurately detects abnormal speed drops. With the improved accuracy of abnormal speed drop detection, the simulation results also demonstrate that congestion can be mitigated by reducing travel time and delay by up to approximately 9% and 18%, respectively. It is also found that the shockwave caused by nonrecurrent congestion is quickly dissipated, even with low market penetration.
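
    A sketch of the detection idea, assuming PyWavelets; the wavelet choice and threshold are illustrative, not the calibrated values:

        # Sketch: flag abrupt speed drops via fine-scale CWT energy of the profile.
        import numpy as np
        import pywt

        def detect_speed_drops(speed_mps, threshold=5.0):
            """speed_mps: 1-D speed profile -> indices of abrupt drops."""
            coeffs, _ = pywt.cwt(speed_mps, scales=np.arange(1, 16), wavelet="mexh")
            energy = np.abs(coeffs[:4]).sum(axis=0)   # fine-scale energy per sample
            return np.where(energy > threshold)[0]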

  8. An automated cell-counting algorithm for fluorescently-stained cells in migration assays

    Directory of Open Access Journals (Sweden)

    Novielli Nicole M

    2011-10-01

    Full Text Available Abstract A cell-counting algorithm, developed in Matlab®, was created to efficiently count migrated fluorescently-stained cells on membranes from migration assays. At each concentration of cells used (10,000 and 100,000 cells), images were acquired at 2.5×, 5×, and 10× objective magnifications. Automated cell counts strongly correlated to manual counts (r2 = 0.99, P

  9. An Automated Cropland Classification Algorithm (ACCA) for Tajikistan by Combining Landsat, MODIS, and Secondary Data

    OpenAIRE

    Thenkabail, Prasad S.; Wu, Zhuoting

    2012-01-01

    The overarching goal of this research was to develop and demonstrate an automated Cropland Classification Algorithm (ACCA) that will rapidly, routinely, and accurately classify agricultural cropland extent, areas, and characteristics (e.g., irrigated vs. rainfed) over large areas such as a country or a region through combination of multi-sensor remote sensing and secondary data. In this research, a rule-based ACCA was conceptualized, developed, and demonstrated for the country of Tajikistan u...

  10. Combined use of two supervised learning algorithms to model sea turtle behaviours from tri-axial acceleration data.

    Science.gov (United States)

    Jeantet, L; Dell'Amico, F; Forin-Wiart, M-A; Coutant, M; Bonola, M; Etienne, D; Gresser, J; Regis, S; Lecerf, N; Lefebvre, F; de Thoisy, B; Le Maho, Y; Brucker, M; Châtelain, N; Laesser, R; Crenner, F; Handrich, Y; Wilson, R; Chevallier, D

    2018-05-23

    Accelerometers are becoming ever more important sensors in animal-attached technology, providing data that allow determination of body posture and movement and thereby helping to elucidate behaviour in animals that are difficult to observe. We sought to validate the identification of sea turtle behaviours from accelerometer signals by deploying tags on the carapace of a juvenile loggerhead (Caretta caretta), an adult hawksbill (Eretmochelys imbricata) and an adult green turtle (Chelonia mydas) at Aquarium La Rochelle, France. We recorded tri-axial acceleration at 50 Hz for each species for a full day while two fixed cameras recorded their behaviours. We identified behaviours from the acceleration data using two different supervised learning algorithms, Random Forest and Classification And Regression Tree (CART), treating the data from the adult animals as separate from the juvenile data. We achieved a global accuracy of 81.30% for the adult hawksbill and green turtle CART model and 71.63% for the juvenile loggerhead, identifying 10 and 12 different behaviours, respectively. Equivalent figures were 86.96% for the adult hawksbill and green turtle Random Forest model and 79.49% for the juvenile loggerhead, for the same behaviours. The use of Random Forest combined with CART algorithms allowed us to understand the decision rules implicated in behaviour discrimination, and thus remove or group together some 'confused' or under-represented behaviours in order to get the most accurate models. This study is the first to validate accelerometer data to identify turtle behaviours and the approach can now be tested on other captive sea turtle species. © 2018. Published by The Company of Biologists Ltd.
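
    A condensed scikit-learn sketch of the two learners on windowed tri-axial features; the mean/standard-deviation feature summary is an assumption, not the authors' full feature set:

        # Sketch: window the 50 Hz tri-axial signal, fit CART and Random Forest.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.tree import DecisionTreeClassifier

        def window_features(acc_xyz, win=50):
            """acc_xyz: (n_samples, 3) array -> one feature row per window."""
            n = len(acc_xyz) // win
            wins = acc_xyz[:n * win].reshape(n, win, 3)
            return np.column_stack([wins.mean(axis=1), wins.std(axis=1)])

        def fit_models(X, y):
            cart = DecisionTreeClassifier(max_depth=6).fit(X, y)   # decision rules
            forest = RandomForestClassifier(n_estimators=200).fit(X, y)  # accuracy
            return cart, forest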

  11. Automated Development of Accurate Algorithms and Efficient Codes for Computational Aeroacoustics

    Science.gov (United States)

    Goodrich, John W.; Dyson, Rodger W.

    1999-01-01

    The simulation of sound generation and propagation in three space dimensions with realistic aircraft components is a very large time-dependent computation with fine details. Simulations in open domains with embedded objects require accurate and robust algorithms for propagation, for artificial inflow and outflow boundaries, and for the definition of geometrically complex objects. The development, implementation, and validation of methods for solving these demanding problems is being carried out to support the NASA pillar goals for reducing aircraft noise levels. Our goal is to provide algorithms which are sufficiently accurate and efficient to produce usable results rapidly enough to allow design engineers to study the effects on sound levels of design changes in propulsion systems, and in the integration of propulsion systems with airframes. There is a lack of design tools for these purposes at this time. Our technical approach to this problem combines the development of new algorithms with the use of Mathematica and Unix utilities to automate the algorithm development, code implementation, and validation. We use explicit methods to ensure effective implementation by domain decomposition for SPMD parallel computing. There are several orders of magnitude difference in the computational efficiencies of the algorithms which we have considered. We currently have new artificial inflow and outflow boundary conditions that are stable, accurate, and unobtrusive, with implementations that match the accuracy and efficiency of the propagation methods. The artificial numerical boundary treatments have been proven to have solutions which converge to the full open domain problems, so that the error from the boundary treatments can be driven as low as is required. The purpose of this paper is to briefly present a method for developing highly accurate algorithms for computational aeroacoustics, the use of computer automation in this process, and a brief survey of the algorithms that

  12. Automated sequence-specific protein NMR assignment using the memetic algorithm MATCH

    International Nuclear Information System (INIS)

    Volk, Jochen; Herrmann, Torsten; Wuethrich, Kurt

    2008-01-01

    MATCH (Memetic Algorithm and Combinatorial Optimization Heuristics) is a new memetic algorithm for automated sequence-specific polypeptide backbone NMR assignment of proteins. MATCH employs local optimization for tracing partial sequence-specific assignments within a global, population-based search environment, where the simultaneous application of local and global optimization heuristics guarantees high efficiency and robustness. MATCH thus makes combined use of the two predominant concepts in use for automated NMR assignment of proteins. Dynamic transition and inherent mutation are new techniques that enable automatic adaptation to variable quality of the experimental input data. The concept of dynamic transition is incorporated in all major building blocks of the algorithm, where it enables switching between local and global optimization heuristics at any time during the assignment process. Inherent mutation restricts the intrinsically required randomness of the evolutionary algorithm to those regions of the conformation space that are compatible with the experimental input data. Using intact and artificially deteriorated APSY-NMR input data of proteins, MATCH performed sequence-specific resonance assignment with high efficiency and robustness

  13. A new memetic algorithm for mitigating tandem automated guided vehicle system partitioning problem

    Science.gov (United States)

    Pourrahimian, Parinaz

    2017-11-01

    Automated Guided Vehicle Systems (AGVS) provide the flexibility and automation demanded by Flexible Manufacturing Systems (FMS). However, with the growing concern over responsible management of resource use, it is crucial to manage these vehicles efficiently in order to reduce travel time and control conflicts and congestion. This paper presents the development process of a new Memetic Algorithm (MA) for optimizing the partitioning problem of tandem AGVS. MAs employ a Genetic Algorithm (GA) as a global search, and apply a local search to bring the solutions to a local optimum point. A new Tabu Search (TS) has been developed and combined with the GA to refine the individuals newly generated by the GA. The aim of the proposed algorithm is to minimize the maximum workload of the system. Finally, the performance of the proposed algorithm is evaluated using Matlab. This study also compared the objective function of the proposed MA with that of the GA. The results showed that the TS, as a local search, significantly improves the objective function of the GA for different system sizes with large and small numbers of zones, by 1.26 on average.

  14. An Automated Algorithm for Identifying and Tracking Transverse Waves in Solar Images

    Science.gov (United States)

    Weberg, Micah J.; Morton, Richard J.; McLaughlin, James A.

    2018-01-01

    Recent instrumentation has demonstrated that the solar atmosphere supports omnipresent transverse waves, which could play a key role in energizing the solar corona. Large-scale studies are required in order to build up an understanding of the general properties of these transverse waves. To help facilitate this, we present an automated algorithm for identifying and tracking features in solar images and extracting the wave properties of any observed transverse oscillations. We test and calibrate our algorithm using a set of synthetic data, which includes noise and rotational effects. The results indicate an accuracy of 1%–2% for displacement amplitudes and 4%–10% for wave periods and velocity amplitudes. We also apply the algorithm to data from the Atmospheric Imaging Assembly on board the Solar Dynamics Observatory and find good agreement with previous studies. Of note, we find that 35%–41% of the observed plumes exhibit multiple wave signatures, which indicates either the superposition of waves or multiple independent wave packets observed at different times within a single structure. The automated methods described in this paper represent a significant improvement on the speed and quality of direct measurements of transverse waves within the solar atmosphere. This algorithm unlocks a wide range of statistical studies that were previously impractical.

  15. Design and Demonstration of Automated Data Analysis Algorithms for Ultrasonic Inspection of Complex Composite Panels with Bonds

    Science.gov (United States)

    2016-02-01

    To reduce the data review burden and improve the reliability of the ultrasonic inspection of large composite structures, automated data analysis (ADA) algorithms were designed and demonstrated. The ADA-called indications are sorted into three groups: true positives (TP), missed calls (MC) and false calls (FC), with analysis drawing on thickness and backwall C-scan images. Subject terms: automated data analysis (ADA) algorithms; time-of-flight indications; backwall amplitude dropout.

  16. Machine-Learning Algorithms to Automate Morphological and Functional Assessments in 2D Echocardiography.

    Science.gov (United States)

    Narula, Sukrit; Shameer, Khader; Salem Omar, Alaa Mabrouk; Dudley, Joel T; Sengupta, Partho P

    2016-11-29

    Machine-learning models may aid cardiac phenotypic recognition by using features of cardiac tissue deformation. This study investigated the diagnostic value of a machine-learning framework that incorporates speckle-tracking echocardiographic data for automated discrimination of hypertrophic cardiomyopathy (HCM) from physiological hypertrophy seen in athletes (ATH). Expert-annotated speckle-tracking echocardiographic datasets obtained from 77 ATH and 62 HCM patients were used for developing an automated system. An ensemble machine-learning model with 3 different machine-learning algorithms (support vector machines, random forests, and artificial neural networks) was developed, and a majority voting method was used for conclusive predictions, with further K-fold cross-validation. Feature selection using an information gain (IG) algorithm revealed that volume was the best predictor for differentiating between HCM and ATH (IG = 0.24), followed by mid-left ventricular segmental strain (IG = 0.134) and average longitudinal strain (IG = 0.131). The ensemble machine-learning model showed increased sensitivity and specificity compared with early-to-late diastolic transmitral velocity ratio (p < 0.01), and performance was further examined in the subgroup of patients with a left ventricular wall thickness greater than 13 mm. In this subgroup analysis, the automated model continued to show equal sensitivity, but increased specificity, relative to early-to-late diastolic transmitral velocity ratio, e', and strain. Our results suggest that machine-learning algorithms can assist in the discrimination of physiological versus pathological patterns of hypertrophic remodeling. This effort represents a step toward the development of a real-time, machine-learning-based system for automated interpretation of echocardiographic images, which may help novice readers with limited experience. Copyright © 2016 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.
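
    The majority-voting ensemble can be sketched directly with scikit-learn; hyperparameters below are placeholders, not the study's tuned values:

        # Sketch: hard-voting ensemble of SVM, random forest, and a small ANN.
        from sklearn.ensemble import RandomForestClassifier, VotingClassifier
        from sklearn.neural_network import MLPClassifier
        from sklearn.svm import SVC

        def build_ensemble():
            # Hard voting: the final label is the majority of the three predictions.
            return VotingClassifier(
                estimators=[("svm", SVC()),
                            ("rf", RandomForestClassifier(n_estimators=100)),
                            ("ann", MLPClassifier(max_iter=500))],
                voting="hard")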

  17. A novel validation algorithm allows for automated cell tracking and the extraction of biologically meaningful parameters.

    Directory of Open Access Journals (Sweden)

    Daniel H Rapoport

    Full Text Available Automated microscopy is currently the only method to non-invasively and label-free observe complex multi-cellular processes, such as cell migration, cell cycle, and cell differentiation. Extracting biological information from a time-series of micrographs requires each cell to be recognized and followed through sequential microscopic snapshots. Although recent attempts to automate this process have resulted in ever improving cell detection rates, manual identification of identical cells is still the most reliable technique. However, its tedious and subjective nature has prevented tracking from becoming a standardized tool for the investigation of cell cultures. Here, we present a novel method to accomplish automated cell tracking with a reliability comparable to manual tracking. Previously, automated cell tracking could not rival the reliability of manual tracking because, in contrast to the human way of solving this task, none of the algorithms had an independent quality control mechanism; they lacked validation. Thus, instead of trying to improve the cell detection or tracking rates, we proceeded from the idea of automatically inspecting the tracking results and accepting only those of high trustworthiness, while rejecting all other results. This validation algorithm works independently of the quality of cell detection and tracking through a systematic search for tracking errors. It is based only on very general assumptions about the spatiotemporal contiguity of cell paths. While traditional tracking often aims to yield genealogic information about single cells, the natural outcome of a validated cell tracking algorithm turns out to be a set of complete, but often unconnected cell paths, i.e. records of cells from mitosis to mitosis. This is a consequence of the fact that the validation algorithm takes complete paths as the unit of rejection/acceptance. The resulting set of complete paths can be used to automatically extract important biological parameters

  18. An automated algorithm for photoreceptors counting in adaptive optics retinal images

    Science.gov (United States)

    Liu, Xu; Zhang, Yudong; Yun, Dai

    2012-10-01

    Eyes are important organs that detect light and form spatial and color vision. Knowing the exact number of cones in a retinal image is of great importance in helping us understand the mechanism of eye function and the pathology of some eye diseases. In order to analyze data in real time and process large-scale data, an automated algorithm was designed to label cone photoreceptors in adaptive optics (AO) retinal images. Images acquired by a flood-illuminated AO system were used to test the efficiency of this algorithm. We labeled these images both automatically and manually, and compared the results of the two methods. A 94.1% to 96.5% agreement rate between the two methods was achieved in this experiment, demonstrating the reliability and efficiency of the algorithm.

  19. Automated backbone assignment of labeled proteins using the threshold accepting algorithm

    International Nuclear Information System (INIS)

    Leutner, Michael; Gschwind, Ruth M.; Liermann, Jens; Schwarz, Christian; Gemmecker, Gerd; Kessler, Horst

    1998-01-01

    The sequential assignment of backbone resonances is the first step in the structure determination of proteins by heteronuclear NMR. For larger proteins, an assignment strategy based on proton side-chain information is no longer suitable for use in an automated procedure. Our program PASTA (Protein ASsignment by Threshold Accepting) is therefore designed to partially or fully automate the sequential assignment of proteins, based on the analysis of NMR backbone resonances plus Cβ information. In order to overcome the problems caused by peak overlap and missing signals in an automated assignment process, PASTA uses threshold accepting, a combinatorial optimization strategy, which is superior to simulated annealing due to generally faster convergence and better solutions. The reliability of this algorithm is shown by reproducing the complete sequential backbone assignment of several proteins from published NMR data. The robustness of the algorithm against misassigned signals, noise, spectral overlap and missing peaks is shown by repeating the assignment with reduced sequential information and increased chemical shift tolerances. The performance of the program on real data is finally demonstrated with automatically picked peak lists of human nonpancreatic synovial phospholipase A2, a protein with 124 residues
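
    Threshold accepting itself is easy to sketch: unlike simulated annealing, a worse neighbour is accepted deterministically whenever its cost increase stays below a shrinking threshold. A generic Python sketch follows; the assignment-specific cost and neighbour functions are problem inputs, not reproduced here:

        # Generic threshold accepting loop (illustrative, not PASTA's code).
        import random

        def threshold_accepting(cost, neighbour, x0, t0=1.0, decay=0.95,
                                iters=10000):
            x, t, best = x0, t0, x0
            for _ in range(iters):
                cand = neighbour(x)
                if cost(cand) - cost(x) < t:   # accept unless worse by more than t
                    x = cand
                    if cost(x) < cost(best):
                        best = x
                t *= decay                      # shrink the acceptance threshold
            return best

        # Toy usage: minimise a 1-D quadratic.
        print(threshold_accepting(lambda v: (v - 3) ** 2,
                                  lambda v: v + random.uniform(-0.5, 0.5), 0.0))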

  20. 3-D image pre-processing algorithms for improved automated tracing of neuronal arbors.

    Science.gov (United States)

    Narayanaswamy, Arunachalam; Wang, Yu; Roysam, Badrinath

    2011-09-01

    The accuracy and reliability of automated neurite tracing systems are ultimately limited by image quality, as reflected in the signal-to-noise ratio, contrast, and image variability. This paper describes a novel combination of image processing methods that operate on images of neurites captured by confocal and widefield microscopy and produce synthetic images that are better suited to automated tracing. The algorithms are based on the curvelet transform (for denoising curvilinear structures and local orientation estimation), perceptual grouping by scalar voting (for elimination of non-tubular structures and improvement of neurite continuity while preserving branch points), adaptive focus detection, and depth estimation (for handling widefield images without deconvolution). The proposed methods are fast and capable of handling large images. Their ability to handle images of unlimited size derives from automated tiling of large images along the lateral dimension and processing of 3-D images one optical slice at a time. Their speed derives in part from the fact that the core computations are formulated in terms of the Fast Fourier Transform (FFT), and in part from parallel computation on multi-core computers. The methods are simple to apply to new images since they require very few adjustable parameters, all of which are intuitive. Examples of pre-processing DIADEM Challenge images are used to illustrate improved automated tracing resulting from our pre-processing methods.

  1. Low Speed Longitudinal Control Algorithms for Automated Vehicles in Simulation and Real Platforms

    Directory of Open Access Journals (Sweden)

    Mauricio Marcano

    2018-01-01

    Advanced Driver Assistance Systems (ADAS) acting over throttle and brake are already available in level 2 automated vehicles. In order to increase the level of automation, new systems need to be tested in an extensive set of complex scenarios, ensuring safety under all circumstances. Validation of these systems using real vehicles presents important drawbacks: the time needed to drive millions of kilometers, the risk associated with some situations, and the high cost involved. Simulation platforms emerge as a feasible solution. Therefore, robust and reliable virtual environments to test automated driving maneuvers and control techniques are needed. In that sense, this paper presents a use case where three longitudinal low-speed control techniques are designed, tuned, and validated using an in-house simulation framework and later applied in a real vehicle. The control algorithms include a classical PID, an adaptive network fuzzy inference system (ANFIS), and a Model Predictive Control (MPC). The simulated dynamics are calculated using a multibody vehicle model. In addition, the longitudinal actuators of a Renault Twizy are characterized through empirical tests. A comparative analysis of results between the simulated and real platforms shows the effectiveness of the proposed framework for designing and validating longitudinal controllers for real automated vehicles.
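
    Of the three controllers, the PID is simple enough to sketch. Below is a minimal discrete PID for tracking a target speed with a saturated, normalized throttle/brake command; the gains and limits are illustrative, not the tuned values from the paper:

        class SpeedPID:
            def __init__(self, kp, ki, kd, u_min=-1.0, u_max=1.0):
                self.kp, self.ki, self.kd = kp, ki, kd
                self.u_min, self.u_max = u_min, u_max
                self.integral, self.prev_error = 0.0, None

            def step(self, target_speed, measured_speed, dt):
                error = target_speed - measured_speed
                self.integral += error * dt
                derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
                self.prev_error = error
                u = self.kp * error + self.ki * self.integral + self.kd * derivative
                return max(self.u_min, min(self.u_max, u))  # clipped command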

  2. National Automated Surveillance of Hospital-Acquired Bacteremia in Denmark Using a Computer Algorithm

    DEFF Research Database (Denmark)

    Gubbels, Sophie; Nielsen, Jens; Voldstedlund, Marianne

    2017-01-01

    BACKGROUND In 2015, Denmark launched an automated surveillance system for hospital-acquired infections, the Hospital-Acquired Infections Database (HAIBA). OBJECTIVE To describe the algorithm used in HAIBA, to determine its concordance with point prevalence surveys (PPSs), and to present trends for hospital-acquired bacteremia. SETTING Private and public hospitals in Denmark. METHODS A hospital-acquired bacteremia case was defined as at least 1 positive blood culture with at least 1 pathogen (bacterium or fungus) taken between 48 hours after admission and 48 hours after discharge, using the Danish... CONCLUSIONS ...advantages of automated surveillance, HAIBA allows monitoring of HA bacteremia across the healthcare system, supports prioritizing preventive measures, and holds promise for evaluating interventions. Infect Control Hosp Epidemiol 2017;1-8.
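
    The case definition quoted above maps directly onto a time-window rule. A minimal sketch (the exclusion criteria and the Danish data-source specifics are omitted; the function and field names are invented):

        from datetime import timedelta

        def is_hospital_acquired(admission, discharge, sample_time, pathogens):
            """Positive blood culture with at least one pathogen, taken between
            48 h after admission and 48 h after discharge."""
            window_start = admission + timedelta(hours=48)
            window_end = discharge + timedelta(hours=48)
            return bool(pathogens) and window_start <= sample_time <= window_end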

  3. Concurrent validity of an automated algorithm for computing the center of pressure excursion index (CPEI).

    Science.gov (United States)

    Diaz, Michelle A; Gibbons, Mandi W; Song, Jinsup; Hillstrom, Howard J; Choe, Kersti H; Pasquale, Maria R

    2018-01-01

    Center of Pressure Excursion Index (CPEI), a parameter computed from the distribution of plantar pressures during the stance phase of barefoot walking, has been used to assess dynamic foot function. The original custom program developed to calculate CPEI required the oversight of a user who could manually correct for certain exceptions to the computational rules. A new, fully automatic program has been developed to calculate CPEI with an algorithm that accounts for these exceptions. The purpose of this paper is to compare the resulting CPEI values computed by these two programs on plantar pressure data from both asymptomatic and pathologic subjects. If comparable, the new program offers significant benefits: reduced potential for variability due to rater discretion and faster CPEI calculation. CPEI values were calculated from barefoot plantar pressure distributions during comfortably paced walking in 61 healthy asymptomatic adults, 19 diabetic adults with moderate hallux valgus, and 13 adults with mild hallux valgus. Right foot data for each subject were analyzed with linear regression and a Bland-Altman plot. The automated algorithm yielded CPEI values that were linearly related to those of the original program (R² = 0.99; P ...) between the two computation methods. Results of this analysis suggest that the new automated algorithm may be used to calculate CPEI on both healthy and pathologic feet. Copyright © 2017 Elsevier B.V. All rights reserved.
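
    For reference, the Bland-Altman comparison used here reduces to the bias and 95% limits of agreement between the two programs' outputs; this is a generic implementation, not the study's code:

        import numpy as np

        def bland_altman(a, b):
            """Return (bias, lower limit, upper limit) for paired measurements."""
            diff = np.asarray(a, float) - np.asarray(b, float)
            bias = diff.mean()
            spread = 1.96 * diff.std(ddof=1)
            return bias, bias - spread, bias + spread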

  4. Algorithms of control parameters selection for automation of FDM 3D printing process

    Directory of Open Access Journals (Sweden)

    Kogut Paweł

    2017-01-01

    The paper presents algorithms for selecting control parameters of the Fused Deposition Modelling (FDM) technology in the context of an open printing-solutions environment and the 3DGence ONE printer. The following parameters were distinguished: model mesh density, material flow speed, cooling performance, retraction, and printing speeds. These parameters are in principle independent of the printing system, but in practice they depend to a certain degree on the features of the selected printing equipment. This is the first step toward automation of the 3D printing process in FDM technology.

  5. Algorithms and data structures for automated change detection and classification of sidescan sonar imagery

    Science.gov (United States)

    Gendron, Marlin Lee

    During Mine Warfare (MIW) operations, MIW analysts perform change detection by visually comparing historical sidescan sonar imagery (SSI) collected by a sidescan sonar with recently collected SSI in an attempt to identify objects (which might be explosive mines) placed at sea since the last time the area was surveyed. This dissertation presents a data structure and three algorithms, developed by the author, that are part of an automated change detection and classification (ACDC) system. MIW analysts at the Naval Oceanographic Office are currently using ACDC to reduce the amount of time needed to perform change detection. The dissertation's introductory chapter gives background information on change detection and ACDC, and describes how SSI is produced from raw sonar data. Chapter 2 presents the author's Geospatial Bitmap (GB) data structure, which is capable of storing information geographically and is utilized by the three algorithms. This chapter shows that a GB data structure used in a polygon-smoothing algorithm ran between 1.3 and 48.4 times faster than a sparse matrix data structure. Chapter 3 describes the GB clustering algorithm, which is the author's repeatable, order-independent method for clustering. Results from tests performed in this chapter show that the time to cluster a set of points is not affected by the distribution or the order of the points. In Chapter 4, the author presents his real-time computer-aided detection (CAD) algorithm that automatically detects mine-like objects on the seafloor in SSI. The author ran his GB-based CAD algorithm on real SSI data, and results of these tests indicate that his real-time CAD algorithm performs comparably to or better than other non-real-time CAD algorithms. The author presents his computer-aided search (CAS) algorithm in Chapter 5. CAS helps MIW analysts locate mine-like features that are geospatially close to previously detected features. A comparison between the CAS and a great circle distance algorithm shows that the

  6. Automated contouring error detection based on supervised geometric attribute distribution models for radiation therapy: A general strategy

    International Nuclear Information System (INIS)

    Chen, Hsin-Chen; Tan, Jun; Dolly, Steven; Kavanaugh, James; Harold Li, H.; Altman, Michael; Gay, Hiram; Thorstad, Wade L.; Mutic, Sasa; Li, Hua; Anastasio, Mark A.; Low, Daniel A.

    2015-01-01

    Purpose: One of the most critical steps in radiation therapy treatment is accurate tumor and critical organ-at-risk (OAR) contouring. Both manual and automated contouring processes are prone to errors and to a large degree of inter- and intraobserver variability. These are often due to the limitations of imaging techniques in visualizing human anatomy as well as to inherent anatomical variability among individuals. Physicians/physicists have to reverify all the radiation therapy contours of every patient before using them for treatment planning, which is tedious, laborious, and still not an error-free process. In this study, the authors developed a general strategy based on novel geometric attribute distribution (GAD) models to automatically detect radiation therapy OAR contouring errors and facilitate the current clinical workflow. Methods: Considering the radiation therapy structures’ geometric attributes (centroid, volume, and shape), the spatial relationship of neighboring structures, as well as anatomical similarity of individual contours among patients, the authors established GAD models to characterize the interstructural centroid and volume variations, and the intrastructural shape variations of each individual structure. The GAD models are scalable and deformable, and constrained by their respective principal attribute variations calculated from training sets with verified OAR contours. A new iterative weighted GAD model-fitting algorithm was developed for contouring error detection. Receiver operating characteristic (ROC) analysis was employed in a unique way to optimize the model parameters to satisfy clinical requirements. A total of forty-four head-and-neck patient cases, each of which includes nine critical OAR contours, were utilized to demonstrate the proposed strategy. Twenty-nine out of these forty-four patient cases were utilized to train the inter- and intrastructural GAD models. These training data and the remaining fifteen testing data sets

  7. Automated contouring error detection based on supervised geometric attribute distribution models for radiation therapy: A general strategy

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Hsin-Chen; Tan, Jun; Dolly, Steven; Kavanaugh, James; Harold Li, H.; Altman, Michael; Gay, Hiram; Thorstad, Wade L.; Mutic, Sasa; Li, Hua, E-mail: huli@radonc.wustl.edu [Department of Radiation Oncology, Washington University, St. Louis, Missouri 63110 (United States); Anastasio, Mark A. [Department of Biomedical Engineering, Washington University, St. Louis, Missouri 63110 (United States); Low, Daniel A. [Department of Radiation Oncology, University of California Los Angeles, Los Angeles, California 90095 (United States)

    2015-02-15

    Purpose: One of the most critical steps in radiation therapy treatment is accurate tumor and critical organ-at-risk (OAR) contouring. Both manual and automated contouring processes are prone to errors and to a large degree of inter- and intraobserver variability. These are often due to the limitations of imaging techniques in visualizing human anatomy as well as to inherent anatomical variability among individuals. Physicians/physicists have to reverify all the radiation therapy contours of every patient before using them for treatment planning, which is tedious, laborious, and still not an error-free process. In this study, the authors developed a general strategy based on novel geometric attribute distribution (GAD) models to automatically detect radiation therapy OAR contouring errors and facilitate the current clinical workflow. Methods: Considering the radiation therapy structures’ geometric attributes (centroid, volume, and shape), the spatial relationship of neighboring structures, as well as anatomical similarity of individual contours among patients, the authors established GAD models to characterize the interstructural centroid and volume variations, and the intrastructural shape variations of each individual structure. The GAD models are scalable and deformable, and constrained by their respective principal attribute variations calculated from training sets with verified OAR contours. A new iterative weighted GAD model-fitting algorithm was developed for contouring error detection. Receiver operating characteristic (ROC) analysis was employed in a unique way to optimize the model parameters to satisfy clinical requirements. A total of forty-four head-and-neck patient cases, each of which includes nine critical OAR contours, were utilized to demonstrate the proposed strategy. Twenty-nine out of these forty-four patient cases were utilized to train the inter- and intrastructural GAD models. These training data and the remaining fifteen testing data sets
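
    One way to picture the attribute-distribution idea is as population-based outlier detection on geometric features. The sketch below is a strong simplification (the GAD models in these records are scalable, deformable, and iteratively fitted); the attribute vector, the use of Mahalanobis distance, and the cutoff are illustrative assumptions:

        import numpy as np

        def flag_contour(attributes, train_mean, train_cov, cutoff=16.27):
            """Flag a contour whose geometric attributes (e.g. centroid offset,
            volume, shape coefficients) lie far from the training population;
            cutoff is an illustrative chi-square-style threshold."""
            d = np.asarray(attributes, float) - train_mean
            m2 = float(d @ np.linalg.inv(train_cov) @ d)  # squared Mahalanobis distance
            return m2 > cutoff  # True = probable contouring error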

  8. An Energy Efficient Adaptive Sampling Algorithm in a Sensor Network for Automated Water Quality Monitoring.

    Science.gov (United States)

    Shu, Tongxin; Xia, Min; Chen, Jiahong; Silva, Clarence de

    2017-11-05

    Power management is crucial in the monitoring of a remote environment, especially when long-term monitoring is needed. Renewable energy sources such as solar and wind may be harvested to sustain a monitoring system. However, without proper power management, equipment within the monitoring system may become nonfunctional and, as a consequence, the data or events captured during the monitoring process will become inaccurate as well. This paper develops and applies a novel adaptive sampling algorithm for power management in the automated monitoring of the quality of water in an extensive and remote aquatic environment. Based on the data collected online using sensor nodes, a data-driven adaptive sampling algorithm (DDASA) is developed for improving the power efficiency while ensuring the accuracy of sampled data. The developed algorithm is evaluated using two distinct key parameters, which are dissolved oxygen (DO) and turbidity. It is found that by dynamically changing the sampling frequency, the battery lifetime can be effectively prolonged while maintaining a required level of sampling accuracy. According to the simulation results, compared to a fixed sampling rate, approximately 30.66% of the battery energy can be saved for three months of continuous water quality monitoring. Using the same dataset to compare with a traditional adaptive sampling algorithm (ASA), while achieving around the same Normalized Mean Error (NME), DDASA is superior, saving 5.31% more battery energy.
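
    The core of data-driven adaptive sampling can be sketched as an interval update driven by the recent rate of change of the measured parameter; the update rule and gain below are assumptions for illustration, not the exact DDASA formula:

        def next_interval(history, base_interval, min_interval, max_interval, gain=1.0):
            """Shorten the sampling interval when the signal changes quickly,
            lengthen it when the signal is stable."""
            if len(history) < 2:
                return base_interval
            rate = abs(history[-1] - history[-2]) / base_interval  # recent change rate
            interval = base_interval / (1.0 + gain * rate)         # faster change -> denser sampling
            return max(min_interval, min(max_interval, interval))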

  9. Low power multi-camera system and algorithms for automated threat detection

    Science.gov (United States)

    Huber, David J.; Khosla, Deepak; Chen, Yang; Van Buer, Darrel J.; Martin, Kevin

    2013-05-01

    A key to any robust automated surveillance system is continuous, wide field-of-view sensor coverage and highly accurate target detection algorithms. Newer systems typically employ an array of multiple fixed cameras that provide individual data streams, each of which is managed by its own processor. This array can continuously capture the entire field of view, but collecting all of the data and running the back-end detection algorithms consumes additional power and increases the size, weight, and power (SWaP) of the package. This is often unacceptable, as many potential surveillance applications have strict system SWaP requirements. This paper describes a wide field-of-view video system that employs multiple fixed cameras and exhibits low SWaP without compromising the target detection rate. We cycle through the sensors, fetch a fixed number of frames, and process them through a modified target detection algorithm. During this time, the other sensors remain powered down, which reduces the required hardware and power consumption of the system. We show that the resulting gaps in coverage and irregular frame rate do not affect the detection accuracy of the underlying algorithms. This reduces the power of an N-camera system by up to approximately N-fold compared to baseline normal operation. This work was applied to Phase 2 of the DARPA Cognitive Technology Threat Warning System (CT2WS) program and used during field testing.
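
    The duty-cycling scheme can be sketched as a round-robin generator; the camera interface (power_on/power_off/grab/camera_id) and the detect callback are assumptions for illustration, not the CT2WS hardware API:

        import itertools

        def cycle_cameras(cameras, frames_per_burst, detect):
            """Power one camera at a time, grab a fixed burst of frames, run
            detection, then move on; idle sensors stay dark to save power."""
            for cam in itertools.cycle(cameras):
                cam.power_on()
                frames = [cam.grab() for _ in range(frames_per_burst)]
                cam.power_off()
                yield cam.camera_id, detect(frames)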

  10. An Energy Efficient Adaptive Sampling Algorithm in a Sensor Network for Automated Water Quality Monitoring

    Directory of Open Access Journals (Sweden)

    Tongxin Shu

    2017-11-01

    Power management is crucial in the monitoring of a remote environment, especially when long-term monitoring is needed. Renewable energy sources such as solar and wind may be harvested to sustain a monitoring system. However, without proper power management, equipment within the monitoring system may become nonfunctional and, as a consequence, the data or events captured during the monitoring process will become inaccurate as well. This paper develops and applies a novel adaptive sampling algorithm for power management in the automated monitoring of the quality of water in an extensive and remote aquatic environment. Based on the data collected online using sensor nodes, a data-driven adaptive sampling algorithm (DDASA) is developed for improving the power efficiency while ensuring the accuracy of sampled data. The developed algorithm is evaluated using two distinct key parameters, which are dissolved oxygen (DO) and turbidity. It is found that by dynamically changing the sampling frequency, the battery lifetime can be effectively prolonged while maintaining a required level of sampling accuracy. According to the simulation results, compared to a fixed sampling rate, approximately 30.66% of the battery energy can be saved for three months of continuous water quality monitoring. Using the same dataset to compare with a traditional adaptive sampling algorithm (ASA), while achieving around the same Normalized Mean Error (NME), DDASA is superior, saving 5.31% more battery energy.

  11. Optimal Solution for VLSI Physical Design Automation Using Hybrid Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    I. Hameem Shanavas

    2014-01-01

    In optimization of VLSI physical design, area minimization and interconnect-length minimization are important objectives in the physical design automation of very large scale integration chips. Minimizing area and interconnect length scales down the size of integrated chips. To meet these objectives, it is necessary to find an optimal solution for physical design components such as partitioning, floorplanning, placement, and routing. This work optimizes benchmark circuits across these physical design components using a hierarchical approach based on evolutionary algorithms. The goals of minimizing delay in partitioning, silicon area in floorplanning, layout area in placement, and wirelength in routing also influence other criteria such as power, clock, speed, and cost. A hybrid evolutionary algorithm, which includes one or more local search steps within its evolutionary cycles, is applied in each phase to minimize area and interconnect length. This approach combines a genetic algorithm and simulated annealing in a hierarchical design and can quickly produce optimal solutions for the popular benchmarks.
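
    The hybrid (memetic) scheme described here, a genetic algorithm with a local-search step such as a short simulated-annealing run inside each generation, can be sketched as follows; the operators and parameters are placeholders, not the paper's encodings of partitioning, floorplanning, placement, or routing:

        import random

        def memetic_generation(population, cost, crossover, mutate, local_search, elite=2):
            """One generation: keep the best layouts, breed the rest, and refine
            each child with local search (lower cost = better)."""
            ranked = sorted(population, key=cost)
            next_gen = ranked[:elite]
            while len(next_gen) < len(population):
                p1, p2 = random.sample(ranked[:max(2, len(ranked) // 2)], 2)
                child = mutate(crossover(p1, p2))
                next_gen.append(local_search(child))  # e.g. a short annealing run
            return next_gen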

  12. Kollegial supervision

    DEFF Research Database (Denmark)

    Andersen, Ole Dibbern; Petersson, Erling

    The publication examines how collegial supervision can be organized in an educational institution.

  13. Synthesis Study on Transitions in Signal Infrastructure and Control Algorithms for Connected and Automated Transportation

    Energy Technology Data Exchange (ETDEWEB)

    Aziz, H. M. Abdul [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Wang, Hong [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Young, Stan [National Renewable Energy Lab. (NREL), Golden, CO (United States); Sperling, Joshua [National Renewable Energy Lab. (NREL), Golden, CO (United States); Beck, John [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2017-06-01

    Documenting the existing state of practice is an initial step in developing future control infrastructure to be co-deployed for a heterogeneous mix of connected and automated vehicles and human drivers while leveraging benefits to safety, congestion, and energy. With advances in information technology and extensive deployment of connected and automated vehicle technology anticipated over the coming decades, cities globally are making efforts to plan and prepare for these transitions. CAVs not only offer opportunities to improve transportation systems through enhanced safety and more efficient vehicle operations; there are also significant needs in terms of exploring how best to leverage vehicle-to-vehicle (V2V), vehicle-to-infrastructure (V2I), and vehicle-to-everything (V2X) technology. Both the Connected Vehicle (CV) and Connected and Automated Vehicle (CAV) paradigms feature bi-directional connectivity and share similar applications in terms of signal control algorithms and infrastructure implementation. The discussion in our synthesis study assumes the CAV/CV context, where connectivity exists with or without automated vehicles. Our synthesis study explores the current state of signal control algorithms and infrastructure, reports the completed and newly proposed CV/CAV deployment studies regarding signal control schemes, reviews the deployment costs for CAV/AV signal infrastructure, and concludes with a discussion of opportunities, such as detector-free signal control schemes and dynamic performance management for intersections, and challenges, such as dependency on market adaptation and the need to build a fault-tolerant signal system deployment in a CAV/CV environment. The study will serve as an initial critical assessment of existing signal control infrastructure (devices, control instruments, and firmware) and control schemes (actuated, adaptive, and coordinated green wave). Also, the report will help to identify the future needs for the signal

  14. An automated phase correction algorithm for retrieving permittivity and permeability of electromagnetic metamaterials

    Directory of Open Access Journals (Sweden)

    Z. X. Cao

    2014-06-01

    To retrieve the complex-valued effective permittivity and permeability of electromagnetic metamaterials (EMMs) based on resonant effects from scattering parameters, the use of a complex logarithmic function is unavoidable. When complex values are expressed in terms of magnitude and phase, an infinite number of phase angles is permissible due to the multi-valued property of complex logarithmic functions. Special attention needs to be paid to ensure the continuity of the effective permittivity and permeability of lossy metamaterials as the frequency sweeps. In this paper, an automated phase correction (APC) algorithm is proposed to properly trace and compensate the phase angles of the complex logarithmic function, which may experience abrupt phase jumps near the resonant frequency region of the concerned EMMs, and hence to ensure the continuity of the effective optical properties of lossy metamaterials. The algorithm is then verified by extracting effective optical properties from the simulated scattering parameters of four different types of metamaterial media: a cut-wire cell array, a split-ring resonator (SRR) cell array, an electric-LC (E-LC) resonator cell array, and a combined SRR and wire cell array. The results demonstrate that the proposed algorithm is highly accurate and effective.
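
    The branch-selection problem can be illustrated with a standard phase-unwrapping step: track the phase of the transmission coefficient continuously across the frequency sweep so the complex logarithm stays on one branch. This is only the generic core; the APC algorithm in the paper adds resonance-specific corrections that are not reproduced here:

        import numpy as np

        def branch_consistent_log(s_params):
            """Complex log with the phase unwrapped along the frequency sweep;
            np.unwrap compensates jumps larger than pi between samples."""
            s = np.asarray(s_params, dtype=complex)
            phase = np.unwrap(np.angle(s))
            return np.log(np.abs(s)) + 1j * phase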

  15. A Hybrid Genetic Programming Algorithm for Automated Design of Dispatching Rules.

    Science.gov (United States)

    Nguyen, Su; Mei, Yi; Xue, Bing; Zhang, Mengjie

    2018-06-04

    Designing effective dispatching rules for production systems is a difficult and time-consuming task if it is done manually. In the last decade, the growth of computing power, advanced machine learning, and optimisation techniques has made the automated design of dispatching rules possible, and automatically discovered rules are competitive with or outperform existing rules developed by researchers. Genetic programming is one of the most popular approaches to discovering dispatching rules in the literature, especially for complex production systems. However, the large heuristic search space may restrict genetic programming from finding near-optimal dispatching rules. This paper develops a new hybrid genetic programming algorithm for dynamic job shop scheduling based on a new representation, a new local search heuristic, and efficient fitness evaluators. Experiments show that the new method is effective regarding the quality of evolved rules. Moreover, the evolved rules are also significantly smaller and contain more relevant attributes.

  16. A fully automated contour detection algorithm: the preliminary step for scatter and attenuation compensation in SPECT

    International Nuclear Information System (INIS)

    Younes, R.B.; Mas, J.; Bidet, R.

    1988-01-01

    Contour detection is an important step in information extraction from nuclear medicine images. In order to perform accurate quantitative studies in single photon emission computed tomography (SPECT), a new procedure is described which can rapidly derive the best-fit contour of an attenuated medium. Some authors evaluate the influence of the detected contour on the reconstructed images with various attenuation correction techniques; most of the methods are strongly affected by inaccurately detected contours. This approach uses the Compton window to redetermine the convex contour; it seems to be simpler and more practical in clinical SPECT studies. The main advantages of this procedure are the high speed of computation, the accuracy of the contour found, and the programme's automation. Results obtained using computer-simulated and real phantoms or clinical studies demonstrate the reliability of the present algorithm. (orig.)

  17. The AUDANA algorithm for automated protein 3D structure determination from NMR NOE data

    International Nuclear Information System (INIS)

    Lee, Woonghee; Petit, Chad M.; Cornilescu, Gabriel; Stark, Jaime L.; Markley, John L.

    2016-01-01

    We introduce AUDANA (Automated Database-Assisted NOE Assignment), an algorithm for determining three-dimensional structures of proteins from NMR data that automates the assignment of 3D-NOE spectra, generates distance constraints, and conducts iterative high temperature molecular dynamics and simulated annealing. The protein sequence, chemical shift assignments, and NOE spectra are the only required inputs. Distance constraints generated automatically from ambiguously assigned NOE peaks are validated during the structure calculation against information from an enlarged version of the freely available PACSY database that incorporates information on protein structures deposited in the Protein Data Bank (PDB). This approach yields robust sets of distance constraints and 3D structures. We evaluated the performance of AUDANA with input data for 14 proteins ranging in size from 6 to 25 kDa that had 27–98 % sequence identity to proteins in the database. In all cases, the automatically calculated 3D structures passed stringent validation tests. Structures were determined with and without database support. In 9/14 cases, database support improved the agreement with manually determined structures in the PDB and in 11/14 cases, database support lowered the r.m.s.d. of the family of 20 structural models.

  18. The AUDANA algorithm for automated protein 3D structure determination from NMR NOE data

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Woonghee, E-mail: whlee@nmrfam.wisc.edu [University of Wisconsin-Madison, National Magnetic Resonance Facility at Madison and Biochemistry Department (United States); Petit, Chad M. [University of Alabama at Birmingham, Department of Biochemistry and Molecular Genetics (United States); Cornilescu, Gabriel; Stark, Jaime L.; Markley, John L., E-mail: markley@nmrfam.wisc.edu [University of Wisconsin-Madison, National Magnetic Resonance Facility at Madison and Biochemistry Department (United States)

    2016-06-15

    We introduce AUDANA (Automated Database-Assisted NOE Assignment), an algorithm for determining three-dimensional structures of proteins from NMR data that automates the assignment of 3D-NOE spectra, generates distance constraints, and conducts iterative high temperature molecular dynamics and simulated annealing. The protein sequence, chemical shift assignments, and NOE spectra are the only required inputs. Distance constraints generated automatically from ambiguously assigned NOE peaks are validated during the structure calculation against information from an enlarged version of the freely available PACSY database that incorporates information on protein structures deposited in the Protein Data Bank (PDB). This approach yields robust sets of distance constraints and 3D structures. We evaluated the performance of AUDANA with input data for 14 proteins ranging in size from 6 to 25 kDa that had 27–98 % sequence identity to proteins in the database. In all cases, the automatically calculated 3D structures passed stringent validation tests. Structures were determined with and without database support. In 9/14 cases, database support improved the agreement with manually determined structures in the PDB and in 11/14 cases, database support lowered the r.m.s.d. of the family of 20 structural models.

  19. A semi-automated algorithm for hypothalamus volumetry in 3 Tesla magnetic resonance images.

    Science.gov (United States)

    Wolff, Julia; Schindler, Stephanie; Lucas, Christian; Binninger, Anne-Sophie; Weinrich, Luise; Schreiber, Jan; Hegerl, Ulrich; Möller, Harald E; Leitzke, Marco; Geyer, Stefan; Schönknecht, Peter

    2018-07-30

    The hypothalamus, a small diencephalic gray matter structure, is part of the limbic system. Volumetric changes of this structure occur in psychiatric diseases; therefore, there is increasing interest in precise volumetry. Based on our detailed volumetry algorithm for 7 Tesla magnetic resonance imaging (MRI), we developed a method for 3 Tesla MRI, adopting anatomical landmarks and working in triplanar view. We overlaid T1-weighted MR images with gray matter tissue probability maps to combine anatomical information with tissue class segmentation. Then, we outlined regions of interest (ROIs) that covered potential hypothalamus voxels. Within these ROIs, a seed-growing technique helped define the hypothalamic volume using gray matter probabilities from the tissue probability maps. This yielded a semi-automated method with short processing times of 20-40 min per hypothalamus. In the MRIs of ten subjects, reliabilities were determined as intraclass correlations (ICC) and volume overlaps in percent. Three raters achieved very good intra-rater reliabilities (ICC 0.82-0.97) and good inter-rater reliabilities (ICC 0.78 and 0.82). Overlaps of intra- and inter-rater runs were very good (≥ 89.7%). We present a fast, semi-automated method for in vivo hypothalamus volumetry in 3 Tesla MRI. Copyright © 2018 Elsevier B.V. All rights reserved.
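
    The seed-growing step on a gray matter probability map can be sketched as a 6-connected flood fill with a probability cutoff; the 0.5 threshold and the interface below are illustrative, not the published protocol:

        import numpy as np
        from collections import deque

        def grow_region(prob_map, seed, threshold=0.5):
            """Grow a region from a seed voxel, adding 6-connected neighbors
            whose gray matter probability exceeds the threshold."""
            region = np.zeros(prob_map.shape, dtype=bool)
            queue = deque([tuple(seed)])
            while queue:
                voxel = queue.popleft()
                if region[voxel] or prob_map[voxel] < threshold:
                    continue
                region[voxel] = True
                for axis in range(3):
                    for step in (-1, 1):
                        nb = list(voxel)
                        nb[axis] += step
                        if 0 <= nb[axis] < prob_map.shape[axis]:
                            queue.append(tuple(nb))
            return region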

  20. Methodology, Algorithms, and Emerging Tool for Automated Design of Intelligent Integrated Multi-Sensor Systems

    Directory of Open Access Journals (Sweden)

    Andreas König

    2009-11-01

    The emergence of novel sensing elements, computing nodes, wireless communication, and integration technology provides unprecedented possibilities for the design and application of intelligent systems. Each new application system must be designed from scratch, employing sophisticated methods ranging from conventional signal processing to computational intelligence. Currently, a significant part of this overall algorithmic chain of the computational system model still has to be assembled manually by experienced designers in a time- and labor-consuming process. In this research work, this challenge is taken up, and a methodology and algorithms for the automated design of intelligent, integrated, and resource-aware multi-sensor systems employing multi-objective evolutionary computation are introduced. The proposed methodology tackles the challenge of rapid prototyping of such systems under realization constraints and additionally includes features of system-instance-specific self-correction for sustained operation in large volumes and in dynamically changing environments. The extension of these concepts to reconfigurable hardware platforms yields so-called self-x sensor systems, standing for, e.g., self-monitoring, -calibrating, -trimming, and -repairing/-healing systems. Selected experimental results prove the applicability and effectiveness of our proposed methodology and emerging tool. With our approach, competitive results were achieved with regard to classification accuracy, flexibility, and design speed under additional design constraints.

  1. Novel automated inversion algorithm for temperature reconstruction using gas isotopes from ice cores

    Directory of Open Access Journals (Sweden)

    M. Döring

    2018-06-01

    Greenland's past temperature history can be reconstructed by forcing the output of a firn-densification and heat-diffusion model to fit multiple gas-isotope data (δ15N, δ40Ar, or δ15Nexcess) extracted from ancient air in Greenland ice cores, using published accumulation-rate (Acc) datasets. We present here a novel methodology to solve this inverse problem, by designing a fully automated algorithm. To demonstrate the performance of this novel approach, we begin by intentionally constructing synthetic temperature histories and associated δ15N datasets, mimicking real Holocene data, which we use as true values (targets) to be compared to the output of the algorithm. This allows us to quantify uncertainties originating from the algorithm itself. The presented approach is completely automated and therefore minimizes the subjective impact of manual parameter tuning, leading to reproducible temperature estimates. In contrast to many other ice-core-based temperature reconstruction methods, the presented approach is completely independent of ice-core stable-water isotopes, providing the opportunity to validate water-isotope-based reconstructions or reconstructions where water isotopes are used together with δ15N or δ40Ar. We solve the inverse problem T(δ15N, Acc) by using a combination of a Monte Carlo based iterative approach and the analysis of remaining mismatches between modelled and target data, based on cubic-spline filtering of random numbers and the laboratory-determined temperature sensitivity for nitrogen isotopes. Additionally, the presented reconstruction approach was tested by fitting measured δ40Ar and δ15Nexcess data, which also led to robust agreement between modelled and measured data. The obtained final mismatches follow a symmetric standard-distribution function. For the study on synthetic data, 95 % of the mismatches compared to the synthetic target data lie in an envelope between 3.0 and 6.3 permeg for δ15N and 0.23 to 0

  2. An Automated Defect Prediction Framework using Genetic Algorithms: A Validation of Empirical Studies

    Directory of Open Access Journals (Sweden)

    Juan Murillo-Morera

    2016-05-01

    Today, it is common for software projects to collect measurement data through development processes. With these data, defect prediction software can try to estimate the defect proneness of a software module, with the objective of assisting and guiding software practitioners. With timely and accurate defect predictions, practitioners can focus their limited testing resources on higher-risk areas. This paper reports the results of three empirical studies that use an automated genetic defect prediction framework. This framework generates and compares different learning schemes (preprocessing + attribute selection + learning algorithms) and selects the best one using a genetic algorithm, with the objective of estimating the defect proneness of a software module. The first empirical study is a performance comparison of our framework with the most important framework in the literature. The second empirical study is a performance and runtime comparison between our framework and an exhaustive framework. The third empirical study is a sensitivity analysis, and it is our main contribution in this paper. Performance of the software defect prediction models (measured as AUC, Area Under the Curve) was validated using the NASA-MDP and PROMISE data sets. Seventeen data sets from NASA-MDP (13) and PROMISE (4) projects were analyzed by running an NxM-fold cross-validation. A genetic algorithm was used to select the components of the learning schemes automatically, and to assess and report the results. Our results showed similar performance between the frameworks; our framework achieved better runtime than the exhaustive framework. Finally, we report the best configuration according to the sensitivity analysis.
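
    A "learning scheme" in this sense is a composed pipeline. The sketch below builds one candidate scheme and shows how it would be scored with AUC; the specific components are illustrative stand-ins (a genetic algorithm would search over such schemes), and X, y denote a project's feature matrix and defect labels:

        from sklearn.pipeline import Pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.feature_selection import SelectKBest, f_classif
        from sklearn.naive_bayes import GaussianNB
        from sklearn.model_selection import cross_val_score

        # One candidate scheme = preprocessing + attribute selection + learner.
        scheme = Pipeline([
            ("preprocess", StandardScaler()),
            ("select", SelectKBest(f_classif, k=10)),
            ("learn", GaussianNB()),
        ])
        # Scoring step, given a feature matrix X and defect labels y:
        # auc = cross_val_score(scheme, X, y, cv=10, scoring="roc_auc").mean()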

  3. An optimized routing algorithm for the automated assembly of standard multimode ribbon fibers in a full-mesh optical backplane

    Science.gov (United States)

    Basile, Vito; Guadagno, Gianluca; Ferrario, Maddalena; Fassi, Irene

    2018-03-01

    In this paper, a parametric, modular, and scalable algorithm enabling fully automated assembly of a backplane fiber-optic interconnection circuit is presented. This approach guarantees the optimization of the optical fiber routing inside the backplane with respect to specific criteria (i.e., bending power losses), addressing both transmission performance and overall cost. Graph theory is exploited to simplify the complexity of the NxN full-mesh backplane interconnection topology, first into N independent sub-circuits and then, recursively, into a limited number of loops that are easier to generate. Afterwards, the proposed algorithm selects a set of geometrical and architectural parameters whose optimization identifies the optimal fiber-optic routing for each sub-circuit of the backplane. The topological and numerical information provided by the algorithm is then exploited to control a robot that performs the automated assembly of the backplane sub-circuits. The proposed routing algorithm can be extended to any array architecture and number of connections thanks to its modularity and scalability. Finally, the algorithm has been exploited for the automated assembly of an 8x8 optical backplane realized with standard multimode (MM) 12-fiber ribbons.

  4. TVR-DART: A More Robust Algorithm for Discrete Tomography From Limited Projection Data With Automated Gray Value Estimation.

    Science.gov (United States)

    Zhuge, Xiaodong; Palenstijn, Willem Jan; Batenburg, Kees Joost

    2016-01-01

    In this paper, we present a novel iterative reconstruction algorithm for discrete tomography (DT) named total variation regularized discrete algebraic reconstruction technique (TVR-DART) with automated gray value estimation. This algorithm is more robust and automated than the original DART algorithm, and is aimed at imaging of objects consisting of only a few different material compositions, each corresponding to a different gray value in the reconstruction. By exploiting two types of prior knowledge of the scanned object simultaneously, TVR-DART solves the discrete reconstruction problem within an optimization framework inspired by compressive sensing to steer the current reconstruction toward a solution with the specified number of discrete gray values. The gray values and the thresholds are estimated as the reconstruction improves through iterations. Extensive experiments from simulated data, experimental μCT, and electron tomography data sets show that TVR-DART is capable of providing more accurate reconstruction than existing algorithms under noisy conditions from a small number of projection images and/or from a small angular range. Furthermore, the new algorithm requires less effort on parameter tuning compared with the original DART algorithm. With TVR-DART, we aim to provide the tomography society with an easy-to-use and robust algorithm for DT.

  5. Topside Ionogram Scaler With True Height Algorithm (TOPIST): Automated processing of ISIS topside ionograms

    Science.gov (United States)

    Bilitza, Dieter; Huang, Xueqin; Reinisch, Bodo W.; Benson, Robert F.; Hills, H. Kent; Schar, William B.

    2004-02-01

    The United States/Canadian ISIS-1 and ISIS-2 satellites collected several million topside ionograms in the 1960s and 1970s with a multinational network of ground stations that provided good global coverage. However, processing of these ionograms into electron density profiles required time-consuming manual scaling of the traces from the analog ionograms, and as a result, only a few percent of the ionograms had been processed into electron density profiles. In recent years an effort began to digitize the analog recordings to prepare the ionograms for computerized analysis. As of November 2002, approximately 390,000 ISIS-1 and ISIS-2 digital topside-sounder ionograms have been produced. The Topside Ionogram Scaler With True Height Algorithm (TOPIST) program was developed for the automated scaling of the echo traces and for the inversion of these traces into topside electron density profiles. The program is based on the techniques that have been successfully applied in the analysis of ground-based Digisonde ionograms. The TOPIST software also includes an "editing option" for manual scaling of the more difficult ionograms, which could not be scaled during the automated TOPIST run. TOPIST is now successfully scaling ~60% of the ISIS ionograms, and the electron density profiles are available through the online archive of the National Space Science Data Center at ftp://nssdcftp.gsfc.nasa.gov/spacecraft_data/isis/topside_sounder. This data restoration effort is producing a unique global database of topside electron densities over more than one solar cycle, which will be of particular importance for improvements of topside ionosphere models, especially the International Reference Ionosphere.

  6. Automated beam placement for breast radiotherapy using a support vector machine based algorithm

    International Nuclear Information System (INIS)

    Zhao Xuan; Kong, Dewen; Jozsef, Gabor; Chang, Jenghwa; Wong, Edward K.; Formenti, Silvia C.; Wang Yao

    2012-01-01

    Purpose: To develop an automated beam placement technique for whole breast radiotherapy using tangential beams. We seek to find optimal parameters for tangential beams to cover the whole ipsilateral breast (WB) and minimize the dose to the organs at risk (OARs). Methods: A support vector machine (SVM) based method is proposed to determine the optimal posterior plane of the tangential beams. Relative significances of including/avoiding the volumes of interest are incorporated into the cost function of the SVM. After finding the optimal 3-D plane that separates the whole breast (WB) and the included clinical target volumes (CTVs) from the OARs, the gantry angle, collimator angle, and posterior jaw size of the tangential beams are derived from the separating plane equation. Dosimetric measures of the treatment plans determined by the automated method are compared with those obtained by manual beam placement by the physicians. The method can be further extended to use multileaf collimator (MLC) blocking by optimizing posterior MLC positions. Results: The plans for 36 patients (23 prone- and 13 supine-treated) with left breast cancer were analyzed. Our algorithm reduced the volume of the heart that receives >500 cGy dose (V5) from 2.7 to 1.7 cm³ (p = 0.058) on average and the volume of the ipsilateral lung that receives >1000 cGy dose (V10) from 55.2 to 40.7 cm³ (p = 0.0013). The dose coverage as measured by volume receiving >95% of the prescription dose (V95%) of the WB without a 5 mm superficial layer decreases by only 0.74% (p = 0.0002) and the V95% for the tumor bed with 1.5 cm margin remains unchanged. Conclusions: This study has demonstrated the feasibility of using a SVM-based algorithm to determine optimal beam placement without a physician's intervention. The proposed method reduced the dose to OARs, especially for supine-treated patients, without any relevant degradation of dose homogeneity and coverage in general.
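
    The geometric core, fitting a weighted linear separating plane between target and OAR voxels and reading a beam parameter off its normal, can be sketched as follows; the classifier choice, the clinical-significance weights, and the angle extraction are simplified illustrations of the paper's approach, not its implementation:

        import numpy as np
        from sklearn.svm import LinearSVC

        def tangential_plane(points, labels, weights):
            """Fit w.x + b = 0 separating target voxels (+1) from OAR voxels (-1);
            weights encode the relative significance of inclusion/avoidance."""
            clf = LinearSVC(C=1.0)
            clf.fit(points, labels, sample_weight=weights)
            w, b = clf.coef_[0], clf.intercept_[0]
            gantry_deg = np.degrees(np.arctan2(w[1], w[0]))  # normal projected to the axial plane
            return w, b, gantry_deg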

  7. Automation of RELAP5 input calibration and code validation using genetic algorithm

    International Nuclear Information System (INIS)

    Phung, Viet-Anh; Kööp, Kaspar; Grishchenko, Dmitry; Vorobyev, Yury; Kudinov, Pavel

    2016-01-01

    Highlights: • Automated input calibration and code validation using genetic algorithm is presented. • Predictions generally overlap experiments for individual system response quantities (SRQs). • It was not possible to predict simultaneously experimental maximum flow rate and oscillation period. • Simultaneous consideration of multiple SRQs is important for code validation. - Abstract: Validation of system thermal-hydraulic codes is an important step in application of the codes to reactor safety analysis. The goal of the validation process is to determine how well a code can represent physical reality. This is achieved by comparing predicted and experimental system response quantities (SRQs), taking into account experimental and modelling uncertainties. Parameters which are required for the code input but not measured directly in the experiment can become an important source of uncertainty in the code validation process. Quantification of such parameters is often called input calibration. Calibration and uncertainty quantification may become challenging tasks when the number of calibrated input parameters and SRQs is large and dependencies between them are complex. If only engineering judgment is employed in the process, the outcome can be prone to so-called “user effects”. The goal of this work is to develop an automated approach to input calibration and RELAP5 code validation against data on two-phase natural circulation flow instability. Multiple SRQs are used in both calibration and validation. In the input calibration, we used a genetic algorithm (GA), a heuristic global optimization method, in order to minimize the discrepancy between experimental and simulation data by identifying optimal combinations of uncertain input parameters in the calibration process. We demonstrate the importance of the proper selection of SRQs and respective normalization and weighting factors in the fitness function. In the code validation, we used maximum flow rate as the

  8. Automation of RELAP5 input calibration and code validation using genetic algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Phung, Viet-Anh, E-mail: vaphung@kth.se [Division of Nuclear Power Safety, Royal Institute of Technology, Roslagstullsbacken 21, 10691 Stockholm (Sweden); Kööp, Kaspar, E-mail: kaspar@safety.sci.kth.se [Division of Nuclear Power Safety, Royal Institute of Technology, Roslagstullsbacken 21, 10691 Stockholm (Sweden); Grishchenko, Dmitry, E-mail: dmitry@safety.sci.kth.se [Division of Nuclear Power Safety, Royal Institute of Technology, Roslagstullsbacken 21, 10691 Stockholm (Sweden); Vorobyev, Yury, E-mail: yura3510@gmail.com [National Research Center “Kurchatov Institute”, Kurchatov square 1, Moscow 123182 (Russian Federation); Kudinov, Pavel, E-mail: pavel@safety.sci.kth.se [Division of Nuclear Power Safety, Royal Institute of Technology, Roslagstullsbacken 21, 10691 Stockholm (Sweden)

    2016-04-15

    Highlights: • Automated input calibration and code validation using genetic algorithm is presented. • Predictions generally overlap experiments for individual system response quantities (SRQs). • It was not possible to predict simultaneously experimental maximum flow rate and oscillation period. • Simultaneous consideration of multiple SRQs is important for code validation. - Abstract: Validation of system thermal-hydraulic codes is an important step in application of the codes to reactor safety analysis. The goal of the validation process is to determine how well a code can represent physical reality. This is achieved by comparing predicted and experimental system response quantities (SRQs), taking into account experimental and modelling uncertainties. Parameters which are required for the code input but not measured directly in the experiment can become an important source of uncertainty in the code validation process. Quantification of such parameters is often called input calibration. Calibration and uncertainty quantification may become challenging tasks when the number of calibrated input parameters and SRQs is large and dependencies between them are complex. If only engineering judgment is employed in the process, the outcome can be prone to so-called “user effects”. The goal of this work is to develop an automated approach to input calibration and RELAP5 code validation against data on two-phase natural circulation flow instability. Multiple SRQs are used in both calibration and validation. In the input calibration, we used a genetic algorithm (GA), a heuristic global optimization method, in order to minimize the discrepancy between experimental and simulation data by identifying optimal combinations of uncertain input parameters in the calibration process. We demonstrate the importance of the proper selection of SRQs and respective normalization and weighting factors in the fitness function. In the code validation, we used maximum flow rate as the
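
    The calibration objective described in both records reduces to a weighted, normalized multi-SRQ mismatch that the genetic algorithm minimizes. A minimal sketch (run_code stands in for the RELAP5 execution and post-processing, and the normalization is one plausible choice, not the paper's exact fitness function):

        def fitness(params, run_code, experiments, weights):
            """Sum of weighted relative discrepancies over all system response
            quantities (SRQs); assumes nonzero measured values."""
            predictions = run_code(params)  # dict: SRQ name -> predicted value
            total = 0.0
            for name, measured in experiments.items():
                total += weights[name] * abs(predictions[name] - measured) / abs(measured)
            return total  # the GA minimizes this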

  9. Automated Algorithms for Quantum-Level Accuracy in Atomistic Simulations: LDRD Final Report.

    Energy Technology Data Exchange (ETDEWEB)

    Thompson, Aidan Patrick; Schultz, Peter Andrew; Crozier, Paul; Moore, Stan Gerald; Swiler, Laura Painton; Stephens, John Adam; Trott, Christian Robert; Foiles, Stephen Martin; Tucker, Garritt J. (Drexel University)

    2014-09-01

    This report summarizes the result of LDRD project 12-0395, titled "Automated Algorithms for Quantum-level Accuracy in Atomistic Simulations." During the course of this LDRD, we have developed an interatomic potential for solids and liquids called Spectral Neighbor Analysis Potential (SNAP). The SNAP potential has a very general form and uses machine-learning techniques to reproduce the energies, forces, and stress tensors of a large set of small configurations of atoms, which are obtained using high-accuracy quantum electronic structure (QM) calculations. The local environment of each atom is characterized by a set of bispectrum components of the local neighbor density projected on to a basis of hyperspherical harmonics in four dimensions. The SNAP coefficients are determined using weighted least-squares linear regression against the full QM training set. This allows the SNAP potential to be fit in a robust, automated manner to large QM data sets using many bispectrum components. The calculation of the bispectrum components and the SNAP potential are implemented in the LAMMPS parallel molecular dynamics code. Global optimization methods in the DAKOTA software package are used to seek out good choices of hyperparameters that define the overall structure of the SNAP potential. FitSnap.py, a Python-based software package interfacing to both LAMMPS and DAKOTA, is used to formulate the linear regression problem, solve it, and analyze the accuracy of the resultant SNAP potential. We describe a SNAP potential for tantalum that accurately reproduces a variety of solid and liquid properties. Most significantly, in contrast to existing tantalum potentials, SNAP correctly predicts the Peierls barrier for screw dislocation motion. We also present results from SNAP potentials generated for indium phosphide (InP) and silica (SiO2). We describe efficient algorithms for calculating SNAP forces and energies in molecular dynamics simulations using massively parallel computers.

  10. The Application of the Analytic Hierarchy Process and a New Correlation Algorithm to Urban Construction and Supervision Using Multi-Source Government Data in Tianjin

    Directory of Open Access Journals (Sweden)

    Shaoyi Wang

    2018-02-01

    As the era of big data approaches, big data has attracted increasing attention from researchers. Various types of studies have been conducted, focusing particularly on the management, organization, and correlation of data and on calculations using data. Most studies involving big data address applications in scientific, commercial, and ecological fields. However, the application of big data to government management is also needed. This paper examines the application of multi-source government data to urban construction and supervision in Tianjin, China. The analytic hierarchy process and a new approach called the correlation degree algorithm are introduced to calculate the degree of correlation between different approval items in one construction project and between different construction projects. The results show that more than 75% of the construction projects and their approval items are highly correlated. The results of this study suggest that most of the examined construction projects are well supervised, have relatively high probabilities of satisfying the relevant legal requirements, and observe their initial planning schemes.
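
    The analytic hierarchy process portion is standard and worth making concrete: criterion weights are the principal eigenvector of a pairwise-comparison matrix. A textbook implementation (not the authors' code), including Saaty's consistency index:

        import numpy as np

        def ahp_weights(pairwise):
            """Return (priority weights, consistency index) for a reciprocal
            pairwise-comparison matrix."""
            a = np.asarray(pairwise, float)
            eigvals, eigvecs = np.linalg.eig(a)
            k = np.argmax(eigvals.real)
            w = np.abs(eigvecs[:, k].real)
            n = len(a)
            ci = (eigvals.real[k] - n) / (n - 1)  # 0 = perfectly consistent
            return w / w.sum(), ci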

  11. A Semi-Supervised Learning Algorithm for Predicting Four Types MiRNA-Disease Associations by Mutual Information in a Heterogeneous Network.

    Science.gov (United States)

    Zhang, Xiaotian; Yin, Jian; Zhang, Xu

    2018-03-02

    Increasing evidence suggests that dysregulation of microRNAs (miRNAs) may lead to a variety of diseases. Therefore, identifying disease-related miRNAs is a crucial problem. Currently, many computational approaches have been proposed to predict binary miRNA-disease associations. In this study, to predict the underlying types of miRNA-disease associations, a semi-supervised model, the network-based label propagation algorithm for multiple types of miRNA-disease associations (NLPMMDA), is proposed; it infers association types from mutual information derived from a heterogeneous network. The NLPMMDA method integrates disease semantic similarity, miRNA functional similarity, and Gaussian interaction profile kernel similarity information of miRNAs and diseases to construct the heterogeneous network. NLPMMDA is a semi-supervised model that does not require verified negative samples. Leave-one-out cross validation (LOOCV) was implemented for four known types of miRNA-disease associations and demonstrated the reliable performance of our method. Moreover, case studies of lung cancer and breast cancer confirmed the effectiveness of NLPMMDA in predicting novel miRNA-disease associations and their association types.
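
    Label propagation on a similarity graph has a compact generic form, iterating F <- alpha*S*F + (1-alpha)*Y. The sketch below shows only that core with a row-normalized affinity matrix and an illustrative alpha; NLPMMDA's heterogeneous-network construction and mutual-information weighting are not reproduced:

        import numpy as np

        def label_propagation(affinity, labels0, alpha=0.6, n_iter=100):
            """affinity: nonnegative similarity matrix with nonzero row sums;
            labels0: initial label matrix (rows = nodes, columns = types)."""
            s = affinity / affinity.sum(axis=1, keepdims=True)  # row-stochastic
            f = labels0.astype(float).copy()
            for _ in range(n_iter):
                f = alpha * (s @ f) + (1 - alpha) * labels0
            return f  # propagated scores per association type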

  12. Seasonal cultivated and fallow cropland mapping using MODIS-based automated cropland classification algorithm

    Science.gov (United States)

    Wu, Zhuoting; Thenkabail, Prasad S.; Mueller, Rick; Zakzeski, Audra; Melton, Forrest; Johnson, Lee; Rosevelt, Carolyn; Dwyer, John; Jones, Jeanine; Verdin, James P.

    2014-01-01

    Increasing drought occurrences and growing populations demand accurate, routine, and consistent cultivated and fallow cropland products to enable water and food security analysis. The overarching goal of this research was to develop and test an automated cropland classification algorithm (ACCA) that provides accurate, consistent, and repeatable information on seasonal cultivated as well as seasonal fallow cropland extents and areas based on the Moderate Resolution Imaging Spectroradiometer remote sensing data. The seasonal ACCA development process involves writing a series of iterative decision tree codes to separate cultivated and fallow croplands from noncroplands, aiming to accurately mirror reliable reference data sources. A pixel-by-pixel accuracy assessment when compared with the U.S. Department of Agriculture (USDA) cropland data showed, on average, a producer’s accuracy of 93% and a user’s accuracy of 85% across all months. Further, ACCA-derived cropland maps agreed well with the USDA Farm Service Agency crop acreage-reported data for both cultivated and fallow croplands with R-square values over 0.7 and field surveys with an accuracy of ≥95% for cultivated croplands and ≥76% for fallow croplands. Our results demonstrated the ability of ACCA to generate cropland products, such as cultivated and fallow cropland extents and areas, accurately, automatically, and repeatedly throughout the growing season.
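
    The iterative decision tree rules at the heart of ACCA are essentially cascades of thresholds on spectral and temporal variables. A toy version, with invented NDVI thresholds standing in for the published ACCA rules, might look like this:

```python
import numpy as np

def classify_pixel(ndvi_series, peak_thresh=0.6, fallow_thresh=0.3):
    """Toy ACCA-style rules on a per-pixel NDVI time series.

    Returns one of 'cultivated', 'fallow', or 'noncropland'.
    Thresholds are illustrative, not the published ACCA values.
    """
    peak = np.max(ndvi_series)
    amplitude = peak - np.min(ndvi_series)
    if peak >= peak_thresh and amplitude >= 0.25:
        return "cultivated"        # strong greenup during the season
    if peak < fallow_thresh:
        return "fallow"            # cropland mask assumed, no greenup
    return "noncropland"

# Monthly NDVI for three example pixels over a growing season.
pixels = [
    [0.2, 0.35, 0.55, 0.7, 0.65, 0.4],    # irrigated/cultivated pattern
    [0.15, 0.18, 0.2, 0.22, 0.2, 0.17],   # fallow pattern
    [0.45, 0.46, 0.44, 0.45, 0.46, 0.44], # evergreen, low amplitude
]
print([classify_pixel(np.array(p)) for p in pixels])
```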

  13. System Performance of an Integrated Airborne Spacing Algorithm with Ground Automation

    Science.gov (United States)

    Swieringa, Kurt A.; Wilson, Sara R.; Baxley, Brian T.

    2016-01-01

    The National Aeronautics and Space Administration's (NASA's) first Air Traffic Management (ATM) Technology Demonstration (ATD-1) was created to facilitate the transition of mature ATM technologies from the laboratory to operational use. The technologies selected for demonstration are the Traffic Management Advisor with Terminal Metering (TMA-TM), which provides precise time-based scheduling in the Terminal airspace; Controller Managed Spacing (CMS), which provides controllers with decision support tools to enable precise schedule conformance; and Interval Management (IM), which consists of flight deck automation that enables aircraft to achieve or maintain precise spacing behind another aircraft. Recent simulations and IM algorithm development at NASA have focused on trajectory-based IM operations where aircraft equipped with IM avionics are expected to achieve a spacing goal, assigned by air traffic controllers, at the final approach fix. The recently published IM Minimum Operational Performance Standards describe five types of IM operations. This paper discusses the results and conclusions of a human-in-the-loop simulation that investigated three of those IM operations. The results presented in this paper focus on system performance and integration metrics. Overall, the IM operations conducted in this simulation integrated well with ground-based decision support tools, and certain types of IM operations were able to provide improved spacing precision at the final approach fix; however, some issues were identified that should be addressed prior to implementing IM procedures into real-world operations.

  14. The design of 3D scaffold for tissue engineering using automated scaffold design algorithm.

    Science.gov (United States)

    Mahmoud, Shahenda; Eldeib, Ayman; Samy, Sherif

    2015-06-01

    Several advances have been introduced in the field of bone regenerative medicine, giving rise to the term tissue engineering (TE). In TE, a highly porous artificial extracellular matrix, or scaffold, is required to accommodate cells and guide their growth in three dimensions. The design of scaffolds with desirable internal and external structure represents a challenge for TE. In this paper, we introduce a new method known as automated scaffold design (ASD) for designing a 3D scaffold with minimal mismatches in its geometrical parameters. The method makes use of the k-means clustering algorithm to separate the different tissues and hence delineate the defective bone portions. The segmented portions of different slices are registered to construct the 3D volume for the data. It also uses an isosurface rendering technique for 3D visualization of the scaffold and bones. It provides the ability to visualize the transplanted as well as the normal bone portions. The proposed system shows good performance in both segmentation and visualization.
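
    A minimal sketch of the segmentation and isosurface-rendering steps described above, using k-means on voxel intensities and marching cubes for surface extraction. The synthetic volume stands in for real CT data, and none of this is the authors' ASD code.

```python
import numpy as np
from sklearn.cluster import KMeans
from skimage.measure import marching_cubes

# Synthetic 3D "CT" volume: a bright spherical bone-like region in noise.
z, y, x = np.mgrid[:32, :32, :32]
volume = 50.0 + 20.0 * np.random.default_rng(0).normal(size=(32, 32, 32))
volume[(z - 16) ** 2 + (y - 16) ** 2 + (x - 16) ** 2 < 100] += 150.0

# k-means on voxel intensities separates tissue classes (here k=2).
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
    volume.reshape(-1, 1)).reshape(volume.shape)

# Make cluster 1 the brighter class, then extract its isosurface.
if volume[labels == 0].mean() > volume[labels == 1].mean():
    labels = 1 - labels
verts, faces, normals, values = marching_cubes(labels.astype(float), 0.5)
print(f"{len(verts)} vertices, {len(faces)} faces on the bone surface")
```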

  15. Microseismic event location using global optimization algorithms: An integrated and automated workflow

    Science.gov (United States)

    Lagos, Soledad R.; Velis, Danilo R.

    2018-02-01

    We perform the location of microseismic events generated in hydraulic fracturing monitoring scenarios using two global optimization techniques: Very Fast Simulated Annealing (VFSA) and Particle Swarm Optimization (PSO), and compare them against the classical grid search (GS). To this end, we present an integrated and optimized workflow that concatenates into an automated bash script the steps that lead from raw 3C data to the microseismic event locations. First, we carry out the automatic detection, denoising, and identification of the P- and S-waves. Secondly, we estimate their corresponding backazimuths using polarization information, and propose a simple energy-based criterion to automatically decide which is the most reliable estimate. Finally, after taking proper care of the size of the search space using the backazimuth information, we perform the location using the aforementioned algorithms for usual 2D and 3D hydraulic fracturing scenarios. We assess the impact of restricting the search space and show the advantages of using either VFSA or PSO over GS to attain significant speed-ups.
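
    To make the optimization step concrete, here is a minimal particle swarm search for a source position that minimizes a travel-time residual misfit. The homogeneous velocity model, receiver geometry, and PSO constants are invented and far simpler than a real monitoring setup.

```python
import numpy as np

rng = np.random.default_rng(2)
V = 3000.0                                # assumed homogeneous velocity, m/s
receivers = rng.uniform(0, 2000, (8, 2))  # 8 surface receivers (x, y), m
true_src = np.array([900.0, 1200.0])
t_obs = np.linalg.norm(receivers - true_src, axis=1) / V

def misfit(src):
    """L2 misfit between observed and predicted P arrival times."""
    t_pred = np.linalg.norm(receivers - src, axis=1) / V
    return np.sum((t_obs - t_pred) ** 2)

# Plain PSO with inertia w and cognitive/social constants c1, c2.
n, w, c1, c2 = 30, 0.7, 1.5, 1.5
pos = rng.uniform(0, 2000, (n, 2)); vel = np.zeros((n, 2))
pbest, pbest_f = pos.copy(), np.array([misfit(p) for p in pos])
gbest = pbest[np.argmin(pbest_f)]
for _ in range(200):
    r1, r2 = rng.uniform(size=(2, n, 1))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0, 2000)
    f = np.array([misfit(p) for p in pos])
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[np.argmin(pbest_f)]
print("recovered source:", gbest.round(1), "true:", true_src)
```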

  16. A SURVEY OF SEMI-SUPERVISED LEARNING

    OpenAIRE

    Amrita Sadarangani *, Dr. Anjali Jivani

    2016-01-01

    Semi Supervised Learning involves using both labeled and unlabeled data to train a classifier or for clustering. Semi supervised learning finds usage in many applications, since labeled data can be hard to find in many cases. Currently, a lot of research is being conducted in this area. This paper discusses the different algorithms of semi supervised learning and then their advantages and limitations are compared. The differences between supervised classification and semi-supervised classific...

  17. Performance evaluation of an automated single-channel sleep–wake detection algorithm

    Directory of Open Access Journals (Sweden)

    Kaplan RF

    2014-10-01

    Full Text Available Richard F Kaplan,1 Ying Wang,1 Kenneth A Loparo,1,2 Monica R Kelly,3 Richard R Bootzin3 1General Sleep Corporation, Euclid, OH, USA; 2Department of Electrical Engineering and Computer Science, Case Western Reserve University, Cleveland, OH, USA; 3Department of Psychology, University of Arizona, Tucson, AZ, USA Background: A need exists, from both a clinical and a research standpoint, for objective sleep measurement systems that are both easy to use and can accurately assess sleep and wake. This study evaluates the output of an automated sleep–wake detection algorithm (Z-ALG) used in the Zmachine (a portable, single-channel, electroencephalographic [EEG] acquisition and analysis system) against laboratory polysomnography (PSG) using a consensus of expert visual scorers. Methods: Overnight laboratory PSG studies from 99 subjects (52 females/47 males, 18–60 years, median age 32.7 years), including both normal sleepers and those with a variety of sleep disorders, were assessed. PSG data obtained from the differential mastoids (A1–A2) were assessed by Z-ALG, which determines sleep versus wake every 30 seconds using low-frequency, intermediate-frequency, and high-frequency and time domain EEG features. PSG data were independently scored by two to four certified PSG technologists, using standard Rechtschaffen and Kales guidelines, and these score files were combined on an epoch-by-epoch basis, using a majority voting rule, to generate a single score file per subject to compare against the Z-ALG output. Both epoch-by-epoch and standard sleep indices (eg, total sleep time, sleep efficiency, latency to persistent sleep, and wake after sleep onset) were compared between the Z-ALG output and the technologist consensus score files. Results: Overall, the sensitivity and specificity for detecting sleep using the Z-ALG as compared to the technologist consensus are 95.5% and 92.5%, respectively, across all subjects, and the positive predictive value and the

  18. Combination of mass spectrometry-based targeted lipidomics and supervised machine learning algorithms in detecting adulterated admixtures of white rice.

    Science.gov (United States)

    Lim, Dong Kyu; Long, Nguyen Phuoc; Mo, Changyeun; Dong, Ziyuan; Cui, Lingmei; Kim, Giyoung; Kwon, Sung Won

    2017-10-01

    The mixing of extraneous ingredients with original products is a common adulteration practice in food and herbal medicines. In particular, authenticity of white rice and its corresponding blended products has become a key issue in the food industry. Accordingly, our current study aimed to develop and evaluate a novel discrimination method by combining targeted lipidomics with powerful supervised learning methods, and eventually introduce a platform to verify the authenticity of white rice. A total of 30 cultivars were collected, and 330 representative samples of white rice from Korea and China as well as seven mixing ratios were examined. Random forests (RF), support vector machines (SVM) with a radial basis function kernel, C5.0, model averaged neural network, and k-nearest neighbor classifiers were used for the classification. We achieved the desired results: the classifiers effectively distinguished Korean white rice from blended samples, with high prediction accuracy even for contamination ratios as low as five percent. In addition, RF and SVM classifiers were generally superior to and more robust than the other techniques. Our approach demonstrated that the relative differences in lysoGPLs can be successfully utilized to detect the adulterated mixing of white rice originating from different countries. In conclusion, the present study introduces a novel and high-throughput platform that can be applied to authenticate adulterated admixtures from original white rice samples. Copyright © 2017 Elsevier Ltd. All rights reserved.
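
    A compact sketch of the classification stage: train random forest and RBF-kernel SVM classifiers on a feature matrix of lipid intensities and compare cross-validated accuracy. The synthetic features below stand in for the measured lysoGPL levels.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for targeted lipidomics data:
# 330 samples x 20 lipid features, binary label (pure vs. adulterated).
rng = np.random.default_rng(3)
X = rng.normal(size=(330, 20))
y = (X[:, :3].sum(axis=1) + 0.5 * rng.normal(size=330) > 0).astype(int)

models = {
    "random forest": RandomForestClassifier(n_estimators=500, random_state=0),
    "SVM (RBF)": make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0)),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5, scoring="accuracy")
    print(f"{name}: {acc.mean():.3f} +/- {acc.std():.3f}")
```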

  19. Derivation and validation of the automated search algorithms to identify cognitive impairment and dementia in electronic health records.

    Science.gov (United States)

    Amra, Sakusic; O'Horo, John C; Singh, Tarun D; Wilson, Gregory A; Kashyap, Rahul; Petersen, Ronald; Roberts, Rosebud O; Fryer, John D; Rabinstein, Alejandro A; Gajic, Ognjen

    2017-02-01

    Long-term cognitive impairment is a common and important problem in survivors of critical illness. We developed electronic search algorithms to identify cognitive impairment and dementia from electronic medical records (EMRs), providing an opportunity for big data analysis. Eligible patients met 2 criteria. First, they had a formal cognitive evaluation by The Mayo Clinic Study of Aging. Second, they were hospitalized in the intensive care unit at our institution between 2006 and 2014. The "criterion standard" for diagnosis was formal cognitive evaluation supplemented by input from an expert neurologist. Using all available EMR data, we developed and improved our algorithms in the derivation cohort and validated them in the independent validation cohort. Of 993 participants who underwent formal cognitive testing and were hospitalized in the intensive care unit, we selected 151 participants at random to form the derivation and validation cohorts. The automated electronic search algorithm for cognitive impairment was 94.3% sensitive and 93.0% specific. The search algorithms for dementia achieved respective sensitivity and specificity of 97% and 99%. EMR search algorithms significantly outperformed International Classification of Diseases codes. Automated EMR data extractions for cognitive impairment and dementia are reliable and accurate and can serve as acceptable and efficient alternatives to time-consuming manual data review. Copyright © 2016 Elsevier Inc. All rights reserved.

  1. Supervised Machine Learning Algorithms Can Classify Open-Text Feedback of Doctor Performance With Human-Level Accuracy

    Science.gov (United States)

    2017-01-01

    Background Machine learning techniques may be an effective and efficient way to classify open-text reports on doctor’s activity for the purposes of quality assurance, safety, and continuing professional development. Objective The objective of the study was to evaluate the accuracy of machine learning algorithms trained to classify open-text reports of doctor performance and to assess the potential for classifications to identify significant differences in doctors’ professional performance in the United Kingdom. Methods We used 1636 open-text comments (34,283 words) relating to the performance of 548 doctors collected from a survey of clinicians’ colleagues using the General Medical Council Colleague Questionnaire (GMC-CQ). We coded 77.75% (1272/1636) of the comments into 5 global themes (innovation, interpersonal skills, popularity, professionalism, and respect) using a qualitative framework. We trained 8 machine learning algorithms to classify comments and assessed their performance using several training samples. We evaluated doctor performance using the GMC-CQ and compared scores between doctors with different classifications using t tests. Results Individual algorithm performance was high (range F score=.68 to .83). Interrater agreement between the algorithms and the human coder was highest for codes relating to “popular” (recall=.97), “innovator” (recall=.98), and “respected” (recall=.87) codes and was lower for the “interpersonal” (recall=.80) and “professional” (recall=.82) codes. A 10-fold cross-validation demonstrated similar performance in each analysis. When combined together into an ensemble of multiple algorithms, mean human-computer interrater agreement was .88. Comments that were classified as “respected,” “professional,” and “interpersonal” related to higher doctor scores on the GMC-CQ compared with comments that were not classified (P<.05). Conclusions Machine learning algorithms can classify open-text feedback

  2. Supervised Machine Learning Algorithms Can Classify Open-Text Feedback of Doctor Performance With Human-Level Accuracy.

    Science.gov (United States)

    Gibbons, Chris; Richards, Suzanne; Valderas, Jose Maria; Campbell, John

    2017-03-15

    Machine learning techniques may be an effective and efficient way to classify open-text reports on doctor's activity for the purposes of quality assurance, safety, and continuing professional development. The objective of the study was to evaluate the accuracy of machine learning algorithms trained to classify open-text reports of doctor performance and to assess the potential for classifications to identify significant differences in doctors' professional performance in the United Kingdom. We used 1636 open-text comments (34,283 words) relating to the performance of 548 doctors collected from a survey of clinicians' colleagues using the General Medical Council Colleague Questionnaire (GMC-CQ). We coded 77.75% (1272/1636) of the comments into 5 global themes (innovation, interpersonal skills, popularity, professionalism, and respect) using a qualitative framework. We trained 8 machine learning algorithms to classify comments and assessed their performance using several training samples. We evaluated doctor performance using the GMC-CQ and compared scores between doctors with different classifications using t tests. Individual algorithm performance was high (range F score=.68 to .83). Interrater agreement between the algorithms and the human coder was highest for codes relating to "popular" (recall=.97), "innovator" (recall=.98), and "respected" (recall=.87) codes and was lower for the "interpersonal" (recall=.80) and "professional" (recall=.82) codes. A 10-fold cross-validation demonstrated similar performance in each analysis. When combined together into an ensemble of multiple algorithms, mean human-computer interrater agreement was .88. Comments that were classified as "respected," "professional," and "interpersonal" related to higher doctor scores on the GMC-CQ compared with comments that were not classified (P<.05). Machine learning algorithms can classify open-text feedback of doctor performance into multiple themes derived by human raters with high

  3. An automated land-use mapping comparison of the Bayesian maximum likelihood and linear discriminant analysis algorithms

    Science.gov (United States)

    Tom, C. H.; Miller, L. D.

    1984-01-01

    The Bayesian maximum likelihood parametric classifier has been tested against the data-based formulation designated 'linear discriminant analysis', using the 'GLIKE' decision and 'CLASSIFY' classification algorithms in the Landsat Mapping System. Identical supervised training sets, USGS land use/land cover classes, and various combinations of Landsat image and ancillary geodata variables were used to compare the algorithms' thematic mapping accuracy on a single-date summer subscene, with a cellularized USGS land use map of the same time frame furnishing the ground truth reference. CLASSIFY, which accepts a priori class probabilities, is found to be more accurate than GLIKE, which assumes equal class occurrences, for all three mapping variable sets and both levels of detail. These results may be generalized to direct accuracy, time, cost, and flexibility advantages of linear discriminant analysis over Bayesian methods.
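
    The comparison above hinges on whether the classifier uses a priori class probabilities. The sketch below contrasts scikit-learn's LinearDiscriminantAnalysis with empirical priors against the same model forced to equal priors, on synthetic data with unequal class occurrences; it is an analogy to CLASSIFY versus GLIKE, not a reimplementation of either.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split

# Synthetic "pixels": two spectral bands, three land-use classes with
# very unequal occurrence (mimicking real scene class frequencies).
rng = np.random.default_rng(4)
sizes = {0: 800, 1: 150, 2: 50}
X = np.vstack([rng.normal(loc=3 * c, scale=2.0, size=(n, 2))
               for c, n in sizes.items()])
y = np.concatenate([np.full(n, c) for c, n in sizes.items()])
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0, stratify=y)

# Empirical priors (CLASSIFY-like) vs. forced equal priors (GLIKE-like).
for name, priors in [("empirical priors", None),
                     ("equal priors", [1/3, 1/3, 1/3])]:
    lda = LinearDiscriminantAnalysis(priors=priors).fit(Xtr, ytr)
    print(name, "accuracy:", round(lda.score(Xte, yte), 3))
```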

  4. Global left ventricular function in cardiac CT. Evaluation of an automated 3D region-growing segmentation algorithm

    International Nuclear Information System (INIS)

    Muehlenbruch, Georg; Das, Marco; Hohl, Christian; Wildberger, Joachim E.; Guenther, Rolf W.; Mahnken, Andreas H.; Rinck, Daniel; Flohr, Thomas G.; Koos, Ralf; Knackstedt, Christian

    2006-01-01

    The purpose was to evaluate a new semi-automated 3D region-growing segmentation algorithm for functional analysis of the left ventricle in multislice CT (MSCT) of the heart. Twenty patients underwent contrast-enhanced MSCT of the heart (collimation 16 × 0.75 mm; 120 kV; 550 mAs (eff.)). Multiphase image reconstructions with 1-mm axial slices and 8-mm short-axis slices were performed. Left ventricular volume measurements (end-diastolic volume, end-systolic volume, ejection fraction, and stroke volume) from manually drawn endocardial contours in the short-axis slices were compared to semi-automated region-growing segmentation of the left ventricle from the 1-mm axial slices. The post-processing time for both methods was recorded. Applying the new region-growing algorithm, proper segmentation of the left ventricle was feasible in 13/20 patients (65%). In these patients, the signal-to-noise ratio was higher than in the remaining patients (3.2±1.0 vs. 2.6±0.6). Volume measurements of both segmentation algorithms showed an excellent correlation (all P≤0.0001); the limits of agreement for the ejection fraction were 2.3±8.3 ml. In the patients with proper segmentation, the mean post-processing time using the region-growing algorithm was diminished by 44.2%. On the basis of a good contrast-enhanced data set, a left ventricular volume analysis using the new semi-automated region-growing segmentation algorithm is technically feasible, accurate, and more time-effective. (orig.)
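
    As a generic illustration of 3D region growing of the kind evaluated above (not the vendor's algorithm), this sketch grows a region from a seed voxel, accepting 6-connected neighbors whose intensity lies within a tolerance of the running region mean:

```python
import numpy as np
from collections import deque

def region_grow_3d(vol, seed, tol=30.0):
    """Flood-fill style 3D region growing from a seed voxel.

    Accepts 6-connected neighbors whose intensity lies within `tol`
    of the running region mean. Returns a boolean mask.
    """
    mask = np.zeros(vol.shape, dtype=bool)
    mask[seed] = True
    total, count = float(vol[seed]), 1
    queue = deque([seed])
    offsets = [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in offsets:
            n = (z + dz, y + dy, x + dx)
            if all(0 <= n[i] < vol.shape[i] for i in range(3)) \
                    and not mask[n] \
                    and abs(vol[n] - total / count) <= tol:
                mask[n] = True
                total += float(vol[n]); count += 1
                queue.append(n)
    return mask

# Toy volume: bright contrast-filled "ventricle" inside darker tissue.
vol = np.full((40, 40, 40), 60.0)
vol[10:30, 10:30, 10:30] = 300.0
mask = region_grow_3d(vol, seed=(20, 20, 20), tol=50.0)
print("segmented voxels:", int(mask.sum()))  # expect 20**3 = 8000
```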

  5. SequenceL: Automated Parallel Algorithms Derived from CSP-NT Computational Laws

    Science.gov (United States)

    Cooke, Daniel; Rushton, Nelson

    2013-01-01

    With the introduction of new parallel architectures like the cell and multicore chips from IBM, Intel, AMD, and ARM, as well as the petascale processing available for high-end computing, a larger number of programmers will need to write parallel codes. Adding the parallel control structure to the sequence, selection, and iterative control constructs increases the complexity of code development, which often results in increased development costs and decreased reliability. SequenceL is a high-level programming language, that is, a programming language closer to a human's way of thinking than to a machine's. Historically, high-level languages have resulted in decreased development costs and increased reliability, at the expense of performance. In recent applications at JSC and in industry, SequenceL has demonstrated the usual advantages of high-level programming in terms of low cost and high reliability. SequenceL programs, however, have run at speeds typically comparable with, and in many cases faster than, their counterparts written in C and C++ when run on single-core processors. Moreover, SequenceL is able to generate parallel executables automatically for multicore hardware, gaining parallel speedups without any extra effort from the programmer beyond what is required to write the sequential/single-core code. A SequenceL-to-C++ translator has been developed that automatically renders readable multithreaded C++ from a combination of a SequenceL program and sample data input. The SequenceL language is based on two fundamental computational laws, Consume-Simplify-Produce (CSP) and Normalize-Transpose (NT), which enable it to automate the creation of parallel algorithms from high-level code that has no annotations of parallelism whatsoever. In our anecdotal experience, SequenceL development has been in every case less costly than development of the same algorithm in sequential (that is, single-core, single process) C or C++, and an order of magnitude less

  6. Automated seismic detection of landslides at regional scales: a Random Forest based detection algorithm

    Science.gov (United States)

    Hibert, C.; Michéa, D.; Provost, F.; Malet, J. P.; Geertsema, M.

    2017-12-01

    Detection of landslide occurrences and measurement of their dynamic properties during run-out is a high research priority, but also a logistical and technical challenge. Seismology has started to help in several important ways. Taking advantage of the densification of global, regional, and local networks of broadband seismic stations, recent advances now permit the seismic detection and location of landslides in near-real-time. This seismic detection could potentially greatly increase the spatio-temporal resolution at which we study landslide triggering, which is critical to better understand the influence of external forcings such as rainfall and earthquakes. However, automatically detecting the seismic signals generated by landslides still represents a challenge, especially for events with small mass. The low signal-to-noise ratio classically observed for landslide-generated seismic signals and the difficulty of discriminating these signals from those generated by regional earthquakes or by anthropogenic and natural noise are some of the obstacles that have to be circumvented. We present a new method for automatically constructing instrumental landslide catalogues from continuous seismic data. We developed a robust and versatile solution, which can be implemented in any context where seismic detection of landslides or other mass movements is relevant. The method is based on a spectral detection of the seismic signals and the identification of the sources with a Random Forest machine learning algorithm. The spectral detection allows detecting signals with low signal-to-noise ratio, while the Random Forest algorithm achieves a high rate of positive identification of the seismic signals generated by landslides and other seismic sources. The processing chain is implemented on a High Performance Computing centre, which permits rapid exploration of years of continuous seismic data. We present here the preliminary results of the application of this processing chain for years
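
    A schematic version of such a two-stage chain, pairing simple spectrogram-based features with a random forest source classifier. The waveforms and labels are synthetic, and the features are far cruder than those used by the authors.

```python
import numpy as np
from scipy.signal import spectrogram
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

fs = 100.0                       # sampling rate, Hz
t = np.arange(0, 20, 1 / fs)     # 20 s event windows

def features(trace):
    """Crude spectral features of one event window."""
    f, _, Sxx = spectrogram(trace, fs=fs, nperseg=64)
    psd = Sxx.mean(axis=1)
    psd = psd / psd.sum()
    centroid = float((f * psd).sum())                      # spectral centroid
    bandwidth = float(np.sqrt((((f - centroid) ** 2) * psd).sum()))
    energy = Sxx.sum(axis=0)
    duration = float((energy > 0.1 * energy.max()).sum())  # active time bins
    return [centroid, bandwidth, duration]

rng = np.random.default_rng(5)
X, y = [], []
for _ in range(100):
    noise = 0.1 * rng.normal(size=t.size)
    # "Landslide": emergent, low-frequency, long-duration signal.
    X.append(features(np.exp(-((t - 10) / 4) ** 2) * np.sin(2 * np.pi * 2 * t) + noise))
    y.append(0)
    # "Earthquake": impulsive, higher-frequency, shorter signal.
    X.append(features(np.exp(-((t - 10) / 1) ** 2) * np.sin(2 * np.pi * 10 * t) + noise))
    y.append(1)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```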

  7. Automated Detection of Selective Logging in Amazon Forests Using Airborne Lidar Data and Pattern Recognition Algorithms

    Science.gov (United States)

    Keller, M. M.; d'Oliveira, M. N.; Takemura, C. M.; Vitoria, D.; Araujo, L. S.; Morton, D. C.

    2012-12-01

    Selective logging, the removal of several valuable timber trees per hectare, is an important land use in the Brazilian Amazon and may degrade forests through long-term changes in structure, loss of forest carbon, and loss of species diversity. As with deforestation, the annual area affected by selective logging has declined significantly in the past decade. Nonetheless, this land use affects several thousand km² per year in Brazil. We studied a 1000 ha area of the Antimary State Forest (FEA) in the State of Acre, Brazil (9.304°S, 68.281°W) that has a basal area of 22.5 m² ha⁻¹ and an above-ground biomass of 231 Mg ha⁻¹. Logging intensity was low, approximately 10 to 15 m³ ha⁻¹. We collected small-footprint airborne lidar data using an Optech ALTM 3100EA over the study area once each in 2010 and 2011. The study area contained both recent and older logging that used both conventional and technologically advanced logging techniques. Lidar return density averaged over 20 m⁻² for both collection periods, with estimated horizontal and vertical precision of 0.30 and 0.15 m. A relative density model comparing returns from the 0-1 m elevation range to returns in the 1-5 m elevation range revealed the pattern of roads and skid trails. These patterns were confirmed by ground-based GPS survey. A GIS model of the road and skid network was built using lidar and ground data. We tested and compared two pattern recognition approaches used to automate logging detection: commercial eCognition segmentation and a Frangi filter algorithm. Both identified the road and skid trail network when compared to the GIS model. We report on the effectiveness of these two techniques.
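
    A sketch of the relative density model and a Frangi-style ridge enhancement: grid the returns into two height strata, take their ratio, and enhance linear (trail-like) structures with scikit-image's Frangi filter. The point cloud is randomly generated and the grid size is a placeholder.

```python
import numpy as np
from skimage.filters import frangi

rng = np.random.default_rng(6)

# Synthetic height-normalized returns: columns are x (m), y (m), height (m).
pts = np.column_stack([rng.uniform(0, 500, 200_000),
                       rng.uniform(0, 500, 200_000),
                       rng.uniform(0, 5, 200_000)])
# Simulate a skid trail: extra near-ground returns concentrated along y=250.
trail = np.column_stack([rng.uniform(0, 500, 20_000),
                         rng.normal(250, 2, 20_000),
                         rng.uniform(0, 1, 20_000)])
pts = np.vstack([pts, trail])

# Relative density model: share of returns in 0-1 m among all returns 0-5 m.
edges = [np.arange(0, 501, 5), np.arange(0, 501, 5)]       # 5 m grid
low_mask = pts[:, 2] <= 1
low, _, _ = np.histogram2d(pts[low_mask, 0], pts[low_mask, 1], bins=edges)
allr, _, _ = np.histogram2d(pts[:, 0], pts[:, 1], bins=edges)
rdm = low / np.maximum(allr, 1)

# Frangi vesselness enhances bright linear features such as skid trails.
ridges = frangi(rdm, sigmas=range(1, 4), black_ridges=False)
cell = np.unravel_index(np.argmax(ridges), ridges.shape)
print("strongest linear feature near grid cell:", cell)
```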

  8. Man-machine supervision

    International Nuclear Information System (INIS)

    Montmain, J.

    2005-01-01

    The complexity of today's systems involving human operators has led to the development of more and more sophisticated information processing systems, in which decision making has become more and more difficult. The operator's task has moved from operation to supervision, and the production tool has become inseparable from its numerical instrumentation and control system. The integration of more and more numerous and sophisticated control indicators in the control room does not necessarily fulfill the expectations of the operating team. It is preferable to develop cooperative information systems that are genuine aids to situation understanding. The stake is not the automation of operators' cognitive tasks but the provision of reasoning support. One of the challenges of interactive information systems is the selection, organisation, and dynamic display of information. The efficiency of the whole man-machine system depends on the efficiency of the communication interface. This article presents the principles and specificities of man-machine supervision systems: 1 - principle: the operator's role in the control room, operator and automation, monitoring and diagnosis, characteristics of useful models for supervision; 2 - qualitative reasoning: origin, trends, evolutions; 3 - causal reasoning: causality, causal graph representation, causal and diagnostic graph; 4 - multi-points-of-view reasoning: multi-flow modeling method, Sagace method; 5 - approximate reasoning: the symbolic-numerical interface, multi-criteria decision; 6 - example of application: supervision in a spent-fuel reprocessing facility. (J.S.)

  9. Development and Evaluation of an Automated Machine Learning Algorithm for In-Hospital Mortality Risk Adjustment Among Critical Care Patients.

    Science.gov (United States)

    Delahanty, Ryan J; Kaufman, David; Jones, Spencer S

    2018-06-01

    Risk adjustment algorithms for ICU mortality are necessary for measuring and improving ICU performance. Existing risk adjustment algorithms are not widely adopted. Key barriers to adoption include licensing and implementation costs as well as labor costs associated with human-intensive data collection. Widespread adoption of electronic health records makes automated risk adjustment feasible. Using modern machine learning methods and open source tools, we developed and evaluated a retrospective risk adjustment algorithm for in-hospital mortality among ICU patients. The Risk of Inpatient Death score can be fully automated and is reliant upon data elements that are generated in the course of usual hospital processes. One hundred thirty-one ICUs in 53 hospitals operated by Tenet Healthcare. A cohort of 237,173 ICU patients discharged between January 2014 and December 2016. The data were randomly split into training (36 hospitals) and validation (17 hospitals) data sets. Feature selection and model training were carried out using the training set while the discrimination, calibration, and accuracy of the model were assessed in the validation data set. Model discrimination was evaluated based on the area under receiver operating characteristic curve; accuracy and calibration were assessed via adjusted Brier scores and visual analysis of calibration curves. Seventeen features, including a mix of clinical and administrative data elements, were retained in the final model. The Risk of Inpatient Death score demonstrated excellent discrimination (area under receiver operating characteristic curve = 0.94) and calibration (adjusted Brier score = 52.8%) in the validation dataset; these results compare favorably to the published performance statistics for the most commonly used mortality risk adjustment algorithms. Low adoption of ICU mortality risk adjustment algorithms impedes progress toward increasing the value of the healthcare delivered in ICUs. The Risk of Inpatient Death
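
    The evaluation recipe above (discrimination via the area under the ROC curve, calibration via Brier-type scores) is straightforward to reproduce with open source tools. Below is a minimal sketch with a logistic model and synthetic data standing in for the actual Risk of Inpatient Death pipeline and features; the "scaled" Brier score here is one common adjustment, not necessarily the one used in the paper.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, brier_score_loss

# Synthetic stand-in for 17 EHR-derived features and in-hospital mortality.
X, y = make_classification(n_samples=20_000, n_features=17, n_informative=10,
                           weights=[0.92], random_state=0)
Xtr, Xva, ytr, yva = train_test_split(X, y, test_size=0.3, random_state=0)

model = LogisticRegression(max_iter=1000).fit(Xtr, ytr)
p = model.predict_proba(Xva)[:, 1]

auc = roc_auc_score(yva, p)                     # discrimination
brier = brier_score_loss(yva, p)                # calibration/accuracy
# One common scaled Brier score, relative to a no-skill reference:
ref = brier_score_loss(yva, np.full_like(p, yva.mean()))
print(f"AUC={auc:.3f}  Brier={brier:.4f}  scaled Brier={1 - brier / ref:.3f}")
```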

  10. Combining deep residual neural network features with supervised machine learning algorithms to classify diverse food image datasets.

    Science.gov (United States)

    McAllister, Patrick; Zheng, Huiru; Bond, Raymond; Moorhead, Anne

    2018-04-01

    Obesity is increasing worldwide and can cause many chronic conditions such as type-2 diabetes, heart disease, sleep apnea, and some cancers. Monitoring dietary intake through food logging is a key method to maintain a healthy lifestyle to prevent and manage obesity. Computer vision methods have been applied to food logging to automate image classification for monitoring dietary intake. In this work we applied pretrained ResNet-152 and GoogleNet convolutional neural networks (CNNs), initially trained using the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) dataset with the MatConvNet package, to extract features from food image datasets: Food-5K, Food-11, RawFooT-DB, and Food-101. Deep features were extracted from the CNNs and used to train machine learning classifiers including artificial neural network (ANN), support vector machine (SVM), Random Forest, and Naive Bayes. Results show that using ResNet-152 deep features with an SVM with RBF kernel can accurately detect food items with 99.4% accuracy on the Food-5K validation food image dataset, and 98.8% on the Food-5K evaluation dataset using ANN, SVM-RBF, and Random Forest classifiers. Trained with ResNet-152 features, ANN can achieve 91.34% and 99.28% accuracy when applied to the Food-11 and RawFooT-DB food image datasets, respectively, and SVM with RBF kernel can achieve 64.98% with the Food-101 image dataset. From this research it is clear that deep CNN features can be used efficiently for diverse food item image classification. The work presented in this research shows that pretrained ResNet-152 features provide sufficient generalisation power when applied to a range of food image classification tasks. Copyright © 2018 Elsevier Ltd. All rights reserved.
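
    A modern equivalent of the feature-extraction pipeline above, using torchvision's pretrained ResNet-152 (the authors used MatConvNet) with the final classification layer replaced by an identity, and an SVM trained on the pooled 2048-dimensional features. The image lists and labels are placeholders.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet152, ResNet152_Weights
from sklearn.svm import SVC

# Pretrained ImageNet weights; replace the classifier with an identity so
# the forward pass returns the 2048-dimensional pooled feature vector.
weights = ResNet152_Weights.IMAGENET1K_V1
model = resnet152(weights=weights)
model.fc = nn.Identity()
model.eval()
preprocess = weights.transforms()   # resize/crop/normalize pipeline

@torch.no_grad()
def extract(images):
    """images: list of PIL.Image -> (n, 2048) feature matrix."""
    batch = torch.stack([preprocess(im) for im in images])
    return model(batch).numpy()

# Placeholder usage: train_images/train_labels would come from a food
# image dataset such as Food-5K (not bundled here).
# feats = extract(train_images)
# clf = SVC(kernel="rbf", C=1.0).fit(feats, train_labels)
# print("validation accuracy:", clf.score(extract(val_images), val_labels))
```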

  11. Supervised Learning

    Science.gov (United States)

    Rokach, Lior; Maimon, Oded

    This chapter summarizes the fundamental aspects of supervised methods. The chapter provides an overview of concepts from various interrelated fields used in subsequent chapters. It presents basic definitions and arguments from the supervised machine learning literature and considers various issues, such as performance evaluation techniques and challenges for data mining tasks.

  12. Automated delay estimation at signalized intersections : phase I concept and algorithm development.

    Science.gov (United States)

    2011-07-01

    Currently there are several methods to measure the performance of surface streets, but their capabilities in dynamically estimating vehicle delay are limited. The objective of this research is to develop a method to automate traffic delay estimation ...

  13. An Automated Cropland Classification Algorithm (ACCA) for Tajikistan by combining Landsat, MODIS, and secondary data

    Science.gov (United States)

    Thenkabail, Prasad S.; Wu, Zhuoting

    2012-01-01

    The overarching goal of this research was to develop and demonstrate an automated Cropland Classification Algorithm (ACCA) that will rapidly, routinely, and accurately classify agricultural cropland extent, areas, and characteristics (e.g., irrigated vs. rainfed) over large areas such as a country or a region through combination of multi-sensor remote sensing and secondary data. In this research, a rule-based ACCA was conceptualized, developed, and demonstrated for the country of Tajikistan using mega file data cubes (MFDCs) involving data from Landsat Global Land Survey (GLS), Landsat Enhanced Thematic Mapper Plus (ETM+) 30 m, Moderate Resolution Imaging Spectroradiometer (MODIS) 250 m time-series, a suite of secondary data (e.g., elevation, slope, precipitation, temperature), and in situ data. First, the process involved producing an accurate reference (or truth) cropland layer (TCL), consisting of cropland extent, areas, and irrigated vs. rainfed cropland areas, for the entire country of Tajikistan based on MFDC of year 2005 (MFDC2005). The methods involved in producing TCL included using ISOCLASS clustering, Tasseled Cap bi-spectral plots, spectro-temporal characteristics from MODIS 250 m monthly normalized difference vegetation index (NDVI) maximum value composites (MVC) time-series, and textural characteristics of higher resolution imagery. The TCL statistics accurately matched with the national statistics of Tajikistan for irrigated and rainfed croplands, where about 70% of croplands were irrigated and the rest rainfed. Second, a rule-based ACCA was developed to replicate the TCL accurately (~80% producer’s and user’s accuracies or within 20% quantity disagreement involving about 10 million Landsat 30 m sized cropland pixels of Tajikistan). Development of ACCA was an iterative process involving a series of rules that are coded, refined, tweaked, and re-coded until ACCA-derived croplands (ACLs) match accurately with TCLs. Third, the ACCA derived cropland

  14. An Automated Cropland Classification Algorithm (ACCA for Tajikistan by Combining Landsat, MODIS, and Secondary Data

    Directory of Open Access Journals (Sweden)

    Prasad S. Thenkabail

    2012-09-01

    Full Text Available The overarching goal of this research was to develop and demonstrate an automated Cropland Classification Algorithm (ACCA) that will rapidly, routinely, and accurately classify agricultural cropland extent, areas, and characteristics (e.g., irrigated vs. rainfed) over large areas such as a country or a region through combination of multi-sensor remote sensing and secondary data. In this research, a rule-based ACCA was conceptualized, developed, and demonstrated for the country of Tajikistan using mega file data cubes (MFDCs) involving data from Landsat Global Land Survey (GLS), Landsat Enhanced Thematic Mapper Plus (ETM+) 30 m, Moderate Resolution Imaging Spectroradiometer (MODIS) 250 m time-series, a suite of secondary data (e.g., elevation, slope, precipitation, temperature), and in situ data. First, the process involved producing an accurate reference (or truth) cropland layer (TCL), consisting of cropland extent, areas, and irrigated vs. rainfed cropland areas, for the entire country of Tajikistan based on MFDC of year 2005 (MFDC2005). The methods involved in producing TCL included using ISOCLASS clustering, Tasseled Cap bi-spectral plots, spectro-temporal characteristics from MODIS 250 m monthly normalized difference vegetation index (NDVI) maximum value composites (MVC) time-series, and textural characteristics of higher resolution imagery. The TCL statistics accurately matched with the national statistics of Tajikistan for irrigated and rainfed croplands, where about 70% of croplands were irrigated and the rest rainfed. Second, a rule-based ACCA was developed to replicate the TCL accurately (~80% producer’s and user’s accuracies, or within 20% quantity disagreement, involving about 10 million Landsat 30 m sized cropland pixels of Tajikistan). Development of ACCA was an iterative process involving a series of rules that are coded, refined, tweaked, and re-coded until ACCA-derived croplands (ACLs) match accurately with TCLs. Third, the ACCA derived

  15. Correction of oral contrast artifacts in CT-based attenuation correction of PET images using an automated segmentation algorithm

    International Nuclear Information System (INIS)

    Ahmadian, Alireza; Ay, Mohammad R.; Sarkar, Saeed; Bidgoli, Javad H.; Zaidi, Habib

    2008-01-01

    Oral contrast is usually administered in most X-ray computed tomography (CT) examinations of the abdomen and the pelvis as it allows more accurate identification of the bowel and facilitates the interpretation of abdominal and pelvic CT studies. However, the misclassification of contrast medium as high-density bone in CT-based attenuation correction (CTAC) is known to generate artifacts in the attenuation map (μmap), thus resulting in overcorrection for attenuation of positron emission tomography (PET) images. In this study, we developed an automated algorithm for segmentation and classification of regions containing oral contrast medium to correct for artifacts in CT-attenuation-corrected PET images using the segmented contrast correction (SCC) algorithm. The proposed algorithm consists of two steps: first, high CT number object segmentation using combined region- and boundary-based segmentation and second, object classification to bone and contrast agent using a knowledge-based nonlinear fuzzy classifier. Thereafter, the CT numbers of pixels belonging to the region classified as contrast medium are substituted with their equivalent effective bone CT numbers using the SCC algorithm. The generated CT images are then down-sampled followed by Gaussian smoothing to match the resolution of PET images. A piecewise calibration curve was then used to convert CT pixel values to linear attenuation coefficients at 511 keV. The visual assessment of segmented regions performed by an experienced radiologist confirmed the accuracy of the segmentation and classification algorithms for delineation of contrast-enhanced regions in clinical CT images. The quantitative analysis of generated μmaps of 21 clinical CT colonoscopy datasets showed an overestimation ranging between 24.4% and 37.3% in the 3D-classified regions depending on their volume and the concentration of contrast medium. Two PET/CT studies known to be problematic demonstrated the applicability of the technique in
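
    The final conversion step is commonly implemented as a piecewise (bilinear) mapping from CT numbers to 511 keV linear attenuation coefficients. The sketch below substitutes an assumed bone-equivalent value for pixels flagged as oral contrast, smooths and downsamples to PET resolution, then converts HU to mu; the breakpoints and slopes are typical textbook values, not the ones calibrated in this study.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def hu_to_mu_511(hu):
    """Piecewise (bilinear) CT-number -> mu(511 keV), approximate constants."""
    mu_water = 0.096                                # cm^-1 at 511 keV
    hu = np.asarray(hu, dtype=float)
    soft = mu_water * (1.0 + hu / 1000.0)           # soft-tissue branch
    bone = mu_water + 6.4e-5 * hu                   # bone branch (HU > 0)
    return np.where(hu <= 0, np.maximum(soft, 0.0), bone)

# Toy CT slice with a region already segmented/classified as oral contrast.
ct = np.zeros((512, 512))
ct[200:260, 200:300] = 900.0                        # contrast at ~900 HU
contrast_mask = ct > 800
bone_equivalent_hu = 300.0   # assumed effective-bone substitute (SCC-style)
ct_corr = ct.copy()
ct_corr[contrast_mask] = bone_equivalent_hu

# Match PET resolution: Gaussian smoothing, then downsample to 128x128.
ct_pet = zoom(gaussian_filter(ct_corr, sigma=4.0), 128 / 512)

mu_map = hu_to_mu_511(ct_pet)
print("mu range (cm^-1):", round(float(mu_map.min()), 4),
      "to", round(float(mu_map.max()), 4))
```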

  16. Automated segmentation of white matter fiber bundles using diffusion tensor imaging data and a new density based clustering algorithm.

    Science.gov (United States)

    Kamali, Tahereh; Stashuk, Daniel

    2016-10-01

    Robust and accurate segmentation of brain white matter (WM) fiber bundles assists in diagnosing and assessing progression or remission of neuropsychiatric diseases such as schizophrenia, autism, and depression. Supervised segmentation methods are infeasible in most applications, since generating gold standards is too costly. Hence, there is a growing interest in designing unsupervised methods. However, most conventional unsupervised methods require the number of clusters to be known in advance, which is not possible in most applications. The purpose of this study is to design an unsupervised segmentation algorithm for brain white matter fiber bundles which can automatically segment fiber bundles using intrinsic diffusion tensor imaging information, without any prior information or assumptions about the data distribution. Here, a new density-based clustering algorithm called neighborhood distance entropy consistency (NDEC) is proposed, which discovers natural clusters within data by simultaneously utilizing both local and global density information. The performance of NDEC is compared with other state-of-the-art clustering algorithms including chameleon, spectral clustering, DBSCAN, and k-means using Johns Hopkins University publicly available diffusion tensor imaging data. The performance of NDEC and the other employed clustering algorithms was evaluated using the dice ratio as an external evaluation criterion and the density-based clustering validation (DBCV) index as an internal evaluation metric. Across all employed clustering algorithms, NDEC obtained the highest average dice ratio (0.94) and DBCV value (0.71). NDEC can find clusters with arbitrary shapes and densities and consequently can be used for WM fiber bundle segmentation where there is no distinct boundary between various bundles. NDEC may also be used as an effective tool in other pattern recognition and medical diagnostic systems in which discovering natural clusters within data is a necessity. Copyright

  17. Whither Supervision?

    OpenAIRE

    Duncan Waite

    2006-01-01

    This paper inquires whether school supervision is in decline. Dr. Waite responds that the answer depends on the perspective from which you look at it. Dr. Waite suggests taking into consideration three related elements: the field itself, the expert in the field (the professor, the theorist, the student, and the administrator), and the context. When these three elements are reviewed, he emphasizes that there is no consensus about the field of supervision, but there are points of agreement related...

  18. SU-E-T-497: Semi-Automated in Vivo Radiochromic Film Dosimetry Using a Novel Image Processing Algorithm

    International Nuclear Information System (INIS)

    Reyhan, M; Yue, N

    2014-01-01

    Purpose: To validate an automated image processing algorithm designed to detect the center of radiochromic film used for in vivo film dosimetry against the current gold standard of manual selection. Methods: An image processing algorithm was developed to automatically select the region of interest (ROI) in *.tiff images that contain multiple pieces of radiochromic film (0.5×1.3 cm²). After a user has linked a calibration file to the processing algorithm and selected a *.tiff file for processing, an ROI is automatically detected for all films by a combination of thresholding and erosion, which removes edges and any additional markings for orientation. Calibration is applied to the mean pixel values from the ROIs and a *.tiff image is output displaying the original image with an overlay of the ROIs and the measured doses. Validation of the algorithm was determined by comparing in vivo dose determined using the current gold standard (manually drawn ROIs) versus automated ROIs for n=420 scanned films. Bland-Altman analysis, paired t-test, and linear regression were performed to demonstrate agreement between the processes. Results: The measured doses ranged from 0.2-886.6 cGy. Bland-Altman analysis of the two techniques (automatic minus manual) revealed a bias of −0.28 cGy and a 95% confidence interval of (5.5 cGy, −6.1 cGy). These values demonstrate excellent agreement between the two techniques. Paired t-test results showed no statistical differences between the two techniques, p=0.98. Linear regression with a forced zero intercept demonstrated that Automatic=0.997*Manual, with a Pearson correlation coefficient of 0.999. The minimal differences between the two techniques may be explained by the fact that the hand drawn ROIs were not identical to the automatically selected ones. The average processing time was 6.7 seconds in Matlab on an Intel Core 2 Duo processor. Conclusion: An automated image processing algorithm has been developed and validated, which will help minimize
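
    A minimal sketch of the ROI-detection idea (thresholding plus erosion, then a per-film mean pixel value fed to a calibration): the synthetic scan and the linear calibration function are placeholders for a real scanned film and a fitted calibration curve.

```python
import numpy as np
from skimage.filters import threshold_otsu
from skimage.morphology import binary_erosion, disk
from skimage.measure import label, regionprops

# Synthetic scanned image: bright background with darker film pieces.
rng = np.random.default_rng(7)
img = np.full((400, 600), 240.0) + rng.normal(0, 2, (400, 600))
img[50:130, 100:160] = 120.0     # film 1
img[220:300, 380:440] = 90.0     # film 2 (higher dose, darker film)

# Films are darker than background: threshold, then erode to drop edges
# and any orientation markings near the film borders.
mask = img < threshold_otsu(img)
mask = binary_erosion(mask, disk(5))

def pixel_to_dose(mean_pv):
    """Invented linear calibration (pixel value -> cGy), not measured."""
    return (240.0 - mean_pv) * 2.0

for region in regionprops(label(mask)):
    mean_pv = img[tuple(region.coords.T)].mean()
    cy, cx = region.centroid
    print(f"film at ({cy:.0f}, {cx:.0f}): dose ~ {pixel_to_dose(mean_pv):.1f} cGy")
```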

  19. Genetic algorithms in teaching artificial intelligence (automated generation of specific algebras)

    Science.gov (United States)

    Habiballa, Hashim; Jendryscik, Radek

    2017-11-01

    Teaching essential Artificial Intelligence (AI) methods is an important task for an educator in the branch of soft computing. The key focus is proper understanding of the principles of AI methods in two essential respects: why we use soft-computing methods at all, and how we apply them to generate reasonable results in sensible time. We present an interesting problem from non-educational research concerning the automated generation of specific algebras in a huge search space. We emphasize the above-mentioned points through an educational case study of this problem in the automated generation of specific algebras.

  20. A new automated quantification algorithm for the detection and evaluation of focal liver lesions with contrast-enhanced ultrasound.

    Science.gov (United States)

    Gatos, Ilias; Tsantis, Stavros; Spiliopoulos, Stavros; Skouroliakou, Aikaterini; Theotokas, Ioannis; Zoumpoulis, Pavlos; Hazle, John D; Kagadis, George C

    2015-07-01

    The aim was to detect and classify focal liver lesions (FLLs) from contrast-enhanced ultrasound (CEUS) imaging by means of an automated quantification algorithm. The proposed algorithm employs a sophisticated segmentation method to detect and contour focal lesions from 52 CEUS video sequences (30 benign and 22 malignant). Lesion detection uses wavelet transform zero crossings as an initialization step for the Markov random field model that extracts the lesion contour. After FLL detection across frames, a time-intensity curve (TIC) is computed, which describes the contrast agent's behavior at all vascular phases with respect to the adjacent parenchyma for each patient. From each TIC, eight features were automatically calculated and fed into the support vector machines (SVMs) classification algorithm in the design of the image analysis model. With regard to FLL detection accuracy, all detected lesions had an average overlap value of 0.89 ± 0.16 with manual segmentations for all CEUS frame-subsets included in the study. The highest classification accuracy from the SVM model was 90.3%, misdiagnosing three benign and two malignant FLLs, with sensitivity and specificity values of 93.1% and 86.9%, respectively. The proposed quantification system, which combines FLL detection and classification algorithms, may be of value to physicians as a second-opinion tool for avoiding unnecessary invasive procedures.
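
    A sketch of TIC feature extraction feeding an SVM: from a per-lesion intensity-versus-time curve, compute a few perfusion features (peak enhancement, time to peak, area under the curve, wash-in and wash-out slopes) and train a classifier. The abstract does not list the eight features actually used, so the set below is representative only, and the TICs are synthetic.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def tic_features(t, intensity):
    """Representative perfusion features from a time-intensity curve."""
    i_peak = int(np.argmax(intensity))
    peak = float(intensity[i_peak])
    ttp = float(t[i_peak])                          # time to peak
    # Trapezoidal area under the TIC (version-proof alternative to np.trapz).
    auc = float(np.sum(np.diff(t) * (intensity[1:] + intensity[:-1]) / 2))
    wash_in = (peak - float(intensity[0])) / max(ttp, 1e-6)
    wash_out = (float(intensity[-1]) - peak) / max(float(t[-1]) - ttp, 1e-6)
    return [peak, ttp, auc, wash_in, wash_out]

# Synthetic TICs: "malignant" modeled with earlier, sharper enhancement.
rng = np.random.default_rng(8)
t = np.linspace(0, 120, 240)  # seconds
X, y = [], []
for _ in range(60):
    for tau, lab in [(rng.uniform(8, 15), 1), (rng.uniform(25, 40), 0)]:
        curve = (t / tau) * np.exp(1 - t / tau) + 0.02 * rng.normal(size=t.size)
        X.append(tic_features(t, curve))
        y.append(lab)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X, y)
print("training accuracy:", clf.score(X, y))
```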

  1. Application of Novel Software Algorithms to Spectral-Domain Optical Coherence Tomography for Automated Detection of Diabetic Retinopathy.

    Science.gov (United States)

    Adhi, Mehreen; Semy, Salim K; Stein, David W; Potter, Daniel M; Kuklinski, Walter S; Sleeper, Harry A; Duker, Jay S; Waheed, Nadia K

    2016-05-01

    To present novel software algorithms applied to spectral-domain optical coherence tomography (SD-OCT) for automated detection of diabetic retinopathy (DR). Thirty-one diabetic patients (44 eyes) and 18 healthy, nondiabetic controls (20 eyes) who underwent volumetric SD-OCT imaging and fundus photography were retrospectively identified. A retina specialist independently graded DR stage. Trained automated software generated a retinal thickness score signifying macular edema and a cluster score signifying microaneurysms and/or hard exudates for each volumetric SD-OCT. Of 44 diabetic eyes, 38 had DR and six eyes did not have DR. Leave-one-out cross-validation using a linear discriminant at a missed detection/false alarm ratio of 3.00 yielded a software sensitivity and specificity of 92% and 69%, respectively, for DR detection when compared to clinical assessment. Novel software algorithms applied to commercially available SD-OCT can successfully detect DR and may have potential as a viable screening tool for DR in the future. [Ophthalmic Surg Lasers Imaging Retina. 2016;47:410-417.]. Copyright 2016, SLACK Incorporated.

  2. Neural network based automated algorithm to identify joint locations on hand/wrist radiographs for arthritis assessment

    International Nuclear Information System (INIS)

    Duryea, J.; Zaim, S.; Wolfe, F.

    2002-01-01

    Arthritis is a significant and costly healthcare problem that requires objective and quantifiable methods to evaluate its progression. Here we describe software that can automatically determine the locations of seven joints in the proximal hand and wrist that demonstrate arthritic changes. These are the five carpometacarpal (CMC1, CMC2, CMC3, CMC4, CMC5), radiocarpal (RC), and the scaphocapitate (SC) joints. The algorithm was based on an artificial neural network (ANN) that was trained using independent sets of digitized hand radiographs and manually identified joint locations. The algorithm used landmarks determined automatically by software developed in our previous work as starting points. Other than requiring user input of the location of nonanatomical structures and the orientation of the hand on the film, the procedure was fully automated. The software was tested on two datasets: 50 digitized hand radiographs from patients participating in a large clinical study, and 60 from subjects participating in arthritis research studies and who had mild to moderate rheumatoid arthritis (RA). It was evaluated by a comparison to joint locations determined by a trained radiologist using manual tracing. The success rate for determining the CMC, RC, and SC joints was 87%-99%, for normal hands and 81%-99% for RA hands. This is a first step in performing an automated computer-aided assessment of wrist joints for arthritis progression. The software provides landmarks that will be used by subsequent image processing routines to analyze each joint individually for structural changes such as erosions and joint space narrowing

  3. Algorithms

    Indian Academy of Sciences (India)

    polynomial) division have been found in Vedic Mathematics which are dated much before Euclid's algorithm. A programming language is used to describe an algorithm for execution on a computer. An algorithm expressed using a programming.

  4. Automating "Word of Mouth" to Recommend Classes to Students: An Application of Social Information Filtering Algorithms

    Science.gov (United States)

    Booker, Queen Esther

    2009-01-01

    An approach used to tackle the problem of helping online students find the classes they want and need is a filtering technique called "social information filtering," a general approach to personalized information filtering. Social information filtering essentially automates the process of "word-of-mouth" recommendations: items are recommended to a…

  5. Semi-automated algorithm for localization of dermal/epidermal junction in reflectance confocal microscopy images of human skin

    Science.gov (United States)

    Kurugol, Sila; Dy, Jennifer G.; Rajadhyaksha, Milind; Gossage, Kirk W.; Weissmann, Jesse; Brooks, Dana H.

    2011-03-01

    The examination of the dermis/epidermis junction (DEJ) is clinically important for skin cancer diagnosis. Reflectance confocal microscopy (RCM) is an emerging tool for detection of skin cancers in vivo. However, visual localization of the DEJ in RCM images, with high accuracy and repeatability, is challenging, especially in fair skin, due to low contrast, heterogeneous structure and high inter- and intra-subject variability. We recently proposed a semi-automated algorithm to localize the DEJ in z-stacks of RCM images of fair skin, based on feature segmentation and classification. Here we extend the algorithm to dark skin. The extended algorithm first decides the skin type and then applies the appropriate DEJ localization method. In dark skin, strong backscatter from the pigment melanin causes the basal cells above the DEJ to appear with high contrast. To locate those high contrast regions, the algorithm operates on small tiles (regions) and finds the peaks of the smoothed average intensity depth profile of each tile. However, for some tiles, due to heterogeneity, multiple peaks in the depth profile exist and the strongest peak might not be the basal layer peak. To select the correct peak, basal cells are represented with a vector of texture features. The peak with most similar features to this feature vector is selected. The results show that the algorithm detected the skin types correctly for all 17 stacks tested (8 fair, 9 dark). The DEJ detection algorithm achieved an average distance from the ground truth DEJ surface of around 4.7 μm for dark skin and around 7-14 μm for fair skin.
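
    A simplified version of the dark-skin branch described above: per tile, smooth the average intensity depth profile, find candidate peaks, and choose the peak whose local feature vector is closest to a basal-cell reference vector. All signals, features, and the reference vector are synthetic placeholders.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.signal import find_peaks

rng = np.random.default_rng(9)
depths = np.arange(100)          # z-index through the RCM stack

# Synthetic tile profile: a spurious superficial peak plus the true
# basal-layer peak deeper in the stack.
profile = (0.6 * np.exp(-((depths - 25) / 5.0) ** 2) +
           1.0 * np.exp(-((depths - 60) / 6.0) ** 2) +
           0.05 * rng.normal(size=depths.size))

smooth = gaussian_filter1d(profile, sigma=3)
peaks, _ = find_peaks(smooth, prominence=0.1)

# Hypothetical per-peak feature vector compared against a reference
# vector describing basal cells (both invented for this sketch).
reference = np.array([0.8, 0.5, 0.9])

def local_features(z):
    win = smooth[max(z - 5, 0):z + 5]
    return np.array([win.mean(), win.std(), win.max()])

best = min(peaks, key=lambda z: np.linalg.norm(local_features(z) - reference))
print("candidate peaks:", peaks, "-> selected basal-layer depth:", int(best))
```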

  6. EpHLA: an innovative and user-friendly software automating the HLAMatchmaker algorithm for antibody analysis.

    Science.gov (United States)

    Sousa, Luiz Cláudio Demes da Mata; Filho, Herton Luiz Alves Sales; Von Glehn, Cristina de Queiroz Carrascosa; da Silva, Adalberto Socorro; Neto, Pedro de Alcântara dos Santos; de Castro, José Adail Fonseca; do Monte, Semíramis Jamil Hadad

    2011-12-01

    The global challenge for solid organ transplantation programs is to distribute organs to highly sensitized recipients. The purpose of this work is to describe and test the functionality of the EpHLA software, a program that automates the analysis of acceptable and unacceptable HLA epitopes on the basis of the HLAMatchmaker algorithm. HLAMatchmaker considers small configurations of polymorphic residues, referred to as eplets, as essential components of HLA epitopes. Currently, the analyses require the creation of temporary files and the manual cut and paste of laboratory test results between electronic spreadsheets, which is time-consuming and prone to administrative errors. The EpHLA software was developed in the Object Pascal programming language and uses the HLAMatchmaker algorithm to generate histocompatibility reports. The automated generation of reports requires the integration of files containing the results of laboratory tests (HLA typing, anti-HLA antibody signature) and public data banks (NMDP, IMGT). The integration and the access to these data were accomplished by means of the framework called eDAFramework. The eDAFramework was developed in Object Pascal and PHP and it provides data access functionalities for software developed in these languages. The tool functionality was successfully tested in comparison to actual, manually derived reports of patients from a renal transplantation program with related donors. We successfully developed software which enables the automated definition of the epitope specificities of HLA antibodies. This new tool will benefit the management of recipient/donor pair selection for highly sensitized patients. Copyright © 2011 Elsevier B.V. All rights reserved.

  7. Does the Location of Bruch's Membrane Opening Change Over Time? Longitudinal Analysis Using San Diego Automated Layer Segmentation Algorithm (SALSA).

    Science.gov (United States)

    Belghith, Akram; Bowd, Christopher; Medeiros, Felipe A; Hammel, Naama; Yang, Zhiyong; Weinreb, Robert N; Zangwill, Linda M

    2016-02-01

    We determined whether the Bruch's membrane opening (BMO) location changes over time in healthy eyes and eyes with progressing glaucoma, and validated an automated segmentation algorithm for identifying the BMO in Cirrus high-definition optical coherence tomography (HD-OCT) images. We followed 95 eyes (35 progressing glaucoma and 60 healthy) for an average of 3.7 ± 1.1 years. A stable group of 50 eyes had repeated tests over a short period. In each B-scan of the stable group, the BMO points were delineated manually and automatically to assess the reproducibility of both segmentation methods. Moreover, the variation in BMO location over time was assessed longitudinally on the aligned images in 3D space, point by point, in the x, y, and z directions. The mean visual field mean deviation at baseline of the progressing glaucoma group was -7.7 dB. Mixed-effects models revealed small nonsignificant changes in BMO location over time in all directions in healthy eyes (the smallest P value was 0.39) and in the progressing glaucoma eyes (the smallest P value was 0.30). In the stable group, the overall intervisit intraclass correlation coefficient (ICC) and coefficient of variation (CV) were 98.4% and 2.1%, respectively, for the manual segmentation, and 98.1% and 1.9%, respectively, for the automated algorithm. The Bruch's membrane opening location was stable in normal and progressing glaucoma eyes over 3 to 4 years of follow-up, indicating that it can be used as a reference point in monitoring glaucoma progression. BMO location estimation with Cirrus HD-OCT using manual and automated segmentation showed excellent reproducibility.

  8. Automation of Algorithmic Tasks for Virtual Laboratories Based on Automata Theory

    Directory of Open Access Journals (Sweden)

    Evgeniy A. Efimchik

    2016-03-01

    Full Text Available In this work, we describe an automata model of the standard algorithm for constructing a correct solution of algorithmic tests. The described model allows a formal determination of the complexity of an algorithmic test variant and serves as a basis for defining complexity functions, including the collision concept: a situation of uncertainty in which a choice must be made, while fulfilling the task, between alternatives with different priorities. The influence of collisions on the automata model and its inner structure is described. The model and complexity functions are applied in virtual laboratories to the design of algorithms that construct variants of predetermined complexity in real time and to procedures for assessing students' solutions with respect to collisions. The results of the work are applied to the development of virtual laboratories used in the practical part of a massive online course on graph theory.

  9. Whither Supervision?

    Directory of Open Access Journals (Sweden)

    Duncan Waite

    2006-11-01

    Full Text Available This paper asks whether school supervision is in decline. Dr. Waite responds that the answer depends on the perspective from which one looks at it. Dr. Waite suggests considering three related elements: the field itself; the experts in the field (the professor, the theorist, the student, and the administrator); and the context. When these three elements are reviewed, it becomes clear that there is no consensus about the field of supervision, though there is agreement about its importance and its relation to improving students' practice in school for their benefit. Dr. Waite notes that practice in this field is not always in harmony with what theorists affirm. Regarding the supervisor or skilled person, the author indicates that his or her perspective depends on his or her epistemological beliefs or conception of learning; that is why supervision can be understood in different ways. Regarding context, Waite suggests that the social and external forces that influence people and society must be taken into consideration, because education is affected through them. Dr. Waite concludes that the way supervision is understood depends on the observer's perspective. He answers the initial question by saying that the supervision authorities, the knowledge of the field, its practitioners, and its practice may be dispersed but are not extinct, because supervision will always be part of the great enterprise we call education.

  10. Automated process flowsheet synthesis for membrane processes using genetic algorithm: role of crossover operators

    KAUST Repository

    Shafiee, Alireza; Arab, Mobin; Lai, Zhiping; Liu, Zongwen; Abbas, Ali

    2016-01-01

    In optimization-based process flowsheet synthesis, optimization methods, including genetic algorithms (GA), are used as advantageous tools to select a high performance flowsheet by ‘screening’ large numbers of possible flowsheets. In this study, we

  11. GOES Fire Detects from the Wildfire Automated Biomass Burning Algorithm (WF-ABBA)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The GOES ABBA is a contextual multi-spectral thresholding algorithm which utilizes dynamic local thresholds derived from the GOES satellite imagery and ancillary...

  12. CASA: An Efficient Automated Assignment of Protein Mainchain NMR Data Using an Ordered Tree Search Algorithm

    International Nuclear Information System (INIS)

    Wang Jianyong; Wang Tianzhi; Zuiderweg, Erik R. P.; Crippen, Gordon M.

    2005-01-01

    Rapid analysis of protein structure, interaction, and dynamics requires fast and automated assignment of 3D protein backbone triple-resonance NMR spectra. We introduce a new depth-first ordered tree search method for automated assignment, CASA, which uses hand-edited peak-pick lists from a flexible number of triple-resonance experiments. The computer program was tested on 13 artificially simulated peak lists for proteins of up to 723 residues, as well as on experimental data for four proteins. Under reasonable tolerances, it generated assignments that correspond to the ones reported in the literature within a few minutes of CPU time. The program was also tested on the proteins analyzed by other methods, with both simulated and experimental peak lists, and it generated good assignments in all relevant cases. Its robustness was further tested under various conditions.

  13. Automated Assessment of Existing Patient's Revised Cardiac Risk Index Using Algorithmic Software.

    Science.gov (United States)

    Hofer, Ira S; Cheng, Drew; Grogan, Tristan; Fujimoto, Yohei; Yamada, Takashige; Beck, Lauren; Cannesson, Maxime; Mahajan, Aman

    2018-05-25

    Previous work in the field of medical informatics has shown that rules-based algorithms can be created to identify patients with various medical conditions; however, these techniques have not been compared to actual clinician notes, nor has their ability to predict complications been tested. We hypothesize that a rules-based algorithm can successfully identify patients with the diseases in the Revised Cardiac Risk Index (RCRI). Patients undergoing surgery at the University of California, Los Angeles Health System between April 1, 2013 and July 1, 2016 who had at least 2 previous office visits were included. For each disease in the RCRI except renal failure (congestive heart failure, ischemic heart disease, cerebrovascular disease, and diabetes mellitus), diagnosis algorithms were created based on diagnostic and standard clinical treatment criteria. For each disease state, the prevalence of the disease as determined by the algorithm, International Classification of Disease (ICD) code, and anesthesiologist's preoperative note was determined. Additionally, 400 American Society of Anesthesiologists class III and IV cases were randomly chosen for manual review by an anesthesiologist. The sensitivity, specificity, accuracy, positive predictive value, negative predictive value, and area under the receiver operating characteristic curve were determined using the manual review as a gold standard. Last, the ability of the RCRI as calculated by each of the methods to predict in-hospital mortality was determined, and the time necessary to run the algorithms was calculated. A total of 64,151 patients met inclusion criteria for the study. In general, the incidence of definite or likely disease determined by the algorithms was higher than that detected by the anesthesiologist. Additionally, in all disease states, the prevalence of disease was always lowest for the ICD codes, followed by the preoperative note, followed by the algorithms. In the subset of patients for whom the
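
    The abstract does not publish the study's actual rules, but a rules-based disease flag of this kind typically combines diagnostic codes, standard treatments, and laboratory criteria. A hedged sketch for one RCRI condition follows; the code prefixes, drug list, and HbA1c cutoff are illustrative placeholders, not the study's criteria.

        def has_diabetes(icd_codes, medications, hba1c):
            """Flag likely diabetes from codes, treatment, or labs (illustrative rules)."""
            code_hit = any(c.startswith(("E10", "E11")) for c in icd_codes)  # ICD-10 diabetes
            treatment_hit = any(d in {"insulin", "metformin"} for d in medications)
            lab_hit = hba1c is not None and hba1c >= 6.5   # common diagnostic cutoff
            return code_hit or treatment_hit or lab_hit

        print(has_diabetes(["E11.9"], [], None))  # True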

  14. Evaluation of an Automated Swallow-Detection Algorithm Using Visual Biofeedback in Healthy Adults and Head and Neck Cancer Survivors.

    Science.gov (United States)

    Constantinescu, Gabriela; Kuffel, Kristina; Aalto, Daniel; Hodgetts, William; Rieger, Jana

    2017-11-02

    Mobile health (mHealth) technologies may offer an opportunity to address longstanding clinical challenges, such as access and adherence to swallowing therapy. Mobili-T® is an mHealth device that uses surface electromyography (sEMG) to provide biofeedback on submental muscle activity during exercise. An automated swallow-detection algorithm was developed for Mobili-T®. This study evaluated the performance of the swallow-detection algorithm. Ten healthy participants and 10 head and neck cancer (HNC) patients were fitted with the device. Signal was acquired during regular, effortful, and Mendelsohn maneuver saliva swallows, as well as lip presses, tongue, and head movements. Signals of interest were tagged during data acquisition and used to evaluate algorithm performance. Sensitivity and positive predictive values (PPV) were calculated for each participant. Saliva swallows were compared between HNC patients and controls on the four sEMG-based parameters used in the algorithm: duration, peak amplitude ratio, median frequency, and 15th percentile of the power spectral density. In healthy participants, sensitivity and PPV were 92.3% and 83.9%, respectively. In HNC patients, sensitivity was 92.7% and PPV was 72.2%. In saliva swallows, HNC patients had significantly longer event durations (U = 1925.5). The algorithm performed well with healthy participants and retained a high sensitivity, but had a lower PPV with HNC patients. With respect to Mobili-T®, the algorithm will next be evaluated using the mHealth system.

  15. Automated Storm Tracking and the Lightning Jump Algorithm Using GOES-R Geostationary Lightning Mapper (GLM) Proxy Data

    Science.gov (United States)

    Schultz, Elise; Schultz, Christopher Joseph; Carey, Lawrence D.; Cecil, Daniel J.; Bateman, Monte

    2016-01-01

    This study develops a fully automated lightning jump system encompassing objective storm tracking, Geostationary Lightning Mapper proxy data, and the lightning jump algorithm (LJA), which are important elements in the transition of the LJA concept from a research to an operational based algorithm. Storm cluster tracking is based on a product created from the combination of a radar parameter (vertically integrated liquid, VIL), and lightning information (flash rate density). Evaluations showed that the spatial scale of tracked features or storm clusters had a large impact on the lightning jump system performance, where increasing spatial scale size resulted in decreased dynamic range of the system's performance. This framework will also serve as a means to refine the LJA itself to enhance its operational applicability. Parameters within the system are isolated and the system's performance is evaluated with adjustments to parameter sensitivity. The system's performance is evaluated using the probability of detection (POD) and false alarm ratio (FAR) statistics. Of the algorithm parameters tested, sigma-level (metric of lightning jump strength) and flash rate threshold influenced the system's performance the most. Finally, verification methodologies are investigated. It is discovered that minor changes in verification methodology can dramatically impact the evaluation of the lightning jump system.
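
    The POD and FAR statistics named above have standard contingency-table definitions; a minimal sketch, with illustrative counts:

        def pod(hits, misses):
            return hits / (hits + misses)                 # probability of detection

        def far(hits, false_alarms):
            return false_alarms / (hits + false_alarms)   # false alarm ratio

        print(pod(hits=42, misses=8), far(hits=42, false_alarms=14))  # 0.84 0.25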

  16. A fully automated non-external marker 4D-CT sorting algorithm using a serial cine scanning protocol.

    Science.gov (United States)

    Carnes, Greg; Gaede, Stewart; Yu, Edward; Van Dyk, Jake; Battista, Jerry; Lee, Ting-Yim

    2009-04-07

    Current 4D-CT methods require external marker data to retrospectively sort image data and generate CT volumes. In this work we develop an automated 4D-CT sorting algorithm that performs without the aid of data collected from an external respiratory surrogate. The sorting algorithm requires an overlapping cine scan protocol. The overlapping protocol provides a spatial link between couch positions. Beginning with a starting scan position, images from the adjacent scan position (which spatially match the starting scan position) are selected by maximizing the normalized cross correlation (NCC) of the images at the overlapping slice position. The process was continued by 'daisy chaining' all couch positions using the selected images until an entire 3D volume was produced. The algorithm produced 16 phase volumes to complete a 4D-CT dataset. Additional 4D-CT datasets were also produced using external marker amplitude and phase angle sorting methods. The image quality of the volumes produced by the different methods was quantified by calculating the mean difference of the sorted overlapping slices from adjacent couch positions. The NCC-sorted images showed a significant decrease in the mean difference (p < 0.01) for the five patients.
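
    A minimal sketch of the slice-matching step, assuming the overlapping slices are available as 2-D arrays; the helper names are illustrative and the images are assumed non-constant:

        import numpy as np

        def ncc(a, b):
            """Normalized cross correlation between two same-size, non-constant images."""
            a = (a - a.mean()) / a.std()
            b = (b - b.mean()) / b.std()
            return float(np.mean(a * b))

        def best_match(reference_slice, candidate_slices):
            """Index of the cine image that best continues the reconstructed volume."""
            return int(np.argmax([ncc(reference_slice, c) for c in candidate_slices]))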

  17. Development of automated system based on neural network algorithm for detecting defects on molds installed on casting machines

    Science.gov (United States)

    Bazhin, V. Yu; Danilov, I. V.; Petrov, P. A.

    2018-05-01

    During the casting of light alloys and ligatures based on aluminum and magnesium, problems arise with the quality of the metal's distribution and crystallization in the mold. To monitor mold defects on the casting conveyor, a camera with a resolution of 780 x 580 pixels and a shooting rate of 75 frames per second was selected. Images of molds from casting machines were used as input data for the neural network algorithm. At the stage of preparing the digital database and its analytical evaluation, a convolutional neural network architecture was chosen for the algorithm. The information flow from the local controller is transferred to the OPC server and then to the SCADA system of the foundry. After training, the defect-recognition accuracy of the neural network was about 95.1% on a validation split. The trained weight coefficients were then applied to the test split, where the algorithm achieved accuracy identical to that on the validation images. The proposed technical solutions make it possible to increase the efficiency of the automated process control system in the foundry by expanding the digital database.

  18. Automated Lead Optimization of MMP-12 Inhibitors Using a Genetic Algorithm.

    Science.gov (United States)

    Pickett, Stephen D; Green, Darren V S; Hunt, David L; Pardoe, David A; Hughes, Ian

    2011-01-13

    Traditional lead optimization projects involve long synthesis and testing cycles, favoring extensive structure-activity relationship (SAR) analysis and molecular design steps, in an attempt to limit the number of cycles that a project must run to optimize a development candidate. Microfluidic-based chemistry and biology platforms, with cycle times of minutes rather than weeks, lend themselves to unattended autonomous operation. The bottleneck in the lead optimization process is therefore shifted from synthesis or test to SAR analysis and design. As such, the way is open to an algorithm-directed process, without the need for detailed user data analysis. Here, we present results of two synthesis and screening experiments, undertaken using traditional methodology, to validate a genetic algorithm optimization process for future application to a microfluidic system. The algorithm has several novel features that are important for the intended application. For example, it is robust to missing data and can suggest compounds for retest to ensure reliability of optimization. The algorithm is first validated on a retrospective analysis of an in-house library embedded in a larger virtual array of presumed inactive compounds. In a second, prospective experiment with MMP-12 as the target protein, 140 compounds are submitted for synthesis over 10 cycles of optimization. Comparison is made to the results from the full combinatorial library that was synthesized manually and tested independently. The results show that compounds selected by the algorithm are heavily biased toward the more active regions of the library, while the algorithm is robust to both missing data (compounds where synthesis failed) and inactive compounds. This publication places the full combinatorial library and biological data into the public domain with the intention of advancing research into algorithm-directed lead optimization methods.

  19. SemiBoost: boosting for semi-supervised learning.

    Science.gov (United States)

    Mallapragada, Pavan Kumar; Jin, Rong; Jain, Anil K; Liu, Yi

    2009-11-01

    Semi-supervised learning has attracted a significant amount of attention in pattern recognition and machine learning. Most previous studies have focused on designing special algorithms to effectively exploit the unlabeled data in conjunction with labeled data. Our goal is to improve the classification accuracy of any given supervised learning algorithm by using the available unlabeled examples. We call this the semi-supervised improvement problem, to distinguish the proposed approach from the existing approaches. We design a meta-semi-supervised learning algorithm that wraps around the underlying supervised algorithm and improves its performance using unlabeled data. This problem is particularly important when we need to train a supervised learning algorithm with a limited number of labeled examples and a multitude of unlabeled examples. We present a boosting framework for semi-supervised learning, termed SemiBoost. The key advantages of the proposed semi-supervised learning approach are: 1) performance improvement of any supervised learning algorithm with a multitude of unlabeled data, 2) efficient computation by the iterative boosting algorithm, and 3) exploitation of both the manifold and cluster assumptions in training classification models. An empirical study on 16 different data sets and text categorization demonstrates that the proposed framework improves the performance of several commonly used supervised learning algorithms, given a large number of unlabeled examples. We also show that the performance of the proposed algorithm, SemiBoost, is comparable to the state-of-the-art semi-supervised learning algorithms.

  20. A Robust Automated Cataract Detection Algorithm Using Diagnostic Opinion Based Parameter Thresholding for Telemedicine Application

    Directory of Open Access Journals (Sweden)

    Shashwat Pathak

    2016-09-01

    Full Text Available This paper proposes and evaluates an algorithm to automatically detect cataracts from color images of adult human subjects. Currently, methods available for cataract detection are based on the use of either a fundus camera or a digital single-lens reflex (DSLR) camera; both are very expensive. The main motive behind this work is to develop an inexpensive, robust and convenient algorithm which, in conjunction with suitable devices, will be able to diagnose the presence of cataract from true color images of an eye. An algorithm is proposed for cataract screening based on texture features: uniformity, intensity and standard deviation. These features are first computed and mapped with diagnostic opinion by the eye expert to define the basic thresholds of the screening system, and then tested on real subjects in an eye clinic. Finally, a tele-ophthalmology model using our proposed system has been suggested, which confirms the telemedicine application of the proposed system.
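
    A hedged sketch of a screening rule built on the three texture features named above. The thresholds and the direction of each comparison are placeholders; in the paper they are derived from expert diagnostic opinion.

        import numpy as np

        def screen_cataract(gray_eye, t_uniformity=0.1, t_intensity=150.0, t_std=40.0):
            """Return True if the image looks cataractous under illustrative thresholds."""
            hist, _ = np.histogram(gray_eye, bins=256, range=(0, 256), density=True)
            uniformity = float(np.sum(hist ** 2))   # energy of the gray-level histogram
            intensity = float(gray_eye.mean())
            std_dev = float(gray_eye.std())
            # Assumed pattern: cloudy lenses are brighter, more uniform, lower contrast.
            return uniformity > t_uniformity and intensity > t_intensity and std_dev < t_std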

  1. Automated phase picker and source location algorithm for local distances using a single three component seismic station

    International Nuclear Information System (INIS)

    Saari, J.

    1989-12-01

    The paper describes procedures for the automatic location of local events using single-site, three-component (3c) seismogram records. Epicentral distance is determined from the time difference between P- and S-onsets. For onset time estimates, a special phase picker algorithm is introduced. Onset detection is accomplished by comparing the short-term average with the long-term average after multiplication of the north, east and vertical components of the recording. For epicentral distances up to 100 km, errors seldom exceed 5 km. The slowness vector, essentially the azimuth, is estimated independently using the Christoffersson et al. (1988) 'polarization' technique, although a priori knowledge of the P-onset time gives the best results. Differences between 'true' and observed azimuths are generally less than 12 degrees. Practical examples demonstrate the viability of the procedures for automated 3c seismogram analysis. The results obtained compare favourably with those achieved by a miniarray of three stations. (orig.)
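
    A minimal STA/LTA onset picker of the kind described, operating on a single composite trace; the window lengths and trigger threshold are illustrative, not the paper's tuned values.

        import numpy as np

        def sta_lta_onset(trace, fs, sta_win=0.5, lta_win=10.0, threshold=3.0):
            """Return the first sample where the short-term/long-term average ratio
            exceeds the threshold, or None if no trigger occurs."""
            n_sta, n_lta = int(sta_win * fs), int(lta_win * fs)
            energy = trace.astype(float) ** 2
            csum = np.concatenate(([0.0], np.cumsum(energy)))
            for i in range(n_lta, len(trace) - n_sta):
                sta = (csum[i + n_sta] - csum[i]) / n_sta      # window after sample i
                lta = (csum[i] - csum[i - n_lta]) / n_lta      # window before sample i
                if lta > 0 and sta / lta > threshold:
                    return i  # candidate P- or S-onset sample
            return None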

  2. MaxBin 2.0: an automated binning algorithm to recover genomes from multiple metagenomic datasets

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Yu-Wei [Joint BioEnergy Inst. (JBEI), Emeryville, CA (United States); Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Simmons, Blake A. [Joint BioEnergy Inst. (JBEI), Emeryville, CA (United States); Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Singer, Steven W. [Joint BioEnergy Inst. (JBEI), Emeryville, CA (United States); Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2015-10-29

    The recovery of genomes from metagenomic datasets is a critical step to defining the functional roles of the underlying uncultivated populations. We previously developed MaxBin, an automated binning approach for high-throughput recovery of microbial genomes from metagenomes. Here, we present an expanded binning algorithm, MaxBin 2.0, which recovers genomes from co-assembly of a collection of metagenomic datasets. Tests on simulated datasets revealed that MaxBin 2.0 is highly accurate in recovering individual genomes, and the application of MaxBin 2.0 to several metagenomes from environmental samples demonstrated that it could achieve two complementary goals: recovering more bacterial genomes compared to binning a single sample as well as comparing the microbial community composition between different sampling environments. Availability and implementation: MaxBin 2.0 is freely available at http://sourceforge.net/projects/maxbin/ under BSD license. Supplementary information: Supplementary data are available at Bioinformatics online.

  3. Microprocessor-based integration of microfluidic control for the implementation of automated sensor monitoring and multithreaded optimization algorithms.

    Science.gov (United States)

    Ezra, Elishai; Maor, Idan; Bavli, Danny; Shalom, Itai; Levy, Gahl; Prill, Sebastian; Jaeger, Magnus S; Nahmias, Yaakov

    2015-08-01

    Microfluidic applications range from combinatorial synthesis to high-throughput screening, with platforms integrating analog perfusion components, digitally controlled micro-valves and a range of sensors that demand a variety of communication protocols. Currently, discrete control units are used to regulate and monitor each component, resulting in scattered control interfaces that limit data integration and synchronization. Here, we present a microprocessor-based control unit, built on the MS Gadgeteer open framework, that integrates all aspects of microfluidics through a high-current electronic circuit that supports and synchronizes digital and analog signals for perfusion components, pressure elements, and arbitrary sensor communication protocols using a plug-and-play interface. The control unit supports an integrated touch screen and a TCP/IP interface that provides local and remote control of flow and data acquisition. To establish the ability of our control unit to integrate and synchronize complex microfluidic circuits, we developed an equi-pressure combinatorial mixer. We demonstrate the generation of complex perfusion sequences, allowing the automated sampling, washing, and calibration of an electrochemical lactate sensor continuously monitoring hepatocyte viability following exposure to the pesticide rotenone. Importantly, integration of an optical sensor allowed us to implement automated optimization protocols that pose different computational challenges, including prioritized data structures in a genetic algorithm, distributed computational effort in multiple hill-climbing searches, and real-time realization of probabilistic models in simulated annealing. Our system offers a comprehensive solution for establishing optimization protocols and perfusion sequences in complex microfluidic circuits.

  4. The Automation of Stochastization Algorithm with Use of SymPy Computer Algebra Library

    Science.gov (United States)

    Demidova, Anastasya; Gevorkyan, Migran; Kulyabov, Dmitry; Korolkova, Anna; Sevastianov, Leonid

    2018-02-01

    SymPy computer algebra library is used for automatic generation of ordinary and stochastic systems of differential equations from the schemes of kinetic interaction. Schemes of this type are used not only in chemical kinetics but also in biological, ecological and technical models. This paper describes the automatic generation algorithm with an emphasis on application details.
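
    In the same spirit, the sketch below uses SymPy to turn a small kinetic interaction scheme into mass-action ODE right-hand sides; the two-reaction scheme and all names are illustrative, not the paper's generator.

        import sympy as sp

        x, y, z = species = sp.symbols("x y z", positive=True)
        k1, k2 = sp.symbols("k1 k2", positive=True)

        # Each reaction: (reactant stoichiometry, product stoichiometry, rate constant)
        reactions = [
            ({x: 1, y: 1}, {z: 1}, k1),   # X + Y -> Z
            ({z: 1}, {x: 1, y: 1}, k2),   # Z -> X + Y
        ]

        rhs = {s: sp.Integer(0) for s in species}
        for reactants, products, k in reactions:
            rate = k * sp.Mul(*[s**n for s, n in reactants.items()])  # law of mass action
            for s, n in reactants.items():
                rhs[s] -= n * rate
            for s, n in products.items():
                rhs[s] += n * rate

        for s in species:
            sp.pprint(sp.Eq(sp.Symbol(f"d{s}/dt"), rhs[s]))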

  5. Searching dependency between algebraic equations: An algorithm applied to automated reasoning

    International Nuclear Information System (INIS)

    Yang Lu; Zhang Jingzhong

    1990-01-01

    An efficient computer algorithm is given to decide how many branches of the solution to a system of algebraic equations also solve another equation. As one of its applications, this can be used in practice to verify a conjecture whose hypotheses and conclusion are expressed by algebraic equations, whether those equations are reducible or irreducible. (author). 10 refs

  6. Automated spike sorting algorithm based on Laplacian eigenmaps and k-means clustering.

    Science.gov (United States)

    Chah, E; Hok, V; Della-Chiesa, A; Miller, J J H; O'Mara, S M; Reilly, R B

    2011-02-01

    This study presents a new automatic spike sorting method based on feature extraction by Laplacian eigenmaps combined with k-means clustering. The performance of the proposed method was compared against previously reported algorithms such as principal component analysis (PCA) and amplitude-based feature extraction. Two types of classifier (namely k-means and classification expectation-maximization) were incorporated within the spike sorting algorithms in order to find a suitable classifier for the feature sets. Simulated data sets and in-vivo tetrode multichannel recordings were employed to assess the performance of the spike sorting algorithms. The results show that the proposed algorithm yields significantly improved performance, with a mean sorting accuracy of 73% and sorting error of 10%, compared to PCA, which combined with k-means had a sorting accuracy of 58% and sorting error of 10%.
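
    A minimal sketch of the described pipeline using scikit-learn's SpectralEmbedding as the Laplacian-eigenmap step, followed by k-means; the waveform matrix, neighbor count, and cluster count are illustrative assumptions.

        import numpy as np
        from sklearn.manifold import SpectralEmbedding
        from sklearn.cluster import KMeans

        def sort_spikes(waveforms, n_units=3, n_features=2):
            """waveforms: (n_spikes, n_samples) array of aligned spike snippets."""
            features = SpectralEmbedding(n_components=n_features,
                                         n_neighbors=15).fit_transform(waveforms)
            return KMeans(n_clusters=n_units, n_init=10).fit_predict(features)

        rng = np.random.default_rng(0)
        demo = rng.normal(size=(300, 40))          # placeholder "spikes"
        print(np.bincount(sort_spikes(demo)))      # cluster sizes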

  7. Algorithms

    Indian Academy of Sciences (India)

    to as 'divide-and-conquer'. Although there has been a large effort in realizing efficient algorithms, there are not many universally accepted algorithm design paradigms. In this article, we illustrate algorithm design techniques such as balancing, greedy strategy, dynamic programming strategy, and backtracking or traversal of ...

  8. Design and Implementation of the Automated Rendezvous Targeting Algorithms for Orion

    Science.gov (United States)

    DSouza, Christopher; Weeks, Michael

    2010-01-01

    The Orion vehicle will be designed to perform several rendezvous missions: rendezvous with the ISS in Low Earth Orbit (LEO), rendezvous with the EDS/Altair in LEO, a contingency rendezvous with the ascent stage of the Altair in Low Lunar Orbit (LLO), and a contingency rendezvous in LLO with the ascent and descent stages in the case of an aborted lunar landing. Therefore, it is not difficult to see that each of these scenarios imposes different operational, timing, and performance constraints on the GNC system. To this end, a suite of on-board guidance and targeting algorithms has been designed to meet the requirement to perform the rendezvous independent of communications with the ground. This capability is particularly relevant for the lunar missions, some of which may occur on the far side of the moon. This paper will describe these algorithms, which are structured and arranged so as to be flexible and able to safely perform a wide variety of rendezvous trajectories. The goal of the algorithms is not merely to fly one specific type of canned rendezvous profile. Rather, the suite was designed from the start to be general enough that any type of trajectory profile can be flown (e.g., a coelliptic profile, a stable orbit rendezvous profile, or an expedited LLO rendezvous profile), all using the same suite of rendezvous algorithms. Each of these profiles makes use of maneuver types designed with the dual goals of robustness and performance. They are designed to converge quickly under dispersed conditions, and they perform many of the functions performed on the ground today. The targeting algorithms consist of a phasing maneuver (NC), an altitude adjust maneuver (NH), a plane change maneuver (NPC), a coelliptic maneuver (NSR), a Lambert-targeted maneuver, and several multiple-burn targeted maneuvers which combine one or more of these algorithms. The derivation and implementation of each of these

  9. A comparison of an algorithm for automated sequential beam orientation selection (Cycle) with simulated annealing

    International Nuclear Information System (INIS)

    Woudstra, Evert; Heijmen, Ben J M; Storchi, Pascal R M

    2008-01-01

    Some time ago we developed and published a new deterministic algorithm (called Cycle) for automatic selection of beam orientations in radiotherapy. This algorithm is a plan generation process aiming at the prescribed PTV dose within hard dose and dose-volume constraints. The algorithm allows a large number of input orientations to be used and selects only the most efficient orientations that survive the selection process. Efficiency is determined by a score function and is more or less equal to the extent of uninhibited access to the PTV for a specific beam during the selection process. In this paper we compare the capabilities of fast-simulated annealing (FSA) and Cycle for cases where local optima are supposed to be present. Five pancreas and five oesophagus cases previously treated in our institute were selected for this comparison. Plans were generated for FSA and Cycle using the same hard dose and dose-volume constraints, and the largest achievable PTV doses obtained from these algorithms were compared. The largest achieved PTV dose values were generally very similar for the two algorithms. In some cases FSA resulted in a slightly higher PTV dose than Cycle, at the cost of switching on substantially more beam orientations than Cycle. In other cases, when Cycle generated the solution with the highest PTV dose using only a limited number of non-zero-weight beams, FSA seemed to have some difficulty in switching off the unfavourable directions. Cycle was faster than FSA, especially for large-dimensional feasible spaces. In conclusion, for the cases studied in this paper, we have found that despite the inherent drawback of sequential search as used by Cycle (where Cycle could probably get trapped in a local optimum), Cycle is nevertheless able to find comparable or sometimes slightly better treatment plans in comparison with FSA (which in theory finds the global optimum), especially in large-dimensional beam weight spaces.
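
    For reference, a generic fast-simulated-annealing loop of the kind FSA denotes, applied to a toy beam-orientation objective; the score function, move, and cooling schedule below are placeholders, not the paper's clinical constraints.

        import math
        import random

        def anneal(score, initial, neighbor, t0=1.0, cooling=0.995, steps=5000):
            """Maximize score() by annealing; returns the best solution found."""
            current = best = initial
            t = t0
            for _ in range(steps):
                candidate = neighbor(current)
                delta = score(candidate) - score(current)
                # Accept improvements always, worsenings with Boltzmann probability.
                if delta > 0 or random.random() < math.exp(delta / t):
                    current = candidate
                if score(current) > score(best):
                    best = current
                t *= cooling  # geometric cooling schedule
            return best

        angles = list(range(0, 360, 10))

        def spread(sol):
            # Toy objective: total pairwise angular separation of the selected beams.
            return sum(min(abs(a - b), 360 - abs(a - b)) for a in sol for b in sol)

        def perturb(sol):
            # Move: replace one randomly chosen beam orientation.
            new = list(sol)
            new[random.randrange(len(new))] = random.choice(angles)
            return tuple(new)

        best_beams = anneal(spread, tuple(random.sample(angles, 5)), perturb)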

  10. Revision of an automated microseismic location algorithm for DAS - 3C geophone hybrid array

    Science.gov (United States)

    Mizuno, T.; LeCalvez, J.; Raymer, D.

    2017-12-01

    Application of distributed acoustic sensing (DAS) has been studied in several areas of seismology. One of these areas is microseismic reservoir monitoring (e.g., Molteni et al., 2017, First Break). Considering the present limitations of DAS, which include a relatively low signal-to-noise ratio (SNR) and no 3C polarization measurements, a DAS - 3C geophone hybrid array is a practical option when using a single monitoring well. Considering the large volume of data from distributed sensing, microseismic event detection and location using a source-scanning-type algorithm is a reasonable choice, especially for real-time monitoring. The algorithm must handle both strain rate along the borehole axis for DAS and particle velocity for 3C geophones. Only a small number of large-SNR events will be detected throughout a large aperture encompassing the hybrid array; therefore, the aperture is to be optimized dynamically to eliminate noisy channels for the majority of events. For such a hybrid array, coalescence microseismic mapping (CMM) (Drew et al., 2005, SPE) was revised. CMM forms a likelihood function of event location and origin time. At each receiver, a time function of event arrival likelihood is inferred using an SNR function and migrated in time and space to determine the hypocenter and origin time likelihood. This algorithm was revised to dynamically optimize such a hybrid array by identifying receivers where a microseismic signal is possibly detected and using only those receivers to compute the likelihood function. Currently, peak SNR is used to select receivers. To prevent false results due to a small aperture, a minimum aperture threshold is employed. The algorithm refines the location likelihood using 3C geophone polarization. We tested this algorithm using a ray-based synthetic dataset. Leaney (2014, PhD thesis, UBC) is used to compute particle velocity at the receivers. Strain rate along the borehole axis is computed from particle velocity as DAS microseismic

  11. A thesis on the Development of an Automated SWIFT Edge Detection Algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Trujillo, Christopher J. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-07-28

    Throughout the world, scientists and engineers, such as those at Los Alamos National Laboratory, perform research and testing unique to applications aimed at advancing technology and understanding the nature of materials. With this testing comes a need for advanced methods of data acquisition and, most importantly, a means of analyzing and extracting the necessary information from the acquired data. In this thesis, I aim to produce an automated method, implementing advanced image processing techniques and tools, to analyze SWIFT image datasets for Detonator Technology at Los Alamos National Laboratory. Such an effective method for edge detection and point extraction can prove advantageous in analyzing these unique datasets and provide consistency in producing results.

  12. An Automated Energy Detection Algorithm Based on Kurtosis-Histogram Excision

    Science.gov (United States)

    2018-01-01

    Channel bandwidths of 10 kHz, 100 kHz and 1 MHz are considered over a 100 MHz–1 GHz band. Statistical analysis is the mathematical ... quantitative terms. In commercial prognostics and diagnostic vibrational monitoring applications, statistical techniques that are mainly used for alarm ... applying statistical processing techniques to the energy detection scenario of signals in the RF spectrum domain. The algorithm was developed after

  13. An Automated Processing Algorithm for Flat Areas Resulting from DEM Filling and Interpolation

    Directory of Open Access Journals (Sweden)

    Xingwei Liu

    2017-11-01

    Full Text Available Correction of digital elevation models (DEMs) for flat areas is a critical process for hydrological analyses and modeling, such as the determination of flow directions and accumulations, and the delineation of drainage networks and sub-basins. In this study, a new algorithm is proposed for flat correction/removal. It uses the puddle delineation (PD) program to identify depressions (including their centers and overflow/spilling thresholds), compute topographic characteristics, and further fill the depressions. Three levels of elevation increments are used for flat correction. The first and second levels of increments create flows toward the thresholds and centers of the filled depressions or flats, while the third level of small random increments is introduced to cope with multiple-threshold conditions. A set of artificial surfaces and two real-world landscapes were selected to test the new algorithm. The results showed that the proposed method was not limited by the shapes, the number of thresholds, or the surrounding topographic conditions of flat areas. Compared with the traditional methods, the new algorithm simplified the flat correction procedure and reduced the final elevation increments by 5.71–33.33%. This can be used to effectively remove/correct topographic flats and create flat-free DEMs.

  14. Automated process flowsheet synthesis for membrane processes using genetic algorithm: role of crossover operators

    KAUST Repository

    Shafiee, Alireza

    2016-06-25

    In optimization-based process flowsheet synthesis, optimization methods, including genetic algorithms (GA), are used as advantageous tools to select a high-performance flowsheet by 'screening' large numbers of possible flowsheets. In this study, we expand the role of the GA to include flowsheet generation by proposing a modified Greedy sub-tour crossover operator. Performance of the proposed crossover operator is compared with four other commonly used operators. The proposed GA optimization-based process synthesis method is applied to generate the optimum process flowsheet for a multicomponent membrane-based CO2 capture process. Within the defined constraints and using the random-point crossover, a CO2 purity of 0.827 (equivalent to 0.986 on a dry basis) is achieved, a 3.4% improvement over the simplest crossover operator applied. In addition, the least variability in the converged flowsheet and CO2 purity is observed for the random-point crossover operator, which approximately implies closeness of the solution to the global optimum and hence the consistency of the algorithm. The proposed crossover operator is found to improve the convergence speed of the algorithm by 77.6%.
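
    To illustrate the class of operator being compared, a textbook sub-tour-style crossover for permutation-encoded flowsheets is sketched below; this generic variant is not the paper's modified Greedy sub-tour operator.

        import random

        def subtour_crossover(parent_a, parent_b):
            """Copy a random sub-tour from parent_a, fill the rest in parent_b order."""
            n = len(parent_a)
            i, j = sorted(random.sample(range(n), 2))
            child = [None] * n
            child[i:j + 1] = parent_a[i:j + 1]          # inherited sub-tour
            fill = [g for g in parent_b if g not in child]  # preserve b's ordering
            for k in range(n):
                if child[k] is None:
                    child[k] = fill.pop(0)
            return child

        print(subtour_crossover([0, 1, 2, 3, 4, 5], [5, 3, 1, 0, 2, 4]))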

  15. Analysis and development of the automated emergency algorithm to control primary to secondary LOCA for SUNPP safety upgrading

    International Nuclear Information System (INIS)

    Kim, V.; Kuznetsov, V.; Balakan, G.; Gromov, G.; Krushynsky, A.; Sholomitsky, S.; Lola, I.

    2007-01-01

    The paper presents the results of a study conducted to support the planned modernization of the South Ukraine nuclear power plant. The objective of the analysis was to develop an automated emergency control algorithm for a primary-to-secondary LOCA accident for SUNPP WWER-1000 safety upgrading. According to the analyses performed in the framework of the safety assessment report, this accident is the most complex to control and makes the largest contribution to the core damage frequency. This is because diagnosis of the initiating event is difficult, emergency control is complicated for personnel, the time available for decision making and action is limited by the coolant inventory available for make-up, and the probability that the steam dump valves on the affected steam generator fail to close after opening is high; as a consequence, containment bypass, irretrievable loss of coolant and release of radioactive materials into the environment are possible. Unit design modifications are directed at expanding the capability of the safety systems to overcome this accident and to facilitate personnel actions in emergency control. Modifying the safety systems according to the developed algorithm will simplify accident control by personnel, make it possible to keep the ECCS discharge-limiting pressure below the opening pressure of the steam dump valve of the affected steam generator, and decrease the probability of containment bypass sequences. The thermal-hydraulic analysis of the primary-to-secondary LOCA was conducted with RELAP5/Mod 3.2 and involved development of a dedicated analytical model, calculation of various plant response scenarios, analysis of personnel interventions using a full-scale simulator, and development and justification of the emergency control algorithm aimed at minimizing the negative consequences of the primary-to-secondary LOCA (Authors)

  16. Automated sleep stage detection with a classical and a neural learning algorithm--methodological aspects.

    Science.gov (United States)

    Schwaibold, M; Schöchlin, J; Bolz, A

    2002-01-01

    For classification tasks in biosignal processing, several strategies and algorithms can be used. Knowledge-based systems allow prior knowledge about the decision process to be integrated, both by the developer and by self-learning capabilities. For the classification stages in a sleep stage detection framework, three inference strategies were compared regarding their specific strengths: a classical signal processing approach, artificial neural networks and neuro-fuzzy systems. Methodological aspects were assessed to attain optimum performance and maximum transparency for the user. Due to their effective and robust learning behavior, artificial neural networks could be recommended for pattern recognition, while neuro-fuzzy systems performed best for the processing of contextual information.

  17. AUTOMATION OF CALCULATION ALGORITHMS FOR EFFICIENCY ESTIMATION OF TRANSPORT INFRASTRUCTURE DEVELOPMENT

    Directory of Open Access Journals (Sweden)

    Sergey Kharitonov

    2015-06-01

    Full Text Available Optimal use of transport infrastructure is an important aspect of the development of the national economy of the Russian Federation. Development of instruments for assessing the efficiency of infrastructure is impossible without constant monitoring of a number of significant indicators. This work is devoted to the selection of such indicators and the method of their calculation for a transport subsystem, namely airport infrastructure. The work also evaluates the potential of algorithmic computational mechanisms to improve the tools of public administration of transport subsystems.

  18. Comparison of algorithms of testing for use in automated evaluation of sensation.

    Science.gov (United States)

    Dyck, P J; Karnes, J L; Gillen, D A; O'Brien, P C; Zimmerman, I R; Johnson, D M

    1990-10-01

    Estimates of the vibratory detection threshold may be used to detect, characterize, and follow the course of sensory abnormality in neurologic disease. The approach is especially useful in epidemiologic studies and controlled clinical trials. We studied which algorithm for testing and finding the threshold should be used in automated systems by comparing algorithms and stimulus conditions for the index finger of healthy subjects and for the great toe of patients with mild neuropathy. Appearance thresholds obtained by linear ramps increasing at a rate of less than 4.15 microns/sec provided accurate and repeatable thresholds compared with thresholds obtained by forced-choice testing. These rates would be acceptable if only sensitive sites were studied, but they were too slow for use in automatic testing of insensitive parts. Appearance thresholds obtained with fast linear rates (4.15 or 16.6 microns/sec) overestimated the threshold, especially for sensitive parts. Use of the mean of appearance and disappearance thresholds, with the stimulus increasing exponentially at rates of 0.5 or 1.0 just-noticeable-difference (JND) units per second and with null stimuli interspersed (Békésy with null stimuli), provided accurate, repeatable, and fast estimates of the threshold for sensitive parts. Despite the good performance of Békésy testing, we prefer forced choice for evaluating the sensation of patients with neuropathy.

  19. Automated Software Acceleration in Programmable Logic for an Efficient NFFT Algorithm Implementation: A Case Study.

    Science.gov (United States)

    Rodríguez, Manuel; Magdaleno, Eduardo; Pérez, Fernando; García, Cristhian

    2017-03-28

    Non-equispaced fast Fourier transform (NFFT) is a very important algorithm in several technological and scientific areas such as synthetic aperture radar, computational photography, medical imaging, telecommunications, seismic analysis and so on. However, its computational complexity is high. In this paper, we describe an efficient NFFT implementation with a hardware coprocessor using an All-Programmable System-on-Chip (APSoC). This is a hybrid device that employs an Advanced RISC Machine (ARM) as the Processing System, with Programmable Logic for high-performance digital signal processing through parallelism and pipelining techniques. The algorithm has been coded in C with pragma directives to optimize the architecture of the system. We have used the very novel Software-Defined System-on-Chip (SDSoC) development tool, which simplifies the interface and partitioning between hardware and software. This provides shorter development cycles and iterative improvements by exploring several architectures of the global system. The computational results show that hardware acceleration significantly outperformed the software-based implementation.
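
    For reference, the transform being accelerated: a direct non-equispaced discrete Fourier transform in NumPy. This O(NM) sum is the naive baseline that NFFT algorithms approximate at roughly O(N log N) cost; the sizes below are illustrative.

        import numpy as np

        def ndft(x, c):
            """Evaluate f(x_j) = sum_k c_k * exp(-2*pi*i*k*x_j) at nonuniform nodes x."""
            n = len(c)
            k = np.arange(-n // 2, n // 2)              # centered frequency grid
            return np.exp(-2j * np.pi * np.outer(x, k)) @ c

        x = np.sort(np.random.rand(64) - 0.5)           # nodes in [-1/2, 1/2)
        c = np.random.rand(32) + 1j * np.random.rand(32)
        f = ndft(x, c)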

  20. The automated reference toolset: A soil-geomorphic ecological potential matching algorithm

    Science.gov (United States)

    Nauman, Travis; Duniway, Michael C.

    2016-01-01

    Ecological inventory and monitoring data need referential context for interpretation. Identification of appropriate reference areas of similar ecological potential for site comparison is demonstrated using a newly developed automated reference toolset (ART). Foundational to the identification of reference areas was a soil map of particle size in the control section (PSCS), a theme in US Soil Taxonomy. A 30-m resolution PSCS map of the Colorado Plateau (366,000 km²) was created by interpolating ∼5000 field soil observations using a random forest model and a suite of raster environmental spatial layers representing topography, climate, general ecological community, and satellite imagery ratios. The PSCS map had an overall out-of-bag accuracy of 61.8% (Kappa of 0.54, p < 0.0001) and an independent validation accuracy of 93.2% at a set of 356 field plots along the southern edge of Canyonlands National Park, Utah. The ART process was also tested at these plots, and matched plots with the same ecological sites (ESs) 67% of the time where sites fell within 2-km buffers of each other. These results show that the PSCS and ART have strong application for ecological monitoring and sampling design, as well as for assessing impacts of disturbance and land management action using an ecological potential framework. The results also demonstrate that PSCS could be a key mapping layer for the USDA-NRCS provisional ES development initiative.
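
    A minimal sketch of the interpolation step: a random forest trained on field observations with raster covariates, then applied pixel-wise to produce the map. The array shapes and hyperparameters are illustrative, not the study's configuration.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        def map_pscs(train_covariates, train_labels, raster_stack):
            """raster_stack: (n_layers, height, width) array of covariate rasters."""
            model = RandomForestClassifier(n_estimators=500, oob_score=True)
            model.fit(train_covariates, train_labels)
            n_layers, h, w = raster_stack.shape
            pixels = raster_stack.reshape(n_layers, -1).T   # (n_pixels, n_layers)
            predicted = model.predict(pixels).reshape(h, w)
            return predicted, model.oob_score_              # map plus out-of-bag accuracy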

  1. SU-E-T-427: Feasibility Study for Evaluation of IMRT Dose Distribution Using Geant4-Based Automated Algorithms

    International Nuclear Information System (INIS)

    Choi, H; Shin, W; Testa, M; Min, C; Kim, J

    2015-01-01

    Purpose: For intensity-modulated radiation therapy (IMRT) treatment planning validation using Monte Carlo (MC) simulations, a precise and automated procedure is necessary to evaluate the patient dose distribution. The aim of this study is to develop an automated algorithm for IMRT simulations using DICOM files and to evaluate the patient dose based on 4D simulation using the Geant4 MC toolkit. Methods: The head of a clinical linac (Varian Clinac 2300 IX) was modeled in Geant4 along with particular components such as the flattening filter and the multi-leaf collimator (MLC). Patient information and the position of the MLC were imported from the DICOM-RT interface. For each position of the MLC, a step-and-shoot technique was adopted. PDDs and lateral profiles were simulated in a water phantom (50×50×40 cm³) and compared to measurement data. We used a lung phantom, and MC dose calculations were compared to the clinical treatment plan used at Seoul National University Hospital. Results: In order to reproduce the measurement data, we tuned three free parameters: the mean and standard deviation of the primary electron beam energy and the beam spot size. For 6 MV, these parameters were found to be 5.6 MeV, 0.2378 MeV and 1 mm FWHM, respectively. The average dose difference between measurements and simulations was less than 2% for PDDs and radial profiles. The lung phantom study showed fairly good agreement between the MC and planning doses despite some unavoidable statistical fluctuation. Conclusion: The current feasibility study using the lung phantom shows the potential for IMRT dose validation using 4D MC simulations with the Geant4 toolkit. This research was supported by the Korea Institute of Nuclear Safety and by Development of Measurement Standards for Medical Radiation, funded by the Korea Research Institute of Standards and Science (KRISS-2015-15011032).

  2. Development of a Genetic Algorithm to Automate Clustering of a Dependency Structure Matrix

    Science.gov (United States)

    Rogers, James L.; Korte, John J.; Bilardo, Vincent J.

    2006-01-01

    Much technology assessment and organization design data exists in Microsoft Excel spreadsheets. Tools are needed to put this data into a form that can be used by design managers to make design decisions. One need is to cluster data that is highly coupled. Tools such as the Dependency Structure Matrix (DSM) and a Genetic Algorithm (GA) can be of great benefit. However, no tool currently combines the DSM and a GA to solve the clustering problem. This paper describes a new software tool that interfaces a GA written as an Excel macro with a DSM in spreadsheet format. The results of several test cases are included to demonstrate how well this new tool works.

  3. A computer-based automated algorithm for assessing acinar cell loss after experimental pancreatitis.

    Directory of Open Access Journals (Sweden)

    John F Eisses

    Full Text Available The change in exocrine mass is an important parameter to follow in experimental models of pancreatic injury and regeneration. However, at present, the quantitative assessment of exocrine content by histology is tedious and operator-dependent, requiring manual assessment of acinar area on serial pancreatic sections. In this study, we utilized a novel computer-based learning algorithm to construct an accurate and rapid method of quantifying acinar content. The algorithm works by learning differences in pixel characteristics from input examples provided by human experts. H&E-stained pancreatic sections were obtained from mice recovering from a 2-day, hourly caerulein hyperstimulation model of experimental pancreatitis. For training data, a pathologist carefully outlined discrete regions of acinar and non-acinar tissue in 21 sections at various stages of pancreatic injury and recovery (termed the "ground truth"). After the expert defined the ground truth, the computer was able to develop a prediction rule that was then applied to a unique set of high-resolution images in order to validate the process. For baseline, non-injured pancreatic sections, the software demonstrated close agreement with the ground truth in identifying baseline acinar tissue area, with a difference of only 1% ± 0.05% (p = 0.21). Within regions of injured tissue, the software reported a difference of 2.5% ± 0.04% in acinar area compared with the pathologist (p = 0.47). Surprisingly, on detailed morphological examination, the discrepancy arose primarily because the software outlined acini and excluded inter-acinar and luminal white space with greater precision. The findings suggest that the software will be of great potential benefit to both clinicians and researchers in quantifying pancreatic acinar cell flux in the injured and recovering pancreas.

  4. Automated cross-identifying radio to infrared surveys using the LRPY algorithm: a case study

    Science.gov (United States)

    Weston, S. D.; Seymour, N.; Gulyaev, S.; Norris, R. P.; Banfield, J.; Vaccari, M.; Hopkins, A. M.; Franzen, T. M. O.

    2018-02-01

    Cross-identifying complex radio sources with optical or infrared (IR) counterparts in surveys such as the Australia Telescope Large Area Survey (ATLAS) has traditionally been performed manually. However, with new surveys from the Australian Square Kilometre Array Pathfinder detecting many tens of millions of radio sources, such an approach is no longer feasible. This paper presents new software (LRPY - Likelihood Ratio in PYTHON) to automate the process of cross-identifying radio sources with catalogues at other wavelengths. LRPY implements the likelihood ratio (LR) technique with a modification to account for two galaxies contributing to a sole measured radio component. We demonstrate LRPY by applying it to ATLAS DR3 and a Spitzer-based multiwavelength fusion catalogue, identifying 3848 matched sources via our LR-based selection criteria. A subset of 1987 sources has flux density values in all IRAC bands, which allows us to use criteria to distinguish between active galactic nuclei (AGNs) and star-forming galaxies (SFGs). We find that 936 radio sources (≈47 per cent) meet both the Lacy and Stern AGN selection criteria. Of the matched sources, 295 have spectroscopic redshifts, and we examine the radio-to-IR flux ratio versus redshift, proposing an AGN selection criterion below the Elvis radio-loud AGN limit for this dataset. Taking the union of all three AGN selection criteria, we identify 956 sources as AGNs (≈48 per cent). From this dataset, we find a decreasing fraction of AGNs at lower radio flux densities, consistent with other results in the literature.
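
    The likelihood ratio statistic that LRPY implements has the standard cross-matching form LR = q(m) f(r) / n(m). A minimal sketch with a Gaussian positional-error model follows; the magnitude distributions q and n are supplied as callables, an assumption here, since their estimation is survey-specific.

        import numpy as np

        def likelihood_ratio(r_arcsec, sigma_arcsec, m, q, n):
            """r: radio/IR positional offset; sigma: positional error; m: magnitude.

            q(m): magnitude distribution of true counterparts
            n(m): surface density of background sources at magnitude m
            """
            # Gaussian probability of the measured offset for a true counterpart.
            f = np.exp(-r_arcsec**2 / (2 * sigma_arcsec**2)) / (2 * np.pi * sigma_arcsec**2)
            return q(m) * f / n(m)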

  5. Automated Radiology-Pathology Module Correlation Using a Novel Report Matching Algorithm by Organ System.

    Science.gov (United States)

    Dane, Bari; Doshi, Ankur; Gfytopoulos, Soterios; Bhattacharji, Priya; Recht, Michael; Moore, William

    2018-05-01

    Radiology-pathology correlation is time-consuming and is not feasible in most clinical settings, with the notable exception of breast imaging. The purpose of this study was to determine if an automated radiology-pathology report pairing system could accurately match radiology and pathology reports, thus creating a feedback loop allowing for more frequent and timely radiology-pathology correlation. An experienced radiologist created a matching matrix of radiology and pathology reports. These matching rules were then exported to a novel comprehensive radiology-pathology module. All distinct radiology-pathology pairings at our institution from January 1, 2016 to July 1, 2016 were included (n = 8999). The appropriateness of each radiology-pathology report pairing was scored as either "correlative" or "non-correlative." Pathology reports relating to anatomy imaged in the specific imaging study were deemed correlative, whereas pathology reports describing anatomy not imaged with the particular study were denoted non-correlative. Overall, there was 88.3% correlation (accuracy) of the radiology and pathology reports (n = 8999). Subset analysis demonstrated that computed tomography (CT) abdomen/pelvis, CT head/neck/face, CT chest, musculoskeletal CT (excluding spine), mammography, magnetic resonance imaging (MRI) abdomen/pelvis, MRI brain, musculoskeletal MRI (excluding spine), breast MRI, positron emission tomography (PET), breast ultrasound, and head/neck ultrasound all demonstrated greater than 91% correlation. When further stratified by imaging modality, CT, MRI, mammography, and PET demonstrated excellent correlation (greater than 96.3%). Ultrasound and non-PET nuclear medicine studies demonstrated poorer correlation (80%). There is excellent correlation of radiology imaging reports and appropriate pathology reports when matched by organ system. Rapid, appropriate radiology-pathology report pairings provide an excellent opportunity to close the feedback loop to the

  6. Man-machine supervision; Supervision homme-machine

    Energy Technology Data Exchange (ETDEWEB)

    Montmain, J. [CEA Valrho, Dir. de l' Energie Nucleaire (DEN), 30 - Marcoule (France)

    2005-05-01

    Today's complexity of systems where man is involved has led to the development of more and more sophisticated information processing systems, where decision making has become more and more difficult. The operator's task has moved from operation to supervision, and the production tool has become inseparable from its numerical instrumentation and control system. The integration of ever more numerous and sophisticated control indicators in the control room does not necessarily fulfill the expectations of the operating team. It is preferable to develop cooperative information systems which serve as genuine aids to situation understanding. The stake is not the automation of operators' cognitive tasks but the supply of help with reasoning. One of the challenges of interactive information systems is the selection, organisation and dynamic display of information. The efficiency of the whole man-machine system depends on the efficiency of the communication interface. This article presents the principles and specificities of man-machine supervision systems: 1 - principle: the operator's role in the control room, operator and automation, monitoring and diagnosis, characteristics of useful models for supervision; 2 - qualitative reasoning: origin, trends, evolutions; 3 - causal reasoning: causality, causal graph representation, causal and diagnostic graphs; 4 - multi-point-of-view reasoning: the multi-flow modeling method, the Sagace method; 5 - approximate reasoning: the symbolic-numerical interface, multi-criteria decision making; 6 - example of application: supervision in a spent-fuel reprocessing facility. (J.S.)

  7. Algorithms

    Indian Academy of Sciences (India)

    ticians but also forms the foundation of computer science. Two ... with methods of developing algorithms for solving a variety of problems but ... applications of computers in science and engineer- ... numerical calculus are as important. We will ...

  8. An Automated Algorithm for Producing Land Cover Information from Landsat Surface Reflectance Data Acquired Between 1984 and Present

    Science.gov (United States)

    Rover, J.; Goldhaber, M. B.; Holen, C.; Dittmeier, R.; Wika, S.; Steinwand, D.; Dahal, D.; Tolk, B.; Quenzer, R.; Nelson, K.; Wylie, B. K.; Coan, M.

    2015-12-01

    Multi-year land cover mapping from remotely sensed data poses challenges. Producing land cover products at the spatial and temporal scales required for assessing longer-term trends in land cover change is typically a resource-limited process. A recently developed approach utilizes open-source software libraries to automatically generate datasets, decision tree classifications, and data products while requiring minimal user interaction. Users are only required to supply coordinates for an area of interest, land cover from an existing source such as the National Land Cover Database, percent slope from a digital terrain model for the same area of interest, two target acquisition year-day windows, and the years of interest between 1984 and the present. The algorithm queries the Landsat archive for Landsat data intersecting the area and dates of interest. Cloud-free pixels meeting the user's criteria are mosaicked to create composite images for training and applying the classifiers. Stratification of training data is determined by the user and redefined during an iterative process of reviewing the classifiers and the resulting predictions. The algorithm outputs include yearly land cover data in raster format, graphics, and supporting databases for further analysis. Additional analytical tools are also incorporated into the automated land cover system and enable statistical analysis after the data are generated. Applications tested include the impacts of land cover change and water permanence. For example, land cover conversions in areas where shrubland and grassland were replaced by shale oil pads during hydrofracking of the Bakken Formation were quantified. Analysis of spatial and temporal changes in surface water included identifying wetlands in the Prairie Pothole Region of North Dakota with potential connectivity to ground water, indicating subsurface permeability and geochemistry.
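
    A minimal sketch of the compositing and classification steps, assuming single-band 2-D scenes and scikit-learn; the function names and classifier settings are illustrative, not the production system's code:

        import numpy as np
        from sklearn.tree import DecisionTreeClassifier

        def composite(scenes, cloud_masks):
            """Keep the first cloud-free observation at each pixel from a
            stack of scenes, mimicking the mosaicking step."""
            comp = np.full(scenes[0].shape, np.nan)
            for scene, clouds in zip(scenes, cloud_masks):
                fill = np.isnan(comp) & ~clouds   # still empty and cloud-free here
                comp[fill] = scene[fill]
            return comp

        # X: (n_pixels, n_features) of composited bands plus percent slope;
        # y: reference class codes from an existing source such as NLCD.
        clf = DecisionTreeClassifier(max_depth=12)   # depth chosen arbitrarily
        # clf.fit(X_train, y_train)
        # yearly_map = clf.predict(X_year).reshape(rows, cols)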

  9. Automated Means of Identifying Landslide Deposits using LiDAR Data using the Contour Connection Method Algorithm

    Science.gov (United States)

    Olsen, M. J.; Leshchinsky, B. A.; Tanyu, B. F.

    2014-12-01

    Landslides are a global natural hazard, resulting in severe economic, environmental and social impacts every year. Often, landslides occur in areas of repeated slope instability, but despite these trends, significant residential developments and critical infrastructure are built in the shadow of past landslide deposits and marginally stable slopes. These hazards, despite their sometimes enormous scale and regional propensity, are difficult to detect on the ground, often due to vegetative cover. However, new developments in remote sensing technology, specifically Light Detection and Ranging (LiDAR) mapping, are providing a new means of viewing our landscape. Airborne LiDAR, combined with a level of post-processing, enables the creation of spatial data representative of the earth beneath the vegetation, highlighting the scars of unstable slopes of the past. This tool presents a revolutionary technique for mapping landslide deposits and their associated regions of risk; yet their inventorying is often done manually, an approach that can be tedious, time-consuming and subjective. However, the associated LiDAR bare-earth data present the opportunity to use this remote sensing technology and typical landslide geometry to create an automated algorithm that can detect and inventory deposits on a landscape scale. This algorithm, called the Contour Connection Method (CCM), functions by first detecting steep gradients, often associated with the headscarp of a failed hillslope, and then initiating a search that highlights deposits downslope of the failure. Based on the input search gradients, CCM can assist in highlighting regions identified as landslides consistently on a landscape scale, and it is capable of rapidly mapping more than 14,000 hectares, helping to better define these regions of risk.
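
    A toy version of the CCM idea, assuming a gridded DEM: seed at steep, headscarp-like cells, then grow downslope while the local gradient stays above a deposit threshold. The thresholds here are invented, and the published method connects contours rather than raster cells:

        import numpy as np
        from collections import deque

        def contour_connection(dem, cell_size, scarp_grad=0.6, deposit_grad=0.1):
            """Flag landslide-like cells by flooding downslope from steep seeds."""
            gy, gx = np.gradient(dem, cell_size)
            slope = np.hypot(gx, gy)
            flagged = slope >= scarp_grad                 # candidate headscarps
            queue = deque(zip(*np.nonzero(flagged)))
            while queue:
                r, c = queue.popleft()
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    rr, cc = r + dr, c + dc
                    if (0 <= rr < dem.shape[0] and 0 <= cc < dem.shape[1]
                            and not flagged[rr, cc]
                            and dem[rr, cc] <= dem[r, c]  # move only downslope
                            and slope[rr, cc] >= deposit_grad):
                        flagged[rr, cc] = True
                        queue.append((rr, cc))
            return flagged   # boolean mask of mapped failure/deposit cells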

  10. The Pandora multi-algorithm approach to automated pattern recognition of cosmic-ray muon and neutrino events in the MicroBooNE detector

    Energy Technology Data Exchange (ETDEWEB)

    Acciarri, R.; Bagby, L.; Baller, B.; Carls, B.; Castillo Fernandez, R.; Cavanna, F.; Greenlee, H.; James, C.; Jostlein, H.; Ketchum, W.; Kirby, M.; Kobilarcik, T.; Lockwitz, S.; Lundberg, B.; Marchionni, A.; Moore, C.D.; Palamara, O.; Pavlovic, Z.; Raaf, J.L.; Schukraft, A.; Snider, E.L.; Spentzouris, P.; Strauss, T.; Toups, M.; Wolbers, S.; Yang, T.; Zeller, G.P. [Fermi National Accelerator Laboratory (FNAL), Batavia, IL (United States); Adams, C. [Harvard University, Cambridge, MA (United States); Yale University, New Haven, CT (United States); An, R.; Littlejohn, B.R.; Martinez Caicedo, D.A. [Illinois Institute of Technology (IIT), Chicago, IL (United States); Anthony, J.; Escudero Sanchez, L.; De Vries, J.J.; Marshall, J.; Smith, A.; Thomson, M. [University of Cambridge, Cambridge (United Kingdom); Asaadi, J. [University of Texas, Arlington, TX (United States); Auger, M.; Ereditato, A.; Goeldi, D.; Kreslo, I.; Lorca, D.; Luethi, M.; Rudolf von Rohr, C.; Sinclair, J.; Weber, M. [Universitaet Bern, Bern (Switzerland); Balasubramanian, S.; Fleming, B.T.; Gramellini, E.; Hackenburg, A.; Luo, X.; Russell, B.; Tufanli, S. [Yale University, New Haven, CT (United States); Barnes, C.; Mousseau, J.; Spitz, J. [University of Michigan, Ann Arbor, MI (United States); Barr, G.; Bass, M.; Del Tutto, M.; Laube, A.; Soleti, S.R.; De Pontseele, W.V. [University of Oxford, Oxford (United Kingdom); Bay, F. [TUBITAK Space Technologies Research Institute, Ankara (Turkey); Bishai, M.; Chen, H.; Joshi, J.; Kirby, B.; Li, Y.; Mooney, M.; Qian, X.; Viren, B.; Zhang, C. [Brookhaven National Laboratory (BNL), Upton, NY (United States); Blake, A.; Devitt, D.; Lister, A.; Nowak, J. [Lancaster University, Lancaster (United Kingdom); Bolton, T.; Horton-Smith, G.; Meddage, V.; Rafique, A. [Kansas State University (KSU), Manhattan, KS (United States); Camilleri, L.; Caratelli, D.; Crespo-Anadon, J.I.; Fadeeva, A.A.; Genty, V.; Kaleko, D.; Seligman, W.; Shaevitz, M.H. [Columbia University, New York, NY (United States); Church, E. [Pacific Northwest National Laboratory (PNNL), Richland, WA (United States); Cianci, D.; Karagiorgi, G. [Columbia University, New York, NY (United States); The University of Manchester (United Kingdom); Cohen, E.; Piasetzky, E. [Tel Aviv University, Tel Aviv (Israel); Collin, G.H.; Conrad, J.M.; Hen, O.; Hourlier, A.; Moon, J.; Wongjirad, T.; Yates, L. [Massachusetts Institute of Technology (MIT), Cambridge, MA (United States); Convery, M.; Eberly, B.; Rochester, L.; Tsai, Y.T.; Usher, T. [SLAC National Accelerator Laboratory, Menlo Park, CA (United States); Dytman, S.; Graf, N.; Jiang, L.; Naples, D.; Paolone, V.; Wickremasinghe, D.A. [University of Pittsburgh, Pittsburgh, PA (United States); Esquivel, J.; Hamilton, P.; Pulliam, G.; Soderberg, M. [Syracuse University, Syracuse, NY (United States); Foreman, W.; Ho, J.; Schmitz, D.W.; Zennamo, J. [University of Chicago, IL (United States); Furmanski, A.P.; Garcia-Gamez, D.; Hewes, J.; Hill, C.; Murrells, R.; Porzio, D.; Soeldner-Rembold, S.; Szelc, A.M. [The University of Manchester (United Kingdom); Garvey, G.T.; Huang, E.C.; Louis, W.C.; Mills, G.B.; De Water, R.G.V. [Los Alamos National Laboratory (LANL), Los Alamos, NM (United States); Gollapinni, S. [Kansas State University (KSU), Manhattan, KS (United States); University of Tennessee, Knoxville, TN (United States); and others

    2018-01-15

    The development and operation of liquid-argon time-projection chambers for neutrino physics has created a need for new approaches to pattern recognition in order to fully exploit the imaging capabilities offered by this technology. Whereas the human brain can excel at identifying features in the recorded events, it is a significant challenge to develop an automated, algorithmic solution. The Pandora Software Development Kit provides functionality to aid the design and implementation of pattern-recognition algorithms. It promotes the use of a multi-algorithm approach to pattern recognition, in which individual algorithms each address a specific task in a particular topology. Many tens of algorithms then carefully build up a picture of the event and, together, provide a robust automated pattern-recognition solution. This paper describes details of the chain of over one hundred Pandora algorithms and tools used to reconstruct cosmic-ray muon and neutrino events in the MicroBooNE detector. Metrics that assess the current pattern-recognition performance are presented for simulated MicroBooNE events, using a selection of final-state event topologies. (orig.)

  11. The Pandora multi-algorithm approach to automated pattern recognition of cosmic-ray muon and neutrino events in the MicroBooNE detector

    CERN Document Server

    Acciarri, R.; An, R.; Anthony, J.; Asaadi, J.; Auger, M.; Bagby, L.; Balasubramanian, S.; Baller, B.; Barnes, C.; Barr, G.; Bass, M.; Bay, F.; Bishai, M.; Blake, A.; Bolton, T.; Camilleri, L.; Caratelli, D.; Carls, B.; Castillo Fernandez, R.; Cavanna, F.; Chen, H.; Church, E.; Cianci, D.; Cohen, E.; Collin, G. H.; Conrad, J. M.; Convery, M.; Crespo-Anadón, J. I.; Del Tutto, M.; Devitt, D.; Dytman, S.; Eberly, B.; Ereditato, A.; Escudero Sanchez, L.; Esquivel, J.; Fadeeva, A. A.; Fleming, B. T.; Foreman, W.; Furmanski, A. P.; Garcia-Gamez, D.; Garvey, G. T.; Genty, V.; Goeldi, D.; Gollapinni, S.; Graf, N.; Gramellini, E.; Greenlee, H.; Grosso, R.; Guenette, R.; Hackenburg, A.; Hamilton, P.; Hen, O.; Hewes, J.; Hill, C.; Ho, J.; Horton-Smith, G.; Hourlier, A.; Huang, E.-C.; James, C.; Jan de Vries, J.; Jen, C.-M.; Jiang, L.; Johnson, R. A.; Joshi, J.; Jostlein, H.; Kaleko, D.; Karagiorgi, G.; Ketchum, W.; Kirby, B.; Kirby, M.; Kobilarcik, T.; Kreslo, I.; Laube, A.; Li, Y.; Lister, A.; Littlejohn, B. R.; Lockwitz, S.; Lorca, D.; Louis, W. C.; Luethi, M.; Lundberg, B.; Luo, X.; Marchionni, A.; Mariani, C.; Marshall, J.; Martinez Caicedo, D. A.; Meddage, V.; Miceli, T.; Mills, G. B.; Moon, J.; Mooney, M.; Moore, C. D.; Mousseau, J.; Murrells, R.; Naples, D.; Nienaber, P.; Nowak, J.; Palamara, O.; Paolone, V.; Papavassiliou, V.; Pate, S. F.; Pavlovic, Z.; Piasetzky, E.; Porzio, D.; Pulliam, G.; Qian, X.; Raaf, J. L.; Rafique, A.; Rochester, L.; Rudolf von Rohr, C.; Russell, B.; Schmitz, D. W.; Schukraft, A.; Seligman, W.; Shaevitz, M. H.; Sinclair, J.; Smith, A.; Snider, E. L.; Soderberg, M.; Söldner-Rembold, S.; Soleti, S. R.; Spentzouris, P.; Spitz, J.; St. John, J.; Strauss, T.; Szelc, A. M.; Tagg, N.; Terao, K.; Thomson, M.; Toups, M.; Tsai, Y.-T.; Tufanli, S.; Usher, T.; Van De Pontseele, W.; Van de Water, R. G.; Viren, B.; Weber, M.; Wickremasinghe, D. A.; Wolbers, S.; Wongjirad, T.; Woodruff, K.; Yang, T.; Yates, L.; Zeller, G. P.; Zennamo, J.; Zhang, C.

    2017-01-01

    The development and operation of Liquid-Argon Time-Projection Chambers for neutrino physics has created a need for new approaches to pattern recognition in order to fully exploit the imaging capabilities offered by this technology. Whereas the human brain can excel at identifying features in the recorded events, it is a significant challenge to develop an automated, algorithmic solution. The Pandora Software Development Kit provides functionality to aid the design and implementation of pattern-recognition algorithms. It promotes the use of a multi-algorithm approach to pattern recognition, in which individual algorithms each address a specific task in a particular topology. Many tens of algorithms then carefully build up a picture of the event and, together, provide a robust automated pattern-recognition solution. This paper describes details of the chain of over one hundred Pandora algorithms and tools used to reconstruct cosmic-ray muon and neutrino events in the MicroBooNE detector. Metrics that assess the...

  12. The Pandora multi-algorithm approach to automated pattern recognition of cosmic-ray muon and neutrino events in the MicroBooNE detector

    Science.gov (United States)

    Acciarri, R.; Adams, C.; An, R.; Anthony, J.; Asaadi, J.; Auger, M.; Bagby, L.; Balasubramanian, S.; Baller, B.; Barnes, C.; Barr, G.; Bass, M.; Bay, F.; Bishai, M.; Blake, A.; Bolton, T.; Camilleri, L.; Caratelli, D.; Carls, B.; Castillo Fernandez, R.; Cavanna, F.; Chen, H.; Church, E.; Cianci, D.; Cohen, E.; Collin, G. H.; Conrad, J. M.; Convery, M.; Crespo-Anadón, J. I.; Del Tutto, M.; Devitt, D.; Dytman, S.; Eberly, B.; Ereditato, A.; Escudero Sanchez, L.; Esquivel, J.; Fadeeva, A. A.; Fleming, B. T.; Foreman, W.; Furmanski, A. P.; Garcia-Gamez, D.; Garvey, G. T.; Genty, V.; Goeldi, D.; Gollapinni, S.; Graf, N.; Gramellini, E.; Greenlee, H.; Grosso, R.; Guenette, R.; Hackenburg, A.; Hamilton, P.; Hen, O.; Hewes, J.; Hill, C.; Ho, J.; Horton-Smith, G.; Hourlier, A.; Huang, E.-C.; James, C.; Jan de Vries, J.; Jen, C.-M.; Jiang, L.; Johnson, R. A.; Joshi, J.; Jostlein, H.; Kaleko, D.; Karagiorgi, G.; Ketchum, W.; Kirby, B.; Kirby, M.; Kobilarcik, T.; Kreslo, I.; Laube, A.; Li, Y.; Lister, A.; Littlejohn, B. R.; Lockwitz, S.; Lorca, D.; Louis, W. C.; Luethi, M.; Lundberg, B.; Luo, X.; Marchionni, A.; Mariani, C.; Marshall, J.; Martinez Caicedo, D. A.; Meddage, V.; Miceli, T.; Mills, G. B.; Moon, J.; Mooney, M.; Moore, C. D.; Mousseau, J.; Murrells, R.; Naples, D.; Nienaber, P.; Nowak, J.; Palamara, O.; Paolone, V.; Papavassiliou, V.; Pate, S. F.; Pavlovic, Z.; Piasetzky, E.; Porzio, D.; Pulliam, G.; Qian, X.; Raaf, J. L.; Rafique, A.; Rochester, L.; Rudolf von Rohr, C.; Russell, B.; Schmitz, D. W.; Schukraft, A.; Seligman, W.; Shaevitz, M. H.; Sinclair, J.; Smith, A.; Snider, E. L.; Soderberg, M.; Söldner-Rembold, S.; Soleti, S. R.; Spentzouris, P.; Spitz, J.; St. John, J.; Strauss, T.; Szelc, A. M.; Tagg, N.; Terao, K.; Thomson, M.; Toups, M.; Tsai, Y.-T.; Tufanli, S.; Usher, T.; Van De Pontseele, W.; Van de Water, R. G.; Viren, B.; Weber, M.; Wickremasinghe, D. A.; Wolbers, S.; Wongjirad, T.; Woodruff, K.; Yang, T.; Yates, L.; Zeller, G. P.; Zennamo, J.; Zhang, C.

    2018-01-01

    The development and operation of liquid-argon time-projection chambers for neutrino physics has created a need for new approaches to pattern recognition in order to fully exploit the imaging capabilities offered by this technology. Whereas the human brain can excel at identifying features in the recorded events, it is a significant challenge to develop an automated, algorithmic solution. The Pandora Software Development Kit provides functionality to aid the design and implementation of pattern-recognition algorithms. It promotes the use of a multi-algorithm approach to pattern recognition, in which individual algorithms each address a specific task in a particular topology. Many tens of algorithms then carefully build up a picture of the event and, together, provide a robust automated pattern-recognition solution. This paper describes details of the chain of over one hundred Pandora algorithms and tools used to reconstruct cosmic-ray muon and neutrino events in the MicroBooNE detector. Metrics that assess the current pattern-recognition performance are presented for simulated MicroBooNE events, using a selection of final-state event topologies.

  13. Weakly supervised classification in high energy physics

    International Nuclear Information System (INIS)

    Dery, Lucio Mwinmaarong; Nachman, Benjamin; Rubbo, Francesco; Schwartzman, Ariel

    2017-01-01

    As machine learning algorithms become increasingly sophisticated to exploit subtle features of the data, they often become more dependent on simulations. This paper presents a new approach called weakly supervised classification in which class proportions are the only input into the machine learning algorithm. Using one of the most challenging binary classification tasks in high energy physics — quark versus gluon tagging — we show that weakly supervised classification can match the performance of fully supervised algorithms. Furthermore, by design, the new algorithm is insensitive to any mis-modeling of discriminating features in the data by the simulation. Weakly supervised classification is a general procedure that can be applied to a wide variety of learning problems to boost performance and robustness when detailed simulations are not reliable or not available.
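
    The core trick, learning from batch-level class proportions alone, can be illustrated with a toy numpy sketch: a linear model is trained so that each batch's mean predicted probability matches its known proportion. This is an illustration of the idea, not the authors' implementation:

        import numpy as np

        def train_from_proportions(batches, proportions, n_features,
                                   lr=0.1, epochs=500):
            """batches: list of (n_i, n_features) arrays; proportions: list of
            known class fractions, one per batch. No per-example labels."""
            rng = np.random.default_rng(0)
            w, b = rng.normal(scale=0.01, size=n_features), 0.0
            for _ in range(epochs):
                for X, p in zip(batches, proportions):
                    s = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # per-example scores
                    err = s.mean() - p                       # proportion mismatch
                    g = s * (1.0 - s)                        # sigmoid derivative
                    w -= lr * 2 * err * (g[:, None] * X).mean(axis=0)
                    b -= lr * 2 * err * g.mean()
            return w, b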

  14. Weakly supervised classification in high energy physics

    Energy Technology Data Exchange (ETDEWEB)

    Dery, Lucio Mwinmaarong [Physics Department, Stanford University,Stanford, CA, 94305 (United States); Nachman, Benjamin [Physics Division, Lawrence Berkeley National Laboratory,1 Cyclotron Rd, Berkeley, CA, 94720 (United States); Rubbo, Francesco; Schwartzman, Ariel [SLAC National Accelerator Laboratory, Stanford University,2575 Sand Hill Rd, Menlo Park, CA, 94025 (United States)

    2017-05-29

    As machine learning algorithms become increasingly sophisticated to exploit subtle features of the data, they often become more dependent on simulations. This paper presents a new approach called weakly supervised classification in which class proportions are the only input into the machine learning algorithm. Using one of the most challenging binary classification tasks in high energy physics — quark versus gluon tagging — we show that weakly supervised classification can match the performance of fully supervised algorithms. Furthermore, by design, the new algorithm is insensitive to any mis-modeling of discriminating features in the data by the simulation. Weakly supervised classification is a general procedure that can be applied to a wide variety of learning problems to boost performance and robustness when detailed simulations are not reliable or not available.

  15. SPEQTACLE: An automated generalized fuzzy C-means algorithm for tumor delineation in PET

    International Nuclear Information System (INIS)

    Lapuyade-Lahorgue, Jérôme; Visvikis, Dimitris; Hatt, Mathieu; Pradier, Olivier; Cheze Le Rest, Catherine

    2015-01-01

    Purpose: Accurate tumor delineation in positron emission tomography (PET) images is crucial in oncology. Although recent methods achieved good results, there is still room for improvement regarding tumors with complex shapes, low signal-to-noise ratio, and high levels of uptake heterogeneity. Methods: The authors developed and evaluated an original clustering-based method called spatial positron emission quantification of tumor—Automatic Lp-norm estimation (SPEQTACLE), based on the fuzzy C-means (FCM) algorithm with a generalization exploiting a Hilbertian norm to more accurately account for the fuzzy and non-Gaussian distributions of PET images. An automatic and reproducible estimation scheme of the norm on an image-by-image basis was developed. Robustness was assessed by studying the consistency of results obtained on multiple acquisitions of the NEMA phantom on three different scanners with varying acquisition parameters. Accuracy was evaluated using classification errors (CEs) on simulated and clinical images. SPEQTACLE was compared to another FCM implementation, fuzzy local information C-means (FLICM) and fuzzy locally adaptive Bayesian (FLAB). Results: SPEQTACLE demonstrated a level of robustness similar to FLAB (variability of 14% ± 9% vs 14% ± 7%, p = 0.15) and higher than FLICM (45% ± 18%, p < 0.0001), and improved accuracy with lower CE (14% ± 11%) over both FLICM (29% ± 29%) and FLAB (22% ± 20%) on simulated images. Improvement was significant for the more challenging cases with CE of 17% ± 11% for SPEQTACLE vs 28% ± 22% for FLAB (p = 0.009) and 40% ± 35% for FLICM (p < 0.0001). For the clinical cases, SPEQTACLE outperformed FLAB and FLICM (15% ± 6% vs 37% ± 14% and 30% ± 17%, p < 0.004). Conclusions: SPEQTACLE benefitted from the fully automatic estimation of the norm on a case-by-case basis. This promising approach will be extended to multimodal images and multiclass estimation in future developments
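
    For orientation, a basic fuzzy C-means iteration with a Minkowski exponent p standing in for the paper's generalized norm; SPEQTACLE additionally estimates the norm automatically image by image, which this sketch does not attempt:

        import numpy as np

        def fcm(X, k, m=2.0, p=2.0, iters=100, seed=0):
            """Fuzzy C-means on feature vectors X (n, d); returns memberships
            U (n, k) and cluster centers (k, d)."""
            rng = np.random.default_rng(seed)
            U = rng.dirichlet(np.ones(k), size=len(X))
            for _ in range(iters):
                W = U ** m
                centers = (W.T @ X) / W.sum(axis=0)[:, None]
                d = np.abs(X[:, None, :] - centers[None, :, :]) ** p
                d = d.sum(axis=2) ** (1.0 / p) + 1e-12   # Minkowski distances
                inv = d ** (-2.0 / (m - 1.0))
                U = inv / inv.sum(axis=1, keepdims=True)
            return U, centers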

  16. Automated detection and classification of the proximal humerus fracture by using deep learning algorithm.

    Science.gov (United States)

    Chung, Seok Won; Han, Seung Seog; Lee, Ji Whan; Oh, Kyung-Soo; Kim, Na Ra; Yoon, Jong Pil; Kim, Joon Yub; Moon, Sung Hoon; Kwon, Jieun; Lee, Hyo-Jin; Noh, Young-Min; Kim, Youngjun

    2018-03-26

    Background and purpose - We aimed to evaluate the ability of artificial intelligence (a deep learning algorithm) to detect and classify proximal humerus fractures using plain anteroposterior shoulder radiographs. Patients and methods - 1,891 images (1 image per person) of normal shoulders (n = 515) and 4 proximal humerus fracture types (greater tuberosity, 346; surgical neck, 514; 3-part, 269; 4-part, 247) classified by 3 specialists were evaluated. We trained a deep convolutional neural network (CNN) after augmentation of a training dataset. The ability of the CNN, as measured by top-1 accuracy, area under receiver operating characteristics curve (AUC), sensitivity/specificity, and Youden index, in comparison with humans (28 general physicians, 11 general orthopedists, and 19 orthopedists specialized in the shoulder) to detect and classify proximal humerus fractures was evaluated. Results - The CNN showed a high performance of 96% top-1 accuracy, 1.00 AUC, 0.99/0.97 sensitivity/specificity, and 0.97 Youden index for distinguishing normal shoulders from proximal humerus fractures. In addition, the CNN showed promising results with 65-86% top-1 accuracy, 0.90-0.98 AUC, 0.88/0.83-0.97/0.94 sensitivity/specificity, and 0.71-0.90 Youden index for classifying fracture type. When compared with the human groups, the CNN showed superior performance to that of general physicians and orthopedists, similar performance to orthopedists specialized in the shoulder, and the superior performance of the CNN was more marked in complex 3- and 4-part fractures. Interpretation - The use of artificial intelligence can accurately detect and classify proximal humerus fractures on plain shoulder AP radiographs. Further studies are necessary to determine the feasibility of applying artificial intelligence in the clinic and whether its use could improve care and outcomes compared with current orthopedic assessments.
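
    A minimal tf.keras sketch of a radiograph classifier in the same spirit; the input size, layer sizes and depth are invented and do not reproduce the authors' network:

        import tensorflow as tf

        model = tf.keras.Sequential([
            tf.keras.layers.Rescaling(1.0 / 255, input_shape=(256, 256, 1)),
            tf.keras.layers.Conv2D(32, 3, activation="relu"),
            tf.keras.layers.MaxPooling2D(),
            tf.keras.layers.Conv2D(64, 3, activation="relu"),
            tf.keras.layers.MaxPooling2D(),
            tf.keras.layers.Flatten(),
            tf.keras.layers.Dense(128, activation="relu"),
            tf.keras.layers.Dense(5, activation="softmax"),  # normal + 4 fracture types
        ])
        model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        # model.fit(...) would then be run on the augmented labeled radiographs.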

  17. FITspec: A New Algorithm for the Automated Fit of Synthetic Stellar Spectra for OB Stars

    Science.gov (United States)

    Fierro-Santillán, Celia R.; Zsargó, Janos; Klapp, Jaime; Díaz-Azuara, Santiago A.; Arrieta, Anabel; Arias, Lorena; Sigalotti, Leonardo Di G.

    2018-06-01

    In this paper we describe the FITspec code, a data mining tool for the automatic fitting of synthetic stellar spectra. The program uses a database of 27,000 CMFGEN models of stellar atmospheres arranged in a six-dimensional (6D) space, where each dimension corresponds to one model parameter. From these models a library of 2,835,000 synthetic spectra was generated, covering the ultraviolet, optical, and infrared regions of the electromagnetic spectrum. Using FITspec we adjust the effective temperature and the surface gravity. From the 6D array we also get the luminosity, the metallicity, and three parameters for the stellar wind: the terminal velocity (v∞), the β exponent of the velocity law, and the clumping filling factor (Fcl). Finally, the projected rotational velocity (v sin i) can be obtained from the library of stellar spectra. Validation of the algorithm was performed by analyzing the spectra of a sample of eight O-type stars taken from the IACOB spectroscopic survey of Northern Galactic OB stars. The spectral lines used for the adjustment of the analyzed stars are reproduced with good accuracy. In particular, the effective temperatures calculated with FITspec are in good agreement with those derived from spectral type and other calibrations for the same stars. The stellar luminosities and projected rotational velocities are also in good agreement with previous quantitative spectroscopic analyses in the literature. An important advantage of FITspec over traditional codes is that the time required for spectral analyses is reduced from months to a few hours.
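
    The core lookup can be sketched as a chi-square search over the precomputed library; the array names are assumptions, and the real code also handles continuum normalization and line selection:

        import numpy as np

        def best_fit_model(observed_flux, library_fluxes, library_params):
            """library_fluxes: (n_models, n_wavelengths) resampled onto the
            observed wavelength grid; library_params: per-model parameter sets."""
            chi2 = ((library_fluxes - observed_flux[None, :]) ** 2).sum(axis=1)
            best = int(np.argmin(chi2))
            return library_params[best], chi2[best]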

  18. Fully automated reconstruction of three-dimensional vascular tree structures from two orthogonal views using computational algorithms and production rules

    Science.gov (United States)

    Liu, Iching; Sun, Ying

    1992-10-01

    A system for reconstructing 3-D vascular structure from two orthogonally projected images is presented. The formidable problem of matching segments between two views is solved using knowledge of the epipolar constraint and the similarity of segment geometry and connectivity. The knowledge is represented in a rule-based system, which also controls the operation of several computational algorithms for tracking segments in each image, representing 2-D segments with directed graphs, and reconstructing 3-D segments from matching 2-D segment pairs. Uncertain reasoning governs the interaction between segmentation and matching; it also provides a framework for resolving the matching ambiguities in an iterative way. The system was implemented in the C language and the C Language Integrated Production System (CLIPS) expert system shell. Using video images of a tree model, the standard deviation of reconstructed centerlines was estimated to be 0.8 mm (1.7 mm) when the view direction was parallel (perpendicular) to the epipolar plane. Feasibility of clinical use was shown using x-ray angiograms of a human chest phantom. The correspondence of vessel segments between two views was accurate. Computational time for the entire reconstruction process was under 30 s on a workstation. A fully automated system for two-view reconstruction that does not require the a priori knowledge of vascular anatomy is demonstrated.
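
    For ideal orthogonal projections the epipolar constraint reduces to agreement of the shared vertical coordinate, which is the essence of the matching rule. A simplified sketch (real angiograms need calibrated geometry plus the paper's rule-based disambiguation of segment geometry and connectivity):

        import numpy as np

        def reconstruct_point(front_pt, side_pt, tol=1.0):
            """Front view gives (x, y); side view gives (z, y). A pair is a
            match candidate only if the shared y coordinates agree."""
            (x, y1), (z, y2) = front_pt, side_pt
            if abs(y1 - y2) > tol:
                return None               # violates the epipolar constraint
            return np.array([x, 0.5 * (y1 + y2), z])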

  19. Genetic algorithm based feature selection combined with dual classification for the automated detection of proliferative diabetic retinopathy.

    Science.gov (United States)

    Welikala, R A; Fraz, M M; Dehmeshki, J; Hoppe, A; Tah, V; Mann, S; Williamson, T H; Barman, S A

    2015-07-01

    Proliferative diabetic retinopathy (PDR) is a condition that carries a high risk of severe visual impairment. The hallmark of PDR is the growth of abnormal new vessels. In this paper, an automated method for the detection of new vessels from retinal images is presented. This method is based on a dual classification approach. Two vessel segmentation approaches are applied to create two separate binary vessel maps, each of which holds vital information. Local morphology features are measured from each binary vessel map to produce two separate 4-D feature vectors. Independent classification is performed for each feature vector using a support vector machine (SVM) classifier. The system then combines these individual outcomes to produce a final decision. This is followed by the creation of additional features to generate 21-D feature vectors, which feed into a genetic algorithm based feature selection approach with the objective of finding feature subsets that improve the performance of the classification. Sensitivity and specificity results using a dataset of 60 images are 0.9138 and 0.9600, respectively, on a per patch basis and 1.000 and 0.975, respectively, on a per image basis. Copyright © 2015 Elsevier Ltd. All rights reserved.
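
    A compact genetic algorithm over binary feature masks, scored by cross-validated SVM accuracy, illustrates the feature-selection stage; the operators and rates are illustrative, not the paper's configuration:

        import numpy as np
        from sklearn.svm import SVC
        from sklearn.model_selection import cross_val_score

        def ga_feature_select(X, y, n_gen=30, pop=20, seed=0):
            rng = np.random.default_rng(seed)
            n = X.shape[1]
            masks = rng.integers(0, 2, size=(pop, n)).astype(bool)

            def fitness(m):
                return cross_val_score(SVC(), X[:, m], y, cv=3).mean() if m.any() else 0.0

            for _ in range(n_gen):
                scores = np.array([fitness(m) for m in masks])
                parents = masks[np.argsort(scores)[-pop // 2:]]    # truncation selection
                cuts = rng.integers(1, n, size=pop // 2)
                children = np.array([np.concatenate((parents[i][:c],
                                                     parents[(i + 1) % len(parents)][c:]))
                                     for i, c in enumerate(cuts)]) # one-point crossover
                children ^= rng.random(children.shape) < 0.02      # bit-flip mutation
                masks = np.vstack((parents, children))
            scores = np.array([fitness(m) for m in masks])
            return masks[np.argmax(scores)]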

  20. Experimental optimization of a direct injection homogeneous charge compression ignition gasoline engine using split injections with fully automated microgenetic algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Canakci, M. [Kocaeli Univ., Izmit (Turkey); Reitz, R.D. [Wisconsin Univ., Dept. of Mechanical Engineering, Madison, WI (United States)

    2003-03-01

    Homogeneous charge compression ignition (HCCI) is receiving attention as a new low-emission engine concept. Little is known about the optimal operating conditions for this engine operation mode. Combustion under homogeneous, low equivalence ratio conditions results in modest temperature combustion products, containing very low concentrations of NO{sub x} and particulate matter (PM) as well as providing high thermal efficiency. However, this combustion mode can produce higher HC and CO emissions than those of conventional engines. An electronically controlled Caterpillar single-cylinder oil test engine (SCOTE), originally designed for heavy-duty diesel applications, was converted to an HCCI direct injection (DI) gasoline engine. The engine features an electronically controlled low-pressure direct injection gasoline (DI-G) injector with a 60 deg spray angle that is capable of multiple injections. The use of double injection was explored for emission control and the engine was optimized using fully automated experiments and a microgenetic algorithm optimization code. The variables changed during the optimization include the intake air temperature, start of injection timing and the split injection parameters (per cent mass of fuel in each injection, dwell between the pulses). The engine performance and emissions were determined at 700 r/min with a constant fuel flowrate at 10 MPa fuel injection pressure. The results show that significant emissions reductions are possible with the use of optimal injection strategies. (Author)

  1. Automated guidance algorithms for a space station-based crew escape vehicle.

    Science.gov (United States)

    Flanary, R; Hammen, D G; Ito, D; Rabalais, B W; Rishikof, B H; Siebold, K H

    2003-04-01

    An escape vehicle was designed to provide an emergency evacuation for crew members living on a space station. For maximum escape capability, the escape vehicle needs to have the ability to safely evacuate a station in a contingency scenario such as an uncontrolled (e.g., tumbling) station. This emergency escape sequence will typically be divided into three events: the first separation event (SEP1), the navigation reconstruction event, and the second separation event (SEP2). SEP1 is responsible for taking the spacecraft from its docking port to a distance greater than the maximum radius of the rotating station. The navigation reconstruction event takes place prior to the SEP2 event and establishes the orbital state to within the tolerance limits necessary for SEP2. The SEP2 event calculates and performs an avoidance burn to prevent station recontact during the next several orbits. This paper presents the tools and results for the whole separation sequence, with an emphasis on the two separation events. The first challenge includes collision avoidance during the escape sequence while the station is in an uncontrolled rotational state, with rotation rates of up to 2 degrees per second. The task of avoiding a collision may require the use of the vehicle's de-orbit propulsion system for maximum thrust and minimum dwell time within the vicinity of the station. The thrust of the propulsion system is in a single direction, and can be controlled only by the attitude of the spacecraft. Escape algorithms based on a look-up table or analytical guidance can be implemented since the rotation rate and the angular momentum vector can be sensed onboard and a priori knowledge of the position and relative orientation is available. In addition, crew intervention has been provided for in the event of unforeseen obstacles in the escape path. The purpose of the SEP2 burn is to avoid re-contact with the station over an extended period of time. Performing this maneuver requires ...

  2. Genetic Algorithm Based Framework for Automation of Stochastic Modeling of Multi-Season Streamflows

    Science.gov (United States)

    Srivastav, R. K.; Srinivasan, K.; Sudheer, K.

    2009-05-01

    bootstrap (MABB), based on the explicit objective functions of minimizing the relative bias and the relative root mean square error in estimating the storage capacity of the reservoir. The optimal parameter set of the hybrid model is obtained by searching over a multi-dimensional parameter space (involving simultaneous exploration of the parametric (PAR(1)) and the non-parametric (MABB) components). This is achieved using an efficient evolutionary-search-based optimization tool, namely the non-dominated sorting genetic algorithm II (NSGA-II). This approach helps in reducing the drudgery involved in manually selecting the hybrid model, in addition to predicting the basic summary statistics, dependence structure, marginal distribution, and water-use characteristics accurately. The proposed optimization framework is used to model the multi-season streamflows of the River Beaver and the River Weber in the USA. For both rivers, the proposed GA-based hybrid model yields a much better prediction of the storage capacity (where both parametric and non-parametric components are explored simultaneously) when compared with the MLE-based hybrid models (where the hybrid model selection is done in two stages, thus probably resulting in a sub-optimal model). This framework can be further extended to include different linear/non-linear hybrid stochastic models at other temporal and spatial scales as well.

  3. Effect of normalization methods on the performance of supervised learning algorithms applied to HTSeq-FPKM-UQ data sets: 7SK RNA expression as a predictor of survival in patients with colon adenocarcinoma.

    Science.gov (United States)

    Shahriyari, Leili

    2017-11-03

    One of the main challenges in machine learning (ML) is choosing an appropriate normalization method. Here, we examine the effect of various normalization methods on analyzing FPKM upper quartile (FPKM-UQ) RNA sequencing data sets. We collect the HTSeq-FPKM-UQ files of patients with colon adenocarcinoma from the TCGA-COAD project. We compare the three most common normalization methods: scaling, standardizing using the z-score, and vector normalization, by visualizing the normalized data set and evaluating the performance of 12 supervised learning algorithms on the normalized data set. Additionally, for each of these normalization methods, we use two different normalization strategies: normalizing samples (files) or normalizing features (genes). Regardless of normalization method, a support vector machine (SVM) model with the radial basis function kernel had the maximum accuracy (78%) in predicting the vital status of the patients. However, the fitting time of the SVM depended on the normalization method, and it reached its minimum fitting time when files were normalized to unit length. Furthermore, among all 12 learning algorithms and 6 different normalization techniques, the Bernoulli naive Bayes model after standardizing files had the best performance in terms of maximizing the accuracy as well as minimizing the fitting time. We also investigated the effect of dimensionality reduction methods on the performance of the supervised ML algorithms. Reducing the dimension of the data set did not increase the maximum accuracy of 78%. However, it led to the discovery of 7SK RNA expression as a predictor of survival in patients with colon adenocarcinoma, with an accuracy of 78%. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com
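
    The comparison can be reproduced in outline with scikit-learn; synthetic data stands in for the TCGA-COAD expression matrix, and note the axis difference: the scalers normalize features (genes) while Normalizer rescales each sample (file) to unit length:

        import numpy as np
        from sklearn.preprocessing import MinMaxScaler, StandardScaler, Normalizer
        from sklearn.svm import SVC
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        X = rng.lognormal(size=(120, 500))     # stand-in for (patients, genes)
        y = rng.integers(0, 2, size=120)       # stand-in for vital status

        for name, step in [("scaling", MinMaxScaler()),
                           ("z-score", StandardScaler()),
                           ("unit length", Normalizer())]:
            acc = cross_val_score(SVC(kernel="rbf"),
                                  step.fit_transform(X), y, cv=5).mean()
            print(f"{name}: accuracy {acc:.3f}")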

  4. Algorithms

    Indian Academy of Sciences (India)

    algorithm design technique called 'divide-and-conquer'. One of ... Turtle graphics, September. 1996. 5. ... whole list named 'PO' is a pointer to the first element of the list; ..... Program for computing matrices X and Y and placing the result in C *).

  5. Algorithms

    Indian Academy of Sciences (India)

    algorithm that it is implicitly understood that we know how to generate the next natural ..... Explicit comparisons are made in line (1) where maximum and minimum is ... It can be shown that the function T(n) = 3/2n -2 is the solution to the above ...

  6. Development of Reinforcement Learning Algorithm for Automation of Slide Gate Check Structure in Canals

    Directory of Open Access Journals (Sweden)

    K. Shahverdi

    2016-02-01

    Introduction: Nowadays, considering water shortage and weak management in the agricultural water sector, the performance of irrigation networks needs to be improved for optimal use of water. Recently, intelligent management of water conveyance and delivery and better control technologies have been considered for improving the performance of irrigation networks and their operation. To this end, a mathematical model of the automatic control system and the related structures, connected with hydrodynamic models, is necessary. The main objective of this research is the development of a mathematical model of the RL upstream control algorithm inside the ICSS hydrodynamic model as a subroutine. Materials and Methods: In learning systems, a set of state-action rules called classifiers compete to control the system based on the system's receipt from the environment. Five main elements of RL can be identified: an agent, an environment, a policy, a reward function, and a simulator. The learner (decision-maker) is called the agent. The thing it interacts with, comprising everything outside the agent, is called the environment. The agent selects an action based on the existing state of the environment. When the agent takes an action on the environment, the environment transitions to a new state and a reward is assigned accordingly. The agent and the environment continually interact to maximize the reward. The policy is the set of state-action pairs with higher rewards. It defines the agent's behavior and says which action must be taken in which state. The reward function defines the goal in an RL problem: it defines what the good and bad events are for the agent. The higher the reward, the better the action. The simulator provides environment information. In irrigation canals, the agent is the check structure. The actions and states are the check structure adjustments and the water depths, respectively. The environment comprises the hydraulic ...
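
    The agent-environment loop just described maps naturally onto tabular Q-learning; in this minimal sketch the state/action discretization and constants are invented, and the environment step would come from a hydrodynamic simulator such as the ICSS model:

        import numpy as np

        n_states, n_actions = 20, 3     # depth bins; lower / hold / raise gate
        Q = np.zeros((n_states, n_actions))
        alpha, gamma, eps = 0.1, 0.9, 0.1
        rng = np.random.default_rng(1)

        def choose_action(s):
            """Epsilon-greedy choice of a gate adjustment for depth bin s."""
            return rng.integers(n_actions) if rng.random() < eps else int(np.argmax(Q[s]))

        def update(s, a, reward, s_next):
            """One Q-learning backup after the simulator returns the new depth."""
            Q[s, a] += alpha * (reward + gamma * Q[s_next].max() - Q[s, a])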

  7. Differences between the CME fronts tracked by an expert, an automated algorithm, and the Solar Stormwatch project

    Science.gov (United States)

    Barnard, L.; Scott, C. J.; Owens, M.; Lockwood, M.; Crothers, S. R.; Davies, J. A.; Harrison, R. A.

    2015-10-01

    Observations from the Heliospheric Imager (HI) instruments aboard the twin STEREO spacecraft have enabled the compilation of several catalogues of coronal mass ejections (CMEs), each characterizing the propagation of CMEs through the inner heliosphere. Three such catalogues are the Rutherford Appleton Laboratory (RAL)-HI event list, the Solar Stormwatch CME catalogue, and, presented here, the J-tracker catalogue. Each catalogue uses a different method to characterize the location of CME fronts in the HI images: manual identification by an expert, the statistical reduction of the manual identifications of many citizen scientists, and an automated algorithm. We provide a quantitative comparison of the differences between these catalogues and techniques, using 51 CMEs common to each catalogue. The time-elongation profiles of these CME fronts are compared, as are the estimates of the CME kinematics derived from application of three widely used single-spacecraft-fitting techniques. The J-tracker and RAL-HI profiles are most similar, while the Solar Stormwatch profiles display a small systematic offset. Evidence is presented that these differences arise because the RAL-HI and J-tracker profiles follow the sunward edge of CME density enhancements, while Solar Stormwatch profiles track closer to the antisunward (leading) edge. We demonstrate that the method used to produce the time-elongation profile typically introduces more variability into the kinematic estimates than differences between the various single-spacecraft-fitting techniques. This has implications for the repeatability and robustness of these types of analyses, arguably especially so in the context of space weather forecasting, where it could make the results strongly dependent on the methods used by the forecaster.

  8. Algorithms

    Indian Academy of Sciences (India)

    will become clear in the next article when we discuss a simple logo like programming language. ... Rod B may be used as an auxiliary store. The problem is to find an algorithm which performs this task. ... N0 disks are moved from A to B using C as auxiliary rod. • move_disk(A, C); the (N0 + 1)th disk is moved from A to C directly ...

  9. Automated Conflict Resolution For Air Traffic Control

    Science.gov (United States)

    Erzberger, Heinz

    2005-01-01

    The ability to detect and resolve conflicts automatically is considered to be an essential requirement for the next generation air traffic control system. While systems for automated conflict detection have been used operationally by controllers for more than 20 years, automated resolution systems have so far not reached the level of maturity required for operational deployment. Analytical models and algorithms for automated resolution have yet to be demonstrated under realistic traffic conditions to show that they can handle the complete spectrum of conflict situations encountered in actual operations. The resolution algorithm described in this paper was formulated to meet the performance requirements of the Automated Airspace Concept (AAC). The AAC, which was described in a recent paper [1], is a candidate for the next generation air traffic control system. The AAC's performance objectives are to increase safety and airspace capacity and to accommodate user preferences in flight operations to the greatest extent possible. In the AAC, resolution trajectories are generated by an automation system on the ground and sent to the aircraft autonomously via data link. The algorithm generating the trajectories must take into account the performance characteristics of the aircraft and the route structure of the airway system, and it must be capable of resolving all types of conflicts for properly equipped aircraft without requiring supervision and approval by a controller. Furthermore, the resolution trajectories should be compatible with the clearances, vectors and flight plan amendments that controllers customarily issue to pilots in resolving conflicts. The algorithm described herein, although formulated specifically to meet the needs of the AAC, provides a generic engine for resolving conflicts. Thus, it can be incorporated into any operational concept that requires a method for automated resolution, including concepts for autonomous air-to-air resolution.

  10. Evaluation of an automated spike-and-wave complex detection algorithm in the EEG from a rat model of absence epilepsy.

    Science.gov (United States)

    Bauquier, Sebastien H; Lai, Alan; Jiang, Jonathan L; Sui, Yi; Cook, Mark J

    2015-10-01

    The aim of this prospective blinded study was to evaluate an automated algorithm for spike-and-wave discharge (SWD) detection applied to EEGs from genetic absence epilepsy rats from Strasbourg (GAERS). Five GAERS underwent four sessions of 20-min EEG recording. Each EEG was manually analyzed for SWDs longer than one second by two investigators and automatically using an algorithm developed in MATLAB®. The sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) were calculated for the manual (reference) versus the automatic (test) methods. The results showed that the algorithm had specificity, sensitivity, PPV and NPV >94%, comparable to published methods that are based on analyzing EEG changes in the frequency domain. This provides a good alternative as a method designed to mimic human manual marking in the time domain.
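
    The reported agreement measures follow directly from event-level confusion counts; for reference, a sketch assuming tp/fp/tn/fn tallies of detected SWD events:

        def detection_metrics(tp, fp, tn, fn):
            """Agreement between manual (reference) and automatic SWD marking."""
            sensitivity = tp / (tp + fn)
            specificity = tn / (tn + fp)
            ppv = tp / (tp + fp)
            npv = tn / (tn + fn)
            return sensitivity, specificity, ppv, npv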

  11. Security system signal supervision

    International Nuclear Information System (INIS)

    Chritton, M.R.; Matter, J.C.

    1991-09-01

    The purpose of this NUREG is to present technical information that should be useful to NRC licensees for understanding and applying line supervision techniques to security communication links. A review of security communication links is followed by detailed discussions of link physical protection and DC/AC static supervision and dynamic supervision techniques. Material is also presented on security for atmospheric transmission and video line supervision. A glossary of security communication line supervision terms is appended. 16 figs

  12. FindFoci: a focus detection algorithm with automated parameter training that closely matches human assignments, reduces human inconsistencies and increases speed of analysis.

    Directory of Open Access Journals (Sweden)

    Alex D Herbert

    Accurate and reproducible quantification of the accumulation of proteins into foci in cells is essential for data interpretation and for biological inferences. To improve reproducibility, much emphasis has been placed on the preparation of samples, but less attention has been given to reporting and standardizing the quantification of foci. The current standard to quantitate foci in open-source software is to manually determine a range of parameters based on the outcome of one or a few representative images and then apply the parameter combination to the analysis of a larger dataset. Here, we demonstrate the power and utility of using machine learning to train a new algorithm (FindFoci) to determine optimal parameters. FindFoci closely matches human assignments and allows rapid automated exploration of parameter space. Thus, individuals can train the algorithm to mirror their own assignments and then automate focus counting using the same parameters across a large number of images. Using the training algorithm to match human assignments of foci, we demonstrate that applying an optimal parameter combination from a single image is not broadly applicable to the analysis of other images scored by the same experimenter or by other experimenters. Our analysis thus reveals wide variation in human assignment of foci and their quantification. To overcome this, we developed training on multiple images, which reduces the inconsistency of using a single or a few images to set parameters for focus detection. FindFoci is provided as an open-source plugin for ImageJ.

  13. Fully Automated Segmentation of Fluid/Cyst Regions in Optical Coherence Tomography Images With Diabetic Macular Edema Using Neutrosophic Sets and Graph Algorithms.

    Science.gov (United States)

    Rashno, Abdolreza; Koozekanani, Dara D; Drayna, Paul M; Nazari, Behzad; Sadri, Saeed; Rabbani, Hossein; Parhi, Keshab K

    2018-05-01

    This paper presents a fully automated algorithm to segment fluid-associated (fluid-filled) and cyst regions in optical coherence tomography (OCT) retina images of subjects with diabetic macular edema. The OCT image is segmented using a novel neutrosophic transformation and a graph-based shortest path method. In the neutrosophic domain, an image is transformed into three sets: T (true), I (indeterminate), which represents noise, and F (false). This paper makes four key contributions. First, a new method is introduced to compute the indeterminacy set I, and a new correction operation is introduced to compute the set F in the neutrosophic domain. Second, a graph shortest-path method is applied in the neutrosophic domain to segment the inner limiting membrane and the retinal pigment epithelium as regions of interest (ROI), and the outer plexiform layer and inner segment myeloid as middle layers, using a novel definition of the edge weights. Third, a new cost function for cluster-based fluid/cyst segmentation in the ROI is presented, which also includes a novel approach to estimating the number of clusters in an automated manner. Fourth, the final fluid regions are obtained by ignoring very small regions and the regions between middle layers. The proposed method is evaluated using two publicly available datasets (Duke and Optima) and a third, local dataset from the University of Minnesota (UMN) clinic, which is available online. The proposed algorithm outperforms the previously proposed Duke algorithm by 8% with respect to the Dice coefficient and by 5% with respect to precision on the Duke dataset, while achieving about the same sensitivity. The proposed algorithm also outperforms a prior method on the Optima dataset by 6%, 22%, and 23% with respect to the Dice coefficient, sensitivity, and precision, respectively. Finally, the proposed algorithm achieves sensitivities of 67.3%, 88.8%, and 76.7% for the Duke, Optima, and UMN datasets, respectively.

  14. Combination of supervised and semi-supervised regression models for improved unbiased estimation

    DEFF Research Database (Denmark)

    Arenas-García, Jeronimo; Moriana-Varo, Carlos; Larsen, Jan

    2010-01-01

    In this paper we investigate the steady-state performance of semisupervised regression models adjusted using a modified RLS-like algorithm, identifying the situations where the new algorithm is expected to outperform standard RLS. By using an adaptive combination of the supervised and semisupervised models ...
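
    The adaptive-combination idea can be sketched as a sigmoid-parameterized mixing weight adapted by stochastic gradient descent on the instantaneous error; the variable names are invented and this is not the authors' exact update:

        import numpy as np

        def combine(y_sup, y_semi, d, mu=0.01):
            """Blend two predictors so the mixture tracks whichever component
            currently has the lower error on the desired signal d."""
            a = 0.0                              # lam = sigmoid(a) stays in (0, 1)
            out = np.empty_like(d, dtype=float)
            for t in range(len(d)):
                lam = 1.0 / (1.0 + np.exp(-a))
                out[t] = lam * y_sup[t] + (1.0 - lam) * y_semi[t]
                err = d[t] - out[t]
                a += mu * err * (y_sup[t] - y_semi[t]) * lam * (1.0 - lam)
            return out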

  15. BlobContours: adapting Blobworld for supervised color- and texture-based image segmentation

    Science.gov (United States)

    Vogel, Thomas; Nguyen, Dinh Quyen; Dittmann, Jana

    2006-01-01

    Extracting features is the first and one of the most crucial steps in the image retrieval process. While the color and texture features of digital images can be extracted rather easily, the shape and layout features depend on reliable image segmentation. Unsupervised image segmentation, often used in image analysis, works on a merely syntactical basis. That is, an unsupervised segmentation algorithm can segment only regions, not objects. To obtain high-level objects, which is desirable in image retrieval, human assistance is needed. Supervised image segmentation schemes can improve the reliability of segmentation and segmentation refinement. In this paper we propose a novel interactive image segmentation technique that combines the reliability of a human expert with the precision of automated image segmentation. The iterative procedure can be considered a variation on the Blobworld algorithm introduced by Carson et al. from the EECS Department, University of California, Berkeley. Starting with an initial segmentation as provided by the Blobworld framework, our algorithm, namely BlobContours, gradually updates it by recalculating every blob, based on the original features and the updated number of Gaussians. Since the original algorithm was hardly designed for interactive processing, we had to consider additional requirements for realizing a supervised segmentation scheme on the basis of Blobworld. Increasing the transparency of the algorithm by applying user-controlled iterative segmentation, providing different types of visualization for displaying the segmented image, and decreasing the computational time of segmentation are three major requirements which are discussed in detail.

  16. Mentoring, coaching and supervision

    OpenAIRE

    McMahon, Samantha; Dyer, Mary; Barker, Catherine

    2016-01-01

    This chapter considers the purpose of coaching, mentoring and supervision in early childhood education and care. It examines a number of different approaches and considers the key skills required for effective coaching, mentoring and supervision.

  17. Semi-supervised learning via regularized boosting working on multiple semi-supervised assumptions.

    Science.gov (United States)

    Chen, Ke; Wang, Shihai

    2011-01-01

    Semi-supervised learning concerns the problem of learning in the presence of labeled and unlabeled data. Several boosting algorithms have been extended to semi-supervised learning with various strategies. To our knowledge, however, none of them takes all three semi-supervised assumptions, i.e., smoothness, cluster, and manifold assumptions, together into account during boosting learning. In this paper, we propose a novel cost functional consisting of the margin cost on labeled data and the regularization penalty on unlabeled data based on three fundamental semi-supervised assumptions. Thus, minimizing our proposed cost functional with a greedy yet stagewise functional optimization procedure leads to a generic boosting framework for semi-supervised learning. Extensive experiments demonstrate that our algorithm yields favorable results for benchmark and real-world classification tasks in comparison to state-of-the-art semi-supervised learning algorithms, including newly developed boosting algorithms. Finally, we discuss relevant issues and relate our algorithm to the previous work.

  18. Optimal preventive bank supervision

    OpenAIRE

    Belhaj, Mohamed; Klimenko, Nataliya

    2012-01-01

    Early regulatory intervention in problem banks is one of the key suggestions of the Basel Committee on Banking Supervision. However, no guidance is given on its design. To fill this gap, we outline an incentive-based preventive supervision strategy that eliminates bad asset management in banks. Two supervision techniques are combined: temporary regulatory administration and random audits. Our design ensures good management without excessive supervision costs, through a gradual adjustment of ...

  19. Classical algorithms for automated parameter-search methods in compartmental neural models - A critical survey based on simulations using neuron

    International Nuclear Information System (INIS)

    Mutihac, R.; Mutihac, R.C.; Cicuttin, A.

    2001-09-01

    gradient-descent techniques are adequate if the parameter space is low-dimensional, relatively smooth, and has few local minima (e.g., parameterizing single-neuron compartmental models). Only fast algorithms and/or a decent (low) number of model parameters are candidates for automated parameter search, for practical reasons. If necessary, the size of the parameter space may be reduced and/or parallel supercomputers may be used. Data overfitting may negatively affect the generalization ability of the model. Bayesian methods include Occam's factor, which sets the preference for simpler models. The proliferation of (neural) models raises the question of rigorous criteria for comparing the overall performance of various models designed to match the same type of data. Bayesian methods provide the best framework to assess neural models quantitatively. Paradoxically, parameter-search methods may sometimes be more useful when they fail, by discarding unrealistic mechanisms used in the model design, rather than when fitting experimental data to an alleged model

  20. A Supervision of Solidarity

    Science.gov (United States)

    Reynolds, Vikki

    2010-01-01

    This article illustrates an approach to therapeutic supervision informed by a philosophy of solidarity and social justice activism. Called a "Supervision of Solidarity", this approach addresses the particular challenges in the supervision of therapists who work alongside clients who are subjected to social injustice and extreme marginalization. It…

  1. Legislation and supervision

    International Nuclear Information System (INIS)

    1998-01-01

    In this part, the following aspects are described: (1) Legislative and supervision-related framework (reviews of the structure of supervisory bodies, legislation, state supervision in the nuclear safety area, and state supervision in the area of health protection against radiation are given); (2) Operator's responsibility

  2. INVESTIGATION OF NEURAL NETWORK ALGORITHM FOR DETECTION OF NETWORK HOST ANOMALIES IN THE AUTOMATED SEARCH FOR XSS VULNERABILITIES AND SQL INJECTIONS

    Directory of Open Access Journals (Sweden)

    Y. D. Shabalin

    2016-03-01

    Full Text Available The problem of detecting aberrant behavior in a network-communicating computer is discussed. A novel approach based on the dynamic response of the computer is introduced. The computer is modeled as a multiple-input multiple-output (MIMO) plant. To characterize the dynamic response of the computer to incoming requests, a correlation between the input data rate and the observed output response (outgoing data rate and performance metrics) is used. To distinguish normal from aberrant behavior of the computer, a one-class neural network classifier is used. The general idea of the algorithm is briefly described. The configuration of the network testbed for experiments with real attacks and their detection is presented (the automated search for XSS and SQL injections). Software implementing real XSS and SQL injection attacks was used to model the intrusion scenario. It is to be expected that aberrant behavior of the server will reveal itself through an instantaneous correlation response significantly different from any of the normal ones. It is evident that the correlation picture of attacks from different malware running, of the site homepage being overridden on the server (so-called defacing), and of hardware and software failures will differ from the correlation picture of normal functioning. The intrusion detection algorithm is investigated to estimate false positive and false negative rates in relation to algorithm parameters. The importance of correlation width and threshold value selection is emphasized. The false positive rate was estimated along the time series of experimental data. Some ideas for enhancing the quality and robustness of the algorithm are mentioned.
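
    A minimal sketch of the general detection scheme described above — sliding-window correlations between input and output rate series, with a one-class classifier trained on normal traffic only — might look as follows; all function names, window sizes and the synthetic traffic are illustrative assumptions, not the authors' implementation.

        # Sketch: one-class detection of aberrant server behaviour from
        # input/output rate correlations (all parameters are assumptions).
        import numpy as np
        from sklearn.svm import OneClassSVM

        def correlation_features(in_rate, out_rate, window=64, step=16):
            """Sliding-window correlation between input and output rate series."""
            feats = []
            for start in range(0, len(in_rate) - window, step):
                a = in_rate[start:start + window]
                b = out_rate[start:start + window]
                feats.append([np.corrcoef(a, b)[0, 1], a.mean(), b.mean()])
            return np.nan_to_num(np.array(feats))

        # Train on traffic recorded during known-normal operation only.
        rng = np.random.default_rng(0)
        normal_in = rng.poisson(100, 5000).astype(float)
        normal_out = 2.0 * normal_in + rng.normal(0.0, 5.0, 5000)

        clf = OneClassSVM(nu=0.05, gamma="scale")
        clf.fit(correlation_features(normal_in, normal_out))

        # At run time, a prediction of -1 flags a window whose correlation
        # pattern deviates from everything seen during normal operation.
        flags = clf.predict(correlation_features(normal_in, normal_out))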

  3. A new automated assign and analysing method for high-resolution rotationally resolved spectra using genetic algorithms

    NARCIS (Netherlands)

    Meerts, W.L.; Schmitt, M.

    2006-01-01

    This paper describes a numerical technique that has recently been developed to automatically assign and fit high-resolution spectra. The method makes use of genetic algorithms (GA). The current algorithm is compared with previously used analysing methods. The general features of the GA and its

  4. Definition and Analysis of a System for the Automated Comparison of Curriculum Sequencing Algorithms in Adaptive Distance Learning

    Science.gov (United States)

    Limongelli, Carla; Sciarrone, Filippo; Temperini, Marco; Vaste, Giulia

    2011-01-01

    LS-Lab provides automatic support to comparison/evaluation of the Learning Object Sequences produced by different Curriculum Sequencing Algorithms. Through this framework a teacher can verify the correspondence between the behaviour of different sequencing algorithms and her pedagogical preferences. In fact the teacher can compare algorithms…

  5. Good supervision and PBL

    DEFF Research Database (Denmark)

    Otrel-Cass, Kathrin

    This field study was conducted at the Faculty of Social Sciences at Aalborg University with the intention to investigate how students reflect on their experiences with supervision in a PBL environment. The overall aim of this study was to inform about the continued work in strengthening supervision...... at this faculty. This particular study invited Master level students to discuss: • How a typical supervision process proceeds • How they experienced and what they expected of PBL in the supervision process • What makes a good supervision process...

  6. Supervised Learning for Visual Pattern Classification

    Science.gov (United States)

    Zheng, Nanning; Xue, Jianru

    This chapter presents an overview of the topics and major ideas of supervised learning for visual pattern classification. Two prevalent algorithms, i.e., the support vector machine (SVM) and the boosting algorithm, are briefly introduced. SVMs and boosting algorithms are two hot topics of recent research in supervised learning. SVMs improve the generalization of the learning machine by implementing the rule of structural risk minimization (SRM), and they exhibit good generalization even when little training data is available. The boosting algorithm can boost a weak classifier into a strong classifier by means of so-called classifier combination, providing a general way of producing a classifier with high generalization capability from a great number of weak classifiers.
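
    As a minimal, generic illustration of the classifier-combination idea (not the chapter's own code), scikit-learn's AdaBoost boosts depth-1 decision trees into a strong ensemble:

        # Sketch: boosting weak learners ("stumps") into a strong classifier.
        from sklearn.datasets import make_classification
        from sklearn.ensemble import AdaBoostClassifier

        X, y = make_classification(n_samples=500, n_features=20, random_state=0)

        # Boosting reweights the training samples after each round so that
        # later stumps concentrate on the examples earlier stumps got wrong;
        # the default weak learner is a depth-1 decision tree.
        strong = AdaBoostClassifier(n_estimators=100, random_state=0)
        strong.fit(X, y)
        print("training accuracy:", strong.score(X, y))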

  7. Observations on muscle activity in REM sleep behavior disorder assessed with a semi-automated scoring algorithm

    DEFF Research Database (Denmark)

    Jeppesen, Jesper; Otto, Marit; Frederiksen, Yoon

    2018-01-01

    OBJECTIVES: Rapid eye movement (REM) sleep behavior disorder (RBD) is defined by dream enactment due to a failure of normal muscle atonia. Visual assessment of this muscle activity is time consuming and rater-dependent. METHODS: An EMG computer algorithm for scoring 'tonic', 'phasic' and 'any......' submental muscle activity during REM sleep was evaluated compared with human visual ratings. Subsequently, 52 subjects were analyzed with the algorithm. Duration and maximal amplitude of muscle activity, and self-awareness of RBD symptoms were assessed. RESULTS: The computer algorithm showed high congruency...... sleep without atonia. CONCLUSIONS: Our proposed algorithm was able to detect and rate REM sleep without atonia allowing identification of RBD. Increased duration and amplitude of muscle activity bouts were characteristics of RBD. Quantification of REM sleep without atonia represents a marker of RBD...

  8. Computer algorithms for automated detection and analysis of local Ca2+ releases in spontaneously beating cardiac pacemaker cells.

    Directory of Open Access Journals (Sweden)

    Alexander V Maltsev

    Full Text Available Local Ca2+ Releases (LCRs) are crucial events involved in cardiac pacemaker cell function. However, specific algorithms for automatic LCR detection and analysis have not been developed for live, spontaneously beating pacemaker cells. In the present study we measured LCRs using a high-speed 2D camera in spontaneously contracting sinoatrial (SA) node cells isolated from rabbit and guinea pig and developed a new algorithm capable of detecting and analyzing the LCRs spatially in two dimensions, and in time. Our algorithm tracks points along the midline of the contracting cell. It uses these points as a coordinate system for an affine transform, producing a transformed image series in which the cell does not contract. Action potential-induced Ca2+ transients and LCRs were thereafter isolated from recording noise by applying a series of spatial filters. The LCR birth and death events were detected by a differential (frame-to-frame) sensitivity algorithm applied to each pixel (cell location). An LCR is detected when its signal changes sufficiently quickly within a sufficiently large area. The LCR is considered to have died when its amplitude decays substantially, or when it merges into the rising whole-cell Ca2+ transient. Ultimately, our algorithm provides major LCR parameters such as period, signal mass, duration, and propagation path area. As the LCRs propagate within live cells, the algorithm identifies splitting and merging behaviors, indicating the importance of locally propagating Ca2+-induced Ca2+ release for the fate of LCRs and for generating a powerful ensemble Ca2+ signal. Thus, our new computer algorithms eliminate motion artifacts and detect 2D local spatiotemporal events from recording noise and global signals. While the algorithms were developed to detect LCRs in sinoatrial nodal cells, they have the potential to be used in other applications in biophysics and cell physiology, for example, to detect Ca2+ wavelets (abortive waves), sparks and
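
    The differential (frame-to-frame) birth-detection step lends itself to a compact sketch; the following Python fragment is only an illustration of that idea under assumed thresholds, not the published implementation:

        # Sketch of differential (frame-to-frame) detection: a local event is
        # born where the signal rises quickly over a sufficiently large
        # connected area. Thresholds are illustrative assumptions.
        import numpy as np
        from scipy import ndimage

        def detect_births(stack, rise_thresh=0.2, min_area=25):
            """stack: (T, H, W) motion-corrected, filtered fluorescence images."""
            births = []
            for t in range(1, stack.shape[0]):
                diff = stack[t] - stack[t - 1]          # frame-to-frame change
                mask = diff > rise_thresh               # fast-rising pixels
                labels, n = ndimage.label(mask)         # connected regions
                for region in range(1, n + 1):
                    area = np.sum(labels == region)
                    if area >= min_area:                # large enough -> birth
                        cy, cx = ndimage.center_of_mass(labels == region)
                        births.append((t, cy, cx, area))
            return births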

  9. A multi-stage heuristic algorithm for matching problem in the modified miniload automated storage and retrieval system of e-commerce

    Science.gov (United States)

    Wang, Wenrui; Wu, Yaohua; Wu, Yingying

    2016-05-01

    E-commerce, as an emerging marketing mode, has attracted more and more attention and gradually changed the way we live. However, the existing layout of distribution centers cannot sufficiently fulfill the storage and picking demands of e-commerce. In this paper, a modified miniload automated storage/retrieval system is designed to fit these new characteristics of e-commerce logistics. Meanwhile, a matching problem, concerned with improving picking efficiency in the new system, is studied in this paper. The problem is how to reduce the travelling distance of totes between aisles and picking stations. A multi-stage heuristic algorithm is proposed based on a statement and model of this problem. The main idea of this algorithm is, with some heuristic strategies based on similarity coefficients, to minimize the transport of items that cannot reach their destination picking stations through direct conveyors alone. The experimental results based on computer-generated cases show that the average reduction in indirect transport times can reach 14.36% with the application of the multi-stage heuristic algorithm. For the cases from a real e-commerce distribution center, the order processing time can be reduced from 11.20 h to 10.06 h with the help of the modified system and the proposed algorithm. In summary, this research proposes a modified system and a multi-stage heuristic algorithm that can effectively reduce the travelling distance of totes and improve the overall performance of an e-commerce distribution center.

  10. Power of automated algorithms for combining time-line follow-back and urine drug screening test results in stimulant-abuse clinical trials.

    Science.gov (United States)

    Oden, Neal L; VanVeldhuisen, Paul C; Wakim, Paul G; Trivedi, Madhukar H; Somoza, Eugene; Lewis, Daniel

    2011-09-01

    In clinical trials of treatment for stimulant abuse, researchers commonly record both Time-Line Follow-Back (TLFB) self-reports and urine drug screen (UDS) results. Our objective was to compare the power of self-report, qualitative (use vs. no use) UDS assessment, and various algorithms that generate self-report-UDS composite measures to detect treatment differences via t-test in simulated clinical trial data. We performed Monte Carlo simulations patterned in part on real data to model self-report reliability, UDS errors, dropout, informatively missing UDS reports, incomplete adherence to a urine donation schedule, temporal correlation of drug use, number of days in the study period, number of patients per arm, and distribution of drug-use probabilities. Investigated algorithms include maximum likelihood and Bayesian estimates, self-report alone, UDS alone, and several simple modifications of self-report (referred to here as ELCON algorithms) which eliminate perceived contradictions between it and UDS. Among the algorithms investigated, simple ELCON algorithms gave rise to the most powerful t-tests to detect mean group differences in stimulant drug use. Further investigation is needed to determine if simple, naïve procedures such as the ELCON algorithms are optimal for comparing clinical study treatment arms. But researchers who currently require an automated algorithm in scenarios similar to those simulated for combining TLFB and UDS to test group differences in stimulant use should consider one of the ELCON algorithms. This analysis continues a line of inquiry which could determine how best to measure outpatient stimulant use in clinical trials (NIDA. NIDA Monograph-57: Self-Report Methods of Estimating Drug Abuse: Meeting Current Challenges to Validity. NTIS PB 88248083. Bethesda, MD: National Institutes of Health, 1985; NIDA. NIDA Research Monograph 73: Urine Testing for Drugs of Abuse. NTIS PB 89151971. Bethesda, MD: National Institutes of Health, 1987; NIDA. NIDA Research
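
    As a hedged illustration of what an ELCON-style reconciliation could look like (the paper's exact rules may differ), the sketch below starts from daily self-report and overrides days that contradict a positive urine screen:

        # Sketch of an ELCON-style reconciliation: start from daily self-report
        # and override days that contradict a positive urine drug screen.
        # The specific rule here is an illustrative assumption, not the paper's.
        def reconcile(self_report, uds_results):
            """self_report: dict day -> bool (reported use).
            uds_results: dict day -> bool (positive screen), sparse."""
            composite = dict(self_report)
            for day, positive in uds_results.items():
                if positive and not composite.get(day, False):
                    composite[day] = True   # resolve the contradiction toward use
            return composite

        week = {d: False for d in range(7)}
        print(reconcile(week, {3: True}))   # day 3 flipped to use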

  11. Spectral matching techniques (SMTs) and automated cropland classification algorithms (ACCAs) for mapping croplands of Australia using MODIS 250-m time-series (2000–2015) data

    Science.gov (United States)

    Teluguntla, Pardhasaradhi G.; Thenkabail, Prasad S.; Xiong, Jun N.; Gumma, Murali Krishna; Congalton, Russell G.; Oliphant, Adam; Poehnelt, Justin; Yadav, Kamini; Rao, Mahesh N.; Massey, Richard

    2017-01-01

    Mapping croplands, including fallow areas, is an important measure for determining the quantity of food that is produced, where it is produced, and when it is produced (e.g., seasonality). Furthermore, croplands are heavy water consumers, accounting for anywhere between 70% and 90% of all human water use globally. Given these facts and the projected increase in global population to nearly 10 billion by the year 2050, the need for routine, rapid, and automated cropland mapping year-after-year and/or season-after-season is of great importance. The overarching goal of this study was to generate standard and routine cropland products, year-after-year, over very large areas through the use of two novel methods: (a) quantitative spectral matching techniques (QSMTs) applied at the continental level and (b) a rule-based Automated Cropland Classification Algorithm (ACCA) with the ability to hind-cast, now-cast, and future-cast. Australia was chosen for the study given its extensive croplands, rich history of agriculture, and yet the absence of routinely generated yearly cropland products based on multi-temporal remote sensing. This research produced three distinct cropland products using Moderate Resolution Imaging Spectroradiometer (MODIS) 250-m normalized difference vegetation index 16-day composite time-series data for 16 years: 2000 through 2015. The products consisted of: (1) cropland extent/areas versus cropland fallow areas, (2) irrigated versus rainfed croplands, and (3) cropping intensities: single, double, and continuous cropping. An accurate reference cropland product (RCP) for the year 2014 (RCP2014) produced using QSMT was used as a knowledge base to train and develop the ACCA algorithm, which was then applied to the MODIS time-series data for the years 2000–2015. A comparison between the ACCA-derived cropland products (ACPs) for the year 2014 (ACP2014) versus RCP2014 provided an overall agreement of 89.4% (kappa = 0.814) with six classes: (a) producer’s accuracies varying

  12. An ImageJ-based algorithm for a semi-automated method for microscopic image enhancement and DNA repair foci counting

    International Nuclear Information System (INIS)

    Klokov, D.; Suppiah, R.

    2015-01-01

    Proper evaluation of the health risks of low-dose ionizing radiation exposure heavily relies on the ability to accurately measure very low levels of DNA damage in cells. One of the most sensitive methods for measuring DNA damage levels is the quantification of DNA repair foci, which consist of macromolecular aggregates of DNA repair proteins, such as γH2AX and 53BP1, forming around individual DNA double-strand breaks. They can be quantified using immunofluorescence microscopy and are widely used as markers of DNA double-strand breaks. However, this quantification, if performed manually, may be very tedious and prone to inter-individual bias. Low-dose radiation studies are especially sensitive to this potential bias due to the very low magnitude of the anticipated effects. Therefore, we designed and validated an algorithm for the semi-automated processing of microscopic images and quantification of DNA repair foci. The algorithm uses ImageJ, a freely available image analysis software package that is customizable to individual cellular properties or experimental conditions. We validated the algorithm using immunolabeled 53BP1 and γH2AX in normal human fibroblast AG01522 cells under both normal and irradiated conditions. This method is easy to learn, can be used by non-trained personnel, and can help avoid discrepancies in inter-laboratory comparison studies examining the effects of low-dose radiation. (author)
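
    A rough Python analogue of such an enhancement-then-count pipeline could look like the sketch below (the published method is an ImageJ macro; this scikit-image version, with all thresholds assumed, is purely illustrative):

        # Sketch: enhance, threshold and count foci in a nucleus image.
        # All parameters are illustrative assumptions.
        import numpy as np
        from skimage import filters, measure, morphology

        def count_foci(nucleus_img, min_size=4):
            smoothed = filters.gaussian(nucleus_img, sigma=1)      # denoise
            thresh = filters.threshold_otsu(smoothed)              # global threshold
            mask = morphology.remove_small_objects(smoothed > thresh, min_size)
            return measure.label(mask).max()                       # number of foci

        rng = np.random.default_rng(1)
        img = rng.normal(0.1, 0.02, (128, 128))
        img[30:36, 40:46] += 0.5                                   # two synthetic foci
        img[80:86, 90:96] += 0.5
        print(count_foci(img))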

  13. An ImageJ-based algorithm for a semi-automated method for microscopic image enhancement and DNA repair foci counting

    Energy Technology Data Exchange (ETDEWEB)

    Klokov, D., E-mail: dmitry.klokov@cnl.ca [Canadian Nuclear Laboratories, Chalk River, Ontario (Canada); Suppiah, R. [Queen's Univ., Dept. of Biomedical and Molecular Sciences, Kingston, Ontario (Canada)

    2015-06-15

    Proper evaluation of the health risks of low-dose ionizing radiation exposure heavily relies on the ability to accurately measure very low levels of DNA damage in cells. One of the most sensitive methods for measuring DNA damage levels is the quantification of DNA repair foci, which consist of macromolecular aggregates of DNA repair proteins, such as γH2AX and 53BP1, forming around individual DNA double-strand breaks. They can be quantified using immunofluorescence microscopy and are widely used as markers of DNA double-strand breaks. However, this quantification, if performed manually, may be very tedious and prone to inter-individual bias. Low-dose radiation studies are especially sensitive to this potential bias due to the very low magnitude of the anticipated effects. Therefore, we designed and validated an algorithm for the semi-automated processing of microscopic images and quantification of DNA repair foci. The algorithm uses ImageJ, a freely available image analysis software package that is customizable to individual cellular properties or experimental conditions. We validated the algorithm using immunolabeled 53BP1 and γH2AX in normal human fibroblast AG01522 cells under both normal and irradiated conditions. This method is easy to learn, can be used by non-trained personnel, and can help avoid discrepancies in inter-laboratory comparison studies examining the effects of low-dose radiation. (author)

  14. Automated Field-of-View, Illumination, and Recognition Algorithm Design of a Vision System for Pick-and-Place Considering Colour Information in Illumination and Images.

    Science.gov (United States)

    Chen, Yibing; Ogata, Taiki; Ueyama, Tsuyoshi; Takada, Toshiyuki; Ota, Jun

    2018-05-22

    Machine vision is playing an increasingly important role in industrial applications, and the automated design of image recognition systems has been a subject of intense research. This study proposes a system for automatically designing the field-of-view (FOV) of a camera, the illumination strength and the parameters of a recognition algorithm. We formulated the design problem as an optimisation problem and solved it with an experiment-based hierarchical algorithm. Evaluation experiments using translucent plastic objects showed that the proposed system produced an effective solution with a wide FOV, recognition of all objects, and maximal positional and angular errors of 0.32 mm and 0.4° when all RGB (red, green and blue) channels were used for illumination and the R-channel image was used for recognition. Although all-RGB illumination with grey-scale images also yielded recognition of all the objects, only a narrow FOV was selected. Moreover, full recognition was not achieved using only G illumination and a grey-scale image. The results showed that the proposed method can automatically design the FOV, illumination and recognition-algorithm parameters, and that tuning all RGB illumination channels is desirable even when single-channel or grey-scale images are used for recognition.

  15. Influencing Trust for Human-Automation Collaborative Scheduling of Multiple Unmanned Vehicles.

    Science.gov (United States)

    Clare, Andrew S; Cummings, Mary L; Repenning, Nelson P

    2015-11-01

    We examined the impact of priming on operator trust and system performance when supervising a decentralized network of heterogeneous unmanned vehicles (UVs). Advances in autonomy have enabled a future vision of single-operator control of multiple heterogeneous UVs. Real-time scheduling for multiple UVs in uncertain environments requires the computational ability of optimization algorithms combined with the judgment and adaptability of human supervisors. Because of system and environmental uncertainty, appropriate operator trust will be instrumental in maintaining high system performance and preventing cognitive overload. Three groups of operators experienced different levels of trust priming prior to conducting simulated missions in an existing, multiple-UV simulation environment. Participants who play computer and video games frequently were found to have a higher propensity to overtrust automation. By priming gamers to lower their initial trust to a more appropriate level, system performance was improved by 10% as compared to gamers who were primed to have higher trust in the automation. Priming was successful at adjusting the operator's initial and dynamic trust in the automated scheduling algorithm, which had a substantial impact on system performance. These results have important implications for personnel selection and training for futuristic multi-UV systems under human supervision. Although gamers may bring valuable skills, they may also be potentially prone to automation bias. Priming during training and regular priming throughout missions may be one potential method for overcoming this propensity to overtrust automation. © 2015, Human Factors and Ergonomics Society.

  16. Optimum supervision intervals and order of supervision in nuclear reactor protective systems

    International Nuclear Information System (INIS)

    Kontoleon, J.M.

    1978-01-01

    The optimum inspection strategy of an m-out-of-n:G nuclear reactor protective system with nonidentical units is analyzed. A 2-out-of-4:G system is used to formulate a multi-variable optimization problem to determine (a) the optimum order of supervision of the units and (b) the optimum supervision intervals between units. The case of systems with identical units is a special case of the above. Numerical results are derived using a computer algorithm

  17. Reflecting reflection in supervision

    DEFF Research Database (Denmark)

    Lystbæk, Christian Tang

    associated with reflection and an exploration of alternative conceptions that view reflection within the context of settings which have a more group- and team-based orientation. Drawing on an action research project on health care supervision, the paper questions whether we should reject earlier views...... of reflection, rehabilitate them in order to capture broader connotations, or move to new ways of regarding reflection that are more in keeping with not only reflective but also emotive, normative and formative views on supervision. The paper presents a critical perspective on supervision that challenges...... the current reflective paradigm in supervision and relates this to emotive, normative and formative views on supervision. The paper is relevant for Nordic educational research into supervision and guidance...

  18. Supervision in banking industry

    OpenAIRE

    Šmída, David

    2012-01-01

    The aim of the submitted thesis, Supervision in Banking, is to define the nature and importance of banking supervision, to justify its existence and to analyze the applicable mechanisms. The system of banking regulation and supervision is examined primarily in the European context, with a focus on the Czech Republic. The thesis is divided into five main chapters. The first chapter is devoted to the financial system and the importance of banks in this system; it defines the c...

  19. MULTIPERIOD BANKING SUPERVISION

    OpenAIRE

    KARL-THEODOR EISELE; PHILIPPE ARTZNER

    2013-01-01

    This paper is based on a general method for the multiperiod prudential supervision of companies subject to hedgeable and non-hedgeable risks. Having treated the case of insurance in an earlier paper, we now consider a quantitative approach to the supervision of commercial banks. The various elements under supervision are the bank’s current amount of tradeable assets, the deposit amount, and four flow processes: future trading risk exposures, deposit flows, flows of loan repayments and of deposit re...

  20. Rethinking Educational Supervision

    OpenAIRE

    Burhanettin DÖNMEZ; Kadir BEYCİOĞLU

    2009-01-01

    The history of educational (school) supervision has been influenced by the history of the interaction of intellectual movements in politics, society, philosophy and industrial movements. The purpose of this conceptual and theoretical study is to take a brief look at the concept of educational supervision and related historical developments in the field. The paper also intends to examine the terms and issues critically, and to conceptualize some issues associated with educational supervision in...

  1. Evaluering af kollegial supervision

    DEFF Research Database (Denmark)

    Petersen, Anne Line Bjerre Folsgaard; Bager, Lene Tortzen; Jørgensen, Mette Eg

    2015-01-01

    The video is an evaluation of a couple of years' work with a methodical approach to collegial supervision at VIA Ergoterapeutuddannelsen (the VIA occupational therapy programme). The evaluation focuses on the method itself, as used for collegial supervision. In addition, there is a focus on the experiences and outcomes of working systematically with...... collegial supervision among lecturers at VIA Ergoterapeutuddannelsen....

  2. Semi-supervised Learning for Phenotyping Tasks.

    Science.gov (United States)

    Dligach, Dmitriy; Miller, Timothy; Savova, Guergana K

    2015-01-01

    Supervised learning is the dominant approach to automatic electronic health records-based phenotyping, but it is expensive due to the cost of manual chart review. Semi-supervised learning takes advantage of both scarce labeled and plentiful unlabeled data. In this work, we study a family of semi-supervised learning algorithms based on Expectation Maximization (EM) in the context of several phenotyping tasks. We first experiment with the basic EM algorithm. When the modeling assumptions are violated, basic EM leads to inaccurate parameter estimation. Augmented EM attenuates this shortcoming by introducing a weighting factor that downweights the unlabeled data. Cross-validation does not always lead to the best setting of the weighting factor, and other heuristic methods may be preferred. We show that accurate phenotyping models can be trained with only a few hundred labeled (and a large number of unlabeled) examples, potentially providing substantial savings in the amount of manual chart review required.
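
    A minimal sketch of the augmented-EM idea, assuming Gaussian Naive Bayes as the base model and an assumed weighting factor lam (the paper's setup may differ):

        # Sketch of augmented EM: unlabeled examples enter the M-step with a
        # down-weighting factor lam in (0, 1]. Gaussian Naive Bayes is an
        # illustrative choice of base model, not the paper's.
        import numpy as np
        from sklearn.naive_bayes import GaussianNB

        def augmented_em(X_lab, y_lab, X_unlab, lam=0.1, iters=10):
            model = GaussianNB().fit(X_lab, y_lab)
            for _ in range(iters):
                # E-step: soft labels for the unlabeled pool.
                post = model.predict_proba(X_unlab)
                y_soft = model.classes_[post.argmax(axis=1)]
                conf = post.max(axis=1)
                # M-step: refit on labeled plus down-weighted unlabeled data.
                X_all = np.vstack([X_lab, X_unlab])
                y_all = np.concatenate([y_lab, y_soft])
                w_all = np.concatenate([np.ones(len(y_lab)), lam * conf])
                model = GaussianNB().fit(X_all, y_all, sample_weight=w_all)
            return model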

  3. Collective academic supervision

    DEFF Research Database (Denmark)

    Nordentoft, Helle Merete; Thomsen, Rie; Wichmann-Hansen, Gitte

    2013-01-01

    Supervision of students is a core activity in higher education. Previous research on student supervision in higher education focuses on individual and relational aspects of the supervisory relationship rather than collective, pedagogical and methodical aspects of the planning of the supervision...... process. This article fills these gaps by discussing potentials and challenges in “Collective Academic Supervision”, a model for supervision at the Master of Education in Guidance at Aarhus University in Denmark. The pedagogical rationale behind the model is that students’ participation and learning...

  4. A Semi-Automated Machine Learning Algorithm for Tree Cover Delineation from 1-m Naip Imagery Using a High Performance Computing Architecture

    Science.gov (United States)

    Basu, S.; Ganguly, S.; Nemani, R. R.; Mukhopadhyay, S.; Milesi, C.; Votava, P.; Michaelis, A.; Zhang, G.; Cook, B. D.; Saatchi, S. S.; Boyda, E.

    2014-12-01

    Accurate tree cover delineation is a useful instrument in the derivation of Above Ground Biomass (AGB) density estimates from Very High Resolution (VHR) satellite imagery data. Numerous algorithms have been designed to perform tree cover delineation in high to coarse resolution satellite imagery, but most of them do not scale to the terabytes of data typical of these VHR datasets. In this paper, we present an automated probabilistic framework for the segmentation and classification of 1-m VHR data as obtained from the National Agriculture Imagery Program (NAIP) for deriving tree cover estimates for the whole of the Continental United States, using a High Performance Computing Architecture. The results from the classification and segmentation algorithms are then consolidated into a structured prediction framework using a discriminative undirected probabilistic graphical model based on Conditional Random Field (CRF), which helps in capturing the higher order contextual dependencies between neighboring pixels. Once the final probability maps are generated, the framework is updated and re-trained by incorporating expert knowledge through the relabeling of misclassified image patches. This leads to a significant improvement in the true positive rates and reduction in false positive rates. The tree cover maps were generated for the state of California, which covers a total of 11,095 NAIP tiles and spans a total geographical area of 163,696 sq. miles. Our framework produced correct detection rates of around 85% for fragmented forests and 70% for urban tree cover areas, with false positive rates lower than 3% for both regions. Comparative studies with the National Land Cover Data (NLCD) algorithm and the LiDAR high-resolution canopy height model show the effectiveness of our algorithm in generating accurate high-resolution tree cover maps.

  5. Automated real-time search and analysis algorithms for a non-contact 3D profiling system

    Science.gov (United States)

    Haynes, Mark; Wu, Chih-Hang John; Beck, B. Terry; Peterman, Robert J.

    2013-04-01

    The purpose of this research is to develop a new means of identifying and extracting geometrical feature statistics from a non-contact precision-measurement 3D profilometer. Autonomous algorithms have been developed to search through large-scale Cartesian point clouds to identify and extract geometrical features. These algorithms are developed with the intent of providing real-time production quality control of cold-rolled steel wires. The steel wires in question are prestressing steel reinforcement wires for concrete members. The geometry of the wire is critical to the performance of the overall concrete structure. For this research a custom 3D non-contact profilometry system has been developed that utilizes laser displacement sensors for submicron-resolution surface profiling. Optimizations in the control and sensory system allow data points to be collected at up to approximately 400,000 points per second. In order to achieve geometrical feature extraction and tolerancing with this large volume of data, the algorithms employed are optimized for parsing large data quantities. The methods used provide a unique means of maintaining high-resolution data of the surface profiles while keeping algorithm running times within practical bounds for industrial application. By a combination of regional sampling, iterative search, spatial filtering, frequency filtering, spatial clustering, and template matching, a robust feature identification method has been developed. These algorithms provide an autonomous means of verifying tolerances in geometrical features. The key method of identifying the features is a combination of downhill simplex optimization and geometrical feature templates. By performing downhill simplex through several procedural programming layers of different search and filtering techniques, very specific geometrical features can be identified within the point cloud and analyzed for proper tolerancing. Being able to perform this quality control in real time
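
    As an illustrative toy version of the downhill-simplex template fit (assumed template and data, not the production algorithm), one can fit a circular-arc template to noisy profile points with scipy's Nelder-Mead:

        # Sketch: fit the centre and radius of a circular-arc template to
        # measured profile points by minimizing squared residuals.
        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(2)
        theta = np.linspace(0, np.pi, 200)
        pts = np.c_[2.0 * np.cos(theta) + 0.3,
                    2.0 * np.sin(theta) - 0.1] + rng.normal(0, 0.01, (200, 2))

        def residual(params):
            cx, cy, r = params
            d = np.hypot(pts[:, 0] - cx, pts[:, 1] - cy)
            return np.sum((d - r) ** 2)

        fit = minimize(residual, x0=[0.0, 0.0, 1.0], method="Nelder-Mead")
        print(fit.x)   # recovered centre (~0.3, -0.1) and radius (~2.0)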

  6. Semi-supervised sparse coding

    KAUST Repository

    Wang, Jim Jing-Yan; Gao, Xin

    2014-01-01

    Sparse coding approximates the data sample as a sparse linear combination of some basic codewords and uses the sparse codes as new representations. In this paper, we investigate learning discriminative sparse codes by sparse coding in a semi-supervised manner, where only a few training samples are labeled. By using the manifold structure spanned by the data set of both labeled and unlabeled samples and the constraints provided by the labels of the labeled samples, we learn the variable class labels for all the samples. Furthermore, to improve the discriminative ability of the learned sparse codes, we assume that the class labels could be predicted from the sparse codes directly using a linear classifier. By solving the codebook, sparse codes, class labels and classifier parameters simultaneously in a unified objective function, we develop a semi-supervised sparse coding algorithm. Experiments on two real-world pattern recognition problems demonstrate the advantage of the proposed methods over supervised sparse coding methods on partially labeled data sets.

  7. Semi-supervised sparse coding

    KAUST Repository

    Wang, Jim Jing-Yan

    2014-07-06

    Sparse coding approximates the data sample as a sparse linear combination of some basic codewords and uses the sparse codes as new representations. In this paper, we investigate learning discriminative sparse codes by sparse coding in a semi-supervised manner, where only a few training samples are labeled. By using the manifold structure spanned by the data set of both labeled and unlabeled samples and the constraints provided by the labels of the labeled samples, we learn the variable class labels for all the samples. Furthermore, to improve the discriminative ability of the learned sparse codes, we assume that the class labels could be predicted from the sparse codes directly using a linear classifier. By solving the codebook, sparse codes, class labels and classifier parameters simultaneously in a unified objective function, we develop a semi-supervised sparse coding algorithm. Experiments on two real-world pattern recognition problems demonstrate the advantage of the proposed methods over supervised sparse coding methods on partially labeled data sets.

  8. Performance Monitoring Applied to System Supervision

    Directory of Open Access Journals (Sweden)

    Bertille Somon

    2017-07-01

    Full Text Available Nowadays, automation is present in every aspect of our daily life and brings clear benefits. Nonetheless, empirical data suggest that traditional automation has many negative performance and safety consequences, as it has changed task performers into task supervisors. In this context, we propose to use recent insights into the anatomical and neurophysiological substrates of action monitoring in humans to help further characterize performance monitoring during system supervision. Error monitoring is critical for humans to learn from the consequences of their actions. A wide variety of studies have shown that the error monitoring system is involved not only in our own errors, but also in the errors of others. We hypothesize that the neurobiological correlates of self-performance monitoring activity can be applied to system supervision. At a larger scale, a better understanding of system supervision may allow its negative effects to be anticipated or even countered. This review is divided into three main parts. First, we assess the neurophysiological correlates of self-performance monitoring and their characteristics during error execution. Then, we extend these results to include performance monitoring and error observation of others or of systems. Finally, we provide further directions in the study of system supervision and assess the limits preventing us from studying a well-known phenomenon: the Out-Of-the-Loop (OOL) performance problem.

  9. Impact of data transformation and preprocessing in supervised ...

    African Journals Online (AJOL)

    Impact of data transformation and preprocessing in supervised learning ... Nowadays, the ideas of integrating machine learning techniques in power system has ... The proposed algorithm used Python-based split train and k-fold model ...

  10. Researching online supervision

    DEFF Research Database (Denmark)

    Bengtsen, Søren S. E.; Mathiasen, Helle

    2014-01-01

    Online supervision and the use of digital media in supervisory dialogues are a fast-increasing practice in higher education today. However, the concepts in our pedagogical repertoire often reflect the digital tools used for supervision purposes as either a prolongation of the face-to-face contact...

  11. Clinical Supervision in Denmark

    DEFF Research Database (Denmark)

    Jacobsen, Claus Haugaard

    2011-01-01

    Core Questionnaire (DPCCQ) has only a few questions on supervision. To rectify this limitation, a recent Danish version of the DPCCQ included two new sections on supervision, one focusing on supervisees and another on supervisors and their supervisory training. This paper presents our initial findings...

  12. Evolution in banking supervision

    OpenAIRE

    Edward J. Stevens

    2000-01-01

    Banking supervision must keep pace with technical innovations in the banking industry. The international Basel Committee on Banking Supervision currently is reviewing public comments on its proposed new method for judging whether a bank maintains enough capital to absorb unexpected losses. This Economic Commentary explains how existing standards became obsolete and describes the new plan.

  13. Forskellighed i supervision

    DEFF Research Database (Denmark)

    Petersen, Birgitte; Beck, Emma

    2009-01-01

    Impressions and trends from the second Danish conference on supervision, held at the University of Copenhagen in October 2008...

  14. Networks of Professional Supervision

    Science.gov (United States)

    Annan, Jean; Ryba, Ken

    2013-01-01

    An ecological analysis of the supervisory activity of 31 New Zealand school psychologists examined simultaneously the theories of school psychology, supervision practices, and the contextual qualities that mediated participants' supervisory actions. The findings indicated that the school psychologists worked to achieve the supervision goals of…

  15. STRAT: an automated algorithm to retrieve the vertical structure of the atmosphere from single channel lidar data

    OpenAIRE

    Morille, Yohann; Haeffelin, Martial; Drobinski, Philippe; Pelon, Jacques

    2007-01-01

    Today several lidar networks around the world provide large data sets that are extremely valuable for aerosol and cloud research. Retrieval of atmospheric constituent properties from lidar profiles requires detailed analysis of spatial and temporal variations of the signal. This paper presents an algorithm called STRAT (STRucture of the ATmosphere) designed to retrieve the vertical distribution of cloud and aerosol layers in the boundary layer and through the free trop...

  16. Discovery of new natural products by application of X-hitting, a novel algorithm for automated comparison of full UV-spectra, combined with structural determination by NMR spectroscopy

    DEFF Research Database (Denmark)

    Larsen, Thomas Ostenfeld; Petersen, Bent O.; Duus, Jens Øllgaard

    2005-01-01

    X-hitting, a newly developed algorithm for automated comparison of UV data, has been used for the tracking of two novel spiro-quinazoline metabolites, lapatins A (1) and B (2), in a screening study targeting quinazolines. The structures of 1 and 2 were elucidated by analysis of spectroscopic data...

  17. Can a semi-automated surface matching and principal axis-based algorithm accurately quantify femoral shaft fracture alignment in six degrees of freedom?

    Science.gov (United States)

    Crookshank, Meghan C; Beek, Maarten; Singh, Devin; Schemitsch, Emil H; Whyne, Cari M

    2013-07-01

    Accurate alignment of femoral shaft fractures treated with intramedullary nailing remains a challenge for orthopaedic surgeons. The aim of this study is to develop and validate a cone-beam CT-based, semi-automated algorithm to quantify the malalignment in six degrees of freedom (6DOF) using a surface matching and principal axes-based approach. Complex comminuted diaphyseal fractures were created in nine cadaveric femora and cone-beam CT images were acquired (27 cases total). Scans were cropped and segmented using intensity-based thresholding, producing superior, inferior and comminution volumes. Cylinders were fit to estimate the long axes of the superior and inferior fragments. The angle and distance between the two cylindrical axes were calculated to determine flexion/extension and varus/valgus angulation and medial/lateral and anterior/posterior translations, respectively. Both surfaces were unwrapped about the cylindrical axes. Three methods of matching the unwrapped surface for determination of periaxial rotation were compared based on minimizing the distance between features. The calculated corrections were compared to the input malalignment conditions. All 6DOF were calculated to within current clinical tolerances for all but two cases. This algorithm yielded accurate quantification of malalignment of femoral shaft fractures for fracture gaps up to 60 mm, based on a single CBCT image of the fractured limb. Copyright © 2012 IPEM. Published by Elsevier Ltd. All rights reserved.
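
    A simplified sketch of the principal-axis step (using PCA of fragment surface points in place of explicit cylinder fitting; purely illustrative, with synthetic fragments):

        # Sketch: estimate each fragment's long axis as the first principal
        # component of its surface points, then take the angle between axes.
        import numpy as np

        def long_axis(points):
            centred = points - points.mean(axis=0)
            _, _, vt = np.linalg.svd(centred, full_matrices=False)
            return vt[0]                      # direction of greatest extent

        def angulation_deg(sup_pts, inf_pts):
            a, b = long_axis(sup_pts), long_axis(inf_pts)
            cosang = abs(np.dot(a, b))        # sign-invariant
            return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

        rng = np.random.default_rng(3)
        shaft = rng.normal(0, [1, 1, 30], (1000, 3))   # elongated along z
        rot = np.array([[1, 0, 0],
                        [0, np.cos(0.1), -np.sin(0.1)],
                        [0, np.sin(0.1),  np.cos(0.1)]])
        print(angulation_deg(shaft, shaft @ rot.T))    # ~5.7 degrees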

  18. Adaptive algorithm of selecting optimal variant of errors detection system for digital means of automation facility of oil and gas complex

    Science.gov (United States)

    Poluyan, A. Y.; Fugarov, D. D.; Purchina, O. A.; Nesterchuk, V. V.; Smirnova, O. V.; Petrenkova, S. B.

    2018-05-01

    To date, the problems associated with the detection of errors in digital equipment (DE) systems for the automation of explosive objects of the oil and gas complex are highly relevant. This problem is especially acute for facilities where a loss of DE accuracy will inevitably lead to man-made disasters and substantial material damage; at such facilities, diagnosing the accuracy of DE operation is one of the main elements of the industrial safety management system. This work addresses the selection of the optimal variant of the error detection system according to a validation criterion. Known methods for solving such problems have exponential estimates of labor intensity. Thus, with a view to reducing solution time, the validation criterion is implemented as an adaptive bionic algorithm. Bionic algorithms (BA) have proven effective in solving optimization problems. The advantages of bionic search include adaptability, learning ability, parallelism, and the ability to build hybrid systems based on their combination [1].

  19. An automated and robust image processing algorithm for glaucoma diagnosis from fundus images using novel blood vessel tracking and bend point detection.

    Science.gov (United States)

    M, Soorya; Issac, Ashish; Dutta, Malay Kishore

    2018-02-01

    Glaucoma is an ocular disease which can cause irreversible blindness. The disease is currently identified manually by optometrists using specialized equipment. The proposed work aims to provide an efficient imaging solution that can help automate the process of glaucoma diagnosis from digital fundus images using computer vision techniques. The proposed method segments the optic disc using a geometrical-feature-based strategic framework, which improves detection accuracy and makes the algorithm invariant to illumination and noise. Novel methods based on corner thresholding and point-contour joining are proposed to construct smooth contours of the optic disc. Following the clinical approach used by ophthalmologists, the proposed algorithm tracks blood vessels inside the disc region, identifies the points at which vessels first bend away from the optic disc boundary, and connects them to obtain the contours of the optic cup. The proposed method has been compared with ground truth marked by medical experts, and the similarity parameters used to determine its performance have yielded a high similarity of segmentation. The proposed method achieved a macro-averaged f-score of 0.9485 and an accuracy of 97.01% in correctly classifying fundus images. The proposed method is clinically significant and can be used for real-time glaucoma screening over a large population. Copyright © 2017 Elsevier B.V. All rights reserved.

  20. The implementation of an automated tracking algorithm for the track detection of migratory anticyclones affecting the Mediterranean

    Science.gov (United States)

    Hatzaki, Maria; Flocas, Elena A.; Simmonds, Ian; Kouroutzoglou, John; Keay, Kevin; Rudeva, Irina

    2013-04-01

    Migratory cyclones and anticyclones mainly account for the short-term weather variations in extra-tropical regions. In contrast to cyclones, which have drawn major scientific attention due to their direct link to active weather and precipitation, climatological studies of anticyclones are limited, even though they, too, are associated with extreme weather phenomena and play an important role in global and regional climate. This is especially true for the Mediterranean, a region particularly vulnerable to climate change, and the little research that has been done is essentially confined to the manual analysis of synoptic charts. For the construction of a comprehensive climatology of migratory anticyclonic systems in the Mediterranean using an objective methodology, the Melbourne University automatic tracking algorithm is applied to the ERA-Interim reanalysis mean sea level pressure database. The algorithm's reliability in accurately capturing the weather patterns and synoptic climatology of transient activity has been widely proven. It has been extensively applied in cyclone studies worldwide, including the Mediterranean, though its use for anticyclone tracking has so far been limited to the Southern Hemisphere. In this study the performance of the tracking algorithm under different data resolutions and different choices of parameter settings in the scheme is examined. Our focus is on the appropriate modification of the algorithm in order to efficiently capture the individual characteristics of anticyclonic tracks in the Mediterranean, a closed basin with complex topography. We show that the number of detected anticyclonic centers and the resulting tracks largely depend upon the data resolution and the search radius. We also find that different-scale anticyclones and secondary centers that lie within larger anticyclone structures can be adequately represented; this is important, since the extensions of major

  1. Semi-supervised clustering methods.

    Science.gov (United States)

    Bair, Eric

    2013-01-01

    Cluster analysis methods seek to partition a data set into homogeneous subgroups. It is useful in a wide variety of applications, including document processing and modern genetics. Conventional clustering methods are unsupervised, meaning that there is no outcome variable nor is anything known about the relationship between the observations in the data set. In many situations, however, information about the clusters is available in addition to the values of the features. For example, the cluster labels of some observations may be known, or certain observations may be known to belong to the same cluster. In other cases, one may wish to identify clusters that are associated with a particular outcome variable. This review describes several clustering algorithms (known as "semi-supervised clustering" methods) that can be applied in these situations. The majority of these methods are modifications of the popular k-means clustering method, and several of them will be described in detail. A brief description of some other semi-supervised clustering algorithms is also provided.

  2. Semi-supervised clustering methods

    Science.gov (United States)

    Bair, Eric

    2013-01-01

    Cluster analysis methods seek to partition a data set into homogeneous subgroups. It is useful in a wide variety of applications, including document processing and modern genetics. Conventional clustering methods are unsupervised, meaning that there is no outcome variable nor is anything known about the relationship between the observations in the data set. In many situations, however, information about the clusters is available in addition to the values of the features. For example, the cluster labels of some observations may be known, or certain observations may be known to belong to the same cluster. In other cases, one may wish to identify clusters that are associated with a particular outcome variable. This review describes several clustering algorithms (known as “semi-supervised clustering” methods) that can be applied in these situations. The majority of these methods are modifications of the popular k-means clustering method, and several of them will be described in detail. A brief description of some other semi-supervised clustering algorithms is also provided. PMID:24729830
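
    One of the simplest k-means modifications mentioned above is seeding with observations of known cluster membership; a minimal sketch (illustrative, with hypothetical inputs) follows:

        # Sketch: seeded k-means. Centroids are initialized from, and stay
        # anchored to, the observations whose cluster labels are known.
        import numpy as np

        def seeded_kmeans(X, seeds, k, iters=20):
            """seeds: dict cluster_id -> array of known member rows of X."""
            centroids = np.array([seeds[c].mean(axis=0) for c in range(k)])
            for _ in range(iters):
                # Assignment step: nearest centroid for every observation.
                d = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
                labels = d.argmin(axis=1)
                # Update step: recompute centroids, keeping the seed points
                # attached to their known clusters.
                for c in range(k):
                    members = np.vstack([X[labels == c], seeds[c]])
                    centroids[c] = members.mean(axis=0)
            return labels, centroids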

  3. Feasibility of a semi-automated contrast-oriented algorithm for tumor segmentation in retrospectively gated PET images: phantom and clinical validation

    Science.gov (United States)

    Carles, Montserrat; Fechter, Tobias; Nemer, Ursula; Nanko, Norbert; Mix, Michael; Nestle, Ursula; Schaefer, Andrea

    2015-12-01

    PET/CT plays an important role in radiotherapy planning for lung tumors. Several segmentation algorithms have been proposed for PET tumor segmentation. However, most of them do not take into account respiratory motion and are not well validated. The aim of this work was to evaluate a semi-automated contrast-oriented algorithm (COA) for PET tumor segmentation adapted to retrospectively gated (4D) images. The evaluation involved a wide set of 4D-PET/CT acquisitions of dynamic experimental phantoms and lung cancer patients. In addition, segmentation accuracy of 4D-COA was compared with four other state-of-the-art algorithms. In phantom evaluation, the physical properties of the objects defined the gold standard. In clinical evaluation, the ground truth was estimated by the STAPLE (Simultaneous Truth and Performance Level Estimation) consensus of three manual PET contours by experts. Algorithm evaluation with phantoms resulted in: (i) no statistically significant diameter differences for different targets and movements (Δφ = 0.3 ± 1.6 mm); (ii) reproducibility for heterogeneous and irregular targets independent of user initial interaction; and (iii) good segmentation agreement for irregular targets compared to manual CT delineation in terms of Dice Similarity Coefficient (DSC = 0.66 ± 0.04), Positive Predictive Value (PPV = 0.81 ± 0.06) and Sensitivity (Sen. = 0.49 ± 0.05). In clinical evaluation, the segmented volume was in reasonable agreement with the consensus volume (difference in volume (%Vol) = 40 ± 30, DSC = 0.71 ± 0.07 and PPV = 0.90 ± 0.13). High accuracy in target tracking position (ΔME) was obtained for experimental and clinical data (ΔME_exp = 0 ± 3 mm; ΔME_clin = 0.3 ± 1.4 mm). In the comparison with other lung segmentation methods, 4D-COA has shown the highest volume accuracy in both experimental and clinical data. In conclusion, the accuracy in volume

  4. Automated pulmonary nodule volumetry with an optimized algorithm - accuracy at different slice thicknesses compared to unidimensional and bidimentional measurements

    International Nuclear Information System (INIS)

    Vogel, M.N.; Schmuecker, S.; Maksimovich, O.; Claussen, C.D.; Horger, M.; Vonthein, R.; Bethge, W.; Dicken, V.

    2008-01-01

    Purpose: This in-vivo study quantifies the accuracy of automated pulmonary nodule volumetry in reconstructions with different slice thicknesses (ST) of clinical routine CT scans. The accuracy of volumetry is compared to that of unidimensional and bidimensional measurements. Materials and Methods: 28 patients underwent contrast-enhanced 64-row CT scans of the chest and abdomen obtained in the clinical routine. All scans were reconstructed with 1, 3, and 5 mm ST. Volume, maximum axial diameter, and areas following the guidelines of Response Evaluation Criteria in Solid Tumors (RECIST) and the World Health Organization (WHO) were measured in all 101 lesions located in the overlap region of both scans using the new software tool OncoTreat (MeVis, Germany). The accuracy of quantification in both scans was evaluated using the Bland and Altman method. The reproducibility of measurements in dependence on the ST was compared using the likelihood-ratio Chi-squared test. Results: A total of 101 nodules were identified in all patients. Segmentation was considered successful in 88.1% of the cases without local manual correction, which was deliberately not employed in this study. For 80 nodules all 6 measurements were successful. These were statistically evaluated. The volumes were in the range 0.1 to 15.6 ml. Of all 80 lesions, 34 (42%) had direct contact with the pleura parietalis or diaphragmalis and were termed parapleural, 32 (40%) were paravascular, 7 (9%) both parapleural and paravascular, and the remaining 21 (27%) were free-standing in the lung. The trueness differed significantly (Chi-square 7.22, p value 0.027) and was best with an ST of 3 mm and worst at 5 mm. Differences in precision were not significant (Chi-square 5.20, p value 0.074). The limits of agreement for an ST of 3 mm were ± 17.5% of the mean volume for volumetry, ± 1.3 mm for maximum diameters, and ± 31.8% for the calculated areas. Conclusion: Automated volumetry of pulmonary nodules using Onco

  5. Supervision som undervisningsform i voksenspecialundervisningen

    DEFF Research Database (Denmark)

    Kristensen, René

    2000-01-01

    Supervision as a form of teaching in adult special needs education. Process work in the teaching of adults....

  6. Supervised Convolutional Sparse Coding

    KAUST Repository

    Affara, Lama Ahmed

    2018-04-08

    Convolutional Sparse Coding (CSC) is a well-established image representation model especially suited for image restoration tasks. In this work, we extend the applicability of this model by proposing a supervised approach to convolutional sparse coding, which aims at learning discriminative dictionaries instead of purely reconstructive ones. We incorporate a supervised regularization term into the traditional unsupervised CSC objective to encourage the final dictionary elements to be discriminative. Experimental results show that using supervised convolutional learning results in two key advantages. First, we learn more semantically relevant filters in the dictionary and second, we achieve improved image reconstruction on unseen data.
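
    One plausible form of such an objective (an illustrative assumption, not the paper's exact formulation) augments the usual CSC reconstruction-plus-sparsity terms with a supervised loss on the codes:

        % Illustrative supervised CSC objective: reconstruction of image x by
        % dictionary filters d_k convolved (*) with sparse maps z_k, an l1
        % sparsity penalty, and a supervised loss l on a linear classifier W
        % applied to the codes z, with label y.
        \min_{\{d_k\}, \{z_k\}, W} \Bigl\| x - \sum_k d_k * z_k \Bigr\|_2^2
          + \lambda \sum_k \| z_k \|_1
          + \gamma\, \ell\bigl(y, W z\bigr)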

  7. Rethinking Educational Supervision

    Directory of Open Access Journals (Sweden)

    Burhanettin DÖNMEZ

    2009-08-01

    Full Text Available The history of educational (school) supervision has been influenced by the history of the interaction of intellectual movements in politics, society, philosophy and industrial movements. The purpose of this conceptual and theoretical study is to take a brief look at the concept of educational supervision and related historical developments in the field. The paper also intends to examine the terms and issues critically, and to conceptualize some issues associated with educational supervision in practice. In the paper, the issues are discussed and a number of suggestions are put forward for debate.

  8. Automated treatment planning for a dedicated multi-source intracranial radiosurgery treatment unit using projected gradient and grassfire algorithms.

    Science.gov (United States)

    Ghobadi, Kimia; Ghaffari, Hamid R; Aleman, Dionne M; Jaffray, David A; Ruschin, Mark

    2012-06-01

    The purpose of this work is to develop a framework for solving the inverse problem of radiosurgery treatment planning on the Gamma Knife(®) Perfexion™ (PFX) for intracranial targets. The approach taken in the present study consists of two parts. First, a hybrid grassfire and sphere-packing algorithm is used to obtain shot positions (isocenters) based on the geometry of the target to be treated. For the selected isocenters, a sector duration optimization (SDO) model is used to optimize the duration of radiation delivery from each collimator size from each individual source bank. The SDO model is solved using a projected gradient algorithm. This approach has been retrospectively tested on seven manually planned clinical cases (comprising 11 lesions) including acoustic neuromas and brain metastases. In terms of conformity and organ-at-risk (OAR) sparing, the quality of plans achieved with the inverse planning approach was, on average, improved compared to the manually generated plans. The mean difference in conformity index between inverse and forward plans was -0.12 (range: -0.27 to +0.03) and +0.08 (range: 0.00-0.17) for classic and Paddick definitions, respectively, favoring the inverse plans. The mean difference in volume receiving the prescribed dose (V(100)) between forward and inverse plans was 0.2% (range: -2.4% to +2.0%). After plan renormalization for equivalent coverage (i.e., V(100)), the mean difference in dose to 1 mm(3) of brainstem between forward and inverse plans was -0.24 Gy (range: -2.40 to +2.02 Gy) favoring the inverse plans. Beam-on time varied with the number of isocenters but for the most optimal plans was on average 33 min longer than manual plans (range: -17 to +91 min) when normalized to a calibration dose rate of 3.5 Gy/min. In terms of algorithm performance, the isocenter selection for all the presented plans was performed in less than 3 s, while the SDO was performed in an average of 215 min. PFX inverse planning can be performed using
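
    As an illustration of the optimization step described above, the sketch below implements a projected gradient method for nonnegative sector durations. It is a minimal toy version: the random dose-influence matrix A, the uniform prescription d, and the 1/L step size are assumptions for demonstration, not the authors' actual PFX dose model.

    ```python
    # Toy projected gradient for sector-duration optimization (SDO):
    # minimize ||A t - d||^2 subject to t >= 0, where t holds beam-on times.
    import numpy as np

    def sdo_projected_gradient(A, d, n_iter=500):
        t = np.zeros(A.shape[1])
        step = 1.0 / np.linalg.norm(A, 2) ** 2    # 1/L for the quadratic objective
        for _ in range(n_iter):
            grad = A.T @ (A @ t - d)              # gradient of the dose misfit
            t = np.maximum(0.0, t - step * grad)  # project onto the feasible set t >= 0
        return t

    rng = np.random.default_rng(0)
    A = rng.random((200, 24))   # 200 voxels x 24 sector/collimator combinations (assumed)
    d = np.full(200, 12.0)      # uniform prescribed dose in Gy (assumed)
    print(sdo_projected_gradient(A, d).round(2))
    ```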

  9. Automated treatment planning for a dedicated multi-source intracranial radiosurgery treatment unit using projected gradient and grassfire algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Ghobadi, Kimia; Ghaffari, Hamid R.; Aleman, Dionne M.; Jaffray, David A.; Ruschin, Mark [Department of Mechanical and Industrial Engineering, University of Toronto, 5 King's College Road, Toronto, Ontario M5S 3G8 (Canada); Department of Radiation Oncology, University of Toronto, Radiation Medicine Program, Princess Margaret Hospital, 610 University Avenue, Toronto, Ontario M5G 2M9 (Canada)

    2012-06-15

    Purpose: The purpose of this work is to develop a framework for solving the inverse problem of radiosurgery treatment planning on the Gamma Knife® Perfexion™ (PFX) for intracranial targets. Methods: The approach taken in the present study consists of two parts. First, a hybrid grassfire and sphere-packing algorithm is used to obtain shot positions (isocenters) based on the geometry of the target to be treated. For the selected isocenters, a sector duration optimization (SDO) model is used to optimize the duration of radiation delivery from each collimator size from each individual source bank. The SDO model is solved using a projected gradient algorithm. This approach has been retrospectively tested on seven manually planned clinical cases (comprising 11 lesions) including acoustic neuromas and brain metastases. Results: In terms of conformity and organ-at-risk (OAR) sparing, the quality of plans achieved with the inverse planning approach was, on average, improved compared to the manually generated plans. The mean difference in conformity index between inverse and forward plans was -0.12 (range: -0.27 to +0.03) and +0.08 (range: 0.00-0.17) for classic and Paddick definitions, respectively, favoring the inverse plans. The mean difference in volume receiving the prescribed dose (V(100)) between forward and inverse plans was 0.2% (range: -2.4% to +2.0%). After plan renormalization for equivalent coverage (i.e., V(100)), the mean difference in dose to 1 mm(3) of brainstem between forward and inverse plans was -0.24 Gy (range: -2.40 to +2.02 Gy) favoring the inverse plans. Beam-on time varied with the number of isocenters but for the most optimal plans was on average 33 min longer than manual plans (range: -17 to +91 min) when normalized to a calibration dose rate of 3.5 Gy/min. In terms of algorithm performance, the isocenter selection for all the presented plans was performed in less than 3 s, while the SDO was performed in an

  10. Automated treatment planning for a dedicated multi-source intracranial radiosurgery treatment unit using projected gradient and grassfire algorithms

    International Nuclear Information System (INIS)

    Ghobadi, Kimia; Ghaffari, Hamid R.; Aleman, Dionne M.; Jaffray, David A.; Ruschin, Mark

    2012-01-01

    Purpose: The purpose of this work is to develop a framework for solving the inverse problem of radiosurgery treatment planning on the Gamma Knife® Perfexion™ (PFX) for intracranial targets. Methods: The approach taken in the present study consists of two parts. First, a hybrid grassfire and sphere-packing algorithm is used to obtain shot positions (isocenters) based on the geometry of the target to be treated. For the selected isocenters, a sector duration optimization (SDO) model is used to optimize the duration of radiation delivery from each collimator size from each individual source bank. The SDO model is solved using a projected gradient algorithm. This approach has been retrospectively tested on seven manually planned clinical cases (comprising 11 lesions) including acoustic neuromas and brain metastases. Results: In terms of conformity and organ-at-risk (OAR) sparing, the quality of plans achieved with the inverse planning approach was, on average, improved compared to the manually generated plans. The mean difference in conformity index between inverse and forward plans was −0.12 (range: −0.27 to +0.03) and +0.08 (range: 0.00–0.17) for classic and Paddick definitions, respectively, favoring the inverse plans. The mean difference in volume receiving the prescribed dose (V(100)) between forward and inverse plans was 0.2% (range: −2.4% to +2.0%). After plan renormalization for equivalent coverage (i.e., V(100)), the mean difference in dose to 1 mm(3) of brainstem between forward and inverse plans was −0.24 Gy (range: −2.40 to +2.02 Gy) favoring the inverse plans. Beam-on time varied with the number of isocenters but for the most optimal plans was on average 33 min longer than manual plans (range: −17 to +91 min) when normalized to a calibration dose rate of 3.5 Gy/min. In terms of algorithm performance, the isocenter selection for all the presented plans was performed in less than 3 s, while the SDO was performed in an average of 215 min

  11. A semi-automated 2D/3D marker-based registration algorithm modelling prostate shrinkage during radiotherapy for prostate cancer

    International Nuclear Information System (INIS)

    Budiharto, Tom; Slagmolen, Pieter; Hermans, Jeroen; Maes, Frederik; Verstraete, Jan; Heuvel, Frank Van den; Depuydt, Tom; Oyen, Raymond; Haustermans, Karin

    2009-01-01

    Background and purpose: Currently, most available patient alignment tools based on implanted markers use manual marker matching and rigid registration transformations to measure the needed translational shifts. To quantify the particular effect of prostate gland shrinkage, implanted gold markers were tracked during a course of radiotherapy including an isotropic scaling factor to model prostate shrinkage. Materials and methods: Eight patients with prostate cancer had gold markers implanted transrectally and seven were treated with (neo)adjuvant androgen deprivation therapy. After patient alignment to skin tattoos, orthogonal electronic portal images (EPIs) were taken. A semi-automated 2D/3D marker-based registration was performed to calculate the necessary couch shifts. The registration consists of a rigid transformation combined with an isotropic scaling to model prostate shrinkage. Results: The inclusion of an isotropic shrinkage model in the registration algorithm cancelled the corresponding increase in registration error. The mean scaling factor was 0.89 ± 0.09. For all but two patients, a decrease of the isotropic scaling factor during treatment was observed. However, there was almost no difference in the translation offset between the manual matching of the EPIs to the digitally reconstructed radiographs and the semi-automated 2D/3D registration. A decrease in the intermarker distance was found, correlating with prostate shrinkage rather than with random marker migration. Conclusions: Inclusion of shrinkage in the registration process reduces registration errors during a course of radiotherapy. Nevertheless, this did not lead to a clinically significant change in the proposed table translations when compared to translations obtained with manual marker matching without a scaling correction

  12. Comparison of K-means and fuzzy c-means algorithm performance for automated determination of the arterial input function.

    Science.gov (United States)

    Yin, Jiandong; Sun, Hongzan; Yang, Jiawen; Guo, Qiyong

    2014-01-01

    The arterial input function (AIF) plays a crucial role in the quantification of cerebral perfusion parameters. The traditional method for AIF detection is based on manual operation, which is time-consuming and subjective. Two automatic methods have been reported that are based on two frequently used clustering algorithms: fuzzy c-means (FCM) and K-means. However, it is still not clear which is better for AIF detection. Hence, we compared the performance of these two clustering methods using both simulated and clinical data. The results demonstrate that K-means analysis can yield more accurate and robust AIF results, although it takes longer to execute than the FCM method. We consider that this longer execution time is trivial relative to the total time required for image manipulation in a PACS setting, and is acceptable if an ideal AIF is obtained. Therefore, the K-means method is preferable to FCM in AIF detection.
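
    A minimal sketch of the K-means variant of this idea is given below: voxel concentration-time curves are clustered, and the cluster whose mean curve peaks earliest and highest is taken as arterial. The gamma-variate curve synthesis and the peak/time-to-peak scoring rule are simplifying assumptions, not the exact criteria of the compared methods.

    ```python
    # K-means AIF detection sketch: cluster concentration-time curves, keep the
    # cluster with an early, high peak as the arterial candidate.
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(1)
    t = np.linspace(0.0, 60.0, 40)

    def gamma_variate(t, t0, ymax, alpha=3.0, beta=1.5):
        """Simple gamma-variate bolus model for synthetic curves (assumed)."""
        s = np.clip(t - t0, 0.0, None)
        y = (s ** alpha) * np.exp(-s / beta)
        return ymax * y / y.max()

    # 500 tissue curves (late, low peak) and 20 arterial curves (early, high peak)
    curves = np.vstack(
        [gamma_variate(t, rng.uniform(8, 12), rng.uniform(1, 2)) for _ in range(500)]
        + [gamma_variate(t, rng.uniform(3, 5), rng.uniform(6, 8)) for _ in range(20)]
    ) + rng.normal(0, 0.05, (520, 40))

    km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(curves)
    centers = km.cluster_centers_
    # Score clusters: high peak amplitude and early time-to-peak favour arteries
    score = centers.max(axis=1) / (1.0 + t[centers.argmax(axis=1)])
    aif_cluster = int(np.argmax(score))
    aif = curves[km.labels_ == aif_cluster].mean(axis=0)
    print("AIF cluster:", aif_cluster, "members:", int((km.labels_ == aif_cluster).sum()))
    ```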

  13. Semi-supervised learning of hyperspectral image segmentation applied to vine tomatoes and table grapes

    Directory of Open Access Journals (Sweden)

    Jeroen van Roy

    2018-03-01

    Full Text Available Nowadays, quality inspection of fruit and vegetables is typically accomplished through visual inspection. Automation of this inspection is desirable to make it more objective. For this, hyperspectral imaging has been identified as a promising technique. When the field of view includes multiple objects, hypercubes should be segmented to assign individual pixels to different objects. Unsupervised and supervised methods have been proposed. While the latter are labour-intensive as they require masking of the training images, the former are too computationally intensive for in-line use and may provide different results for different hypercubes. Therefore, a semi-supervised method is proposed to train a computationally efficient segmentation algorithm with minimal human interaction. As a first step, an unsupervised classification model is used to cluster spectra in similar groups. In the second step, a pixel selection algorithm applied to the output of the unsupervised classification is used to build a supervised model which is fast enough for in-line use. To evaluate this approach, it is applied to hypercubes of vine tomatoes and table grapes. After first derivative spectral preprocessing to remove intensity variation due to curvature and gloss effects, the unsupervised models segmented 86.11% of the vine tomato images correctly. Considering overall accuracy, sensitivity, specificity and time needed to segment one hypercube, partial least squares discriminant analysis (PLS-DA) was found to be the best choice for in-line use, when using one training image. By adding a second image, the segmentation results improved considerably, yielding an overall accuracy of 96.95% for segmentation of vine tomatoes and 98.52% for segmentation of table grapes, demonstrating the added value of the learning phase in the algorithm.
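
    The two-step procedure sketched below mirrors the description above: an unsupervised clustering of spectra provides pseudo-labels for confident pixels, which then train a fast PLS-DA model (here scikit-learn's PLSRegression on one-hot labels). The synthetic spectra and the 20th-percentile confidence rule are assumptions for illustration.

    ```python
    # Two-step semi-supervised segmentation sketch: unsupervised clustering of
    # spectra -> pseudo-labels for confident pixels -> fast supervised PLS-DA.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(2)
    n_bands = 50
    fruit = rng.normal(1.0, 0.05, (1000, n_bands)) * np.linspace(0.5, 1.5, n_bands)
    backg = rng.normal(1.0, 0.05, (1000, n_bands)) * np.linspace(1.5, 0.5, n_bands)
    X = np.vstack([fruit, backg])          # synthetic "hypercube" pixels (assumed)

    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
    dist = km.transform(X).min(axis=1)     # distance of each pixel to its centroid
    conf = dist < np.percentile(dist, 20)  # keep the 20% most confident pixels
    Y = np.eye(2)[km.labels_[conf]]        # one-hot pseudo-labels

    pls = PLSRegression(n_components=5).fit(X[conf], Y)  # PLS-DA on pseudo-labels
    pred = pls.predict(X).argmax(axis=1)                 # fast per-pixel labelling
    print("agreement with clustering: %.3f" % (pred == km.labels_).mean())
    ```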

  14. Automated quantification of cerebral edema following hemispheric infarction: Application of a machine-learning algorithm to evaluate CSF shifts on serial head CTs

    Directory of Open Access Journals (Sweden)

    Yasheng Chen

    2016-01-01

    Full Text Available Although cerebral edema is a major cause of death and deterioration following hemispheric stroke, there remains no validated biomarker that captures the full spectrum of this critical complication. We recently demonstrated that reduction in intracranial cerebrospinal fluid (CSF) volume (∆CSF) on serial computed tomography (CT) scans provides an accurate measure of cerebral edema severity, which may aid in early triaging of stroke patients for craniectomy. However, application of such a volumetric approach would be too cumbersome to perform manually on serial scans in a real-world setting. We developed and validated an automated technique for CSF segmentation via integration of random forest (RF) based machine learning with geodesic active contour (GAC) segmentation. The proposed RF + GAC approach was compared to conventional Hounsfield Unit (HU) thresholding and RF segmentation methods using the Dice similarity coefficient (DSC) and the correlation of volumetric measurements, with manual delineation serving as the ground truth. CSF spaces were outlined on scans performed at baseline (<6 h after stroke onset) and early follow-up (FU; closest to 24 h) in 38 acute ischemic stroke patients. RF performed significantly better than optimized HU thresholding (p < 10−4 in baseline and p < 10−5 in FU) and RF + GAC performed significantly better than RF (p < 10−3 in baseline and p < 10−5 in FU). Pearson correlation coefficients between the automatically detected ∆CSF and the ground truth were r = 0.178 (p = 0.285), r = 0.876 (p < 10−6) and r = 0.879 (p < 10−6) for thresholding, RF and RF + GAC, respectively, with a slope closer to the line of identity in RF + GAC. When we applied the algorithm trained on images from one stroke center to segment CTs from another center, similar findings held. In conclusion, we have developed and validated an accurate automated approach to segment CSF and calculate its shifts on serial CT scans.
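
    A compact sketch of the RF + GAC combination on a synthetic CT slice is shown below, using scikit-learn's random forest and scikit-image's morphological geodesic active contour. The two-feature voxel description and the 5% "annotated" training mask are illustrative assumptions, not the authors' feature set.

    ```python
    # RF + GAC sketch: a random forest produces a CSF probability map, which a
    # morphological geodesic active contour then refines.
    import numpy as np
    from scipy.ndimage import uniform_filter
    from sklearn.ensemble import RandomForestClassifier
    from skimage.segmentation import (inverse_gaussian_gradient,
                                      morphological_geodesic_active_contour)

    rng = np.random.default_rng(3)
    img = rng.normal(35.0, 3.0, (128, 128))          # parenchyma around 35 HU
    yy, xx = np.mgrid[:128, :128]
    csf = (yy - 64) ** 2 + (xx - 64) ** 2 < 20 ** 2  # dark CSF disc around 5 HU
    img[csf] = rng.normal(5.0, 3.0, int(csf.sum()))

    # Step 1: random forest on simple per-voxel features (intensity, local mean)
    feats = np.stack([img, uniform_filter(img, 5)], axis=-1).reshape(-1, 2)
    train = rng.random(feats.shape[0]) < 0.05        # sparse "annotated" voxels
    rf = RandomForestClassifier(n_estimators=50, random_state=0)
    rf.fit(feats[train], csf.ravel()[train])
    prob = rf.predict_proba(feats)[:, 1].reshape(img.shape)

    # Step 2: geodesic active contour refines the RF probability map
    seg = morphological_geodesic_active_contour(
        inverse_gaussian_gradient(img), 50, init_level_set=(prob > 0.5)).astype(bool)
    print("Dice: %.3f" % (2 * (seg & csf).sum() / (seg.sum() + csf.sum())))
    ```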

  15. Supervised Convolutional Sparse Coding

    KAUST Repository

    Affara, Lama Ahmed; Ghanem, Bernard; Wonka, Peter

    2018-01-01

    A supervised approach to convolutional sparse coding, which aims at learning discriminative dictionaries instead of purely reconstructive ones, incorporating a supervised regularization term into the traditional unsupervised CSC objective to encourage the final dictionary elements to be discriminative.

  16. Supervision and group dynamics

    DEFF Research Database (Denmark)

    Hansen, Søren; Jensen, Lars Peter

    2004-01-01

    An important aspect of the problem-based and project-organized study at Aalborg University is the supervision of the project groups. At the basic education (first year) it is stated in the curriculum that part of the supervisors' job is to deal with group dynamics. This is due to the experience...... that many students are having difficulties with practical issues such as collaboration, communication, and project management. Most supervisors either ignore this demand because they do not find it important, or they find it frustrating because they do not know how to supervise group dynamics...... as well as at Aalborg University. The first visible result has been participating supervisors telling us that the course has inspired them to try supervising group dynamics in the future. This paper will explore some aspects of supervising group dynamics as well as how to develop the Aalborg model...

  17. Supervision af psykoterapi

    DEFF Research Database (Denmark)

    SUPERVISION OF PSYCHOTHERAPY occupies a central position in the training and development of psychotherapists. Despite several points of similarity with psychotherapy, teaching and consultation, psychotherapy supervision is a field of activity in its own right. Besides being a trained psychotherapist, the supervisor must know...... the framework of supervision and its position in relation to the organization and society. A number of chapters deal with the supervisor's tasks, roles and control function, with supervision seen from the supervisee's perspective, and with reflections on relations and processes in supervision. The advantages and disadvantages of the...... different ways in which a case can be presented are discussed. The first part of the book concludes with reflections on the ethical aspects of psychotherapy supervision. The second part of the book deals with the special conditions that apply to supervision of a number of specialized forms of treatment or of psychotherapy with particular...

  18. Psykoterapi og supervision

    DEFF Research Database (Denmark)

    Jacobsen, Claus Haugaard

    2014-01-01

    The chapter describes the functions of supervision in relation to psychotherapy. Supervision of psychotherapy generally refers to a psychotherapist consulting an often more experienced colleague (the supervisor) in order to discuss a specific, ongoing psychotherapeutic treatment. The purpose...... is to promote the professional development of the therapist and to ensure the quality of the treatment. It is explained why supervision is an important part of the psychotherapist's profession, and it is shown how supervision, in addition to supporting professional development, is also an important tool in...... the quality assurance of psychotherapy. After a discussion of some ethical aspects of supervision, a few research results concerning the supervision of psychotherapy among Danish psychologists are finally presented....

  19. Semi-supervised and unsupervised extreme learning machines.

    Science.gov (United States)

    Huang, Gao; Song, Shiji; Gupta, Jatinder N D; Wu, Cheng

    2014-12-01

    Extreme learning machines (ELMs) have proven to be efficient and effective learning mechanisms for pattern classification and regression. However, ELMs are primarily applied to supervised learning problems. Only a few existing research papers have used ELMs to explore unlabeled data. In this paper, we extend ELMs for both semi-supervised and unsupervised tasks based on the manifold regularization, thus greatly expanding the applicability of ELMs. The key advantages of the proposed algorithms are as follows: 1) both the semi-supervised ELM (SS-ELM) and the unsupervised ELM (US-ELM) exhibit learning capability and computational efficiency of ELMs; 2) both algorithms naturally handle multiclass classification or multicluster clustering; and 3) both algorithms are inductive and can handle unseen data at test time directly. Moreover, it is shown in this paper that all the supervised, semi-supervised, and unsupervised ELMs can actually be put into a unified framework. This provides new perspectives for understanding the mechanism of random feature mapping, which is the key concept in ELM theory. Empirical study on a wide range of data sets demonstrates that the proposed algorithms are competitive with the state-of-the-art semi-supervised or unsupervised learning algorithms in terms of accuracy and efficiency.
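
    The semi-supervised variant can be sketched in a few lines: a random ELM feature map followed by a ridge solution with a graph-Laplacian manifold regularizer. The hyperparameters and the k-NN graph construction below are assumptions for illustration; the paper's exact formulation differs in details such as per-class weighting.

    ```python
    # Semi-supervised ELM sketch: random feature map + Laplacian-regularized ridge.
    import numpy as np
    from sklearn.datasets import make_moons
    from sklearn.neighbors import kneighbors_graph

    def ss_elm(X, lab_idx, y_lab, n_hidden=200, gamma=1e-2, lam=1e-2, k=5, seed=0):
        rng = np.random.default_rng(seed)
        W = rng.normal(size=(X.shape[1], n_hidden))
        b = rng.normal(size=n_hidden)
        H = np.tanh(X @ W + b)                    # random ELM feature mapping
        A = kneighbors_graph(X, k)                # k-NN graph over all points
        A = 0.5 * (A + A.T)
        L = np.diag(np.asarray(A.sum(axis=1)).ravel()) - A.toarray()  # Laplacian
        Hl, Y = H[lab_idx], np.eye(int(y_lab.max()) + 1)[y_lab]
        beta = np.linalg.solve(
            Hl.T @ Hl + gamma * np.eye(n_hidden) + lam * H.T @ L @ H, Hl.T @ Y)
        return lambda Xt: np.tanh(Xt @ W + b) @ beta

    X, y = make_moons(400, noise=0.1, random_state=0)
    lab = np.arange(0, 400, 40)                   # only 10 labeled points
    predict = ss_elm(X, lab, y[lab])
    print("accuracy: %.3f" % (predict(X).argmax(axis=1) == y).mean())
    ```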

  20. Energy consumption control automation using Artificial Neural Networks and adaptive algorithms: Proposal of a new methodology and case study

    International Nuclear Information System (INIS)

    Benedetti, Miriam; Cesarotti, Vittorio; Introna, Vito; Serranti, Jacopo

    2016-01-01

    Highlights: • A methodology to enable energy consumption control automation is proposed. • The methodology is based on the use of Artificial Neural Networks. • A method to control the accuracy of the model over time is proposed. • Two methods to enable automatic retraining of the network are proposed. • Retraining methods are evaluated on their accuracy over time. - Abstract: Energy consumption control in energy intensive companies is increasingly considered a critical activity for continuously improving energy performance. It undoubtedly requires a huge effort in data gathering and analysis, and the amount of these data, together with the scarceness of human resources devoted to Energy Management activities who could maintain and update the analyses' output, are often the main barriers to its diffusion in companies. Advanced tools such as software based on machine learning techniques are therefore the key to overcoming these barriers and allowing an easy but accurate control. Systems of this type are able to solve complex problems and obtain reliable results over time, but they cannot recognize when the reliability of the results is declining (a common situation for energy-using systems, which often undergo structural changes), nor can they automatically adapt themselves using a limited amount of training data; a completely automatic application is therefore not yet available, and automatic energy consumption control using intelligent systems is still a challenge. This paper presents a whole new approach to energy consumption control, proposing a methodology based on Artificial Neural Networks (ANNs) and aimed at creating an automatic energy consumption control system. First of all, three different structures of neural networks are proposed and trained using a huge amount of data. Three different performance indicators are then used to identify the most suitable structure, which is implemented to create an energy consumption control tool. In addition, considering that
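
    The control-and-retrain loop described above can be illustrated as follows: an ANN consumption baseline is monitored through its residuals, and retraining is triggered when they exceed a control limit. The synthetic plant model, the residual-based limit, and the retraining policy are assumptions, not the paper's exact indicators.

    ```python
    # Energy-baseline sketch with drift-triggered retraining of an ANN model.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(4)

    def make_batch(n, drift=0.0):
        """Synthetic plant: consumption driven by production and temperature."""
        prod, temp = rng.uniform(50, 100, n), rng.uniform(5, 30, n)
        kwh = 2.0 * prod + 1.5 * temp + drift * prod + rng.normal(0, 2, n)
        return np.column_stack([prod, temp]), kwh

    X0, y0 = make_batch(500)
    model = MLPRegressor(hidden_layer_sizes=(16,), solver="lbfgs",
                         max_iter=2000, random_state=0)
    model.fit(X0, y0)
    limit = 3.0 * np.abs(y0 - model.predict(X0)).mean()  # residual control limit

    for month, drift in enumerate([0.0, 0.0, 0.4, 0.4]):  # structural change in month 2
        X, y = make_batch(200, drift)
        resid = np.abs(y - model.predict(X)).mean()
        if resid > limit:                                 # accuracy declined: retrain
            print(f"month {month}: residual {resid:.1f} kWh -> retraining")
            X0, y0 = np.vstack([X0, X]), np.concatenate([y0, y])
            model.fit(X0, y0)
        else:
            print(f"month {month}: residual {resid:.1f} kWh -> model OK")
    ```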

  1. Automated Glioblastoma Segmentation Based on a Multiparametric Structured Unsupervised Classification

    Science.gov (United States)

    Juan-Albarracín, Javier; Fuster-Garcia, Elies; Manjón, José V.; Robles, Montserrat; Aparici, F.; Martí-Bonmatí, L.; García-Gómez, Juan M.

    2015-01-01

    Automatic brain tumour segmentation has become a key component for the future of brain tumour treatment. Currently, most brain tumour segmentation approaches arise from the supervised learning standpoint, which requires a labelled training dataset from which to infer the models of the classes. The performance of these models is directly determined by the size and quality of the training corpus, whose retrieval becomes a tedious and time-consuming task. On the other hand, unsupervised approaches avoid these limitations but often do not reach results comparable to those of the supervised methods. In this context, we propose an automated unsupervised method for brain tumour segmentation based on anatomical Magnetic Resonance (MR) images. Four unsupervised classification algorithms, grouped by their structured or non-structured condition, were evaluated within our pipeline. Considering the non-structured algorithms, we evaluated K-means, Fuzzy K-means and Gaussian Mixture Model (GMM), whereas as structured classification algorithms we evaluated Gaussian Hidden Markov Random Field (GHMRF). An automated postprocess based on a statistical approach supported by tissue probability maps is proposed to automatically identify the tumour classes after the segmentations. We evaluated our brain tumour segmentation method with the public BRAin Tumor Segmentation (BRATS) 2013 Test and Leaderboard datasets. Our approach based on the GMM model improves the results obtained by most of the supervised methods evaluated with the Leaderboard set and reaches the second position in the ranking. Our variant based on the GHMRF achieves the first position in the Test ranking of the unsupervised approaches and the seventh position in the general Test ranking, which confirms the method as a viable alternative for brain tumour segmentation. PMID:25978453
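
    A minimal sketch of the GMM variant of this unsupervised pipeline is shown below: voxels described by multiparametric intensities are clustered, and a candidate tumour class is identified afterwards. The synthetic intensity model and the smallest-component heuristic stand in for the paper's tissue-probability-map postprocessing.

    ```python
    # Unsupervised GMM sketch: cluster multiparametric voxel intensities, then
    # identify a candidate tumour component post hoc.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(5)
    healthy = rng.normal([60, 50, 45], 5, (5000, 3))  # columns: T1, T2, FLAIR (assumed)
    oedema = rng.normal([55, 75, 80], 5, (800, 3))
    tumour = rng.normal([80, 65, 70], 5, (400, 3))
    X = np.vstack([healthy, oedema, tumour])

    gmm = GaussianMixture(n_components=3, covariance_type="full", random_state=0)
    labels = gmm.fit_predict(X)
    sizes = np.bincount(labels)
    print("component sizes:", sizes, "-> candidate tumour class:", int(sizes.argmin()))
    ```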

  2. Implementation of Instructional Supervision in Secondary School ...

    African Journals Online (AJOL)

    Science, Technology and Arts Research Journal ... Supervision is critical in the development of any educational program in both developed and ... Clinical Supervision, Collegial Supervision, Self-directive supervision, Informal Supervision etc.

  3. Spatiotemporal patterns of High Mountain Asia's snowmelt season identified with an automated snowmelt detection algorithm, 1987-2016

    Science.gov (United States)

    Smith, Taylor; Bookhagen, Bodo; Rheinwalt, Aljoscha

    2017-10-01

    High Mountain Asia (HMA) - encompassing the Tibetan Plateau and surrounding mountain ranges - is the primary water source for much of Asia, serving more than a billion downstream users. Many catchments receive the majority of their yearly water budget in the form of snow, which is poorly monitored by sparse in situ weather networks. Both the timing and volume of snowmelt play critical roles in downstream water provision, as many applications - such as agriculture, drinking-water generation, and hydropower - rely on consistent and predictable snowmelt runoff. Here, we examine passive microwave data across HMA with five sensors (SSMI, SSMIS, AMSR-E, AMSR2, and GPM) from 1987 to 2016 to track the timing of the snowmelt season - defined here as the time between maximum passive microwave signal separation and snow clearance. We validated our method against climate model surface temperatures, optical remote-sensing snow-cover data, and a manual control dataset (n = 2100, 3 variables at 25 locations over 28 years); our algorithm is generally accurate within 3-5 days. Using the algorithm-generated snowmelt dates, we examine the spatiotemporal patterns of the snowmelt season across HMA. The climatically short (29-year) time series, along with complex interannual snowfall variations, makes determining trends in snowmelt dates at a single point difficult. We instead identify trends in snowmelt timing by using hierarchical clustering of the passive microwave data to determine trends in self-similar regions. We make the following four key observations. (1) The end of the snowmelt season is trending almost universally earlier in HMA (negative trends). Changes in the end of the snowmelt season are generally between 2 and 8 days decade-1 over the 29-year study period (5-25 days total). The length of the snowmelt season is thus shrinking in many, though not all, regions of HMA. Some areas exhibit later peak signal separation (positive trends), but with generally smaller magnitudes
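
    The melt-season definition used above (from maximum signal separation to snow clearance) can be sketched on a synthetic brightness-temperature pair as follows. The seasonal model, the 19/37 GHz channel choice, and the return-to-baseline clearance rule are illustrative assumptions.

    ```python
    # Melt-season timing sketch from synthetic 19/37 GHz brightness temperatures.
    import numpy as np

    rng = np.random.default_rng(6)
    doy = np.arange(1, 366)
    tb19 = 250 + 15 * np.sin((doy - 100) / 365 * 2 * np.pi) + rng.normal(0, 1, 365)
    tb37 = tb19 - 30 * np.exp(-((doy - 140) / 25.0) ** 2) + rng.normal(0, 1, 365)

    sep = tb19 - tb37                      # frequency signal separation
    melt_start = int(doy[np.argmax(sep)])  # maximum separation = melt onset proxy
    baseline = np.median(sep[:60])         # winter baseline separation
    after = doy > melt_start
    melt_end = int(doy[after][np.argmax(sep[after] < baseline + 2.0)])  # clearance
    print(f"melt season: day {melt_start} to day {melt_end}")
    ```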

  4. Spatiotemporal patterns of High Mountain Asia's snowmelt season identified with an automated snowmelt detection algorithm, 1987–2016

    Directory of Open Access Journals (Sweden)

    T. Smith

    2017-10-01

    Full Text Available High Mountain Asia (HMA) – encompassing the Tibetan Plateau and surrounding mountain ranges – is the primary water source for much of Asia, serving more than a billion downstream users. Many catchments receive the majority of their yearly water budget in the form of snow, which is poorly monitored by sparse in situ weather networks. Both the timing and volume of snowmelt play critical roles in downstream water provision, as many applications – such as agriculture, drinking-water generation, and hydropower – rely on consistent and predictable snowmelt runoff. Here, we examine passive microwave data across HMA with five sensors (SSMI, SSMIS, AMSR-E, AMSR2, and GPM) from 1987 to 2016 to track the timing of the snowmelt season – defined here as the time between maximum passive microwave signal separation and snow clearance. We validated our method against climate model surface temperatures, optical remote-sensing snow-cover data, and a manual control dataset (n = 2100; 3 variables at 25 locations over 28 years); our algorithm is generally accurate within 3–5 days. Using the algorithm-generated snowmelt dates, we examine the spatiotemporal patterns of the snowmelt season across HMA. The climatically short (29-year) time series, along with complex interannual snowfall variations, makes determining trends in snowmelt dates at a single point difficult. We instead identify trends in snowmelt timing by using hierarchical clustering of the passive microwave data to determine trends in self-similar regions. We make the following four key observations. (1) The end of the snowmelt season is trending almost universally earlier in HMA (negative trends). Changes in the end of the snowmelt season are generally between 2 and 8 days decade−1 over the 29-year study period (5–25 days total). The length of the snowmelt season is thus shrinking in many, though not all, regions of HMA. Some areas exhibit later peak signal separation (positive

  5. Fully-automated approach to hippocampus segmentation using a graph-cuts algorithm combined with atlas-based segmentation and morphological opening.

    Science.gov (United States)

    Kwak, Kichang; Yoon, Uicheul; Lee, Dong-Kyun; Kim, Geon Ha; Seo, Sang Won; Na, Duk L; Shim, Hack-Joon; Lee, Jong-Min

    2013-09-01

    The hippocampus is known to be an important structure and biomarker for Alzheimer's disease (AD) and other neurological and psychiatric diseases. Its use, however, requires accurate, robust and reproducible delineation of hippocampal structures. In this study, an automated hippocampal segmentation method based on a graph-cuts algorithm combined with atlas-based segmentation and morphological opening was proposed. First of all, the atlas-based segmentation was applied to define the initial hippocampal region as a priori information for graph-cuts. The definition of initial seeds was further elaborated by incorporating estimation of partial volume probabilities at each voxel. Finally, morphological opening was applied to reduce false positives in the result processed by graph-cuts. In experiments with twenty-seven healthy normal subjects, the proposed method showed more reliable results (similarity index = 0.81 ± 0.03) than the conventional atlas-based segmentation method (0.72 ± 0.04). As for segmentation accuracy, measured in terms of precision and recall, the proposed method (precision = 0.76 ± 0.04, recall = 0.86 ± 0.05) also outperformed the conventional method (0.73 ± 0.05, 0.72 ± 0.06), demonstrating its plausibility for accurate, robust and reliable segmentation of the hippocampus. Copyright © 2013 Elsevier Inc. All rights reserved.

  6. Optimal Installation Locations for Automated External Defibrillators in Taipei 7-Eleven Stores: Using GIS and a Genetic Algorithm with a New Stirring Operator

    Directory of Open Access Journals (Sweden)

    Chung-Yuan Huang

    2014-01-01

    Full Text Available Immediate treatment with an automated external defibrillator (AED) increases out-of-hospital cardiac arrest (OHCA) patient survival potential. While considerable attention has been given to determining optimal public AED locations, spatial and temporal factors such as time of day and distance from emergency medical services (EMSs) are understudied. Here we describe a geocomputational genetic algorithm with a new stirring operator (GANSO) that considers spatial and temporal cardiac arrest occurrence factors when assessing the feasibility of using Taipei 7-Eleven stores as installation locations for AEDs. Our model is based on two AED conveyance modes, walking/running and driving, involving service distances of 100 and 300 meters, respectively. Our results suggest different AED allocation strategies involving convenience stores in urban settings. In commercial areas, such installations can compensate for temporal gaps in EMS locations when responding to nighttime OHCA incidents. In residential areas, store installations can compensate for long distances from fire stations, where AEDs are currently held in Taipei.
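
    A toy version of a coverage-maximizing genetic algorithm for AED siting is sketched below. The random store/case geometry, the single 0.3 km service radius, and in particular the "stirring" step (reseeding the weakest half of the population every few generations) are our loose reading of GANSO, not its published operators.

    ```python
    # Toy genetic algorithm for AED siting with a "stirring"-style reseeding step.
    import numpy as np

    rng = np.random.default_rng(7)
    n_stores, n_aeds = 60, 10
    stores = rng.uniform(0, 10, (n_stores, 2))  # candidate store sites, km (assumed)
    ohca = rng.uniform(0, 10, (300, 2))         # historical OHCA locations (assumed)

    def coverage(sel):
        """Share of cases within a single 0.3 km service radius (simplified)."""
        d = np.linalg.norm(ohca[:, None] - stores[sel][None], axis=2)
        return (d.min(axis=1) < 0.3).mean()

    pop = [rng.choice(n_stores, n_aeds, replace=False) for _ in range(40)]
    for gen in range(100):
        pop.sort(key=coverage, reverse=True)
        nxt = pop[:10]                                     # elitism
        while len(nxt) < 40:
            a, b = (pop[i] for i in rng.choice(10, 2, replace=False))
            child = rng.choice(np.unique(np.concatenate([a, b])), n_aeds,
                               replace=False)              # set-based crossover
            if rng.random() < 0.2:                         # point mutation
                child[rng.integers(n_aeds)] = rng.choice(
                    np.setdiff1d(np.arange(n_stores), child))
            nxt.append(child)
        if gen % 25 == 24:                 # "stirring": reseed the weakest half
            nxt[20:] = [rng.choice(n_stores, n_aeds, replace=False) for _ in range(20)]
        pop = nxt
    pop.sort(key=coverage, reverse=True)
    print("best coverage: %.2f" % coverage(pop[0]))
    ```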

  7. SU-E-I-89: Assessment of CT Radiation Dose and Image Quality for An Automated Tube Potential Selection Algorithm Using Pediatric Anthropomorphic and ACR Phantoms

    Energy Technology Data Exchange (ETDEWEB)

    Mahmood, U; Erdi, Y; Wang, W [Memorial Sloan Kettering Cancer Center, NY, NY (United States)

    2014-06-01

    Purpose: To assess the impact of General Electric's automated tube potential algorithm, kV assist (kVa), on radiation dose and image quality, with an emphasis on optimizing protocols based on noise texture. Methods: Radiation dose was assessed by inserting optically stimulated luminescence dosimeters (OSLs) throughout the body of a pediatric anthropomorphic phantom (CIRS). The baseline protocol was: 120 kVp, 80 mA, 0.7s rotation time. Image quality was assessed by calculating the contrast to noise ratio (CNR) and noise power spectrum (NPS) from the ACR CT accreditation phantom. CNRs were calculated according to the steps described in the ACR CT phantom testing document. NPS was determined by taking the 3D FFT of the uniformity section of the ACR phantom. NPS and CNR were evaluated with and without kVa and for all available adaptive iterative statistical reconstruction (ASiR) settings, ranging from 0 to 100%. Each NPS was also evaluated for its peak frequency difference (PFD) with respect to the baseline protocol. Results: For the baseline protocol, CNR was found to decrease from 0.460 ± 0.182 to 0.420 ± 0.057 when kVa was activated. When compared against the baseline protocol, the PFD at ASiR of 40% yielded a decrease in noise magnitude as realized by the increase in CNR = 0.620 ± 0.040. The liver dose decreased by 30% with kVa activation. Conclusion: Application of kVa reduces the liver dose up to 30%. However, reduction in image quality for abdominal scans occurs when using the automated tube voltage selection feature at the baseline protocol. As demonstrated by the CNR and NPS analysis, the texture and magnitude of the noise in reconstructed images at ASiR 40% was found to be the same as our baseline images. We have demonstrated that 30% dose reduction is possible when using 40% ASiR with kVa in pediatric patients.

  8. SU-E-I-89: Assessment of CT Radiation Dose and Image Quality for An Automated Tube Potential Selection Algorithm Using Pediatric Anthropomorphic and ACR Phantoms

    International Nuclear Information System (INIS)

    Mahmood, U; Erdi, Y; Wang, W

    2014-01-01

    Purpose: To assess the impact of General Electric's automated tube potential algorithm, kV assist (kVa), on radiation dose and image quality, with an emphasis on optimizing protocols based on noise texture. Methods: Radiation dose was assessed by inserting optically stimulated luminescence dosimeters (OSLs) throughout the body of a pediatric anthropomorphic phantom (CIRS). The baseline protocol was: 120 kVp, 80 mA, 0.7s rotation time. Image quality was assessed by calculating the contrast to noise ratio (CNR) and noise power spectrum (NPS) from the ACR CT accreditation phantom. CNRs were calculated according to the steps described in the ACR CT phantom testing document. NPS was determined by taking the 3D FFT of the uniformity section of the ACR phantom. NPS and CNR were evaluated with and without kVa and for all available adaptive iterative statistical reconstruction (ASiR) settings, ranging from 0 to 100%. Each NPS was also evaluated for its peak frequency difference (PFD) with respect to the baseline protocol. Results: For the baseline protocol, CNR was found to decrease from 0.460 ± 0.182 to 0.420 ± 0.057 when kVa was activated. When compared against the baseline protocol, the PFD at ASiR of 40% yielded a decrease in noise magnitude as realized by the increase in CNR = 0.620 ± 0.040. The liver dose decreased by 30% with kVa activation. Conclusion: Application of kVa reduces the liver dose up to 30%. However, reduction in image quality for abdominal scans occurs when using the automated tube voltage selection feature at the baseline protocol. As demonstrated by the CNR and NPS analysis, the texture and magnitude of the noise in reconstructed images at ASiR 40% was found to be the same as our baseline images. We have demonstrated that 30% dose reduction is possible when using 40% ASiR with kVa in pediatric patients

  9. Intelligent multivariate process supervision

    International Nuclear Information System (INIS)

    Visuri, Pertti.

    1986-01-01

    This thesis addresses the difficulties encountered in managing large amounts of data in supervisory control of complex systems. Some previous alarm and disturbance analysis concepts are reviewed and a method for improving the supervision of complex systems is presented. The method, called multivariate supervision, is based on adding low level intelligence to the process control system. By using several measured variables linked together by means of deductive logic, the system can take into account the overall state of the supervised system. Thus, it can present to the operators fewer messages with higher information content than the conventional control systems which are based on independent processing of each variable. In addition, the multivariate method contains a special information presentation concept for improving the man-machine interface. (author)

  10. Not-so-supervised: a survey of semi-supervised, multi-instance, and transfer learning in medical image analysis

    NARCIS (Netherlands)

    Cheplygina, Veronika; de Bruijne, Marleen; Pluim, Josien P. W.

    2018-01-01

    Machine learning (ML) algorithms have made a tremendous impact in the field of medical imaging. While medical imaging datasets have been growing in size, a frequently mentioned challenge for supervised ML algorithms is the lack of annotated data. As a result, various methods which can learn

  11. A bifurcation identifier for IV-OCT using orthogonal least squares and supervised machine learning.

    Science.gov (United States)

    Macedo, Maysa M G; Guimarães, Welingson V N; Galon, Micheli Z; Takimura, Celso K; Lemos, Pedro A; Gutierrez, Marco Antonio

    2015-12-01

    Intravascular optical coherence tomography (IV-OCT) is an in-vivo imaging modality based on the intravascular introduction of a catheter which provides a view of the inner wall of blood vessels with a spatial resolution of 10-20 μm. Recent studies in IV-OCT have demonstrated the importance of the bifurcation regions. Therefore, the development of an automated tool to classify hundreds of coronary OCT frames as bifurcation or nonbifurcation can be an important step toward improving automated methods for atherosclerotic plaque quantification, stent analysis and co-registration between different modalities. This paper describes a fully automated method to identify IV-OCT frames in bifurcation regions. The method is divided into lumen detection, feature extraction, and classification, providing lumen area quantification, geometrical features of the cross-sectional lumen, and labeled slices. This classification method is a combination of supervised machine learning algorithms and feature selection using orthogonal least squares methods. Training and tests were performed on sets with a maximum of 1460 human coronary OCT frames. The lumen segmentation achieved a mean difference of lumen area of 0.11 mm(2) compared with manual segmentation, and the AdaBoost classifier presented the best result, reaching an F-measure score of 97.5% using 104 features. Copyright © 2015 Elsevier Ltd. All rights reserved.
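
    The feature-selection-plus-boosting combination can be sketched as below: a greedy orthogonal least squares pass ranks features by error-reduction ratio, and AdaBoost classifies on the selected subset. The synthetic data and the number of selected features are assumptions for illustration.

    ```python
    # Greedy orthogonal least squares feature selection followed by AdaBoost.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import AdaBoostClassifier
    from sklearn.model_selection import cross_val_score

    def ols_select(X, y, n_select):
        Xw = X - X.mean(axis=0)
        r = (y - y.mean()).astype(float)
        chosen = []
        for _ in range(n_select):
            scores = (Xw.T @ r) ** 2 / ((Xw ** 2).sum(axis=0) + 1e-12)
            scores[chosen] = -np.inf           # never re-pick a feature
            j = int(np.argmax(scores))         # best error-reduction ratio
            chosen.append(j)
            q = Xw[:, j] / (np.linalg.norm(Xw[:, j]) + 1e-12)
            r = r - q * (q @ r)                # deflate the target
            Xw = Xw - np.outer(q, q @ Xw)      # orthogonalize remaining features
        return chosen

    X, y = make_classification(n_samples=600, n_features=40, n_informative=6,
                               random_state=0)
    feats = ols_select(X, y, 8)
    clf = AdaBoostClassifier(n_estimators=200, random_state=0)
    print("selected:", feats)
    print("CV F1: %.3f" % cross_val_score(clf, X[:, feats], y, scoring="f1").mean())
    ```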

  12. Graph-based semi-supervised learning

    CERN Document Server

    Subramanya, Amarnag

    2014-01-01

    While labeled data is expensive to prepare, ever increasing amounts of unlabeled data is becoming widely available. In order to adapt to this phenomenon, several semi-supervised learning (SSL) algorithms, which learn from labeled as well as unlabeled data, have been developed. In a separate line of work, researchers have started to realize that graphs provide a natural way to represent data in a variety of domains. Graph-based SSL algorithms, which bring together these two lines of work, have been shown to outperform the state-of-the-art in many applications in speech processing, computer visi

  13. Abdominal adipose tissue quantification on water-suppressed and non-water-suppressed MRI at 3T using semi-automated FCM clustering algorithm

    Science.gov (United States)

    Valaparla, Sunil K.; Peng, Qi; Gao, Feng; Clarke, Geoffrey D.

    2014-03-01

    Accurate measurements of human body fat distribution are desirable because excessive body fat is associated with impaired insulin sensitivity, type 2 diabetes mellitus (T2DM) and cardiovascular disease. In this study, we hypothesized that the performance of water-suppressed (WS) MRI is superior to that of non-water-suppressed (NWS) MRI for volumetric assessment of abdominal subcutaneous (SAT), intramuscular (IMAT), visceral (VAT), and total (TAT) adipose tissues. We acquired T1-weighted images on a 3T MRI system (TIM Trio, Siemens), which were analyzed using semi-automated segmentation software that employs a fuzzy c-means (FCM) clustering algorithm. Sixteen contiguous axial slices, centered at the L4-L5 level of the abdomen, were acquired in eight T2DM subjects with water suppression (WS) and without (NWS). Histograms from WS images show improved separation of non-fatty tissue pixels from fatty tissue pixels compared to NWS images. Paired t-tests of WS versus NWS showed a significantly lower volume of lipid in the WS images for VAT (145.3 cc less, p=0.006) and IMAT (305 cc less, p1), but not SAT (14.1 cc more, NS). WS measurements of TAT also resulted in lower fat volumes (436.1 cc less, p=0.002). There is strong correlation between WS and NWS quantification methods for SAT measurements (r=0.999), but poorer correlation for VAT studies (r=0.845). These results suggest that NWS pulse sequences may overestimate adipose tissue volumes and that WS pulse sequences are more desirable due to the higher contrast generated between fatty and non-fatty tissues.
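
    For reference, the core FCM update used by such segmentation tools fits in a few lines of NumPy, as sketched below on synthetic T1-weighted intensities; the three-class intensity model is an assumption for illustration.

    ```python
    # Plain NumPy fuzzy c-means and a toy fat-fraction readout.
    import numpy as np

    def fuzzy_cmeans(X, c=3, m=2.0, n_iter=100, seed=0):
        rng = np.random.default_rng(seed)
        U = rng.dirichlet(np.ones(c), size=len(X))    # random soft memberships
        for _ in range(n_iter):
            Um = U ** m
            V = (Um.T @ X) / Um.sum(axis=0)[:, None]  # membership-weighted centroids
            d = np.linalg.norm(X[:, None] - V[None], axis=2) + 1e-9
            p = 2.0 / (m - 1.0)
            U = 1.0 / (d ** p * (d ** -p).sum(axis=1, keepdims=True))
        return U, V

    rng = np.random.default_rng(8)
    pix = np.concatenate([rng.normal(40, 8, 3000),    # background
                          rng.normal(110, 10, 3000),  # lean tissue
                          rng.normal(220, 12, 1500)]) # adipose tissue (bright on T1)
    U, V = fuzzy_cmeans(pix[:, None], c=3)
    fat = int(V.ravel().argmax())
    print("class means:", V.ravel().round(1),
          "fat fraction: %.2f" % (U.argmax(axis=1) == fat).mean())
    ```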

  14. Spectral Learning for Supervised Topic Models.

    Science.gov (United States)

    Ren, Yong; Wang, Yining; Zhu, Jun

    2018-03-01

    Supervised topic models simultaneously model the latent topic structure of large collections of documents and a response variable associated with each document. Existing inference methods are based on variational approximation or Monte Carlo sampling, which often suffers from the local minimum defect. Spectral methods have been applied to learn unsupervised topic models, such as latent Dirichlet allocation (LDA), with provable guarantees. This paper investigates the possibility of applying spectral methods to recover the parameters of supervised LDA (sLDA). We first present a two-stage spectral method, which recovers the parameters of LDA followed by a power update method to recover the regression model parameters. Then, we further present a single-phase spectral algorithm to jointly recover the topic distribution matrix as well as the regression weights. Our spectral algorithms are provably correct and computationally efficient. We prove a sample complexity bound for each algorithm and subsequently derive a sufficient condition for the identifiability of sLDA. Thorough experiments on synthetic and real-world datasets verify the theory and demonstrate the practical effectiveness of the spectral algorithms. In fact, our results on a large-scale review rating dataset demonstrate that our single-phase spectral algorithm alone gets comparable or even better performance than state-of-the-art methods, while previous work on spectral methods has rarely reported such promising performance.

  15. Kontraktetablering i supervision

    DEFF Research Database (Denmark)

    Mortensen, Karen Vibeke; Jacobsen, Claus Haugaard

    2007-01-01

    The chapter deals with the establishment of contracts in supervision, an element that has often been neglected or even passed over entirely at the start of supervision processes. Clear agreements on matters such as time, place, procedures for case presentation, confidentiality, distribution of responsibility and evaluation do, however, create security...

  16. Etiske betragtninger ved supervision

    DEFF Research Database (Denmark)

    Jacobsen, Claus Haugaard; Agerskov, Kirsten

    2007-01-01

    The chapter presents some ethical considerations concerning supervision. While ethical guidelines for psychotherapeutic work have long existed, corresponding guidelines have, surprisingly, been lacking in the field of supervision. This does not mean, however, that they are not relevant. In the chapter it is...

  17. A New Method for Solving Supervised Data Classification Problems

    Directory of Open Access Journals (Sweden)

    Parvaneh Shabanzadeh

    2014-01-01

    Full Text Available Supervised data classification is one of the techniques used to extract nontrivial information from data. Classification is a widely used technique in various fields, including data mining, industry, medicine, science, and law. This paper considers a new algorithm for supervised data classification problems associated with cluster analysis. The mathematical formulation for this algorithm is based on nonsmooth, nonconvex optimization, and a new algorithm for solving this optimization problem is utilized. The new algorithm uses a derivative-free technique and is robust and efficient. To improve classification performance and efficiency in generating the classification model, a new feature selection algorithm based on convex programming techniques is suggested. The proposed methods are tested on real-world datasets, and results of numerical experiments are presented which demonstrate the effectiveness of the proposed algorithms.

  18. Effectiveness of Group Supervision versus Combined Group and Individual Supervision.

    Science.gov (United States)

    Ray, Dee; Altekruse, Michael

    2000-01-01

    Investigates the effectiveness of different types of supervision (large group, small group, combined group, individual supervision) with counseling students (N=64). Analyses revealed that all supervision formats resulted in similar progress in counselor effectiveness and counselor development. Participants voiced a preference for individual…

  19. Algorithms for Automated DNA Assembly

    Science.gov (United States)

    2010-01-01

    known results. The total number of possible assembly graphs for a single goal part is (2n−1)!/((n−1)!n!) [also known as the Catalan number (20)]... If a correct theoretical construction scheme is developed manually, it is likely to be suboptimal by any number of cost metrics. Modular, robust and...

  20. The technical supervision interface

    CERN Document Server

    Sollander, P

    1998-01-01

    The Technical Control Room (TCR) is currently using 30 different applications for the remote supervision of the technical infrastructure at CERN. These applications have all been developed with the CERN-made Uniform Man-Machine Interface (UMMI) tools built in 1990. However, visualization technology has evolved phenomenally since 1990, the Technical Data Server (TDS) has radically changed our control system architecture, and the standardization and maintenance of the UMMI applications have become important issues as their number increases. The Technical Supervision Interface is intended to replace the UMMI and solve the above problems. Using a standard WWW browser for the display, it will be inherently multi-platform and hence available for control room operators, equipment specialists and on-call personnel.

  1. Coupled dimensionality reduction and classification for supervised and semi-supervised multilabel learning.

    Science.gov (United States)

    Gönen, Mehmet

    2014-03-01

    Coupled training of dimensionality reduction and classification was previously proposed to improve the prediction performance for single-label problems. Following this line of research, in this paper, we first introduce a novel Bayesian method that combines linear dimensionality reduction with linear binary classification for supervised multilabel learning and present a deterministic variational approximation algorithm to learn the proposed probabilistic model. We then extend the proposed method to find the intrinsic dimensionality of the projected subspace using automatic relevance determination and to handle semi-supervised learning using a low-density assumption. We perform supervised learning experiments on four benchmark multilabel learning data sets by comparing our method with baseline linear dimensionality reduction algorithms. These experiments show that the proposed approach achieves good performance values in terms of hamming loss, average AUC, macro F1, and micro F1 on held-out test data. The low-dimensional embeddings obtained by our method are also very useful for exploratory data analysis. We also show the effectiveness of our approach in finding intrinsic subspace dimensionality and in semi-supervised learning tasks.

  2. Improving Banking Supervision

    OpenAIRE

    Mayes, David G.

    1998-01-01

    This paper explains how banking supervision within the EU, and in Finland in particular, can be improved by the implementation of greater market discipline and related changes. Although existing EU law, institutions, market structures and practices of corporate governance restrict the scope for change, substantial improvements can be introduced now while there is a window of opportunity for change. The economy is growing strongly and the consequences of the banking crises of the early 1990s have ...

  3. Supervision in Firms

    OpenAIRE

    Vafaï , Kouroche

    2012-01-01

    Working paper URL: http://centredeconomiesorbonne.univ-paris1.fr/bandeau-haut/documents-de-travail/; Documents de travail du Centre d'Economie de la Sorbonne 2012.84 - ISSN: 1955-611X; To control, evaluate, and motivate their agents, firms employ supervisors. As shown by empirical investigations, biased evaluation by supervisors linked to collusion is a persistent feature of firms. This paper studies how deceptive supervision affects agency relationships. We consider a three-level...

  4. Ethics in education supervision

    Directory of Open Access Journals (Sweden)

    Fatma ÖZMEN

    2008-06-01

    Full Text Available Supervision in education plays a crucial role in attaining educational goals. In addition to determining the present situation, it has a theoretical and practical function regarding the actions to be taken in general, and the achievement of teacher development in particular, so that educational goals can be met in the most effective way. Education supervisors who act ethically while carrying out this vital mission find it easier to build trust and to enhance the level of collaboration and sharing, and thus contribute to organizational effectiveness. Ethics is an essential component of educational supervision, yet it remains rather vague in practice, as it depends on conditions, persons, and situations; developing ethical standards in institutions is therefore a difficult process. This study aims to clarify the concept of ethics, to highlight its importance, and to make recommendations for more ethically grounded and effective supervision, based on a literature review, selected research results, and sample cases reported by teachers and supervisors.

  5. SU-E-I-81: Assessment of CT Radiation Dose and Image Quality for An Automated Tube Potential Selection Algorithm Using Adult Anthropomorphic and ACR Phantoms

    International Nuclear Information System (INIS)

    Mahmood, U; Erdi, Y; Wang, W

    2014-01-01

    Purpose: To assess the impact of General Electric's (GE) automated tube potential algorithm, kV assist (kVa), on radiation dose and image quality, with an emphasis on optimizing protocols based on noise texture. Methods: Radiation dose was assessed by inserting optically stimulated luminescence dosimeters (OSLs) throughout the body of an adult anthropomorphic phantom (CIRS). The baseline protocol was: 120 kVp, Auto mA (180 to 380 mA), noise index (NI) = 14, adaptive iterative statistical reconstruction (ASiR) of 20%, 0.8s rotation time. Image quality was evaluated by calculating the contrast to noise ratio (CNR) and noise power spectrum (NPS) from the ACR CT accreditation phantom. CNRs were calculated according to the steps described in the ACR CT phantom testing document. NPS was determined by taking the 3D FFT of the uniformity section of the ACR phantom. NPS and CNR were evaluated with and without kVa and for all available adaptive iterative statistical reconstruction (ASiR) settings, ranging from 0 to 100%. Each NPS was also evaluated for its peak frequency difference (PFD) with respect to the baseline protocol. Results: The CNR for the adult male was found to decrease from CNR = 0.912 ± 0.045 for the baseline protocol without kVa to a CNR = 0.756 ± 0.049 with kVa activated. When compared against the baseline protocol, the PFD at ASiR of 40% yielded a decrease in noise magnitude as realized by the increase in CNR = 0.903 ± 0.023. The difference in the central liver dose with and without kVa was found to be 0.07%. Conclusion: Dose reduction was insignificant in the adult phantom. As determined by NPS analysis, ASiR of 40% produced images with similar noise texture to the baseline protocol. However, the CNR at ASiR of 40% with kVa fails to meet the current ACR CNR passing requirement of 1.0

  6. SU-E-I-81: Assessment of CT Radiation Dose and Image Quality for An Automated Tube Potential Selection Algorithm Using Adult Anthropomorphic and ACR Phantoms

    Energy Technology Data Exchange (ETDEWEB)

    Mahmood, U; Erdi, Y; Wang, W [Memorial Sloan Kettering Cancer Center, NY, NY (United States)

    2014-06-01

    Purpose: To assess the impact of General Electric's (GE) automated tube potential algorithm, kV assist (kVa), on radiation dose and image quality, with an emphasis on optimizing protocols based on noise texture. Methods: Radiation dose was assessed by inserting optically stimulated luminescence dosimeters (OSLs) throughout the body of an adult anthropomorphic phantom (CIRS). The baseline protocol was: 120 kVp, Auto mA (180 to 380 mA), noise index (NI) = 14, adaptive iterative statistical reconstruction (ASiR) of 20%, 0.8s rotation time. Image quality was evaluated by calculating the contrast to noise ratio (CNR) and noise power spectrum (NPS) from the ACR CT accreditation phantom. CNRs were calculated according to the steps described in the ACR CT phantom testing document. NPS was determined by taking the 3D FFT of the uniformity section of the ACR phantom. NPS and CNR were evaluated with and without kVa and for all available adaptive iterative statistical reconstruction (ASiR) settings, ranging from 0 to 100%. Each NPS was also evaluated for its peak frequency difference (PFD) with respect to the baseline protocol. Results: The CNR for the adult male was found to decrease from CNR = 0.912 ± 0.045 for the baseline protocol without kVa to a CNR = 0.756 ± 0.049 with kVa activated. When compared against the baseline protocol, the PFD at ASiR of 40% yielded a decrease in noise magnitude as realized by the increase in CNR = 0.903 ± 0.023. The difference in the central liver dose with and without kVa was found to be 0.07%. Conclusion: Dose reduction was insignificant in the adult phantom. As determined by NPS analysis, ASiR of 40% produced images with similar noise texture to the baseline protocol. However, the CNR at ASiR of 40% with kVa fails to meet the current ACR CNR passing requirement of 1.0.

  7. Group supervision for general practitioners

    DEFF Research Database (Denmark)

    Galina Nielsen, Helena; Sofie Davidsen, Annette; Dalsted, Rikke

    2013-01-01

    AIM: Group supervision is a sparsely researched method for professional development in general practice. The aim of this study was to explore general practitioners' (GPs') experiences of the benefits of group supervision for improving the treatment of mental disorders. METHODS: One long-established group...... considered important prerequisites for disclosing and discussing professional problems. CONCLUSION: The results of this study indicate that participation in a supervision group can be beneficial for maintaining and developing GPs' skills in dealing with patients with mental health problems. Group supervision...... influenced other areas of GPs' professional lives as well. However, more studies are needed to assess the impact of supervision groups....

  8. Automated attribution of remotely-sensed ecological disturbances using spatial and temporal characteristics of common disturbance classes.

    Science.gov (United States)

    Cooper, L. A.; Ballantyne, A.

    2017-12-01

    Forest disturbances are critical components of ecosystems. Knowledge of their prevalence and impacts is necessary to accurately describe forest health and ecosystem services through time. While there are currently several methods available to identify and describe forest disturbances, especially those which occur in North America, the process remains inefficient and inaccessible in many parts of the world. Here, we introduce a preliminary approach to streamline and automate both the detection and attribution of forest disturbances. We use the Breaks for Additive Season and Trend (BFAST) algorithm to detect disturbances, in combination with supervised and unsupervised classification algorithms that attribute the detections to disturbance classes. Both spatial and temporal disturbance characteristics are derived and utilized, with the goal of automating the disturbance attribution process. The resulting preliminary algorithm is applied to up-scaled (100 m) Landsat data for several different ecosystems in North America, with varying success. Our results indicate that supervised classification is more reliable than unsupervised classification, but that only limited training data are required for a region. Future work will improve the algorithm through refinement and validation at sites within North America before applying this approach globally.
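
    As a rough picture of the two-stage detect-then-attribute design, here is a hedged Python sketch. BFAST itself is an R package, so the breakpoint detector below is a deliberately crude stand-in (largest single NDVI drop), and the features, labels and thresholds are hypothetical:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def detect_break(ndvi, min_drop=0.15):
    """Stand-in for BFAST: index of the largest one-step NDVI drop, if any."""
    drops = ndvi[:-1] - ndvi[1:]
    t = int(np.argmax(drops))
    return t if drops[t] >= min_drop else None

def break_features(ndvi, t):
    """Temporal attribution features: drop magnitude and post-break slope."""
    post = ndvi[t + 1:]
    slope = np.polyfit(np.arange(post.size), post, 1)[0] if post.size > 1 else 0.0
    return [float(ndvi[t] - ndvi[t + 1]), float(slope)]

rng = np.random.default_rng(0)
X_train = rng.random((40, 2))                   # features of labeled break events
y_train = (X_train[:, 0] > 0.5).astype(int)     # toy disturbance classes
clf = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)

series = 0.8 + 0.02 * rng.normal(size=60)       # synthetic NDVI time series
series[30:] -= 0.3                              # inject an abrupt disturbance
t = detect_break(series)
if t is not None:
    print("class:", clf.predict([break_features(series, t)])[0])
```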

  9. Analysis of new bone, cartilage, and fibrosis tissue in healing murine allografts using whole slide imaging and a new automated histomorphometric algorithm

    OpenAIRE

    Zhang, Longze; Chang, Martin; Beck, Christopher A; Schwarz, Edward M; Boyce, Brendan F

    2016-01-01

    Histomorphometric analysis of histologic sections of normal and diseased bone samples, such as healing allografts and fractures, is widely used in bone research. However, the utility of traditional semi-automated methods is limited because they are labor-intensive and can have high interobserver variability depending upon the parameters being assessed, and primary data cannot be re-analyzed automatically. Automated histomorphometry has long been recognized as a solution for these issues, and ...

  10. Supervised variational model with statistical inference and its application in medical image segmentation.

    Science.gov (United States)

    Li, Changyang; Wang, Xiuying; Eberl, Stefan; Fulham, Michael; Yin, Yong; Dagan Feng, David

    2015-01-01

    Automated and general medical image segmentation can be challenging because the foreground and the background may have complicated and overlapping density distributions in medical imaging. Conventional region-based level set algorithms often assume piecewise-constant or piecewise-smooth intensities within segments, assumptions that are implausible for general medical image segmentation. Furthermore, low contrast and noise make identification of the boundaries between foreground and background difficult for edge-based level set algorithms. Thus, to address these problems, we propose a supervised variational level set segmentation model that harnesses a statistical region energy functional with a weighted probability approximation. Our approach models the region density distributions by using a mixture-of-mixtures Gaussian model to better approximate real intensity distributions and distinguish statistical intensity differences between foreground and background. The region-based statistical model in our algorithm can intuitively provide better performance on noisy images. We constructed a weighted probability map on graphs to incorporate spatial indications from user input with a contextual constraint based on the minimization of a contextual graph energy functional. We measured the performance of our approach on ten noisy synthetic images and 58 medical datasets with heterogeneous intensities and ill-defined boundaries and compared our technique to the Chan-Vese region-based level set model, the geodesic active contour model with distance regularization, and the random walker model. Our method consistently achieved the highest Dice similarity coefficient when compared to the other methods.
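
    The statistical region term can be illustrated in isolation. The sketch below is a simplification, not the paper's model: it fits one ordinary Gaussian mixture per region to user-labelled seed intensities (the paper uses a mixture-of-mixtures model and couples the result to a variational level set, neither of which is reproduced here) and returns a per-pixel foreground posterior:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def foreground_posterior(image, fg_mask, bg_mask, components=3):
    """Fit one Gaussian mixture per region to seed intensities and return
    P(foreground | intensity) per pixel -- the kind of statistical region
    term a variational/level-set model can then regularize spatially."""
    fg = GaussianMixture(components, random_state=0).fit(image[fg_mask].reshape(-1, 1))
    bg = GaussianMixture(components, random_state=0).fit(image[bg_mask].reshape(-1, 1))
    px = image.reshape(-1, 1)
    log_ratio = bg.score_samples(px) - fg.score_samples(px)
    return (1.0 / (1.0 + np.exp(log_ratio))).reshape(image.shape)

# Toy image: bright blob on a dark noisy background, seeds from small patches.
rng = np.random.default_rng(0)
img = rng.normal(50, 10, (64, 64)); img[20:40, 20:40] += 60
fg_mask = np.zeros_like(img, bool); fg_mask[25:35, 25:35] = True
bg_mask = np.zeros_like(img, bool); bg_mask[:10, :10] = True
post = foreground_posterior(img, fg_mask, bg_mask)
```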

  11. Application of semi-supervised deep learning to lung sound analysis.

    Science.gov (United States)

    Chamberlain, Daniel; Kodgule, Rahul; Ganelin, Daniela; Miglani, Vivek; Fletcher, Richard Ribon

    2016-08-01

    The analysis of lung sounds, collected through auscultation, is a fundamental component of pulmonary disease diagnostics for primary care and general patient monitoring for telemedicine. Despite advances in computation and algorithms, the goal of automated lung sound identification and classification has remained elusive. Over the past 40 years, published work in this field has demonstrated only limited success in identifying lung sounds, with most published studies using only small numbers of patients (typically N < 100). In this work, we present a semi-supervised deep learning algorithm for automatically classifying lung sounds from a relatively large number of patients (N = 284). Focusing on the two most common lung sounds, wheeze and crackle, we present results from 11,627 sound files recorded from 11 different auscultation locations on these 284 patients with pulmonary disease. 890 of these sound files were labeled to evaluate the model, which is significantly larger than previously published studies. Data were collected with a custom mobile phone application and a low-cost (US$30) electronic stethoscope. On this data set, our algorithm achieves ROC curves with AUCs of 0.86 for wheeze and 0.74 for crackle. Most importantly, this study demonstrates how semi-supervised deep learning can be used with larger data sets without requiring extensive labeling of data.
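
    The paper's model is a deep network, but the general semi-supervised pattern of folding confident predictions on unlabeled recordings back into training can be sketched with a simple classifier. Everything below (features, labels, the confidence threshold) is hypothetical, and self-training is only one of several semi-supervised schemes; the paper's exact method may differ:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def self_train(X_lab, y_lab, X_unlab, rounds=3, thresh=0.9):
    """Generic self-training loop: confident predictions on unlabeled
    sound features are folded into the training set each round."""
    X, y = X_lab.copy(), y_lab.copy()
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    for _ in range(rounds):
        proba = clf.predict_proba(X_unlab)
        conf = proba.max(axis=1) >= thresh
        if not conf.any():
            break
        X = np.vstack([X, X_unlab[conf]])
        y = np.concatenate([y, proba[conf].argmax(axis=1)])
        X_unlab = X_unlab[~conf]
        clf = LogisticRegression(max_iter=1000).fit(X, y)
    return clf

rng = np.random.default_rng(0)
X_lab = np.vstack([rng.normal(0, 1, (20, 8)), rng.normal(3, 1, (20, 8))])
y_lab = np.array([0] * 20 + [1] * 20)           # e.g., crackle vs. wheeze features
clf = self_train(X_lab, y_lab, rng.normal(1.5, 2, (500, 8)))
```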

  12. Novel algorithm and MATLAB-based program for automated power law analysis of single particle, time-dependent mean-square displacement

    Science.gov (United States)

    Umansky, Moti; Weihs, Daphne

    2012-08-01

    In many physical and biophysical studies, single-particle tracking is utilized to reveal interactions, diffusion coefficients, active modes of driving motion, dynamic local structure, micromechanics, and microrheology. The basic analysis applied to those data is to determine the time-dependent mean-square displacement (MSD) of particle trajectories and perform time- and ensemble-averaging of similar motions. The motion of particles typically exhibits time-dependent power-law scaling, and only trajectories with qualitatively and quantitatively comparable MSD should be ensembled. Ensemble-averaging trajectories that arise from different mechanisms, e.g., actively driven and diffusive, is incorrect and can result in inaccurate correlations between structure, mechanics, and activity. We have developed an algorithm to automatically and accurately determine the power-law scaling of experimentally measured single-particle MSD. Trajectories can then be categorized and grouped according to user-defined cutoffs of time, amplitudes, scaling exponent values, or combinations thereof. Power-law fits are then provided for each trajectory alongside categorized groups of trajectories, histograms of power laws, and the ensemble-averaged MSD of each group. The codes are designed to be easily incorporated into existing user codes. We expect that this algorithm and program will be invaluable to anyone performing single-particle tracking, be it in physical or biophysical systems. Catalogue identifier: AEMD_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEMD_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 25 892 No. of bytes in distributed program, including test data, etc.: 5 572 780 Distribution format: tar.gz Programming language: MATLAB (MathWorks Inc.) version 7.11 (2010b) or higher, program
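
    The distributed codes are MATLAB, but the core calculation, a time-averaged MSD followed by a power-law fit in log-log space, is compact enough to sketch in Python. This illustrates the standard analysis, not the authors' program:

```python
import numpy as np

def time_averaged_msd(xy, max_lag):
    """Time-averaged MSD of one trajectory (N x d array of positions)."""
    return np.array([np.mean(np.sum((xy[lag:] - xy[:-lag]) ** 2, axis=1))
                     for lag in range(1, max_lag + 1)])

def fit_power_law(msd, dt):
    """Fit MSD(t) = A * t**alpha by linear regression in log-log space."""
    t = dt * np.arange(1, len(msd) + 1)
    alpha, log_a = np.polyfit(np.log(t), np.log(msd), 1)
    return np.exp(log_a), alpha   # alpha ~ 1 diffusive, > 1 driven, < 1 subdiffusive

# A plain random walk should recover alpha close to 1.
xy = np.cumsum(np.random.default_rng(0).normal(size=(1000, 2)), axis=0)
a, alpha = fit_power_law(time_averaged_msd(xy, 100), dt=0.1)
print(f"alpha = {alpha:.2f}")
```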

  13. Cross-Domain Semi-Supervised Learning Using Feature Formulation.

    Science.gov (United States)

    Xingquan Zhu

    2011-12-01

    Semi-Supervised Learning (SSL) traditionally makes use of unlabeled samples by including them in the training set through an automated labeling process. Such a primitive Semi-Supervised Learning (pSSL) approach suffers from a number of disadvantages, including false labeling and an inability to utilize out-of-domain samples. In this paper, we propose a formative Semi-Supervised Learning (fSSL) framework which explores hidden features between labeled and unlabeled samples to achieve semi-supervised learning. fSSL assumes that both labeled and unlabeled samples are generated from some hidden concepts, with labeling information partially observable for some samples. The key to fSSL is to recover the hidden concepts and take them as new features to link labeled and unlabeled samples for semi-supervised learning. Because unlabeled samples are only used to generate new features, and are not explicitly included in the training set as in pSSL, fSSL overcomes the inherent disadvantages of traditional pSSL methods, especially for samples not within the same domain as the labeled instances. Experimental results and comparisons demonstrate that fSSL significantly outperforms pSSL-based methods for both within-domain and cross-domain semi-supervised learning.
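
    The abstract leaves the concept-recovery step abstract, so the sketch below substitutes a generic technique: non-negative matrix factorization over the pooled labeled and unlabeled data, whose components stand in for the hidden concepts. It is only a loose analogue of fSSL under that stated assumption:

```python
import numpy as np
from sklearn.decomposition import NMF
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_lab = rng.random((50, 20))                  # labeled samples (non-negative features)
y_lab = rng.integers(0, 2, 50)
X_unlab = rng.random((300, 20))               # unlabeled, possibly out-of-domain

# Step 1: recover shared "hidden concepts" from labeled AND unlabeled data.
nmf = NMF(n_components=5, init="nndsvda", max_iter=500)
nmf.fit(np.vstack([X_lab, X_unlab]))

# Step 2: train only on labeled samples re-expressed in concept space, so
# unlabeled data shape the features but never enter the training set directly.
clf = LogisticRegression().fit(nmf.transform(X_lab), y_lab)
```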

  14. Algorithmic acquisition of diagnostic patterns in district heating billing system

    International Nuclear Information System (INIS)

    Kiluk, Sebastian

    2012-01-01

    An application of algorithmic exploration of billing data is examined for fault detection and diagnosis (FDD) based on evaluation of the present state and detection of unexpected changes in the energy efficiency of buildings. Large data sets from district heating (DH) billing systems are used for the construction of the feature space, diagnostic rules and classification of the buildings according to their energy efficiency properties. The algorithmic approach automates discovering knowledge about common, and thus accepted, changes in buildings' properties, in equipment and in inhabitants' behavior, reflecting progress in technology and lifestyle. This article presents the implementation of a Data Mining and Knowledge Discovery (DMKD) method in a supervision system, with exemplary results based on real data. Crucial steps of data processing that influence the diagnostic results are described in detail.

  15. Automated minimax design of networks

    DEFF Research Database (Denmark)

    Madsen, Kaj; Schjær-Jacobsen, Hans; Voldby, J

    1975-01-01

    A new gradient algorithm for the solution of nonlinear minimax problems has been developed. The algorithm is well suited for automated minimax design of networks and it is very simple to use. It compares favorably with recent minimax and leastpth algorithms. General convergence problems related...
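
    The paper's gradient algorithm is not reproduced in this record, but the problem class it solves can be illustrated with the standard epigraph reformulation of a minimax fit, solved here by an off-the-shelf SLSQP routine; the residual function and starting point are hypothetical:

```python
import numpy as np
from scipy.optimize import minimize

def minimax_fit(residuals, x0):
    """Minimize max_i |r_i(x)| via the epigraph trick:
    min t  s.t.  t - r_i(x) >= 0  and  t + r_i(x) >= 0."""
    n = len(x0)
    z0 = np.append(x0, np.max(np.abs(residuals(np.asarray(x0)))))
    cons = ({"type": "ineq", "fun": lambda z: z[n] - residuals(z[:n])},
            {"type": "ineq", "fun": lambda z: z[n] + residuals(z[:n])})
    res = minimize(lambda z: z[n], z0, constraints=cons, method="SLSQP")
    return res.x[:n], res.x[n]

# Chebyshev (minimax) line fit to noisy samples of a response curve.
t = np.linspace(0, 1, 30)
y = 2 * t + 0.1 * np.sin(20 * t)
params, worst_error = minimax_fit(lambda x: y - (x[0] + x[1] * t), [0.0, 1.0])
```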

  16. Resistance to group clinical supervision

    DEFF Research Database (Denmark)

    Buus, Niels; Delgado, Cynthia; Traynor, Michael

    2018-01-01

    This present study is a report of an interview study exploring personal views on participating in group clinical supervision among mental health nursing staff members who do not participate in supervision. There is a paucity of empirical research on resistance to supervision, which has traditionally been theorized as a supervisee's maladaptive coping with anxiety in the supervision process. The aim of the present study was to examine resistance to group clinical supervision by interviewing nurses who did not participate in supervision. In 2015, we conducted semistructured interviews with 24 Danish mental health nursing staff members who had been observed not to participate in supervision in two periods of 3 months. Interviews were audio-recorded and subjected to discourse analysis. We constructed two discursive positions taken by the informants: (i) 'forced non-participation', where...

  17. Active link selection for efficient semi-supervised community detection

    Science.gov (United States)

    Yang, Liang; Jin, Di; Wang, Xiao; Cao, Xiaochun

    2015-01-01

    Several semi-supervised community detection algorithms have been proposed recently to improve the performance of traditional topology-based methods. However, most of them focus on how to integrate supervised information with topology information; few of them pay attention to which information is critical for performance improvement. This leads to large demands for supervised information, which is expensive or difficult to obtain in most fields. For this problem we propose an active link selection framework; that is, we actively select the most uncertain and informative links for human labeling, for the efficient utilization of the supervised information. We also disconnect the most likely inter-community edges to further improve the efficiency. Our main idea is that, by connecting uncertain nodes to their community hubs and disconnecting the inter-community edges, one can sharpen the block structure of the adjacency matrix more efficiently than by randomly labeling links as the existing methods do. Experiments on both synthetic and real networks demonstrate that our new approach significantly outperforms the existing methods in terms of the efficiency of using supervised information. It needs only ~13% of the supervised information to achieve a performance similar to that of the original semi-supervised approaches. PMID:25761385

  18. Constrained Deep Weak Supervision for Histopathology Image Segmentation.

    Science.gov (United States)

    Jia, Zhipeng; Huang, Xingyi; Chang, Eric I-Chao; Xu, Yan

    2017-11-01

    In this paper, we develop a new weakly supervised learning algorithm to learn to segment cancerous regions in histopathology images. This paper is under a multiple instance learning (MIL) framework with a new formulation, deep weak supervision (DWS); we also propose an effective way to introduce constraints to our neural networks to assist the learning process. The contributions of our algorithm are threefold: 1) we build an end-to-end learning system that segments cancerous regions with fully convolutional networks (FCNs) in which image-to-image weakly-supervised learning is performed; 2) we develop a DWS formulation to exploit multi-scale learning under weak supervision within FCNs; and 3) constraints about positive instances are introduced in our approach to effectively explore additional weakly supervised information that is easy to obtain, yielding a significant boost to the learning process. The proposed algorithm, abbreviated as DWS-MIL, is easy to implement and can be trained efficiently. Our system demonstrates state-of-the-art results on large-scale histopathology image data sets and can be applied to various applications in medical imaging beyond histopathology images, such as MRI, CT, and ultrasound images.

  19. FIGENIX: Intelligent automation of genomic annotation: expertise integration in a new software platform

    Directory of Open Access Journals (Sweden)

    Pontarotti Pierre

    2005-08-01

    Background: Two of the main objectives of the genomic and post-genomic era are to structurally and functionally annotate genomes, which consists of detecting genes' position and structure and inferring their function (as well as other features of genomes). Structural and functional annotation both require the complex chaining of numerous different software tools, algorithms and methods under the supervision of a biologist. The automation of these pipelines is necessary to manage the huge amounts of data released by sequencing projects. Several pipelines already automate some of this complex chaining but still require an important contribution from biologists for supervising and controlling the results at various steps. Results: Here we propose an innovative automated platform, FIGENIX, which includes an expert system capable of substituting for human expertise at several key steps. FIGENIX currently automates complex pipelines of structural and functional annotation under the supervision of the expert system (which allows, for example, to make key decisions, check intermediate results or refine the dataset). The quality of the results produced by FIGENIX is comparable to those obtained by expert biologists, with a drastic gain in terms of time costs and avoidance of errors due to the human manipulation of data. Conclusion: The core engine and expert system of the FIGENIX platform currently handle complex annotation processes of broad interest for the genomic community. They could be easily adapted to new, or more specialized, pipelines, such as the annotation of miRNAs, the classification of complex multigenic families, the annotation of regulatory elements and other genomic features of interest.

  20. Feasibility of automated 3-dimensional magnetic resonance imaging pancreas segmentation

    Directory of Open Access Journals (Sweden)

    Shuiping Gou, PhD

    2016-07-01

    Conclusions: Our study demonstrated potential feasibility of automated segmentation of the pancreas on MRI scans with minimal human supervision at the beginning of imaging acquisition. The achieved accuracy is promising for organ localization.

  1. Weakly Supervised Dictionary Learning

    Science.gov (United States)

    You, Zeyu; Raich, Raviv; Fern, Xiaoli Z.; Kim, Jinsub

    2018-05-01

    We present a probabilistic modeling and inference framework for discriminative analysis dictionary learning under a weak supervision setting. Dictionary learning approaches have been widely used for tasks such as low-level signal denoising and restoration as well as high-level classification tasks, which can be applied to audio and image analysis. Synthesis dictionary learning aims at jointly learning a dictionary and corresponding sparse coefficients to provide accurate data representation. This approach is useful for denoising and signal restoration, but may lead to sub-optimal classification performance. By contrast, analysis dictionary learning provides a transform that maps data to a sparse discriminative representation suitable for classification. We consider the problem of analysis dictionary learning for time-series data under a weak supervision setting in which signals are assigned with a global label instead of an instantaneous label signal. We propose a discriminative probabilistic model that incorporates both label information and sparsity constraints on the underlying latent instantaneous label signal using cardinality control. We present the expectation maximization (EM) procedure for maximum likelihood estimation (MLE) of the proposed model. To facilitate a computationally efficient E-step, we propose both a chain and a novel tree graph reformulation of the graphical model. The performance of the proposed model is demonstrated on both synthetic and real-world data.

  2. Supervised Transfer Sparse Coding

    KAUST Repository

    Al-Shedivat, Maruan

    2014-07-27

    A combination of the sparse coding and transfer learning techniques was shown to be accurate and robust in classification tasks where training and testing objects have a shared feature space but are sampled from different underlying distributions, i.e., belong to different domains. The key assumption in such case is that in spite of the domain disparity, samples from different domains share some common hidden factors. Previous methods often assumed that all the objects in the target domain are unlabeled, and thus the training set solely comprised objects from the source domain. However, in real world applications, the target domain often has some labeled objects, or one can always manually label a small number of them. In this paper, we explore such possibility and show how a small number of labeled data in the target domain can significantly leverage classification accuracy of the state-of-the-art transfer sparse coding methods. We further propose a unified framework named supervised transfer sparse coding (STSC) which simultaneously optimizes sparse representation, domain transfer and classification. Experimental results on three applications demonstrate that a little manual labeling and then learning the model in a supervised fashion can significantly improve classification accuracy.

  3. Automics: an integrated platform for NMR-based metabonomics spectral processing and data analysis

    Directory of Open Access Journals (Sweden)

    Qu Lijia

    2009-03-01

    Background: Spectral processing and post-experimental data analysis are the major tasks in NMR-based metabonomics studies. While there are commercial and free licensed software tools available to assist these tasks, researchers usually have to use multiple software packages for their studies because software packages generally focus on specific tasks. It would be beneficial to have a highly integrated platform, in which these tasks can be completed within one package. Moreover, with open source architecture, newly proposed algorithms or methods for spectral processing and data analysis can be implemented much more easily and accessed freely by the public. Results: In this paper, we report an open source software tool, Automics, which is specifically designed for NMR-based metabonomics studies. Automics is a highly integrated platform that provides functions covering almost all the stages of NMR-based metabonomics studies. Automics provides high throughput automatic modules with most recently proposed algorithms and powerful manual modules for 1D NMR spectral processing. In addition to spectral processing functions, powerful features for data organization, data pre-processing, and data analysis have been implemented. Nine statistical methods can be applied to analyses including: feature selection (Fisher's criterion), data reduction (PCA, LDA, ULDA), unsupervised clustering (K-Means) and supervised regression and classification (PLS/PLS-DA, KNN, SIMCA, SVM). Moreover, Automics has a user-friendly graphical interface for visualizing NMR spectra and data analysis results. The functional ability of Automics is demonstrated with an analysis of a type 2 diabetes metabolic profile. Conclusion: Automics facilitates high throughput 1D NMR spectral processing and high dimensional data analysis for NMR-based metabonomics applications. Using Automics, users can complete spectral processing and data analysis within one software package in most cases.

  4. Automics: an integrated platform for NMR-based metabonomics spectral processing and data analysis.

    Science.gov (United States)

    Wang, Tao; Shao, Kang; Chu, Qinying; Ren, Yanfei; Mu, Yiming; Qu, Lijia; He, Jie; Jin, Changwen; Xia, Bin

    2009-03-16

    Spectral processing and post-experimental data analysis are the major tasks in NMR-based metabonomics studies. While there are commercial and free licensed software tools available to assist these tasks, researchers usually have to use multiple software packages for their studies because software packages generally focus on specific tasks. It would be beneficial to have a highly integrated platform, in which these tasks can be completed within one package. Moreover, with open source architecture, newly proposed algorithms or methods for spectral processing and data analysis can be implemented much more easily and accessed freely by the public. In this paper, we report an open source software tool, Automics, which is specifically designed for NMR-based metabonomics studies. Automics is a highly integrated platform that provides functions covering almost all the stages of NMR-based metabonomics studies. Automics provides high throughput automatic modules with most recently proposed algorithms and powerful manual modules for 1D NMR spectral processing. In addition to spectral processing functions, powerful features for data organization, data pre-processing, and data analysis have been implemented. Nine statistical methods can be applied to analyses including: feature selection (Fisher's criterion), data reduction (PCA, LDA, ULDA), unsupervised clustering (K-Means) and supervised regression and classification (PLS/PLS-DA, KNN, SIMCA, SVM). Moreover, Automics has a user-friendly graphical interface for visualizing NMR spectra and data analysis results. The functional ability of Automics is demonstrated with an analysis of a type 2 diabetes metabolic profile. Automics facilitates high throughput 1D NMR spectral processing and high dimensional data analysis for NMR-based metabonomics applications. Using Automics, users can complete spectral processing and data analysis within one software package in most cases. Moreover, with its open source architecture, interested
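
    The analysis stages listed in both records map naturally onto common library calls. A minimal sketch, assuming binned 1D spectra in a plain array and using scikit-learn stand-ins for three of the nine methods (PCA, PLS-DA via PLS regression on class labels, and an SVM); this is not Automics itself:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import PLSRegression
from sklearn.svm import SVC

rng = np.random.default_rng(1)
spectra = rng.random((40, 500))     # 40 binned 1D NMR spectra, 500 bins each
labels = rng.integers(0, 2, 40)     # e.g., control vs. type 2 diabetes

scores = PCA(n_components=2).fit_transform(spectra)         # unsupervised overview
plsda = PLSRegression(n_components=2).fit(spectra, labels)  # PLS-DA: regress class labels
svm = SVC(kernel="linear").fit(spectra, labels)             # supervised classification
```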

  5. Advanced Music Therapy Supervision Training

    DEFF Research Database (Denmark)

    Pedersen, Inge Nygaard

    2009-01-01

    The presentation will illustrate training models in supervision for experienced music therapists where transference/counter-transference issues are in focus. Musical, verbal and body-related tools will be illustrated from supervision practice by the presenters. A possibility to experience small supervision training excerpts live in the workshop will be offered. The workshop will include demonstrations of a variety of supervision methods and techniques used in (a) postgraduate music therapy training programs and (b) a variety of work contexts such as psychiatry and somatic music psychotherapy.

  6. Learning Dynamics in Doctoral Supervision

    DEFF Research Database (Denmark)

    Kobayashi, Sofie

    This study investigates learning opportunities in supervision with multiple supervisors. This was investigated through observations and recording of supervision, and subsequent analysis of transcripts. The analyses used different perspectives on learning: learning as participation, positioning theory and variation theory. The research illuminates how learning opportunities are created in the interaction through the scientific discussions. It also shows how multiple supervisors can contribute to supervision by providing new perspectives and opinions that have a potential for creating new understandings. The combination of different theoretical frameworks, from the perspectives of learning as individual acquisition and a sociocultural perspective on learning, contributed to a nuanced illustration of the otherwise implicit practices of supervision.

  7. Semi-Supervised Transductive Hot Spot Predictor Working on Multiple Assumptions

    KAUST Repository

    Wang, Jim Jing-Yan; Almasri, Islam; Shi, Yuexiang; Gao, Xin

    2014-01-01

    None of the transductive semi-supervised algorithms takes all three semi-supervised assumptions, i.e., the smoothness, cluster and manifold assumptions, together into account during learning. In this paper, we propose a novel semi-supervised method for hot spot residue prediction...

  8. Exploiting unsupervised and supervised classification for segmentation of the pathological lung in CT

    International Nuclear Information System (INIS)

    Korfiatis, P; Costaridou, L; Kalogeropoulou, C; Petsas, T; Daoussis, D; Adonopoulos, A

    2009-01-01

    Delineation of lung fields in the presence of diffuse lung parenchymal diseases (DLPDs), such as interstitial pneumonias (IP), challenges segmentation algorithms. To deal with IP patterns affecting the lung border, an automated image texture classification scheme is proposed. The proposed segmentation scheme is based on supervised texture classification between lung tissue (normal and abnormal) and surrounding tissue (pleura and thoracic wall) in the lung border region. This region is coarsely defined around an initial estimate of the lung border, provided by means of Markov Random Field modeling and morphological operations. Subsequently, a support vector machine classifier was trained to distinguish between the above two classes of tissue, using textural features of the gray scale and wavelet domains. 17 patients diagnosed with IP, secondary to connective tissue diseases, were examined. Segmentation performance in terms of overlap was 0.924±0.021, and for shape differentiation the mean, rms and maximum distances were 1.663±0.816, 2.334±1.574 and 8.0515±6.549 mm, respectively. An accurate, automated scheme is proposed for segmenting abnormal lung fields in HRCT affected by IP

  9. Exploiting unsupervised and supervised classification for segmentation of the pathological lung in CT

    Science.gov (United States)

    Korfiatis, P.; Kalogeropoulou, C.; Daoussis, D.; Petsas, T.; Adonopoulos, A.; Costaridou, L.

    2009-07-01

    Delineation of lung fields in the presence of diffuse lung parenchymal diseases (DLPDs), such as interstitial pneumonias (IP), challenges segmentation algorithms. To deal with IP patterns affecting the lung border, an automated image texture classification scheme is proposed. The proposed segmentation scheme is based on supervised texture classification between lung tissue (normal and abnormal) and surrounding tissue (pleura and thoracic wall) in the lung border region. This region is coarsely defined around an initial estimate of the lung border, provided by means of Markov Random Field modeling and morphological operations. Subsequently, a support vector machine classifier was trained to distinguish between the above two classes of tissue, using textural features of the gray scale and wavelet domains. 17 patients diagnosed with IP, secondary to connective tissue diseases, were examined. Segmentation performance in terms of overlap was 0.924±0.021, and for shape differentiation the mean, rms and maximum distances were 1.663±0.816, 2.334±1.574 and 8.0515±6.549 mm, respectively. An accurate, automated scheme is proposed for segmenting abnormal lung fields in HRCT affected by IP

  10. Public Supervision over Private Relationships : Towards European Supervision Private Law?

    NARCIS (Netherlands)

    Cherednychenko, O.O.

    2014-01-01

    The rise of public supervision over private relationships in many areas of private law has led to the development of what, in the author’s view, could be called ‘European supervision private law’. This emerging body of law forms part of European regulatory private law and is made up of

  11. Supervised Gaussian mixture model based remote sensing image ...

    African Journals Online (AJOL)

    Using the supervised classification technique, both simulated and empirical satellite remote sensing data are used to train and test the Gaussian mixture model algorithm. For the purpose of validating the experiment, the resulting classified satellite image is compared with the ground truth data. For the simulated modelling, ...

  12. Comparison of two automated instruments for Epstein-Barr virus serology in a large adult hospital and implementation of an Epstein-Barr virus nuclear antigen-based testing algorithm.

    Science.gov (United States)

    Al Sidairi, Hilal; Binkhamis, Khalifa; Jackson, Colleen; Roberts, Catherine; Heinstein, Charles; MacDonald, Jimmy; Needle, Robert; Hatchette, Todd F; LeBlanc, Jason J

    2017-11-01

    Serology remains the mainstay for diagnosis of Epstein-Barr virus (EBV) infection. This study compared two automated platforms (BioPlex 2200 and Architect i2000SR) to test three EBV serological markers: viral capsid antigen (VCA) immunoglobulins of class M (IgM), VCA immunoglobulins of class G (IgG) and EBV nuclear antigen-1 (EBNA-1) IgG. Using sera from 65 patients at various stages of EBV disease, BioPlex demonstrated near-perfect agreement for all EBV markers compared to a consensus reference. The agreement for Architect was near-perfect for VCA IgG and EBNA-1 IgG, and substantial for VCA IgM despite five equivocal results. Since the majority of testing in our hospital was from adults with EBNA-1 IgG positive results, post-implementation analysis of an EBNA-based algorithm showed advantages over parallel testing of the three serologic markers. This small verification demonstrated that both automated systems for EBV serology had good performance for all EBV markers, and an EBNA-based testing algorithm is ideal for an adult hospital.
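
    An EBNA-based algorithm of this general shape can be written as a handful of decision rules. The function below encodes textbook-style serology patterns and is purely illustrative; it is not the validated algorithm or thresholds used by this laboratory:

```python
def ebv_stage(vca_igm: bool, vca_igg: bool, ebna1_igg: bool) -> str:
    """Illustrative EBV serology interpretation (textbook-style patterns)."""
    if ebna1_igg:
        return "past infection"          # EBNA-1 IgG appears months after onset
    if vca_igm and vca_igg:
        return "acute/recent infection"
    if vca_igm:
        return "early acute infection"
    if vca_igg:
        return "indeterminate - consider repeat or further testing"
    return "seronegative (susceptible)"

# An EBNA-based algorithm screens with EBNA-1 IgG first and reflexes the
# remaining markers only when it is negative, reducing test volume in adults.
print(ebv_stage(vca_igm=False, vca_igg=True, ebna1_igg=True))
```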

  13. Detection of Erroneous Payments Utilizing Supervised And Unsupervised Data Mining Techniques

    National Research Council Canada - National Science Library

    Yanik, Todd

    2004-01-01

    ... (C&RT)) modeling algorithms. S-Plus software was used to construct a supervised model of vendor payment data using Logistic Regression, along with the Hosmer-Lemeshow Test, for testing the predictive ability of the model...

  14. Fully Decentralized Semi-supervised Learning via Privacy-preserving Matrix Completion.

    Science.gov (United States)

    Fierimonte, Roberto; Scardapane, Simone; Uncini, Aurelio; Panella, Massimo

    2016-08-26

    Distributed learning refers to the problem of inferring a function when the training data are distributed among different nodes. While significant work has been done in the contexts of supervised and unsupervised learning, the intermediate case of semi-supervised learning in the distributed setting has received less attention. In this paper, we propose an algorithm for this class of problems, by extending the framework of manifold regularization. The main component of the proposed algorithm consists of a fully distributed computation of the adjacency matrix of the training patterns. To this end, we propose a novel algorithm for low-rank distributed matrix completion, based on the framework of diffusion adaptation. Overall, the distributed semi-supervised algorithm is efficient and scalable, and it can preserve privacy by the inclusion of flexible privacy-preserving mechanisms for similarity computation. The experimental results and comparison on a wide range of standard semi-supervised benchmarks validate our proposal.

  15. Discriminative Localization in CNNs for Weakly-Supervised Segmentation of Pulmonary Nodules.

    Science.gov (United States)

    Feng, Xinyang; Yang, Jie; Laine, Andrew F; Angelini, Elsa D

    2017-09-01

    Automated detection and segmentation of pulmonary nodules on lung computed tomography (CT) scans can facilitate early lung cancer diagnosis. Existing supervised approaches for automated nodule segmentation on CT scans require voxel-based annotations for training, which are labor- and time-consuming to obtain. In this work, we propose a weakly-supervised method that generates accurate voxel-level nodule segmentation trained with image-level labels only. By adapting a convolutional neural network (CNN) trained for image classification, our proposed method learns discriminative regions from the activation maps of convolution units at different scales, and identifies the true nodule location with a novel candidate-screening framework. Experimental results on the public LIDC-IDRI dataset demonstrate that, our weakly-supervised nodule segmentation framework achieves competitive performance compared to a fully-supervised CNN-based segmentation method.
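
    The discriminative-localization step that this family of methods builds on (class activation mapping) is a one-line weighted sum. The sketch below shows only that generic CAM computation on toy arrays; the paper's multi-scale adaptation and candidate-screening framework are not reproduced:

```python
import numpy as np

def class_activation_map(feature_maps, fc_weights, class_idx):
    """Class activation mapping: weight the last conv layer's feature maps by
    one class's classification weights to localize discriminative regions."""
    cam = np.tensordot(fc_weights[class_idx], feature_maps, axes=1)  # -> (H, W)
    cam -= cam.min()
    return cam / (cam.max() + 1e-8)    # normalize to [0, 1] for thresholding

# Toy shapes: 64 feature maps of 16x16 from a CNN, 2 output classes.
rng = np.random.default_rng(0)
cam = class_activation_map(rng.normal(size=(64, 16, 16)),
                           rng.normal(size=(2, 64)), class_idx=1)
candidate = cam > 0.5                  # threshold into a candidate nodule region
```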

  16. Accreditation to supervise research

    International Nuclear Information System (INIS)

    Mauclaire, Laurent

    2003-01-01

    After a recall of his academic research works, an indication of all his publications and of his teaching and research supervising activities, and a summary of his scientific activity, the author proposes an overview of his research works, which addressed the study of radio-tracers for nuclear medicine and the study of 2,6-dimethyl beta-cyclodextrin. Both topics are then presented and discussed in more detail. For the first one, the author notably studied iodine radiochemistry and the elaboration of new compounds for the visualization of dopamine reuptake (development of a radiopharmaceutical drug labelled with technetium). For the second one, the author reports the use of modified cyclodextrins for the transport of lipophilic radio-tracers

  17. Medical supervision of radiation workers

    International Nuclear Information System (INIS)

    Santani, S.B.; Nandakumar, A.N.; Subramanian, G.

    1982-01-01

    The basic elements of an occupational medical supervision programme for radiation workers are very much the same as those relevant to other professions with some additional special features. This paper cites examples from literature and recommends measures such as spot checks and continuance of medical supervision even after a radiation worker leaves this profession. (author)

  18. Tværfaglig supervision

    DEFF Research Database (Denmark)

    Interdisciplinary supervision covers the supervision of different professional groups. It is a complex discipline that places great demands on the supervisor. The first part of the book presents four supervision models: a general one, a psychodynamic one, a cognitive-behavioural one and a narrative one. The second part...

  19. Assessment of Counselors' Supervision Processes

    Science.gov (United States)

    Ünal, Ali; Sürücü, Abdullah; Yavuz, Mustafa

    2013-01-01

    The aim of this study is to investigate elementary and high school counselors' supervision processes and efficiency of their supervision. The interview method was used as it was thought to be better for realizing the aim of the study. The study group was composed of ten counselors who were chosen through purposeful sampling method. Data were…

  20. Online co-regularized algorithms

    NARCIS (Netherlands)

    Ruijter, T. de; Tsivtsivadze, E.; Heskes, T.

    2012-01-01

    We propose an online co-regularized learning algorithm for classification and regression tasks. We demonstrate that by sequentially co-regularizing prediction functions on unlabeled data points, our algorithm provides improved performance in comparison to supervised methods on several UCI benchmarks

  1. Supervision Duty of School Principals

    Directory of Open Access Journals (Sweden)

    Kürşat YILMAZ

    2009-04-01

    Supervision by school administrators is becoming more and more important. The change in the roles of school administrators has a great effect on that increase. At present, school administrators are considered not merely technical directors but instructional leaders. This has increased the importance of school administrators' expected supervision acts. In this respect, the aim of this study is to make a conceptual analysis of school administrators' supervision duties. For this reason, a literature review related to supervision and contemporary supervision approaches was done, and the official documents concerning supervision were examined. As a result, it can be said that school administrators' supervision duties have become very important, and these duties must certainly be carried out by school administrators.

  2. Nursing supervision for care comprehensiveness

    Directory of Open Access Journals (Sweden)

    Lucieli Dias Pedreschi Chaves

    ABSTRACT Objective: To reflect on nursing supervision as a management tool for care comprehensiveness by nurses, considering its potential and limits in the current scenario. Method: A reflective study based on discourse about nursing supervision, presenting theoretical and practical concepts and approaches. Results: Limits on the exercise of supervision are related to the organization of healthcare services based on the functional and clinical model of care, in addition to possible gaps in the nurse training process and work overload. Regarding the potential, researchers emphasize that supervision is a tool for coordinating care and management actions, which may favor care comprehensiveness, and stimulate positive attitudes toward cooperation and contribution within teams, co-responsibility, and educational development at work. Final considerations: Nursing supervision may help enhance care comprehensiveness by implying continuous reflection on including the dynamics of the healthcare work process and user needs in care networks.

  3. Development and implementation of full-automatic supervision and control programme for CEFR refueling control system

    International Nuclear Information System (INIS)

    Zhu Hao; Dong Shengguo; Ma Hongsheng; Zhao Lixia

    2011-01-01

    In order to make the process of CEFR refueling more convenient and reliable, a computer supervision and control system was designed according to the CEFR refueling technology. The supervision and control functions and the database function were developed on the basis of KingView and SQL Server 2000. The reactor core was fully loaded with fuel by the system, and full automation of the CEFR refueling process was implemented. (authors)

  4. Discriminative semi-supervised feature selection via manifold regularization.

    Science.gov (United States)

    Xu, Zenglin; King, Irwin; Lyu, Michael Rung-Tsong; Jin, Rong

    2010-07-01

    Feature selection has attracted a huge amount of interest in both research and application communities of data mining. We consider the problem of semi-supervised feature selection, where we are given a small amount of labeled examples and a large amount of unlabeled examples. Since a small number of labeled samples are usually insufficient for identifying the relevant features, the critical problem arising from semi-supervised feature selection is how to take advantage of the information underneath the unlabeled data. To address this problem, we propose a novel discriminative semi-supervised feature selection method based on the idea of manifold regularization. The proposed approach selects features through maximizing the classification margin between different classes and simultaneously exploiting the geometry of the probability distribution that generates both labeled and unlabeled data. In comparison with previous semi-supervised feature selection algorithms, our proposed semi-supervised feature selection method is an embedded feature selection method and is able to find more discriminative features. We formulate the proposed feature selection method into a convex-concave optimization problem, where the saddle point corresponds to the optimal solution. To find the optimal solution, the level method, a fairly recent optimization method, is employed. We also present a theoretic proof of the convergence rate for the application of the level method to our problem. Empirical evaluation on several benchmark data sets demonstrates the effectiveness of the proposed semi-supervised feature selection method.

  5. Remote supervision of GIS monitoring system

    Energy Technology Data Exchange (ETDEWEB)

    Pannunzio, J.; Juge, P.; Ficheux, A.; Rayon, J.L. [Areva T and D Automation Canada Inc., Monteal, PQ (Canada)

    2007-07-01

    Operators of gas-insulated substations (GIS) are increasingly concerned with failure prevention, scheduled maintenance, personnel safety and shortage of maintenance crews. Until recently, the density level of the insulating gas sulfur hexafluoride (SF6) was the only parameter controlled in gas-insulated substations. Modern digital control and monitoring equipment has been widely used in the past decade. Remote indication of gas density and the status of dynamic components was made possible and shown on local control panels. Modern GIS monitoring systems offer features such as SF6 monitoring, SF6 leakage trends, and internal arc localization and detection. The required information is recorded in a local computer and displayed on a local human machine interface (HMI) or a local industrial PC mounted next to the GIS. These monitoring systems are used as decision-making tools to facilitate maintenance activities and optimize the management of assets. This paper presented the latest developments in digital monitoring systems in terms of modern digital architecture; management of information flows between monitoring systems and control systems; operation of remote supervision; configuration of high voltage substations and information sharing; and types of links between the GIS room and remote supervision. This paper also demonstrated what can be achieved by moving the central HMI of a GIS monitoring system to the decision-making centres. It was shown that integrated features that allow remote on-line or automated management have reached an acceptable level of reliability and comfort for operators. 5 figs.

  6. Automated Student Model Improvement

    Science.gov (United States)

    Koedinger, Kenneth R.; McLaughlin, Elizabeth A.; Stamper, John C.

    2012-01-01

    Student modeling plays a critical role in developing and improving instruction and instructional technologies. We present a technique for automated improvement of student models that leverages the DataShop repository, crowd sourcing, and a version of the Learning Factors Analysis algorithm. We demonstrate this method on eleven educational…

  7. Social construction : discursive perspective towards supervision

    OpenAIRE

    Naujanienė, Rasa

    2010-01-01

    The aim of the publication is to discuss the development of supervision theory in relation to social theory and social work theory and practice. The main focus of the analysis is on social constructionist ideas and their relevance to supervision practice. The development of supervision is related to supervision practice. Starting in the 19th century with the giving of practical advice, supervision arrived in the 21st century as dialogue based on critical and philosophical reflection. Different theory and pr...

  8. Automated algorithms for detecting sleep period time using a multi-sensor pattern-recognition activity monitor from 24 h free-living data in older adults.

    Science.gov (United States)

    Cabanas-Sánchez, Verónica; Higueras-Fresnillo, Sara; De la Cámara, Miguel Ángel; Veiga, Oscar L; Martinez-Gomez, David

    2018-05-16

    The aims of the present study were (i) to develop automated algorithms to identify the sleep period time in 24 h data from the Intelligent Device for Energy Expenditure and Activity (IDEEA) in older adults, and (ii) to analyze the agreement between these algorithms to identify the sleep period time as compared to self-reported data and expert visual analysis of accelerometer raw data. This study comprised 50 participants, aged 65-85 years. Fourteen automated algorithms were developed. Participants reported their bedtime and waking time on the days on which they wore the device. A well-trained expert reviewed each IDEEA file in order to visually identify bedtime and waking time on each day. To explore the agreement between methods, Pearson correlations, mean differences, mean percentage errors, accuracy, sensitivity and specificity, and the Bland-Altman method were calculated. With 87 d of valid data, algorithms 6, 7, 11 and 12 achieved higher levels of agreement in determining sleep period time when compared to self-reported data (mean difference = -0.34 to 0.01 h d^-1; mean absolute error = 10.66%-11.44%; r = 0.515-0.686; accuracy = 95.0%-95.6%; sensitivity = 93.0%-95.8%; specificity = 95.7%-96.4%) and expert visual analysis (mean difference = -0.04 to 0.31 h d^-1; mean absolute error = 5.0%-6.97%; r = 0.620-0.766; accuracy = 97.2%-98.0%; sensitivity = 94.5%-97.6%; specificity = 98.4%-98.8%). Bland-Altman plots showed no systematic biases in these comparisons (all p > 0.05). Differences between methods did not vary significantly by gender, age, obesity, self-rated health, or the presence of chronic conditions. These four algorithms can be used to identify, easily and with adequate accuracy, the sleep period time from 24 h free-living IDEEA data in older adults.
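
    The agreement statistics reported here are straightforward to compute once both duration series are in hand. A small sketch of the Bland-Altman bias and limits, mean absolute percentage error and Pearson r on synthetic durations (the data and the 87-day count below are illustrative):

```python
import numpy as np

def agreement_stats(algo_h, ref_h):
    """Agreement between algorithm and reference sleep-period durations (hours)."""
    algo, ref = np.asarray(algo_h, float), np.asarray(ref_h, float)
    diff = algo - ref
    bias = diff.mean()                           # Bland-Altman bias
    loa = 1.96 * diff.std(ddof=1)                # 95% limits of agreement
    return {"bias_h": bias,
            "loa_h": (bias - loa, bias + loa),
            "mape_%": 100 * np.mean(np.abs(diff) / ref),
            "r": np.corrcoef(algo, ref)[0, 1]}

rng = np.random.default_rng(0)
ref = rng.uniform(6, 9, 87)                      # e.g., 87 valid days
print(agreement_stats(ref + rng.normal(0, 0.4, 87), ref))
```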

  9. Quality-Related Monitoring and Grading of Granulated Products by Weibull-Distribution Modeling of Visual Images with Semi-Supervised Learning.

    Science.gov (United States)

    Liu, Jinping; Tang, Zhaohui; Xu, Pengfei; Liu, Wenzhong; Zhang, Jin; Zhu, Jianyong

    2016-06-29

    The topic of online product quality inspection (OPQI) with smart visual sensors is attracting increasing interest in both the academic and industrial communities on account of the natural connection between the visual appearance of products and their underlying qualities. Visual images captured from granulated products (GPs), e.g., cereal products and fabric textiles, are composed of a large number of independent particles or stochastically stacked, locally homogeneous fragments, whose analysis and understanding remains challenging. A method of image statistical modeling-based OPQI for GP quality grading and monitoring by a Weibull distribution (WD) model with a semi-supervised learning classifier is presented. WD model parameters (WD-MPs) of GP images' spatial structures, obtained with omnidirectional Gaussian derivative filtering (OGDF) and demonstrated theoretically to obey a specific WD model of integral form, were extracted as the visual features. Then, a co-training-style semi-supervised classifier algorithm, named COSC-Boosting, was exploited for semi-supervised GP quality grading, by integrating two independent classifiers with complementary natures in the face of scarce labeled samples. The effectiveness of the proposed OPQI method was verified in the field of automated rice quality grading, where it was compared with commonly-used methods and showed superior performance, laying a foundation for the quality control of GPs on assembly lines.
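
    The core feature-extraction idea, that gradient-magnitude responses of a granulated texture follow a Weibull law whose parameters characterize the product, can be sketched compactly. The code below simplifies OGDF to two axis-aligned Gaussian derivatives and fits a two-parameter Weibull; the COSC-Boosting classifier itself is not shown:

```python
import numpy as np
from scipy import ndimage, stats

def weibull_texture_features(image, sigma=1.0):
    """Fit a Weibull distribution to Gaussian-derivative gradient magnitudes
    and return (shape, scale) as texture features for quality grading."""
    gx = ndimage.gaussian_filter(image, sigma, order=(0, 1))   # d/dx response
    gy = ndimage.gaussian_filter(image, sigma, order=(1, 0))   # d/dy response
    mag = np.hypot(gx, gy).ravel() + 1e-6     # avoid zeros at the fit boundary
    shape, _, scale = stats.weibull_min.fit(mag, floc=0)
    return shape, scale

rng = np.random.default_rng(0)
print(weibull_texture_features(rng.random((128, 128))))   # toy "grain" image
```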

  10. Automation Processes and Blockchain Systems

    OpenAIRE

    Hegadekatti, Kartik

    2017-01-01

    Blockchain Systems and Ubiquitous computing are changing the way we do business and lead our lives. One of the most important applications of Blockchain technology is in automation processes and Internet-of-Things (IoT). Machines have so far been limited in ability primarily because they have restricted capacity to exchange value. Any monetary exchange of value has to be supervised by humans or human-based centralised ledgers. Blockchain technology changes all that. It allows machines to have...

  11. Online supervision at the university

    DEFF Research Database (Denmark)

    Bengtsen, Søren Smedegaard; Jensen, Gry Sandholm

    2015-01-01

    supervision proves unhelpful when trying to understand how online supervision and feedback is a pedagogical phenomenon in its own right, irreducible to the face-to-face context. Secondly, we show that not enough attention has been given to the way different digital tools and platforms influence... pedagogy, we forge a new concept of "format supervision" that enables supervisors to understand and reflect on their supervision practice, not as caught in the physical-virtual divide, but as a choice between face-to-face and online formats that each condition the supervisory dialogue in their own particular...

  12. Local supervision of solariums

    International Nuclear Information System (INIS)

    2004-01-01

    In Norway, new regulations on radiation protection and the application of radiation came into force on the first of January 2004. Local authorities may now perform the supervision of solariums. There are over 500 solar studios in Norway, with over 5000 solariums accessible to the public. An unknown number of solariums are in private homes, on workplaces, and in hotels and fitness studios. Norway currently has the highest frequency of skin cancer in Europe. The frequency of mole cancer has increased sixfold during the last 30 years, and 200 people die each year of this type of cancer. The Nordic cancer registers estimate that 95 per cent of skin cancer cases could have been avoided by limiting sunbathing. It is unknown how many cases are due to the use of solariums, but several studies indicate an increased risk of mole cancer caused by solariums. A previous inspection of 130 solariums found that only 30 per cent had correct tubes and lamps, and only one solarium satisfied all the requirements of the regulations; this has since improved. Under the new regulations, all solarium businesses offering cosmetic solariums for sale, rental or use have an obligation to submit reports to the Radiation Protection Authority

  13. Supervised learning of probability distributions by neural networks

    Science.gov (United States)

    Baum, Eric B.; Wilczek, Frank

    1988-01-01

    Supervised learning algorithms for feedforward neural networks are investigated analytically. The back-propagation algorithm described by Werbos (1974), Parker (1985), and Rumelhart et al. (1986) is generalized by redefining the values of the input and output neurons as probabilities. The synaptic weights are then varied to follow gradients in the logarithm of likelihood rather than in the error. This modification is shown to provide a more rigorous theoretical basis for the algorithm and to permit more accurate predictions. A typical application involving a medical-diagnosis expert system is discussed.
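
    The practical consequence of redefining outputs as probabilities is that the weight update follows the log-likelihood gradient, which for a logistic unit reduces to (y - p)x. A minimal sketch of that update, with a comment contrasting it against the squared-error gradient it replaces:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def log_likelihood_step(w, x, y, lr=0.1):
    """One ascent step on L = y*log(p) + (1-y)*log(1-p) with p = sigmoid(w.x).
    The gradient of L with respect to w simplifies to (y - p) * x."""
    p = sigmoid(w @ x)
    return w + lr * (y - p) * x

# Squared-error learning would instead use (y - p) * p * (1 - p) * x, which
# vanishes when the unit saturates; the likelihood gradient does not.
w = np.zeros(3)
for _ in range(100):
    w = log_likelihood_step(w, np.array([1.0, 0.5, -0.2]), y=1.0)
```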

  14. Nonparametric Mixture Models for Supervised Image Parcellation.

    Science.gov (United States)

    Sabuncu, Mert R; Yeo, B T Thomas; Van Leemput, Koen; Fischl, Bruce; Golland, Polina

    2009-09-01

    We present a nonparametric, probabilistic mixture model for the supervised parcellation of images. The proposed model yields segmentation algorithms conceptually similar to the recently developed label fusion methods, which register a new image with each training image separately. Segmentation is achieved via the fusion of transferred manual labels. We show that in our framework various settings of a model parameter yield algorithms that use image intensity information differently in determining the weight of a training subject during fusion. One particular setting computes a single, global weight per training subject, whereas another setting uses locally varying weights when fusing the training data. The proposed nonparametric parcellation approach capitalizes on recently developed fast and robust pairwise image alignment tools. The use of multiple registrations allows the algorithm to be robust to occasional registration failures. We report experiments on 39 volumetric brain MRI scans with expert manual labels for the white matter, cerebral cortex, ventricles and subcortical structures. The results demonstrate that the proposed nonparametric segmentation framework yields significantly better segmentation than state-of-the-art algorithms.

  15. BRONCHIAL ASTHMA SUPERVISION AMONG TEENAGERS

    Directory of Open Access Journals (Sweden)

    N.M. Nenasheva

    2008-01-01

    The article highlights the results of the ACT (Asthma Control Test) based bronchial asthma supervision evaluation among teenagers and defines the interrelation of the objective and subjective asthma supervision parameters. The researchers examined 214 male teenagers aged from 16 to 18, suffering from bronchial asthma, who were sent to the allergy department to verify the diagnosis. Bronchial asthma supervision evaluation was assisted by the ACT. The research has shown that over half (56%) of teenagers suffering from mild bronchial asthma report an uncontrolled course, do not receive any adequate pharmacotherapy and are consequently a risk group in terms of bronchial asthma exacerbation. ACT results correlate with the functional indices (FEV1), as well as with the degree of bronchial hyperresponsiveness, which is one of the markers of an allergic inflammation in the lower respiratory passages. Key words: bronchial asthma supervision, ACT, teenagers.

  16. Recent advances in automated system model extraction (SME)

    International Nuclear Information System (INIS)

    Narayanan, Nithin; Bloomsburgh, John; He Yie; Mao Jianhua; Patil, Mahesh B; Akkaraju, Sandeep

    2006-01-01

    In this paper we present two different techniques for automated extraction of system models from FEA models. We discuss two different algorithms: (i) automated N-DOF SME for electrostatically actuated MEMS and (ii) automated N-DOF SME for MEMS inertial sensors. We also present case studies for the two algorithms.

  17. A Cross-Correlated Delay Shift Supervised Learning Method for Spiking Neurons with Application to Interictal Spike Detection in Epilepsy.

    Science.gov (United States)

    Guo, Lilin; Wang, Zhenzhong; Cabrerizo, Mercedes; Adjouadi, Malek

    2017-05-01

    This study introduces a novel learning algorithm for spiking neurons, called CCDS, which is able to learn and reproduce arbitrary spike patterns in a supervised fashion, allowing the processing of spatiotemporal information encoded in the precise timing of spikes. Unlike the Remote Supervised Method (ReSuMe), synapse delays and axonal delays in CCDS are variables that are modulated together with weights during learning. The CCDS rule is both biologically plausible and computationally efficient. The properties of this learning rule are investigated extensively through experimental evaluations in terms of reliability, adaptive learning performance, generality to different neuron models, learning in the presence of noise, effects of its learning parameters and classification performance. Results presented show that the CCDS learning method achieves learning accuracy and learning speed comparable with ReSuMe, but improves classification accuracy when compared to both the Spike Pattern Association Neuron (SPAN) learning rule and the Tempotron learning rule. The merit of the CCDS rule is further validated on a practical example involving the automated detection of interictal spikes in EEG records of patients with epilepsy. Results again show that with proper encoding, the CCDS rule achieves good recognition performance.
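
    The exact CCDS update equations are not reproduced in this record; the sketch below only illustrates the general idea of treating synaptic delays as learnable quantities, nudging each delay so that input spikes arrive closer to the nearest desired output spike. The function name, pairing scheme and learning rate eta are illustrative assumptions, not the paper's rule.

```python
import numpy as np

def update_delays(input_spikes, desired_spikes, delays, eta=0.1):
    """Illustrative delay-shift step (not the exact CCDS rule):
    move each synapse delay so that input spike + delay lands
    closer to the nearest desired output spike time."""
    new_delays = delays.copy()
    for i, spikes in enumerate(input_spikes):   # one spike train per synapse
        for t_in in spikes:
            arrival = t_in + delays[i]
            t_des = desired_spikes[np.argmin(np.abs(desired_spikes - arrival))]
            new_delays[i] += eta * (t_des - arrival)
    return np.clip(new_delays, 0.0, None)       # delays stay non-negative

desired = np.array([10.0, 25.0])
inputs = [np.array([4.0, 18.0]), np.array([7.0])]
delays = np.array([2.0, 2.0])
delays = update_delays(inputs, desired, delays)
```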

  18. Magnetic Resonance Parkinsonism Index: diagnostic accuracy of a fully automated algorithm in comparison with the manual measurement in a large Italian multicentre study in patients with progressive supranuclear palsy

    International Nuclear Information System (INIS)

    Nigro, Salvatore; Arabia, Gennarina; Antonini, Angelo; Weis, Luca; Marcante, Andrea; Tessitore, Alessandro; Cirillo, Mario; Tedeschi, Gioacchino; Zanigni, Stefano; Tonon, Caterina; Calandra-Buonaura, Giovanna; Pezzoli, Gianni; Cilia, Roberto; Zappia, Mario; Nicoletti, Alessandra; Cicero, Calogero Edoardo; Tinazzi, Michele; Tocco, Pierluigi; Cardobi, Nicolo; Quattrone, Aldo

    2017-01-01

    To investigate the reliability of a new in-house automatic algorithm for calculating the Magnetic Resonance Parkinsonism Index (MRPI), in a large multicentre study population of patients affected by progressive supranuclear palsy (PSP) or Parkinson's disease (PD), and healthy controls (HC), and to compare the diagnostic accuracy of the automatic and manual MRPI values. The study included 88 PSP patients, 234 PD patients and 117 controls. MRI was performed using both 3T and 1.5T scanners. Automatic and manual MRPI values were evaluated, and accuracy of both methods in distinguishing PSP from PD and controls was calculated. No statistical differences were found between automated and manual MRPI values in all groups. The automatic MRPI values differentiated PSP from PD with an accuracy of 95 % (manual MRPI accuracy 96 %) and 97 % (manual MRPI accuracy 100 %) for 1.5T and 3T scanners, respectively. Our study showed that the new in-house automated method for MRPI calculation was highly accurate in distinguishing PSP from PD. Our automatic approach allows a widespread use of MRPI in clinical practice and in longitudinal research studies. (orig.)
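
    For reference, the MRPI itself is the product of two ratios measured on MR images (Quattrone et al.): the pons-to-midbrain area ratio times the middle-to-superior cerebellar peduncle width ratio. A minimal sketch with illustrative input values follows; the segmentation steps by which the automated algorithm obtains these four measurements are not shown.

```python
def mrpi(pons_area, midbrain_area, mcp_width, scp_width):
    """Magnetic Resonance Parkinsonism Index:
    (pons area / midbrain area) * (MCP width / SCP width).
    Areas in mm^2 from the midsagittal slice, widths in mm."""
    return (pons_area / midbrain_area) * (mcp_width / scp_width)

# illustrative values only; an MRPI above ~13.55 is a commonly cited PSP cut-off
print(mrpi(pons_area=520.0, midbrain_area=92.0, mcp_width=7.8, scp_width=2.6))
```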

  20. The effectiveness of banking supervision

    OpenAIRE

    Davis, EP; Obasi, U

    2009-01-01

    Banking supervision is an essential aspect of modern financial systems, seeking crucially to monitor risk-taking by banks so as to protect depositors, the government safety net and the economy as a whole against systemic bank failure and its consequences. In this context, this paper seeks to explore the relationship between risk indicators for individual banks and the different approaches to banking supervision adopted around the world. This is the first work to make use of the currently avai...

  1. Sensitivity and specificity of automated analysis of single-field non-mydriatic fundus photographs by the Bosch DR Algorithm: comparison with mydriatic fundus photography (ETDRS) for screening in undiagnosed diabetic retinopathy.

    Directory of Open Access Journals (Sweden)

    Pritam Bawankar

    Full Text Available Diabetic retinopathy (DR) is a leading cause of blindness among working-age adults. Early diagnosis through effective screening programs is likely to improve vision outcomes. The ETDRS seven-standard-field 35-mm stereoscopic color retinal imaging (ETDRS) of the dilated eye is elaborate, requires mydriasis, and is unsuitable for screening. We evaluated an image analysis application for the automated diagnosis of DR from non-mydriatic single-field images. Patients suffering from diabetes for at least 5 years were included if they were 18 years or older. Patients already diagnosed with DR were excluded. Physiologic mydriasis was achieved by placing the subjects in a dark room. Images were captured using a Bosch Mobile Eye Care fundus camera. The images were analyzed by the Retinal Imaging Bosch DR Algorithm for the diagnosis of DR. All subjects also subsequently underwent pharmacological mydriasis and ETDRS imaging. Non-mydriatic and mydriatic images were read by ophthalmologists. The ETDRS readings were used as the gold standard for calculating the sensitivity and specificity of the software. 564 consecutive subjects (1128 eyes) were recruited from six centers in India. Each subject was evaluated at a single outpatient visit. Forty-four of 1128 images (3.9%) could not be read by the algorithm and were categorized as inconclusive. In four subjects, neither eye provided an acceptable image; these four subjects were excluded from the analysis. This left 560 subjects for analysis (1084 eyes). The algorithm correctly diagnosed 531 of 560 cases. The sensitivity, specificity, and positive and negative predictive values were 91%, 97%, 94%, and 95%, respectively. The Bosch DR Algorithm shows favorable sensitivity and specificity in diagnosing DR from non-mydriatic images, and can greatly simplify screening for DR. This also has major implications for telemedicine in the use of screening for retinopathy in patients with diabetes mellitus.
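
    The reported rates follow from an ordinary 2x2 confusion matrix. The sketch below shows the computation on hypothetical counts chosen only to roughly approximate the published rates; the study does not report the underlying table.

```python
def screening_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, PPV and NPV from a 2x2 confusion matrix."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# hypothetical counts for illustration; only the derived rates
# (91%, 97%, 94% and 95%) are reported in the study
print(screening_metrics(tp=160, fp=10, tn=355, fn=15))
```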

  2. A Novel Hybrid Equipment Allocation Algorithm for Automated Container Terminal Based on Tri-dimensional Rail Network

    Institute of Scientific and Technical Information of China (English)

    石小法; 梁林林; 陆青

    2013-01-01

    Existing equipment allocation algorithms for automated container terminals use a one-to-one binding mechanism between devices and tasks. In this paper, we propose an algorithm based on a mixed allocation mechanism to solve the problems caused by one-to-one binding, such as one device remaining busy while another stays continuously idle. At the same time, our algorithm adopts an overall synchronized movement strategy instead of the current step-by-step strategy: when a handling task is received, all devices in the equipment set move to the target location simultaneously. Simulation experiments show that with the proposed algorithm the terminal's handling efficiency is greatly improved and the busy rates of the devices are relatively well balanced.

  3. Performance analysis of a cellular automaton algorithm for the solution of the track reconstruction problem on a manycore server at LIT, JINR

    International Nuclear Information System (INIS)

    Kulakov, I.S.; Baginyan, S.A.; Ivanov, V.V.; Kisel', P.I.

    2013-01-01

    The results of tests of the track reconstruction efficiency, the speed of the algorithm and its scalability with respect to the number of cores of a server with two Intel Xeon E5640 CPUs (in total 8 physical, or 16 logical, cores) are presented and discussed.

  4. Evaluation of a semi-automated computer algorithm for measuring total fat and visceral fat content in lambs undergoing in vivo whole body computed tomography.

    Science.gov (United States)

    Rosenblatt, Alana J; Scrivani, Peter V; Boisclair, Yves R; Reeves, Anthony P; Ramos-Nieves, Jose M; Xie, Yiting; Erb, Hollis N

    2017-10-01

    Computed tomography (CT) is a suitable tool for measuring body fat, since it is non-destructive and can be used to differentiate metabolically active visceral fat from total body fat. Whole body analysis of body fat is likely to be more accurate than single CT slice estimates of body fat. The aim of this study was to assess the agreement between semi-automated computer analysis of whole body volumetric CT data and conventional proximate (chemical) analysis of body fat in lambs. Data were collected prospectively from 12 lambs that underwent duplicate whole body CT, followed by slaughter and carcass analysis by dissection and chemical analysis. Agreement between methods for quantification of total and visceral fat was assessed by Bland-Altman plot analysis. The repeatability of CT was assessed for these measures using the mean difference of duplicated measures. When compared to chemical analysis, CT systematically underestimated total and visceral fat contents by more than 10% of the mean fat weight. Therefore, carcass analysis and semi-automated CT computer measurements were not interchangeable for quantifying body fat content without the use of a correction factor. CT acquisition was repeatable, with a mean difference of repeated measures being close to zero. Therefore, uncorrected whole body CT might have an application for assessment of relative changes in fat content, especially in growing lambs. Copyright © 2017 Elsevier Ltd. All rights reserved.
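
    The Bland-Altman analysis used here reduces to the bias (mean difference between methods) and its 95% limits of agreement. A minimal sketch on made-up paired fat measurements, not the study's data:

```python
import numpy as np

def bland_altman_limits(a, b):
    """Bland-Altman bias and 95% limits of agreement between two methods."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# hypothetical paired fat weights (kg): CT vs chemical analysis
ct = [4.1, 5.0, 3.6, 6.2, 4.8]
chem = [4.6, 5.7, 4.0, 6.9, 5.5]
bias, loa = bland_altman_limits(ct, chem)
print(f"bias {bias:.2f} kg, 95% LoA {loa[0]:.2f} to {loa[1]:.2f} kg")
```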

  5. Ixtetreco: recent extensions for the supervision of a power network

    Energy Technology Data Exchange (ETDEWEB)

    Despouys, O.

    1998-04-01

    A chronicle model describes a class of evolutions or possible behaviours of a given dynamical system. Ixtetreco uses reified logic and time constraints to describe these chronicles, together with algorithms that recognize all instances of these chronicles in a given input stream of observations. Ixtetreco is used in several applications for the supervision of complex processes. This paper presents the recent extensions of Ixtetreco that allow its use in the supervision of power networks. (J.S.) 14 refs.

  6. 20 CFR 656.21 - Supervised recruitment.

    Science.gov (United States)

    2010-04-01

    ... 20 Employees' Benefits 3 2010-04-01 2010-04-01 false Supervised recruitment. 656.21 Section 656.21... Supervised recruitment. (a) Supervised recruitment. Where the Certifying Officer determines it appropriate, post-filing supervised recruitment may be required of the employer for the pending application or...

  7. Educational Supervision Appropriate for Psychiatry Trainee's Needs

    Science.gov (United States)

    Rele, Kiran; Tarrant, C. Jane

    2010-01-01

    Objective: The authors studied the regularity and content of supervision sessions in one of the U.K. postgraduate psychiatric training schemes (Mid-Trent). Methods: A questionnaire sent to psychiatry trainees assessed the timing and duration of supervision, content and protection of supervision time, and overall quality of supervision. The authors…

  8. Automation of scanning technique by gamma radiation

    International Nuclear Information System (INIS)

    Aamira, Yahya

    2011-01-01

    The gamma scan technique is a nuclear test allowing the analysis of the internal mechanical properties of distillation columns used in petrochemical industries. This technique is currently performed manually, so in this work we propose to automate the gamma scan test procedure using a PLC. In addition, supervision and data acquisition interfaces are proposed.

  9. Weakly supervised semantic segmentation using fore-background priors

    Science.gov (United States)

    Han, Zheng; Xiao, Zhitao; Yu, Mingjun

    2017-07-01

    Weakly-supervised semantic segmentation is a challenge in the field of computer vision. Most previous works utilize the labels of the whole training set and thereby need to construct a relationship graph over the image labels, resulting in expensive computation. In this study, we tackle this problem from a different perspective. We propose a novel semantic segmentation algorithm based on background priors, which avoids the construction of a huge graph over the whole training dataset. Specifically, a random forest classifier is obtained using weakly supervised training data, and a semantic texton forest (STF) feature is extracted from image superpixels. Finally, a CRF-based optimization algorithm is proposed. The unary potential of the CRF is derived from the output probability of the random forest classifier and from a robust saliency map serving as background prior. Experiments on the MSRC21 dataset show that the new algorithm outperforms several previous influential weakly-supervised segmentation algorithms. Furthermore, the use of an efficient decision forest classifier and parallel computation of the saliency map significantly accelerate the implementation.
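
    A rough sketch of how a random forest posterior and a saliency prior can be combined into CRF unary potentials, as the abstract describes. The features, the exact prior combination and the helper name are assumptions, and the pairwise CRF term and its optimization are omitted.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# hypothetical superpixel features/labels; in the paper the features are
# semantic texton forest (STF) descriptors with weak (image-level) labels
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 16))
y_train = rng.integers(0, 3, size=200)
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

def unary_potentials(features, saliency, bg_class=0, eps=1e-9):
    """CRF unary term per superpixel: -log of the RF class posterior,
    with a saliency map used as a fore/background prior."""
    probs = rf.predict_proba(features)            # (n_superpixels, n_classes)
    unary = -np.log(probs + eps)
    # background prior: high saliency raises the cost of the background label
    unary[:, bg_class] -= np.log(1.0 - saliency + eps)
    return unary

X_test = rng.normal(size=(10, 16))
sal = rng.uniform(size=10)
print(unary_potentials(X_test, sal).shape)
```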

  10. Home Automation

    OpenAIRE

    Ahmed, Zeeshan

    2010-01-01

    In this paper I briefly discuss the importance of home automation systems. Going into the details, I present a real-time, software- and hardware-oriented house automation research project that was designed and implemented, capable of automating a house's electricity and providing a security system to detect the presence of unexpected behavior.

  11. Distributed data collection and supervision based on web sensor

    Science.gov (United States)

    He, Pengju; Dai, Guanzhong; Fu, Lei; Li, Xiangjun

    2006-11-01

    As a node in the Internet/Intranet, the web sensor has been promoted in recent years and widely applied in remote manufacturing, workshop measurement and control fields. However, because of the limited resources of the microprocessor in the sensor, the conventional scheme can only support the HTTP protocol, with remote users supervising and controlling the collected data published via the web in a standard browser; moreover, only one data-acquisition node can be supervised and controlled at a time, so the requirement of centralized remote supervision, control and data processing cannot be satisfied in some fields. In this paper, centralized remote supervision, control and data processing with web sensors are proposed and implemented on the principle of the device driver program. The useless information of each collected web page embedded in the sensor is filtered out, the useful data is transmitted to a real-time database in the workstation, and different filter algorithms are designed for different sensors possessing independent web pages. Every sensor node has its own web filter program, called the "web data collection driver program"; the collection details are shielded, and the supervision, control and configuration software can be implemented by calling the web data collection driver program, just like the use of an I/O driver program. The proposed technology can be applied to data acquisition where relatively low real-time performance is required.

  12. An automated three-dimensional detection and segmentation method for touching cells by integrating concave points clustering and random walker algorithm.

    Directory of Open Access Journals (Sweden)

    Yong He

    Full Text Available Characterizing cytoarchitecture is crucial for understanding brain functions and neural diseases. In neuroanatomy, it is an important task to accurately extract cell populations' centroids and contours. Recent advances have permitted imaging at single cell resolution for an entire mouse brain using the Nissl staining method. However, it is difficult to precisely segment numerous cells, especially those cells touching each other. As presented herein, we have developed an automated three-dimensional detection and segmentation method applied to the Nissl staining data, with the following two key steps: 1) concave points clustering to determine the seed points of touching cells; and 2) random walker segmentation to obtain cell contours. We have also evaluated the performance of our proposed method on several mouse brain datasets, which were captured with the micro-optical sectioning tomography imaging system and include closely touching cells. Compared with traditional detection and segmentation methods, our approach shows promising detection accuracy and high robustness.
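
    The second step maps directly onto the random walker implementation available in scikit-image. A minimal sketch on a toy volume follows, with seed markers placed by hand rather than derived from concave-points clustering as in the paper.

```python
import numpy as np
from skimage.segmentation import random_walker

# toy 3-D volume with two touching bright "cells" on a dark background
vol = np.zeros((30, 30, 30))
vol[8:20, 8:20, 8:20] = 1.0
vol[14:26, 14:26, 14:26] = 1.0
vol += 0.2 * np.random.default_rng(0).normal(size=vol.shape)

# seed markers (labels 1/2 = the two cells, 3 = background, 0 = unknown);
# in the paper these seeds come from concave-points clustering
seeds = np.zeros(vol.shape, dtype=np.int32)
seeds[12, 12, 12] = 1
seeds[22, 22, 22] = 2
seeds[2, 2, 2] = 3

labels = random_walker(vol, seeds, beta=50, mode='bf')
print(np.unique(labels))
```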

  13. Automated supervised classification of variable stars. I. Methodology

    NARCIS (Netherlands)

    Debosscher, J.; Sarro, L.M.; Aerts, C.C.; Cuypers, J.; Vandenbussche, B.; Garrido, R.; Solano, E.

    2007-01-01

    Context: The fast classification of new variable stars is an important step in making them available for further research. Selection of science targets from large databases is much more efficient if they have been classified first. Defining the classes in terms of physical parameters is also

  14. Integrating the Supervised Information into Unsupervised Learning

    Directory of Open Access Journals (Sweden)

    Ping Ling

    2013-01-01

    Full Text Available This paper presents an assembled unsupervised learning framework that adopts information coming from a supervised learning process, and gives the corresponding implementation algorithm. The algorithm consists of two phases: first, extracting and clustering data representatives (DRs) to obtain labeled training data, and then classifying non-DRs based on the labeled DRs. The implementation algorithm is called SDSN since it employs the tuning-scaled Support vector domain description to collect DRs, uses a spectrum-based method to cluster DRs, and adopts the nearest neighbor classifier to label non-DRs. The validity of the clustering procedure of the first phase is analyzed theoretically. A new metric is defined data-dependently in the second phase to allow the nearest neighbor classifier to work with the informed information. A fast training approach for DR extraction is provided for greater efficiency. Experimental results on synthetic and real datasets verify that the proposed idea is correct and performs well, and that SDSN is preferable in practice to the traditional pure clustering procedure.
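
    A loose two-phase sketch in the spirit of SDSN, with standard substitutes for the paper's components: support vectors of a one-class SVM stand in for the tuning-scaled SVDD representatives, scikit-learn's spectral clustering for the spectrum-based method, and a plain 1-NN classifier (without the paper's data-dependent metric) for labeling non-DRs.

```python
import numpy as np
from sklearn.svm import OneClassSVM
from sklearn.cluster import SpectralClustering
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.5, (100, 2)), rng.normal(4, 0.5, (100, 2))])

# phase 1: pick data representatives and cluster them
svdd = OneClassSVM(nu=0.2, gamma=0.5).fit(X)
dr_idx = svdd.support_                      # indices of the representatives
drs = X[dr_idx]
dr_labels = SpectralClustering(n_clusters=2, random_state=0).fit_predict(drs)

# phase 2: label the remaining points with a nearest-neighbour classifier
mask = np.ones(len(X), dtype=bool)
mask[dr_idx] = False
knn = KNeighborsClassifier(n_neighbors=1).fit(drs, dr_labels)
labels = np.empty(len(X), dtype=int)
labels[dr_idx] = dr_labels
labels[mask] = knn.predict(X[mask])
```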

  15. Human semi-supervised learning.

    Science.gov (United States)

    Gibson, Bryan R; Rogers, Timothy T; Zhu, Xiaojin

    2013-01-01

    Most empirical work in human categorization has studied learning in either fully supervised or fully unsupervised scenarios. Most real-world learning scenarios, however, are semi-supervised: Learners receive a great deal of unlabeled information from the world, coupled with occasional experiences in which items are directly labeled by a knowledgeable source. A large body of work in machine learning has investigated how learning can exploit both labeled and unlabeled data provided to a learner. Using equivalences between models found in human categorization and machine learning research, we explain how these semi-supervised techniques can be applied to human learning. A series of experiments are described which show that semi-supervised learning models prove useful for explaining human behavior when exposed to both labeled and unlabeled data. We then discuss some machine learning models that do not have familiar human categorization counterparts. Finally, we discuss some challenges yet to be addressed in the use of semi-supervised models for modeling human categorization. Copyright © 2013 Cognitive Science Society, Inc.

  16. A supervised learning rule for classification of spatiotemporal spike patterns.

    Science.gov (United States)

    Lilin Guo; Zhenzhong Wang; Adjouadi, Malek

    2016-08-01

    This study introduces a novel supervised algorithm for spiking neurons that takes into consideration synapse delays and axonal delays associated with weights. It can be utilized for both classification and association and uses several biologically influenced properties, such as axonal and synaptic delays. This algorithm also takes into consideration spike-timing-dependent plasticity, as in the Remote Supervised Method (ReSuMe). This paper focuses on the classification aspect alone. Spiking neurons trained according to this proposed learning rule are capable of classifying different categories by the associated sequences of precisely timed spikes. Simulation results have shown that the proposed learning method greatly improves classification accuracy when compared to the Spike Pattern Association Neuron (SPAN) and the Tempotron learning rule.

  17. Automated reliability assessment for spectroscopic redshift measurements

    Science.gov (United States)

    Jamal, S.; Le Brun, V.; Le Fèvre, O.; Vibert, D.; Schmitt, A.; Surace, C.; Copin, Y.; Garilli, B.; Moresco, M.; Pozzetti, L.

    2018-03-01

    Context. Future large-scale surveys, such as the ESA Euclid mission, will produce a large set of galaxy redshifts (≥106) that will require fully automated data-processing pipelines to analyze the data, extract crucial information and ensure that all requirements are met. A fundamental element in these pipelines is to associate to each galaxy redshift measurement a quality, or reliability, estimate. Aim. In this work, we introduce a new approach to automate the spectroscopic redshift reliability assessment based on machine learning (ML) and characteristics of the redshift probability density function. Methods: We propose to rephrase the spectroscopic redshift estimation into a Bayesian framework, in order to incorporate all sources of information and uncertainties related to the redshift estimation process and produce a redshift posterior probability density function (PDF). To automate the assessment of a reliability flag, we exploit key features in the redshift posterior PDF and machine learning algorithms. Results: As a working example, public data from the VIMOS VLT Deep Survey is exploited to present and test this new methodology. We first tried to reproduce the existing reliability flags using supervised classification in order to describe different types of redshift PDFs, but due to the subjective definition of these flags (classification accuracy 58%), we soon opted for a new homogeneous partitioning of the data into distinct clusters via unsupervised classification. After assessing the accuracy of the new clusters via resubstitution and test predictions (classification accuracy 98%), we projected unlabeled data from preliminary mock simulations for the Euclid space mission into this mapping to predict their redshift reliability labels. Conclusions: Through the development of a methodology in which a system can build its own experience to assess the quality of a parameter, we are able to set a preliminary basis of an automated reliability assessment for
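
    The unsupervised partitioning step can be sketched with any standard clustering algorithm. Below, hypothetical descriptors of each redshift posterior PDF are clustered with k-means, and the fitted mapping is then reused to assign reliability labels to new, unlabeled measurements; the actual feature set and cluster count used by the authors are not reproduced here.

```python
import numpy as np
from sklearn.cluster import KMeans

# hypothetical descriptors of each redshift posterior PDF
# (e.g. height and width of the main mode, number of significant modes)
rng = np.random.default_rng(1)
pdf_features = rng.normal(size=(500, 4))

# partition the data into homogeneous clusters without the subjective
# legacy flags, then reuse the fitted mapping on new measurements
km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(pdf_features)
new_features = rng.normal(size=(20, 4))
reliability_labels = km.predict(new_features)
```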

  18. Automatic Classification Using Supervised Learning in a Medical Document Filtering Application.

    Science.gov (United States)

    Mostafa, J.; Lam, W.

    2000-01-01

    Presents a multilevel model of the information filtering process that permits document classification. Evaluates a document classification approach based on a supervised learning algorithm, measures the accuracy of the algorithm in a neural network that was trained to classify medical documents on cell biology, and discusses filtering…

  19. Semi-supervised learning and domain adaptation in natural language processing

    CERN Document Server

    Søgaard, Anders

    2013-01-01

    This book introduces basic supervised learning algorithms applicable to natural language processing (NLP) and shows how the performance of these algorithms can often be improved by exploiting the marginal distribution of large amounts of unlabeled data. One reason for that is data sparsity, i.e., the limited amounts of data we have available in NLP. However, in most real-world NLP applications our labeled data is also heavily biased. This book introduces extensions of supervised learning algorithms to cope with data sparsity and different kinds of sampling bias.This book is intended to be both

  20. Semi-Supervised Multiple Feature Analysis for Action Recognition

    Science.gov (United States)

    2013-11-26

    …in saving labeling costs while simultaneously achieving good performance. Most semi-supervised learning methods assume that nearby points are likely… (3, 5, 10 and 15) per category in the training set, thus resulting in …, …, …, and … randomly labeled videos, with the remaining training videos unlabeled… With the increase of labeled training samples, the performance of all algorithms rises. Meanwhile, the performance differences between our method and

  1. Broad Absorption Line Quasar catalogues with Supervised Neural Networks

    International Nuclear Information System (INIS)

    Scaringi, Simone; Knigge, Christian; Cottis, Christopher E.; Goad, Michael R.

    2008-01-01

    We have applied a Learning Vector Quantization (LVQ) algorithm to SDSS DR5 quasar spectra in order to create a large catalogue of broad absorption line quasars (BALQSOs). We first discuss the problems with BALQSO catalogues constructed using the conventional balnicity and/or absorption indices (BI and AI), and then describe the supervised LVQ network we have trained to recognise BALQSOs. The resulting BALQSO catalogue should be substantially more robust and complete than BI- or AI-based ones.

  2. Challenges for Better thesis supervision.

    Science.gov (United States)

    Ghadirian, Laleh; Sayarifard, Azadeh; Majdzadeh, Reza; Rajabi, Fatemeh; Yunesian, Masoud

    2014-01-01

    Conducting a thesis is one of students' major academic activities. Thesis quality and the experience acquired are highly dependent on the supervision. Our study aimed to identify the challenges in thesis supervision from both the students' and faculty members' points of view. This study was conducted using individual in-depth interviews and Focus Group Discussions (FGD). The participants were 43 students and faculty members selected by purposive sampling. It was carried out in Tehran University of Medical Sciences in 2012. Data analysis was done concurrently with data gathering using the content analysis method. Our data analysis resulted in 162 codes, 17 subcategories and 4 major categories: "supervisory knowledge and skills", "atmosphere", "bylaws and regulations relating to supervision" and "monitoring and evaluation". This study showed that more attention and planning is needed for modifying related rules and regulations, qualitative and quantitative improvement in mentorship training, research atmosphere improvement and effective monitoring and evaluation in the supervisory area.

  3. Cultural Humility in Psychotherapy Supervision.

    Science.gov (United States)

    Hook, Joshua N; Watkins, C Edward; Davis, Don E; Owen, Jesse; Van Tongeren, Daryl R; Ramos, Marciana J

    2016-01-01

    As a core component of multicultural orientation, cultural humility can be considered an important attitude for clinical supervisees to adopt and practically implement. How can cultural humility be most meaningfully incorporated in supervision? In what ways can supervisors stimulate the development of a culturally humble attitude in our supervisees? We consider those questions in this paper and present a model for addressing cultural humility in clinical supervision. The primary focus is given to two areas: (a) modeling and teaching of cultural humility through interpersonal interactions in supervision, and (b) teaching cultural humility through outside activities and experiences. Two case studies illustrating the model are presented, and a research agenda for work in this area is outlined.

  4. The efficiency of government supervision

    International Nuclear Information System (INIS)

    Paetzold, H.

    1992-01-01

    In 1970, fires as events initiating plant failure were included in the accident analyses of nuclear power plant design concepts. In the meantime, they have been expressed in more precise terms and incorporated into the bodies of nuclear technical rules and regulations. Following a suggestion by the Baden-Wuerttemberg State Ministry for the Environment, the efficiency of government supervision has been examined for the example of fire protection measures at the Philippsburg site, with one BWR and one PWR plant in operation. The examination indicated that pragmatic approaches and the establishment of key areas of supervision could further enhance the efficiency of government supervision under Section 19 of the German Atomic Energy Act and achieve improvements in plant safety. (orig.) [de]

  5. Automated error-tolerant macromolecular structure determination from multidimensional nuclear Overhauser enhancement spectra and chemical shift assignments: improved robustness and performance of the PASD algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Kuszewski, John J.; Thottungal, Robin Augustine [National Institutes of Health, Imaging Sciences Laboratory, Center for Information Technology (United States); Clore, G. Marius [National Institutes of Health, Laboratory of Chemical Physics, National Institute of Diabetes and Digestive and Kidney Diseases (United States)], E-mail: mariusc@mail.nih.gov; Schwieters, Charles D. [National Institutes of Health, Imaging Sciences Laboratory, Center for Information Technology (United States)], E-mail: Charles.Schwieters@nih.gov

    2008-08-15

    We report substantial improvements to the previously introduced automated NOE assignment and structure determination protocol known as PASD (Kuszewski et al. (2004) J Am Chem Soc 126:6258-6273). The improved protocol includes extensive analysis of input spectral data to create a low-resolution contact map of residues expected to be close in space. This map is used to obtain reasonable initial guesses of NOE assignment likelihoods, which are refined during subsequent structure calculations. Information in the contact map about which residues are predicted not to be close in space is applied via conservative repulsive distance restraints, which are used in early phases of the structure calculations. In comparison with the previous protocol, the new protocol requires significantly less computation time. We show results of running the new PASD protocol on six proteins and demonstrate that useful assignment and structural information is extracted on proteins of more than 220 residues. We show that useful assignment information can be obtained even in the case in which a unique structure cannot be determined.

  6. Comparison of Matrix Frequency-Doubling Technology (FDT) Perimetry with the SWEDISH Interactive Thresholding Algorithm (SITA) Standard Automated Perimetry (SAP) in Mild Glaucoma.

    Science.gov (United States)

    Doozandeh, Azadeh; Irandoost, Farnoosh; Mirzajani, Ali; Yazdani, Shahin; Pakravan, Mohammad; Esfandiari, Hamed

    2017-01-01

    This study aimed to compare second-generation frequency-doubling technology (FDT) perimetry with standard automated perimetry (SAP) in mild glaucoma. Forty-seven eyes of 47 participants who had mild visual field defect by SAP were included in this study. All participants were examined using SITA 24-2 (SITA-SAP) and matrix 24-2 (Matrix-FDT). The correlations of global indices and the number of defects on pattern deviation (PD) plots were determined. Agreement between two sets regarding the stage of visual field damage was assessed. Pearson's correlation, intra-cluster comparison, paired t-test, and 95% limit of agreement were calculated. Although there was no significant difference between global indices, the agreement between the two devices regarding the global indices was weak (the limit of agreement for mean deviation was -6.08 to 6.08 and that for pattern standard deviation was -4.42 to 3.42). The agreement between SITA-SAP and Matrix-FDT regarding the Glaucoma Hemifield Test (GHT) and the number of defective points in each quadrant and staging of the visual field damage was also weak. Because the correlation between SITA-SAP and Matrix-FDT regarding global indices, GHT, number of defective points, and stage of the visual field damage in mild glaucoma is weak, Matrix-FDT cannot be used interchangeably with SITA-SAP in the early stages of glaucoma.

  7. ECB Banking Supervision and beyond

    OpenAIRE

    Lannoo, Karel

    2014-01-01

    With publication of the results of its Comprehensive Assessment at the end of October 2014, the European Central Bank has set the standard for its new mandate as supervisor. But this was only the beginning. The heavy work started in early November, with the day-to-day supervision of the 120 most significant banks in the eurozone under the Single Supervisory Mechanism. The centralisation of the supervision in the eurozone will pose a number of challenges for the ECB in the coming months and ye...

  8. Clinical Supervision of International Supervisees: Suggestions for Multicultural Supervision

    Science.gov (United States)

    Lee, Ahram

    2018-01-01

    An increase of international students in various settings has been noted in a range of disciplines including counseling and other mental health professions. The author examined the literature on international counseling students related to their experiences in counseling training, particularly in supervision. From the counseling literature, five…

  9. Active semi-supervised learning method with hybrid deep belief networks.

    Science.gov (United States)

    Zhou, Shusen; Chen, Qingcai; Wang, Xiaolong

    2014-01-01

    In this paper, we develop a novel semi-supervised learning algorithm called active hybrid deep belief networks (AHD) to address the semi-supervised sentiment classification problem with deep learning. First, we construct the first several hidden layers using restricted Boltzmann machines (RBM), which can quickly reduce the dimension and abstract the information of the reviews. Second, we construct the subsequent hidden layers using convolutional restricted Boltzmann machines (CRBM), which can abstract the information of reviews effectively. Third, the constructed deep architecture is fine-tuned by gradient-descent-based supervised learning with an exponential loss function. Finally, an active learning method is combined with the proposed deep architecture. We ran several experiments on five sentiment classification datasets and show that AHD is competitive with previous semi-supervised learning algorithms. Experiments were also conducted to verify the effectiveness of our proposed method with different numbers of labeled and unlabeled reviews.
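
    A simplified sketch of the overall recipe, with stand-ins for the paper's components: two stacked Bernoulli RBMs for unsupervised feature abstraction, a logistic-regression top layer in place of the CRBM layer and the exponential-loss fine-tuning, and an uncertainty-based active-learning query step. All data and names here are illustrative.

```python
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

rng = np.random.default_rng(0)
X = rng.uniform(size=(300, 64))          # stand-in for binary review features
y = rng.integers(0, 2, size=300)
labeled = np.zeros(len(X), dtype=bool)
labeled[:40] = True                      # only a few reviews start labeled

# stacked RBMs for feature abstraction, then a supervised top layer
model = Pipeline([
    ("rbm1", BernoulliRBM(n_components=32, learning_rate=0.05, n_iter=10, random_state=0)),
    ("rbm2", BernoulliRBM(n_components=16, learning_rate=0.05, n_iter=10, random_state=0)),
    ("clf", LogisticRegression(max_iter=1000)),
]).fit(X[labeled], y[labeled])

# active step: query labels for the most uncertain unlabeled reviews
proba = model.predict_proba(X[~labeled])
uncertainty = 1.0 - np.abs(proba[:, 1] - 0.5) * 2.0
query = np.argsort(uncertainty)[-10:]    # indices (within unlabeled set) to label next
```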

  10. Automated Detection of Cancer Associated Genes Using a Combined Fuzzy-Rough-Set-Based F-Information and Water Swirl Algorithm of Human Gene Expression Data.

    Directory of Open Access Journals (Sweden)

    Pugalendhi Ganesh Kumar

    Full Text Available This study describes a novel approach to reducing the challenges of highly nonlinear multiclass gene expression values for cancer diagnosis. To build a fruitful system for cancer diagnosis, in this study, we introduced two levels of gene selection, filtering and embedding, for selection of potential genes and of the most relevant genes associated with cancer, respectively. The filter procedure was implemented by developing a fuzzy rough set (FR)-based method for redefining the criterion function of f-information (FI) to identify the potential genes without discretizing the continuous gene expression values. The embedded procedure is implemented by means of a water swirl algorithm (WSA), which attempts to optimize the rule set and membership function required to classify samples using a fuzzy-rule-based multiclassification system (FRBMS). Two novel update equations are proposed in WSA, which have better exploration and exploitation abilities while designing a self-learning FRBMS. The efficiency of our new approach was evaluated on 13 multicategory and 9 binary datasets of cancer gene expression. Additionally, the performance of the proposed FRFI-WSA method in designing an FRBMS was compared with existing methods for gene selection and optimization such as the genetic algorithm (GA), particle swarm optimization (PSO), and the artificial bee colony algorithm (ABC) on all the datasets. In the global cancer map with repeated measurements (GCM_RM) dataset, the FRFI-WSA found the smallest number of 16 most relevant genes associated with cancer using a minimal number of 26 compact rules with the highest classification accuracy (96.45%). In addition, the statistical validation used in this study revealed that the biological relevance of the most relevant genes associated with cancer and their linguistics detected by the proposed FRFI-WSA approach are better than those in the other methods. The simple interpretable rules with most relevant genes and effectively…

  11. Principles and models of a co-operative system for supervision aid (SCAS)

    Energy Technology Data Exchange (ETDEWEB)

    Penalva, J.M. [CEA Centre d'Etudes de la Vallee du Rhone, 30 - Marcoule (France). Dept. d'Exploitation du Retraitement et de Demantelement; Cases, E. [CEA Centre d'Etudes de la Vallee du Rhone, 30 - Marcoule (France). Dept. d'Exploitation du Retraitement et de Demantelement]|[Paris-6 Univ., 75 (France); Brezillon, P. [Paris-6 Univ., 75 (France); Minault, S.

    1994-12-31

    This paper presents the functioning principles and the models necessary for a cooperative system for supervision aid (SCAS) used in a highly automated workshop. A supervision meta-system is made up of the operator and the SCAS. The SCAS can operate in two different modes: watchfulness and cooperation. In the first, the behaviour of the process and of the operator is observed and analysed. In the second, the SCAS helps the operator to solve the problems that have occurred. (TEC). 3 refs.

  12. Automated Speech Rate Measurement in Dysarthria

    Science.gov (United States)

    Martens, Heidi; Dekens, Tomas; Van Nuffelen, Gwen; Latacz, Lukas; Verhelst, Werner; De Bodt, Marc

    2015-01-01

    Purpose: In this study, a new algorithm for automated determination of speech rate (SR) in dysarthric speech is evaluated. We investigated how reliably the algorithm calculates the SR of dysarthric speech samples when compared with calculation performed by speech-language pathologists. Method: The new algorithm was trained and tested using Dutch…

  13. Algorithms for Reinforcement Learning

    CERN Document Server

    Szepesvari, Csaba

    2010-01-01

    Reinforcement learning is a learning paradigm concerned with learning to control a system so as to maximize a numerical performance measure that expresses a long-term objective. What distinguishes reinforcement learning from supervised learning is that only partial feedback is given to the learner about the learner's predictions. Further, the predictions may have long term effects through influencing the future state of the controlled system. Thus, time plays a special role. The goal in reinforcement learning is to develop efficient learning algorithms, as well as to understand the algorithms'

  14. PIXiE: an algorithm for automated ion mobility arrival time extraction and collision cross section calculation using global data association

    Energy Technology Data Exchange (ETDEWEB)

    Ma, Jian; Casey, Cameron P.; Zheng, Xueyun; Ibrahim, Yehia M.; Wilkins, Christopher S.; Renslow, Ryan S.; Thomas, Dennis G.; Payne, Samuel H.; Monroe, Matthew E.; Smith, Richard D.; Teeguarden, Justin G.; Baker, Erin S.; Metz, Thomas O.

    2017-05-15

    Motivation: Drift tube ion mobility spectrometry (DTIMS) is increasingly implemented in high throughput omics workflows, and new informatics approaches are necessary for processing the associated data. To automatically extract arrival times for molecules measured by DTIMS coupled with mass spectrometry and compute their associated collisional cross sections (CCS), we created the PNNL Ion Mobility Cross Section Extractor (PIXiE). The primary application presented for this algorithm is the extraction of information necessary to create a reference library containing accurate masses, DTIMS arrival times and CCSs for use in high throughput omics analyses. Results: We demonstrate the utility of this approach by automatically extracting arrival times and calculating the associated CCSs for a set of endogenous metabolites and xenobiotics. The PIXiE-generated CCS values were identical to those calculated by hand and within error of those calculated using commercially available instrument vendor software.

  15. Data Transformation Functions for Expanded Search Spaces in Geographic Sample Supervised Segment Generation

    OpenAIRE

    Christoff Fourie; Elisabeth Schoepfer

    2014-01-01

    Sample supervised image analysis, in particular sample supervised segment generation, shows promise as a methodological avenue applicable within Geographic Object-Based Image Analysis (GEOBIA). Segmentation is acknowledged as a constituent component within typically expansive image analysis processes. A general extension to the basic formulation of an empirical discrepancy measure directed segmentation algorithm parameter tuning approach is proposed. An expanded search landscape is defined, c...

  16. Adequate supervision for children and adolescents.

    Science.gov (United States)

    Anderst, James; Moffatt, Mary

    2014-11-01

    Primary care providers (PCPs) have the opportunity to improve child health and well-being by addressing supervision issues before an injury or exposure has occurred and/or after an injury or exposure has occurred. Appropriate anticipatory guidance on supervision at well-child visits can improve supervision of children, and may prevent future harm. Adequate supervision varies based on the child's development and maturity, and the risks in the child's environment. Consideration should be given to issues as wide ranging as swimming pools, falls, dating violence, and social media. By considering the likelihood of harm and the severity of the potential harm, caregivers may provide adequate supervision by minimizing risks to the child while still allowing the child to take "small" risks as needed for healthy development. Caregivers should initially focus on direct (visual, auditory, and proximity) supervision of the young child. Gradually, supervision needs to be adjusted as the child develops, emphasizing a safe environment and safe social interactions, with graduated independence. PCPs may foster adequate supervision by providing concrete guidance to caregivers. In addition to preventing injury, supervision includes fostering a safe, stable, and nurturing relationship with every child. PCPs should be familiar with age/developmentally based supervision risks, adequate supervision based on those risks, characteristics of neglectful supervision based on age/development, and ways to encourage appropriate supervision throughout childhood. Copyright 2014, SLACK Incorporated.

  17. Supervision of psychotherapy via Skype

    DEFF Research Database (Denmark)

    Jacobsen, Claus Haugaard; Grünbaum, Liselotte

    2011-01-01

    that although face-to-face meetings are to be preferred, today’s technology means that supervision via videoconference like Skype™ is better than just being acceptable. Thus, it offers a good alternative to face-to-face encounters, and in certain ways it even seems to boost the growth of the supervisees...

  18. Line supervision of alarm communications

    International Nuclear Information System (INIS)

    Chritton, M.R.

    1991-01-01

    The objective of this paper is to explain the role and application of alarm communication link supervision in security systems such as for nuclear facilities. The vulnerabilities of the various types of alarm communication links will be presented. Throughout the paper, an effort has been made to describe only those technologies commercially available and to avoid speculative theoretical solutions

  19. Consultative Instructor Supervision and Evaluation

    Science.gov (United States)

    Lee, William W.

    2010-01-01

    Organizations vary greatly in how they monitor training instructors. The methods used in monitoring vary greatly. This article presents a systematic process for improving instructor skills that result in better teaching and better learning, which results in better-prepared employees for the workforce. The consultative supervision and evaluation…

  20. Improving supervision: a team approach.

    Science.gov (United States)

    1993-01-01

    This issue of "The Family Planning Manager" outlines an interactive team supervision strategy as a means of improving family planning service quality and enabling staff to perform to their maximum potential. Such an approach to supervision requires a shift from a monitoring to a facilitative role. Because supervisory visits to the field are infrequent, the regional supervisor, clinic manager, and staff should form a team to share ongoing supervisory responsibilities. The team approach removes individual blame and builds consensus. An effective team is characterized by shared leadership roles, concrete work problems, mutual accountability, an emphasis on achieving team objectives, and problem resolution within the group. The team supervision process includes the following steps: prepare a visit plan and schedule; meet with the clinic manager and staff to explain how the visit will be conducted; supervise key activity areas (clinical, management, and personnel); conduct a problem-solving team meeting; conduct a debriefing meeting with the clinic manager; and prepare a report on the visit, including recommendations and follow-up plans. In Guatemala's Family Planning Unit, teams identify problem areas on the basis of agreement that a problem exists, belief that the problem can be solved with available resources, and individual willingness to accept responsibility for the specific actions identified to correct the problem.

  1. Evaluation of Semi-supervised Learning for Classification of Protein Crystallization Imagery.

    Science.gov (United States)

    Sigdel, Madhav; Dinç, İmren; Dinç, Semih; Sigdel, Madhu S; Pusey, Marc L; Aygün, Ramazan S

    2014-03-01

    In this paper, we investigate the performance of two wrapper methods for semi-supervised learning algorithms for classification of protein crystallization images with limited labeled images. Firstly, we evaluate the performance of semi-supervised approach using self-training with naïve Bayesian (NB) and sequential minimum optimization (SMO) as the base classifiers. The confidence values returned by these classifiers are used to select high confident predictions to be used for self-training. Secondly, we analyze the performance of Yet Another Two Stage Idea (YATSI) semi-supervised learning using NB, SMO, multilayer perceptron (MLP), J48 and random forest (RF) classifiers. These results are compared with the basic supervised learning using the same training sets. We perform our experiments on a dataset consisting of 2250 protein crystallization images for different proportions of training and test data. Our results indicate that NB and SMO using both self-training and YATSI semi-supervised approaches improve accuracies with respect to supervised learning. On the other hand, MLP, J48 and RF perform better using basic supervised learning. Overall, random forest classifier yields the best accuracy with supervised learning for our dataset.
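
    The self-training wrapper evaluated here is available off the shelf in scikit-learn. A minimal sketch with a naive Bayes base classifier on synthetic data, where -1 marks images whose labels are withheld; the dataset and threshold are illustrative, not the paper's setup.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.semi_supervised import SelfTrainingClassifier

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (100, 5)), rng.normal(3, 1, (100, 5))])
y = np.array([0] * 100 + [1] * 100)

# keep labels for only a small fraction of samples; -1 marks unlabeled
y_semi = np.full(len(y), -1)
known = rng.choice(len(y), size=20, replace=False)
y_semi[known] = y[known]

# self-training: the base classifier's most confident predictions on
# unlabeled data are added to the training set and the model is refit
clf = SelfTrainingClassifier(GaussianNB(), threshold=0.9).fit(X, y_semi)
print(clf.score(X, y))
```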

  2. Supervised detection of exoplanets in high-contrast imaging sequences

    Science.gov (United States)

    Gomez Gonzalez, C. A.; Absil, O.; Van Droogenbroeck, M.

    2018-06-01

    Context. Post-processing algorithms play a key role in pushing the detection limits of high-contrast imaging (HCI) instruments. State-of-the-art image processing approaches for HCI enable the production of science-ready images relying on unsupervised learning techniques, such as low-rank approximations, for generating a model point spread function (PSF) and subtracting the residual starlight and speckle noise. Aims: In order to maximize the detection rate of HCI instruments and survey campaigns, advanced algorithms with higher sensitivities to faint companions are needed, especially for the speckle-dominated innermost region of the images. Methods: We propose a reformulation of the exoplanet detection task (for ADI sequences) that builds on well-established machine learning techniques to take HCI post-processing from an unsupervised to a supervised learning context. In this new framework, we present algorithmic solutions using two different discriminative models: SODIRF (random forests) and SODINN (neural networks). We test these algorithms on real ADI datasets from VLT/NACO and VLT/SPHERE HCI instruments. We then assess their performances by injecting fake companions and using receiver operating characteristic analysis. This is done in comparison with state-of-the-art ADI algorithms, such as ADI principal component analysis (ADI-PCA). Results: This study shows the improved sensitivity versus specificity trade-off of the proposed supervised detection approach. At the diffraction limit, SODINN improves the true positive rate by a factor ranging from 2 to 10 (depending on the dataset and angular separation) with respect to ADI-PCA when working at the same false-positive level. Conclusions: The proposed supervised detection framework outperforms state-of-the-art techniques in the task of discriminating planet signal from speckles. In addition, it offers the possibility of re-processing existing HCI databases to maximize their scientific return and potentially improve

  3. Multi combined Adlerian supervision in Counseling

    OpenAIRE

    Gungor, Abdi

    2017-01-01

    For the counseling profession and counselor education, supervision is an important process in which a more experienced professional helps and guides a less experienced professional. To provide effective and beneficial supervision, various therapy-, development-, or process-based approaches and models have been developed, as well as eclectic models integrating more than one model. In this paper, as a supervision model, a multi-combined Adlerian supervision model is pro...

  4. Multicultural Supervision: What Difference Does Difference Make?

    Science.gov (United States)

    Eklund, Katie; Aros-O'Malley, Megan; Murrieta, Imelda

    2014-01-01

    Multicultural sensitivity and competency represent critical components to contemporary practice and supervision in school psychology. Internship and supervision experiences are a capstone experience for many new school psychologists; however, few receive formal training and supervision in multicultural competencies. As an increased number of…

  5. Moment constrained semi-supervised LDA

    DEFF Research Database (Denmark)

    Loog, Marco

    2012-01-01

    This BNAIC compressed contribution provides a summary of the work originally presented at the First IAPR Workshop on Partially Supervised Learning and published in [5]. It outlines the idea behind supervised and semi-supervised learning and highlights the major shortcoming of many current methods...

  6. Heightened awareness of supervision

    DEFF Research Database (Denmark)

    Pedersen, Inge Nygaard

    2002-01-01

    This article presents a historical survey of the initiatives which have taken place in European music therapy towards developing a deeper consciousness about supervision: supervision as a discipline in music therapy training, as maintenance of the music therapy profession, and as postgraduate training for qualified music therapists. Definitions are presented, and methods developed by working groups in European music therapy supervision are described.

  7. Methods of Feminist Family Therapy Supervision.

    Science.gov (United States)

    Prouty, Anne M.; Thomas, Volker; Johnson, Scott; Long, Janie K.

    2001-01-01

    Presents three supervision methods which emerged from a qualitative study of the experiences of feminist family therapy supervisors and the therapists they supervised: the supervision contract, collaborative methods, and hierarchical methods. Provides a description of the participants' experiences of these methods and discusses their fit with…

  8. 20 CFR 655.30 - Supervised recruitment.

    Science.gov (United States)

    2010-04-01

    ... 20 Employees' Benefits 3 2010-04-01 2010-04-01 false Supervised recruitment. 655.30 Section 655.30... Workers) § 655.30 Supervised recruitment. (a) Supervised recruitment. Where an employer is found to have... failed to adequately conduct recruitment activities or failed in any obligation of this part, the CO may...

  9. 28 CFR 2.91 - Supervision responsibility.

    Science.gov (United States)

    2010-07-01

    ... 28 Judicial Administration 1 2010-07-01 2010-07-01 false Supervision responsibility. 2.91 Section 2.91 Judicial Administration DEPARTMENT OF JUSTICE PAROLE, RELEASE, SUPERVISION AND RECOMMITMENT OF PRISONERS, YOUTH OFFENDERS, AND JUVENILE DELINQUENTS District of Columbia Code: Prisoners and Parolees § 2.91 Supervision responsibility. (a) Pursuan...

  10. Postgraduate research supervision in a socially distributed ...

    African Journals Online (AJOL)

    Postgraduate supervision is a higher education practice with a long history. Through the conventional "apprenticeship" model postgraduate supervision has served as an important vehicle of intellectual inheritance between generations. However, this model of supervision has come under scrutiny as a consequence of the ...

  11. 32 CFR 727.11 - Supervision.

    Science.gov (United States)

    2010-07-01

    32 CFR 727.11 (National Defense; Department of the Navy: Personnel, Legal Assistance), § 727.11 Supervision. The Judge Advocate General will exercise supervision over all legal assistance activities in the Department of the Navy. Subject to the...

  12. Supervision Experiences of New Professional School Counselors

    Science.gov (United States)

    Bultsma, Shawn A.

    2012-01-01

    This qualitative study examined the supervision experiences of 11 new professional school counselors. They reported that their supervision experiences were most often administrative in nature; reports of clinical and developmental supervision were limited to participants whose supervisors were licensed as professional counselors. In addition,…

  13. 17 CFR 166.3 - Supervision.

    Science.gov (United States)

    2010-04-01

    17 CFR 166.3 (Commodity and Securities Exchanges; Commodity Futures Trading Commission: Customer Protection Rules), § 166.3 Supervision. Each Commission registrant, except an associated person who has no supervisory duties, must diligently supervise the handling b...

  14. Optimistic semi-supervised least squares classification

    DEFF Research Database (Denmark)

    Krijthe, Jesse H.; Loog, Marco

    2017-01-01

    The goal of semi-supervised learning is to improve supervised classifiers by using additional unlabeled training examples. In this work we study a simple self-learning approach to semi-supervised learning applied to the least squares classifier. We show that a soft-label and a hard-label variant ...

  15. Supervision of execution of dismantling; Supervision de ejecucion de desmantelamiento

    Energy Technology Data Exchange (ETDEWEB)

    Canizares, J.

    2015-07-01

    Enresa created an organizational structure that covers the various areas involved in the effective control of the Decommissioning Project. One such area is the Technical Supervision of Works of the Decommissioning Project, under the Execution Department, which reports to Technical Management. In this structure, the Execution Department acts as liaison between the project, the disciplines involved in its development, and the specialized companies contracted for the work, in order to achieve the intended target. Equally important is ensuring that such activities are carried out correctly, according to the project documentation. (Author)

  16. A National Survey of School Counselor Supervision Practices: Administrative, Clinical, Peer, and Technology Mediated Supervision

    Science.gov (United States)

    Perera-Diltz, Dilani M.; Mason, Kimberly L.

    2012-01-01

    Supervision is vital for personal and professional development of counselors. Practicing school counselors (n = 1557) across the nation were surveyed to explore current supervision practices. Results indicated that 41.1% of school counselors provide supervision. Although 89% receive some type of supervision, only 10.3% of school counselors receive…

  17. Supervision Experiences of Professional Counselors Providing Crisis Counseling

    Science.gov (United States)

    Dupre, Madeleine; Echterling, Lennis G.; Meixner, Cara; Anderson, Robin; Kielty, Michele

    2014-01-01

    In this phenomenological study, the authors explored supervision experiences of 13 licensed professional counselors in situations requiring crisis counseling. Five themes concerning crisis and supervision were identified from individual interviews. Findings support intensive, immediate crisis supervision and postlicensure clinical supervision.

  18. Application of Contingency Theories to the Supervision of Student Teachers.

    Science.gov (United States)

    Phelps, Julia D.

    1985-01-01

    This article examines selected approaches to student teacher supervision within the context of contingency theory. These include authentic supervision, developmental supervision, and supervision based on the student's level of maturity. (MT)

  19. Classification of Automated Search Traffic

    Science.gov (United States)

    Buehrer, Greg; Stokes, Jack W.; Chellapilla, Kumar; Platt, John C.

    As web search providers seek to improve both relevance and response times, they are challenged by the ever-increasing tax of automated search query traffic. Third party systems interact with search engines for a variety of reasons, such as monitoring a web site’s rank, augmenting online games, or possibly to maliciously alter click-through rates. In this paper, we investigate automated traffic (sometimes referred to as bot traffic) in the query stream of a large search engine provider. We define automated traffic as any search query not generated by a human in real time. We first provide examples of different categories of query logs generated by automated means. We then develop many different features that distinguish between queries generated by people searching for information and those generated by automated processes. We categorize these features into two classes: interpretations of the physical model of human interactions, and behavioral patterns of automated interactions. Using these detection features, we next classify the query stream using multiple binary classifiers. In addition, a multiclass classifier is then developed to identify subclasses of both normal and automated traffic. An active learning algorithm is used to suggest which user sessions to label to improve the accuracy of the multiclass classifier, while also seeking to discover new classes of automated traffic. A performance analysis is then provided. Finally, the multiclass classifier is used to predict the subclass distribution for the search query stream.
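
    As a toy illustration of the classification step (the paper's feature set and classifiers are far richer), the sketch below fits a logistic regression to a few invented per-session behavioral statistics; every feature definition and distribution here is a placeholder, not the paper's data.

```python
# Hypothetical per-session features: queries per minute, click-through
# rate, mean inter-query gap (s), fraction of queries issued at night.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 1000
humans = np.column_stack([rng.gamma(2.0, 1.0, n), rng.beta(5, 2, n),
                          rng.gamma(5.0, 10.0, n), rng.beta(2, 5, n)])
bots = np.column_stack([rng.gamma(20.0, 1.0, n), rng.beta(1, 8, n),
                        rng.gamma(1.0, 1.0, n), rng.beta(5, 2, n)])
X = np.vstack([humans, bots])
y = np.concatenate([np.zeros(n), np.ones(n)])  # 1 = automated traffic

clf = LogisticRegression(max_iter=1000).fit(X, y)
# Score a fast, low-click-through session.
print("P(bot):", clf.predict_proba([[30.0, 0.05, 1.0, 0.8]])[0, 1])
```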

  20. Process automation

    International Nuclear Information System (INIS)

    Moser, D.R.

    1986-01-01

    Process automation technology has been pursued in the chemical processing industries and to a very limited extent in nuclear fuel reprocessing. Its effective use has been restricted in the past by the lack of diverse and reliable process instrumentation and the unavailability of sophisticated software designed for process control. The Integrated Equipment Test (IET) facility was developed by the Consolidated Fuel Reprocessing Program (CFRP) in part to demonstrate new concepts for control of advanced nuclear fuel reprocessing plants. A demonstration of fuel reprocessing equipment automation using advanced instrumentation and a modern, microprocessor-based control system is nearing completion in the facility. This facility provides for the synergistic testing of all chemical process features of a prototypical fuel reprocessing plant that can be attained with unirradiated uranium-bearing feed materials. The unique equipment and mission of the IET facility make it an ideal test bed for automation studies. This effort will provide for the demonstration of the plant automation concept and for the development of techniques for similar applications in a full-scale plant. A set of preliminary recommendations for implementing process automation has been compiled. Some of these concepts are not generally recognized or accepted. The automation work now under way in the IET facility should be useful to others in helping avoid costly mistakes because of the underutilization or misapplication of process automation. 6 figs

  1. BANKING SUPERVISION IN EUROPEAN UNION

    Directory of Open Access Journals (Sweden)

    Lavinia Mihaela GUȚU

    2013-10-01

    Full Text Available The need for prudential supervision imposed on banks by law arises from the way the banking market’s basic factors act; it is, therefore, about banks’ role in the economy. The normal functioning of banks in all their important duties maintains the stability of the banking system. Further, the stability of the entire economy depends on the stability of the banking system. Under conditions of imbalance regarding treasury or liquidity, banks are faced with unmanageable crises, and the consequences can be fatal. To ensure the long-term stability of the banking system, supervisory regulations were constituted to prevent banks from focusing on rapidly achieving high profits and to protect the interests of depositors. Starting from this point, this paper carries out a study of the existing models of supervision in the European Union’s Member States. A comparison between them helps identify the advantages and disadvantages of each.

  2. Declarative modeling for process supervision

    International Nuclear Information System (INIS)

    Leyval, L.

    1989-01-01

    Our work is a contribution to the computer-aided supervision of continuous processes. It is inspired by an area of Artificial Intelligence: qualitative physics. Here, supervision is based on a model which continuously provides operators with a synthetic view of the process; but this model is founded on general principles of control theory rather than on physics. It involves concepts such as high gain or small time response. It helps link the evolution of various variables in time. Moreover, the model provides predictions of the future behaviour of the process, which allows action advice and alarm filtering. This should greatly reduce the well-known cognitive overload associated with any complex and dangerous evolution of the process

  3. Shame, the scourge of supervision

    Directory of Open Access Journals (Sweden)

    Valérie Perret

    2017-07-01

    • How can the supervisor deal with it? My motivation in writing this article is born from my personal experience with shame. It inhibited my thinking, my spontaneity, my creativity, and therefore limited my personal and professional development. Freeing myself allowed me to recover liberty, energy and legitimacy. I gained in professional competence and assertiveness within my practice as supervisor. My purpose in writing this article is that we, as supervisors, reflect together on how we look at the process of shame in our supervision sessions.  Citation - APA format: Perret, V. (2017). Shame, the scourge of supervision. International Journal of Transactional Analysis Research & Practice, 8(2), 41-48.

  4. Future of Automated Insulin Delivery Systems

    NARCIS (Netherlands)

    Castle, Jessica R.; DeVries, J. Hans; Kovatchev, Boris

    2017-01-01

    Advances in continuous glucose monitoring (CGM) have brought on a paradigm shift in the management of type 1 diabetes. These advances have enabled the automation of insulin delivery, where an algorithm determines the insulin delivery rate in response to the CGM values. There are multiple automated

  5. BANKING SUPERVISION IN EUROPEAN UNION

    OpenAIRE

    Lavinia Mihaela GUȚU; Vasile ILIE

    2013-01-01

    The need for prudential supervision imposed on banks by law arises from the way the banking market’s basic factors act; it is, therefore, about banks’ role in the economy. The normal functioning of banks in all their important duties maintains the stability of the banking system. Further, the stability of the entire economy depends on the stability of the banking system. Under conditions of imbalance regarding treasury or liquidity, banks are faced with unmanageable crises and the consequences ca...

  6. Medical supervision of radiation workers

    International Nuclear Information System (INIS)

    1968-01-01

    The first part of this volume describes the effects of radiation on living organisms, both at the overall and at the molecular level. Special attention is paid to the metabolism and toxicity of radioactive substances. The second part deals with radiological exposure: natural, medical and occupational. The third part provides data on radiological protection standards, and the fourth part addresses the health supervision of workers exposed to ionizing radiation, covering both physical and medical control.

  7. Coupled Semi-Supervised Learning

    Science.gov (United States)

    2010-05-01

    Additionally, specify the expected category of each relation argument to enable type-checking. Subsystem components and the KI can benefit from methods that... confirm that our coupled semi-supervised learning approaches can scale to hundreds of predicates and can benefit from using a diverse set of... [table residue: example category/instance pairs, e.g., organization: California Institute of Technology; vegetable/food: carrots; vehicle/item: airplanes; vertebrate/animal; videoGame/product]

  8. Supervision of execution of dismantling

    International Nuclear Information System (INIS)

    Canizares, J.

    2015-01-01

    Enresa created an organizational structure that covers the various areas involved in the effective control of the Decommissioning Project. One such area is the Technical Supervision of Works of the Decommissioning Project, under the Execution Department, which reports to Technical Management. In this structure, the Execution Department acts as liaison between the project, the disciplines involved in its development, and the specialized companies contracted for the work, in order to achieve the intended target. Equally important is ensuring that such activities are carried out correctly, according to the project documentation. (Author)

  9. Fatigue Level Estimation of Bill Based on Acoustic Signal Feature by Supervised SOM

    Science.gov (United States)

    Teranishi, Masaru; Omatu, Sigeru; Kosaka, Toshihisa

    Fatigued bills have a harmful influence on the daily operation of Automated Teller Machines (ATMs). To make fatigued-bill classification more efficient, the development of an automatic fatigued-bill classification method is desired. We propose a new method to estimate the bending rigidity of a bill from the acoustic signal features of banking machines. The estimated bending rigidities are used as a continuous fatigue level for the classification of fatigued bills. Using a supervised Self-Organizing Map (supervised SOM), we estimate the bending rigidity effectively from the acoustic energy pattern alone. Experimental results with real bill samples show the effectiveness of the proposed method.
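
    A minimal sketch of the supervised-SOM idea, assuming (as is standard for supervised SOMs) that the target value is appended to each training vector and read back from the best-matching unit at test time; the synthetic data and map parameters are illustrative only.

```python
# Minimal supervised-SOM sketch: during training, the scalar target is
# appended to each feature vector; at test time the best-matching unit
# (BMU) is found from the feature part only and its stored target
# component is returned as the estimate. All data are synthetic
# stand-ins for acoustic energy patterns.
import numpy as np

rng = np.random.default_rng(0)
n, d = 500, 16                        # samples, acoustic-feature dimension
X = rng.normal(size=(n, d))
y = X[:, :4].sum(axis=1)              # synthetic "bending rigidity" target

grid = 8                              # 8x8 map
W = rng.normal(size=(grid * grid, d + 1))         # last column stores the target
gx, gy = np.divmod(np.arange(grid * grid), grid)  # unit coordinates on the grid

for t in range(3000):
    i = rng.integers(n)
    v = np.append(X[i], y[i])                    # label-augmented vector
    bmu = np.argmin(((W - v) ** 2).sum(axis=1))  # best-matching unit
    # Gaussian neighborhood on the 2-D grid, shrinking over time.
    dist2 = (gx - gx[bmu]) ** 2 + (gy - gy[bmu]) ** 2
    sigma = 3.0 * np.exp(-t / 1500)
    lr = 0.5 * np.exp(-t / 1500)
    h = lr * np.exp(-dist2 / (2 * sigma ** 2))
    W += h[:, None] * (v - W)

def estimate(x):
    bmu = np.argmin(((W[:, :-1] - x) ** 2).sum(axis=1))  # features only
    return W[bmu, -1]                                    # stored target

print("estimated vs. true:", estimate(X[0]), y[0])
```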

  10. Development of Raman microspectroscopy for automated detection and imaging of basal cell carcinoma

    Science.gov (United States)

    Larraona-Puy, Marta; Ghita, Adrian; Zoladek, Alina; Perkins, William; Varma, Sandeep; Leach, Iain H.; Koloydenko, Alexey A.; Williams, Hywel; Notingher, Ioan

    2009-09-01

    We investigate the potential of Raman microspectroscopy (RMS) for automated evaluation of excised skin tissue during Mohs micrographic surgery (MMS). The main aim is to develop an automated method for imaging and diagnosis of basal cell carcinoma (BCC) regions. Selected Raman bands responsible for the largest spectral differences between BCC and normal skin regions and linear discriminant analysis (LDA) are used to build a multivariate supervised classification model. The model is based on 329 Raman spectra measured on skin tissue obtained from 20 patients. BCC is discriminated from healthy tissue with 90±9% sensitivity and 85±9% specificity in a 70% to 30% split cross-validation algorithm. This multivariate model is then applied on tissue sections from new patients to image tumor regions. The RMS images show excellent correlation with the gold standard of histopathology sections, BCC being detected in all positive sections. We demonstrate the potential of RMS as an automated objective method for tumor evaluation during MMS. The replacement of current histopathology during MMS by a “generalization” of the proposed technique may improve the feasibility and efficacy of MMS, leading to a wider use according to clinical need.
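
    A minimal sketch of the band-selection-plus-LDA modeling step described above, using scikit-learn's LinearDiscriminantAnalysis on synthetic stand-ins for the selected band intensities (the actual bands and spectra come from the paper's measurements).

```python
# Sketch: intensities at a few discriminative Raman shifts feed a linear
# discriminant classifier. Band values here are synthetic placeholders.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_spec, n_bands = 329, 6          # 329 spectra, 6 selected bands (as an example)
X_bcc = rng.normal(1.0, 0.5, (n_spec // 2, n_bands))
X_norm = rng.normal(0.0, 0.5, (n_spec - n_spec // 2, n_bands))
X = np.vstack([X_bcc, X_norm])
y = np.array([1] * (n_spec // 2) + [0] * (n_spec - n_spec // 2))

lda = LinearDiscriminantAnalysis()
print("CV accuracy:", cross_val_score(lda, X, y, cv=5).mean())
```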

  11. Automated measurement of mouse social behaviors using depth sensing, video tracking, and machine learning.

    Science.gov (United States)

    Hong, Weizhe; Kennedy, Ann; Burgos-Artizzu, Xavier P; Zelikowsky, Moriel; Navonne, Santiago G; Perona, Pietro; Anderson, David J

    2015-09-22

    A lack of automated, quantitative, and accurate assessment of social behaviors in mammalian animal models has limited progress toward understanding mechanisms underlying social interactions and their disorders such as autism. Here we present a new integrated hardware and software system that combines video tracking, depth sensing, and machine learning for automatic detection and quantification of social behaviors involving close and dynamic interactions between two mice of different coat colors in their home cage. We designed a hardware setup that integrates traditional video cameras with a depth camera, developed computer vision tools to extract the body "pose" of individual animals in a social context, and used a supervised learning algorithm to classify several well-described social behaviors. We validated the robustness of the automated classifiers in various experimental settings and used them to examine how genetic background, such as that of Black and Tan Brachyury (BTBR) mice (a previously reported autism model), influences social behavior. Our integrated approach allows for rapid, automated measurement of social behaviors across diverse experimental designs and also affords the ability to develop new, objective behavioral metrics.

  12. Nursing supervision for care comprehensiveness.

    Science.gov (United States)

    Chaves, Lucieli Dias Pedreschi; Mininel, Vivian Aline; Silva, Jaqueline Alcântara Marcelino da; Alves, Larissa Roberta; Silva, Maria Ferreira da; Camelo, Silvia Helena Henriques

    2017-01-01

    To reflect on nursing supervision as a management tool for care comprehensiveness by nurses, considering its potential and limits in the current scenario. A reflective study based on discourse about nursing supervision, presenting theoretical and practical concepts and approaches. Limits on the exercise of supervision are related to the organization of healthcare services based on the functional and clinical model of care, in addition to possible gaps in the nurse training process and work overload. Regarding the potential, researchers emphasize that supervision is a tool for coordinating care and management actions, which may favor care comprehensiveness, and stimulate positive attitudes toward cooperation and contribution within teams, co-responsibility, and educational development at work. Nursing supervision may help enhance care comprehensiveness by implying continuous reflection on including the dynamics of the healthcare work process and user needs in care networks.

  13. Cognitive Inference Device for Activity Supervision in the Elderly

    Directory of Open Access Journals (Sweden)

    Nilamadhab Mishra

    2014-01-01

    Full Text Available Human activity, life span, and quality of life are enhanced by innovations in science and technology. Aging individuals need to take advantage of these developments to lead a self-regulated life. However, maintaining a self-regulated life in old age involves a high degree of risk, and the elderly often fail at this goal. Thus, the objective of our study is to investigate the feasibility of implementing a cognitive inference device (CI-device) for effective activity supervision in the elderly. To frame the CI-device, we propose a device design framework along with an inference algorithm and implement the designs through an artificial neural model with different configurations, mapping the CI-device’s functions to minimise the device’s prediction error. An analysis and discussion are then provided to validate the feasibility of CI-device implementation for activity supervision in the elderly.

  14. Using Supervised Deep Learning for Human Age Estimation Problem

    Science.gov (United States)

    Drobnyh, K. A.; Polovinkin, A. N.

    2017-05-01

    Automatic facial age estimation is a challenging task that has come to the fore in recent years. In this paper, we propose using supervised deep learning features to improve the accuracy of existing age estimation algorithms. There are many approaches to the problem; the active appearance model and bio-inspired features are two of those that have shown the best accuracy. For our experiments we chose the popular, publicly available FG-NET database, which contains 1002 images with a broad variety of light, pose, and expression. The LOPO (leave-one-person-out) method was used to estimate the accuracy. Experiments demonstrated that adding supervised deep learning features improved the accuracy of some basic models. For example, adding the features to an active appearance model gave a 4% gain (the error decreased from 4.59 to 4.41).

  15. Learning Supervised Topic Models for Classification and Regression from Crowds.

    Science.gov (United States)

    Rodrigues, Filipe; Lourenco, Mariana; Ribeiro, Bernardete; Pereira, Francisco C

    2017-12-01

    The growing need to analyze large collections of documents has led to great developments in topic modeling. Since documents are frequently associated with other related variables, such as labels or ratings, much interest has been placed on supervised topic models. However, the nature of most annotation tasks, prone to ambiguity and noise, often with high volumes of documents, makes learning under a single-annotator assumption unrealistic or impractical for most real-world applications. In this article, we propose two supervised topic models, one for classification and another for regression problems, which account for the heterogeneity and biases among different annotators that are encountered in practice when learning from crowds. We develop an efficient stochastic variational inference algorithm that is able to scale to very large datasets, and we empirically demonstrate the advantages of the proposed model over state-of-the-art approaches.

  16. Distribution automation

    International Nuclear Information System (INIS)

    Gruenemeyer, D.

    1991-01-01

    This paper reports on how a Distribution Automation (DA) system enhances the efficiency and productivity of a utility. It also provides intangible benefits such as improved public image and market advantages. A utility should evaluate the benefits and costs of such a system before committing funds. The expenditure for distribution automation is economical when justified by the deferral of a capacity increase, a decrease in peak power demand, or a reduction in O and M requirements

  17. Automated analysis of autoradiographic imagery

    International Nuclear Information System (INIS)

    Bisignani, W.T.; Greenhouse, S.C.

    1975-01-01

    A research programme is described which has as its objective the automated characterization of neurological tissue regions from autoradiographs by utilizing hybrid-resolution image processing techniques. An experimental system is discussed which includes raw imagery, scanning and digitizing equipment, feature-extraction algorithms, and regional characterization techniques. The parameters extracted by these algorithms are presented, as well as the regional characteristics which are obtained by operating on the parameters with statistical sampling techniques. An approach is presented for validating the techniques, and initial experimental results are obtained from an analysis of an autoradiograph of a region of the hypothalamus. An extension of these automated techniques to other biomedical research areas is discussed, as well as the implications of applying automated techniques to biomedical research problems. (author)

  18. Classifier Directed Data Hybridization for Geographic Sample Supervised Segment Generation

    Directory of Open Access Journals (Sweden)

    Christoff Fourie

    2014-11-01

    Full Text Available Quality segment generation is a well-known challenge and research objective within Geographic Object-based Image Analysis (GEOBIA). Although methodological avenues within GEOBIA are diverse, segmentation commonly plays a central role in most approaches, influencing and being influenced by surrounding processes. A general approach using supervised quality measures, specifically user-provided reference segments, suggests casting the parameters of a given segmentation algorithm as a multidimensional search problem. In such a sample supervised segment generation approach, spatial metrics observing the user-provided reference segments may drive the search process. The search is commonly performed by metaheuristics. A novel sample supervised segment generation approach is presented in this work, where the spectral content of provided reference segments is queried. A one-class classification process using spectral information from inside the provided reference segments is used to generate a probability image, which in turn is employed to direct a hybridization of the original input imagery. Segmentation is performed on such a hybrid image. These processes are adjustable, interdependent and form a part of the search problem. Results are presented detailing the performances of four method variants compared to the generic sample supervised segment generation approach, under various conditions in terms of resultant segment quality, required computing time and search process characteristics. Multiple metrics, metaheuristics and segmentation algorithms are tested with this approach. Using the spectral data contained within user-provided reference segments to tailor the output generally improves the results in the investigated problem contexts, but at the expense of additional required computing time.
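
    The spectral-query step can be sketched as follows, assuming a one-class SVM as the one-class classifier (the abstract leaves the choice of model and blending scheme as adjustable parts of the search problem); the image, mask, and blend weight below are placeholders.

```python
# Sketch: a one-class model fitted on pixels inside the reference segment
# scores every pixel; the resulting probability image is blended with the
# input to form the hybrid image that is then segmented.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
h, w, bands = 64, 64, 4
image = rng.random((h, w, bands))
ref_mask = np.zeros((h, w), dtype=bool)
ref_mask[20:30, 20:30] = True           # user-provided reference segment

pixels = image.reshape(-1, bands)
oc = OneClassSVM(nu=0.1, gamma="scale").fit(pixels[ref_mask.ravel()])

# Map decision scores to [0, 1] as a pseudo-probability image.
score = oc.decision_function(pixels).reshape(h, w)
prob = (score - score.min()) / (np.ptp(score) + 1e-12)

alpha = 0.5                              # blend weight: a search parameter
hybrid = (1 - alpha) * image + alpha * prob[..., None]
```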

  19. A review of supervised machine learning applied to ageing research.

    Science.gov (United States)

    Fabris, Fabio; Magalhães, João Pedro de; Freitas, Alex A

    2017-04-01

    Broadly speaking, supervised machine learning is the computational task of learning correlations between variables in annotated data (the training set), and using this information to create a predictive model capable of inferring annotations for new data, whose annotations are not known. Ageing is a complex process that affects nearly all animal species. This process can be studied at several levels of abstraction, in different organisms and with different objectives in mind. Not surprisingly, the diversity of the supervised machine learning algorithms applied to answer biological questions reflects the complexities of the underlying ageing processes being studied. Many works using supervised machine learning to study the ageing process have been recently published, so it is timely to review these works, to discuss their main findings and weaknesses. In summary, the main findings of the reviewed papers are: the link between specific types of DNA repair and ageing; ageing-related proteins tend to be highly connected and seem to play a central role in molecular pathways; ageing/longevity is linked with autophagy and apoptosis, nutrient receptor genes, and copper and iron ion transport. Additionally, several biomarkers of ageing were found by machine learning. Despite some interesting machine learning results, we also identified a weakness of current works on this topic: only one of the reviewed papers has corroborated the computational results of machine learning algorithms through wet-lab experiments. In conclusion, supervised machine learning has contributed to advance our knowledge and has provided novel insights on ageing, yet future work should have a greater emphasis in validating the predictions.

  20. An Effective Big Data Supervised Imbalanced Classification Approach for Ortholog Detection in Related Yeast Species

    Directory of Open Access Journals (Sweden)

    Deborah Galpert

    2015-01-01

    Full Text Available Orthology detection requires more effective scaling algorithms. In this paper, a set of gene pair features based on similarity measures (alignment scores, sequence length, gene membership to conserved regions, and physicochemical profiles) are combined in a supervised pairwise ortholog detection approach to improve effectiveness, considering low ortholog ratios in relation to the possible pairwise comparisons between two genomes. In this scenario, big data supervised classifiers managing imbalance between ortholog and nonortholog pair classes allow for an effective scaling solution built from two genomes and extended to other genome pairs. The supervised approach was compared with the RBH, RSD, and OMA algorithms by using the following yeast genome pairs: Saccharomyces cerevisiae-Kluyveromyces lactis, Saccharomyces cerevisiae-Candida glabrata, and Saccharomyces cerevisiae-Schizosaccharomyces pombe as benchmark datasets. Because of the large amount of imbalanced data, the building and testing of the supervised model were only possible by using big data supervised classifiers managing imbalance. Evaluation metrics taking low ortholog ratios into account were applied. From the effectiveness perspective, MapReduce Random Oversampling combined with Spark SVM outperformed RBH, RSD, and OMA, probably because of the consideration of gene pair features beyond alignment similarities combined with the advances in big data supervised classification.
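
    A single-node sketch of the imbalance-handling idea, with plain random oversampling plus a linear SVM standing in for the paper's MapReduce Random Oversampling combined with Spark SVM; the feature vectors are synthetic placeholders.

```python
# Randomly oversample the rare ortholog class, then fit a linear SVM.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
X_min = rng.normal(1.0, 1.0, (100, 8))      # ortholog pairs (rare class)
X_maj = rng.normal(0.0, 1.0, (10000, 8))    # non-ortholog pairs

# Oversample the minority class with replacement to match the majority.
idx = rng.integers(0, len(X_min), size=len(X_maj))
X = np.vstack([X_min[idx], X_maj])
y = np.concatenate([np.ones(len(X_maj)), np.zeros(len(X_maj))])

clf = LinearSVC(C=1.0).fit(X, y)
print("train F1:", f1_score(y, clf.predict(X)))
```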

  1. Integrative gene network construction to analyze cancer recurrence using semi-supervised learning.

    Science.gov (United States)

    Park, Chihyun; Ahn, Jaegyoon; Kim, Hyunjin; Park, Sanghyun

    2014-01-01

    The prognosis of cancer recurrence is an important research area in bioinformatics and is challenging due to the small sample sizes compared to the vast number of genes. There have been several attempts to predict cancer recurrence. Most studies employed a supervised approach, which uses only a few labeled samples. Semi-supervised learning can be a great alternative to solve this problem. There have been few attempts based on manifold assumptions to reveal the detailed roles of identified cancer genes in recurrence. In order to predict cancer recurrence, we proposed a novel semi-supervised learning algorithm based on a graph regularization approach. We transformed the gene expression data into a graph structure for semi-supervised learning and integrated protein interaction data with the gene expression data to select functionally-related gene pairs. Then, we predicted the recurrence of cancer by applying a regularization approach to the constructed graph containing both labeled and unlabeled nodes. The average improvement rate of accuracy for three different cancer datasets was 24.9% compared to existing supervised and semi-supervised methods. We performed functional enrichment on the gene networks used for learning. We identified that those gene networks are significantly associated with cancer-recurrence-related biological functions. Our algorithm was developed with standard C++ and is available in Linux and MS Windows formats in the STL library. The executable program is freely available at: http://embio.yonsei.ac.kr/~Park/ssl.php.
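
    A compact sketch of graph-based semi-supervised prediction in this spirit, using scikit-learn's LabelSpreading as a stand-in for the authors' graph-regularization formulation (their graph additionally integrates protein interaction data); the expression data and labels below are synthetic.

```python
# Expression profiles become graph nodes; labels diffuse from the few
# labeled samples to the unlabeled ones over a kNN similarity graph.
import numpy as np
from sklearn.semi_supervised import LabelSpreading

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 20)), rng.normal(2, 1, (50, 20))])
y = np.full(100, -1)          # -1 marks unlabeled samples
y[:5], y[50:55] = 0, 1        # a handful of labeled recurrence outcomes

model = LabelSpreading(kernel="knn", n_neighbors=7, alpha=0.2).fit(X, y)
print("predicted labels for some unlabeled nodes:", model.transduction_[5:10])
```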

  2. Integrative gene network construction to analyze cancer recurrence using semi-supervised learning.

    Directory of Open Access Journals (Sweden)

    Chihyun Park

    Full Text Available BACKGROUND: The prognosis of cancer recurrence is an important research area in bioinformatics and is challenging due to the small sample sizes compared to the vast number of genes. There have been several attempts to predict cancer recurrence. Most studies employed a supervised approach, which uses only a few labeled samples. Semi-supervised learning can be a great alternative to solve this problem. There have been few attempts based on manifold assumptions to reveal the detailed roles of identified cancer genes in recurrence. RESULTS: In order to predict cancer recurrence, we proposed a novel semi-supervised learning algorithm based on a graph regularization approach. We transformed the gene expression data into a graph structure for semi-supervised learning and integrated protein interaction data with the gene expression data to select functionally-related gene pairs. Then, we predicted the recurrence of cancer by applying a regularization approach to the constructed graph containing both labeled and unlabeled nodes. CONCLUSIONS: The average improvement rate of accuracy for three different cancer datasets was 24.9% compared to existing supervised and semi-supervised methods. We performed functional enrichment on the gene networks used for learning. We identified that those gene networks are significantly associated with cancer-recurrence-related biological functions. Our algorithm was developed with standard C++ and is available in Linux and MS Windows formats in the STL library. The executable program is freely available at: http://embio.yonsei.ac.kr/~Park/ssl.php.

  3. Cognitive Radio for Smart Grid: Theory, Algorithms, and Security

    Directory of Open Access Journals (Sweden)

    Raghuram Ranganathan

    2011-01-01

    Full Text Available Recently, cognitive radio and smart grid are two areas which have received considerable research impetus. Cognitive radios are intelligent software defined radios (SDRs) that efficiently utilize the unused regions of the spectrum to achieve higher data rates. The smart grid is an automated electric power system that monitors and controls grid activities. In this paper, the novel concept of incorporating a cognitive radio network as the communications infrastructure for the smart grid is presented. A brief overview of the cognitive radio, the IEEE 802.22 standard, and the smart grid is provided. Experimental results obtained by using dimensionality reduction techniques such as principal component analysis (PCA), kernel PCA, and landmark maximum variance unfolding (LMVU) on Wi-Fi signal measurements are presented in a spectrum sensing context. Furthermore, compressed sensing algorithms such as Bayesian compressed sensing and the compressed sensing Kalman filter are employed for recovering the sparse smart meter transmissions. From the power system point of view, a supervised learning method called the support vector machine (SVM) is used for the automated classification of power system disturbances. The impending problem of securing the smart grid is also addressed, in addition to the possibility of applying FPGA-based fuzzy logic intrusion detection for the smart grid.
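
    The disturbance-classification step can be sketched as a PCA-plus-SVM pipeline, as below; the waveform features and the three disturbance classes are synthetic placeholders, not the paper's data.

```python
# PCA compresses the raw measurements; an SVM labels the disturbance type.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 64))        # e.g. windowed waveform features
y = rng.integers(0, 3, size=300)      # 3 hypothetical disturbance classes

clf = make_pipeline(PCA(n_components=10), SVC(kernel="rbf", C=10.0)).fit(X, y)
print("training accuracy:", clf.score(X, y))
```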

  4. The Science of Home Automation

    Science.gov (United States)

    Thomas, Brian Louis

    Smart home technologies and the concept of home automation have become more popular in recent years. This popularity has been accompanied by social acceptance of passive sensors installed throughout the home. The subsequent increase in smart homes facilitates the creation of home automation strategies. We believe that home automation strategies can be generated intelligently by utilizing smart home sensors and activity learning. In this dissertation, we hypothesize that home automation can benefit from activity awareness. To test this, we develop our activity-aware smart automation system, CARL (CASAS Activity-aware Resource Learning). CARL learns the associations between activities and device usage from historical data and utilizes the activity-aware capabilities to control the devices. To help validate CARL we deploy and test three different versions of the automation system in a real-world smart environment. To provide a foundation of activity learning, we integrate existing activity recognition and activity forecasting into CARL home automation. We also explore two alternatives to using human-labeled data to train the activity learning models. The first unsupervised method is Activity Detection, and the second is a modified DBSCAN algorithm that utilizes Dynamic Time Warping (DTW) as a distance metric. We compare the performance of activity learning with human-defined labels and with automatically-discovered activity categories. To provide evidence in support of our hypothesis, we evaluate CARL automation in a smart home testbed. Our results indicate that home automation can be boosted through activity awareness. We also find that the resulting automation has a high degree of usability and comfort for the smart home resident.

  5. Analysis And Control System For Automated Welding

    Science.gov (United States)

    Powell, Bradley W.; Burroughs, Ivan A.; Kennedy, Larry Z.; Rodgers, Michael H.; Goode, K. Wayne

    1994-01-01

    Automated variable-polarity plasma arc (VPPA) welding apparatus operates under electronic supervision by welding analysis and control system. System performs all major monitoring and controlling functions. It acquires, analyzes, and displays weld-quality data in real time and adjusts process parameters accordingly. Also records pertinent data for use in post-weld analysis and documentation of quality. System includes optoelectronic sensors and data processors that provide feedback control of welding process.

  6. Supervision of sunbeds in Finland

    International Nuclear Information System (INIS)

    Visuri, R.

    2003-01-01

    Sunbeds emitting ultraviolet (UV) radiation are used for cosmetic tanning. UV radiation incontrovertibly causes skin diseases, such as skin cancer, as well as eye diseases. UV exposure from the natural sun should be moderate, and exposure from sunbeds should be avoided. The aim of the supervision of sunbeds and tanning facilities is to ensure that they comply with valid safety requirements. The basis for the requirements is that acute effects such as sunburns will not occur and that the yearly UV dose will not increase excessively. (orig.)

  7. Supervision of sunbeds in Finland

    Energy Technology Data Exchange (ETDEWEB)

    Visuri, R. [Radiation and Nuclear Safety Authority, Non-ionizing Radiation Laboratory, Helsinki (Finland)

    2003-06-01

    Sunbeds emitting ultraviolet (UV) radiation are used for cosmetic tanning. UV radiation incontrovertibly causes skin diseases, such as skin cancer, as well as eye diseases. UV exposure from the natural sun should be moderate, and exposure from sunbeds should be avoided. The aim of the supervision of sunbeds and tanning facilities is to ensure that they comply with valid safety requirements. The basis for the requirements is that acute effects such as sunburns will not occur and that the yearly UV dose will not increase excessively. (orig.)

  8. Self-Supervised Dynamical Systems

    Science.gov (United States)

    Zak, Michail

    2003-01-01

    Some progress has been made in a continuing effort to develop mathematical models of the behaviors of multi-agent systems known in biology, economics, and sociology (e.g., systems ranging from single or a few biomolecules to many interacting higher organisms). Living systems can be characterized by nonlinear evolution of probability distributions over different possible choices of the next steps in their motions. One of the main challenges in mathematical modeling of living systems is to distinguish between random walks of purely physical origin (for instance, Brownian motions) and those of biological origin. Following a line of reasoning from prior research, it has been assumed, in the present development, that a biological random walk can be represented by a nonlinear mathematical model that represents coupled mental and motor dynamics incorporating the psychological concept of reflection or self-image. The nonlinear dynamics impart the lifelike ability to behave in ways and to exhibit patterns that depart from thermodynamic equilibrium. Reflection or self-image has traditionally been recognized as a basic element of intelligence. The nonlinear mathematical models of the present development are denoted self-supervised dynamical systems. They include (1) equations of classical dynamics, including random components caused by uncertainties in initial conditions and by Langevin forces, coupled with (2) the corresponding Liouville or Fokker-Planck equations that describe the evolutions of probability densities that represent the uncertainties. The coupling is effected by fictitious information-based forces, denoted supervising forces, composed of probability densities and functionals thereof. The equations of classical mechanics represent motor dynamics, that is, dynamics in the traditional sense, signifying Newton's equations of motion. The evolution of the probability densities represents mental dynamics or self-image. Then the interaction between the physical and
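
    In generic notation (ours, not necessarily the author's), the coupling described here can be written as a one-dimensional Langevin equation for the motor dynamics paired with the corresponding Fokker-Planck equation for the mental dynamics, the two linked by a supervising force F that depends on the probability density ρ:

```latex
% Motor dynamics: classical drift f(v), a density-dependent supervising
% force F[\rho], and Langevin noise \Gamma(t) with diffusion strength D.
\dot{v} = f(v) + F[\rho(v,t)] + \sqrt{2D}\,\Gamma(t)

% Mental dynamics (self-image): the corresponding Fokker--Planck equation
% for the probability density \rho(v,t) of the uncertain state v.
\frac{\partial \rho}{\partial t}
  = -\frac{\partial}{\partial v}\!\left[\bigl(f(v) + F[\rho]\bigr)\,\rho\right]
  + D\,\frac{\partial^{2}\rho}{\partial v^{2}}
```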

  9. Quick fuzzy backpropagation algorithm.

    Science.gov (United States)

    Nikov, A; Stoeva, S

    2001-03-01

    A modification of the fuzzy backpropagation (FBP) algorithm called the QuickFBP algorithm is proposed, in which the computation of the net function is significantly quicker. It is proved that the FBP algorithm is of exponential time complexity, while the QuickFBP algorithm is of polynomial time complexity. Convergence conditions of the QuickFBP and FBP algorithms are defined and proved for: (1) single output neural networks in case of training patterns with different targets; and (2) multiple output neural networks in case of training patterns with an equivalued target vector. They support the automation of the weights training process (quasi-unsupervised learning), establishing the target value(s) depending on the network's input values. In these cases the simulation results confirm the convergence of both algorithms. An example with a large-sized neural network illustrates the significantly greater training speed of the QuickFBP compared to the FBP algorithm. The adaptation of an interactive web system to users on the basis of the QuickFBP algorithm is presented. Since the QuickFBP algorithm ensures quasi-unsupervised learning, this implies its broad applicability in areas of adaptive and adaptable interactive systems, data mining, and other applications.

  10. Applying active learning to supervised word sense disambiguation in MEDLINE

    Science.gov (United States)

    Chen, Yukun; Cao, Hongxin; Mei, Qiaozhu; Zheng, Kai; Xu, Hua

    2013-01-01

    Objectives: The objective of this study was to assess whether active learning strategies can be integrated with supervised word sense disambiguation (WSD) methods, thus reducing the number of annotated samples, while keeping or improving the quality of disambiguation models. Methods: We developed support vector machine (SVM) classifiers to disambiguate 197 ambiguous terms and abbreviations in the MSH WSD collection. Three different uncertainty sampling-based active learning algorithms were implemented with the SVM classifiers and were compared with a passive learner (PL) based on random sampling. For each ambiguous term and each learning algorithm, a learning curve that plots the accuracy computed from the test set as a function of the number of annotated samples used in the model was generated. The area under the learning curve (ALC) was used as the primary metric for evaluation. Results: Our experiments demonstrated that active learners (ALs) significantly outperformed the PL, showing better performance for 177 out of 197 (89.8%) WSD tasks. Further analysis showed that to achieve an average accuracy of 90%, the PL needed 38 annotated samples, while the ALs needed only 24, a 37% reduction in annotation effort. Moreover, we analyzed cases where active learning algorithms did not achieve superior performance and identified three causes: (1) poor models in the early learning stage; (2) easy WSD cases; and (3) difficult WSD cases, which provide useful insight for future improvements. Conclusions: This study demonstrated that integrating active learning strategies with supervised WSD methods could effectively reduce annotation cost and improve the disambiguation models. PMID:23364851
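
    A minimal sketch of the uncertainty-sampling loop with an SVM, in the spirit of the study above; the MSH WSD feature vectors and the annotation step are simulated here with synthetic data.

```python
# At each round, the sample whose predicted class probability is closest
# to 0.5 is "sent for annotation" (here, its label is simply revealed).
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_pool = rng.normal(size=(500, 10))
y_pool = (X_pool[:, 0] + 0.3 * rng.normal(size=500) > 0).astype(int)

labeled = list(rng.choice(500, size=10, replace=False))  # seed annotations
for _ in range(20):                                      # 20 annotation rounds
    clf = SVC(probability=True).fit(X_pool[labeled], y_pool[labeled])
    p = clf.predict_proba(X_pool)[:, 1]
    uncertainty = -np.abs(p - 0.5)                       # highest near 0.5
    uncertainty[labeled] = -np.inf                       # skip labeled ones
    labeled.append(int(np.argmax(uncertainty)))          # annotate it

print("labeled set size:", len(labeled))
```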

  11. Applying active learning to supervised word sense disambiguation in MEDLINE.

    Science.gov (United States)

    Chen, Yukun; Cao, Hongxin; Mei, Qiaozhu; Zheng, Kai; Xu, Hua

    2013-01-01

    This study was conducted to assess whether active learning strategies can be integrated with supervised word sense disambiguation (WSD) methods, thus reducing the number of annotated samples, while keeping or improving the quality of disambiguation models. We developed support vector machine (SVM) classifiers to disambiguate 197 ambiguous terms and abbreviations in the MSH WSD collection. Three different uncertainty sampling-based active learning algorithms were implemented with the SVM classifiers and were compared with a passive learner (PL) based on random sampling. For each ambiguous term and each learning algorithm, a learning curve that plots the accuracy computed from the test set as a function of the number of annotated samples used in the model was generated. The area under the learning curve (ALC) was used as the primary metric for evaluation. Our experiments demonstrated that active learners (ALs) significantly outperformed the PL, showing better performance for 177 out of 197 (89.8%) WSD tasks. Further analysis showed that to achieve an average accuracy of 90%, the PL needed 38 annotated samples, while the ALs needed only 24, a 37% reduction in annotation effort. Moreover, we analyzed cases where active learning algorithms did not achieve superior performance and identified three causes: (1) poor models in the early learning stage; (2) easy WSD cases; and (3) difficult WSD cases, which provide useful insight for future improvements. This study demonstrated that integrating active learning strategies with supervised WSD methods could effectively reduce annotation cost and improve the disambiguation models.

  12. Robust head pose estimation via supervised manifold learning.

    Science.gov (United States)

    Wang, Chao; Song, Xubo

    2014-05-01

    Head poses can be automatically estimated using manifold learning algorithms, with the assumption that, with pose being the only variable, the face images should lie on a smooth, low-dimensional manifold. However, this estimation approach is challenging due to other appearance variations related to identity, head location in the image, background clutter, facial expression, and illumination. To address the problem, we propose to incorporate supervised information (pose angles of training samples) into the process of manifold learning. The process has three stages: neighborhood construction, graph weight computation, and projection learning. For the first two stages, we redefine inter-point distance for neighborhood construction as well as graph weight by constraining them with the pose angle information. For Stage 3, we present a supervised neighborhood-based linear feature transformation algorithm to keep the data points with similar pose angles close together but the data points with dissimilar pose angles far apart. The experimental results show that our method has higher estimation accuracy than the other state-of-the-art algorithms and is robust to identity and illumination variations. Copyright © 2014 Elsevier Ltd. All rights reserved.

  13. Application of ANN and fuzzy logic algorithms for streamflow ...

    Indian Academy of Sciences (India)

    Department of Soil and Water Engineering, College of Technology and Engineering, Maharana Pratap University of ... It was found that ANN model performance improved with increasing ... algorithm uses supervised learning that provides...

  14. Virtual automation.

    Science.gov (United States)

    Casis, E; Garrido, A; Uranga, B; Vives, A; Zufiaurre, C

    2001-01-01

    Total laboratory automation (TLA) can be substituted in mid-size laboratories by a computer-based sample workflow control system (virtual automation). Such a solution has been implemented in our laboratory using PSM, software developed for this purpose in cooperation with Roche Diagnostics (Barcelona, Spain). This software is connected to the online analyzers and to the laboratory information system and is able to control and direct the samples, working as an intermediate station. The only difference from TLA is the replacement of transport belts by laboratory personnel. The implementation of this virtual automation system has allowed us to achieve the main advantages of TLA: workload increase (64%) with a reduction in the cost per test (43%), a significant reduction in the number of biochemistry primary tubes (from 8 to 2), less aliquoting (from 600 to 100 samples/day), automation of functional testing, a drastic reduction of preanalytical errors (from 11.7 to 0.4% of the tubes), and better total response time for both inpatients (from up to 48 hours to up to 4 hours) and outpatients (from up to 10 days to up to 48 hours). As an additional advantage, virtual automation could be implemented without hardware investment and with a significant headcount reduction (15% in our lab).

  15. Subsampled Hessian Newton Methods for Supervised Learning.

    Science.gov (United States)

    Wang, Chien-Chih; Huang, Chun-Heng; Lin, Chih-Jen

    2015-08-01

    Newton methods can be applied in many supervised learning approaches. However, for large-scale data, the use of the whole Hessian matrix can be time-consuming. Recently, subsampled Newton methods have been proposed to reduce the computational time by using only a subset of the data to calculate an approximation of the Hessian matrix. Unfortunately, we find that in some situations the running speed is worse than that of the standard Newton method, because cheaper but less accurate search directions are used. In this work, we propose some novel techniques to improve the existing subsampled Hessian Newton method. The main idea is to solve a two-dimensional subproblem per iteration to adjust the search direction to better minimize the second-order approximation of the function value. We prove the theoretical convergence of the proposed method. Experiments on logistic regression, linear SVM, maximum entropy, and deep networks indicate that our techniques significantly reduce the running time of the subsampled Hessian Newton method. The resulting algorithm becomes a compelling alternative to the standard Newton method for large-scale data classification.
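
    A bare-bones sketch of a subsampled Hessian Newton iteration for L2-regularized logistic regression, without the paper's two-dimensional subproblem refinement; the sampling fraction, data, and hyperparameters are illustrative only.

```python
# The gradient uses all data; the Hessian-vector products inside CG use
# only a random 10% subset, which is the subsampled-Hessian idea.
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

rng = np.random.default_rng(0)
n, d, lam = 5000, 20, 1.0
X = rng.normal(size=(n, d))
y = np.sign(X @ rng.normal(size=d) + 0.1 * rng.normal(size=n))

w = np.zeros(d)
for _ in range(10):
    z = X @ w
    sig = 1.0 / (1.0 + np.exp(-y * z))
    grad = lam * w - X.T @ (y * (1.0 - sig)) / n     # full-data gradient

    S = rng.choice(n, size=n // 10, replace=False)   # Hessian subsample
    Xs = X[S]
    Ds = sig[S] * (1.0 - sig[S])                     # logistic curvature

    def hv(v):                                       # subsampled Hessian-vector
        return lam * v + Xs.T @ (Ds * (Xs @ v)) / len(S)

    H = LinearOperator((d, d), matvec=hv)
    step, _ = cg(H, -grad, maxiter=50)               # approximate Newton step
    w += step

print("final regularized loss:",
      lam / 2 * w @ w + np.mean(np.logaddexp(0.0, -y * (X @ w))))
```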

  16. Current Risk Management Practices in Psychotherapy Supervision.

    Science.gov (United States)

    Mehrtens, Ilayna K; Crapanzano, Kathleen; Tynes, L Lee

    2017-12-01

    Psychotherapy competence is a core skill for psychiatry residents, and psychotherapy supervision is a time-honored approach to teaching this skill. To explore the current supervision practices of psychiatry training programs, a 24-item questionnaire was sent to all program directors of Accreditation Council for Graduate Medical Education (ACGME)-approved adult psychiatry programs. The questionnaire included items regarding adherence to recently proposed therapy supervision practices aimed at reducing potential liability risk. The results suggested that current therapy supervision practices do not include sufficient management of the potential liability involved in therapy supervision. Better protections for patients, residents, supervisors and the institutions would be possible with improved credentialing practices and better documentation of informed consent and supervision policies and procedures. © 2017 American Academy of Psychiatry and the Law.

  17. Semi-Supervised Half-Quadratic Nonnegative Matrix Factorization for Face Recognition

    KAUST Repository

    Alghamdi, Masheal M.

    2014-05-01

    Face recognition is a challenging problem in computer vision. Difficulties such as slight differences between similar faces of different people, changes in facial expression, light and illumination conditions, and pose variations add extra complications to face recognition research. Many algorithms are devoted to solving the face recognition problem, among which the family of nonnegative matrix factorization (NMF) algorithms has been widely used as a compact data representation method. Different versions of NMF have been proposed. Wang et al. proposed the graph-based semi-supervised nonnegative learning (S2N2L) algorithm that uses labeled data in constructing intrinsic and penalty graphs to enforce the separability of labeled data, which leads to greater discriminating power. Moreover, the geometrical structure of labeled and unlabeled data is preserved through the smoothness assumption, by creating a similarity graph that conserves the neighboring information for all labeled and unlabeled data. However, S2N2L is sensitive to light changes, illumination, and partial occlusion. In this thesis, we propose a Semi-Supervised Half-Quadratic NMF (SSHQNMF) algorithm that combines the benefits of S2N2L and the robust NMF by half-quadratic minimization (HQNMF) algorithm. Our algorithm improves upon the S2N2L algorithm by replacing the Frobenius norm with a robust M-estimator loss function. A multiplicative update solution for our SSHQNMF algorithm is derived using half-quadratic (HQ) theory. Extensive experiments on ORL, Yale-A, and a subset of the PIE data sets, covering nine M-estimator loss functions for both the SSHQNMF and HQNMF algorithms, are conducted and compared with several state-of-the-art supervised and unsupervised algorithms, along with the original S2N2L algorithm, in the context of classification, clustering, and robustness against partial occlusion. The proposed algorithm outperformed the other algorithms. Furthermore, SSHQNMF with Maximum Correntropy
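
    For orientation, the baseline multiplicative updates that NMF variants such as HQNMF and SSHQNMF build on are the standard Lee and Seung rules for the Frobenius objective; a minimal sketch:

```python
# Standard multiplicative NMF updates: V ~ W H with all factors
# nonnegative, minimizing the Frobenius reconstruction error.
import numpy as np

rng = np.random.default_rng(0)
V = rng.random((100, 50))                 # data matrix (e.g. face images)
k = 10
W, H = rng.random((100, k)), rng.random((k, 50))

eps = 1e-12                               # avoids division by zero
for _ in range(200):
    H *= (W.T @ V) / (W.T @ W @ H + eps)  # update coefficients
    W *= (V @ H.T) / (W @ H @ H.T + eps)  # update basis
print("reconstruction error:", np.linalg.norm(V - W @ H))
```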

  18. Abusive Supervision Scale Development in Indonesia

    OpenAIRE

    Wulani, Fenika; Purwanto, Bernadinus M; Handoko, Hani

    2014-01-01

    The purpose of this study was to develop a scale of abusive supervision in Indonesia. The study was conducted with a different context and scale development method from Tepper’s (2000) abusive supervision scale. The abusive supervision scale from Tepper (2000) was developed in the U.S., which has a cultural orientation of low power distance. The current study was conducted in Indonesia, which has a high power distance. This study used interview procedures to obtain information about superviso...

  19. Human Supervision of Multiple Autonomous Vehicles

    Science.gov (United States)

    2013-03-22

    AFRL-RH-WP-TR-2013-0143: Human Supervision of Multiple Autonomous Vehicles. Heath A. Ruff, Ball... Report type: Interim. Dates covered: 09-16-08 – 03-22-13. To support the vision of a system that enables a single operator to control multiple next-generation

  20. Constructing Aligned Assessments Using Automated Test Construction

    Science.gov (United States)

    Porter, Andrew; Polikoff, Morgan S.; Barghaus, Katherine M.; Yang, Rui

    2013-01-01

    We describe an innovative automated test construction algorithm for building aligned achievement tests. By incorporating the algorithm into the test construction process, along with other test construction procedures for building reliable and unbiased assessments, the result is much more valid tests than those that result from current test construction…
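
    The abstract does not publish the algorithm itself; as a purely illustrative sketch of one common approach to automated test construction, a greedy selector can assemble a fixed-length test to match a blueprint of content-standard proportions (all data and names below are invented for the example).

```python
# Greedy blueprint matching: repeatedly pick an item from the most
# under-covered content standard until the test reaches its length.
import numpy as np

rng = np.random.default_rng(0)
n_items, n_topics, test_len = 200, 5, 40
item_topic = rng.integers(0, n_topics, size=n_items)   # item-to-standard map
target = np.array([0.3, 0.25, 0.2, 0.15, 0.1])         # blueprint proportions

chosen, counts = [], np.zeros(n_topics)
for _ in range(test_len):
    deficit = target - counts / max(len(chosen), 1)    # most under-covered topic
    topic = int(np.argmax(deficit))
    pool = [i for i in range(n_items)
            if item_topic[i] == topic and i not in chosen]
    if pool:
        chosen.append(pool[0])
        counts[item_topic[pool[0]]] += 1
print("selected proportions:", counts / test_len)
```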