WorldWideScience

Sample records for supervised automated algorithm

  1. Supervised learning algorithms for visual object categorization

    NARCIS (Netherlands)

    bin Abdullah, A.

    2010-01-01

    This thesis presents novel techniques for image recognition systems to better understand image content. More specifically, it looks at the algorithmic aspects and at experimental verification to demonstrate the capability of the proposed algorithms. These techniques aim to improve the three major

  2. Automated training for algorithms that learn from genomic data.

    Science.gov (United States)

    Cilingir, Gokcen; Broschat, Shira L

    2015-01-01

    Supervised machine learning algorithms are used by life scientists for a variety of objectives. Expert-curated public gene and protein databases are major resources for gathering data to train these algorithms. While these data resources are continuously updated, generally, these updates are not incorporated into published machine learning algorithms, which can thereby become outdated soon after their introduction. In this paper, we propose a new model of operation for supervised machine learning algorithms that learn from genomic data. By defining these algorithms in a pipeline in which the training data gathering procedure and the learning process are automated, one can create a system that generates a classifier or predictor using information available from public resources. The proposed model is explained using three case studies on SignalP, MemLoci, and ApicoAP, in which existing machine learning models are utilized in pipelines. Given that the vast majority of the procedures described for gathering training data can easily be automated, it is possible to transform valuable machine learning algorithms into self-evolving learners that benefit from the ever-changing data available for gene products and to develop new machine learning algorithms that are similarly capable.

  3. A Supervised Classification Algorithm for Note Onset Detection

    Directory of Open Access Journals (Sweden)

    Douglas Eck

    2007-01-01

    Full Text Available This paper presents a novel approach to detecting onsets in music audio files. We use a supervised learning algorithm to classify spectrogram frames extracted from digital audio as being onsets or non-onsets. Frames classified as onsets are then treated with a simple peak-picking algorithm based on a moving average. We present two versions of this approach. The first version uses a single neural network classifier. The second version combines the predictions of several networks trained using different hyperparameters. We describe the details of the algorithm and summarize the performance of both variants on several datasets. We also examine our choice of hyperparameters by describing results of cross-validation experiments done on a custom dataset. We conclude that a supervised learning approach to note onset detection performs well and warrants further investigation.
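
    As a rough illustration of the peak-picking stage described above, the sketch below thresholds per-frame classifier scores against a moving average; the window, bias, and hop-size values are illustrative assumptions, not the paper's actual settings.

        import numpy as np

        def pick_onsets(onset_probs, window=10, bias=0.05, hop_s=0.010):
            """Peak-picking over per-frame onset scores (hypothetical output of
            a frame classifier): a frame is an onset if it is a local maximum
            and exceeds a centered moving average plus a small bias."""
            probs = np.asarray(onset_probs, dtype=float)
            kernel = np.ones(2 * window + 1) / (2 * window + 1)
            threshold = np.convolve(probs, kernel, mode="same") + bias
            onsets = []
            for t in range(1, len(probs) - 1):
                if probs[t] >= probs[t - 1] and probs[t] > probs[t + 1] \
                        and probs[t] > threshold[t]:
                    onsets.append(t * hop_s)  # frame index -> seconds
            return onsets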

  4. CFSO3: A New Supervised Swarm-Based Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Antonino Laudani

    2013-01-01

    Full Text Available We present CFSO3, an optimization heuristic within the class of swarm intelligence, based on a synergy among three different features of the Continuous Flock-of-Starlings Optimization. One of the main novelties is that this optimizer is no longer a classical numerical algorithm, since it can now be seen as a continuous dynamic system, which can be treated by using all the mathematical instruments available for managing state equations. In addition, CFSO3 allows passing from stochastic approaches to supervised deterministic ones, since the random updating of parameters, a typical feature of numerical swarm-based optimization algorithms, is fully replaced by a supervised strategy: in CFSO3 the tuning of parameters is a priori designed for obtaining both exploration and exploitation. Indeed, the exploration, that is, the escape from a local minimum, as well as the convergence and the refinement to a solution, can be designed simply by managing the eigenvalues of the CFSO state equations. In CFSO3, virtually only the initial values of positions and velocities of the swarm members have to be randomly assigned. Both standard and parallel versions of CFSO3, together with validations on classical benchmarks, are presented.

  5. A comparison of supervised machine learning algorithms and feature vectors for MS lesion segmentation using multimodal structural MRI.

    Science.gov (United States)

    Sweeney, Elizabeth M; Vogelstein, Joshua T; Cuzzocreo, Jennifer L; Calabresi, Peter A; Reich, Daniel S; Crainiceanu, Ciprian M; Shinohara, Russell T

    2014-01-01

    Machine learning is a popular method for mining and analyzing large collections of medical data. We focus on a particular problem from medical research, supervised multiple sclerosis (MS) lesion segmentation in structural magnetic resonance imaging (MRI). We examine the extent to which the choice of machine learning or classification algorithm and feature extraction function impacts the performance of lesion segmentation methods. As quantitative measures derived from structural MRI are important clinical tools for research into the pathophysiology and natural history of MS, the development of automated lesion segmentation methods is an active research field. Yet, little is known about what drives performance of these methods. We evaluate the performance of automated MS lesion segmentation methods, which consist of a supervised classification algorithm composed with a feature extraction function. These feature extraction functions act on the observed T1-weighted (T1-w), T2-weighted (T2-w) and fluid-attenuated inversion recovery (FLAIR) MRI voxel intensities. Each MRI study has a manual lesion segmentation that we use to train and validate the supervised classification algorithms. Our main finding is that the differences in predictive performance are due more to differences in the feature vectors than to the machine learning or classification algorithms. Features that incorporate information from neighboring voxels in the brain were found to increase performance substantially. For lesion segmentation, we conclude that it is better to use simple, interpretable, and fast algorithms, such as logistic regression, linear discriminant analysis, and quadratic discriminant analysis, and to develop the features to improve performance.
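
    A minimal sketch of the paper's central point, that neighborhood-aware features paired with a simple classifier go a long way. The volume, mask, and radius below are stand-ins, not the study's actual feature set.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        def neighborhood_features(volume, radius=1):
            """Stack each voxel's (2r+1)^3 neighborhood intensities into a
            feature vector; a minimal stand-in for the neighborhood features
            the study found to drive performance."""
            pads = np.pad(volume, radius, mode="edge")
            size = 2 * radius + 1
            feats = [pads[i:i + volume.shape[0],
                          j:j + volume.shape[1],
                          k:k + volume.shape[2]].ravel()
                     for i in range(size) for j in range(size) for k in range(size)]
            return np.stack(feats, axis=1)  # (n_voxels, n_neighbors)

        # Hypothetical training data: one FLAIR volume plus a manual lesion mask.
        flair = np.random.rand(32, 32, 32)          # stand-in for a real MRI volume
        mask = np.random.rand(32, 32, 32) > 0.98    # stand-in manual segmentation
        X, y = neighborhood_features(flair), mask.ravel().astype(int)
        clf = LogisticRegression(max_iter=1000).fit(X, y)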

  6. A numeric comparison of variable selection algorithms for supervised learning

    Energy Technology Data Exchange (ETDEWEB)

    Palombo, G., E-mail: giulio.palombo@gmail.com [University of Milan, Bicocca (Italy)]; Narsky, I., E-mail: narsky@hep.caltech.edu [California Institute of Technology (United States)]

    2009-12-21

    Datasets in modern High Energy Physics (HEP) experiments are often described by dozens or even hundreds of input variables. Reducing a full variable set to a subset that most completely represents information about data is therefore an important task in analysis of HEP data. We compare various variable selection algorithms for supervised learning using several datasets such as, for instance, imaging gamma-ray Cherenkov telescope (MAGIC) data found at the UCI repository. We use classifiers and variable selection methods implemented in the statistical package StatPatternRecognition (SPR), a free open-source C++ package developed in the HEP community (http://sourceforge.net/projects/statpatrec/). For each dataset, we select a powerful classifier and estimate its learning accuracy on variable subsets obtained by various selection algorithms. When possible, we also estimate the CPU time needed for the variable subset selection. The results of this analysis are compared with those published previously for these datasets using other statistical packages such as R and Weka. We show that the most accurate, yet slowest, method is a wrapper algorithm known as generalized sequential forward selection ('Add N Remove R') implemented in SPR.
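
    The 'Add N Remove R' wrapper can be sketched as follows: a generic reimplementation around scikit-learn cross-validation, not SPR's C++ code; n_add and n_remove are illustrative defaults.

        import numpy as np
        from sklearn.model_selection import cross_val_score

        def add_n_remove_r(estimator, X, y, n_add=2, n_remove=1, cv=3):
            """Sketch of the 'Add N Remove R' wrapper: greedily grow the subset
            by n_add variables, then drop the n_remove variables whose removal
            hurts cross-validated accuracy the least; stop when the score
            stops improving."""
            selected, best_score = [], -np.inf
            remaining = list(range(X.shape[1]))
            while remaining:
                for _ in range(min(n_add, len(remaining))):
                    scores = {f: cross_val_score(estimator, X[:, selected + [f]],
                                                 y, cv=cv).mean()
                              for f in remaining}
                    f = max(scores, key=scores.get)
                    selected.append(f); remaining.remove(f)
                for _ in range(n_remove):
                    if len(selected) <= 1:
                        break
                    scores = {f: cross_val_score(
                        estimator, X[:, [g for g in selected if g != f]],
                        y, cv=cv).mean() for f in selected}
                    f = max(scores, key=scores.get)
                    selected.remove(f); remaining.append(f)
                score = cross_val_score(estimator, X[:, selected], y, cv=cv).mean()
                if score <= best_score:
                    break
                best_score = score
            return selected, best_score

        # Example use (hypothetical): vars, acc = add_n_remove_r(clf, X, y)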

  7. A numeric comparison of variable selection algorithms for supervised learning

    Science.gov (United States)

    Palombo, G.; Narsky, I.

    2009-12-01

    Datasets in modern High Energy Physics (HEP) experiments are often described by dozens or even hundreds of input variables. Reducing a full variable set to a subset that most completely represents information about data is therefore an important task in analysis of HEP data. We compare various variable selection algorithms for supervised learning using several datasets such as, for instance, imaging gamma-ray Cherenkov telescope (MAGIC) data found at the UCI repository. We use classifiers and variable selection methods implemented in the statistical package StatPatternRecognition (SPR), a free open-source C++ package developed in the HEP community ( http://sourceforge.net/projects/statpatrec/). For each dataset, we select a powerful classifier and estimate its learning accuracy on variable subsets obtained by various selection algorithms. When possible, we also estimate the CPU time needed for the variable subset selection. The results of this analysis are compared with those published previously for these datasets using other statistical packages such as R and Weka. We show that the most accurate, yet slowest, method is a wrapper algorithm known as generalized sequential forward selection ("Add N Remove R") implemented in SPR.

  8. Objectness Supervised Merging Algorithm for Color Image Segmentation

    Directory of Open Access Journals (Sweden)

    Haifeng Sima

    2016-01-01

    Full Text Available Ideal color image segmentation needs both low-level cues and high-level semantic features. This paper proposes a two-hierarchy segmentation model based on merging homogeneous superpixels. First, a region growing strategy is designed for producing homogenous and compact superpixels in different partitions. Total variation smoothing features are adopted in the growing procedure for locating real boundaries. Before merging, we define a combined color-texture histogram feature for superpixel description; meanwhile, a novel objectness feature is proposed to supervise the region merging procedure for reliable segmentation. Both color-texture histograms and objectness are computed to measure regional similarities between region pairs, and the mixed standard deviation of the union features is exploited to define the stopping criterion for the merging process. Experimental results on the popular benchmark dataset demonstrate the better segmentation performance of the proposed model compared to other well-known segmentation algorithms.

  9. Algorithms to Automate LCLS Undulator Tuning

    Energy Technology Data Exchange (ETDEWEB)

    Wolf, Zachary

    2010-12-03

    Automation of the LCLS undulator tuning offers many advantages to the project. Automation can make a substantial reduction in the amount of time the tuning takes. Undulator tuning is fairly complex and automation can make the final tuning less dependent on the skill of the operator. Also, algorithms are fixed and can be scrutinized and reviewed, as opposed to an individual doing the tuning by hand. This note presents algorithms implemented in a computer program written for LCLS undulator tuning. The LCLS undulators must meet the following specifications. The maximum trajectory walkoff must be less than 5 µm over 10 m. The first field integral must be below 40 × 10⁻⁶ Tm. The second field integral must be below 50 × 10⁻⁶ Tm². The phase error between the electron motion and the radiation field must be less than 10 degrees in an undulator. The K parameter must have the value of 3.5000 ± 0.0005. The phase matching from the break regions into the undulator must be accurate to better than 10 degrees. A phase change of 113 × 2π must take place over a distance of 3.656 m centered on the undulator. Achieving these requirements is the goal of the tuning process. Most of the tuning is done with Hall probe measurements. The field integrals are checked using long coil measurements. An analysis program written in Matlab takes the Hall probe measurements and computes the trajectories, phase errors, K value, etc. The analysis program and its calculation techniques were described in a previous note. In this note, a second Matlab program containing the tuning algorithms is described. The algorithms to determine the required number and placement of the shims are discussed in detail.

  10. Experiments on Supervised Learning Algorithms for Text Categorization

    Science.gov (United States)

    Namburu, Setu Madhavi; Tu, Haiying; Luo, Jianhui; Pattipati, Krishna R.

    2005-01-01

    Modern information society is facing the challenge of handling massive volumes of online documents, news, intelligence reports, and so on. How to use the information accurately and in a timely manner becomes a major concern in many areas. While general information may also include images and voice, we focus on the categorization of text data in this paper. We provide a brief overview of the information processing flow for text categorization, and discuss two supervised learning algorithms, viz., support vector machines (SVM) and partial least squares (PLS), which have been successfully applied in other domains, e.g., fault diagnosis [9]. While SVM has been well explored for binary classification and was reported as an efficient algorithm for text categorization, PLS has not yet been applied to text categorization. Our experiments are conducted on three data sets: the Reuters-21578 dataset about corporate mergers and acquisitions (ACQ), WebKB, and the 20-Newsgroups. Results show that the performance of PLS is comparable to SVM in text categorization. A major drawback of SVM for multi-class categorization is that it requires a voting scheme based on the results of pair-wise classification. PLS does not have this drawback and could be a better candidate for multi-class text categorization.
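
    The multi-class advantage of PLS noted above can be illustrated with scikit-learn's PLSRegression on one-hot targets: a single model covers all classes with no pair-wise voting. The matrix sizes below are stand-ins for a real TF-IDF representation.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        # Stand-in TF-IDF matrix (documents x terms) and class ids 0..2.
        X = np.random.rand(300, 1000)
        y = np.random.randint(0, 3, size=300)
        Y = np.eye(3)[y]  # one-hot targets let PLS handle all classes at once

        pls = PLSRegression(n_components=10).fit(X, Y)
        pred = pls.predict(X).argmax(axis=1)  # no pair-wise voting needed
        print("train accuracy:", (pred == y).mean())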

  11. Using an Agent-oriented Framework for Supervision, Diagnosis and Prognosis Applications in Advanced Automation Environments

    DEFF Research Database (Denmark)

    Thunem, Harald P-J; Thunem, Atoosa P-J; Lind, Morten

    2011-01-01

    This paper demonstrates how a generic agent-oriented framework can be used in advanced automation environments, for systems analysis in general and supervision, diagnosis and prognosis purposes in particular. The framework's background and main application areas are briefly described. Next, [...] agent-oriented supervision, diagnosis and prognosis purposes are likewise explained. Finally, the paper sums up by also addressing plans for further enhancement and, in that respect, integration with other tailor-made tools for joint treatment of various modeling and analysis activities in advanced automation environments.

  12. An evaluation of unsupervised and supervised learning algorithms for clustering landscape types in the United States

    Science.gov (United States)

    Wendel, Jochen; Buttenfield, Barbara P.; Stanislawski, Larry V.

    2016-01-01

    Knowledge of landscape type can inform cartographic generalization of hydrographic features, because landscape characteristics provide an important geographic context that affects variation in channel geometry, flow pattern, and network configuration. Landscape types are characterized by expansive spatial gradients, lacking abrupt changes between adjacent classes; and as having a limited number of outliers that might confound classification. The US Geological Survey (USGS) is exploring methods to automate generalization of features in the National Hydrography Data set (NHD), to associate specific sequences of processing operations and parameters with specific landscape characteristics, thus obviating manual selection of a unique processing strategy for every NHD watershed unit. A chronology of methods to delineate physiographic regions for the United States is described, including a recent maximum likelihood classification based on seven input variables. This research compares unsupervised and supervised algorithms applied to these seven input variables, to evaluate and possibly refine the recent classification. Evaluation metrics for unsupervised methods include the Davies–Bouldin index, the Silhouette index, and the Dunn index as well as quantization and topographic error metrics. Cross validation and misclassification rate analysis are used to evaluate supervised classification methods. The paper reports the comparative analysis and its impact on the selection of landscape regions. The compared solutions show problems in areas of high landscape diversity. There is some indication that additional input variables, additional classes, or more sophisticated methods can refine the existing classification.
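
    For reference, two of the unsupervised evaluation metrics named above ship with scikit-learn; a sketch on stand-in data follows (the Dunn index and the quantization/topographic errors would need custom code).

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.metrics import davies_bouldin_score, silhouette_score
        from sklearn.model_selection import cross_val_score
        from sklearn.ensemble import RandomForestClassifier

        # Stand-ins for the seven physiographic input variables.
        X = np.random.rand(500, 7)
        labels_ml = np.random.randint(0, 5, size=500)  # stand-in for the
                                                       # maximum likelihood classes

        # Unsupervised route: cluster, then score internal validity.
        km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(X)
        print("Davies-Bouldin:", davies_bouldin_score(X, km.labels_))  # lower is better
        print("Silhouette:    ", silhouette_score(X, km.labels_))      # higher is better

        # Supervised route: cross-validate a classifier against existing labels.
        acc = cross_val_score(RandomForestClassifier(random_state=0), X, labels_ml, cv=5)
        print("Misclassification rate: %.3f" % (1 - acc.mean()))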

  13. A comparison of supervised machine learning algorithms and feature vectors for MS lesion segmentation using multimodal structural MRI.

    Directory of Open Access Journals (Sweden)

    Elizabeth M Sweeney

    Full Text Available Machine learning is a popular method for mining and analyzing large collections of medical data. We focus on a particular problem from medical research, supervised multiple sclerosis (MS) lesion segmentation in structural magnetic resonance imaging (MRI). We examine the extent to which the choice of machine learning or classification algorithm and feature extraction function impacts the performance of lesion segmentation methods. As quantitative measures derived from structural MRI are important clinical tools for research into the pathophysiology and natural history of MS, the development of automated lesion segmentation methods is an active research field. Yet, little is known about what drives performance of these methods. We evaluate the performance of automated MS lesion segmentation methods, which consist of a supervised classification algorithm composed with a feature extraction function. These feature extraction functions act on the observed T1-weighted (T1-w), T2-weighted (T2-w) and fluid-attenuated inversion recovery (FLAIR) MRI voxel intensities. Each MRI study has a manual lesion segmentation that we use to train and validate the supervised classification algorithms. Our main finding is that the differences in predictive performance are due more to differences in the feature vectors than to the machine learning or classification algorithms. Features that incorporate information from neighboring voxels in the brain were found to increase performance substantially. For lesion segmentation, we conclude that it is better to use simple, interpretable, and fast algorithms, such as logistic regression, linear discriminant analysis, and quadratic discriminant analysis, and to develop the features to improve performance.

  14. A semi-supervised segmentation algorithm as applied to k-means ...

    African Journals Online (AJOL)

    ... study the newly proposed semi-supervised segmentation algorithm outperforms both an unsupervised and a supervised segmentation technique, when compared by using the Gini coefficient as the performance measure of the resulting predictive models. Key words: banking, clustering, multivariate statistics, data mining ...

  15. Classification and Diagnostic Output Prediction of Cancer Using Gene Expression Profiling and Supervised Machine Learning Algorithms

    DEFF Research Database (Denmark)

    Yoo, C.; Gernaey, Krist

    2008-01-01

    importance in the projection (VIP) information of the DPLS method. The power of the gene selection method and the proposed supervised hierarchical clustering method is illustrated on three microarray data sets of leukemia, breast, and colon cancer. Supervised machine learning algorithms thus enable...

  16. A semi-supervised classification algorithm using the TAD-derived background as training data

    Science.gov (United States)

    Fan, Lei; Ambeau, Brittany; Messinger, David W.

    2013-05-01

    In general, spectral image classification algorithms fall into one of two categories: supervised and unsupervised. In unsupervised approaches, the algorithm automatically identifies clusters in the data without a priori information about those clusters (except perhaps the expected number of them). Supervised approaches require an analyst to identify training data to learn the characteristics of the clusters such that they can then classify all other pixels into one of the pre-defined groups. The classification algorithm presented here is a semi-supervised approach based on the Topological Anomaly Detection (TAD) algorithm. The TAD algorithm defines background components based on a mutual k-Nearest Neighbor graph model of the data, along with a spectral connected components analysis. Here, the largest components produced by TAD are used as regions of interest (ROIs), or training data, for a supervised classification scheme. By combining those ROIs with a Gaussian Maximum Likelihood (GML) or a Minimum Distance to the Mean (MDM) algorithm, we are able to achieve a semi-supervised classification method. We test this classification algorithm against data collected by the HyMAP sensor over the Cooke City, MT area and the University of Pavia scene.
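
    A simplified sketch of the pipeline described above, assuming a flattened array of pixel spectra: mutual k-NN graph, largest connected components as training ROIs, then minimum distance to the mean (the GML variant would fit per-class Gaussians instead).

        import numpy as np
        from sklearn.neighbors import NearestNeighbors
        from scipy.sparse import csr_matrix
        from scipy.sparse.csgraph import connected_components

        def tad_style_rois(pixels, k=8, n_classes=4):
            """Sketch of the semi-supervised scheme: build a mutual k-NN graph,
            take its largest connected components as background ROIs, then use
            their means for minimum-distance-to-the-mean (MDM) classification."""
            nn = NearestNeighbors(n_neighbors=k + 1).fit(pixels)
            graph = nn.kneighbors_graph(pixels)       # includes self link
            mutual = graph.multiply(graph.T)          # keep only mutual edges
            _, comp = connected_components(csr_matrix(mutual), directed=False)
            sizes = np.bincount(comp)
            big = np.argsort(sizes)[::-1][:n_classes]  # largest components = ROIs
            means = np.stack([pixels[comp == c].mean(axis=0) for c in big])
            # MDM: assign every pixel to the nearest ROI mean.
            d = np.linalg.norm(pixels[:, None, :] - means[None, :, :], axis=2)
            return d.argmin(axis=1)

        # Hypothetical use: labels = tad_style_rois(cube.reshape(-1, n_bands))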

  17. An Automated Energy Detection Algorithm Based on Consecutive Mean Excision

    Science.gov (United States)

    2018-01-01

    US Army Research Laboratory technical report ARL-TR-8268, January 2018: An Automated Energy Detection Algorithm Based on Consecutive Mean Excision. Report period covered: 1 October 2016 to 30 September 2017.

  18. Online Semi-Supervised Learning: Algorithm and Application in Metagenomics

    NARCIS (Netherlands)

    Imangaliyev, S.; Keijser, B.J.F.; Crielaard, W.; Tsivtsivadze, E.

    2013-01-01

    As the amount of metagenomic data grows rapidly, online statistical learning algorithms are poised to play a key role in metagenome analysis tasks. Frequently, data are only partially labeled; that is, the dataset contains only partial information about the problem of interest. This work presents an algorithm and

  19. Online semi-supervised learning: algorithm and application in metagenomics

    NARCIS (Netherlands)

    Imangaliyev, S.; Keijser, B.J.; Crielaard, W.; Tsivtsivadze, E.; Li, G.Z.; Kim, S.; Hughes, M.; McLachlan, G.; Sun, H.; Hu, X.; Ressom, H.; Liu, B.; Liebman, M.

    2013-01-01

    As the amount of metagenomic data grows rapidly, online statistical learning algorithms are poised to play a key role in metagenome analysis tasks. Frequently, data are only partially labeled; that is, the dataset contains only partial information about the problem of interest. This work presents an algorithm

  20. Automated Antenna Design with Evolutionary Algorithms

    Science.gov (United States)

    Hornby, Gregory S.; Globus, Al; Linden, Derek S.; Lohn, Jason D.

    2006-01-01

    Current methods of designing and optimizing antennas by hand are time- and labor-intensive, and limit complexity. Evolutionary design techniques can overcome these limitations by searching the design space and automatically finding effective solutions. In recent years, evolutionary algorithms have shown great promise in finding practical solutions in large, poorly understood design spaces. In particular, spacecraft antenna design has proven tractable to evolutionary design techniques. Researchers have been investigating evolutionary antenna design and optimization since the early 1990s, and the field has grown in recent years as computer speed has increased and electromagnetic simulators have improved. Two requirements-compliant antennas, one for ST5 and another for TDRS-C, have been automatically designed by evolutionary algorithms. The ST5 antenna is slated to fly this year, and a TDRS-C phased array element has been fabricated and tested. Such automated evolutionary design is enabled by medium-to-high quality simulators and fast modern computers to evaluate computer-generated designs. Evolutionary algorithms automate cut-and-try engineering, substituting automated search through millions of potential designs for intelligent search by engineers through a much smaller number of designs. For evolutionary design, the engineer chooses the evolutionary technique, parameters and the basic form of the antenna, e.g., single wire for ST5 and crossed-element Yagi for TDRS-C. Evolutionary algorithms then search for optimal configurations in the space defined by the engineer. NASA's Space Technology 5 (ST5) mission will launch three small spacecraft to test innovative concepts and technologies. Advanced evolutionary algorithms were used to automatically design antennas for ST5. The combination of wide beamwidth for a circularly-polarized wave and wide impedance bandwidth made for a challenging antenna design problem. From past experience in designing wire antennas, we chose to

  1. A supervised framework for lesion segmentation and automated VLSM analyses in left hemispheric stroke

    Directory of Open Access Journals (Sweden)

    Dorian Pustina

    2015-05-01

    Full Text Available INTRODUCTION: Voxel-based lesion-symptom mapping (VLSM) is conventionally performed using the skill and knowledge of experts to manually delineate brain lesions. This process requires time, and is likely to have substantial inter-rater variability. Here, we propose a supervised machine learning framework for lesion segmentation capable of learning from a single modality and existing manual segmentations in order to delineate lesions in new patients. METHODS: Data from 60 patients with chronic stroke aphasia were utilized in the study (age: 59.7±11.5 yrs, post-stroke interval: 5±2.9 yrs, male/female ratio: 34/26). Using a single T1 image of each subject, additional features were created that provided complementary information, such as difference from template, tissue segmentation, brain asymmetries, gradient magnitude, and deviances of these images from 80 age- and gender-matched controls. These features were fed into the MRV-NRF (multi-resolution voxel-wise neighborhood random forest; Tustison et al., 2014) prediction algorithm implemented in ANTsR (Avants, 2015). The algorithm incorporates information from each voxel and its surrounding neighbors from all of the above features, in a hierarchy of random forest predictions from low to high resolution. The validity of the framework was tested with a 6-fold cross validation (i.e., train on 50 subjects, predict 10). The process was repeated ten times, producing ten segmentations for each subject, from which the average solution was binarized. Predicted lesions were compared to manually defined lesions, and VLSM models were built on 4 language measures: repetition and comprehension subscores from the WAB (Kertesz, 1982), WAB-AQ, and PNT naming accuracy (Roach, Schwartz, Martin, Grewal, & Brecher, 1996). RESULTS: Manual and predicted lesion size showed high correlation (r=0.96). Compared to manual lesions, the predicted lesions had a Dice overlap of 0.72 (±0.14 STD), a case-wise maximum distance (Hausdorff) of 21mm (±16
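
    The Dice overlap used above to compare predicted and manual lesions is straightforward to compute; a minimal NumPy version:

        import numpy as np

        def dice(pred, manual):
            """Dice overlap between predicted and manual binary lesion masks:
            2|A intersect B| / (|A| + |B|)."""
            pred, manual = pred.astype(bool), manual.astype(bool)
            inter = np.logical_and(pred, manual).sum()
            return 2.0 * inter / (pred.sum() + manual.sum())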

  2. An Efficient Supervised Training Algorithm for Multilayer Spiking Neural Networks.

    Science.gov (United States)

    Xie, Xiurui; Qu, Hong; Liu, Guisong; Zhang, Malu; Kurths, Jürgen

    2016-01-01

    The spiking neural networks (SNNs) are the third generation of neural networks and perform remarkably well in cognitive tasks such as pattern recognition. The spike emitting and information processing mechanisms found in biological cognitive systems motivate the application of the hierarchical structure and temporal encoding mechanism in spiking neural networks, which have exhibited strong computational capability. However, the hierarchical structure and temporal encoding approach require neurons to process information serially in space and time respectively, which reduces the training efficiency significantly. For training hierarchical SNNs, most existing methods are based on the traditional back-propagation algorithm, inheriting its drawbacks of gradient diffusion and sensitivity to parameters. To keep the powerful computational capability of the hierarchical structure and temporal encoding mechanism, but to overcome the low efficiency of the existing algorithms, a new training algorithm, Normalized Spiking Error Back Propagation (NSEBP), is proposed in this paper. In the feedforward calculation, the output spike times are calculated by solving the quadratic function in the spike response model instead of detecting postsynaptic voltage states at all time points as in traditional algorithms. In the feedback weight modification, the computational error is propagated to previous layers by the presynaptic spike jitter instead of the gradient descent rule, which realizes layer-wise training. Furthermore, our algorithm investigates the mathematical relation between the weight variation and the voltage error change, which makes normalization in the weight modification applicable. Adopting these strategies, our algorithm outperforms traditional SNN multi-layer algorithms in terms of learning efficiency and parameter sensitivity, as also demonstrated by the comprehensive experimental results in this paper.

  3. Comparison of supervised machine learning algorithms for waterborne pathogen detection using mobile phone fluorescence microscopy

    KAUST Repository

    Ceylan Koydemir, Hatice

    2017-06-14

    Giardia lamblia is a waterborne parasite that affects millions of people every year worldwide, causing a diarrheal illness known as giardiasis. Timely detection of the presence of the cysts of this parasite in drinking water is important to prevent the spread of the disease, especially in resource-limited settings. Here we provide extended experimental testing and evaluation of the performance and repeatability of a field-portable and cost-effective microscopy platform for automated detection and counting of Giardia cysts in water samples, including tap water, non-potable water, and pond water. This compact platform is based on our previous work, and is composed of a smartphone-based fluorescence microscope, a disposable sample processing cassette, and a custom-developed smartphone application. Our mobile phone microscope has a large field of view of ~0.8 cm2 and weighs only ~180 g, excluding the phone. A custom-developed smartphone application provides a user-friendly graphical interface, guiding the users to capture a fluorescence image of the sample filter membrane and analyze it automatically at our servers using an image processing algorithm and training data, consisting of >30,000 images of cysts and >100,000 images of other fluorescent particles that are captured, including, e.g. dust. The total time that it takes from sample preparation to automated cyst counting is less than an hour for each 10 ml of water sample that is tested. We compared the sensitivity and the specificity of our platform using multiple supervised classification models, including support vector machines and nearest neighbors, and demonstrated that a bootstrap aggregating (i.e. bagging) approach using raw image file format provides the best performance for automated detection of Giardia cysts. We evaluated the performance of this machine learning enabled pathogen detection device with water samples taken from different sources (e.g. tap water, non-potable water, pond water) and achieved
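
    A schematic version of the model comparison described above, using scikit-learn stand-ins: random features take the place of the real cyst/particle image features, and the classifier settings are illustrative defaults.

        import numpy as np
        from sklearn.ensemble import BaggingClassifier
        from sklearn.svm import SVC
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.model_selection import cross_val_score

        # Stand-in features for candidate fluorescent spots (the real system
        # trains on >130,000 labeled images of cysts and other particles).
        X = np.random.rand(1000, 32)
        y = np.random.randint(0, 2, size=1000)  # 1 = Giardia cyst, 0 = other

        models = {
            "SVM":     SVC(),
            "k-NN":    KNeighborsClassifier(),
            "Bagging": BaggingClassifier(n_estimators=50, random_state=0),
        }
        for name, model in models.items():
            scores = cross_val_score(model, X, y, cv=5)
            print(f"{name}: mean CV accuracy = {scores.mean():.3f}")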

  4. Comparison of supervised machine learning algorithms for waterborne pathogen detection using mobile phone fluorescence microscopy

    Science.gov (United States)

    Ceylan Koydemir, Hatice; Feng, Steve; Liang, Kyle; Nadkarni, Rohan; Benien, Parul; Ozcan, Aydogan

    2017-06-01

    Giardia lamblia is a waterborne parasite that affects millions of people every year worldwide, causing a diarrheal illness known as giardiasis. Timely detection of the presence of the cysts of this parasite in drinking water is important to prevent the spread of the disease, especially in resource-limited settings. Here we provide extended experimental testing and evaluation of the performance and repeatability of a field-portable and cost-effective microscopy platform for automated detection and counting of Giardia cysts in water samples, including tap water, non-potable water, and pond water. This compact platform is based on our previous work, and is composed of a smartphone-based fluorescence microscope, a disposable sample processing cassette, and a custom-developed smartphone application. Our mobile phone microscope has a large field of view of 0.8 cm2 and weighs only 180 g, excluding the phone. A custom-developed smartphone application provides a user-friendly graphical interface, guiding the users to capture a fluorescence image of the sample filter membrane and analyze it automatically at our servers using an image processing algorithm and training data, consisting of >30,000 images of cysts and >100,000 images of other fluorescent particles that are captured, including, e.g. dust. The total time that it takes from sample preparation to automated cyst counting is less than an hour for each 10 ml of water sample that is tested. We compared the sensitivity and the specificity of our platform using multiple supervised classification models, including support vector machines and nearest neighbors, and demonstrated that a bootstrap aggregating (i.e. bagging) approach using raw image file format provides the best performance for automated detection of Giardia cysts. We evaluated the performance of this machine learning enabled pathogen detection device with water samples taken from different sources (e.g. tap water, non-potable water, pond water) and achieved a

  5. Comparison of supervised machine learning algorithms for waterborne pathogen detection using mobile phone fluorescence microscopy

    Directory of Open Access Journals (Sweden)

    Ceylan Koydemir Hatice

    2017-06-01

    Full Text Available Giardia lamblia is a waterborne parasite that affects millions of people every year worldwide, causing a diarrheal illness known as giardiasis. Timely detection of the presence of the cysts of this parasite in drinking water is important to prevent the spread of the disease, especially in resource-limited settings. Here we provide extended experimental testing and evaluation of the performance and repeatability of a field-portable and cost-effective microscopy platform for automated detection and counting of Giardia cysts in water samples, including tap water, non-potable water, and pond water. This compact platform is based on our previous work, and is composed of a smartphone-based fluorescence microscope, a disposable sample processing cassette, and a custom-developed smartphone application. Our mobile phone microscope has a large field of view of ~0.8 cm2 and weighs only ~180 g, excluding the phone. A custom-developed smartphone application provides a user-friendly graphical interface, guiding the users to capture a fluorescence image of the sample filter membrane and analyze it automatically at our servers using an image processing algorithm and training data, consisting of >30,000 images of cysts and >100,000 images of other fluorescent particles that are captured, including, e.g., dust. The total time that it takes from sample preparation to automated cyst counting is less than an hour for each 10 ml of water sample that is tested. We compared the sensitivity and the specificity of our platform using multiple supervised classification models, including support vector machines and nearest neighbors, and demonstrated that a bootstrap aggregating (i.e., bagging) approach using raw image file format provides the best performance for automated detection of Giardia cysts. We evaluated the performance of this machine learning enabled pathogen detection device with water samples taken from different sources (e.g. tap water, non-potable water, pond

  6. Benchmarking protein classification algorithms via supervised cross-validation

    NARCIS (Netherlands)

    Kertész-Farkas, A.; Dhir, S.; Sonego, P.; Pacurar, M.; Netoteia, S.; Nijveen, H.; Kuzniar, A.; Leunissen, J.A.M.; Kocsor, A.; Pongor, S.

    2008-01-01

    Development and testing of protein classification algorithms are hampered by the fact that the protein universe is characterized by groups vastly different in the number of members, in average protein size, similarity within group, etc. Datasets based on traditional cross-validation (k-fold,

  7. Fall detection using supervised machine learning algorithms: A comparative study

    KAUST Repository

    Zerrouki, Nabil

    2017-01-05

    Fall incidents are considered the leading cause of disability and even mortality among older adults. To address this problem, the fall detection and prevention fields have received a lot of attention over the past years and attracted many research efforts. We present in the current study an overall performance comparison between fall detection systems using the most popular machine learning approaches, which are: Naïve Bayes, K nearest neighbor, neural network, and support vector machine. The analysis of the classification power associated with these most widely utilized algorithms is conducted on two fall detection databases, namely FDD and URFD. Since the performance of a classification algorithm is inherently dependent on the features, we extracted and used the same features for all classifiers. The classification evaluation is conducted using different state-of-the-art statistical measures such as the overall accuracy, the F-measure coefficient, and the area under ROC curve (AUC) value.
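
    A sketch of this evaluation protocol with scikit-learn equivalents of the four classifiers, on stand-in features (the study's actual features are extracted from the FDD and URFD videos):

        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import accuracy_score, f1_score, roc_auc_score
        from sklearn.naive_bayes import GaussianNB
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.neural_network import MLPClassifier
        from sklearn.svm import SVC

        # Stand-in feature vectors; identical features are fed to all classifiers.
        X = np.random.rand(800, 20)
        y = np.random.randint(0, 2, size=800)  # 1 = fall, 0 = no fall
        Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

        for name, clf in [("Naive Bayes", GaussianNB()),
                          ("k-NN", KNeighborsClassifier()),
                          ("Neural net", MLPClassifier(max_iter=500)),
                          ("SVM", SVC(probability=True))]:
            clf.fit(Xtr, ytr)
            pred = clf.predict(Xte)
            proba = clf.predict_proba(Xte)[:, 1]
            print(f"{name}: acc={accuracy_score(yte, pred):.3f}, "
                  f"F1={f1_score(yte, pred):.3f}, AUC={roc_auc_score(yte, proba):.3f}")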

  8. Automated discrete element method calibration using genetic and optimization algorithms

    Science.gov (United States)

    Do, Huy Q.; Aragón, Alejandro M.; Schott, Dingena L.

    2017-06-01

    This research aims at developing a universal methodology for the automated calibration of microscopic properties of modelled granular materials. The proposed calibrator can be applied to different experimental set-ups. Two optimization approaches, (1) a genetic algorithm and (2) DIRECT optimization, are used to identify discrete element method input model parameters, e.g., coefficients of sliding and rolling friction. The algorithms are used to minimize an objective function characterized by the discrepancy between the experimental macroscopic properties and the associated numerical results. Two test cases highlight the robustness, stability, and reliability of the two algorithms used for automated discrete element method calibration with different set-ups.
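
    As a sketch of such a calibration loop, the snippet below minimizes a discrepancy objective with SciPy's differential evolution, an evolutionary optimizer standing in for the paper's genetic algorithm and DIRECT implementations; run_dem and its coefficients are placeholders for a real DEM simulation.

        import numpy as np
        from scipy.optimize import differential_evolution

        # Measured macroscopic targets from a calibration experiment
        # (hypothetical values, e.g., angle of repose and bulk density).
        target = np.array([32.0, 1450.0])

        def run_dem(params):
            """Placeholder for a DEM simulation returning macroscopic outputs
            for given sliding/rolling friction coefficients."""
            mu_s, mu_r = params
            return np.array([25.0 + 10.0 * mu_s + 5.0 * mu_r,
                             1500.0 - 100.0 * mu_s])

        def objective(params):
            # Discrepancy between simulated and measured macroscopic properties.
            return np.sum(((run_dem(params) - target) / target) ** 2)

        bounds = [(0.0, 1.0), (0.0, 0.5)]  # sliding, rolling friction
        result = differential_evolution(objective, bounds, seed=0)
        print(result.x, result.fun)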

  9. Novel maximum-margin training algorithms for supervised neural networks.

    Science.gov (United States)

    Ludwig, Oswaldo; Nunes, Urbano

    2010-06-01

    This paper proposes three novel training methods, two of them based on the backpropagation approach and a third one based on information theory, for multilayer perceptron (MLP) binary classifiers. Both backpropagation methods are based on the maximal-margin (MM) principle. The first one, based on the gradient descent with adaptive learning rate algorithm (GDX) and named maximum-margin GDX (MMGDX), directly increases the margin of the MLP output-layer hyperplane. The proposed method jointly optimizes both MLP layers in a single process, backpropagating the gradient of an MM-based objective function, through the output and hidden layers, in order to create a hidden-layer space that enables a higher margin for the output-layer hyperplane, avoiding the testing of many arbitrary kernels, as occurs in the case of support vector machine (SVM) training. The proposed MM-based objective function aims to stretch out the margin to its limit. An objective function based on the Lp-norm is also proposed in order to take into account the idea of support vectors, while overcoming the complexity involved in solving a constrained optimization problem, as is usual in SVM training. In fact, all the training methods proposed in this paper have time and space complexities O(N), while usual SVM training methods have time complexity O(N^3) and space complexity O(N^2), where N is the training-data-set size. The second approach, named minimization of interclass interference (MICI), has an objective function inspired by Fisher discriminant analysis. This algorithm aims to create an MLP hidden output where the patterns have a desirable statistical distribution. In both training methods, the maximum area under the ROC curve (AUC) is applied as the stop criterion. The third approach offers a robust training framework able to take the best of each proposed training method. The main idea is to compose a neural model by using neurons extracted from three other neural networks, each one previously trained by

  10. Automated grading of lumbar disc degeneration via supervised distance metric learning

    Science.gov (United States)

    He, Xiaoxu; Landis, Mark; Leung, Stephanie; Warrington, James; Shmuilovich, Olga; Li, Shuo

    2017-03-01

    Lumbar disc degeneration (LDD) is a common age-associated condition related to low back pain, and its consequences are responsible for over 90% of spine surgical procedures. In clinical practice, grading of LDD by inspecting MRI is a necessary step in making a suitable treatment plan. This step relies purely on physicians' manual inspection, which makes it tedious and inefficient. An automated method for grading of LDD is therefore highly desirable. However, the technical implementation faces a big challenge from class ambiguity, which is typical in medical image classification problems with a large number of classes. This challenge derives from the complexity and diversity of medical images, which lead to serious class overlapping and great difficulty in discriminating different classes. To solve this problem, we propose an automated grading approach based on supervised distance metric learning that classifies the input discs into four class labels (0: normal, 1: slight, 2: marked, 3: severe). By learning distance metrics from labeled instances, an optimal distance metric is modeled with two attractive properties: it (1) keeps images from the same class close, and (2) keeps images from different classes far apart. The experiments, performed on 93 subjects, demonstrated the superiority of our method, with accuracy 0.9226, sensitivity 0.9655, specificity 0.9083, and F-score 0.8615. With our approach, physicians will be freed from this tedious work and patients will be provided with an effective treatment plan.
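
    A toy version of the idea: learn a Mahalanobis-style metric from labeled grades and classify by k-NN under it. The inverse within-class covariance below is a simple stand-in for the paper's learned metric, and the disc features are hypothetical.

        import numpy as np

        def fit_metric(X, y, eps=1e-6):
            """Minimal supervised metric sketch: the inverse within-class
            covariance contracts directions where same-grade discs vary,
            pulling same-class images together."""
            Sw = sum(np.cov(X[y == c].T) for c in np.unique(y))
            return np.linalg.inv(Sw + eps * np.eye(X.shape[1]))

        def classify(x, X, y, M, k=5):
            # k-NN under the squared Mahalanobis distance (a-b)^T M (a-b).
            diff = X - x
            d = np.einsum("ij,jk,ik->i", diff, M, diff)
            votes = y[np.argsort(d)[:k]]
            return np.bincount(votes).argmax()

        # Hypothetical disc features with grades 0..3 (normal..severe).
        X = np.random.rand(93, 10)
        y = np.random.randint(0, 4, size=93)
        M = fit_metric(X, y)
        print(classify(X[0], X, y, M))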

  11. Automated lesion detection on MRI scans using combined unsupervised and supervised methods.

    Science.gov (United States)

    Guo, Dazhou; Fridriksson, Julius; Fillmore, Paul; Rorden, Christopher; Yu, Hongkai; Zheng, Kang; Wang, Song

    2015-10-30

    Accurate and precise detection of brain lesions on MR images (MRI) is paramount for accurately relating lesion location to impaired behavior. In this paper, we present a novel method to automatically detect brain lesions from a T1-weighted 3D MRI. The proposed method combines the advantages of both unsupervised and supervised methods. First, unsupervised methods perform a unified segmentation normalization to warp images from the native space into a standard space and to generate probability maps for different tissue types, e.g., gray matter, white matter and fluid. This allows us to construct an initial lesion probability map by comparing the normalized MRI to healthy control subjects. Then, we perform non-rigid and reversible atlas-based registration to refine the probability maps of gray matter, white matter, external CSF, ventricle, and lesions. These probability maps are combined with the normalized MRI to construct three types of features, with which we use supervised methods to train three support vector machine (SVM) classifiers for a combined classifier. Finally, the combined classifier is used to accomplish lesion detection. We tested this method using T1-weighted MRIs from 60 in-house stroke patients. Using leave-one-out cross validation, the proposed method can achieve an average Dice coefficient of 73.1% when compared to lesion maps hand-delineated by trained neurologists. Furthermore, we tested the proposed method on the T1-weighted MRIs in the MICCAI BRATS 2012 dataset. The proposed method can achieve an average Dice coefficient of 66.5% in comparison to the expert annotated tumor maps provided in MICCAI BRATS 2012 dataset. In addition, on these two test datasets, the proposed method shows competitive performance to three state-of-the-art methods, including Stamatakis et al., Seghier et al., and Sanjuan et al. In this paper, we introduced a novel automated procedure for lesion detection from T1-weighted MRIs by combining both an unsupervised and a

  12. Agent-Based Automated Algorithm Generator

    Science.gov (United States)

    2010-01-12

    Only fragments of this report are available. They describe an agent-based automated algorithm generator (A2G) framework for vehicles and subsystems, citing Lee et al. [2] on the modeling and simulation of vehicle electric power (battery and generator). The report outlines the proposed A2G framework, the SRS design, light-weight diagnostic/prognostic (D/P) algorithms in A2G, and simulation results, with agents including a Fault Detection and Isolation Agent (FDIA), Prognostic Agent (PA), Fusion Agent (FA), and Maintenance Mining Agent (MMA); FDI agents perform diagnostics.

  13. A novel algorithm for fully automated mapping of geospatial ontologies

    Science.gov (United States)

    Chaabane, Sana; Jaziri, Wassim

    2017-09-01

    Geospatial information is collected from different sources thus making spatial ontologies, built for the same geographic domain, heterogeneous; therefore, different and heterogeneous conceptualizations may coexist. Ontology integrating helps creating a common repository of the geospatial ontology and allows removing the heterogeneities between the existing ontologies. Ontology mapping is a process used in ontologies integrating and consists in finding correspondences between the source ontologies. This paper deals with the "mapping" process of geospatial ontologies which consist in applying an automated algorithm in finding the correspondences between concepts referring to the definitions of matching relationships. The proposed algorithm called "geographic ontologies mapping algorithm" defines three types of mapping: semantic, topological and spatial.

  14. New Algorithms For Automated Symmetry Recognition

    Science.gov (United States)

    Paul, Jody; Kilgore, Tammy Elaine; Klinger, Allen

    1988-02-01

    In this paper we present new methods for computer-based symmetry identification that combine elements of group theory and pattern recognition. Detection of symmetry has diverse applications including: the reduction of image data to a manageable subset with minimal information loss, the interpretation of sensor data [1], such as the x-ray diffraction patterns which sparked the recent discovery of a new "quasicrystal" phase of solid matter [2], and music analysis and composition [3,4,5]. Our algorithms are expressed as parallel operations on the data using the matrix representation and manipulation features of the APL programming language. We demonstrate the operation of programs that characterize symmetric and nearly-symmetric patterns by determining the degree of invariance with respect to candidate symmetry transformations. The results are completely general; they may be applied to pattern data of arbitrary dimension and from any source.

  15. An immune-inspired semi-supervised algorithm for breast cancer diagnosis.

    Science.gov (United States)

    Peng, Lingxi; Chen, Wenbin; Zhou, Wubai; Li, Fufang; Yang, Jin; Zhang, Jiandong

    2016-10-01

    Breast cancer is the most frequently diagnosed life-threatening cancer in women worldwide and the leading cause of cancer death among women. Early, accurate diagnosis is a major advantage in treating breast cancer. Researchers have approached this problem using various data mining and machine learning techniques such as support vector machines and artificial neural networks. Computational immunology is another intelligent method, inspired by the biological immune system, which has been successfully applied in pattern recognition, combinatorial optimization, machine learning, etc. However, most of these diagnosis methods are supervised, and it is very expensive to obtain labeled data in biology and medicine. In this paper, we integrate state-of-the-art research in the life sciences with artificial intelligence and propose a semi-supervised learning algorithm to reduce the need for labeled data. We use two well-known benchmark breast cancer datasets in our study, acquired from the UCI machine learning repository. Extensive experiments are conducted and evaluated on these two datasets. Our experimental results demonstrate the effectiveness and efficiency of the proposed algorithm, suggesting that it is a promising automatic diagnosis method for breast cancer.

  16. A new supervised over-sampling algorithm with application to protein-nucleotide binding residue prediction.

    Directory of Open Access Journals (Sweden)

    Jun Hu

    Full Text Available Protein-nucleotide interactions are ubiquitous in a wide variety of biological processes. Accurately identifying interaction residues solely from protein sequences is useful for both protein function annotation and drug design, especially in the post-genomic era, as large volumes of protein data have not been functionally annotated. Protein-nucleotide binding residue prediction is a typical imbalanced learning problem, where binding residues are far fewer in number than non-binding residues. Alleviating the severity of class imbalance has been demonstrated to be a promising means of improving the prediction performance of a machine-learning-based predictor for class imbalance problems. However, little attention has been paid to the negative impact of class imbalance on protein-nucleotide binding residue prediction. In this study, we propose a new supervised over-sampling algorithm that synthesizes additional minority class samples to address class imbalance. The experimental results from protein-nucleotide interaction datasets demonstrate that the proposed supervised over-sampling algorithm can relieve the severity of class imbalance and help to improve prediction performance. Based on the proposed over-sampling algorithm, a predictor, called TargetSOS, is implemented for protein-nucleotide binding residue prediction. Cross-validation tests and independent validation tests demonstrate the effectiveness of TargetSOS. The web-server and datasets used in this study are freely available at http://www.csbio.sjtu.edu.cn/bioinf/TargetSOS/.
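
    The flavor of over-sampling can be conveyed with classic SMOTE-style interpolation, sketched below; note this is the generic technique, not the paper's specific supervised variant, and all sizes are illustrative.

        import numpy as np

        def oversample_minority(X, y, minority=1, k=5, n_new=100, seed=None):
            """SMOTE-style interpolation: synthesize minority (e.g.,
            binding-residue) samples between each seed point and one of its
            k nearest minority neighbors."""
            rng = np.random.default_rng(seed)
            Xm = X[y == minority]
            new = []
            for _ in range(n_new):
                i = rng.integers(len(Xm))
                d = np.linalg.norm(Xm - Xm[i], axis=1)
                nbr = Xm[rng.choice(np.argsort(d)[1:k + 1])]  # skip self at index 0
                lam = rng.random()
                new.append(Xm[i] + lam * (nbr - Xm[i]))
            X_aug = np.vstack([X, np.array(new)])
            y_aug = np.concatenate([y, np.full(n_new, minority)])
            return X_aug, y_aug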

  17. Automated database-guided expert-supervised orientation for immunophenotypic diagnosis and classification of acute leukemia.

    Science.gov (United States)

    Lhermitte, L; Mejstrikova, E; van der Sluijs-Gelling, A J; Grigore, G E; Sedek, L; Bras, A E; Gaipa, G; Sobral da Costa, E; Novakova, M; Sonneveld, E; Buracchi, C; de Sá Bacelar, T; Te Marvelde, J G; Trinquand, A; Asnafi, V; Szczepanski, T; Matarraz, S; Lopez, A; Vidriales, B; Bulsa, J; Hrusak, O; Kalina, T; Lecrevisse, Q; Martin Ayuso, M; Brüggemann, M; Verde, J; Fernandez, P; Burgos, L; Paiva, B; Pedreira, C E; van Dongen, J J M; Orfao, A; van der Velden, V H J

    2017-11-01

    Precise classification of acute leukemia (AL) is crucial for adequate treatment. EuroFlow has previously designed an AL orientation tube (ALOT) to guide towards the relevant classification panel (T-cell acute lymphoblastic leukemia (T-ALL), B-cell precursor (BCP)-ALL and/or acute myeloid leukemia (AML)) and final diagnosis. We have now built a reference database with 656 typical AL samples (145 T-ALL, 377 BCP-ALL, 134 AML), processed and analyzed via standardized protocols. Using principal component analysis (PCA)-based plots and automated classification algorithms for direct comparison of single cells from individual patients against the database, another 783 cases were subsequently evaluated. Depending on the database-guided results, patients were categorized as: (i) typical T, B or myeloid, without or (ii) with a transitional component to another lineage; (iii) atypical; or (iv) mixed-lineage. Using this automated algorithm, in 781/783 cases (99.7%) the right panel was selected, and data comparable to the final WHO diagnosis was already provided in >93% of cases (85% T-ALL, 97% BCP-ALL, 95% AML and 87% mixed-phenotype AL patients), even without data on the full-characterization panels. Our results show that database-guided analysis facilitates standardized interpretation of ALOT results and allows accurate selection of the relevant classification panels, hence providing a solid basis for designing future WHO AL classifications. Leukemia advance online publication, 1 December 2017; doi:10.1038/leu.2017.313.

  18. Automated detection of microaneurysms using scale-adapted blob analysis and semi-supervised learning.

    Science.gov (United States)

    Adal, Kedir M; Sidibé, Désiré; Ali, Sharib; Chaum, Edward; Karnowski, Thomas P; Mériaudeau, Fabrice

    2014-04-01

    Despite several attempts, automated detection of microaneurysms (MAs) from digital fundus images remains an open issue, owing to the subtle appearance of MAs against the surrounding tissue. In this paper, the microaneurysm detection problem is modeled as finding interest regions or blobs in an image, and an automatic local-scale selection technique is presented. Several scale-adapted region descriptors are introduced to characterize these blob regions. A semi-supervised learning approach, which requires few manually annotated learning examples, is also proposed to train a classifier that can detect true MAs. The developed system is built using only a few manually labeled and a large number of unlabeled retinal color fundus images. The performance of the overall system is evaluated on the Retinopathy Online Challenge (ROC) competition database. A competition performance measure (CPM) of 0.364 shows the competitiveness of the proposed system against state-of-the-art techniques, as well as the applicability of the proposed features to the analysis of fundus images.

  19. Supervised learning technique for the automated identification of white matter hyperintensities in traumatic brain injury.

    Science.gov (United States)

    Stone, James R; Wilde, Elisabeth A; Taylor, Brian A; Tate, David F; Levin, Harvey; Bigler, Erin D; Scheibel, Randall S; Newsome, Mary R; Mayer, Andrew R; Abildskov, Tracy; Black, Garrett M; Lennon, Michael J; York, Gerald E; Agarwal, Rajan; DeVillasante, Jorge; Ritter, John L; Walker, Peter B; Ahlers, Stephen T; Tustison, Nicholas J

    2016-01-01

    White matter hyperintensities (WMHs) are foci of abnormal signal intensity in white matter regions seen with magnetic resonance imaging (MRI). WMHs are associated with normal ageing and have shown prognostic value in neurological conditions such as traumatic brain injury (TBI). The impracticality of manually quantifying these lesions limits their clinical utility and motivates the utilization of machine learning techniques for automated segmentation workflows. This study develops a concatenated random forest framework with image features for segmenting WMHs in a TBI cohort. The framework is built upon the Advanced Normalization Tools (ANTs) and ANTsR toolkits. MR (3D FLAIR, T2- and T1-weighted) images from 24 service members and veterans scanned in the Chronic Effects of Neurotrauma Consortium's (CENC) observational study were acquired. Manual annotations were employed for both training and evaluation using a leave-one-out strategy. Performance measures include sensitivity, positive predictive value, F1 score and relative volume difference. Final average results were: sensitivity = 0.68 ± 0.38, positive predictive value = 0.51 ± 0.40, F1 = 0.52 ± 0.36, relative volume difference = 43 ± 26%. In addition, three lesion size ranges are selected to illustrate the variation in performance with lesion size. Paired with correlative outcome data, supervised learning methods may allow for identification of imaging features predictive of diagnosis and prognosis in individual TBI patients.

  20. Development of a Sidescan Imagery Automated Roughness Estimation Algorithm

    Science.gov (United States)

    2006-03-01

    Marlin L. Gendron, Maura C. Lohrenz and Geary Layne, Naval Research Laboratory. [Fragmented record excerpt: sidescan sonar (SSS) shadow geometry varies as a function of beam angle relative to nadir, as does the length of the object surface facing the SSS; cites Fish, J. P. and Carr, H. A. (1990), Sound Underwater Images: A Guide to the Generation and Interpretation of Sonar, John Wiley & Sons.]

  1. [The study of medical supplies automation replenishment algorithm in hospital on medical supplies supplying chain].

    Science.gov (United States)

    Sheng, Xi

    2012-07-01

    This study addresses an automated replenishment algorithm for medical supplies in the hospital supply chain. A mathematical model and algorithm for automated replenishment of medical supplies are designed with reference to practical hospital data, on the basis of inventory theory, a greedy algorithm and a partition algorithm. The automated replenishment algorithm is shown to compute medical supplies distribution amounts automatically and to optimize the distribution scheme. It can be concluded that the inventory-theory-based model and algorithm, if applied in the field of medical supplies circulation, can provide theoretical and technological support for automated replenishment of medical supplies in the hospital supply chain.

  2. A Recommendation Algorithm for Automating Corollary Order Generation

    Science.gov (United States)

    Klann, Jeffrey; Schadow, Gunther; McCoy, JM

    2009-01-01

    Manual development and maintenance of decision support content is time-consuming and expensive. We explore recommendation algorithms, e-commerce data-mining tools that use collective order history to suggest purchases, to assist with this. In particular, previous work shows corollary order suggestions are amenable to automated data-mining techniques. Here, an item-based collaborative filtering algorithm augmented with association rule interestingness measures mined suggestions from 866,445 orders made in an inpatient hospital in 2007, generating 584 potential corollary orders. Our expert physician panel evaluated the top 92 and agreed 75.3% were clinically meaningful. Also, at least one felt 47.9% would be directly relevant in guideline development. This automated generation of a rough-cut of corollary orders confirms prior indications about automated tools in building decision support content. It is an important step toward computerized augmentation to decision support development, which could increase development efficiency and content quality while automatically capturing local standards. PMID:20351875
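
    As a rough illustration of the item-based collaborative filtering step, the sketch below ranks candidate corollary items by cosine similarity over a binary order matrix. The data model and names are hypothetical, and the association-rule interestingness measures used in the paper are omitted.

        import numpy as np

        def corollary_suggestions(orders, item_index, top_k=5):
            """orders: binary matrix (n_encounters x n_items), 1 if item ordered.
            Returns the top_k items most cosine-similar to the given item."""
            counts = orders.sum(axis=0).astype(float)
            co = (orders.T @ orders).astype(float)       # item-item co-order counts
            norms = np.sqrt(np.outer(counts, counts))
            similarity = np.zeros_like(co)
            np.divide(co, norms, out=similarity, where=norms > 0)
            similarity[item_index, item_index] = 0.0     # exclude the item itself
            return np.argsort(similarity[item_index])[::-1][:top_k]

        orders = np.random.randint(0, 2, size=(1000, 50))   # toy order history
        print(corollary_suggestions(orders, item_index=3))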

  3. SPECIAL LIBRARIES OF FRAGMENTS OF ALGORITHMIC NETWORKS TO AUTOMATE THE DEVELOPMENT OF ALGORITHMIC MODELS

    Directory of Open Access Journals (Sweden)

    V. E. Marley

    2015-01-01

    Full Text Available Summary. The concept of algorithmic models arose from the algorithmic approach, in which the simulated object or phenomenon is represented as a process governed by the strict rules of an algorithm. An algorithmic model is a formalized description of a subject specialist's scenario for the simulated process, whose structure mirrors the causal and temporal relationships between events of the process being modeled, together with all information necessary for its software implementation. Algorithmic networks are used to represent the structure of algorithmic models. They are normally defined as loaded finite directed graphs whose vertices are mapped to operators and whose arcs are the variables bound by those operators. The language of algorithmic networks is highly expressive: the algorithms it can represent cover the class of all random algorithms. Existing automated modeling systems based on algorithmic networks mainly use operators working with real numbers. Although this limits their capabilities, it is sufficient for modeling a wide class of problems related to the economy, the environment, transport and technical processes. The task of modeling the execution of schedules and network diagrams is relevant and useful. Many systems can compute network graphs; however, monitoring based on the analysis of gaps and the terms of graphs, with prediction of schedule execution, is lacking. The library described here is designed to build such predictive models: the source data are specified to obtain a set of projections, from which one is chosen and taken as the new plan.

  4. Sampling algorithms for validation of supervised learning models for Ising-like systems

    Science.gov (United States)

    Portman, Nataliya; Tamblyn, Isaac

    2017-12-01

    In this paper, we build and explore supervised learning models of ferromagnetic system behavior, using Monte-Carlo sampling of the spin configuration space generated by the 2D Ising model. Given the enormous size of the space of all possible Ising model realizations, the question arises as to how to choose a reasonable number of samples that will form physically meaningful and non-intersecting training and testing datasets. Here, we propose a sampling technique called "ID-MH" that uses the Metropolis-Hastings algorithm to create a Markov process across energy levels within a predefined configuration subspace. We show that this method retains phase transitions in both training and testing datasets and serves the purpose of validating a machine learning algorithm. For larger lattice dimensions, ID-MH is not feasible, as it requires knowledge of the complete configuration space. We therefore develop a new "block-ID" sampling strategy: it decomposes the given structure into square blocks with lattice dimension N ≤ 5 and uses ID-MH sampling of candidate blocks. A further comparison of commonly used machine learning methods such as random forests, decision trees, k-nearest neighbors and artificial neural networks shows that the PCA-based decision tree regressor is the most accurate predictor of magnetizations of the Ising model. For energies, however, the accuracy of prediction is not satisfactory, highlighting the need to consider more algorithmically complex methods (e.g., deep learning).
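
    The Metropolis-Hastings step underlying the proposed ID-MH sampler can be illustrated as follows for the plain 2D Ising model. The energy-level bookkeeping specific to ID-MH is omitted, so this is only a sketch of the base Markov chain.

        import numpy as np

        def metropolis_sweep(spins, beta, rng):
            """One Monte-Carlo sweep over an N x N lattice of +/-1 spins."""
            n = spins.shape[0]
            for _ in range(n * n):
                i, j = rng.integers(0, n, size=2)
                neighbors = (spins[(i + 1) % n, j] + spins[(i - 1) % n, j] +
                             spins[i, (j + 1) % n] + spins[i, (j - 1) % n])
                delta_e = 2.0 * spins[i, j] * neighbors   # energy cost of flipping
                if delta_e <= 0 or rng.random() < np.exp(-beta * delta_e):
                    spins[i, j] *= -1
            return spins

        rng = np.random.default_rng(0)
        lattice = rng.choice([-1, 1], size=(8, 8))
        for _ in range(100):
            metropolis_sweep(lattice, beta=0.44, rng=rng)   # near the critical point
        print(lattice.mean())   # magnetization per spin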

  5. An algorithm to automate yeast segmentation and tracking.

    Directory of Open Access Journals (Sweden)

    Andreas Doncic

    Full Text Available Our understanding of dynamic cellular processes has been greatly enhanced by rapid advances in quantitative fluorescence microscopy. Imaging single cells has emphasized the prevalence of phenomena that can be difficult to infer from population measurements, such as all-or-none cellular decisions, cell-to-cell variability, and oscillations. Examination of these phenomena requires segmenting and tracking individual cells over long periods of time. However, accurate segmentation and tracking of cells is difficult and is often the rate-limiting step in an experimental pipeline. Here, we present an algorithm that accomplishes fully automated segmentation and tracking of budding yeast cells within growing colonies. The algorithm incorporates prior information of yeast-specific traits, such as immobility and growth rate, to segment an image using a set of threshold values rather than one specific optimized threshold. Results from the entire set of thresholds are then used to perform a robust final segmentation.

  6. An algorithm to automate yeast segmentation and tracking.

    Science.gov (United States)

    Doncic, Andreas; Eser, Umut; Atay, Oguzhan; Skotheim, Jan M

    2013-01-01

    Our understanding of dynamic cellular processes has been greatly enhanced by rapid advances in quantitative fluorescence microscopy. Imaging single cells has emphasized the prevalence of phenomena that can be difficult to infer from population measurements, such as all-or-none cellular decisions, cell-to-cell variability, and oscillations. Examination of these phenomena requires segmenting and tracking individual cells over long periods of time. However, accurate segmentation and tracking of cells is difficult and is often the rate-limiting step in an experimental pipeline. Here, we present an algorithm that accomplishes fully automated segmentation and tracking of budding yeast cells within growing colonies. The algorithm incorporates prior information of yeast-specific traits, such as immobility and growth rate, to segment an image using a set of threshold values rather than one specific optimized threshold. Results from the entire set of thresholds are then used to perform a robust final segmentation.
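
    A minimal sketch of the multi-threshold idea is shown below: segment at a whole set of thresholds and keep pixels that persist as foreground across most of them. The yeast-specific priors (immobility, growth rate) are omitted, and the persistence fraction is an assumption for illustration.

        import numpy as np
        from scipy import ndimage

        def robust_segmentation(image, thresholds, min_persistence=0.7):
            """Keep pixels that are foreground in at least min_persistence of the
            thresholded versions of the image, then label connected regions."""
            votes = np.mean([image > t for t in thresholds], axis=0)
            mask = votes >= min_persistence
            labels, n = ndimage.label(mask)
            return labels, n

        img = np.random.rand(64, 64)
        labels, n = robust_segmentation(img, thresholds=np.linspace(0.5, 0.9, 9))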

  7. Semi-supervised spectral algorithms for community detection in complex networks based on equivalence of clustering methods

    Science.gov (United States)

    Ma, Xiaoke; Wang, Bingbo; Yu, Liang

    2018-01-01

    Community detection is fundamental for revealing the structure-functionality relationship in complex networks, and it involves two issues: a quantitative function for community quality, and algorithms to discover communities. Despite significant research on each issue separately, few attempts have been made to establish a connection between the two. To address this problem, a generalized quantification function is proposed for communities in weighted networks, providing a framework that unifies several well-known measures. We then prove that the trace optimization of the proposed measure is equivalent to the objective functions of algorithms such as nonnegative matrix factorization, kernel K-means and spectral clustering; this serves as the theoretical foundation for designing community detection algorithms. On the second issue, a semi-supervised spectral clustering algorithm is developed by exploiting this equivalence relation, combining nonnegative matrix factorization and spectral clustering. Unlike traditional semi-supervised algorithms, the partial supervision is integrated into the objective of the spectral algorithm. Finally, through extensive experiments on both artificial and real-world networks, we demonstrate that the proposed method improves the accuracy of traditional spectral algorithms in community detection.

  8. Enhancing Time-Series Detection Algorithms for Automated Biosurveillance

    Science.gov (United States)

    Burkom, Howard; Xing, Jian; English, Roseanne; Bloom, Steven; Cox, Kenneth; Pavlin, Julie A.

    2009-01-01

    BioSense is a US national system that uses data from health information systems for automated disease surveillance. We studied 4 time-series algorithm modifications designed to improve sensitivity for detecting artificially added data. To test these modified algorithms, we used reports of daily syndrome visits from 308 Department of Defense (DoD) facilities and 340 hospital emergency departments (EDs). At a constant alert rate of 1%, sensitivity was improved for both datasets by using a minimum standard deviation (SD) of 1.0, a 14–28 day baseline duration for calculating mean and SD, and an adjustment for total clinic visits as a surrogate denominator. Stratifying baseline days into weekdays versus weekends to account for day-of-week effects increased sensitivity for the DoD data but not for the ED data. These enhanced methods may increase sensitivity without increasing the alert rate and may improve the ability to detect outbreaks by using automated surveillance system data. PMID:19331728
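
    The modified alerting rules can be illustrated with a simple z-score detector that uses a 28-day baseline and a standard-deviation floor of 1.0; the alert threshold below is illustrative rather than taken from the paper.

        import numpy as np

        def daily_alert(counts, baseline_days=28, min_sd=1.0, z_threshold=2.33):
            """counts: 1-D array of daily syndrome visits; returns True if the most
            recent day is anomalous relative to the preceding baseline window."""
            baseline = counts[-(baseline_days + 1):-1]
            mean = baseline.mean()
            sd = max(baseline.std(ddof=1), min_sd)   # SD floor avoids hair-trigger alerts
            z = (counts[-1] - mean) / sd
            return z > z_threshold

        counts = np.random.poisson(20, size=60).astype(float)
        print(daily_alert(counts))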

  9. An Automated Summarization Assessment Algorithm for Identifying Summarizing Strategies.

    Directory of Open Access Journals (Sweden)

    Asad Abdi

    Full Text Available Summarization is the process of selecting important information from a source text. Summarizing strategies are the core cognitive processes in summarization activity. Since summarization can be an important tool to improve comprehension, it has attracted the interest of teachers for teaching summary writing through direct instruction. To do this, they need to review and assess students' summaries, and these tasks are very time-consuming; computer-assisted assessment can thus help teachers conduct this task more effectively. This paper proposes an algorithm based on the combination of semantic relations between words and their syntactic composition to identify the summarizing strategies employed by students in summary writing. An innovative aspect of our algorithm lies in its ability to identify summarizing strategies at both the syntactic and semantic levels. The efficiency of the algorithm is measured in terms of Precision, Recall and F-measure. We then implemented the algorithm in an automated summarization assessment system that can be used to identify the summarizing strategies used by students in summary writing.

  10. An Automated Energy Detection Algorithm Based on Morphological Filter Processing with a Modified Watershed Transform

    Science.gov (United States)

    2018-01-01

    ARL-TR-8270, January 2018, US Army Research Laboratory. An Automated Energy Detection Algorithm Based on Morphological Filter Processing with a Modified Watershed Transform, by Kwok F Tom, Sensors and Electron Devices Directorate. Reporting period: 1 October 2016–30 September 2017.

  11. Automated Spectroscopic Analysis Using the Particle Swarm Optimization Algorithm: Implementing a Guided Search Algorithm to Autofit

    Science.gov (United States)

    Ervin, Katherine; Shipman, Steven

    2017-06-01

    While rotational spectra can be rapidly collected, their analysis (especially for complex systems) is seldom straightforward, leading to a bottleneck. The AUTOFIT program was designed to serve that need by quickly matching rotational constants to spectra with little user input and supervision. This program can potentially be improved by incorporating an optimization algorithm in the search for a solution. The Particle Swarm Optimization Algorithm (PSO) was chosen for implementation. PSO is part of a family of optimization algorithms called heuristic algorithms, which seek approximate best answers. This is ideal for rotational spectra, where an exact match will not be found without incorporating distortion constants, etc., which would otherwise greatly increase the size of the search space. PSO was tested for robustness against five standard fitness functions and then applied to a custom fitness function created for rotational spectra. This talk will explain the Particle Swarm Optimization algorithm and how it works, describe how Autofit was modified to use PSO, discuss the fitness function developed to work with spectroscopic data, and show our current results. Seifert, N.A., Finneran, I.A., Perez, C., Zaleski, D.P., Neill, J.L., Steber, A.L., Suenram, R.D., Lesarri, A., Shipman, S.T., Pate, B.H., J. Mol. Spec. 312, 13-21 (2015)
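
    A minimal particle swarm optimizer of the kind described can be sketched as follows; the fitness function is a stand-in for the spectroscopic one the authors developed, and all hyperparameters are illustrative.

        import numpy as np

        def pso(fitness, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
            rng = np.random.default_rng(1)
            lo, hi = bounds
            dim = lo.size
            x = rng.uniform(lo, hi, size=(n_particles, dim))
            v = np.zeros_like(x)
            pbest, pbest_f = x.copy(), np.array([fitness(p) for p in x])
            gbest = pbest[pbest_f.argmin()].copy()
            for _ in range(iters):
                r1, r2 = rng.random((2, n_particles, dim))
                # Velocity update: inertia + pull toward personal and global bests.
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
                x = np.clip(x + v, lo, hi)
                f = np.array([fitness(p) for p in x])
                improved = f < pbest_f
                pbest[improved], pbest_f[improved] = x[improved], f[improved]
                gbest = pbest[pbest_f.argmin()].copy()
            return gbest

        best = pso(lambda p: np.sum(p ** 2), (np.full(3, -5.0), np.full(3, 5.0)))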

  12. MED: a new non-supervised gene prediction algorithm for bacterial and archaeal genomes

    Directory of Open Access Journals (Sweden)

    Yang Yi-Fan

    2007-03-01

    Full Text Available Abstract Background Despite remarkable success in the computational prediction of genes in Bacteria and Archaea, the lack of a comprehensive understanding of prokaryotic gene structures prevents further elucidation of differences among genomes. It therefore remains of interest to develop new ab initio algorithms that not only accurately predict genes but also facilitate comparative studies of prokaryotic genomes. Results This paper describes a new prokaryotic genefinding algorithm based on a comprehensive statistical model of protein-coding Open Reading Frames (ORFs) and Translation Initiation Sites (TISs). The former is based on a linguistic "Entropy Density Profile" (EDP) model of coding DNA sequence, and the latter comprises several relevant features related to translation initiation. They are combined to form the so-called Multivariate Entropy Distance (MED) algorithm, MED 2.0, which incorporates several strategies in an iterative program. The iterations enable a non-supervised learning process that yields a set of genome-specific parameters for the gene structure before genes are predicted. Conclusion Results of extensive tests show that MED 2.0 achieves competitively high performance in gene prediction for both 5' and 3' end matches, compared to the current best prokaryotic gene finders. The advantage of MED 2.0 is particularly evident for GC-rich genomes and archaeal genomes. Furthermore, the genome-specific parameters given by MED 2.0 match the current understanding of prokaryotic genomes and may serve as tools for comparative genomic studies. In particular, MED 2.0 is shown to reveal divergent translation initiation mechanisms in archaeal genomes while making a more accurate prediction of TISs compared to existing gene finders and the current GenBank annotation.
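
    The "Entropy Density Profile" idea can be illustrated as a normalized vector of -p log p contributions over the codons of an ORF; the exact k-mer choice and normalization used in MED 2.0 may differ from this sketch.

        import math
        from collections import Counter

        def entropy_density_profile(orf, k=3):
            """Normalized -p*log(p) contributions of the k-mers (here codons) in an ORF."""
            kmers = [orf[i:i + k] for i in range(0, len(orf) - k + 1, k)]
            counts = Counter(kmers)
            total = sum(counts.values())
            terms = {w: -(c / total) * math.log(c / total) for w, c in counts.items()}
            h = sum(terms.values())
            if h == 0.0:                       # degenerate ORF: one repeated k-mer
                return {w: 0.0 for w in terms}
            return {w: t / h for w, t in terms.items()}   # components sum to 1

        profile = entropy_density_profile("ATGGCTGCTAAAGGTTAA")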

  13. The Automated Assessment of Postural Stability: Balance Detection Algorithm.

    Science.gov (United States)

    Napoli, Alessandro; Glass, Stephen M; Tucker, Carole; Obeid, Iyad

    2017-12-01

    Impaired balance is a common indicator of mild traumatic brain injury, concussion and musculoskeletal injury. Given the clinical relevance of such injuries, especially in military settings, it is paramount to develop more accurate and reliable on-field evaluation tools. This work presents the design and implementation of the automated assessment of postural stability (AAPS) system, for on-field evaluations following concussion. The AAPS is a computer system, based on inexpensive off-the-shelf components and custom software, that aims to automatically and reliably evaluate balance deficits, by replicating a known on-field clinical test, namely, the Balance Error Scoring System (BESS). The AAPS main innovation is its balance error detection algorithm that has been designed to acquire data from a Microsoft Kinect® sensor and convert them into clinically-relevant BESS scores, using the same detection criteria defined by the original BESS test. In order to assess the AAPS balance evaluation capability, a total of 15 healthy subjects (7 male, 8 female) were required to perform the BESS test, while simultaneously being tracked by a Kinect 2.0 sensor and a professional-grade motion capture system (Qualisys AB, Gothenburg, Sweden). High definition videos with BESS trials were scored off-line by three experienced observers for reference scores. AAPS performance was assessed by comparing the AAPS automated scores to those derived by three experienced observers. Our results show that the AAPS error detection algorithm presented here can accurately and precisely detect balance deficits with performance levels that are comparable to those of experienced medical personnel. Specifically, agreement levels between the AAPS algorithm and the human average BESS scores ranging between 87.9% (single-leg on foam) and 99.8% (double-leg on firm ground) were detected. Moreover, statistically significant differences in balance scores were not detected by an ANOVA test with alpha equal to 0

  14. Postprocessing algorithm for automated analysis of pelvic intraoperative neuromonitoring signals

    Directory of Open Access Journals (Sweden)

    Wegner Celine

    2016-09-01

    Full Text Available Two-dimensional pelvic intraoperative neuromonitoring (pIONM®) is based on electric stimulation of autonomic nerves under observation of electromyography of the internal anal sphincter (IAS) and manometry of the urinary bladder. The method provides nerve identification and verification of its functional integrity, and is gaining increased attention at a time when preservation of function is becoming more and more important. Ongoing technical and methodological developments in experimental and clinical settings require further analysis of the obtained signals. This work describes a postprocessing algorithm for pIONM® signals, developed for automated analysis of large amounts of recorded data. The analysis routine includes a graphical representation of the recorded signals in the time and frequency domains, as well as a quantitative evaluation by means of features calculated from the time and frequency domains. The produced plots are summarized automatically in a PowerPoint presentation, and the calculated features are written into a standardized Excel sheet, ready for statistical analysis.

  15. How to measure metallicity from five-band photometry with supervised machine learning algorithms

    Science.gov (United States)

    Acquaviva, Viviana

    2016-02-01

    We demonstrate that it is possible to measure metallicity from the SDSS five-band photometry to better than 0.1 dex using supervised machine learning algorithms. Using spectroscopic estimates of metallicity as ground truth, we build, optimize and train several estimators to predict metallicity. We use the observed photometry, as well as derived quantities such as stellar mass and photometric redshift, as features, and we build two sample data sets at median redshifts of 0.103 and 0.218 and median r-band magnitude of 17.5 and 18.3, respectively. We find that ensemble methods, such as random forests of trees and extremely randomized trees and support vector machines all perform comparably well and can measure metallicity with a Root Mean Square Error (RMSE) of 0.081 and 0.090 for the two data sets when all objects are included. The fraction of outliers (objects for which |Ztrue - Zpred| > 0.2 dex) is 2.2 and 3.9 per cent, respectively and the RMSE decreases to 0.068 and 0.069 if those objects are excluded. Because of the ability of these algorithms to capture complex relationships between data and target, our technique performs better than previously proposed methods that sought to fit metallicity using an analytic fitting formula, and has 3× more constraining power than SED fitting-based methods. Additionally, this method is extremely forgiving of contamination in the training set, and can be used with very satisfactory results for sample sizes of a few hundred objects. We distribute all the routines to reproduce our results and apply them to other data sets.
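
    A minimal reconstruction of this setup with scikit-learn is sketched below; the feature matrix and metallicities are placeholders for the SDSS-derived quantities, and the outlier definition follows the |Ztrue - Zpred| > 0.2 dex criterion above.

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.model_selection import train_test_split

        X = np.random.rand(500, 7)     # placeholder for magnitudes, mass, photo-z
        z = np.random.rand(500)        # placeholder spectroscopic metallicities
        X_tr, X_te, z_tr, z_te = train_test_split(X, z, random_state=0)
        model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, z_tr)
        pred = model.predict(X_te)
        rmse = np.sqrt(np.mean((pred - z_te) ** 2))
        outlier_fraction = np.mean(np.abs(pred - z_te) > 0.2)   # > 0.2 dex outliers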

  16. A supervised multi-spike learning algorithm based on gradient descent for spiking neural networks.

    Science.gov (United States)

    Xu, Yan; Zeng, Xiaoqin; Han, Lixin; Yang, Jing

    2013-07-01

    We use a supervised multi-spike learning algorithm for spiking neural networks (SNNs) with temporal encoding to simulate the learning mechanism of biological neurons in which the SNN output spike trains are encoded by firing times. We first analyze why existing gradient-descent-based learning methods for SNNs have difficulty in achieving multi-spike learning. We then propose a new multi-spike learning method for SNNs based on gradient descent that solves the problems of error function construction and interference among multiple output spikes during learning. The method could be widely applied to single spiking neurons to learn desired output spike trains and to multilayer SNNs to solve classification problems. By overcoming learning interference among multiple spikes, our method has high learning accuracy when there are a relatively large number of output spikes in need of learning. We also develop an output encoding strategy with respect to multiple spikes for classification problems. This effectively improves the classification accuracy of multi-spike learning compared to that of single-spike learning. Copyright © 2013 Elsevier Ltd. All rights reserved.

  17. Precision assessment of some supervised and unsupervised algorithms for genotype discrimination in the genus Pisum using SSR molecular data.

    Science.gov (United States)

    Nasiri, Jaber; Naghavi, Mohammad Reza; Kayvanjoo, Amir Hossein; Nasiri, Mojtaba; Ebrahimi, Mansour

    2015-03-07

    For the first time, the prediction accuracies of several supervised and unsupervised algorithms were evaluated in an SSR-based DNA fingerprinting study of a pea collection containing 20 cultivars and 57 wild samples. In general, according to the 10 attribute-weighting models, the SSR alleles PEAPHTAP-2 and PSBLOX13.2-1 were the two most important attributes for discriminating among the eight species and subspecies of the genus Pisum. In addition, K-Medoids unsupervised clustering run on the Chi-squared dataset exhibited the best prediction accuracy (83.12%), while the lowest accuracy (25.97%) was obtained when the K-Means model was run on the FCdb database. Irrespective of some fluctuations, the overall accuracies of tree induction models were significantly high for many algorithms, and the attributes PSBLOX13.2-3 and PEAPHTAP could successfully separate Pisum fulvum accessions and cultivars from the others when two selected decision trees were taken into account. The other supervised algorithms also exhibited reliable overall accuracies, even though in some rare cases they yielded low accuracies. Altogether, our results demonstrate promising applications of both supervised and unsupervised algorithms as data mining tools for the accurate fingerprinting of different species and subspecies of the genus Pisum, a fundamental priority in breeding programs for the crop. Copyright © 2015 Elsevier Ltd. All rights reserved.

  18. An Automated Energy Detection Algorithm Based on Kurtosis-Histogram Excision

    Science.gov (United States)

    2018-01-01

    ARL-TR-8269, January 2018, US Army Research Laboratory. An Automated Energy Detection Algorithm Based on Kurtosis-Histogram Excision.

  19. An Automated Energy Detection Algorithm Based on Morphological and Statistical Processing Techniques

    Science.gov (United States)

    2018-01-09

    ARL-TR-8272, January 2018, US Army Research Laboratory. An Automated Energy Detection Algorithm Based on Morphological and Statistical Processing Techniques.

  20. An Automated Energy Detection Algorithm Based on Morphological Filter Processing with a Semi-Disk Structure

    Science.gov (United States)

    2018-01-01

    ARL-TR-8271, January 2018, US Army Research Laboratory. An Automated Energy Detection Algorithm Based on Morphological Filter Processing with a Semi-Disk Structure, by Kwok F Tom, Sensors and Electron Devices Directorate. Reporting period ends September 2017.

  1. Automated design of infrared digital metamaterials by genetic algorithm

    Science.gov (United States)

    Sugino, Yuya; Ishikawa, Atsushi; Hayashi, Yasuhiko; Tsuruta, Kenji

    2017-08-01

    We demonstrate automated design of infrared (IR) metamaterials using a genetic algorithm (GA) and experimentally characterize their IR properties. To implement the automated design scheme, we adopt a digital metamaterial consisting of 7 × 7 Au nano-pixels, each with an area of 200 nm × 200 nm, whose placements are coded as binary genes in the GA optimization process. The GA, combined with three-dimensional (3D) finite element method (FEM) simulation, is developed and applied to automatically construct a digital metamaterial exhibiting pronounced plasmonic resonances at the target IR frequencies. Based on the numerical results, the metamaterials are fabricated on a Si substrate over an area of 1 mm × 1 mm using EB lithography, Cr/Au (2/20 nm) deposition, and a liftoff process. In the FT-IR measurements, pronounced plasmonic responses of each metamaterial are clearly observed near the targeted frequencies, although the synthesized pixel arrangements of the metamaterials are seemingly random. The corresponding numerical simulations reveal the resonant behavior of each pixel and their hybridized systems. Our approach is fully computer-aided, without manual intervention, paving the way toward novel device designs for next-generation plasmonic applications.

  2. A scale space based algorithm for automated segmentation of single shot tagged MRI of shearing deformation

    NARCIS (Netherlands)

    Sprengers, Andre M. J.; Caan, Matthan W. A.; Moerman, Kevin M.; Nederveen, Aart J.; Lamerichs, Rolf M.; Stoker, Jaap

    2013-01-01

    This study proposes a scale space based algorithm for automated segmentation of single-shot tagged images of modest SNR. Furthermore the algorithm was designed for analysis of discontinuous or shearing types of motion, i.e. segmentation of broken tag patterns. The proposed algorithm utilises

  3. A Parallel Genetic Algorithm for Automated Electronic Circuit Design

    Science.gov (United States)

    Long, Jason D.; Colombano, Silvano P.; Haith, Gary L.; Stassinopoulos, Dimitris

    2000-01-01

    Parallelized versions of genetic algorithms (GAs) are popular primarily for three reasons: the GA is an inherently parallel algorithm, typical GA applications are very compute intensive, and powerful computing platforms, especially Beowulf-style computing clusters, are becoming more affordable and easier to implement. In addition, the low communication bandwidth required allows the use of inexpensive networking hardware such as standard office ethernet. In this paper we describe a parallel GA and its use in automated high-level circuit design. Genetic algorithms are a type of trial-and-error search technique that are guided by principles of Darwinian evolution. Just as the genetic material of two living organisms can intermix to produce offspring that are better adapted to their environment, GAs expose genetic material, frequently strings of 1s and 0s, to the forces of artificial evolution: selection, mutation, recombination, etc. GAs start with a pool of randomly-generated candidate solutions which are then tested and scored with respect to their utility. Solutions are then bred by probabilistically selecting high quality parents and recombining their genetic representations to produce offspring solutions. Offspring are typically subjected to a small amount of random mutation. After a pool of offspring is produced, this process iterates until a satisfactory solution is found or an iteration limit is reached. Genetic algorithms have been applied to a wide variety of problems in many fields, including chemistry, biology, and many engineering disciplines. There are many styles of parallelism used in implementing parallel GAs. One such method is called the master-slave or processor farm approach. In this technique, slave nodes are used solely to compute fitness evaluations (the most time consuming part). The master processor collects fitness scores from the nodes and performs the genetic operators (selection, reproduction, variation, etc.). Because of dependency
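
    A minimal master-slave parallel GA in the sense described above can be sketched as follows, with worker processes evaluating fitness and the master applying selection, crossover and mutation; the bit-counting fitness is a toy stand-in for circuit evaluation.

        import random
        from multiprocessing import Pool

        def fitness(genome):                 # the expensive step, farmed to workers
            return sum(genome)               # toy objective: maximize the number of 1s

        def evolve(pop_size=40, length=32, generations=50):
            pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
            with Pool() as pool:
                for _ in range(generations):
                    scores = pool.map(fitness, pop)       # slave nodes score in parallel
                    ranked = [g for _, g in sorted(zip(scores, pop), reverse=True)]
                    parents = ranked[:pop_size // 2]
                    children = []
                    while len(children) < pop_size:
                        a, b = random.sample(parents, 2)
                        cut = random.randrange(1, length)
                        child = a[:cut] + b[cut:]         # one-point crossover
                        if random.random() < 0.05:
                            child[random.randrange(length)] ^= 1   # bit-flip mutation
                        children.append(child)
                    pop = children
            return max(pop, key=fitness)

        if __name__ == "__main__":
            print(evolve())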

  4. Designing automated warehouses by minimising investment cost using genetic algorithms

    OpenAIRE

    Lerher, Tone; Potrč, Iztok; Šraml, Matjaž

    2012-01-01

    The successful performance of automated storage and retrieval systems depends on an appropriate design and optimization process. In the present work, a comprehensive model for designing automated storage and retrieval systems for single- and multi-aisle configurations is presented. Because automated storage and retrieval systems must be technically highly efficient yet designed at reasonable expense, the objective function represe...

  5. 3D deeply supervised network for automated segmentation of volumetric medical images.

    Science.gov (United States)

    Dou, Qi; Yu, Lequan; Chen, Hao; Jin, Yueming; Yang, Xin; Qin, Jing; Heng, Pheng-Ann

    2017-10-01

    While deep convolutional neural networks (CNNs) have achieved remarkable success in 2D medical image segmentation, it is still difficult for CNNs to segment important organs or structures from 3D medical images owing to several interrelated challenges: the complicated anatomical environments in volumetric images, the optimization difficulties of 3D networks, and the inadequacy of training samples. In this paper, we present a novel and efficient 3D fully convolutional network equipped with a 3D deep supervision mechanism to comprehensively address these challenges; we call it 3D DSN. Our proposed 3D DSN is capable of conducting volume-to-volume learning and inference, which can eliminate redundant computations and alleviate the risk of over-fitting on limited training data. More importantly, the 3D deep supervision mechanism can effectively cope with the optimization problem of gradients vanishing or exploding when training a 3D deep model, accelerating convergence while improving discrimination capability. The mechanism is developed by deriving an objective function that directly guides the training of both lower and upper layers in the network, so that the adverse effects of unstable gradient changes can be counteracted during the training procedure. We also employ a fully connected conditional random field model as a post-processing step to refine the segmentation results. We have extensively validated the proposed 3D DSN on two typical yet challenging volumetric medical image segmentation tasks: (i) liver segmentation from 3D CT scans and (ii) whole heart and great vessels segmentation from 3D MR images, by participating in two grand challenges held in conjunction with MICCAI. We achieved segmentation results competitive with state-of-the-art approaches in both challenges, with much faster speed, corroborating the effectiveness of our proposed 3D DSN. Copyright © 2017 Elsevier B.V. All rights reserved.

  6. AUTOMATION OF PLC PROGRAMMING WHEN IMPLEMENTING ALGORITHMS OF GUARANTEEING CONTROL

    Directory of Open Access Journals (Sweden)

    M. V. Levinskyi

    2015-05-01

    Full Text Available In developing programs for programmable logic controllers (PLCs), the concept of model-oriented design is increasingly used. In particular, Simulink PLC Coder makes it possible to generate SCL program code from a Simulink model that contains certain dynamic elements. This SCL code can then be transformed, for example, into functional blocks for the Simatic S7-300 (VIPA 300) PLC. This significantly reduces the time required to develop code in SCL and lowers the qualification requirements for specialists developing control systems. This article provides an example of PLC programming automation when implementing algorithms of guaranteeing control (AGC). Certain types of technological processes are characterized by a monotonically increasing effectiveness function with a fixed one-way restriction in regulation. For example, in grinders, presses and extruders, the load current of the drive is stabilized by changing the feed. The energy efficiency of these plants increases with the set point (SP) of the drive load-current stabilization loop. However, increasing the SP increases the probability of triggering the corresponding protection, for example as a result of random changes in the properties of the raw material. To avoid such accidents, the power of the driving motors is often unreasonably overrated, in which case they run at currents equal to half the rated value. Systems of guaranteeing control (SGC) are used to resolve the contradiction between the need to improve efficiency and the increased probability of an accident.

  7. A fully automated algorithm of baseline correction based on wavelet feature points and segment interpolation

    Science.gov (United States)

    Qian, Fang; Wu, Yihui; Hao, Peng

    2017-11-01

    Baseline correction is a very important part of pre-processing. The baseline in a spectrum signal can induce uneven amplitude shifts across different wavenumbers and lead to poor results, so these amplitude shifts should be compensated before further analysis. Many algorithms are used to remove the baseline, but fully automated baseline correction is more convenient in practical applications. A fully automated algorithm based on wavelet feature points and segment interpolation (AWFPSI) is proposed. The algorithm finds feature points through continuous wavelet transformation and estimates the baseline through segment interpolation. AWFPSI is compared with three commonly used fully automated and semi-automated algorithms, using simulated, visible and Raman spectrum signals. The results show that AWFPSI gives better accuracy and has the advantage of easy use.
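
    The segment-interpolation half of the idea can be sketched as below; here the feature points are simply local minima rather than wavelet-detected ones, so this illustrates only the interpolation step.

        import numpy as np
        from scipy.signal import argrelextrema

        def subtract_baseline(signal, order=25):
            """Estimate the baseline by linear interpolation through local minima
            and subtract it from the signal."""
            idx = argrelextrema(signal, np.less_equal, order=order)[0]
            idx = np.unique(np.concatenate(([0], idx, [len(signal) - 1])))  # anchor ends
            baseline = np.interp(np.arange(len(signal)), idx, signal[idx])
            return signal - baseline, baseline

        x = np.linspace(0, 10, 500)
        raw = np.sin(x) ** 2 + 0.3 * x        # peaks riding on a drifting baseline
        corrected, baseline = subtract_baseline(raw)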

  8. ALGORITHM OF WORK OF SYSTEM OF MANAGEMENT BY AUTOMATED GEAR-BOXES CARS

    Directory of Open Access Journals (Sweden)

    O. Smirnov

    2009-01-01

    Full Text Available The development of control algorithms for automated gearbox management systems in vehicles is considered, and the results of their practical application are illustrated using the example of a KamAZ truck.

  9. Scheduling vehicles in automated transportation systems : algorithms and case study

    NARCIS (Netherlands)

    van der Heijden, Matthijs C.; Ebben, Mark; Gademann, Noud; van Harten, Aart

    2000-01-01

    One of the major planning issues in large scale automated transportation systems is so-called empty vehicle management, the timely supply of vehicles to terminals in order to reduce cargo waiting times. Motivated by a Dutch pilot project on an underground cargo transportation system using Automated

  10. Evaluation of an automated single-channel sleep staging algorithm

    Directory of Open Access Journals (Sweden)

    Wang Y

    2015-09-01

    Full Text Available Ying Wang,1 Kenneth A Loparo,1,2 Monica R Kelly,3 Richard F Kaplan1 1General Sleep Corporation, Euclid, OH, 2Department of Electrical Engineering and Computer Science, Case Western Reserve University, Cleveland, OH, 3Department of Psychology, University of Arizona, Tucson, AZ, USA Background: We previously published the performance evaluation of an automated electroencephalography (EEG)-based single-channel sleep–wake detection algorithm called Z-ALG used by the Zmachine® sleep monitoring system. The objective of this paper is to evaluate the performance of a new algorithm called Z-PLUS, which further differentiates sleep as detected by Z-ALG into Light Sleep, Deep Sleep, and Rapid Eye Movement (REM) Sleep, against laboratory polysomnography (PSG) using a consensus of expert visual scorers. Methods: Single-night, in-lab PSG recordings from 99 subjects (52F/47M, 18–60 years, median age 32.7 years), including both normal sleepers and those reporting a variety of sleep complaints consistent with chronic insomnia, sleep apnea, and restless leg syndrome, as well as those taking selective serotonin reuptake inhibitor/serotonin–norepinephrine reuptake inhibitor antidepressant medications, previously evaluated using Z-ALG, were re-examined using Z-PLUS. EEG data collected from electrodes placed at the differential mastoids (A1–A2) were processed by Z-ALG to determine wake and sleep, then those epochs detected as sleep were further processed by Z-PLUS to differentiate into Light Sleep, Deep Sleep, and REM. EEG data were visually scored by multiple certified polysomnographic technologists according to the Rechtschaffen and Kales criteria, and then combined using a majority-voting rule to create a PSG Consensus score file for each of the 99 subjects. Z-PLUS output was compared to the PSG Consensus score files for both epoch-by-epoch (eg, sensitivity, specificity, and kappa) and sleep stage-related statistics (eg, Latency to Deep Sleep, Latency to REM

  11. Automated Test Assembly for Cognitive Diagnosis Models Using a Genetic Algorithm

    Science.gov (United States)

    Finkelman, Matthew; Kim, Wonsuk; Roussos, Louis A.

    2009-01-01

    Much recent psychometric literature has focused on cognitive diagnosis models (CDMs), a promising class of instruments used to measure the strengths and weaknesses of examinees. This article introduces a genetic algorithm to perform automated test assembly alongside CDMs. The algorithm is flexible in that it can be applied whether the goal is to…

  12. Progress on the development of automated data analysis algorithms and software for ultrasonic inspection of composites

    Science.gov (United States)

    Aldrin, John C.; Coughlin, Chris; Forsyth, David S.; Welter, John T.

    2014-02-01

    Progress is presented on the development and implementation of automated data analysis (ADA) software to address the burden in interpreting ultrasonic inspection data for large composite structures. The automated data analysis algorithm is presented in detail, which follows standard procedures for analyzing signals for time-of-flight indications and backwall amplitude dropout. ADA processing results are presented for test specimens that include inserted materials and discontinuities produced under poor manufacturing conditions.

  13. An Overview of the Automated Dispatch Controller Algorithms in the System Advisor Model (SAM)

    Energy Technology Data Exchange (ETDEWEB)

    DiOrio, Nicholas A [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2017-11-22

    Three automatic dispatch modes have been added to the battery model within the System Advisor Model (SAM). These controllers perform peak shaving in an automated fashion, providing users with a way to see the benefit of reduced demand charges without manually programming a complicated dispatch control. A flexible input option allows more advanced interaction with the automated controller. This document describes the algorithms in detail and presents brief results on their use and limitations.
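
    Automated peak shaving of the kind these controllers perform can be sketched as a simple greedy dispatch loop; the demand target and battery parameters below are illustrative, not SAM's actual inputs.

        def dispatch(load_kw, target_kw, capacity_kwh, power_kw, soc_kwh=0.0, dt_h=1.0):
            """For each timestep, discharge to clip load above target_kw and
            recharge from the grid when load is below target_kw."""
            grid = []
            for load in load_kw:
                if load > target_kw:          # discharge to shave the peak
                    p = min(load - target_kw, power_kw, soc_kwh / dt_h)
                    soc_kwh -= p * dt_h
                else:                         # recharge while staying under the target
                    p = -min(target_kw - load, power_kw, (capacity_kwh - soc_kwh) / dt_h)
                    soc_kwh -= p * dt_h
                grid.append(load - p)         # net grid demand after battery action
            return grid

        print(dispatch([50, 80, 120, 90, 40], target_kw=90, capacity_kwh=60, power_kw=30))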

  14. Value Focused Thinking Applications to Supervised Pattern Classification With Extensions to Hyperspectral Anomaly Detection Algorithms

    Science.gov (United States)

    2015-03-26

    The report evaluates supervised pattern classification algorithms (including Naive Bayes, Classification Trees, and Quadratic Discriminant) through the development, use, and analysis of a value focused thinking framework. [Fragmented record excerpt; the remaining text is list-of-figures residue covering Naive Bayes bias and variance comparisons.]

  15. Automated phylogenetic detection of recombination using a genetic algorithm

    National Research Council Canada - National Science Library

    Kosakovsky Pond, Sergei L; Posada, David; Gravenor, Michael B; Woelk, Christopher H; Frost, Simon D W

    2006-01-01

    .... We propose a model-based framework that uses a genetic algorithm to search a multiple-sequence alignment for putative recombination break points, quantifies the level of support for their locations...

  16. Automated algorithm for counting microbleeds in patients with familial cerebral cavernous malformations.

    Science.gov (United States)

    Zou, Xiaowei; Hart, Blaine L; Mabray, Marc; Bartlett, Mary R; Bian, Wei; Nelson, Jeffrey; Morrison, Leslie A; McCulloch, Charles E; Hess, Christopher P; Lupo, Janine M; Kim, Helen

    2017-07-01

    Familial cerebral cavernous malformation (CCM) patients present with multiple lesions that can grow both in number and size over time and are reliably detected on susceptibility-weighted imaging (SWI). Manual counting of lesions is arduous and subject to high variability. We aimed to develop an automated algorithm for counting CCM microbleeds (lesions <5 mm in diameter) on SWI images. Fifty-seven familial CCM type-1 patients were included in this institutional review board-approved study. Baseline SWI (n = 57) and follow-up SWI (n = 17) were performed on a 3T Siemens MR scanner with lesions counted manually by the study neuroradiologist. We modified an algorithm for detecting radiation-induced microbleeds on SWI images in brain tumor patients, using a training set of 22 manually delineated CCM microbleeds from two random scans. Manual and automated counts were compared using linear regression with robust standard errors, intra-class correlation (ICC), and paired t tests. A validation analysis comparing the automated counting algorithm and a consensus read from two neuroradiologists was used to calculate sensitivity, the proportion of microbleeds correctly identified by the automated algorithm. Automated and manual microbleed counts were in strong agreement in both baseline (ICC = 0.95, p < 0.001) and longitudinal (ICC = 0.88, p < 0.001) analyses, with no significant difference between average counts (baseline p = 0.11, longitudinal p = 0.29). In the validation analysis, the algorithm correctly identified 662 of 1325 microbleeds (sensitivity=50%), again with strong agreement between approaches (ICC = 0.77, p < 0.001). The automated algorithm is a consistent method for counting microbleeds in familial CCM patients that can facilitate lesion quantification and tracking.

  17. Evaluation of supervised machine-learning algorithms to distinguish between inflammatory bowel disease and alimentary lymphoma in cats.

    Science.gov (United States)

    Awaysheh, Abdullah; Wilcke, Jeffrey; Elvinger, François; Rees, Loren; Fan, Weiguo; Zimmerman, Kurt L

    2016-11-01

    Inflammatory bowel disease (IBD) and alimentary lymphoma (ALA) are common gastrointestinal diseases in cats. The very similar clinical signs and histopathologic features of these diseases make the distinction between them diagnostically challenging. We tested the use of supervised machine-learning algorithms to differentiate between the 2 diseases using data generated from noninvasive diagnostic tests. Three prediction models were developed using 3 machine-learning algorithms: naive Bayes, decision trees, and artificial neural networks. The models were trained and tested on data from complete blood count (CBC) and serum chemistry (SC) results for the following 3 groups of client-owned cats: normal, inflammatory bowel disease (IBD), or alimentary lymphoma (ALA). Naive Bayes and artificial neural networks achieved higher classification accuracy (sensitivities of 70.8% and 69.2%, respectively) than the decision tree algorithm (63%, p < 0.05) in classifying cats into the 3 classes. These models can provide another noninvasive diagnostic tool to assist clinicians with differentiating between IBD and ALA, and between diseased and nondiseased cats. © 2016 The Author(s).
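
    The naive Bayes variant of this setup can be sketched with scikit-learn as follows; the feature matrix stands in for CBC and serum chemistry results, and the labels for the normal/IBD/ALA classes.

        import numpy as np
        from sklearn.naive_bayes import GaussianNB
        from sklearn.model_selection import cross_val_score

        X = np.random.rand(120, 20)          # placeholder CBC + serum chemistry values
        y = np.random.choice(["normal", "IBD", "ALA"], size=120)
        scores = cross_val_score(GaussianNB(), X, y, cv=5)   # 5-fold cross-validation
        print(scores.mean())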

  18. A Solution Generator Algorithm for Decision Making based Automated Negotiation in the Construction Domain

    Directory of Open Access Journals (Sweden)

    Arazi Idrus

    2017-12-01

    Full Text Available In this paper, we present our work-in-progress on a proposed framework for automated negotiation in the construction domain. The proposed framework enables software agents to conduct negotiations and autonomously make value-based decisions. It consists of three main components: a solution generator algorithm, a negotiation algorithm, and a conflict resolution algorithm. This paper extends the discussion of the solution generator algorithm, which enables software agents to generate solutions and rank them from the 1st to the nth solution for the negotiation stage of the operation. The solution generator algorithm consists of three steps: review solutions, rank solutions, and form ranked solutions. For validation purposes, we present a scenario that utilizes the proposed algorithm to rank solutions. The validation shows that the algorithm is promising; however, it also highlights conflicts between parties that need further negotiation.

  19. Statistical Evaluation of a Fully Automated Mammographic Breast Density Algorithm

    Directory of Open Access Journals (Sweden)

    Mohamed Abdolell

    2013-01-01

    estimated density in 5% increments for 138 “For Presentation” single MLO views; the median of the radiologists’ estimates was used as the reference standard. Agreement amongst radiologists was excellent, ICC = 0.884, 95% CI (0.854, 0.910). Similarly, the agreement between the algorithm and the reference standard was excellent, ICC = 0.862, falling within the 95% CI of the radiologists’ estimates. The Bland-Altman plot showed that the reference standard was slightly positively biased (+1.86%) compared to the algorithm-generated densities. A scatter plot showed that the algorithm moderately overestimated low densities and underestimated high densities. A box plot showed that 95% of the algorithm-generated assessments fell within one BI-RADS category of the reference standard. This study demonstrates the effective use of several statistical techniques that collectively produce a comprehensive evaluation of the algorithm and its potential to provide mammographic density measures that can be used to inform clinical practice.

  20. Automated Photogrammetric Image Matching with Sift Algorithm and Delaunay Triangulation

    DEFF Research Database (Denmark)

    Karagiannis, Georgios; Antón Castro, Francesc/François; Mioc, Darka

    2016-01-01

    An algorithm for image matching of multi-sensor and multi-temporal satellite images is developed. The method is based on the SIFT feature detector proposed by Lowe in (Lowe, 1999). First, SIFT feature points are detected independently in two images (reference and sensed image). The features detec...... of each feature set for each image are computed. The isomorphism of the Delaunay triangulations is determined to guarantee the quality of the image matching. The algorithm is implemented in Matlab and tested on World-View 2, SPOT6 and TerraSAR-X image patches....

  1. Advanced Algorithms and Automation Tools for Discrete Ordinates Methods in Parallel Environments

    Energy Technology Data Exchange (ETDEWEB)

    Alireza Haghighat

    2003-05-07

    This final report discusses major accomplishments of a 3-year project under the DOE's NEER Program. The project has developed innovative and automated algorithms, codes, and tools for solving the discrete ordinates particle transport method efficiently in parallel environments. Using a number of benchmark and real-life problems, the performance and accuracy of the new algorithms have been measured and analyzed.

  2. Development of Nuclear Power Plant Safety Evaluation Method for the Automation Algorithm Application

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Seung Geun; Seong, Poong Hyun [KAIST, Daejeon (Korea, Republic of)

    2016-10-15

    It is commonly believed that replacing human operators with automated systems would guarantee greater efficiency, lower workloads, and fewer human errors. However, conventional machine learning techniques are considered incapable of handling complex situations in NPPs. For these reasons, automation has not been actively adopted, even though human error probability increases drastically during abnormal situations in NPPs due to information overload, high workload, and the short time available for diagnosis. Recently, new machine learning techniques known as ‘deep learning’ have been actively applied in many fields, and deep-learning-based artificial intelligences (AIs) are showing better performance than conventional AIs. In 2015, the deep Q-network (DQN), one of these deep learning techniques, was developed and used to train an AI that automatically plays various Atari 2600 games, and this AI surpassed human-level play in many of them. In 2016, ‘AlphaGo’, developed by ‘Google DeepMind’ on the basis of deep learning to play the game of Go (i.e. Baduk), defeated the world Go champion Lee Se-dol with a score of 4:1. With the aim of reducing human error in NPPs, the ultimate goal of this study is the development of an automation algorithm that can cover various situations in NPPs. As a first step, a quantitative, real-time NPP safety evaluation method is being developed in order to provide training criteria for the automation algorithm. To this end, the early warning score (EWS) concept from the medical field was adopted, and its applicability is investigated in this paper. In practice, full automation (i.e. fully replacing human operators) may require much more time for validation and the investigation of side effects after the automation algorithm is developed, so adoption in the form of full automation will take a long time.

  3. An algorithm for automated layout of process description maps drawn in SBGN.

    Science.gov (United States)

    Genc, Begum; Dogrusoz, Ugur

    2016-01-01

    Evolving technology has increased the focus on genomics. The combination of today's advanced techniques with decades of molecular biology research has yielded huge amounts of pathway data. A standard named the Systems Biology Graphical Notation (SBGN) was recently introduced to allow scientists to represent biological pathways in an unambiguous, easy-to-understand and efficient manner. Although there are a number of automated layout algorithms for various types of biological networks, currently none specializes in process description (PD) maps as defined by SBGN. We propose a new automated layout algorithm for PD maps drawn in SBGN. Our algorithm is based on a force-directed automated layout algorithm called Compound Spring Embedder (CoSE). On top of the existing force scheme, additional heuristics employing new types of forces and movement rules are defined to address SBGN-specific rules. Our algorithm is the only automatic layout algorithm that properly addresses all SBGN rules for drawing PD maps, including placement of substrates and products of process nodes on opposite sides, compact tiling of members of molecular complexes, and extensive use of nested structures (compound nodes) to properly draw cellular locations and molecular complex structures. As demonstrated experimentally, the algorithm results in significant improvements over a generic layout algorithm such as CoSE in addressing SBGN rules on top of commonly accepted graph drawing criteria. An implementation of our algorithm in Java is available within the ChiLay library (https://github.com/iVis-at-Bilkent/chilay). ugur@cs.bilkent.edu.tr or dogrusoz@cbio.mskcc.org Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press.

  4. Improving automated case finding for ectopic pregnancy using a classification algorithm

    Science.gov (United States)

    Scholes, D.; Yu, O.; Raebel, M.A.; Trabert, B.; Holt, V.L.

    2011-01-01

    BACKGROUND Research and surveillance work addressing ectopic pregnancy often relies on diagnosis and procedure codes available from automated data sources. However, the use of these codes may result in misclassification of cases. Our aims were to evaluate the accuracy of standard ectopic pregnancy codes and, through the use of additional automated data, to develop and validate a classification algorithm that could potentially improve the accuracy of ectopic pregnancy case identification. METHODS Using automated databases from two US managed-care plans, Group Health Cooperative (GH) and Kaiser Permanente Colorado (KPCO), we sampled women aged 15–44 with an ectopic pregnancy diagnosis or procedure code from 2001 to 2007 and verified their true case status through medical record review. We calculated positive predictive values (PPV) for code-selected cases compared with true cases at both sites. Using additional variables from the automated databases and classification and regression tree (CART) analysis, we developed a case-finding algorithm at GH (n = 280), which was validated at KPCO (n = 500). RESULTS Compared with true cases, the PPV of code-selected cases was 68 and 81% at GH and KPCO, respectively. The case-finding algorithm identified three predictors: ≥2 visits with an ectopic pregnancy code within 180 days; International Classification of Diseases, 9th Revision, Clinical Modification codes for tubal pregnancy; and methotrexate treatment. Relative to true cases, performance measures for the development and validation sets, respectively, were: 93 and 95% sensitivity; 81 and 81% specificity; 91 and 96% PPV; 84 and 79% negative predictive value. Misclassification proportions were 32% in the development set and 19% in the validation set when using standard codes; they were 11 and 8%, respectively, when using the algorithm. CONCLUSIONS The ectopic pregnancy algorithm improved case-finding accuracy over use of standard codes alone and generalized well to a
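
    The three predictors identified by the CART analysis suggest a simple rule-style classifier, sketched below; the actual split structure of the published tree is not reproduced here, so the OR-combination is an assumption for illustration.

        def likely_true_case(visits_with_code_180d, has_tubal_code, got_methotrexate):
            """Flag a code-selected ectopic pregnancy as a probable true case,
            using the three CART-identified predictors (combination assumed)."""
            return (visits_with_code_180d >= 2) or has_tubal_code or got_methotrexate

        print(likely_true_case(1, False, True))   # -> True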

  5. Application of supervised machine learning algorithms for the classification of regulatory RNA riboswitches.

    Science.gov (United States)

    Singh, Swadha; Singh, Raghvendra

    2017-03-01

    Riboswitches, the small structured RNA elements, were discovered about a decade ago. It has since been a subject of intense interest to identify riboswitches, understand their mechanisms of action and use them in genetic engineering. The accumulation of genome and transcriptome sequence data and comparative genomics provide unprecedented opportunities to identify riboswitches in the genome. In the present study, we have evaluated the following six machine learning algorithms for their efficiency in classifying riboswitches: J48, BayesNet, Naïve Bayes, Multilayer Perceptron, sequential minimal optimization and hidden Markov model (HMM). To determine the most effective classifier, the algorithms were compared on the statistical measures of specificity, sensitivity, accuracy, F-measure and receiver operating characteristic (ROC) plot analysis. The Multilayer Perceptron classifier achieved the best performance, with the highest specificity, sensitivity, F-score and accuracy, and with the largest area under the ROC curve, whereas HMM was the poorest performer. At present, the available tools for the prediction and classification of riboswitches are based on the covariance model, support vector machine and HMM. The present study identifies the Multilayer Perceptron as a better classifier for genome-wide riboswitch searches. © The Author 2016. Published by Oxford University Press. All rights reserved. For permissions, please email: journals.permissions@oup.com.
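
    This style of classifier comparison is straightforward to reproduce with scikit-learn. The sketch below uses placeholder data and stand-in implementations (a decision tree for J48, an SVM for sequential minimal optimization); it illustrates the cross-validated comparison pattern only, not the authors' feature set or exact setup.

        from sklearn.datasets import make_classification
        from sklearn.model_selection import cross_val_score
        from sklearn.naive_bayes import GaussianNB
        from sklearn.neural_network import MLPClassifier
        from sklearn.svm import SVC
        from sklearn.tree import DecisionTreeClassifier

        # Placeholder data standing in for riboswitch feature vectors and labels.
        X, y = make_classification(n_samples=300, n_features=20, random_state=0)

        models = {
            "MultilayerPerceptron": MLPClassifier(max_iter=1000, random_state=0),
            "NaiveBayes": GaussianNB(),
            "J48-like tree": DecisionTreeClassifier(random_state=0),
            "SMO-like SVM": SVC(),
        }
        for name, model in models.items():
            auc = cross_val_score(model, X, y, cv=10, scoring="roc_auc").mean()
            print(f"{name}: mean ROC AUC = {auc:.3f}")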

  6. A new avenue for classification and prediction of olive cultivars using supervised and unsupervised algorithms.

    Directory of Open Access Journals (Sweden)

    Amir H Beiki

    Full Text Available Various methods have been used to identify cultivars of olive trees; herein we used different bioinformatics algorithms to propose new tools to classify 10 olive cultivars based on RAPD and ISSR genetic marker datasets generated from PCR reactions. Five RAPD markers (OPA0a21, OPD16a, OP01a1, OPD16a1 and OPA0a8) and five ISSR markers (UBC841a4, UBC868a7, UBC841a14, U12BC807a and UBC810a13) were selected as the most important markers by all attribute-weighting models. K-Medoids unsupervised clustering run on the SVM dataset was fully able to cluster each olive cultivar into the right class. All 176 trees induced by decision tree models were meaningful, and the UBC841a4 attribute clearly distinguished between foreign and domestic olive cultivars with 100% accuracy. Predictive machine learning algorithms (SVM and Naïve Bayes) were also able to predict the right class of olive cultivars with 100% accuracy. For the first time, our results showed that data mining techniques can be effectively used to distinguish between plant cultivars, and the machine-learning-based systems proposed in this study can predict new olive cultivars with the best possible accuracy.

  7. Design principles and algorithms for automated air traffic management

    Science.gov (United States)

    Erzberger, Heinz

    1995-01-01

    This paper presents design principles and algorithms for building a real-time scheduler. The primary objective of the scheduler is to assign arrival aircraft to a favorable landing runway and schedule them to land at times that minimize delays. A further objective of the scheduler is to allocate delays between high altitude airspace far from the airport and low altitude airspace near the airport. A method of delay allocation is described that minimizes the average operating cost in the presence of errors in controlling aircraft to a specified landing time.

  8. A scale space based algorithm for automated segmentation of single shot tagged MRI of shearing deformation.

    Science.gov (United States)

    Sprengers, Andre M J; Caan, Matthan W A; Moerman, Kevin M; Nederveen, Aart J; Lamerichs, Rolf M; Stoker, Jaap

    2013-04-01

    This study proposes a scale space based algorithm for automated segmentation of single-shot tagged images of modest SNR. Furthermore the algorithm was designed for analysis of discontinuous or shearing types of motion, i.e. segmentation of broken tag patterns. The proposed algorithm utilises non-linear scale space for automatic segmentation of single-shot tagged images. The algorithm's ability to automatically segment tagged shearing motion was evaluated in a numerical simulation and in vivo. A typical shearing deformation was simulated in a Shepp-Logan phantom allowing for quantitative evaluation of the algorithm's success rate as a function of both SNR and the amount of deformation. For a qualitative in vivo evaluation tagged images showing deformations in the calf muscles and eye movement in a healthy volunteer were acquired. Both the numerical simulation and the in vivo tagged data demonstrated the algorithm's ability for automated segmentation of single-shot tagged MR provided that SNR of the images is above 10 and the amount of deformation does not exceed the tag spacing. The latter constraint can be met by adjusting the tag delay or the tag spacing. The scale space based algorithm for automatic segmentation of single-shot tagged MR enables the application of tagged MR to complex (shearing) deformation and the processing of datasets with relatively low SNR.

  9. The effects of automated artifact removal algorithms on electroencephalography-based Alzheimer's disease diagnosis

    Science.gov (United States)

    Cassani, Raymundo; Falk, Tiago H.; Fraga, Francisco J.; Kanda, Paulo A. M.; Anghinah, Renato

    2014-01-01

    Over the last decade, electroencephalography (EEG) has emerged as a reliable tool for the diagnosis of cortical disorders such as Alzheimer's disease (AD). EEG signals, however, are susceptible to several artifacts, such as ocular, muscular, movement, and environmental. To overcome this limitation, existing diagnostic systems commonly depend on experienced clinicians to manually select artifact-free epochs from the collected multi-channel EEG data. Manual selection, however, is a tedious and time-consuming process, rendering the diagnostic system “semi-automated.” Notwithstanding, a number of EEG artifact removal algorithms have been proposed in the literature. The (dis)advantages of using such algorithms in automated AD diagnostic systems, however, have not been documented; this paper aims to fill this gap. Here, we investigate the effects of three state-of-the-art automated artifact removal (AAR) algorithms (both alone and in combination with each other) on AD diagnostic systems based on four different classes of EEG features, namely, spectral, amplitude modulation rate of change, coherence, and phase. The three AAR algorithms tested are statistical artifact rejection (SAR), blind source separation based on second order blind identification and canonical correlation analysis (BSS-SOBI-CCA), and wavelet enhanced independent component analysis (wICA). Experimental results based on 20-channel resting-awake EEG data collected from 59 participants (20 patients with mild AD, 15 with moderate-to-severe AD, and 24 age-matched healthy controls) showed the wICA algorithm alone outperforming other enhancement algorithm combinations across three tasks: diagnosis (control vs. mild vs. moderate), early detection (control vs. mild), and disease progression (mild vs. moderate), thus opening the doors for fully-automated systems that can assist clinicians with early detection of AD, as well as disease severity progression assessment. PMID:24723886

  10. The effects of automated artifact removal algorithms on electroencephalography-based Alzheimer’s disease diagnosis

    Directory of Open Access Journals (Sweden)

    Raymundo eCassani

    2014-03-01

    Full Text Available Over the last decade, electroencephalography (EEG) has emerged as a reliable tool for the diagnosis of cortical disorders such as Alzheimer's disease (AD). EEG signals, however, are susceptible to several artifacts, such as ocular, muscular, movement, and environmental. To overcome this limitation, existing diagnostic systems commonly depend on experienced clinicians to manually select artifact-free epochs from the collected multi-channel EEG data. Manual selection, however, is a tedious and time-consuming process, rendering the diagnostic system “semi-automated.” Notwithstanding, a number of EEG artifact removal algorithms have been proposed in the literature. The (dis)advantages of using such algorithms in automated AD diagnostic systems, however, have not been documented; this paper aims to fill this gap. Here, we investigate the effects of three state-of-the-art automated artifact removal (AAR) algorithms (both alone and in combination with each other) on AD diagnostic systems based on four different classes of EEG features, namely, spectral, amplitude modulation rate of change, coherence, and phase. The three AAR algorithms tested are statistical artifact rejection (SAR), blind source separation based on second order blind identification and canonical correlation analysis (BSS-SOBI-CCA), and wavelet enhanced independent component analysis (wICA). Experimental results based on 20-channel resting-awake EEG data collected from 59 participants (20 patients with mild AD, 15 with moderate-to-severe AD, and 24 age-matched healthy controls) showed the wICA algorithm alone outperforming other enhancement algorithm combinations across three tasks: diagnosis (control vs. mild vs. moderate), early detection (control vs. mild), and disease progression (mild vs. moderate), thus opening the doors for fully-automated systems that can assist clinicians with early detection of AD, as well as disease severity progression assessment.

  11. Acoustic diagnosis of pulmonary hypertension: automated speech- recognition-inspired classification algorithm outperforms physicians

    Science.gov (United States)

    Kaddoura, Tarek; Vadlamudi, Karunakar; Kumar, Shine; Bobhate, Prashant; Guo, Long; Jain, Shreepal; Elgendi, Mohamed; Coe, James Y.; Kim, Daniel; Taylor, Dylan; Tymchak, Wayne; Schuurmans, Dale; Zemp, Roger J.; Adatia, Ian

    2016-09-01

    We hypothesized that an automated speech-recognition-inspired classification algorithm could differentiate between the heart sounds in subjects with and without pulmonary hypertension (PH) and outperform physicians. Heart sounds, electrocardiograms, and mean pulmonary artery pressures (mPAp) were recorded simultaneously. Heart sound recordings were digitized to train and test speech-recognition-inspired classification algorithms. We used mel-frequency cepstral coefficients to extract features from the heart sounds. Gaussian-mixture models classified the features as PH (mPAp ≥ 25 mmHg) or normal (mPAp < 25 mmHg). The correct classification rate of the speech-recognition-inspired algorithm was 74% compared to 56% by physicians (p = 0.005). The false positive rate for the algorithm was 34% versus 50% (p = 0.04) for clinicians. The false negative rate for the algorithm was 23% and 68% (p = 0.0002) for physicians. We developed an automated speech-recognition-inspired classification algorithm for the acoustic diagnosis of PH that outperforms physicians and that could be used to screen for PH and encourage earlier specialist referral.
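
    The MFCC-plus-GMM pipeline described here follows a standard speech-recognition recipe: fit one Gaussian-mixture model per class on pooled MFCC frames, then decide by likelihood. The sketch below illustrates that recipe with librosa and scikit-learn; the file lists are placeholders and the component count is an arbitrary choice, not the authors' configuration.

        import numpy as np
        import librosa
        from sklearn.mixture import GaussianMixture

        ph_files, normal_files = [...], [...]   # placeholder lists of recording paths

        def mfcc_features(path):
            y, sr = librosa.load(path, sr=None)
            return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).T   # frames x coefficients

        # One mixture model per class, trained on pooled frames.
        gmm_ph = GaussianMixture(n_components=8).fit(np.vstack([mfcc_features(f) for f in ph_files]))
        gmm_nl = GaussianMixture(n_components=8).fit(np.vstack([mfcc_features(f) for f in normal_files]))

        def classify(path):
            m = mfcc_features(path)   # decide by average per-frame log-likelihood
            return "PH" if gmm_ph.score(m) > gmm_nl.score(m) else "normal"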

  12. Automated Escape Guidance Algorithms for An Escape Vehicle

    Science.gov (United States)

    Flanary, Ronald; Hammen, David; Ito, Daigoro; Rabalais, Bruce; Rishikof, Brian; Siebold, Karl

    2002-01-01

    An escape vehicle was designed to provide an emergency evacuation for crew members living on a space station. For maximum escape capability, the escape vehicle needs to have the ability to safely evacuate a station in a contingency scenario such as an uncontrolled (e.g., tumbling) station. This emergency escape sequence will typically be divided into three events: the first separation event (SEP1), the navigation reconstruction event, and the second separation event (SEP2). SEP1 is responsible for taking the spacecraft from its docking port to a distance greater than the maximum radius of the rotating station. The navigation reconstruction event takes place prior to the SEP2 event and establishes the orbital state to within the tolerance limits necessary for SEP2. The SEP2 event calculates and performs an avoidance burn to prevent station recontact during the next several orbits. This paper presents the tools and results for the whole separation sequence with an emphasis on the two separation events. The first challenge includes collision avoidance during the escape sequence while the station is in an uncontrolled rotational state, with rotation rates of up to 2 degrees per second. The task of avoiding a collision may require the use of the vehicle's de-orbit propulsion system for maximum thrust and minimum dwell time in the vicinity of the station. The thrust of the propulsion system is in a single direction, and can be controlled only by the attitude of the spacecraft. Escape algorithms based on a look-up table or analytical guidance can be implemented since the rotation rate and the angular momentum vector can be sensed onboard and a-priori knowledge of the position and relative orientation are available. In addition, crew intervention has been provided for in the event of unforeseen obstacles in the escape path. The purpose of the SEP2 burn is to avoid re-contact with the station over an extended period of time. Performing this maneuver properly

  13. Automated drusen detection in retinal images using analytical modelling algorithms

    Directory of Open Access Journals (Sweden)

    Manivannan Ayyakkannu

    2011-07-01

    Full Text Available Background Drusen are common features in the ageing macula associated with exudative Age-Related Macular Degeneration (ARMD). They are visible in retinal images and their quantitative analysis is important in the follow-up of ARMD. However, their evaluation is tedious and difficult to reproduce when performed manually. Methods This article proposes a methodology for Automatic Drusen Deposits Detection and quantification in Retinal Images (AD3RI) by using digital image processing techniques. It includes an image pre-processing method to correct the uneven illumination and to normalize the intensity contrast with smoothing splines. The drusen detection uses a gradient based segmentation algorithm that isolates drusen and provides basic drusen characterization to the modelling stage. The detected drusen are then fitted by Modified Gaussian functions, producing a model of the image that is used to evaluate the affected area. Twenty-two images were graded by eight experts with the aid of custom-made software and compared with AD3RI. This comparison was based both on the total area and on the pixel-to-pixel analysis. The coefficient of variation, the intraclass correlation coefficient, the sensitivity, the specificity and the kappa coefficient were calculated. Results The ground truth used in this study was the experts' average grading. In order to evaluate the proposed methodology, three indicators were defined: AD3RI compared to the ground truth (A2G); each expert compared to the other experts (E2E); and a standard Global Threshold method compared to the ground truth (T2G). The results obtained for the three indicators, A2G, E2E and T2G, were: coefficient of variation 28.8%, 22.5% and 41.1%, intraclass correlation coefficient 0.92, 0.88 and 0.67, sensitivity 0.68, 0.67 and 0.74, specificity 0.96, 0.97 and 0.94, and kappa coefficient 0.58, 0.60 and 0.49, respectively. Conclusions The gradings produced by AD3RI obtained an agreement

  14. Location-Based Self-Adaptive Routing Algorithm for Wireless Sensor Networks in Home Automation

    Directory of Open Access Journals (Sweden)

    Hong SeungHo

    2011-01-01

    Full Text Available The use of wireless sensor networks in home automation (WSNHA) is attractive due to their characteristics of self-organization, high sensing fidelity, low cost, and potential for rapid deployment. Although the AODVjr routing algorithm in IEEE 802.15.4/ZigBee and other routing algorithms have been designed for wireless sensor networks, not all are suitable for WSNHA. In this paper, we propose a location-based self-adaptive routing algorithm for WSNHA called WSNHA-LBAR. It confines route discovery flooding to a cylindrical request zone, which reduces the routing overhead and decreases broadcast storm problems in the MAC layer. It also automatically adjusts the size of the request zone using a self-adaptive algorithm based on Bayes' theorem. This makes WSNHA-LBAR more adaptable to the changes of the network state and easier to implement. Simulation results show improved network reliability as well as reduced routing overhead.

  15. Shockwave-Based Automated Vehicle Longitudinal Control Algorithm for Nonrecurrent Congestion Mitigation

    Directory of Open Access Journals (Sweden)

    Liuhui Zhao

    2017-01-01

    Full Text Available A shockwave-based speed harmonization algorithm for the longitudinal movement of automated vehicles is presented in this paper. In the advent of the Connected/Automated Vehicle (C/AV) environment, the proposed algorithm can be applied to capture instantaneous shockwaves constructed from vehicular speed profiles shared by individual equipped vehicles. With a continuous wavelet transform (CWT) method, the algorithm detects abnormal speed drops in real-time and optimizes speed to prevent the shockwave propagating to the upstream traffic. A traffic simulation model is calibrated to evaluate the applicability and efficiency of the proposed algorithm. Based on 100% C/AV market penetration, the simulation results show that the CWT-based algorithm accurately detects abnormal speed drops. With the improved accuracy of abnormal speed drop detection, the simulation results also demonstrate that the congestion can be mitigated by reducing travel time and delay up to approximately 9% and 18%, respectively. It is also found that the shockwave caused by nonrecurrent congestion is quickly dissipated even with low market penetration.
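
    The detection step can be pictured as a wavelet transform of the speed profile followed by a threshold on the aggregate response. The sketch below uses PyWavelets' Mexican-hat CWT with an illustrative threshold; it shows the detection pattern only, not the published algorithm or its calibrated parameters.

        import numpy as np
        import pywt

        def detect_speed_drops(speeds, scales=(2, 4, 8), ratio=3.0):
            """Flag samples whose aggregate wavelet response is anomalously large."""
            coefs, _ = pywt.cwt(np.asarray(speeds, float), scales, "mexh")  # Mexican-hat CWT
            energy = np.abs(coefs).sum(axis=0)       # aggregate response across scales
            return np.where(energy > ratio * energy.mean())[0]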

  16. A new memetic algorithm for mitigating tandem automated guided vehicle system partitioning problem

    Science.gov (United States)

    Pourrahimian, Parinaz

    2017-11-01

    Automated Guided Vehicle System (AGVS) provides the flexibility and automation demanded by Flexible Manufacturing System (FMS). However, with the growing concern regarding responsible management of resource use, it is crucial to manage these vehicles efficiently in order to reduce travel time and control conflicts and congestion. This paper presents the development process of a new Memetic Algorithm (MA) for optimizing the partitioning problem of tandem AGVS. MAs employ a Genetic Algorithm (GA) as a global search and apply a local search to bring the solutions to a local optimum point. A new Tabu Search (TS) has been developed and combined with a GA to refine the individuals newly generated by the GA. The aim of the proposed algorithm is to minimize the maximum workload of the system. Finally, the performance of the proposed algorithm is evaluated using Matlab. This study also compared the objective function of the proposed MA with that of the GA. The results showed that the TS, as a local search, significantly improves the objective function of the GA for different system sizes with large and small numbers of zones, by 1.26 on average.
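
    The overall MA structure described here, GA operators for global search plus a tabu-style local refinement of each new individual, can be summarized in a short generic loop. The sketch below is schematic; the zone-partition encoding, the operators and the workload fitness are problem-specific placeholders, not the paper's implementation.

        import random

        def memetic_search(population, fitness, crossover, mutate, local_search, generations=100):
            """Generic MA loop: GA global search plus a local-search refinement step."""
            for _ in range(generations):
                population.sort(key=fitness)                 # minimise, e.g., max workload
                parents = population[: len(population) // 2]
                children = [local_search(mutate(crossover(random.choice(parents),
                                                          random.choice(parents))))
                            for _ in range(len(population) - len(parents))]
                population = parents + children              # elitist replacement
            return min(population, key=fitness)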

  17. Automated sequence-specific protein NMR assignment using the memetic algorithm MATCH

    Energy Technology Data Exchange (ETDEWEB)

    Volk, Jochen [ETH Zuerich, Institut fuer Molekularbiologie und Biophysik (Switzerland); Herrmann, Torsten [Universite de Lyon, CNRS/ENS Lyon/UCB-Lyon 1 (France); Wuethrich, Kurt [ETH Zuerich, Institut fuer Molekularbiologie und Biophysik (Switzerland)], E-mail: wuthrich@mol.biol.ethz.ch

    2008-07-15

    MATCH (Memetic Algorithm and Combinatorial Optimization Heuristics) is a new memetic algorithm for automated sequence-specific polypeptide backbone NMR assignment of proteins. MATCH employs local optimization for tracing partial sequence-specific assignments within a global, population-based search environment, where the simultaneous application of local and global optimization heuristics guarantees high efficiency and robustness. MATCH thus makes combined use of the two predominant concepts in use for automated NMR assignment of proteins. Dynamic transition and inherent mutation are new techniques that enable automatic adaptation to variable quality of the experimental input data. The concept of dynamic transition is incorporated in all major building blocks of the algorithm, where it enables switching between local and global optimization heuristics at any time during the assignment process. Inherent mutation restricts the intrinsically required randomness of the evolutionary algorithm to those regions of the conformation space that are compatible with the experimental input data. Using intact and artificially deteriorated APSY-NMR input data of proteins, MATCH performed sequence-specific resonance assignment with high efficiency and robustness.

  18. Progress on automated data analysis algorithms for ultrasonic inspection of composites

    Science.gov (United States)

    Aldrin, John C.; Forsyth, David S.; Welter, John T.

    2015-03-01

    Progress is presented on the development and demonstration of automated data analysis (ADA) software to address the burden in interpreting ultrasonic inspection data for large composite structures. The automated data analysis algorithm is presented in detail, which follows standard procedures for analyzing signals for time-of-flight indications and backwall amplitude dropout. New algorithms have been implemented to reliably identify indications in time-of-flight images near the front and back walls of composite panels. Adaptive call criteria have also been applied to address sensitivity to variation in backwall signal level, panel thickness variation, and internal signal noise. ADA processing results are presented for a variety of test specimens that include inserted materials and discontinuities produced under poor manufacturing conditions. Software tools have been developed to support both ADA algorithm design and certification, producing a statistical evaluation of indication results and false calls using a matching process with predefined truth tables. Parametric studies were performed to evaluate detection and false call results with respect to varying algorithm settings.

  19. A novel validation algorithm allows for automated cell tracking and the extraction of biologically meaningful parameters.

    Directory of Open Access Journals (Sweden)

    Daniel H Rapoport

    Full Text Available Automated microscopy is currently the only method to non-invasively and label-free observe complex multi-cellular processes, such as cell migration, cell cycle, and cell differentiation. Extracting biological information from a time-series of micrographs requires each cell to be recognized and followed through sequential microscopic snapshots. Although recent attempts to automate this process have resulted in ever-improving cell detection rates, manual identification of identical cells is still the most reliable technique. However, its tedious and subjective nature has prevented tracking from becoming a standardized tool for the investigation of cell cultures. Here, we present a novel method to accomplish automated cell tracking with a reliability comparable to manual tracking. Previously, automated cell tracking could not rival the reliability of manual tracking because, in contrast to the human way of solving this task, none of the algorithms had an independent quality-control mechanism; they lacked validation. Thus, instead of trying to improve the cell detection or tracking rates, we proceeded from the idea to automatically inspect the tracking results and accept only those of high trustworthiness, while rejecting all other results. This validation algorithm works independently of the quality of cell detection and tracking through a systematic search for tracking errors. It is based only on very general assumptions about the spatiotemporal contiguity of cell paths. While traditional tracking often aims to yield genealogic information about single cells, the natural outcome of a validated cell tracking algorithm turns out to be a set of complete, but often unconnected cell paths, i.e. records of cells from mitosis to mitosis. This is a consequence of the fact that the validation algorithm takes complete paths as the unit of rejection/acceptance. The resulting set of complete paths can be used to automatically extract important biological parameters

  20. Automated contouring error detection based on supervised geometric attribute distribution models for radiation therapy: a general strategy.

    Science.gov (United States)

    Chen, Hsin-Chen; Tan, Jun; Dolly, Steven; Kavanaugh, James; Anastasio, Mark A; Low, Daniel A; Li, H Harold; Altman, Michael; Gay, Hiram; Thorstad, Wade L; Mutic, Sasa; Li, Hua

    2015-02-01

    One of the most critical steps in radiation therapy treatment is accurate tumor and critical organ-at-risk (OAR) contouring. Both manual and automated contouring processes are prone to errors and to a large degree of inter- and intraobserver variability. These are often due to the limitations of imaging techniques in visualizing human anatomy as well as to inherent anatomical variability among individuals. Physicians/physicists have to reverify all the radiation therapy contours of every patient before using them for treatment planning, which is tedious, laborious, and still not an error-free process. In this study, the authors developed a general strategy based on novel geometric attribute distribution (GAD) models to automatically detect radiation therapy OAR contouring errors and facilitate the current clinical workflow. Considering the radiation therapy structures' geometric attributes (centroid, volume, and shape), the spatial relationship of neighboring structures, as well as anatomical similarity of individual contours among patients, the authors established GAD models to characterize the interstructural centroid and volume variations, and the intrastructural shape variations of each individual structure. The GAD models are scalable and deformable, and constrained by their respective principal attribute variations calculated from training sets with verified OAR contours. A new iterative weighted GAD model-fitting algorithm was developed for contouring error detection. Receiver operating characteristic (ROC) analysis was employed in a unique way to optimize the model parameters to satisfy clinical requirements. A total of forty-four head-and-neck patient cases, each of which includes nine critical OAR contours, were utilized to demonstrate the proposed strategy. Twenty-nine out of these forty-four patient cases were utilized to train the inter- and intrastructural GAD models. These training data and the remaining fifteen testing data sets were separately

  1. Genetic algorithms approach to the problem of the automated vehicle identification equipment location

    Energy Technology Data Exchange (ETDEWEB)

    Teodorovic, D.; Van Aerde, M.; Zhu, F.; Dion, F. [Virginia Polytechnic Institute and State University, Dept. of Civil and Environmental Engineering, Blacksburg, VA (United States)

    2002-12-31

    Automated Vehicle Identification technology allows vehicles equipped with special tags to be detected at specific points in the transportation network without any action by the driver as they pass under a reading station. Benefits of the systems are found in the real-time measurement of traffic patterns, traffic operations and control, reduction of traffic congestion at transportation facilities, transportation planning studies, information and control, electronic toll collection, vehicle identification and other related functions. The objective of this paper is to develop a heuristic model for the optimal location of automated vehicle identification equipment using genetic algorithms. A model is proposed and tested for the case of a relatively small hypothetical transportation network. Testing the model showed promising results. Other metaheuristic approaches, such as simulated annealing and tabu search, have been identified as the most important directions for future research. 4 refs., 1 tab., 11 figs.

  2. Electronic design automation of analog ICs combining gradient models with multi-objective evolutionary algorithms

    CERN Document Server

    Rocha, Frederico AE; Lourenço, Nuno CC; Horta, Nuno CG

    2013-01-01

    This book applies to the scientific area of electronic design automation (EDA) and addresses the automatic sizing of analog integrated circuits (ICs). Particularly, this book presents an approach to enhance a state-of-the-art layout-aware circuit-level optimizer (GENOM-POF), by embedding statistical knowledge from an automatically generated gradient model into the multi-objective multi-constraint optimization kernel based on the NSGA-II algorithm. The results shown allow the designer to explore the different trade-offs of the solution space, both through the achieved device sizes, or the resp

  3. Algorithms of control parameters selection for automation of FDM 3D printing process

    Directory of Open Access Journals (Sweden)

    Kogut Paweł

    2017-01-01

    Full Text Available The paper presents algorithms of control parameters selection for the Fused Deposition Modelling (FDM) technology in the case of an open printing-solutions environment and the 3DGence ONE printer. The following parameters were distinguished: model mesh density, material flow speed, cooling performance, retraction and printing speeds. In principle these parameters are independent of the printing system, but in practice they depend to a certain degree on the features of the selected printing equipment. This is the first step toward automation of the 3D printing process in FDM technology.

  4. Deadlock-free genetic scheduling algorithm for automated manufacturing systems based on deadlock control policy.

    Science.gov (United States)

    Xing, KeYi; Han, LiBin; Zhou, MengChu; Wang, Feng

    2012-06-01

    Deadlock-free control and scheduling are vital for optimizing the performance of automated manufacturing systems (AMSs) with shared resources and route flexibility. Based on the Petri net models of AMSs, this paper embeds the optimal deadlock avoidance policy into the genetic algorithm and develops a novel deadlock-free genetic scheduling algorithm for AMSs. A possible solution of the scheduling problem is coded as a chromosome representation that is a permutation with repetition of parts. By using the one-step look-ahead method in the optimal deadlock control policy, the feasibility of a chromosome is checked, and infeasible chromosomes are amended into feasible ones, which can be easily decoded into a feasible deadlock-free schedule. The chromosome representation and the polynomial complexity of the checking and amending procedures together strongly support the cooperative aspect of genetic search for scheduling problems.

  5. Automated EEG artifact elimination by applying machine learning algorithms to ICA-based features

    Science.gov (United States)

    Radüntz, Thea; Scouten, Jon; Hochmuth, Olaf; Meffert, Beate

    2017-08-01

    Objective. Biological and non-biological artifacts cause severe problems when dealing with electroencephalogram (EEG) recordings. Independent component analysis (ICA) is a widely used method for eliminating various artifacts from recordings. However, evaluating and classifying the calculated independent components (ICs) as artifact or EEG is not fully automated at present. Approach. In this study, we propose a new approach for automated artifact elimination, which applies machine learning algorithms to ICA-based features. Main results. We compared the performance of our classifiers with the visual classification results given by experts. The best result, with an accuracy rate of 95%, was achieved using features obtained by range filtering of the topoplots and IC power spectra combined with an artificial neural network. Significance. Compared with the existing automated solutions, our proposed method is not limited to specific types of artifacts, electrode configurations, or numbers of EEG channels. The main advantage of the proposed method is that it provides an automatic, reliable, real-time capable, and practical tool, which avoids the need for the time-consuming manual selection of ICs during artifact removal.
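
    Once a classifier has flagged which ICs are artifactual, removing them is a standard ICA back-projection. A minimal sketch with scikit-learn's FastICA follows; the component classification itself, which is the paper's actual contribution, is omitted here and the artifact indices are assumed given.

        import numpy as np
        from sklearn.decomposition import FastICA

        def remove_artifact_components(eeg, artifact_idx):
            """eeg: samples x channels; artifact_idx: indices of ICs judged artifactual."""
            ica = FastICA(n_components=eeg.shape[1], random_state=0)
            sources = ica.fit_transform(eeg)          # unmix into independent components
            sources[:, artifact_idx] = 0.0            # suppress the artifact components
            return ica.inverse_transform(sources)     # back-project to channel space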

  6. Thermal depth profiling of vascular lesions: automated regularization of reconstruction algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Verkruysse, Wim; Choi, Bernard; Zhang, Jenny R; Kim, Jeehyun; Nelson, J Stuart [Beckman Laser Institute and Medical Clinic, University of California, Irvine, CA 92612 (United States)], E-mail: wverkruy@uci.edu

    2008-03-07

    Pulsed photo-thermal radiometry (PPTR) is a non-invasive, non-contact diagnostic technique used to locate cutaneous chromophores such as melanin (epidermis) and hemoglobin (vascular structures). Clinical utility of PPTR is limited because it typically requires trained user intervention to regularize the inversion solution. Herein, the feasibility of automated regularization was studied. A second objective of this study was to depart from modeling port wine stains (PWS), a vascular skin lesion frequently studied with PPTR, as strictly layered structures, since this may influence conclusions regarding PPTR reconstruction quality. Average blood vessel depths, diameters and densities derived from histology of 30 PWS patients were used to generate 15 randomized lesion geometries for which we simulated PPTR signals. Reconstruction accuracy for subjective regularization was compared with that for automated regularization methods. The objective regularization approach performed better. However, the average difference was much smaller than the variation between the 15 simulated profiles. Reconstruction quality depended more on the actual profile to be reconstructed than on the reconstruction algorithm or regularization method. Similar, or better, accuracy reconstructions can be achieved with an automated regularization procedure, which enhances prospects for user friendly implementation of PPTR to optimize laser therapy on an individual patient basis.

  7. An Energy Efficient Adaptive Sampling Algorithm in a Sensor Network for Automated Water Quality Monitoring.

    Science.gov (United States)

    Shu, Tongxin; Xia, Min; Chen, Jiahong; Silva, Clarence de

    2017-11-05

    Power management is crucial in the monitoring of a remote environment, especially when long-term monitoring is needed. Renewable energy sources such as solar and wind may be harvested to sustain a monitoring system. However, without proper power management, equipment within the monitoring system may become nonfunctional and, as a consequence, the data or events captured during the monitoring process will become inaccurate as well. This paper develops and applies a novel adaptive sampling algorithm for power management in the automated monitoring of the quality of water in an extensive and remote aquatic environment. Based on the data collected online using sensor nodes, a data-driven adaptive sampling algorithm (DDASA) is developed for improving the power efficiency while ensuring the accuracy of sampled data. The developed algorithm is evaluated using two distinct key parameters, which are dissolved oxygen (DO) and turbidity. It is found that by dynamically changing the sampling frequency, the battery lifetime can be effectively prolonged while maintaining a required level of sampling accuracy. According to the simulation results, compared to a fixed sampling rate, approximately 30.66% of the battery energy can be saved for three months of continuous water quality monitoring. Using the same dataset to compare with a traditional adaptive sampling algorithm (ASA), while achieving around the same Normalized Mean Error (NME), DDASA is superior in saving 5.31% more battery energy.
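
    The core idea, sample faster when the measured parameter is changing quickly and slower when it is stable, can be expressed as a small update rule. The following sketch is illustrative only; the constants and the rate-of-change heuristic are assumptions, not the published DDASA.

        def next_interval(history, base=600.0, min_s=60.0, max_s=3600.0, k=5.0):
            """Choose the next sampling interval (seconds) from recent DO/turbidity readings."""
            if len(history) < 2:
                return base
            rate = abs(history[-1] - history[-2])    # recent rate of change of the parameter
            return max(min_s, min(max_s, base / (1.0 + k * rate)))  # faster sampling on rapid change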

  8. Large-scale image region documentation for fully automated image biomarker algorithm development and evaluation.

    Science.gov (United States)

    Reeves, Anthony P; Xie, Yiting; Liu, Shuang

    2017-04-01

    With the advent of fully automated image analysis and modern machine learning methods, there is a need for very large image datasets having documented segmentations for both computer algorithm training and evaluation. This paper presents a method and implementation for facilitating such datasets that addresses the critical issue of size scaling for algorithm validation and evaluation; current evaluation methods that are usually used in academic studies do not scale to large datasets. This method includes protocols for the documentation of many regions in very large image datasets; the documentation may be incrementally updated by new image data and by improved algorithm outcomes. This method has been used for 5 years in the context of chest health biomarkers from low-dose chest CT images that are now being used with increasing frequency in lung cancer screening practice. The lung scans are segmented into over 100 different anatomical regions, and the method has been applied to a dataset of over 20,000 chest CT images. Using this framework, the computer algorithms have been developed to achieve over 90% acceptable image segmentation on the complete dataset.

  9. Optimal Solution for VLSI Physical Design Automation Using Hybrid Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    I. Hameem Shanavas

    2014-01-01

    Full Text Available In the optimization of VLSI physical design, area minimization and interconnect-length minimization are important objectives in the physical design automation of very large scale integration chips. Minimizing the area and interconnect length scales down the size of integrated chips. To meet this objective, it is necessary to find an optimal solution for physical design components like partitioning, floorplanning, placement, and routing. This work performs the optimization of benchmark circuits with the above components of physical design using a hierarchical approach of evolutionary algorithms. The goals of minimizing the delay in partitioning, the silicon area in floorplanning, the layout area in placement, and the wirelength in routing also influence other criteria like power, clock, speed, and cost. A hybrid evolutionary algorithm is applied in each of these phases to achieve the objective, because an evolutionary algorithm that includes one or many local search steps within its evolutionary cycles can obtain the minimization of area and interconnect length. This approach combines a hierarchical design like genetic algorithm and simulated annealing to attain the objective. This hybrid approach can quickly produce optimal solutions for the popular benchmarks.

  10. Synthesis Study on Transitions in Signal Infrastructure and Control Algorithms for Connected and Automated Transportation

    Energy Technology Data Exchange (ETDEWEB)

    Aziz, H. M. Abdul [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Wang, Hong [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Young, Stan [National Renewable Energy Lab. (NREL), Golden, CO (United States); Sperling, Joshua [National Renewable Energy Lab. (NREL), Golden, CO (United States); Beck, John [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2017-06-01

    Documenting the existing state of practice is an initial step in developing future control infrastructure to be co-deployed for a heterogeneous mix of connected and automated vehicles and human drivers while leveraging benefits to safety, congestion, and energy. With advances in information technology and extensive deployment of connected and automated vehicle technology anticipated over the coming decades, cities globally are making efforts to plan and prepare for these transitions. CAVs not only offer opportunities to improve transportation systems through enhanced safety and efficient operation of vehicles; there are also significant needs in terms of exploring how best to leverage vehicle-to-vehicle (V2V) technology, vehicle-to-infrastructure (V2I) technology and vehicle-to-everything (V2X) technology. Both the Connected Vehicle (CV) and the Connected and Automated Vehicle (CAV) paradigms feature bi-directional connectivity and share similar applications in terms of signal control algorithms and infrastructure implementation. The discussion in our synthesis study assumes the CAV/CV context, where connectivity exists with or without automated vehicles. Our synthesis study explores the current state of signal control algorithms and infrastructure, reports the completed and newly proposed CV/CAV deployment studies regarding signal control schemes, reviews the deployment costs for CAV/AV signal infrastructure, and concludes with a discussion of opportunities, such as detector-free signal control schemes and dynamic performance management for intersections, and challenges, such as dependency on market adoption and the need to build a fault-tolerant signal system deployment in a CAV/CV environment. The study will serve as an initial critical assessment of existing signal control infrastructure (devices, control instruments, and firmware) and control schemes (actuated, adaptive, and coordinated-green wave). Also, the report will help to identify the future needs for the signal

  11. An automated phase correction algorithm for retrieving permittivity and permeability of electromagnetic metamaterials

    Directory of Open Access Journals (Sweden)

    Z. X. Cao

    2014-06-01

    Full Text Available To retrieve the complex-valued effective permittivity and permeability of electromagnetic metamaterials (EMMs) based on resonant effects from scattering parameters, the use of a complex logarithmic function is unavoidable. When complex values are expressed in terms of magnitude and phase, an infinite number of phase angles is permissible due to the multi-valued property of complex logarithmic functions. Special attention needs to be paid to ensure continuity of the effective permittivity and permeability of lossy metamaterials as frequency sweeps. In this paper, an automated phase correction (APC) algorithm is proposed to properly trace and compensate phase angles of the complex logarithmic function, which may experience abrupt phase jumps near the resonant frequency region of the concerned EMMs, and hence the continuity of the effective optical properties of lossy metamaterials is ensured. The algorithm is then verified by extracting effective optical properties from the simulated scattering parameters of four different types of metamaterial media: a cut-wire cell array, a split ring resonator (SRR) cell array, an electric-LC (E-LC) resonator cell array, and a combined SRR and wire cell array, respectively. The results demonstrate that the proposed algorithm is highly accurate and effective.
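
    The essence of such phase correction is phase unwrapping: making the phase of the complex logarithm continuous across the frequency sweep. A compact sketch using numpy.unwrap is shown below; the published APC algorithm is more involved near resonances, so this only conveys the underlying idea.

        import numpy as np

        def continuous_log(s_params):
            """log of complex S-parameters with phase made continuous in frequency."""
            mag = np.abs(s_params)
            phase = np.unwrap(np.angle(s_params))   # compensate 2*pi phase jumps
            return np.log(mag) + 1j * phase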

  12. Automated reconstruction algorithm for identification of 3D architectures of cribriform ductal carcinoma in situ.

    Directory of Open Access Journals (Sweden)

    Kerri-Ann Norton

    Full Text Available Ductal carcinoma in situ (DCIS) is a pre-invasive carcinoma of the breast that exhibits several distinct morphologies, but the link between morphology and patient outcome is not clear. We hypothesize that different mechanisms of growth may still result in similar 2D morphologies, which may look different in 3D. To elucidate the connection between growth and 3D morphology, we reconstruct the 3D architecture of cribriform DCIS from resected patient material. We produce a fully automated algorithm that aligns, segments, and reconstructs 3D architectures from microscopy images of 2D serial sections of human specimens. The alignment algorithm is based on normalized cross-correlation; the segmentation algorithm uses histogram equalization, Otsu's thresholding, and morphology techniques to segment the duct and cribra. The reconstruction method combines these images in 3D. We show that two distinct 3D architectures are indeed found in samples whose 2D histological sections are similarly identified as cribriform DCIS. These differences in architecture support the hypothesis that luminal spaces may form due to different mechanisms, either isolated cell death or merging fronds, leading to the different architectures. We find that out of 15 samples, 6 were found to have 'bubble-like' cribra, 6 were found to have 'tube-like' cribra and 3 were 'unknown.' We propose that the 3D architectures found, 'bubbles' and 'tubes', account for some of the heterogeneity of the disease and may be prognostic indicators of different patient outcomes.
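
    The two building blocks named above map onto standard scikit-image operations. The sketch below uses phase correlation as a stand-in for the normalized cross-correlation alignment, followed by histogram equalization and Otsu thresholding; it illustrates the shape of the pipeline, not the authors' code.

        from scipy.ndimage import shift
        from skimage import exposure
        from skimage.filters import threshold_otsu
        from skimage.registration import phase_cross_correlation

        def align(fixed, moving):
            """Shift `moving` to match `fixed` using a cross-correlation estimate."""
            offset, _, _ = phase_cross_correlation(fixed, moving)
            return shift(moving, offset)

        def segment(section):
            """Histogram equalization then Otsu threshold -> binary mask."""
            eq = exposure.equalize_hist(section)
            return eq > threshold_otsu(eq)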

  13. Semiautomated and automated algorithms for analysis of the carotid artery wall on computed tomography and sonography: a correlation study.

    Science.gov (United States)

    Saba, Luca; Tallapally, Niranjan; Gao, Hao; Molinari, Filippo; Anzidei, Michele; Piga, Mario; Sanfilippo, Roberto; Suri, Jasjit S

    2013-04-01

    The purpose of this study was to compare automated and semiautomated algorithms for analysis of carotid artery wall thickness and intima-media thickness on multidetector row computed tomographic (CT) angiography and sonography, respectively, and to study the correlation between them. Twenty consecutive patients underwent multidetector row CT angiographic and sonographic analysis of the carotid arteries (mean age, 66 years; age range, 59-79 years). The intima-media thickness of the 40 carotid arteries was measured with novel and dedicated automated software analysis and by 4 observers who manually calculated the intima-media thickness. The carotid artery wall thickness was automatically estimated by using a specific algorithm and was also semiautomatically quantified. The correlation between groups was calculated by using the Pearson ρ statistic, and scatterplots were calculated. We evaluated intermethod agreement using Bland-Altman analysis. By comparing automated carotid artery wall thickness, automated intima-media thickness, semiautomated carotid artery wall thickness, and semiautomated intima-media thickness analyses, a statistically significant association was found, with the highest values obtained for the association between semiautomated and automated intima-media thickness analyses (Pearson ρ = 0.9; 95% confidence interval, 0.82-0.95; P = 0.0001). The lowest values were obtained for the association between semiautomated intima-media thickness and automated carotid artery wall thickness analyses (Pearson ρ = 0.44; 95% confidence interval, 0.15-0.66; P = 0.0047). In the Bland-Altman analysis, the best results were obtained by comparing the semiautomated and automated algorithms for the study of intima-media thickness, with an interval of -16.1% to +43.6%. The results of this preliminary study showed that carotid artery wall thickness and intima-media thickness can be studied with automated software, although the CT analysis needs to be further improved.
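
    Both agreement measures used here are short computations in SciPy/NumPy. The sketch below computes the Pearson correlation plus the Bland-Altman bias and 95% limits of agreement for two paired measurement arrays; it is a generic illustration of the statistics, not the study's analysis code.

        import numpy as np
        from scipy.stats import pearsonr

        def agreement(a, b):
            """Pearson r/p-value plus Bland-Altman bias and 95% limits of agreement."""
            a, b = np.asarray(a, float), np.asarray(b, float)
            r, p = pearsonr(a, b)
            diff = a - b
            bias, sd = diff.mean(), diff.std(ddof=1)   # sample SD of the differences
            return r, p, bias, (bias - 1.96 * sd, bias + 1.96 * sd)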

  14. Electroencephalogram Signal Classification for Automated Epileptic Seizure Detection Using Genetic Algorithm.

    Science.gov (United States)

    Nanthini, B Suguna; Santhi, B

    2017-01-01

    Epilepsy arises when repeated seizures occur in the brain. The electroencephalogram (EEG) test provides valuable information about brain function and can be useful for detecting brain disorders, especially epilepsy. In this study, an automated seizure detection model has been successfully introduced. The EEG signals are decomposed into sub-bands by discrete wavelet transform using the db2 (Daubechies) wavelet. Eight statistical features, four gray-level co-occurrence matrix features and Renyi entropy estimates with four different orders are extracted from the raw EEG and its sub-bands. A genetic algorithm (GA) is used to select eight relevant features from the 16-dimensional feature set. The model has been trained and tested successfully for EEG signals using a support vector machine (SVM) classifier. The performance of the SVM classifier is evaluated on two different databases. The study has been conducted through two different analyses and achieved satisfactory performance for automated seizure detection using the relevant features as the input to the SVM classifier. The relevant features selected by the GA give better accuracy for seizure detection.
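
    The front end of such a model, db2 wavelet decomposition followed by simple per-band statistics feeding an SVM, can be sketched in a few lines with PyWavelets and scikit-learn. The feature choice below is illustrative and the GA selection step is omitted; this is not the paper's exact feature set.

        import numpy as np
        import pywt
        from sklearn.svm import SVC

        def sub_band_features(eeg_segment, level=4):
            """Statistics of each db2 sub-band of a 1-D EEG segment."""
            coeffs = pywt.wavedec(eeg_segment, "db2", level=level)   # [cA4, cD4, ..., cD1]
            return np.array([f(c) for c in coeffs
                             for f in (np.mean, np.std, np.min, np.max)])

        # Usage sketch: X = np.array([sub_band_features(s) for s in segments])
        #               clf = SVC().fit(X, labels)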

  15. Terminating supervision.

    Science.gov (United States)

    Levendosky, Alytia A; Hopwood, Christopher J

    2017-03-01

    The focus of this paper is on the termination of clinical supervision. Although clinical supervision is considered the backbone of most mental health training programs, it gets relatively little theoretical or empirical attention. The termination of supervision has received even less attention. In this paper, we describe an approach to terminating supervision in our treatment team, which integrates intensive assessment with a relational perspective in a clinical science training program (Levendosky & Hopwood, 2016). We describe our established conceptual framework, review empirical evidence, and provide verbatim examples from final supervision meetings on our team to elaborate the importance of conceptualizing individual differences across trainees and parallels between supervision and psychotherapy dynamics. We conclude by emphasizing the need for research on supervision in general and supervision termination in particular. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  16. Supervising Unsupervised Learning

    OpenAIRE

    Garg, Vikas K.; Kalai, Adam

    2017-01-01

    We introduce a framework to leverage knowledge acquired from a repository of (heterogeneous) supervised datasets to new unsupervised datasets. Our perspective avoids the subjectivity inherent in unsupervised learning by reducing it to supervised learning, and provides a principled way to evaluate unsupervised algorithms. We demonstrate the versatility of our framework via simple agnostic bounds on unsupervised problems. In the context of clustering, our approach can help choose the number of ...

  17. DEVELOPMENT AND TESTING OF ERRORS CORRECTION ALGORITHM IN ELECTRONIC DESIGN AUTOMATION

    Directory of Open Access Journals (Sweden)

    E. B. Romanova

    2016-03-01

    Full Text Available Subject of Research. We have developed and present a method of design-error correction for printed circuit boards (PCBs) in electronic design automation (EDA). Control of the process parameters of a PCB in EDA is carried out by means of the Design Rule Check (DRC) program. The DRC program monitors compliance with the design rules (minimum width of the conductors and gaps, the parameters of pads and via-holes, the parameters of polygons, etc.) and also checks the route tracing, short circuits, the presence of objects outside the PCB edge and other design errors. The result of running the DRC program is a generated error report. For quality production of circuit boards, DRC errors should be corrected, which is ensured by the creation of an error-free DRC report. Method. A problem of repeatability of DRC-error correction was identified as a result of trial operation of the P-CAD, Altium Designer and KiCAD programs. To solve it, an analysis of DRC errors was carried out and the methods of their correction were studied. We propose clustering the DRC errors: groups of errors include the types of errors whose correction sequence has no impact on the correction time. An algorithm for the correction of DRC errors is proposed. Main Results. The best correction sequence of DRC errors has been determined. The algorithm has been tested in the following EDA tools: P-CAD, Altium Designer and KiCAD. Testing has been carried out on two- and four-layer test PCBs (digital and analog). The DRC-error correction time with the algorithm applied has been compared with the correction time without it. It has been shown that the time saved on DRC-error correction increases with the number of error types, up to 3.7 times. Practical Relevance. Application of the proposed algorithm will reduce PCB design time and improve the quality of the PCB design. We recommend using the developed algorithm when the number of error types is four or more. The proposed algorithm can be used in different

  18. Automated condition-invariable neurite segmentation and synapse classification using textural analysis-based machine-learning algorithms.

    Science.gov (United States)

    Kandaswamy, Umasankar; Rotman, Ziv; Watt, Dana; Schillebeeckx, Ian; Cavalli, Valeria; Klyachko, Vitaly A

    2013-02-15

    High-resolution live-cell imaging studies of neuronal structure and function are characterized by large variability in image acquisition conditions due to background and sample variations as well as low signal-to-noise ratio. The lack of automated image analysis tools that can be generalized for varying image acquisition conditions represents one of the main challenges in the field of biomedical image analysis. Specifically, segmentation of the axonal/dendritic arborizations in brightfield or fluorescence imaging studies is extremely labor-intensive and still performed mostly manually. Here we describe a fully automated machine-learning approach based on textural analysis algorithms for segmenting neuronal arborizations in high-resolution brightfield images of live cultured neurons. We compare the performance of our algorithm to manual segmentation and show that it combines 90% accuracy with similarly high levels of specificity and sensitivity. Moreover, the algorithm maintains high performance levels under a wide range of image acquisition conditions, indicating that it is largely condition-invariable. We further describe an application of this algorithm to fully automated synapse localization and classification in fluorescence imaging studies based on synaptic activity. The textural analysis-based machine-learning approach thus offers a high-performance, condition-invariable tool for automated neurite segmentation. Copyright © 2012 Elsevier B.V. All rights reserved.
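
    Textural analysis of this kind typically starts from gray-level co-occurrence matrix (GLCM) statistics computed over image patches, which then feed a classifier. The sketch below shows that feature-extraction step with scikit-image; it is a generic illustration, not the authors' feature set.

        import numpy as np
        from skimage.feature import graycomatrix, graycoprops

        def texture_features(patch):
            """patch: 2-D uint8 image region; returns a small GLCM feature vector."""
            glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                                levels=256, symmetric=True, normed=True)
            return [graycoprops(glcm, p).mean()
                    for p in ("contrast", "homogeneity", "energy", "correlation")]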

  19. Methodology, Algorithms, and Emerging Tool for Automated Design of Intelligent Integrated Multi-Sensor Systems

    Directory of Open Access Journals (Sweden)

    Andreas König

    2009-11-01

    Full Text Available The emergence of novel sensing elements, computing nodes, wireless communication and integration technology provides unprecedented possibilities for the design and application of intelligent systems. Each new application system must be designed from scratch, employing sophisticated methods ranging from conventional signal processing to computational intelligence. Currently, a significant part of this overall algorithmic chain of the computational system model still has to be assembled manually by experienced designers in a time- and labor-consuming process. In this research work, this challenge is taken up, and a methodology and algorithms for the automated design of intelligent integrated and resource-aware multi-sensor systems employing multi-objective evolutionary computation are introduced. The proposed methodology tackles the challenge of rapid prototyping of such systems under realization constraints and, additionally, includes features of system-instance-specific self-correction for sustained operation at large volume and in a dynamically changing environment. The extension of these concepts to the reconfigurable hardware platform renders so-called self-x sensor systems, where self-x stands, e.g., for self-monitoring, -calibrating, -trimming, and -repairing/-healing. Selected experimental results prove the applicability and effectiveness of our proposed methodology and emerging tool. With our approach, competitive results were achieved with regard to classification accuracy, flexibility, and design speed under additional design constraints.

  20. Automated ethernet-based test setup for long wave infrared camera analysis and algorithm evaluation

    Science.gov (United States)

    Edeler, Torsten; Ohliger, Kevin; Lawrenz, Sönke; Hussmann, Stephan

    2009-06-01

    In this paper we consider a new way of automated camera calibration and specification. The proposed setup is optimized for working with uncooled long-wave infrared (thermal) cameras, while the concept itself is not restricted to those cameras. Every component of the setup, such as the black-body source, climate chamber, remote power switch, and the camera itself, is connected to a network via Ethernet, and a Windows XP workstation controls all components by use of the Tcl script language. Besides communicating with the components, the script tool is also capable of running Matlab code via the Matlab kernel. Data exchange during the measurement is possible and offers a variety of advantages, from a drastic reduction of the amount of data to an enormous speedup of the measuring procedure due to data analysis during measurement. A parameter-based software framework is presented to create generic test cases, where modification of the test scenario does not require any programming skills. In the second part of the paper the measurement results of a self-developed GigE-Vision thermal camera are presented, and correction algorithms providing high-quality image output are shown. These algorithms are fully implemented in the FPGA of the camera to provide real-time processing while maintaining GigE-Vision as the standard transmission protocol and interface to arbitrary software tools. Artifacts taken into account are spatial noise, defective pixels and offset drift due to self-heating after power-on.

  1. An Automated Defect Prediction Framework using Genetic Algorithms: A Validation of Empirical Studies

    Directory of Open Access Journals (Sweden)

    Juan Murillo-Morera

    2016-05-01

    Full Text Available Today, it is common for software projects to collect measurement data through development processes. With these data, defect prediction software can try to estimate the defect proneness of a software module, with the objective of assisting and guiding software practitioners. With timely and accurate defect predictions, practitioners can focus their limited testing resources on higher-risk areas. This paper reports the results of three empirical studies that use an automated genetic defect prediction framework. This framework generates and compares different learning schemes (preprocessing + attribute selection + learning algorithm) and selects the best one using a genetic algorithm, with the objective of estimating the defect proneness of a software module. The first empirical study is a performance comparison of our framework with the most important framework in the literature. The second empirical study is a performance and runtime comparison between our framework and an exhaustive framework. The third empirical study, a sensitivity analysis, is our main contribution in this paper. Performance of the software development defect prediction models (using AUC, the Area Under the Curve) was validated using the NASA-MDP and PROMISE data sets. Seventeen data sets from NASA-MDP (13) and PROMISE (4) projects were analyzed running an NxM-fold cross-validation. A genetic algorithm was used to select the components of the learning schemes automatically and to assess and report the results. Our results showed similar performance between the frameworks, with our framework achieving better runtime than the exhaustive one. Finally, we report the best configuration according to the sensitivity analysis.
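
    The learning-scheme search described above can be sketched compactly: a genome indexes one choice of preprocessor, attribute selector, and learner, and fitness is cross-validated AUC. This is a minimal illustration built on scikit-learn components chosen for the example, not the framework's actual configuration.

    ```python
    import random
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler, MinMaxScaler
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.linear_model import LogisticRegression
    from sklearn.naive_bayes import GaussianNB
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.model_selection import cross_val_score

    PREPROC = [StandardScaler, MinMaxScaler]
    SELECTK = [5, 10, 20]            # assumes X has at least 20 attributes
    LEARNERS = [lambda: LogisticRegression(max_iter=1000),
                GaussianNB, DecisionTreeClassifier]

    def fitness(genome, X, y):
        p, k, l = genome             # one learning scheme = one genome
        pipe = make_pipeline(PREPROC[p](),
                             SelectKBest(f_classif, k=SELECTK[k]),
                             LEARNERS[l]())
        return cross_val_score(pipe, X, y, cv=5, scoring="roc_auc").mean()

    def genetic_search(X, y, pop_size=12, generations=10, mut_rate=0.2):
        dims = (len(PREPROC), len(SELECTK), len(LEARNERS))
        pop = [tuple(random.randrange(d) for d in dims) for _ in range(pop_size)]
        for _ in range(generations):
            scored = sorted(pop, key=lambda g: fitness(g, X, y), reverse=True)
            parents = scored[:pop_size // 2]            # truncation selection
            children = []
            while len(children) < pop_size - len(parents):
                a, b = random.sample(parents, 2)
                child = tuple(random.choice(p) for p in zip(a, b))  # crossover
                if random.random() < mut_rate:          # point mutation
                    i = random.randrange(3)
                    child = child[:i] + (random.randrange(dims[i]),) + child[i + 1:]
                children.append(child)
            pop = parents + children
        return max(pop, key=lambda g: fitness(g, X, y))
    ```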

  2. An optimized routing algorithm for the automated assembly of standard multimode ribbon fibers in a full-mesh optical backplane

    Science.gov (United States)

    Basile, Vito; Guadagno, Gianluca; Ferrario, Maddalena; Fassi, Irene

    2018-03-01

    In this paper a parametric, modular and scalable algorithm enabling the fully automated assembly of a backplane fiber-optic interconnection circuit is presented. This approach guarantees the optimization of the optical fiber routing inside the backplane with respect to specific criteria (i.e., bending power losses), addressing both transmission performance and overall cost issues. Graph theory has been exploited to simplify the complexity of the NxN full-mesh backplane interconnection topology, first into N independent sub-circuits and then, recursively, into a limited number of loops that are easier to generate. Afterwards, the proposed algorithm selects a set of geometrical and architectural parameters whose optimization identifies the optimal fiber-optic routing for each sub-circuit of the backplane. The topological and numerical information provided by the algorithm is then exploited to control a robot which performs the automated assembly of the backplane sub-circuits. The proposed routing algorithm can be extended to any array architecture and number of connections thanks to its modularity and scalability. Finally, the algorithm has been exploited for the automated assembly of an 8x8 optical backplane realized with standard multimode (MM) 12-fiber ribbons.
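
    To make the graph-theoretic step concrete, the classical Walecki construction below decomposes a full mesh on an odd number of nodes into edge-disjoint Hamiltonian loops. It illustrates the flavor of decomposing a full mesh into loops, but it is not the paper's exact recursion, which also covers even-sized meshes such as the 8x8 demonstrator.

    ```python
    def walecki_cycles(n):
        """Decompose the full mesh K_n (n odd) into (n-1)//2 edge-disjoint
        Hamiltonian loops, each closed through a 'hub' vertex n-1."""
        assert n % 2 == 1, "this classical construction needs an odd mesh size"
        m = (n - 1) // 2
        offsets = [0]
        for d in range(1, m):
            offsets += [d, -d]                  # zigzag around the rim vertices
        offsets.append(m)
        cycles = []
        for k in range(m):
            rim = [(k + o) % (n - 1) for o in offsets]
            cycles.append([n - 1] + rim)        # prepend the hub vertex
        return cycles

    # Sanity check: the loops of K_9 cover every link exactly once.
    loops = walecki_cycles(9)
    edges = [frozenset(e) for c in loops for e in zip(c, c[1:] + c[:1])]
    assert len(set(edges)) == len(edges) == 9 * 8 // 2
    ```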

  3. National Automated Surveillance of Hospital-Acquired Bacteremia in Denmark Using a Computer Algorithm.

    Science.gov (United States)

    Gubbels, Sophie; Nielsen, Jens; Voldstedlund, Marianne; Kristensen, Brian; Schønheyder, Henrik C; Ellermann-Eriksen, Svend; Engberg, Jørgen H; Møller, Jens K; Østergaard, Christian; Mølbak, Kåre

    2017-05-01

    BACKGROUND In 2015, Denmark launched an automated surveillance system for hospital-acquired infections, the Hospital-Acquired Infections Database (HAIBA). OBJECTIVE To describe the algorithm used in HAIBA, to determine its concordance with point prevalence surveys (PPSs), and to present trends for hospital-acquired bacteremia. SETTING Private and public hospitals in Denmark. METHODS A hospital-acquired bacteremia case was defined as at least 1 positive blood culture with at least 1 pathogen (bacterium or fungus) taken between 48 hours after admission and 48 hours after discharge, using the Danish Microbiology Database and the Danish National Patient Registry. PPSs performed in 2012 and 2013 were used for comparison. RESULTS National trends showed an increase in HA bacteremia cases between 2010 and 2014. Incidence was higher for men than women (9.6 vs 5.4 per 10,000 risk days) and was highest for those aged 61-80 years (9.5 per 10,000 risk days). The median daily prevalence was 3.1% (range, 2.1%-4.7%). Regional incidence varied from 6.1 to 8.1 per 10,000 risk days. The microorganisms identified were typical for HA bacteremia. Comparison of HAIBA with PPS showed a sensitivity of 36% and a specificity of 99%. HAIBA was less sensitive for patients in hematology departments and intensive care units. Excluding these departments improved the sensitivity of HAIBA to 44%. CONCLUSIONS Although the estimated sensitivity of HAIBA compared with PPS is low, a PPS is not a gold standard. Given the many advantages of automated surveillance, HAIBA allows monitoring of HA bacteremia across the healthcare system, supports prioritizing preventive measures, and holds promise for evaluating interventions. Infect Control Hosp Epidemiol 2017;38:559-566.
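
    The 48-hour window at the heart of the case definition is simple enough to state directly in code; the function below is an illustrative paraphrase, not the HAIBA production implementation.

    ```python
    from datetime import datetime, timedelta

    WINDOW = timedelta(hours=48)

    def is_hospital_acquired(sample_time, admission, discharge):
        """True if a positive blood culture falls in the HA bacteremia window:
        between 48 h after admission and 48 h after discharge."""
        return admission + WINDOW <= sample_time <= discharge + WINDOW

    # Example: culture drawn 3 days into a week-long admission.
    adm = datetime(2015, 3, 1, 8, 0)
    dis = datetime(2015, 3, 8, 8, 0)
    print(is_hospital_acquired(datetime(2015, 3, 4, 10, 0), adm, dis))  # True
    ```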

  4. Application of supervised range-constrained thresholding to extract lung pleura for automated detection of pleural thickenings from thoracic CT images

    Science.gov (United States)

    Chaisaowong, K.; Knepper, A.; Kraus, T.; Aach, T.

    2007-03-01

    We develop an image analysis system to automatically detect pleural thickenings and assess their characteristic values from patients' thoracic spiral CT images. Algorithms are described to carry out the segmentation of pleural contours and to find the pleural thickenings. Thresholding was selected as the technique to separate lung tissue from other tissue. Instead of thresholding based only on empirical considerations, the so-called "supervised range-constrained thresholding" is applied. The automatic detection of pleural thickenings is then carried out based on the examination of concavities in the pleural contour and on the characteristic Hounsfield units of tumorous tissue. After detection of the pleural thickenings, in order to assess their growth rate, a spline-based interpolation technique is used to create a model of the healthy pleura. Based on this healthy model, the size of the pleural thickenings is calculated. In conjunction with the spatio-temporal matching of CT images acquired at different times, the oncopathological assessment of morbidity can be documented. A graphical user interface is provided which is also equipped with 3D visualization of the pleura. Our overall aim is to develop an image analysis system for an efficient and reliable diagnosis of early-stage pleural mesothelioma, in order to ease the consequences of the expected peak of malignant pleural mesothelioma caused by asbestos exposure.
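
    As a rough illustration of the idea, the sketch below maximizes an Otsu-style between-class variance criterion, but only over a gray-value range supplied by supervised examples; the range bounds and binning are assumptions, not the paper's exact formulation.

    ```python
    import numpy as np

    def range_constrained_threshold(image, lo, hi):
        """Pick the threshold in [lo, hi] maximizing between-class variance."""
        hist, edges = np.histogram(image, bins=256)
        p = hist / hist.sum()
        centers = 0.5 * (edges[:-1] + edges[1:])
        best_t, best_score = lo, -np.inf
        for i, t in enumerate(centers):
            if not (lo <= t <= hi):
                continue                          # supervised range constraint
            w0, w1 = p[:i + 1].sum(), p[i + 1:].sum()
            if w0 == 0 or w1 == 0:
                continue
            mu0 = (p[:i + 1] * centers[:i + 1]).sum() / w0
            mu1 = (p[i + 1:] * centers[i + 1:]).sum() / w1
            score = w0 * w1 * (mu0 - mu1) ** 2    # between-class variance
            if score > best_score:
                best_t, best_score = t, score
        return best_t
    ```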

  5. A benchmark study of automated intra-retinal cyst segmentation algorithms using optical coherence tomography B-scans.

    Science.gov (United States)

    Girish, G N; Anima, V A; Kothari, Abhishek R; Sudeep, P V; Roychowdhury, Sohini; Rajan, Jeny

    2018-01-01

    Retinal cysts are formed by accumulation of fluid in the retina caused by leakages from inflammation or vitreous fractures. Analysis of the retinal cystic spaces holds significance in the detection and treatment of several ocular diseases such as age-related macular degeneration and diabetic macular edema. Thus, segmentation of intra-retinal cysts and quantification of cystic spaces are vital for retinal pathology and severity detection. In recent years, automated segmentation of intra-retinal cysts using optical coherence tomography B-scans has gained significant importance in the field of retinal image analysis. The objective of this paper is to compare different intra-retinal cyst segmentation algorithms for benchmarking purposes. In this work, we employ a modular approach for standardizing the different segmentation algorithms. Further, we analyze the variations in automated cyst segmentation performance and method scalability across image acquisition systems by using the publicly available cyst segmentation challenge dataset (the OPTIMA cyst segmentation challenge). Several key automated methods are comparatively analyzed using quantitative and qualitative experiments. Our analysis demonstrates the significance of variations in signal-to-noise ratio (SNR), retinal layer morphology, and post-processing steps on the automated cyst segmentation processes. This benchmarking study provides insights into the scalability of automated processes across vendor-specific imaging modalities to provide guidance for retinal pathology diagnostics and treatment processes. Copyright © 2017 Elsevier B.V. All rights reserved.
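
    Benchmark studies of this kind typically score each algorithm against ground-truth annotations with overlap metrics such as the Dice coefficient; a minimal version for binary cyst masks is shown below (the paper's full metric set is not reproduced).

    ```python
    import numpy as np

    def dice_coefficient(pred, truth):
        """Dice overlap between two binary masks (1 = cyst, 0 = background)."""
        pred, truth = pred.astype(bool), truth.astype(bool)
        inter = np.logical_and(pred, truth).sum()
        denom = pred.sum() + truth.sum()
        return 2.0 * inter / denom if denom else 1.0
    ```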

  6. Cairn detection in southern Arabia using a supervised automatic detection algorithm and multiple sample data spectroscopic clustering

    Science.gov (United States)

    Schuetter, Jared Michael

    Excavating cairns in southern Arabia is a way for anthropologists to understand which factors led ancient settlers to transition from a pastoral lifestyle and tribal narrative to the formation of the states that exist today. Locating these monuments has traditionally been done in the field, relying on eyewitness reports and costly searches through the arid landscape. In this thesis, an algorithm for automatically detecting cairns in satellite imagery is presented. The algorithm uses a set of filters in a window-based approach to eliminate background pixels and other objects that do not look like cairns. The resulting set of detected objects constitutes fewer than 0.001% of the pixels in the satellite image, and contains the objects that look the most like cairns in the imagery. When a training set of cairns is available, a further reduction of this set of objects can take place, along with a likelihood-based ranking system. To aid in cairn detection, the satellite image is also clustered to determine land-form classes that tend to be consistent with the presence of cairns. Due to the large number of pixels in the image, a subsample spectral clustering algorithm called "Multiple Sample Data Spectroscopic clustering" is used. This multiple-sample clustering procedure is motivated by perturbation studies on single-sample spectral algorithms. The studies, presented in this thesis, show that sampling variability in the single-sample approach can cause an unsatisfactory level of instability in clustering results. The multiple sample data spectroscopic clustering algorithm is intended to stabilize this perturbation by combining information from different samples. While sampling variability is still present, the use of multiple samples mitigates its effect on cluster results. Finally, a step-through of the cairn detection algorithm and satellite image clustering is given for an image in the Hadramawt region of Yemen. The top-ranked detected objects are presented, and a discussion

  7. Supervised Machine Learning Algorithms Can Classify Open-Text Feedback of Doctor Performance With Human-Level Accuracy.

    Science.gov (United States)

    Gibbons, Chris; Richards, Suzanne; Valderas, Jose Maria; Campbell, John

    2017-03-15

    Machine learning techniques may be an effective and efficient way to classify open-text reports on doctors' activity for the purposes of quality assurance, safety, and continuing professional development. The objective of the study was to evaluate the accuracy of machine learning algorithms trained to classify open-text reports of doctor performance and to assess the potential for classifications to identify significant differences in doctors' professional performance in the United Kingdom. We used 1636 open-text comments (34,283 words) relating to the performance of 548 doctors, collected from a survey of clinicians' colleagues using the General Medical Council Colleague Questionnaire (GMC-CQ). We coded 77.75% (1272/1636) of the comments into 5 global themes (innovation, interpersonal skills, popularity, professionalism, and respect) using a qualitative framework. We trained 8 machine learning algorithms to classify comments and assessed their performance using several training samples. We evaluated doctor performance using the GMC-CQ and compared scores between doctors with different classifications using t tests. Individual algorithm performance was high (range F score=.68 to .83). Interrater agreement between the algorithms and the human coder was highest for the "popular" (recall=.97), "innovator" (recall=.98), and "respected" (recall=.87) codes, and was lower for the "interpersonal" (recall=.80) and "professional" (recall=.82) codes. A 10-fold cross-validation demonstrated similar performance in each analysis. When combined together into an ensemble of multiple algorithms, mean human-computer interrater agreement was .88. Comments classified as "respected," "professional," and "interpersonal" related to higher doctor scores on the GMC-CQ compared with comments that were not so classified (P<.05). Machine learning algorithms can classify open-text feedback of doctor performance into multiple themes derived by human raters with high
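
    A hedged sketch of the general recipe, in Python rather than the study's own tooling: bag-of-words features feed several supervised classifiers that are combined into a voting ensemble. The feature pipeline and the two models are illustrative choices, not the study's eight algorithms.

    ```python
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.pipeline import make_pipeline
    from sklearn.ensemble import VotingClassifier, RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    def build_theme_classifier():
        ensemble = VotingClassifier([
            ("lr", LogisticRegression(max_iter=1000)),
            ("rf", RandomForestClassifier(n_estimators=200)),
        ], voting="soft")                       # averages class probabilities
        return make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), ensemble)

    # comments: list of strings; labels: one of the 5 global themes per comment
    # scores = cross_val_score(build_theme_classifier(), comments, labels,
    #                          cv=10, scoring="f1_macro")
    ```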

  8. Automated Algorithms for Quantum-Level Accuracy in Atomistic Simulations: LDRD Final Report.

    Energy Technology Data Exchange (ETDEWEB)

    Thompson, Aidan Patrick; Schultz, Peter Andrew; Crozier, Paul; Moore, Stan Gerald; Swiler, Laura Painton; Stephens, John Adam; Trott, Christian Robert; Foiles, Stephen Martin; Tucker, Garritt J. (Drexel University)

    2014-09-01

    This report summarizes the result of LDRD project 12-0395, titled "Automated Algorithms for Quantum-level Accuracy in Atomistic Simulations." During the course of this LDRD, we have developed an interatomic potential for solids and liquids called Spectral Neighbor Analysis Potential (SNAP). The SNAP potential has a very general form and uses machine-learning techniques to reproduce the energies, forces, and stress tensors of a large set of small configurations of atoms, which are obtained using high-accuracy quantum electronic structure (QM) calculations. The local environment of each atom is characterized by a set of bispectrum components of the local neighbor density projected on to a basis of hyperspherical harmonics in four dimensions. The SNAP coefficients are determined using weighted least-squares linear regression against the full QM training set. This allows the SNAP potential to be fit in a robust, automated manner to large QM data sets using many bispectrum components. The calculation of the bispectrum components and the SNAP potential are implemented in the LAMMPS parallel molecular dynamics code. Global optimization methods in the DAKOTA software package are used to seek out good choices of hyperparameters that define the overall structure of the SNAP potential. FitSnap.py, a Python-based software package interfacing to both LAMMPS and DAKOTA, is used to formulate the linear regression problem, solve it, and analyze the accuracy of the resultant SNAP potential. We describe a SNAP potential for tantalum that accurately reproduces a variety of solid and liquid properties. Most significantly, in contrast to existing tantalum potentials, SNAP correctly predicts the Peierls barrier for screw dislocation motion. We also present results from SNAP potentials generated for indium phosphide (InP) and silica (SiO2). We describe efficient algorithms for calculating SNAP forces and energies in molecular dynamics simulations using massively parallel computers.
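
    The core fitting step is a weighted linear least-squares problem: per-atom bispectrum descriptors regressed against QM energies, forces, and stresses. A generic weighted solve is sketched below with assumed array shapes; it stands in for, and is much simpler than, the actual FitSnap.py machinery.

    ```python
    import numpy as np

    def fit_snap_coefficients(A, b, weights):
        """Solve min ||W^(1/2) (A beta - b)||^2 for the potential coefficients.

        A: (n_observations, n_bispectrum) descriptor matrix
        b: (n_observations,) QM reference values (energies/forces/stresses)
        weights: (n_observations,) per-observation weights
        """
        w = np.sqrt(weights)
        beta, *_ = np.linalg.lstsq(A * w[:, None], b * w, rcond=None)
        return beta
    ```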

  9. Automation of RELAP5 input calibration and code validation using genetic algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Phung, Viet-Anh, E-mail: vaphung@kth.se [Division of Nuclear Power Safety, Royal Institute of Technology, Roslagstullsbacken 21, 10691 Stockholm (Sweden); Kööp, Kaspar, E-mail: kaspar@safety.sci.kth.se [Division of Nuclear Power Safety, Royal Institute of Technology, Roslagstullsbacken 21, 10691 Stockholm (Sweden); Grishchenko, Dmitry, E-mail: dmitry@safety.sci.kth.se [Division of Nuclear Power Safety, Royal Institute of Technology, Roslagstullsbacken 21, 10691 Stockholm (Sweden); Vorobyev, Yury, E-mail: yura3510@gmail.com [National Research Center “Kurchatov Institute”, Kurchatov square 1, Moscow 123182 (Russian Federation); Kudinov, Pavel, E-mail: pavel@safety.sci.kth.se [Division of Nuclear Power Safety, Royal Institute of Technology, Roslagstullsbacken 21, 10691 Stockholm (Sweden)

    2016-04-15

    Highlights: • Automated input calibration and code validation using a genetic algorithm is presented. • Predictions generally overlap experiments for individual system response quantities (SRQs). • It was not possible to predict simultaneously the experimental maximum flow rate and oscillation period. • Simultaneous consideration of multiple SRQs is important for code validation. - Abstract: Validation of system thermal-hydraulic codes is an important step in the application of the codes to reactor safety analysis. The goal of the validation process is to determine how well a code can represent physical reality. This is achieved by comparing predicted and experimental system response quantities (SRQs), taking into account experimental and modelling uncertainties. Parameters which are required for the code input but not measured directly in the experiment can become an important source of uncertainty in the code validation process. Quantification of such parameters is often called input calibration. Calibration and uncertainty quantification may become challenging tasks when the number of calibrated input parameters and SRQs is large and the dependencies between them are complex. If only engineering judgment is employed in the process, the outcome can be prone to so-called "user effects". The goal of this work is to develop an automated approach to input calibration and RELAP5 code validation against data on two-phase natural circulation flow instability. Multiple SRQs are used in both calibration and validation. In the input calibration, we used a genetic algorithm (GA), a heuristic global optimization method, in order to minimize the discrepancy between experimental and simulation data by identifying optimal combinations of uncertain input parameters in the calibration process. We demonstrate the importance of the proper selection of SRQs and of the respective normalization and weighting factors in the fitness function. In the code validation, we used maximum flow rate as the

  10. On the implementation of an automated acoustic output optimization algorithm for subharmonic aided pressure estimation.

    Science.gov (United States)

    Dave, J K; Halldorsdottir, V G; Eisenbrey, J R; Merton, D A; Liu, J B; Machado, P; Zhao, H; Park, S; Dianis, S; Chalek, C L; Thomenius, K E; Brown, D B; Forsberg, F

    2013-04-01

    Incident acoustic output (IAO)-dependent subharmonic signal amplitudes from ultrasound contrast agents can be categorized into occurrence, growth, or saturation stages. Subharmonic aided pressure estimation (SHAPE) is a technique that utilizes growth-stage subharmonic signal amplitudes for hydrostatic pressure estimation. In this study, we developed an automated IAO optimization algorithm to identify the IAO level eliciting growth-stage subharmonic signals and also studied the effect of pulse length on SHAPE. This approach may help eliminate the problem of acquiring and analyzing the data offline at all IAO levels, as was done in previous studies, and thus pave the way for real-time clinical pressure monitoring applications. The IAO optimization algorithm was implemented on a Logiq 9 (GE Healthcare, Milwaukee, WI) scanner interfaced with a computer. The optimization algorithm stepped the ultrasound scanner from 0% to 100% IAO. A logistic equation was fitted with the criterion of minimum least-squared error between the fitted and the measured subharmonic amplitudes as a function of the IAO levels, and the optimum IAO level was chosen as the inflection point calculated from the fitted data. The efficacy of the optimum IAO level was investigated for in vivo SHAPE to monitor portal vein (PV) pressures in 5 canines, and was compared with the performance of IAO levels below and above the optimum for 4, 8 and 16 transmit cycles. The canines received a continuous infusion of Sonazoid microbubbles (1.5 μl/kg/min; GE Healthcare, Oslo, Norway). PV pressures were obtained using a surgically introduced pressure catheter (Millar Instruments, Inc., Houston, TX) and were recorded before and after increasing PV pressures. The experiments showed that optimum IAO levels for SHAPE in the canines ranged from 6% to 40%. The best correlation between changes in PV pressures and in subharmonic amplitudes (r=-0.76; p=0
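
    A minimal sketch of the optimization step, assuming a three-parameter logistic model: fit subharmonic amplitude versus IAO by least squares and report the fitted inflection point as the optimum IAO. Parameter names and the synthetic data are illustrative.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def logistic(iao, top, slope, x0):
        return top / (1.0 + np.exp(-slope * (iao - x0)))

    def optimum_iao(iao_levels, subharmonic_db):
        """Least-squares logistic fit; the inflection point of the fitted
        curve (x0) marks the growth stage of the subharmonic response."""
        p0 = [subharmonic_db.max(), 0.1, np.median(iao_levels)]
        (top, slope, x0), _ = curve_fit(logistic, iao_levels, subharmonic_db,
                                        p0=p0, maxfev=10_000)
        return x0

    iao = np.linspace(0, 100, 21)                     # 0%..100% IAO sweep
    amp = logistic(iao, 25, 0.15, 30) + np.random.normal(0, 0.5, iao.size)
    print(f"optimum IAO ~ {optimum_iao(iao, amp):.1f}%")
    ```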

  11. Combination of mass spectrometry-based targeted lipidomics and supervised machine learning algorithms in detecting adulterated admixtures of white rice.

    Science.gov (United States)

    Lim, Dong Kyu; Long, Nguyen Phuoc; Mo, Changyeun; Dong, Ziyuan; Cui, Lingmei; Kim, Giyoung; Kwon, Sung Won

    2017-10-01

    The mixing of extraneous ingredients with original products is a common adulteration practice in food and herbal medicines. In particular, the authenticity of white rice and its corresponding blended products has become a key issue in the food industry. Accordingly, our current study aimed to develop and evaluate a novel discrimination method by combining targeted lipidomics with powerful supervised learning methods, and eventually to introduce a platform to verify the authenticity of white rice. A total of 30 cultivars were collected, and 330 representative samples of white rice from Korea and China, as well as seven mixing ratios, were examined. Random forests (RF), support vector machines (SVM) with a radial basis function kernel, C5.0, model-averaged neural networks, and k-nearest neighbor classifiers were used for the classification. We achieved the desired results: the classifiers effectively differentiated white rice from Korea from blended samples, with high prediction accuracy at contamination ratios as low as five percent. In addition, the RF and SVM classifiers were generally superior to and more robust than the other techniques. Our approach demonstrated that the relative differences in lysoGPLs can be successfully utilized to detect the adulterated mixing of white rice originating from different countries. In conclusion, the present study introduces a novel and high-throughput platform that can be applied to authenticate adulterated admixtures from original white rice samples. Copyright © 2017 Elsevier Ltd. All rights reserved.

  12. Microseismic event location using global optimization algorithms: An integrated and automated workflow

    Science.gov (United States)

    Lagos, Soledad R.; Velis, Danilo R.

    2018-02-01

    We perform the location of microseismic events generated in hydraulic fracturing monitoring scenarios using two global optimization techniques: Very Fast Simulated Annealing (VFSA) and Particle Swarm Optimization (PSO), and compare them against the classical grid search (GS). To this end, we present an integrated and optimized workflow that concatenates into an automated bash script the different steps that lead from raw 3C data to the location of the microseismic events. First, we carry out the automatic detection, denoising, and identification of the P- and S-waves. Secondly, we estimate their corresponding backazimuths using polarization information, and propose a simple energy-based criterion to automatically decide which is the most reliable estimate. Finally, after taking proper care of the size of the search space using the backazimuth information, we perform the location using the aforementioned algorithms for usual 2D and 3D scenarios of hydraulic fracturing processes. We assess the impact of restricting the search space and show the advantages of using either VFSA or PSO over GS to attain significant speed-ups.
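
    The sketch below shows PSO applied to a toy version of the location problem: minimizing the misfit between observed and predicted P-wave arrival times in a homogeneous velocity model. The geometry, velocity, and PSO constants are assumptions for illustration; the actual workflow also exploits backazimuths and supports VFSA.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def travel_times(src, receivers, vp=3000.0):
        """P-wave travel times from one source to all receivers (constant vp)."""
        return np.linalg.norm(receivers - src, axis=1) / vp

    def locate_pso(t_obs, receivers, lo, hi, n_particles=40, iters=200):
        """lo, hi: 3-vectors bounding the (x, y, z) search volume in meters."""
        def misfit(s):
            return np.sum((travel_times(s, receivers) - t_obs) ** 2)
        x = rng.uniform(lo, hi, (n_particles, 3))      # candidate hypocenters
        v = np.zeros_like(x)
        pbest, pbest_f = x.copy(), np.array([misfit(s) for s in x])
        gbest = pbest[pbest_f.argmin()].copy()
        for _ in range(iters):
            r1, r2 = rng.random((2, n_particles, 1))
            v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
            x = np.clip(x + v, lo, hi)
            f = np.array([misfit(s) for s in x])
            better = f < pbest_f
            pbest[better], pbest_f[better] = x[better], f[better]
            gbest = pbest[pbest_f.argmin()].copy()
        return gbest
    ```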

  13. Automated recognition of the pericardium contour on processed CT images using genetic algorithms.

    Science.gov (United States)

    Rodrigues, É O; Rodrigues, L O; Oliveira, L S N; Conci, A; Liatsis, P

    2017-08-01

    This work proposes the use of Genetic Algorithms (GA) in tracing and recognizing the pericardium contour of the human heart using Computed Tomography (CT) images. We assume that each slice of the pericardium can be modelled by an ellipse, the parameters of which need to be optimally determined. An optimal ellipse would be one that closely follows the pericardium contour and, consequently, appropriately separates the epicardial and mediastinal fats of the human heart. Tracing and automatically identifying the pericardium contour aids in medical diagnosis. Usually, this process is done manually or not done at all due to the effort required. Moreover, detecting the pericardium may improve previously proposed automated methodologies that separate the two types of fat associated with the human heart. Quantification of these fats provides important health-risk-marker information, as they are associated with the development of certain cardiovascular pathologies. Finally, we conclude that GA offers satisfactory solutions in a feasible amount of processing time. Copyright © 2017 Elsevier Ltd. All rights reserved.

  14. The design of 3D scaffold for tissue engineering using automated scaffold design algorithm.

    Science.gov (United States)

    Mahmoud, Shahenda; Eldeib, Ayman; Samy, Sherif

    2015-06-01

    Several advances have been made in the field of bone regenerative medicine, and a new term, tissue engineering (TE), was coined. In TE, a highly porous artificial extracellular matrix, or scaffold, is required to accommodate cells and guide their growth in three dimensions. The design of scaffolds with desirable internal and external structure represents a challenge for TE. In this paper, we introduce a new method known as automated scaffold design (ASD) for designing a 3D scaffold with minimal mismatches in its geometrical parameters. The method makes use of the k-means clustering algorithm to separate the different tissues and hence delineate the defective bone portions. The segmented portions of the different slices are registered to construct the 3D volume for the data. It also uses an isosurface rendering technique for 3D visualization of the scaffold and bones. It provides the ability to visualize the transplanted as well as the normal bone portions. The proposed system demonstrates good performance in both its segmentation results and its visualizations.

  15. System Performance of an Integrated Airborne Spacing Algorithm with Ground Automation

    Science.gov (United States)

    Swieringa, Kurt A.; Wilson, Sara R.; Baxley, Brian T.

    2016-01-01

    The National Aeronautics and Space Administration's (NASA's) first Air Traffic Management (ATM) Technology Demonstration (ATD-1) was created to facilitate the transition of mature ATM technologies from the laboratory to operational use. The technologies selected for demonstration are the Traffic Management Advisor with Terminal Metering (TMA-TM), which provides precise time-based scheduling in the terminal airspace; Controller Managed Spacing (CMS), which provides controllers with decision support tools to enable precise schedule conformance; and Interval Management (IM), which consists of flight deck automation that enables aircraft to achieve or maintain precise spacing behind another aircraft. Recent simulations and IM algorithm development at NASA have focused on trajectory-based IM operations, where aircraft equipped with IM avionics are expected to achieve a spacing goal, assigned by air traffic controllers, at the final approach fix. The recently published IM Minimum Operational Performance Standards describe five types of IM operations. This paper discusses the results and conclusions of a human-in-the-loop simulation that investigated three of those IM operations. The results presented in this paper focus on system performance and integration metrics. Overall, the IM operations conducted in this simulation integrated well with ground-based decision support tools, and certain types of IM operations were able to provide improved spacing precision at the final approach fix; however, some issues were identified that should be addressed prior to implementing IM procedures into real-world operations.

  16. Seasonal cultivated and fallow cropland mapping using MODIS-based automated cropland classification algorithm

    Science.gov (United States)

    Wu, Zhuoting; Thenkabail, Prasad S.; Mueller, Rick; Zakzeski, Audra; Melton, Forrest; Johnson, Lee; Rosevelt, Carolyn; Dwyer, John; Jones, Jeanine; Verdin, James P.

    2014-01-01

    Increasing drought occurrences and growing populations demand accurate, routine, and consistent cultivated and fallow cropland products to enable water and food security analysis. The overarching goal of this research was to develop and test an automated cropland classification algorithm (ACCA) that provides accurate, consistent, and repeatable information on seasonal cultivated as well as seasonal fallow cropland extents and areas, based on Moderate Resolution Imaging Spectroradiometer remote sensing data. The seasonal ACCA development process involves writing a series of iterative decision-tree codes to separate cultivated and fallow croplands from noncroplands, aiming to accurately mirror reliable reference data sources. A pixel-by-pixel accuracy assessment, when compared with the U.S. Department of Agriculture (USDA) cropland data, showed, on average, a producer's accuracy of 93% and a user's accuracy of 85% across all months. Further, ACCA-derived cropland maps agreed well with the USDA Farm Service Agency crop acreage-reported data for both cultivated and fallow croplands, with R-square values over 0.7, and with field surveys with an accuracy of ≥95% for cultivated croplands and ≥76% for fallow croplands. Our results demonstrated the ability of ACCA to generate cropland products, such as cultivated and fallow cropland extents and areas, accurately, automatically, and repeatedly throughout the growing season.
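
    The "iterative decision-tree codes" are, in essence, cascades of hand-tuned threshold rules on spectral and temporal statistics. The toy rules below convey the flavor only; the thresholds are invented and far simpler than the published ACCA rule set.

    ```python
    import numpy as np

    def classify_pixel(ndvi_series):
        """ndvi_series: monthly NDVI values for one pixel over the season."""
        peak = np.max(ndvi_series)
        amplitude = peak - np.min(ndvi_series)
        if peak >= 0.5 and amplitude >= 0.25:         # strong seasonal green-up
            return "cultivated cropland"
        if 0.2 <= peak < 0.35 and amplitude < 0.15:   # bare/sparse and stable
            return "fallow cropland"
        return "noncropland"
    ```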

  17. Derivation and validation of the automated search algorithms to identify cognitive impairment and dementia in electronic health records.

    Science.gov (United States)

    Amra, Sakusic; O'Horo, John C; Singh, Tarun D; Wilson, Gregory A; Kashyap, Rahul; Petersen, Ronald; Roberts, Rosebud O; Fryer, John D; Rabinstein, Alejandro A; Gajic, Ognjen

    2017-02-01

    Long-term cognitive impairment is a common and important problem in survivors of critical illness. We developed electronic search algorithms to identify cognitive impairment and dementia from electronic medical records (EMRs) that provide an opportunity for big-data analysis. Eligible patients met 2 criteria. First, they had a formal cognitive evaluation by the Mayo Clinic Study of Aging. Second, they were hospitalized in the intensive care unit at our institution between 2006 and 2014. The "criterion standard" for diagnosis was the formal cognitive evaluation supplemented by input from an expert neurologist. Using all available EMR data, we developed and improved our algorithms in the derivation cohort and validated them in the independent validation cohort. Of 993 participants who underwent formal cognitive testing and were hospitalized in the intensive care unit, we selected 151 participants at random to form the derivation and validation cohorts. The automated electronic search algorithm for cognitive impairment was 94.3% sensitive and 93.0% specific. The search algorithms for dementia achieved a sensitivity of 97% and a specificity of 99%. The EMR search algorithms significantly outperformed International Classification of Diseases codes. Automated EMR data extractions for cognitive impairment and dementia are reliable and accurate and can serve as acceptable and efficient alternatives to time-consuming manual data review. Copyright © 2016 Elsevier Inc. All rights reserved.

  18. An automated land-use mapping comparison of the Bayesian maximum likelihood and linear discriminant analysis algorithms

    Science.gov (United States)

    Tom, C. H.; Miller, L. D.

    1984-01-01

    The Bayesian maximum likelihood parametric classifier has been tested against the data-based formulation designated 'linear discriminant analysis', using the 'GLIKE' decision and 'CLASSIFY' classification algorithms in the Landsat Mapping System. Identical supervised training sets, USGS land use/land cover classes, and various combinations of Landsat image and ancillary geodata variables were used to compare the algorithms' thematic mapping accuracy on a single-date summer subscene, with a cellularized USGS land use map of the same time frame furnishing the ground-truth reference. CLASSIFY, which accepts a priori class probabilities, is found to be more accurate than GLIKE, which assumes equal class occurrences, for all three mapping variable sets and both levels of detail. These results may be generalized to direct accuracy, time, cost, and flexibility advantages of linear discriminant analysis over Bayesian methods.
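
    The GLIKE/CLASSIFY contrast comes down to whether class priors enter the decision rule. As a modern, hedged analogue (not the Landsat Mapping System itself), scikit-learn's discriminant analysis can be run with data-estimated versus uniform priors:

    ```python
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    def compare_priors(X_train, y_train, X_test, y_test):
        n_classes = np.unique(y_train).size
        with_priors = LinearDiscriminantAnalysis()        # priors from the data
        equal_priors = LinearDiscriminantAnalysis(
            priors=np.full(n_classes, 1.0 / n_classes))   # GLIKE-like assumption
        with_priors.fit(X_train, y_train)
        equal_priors.fit(X_train, y_train)
        return (with_priors.score(X_test, y_test),
                equal_priors.score(X_test, y_test))
    ```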

  19. SequenceL: Automated Parallel Algorithms Derived from CSP-NT Computational Laws

    Science.gov (United States)

    Cooke, Daniel; Rushton, Nelson

    2013-01-01

    With the introduction of new parallel architectures like the cell and multicore chips from IBM, Intel, AMD, and ARM, as well as the petascale processing available for high-end computing, a larger number of programmers will need to write parallel codes. Adding the parallel control structure to the sequence, selection, and iterative control constructs increases the complexity of code development, which often results in increased development costs and decreased reliability. SequenceL is a high-level programming language, that is, a programming language that is closer to a human's way of thinking than to a machine's. Historically, high-level languages have resulted in decreased development costs and increased reliability, at the expense of performance. In recent applications at JSC and in industry, SequenceL has demonstrated the usual advantages of high-level programming in terms of low cost and high reliability. SequenceL programs, however, have run at speeds typically comparable with, and in many cases faster than, their counterparts written in C and C++ when run on single-core processors. Moreover, SequenceL is able to generate parallel executables automatically for multicore hardware, gaining parallel speedups without any extra effort from the programmer beyond what is required to write the sequential/single-core code. A SequenceL-to-C++ translator has been developed that automatically renders readable multithreaded C++ from a combination of a SequenceL program and sample data input. The SequenceL language is based on two fundamental computational laws, Consume-Simplify-Produce (CSP) and Normalize-Transpose (NT), which enable it to automate the creation of parallel algorithms from high-level code that has no annotations of parallelism whatsoever. In our anecdotal experience, SequenceL development has been in every case less costly than development of the same algorithm in sequential (that is, single-core, single process) C or C++, and an order of magnitude less

  20. Design and demonstration of automated data analysis algorithms for ultrasonic inspection of complex composite panels with bonds

    Science.gov (United States)

    Aldrin, John C.; Forsyth, David S.; Welter, John T.

    2016-02-01

    To address the data review burden and improve the reliability of the ultrasonic inspection of large composite structures, automated data analysis (ADA) algorithms have been developed to make calls on indications that satisfy the detection criteria and minimize false calls. The original design followed standard procedures for analyzing signals for time-of-flight indications and backwall amplitude dropout. However, certain complex panels with varying shape, ply drops and the presence of bonds can complicate this interpretation process. In this paper, enhancements to the automated data analysis algorithms are introduced to address these challenges. To estimate the thickness of the part and presence of bonds without prior information, an algorithm tracks potential backwall or bond-line signals, and evaluates a combination of spatial, amplitude, and time-of-flight metrics to identify bonded sections. Once part boundaries, thickness transitions and bonded regions are identified, feature extraction algorithms are applied to multiple sets of through-thickness and backwall C-scan images, for evaluation of both first layer through thickness and layers under bonds. ADA processing results are presented for a variety of complex test specimens with inserted materials and other test discontinuities. Lastly, enhancements to the ADA software interface are presented, which improve the software usability for final data review by the inspectors and support the certification process.

  1. Fully automated segmentation of a hip joint using the patient-specific optimal thresholding and watershed algorithm.

    Science.gov (United States)

    Kim, Jung Jin; Nam, Jimin; Jang, In Gwun

    2018-02-01

    Automated segmentation with high accuracy and speed is a prerequisite for FEA-based quantitative assessment with a large population. However, hip joint segmentation has remained challenging due to the narrow articular cartilage and thin cortical bone, with marked interindividual variance. To overcome this challenge, this paper proposes a fully automated segmentation method for the hip joint that uses the complementary characteristics of the thresholding technique and the watershed algorithm. Using the golden section method and a load path algorithm, the proposed method first determines the patient-specific optimal threshold value that enables a femur to be reliably separated from the pelvis while removing as little cortical and trabecular bone in the femur as possible. This provides regional information on the femur. The watershed algorithm is then used to obtain boundary information on the femur. The proximal femur can be extracted by merging the complementary information on a target image. For eight CT images, compared with manual segmentation and other segmentation methods, the proposed method offers high accuracy in terms of the Dice overlap coefficient (97.24 ± 0.44%) and average surface distance (0.36 ± 0.07 mm) within a fast timeframe in terms of processing time per slice (1.25 ± 0.27 s). The proposed method also delivers structural behavior close to that of the manual segmentation, with a small mean average relative error in the risk factor (4.99%). The segmentation results show that, without the aid of a prerequisite dataset or the users' manual intervention, the proposed method can segment a hip joint as fast as the simplified Kang (SK)-based automated segmentation, while maintaining the segmentation accuracy at a level similar to that of the snake-based semi-automated segmentation. Copyright © 2017 Elsevier B.V. All rights reserved.
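
    The patient-specific threshold search can be sketched as a golden-section search over candidate Hounsfield-unit thresholds; the separation score is a stand-in to be supplied by the caller, and the paper's load-path criterion is not reproduced here.

    ```python
    GOLDEN = 0.6180339887498949  # (sqrt(5) - 1) / 2

    def golden_section_max(score, lo, hi, tol=1.0):
        """Maximize a unimodal score(threshold) over [lo, hi] (HU values)."""
        a, b = lo, hi
        c, d = b - GOLDEN * (b - a), a + GOLDEN * (b - a)
        fc, fd = score(c), score(d)
        while (b - a) > tol:
            if fc < fd:                      # maximum lies in [c, b]
                a, c, fc = c, d, fd
                d = a + GOLDEN * (b - a)
                fd = score(d)
            else:                            # maximum lies in [a, d]
                b, d, fd = d, c, fc
                c = b - GOLDEN * (b - a)
                fc = score(c)
        return 0.5 * (a + b)
    ```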

  2. An Exact Algorithm using Edges and Routes Pegging Test for the Input-Output Scheduling Problem in Automated Warehouses

    Science.gov (United States)

    Kubota, Yoshitsune; Numata, Kazumiti

    In this paper we propose and evaluate some ideas to improve an existing exact algorithm for the Input-Output Scheduling Problem (IOSP) in automated warehouses. The existing algorithm is based on an LP relaxation of IOSP, which is solved by the column generation method allowing relaxed columns (routes). Our idea is to implement the column generation using only exact routes, expecting to strengthen the LP solution, and to reduce the consequently increased calculation cost by dropping (pegging) unusable edges. The pegging test is done in the preprocessing phase by solving a Lagrangian relaxation of IOSP formulated in node-cover decision variables. The results of computational experiments show that the proposed algorithm can solve slightly larger instances in less execution time than the existing one.

  3. Validation of automated cloud top phase algorithms: distinguishing between cirrus clouds and snow in a priori analyses of AVHRR imagery

    Science.gov (United States)

    Hutchison, Keith D.; Etherton, Brian J.; Topping, Phillip C.

    1997-06-01

    Quantitative assessments of the performance of automated cloud analysis algorithms require the creation of highly accurate, manual cloud/no-cloud (CNC) images from multispectral meteorological satellite data. In general, the methodology to create these analyses for the evaluation of cloud detection algorithms is relatively straightforward, although the task becomes more complicated when little spectral signature is evident between a cloud and its background, as appears to be the case in advanced very high resolution radiometer (AVHRR) imagery when thin cirrus is present over snow-covered surfaces. In addition, complex procedures are needed to help the analyst distinguish between water and ice cloud tops to construct the manual cloud-top phase analyses and to ensure that inaccuracies in automated cloud detection are not propagated into the results of the cloud classification algorithm. Procedures are described that enhance the researcher's ability to (1) distinguish between thin cirrus clouds and snow-covered surfaces in daytime AVHRR imagery, (2) construct accurate cloud-top phase manual analyses, and (3) quantitatively validate the performance of both automated cloud detection and cloud-top phase classification algorithms. The methodology uses all AVHRR spectral bands, including a band derived from the daytime 3.7-micrometer channel, which has proven most valuable for discriminating between thin cirrus clouds and snow. It is concluded that while the 1.6-micrometer band is needed to distinguish between snow and water clouds in daytime data, the 3.7-micrometer channel remains essential during the daytime to differentiate between thin ice clouds and snow. Unfortunately, this capability may be lost if the 3.7-micrometer data switch to a nighttime-only transmission with the launch of future National Oceanic and Atmospheric Administration satellites.

  4. Evaluation of an Automated Swallow-Detection Algorithm Using Visual Biofeedback in Healthy Adults and Head and Neck Cancer Survivors.

    Science.gov (United States)

    Constantinescu, Gabriela; Kuffel, Kristina; Aalto, Daniel; Hodgetts, William; Rieger, Jana

    2017-11-02

    Mobile health (mHealth) technologies may offer an opportunity to address longstanding clinical challenges, such as access and adherence to swallowing therapy. Mobili-T® is an mHealth device that uses surface electromyography (sEMG) to provide biofeedback on submental muscle activity during exercise. An automated swallow-detection algorithm was developed for Mobili-T®. This study evaluated the performance of the swallow-detection algorithm. Ten healthy participants and 10 head and neck cancer (HNC) patients were fitted with the device. Signals were acquired during regular, effortful, and Mendelsohn maneuver saliva swallows, as well as lip presses, tongue movements, and head movements. Signals of interest were tagged during data acquisition and used to evaluate algorithm performance. Sensitivity and positive predictive values (PPV) were calculated for each participant. Saliva swallows were compared between HNC patients and controls on the four sEMG-based parameters used in the algorithm: duration, peak amplitude ratio, median frequency, and the 15th percentile of the power spectral density. In healthy participants, sensitivity and PPV were 92.3% and 83.9%, respectively. In HNC patients, sensitivity was 92.7% and PPV was 72.2%. In saliva swallows, HNC patients had longer event durations (U = 1925.5, p < 0.001), lower median frequency (U = 2674.0, p < 0.001), and a lower 15th percentile of the power spectral density [t(176.9) = 2.07, p < 0.001] than healthy participants. The automated swallow-detection algorithm performed well with healthy participants and retained a high sensitivity, but had a lower PPV, with HNC patients. With respect to Mobili-T®, the algorithm will next be evaluated using the mHealth system.
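
    The four signal parameters named above are straightforward to compute with scipy for a candidate sEMG segment; the implementation below is an illustrative reading of the parameter definitions, with the window boundaries and the baseline peak assumed to come from the detector.

    ```python
    import numpy as np
    from scipy.signal import welch

    def swallow_features(segment, fs, baseline_peak):
        """Duration, peak-amplitude ratio, median frequency, and the 15th
        percentile of the power spectral density for one sEMG segment."""
        duration = segment.size / fs
        peak_ratio = np.abs(segment).max() / baseline_peak
        freqs, psd = welch(segment, fs=fs)
        cdf = np.cumsum(psd) / psd.sum()
        median_freq = freqs[np.searchsorted(cdf, 0.50)]  # 50% of power below
        pctl15_freq = freqs[np.searchsorted(cdf, 0.15)]  # 15% of power below
        return duration, peak_ratio, median_freq, pctl15_freq
    ```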

  5. The Cascaded Enhanced k-Means and Fuzzy c-Means Clustering Algorithms for Automated Segmentation of Malaria Parasites

    Directory of Open Access Journals (Sweden)

    Abdul Nasir Aimi Salihah

    2018-01-01

    Full Text Available Malaria continues to be one of the leading causes of death in the world, despite the massive efforts put forth by the World Health Organization (WHO) in eradicating it worldwide. Efficient control and proper treatment of this disease require early detection and accurate diagnosis, due to the large number of cases reported yearly. To achieve this aim, this paper proposes a malaria parasite segmentation approach via cascaded clustering algorithms to automate the malaria diagnosis process. The comparisons among the cascaded clustering algorithms have been made by considering the accuracy, sensitivity, and specificity of the segmented malaria images. Based on the qualitative and quantitative findings, the results show that using the final centres generated by enhanced k-means (EKM) clustering as the initial centres for fuzzy c-means (FCM) clustering leads to the production of well-segmented malaria images. The proposed cascaded EKM and FCM clustering successfully segmented 100 malaria images of the Plasmodium vivax species, with average segmentation accuracy, sensitivity, and specificity values of 99.22%, 88.84%, and 99.56%, respectively. Therefore, the EKM algorithm gives the best performance compared to the k-means (KM) and moving k-means (MKM) algorithms when all three clustering algorithms are cascaded with the FCM algorithm.
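
    The cascade can be sketched in a few lines of numpy: k-means (standing in for the enhanced variant, which is not reproduced here) supplies its final centres as the initial centres of a plain fuzzy c-means loop, so the seeding step is explicit.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    def fcm(X, centers, m=2.0, iters=100, eps=1e-5):
        """Plain fuzzy c-means, seeded with explicit initial centres."""
        for _ in range(iters):
            d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
            # u[i, k] = 1 / sum_j (d_ik / d_ij)^(2/(m-1))
            u = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2 / (m - 1)),
                             axis=2)
            new = (u.T ** m) @ X / np.sum(u.T ** m, axis=1, keepdims=True)
            if np.linalg.norm(new - centers) < eps:
                centers = new
                break
            centers = new
        return centers, u

    def cascaded_segmentation(pixels, n_clusters=3):
        """pixels: (n_pixels, n_features) array from one blood-smear image."""
        km = KMeans(n_clusters=n_clusters, n_init=10).fit(pixels)
        centers, u = fcm(pixels, km.cluster_centers_.copy())  # KM -> FCM seeding
        return u.argmax(axis=1)  # hard cluster label per pixel
    ```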

  6. ANIE: A Mathematical Algorithm for Automated Indexing of Planar Deformation Features in Shocked Quartz

    Science.gov (United States)

    Huber, M. S.; Ferrière, L.; Losiak, A.; Koeberl, C.

    2011-03-01

    A mathematical method of indexing planar deformation features in quartz and a Microsoft Excel macro for automated indexing are presented, allowing for more rapid and accurate results than the previously used manual method.

  7. Automated delay estimation at signalized intersections : phase I concept and algorithm development.

    Science.gov (United States)

    2011-07-01

    Currently there are several methods to measure the performance of surface streets, but their capabilities in dynamically estimating vehicle delay are limited. The objective of this research is to develop a method to automate traffic delay estimation ...

  8. Reliability of a semi-automated algorithm for the vastus lateralis muscle architecture measurement based on ultrasound images.

    Science.gov (United States)

    Marzilger, Robert; Legerlotz, Kirsten; Panteli, Chrystalla; Bohm, Sebastian; Arampatzis, Adamantios

    2018-02-01

    The assessment of muscle architecture with B-mode ultrasound is an established method in muscle physiology and mechanics. There are several manual, semi-automated, and automated approaches available for muscle architecture analysis from ultrasound images or videos. However, most approaches have limitations such as workload, subjectivity, or drift, or they are applicable only to short muscle fascicles. Addressing these issues, an algorithm was developed to analyse architectural parameters of the vastus lateralis muscle (VL). In 17 healthy young men and women, ultrasound images were taken five times on two different days during passive knee joint flexion. From the images, fascicle length (FL), pennation angle (PAN), and muscle thickness (MTH) were calculated for both test days using the algorithm. Interday differences were determined using a two-way ANOVA. Interday and intraday reliability were assessed using intraclass correlation coefficients (ICC) and root mean square (RMS) differences. FL, MTH, and PAN did not differ between days one and two. The within-day ICCs were above 0.94 for all tested parameters. The average interday ICCs were 0.86 for FL, 0.96 for MTH, and 0.60 for PAN. The average RMS differences between the two days were 5.0%, 8.5%, and 12.0% for MTH, FL, and PAN, respectively. The proposed algorithm provides high measurement reliability; however, the interday reliability might be influenced by small differences in probe position between days.

  9. An algorithm for automated ROI definition in water or epoxy-filled NEMA NU-2 image quality phantoms.

    Science.gov (United States)

    Pierce, Larry A; Byrd, Darrin W; Elston, Brian F; Karp, Joel S; Sunderland, John J; Kinahan, Paul E

    2016-01-08

    Drawing regions of interest (ROIs) in positron emission tomography/computed tomography (PET/CT) scans of the National Electrical Manufacturers Association (NEMA) NU-2 Image Quality (IQ) phantom is a time-consuming process that allows for interuser variability in the measurements. In order to reduce operator effort and allow batch processing of IQ phantom images, we propose a fast, robust, automated algorithm for performing IQ phantom sphere localization and analysis. The algorithm is easily altered to accommodate different configurations of the IQ phantom. The proposed algorithm uses information from both the PET and CT image volumes in order to overcome the challenges of detecting the smallest spheres in the PET volume. This algorithm has been released as an open-source plug-in to the Osirix medical image viewing software package. We test the algorithm under various noise conditions, positions within the scanner, air bubbles in the phantom spheres, and scanner misalignment conditions. The proposed algorithm shows run-times between 3 and 4 min and has proven to be robust under all tested conditions, with expected sphere localization deviations of less than 0.2 mm and variations of PET ROI mean and maximum values on the order of 0.5% and 2%, respectively, over multiple PET acquisitions. We conclude that the proposed algorithm is stable when challenged with a variety of physical and imaging anomalies, and that the algorithm can be a valuable tool for those who use the NEMA NU-2 IQ phantom for PET/CT scanner acceptance testing and QA/QC.

  10. Automated calculation of the distal contractile integral in esophageal pressure topography with a region-growing algorithm.

    Science.gov (United States)

    Lin, Z; Roman, S; Pandolfino, J E; Kahrilas, P J

    2012-01-01

    The distal contractile integral (DCI) is an index of contractile vigor in high-resolution esophageal pressure topography (EPT), calculated as the product of amplitude, duration, and span of the distal esophageal contraction. The aim of this study was to develop an automated algorithm for calculating the DCI. The DCI was calculated conventionally using ManoView™ (Given Imaging, Los Angeles, CA, USA) software in EPT studies from 72 controls and 20 patients, and compared to the calculation using a MATLAB™ (Version 7.9.0, R2009b; The MathWorks Inc., Natick, MA, USA) 'region-growing' algorithm. This algorithm first established the spatial limits of the distal contraction (from the proximal pressure trough to either the distal pressure trough or the superior margin of the lower esophageal sphincter at rest). Pixel-by-pixel horizontal line segments were then analyzed within this span, starting at the pressure maximum and extending outward from that point. The limits of 'region-growing' were defined either by the spatial DCI limits or by encountering a pressure below 20 mmHg; the DCI was then calculated as the total units of mmHg·s·cm greater than 20 mmHg within this domain. Excellent correlation existed between the two methods (r = 0.98), although DCI values from the conventional calculation were slightly but significantly greater than with the region-growing algorithm. The differences were attributed to the inclusion of vascular pressures in the conventional calculation or to differences in localization of the distal limit of the DCI. The proposed region-growing algorithm provides an automated method to calculate the DCI that limits the inclusion of vascular pressure artifacts and minimizes the need for user input in data analysis. © 2011 Blackwell Publishing Ltd.
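
    A toy rendition of the region-growing computation: starting from the column of the pressure maximum in the distal segment, each spatial row grows outward in time while pressure stays above 20 mmHg, and the retained pixels are integrated. The grid spacings and array layout are assumptions, not the published implementation.

    ```python
    import numpy as np

    def dci_region_growing(pressure, dx, dt, floor=20.0):
        """pressure: 2D array (space x time) for the distal segment only;
        dx in cm, dt in s. Returns an approximate DCI in mmHg.s.cm."""
        rows, cols = pressure.shape
        mask = np.zeros_like(pressure, dtype=bool)
        _, c0 = np.unravel_index(pressure.argmax(), pressure.shape)
        for r in range(rows):
            # grow outward, in both directions, from the maximum's column
            for direction in (range(c0, cols), range(c0 - 1, -1, -1)):
                for c in direction:
                    if pressure[r, c] > floor:
                        mask[r, c] = True
                    else:
                        break            # stop growing at the 20 mmHg limit
        return np.sum(pressure[mask] - floor) * dx * dt
    ```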

  11. An Automated Cropland Classification Algorithm (ACCA) for Tajikistan by combining Landsat, MODIS, and secondary data

    Science.gov (United States)

    Thenkabail, Prasad S.; Wu, Zhuoting

    2012-01-01

    The overarching goal of this research was to develop and demonstrate an automated Cropland Classification Algorithm (ACCA) that will rapidly, routinely, and accurately classify agricultural cropland extent, areas, and characteristics (e.g., irrigated vs. rainfed) over large areas such as a country or a region through a combination of multi-sensor remote sensing and secondary data. In this research, a rule-based ACCA was conceptualized, developed, and demonstrated for the country of Tajikistan using mega-file data cubes (MFDCs) involving data from the Landsat Global Land Survey (GLS), Landsat Enhanced Thematic Mapper Plus (ETM+) 30 m, Moderate Resolution Imaging Spectroradiometer (MODIS) 250 m time-series, a suite of secondary data (e.g., elevation, slope, precipitation, temperature), and in situ data. First, the process involved producing an accurate reference (or truth) cropland layer (TCL), consisting of cropland extent, areas, and irrigated vs. rainfed cropland areas, for the entire country of Tajikistan based on the MFDC of year 2005 (MFDC2005). The methods involved in producing the TCL included using ISOCLASS clustering, Tasseled Cap bi-spectral plots, spectro-temporal characteristics from MODIS 250 m monthly normalized difference vegetation index (NDVI) maximum value composite (MVC) time-series, and textural characteristics of higher-resolution imagery. The TCL statistics accurately matched the national statistics of Tajikistan for irrigated and rainfed croplands, where about 70% of croplands were irrigated and the rest rainfed. Second, a rule-based ACCA was developed to replicate the TCL accurately (~80% producer's and user's accuracies, or within 20% quantity disagreement, involving about 10 million Landsat 30 m sized cropland pixels of Tajikistan). Development of ACCA was an iterative process involving a series of rules that are coded, refined, tweaked, and re-coded until ACCA-derived croplands (ACLs) match accurately with TCLs. Third, the ACCA derived cropland

  12. An Automated Cropland Classification Algorithm (ACCA for Tajikistan by Combining Landsat, MODIS, and Secondary Data

    Directory of Open Access Journals (Sweden)

    Prasad S. Thenkabail

    2012-09-01

    Full Text Available The overarching goal of this research was to develop and demonstrate an automated Cropland Classification Algorithm (ACCA) that will rapidly, routinely, and accurately classify agricultural cropland extent, areas, and characteristics (e.g., irrigated vs. rainfed) over large areas such as a country or a region through a combination of multi-sensor remote sensing and secondary data. In this research, a rule-based ACCA was conceptualized, developed, and demonstrated for the country of Tajikistan using mega-file data cubes (MFDCs) involving data from the Landsat Global Land Survey (GLS), Landsat Enhanced Thematic Mapper Plus (ETM+) 30 m, Moderate Resolution Imaging Spectroradiometer (MODIS) 250 m time-series, a suite of secondary data (e.g., elevation, slope, precipitation, temperature), and in situ data. First, the process involved producing an accurate reference (or truth) cropland layer (TCL), consisting of cropland extent, areas, and irrigated vs. rainfed cropland areas, for the entire country of Tajikistan based on the MFDC of year 2005 (MFDC2005). The methods involved in producing the TCL included using ISOCLASS clustering, Tasseled Cap bi-spectral plots, spectro-temporal characteristics from MODIS 250 m monthly normalized difference vegetation index (NDVI) maximum value composite (MVC) time-series, and textural characteristics of higher-resolution imagery. The TCL statistics accurately matched the national statistics of Tajikistan for irrigated and rainfed croplands, where about 70% of croplands were irrigated and the rest rainfed. Second, a rule-based ACCA was developed to replicate the TCL accurately (~80% producer's and user's accuracies, or within 20% quantity disagreement, involving about 10 million Landsat 30 m sized cropland pixels of Tajikistan). Development of ACCA was an iterative process involving a series of rules that are coded, refined, tweaked, and re-coded until ACCA-derived croplands (ACLs) match accurately with TCLs. Third, the ACCA derived

  13. Whither Supervision?

    Directory of Open Access Journals (Sweden)

    Duncan Waite

    2006-11-01

    Full Text Available This paper asks whether school supervision is in decline. Dr. Waite responds that the answer depends on the perspective from which one looks at it. He suggests considering three interrelated elements: the field itself; the experts in the field (the professor, the theorist, the student, and the administrator); and the context. Reviewing these three elements, he emphasizes that there is no consensus about the field of supervision, but there is agreement on its importance and on its connection to improving students' practice in school for their benefit. Dr. Waite notes that practice in this field is not always in harmony with what theorists affirm. Regarding the supervisor or skilled practitioner, the author indicates that his or her perspective depends on his or her epistemological beliefs, or on how he or she conceives of learning; this is why supervision can be understood in different ways. About the context, Waite suggests that the social and external forces that influence people and society must be taken into consideration, because education is affected through them. Dr. Waite concludes that how supervision is understood depends on the performer's perspective. He answers the initial question by saying that the supervision authorities, the knowledge in this field, its practitioners, and its practice may be dispersed but are not extinct, because supervision will always be part of the great enterprise that we call education.

  14. Genetic algorithms in teaching artificial intelligence (automated generation of specific algebras)

    Science.gov (United States)

    Habiballa, Hashim; Jendryscik, Radek

    2017-11-01

    The problem of teaching essential Artificial Intelligence (AI) methods is an important task for an educator in the branch of soft computing. The key focus is often on a proper understanding of the principles of AI methods in two essential points: why we use soft-computing methods at all, and how we apply these methods to generate reasonable results in sensible time. We present an interesting problem, solved in non-educational research, concerning the automated generation of specific algebras in a huge search space, and we emphasize the above-mentioned points through an educational case study of this problem.

  15. Quantifying Histological Features of Cancer Biospecimens for Biobanking Quality Assurance Using Automated Morphometric Pattern Recognition Image Analysis Algorithms

    Science.gov (United States)

    Webster, Joshua D.; Simpson, Eleanor R.; Michalowski, Aleksandra M.; Hoover, Shelley B.; Simpson, R. Mark

    2011-01-01

    Biorepository-supported translational research depends on high-quality, well-annotated specimens. Histopathology assessment contributes insight into how representative lesions are for research objectives. Feasibility of documenting histological proportions of tumor and stroma was studied in an effort to enhance information regarding biorepository tissue heterogeneity. Using commercially available software, unique spatial-spectral algorithms were developed for applying automated pattern recognition morphometric image analysis to quantify histologic tumor and nontumor tissue areas in biospecimen tissue sections. Measurements were acquired successfully for 75/75 (100%) lymphomas, 76/77 (98.7%) osteosarcomas, and 60/70 (85.7%) melanomas. The percentage of tissue area occupied by tumor varied among patients and tumor types and was distributed around medians of 94% [interquartile range (IQR)=14%] for lymphomas, 84% for melanomas (IQR=24%), and 39% for osteosarcomas (IQR=44%). Within-patient comparisons from a subset, including multiple individual patient specimens, revealed ≤12% median coefficient of variation (CV) for lymphomas and melanomas. Phenotypic heterogeneity of osteosarcomas resulted in 33% median CV. Uniformly applied, tumor-specific pattern recognition software permits automated tissue-feature quantification. Furthermore, dispersion analyses of area measurements across collections, as well as of multiple specimens from individual patients, support using limited tissue slices to gauge features for some tumor types. Quantitative image analysis automation is anticipated to minimize variability associated with routine biorepository pathologic evaluations and enhance biomarker discovery by helping to guide the selection of study-appropriate specimens. PMID:21966258

  16. FluTyper-an algorithm for automated typing and subtyping of the influenza virus from high resolution mass spectral data

    Directory of Open Access Journals (Sweden)

    Schwahn Alexander B

    2010-05-01

    Full Text Available Background: High resolution mass spectrometry has been employed to rapidly and accurately type and subtype influenza viruses. The detection of signature peptides with unique theoretical masses enables the unequivocal assignment of the type and subtype of a given strain. This analysis has, to date, required the manual inspection of mass spectra of whole virus and antigen digests. Results: A computer algorithm, FluTyper, has been designed and implemented to achieve the automated analysis of MALDI mass spectra recorded for proteolytic digests of the whole influenza virus and antigens. FluTyper incorporates established signature peptides and newly developed naïve Bayes classifiers for four common influenza antigens (hemagglutinin, neuraminidase, nucleoprotein, and matrix protein 1) to type and subtype the influenza virus based on their detection within proteolytic peptide mass maps. Theoretical and experimental testing of the classifiers demonstrates their applicability at protein coverage rates normally achievable in mass mapping experiments. The application of FluTyper to whole virus and antigen digests of a range of different strains of the influenza virus is demonstrated. Conclusions: The FluTyper algorithm facilitates the rapid and automated typing and subtyping of the influenza virus from mass spectral data. The newly developed naïve Bayes classifiers increase the confidence of influenza virus subtyping, especially where signature peptides are not detected. FluTyper is expected to popularize the use of mass spectrometry to characterize influenza viruses.
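
    A naïve Bayes classifier over the presence or absence of signature peptides can be sketched as follows. The peptide masses, detection probabilities, and subtype names below are invented placeholders; the actual classifiers are trained on curated influenza peptide data.

    import math

    # Hypothetical per-subtype detection probabilities of signature peptide
    # masses in a peptide mass map (all numbers are placeholders).
    detection_prob = {
        "H1N1": {1325.7: 0.90, 2210.1: 0.80, 998.5: 0.10},
        "H3N2": {1325.7: 0.10, 2210.1: 0.20, 998.5: 0.85},
    }

    def naive_bayes_subtype(observed_masses):
        """Score each subtype by the log-likelihood of the observed
        presence/absence pattern of its signature peptides."""
        scores = {}
        for subtype, probs in detection_prob.items():
            logp = math.log(1.0 / len(detection_prob))   # uniform prior
            for mass, p in probs.items():
                logp += math.log(p if mass in observed_masses else 1.0 - p)
            scores[subtype] = logp
        return max(scores, key=scores.get), scores

    best, _ = naive_bayes_subtype({1325.7, 2210.1})
    print(best)   # "H1N1" for this toy detection pattern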

  17. A new automated quantification algorithm for the detection and evaluation of focal liver lesions with contrast-enhanced ultrasound.

    Science.gov (United States)

    Gatos, Ilias; Tsantis, Stavros; Spiliopoulos, Stavros; Skouroliakou, Aikaterini; Theotokas, Ioannis; Zoumpoulis, Pavlos; Hazle, John D; Kagadis, George C

    2015-07-01

    The aim was to detect and classify focal liver lesions (FLLs) from contrast-enhanced ultrasound (CEUS) imaging by means of an automated quantification algorithm. The proposed algorithm employs a sophisticated segmentation method to detect and contour focal lesions from 52 CEUS video sequences (30 benign and 22 malignant). Lesion detection uses wavelet transform zero crossings as an initialization step to a Markov random field model for lesion contour extraction. After FLL detection across frames, a time-intensity curve (TIC) is computed, which captures the contrast agent's behavior at all vascular phases with respect to adjacent parenchyma for each patient. From each TIC, eight features were automatically calculated and fed into a support vector machine (SVM) classification algorithm in the design of the image analysis model. With regard to FLL detection accuracy, all lesions detected had an average overlap value of 0.89 ± 0.16 with manual segmentations for all CEUS frame subsets included in the study. The highest classification accuracy from the SVM model was 90.3%, misdiagnosing three benign and two malignant FLLs, with sensitivity and specificity values of 93.1% and 86.9%, respectively. The proposed quantification system, which employs FLL detection and classification algorithms, may be of value to physicians as a second-opinion tool for avoiding unnecessary invasive procedures.
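
    The TIC-plus-SVM pipeline can be sketched with scikit-learn as below. The four features computed here (peak, time to peak, area under the curve, wash-in slope) and the toy curves are stand-ins; the paper's eight TIC features are not reproduced.

    import numpy as np
    from sklearn.svm import SVC

    def tic_features(t, i):
        """Toy time-intensity-curve features for one lesion."""
        t, i = np.asarray(t, float), np.asarray(i, float)
        peak, t_peak = i.max(), t[i.argmax()]
        auc = float(((i[1:] + i[:-1]) / 2 * np.diff(t)).sum())  # trapezoid AUC
        return [peak, t_peak, auc, peak / max(t_peak, 1e-9)]    # last: wash-in slope

    # Hypothetical labelled TICs: 1 = malignant, 0 = benign.
    X = [tic_features([0, 10, 20, 60], [5, 60, 40, 10]),
         tic_features([0, 10, 20, 60], [5, 30, 28, 20])]
    y = [1, 0]
    clf = SVC(kernel="rbf").fit(X, y)
    print(clf.predict([tic_features([0, 10, 20, 60], [4, 55, 35, 12])]))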

  18. A hierarchical, automated target recognition algorithm for a parallel analog processor

    Science.gov (United States)

    Woodward, Gail; Padgett, Curtis

    1997-01-01

    A hierarchical approach is described for an automated target recognition (ATR) system, VIGILANTE, that uses a massively parallel, analog processor (3DANN). The 3DANN processor is capable of performing 64 concurrent inner products of size 1x4096 every 250 nanoseconds.

  19. Ice crystal characterization in cirrus clouds: a sun-tracking camera system and automated detection algorithm for halo displays

    Directory of Open Access Journals (Sweden)

    L. Forster

    2017-07-01

    Full Text Available Halo displays in the sky contain valuable information about ice crystal shape and orientation: e.g., the 22° halo is produced by randomly oriented hexagonal prisms, while parhelia (sundogs) indicate oriented plates. HaloCam, a novel sun-tracking camera system for the automated observation of halo displays, is presented. An initial visual evaluation of the frequency of halo displays for the ACCEPT (Analysis of the Composition of Clouds with Extended Polarization Techniques) field campaign from October to mid-November 2014 showed that sundogs were observed more often than 22° halos. Thus, the majority of halo displays were produced by oriented ice crystals. During the campaign about 27 % of the cirrus clouds produced 22° halos, sundogs or upper tangent arcs. To evaluate the HaloCam observations collected from regular measurements in Munich between January 2014 and June 2016, an automated detection algorithm for 22° halos was developed, which can be extended to other halo types as well. This algorithm detected 22° halos about 2 % of the time for this dataset. The frequency of cirrus clouds during this time period was estimated from co-located ceilometer measurements using temperature thresholds of the cloud base. About 25 % of the detected cirrus clouds occurred together with a 22° halo, which implies that these clouds contained a certain fraction of smooth, hexagonal ice crystals. HaloCam observations complemented by radiative transfer simulations and measurements of aerosol and cirrus cloud optical thickness (AOT and COT) provide a possibility to retrieve more detailed information about ice crystal roughness. This paper demonstrates the feasibility of a completely automated method to collect and evaluate a long-term database of halo observations and shows the potential to characterize ice crystal properties.

  20. EpHLA: an innovative and user-friendly software automating the HLAMatchmaker algorithm for antibody analysis.

    Science.gov (United States)

    Sousa, Luiz Cláudio Demes da Mata; Filho, Herton Luiz Alves Sales; Von Glehn, Cristina de Queiroz Carrascosa; da Silva, Adalberto Socorro; Neto, Pedro de Alcântara dos Santos; de Castro, José Adail Fonseca; do Monte, Semíramis Jamil Hadad

    2011-12-01

    The global challenge for solid organ transplantation programs is to distribute organs to highly sensitized recipients. The purpose of this work is to describe and test the functionality of the EpHLA software, a program that automates the analysis of acceptable and unacceptable HLA epitopes on the basis of the HLAMatchmaker algorithm. HLAMatchmaker considers small configurations of polymorphic residues, referred to as eplets, as essential components of HLA epitopes. Currently, the analyses require the creation of temporary files and the manual cut-and-paste of laboratory test results between electronic spreadsheets, which is time-consuming and prone to administrative errors. The EpHLA software was developed in the Object Pascal programming language and uses the HLAMatchmaker algorithm to generate histocompatibility reports. The automated generation of reports requires the integration of files containing the results of laboratory tests (HLA typing, anti-HLA antibody signature) and public data banks (NMDP, IMGT). The integration and the access to these data were accomplished by means of the framework called eDAFramework. The eDAFramework was developed in Object Pascal and PHP and provides data access functionalities for software developed in these languages. The tool's functionality was successfully tested in comparison to actual, manually derived reports of patients from a renal transplantation program with related donors. We successfully developed software that enables the automated definition of the epitope specificities of HLA antibodies. This new tool will benefit the management of recipient/donor pair selection for highly sensitized patients. Copyright © 2011 Elsevier B.V. All rights reserved.

  1. The Pandora multi-algorithm approach to automated pattern recognition in LAr TPC detectors

    Science.gov (United States)

    Marshall, J. S.; Blake, A. S. T.; Thomson, M. A.; Escudero, L.; de Vries, J.; Weston, J.; MicroBooNE Collaboration

    2017-09-01

    The development and operation of Liquid Argon Time Projection Chambers (LAr TPCs) for neutrino physics has created a need for new approaches to pattern recognition, in order to fully exploit the superb imaging capabilities offered by this technology. The Pandora Software Development Kit provides functionality to aid the process of designing, implementing and running pattern recognition algorithms. It promotes the use of a multi-algorithm approach to pattern recognition: individual algorithms each address a specific task in a particular topology; a series of many tens of algorithms then carefully builds up a picture of the event. The input to the Pandora pattern recognition is a list of 2D Hits. The output from the chain of over 70 algorithms is a hierarchy of reconstructed 3D Particles, each with an identified particle type, vertex and direction.

  2. Automation of Algorithmic Tasks for Virtual Laboratories Based on Automata Theory

    Directory of Open Access Journals (Sweden)

    Evgeniy A. Efimchik

    2016-03-01

    Full Text Available This work describes an automata model of the standard algorithm for constructing a correct solution to algorithmic tests. The described model allows a formal determination of the complexity of an algorithmic test variant and serves as a basis for defining complexity functions, including the collision concept: a situation of uncertainty in which, while fulfilling the task, a choice must be made between alternatives with different priorities. The influence of collisions on the automata model and its inner structure is described. The model and complexity functions are applied in virtual laboratories to design algorithms that construct variants of predetermined complexity in real time, and to design estimation procedures for students' solutions with respect to collisions. The results of the work are applied to the development of virtual laboratories, which are used in the practical part of a massive online course on graph theory.

  3. Clinical evaluation of a novel adaptive algorithm for automated control of oxygen therapy in preterm infants on non-invasive respiratory support.

    Science.gov (United States)

    Plottier, Gemma K; Wheeler, Kevin I; Ali, Sanoj K M; Fathabadi, Omid Sadeghi; Jayakar, Rohan; Gale, Timothy J; Dargaville, Peter A

    2017-01-01

    To evaluate the performance of a novel rapidly responsive proportional-integral-derivative (PID) algorithm for automated oxygen control in preterm infants with respiratory insufficiency. Interventional study of a 4-hour period of automated oxygen control compared with combined data from two flanking periods of manual control (4 hours each). Neonatal intensive care unit. Preterm infants (n=20) on non-invasive respiratory support and supplemental oxygen, with oxygen saturation (SpO2) target range 90%-94% (manual control) and 91%-95% (automated control). Median gestation at birth 27.5 weeks (IQR 26-30 weeks), postnatal age 8.0 (1.8-34) days. Automated oxygen control using a standalone device, receiving SpO2 input from a standard oximeter and computing alterations to oxygen concentration that were actuated with a modified blender. The PID algorithm was enhanced to avoid iatrogenic hyperoxaemia and adapt to the severity of lung dysfunction. The primary outcome was the proportion of time in the SpO2 target range, or above target range when in air. Automated oxygen control resulted in more time in the target range or above in air (manual 56 (48-63)% vs automated 81 (76-90)%). Manual changes to oxygen therapy were infrequent during automated control (0.24/hour vs 2.3/hour during manual control), and oxygen requirements were unchanged (automated control period 27%, manual 27% and 26%, p>0.05). The novel PID algorithm was very effective for automated oxygen control in preterm infants, and deserves further investigation. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
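
    The control law itself is the textbook PID update; a minimal sketch follows, with invented gains, limits, and names rather than the enhanced clinical algorithm (which additionally guards against hyperoxaemia and adapts to the severity of lung dysfunction).

    # Minimal PID sketch for FiO2 titration toward an SpO2 target.
    # Gains and limits are illustrative, not the clinical values.
    class PIDOxygenController:
        def __init__(self, kp=0.5, ki=0.05, kd=0.1, fio2_base=0.27):
            self.kp, self.ki, self.kd, self.fio2_base = kp, ki, kd, fio2_base
            self.integral, self.prev_error = 0.0, None

        def step(self, spo2, target=93.0, dt=1.0):
            error = target - spo2          # positive when SpO2 is below target
            self.integral += error * dt
            deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
            self.prev_error = error
            fio2 = self.fio2_base + 0.01 * (
                self.kp * error + self.ki * self.integral + self.kd * deriv)
            return min(max(fio2, 0.21), 1.0)   # clamp to physical limits

    ctrl = PIDOxygenController()
    print([round(ctrl.step(s), 3) for s in (89, 90, 92, 96)])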

  4. Genetic Programming for Automating the Development of Data Management Algorithms in Information Technology Systems

    Directory of Open Access Journals (Sweden)

    Gabriel A. Archanjo

    2012-01-01

    Full Text Available Information technology (IT systems are present in almost all fields of human activity, with emphasis on processing, storage, and handling of datasets. Automated methods to provide access to data stored in databases have been proposed mainly for tasks related to knowledge discovery and data mining (KDD. However, for this purpose, the database is used only to query data in order to find relevant patterns associated with the records. Processes modelled on IT systems should manipulate the records to modify the state of the system. Linear genetic programming for databases (LGPDB is a tool proposed here for automatic generation of programs that can query, delete, insert, and update records on databases. The obtained results indicate that the LGPDB approach is able to generate programs for effectively modelling processes of IT systems, opening the possibility of automating relevant stages of data manipulation, and thus allowing human programmers to focus on more complex tasks.

  5. GOES Fire Detects from the Wildfire Automated Biomass Burning Algorithm (WF-ABBA)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The GOES ABBA is a contextual multi-spectral thresholding algorithm which utilizes dynamic local thresholds derived from the GOES satellite imagery and ancillary...

  6. A Robust Automated Cataract Detection Algorithm Using Diagnostic Opinion Based Parameter Thresholding for Telemedicine Application

    OpenAIRE

    Shashwat Pathak; Basant Kumar

    2016-01-01

    This paper proposes and evaluates an algorithm to automatically detect cataracts from color images of adult human subjects. Currently, methods available for cataract detection are based on the use of either a fundus camera or a Digital Single-Lens Reflex (DSLR) camera; both are very expensive. The main motive behind this work is to develop an inexpensive, robust and convenient algorithm which, in conjunction with suitable devices, will be able to diagnose the presence of cataract from the true ...

  7. Automated Storm Tracking and the Lightning Jump Algorithm Using GOES-R Geostationary Lightning Mapper (GLM) Proxy Data

    Science.gov (United States)

    Schultz, Elise; Schultz, Christopher Joseph; Carey, Lawrence D.; Cecil, Daniel J.; Bateman, Monte

    2016-01-01

    This study develops a fully automated lightning jump system encompassing objective storm tracking, Geostationary Lightning Mapper proxy data, and the lightning jump algorithm (LJA), which are important elements in the transition of the LJA concept from a research-based to an operational algorithm. Storm cluster tracking is based on a product created from the combination of a radar parameter (vertically integrated liquid, VIL) and lightning information (flash rate density). Evaluations showed that the spatial scale of tracked features or storm clusters had a large impact on the lightning jump system performance, where increasing spatial scale size resulted in decreased dynamic range of the system's performance. This framework will also serve as a means to refine the LJA itself to enhance its operational applicability. Parameters within the system are isolated and the system's performance is evaluated with adjustments to parameter sensitivity. The system's performance is evaluated using the probability of detection (POD) and false alarm ratio (FAR) statistics. Of the algorithm parameters tested, sigma-level (a metric of lightning jump strength) and flash rate threshold influenced the system's performance the most. Finally, verification methodologies are investigated. It is discovered that minor changes in verification methodology can dramatically impact the evaluation of the lightning jump system.
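
    The POD and FAR statistics used for the evaluation reduce to simple ratios over 2x2 contingency counts; a small sketch with made-up counts:

    def pod_far(hits, misses, false_alarms):
        """Probability of detection and false alarm ratio."""
        pod = hits / (hits + misses)
        far = false_alarms / (hits + false_alarms)
        return pod, far

    print(pod_far(hits=42, misses=8, false_alarms=18))  # illustrative counts only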

  8. Automated Lead Optimization of MMP-12 Inhibitors Using a Genetic Algorithm.

    Science.gov (United States)

    Pickett, Stephen D; Green, Darren V S; Hunt, David L; Pardoe, David A; Hughes, Ian

    2011-01-13

    Traditional lead optimization projects involve long synthesis and testing cycles, favoring extensive structure-activity relationship (SAR) analysis and molecular design steps, in an attempt to limit the number of cycles that a project must run to optimize a development candidate. Microfluidic-based chemistry and biology platforms, with cycle times of minutes rather than weeks, lend themselves to unattended autonomous operation. The bottleneck in the lead optimization process is therefore shifted from synthesis or test to SAR analysis and design. As such, the way is open to an algorithm-directed process, without the need for detailed user data analysis. Here, we present results of two synthesis and screening experiments, undertaken using traditional methodology, to validate a genetic algorithm optimization process for future application to a microfluidic system. The algorithm has several novel features that are important for the intended application. For example, it is robust to missing data and can suggest compounds for retest to ensure reliability of optimization. The algorithm is first validated on a retrospective analysis of an in-house library embedded in a larger virtual array of presumed inactive compounds. In a second, prospective experiment with MMP-12 as the target protein, 140 compounds are submitted for synthesis over 10 cycles of optimization. Comparison is made to the results from the full combinatorial library that was synthesized manually and tested independently. The results show that compounds selected by the algorithm are heavily biased toward the more active regions of the library, while the algorithm is robust to both missing data (compounds where synthesis failed) and inactive compounds. This publication places the full combinatorial library and biological data into the public domain with the intention of advancing research into algorithm-directed lead optimization methods.

  9. Algorithm development for automated outlier detection and background noise reduction during NIR spectroscopic data processing

    Science.gov (United States)

    Abookasis, David; Workman, Jerome J.

    2011-09-01

    This study describes a hybrid processing algorithm for use during calibration/validation of near-infrared spectroscopic signals, based on a spectral cross-correlation and filtering process combined with partial least squares (PLS) regression analysis. In the first step of the algorithm, exceptional signals (outliers) are detected and removed based on spectral correlation criteria we have developed. Then, signal filtering based on direct orthogonal signal correction (DOSC) was applied, before being used in the PLS model, to filter out background variance. After outlier screening and DOSC treatment, a PLS calibration model matrix is formed. Once this matrix has been built, it is used to predict the concentration of the unknown samples. Common statistics such as the standard error of cross-validation, mean relative error, coefficient of determination, etc. were computed to assess the fitting ability of the algorithm. Algorithm performance was tested on several hundred blood samples prepared at different hematocrit and glucose levels using blood materials from thirteen healthy human volunteers. During measurements, these samples were subjected to variations in temperature, flow rate, and sample pathlength. Experimental results highlight the potential, applicability, and effectiveness of the proposed algorithm in terms of low error of prediction, high sensitivity and specificity, and few false negative (Type II error) samples.
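
    The outlier-rejection step can be sketched as a correlation test against a reference spectrum, followed by an ordinary PLS fit with scikit-learn. The correlation threshold and synthetic data are assumptions, and the DOSC filtering stage is omitted here.

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    def reject_outliers(spectra, r_min=0.9):
        """Keep spectra that correlate strongly with the median spectrum;
        the threshold is illustrative, not the paper's criterion."""
        ref = np.median(spectra, axis=0)
        keep = [k for k, s in enumerate(spectra)
                if np.corrcoef(s, ref)[0, 1] >= r_min]
        return spectra[keep], keep

    rng = np.random.default_rng(0)
    base = np.sin(np.linspace(0, 3, 60))                # shared spectral shape
    X = base + rng.normal(scale=0.05, size=(20, 60))    # 20 synthetic spectra
    X[7] = rng.normal(size=60)                          # one corrupted spectrum
    y = 2.0 * X[:, 30]                                  # synthetic analyte values
    X_clean, kept = reject_outliers(X)
    model = PLSRegression(n_components=2).fit(X_clean, y[kept])
    print(f"kept {len(kept)} of 20 spectra")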

  10. An automated algorithm for extracting road edges from terrestrial mobile LiDAR data

    Science.gov (United States)

    Kumar, Pankaj; McElhinney, Conor P.; Lewis, Paul; McCarthy, Timothy

    2013-11-01

    Terrestrial mobile laser scanning systems provide rapid and cost effective 3D point cloud data which can be used for extracting features such as the road edge along a route corridor. This information can assist road authorities in carrying out safety risk assessment studies along road networks. The knowledge of the road edge is also a prerequisite for the automatic estimation of most other road features. In this paper, we present an algorithm which has been developed for extracting left and right road edges from terrestrial mobile LiDAR data. The algorithm is based on a novel combination of two modified versions of the parametric active contour or snake model. The parameters involved in the algorithm are selected empirically and are fixed for all the road sections. We have developed a novel way of initialising the snake model based on the navigation information obtained from the mobile mapping vehicle. We tested our algorithm on different types of road sections representing rural, urban and national primary road sections. The successful extraction of road edges from these multiple road section environments validates our algorithm. These findings and knowledge provide valuable insights as well as a prototype road edge extraction tool-set for both national road authorities and survey companies.

  11. A Robust Automated Cataract Detection Algorithm Using Diagnostic Opinion Based Parameter Thresholding for Telemedicine Application

    Directory of Open Access Journals (Sweden)

    Shashwat Pathak

    2016-09-01

    Full Text Available This paper proposes and evaluates an algorithm to automatically detect cataracts from color images of adult human subjects. Currently, methods available for cataract detection are based on the use of either a fundus camera or a Digital Single-Lens Reflex (DSLR) camera; both are very expensive. The main motive behind this work is to develop an inexpensive, robust and convenient algorithm which, in conjunction with suitable devices, will be able to diagnose the presence of cataract from true color images of an eye. An algorithm is proposed for cataract screening based on texture features: uniformity, intensity and standard deviation. These features are first computed and mapped with the diagnostic opinion of an eye expert to define the basic threshold of the screening system, and later tested on real subjects in an eye clinic. Finally, a tele-ophthalmology model using our proposed system has been suggested, which confirms the telemedicine application of the proposed system.

  12. Algorithms

    Indian Academy of Sciences (India)

    positive numbers. The word 'algorithm' was most often associated with this algorithm till 1950. It may however be pointed out that several non-trivial algorithms such as synthetic (polynomial) division have been found in Vedic Mathematics, which are dated much before Euclid's algorithm. A programming language is used.

  13. MaxBin 2.0: an automated binning algorithm to recover genomes from multiple metagenomic datasets

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Yu-Wei [Joint BioEnergy Inst. (JBEI), Emeryville, CA (United States); Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Simmons, Blake A. [Joint BioEnergy Inst. (JBEI), Emeryville, CA (United States); Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Singer, Steven W. [Joint BioEnergy Inst. (JBEI), Emeryville, CA (United States); Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2015-10-29

    The recovery of genomes from metagenomic datasets is a critical step to defining the functional roles of the underlying uncultivated populations. We previously developed MaxBin, an automated binning approach for high-throughput recovery of microbial genomes from metagenomes. Here, we present an expanded binning algorithm, MaxBin 2.0, which recovers genomes from co-assembly of a collection of metagenomic datasets. Tests on simulated datasets revealed that MaxBin 2.0 is highly accurate in recovering individual genomes, and the application of MaxBin 2.0 to several metagenomes from environmental samples demonstrated that it could achieve two complementary goals: recovering more bacterial genomes compared to binning a single sample as well as comparing the microbial community composition between different sampling environments. Availability and implementation: MaxBin 2.0 is freely available at http://sourceforge.net/projects/maxbin/ under BSD license. Supplementary information: Supplementary data are available at Bioinformatics online.

  14. Microprocessor-based integration of microfluidic control for the implementation of automated sensor monitoring and multithreaded optimization algorithms.

    Science.gov (United States)

    Ezra, Elishai; Maor, Idan; Bavli, Danny; Shalom, Itai; Levy, Gahl; Prill, Sebastian; Jaeger, Magnus S; Nahmias, Yaakov

    2015-08-01

    Microfluidic applications range from combinatorial synthesis to high throughput screening, with platforms integrating analog perfusion components, digitally controlled micro-valves and a range of sensors that demand a variety of communication protocols. Currently, discrete control units are used to regulate and monitor each component, resulting in scattered control interfaces that limit data integration and synchronization. Here, we present a microprocessor-based control unit, utilizing the MS Gadgeteer open framework that integrates all aspects of microfluidics through a high-current electronic circuit that supports and synchronizes digital and analog signals for perfusion components, pressure elements, and arbitrary sensor communication protocols using a plug-and-play interface. The control unit supports an integrated touch screen and TCP/IP interface that provides local and remote control of flow and data acquisition. To establish the ability of our control unit to integrate and synchronize complex microfluidic circuits we developed an equi-pressure combinatorial mixer. We demonstrate the generation of complex perfusion sequences, allowing the automated sampling, washing, and calibrating of an electrochemical lactate sensor continuously monitoring hepatocyte viability following exposure to the pesticide rotenone. Importantly, integration of an optical sensor allowed us to implement automated optimization protocols that require different computational challenges including: prioritized data structures in a genetic algorithm, distributed computational efforts in multiple-hill climbing searches and real-time realization of probabilistic models in simulated annealing. Our system offers a comprehensive solution for establishing optimization protocols and perfusion sequences in complex microfluidic circuits.

  15. The Automation of Stochastization Algorithm with Use of SymPy Computer Algebra Library

    Science.gov (United States)

    Demidova, Anastasya; Gevorkyan, Migran; Kulyabov, Dmitry; Korolkova, Anna; Sevastianov, Leonid

    2018-02-01

    SymPy computer algebra library is used for automatic generation of ordinary and stochastic systems of differential equations from the schemes of kinetic interaction. Schemes of this type are used not only in chemical kinetics but also in biological, ecological and technical models. This paper describes the automatic generation algorithm with an emphasis on application details.
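
    A minimal version of such generation can be written with SymPy directly: given a kinetic scheme as input/output stoichiometries with rate expressions, the deterministic ODE right-hand side is the stoichiometry-weighted sum of rates. The toy scheme below (a Lotka-Volterra-like system) and all names are illustrative, not the paper's implementation.

    import sympy as sp

    X, Y, k1, k2, k3 = sp.symbols("X Y k1 k2 k3", positive=True)
    species = [X, Y]
    # Each reaction: (input stoichiometry, output stoichiometry, rate law).
    reactions = [
        ({X: 1}, {X: 2}, k1 * X),            # X -> 2X
        ({X: 1, Y: 1}, {Y: 2}, k2 * X * Y),  # X + Y -> 2Y
        ({Y: 1}, {}, k3 * Y),                # Y -> 0
    ]

    def drift_vector(species, reactions):
        """Assemble d[s]/dt = sum over reactions of (out - in) * rate."""
        rhs = []
        for s in species:
            expr = sp.Integer(0)
            for inp, out, rate in reactions:
                expr += (out.get(s, 0) - inp.get(s, 0)) * rate
            rhs.append(sp.expand(expr))
        return rhs

    for s, eq in zip(species, drift_vector(species, reactions)):
        print(f"d{s}/dt =", eq)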

  16. Design and Implementation of the Automated Rendezvous Targeting Algorithms for Orion

    Science.gov (United States)

    DSouza, Christopher; Weeks, Michael

    2010-01-01

    The Orion vehicle will be designed to perform several rendezvous missions: rendezvous with the ISS in Low Earth Orbit (LEO), rendezvous with the EDS/Altair in LEO, a contingency rendezvous with the ascent stage of the Altair in Low Lunar Orbit (LLO) and a contingency rendezvous in LLO with the ascent and descent stage in the case of an aborted lunar landing. It is therefore not difficult to see that each of these scenarios imposes different operational, timing, and performance constraints on the GNC system. To this end, a suite of on-board guidance and targeting algorithms has been designed to meet the requirement to perform the rendezvous independent of communications with the ground. This capability is particularly relevant for the lunar missions, some of which may occur on the far side of the moon. This paper describes these algorithms, which are structured and arranged so as to be flexible and able to safely perform a wide variety of rendezvous trajectories. The goal of the algorithms is not merely to fly one specific type of canned rendezvous profile; rather, the suite was designed from the start to be general enough that any type of trajectory profile can be flown (i.e., a coelliptic profile, a stable orbit rendezvous profile, an expedited LLO rendezvous profile, etc.), all using the same rendezvous suite of algorithms. Each of these profiles makes use of maneuver types which have been designed with the dual goals of robustness and performance. They are designed to converge quickly under dispersed conditions and to perform many of the functions performed on the ground today. The targeting algorithms consist of a phasing maneuver (NC), an altitude adjust maneuver (NH), a plane change maneuver (NPC), a coelliptic maneuver (NSR), a Lambert targeted maneuver, and several multiple-burn targeted maneuvers which combine one or more of these algorithms. The derivation and implementation of each of these

  17. Experience with Group Supervision

    OpenAIRE

    Chen, Weiqin

    2006-01-01

    Supervision can take a few different forms. For example, it can be one-to-one supervision, or it can be group supervision. Group supervision is an important process within the scientific community. Many research groups use this form to supervise doctoral and master's students in groups. Some efforts have been made to study this process. For example, Samara (2002) studied the group supervision process in group writing. However, group supervision has not been studied thorough...

  18. Towards an intercomparison of automated registration algorithms for multiple source remote sensing data

    Science.gov (United States)

    LeMoigne, Jacqueline; Xia, Wei; Chettri, Samir; El-Ghazawi, Tarek; Kaymaz, Emre; Lerner, Bao-Ting; Mareboyana, Manohar; Netanyahu, Nathan; Pierce, John; Raghavan, Srini; et al.

    1997-01-01

    The first step in the integration of multiple data is registration, either relative image-to-image registration or absolute geo-registration, to a map or a fixed coordinate system. As the need for automating registration techniques is recognized, we feel that there is a need to survey all the registration methods which may be applicable to Earth and space science problems and to evaluate their performances on a large variety of existing remote sensing data as well as on simulated data of soon-to-be-flown instruments. In this paper we will describe: 1) the operational toolbox which we are developing and which will consist of some of the most important registration techniques; and 2) the quantitative intercomparison of the different methods, which will allow a user to select the desired registration technique based on this evaluation and the visualization of the registration results.

  19. Automated EEG detection algorithms and clinical semiology in epilepsy: importance of correlations.

    Science.gov (United States)

    Hogan, R Edward

    2011-12-01

    With advances in technological innovation, electroencephalography has remained the gold standard for classification and localization of epileptic seizures. Like other diagnostic modalities, technological advances have opened new avenues for assessment of data, and hold great promise to improve interpretive capabilities. However, proper overall interpretation and application of electroencephalographic findings relies on valid correlations of associated clinical semiology. This article addresses interpretation of clinical signs and symptoms in the context of the diagnostic predictive value of electroencephalographic, clinical, and electrographic definitions of seizures, and upcoming challenges of interpreting intracranial high-frequency electroencephalographic data. This article is part of a Supplemental Special Issue entitled The Future of Automated Seizure Detection and Prediction. Copyright © 2011 Elsevier Inc. All rights reserved.

  20. A thesis on the Development of an Automated SWIFT Edge Detection Algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Trujillo, Christopher J. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-07-28

    Throughout the world, scientists and engineers, such as those at Los Alamos National Laboratory, perform research and testing aimed at advancing technology and understanding the nature of materials. With this testing comes a need for advanced methods of data acquisition and, most importantly, a means of analyzing and extracting the necessary information from the acquired data. In this thesis, I aim to produce an automated method implementing advanced image processing techniques and tools to analyze SWIFT image datasets for Detonator Technology at Los Alamos National Laboratory. An effective method for edge detection and point extraction is advantageous in analyzing such unique datasets and provides consistency in the results produced.

  1. Reconfirmation algorithms should be the standard of care in automated external defibrillators.

    Science.gov (United States)

    Faddy, Steven C

    2006-03-01

    Non-sustained and self-terminating arrhythmias pose a significant challenge during resuscitation. Delivery of a defibrillation shock to a non-shockable rhythm has a poorly understood effect on the heart. Assessing the rhythm right up until the delivery of a shock is especially important when "blind" shocks are being delivered by automatic defibrillators or minimally trained rescuers. Reconfirmation algorithms are common in current-generation implantable defibrillators, but this investigation of current-generation AEDs shows that only 71% of devices presently available have reconfirmation algorithms. A case of spontaneous reversion of a non-sustained arrhythmia is presented. The implications of delivering a defibrillator shock to a non-shockable rhythm are discussed.

  2. An algorithm for automating the registration of USDA segment ground data to LANDSAT MSS data

    Science.gov (United States)

    Graham, M. H. (Principal Investigator)

    1981-01-01

    The algorithm is referred to as the Automatic Segment Matching Algorithm (ASMA). The ASMA uses control points or the annotation record of a P-format LANDSAT computer compatible tape as the initial registration to relate latitude and longitude to LANDSAT rows and columns. It searches a given area of LANDSAT data with a 2x2 sliding window and computes gradient values for bands 5 and 7 to match the segment boundaries. The gradient values are held in memory during the shifting (or matching) process. The reconstructed segment array, containing ones (1's) for boundaries and zeros elsewhere, is then compared by computer to the LANDSAT array and the best match computed. Initial testing of the ASMA indicates that it has good potential for replacing the manual technique.

  3. Integration of symbolic and algorithmic hardware and software for the automation of space station subsystems

    Science.gov (United States)

    Gregg, Hugh; Healey, Kathleen; Hack, Edmund; Wong, Carla

    1988-01-01

    Expert systems that require access to databases, complex simulations and real-time instrumentation have both symbolic and algorithmic needs. Both of these needs could be met using a general purpose workstation running both symbolic and algorithmic codes, or separate, specialized computers networked together. The latter approach was chosen to implement TEXSYS, the thermal expert system, developed by the NASA Ames Research Center in conjunction with the Johnson Space Center to demonstrate the ability of an expert system to autonomously monitor the thermal control system of the space station. TEXSYS has been implemented on a Symbolics workstation, and will be linked to a microVAX computer that will control a thermal test bed. The integration options and several possible solutions are presented.

  4. An automated tetrahedral mesh generator for computer simulation in Odontology based on the Delaunay's algorithm

    Directory of Open Access Journals (Sweden)

    Mauro Massayoshi Sakamoto

    2008-01-01

    Full Text Available In this work, a software package based on Delaunay's algorithm is described. The main feature of this package is its capability to discretize geometric domains of teeth, taking into account their complex inner structures and materials with different hardness. Usually, the mesh generators reported in the literature treat molars and other teeth using simplified geometric models, or even consider the teeth to be homogeneous structures.

  5. Algorithm for Automated Mapping of Land Surface Temperature Using LANDSAT 8 Satellite Data

    OpenAIRE

    Ugur Avdan; Gordana Jovanovska

    2016-01-01

    Land surface temperature is an important factor in many areas, such as global climate change, hydrological, geo-/biophysical, and urban land use/land cover. As the latest launched satellite from the LANDSAT family, LANDSAT 8 has opened new possibilities for understanding the events on the Earth with remote sensing. This study presents an algorithm for the automatic mapping of land surface temperature from LANDSAT 8 data. The tool was developed using the LANDSAT 8 thermal infrared sensor Band ...

  6. An Automated Processing Algorithm for Flat Areas Resulting from DEM Filling and Interpolation

    Directory of Open Access Journals (Sweden)

    Xingwei Liu

    2017-11-01

    Full Text Available Correction of digital elevation models (DEMs) for flat areas is a critical process for hydrological analyses and modeling, such as the determination of flow directions and accumulations, and the delineation of drainage networks and sub-basins. In this study, a new algorithm is proposed for flat correction/removal. It uses the puddle delineation (PD) program to identify depressions (including their centers and overflow/spilling thresholds), compute topographic characteristics, and further fill the depressions. Three different levels of elevation increments are used for flat correction. The first and second levels of increments create flows toward the thresholds and centers of the filled depressions or flats, while the third level of small random increments is introduced to cope with multiple-threshold conditions. A set of artificial surfaces and two real-world landscapes were selected to test the new algorithm. The results showed that the proposed method was not limited by the shapes, the number of thresholds, and the surrounding topographic conditions of flat areas. Compared with the traditional methods, the new algorithm simplified the flat correction procedure and reduced the final elevation increments by 5.71–33.33%. This can be used to effectively remove/correct topographic flats and create flat-free DEMs.
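
    A single-level simplification of the increment idea is sketched below: flat cells are raised in proportion to their grid distance from the spill cell, so that flow drains toward it. The multi-level increments and random tie-breaking of the actual algorithm are omitted, and all names are illustrative.

    import numpy as np
    from collections import deque

    def drain_flat(dem, flat_mask, spill, eps=1e-5):
        """Raise each flat cell by eps per step of breadth-first distance
        from the spill cell (simplified one-level flat correction)."""
        dist = np.full(dem.shape, -1, int)
        dist[spill] = 0
        queue = deque([spill])
        while queue:
            r, c = queue.popleft()
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if (0 <= nr < dem.shape[0] and 0 <= nc < dem.shape[1]
                        and flat_mask[nr, nc] and dist[nr, nc] < 0):
                    dist[nr, nc] = dist[r, c] + 1
                    queue.append((nr, nc))
        out = dem.astype(float)               # astype returns a copy
        out[dist > 0] += eps * dist[dist > 0]
        return out

    dem = np.ones((3, 4))                     # a perfectly flat patch
    print(drain_flat(dem, np.ones((3, 4), bool), spill=(1, 0)))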

  7. Algorithm for Automated Mapping of Land Surface Temperature Using LANDSAT 8 Satellite Data

    Directory of Open Access Journals (Sweden)

    Ugur Avdan

    2016-01-01

    Full Text Available Land surface temperature is an important factor in many areas, such as global climate change, hydrological, geo-/biophysical, and urban land use/land cover. As the latest launched satellite from the LANDSAT family, LANDSAT 8 has opened new possibilities for understanding the events on the Earth with remote sensing. This study presents an algorithm for the automatic mapping of land surface temperature from LANDSAT 8 data. The tool was developed using the LANDSAT 8 thermal infrared sensor Band 10 data. Different methods and formulas were used in the algorithm that successfully retrieves the land surface temperature to help us study the thermal environment of the ground surface. To verify the algorithm, the land surface temperature and the near-air temperature were compared. The results showed that, for the first case, the standard deviation was 2.4°C, and for the second case, it was 2.7°C. For future studies, the tool should be refined with in situ measurements of land surface temperature.
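
    The usual single-channel retrieval chain (digital number to radiance, radiance to brightness temperature, emissivity correction) can be sketched as follows. The rescaling factors and K1/K2 calibration constants must be read from the scene's MTL metadata; the values below are typical Band 10 constants, and the fixed emissivity is a placeholder for the paper's emissivity estimation.

    import math

    def landsat8_lst_celsius(dn, ml=3.342e-4, al=0.1,
                             k1=774.8853, k2=1321.0789,
                             emissivity=0.97, wavelength_um=10.895):
        radiance = ml * dn + al                    # TOA spectral radiance
        bt = k2 / math.log(k1 / radiance + 1.0)    # brightness temperature (K)
        rho = 1.438e-2                             # h*c/sigma (m K)
        lam = wavelength_um * 1e-6
        lst = bt / (1.0 + (lam * bt / rho) * math.log(emissivity))
        return lst - 273.15

    print(round(landsat8_lst_celsius(dn=28000), 2))   # illustrative pixel value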

  8. Automated process flowsheet synthesis for membrane processes using genetic algorithm: role of crossover operators

    KAUST Repository

    Shafiee, Alireza

    2016-06-25

    In optimization-based process flowsheet synthesis, optimization methods, including genetic algorithms (GA), are used as advantageous tools to select a high performance flowsheet by 'screening' large numbers of possible flowsheets. In this study, we expand the role of the GA to include flowsheet generation, through proposing a modified Greedy sub-tour crossover operator. Performance of the proposed crossover operator is compared with four other commonly used operators. The proposed GA optimization-based process synthesis method is applied to generate the optimum process flowsheet for a multicomponent membrane-based CO2 capture process. Within defined constraints and using the random-point crossover, a CO2 purity of 0.827 (equivalent to 0.986 on a dry basis) is achieved, which is a 3.4% improvement over the simplest crossover operator applied. In addition, the least variability in the converged flowsheet and CO2 purity is observed for the random-point crossover operator, which approximately implies closeness of the solution to the global optimum, and hence the consistency of the algorithm. The proposed crossover operator is found to improve the convergence speed of the algorithm by 77.6%.
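
    For comparison with the modified Greedy sub-tour operator, the random-point crossover referred to above is the classic single-cut recombination; a toy sketch on integer-encoded flowsheets (the encoding here is an assumption):

    import random

    def random_point_crossover(parent_a, parent_b):
        """Single random cut point; swap tails between the two parents."""
        point = random.randrange(1, len(parent_a))
        return (parent_a[:point] + parent_b[point:],
                parent_b[:point] + parent_a[point:])

    random.seed(4)
    print(random_point_crossover([1, 2, 3, 4, 5], [6, 7, 8, 9, 10]))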

  9. Automated estimation of mass eruption rate of volcanic eruption on satellite imagery using a cloud pattern recognition algorithm

    Science.gov (United States)

    Pouget, Solene; Jansons, Emile; Bursik, Marcus; Tupper, Andrew; Patra, Abani; Pitman, Bruce; Carn, Simon

    2014-05-01

    The need to detect and track the position of ash in the atmosphere has been highlighted in the past few years following the eruption of Eyjafjallajokull. As a result, Volcanic Ash Advisory Centers (VAACs) are using Volcanic Ash Transport and Dispersion (VATD) models to estimate and predict the whereabouts of the ash in the atmosphere. However, these models require inputs of eruption source parameters, such as the mass eruption rate (MER), and wind fields, which are vital to properly model the ash movements. These inputs might change with time as the eruption enters different phases. This implies tracking the ash movement as conditions change and new satellite imagery comes in. Thus, ultimately, the eruption must be detectable regardless of changing eruption source and meteorological conditions. Volcanic cloud recognition can be particularly challenging, especially when meteorological clouds are present, which is typically the case in the tropics. Given that a large fraction of the eruptions in the world happen in a tropical environment, we have based an automated volcanic cloud recognition algorithm on the fact that meteorological clouds and volcanic clouds behave differently. As a result, the pattern definition algorithm detects and defines volcanic clouds as different object types from meteorological clouds on satellite imagery. Following detection and definition, the algorithm then estimates the area covered by the ash. The area is then analyzed with respect to a plume growth rate methodology to get an estimate of the volumetric and mass growth with time. This way, we were able to get an estimate of the MER with time, as plume growth is dependent on MER. To test our approach, we used the examples of two eruptions of different source strength, in two different climatic regimes, and for which therefore the weather during the eruption was quite different: Manam (Papua New Guinea) on January 27, 2005, which produced a stratospheric umbrella cloud and was

  10. Algorithms

    Indian Academy of Sciences (India)

    In the description of algorithms and programming languages, what is the role of control abstraction? What are the inherent limitations of algorithmic processes? In future articles in this series, we will show that these constructs are powerful and can be used to encode any algorithm. In the next article, we will discuss ...

  11. Evaluation of automated fundus photograph analysis algorithms for detecting microaneurysms, haemorrhages and exudates, and of a computer-assisted diagnostic system for grading diabetic retinopathy.

    Science.gov (United States)

    Dupas, B; Walter, T; Erginay, A; Ordonez, R; Deb-Joardar, N; Gain, P; Klein, J-C; Massin, P

    2010-06-01

    This study aimed to evaluate automated fundus photograph analysis algorithms for the detection of primary lesions, and a computer-assisted diagnostic system for grading diabetic retinopathy (DR) and the risk of macular edema (ME). Two prospective analyses were conducted on fundus images from diabetic patients. Automated detection of microaneurysms and exudates was applied to two small image databases on which these lesions were manually marked. A computer-assisted diagnostic system for the detection and grading of DR and the risk of ME was then developed and evaluated, using a large database containing both normal and pathological images, and compared with manual grading. The algorithm for the automated detection of microaneurysms demonstrated a sensitivity of 88.5%, with an average of 2.13 false positives per image. The pixel-based evaluation of the algorithm for automated detection of exudates had a sensitivity of 92.8% and a positive predictive value of 92.4%. Combined automated grading of DR and risk of ME was performed on 761 images from a large database. For DR detection, the sensitivity and specificity of the algorithm were 83.9% and 72.7%, respectively, and, for detection of the risk of ME, the sensitivity and specificity were 72.8% and 70.8%, respectively. This study shows that the previously published algorithms for computer-aided diagnosis are a reliable alternative to time-consuming manual analysis of fundus photographs when screening for DR. The use of this system would allow considerable time savings for physicians and, therefore, reduce the time spent on a mass-screening programme. Copyright 2010 Elsevier Masson SAS. All rights reserved.

  12. Automated chart review utilizing natural language processing algorithm for asthma predictive index.

    Science.gov (United States)

    Kaur, Harsheen; Sohn, Sunghwan; Wi, Chung-Il; Ryu, Euijung; Park, Miguel A; Bachman, Kay; Kita, Hirohito; Croghan, Ivana; Castro-Rodriguez, Jose A; Voge, Gretchen A; Liu, Hongfang; Juhn, Young J

    2018-02-13

    Thus far, no algorithms have been developed to automatically extract patients who meet Asthma Predictive Index (API) criteria from electronic health records (EHRs). Our objective was to develop and validate a natural language processing (NLP) algorithm to identify patients who meet API criteria. This is a cross-sectional study nested in a birth cohort study in Olmsted County, MN. Asthma status ascertained by manual chart review based on API criteria served as the gold standard. NLP-API was developed on a training cohort (n = 87) and validated on a test cohort (n = 427). Criterion validity was measured by the sensitivity, specificity, positive predictive value and negative predictive value of the NLP algorithm against manual chart review for asthma status. Construct validity was determined by associations of asthma status defined by NLP-API with known risk factors for asthma. Among the eligible 427 subjects of the test cohort, 48% were male and 74% were White. Median age was 5.3 years (interquartile range 3.6-6.8). 35 (8%) had a history of asthma by NLP-API vs. 36 (8%) by abstractor, with 31 identified by both approaches. NLP-API predicted asthma status with sensitivity 86%, specificity 98%, positive predictive value 88%, and negative predictive value 98%. Asthma status by both NLP and manual chart review was significantly associated with known asthma risk factors, such as history of allergic rhinitis, eczema, family history of asthma, and maternal history of smoking during pregnancy, and the effect sizes were similar between the two reviews (4.4 vs. 4.2, respectively). NLP-API was able to ascertain asthma status in children by mining the EHR and has the potential to enhance asthma care and research through population management and large-scale studies when identifying children who meet API criteria.
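
    The criterion-validity figures follow directly from the 2x2 table implied by the reported counts (31 cases flagged by both methods, 35 by NLP-API, 36 by the abstractor, 427 subjects in total); a small check:

    def diagnostic_metrics(tp, fp, fn, tn):
        """Sensitivity, specificity, PPV and NPV from a 2x2 table."""
        return {"sensitivity": tp / (tp + fn), "specificity": tn / (tn + fp),
                "ppv": tp / (tp + fp), "npv": tn / (tn + fn)}

    # 31 by both, 4 NLP-only, 5 abstractor-only, 387 by neither.
    print(diagnostic_metrics(tp=31, fp=4, fn=5, tn=387))
    # -> roughly 0.86, 0.99, 0.89, 0.99, in line with the reported 86/98/88/98%.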

  13. AUTOMATION OF CALCULATION ALGORITHMS FOR EFFICIENCY ESTIMATION OF TRANSPORT INFRASTRUCTURE DEVELOPMENT

    Directory of Open Access Journals (Sweden)

    Sergey Kharitonov

    2015-06-01

    Full Text Available Optimal transport infrastructure usage is an important aspect of the development of the national economy of the Russian Federation. Development of instruments for assessing the efficiency of infrastructure is impossible without constant monitoring of a number of significant indicators. This work is devoted to the selection of such indicators and the method of their calculation in relation to a particular transport subsystem, airport infrastructure. The work also evaluates how algorithmic computational mechanisms can improve the tools of public administration of transport subsystems.

  14. An automated sawtooth detection algorithm for strongly varying plasma conditions and crash characteristics

    Science.gov (United States)

    Gude, A.; Maraschek, M.; Kardaun, O.; the ASDEX Upgrade Team

    2017-09-01

    A sawtooth crash algorithm that can automatically detect irregular sawteeth with strongly varying crash characteristics, including inverted crashes with central signal increase, has been developed. Such sawtooth behaviour is observed in ASDEX Upgrade with its tungsten wall, especially in phases with central ECRH. This application of ECRH for preventing impurity accumulation is envisaged also for ITER. The detection consists of three steps: a sensitive edge detection, a multichannel combination to increase detection performance, and a profile analysis that tests generic sawtooth crash features. The effect of detection parameters on the edge detection results has been investigated using synthetic signals and tested in an application to ASDEX Upgrade soft x-ray data.

  15. An automated algorithm to identify and reject artefacts for quantitative EEG analysis during sleep in patients with sleep-disordered breathing.

    Science.gov (United States)

    D'Rozario, Angela L; Dungan, George C; Banks, Siobhan; Liu, Peter Y; Wong, Keith K H; Killick, Roo; Grunstein, Ronald R; Kim, Jong Won

    2015-05-01

    Large quantities of neurophysiological electroencephalogram (EEG) data are routinely collected in the sleep laboratory. These are underutilised due to the burden of managing artefact contamination. The aim of this study was to develop a new tool for automated artefact rejection that facilitates subsequent quantitative analysis of sleep EEG data collected during routine overnight polysomnography (PSG) in subjects with and without sleep-disordered breathing (SDB). We evaluated the accuracy of an automated algorithm to detect sleep EEG artefacts against artefacts manually scored by three experienced technologists (reference standard) in 40 PSGs. Spectral power was computed using artefact-free EEG data derived from (1) the reference standard, (2) the algorithm and (3) raw EEG without any prior artefact rejection. The algorithm showed a high level of accuracy of 94.3, 94.7 and 95.8% for detecting artefacts during the entire PSG, NREM sleep and REM sleep, respectively. The algorithm's detection showed good to moderate sensitivity and excellent specificity during sleep. The EEG spectral power for the reference standard and the algorithm was significantly lower than that of the raw, unprocessed EEG signal. These preliminary findings support an automated way to process EEG artefacts during sleep, providing the opportunity to investigate EEG-based markers of neurobehavioural impairment in sleep disorders in future studies.
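
    A toy version of threshold-based artefact rejection followed by spectral power estimation is sketched below with SciPy; the amplitude rule and all parameters are illustrative, not the published algorithm.

    import numpy as np
    from scipy.signal import welch

    def clean_and_power(eeg, fs, epoch_s=5.0, amp_limit=250.0):
        """Drop epochs whose peak amplitude exceeds the limit, then
        compute Welch spectral power on the remaining data."""
        n = int(epoch_s * fs)
        epochs = [eeg[k:k + n] for k in range(0, len(eeg) - n + 1, n)]
        clean = [e for e in epochs if np.abs(e).max() < amp_limit]
        freqs, pxx = welch(np.concatenate(clean), fs=fs, nperseg=n)
        return len(clean), len(epochs), freqs, pxx

    rng = np.random.default_rng(1)
    sig = rng.normal(scale=20, size=30 * 256)   # 30 s of synthetic "EEG" at 256 Hz
    sig[1000:1010] += 500                       # injected movement artefact
    kept, total, freqs, pxx = clean_and_power(sig, fs=256)
    print(kept, "of", total, "epochs kept")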

  16. The automated reference toolset: A soil-geomorphic ecological potential matching algorithm

    Science.gov (United States)

    Nauman, Travis; Duniway, Michael C.

    2016-01-01

    Ecological inventory and monitoring data need referential context for interpretation. Identification of appropriate reference areas of similar ecological potential for site comparison is demonstrated using a newly developed automated reference toolset (ART). Foundational to identification of reference areas was a soil map of particle size in the control section (PSCS), a theme in US Soil Taxonomy. A 30-m resolution PSCS map of the Colorado Plateau (366,000 km2) was created by interpolating ∼5000 field soil observations using a random forest model and a suite of raster environmental spatial layers representing topography, climate, general ecological community, and satellite imagery ratios. The PSCS map had an overall out-of-bag accuracy of 61.8% (Kappa of 0.54, p < 0.0001) and an independent validation accuracy of 93.2% at a set of 356 field plots along the southern edge of Canyonlands National Park, Utah. The ART process was also tested at these plots, and matched plots with the same ecological sites (ESs) 67% of the time where sites fell within 2-km buffers of each other. These results show that the PSCS and ART have strong application for ecological monitoring and sampling design, as well as for assessing impacts of disturbance and land management action using an ecological potential framework. Results also demonstrate that PSCS could be a key mapping layer for the USDA-NRCS provisional ES development initiative.
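
    The interpolation step can be pictured with scikit-learn's random forest, which also reports the out-of-bag accuracy quoted above; the data below are synthetic stand-ins, not the study's observations:

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        # One row per field soil observation, columns = raster covariates
        # sampled at that point (topography, climate, community, imagery
        # ratios); y = PSCS class. Purely synthetic, for illustration.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(5000, 12))
        y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

        rf = RandomForestClassifier(n_estimators=500, oob_score=True, n_jobs=-1)
        rf.fit(X, y)
        print("out-of-bag accuracy:", round(rf.oob_score_, 3))
        # a wall-to-wall map then comes from rf.predict over every raster pixel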

  17. Automated Software Acceleration in Programmable Logic for an Efficient NFFT Algorithm Implementation: A Case Study.

    Science.gov (United States)

    Rodríguez, Manuel; Magdaleno, Eduardo; Pérez, Fernando; García, Cristhian

    2017-03-28

    Non-equispaced fast Fourier transform (NFFT) is a very important algorithm in several technological and scientific areas such as synthetic aperture radar, computational photography, medical imaging, telecommunications, seismic analysis and so on. However, its computational complexity is high. In this paper, we describe an efficient NFFT implementation with a hardware coprocessor using an All-Programmable System-on-Chip (APSoC). This is a hybrid device that employs an Advanced RISC Machine (ARM) as Processing System with Programmable Logic for high-performance digital signal processing through parallelism and pipeline techniques. The algorithm has been coded in C language with pragma directives to optimize the architecture of the system. We have used the novel Software Defined System-on-Chip (SDSoC) development tool, which simplifies the interface and partitioning between hardware and software. This provides shorter development cycles and iterative improvements by exploring several architectures of the global system. The computational results show that hardware acceleration significantly outperformed the software-based implementation.
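
    For orientation, the transform being accelerated can be written directly; this O(NM) reference evaluation (plain NumPy, not the paper's C/FPGA implementation) is what the NFFT approximates in O(N log N) using an oversampled FFT plus window functions:

        import numpy as np

        def ndft(x, f_hat):
            """Direct non-equispaced DFT: evaluate
            sum_k f_hat[k] * exp(-2*pi*i * k * x_j)
            at arbitrary nodes x_j in [-0.5, 0.5)."""
            n = len(f_hat)
            k = np.arange(-n // 2, n // 2)
            return np.exp(-2j * np.pi * np.outer(x, k)) @ f_hat

        x = np.sort(np.random.rand(64) - 0.5)      # non-equispaced sample nodes
        print(ndft(x, np.ones(32)).shape)          # (64,)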

  18. Image processing algorithms for automated analysis of GMR data from inspection of multilayer structures

    Science.gov (United States)

    Karpenko, Oleksii; Safdernejad, Seyed; Dib, Gerges; Udpa, Lalita; Udpa, Satish; Tamburrino, Antonello

    2015-03-01

    Eddy current (EC) probes with Giant Magnetoresistive (GMR) sensors have recently emerged as a promising tool for rapid scanning of multilayer aircraft panels to detect cracks under fastener heads. However, analysis of GMR data is challenging due to the complexity of the sensed magnetic fields. Further, probes that induce unidirectional currents are insensitive to cracks parallel to the current flow. In this paper, signal processing algorithms are developed for mixing data from two orthogonal EC-GMR scans in order to generate pseudo-rotating electromagnetic field images of fasteners with bottom-layer cracks. Finite element simulations demonstrate that the normal component of the numerically computed rotating field has uniform sensitivity to cracks emanating in all radial directions. The concept of pseudo-rotating field imaging is experimentally validated with the help of the MAUS bilateral GMR array (Big-MR) designed by Boeing.
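
    A sketch of the mixing idea, assuming NumPy: the two orthogonal scans are combined as components of a field rotated through a sweep of angles, and the per-pixel maximum response is kept:

        import numpy as np

        def rotating_field_max(bx, by, n_angles=64):
            """Combine two orthogonal EC-GMR scans into a pseudo-rotating
            field image. bx, by: 2-D arrays of the normal field component
            from the two scans. Taking the per-pixel maximum over excitation
            angles gives uniform sensitivity to cracks in all radial
            directions."""
            thetas = np.linspace(0.0, np.pi, n_angles, endpoint=False)
            stack = [bx * np.cos(t) + by * np.sin(t) for t in thetas]
            return np.max(np.abs(stack), axis=0)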

  19. Automated cross-identifying radio to infrared surveys using the LRPY algorithm: a case study

    Science.gov (United States)

    Weston, S. D.; Seymour, N.; Gulyaev, S.; Norris, R. P.; Banfield, J.; Vaccari, M.; Hopkins, A. M.; Franzen, T. M. O.

    2018-02-01

    Cross-identifying complex radio sources with optical or infrared (IR) counterparts in surveys such as the Australia Telescope Large Area Survey (ATLAS) has traditionally been performed manually. However, with new surveys from the Australian Square Kilometre Array Pathfinder detecting many tens of millions of radio sources, such an approach is no longer feasible. This paper presents new software (LRPY - Likelihood Ratio in PYTHON) to automate the process of cross-identifying radio sources with catalogues at other wavelengths. LRPY implements the likelihood ratio (LR) technique with a modification to account for two galaxies contributing to a sole measured radio component. We demonstrate LRPY by applying it to ATLAS DR3 and a Spitzer-based multiwavelength fusion catalogue, identifying 3848 matched sources via our LR-based selection criteria. A subset of 1987 sources have flux density values for all IRAC bands, which allow us to use criteria to distinguish between active galactic nuclei (AGNs) and star-forming galaxies (SFGs). We find that 936 radio sources (≈47 per cent) meet both the Lacy and Stern AGN selection criteria. Of the matched sources, 295 have spectroscopic redshifts and we examine the radio-to-IR flux ratio versus redshift, proposing an AGN selection criterion below the Elvis radio-loud AGN limit for this dataset. Taking the union of all three AGN selection criteria, we identify 956 as AGNs (≈48 per cent). From this dataset, we find a decreasing fraction of AGNs with lower radio flux densities, consistent with other results in the literature.
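
    LRPY itself is not reproduced here, but the underlying LR technique is standard; a sketch with a Gaussian positional-error model (the record's two-galaxy modification is omitted, and q_m/n_m are user-supplied distributions):

        import numpy as np

        def likelihood_ratio(m, r, q_m, n_m, sigma):
            """Standard LR for radio-IR cross-matching: LR = q(m) f(r) / n(m).

            m: candidate IR magnitude, r: radio-IR separation (arcsec),
            q_m(m): magnitude distribution of true counterparts,
            n_m(m): surface density of background sources at magnitude m,
            sigma: combined positional uncertainty (arcsec)."""
            f_r = np.exp(-r**2 / (2 * sigma**2)) / (2 * np.pi * sigma**2)
            return q_m(m) * f_r / n_m(m)

        # usage (q_interp, n_interp are hypothetical fitted distributions):
        # lr = likelihood_ratio(m=18.2, r=1.1, q_m=q_interp, n_m=n_interp, sigma=0.8)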

  20. The Pandora multi-algorithm approach to automated pattern recognition of cosmic-ray muon and neutrino events in the MicroBooNE detector

    CERN Document Server

    Acciarri, R.; An, R.; Anthony, J.; Asaadi, J.; Auger, M.; Bagby, L.; Balasubramanian, S.; Baller, B.; Barnes, C.; Barr, G.; Bass, M.; Bay, F.; Bishai, M.; Blake, A.; Bolton, T.; Camilleri, L.; Caratelli, D.; Carls, B.; Castillo Fernandez, R.; Cavanna, F.; Chen, H.; Church, E.; Cianci, D.; Cohen, E.; Collin, G. H.; Conrad, J. M.; Convery, M.; Crespo-Anadón, J. I.; Del Tutto, M.; Devitt, D.; Dytman, S.; Eberly, B.; Ereditato, A.; Escudero Sanchez, L.; Esquivel, J.; Fadeeva, A. A.; Fleming, B. T.; Foreman, W.; Furmanski, A. P.; Garcia-Gamez, D.; Garvey, G. T.; Genty, V.; Goeldi, D.; Gollapinni, S.; Graf, N.; Gramellini, E.; Greenlee, H.; Grosso, R.; Guenette, R.; Hackenburg, A.; Hamilton, P.; Hen, O.; Hewes, J.; Hill, C.; Ho, J.; Horton-Smith, G.; Hourlier, A.; Huang, E.-C.; James, C.; Jan de Vries, J.; Jen, C.-M.; Jiang, L.; Johnson, R. A.; Joshi, J.; Jostlein, H.; Kaleko, D.; Karagiorgi, G.; Ketchum, W.; Kirby, B.; Kirby, M.; Kobilarcik, T.; Kreslo, I.; Laube, A.; Li, Y.; Lister, A.; Littlejohn, B. R.; Lockwitz, S.; Lorca, D.; Louis, W. C.; Luethi, M.; Lundberg, B.; Luo, X.; Marchionni, A.; Mariani, C.; Marshall, J.; Martinez Caicedo, D. A.; Meddage, V.; Miceli, T.; Mills, G. B.; Moon, J.; Mooney, M.; Moore, C. D.; Mousseau, J.; Murrells, R.; Naples, D.; Nienaber, P.; Nowak, J.; Palamara, O.; Paolone, V.; Papavassiliou, V.; Pate, S. F.; Pavlovic, Z.; Piasetzky, E.; Porzio, D.; Pulliam, G.; Qian, X.; Raaf, J. L.; Rafique, A.; Rochester, L.; Rudolf von Rohr, C.; Russell, B.; Schmitz, D. W.; Schukraft, A.; Seligman, W.; Shaevitz, M. H.; Sinclair, J.; Smith, A.; Snider, E. L.; Soderberg, M.; Söldner-Rembold, S.; Soleti, S. R.; Spentzouris, P.; Spitz, J.; St. John, J.; Strauss, T.; Szelc, A. M.; Tagg, N.; Terao, K.; Thomson, M.; Toups, M.; Tsai, Y.-T.; Tufanli, S.; Usher, T.; Van De Pontseele, W.; Van de Water, R. G.; Viren, B.; Weber, M.; Wickremasinghe, D. A.; Wolbers, S.; Wongjirad, T.; Woodruff, K.; Yang, T.; Yates, L.; Zeller, G. P.; Zennamo, J.; Zhang, C.

    2017-01-01

    The development and operation of Liquid-Argon Time-Projection Chambers for neutrino physics has created a need for new approaches to pattern recognition in order to fully exploit the imaging capabilities offered by this technology. Whereas the human brain can excel at identifying features in the recorded events, it is a significant challenge to develop an automated, algorithmic solution. The Pandora Software Development Kit provides functionality to aid the design and implementation of pattern-recognition algorithms. It promotes the use of a multi-algorithm approach to pattern recognition, in which individual algorithms each address a specific task in a particular topology. Many tens of algorithms then carefully build up a picture of the event and, together, provide a robust automated pattern-recognition solution. This paper describes details of the chain of over one hundred Pandora algorithms and tools used to reconstruct cosmic-ray muon and neutrino events in the MicroBooNE detector. Metrics that assess the current pattern-recognition performance are presented for simulated MicroBooNE events, using a selection of final-state event topologies.

  1. The Pandora multi-algorithm approach to automated pattern recognition of cosmic-ray muon and neutrino events in the MicroBooNE detector

    Energy Technology Data Exchange (ETDEWEB)

    Acciarri, R.; et al.

    2017-08-10

    The development and operation of Liquid-Argon Time-Projection Chambers for neutrino physics has created a need for new approaches to pattern recognition in order to fully exploit the imaging capabilities offered by this technology. Whereas the human brain can excel at identifying features in the recorded events, it is a significant challenge to develop an automated, algorithmic solution. The Pandora Software Development Kit provides functionality to aid the design and implementation of pattern-recognition algorithms. It promotes the use of a multi-algorithm approach to pattern recognition, in which individual algorithms each address a specific task in a particular topology. Many tens of algorithms then carefully build up a picture of the event and, together, provide a robust automated pattern-recognition solution. This paper describes details of the chain of over one hundred Pandora algorithms and tools used to reconstruct cosmic-ray muon and neutrino events in the MicroBooNE detector. Metrics that assess the current pattern-recognition performance are presented for simulated MicroBooNE events, using a selection of final-state event topologies.

  2. An Automated Algorithm for Producing Land Cover Information from Landsat Surface Reflectance Data Acquired Between 1984 and Present

    Science.gov (United States)

    Rover, J.; Goldhaber, M. B.; Holen, C.; Dittmeier, R.; Wika, S.; Steinwand, D.; Dahal, D.; Tolk, B.; Quenzer, R.; Nelson, K.; Wylie, B. K.; Coan, M.

    2015-12-01

    Multi-year land cover mapping from remotely sensed data poses challenges. Producing land cover products at the spatial and temporal scales required for assessing longer-term trends in land cover change is typically a resource-limited process. A recently developed approach utilizes open source software libraries to automatically generate datasets, decision tree classifications, and data products while requiring minimal user interaction. Users are only required to supply coordinates for an area of interest, land cover from an existing source such as the National Land Cover Database, percent slope from a digital terrain model for the same area of interest, two target acquisition year-day windows, and the years of interest between 1984 and present. The algorithm queries the Landsat archive for Landsat data intersecting the area and dates of interest. Cloud-free pixels meeting the user's criteria are mosaicked to create composite images for training and applying the classifiers. Stratification of training data is determined by the user and redefined during an iterative process of reviewing classifiers and resulting predictions. The algorithm outputs include yearly land cover raster format data, graphics, and supporting databases for further analysis. Additional analytical tools are also incorporated into the automated land cover system and enable statistical analysis after data are generated. Applications tested include the impact of land cover change and water permanence. For example, land cover conversions in areas where shrubland and grassland were replaced by shale oil pads during hydrofracking of the Bakken Formation were quantified. Analysis of spatial and temporal changes in surface water included identifying wetlands in the Prairie Pothole Region of North Dakota with potential connectivity to ground water, indicating subsurface permeability and geochemistry.
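
    Two steps of such a pipeline, compositing cloud-free pixels and training a decision tree, can be sketched as follows (NumPy/scikit-learn; variable names are illustrative, not the system's):

        import numpy as np
        from sklearn.tree import DecisionTreeClassifier

        def composite(scenes, cloud_masks):
            """Median-composite the cloud-free pixels of all scenes in the
            target year-day window; scenes: (n, bands, rows, cols),
            cloud_masks: (n, rows, cols) booleans, True where cloudy."""
            stack = np.where(cloud_masks[:, None], np.nan, scenes.astype(float))
            return np.nanmedian(stack, axis=0)

        # Training pixels come from an existing product (e.g. NLCD classes)
        # plus percent slope; hypothetical names:
        # clf = DecisionTreeClassifier(max_depth=10).fit(train_features, train_labels)
        # yearly_cover = clf.predict(pixel_features).reshape(rows, cols)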

  3. Algorithms

    Indian Academy of Sciences (India)

    , i is referred to as the loop-index, 'stat-body' is any sequence of ... while i ≤ N do stat-body; i := i + 1; endwhile. The algorithm for sorting the numbers is described in Table 1 and the algorithmic steps on a list of 4 numbers are shown in Figure 1.

  4. SPEQTACLE: An automated generalized fuzzy C-means algorithm for tumor delineation in PET

    Energy Technology Data Exchange (ETDEWEB)

    Lapuyade-Lahorgue, Jérôme; Visvikis, Dimitris; Hatt, Mathieu, E-mail: hatt@univ-brest.fr [LaTIM, INSERM, UMR 1101, Brest 29609 (France); Pradier, Olivier [LaTIM, INSERM, UMR 1101, Brest 29609, France and Radiotherapy Department, CHRU Morvan, Brest 29609 (France); Cheze Le Rest, Catherine [DACTIM University of Poitiers, Nuclear Medicine Department, CHU Milétrie, Poitiers 86021 (France)

    2015-10-15

    Purpose: Accurate tumor delineation in positron emission tomography (PET) images is crucial in oncology. Although recent methods achieved good results, there is still room for improvement regarding tumors with complex shapes, low signal-to-noise ratio, and high levels of uptake heterogeneity. Methods: The authors developed and evaluated an original clustering-based method called spatial positron emission quantification of tumor—Automatic Lp-norm estimation (SPEQTACLE), based on the fuzzy C-means (FCM) algorithm with a generalization exploiting a Hilbertian norm to more accurately account for the fuzzy and non-Gaussian distributions of PET images. An automatic and reproducible estimation scheme of the norm on an image-by-image basis was developed. Robustness was assessed by studying the consistency of results obtained on multiple acquisitions of the NEMA phantom on three different scanners with varying acquisition parameters. Accuracy was evaluated using classification errors (CEs) on simulated and clinical images. SPEQTACLE was compared to another FCM implementation, fuzzy local information C-means (FLICM) and fuzzy locally adaptive Bayesian (FLAB). Results: SPEQTACLE demonstrated a level of robustness similar to FLAB (variability of 14% ± 9% vs 14% ± 7%, p = 0.15) and higher than FLICM (45% ± 18%, p < 0.0001), and improved accuracy with lower CE (14% ± 11%) over both FLICM (29% ± 29%) and FLAB (22% ± 20%) on simulated images. Improvement was significant for the more challenging cases with CE of 17% ± 11% for SPEQTACLE vs 28% ± 22% for FLAB (p = 0.009) and 40% ± 35% for FLICM (p < 0.0001). For the clinical cases, SPEQTACLE outperformed FLAB and FLICM (15% ± 6% vs 37% ± 14% and 30% ± 17%, p < 0.004). Conclusions: SPEQTACLE benefitted from the fully automatic estimation of the norm on a case-by-case basis. This promising approach will be extended to multimodal images and multiclass estimation in future developments.
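
    The per-image norm estimation is the paper's contribution and is not reproduced here; a plain generalized-FCM iteration with a fixed Lp-style exponent, as a rough NumPy sketch, is:

        import numpy as np

        def fcm(x, n_classes=2, m=2.0, p=2.0, n_iter=50):
            """Fuzzy C-means on 1-D intensities x, with an Lp-style distance
            in place of the usual squared difference. The automatic per-image
            norm estimation described in the record is not reproduced."""
            rng = np.random.default_rng(0)
            u = rng.dirichlet(np.ones(n_classes), size=len(x))   # memberships
            for _ in range(n_iter):
                w = u ** m
                c = w.T @ x / w.sum(axis=0)                      # class centroids
                d = np.abs(x[:, None] - c[None, :]) ** p + 1e-12 # Lp distances
                u = 1.0 / (d ** (1.0 / (m - 1.0)))               # FCM update
                u /= u.sum(axis=1, keepdims=True)                # renormalise rows
            return u, c

        # usage on a bimodal toy image:
        # u, c = fcm(np.concatenate([np.random.normal(0, 1, 500),
        #                            np.random.normal(5, 1, 500)]))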

  5. HER2 challenge contest: a detailed assessment of automated HER2 scoring algorithms in whole slide images of breast cancer tissues.

    Science.gov (United States)

    Qaiser, Talha; Mukherjee, Abhik; Reddy Pb, Chaitanya; Munugoti, Sai D; Tallam, Vamsi; Pitkäaho, Tomi; Lehtimäki, Taina; Naughton, Thomas; Berseth, Matt; Pedraza, Aníbal; Mukundan, Ramakrishnan; Smith, Matthew; Bhalerao, Abhir; Rodner, Erik; Simon, Marcel; Denzler, Joachim; Huang, Chao-Hui; Bueno, Gloria; Snead, David; Ellis, Ian O; Ilyas, Mohammad; Rajpoot, Nasir

    2018-01-01

    Evaluating expression of the human epidermal growth factor receptor 2 (HER2) by visual examination of immunohistochemistry (IHC) on invasive breast cancer (BCa) is a key part of the diagnostic assessment of BCa due to its recognized importance as a predictive and prognostic marker in clinical practice. However, visual scoring of HER2 is subjective, and consequently prone to interobserver variability. Given the prognostic and therapeutic implications of HER2 scoring, a more objective method is required. In this paper, we report on a recent automated HER2 scoring contest, held in conjunction with the annual PathSoc meeting in Nottingham in June 2016, aimed at systematically comparing and advancing the state-of-the-art artificial intelligence (AI)-based automated methods for HER2 scoring. The contest data set comprised digitized whole slide images (WSI) of sections from 86 cases of invasive breast carcinoma stained with both haematoxylin and eosin (H&E) and IHC for HER2. The contesting algorithms predicted scores of the IHC slides automatically for an unseen subset of the data set and the predicted scores were compared with the 'ground truth' (a consensus score from at least two experts). We also report on a simple 'Man versus Machine' contest for the scoring of HER2 and show that the automated methods could beat the pathology experts on this contest data set. This paper presents a benchmark for comparing the performance of automated algorithms for scoring of HER2. It also demonstrates the enormous potential of automated algorithms in assisting the pathologist with objective IHC scoring. © 2017 John Wiley & Sons Ltd.

  6. Development of Reinforcement Learning Algorithm for Automation of Slide Gate Check Structure in Canals

    Directory of Open Access Journals (Sweden)

    K. Shahverdi

    2016-02-01

    Full Text Available Introduction: Considering water shortage and weak management in the agricultural water sector, the performance of irrigation networks needs to be improved to make optimal use of water. Recently, intelligent management of water conveyance and delivery, and better control technologies, have been considered for improving the performance of irrigation networks and their operation. This requires mathematical models of the automatic control system and related structures, connected with hydrodynamic models. The main objective of this research is the development of a mathematical model of an RL upstream control algorithm inside the ICSS hydrodynamic model as a subroutine. Materials and Methods: In learning systems, a set of state-action rules called classifiers compete to control the system based on the system's receipt from the environment. Five main elements of RL can be identified: an agent, an environment, a policy, a reward function, and a simulator. The learner (decision-maker) is called the agent. The thing it interacts with, comprising everything outside the agent, is called the environment. The agent selects an action based on the existing state of the environment. When the agent takes an action and performs it on the environment, the environment goes to a new state and a reward is assigned accordingly. The agent and the environment continually interact to maximize the reward. The policy is a set of state-action pairs with higher rewards. It defines the agent's behavior and says which action must be taken in which state. The reward function defines the goal in an RL problem: it defines what the good and bad events are for the agent. The higher the reward, the better the action. The simulator provides environment information. In irrigation canals, the agent is the check structure; the action and state are the check structure adjustment and the water depth, respectively. The environment comprises the hydraulic
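
    A toy tabular Q-learning loop conveys the agent/environment/reward cycle described above; the step() function is only a random-walk stand-in for the ICSS simulator, and every number below is illustrative:

        import numpy as np

        # state = discretised upstream water depth, action = lower/hold/raise gate
        n_states, n_actions, target = 20, 3, 10
        alpha, gamma, eps = 0.1, 0.9, 0.1      # learning rate, discount, exploration
        q = np.zeros((n_states, n_actions))
        rng = np.random.default_rng(0)

        def step(state, action):
            """Stand-in for one hydrodynamic-simulator call: the gate action
            nudges the depth bin; the reward peaks at the target depth."""
            drift = rng.integers(-1, 2)                        # inflow disturbance
            s2 = int(np.clip(state + (action - 1) + drift, 0, n_states - 1))
            return s2, -abs(s2 - target)

        for episode in range(500):
            s = int(rng.integers(n_states))
            for t in range(100):
                a = int(rng.integers(n_actions)) if rng.random() < eps \
                    else int(np.argmax(q[s]))                  # epsilon-greedy policy
                s2, r = step(s, a)
                q[s, a] += alpha * (r + gamma * q[s2].max() - q[s, a])
                s = s2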

  7. Security system signal supervision

    Energy Technology Data Exchange (ETDEWEB)

    Chritton, M.R. (BE, Inc., Barnwell, SC (United States)); Matter, J.C. (Sandia National Labs., Albuquerque, NM (United States))

    1991-09-01

    The purpose of this NUREG is to present technical information that should be useful to NRC licensees for understanding and applying line supervision techniques to security communication links. A review of security communication links is followed by detailed discussions of link physical protection and DC/AC static supervision and dynamic supervision techniques. Material is also presented on security for atmospheric transmission and video line supervision. A glossary of security communication line supervision terms is appended. 16 figs.

  8. Inductive Supervised Quantum Learning

    Science.gov (United States)

    Monràs, Alex; Sentís, Gael; Wittek, Peter

    2017-05-01

    In supervised learning, an inductive learning algorithm extracts general rules from observed training instances, then the rules are applied to test instances. We show that this splitting of training and application arises naturally, in the classical setting, from a simple independence requirement with a physical interpretation of being nonsignaling. Thus, two seemingly different definitions of inductive learning happen to coincide. This follows from the properties of classical information that break down in the quantum setup. We prove a quantum de Finetti theorem for quantum channels, which shows that in the quantum case, the equivalence holds in the asymptotic setting, that is, for large numbers of test instances. This reveals a natural analogy between classical learning protocols and their quantum counterparts, justifying a similar treatment, and allowing us to inquire about standard elements in computational learning theory, such as structural risk minimization and sample complexity.

  9. Automated Conflict Resolution For Air Traffic Control

    Science.gov (United States)

    Erzberger, Heinz

    2005-01-01

    The ability to detect and resolve conflicts automatically is considered to be an essential requirement for the next generation air traffic control system. While systems for automated conflict detection have been used operationally by controllers for more than 20 years, automated resolution systems have so far not reached the level of maturity required for operational deployment. Analytical models and algorithms for automated resolution have yet to be demonstrated under realistic traffic conditions to show that they can handle the complete spectrum of conflict situations encountered in actual operations. The resolution algorithm described in this paper was formulated to meet the performance requirements of the Automated Airspace Concept (AAC). The AAC, which was described in a recent paper [1], is a candidate for the next generation air traffic control system. The AAC's performance objectives are to increase safety and airspace capacity and to accommodate user preferences in flight operations to the greatest extent possible. In the AAC, resolution trajectories are generated by an automation system on the ground and sent to the aircraft autonomously via data link. The algorithm generating the trajectories must take into account the performance characteristics of the aircraft and the route structure of the airway system, and must be capable of resolving all types of conflicts for properly equipped aircraft without requiring supervision and approval by a controller. Furthermore, the resolution trajectories should be compatible with the clearances, vectors and flight plan amendments that controllers customarily issue to pilots in resolving conflicts. The algorithm described herein, although formulated specifically to meet the needs of the AAC, provides a generic engine for resolving conflicts. Thus, it can be incorporated into any operational concept that requires a method for automated resolution, including concepts for autonomous air-to-air resolution.

  10. Algorithms

    Indian Academy of Sciences (India)

    Algorithms. 3. Procedures and Recursion. R K Shyamasundar. In this article we introduce procedural abstraction and illustrate its uses. Further, we illustrate the notion of recursion, which is one of the most useful features of procedural abstraction. Procedures. Let us consider a variation of the problem of summing the first M.

  11. Algorithms

    Indian Academy of Sciences (India)

    number of elements. We shall illustrate the widely used matrix multiplication algorithm using two-dimensional arrays in the following. Consider two matrices A and B of integer type with dimensions m × n and n × p respectively. Then, multiplication of A by B, denoted A × B, is defined by matrix C of dimension m × p where.

  12. Evaluation of an automated spike-and-wave complex detection algorithm in the EEG from a rat model of absence epilepsy.

    Science.gov (United States)

    Bauquier, Sebastien H; Lai, Alan; Jiang, Jonathan L; Sui, Yi; Cook, Mark J

    2015-10-01

    The aim of this prospective blinded study was to evaluate an automated algorithm for spike-and-wave discharge (SWD) detection applied to EEGs from genetic absence epilepsy rats from Strasbourg (GAERS). Five GAERS underwent four sessions of 20-min EEG recording. Each EEG was manually analyzed for SWDs longer than one second by two investigators and automatically using an algorithm developed in MATLAB®. The sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) were calculated for the manual (reference) versus the automatic (test) methods. The results showed that the algorithm had specificity, sensitivity, PPV and NPV >94%, comparable to published methods that are based on analyzing EEG changes in the frequency domain. This provides a good alternative as a method designed to mimic human manual marking in the time domain.
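
    A time-domain sketch in the same spirit (threshold runs of high-amplitude activity and keep runs of at least one second) is shown below; thresholds are illustrative and this is not the study's MATLAB® code:

        import numpy as np

        def detect_swd(eeg, fs, k=3.0, min_dur=1.0, gap=0.2):
            """Mark spike-and-wave discharges as runs of high-amplitude
            samples lasting at least min_dur seconds; short gaps inside a
            run are bridged so one SWD stays one event."""
            thr = k * np.std(eeg)
            idx = np.flatnonzero(np.abs(eeg) > thr)
            events, start, prev = [], None, None
            for i in idx:
                if start is None:
                    start = prev = i
                elif i - prev > gap * fs:        # gap too long: close the run
                    events.append((start, prev))
                    start = i
                prev = i
            if start is not None:
                events.append((start, prev))
            return [(s / fs, e / fs) for s, e in events
                    if (e - s) / fs >= min_dur]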

  13. A Supervision of Solidarity

    Science.gov (United States)

    Reynolds, Vikki

    2010-01-01

    This article illustrates an approach to therapeutic supervision informed by a philosophy of solidarity and social justice activism. Called a "Supervision of Solidarity", this approach addresses the particular challenges in the supervision of therapists who work alongside clients who are subjected to social injustice and extreme marginalization. It…

  14. Practising Offender Supervision

    NARCIS (Netherlands)

    Gwen Robinson; Jacqueline Bosker; Kerstin Svensson; Dr Andrea Donker

    2013-01-01

    Chapter 4 in Offender Supervision in Europe by McNeill, F. and Beyens, K. (27 pp.). Offender supervision in Europe has developed rapidly in scale, distribution and intensity in recent years. However, the emergence of mass supervision in the community has largely escaped the attention of legal

  15. The influence of cardiograph design and automated algorithms on the incidence and detection of electrode cable reversals in an academic electrocardiogram laboratory.

    Science.gov (United States)

    Nilsson, Kent R; Sewell, Phyllis M; Blunden-Kasdorf, Patricia; Starkey, Kimberly; Grant, Augustus O; Wagner, Galen S

    2008-01-01

    Medical errors have been increasingly identified as a major source of morbidity and mortality in both outpatient and acute care settings. Central to the evaluation of many medical problems, the 12-lead electrocardiogram (ECG) is susceptible to both technical and interpretative errors. Proper interpretation, however, is dependent on the quality and accuracy of the acquired ECG. We evaluated the impact of both a newly designed electrocardiograph and a newly developed automated computer algorithm on the incidence and detection of electrode cable reversals (lead reversals). The study tested the association between the incidence of electrode cable reversals and the design of the connection terminal. The study was performed during a 7-month period preceding (53,875 ECGs) and after (53,344 ECGs) the implementation of the new system. Electrode cable reversals occurring in various sites of the medical center were tabulated and compared. We then sought to determine if computer detection algorithms could increase point-of-care detection of electrode cable reversals and, thereby, offset the influence of cardiograph design changes. Two commercially available automated detection algorithms were compared for their abilities to identify electrode cable reversals in our study population. During the 7-month postimplementation period, there was a significant increase in the incidence of electrode cable reversals (0.5% vs 0.1%, P < .001), and more of these reversals were identified by the recently developed detection algorithms than by the older algorithm supplied by the manufacturer. Electrode cable reversals are a prevalent source of medical errors that receives very little attention from the clinical community. The association of an increase in electrode cable reversals with an altered electrode cable connection terminal, coupled with an increased ability to detect electrode cable reversals using the manufacturer's recently developed algorithms, emphasizes the importance of ongoing research efforts to identify technical errors in electrocardiography.

  16. Personnel Supervision: A Descriptive Framework.

    Science.gov (United States)

    Storey, Vernon J.; Housego, Ian

    1980-01-01

    Presents a generalized model of personnel supervision that may assist in describing a given supervisory program, facilitating interorganizational comparison, guiding further study of supervision, developing an overall approach to supervision, and assessing the effectiveness of supervision programs. (Author/IRT)

  17. Good supervision and PBL

    DEFF Research Database (Denmark)

    Otrel-Cass, Kathrin

    This field study was conducted at the Faculty of Social Sciences at Aalborg University with the intention to investigate how students reflect on their experiences with supervision in a PBL environment. The overall aim of this study was to inform the continued work of strengthening supervision at this faculty. This particular study invited Master level students to discuss: • How a typical supervision process proceeds • How they experienced and what they expected of PBL in the supervision process • What makes a good supervision process...

  18. Nonlinear analysis of the heartbeats in public patient ECGs using an automated PD2i algorithm for risk stratification of arrhythmic death

    Directory of Open Access Journals (Sweden)

    James E Skinner

    2008-04-01

    Full Text Available James E Skinner, Jerry M Anchin, Daniel N Weiss. Vicor Technologies, Inc., Bangor, PA, USA. Abstract: Heart rate variability (HRV) reflects both cardiac autonomic function and risk of arrhythmic death (AD). Reduced indices of HRV based on linear stochastic models are independent risk factors for AD in post-myocardial infarct cohorts. Indices based on nonlinear deterministic models have a significantly higher sensitivity and specificity for predicting AD in retrospective data. A need exists for nonlinear analytic software easily used by a medical technician. In the current study, an automated nonlinear algorithm, the time-dependent point correlation dimension (PD2i), was evaluated. The electrocardiogram (ECG) data were provided through a National Institutes of Health-sponsored internet archive (PhysioBank) and consisted of all 22 malignant arrhythmia ECG files (VF/VT) and 22 randomly selected arrhythmia files as the controls. The results were blindly calculated by automated software (Vicor 2.0, Vicor Technologies, Inc., Boca Raton, FL) and showed that all analyzable VF/VT files had PD2i < 1.4 and all analyzable controls had PD2i > 1.4. Five VF/VT and six controls were excluded because surrogate testing showed the RR-intervals to contain noise, possibly resulting from the low digitization rate of the ECGs. The sensitivity was 100%, specificity 85%, relative risk > 100; p < 0.01, power > 90%. Thus, automated heartbeat analysis by the time-dependent nonlinear PD2i algorithm can accurately stratify risk of AD in public data made available for competitive testing of algorithms. Keywords: heart rate variability, sudden death, ventricular arrhythmias, nonlinear, chaos

  19. A Supervised Approach to Windowing Detection on Dynamic Networks

    Science.gov (United States)

    2017-07-01

    windowing algorithms that leverage task-dependency. We also introduce windowing algorithms that take a supervised-machine-learning approach. We...at hand. In other words, we treat the task algorithms as black boxes. However, as we show in Section 7, the supervised approaches are still able to...2This online algorithm is described explicitly for Katz, but it is worth noting that the same approach may in principle be used for any online task.

  20. Detection of facilities in satellite imagery using semi-supervised image classification and auxiliary contextual observables

    Energy Technology Data Exchange (ETDEWEB)

    Harvey, Neal R [Los Alamos National Laboratory; Ruggiero, Christy E [Los Alamos National Laboratory; Pawley, Norma H [Los Alamos National Laboratory; Brumby, Steven P [Los Alamos National Laboratory; Macdonald, Brian [Los Alamos National Laboratory; Balick, Lee [Los Alamos National Laboratory; Oyer, Alden [Los Alamos National Laboratory

    2009-01-01

    Detecting complex targets, such as facilities, in commercially available satellite imagery is a difficult problem that human analysts try to solve by applying world knowledge. Often there are known observables that can be extracted by pixel-level feature detectors that can assist in the facility detection process. Individually, each of these observables is not sufficient for an accurate and reliable detection, but in combination, these auxiliary observables may provide sufficient context for detection by a machine learning algorithm. We describe an approach for automatic detection of facilities that uses an automated feature extraction algorithm to extract auxiliary observables, and a semi-supervised assisted target recognition algorithm to then identify facilities of interest. We illustrate the approach using an example of finding schools in Quickbird image data of Albuquerque, New Mexico. We use Los Alamos National Laboratory's Genie Pro automated feature extraction algorithm to find a set of auxiliary features that should be useful in the search for schools, such as parking lots, large buildings, sports fields and residential areas and then combine these features using Genie Pro's assisted target recognition algorithm to learn a classifier that finds schools in the image data.
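
    A generic semi-supervised stand-in for the assisted-recognition step, using scikit-learn self-training over auxiliary-observable features (synthetic data; this is not Genie Pro):

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.semi_supervised import SelfTrainingClassifier

        # Per-region feature vector = auxiliary observables (fractions of
        # parking-lot, large-building, sports-field, residential pixels);
        # label 1 = school, 0 = not, -1 = unlabeled. Synthetic stand-in.
        rng = np.random.default_rng(0)
        X = rng.random((1000, 4))
        y = np.where(rng.random(1000) < 0.1,
                     (X[:, 0] + X[:, 2] > 1).astype(int), -1)

        model = SelfTrainingClassifier(RandomForestClassifier(n_estimators=200),
                                       threshold=0.9)
        model.fit(X, y)          # iteratively propagates confident labels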

  1. Validation of the Total Visual Acuity Extraction Algorithm (TOVA) for Automated Extraction of Visual Acuity Data From Free Text, Unstructured Clinical Records.

    Science.gov (United States)

    Baughman, Douglas M; Su, Grace L; Tsui, Irena; Lee, Cecilia S; Lee, Aaron Y

    2017-03-01

    With increasing volumes of electronic health record data, algorithm-driven extraction may aid manual extraction. Visual acuity often is extracted manually in vision research. The total visual acuity extraction algorithm (TOVA) is presented and validated for automated extraction of visual acuity from free text, unstructured clinical notes. Consecutive inpatient ophthalmology notes over an 8-year period from the University of Washington healthcare system in Seattle, WA were used for validation of TOVA. The total visual acuity extraction algorithm applied natural language processing to recognize Snellen visual acuity in free text notes and assign laterality. The best corrected measurement was determined for each eye and converted to logMAR. The algorithm was validated against manual extraction of a subset of notes. A total of 6266 clinical records were obtained giving 12,452 data points. In a subset of 644 validated notes, comparison of manually extracted data versus TOVA output showed 95% concordance. Interrater reliability testing gave κ statistics of 0.94 (95% confidence interval [CI], 0.89-0.99), 0.96 (95% CI, 0.94-0.98), 0.95 (95% CI, 0.92-0.98), and 0.94 (95% CI, 0.90-0.98) for acuity numerators, denominators, adjustments, and signs, respectively. Pearson correlation coefficient was 0.983. Linear regression showed an R2 of 0.966 (P < 0.0001). TOVA thus accurately extracts visual acuity from free text, unstructured clinical notes and provides an open source method of data extraction. Automated visual acuity extraction through natural language processing can be a valuable tool for data extraction from free text ophthalmology notes.
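
    A toy version of the core extraction step, regex recognition of Snellen fractions and conversion to logMAR, is sketched below; TOVA's laterality, adjustment and correction rules are omitted:

        import math
        import re

        SNELLEN = re.compile(r"\b(20)\s*/\s*(\d{2,3})\b")   # e.g. "VA OD 20/40"

        def best_logmar(note_text):
            """Pull all Snellen fractions from a free-text note and return
            the best (lowest) logMAR; a regex sketch, not the published
            TOVA rules (pinhole, count-fingers etc. are not handled)."""
            values = [math.log10(int(d) / int(n))   # 20/20 -> 0.0, 20/40 -> 0.3
                      for n, d in SNELLEN.findall(note_text)]
            return min(values) if values else None

        print(best_logmar("BCVA: OD 20/40, OS 20/25 with correction"))  # ~0.097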

  2. INVESTIGATION OF NEURAL NETWORK ALGORITHM FOR DETECTION OF NETWORK HOST ANOMALIES IN THE AUTOMATED SEARCH FOR XSS VULNERABILITIES AND SQL INJECTIONS

    Directory of Open Access Journals (Sweden)

    Y. D. Shabalin

    2016-03-01

    Full Text Available A problem of aberrant behavior detection for a network-communicating computer is discussed. A novel approach based on the dynamic response of the computer is introduced. The computer is treated as a multiple-input multiple-output (MIMO) plant. To characterize the dynamic response of the computer to incoming requests, a correlation between the input data rate and the observed output response (outgoing data rate and performance metrics) is used. To distinguish normal and aberrant behavior of the computer, a one-class neural network classifier is used. The general idea of the algorithm is briefly described. The configuration of the network testbed for experiments with real attacks and their detection is presented (the automated search for XSS vulnerabilities and SQL injections). Real XSS and SQL injection attack software was used to model the intrusion scenario. It is expected that aberrant behavior of the server will reveal itself through an instantaneous correlation response significantly different from any of the normal ones. It is evident that the correlation picture of attacks from different malware running, the overriding of the site homepage on the server (so-called defacing), and hardware and software failures will differ from the correlation picture of normal functioning. The intrusion detection algorithm is investigated to estimate false positive and false negative rates in relation to algorithm parameters. The importance of the correlation width and threshold value selection is emphasized. The false positive rate was estimated along the time series of experimental data. Some ideas for enhancing the quality and robustness of the algorithm are mentioned.

  3. Mapping of arterial location for the design of automated identification and analysis algorithms in whole body MRA.

    Science.gov (United States)

    McCormick, Lynne; Weir-McCall, Jonathan; Gandy, Stephen; White, Richard; McNeil, Andrew; Trucco, Emanuele; Houston, J Graeme

    2015-01-01

    Technological and medical advances have led to the realisation of full body imaging, with systemic diagnostic approaches becoming increasingly prevalent. In the imaging of atherosclerotic disease, contrast-enhanced whole-body MRA has been demonstrated to enable detection of stenosis with a high sensitivity and specificity. Characterization of the systemic cardiovascular disease burden has significant prognostic value. A whole-body acquisition does, however, generate a large volume of three-dimensional data, and as such there are expected to be significant advantages in developing automated techniques for the analysis of these images. Improved radiological workflow, reduced analysis time and increased analytical standardization are expected to be among the benefits offered by this approach. As part of a process of automated software development, this study aimed to collect and validate arterial location ground truth. The data will be used to inform the development of semi-automated vascular identity tools, and allow the potential for the further development of semi-automated, anatomically informed cardiovascular disease analysis and reporting.

  4. Volumetric analysis of lung nodules in computed tomography (CT): comparison of two different segmentation algorithm softwares and two different reconstruction filters on automated volume calculation.

    Science.gov (United States)

    Christe, Andreas; Brönnimann, Alain; Vock, Peter

    2014-02-01

    A precise detection of volume change allows for better estimating the biological behavior of lung nodules. Postprocessing tools with automated detection, segmentation, and volumetric analysis of lung nodules may expedite radiological processes and give additional confidence to the radiologists. To compare two different postprocessing software algorithms (LMS Lung, Median Technologies; LungCARE®, Siemens) in CT volumetric measurement and to analyze the effect of soft (B30) and hard reconstruction filters (B70) on automated volume measurement. Between January 2010 and April 2010, 45 patients with a total of 113 pulmonary nodules were included. The CT exam was performed on a 64-row multidetector CT scanner (Somatom Sensation, Siemens, Erlangen, Germany) with the following parameters: collimation, 24x1.2 mm; pitch, 1.15; voltage, 120 kVp; reference tube current-time, 100 mAs. Automated volumetric measurement of each lung nodule was performed with the two different postprocessing algorithms based on two reconstruction filters (B30 and B70). The average relative volume measurement difference (VME%) and the limits of agreement between the two methods were used for comparison. Between soft and hard reconstruction filters, the volume difference was 34.1% for LMS and 1.6% for LungCARE®, respectively (both with P < 0.05). LMS measured greater volumes than LungCARE® with both filters, 13.6% for soft and 3.8% for hard filters, respectively (P < 0.05). There is a substantial inter-software (LMS/LungCARE®) as well as intra-software variability (B30/B70) in lung nodule volume measurement; therefore, it is mandatory to use the same equipment with the same reconstruction filter for the follow-up of lung nodule volume.

  5. Definition and Analysis of a System for the Automated Comparison of Curriculum Sequencing Algorithms in Adaptive Distance Learning

    Science.gov (United States)

    Limongelli, Carla; Sciarrone, Filippo; Temperini, Marco; Vaste, Giulia

    2011-01-01

    LS-Lab provides automatic support for the comparison and evaluation of the Learning Object Sequences produced by different Curriculum Sequencing Algorithms. Through this framework, a teacher can verify the correspondence between the behaviour of different sequencing algorithms and her pedagogical preferences. In fact the teacher can compare algorithms…

  6. An automated sleep-state classification algorithm for quantifying sleep timing and sleep-dependent dynamics of electroencephalographic and cerebral metabolic parameters

    Directory of Open Access Journals (Sweden)

    Rempe MJ

    2015-09-01

    Full Text Available Michael J Rempe,1,2 William C Clegern,2 Jonathan P Wisor2 1Mathematics and Computer Science, Whitworth University, Spokane, WA, USA; 2College of Medical Sciences and Sleep and Performance Research Center, Washington State University, Spokane, WA, USA. Introduction: Rodent sleep research uses electroencephalography (EEG) and electromyography (EMG) to determine the sleep state of an animal at any given time. EEG and EMG signals, typically sampled at >100 Hz, are segmented arbitrarily into epochs of equal duration (usually 2–10 seconds), and each epoch is scored as wake, slow-wave sleep (SWS), or rapid-eye-movement sleep (REMS), on the basis of visual inspection. Automated state scoring can minimize the burden associated with manual state scoring and thereby facilitate the use of shorter epoch durations. Methods: We developed a semiautomated state-scoring procedure that uses a combination of principal component analysis and naïve Bayes classification, with the EEG and EMG as inputs. We validated this algorithm against human-scored sleep-state scoring of data from C57BL/6J and BALB/CJ mice. We then applied a general homeostatic model to characterize the state-dependent dynamics of sleep slow-wave activity and cerebral glycolytic flux, measured as lactate concentration. Results: More than 89% of epochs scored as wake or SWS by the human were scored as the same state by the machine, whether scoring in 2-second or 10-second epochs. The majority of epochs scored as REMS by the human were also scored as REMS by the machine. However, of epochs scored as REMS by the human, more than 10% were scored as SWS by the machine and 18% (10-second epochs) to 28% (2-second epochs) were scored as wake. These biases were not strain-specific, as strain differences in sleep-state timing relative to the light/dark cycle, EEG power spectral profiles, and the homeostatic dynamics of both slow waves and lactate were detected equally effectively with the automated method or the manual scoring
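
    The PCA-plus-naive-Bayes structure can be sketched with scikit-learn as below; the features and labels are synthetic stand-ins for per-epoch EEG/EMG measures and human scores:

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.naive_bayes import GaussianNB
        from sklearn.pipeline import make_pipeline

        # One row per epoch: e.g. EEG band powers plus integrated EMG power,
        # with human scores (0 wake, 1 SWS, 2 REMS) on a training subset.
        rng = np.random.default_rng(0)
        features = rng.normal(size=(3000, 8))
        labels = rng.integers(0, 3, size=3000)

        scorer = make_pipeline(PCA(n_components=3), GaussianNB())
        scorer.fit(features[:2000], labels[:2000])
        machine_scores = scorer.predict(features[2000:])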

  7. Partially supervised speaker clustering.

    Science.gov (United States)

    Tang, Hao; Chu, Stephen Mingyu; Hasegawa-Johnson, Mark; Huang, Thomas S

    2012-05-01

    model-based distance metrics, 2) our advocated use of the cosine distance metric yields consistent increases in the speaker clustering performance as compared to the commonly used euclidean distance metric, 3) our partially supervised speaker clustering concept and strategies significantly improve the speaker clustering performance over the baselines, and 4) our proposed LSDA algorithm further leads to state-of-the-art speaker clustering performance.
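
    For reference, the cosine distance favoured over the Euclidean metric in point 2) can be written as (NumPy):

        import numpy as np

        def cosine_distance(a, b):
            """1 - cosine similarity; scale-invariant, which is one reason it
            can behave better than Euclidean distance for speaker vectors."""
            return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))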

  8. Semi-supervised sparse coding

    KAUST Repository

    Wang, Jim Jing-Yan

    2014-07-06

    Sparse coding approximates the data sample as a sparse linear combination of some basic codewords and uses the sparse codes as new representations. In this paper, we investigate learning discriminative sparse codes by sparse coding in a semi-supervised manner, where only a few training samples are labeled. By using the manifold structure spanned by the data set of both labeled and unlabeled samples and the constraints provided by the labels of the labeled samples, we learn the variable class labels for all the samples. Furthermore, to improve the discriminative ability of the learned sparse codes, we assume that the class labels could be predicted from the sparse codes directly using a linear classifier. By solving for the codebook, sparse codes, class labels and classifier parameters simultaneously in a unified objective function, we develop a semi-supervised sparse coding algorithm. Experiments on two real-world pattern recognition problems demonstrate the advantage of the proposed methods over supervised sparse coding methods on partially labeled data sets.

  9. Influencing Trust for Human-Automation Collaborative Scheduling of Multiple Unmanned Vehicles.

    Science.gov (United States)

    Clare, Andrew S; Cummings, Mary L; Repenning, Nelson P

    2015-11-01

    We examined the impact of priming on operator trust and system performance when supervising a decentralized network of heterogeneous unmanned vehicles (UVs). Advances in autonomy have enabled a future vision of single-operator control of multiple heterogeneous UVs. Real-time scheduling for multiple UVs in uncertain environments requires the computational ability of optimization algorithms combined with the judgment and adaptability of human supervisors. Because of system and environmental uncertainty, appropriate operator trust will be instrumental to maintain high system performance and prevent cognitive overload. Three groups of operators experienced different levels of trust priming prior to conducting simulated missions in an existing, multiple-UV simulation environment. Participants who play computer and video games frequently were found to have a higher propensity to overtrust automation. By priming gamers to lower their initial trust to a more appropriate level, system performance was improved by 10% as compared to gamers who were primed to have higher trust in the automation. Priming was successful at adjusting the operator's initial and dynamic trust in the automated scheduling algorithm, which had a substantial impact on system performance. These results have important implications for personnel selection and training for futuristic multi-UV systems under human supervision. Although gamers may bring valuable skills, they may also be potentially prone to automation bias. Priming during training and regular priming throughout missions may be one potential method for overcoming this propensity to overtrust automation. © 2015, Human Factors and Ergonomics Society.

  10. Computer algorithms for automated detection and analysis of local Ca2+ releases in spontaneously beating cardiac pacemaker cells.

    Directory of Open Access Journals (Sweden)

    Alexander V Maltsev

    Full Text Available Local Ca2+ Releases (LCRs) are crucial events involved in cardiac pacemaker cell function. However, specific algorithms for automatic LCR detection and analysis have not been developed in live, spontaneously beating pacemaker cells. In the present study we measured LCRs using a high-speed 2D-camera in spontaneously contracting sinoatrial (SA) node cells isolated from rabbit and guinea pig and developed a new algorithm capable of detecting and analyzing the LCRs spatially in two-dimensions, and in time. Our algorithm tracks points along the midline of the contracting cell. It uses these points as a coordinate system for affine transform, producing a transformed image series where the cell does not contract. Action potential-induced Ca2+ transients and LCRs were thereafter isolated from recording noise by applying a series of spatial filters. The LCR birth and death events were detected by a differential (frame-to-frame) sensitivity algorithm applied to each pixel (cell location). An LCR was detected when its signal changes sufficiently quickly within a sufficiently large area. The LCR is considered to have died when its amplitude decays substantially, or when it merges into the rising whole cell Ca2+ transient. Ultimately, our algorithm provides major LCR parameters such as period, signal mass, duration, and propagation path area. As the LCRs propagate within live cells, the algorithm identifies splitting and merging behaviors, indicating the importance of locally propagating Ca2+-induced-Ca2+-release for the fate of LCRs and for generating a powerful ensemble Ca2+ signal. Thus, our new computer algorithms eliminate motion artifacts and detect 2D local spatiotemporal events from recording noise and global signals. While the algorithms were developed to detect LCRs in sinoatrial nodal cells, they have the potential to be used in other applications in biophysics and cell physiology, for example, to detect Ca2+ wavelets (abortive waves), sparks and

  11. Computer algorithms for automated detection and analysis of local Ca2+ releases in spontaneously beating cardiac pacemaker cells.

    Science.gov (United States)

    Maltsev, Alexander V; Parsons, Sean P; Kim, Mary S; Tsutsui, Kenta; Stern, Michael D; Lakatta, Edward G; Maltsev, Victor A; Monfredi, Oliver

    2017-01-01

    Local Ca2+ Releases (LCRs) are crucial events involved in cardiac pacemaker cell function. However, specific algorithms for automatic LCR detection and analysis have not been developed in live, spontaneously beating pacemaker cells. In the present study we measured LCRs using a high-speed 2D-camera in spontaneously contracting sinoatrial (SA) node cells isolated from rabbit and guinea pig and developed a new algorithm capable of detecting and analyzing the LCRs spatially in two-dimensions, and in time. Our algorithm tracks points along the midline of the contracting cell. It uses these points as a coordinate system for affine transform, producing a transformed image series where the cell does not contract. Action potential-induced Ca2+ transients and LCRs were thereafter isolated from recording noise by applying a series of spatial filters. The LCR birth and death events were detected by a differential (frame-to-frame) sensitivity algorithm applied to each pixel (cell location). An LCR was detected when its signal changes sufficiently quickly within a sufficiently large area. The LCR is considered to have died when its amplitude decays substantially, or when it merges into the rising whole cell Ca2+ transient. Ultimately, our algorithm provides major LCR parameters such as period, signal mass, duration, and propagation path area. As the LCRs propagate within live cells, the algorithm identifies splitting and merging behaviors, indicating the importance of locally propagating Ca2+-induced-Ca2+-release for the fate of LCRs and for generating a powerful ensemble Ca2+ signal. Thus, our new computer algorithms eliminate motion artifacts and detect 2D local spatiotemporal events from recording noise and global signals. While the algorithms were developed to detect LCRs in sinoatrial nodal cells, they have the potential to be used in other applications in biophysics and cell physiology, for example, to detect Ca2+ wavelets (abortive waves), sparks and embers in muscle
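
    The differential (frame-to-frame) sensitivity idea can be sketched as below, assuming NumPy; the affine motion correction and the death/merge tests are omitted, and the area test is a simplified stand-in for a connected-component check:

        import numpy as np

        def lcr_births(movie, k=4.0, min_area=9):
            """Flag per-frame 'birth' events where the frame-to-frame change
            exceeds a noise-scaled threshold over enough pixels at once."""
            diff = np.diff(movie.astype(float), axis=0)      # (frames-1, h, w)
            noise = np.median(np.abs(diff), axis=0) + 1e-9   # per-pixel noise scale
            hot = diff > k * noise                           # fast local increases
            return [t for t in range(hot.shape[0])
                    if hot[t].sum() >= min_area]

        # usage on a synthetic stack:
        # births = lcr_births(np.random.poisson(100.0, (200, 64, 64)))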

  12. A multi-stage heuristic algorithm for matching problem in the modified miniload automated storage and retrieval system of e-commerce

    Science.gov (United States)

    Wang, Wenrui; Wu, Yaohua; Wu, Yingying

    2016-05-01

    E-commerce, as an emerging marketing mode, has attracted more and more attention and gradually changed the way of our life. However, the existing layout of distribution centers cannot sufficiently fulfill the storage and picking demands of e-commerce. In this paper, a modified miniload automated storage/retrieval system is designed to fit these new characteristics of e-commerce in logistics. Meanwhile, a matching problem, concerned with the improvement of picking efficiency in the new system, is studied in this paper. The problem is how to reduce the travelling distance of totes between aisles and picking stations. A multi-stage heuristic algorithm is proposed based on a statement and model of this problem. The main idea of this algorithm is, with some heuristic strategies based on similarity coefficients, to minimize the transportation of items which cannot reach their destination picking stations through direct conveyors alone. The experimental results based on cases generated by computers show that the average reduction in indirect transport times can reach 14.36% with the application of the multi-stage heuristic algorithm. For the cases from a real e-commerce distribution center, the order processing time can be reduced from 11.20 h to 10.06 h with the help of the modified system and the proposed algorithm. In summary, this research proposed a modified system and a multi-stage heuristic algorithm that can reduce the travelling distance of totes effectively and improve the whole performance of the e-commerce distribution center.
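
    A greedy sketch of the similarity-coefficient idea, assigning each order to the station already sharing the most item types with it so fewer totes travel between aisles and stations (illustrative only, not the paper's multi-stage algorithm):

        def assign_orders(orders, n_stations):
            """orders: list of sets of item types; returns per-station item sets."""
            stations = [set() for _ in range(n_stations)]
            loads = [0] * n_stations
            for order in sorted(orders, key=len, reverse=True):
                # similarity = number of shared item types; break ties by load
                best = max(range(n_stations),
                           key=lambda s: (len(stations[s] & order), -loads[s]))
                stations[best] |= order
                loads[best] += len(order)
            return stations

        # usage: assign_orders([{1, 2}, {2, 3}, {4}, {1, 4}], n_stations=2)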

  13. Performance Monitoring Applied to System Supervision

    Directory of Open Access Journals (Sweden)

    Bertille Somon

    2017-07-01

    Full Text Available Nowadays, automation is present in every aspect of our daily life and has clear benefits. Nonetheless, empirical data suggest that traditional automation has many negative performance and safety consequences, as it changed task performers into task supervisors. In this context, we propose to use recent insights into the anatomical and neurophysiological substrates of action monitoring in humans to help further characterize performance monitoring during system supervision. Error monitoring is critical for humans to learn from the consequences of their actions. A wide variety of studies have shown that the error monitoring system is involved not only in our own errors, but also in the errors of others. We hypothesize that the neurobiological correlates of self-performance monitoring activity can be applied to system supervision. At a larger scale, a better understanding of system supervision may allow its negative effects to be anticipated or even countered. This review is divided into three main parts. First, we assess the neurophysiological correlates of self-performance monitoring and their characteristics during error execution. Then, we extend these results to include performance monitoring and error observation of others or of systems. Finally, we provide further directions in the study of system supervision and assess the limits preventing us from studying a well-known phenomenon: the Out-Of-the-Loop (OOL) performance problem.

  14. Spectral matching techniques (SMTs) and automated cropland classification algorithms (ACCAs) for mapping croplands of Australia using MODIS 250-m time-series (2000–2015) data

    Science.gov (United States)

    Teluguntla, Pardhasaradhi G.; Thenkabail, Prasad S.; Xiong, Jun N.; Gumma, Murali Krishna; Congalton, Russell G.; Oliphant, Adam; Poehnelt, Justin; Yadav, Kamini; Rao, Mahesh N.; Massey, Richard

    2017-01-01

    Mapping croplands, including fallow areas, is an important measure for determining the quantity of food that is produced, where it is produced, and when it is produced (e.g. seasonality). Furthermore, croplands are known water guzzlers, consuming anywhere between 70% and 90% of all human water use globally. Given these facts and the growth of the global population to nearly 10 billion by the year 2050, the need for routine, rapid, and automated cropland mapping year-after-year and/or season-after-season is of great importance. The overarching goal of this study was to generate standard and routine cropland products, year-after-year, over very large areas through the use of two novel methods: (a) quantitative spectral matching techniques (QSMTs) applied at the continental level and (b) a rule-based Automated Cropland Classification Algorithm (ACCA) with the ability to hind-cast, now-cast, and future-cast. Australia was chosen for the study given its extensive croplands and rich history of agriculture, and the hitherto nonexistent routine yearly cropland products generated using multi-temporal remote sensing. This research produced three distinct cropland products using Moderate Resolution Imaging Spectroradiometer (MODIS) 250-m normalized difference vegetation index 16-day composite time-series data for 16 years: 2000 through 2015. The products consisted of: (1) cropland extent/areas versus cropland fallow areas, (2) irrigated versus rainfed croplands, and (3) cropping intensities: single, double, and continuous cropping. An accurate reference cropland product (RCP) for the year 2014 (RCP2014), produced using QSMT, was used as a knowledge base to train and develop the ACCA algorithm, which was then applied to the MODIS time-series data for the years 2000–2015. A comparison between the ACCA-derived cropland products (ACPs) for the year 2014 (ACP2014) and RCP2014 provided an overall agreement of 89.4% (kappa = 0.814) with six classes: (a) producer’s accuracies varying
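
    The abstract does not spell out the QSMT matching rule. The following sketch illustrates one plausible form of spectral matching, correlating a pixel's NDVI time series against reference class signatures; the signatures and the simple best-correlation rule are assumptions for illustration, not the authors' exact method.

```python
# Minimal sketch of a spectral matching idea (assumed, not the exact
# QSMT): label a pixel's NDVI time series with the reference class
# whose ideal signature it correlates with most strongly.
import numpy as np

def spectral_match(pixel_series, class_signatures):
    """Return the class whose time-series signature best matches the pixel."""
    best_class, best_r = None, -np.inf
    for name, sig in class_signatures.items():
        r = np.corrcoef(pixel_series, sig)[0, 1]
        if r > best_r:
            best_class, best_r = name, r
    return best_class, best_r

signatures = {  # hypothetical 16-day NDVI composites over one year
    "single_crop": np.sin(np.linspace(0, np.pi, 23)),
    "double_crop": np.abs(np.sin(np.linspace(0, 2 * np.pi, 23))),
}
rng = np.random.default_rng(0)
pixel = np.sin(np.linspace(0, np.pi, 23)) + 0.05 * rng.normal(size=23)
print(spectral_match(pixel, signatures))
```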

  15. The Teaching of Supervision.

    Science.gov (United States)

    Pavan, Barbara N.

    To gain perspective on the future preparation of supervisors, the Council of Professors of Instructional Supervision (COPIS) membership was surveyed concerning topics currently included in introductory supervision courses. The survey form included a list of possible topics, with space for additions, and asked each professor to indicate how many…

  16. An ImageJ-based algorithm for a semi-automated method for microscopic image enhancement and DNA repair foci counting

    Energy Technology Data Exchange (ETDEWEB)

    Klokov, D., E-mail: dmitry.klokov@cnl.ca [Canadian Nuclear Laboratories, Chalk River, Ontario (Canada); Suppiah, R. [Queen's Univ., Dept. of Biomedical and Molecular Sciences, Kingston, Ontario (Canada)

    2015-06-15

    Proper evaluation of the health risks of low-dose ionizing radiation exposure relies heavily on the ability to accurately measure very low levels of DNA damage in cells. One of the most sensitive methods for measuring DNA damage levels is the quantification of DNA repair foci, which consist of macromolecular aggregates of DNA repair proteins, such as γH2AX and 53BP1, forming around individual DNA double-strand breaks. They can be quantified using immunofluorescence microscopy and are widely used as markers of DNA double-strand breaks. However, this quantification, if performed manually, can be very tedious and prone to inter-individual bias. Low-dose radiation studies are especially sensitive to this potential bias because the anticipated effects are of very low magnitude. Therefore, we designed and validated an algorithm for the semi-automated processing of microscopic images and quantification of DNA repair foci. The algorithm uses ImageJ, a freely available image analysis software that is customizable to individual cellular properties or experimental conditions. We validated the algorithm using immunolabeled 53BP1 and γH2AX in normal human fibroblast AG01522 cells under both normal and irradiated conditions. This method is easy to learn, can be used by nontrained personnel, and can help avoid discrepancies in inter-laboratory comparison studies examining the effects of low-dose radiation. (author)
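
    The published algorithm is an ImageJ macro; as a loose Python analogue of the same idea (threshold the image, drop tiny specks, count the remaining bright foci), consider the following sketch. All parameters and the synthetic image are illustrative.

```python
# Loose Python analogue (scikit-image) of the ImageJ-based idea above:
# threshold a background-subtracted nucleus image, remove small specks,
# and count the remaining connected bright foci.
import numpy as np
from skimage import filters, measure, morphology

def count_foci(nucleus_image, min_area=4):
    """Count bright foci in a single-nucleus image."""
    thresh = filters.threshold_otsu(nucleus_image)
    mask = nucleus_image > thresh
    mask = morphology.remove_small_objects(mask, min_size=min_area)
    return int(measure.label(mask).max())

rng = np.random.default_rng(0)
img = rng.normal(0.0, 0.05, (64, 64))   # synthetic background noise
img[10:14, 10:14] += 1.0                # two synthetic foci
img[40:45, 40:45] += 1.0
print(count_foci(img))                  # -> 2
```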

  17. An automated sleep-state classification algorithm for quantifying sleep timing and sleep-dependent dynamics of electroencephalographic and cerebral metabolic parameters.

    Science.gov (United States)

    Rempe, Michael J; Clegern, William C; Wisor, Jonathan P

    2015-01-01

    Rodent sleep research uses electroencephalography (EEG) and electromyography (EMG) to determine the sleep state of an animal at any given time. EEG and EMG signals, typically sampled at >100 Hz, are segmented arbitrarily into epochs of equal duration (usually 2-10 seconds), and each epoch is scored as wake, slow-wave sleep (SWS), or rapid-eye-movement sleep (REMS), on the basis of visual inspection. Automated state scoring can minimize the burden associated with state scoring and thereby facilitate the use of shorter epoch durations. We developed a semiautomated state-scoring procedure that uses a combination of principal component analysis and naïve Bayes classification, with the EEG and EMG as inputs. We validated this algorithm against human sleep-state scoring of data from C57BL/6J and BALB/CJ mice. We then applied a general homeostatic model to characterize the state-dependent dynamics of sleep slow-wave activity and cerebral glycolytic flux, measured as lactate concentration. More than 89% of epochs scored as wake or SWS by the human were scored as the same state by the machine, whether scoring in 2-second or 10-second epochs. The majority of epochs scored as REMS by the human were also scored as REMS by the machine. However, of epochs scored as REMS by the human, more than 10% were scored as SWS by the machine and 18% (10-second epochs) to 28% (2-second epochs) were scored as wake. These biases were not strain-specific, as strain differences in sleep-state timing relative to the light/dark cycle, EEG power spectral profiles, and the homeostatic dynamics of both slow waves and lactate were detected equally effectively with the automated method or the manual scoring method. Error associated with mathematical modeling of temporal dynamics of both EEG slow-wave activity and cerebral lactate either did not differ significantly when state scoring was done with automated versus visual scoring, or was reduced with automated state scoring relative to manual
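
    The scoring pipeline described above, principal component analysis feeding a naïve Bayes classifier, can be sketched in a few lines; the per-epoch features below are random placeholders standing in for EEG/EMG band powers, which is our simplification.

```python
# Sketch of the described pipeline: PCA features into naive Bayes.
# Feature extraction from raw EEG/EMG epochs is replaced here by
# placeholder data; this is not the authors' implementation.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 12))       # per-epoch EEG/EMG features
y = rng.integers(0, 3, size=1000)     # 0=wake, 1=SWS, 2=REMS (human-scored)

scorer = make_pipeline(PCA(n_components=4), GaussianNB())
scorer.fit(X[:800], y[:800])
print(scorer.score(X[800:], y[800:]))  # agreement with human scoring
```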

  18. Semi-supervised clustering methods

    Science.gov (United States)

    Bair, Eric

    2013-01-01

    Cluster analysis methods seek to partition a data set into homogeneous subgroups. It is useful in a wide variety of applications, including document processing and modern genetics. Conventional clustering methods are unsupervised, meaning that there is no outcome variable nor is anything known about the relationship between the observations in the data set. In many situations, however, information about the clusters is available in addition to the values of the features. For example, the cluster labels of some observations may be known, or certain observations may be known to belong to the same cluster. In other cases, one may wish to identify clusters that are associated with a particular outcome variable. This review describes several clustering algorithms (known as “semi-supervised clustering” methods) that can be applied in these situations. The majority of these methods are modifications of the popular k-means clustering method, and several of them will be described in detail. A brief description of some other semi-supervised clustering algorithms is also provided. PMID:24729830
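
    As a concrete example of the k-means modifications mentioned above, the following sketch seeds the k-means centroids from observations whose cluster labels are known; details of published seeded or constrained k-means variants differ, so treat this as illustrative only.

```python
# One common semi-supervised variant: seed k-means centroids with the
# labeled observations, then cluster everything. A minimal sketch.
import numpy as np
from sklearn.cluster import KMeans

def seeded_kmeans(X, labeled_idx, labels, n_clusters):
    """Initialize each centroid from the labeled points of that cluster."""
    seeds = np.array([X[labeled_idx[labels == k]].mean(axis=0)
                      for k in range(n_clusters)])
    km = KMeans(n_clusters=n_clusters, init=seeds, n_init=1)
    return km.fit_predict(X)

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])
labeled_idx = np.array([0, 1, 50, 51])   # a few observations with known labels
labels = np.array([0, 0, 1, 1])
print(seeded_kmeans(X, labeled_idx, labels, 2)[:5])
```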

  19. Automated Extraction of Gravity Wave Signatures from the Super Dual Auroral Radar Network (SuperDARN) Database Using Spatio-Temporal Process Discovery Algorithms

    Science.gov (United States)

    Baker, J. B.; Ramakrishnan, N.; Ruohoniemi, J. M.; Hossain, M.; Ribeiro, A.

    2011-12-01

    A major challenge in space physics research is the automated extraction of recurrent features from multi-dimensional datasets that tend to be irregularly gridded in both space and time. In many cases, the complexity of the datasets impedes their use by scientists, who are oftentimes most interested in extracting a simple time series of a higher-level data product that can be easily compared with other measurements. As such, the collective archive of space physics measurements is vastly under-utilized at the present time. Application of cutting-edge computer-aided data mining and knowledge discovery techniques has the potential to improve this situation by making space physics datasets much more accessible to the scientific user community and accelerating the rate of research and collaboration. As a first step in this direction, we are applying the principles of feature extraction, sub-clustering and motif mining to the analysis of HF backscatter measurements from the Super Dual Auroral Radar Network (SuperDARN). The SuperDARN database is an ideal test-bed for the development of space physics data mining algorithms because: (1) there is a richness of geophysical phenomena manifested in the data; (2) the data is multi-dimensional and exhibits a high degree of spatiotemporal sparseness; and (3) some of the radars have been operating continuously, with infrequent outages, for more than 25 years. In this presentation we discuss results obtained from the application of new data mining algorithms designed specifically to automate the extraction of gravity wave signatures from the SuperDARN database. In particular, we examine the occurrence statistics of gravity waves as a function of latitude, local time, and geomagnetic conditions.

  20. A Semi-Automated Machine Learning Algorithm for Tree Cover Delineation from 1-m Naip Imagery Using a High Performance Computing Architecture

    Science.gov (United States)

    Basu, S.; Ganguly, S.; Nemani, R. R.; Mukhopadhyay, S.; Milesi, C.; Votava, P.; Michaelis, A.; Zhang, G.; Cook, B. D.; Saatchi, S. S.; Boyda, E.

    2014-12-01

    Accurate tree cover delineation is a useful instrument in the derivation of Above Ground Biomass (AGB) density estimates from Very High Resolution (VHR) satellite imagery data. Numerous algorithms have been designed to perform tree cover delineation in high- to coarse-resolution satellite imagery, but most of them do not scale to the terabytes of data typical of these VHR datasets. In this paper, we present an automated probabilistic framework for the segmentation and classification of 1-m VHR data as obtained from the National Agriculture Imagery Program (NAIP) for deriving tree cover estimates for the whole of the Continental United States, using a High Performance Computing Architecture. The results from the classification and segmentation algorithms are then consolidated into a structured prediction framework using a discriminative undirected probabilistic graphical model based on Conditional Random Fields (CRF), which helps in capturing the higher-order contextual dependencies between neighboring pixels. Once the final probability maps are generated, the framework is updated and re-trained by incorporating expert knowledge through the relabeling of misclassified image patches. This leads to a significant improvement in the true positive rates and a reduction in false positive rates. The tree cover maps were generated for the state of California, which covers a total of 11,095 NAIP tiles and spans a total geographical area of 163,696 sq. miles. Our framework produced correct detection rates of around 85% for fragmented forests and 70% for urban tree cover areas, with false positive rates lower than 3% for both regions. Comparative studies with the National Land Cover Data (NLCD) algorithm and the LiDAR high-resolution canopy height model show the effectiveness of our algorithm in generating accurate high-resolution tree cover maps.

  1. Automated Algorithm for Generalized Tonic–Clonic Epileptic Seizure Onset Detection Based on sEMG Zero-Crossing Rate

    DEFF Research Database (Denmark)

    Conradsen, Isa; Beniczky, Sándor; Hoppe, Karsten

    2012-01-01

    Patients are not able to call for help during a generalized tonic–clonic epileptic seizure. Our objective was to develop a robust generic algorithm for automatic detection of tonic–clonic seizures, based on surface electromyography (sEMG) signals suitable for a portable device. Twenty-two seizure...
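
    The abstract names the core feature, the sEMG zero-crossing rate. A minimal sketch of computing it over consecutive windows follows; the window length and the absence of an amplitude hysteresis threshold are our assumptions, not the paper's settings.

```python
# Sketch of the zero-crossing-rate feature named above, computed per
# analysis window on a surface EMG trace. Parameters are illustrative.
import numpy as np

def zero_crossing_rate(semg, fs, window_s=1.0):
    """Zero crossings per second, computed over consecutive windows."""
    n = int(fs * window_s)
    rates = []
    for start in range(0, len(semg) - n + 1, n):
        w = semg[start:start + n]
        signs = np.signbit(w).astype(np.int8)
        rates.append(np.count_nonzero(np.diff(signs)) / window_s)
    return np.array(rates)

fs = 1000
t = np.arange(0, 5, 1 / fs)
semg = np.sin(2 * np.pi * 60 * t)       # synthetic 60 Hz "muscle" signal
print(zero_crossing_rate(semg, fs))     # ~120 crossings/s per window
```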

  2. Reflecting reflection in supervision

    DEFF Research Database (Denmark)

    Lystbæk, Christian Tang

    Reflection has moved from the margins to the mainstream in supervision. Notions of reflection have become well established since the late 1980s. These notions have provided useful framing devices to help conceptualize some important processes in guidance and counseling. However, some applications...... of reflection, rehabilitate them in order to capture broader connotations or move to new ways of regarding reflection that are more in keeping with not only reflective but also emotive, normative and formative views on supervision. The paper presents a critical perspective on supervision that challenge...

  3. Clinical Supervision in Denmark

    DEFF Research Database (Denmark)

    Jacobsen, Claus Haugaard

    Data from the Danish study of psychotherapists' professional development, collected using the DPCCQ. The presentation focuses on supervision (received, given, training in) among Danish psychologists working psychotherapeutically.

  4. A supervised approach

    OpenAIRE

    Khoramshahi, Ehsan; Hietaoja, Juha; Valros, Anna; Yun, Jinhyeon; Pastell, Matti

    2014-01-01

    This paper proposes a supervised classification approach for the real-time pattern recognition of sows in an animal supervision system (asup). Our approach offers the possibility of foreground subtraction in an asup's image processing module where there is a lack of statistical information regarding the background. A set of 7 farrowing sessions of sows, during day and night, was captured (approximately 7 days/sow) and is used for this study. The frames of these recordings have been...

  5. Automated real-time epileptic seizure detection in scalp EEG recordings using an algorithm based on wavelet packet transform.

    Science.gov (United States)

    Zandi, Ali Shahidi; Javidan, Manouchehr; Dumont, Guy A; Tafreshi, Reza

    2010-07-01

    A novel wavelet-based algorithm for real-time detection of epileptic seizures using scalp EEG is proposed. In a moving-window analysis, the EEG from each channel is decomposed by wavelet packet transform. Using wavelet coefficients from seizure and nonseizure references, a patient-specific measure is developed to quantify the separation between seizure and nonseizure states for the frequency range of 1-30 Hz. Utilizing this measure, a frequency band representing the maximum separation between the two states is determined and employed to develop a normalized index, called combined seizure index (CSI). CSI is derived for each epoch of every EEG channel based on both rhythmicity and relative energy of that epoch as well as consistency among different channels. Increasing significantly during ictal states, CSI is inspected using one-sided cumulative sum test to generate proper channel alarms. Analyzing alarms from all channels, a seizure alarm is finally generated. The algorithm was tested on scalp EEG recordings from 14 patients, totaling approximately 75.8 h with 63 seizures. Results revealed a high sensitivity of 90.5%, a false detection rate of 0.51 h(-1) and a median detection delay of 7 s. The algorithm could also lateralize the focus side for patients with temporal lobe epilepsy.
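
    The front end of such a detector, wavelet packet decomposition of an EEG window followed by sub-band energy inspection, can be sketched with PyWavelets; the wavelet choice and decomposition depth below are illustrative, not the paper's settings.

```python
# Sketch of the front end described above: decompose an EEG window with
# the wavelet packet transform and inspect terminal-node energies.
import numpy as np
import pywt

def subband_energies(eeg_window, wavelet="db4", level=5):
    """Energy per terminal wavelet-packet node, ordered by frequency."""
    wp = pywt.WaveletPacket(eeg_window, wavelet, maxlevel=level)
    return {node.path: float(np.sum(node.data ** 2))
            for node in wp.get_level(level, order="freq")}

fs = 256
t = np.arange(0, 2, 1 / fs)
eeg = np.sin(2 * np.pi * 5 * t) + 0.3 * np.random.default_rng(0).normal(size=t.size)
energies = subband_energies(eeg)
print(max(energies, key=energies.get))  # path of the dominant sub-band
```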

  6. Improved fingerprint identification with supervised filtering enhancement

    Science.gov (United States)

    Bal, Abdullah; El-Saba, Aed M.; Alam, Mohammad S.

    2005-02-01

    An important step in the fingerprint identification system is the reliable extraction of distinct features from fingerprint images. Identification performance is directly related to the enhancement of fingerprint images during or after the enrollment phase. Among the various enhancement algorithms, artificial-intelligence-based feature-extraction techniques are attractive owing to their adaptive learning properties. We present a new supervised filtering technique that is based on a dynamic neural-network approach to develop a robust fingerprint enhancement algorithm. For pattern matching, a joint transform correlation (JTC) algorithm has been incorporated that offers high processing speed for real-time applications. Because the fringe-adjusted JTC algorithm has been found to yield a significantly better correlation output compared with alternate JTCs, we used this algorithm for the identification process. Test results are presented to verify the effectiveness of the proposed algorithm.

  7. The implementation of an automated tracking algorithm for the track detection of migratory anticyclones affecting the Mediterranean

    Science.gov (United States)

    Hatzaki, Maria; Flocas, Elena A.; Simmonds, Ian; Kouroutzoglou, John; Keay, Kevin; Rudeva, Irina

    2013-04-01

    Migratory cyclones and anticyclones mainly account for the short-term weather variations in extra-tropical regions. In contrast to cyclones, which have drawn major scientific attention due to their direct link to active weather and precipitation, climatological studies on anticyclones are limited, even though they too are associated with extreme weather phenomena and play an important role in global and regional climate. This is especially true for the Mediterranean, a region particularly vulnerable to climate change, and the little research that has been done is essentially confined to the manual analysis of synoptic charts. For the construction of a comprehensive climatology of migratory anticyclonic systems in the Mediterranean using an objective methodology, the Melbourne University automatic tracking algorithm is applied, based on the ERA-Interim reanalysis mean sea level pressure database. The algorithm's reliability in accurately capturing the weather patterns and synoptic climatology of transient activity has been widely proven. It has been extensively applied in cyclone studies worldwide, and also successfully for the Mediterranean, though its use for anticyclone tracking has so far been limited to the Southern Hemisphere. In this study the performance of the tracking algorithm under different data resolutions and different choices of parameter settings in the scheme is examined. Our focus is on the appropriate modification of the algorithm in order to efficiently capture the individual characteristics of anticyclonic tracks in the Mediterranean, a closed basin with complex topography. We show that the number of detected anticyclonic centers and the resulting tracks largely depend upon the data resolution and the search radius. We also find that different-scale anticyclones and secondary centers that lie within larger anticyclone structures can be adequately represented; this is important, since the extensions of major

  8. Local Image Descriptors Using Supervised Kernel ICA

    Science.gov (United States)

    Yamazaki, Masaki; Fels, Sidney

    PCA-SIFT is an extension to SIFT which aims to reduce SIFT's high dimensionality (128 dimensions) by applying PCA to the gradient image patches. However PCA is not a discriminative representation for recognition due to its global feature nature and unsupervised algorithm. In addition, linear methods such as PCA and ICA can fail in the case of non-linearity. In this paper, we propose a new discriminative method called Supervised Kernel ICA (SKICA) that uses a non-linear kernel approach combined with Supervised ICA-based local image descriptors. Our approach blends the advantages of supervised learning with nonlinear properties of kernels. Using five different test data sets we show that the SKICA descriptors produce better object recognition performance than other related approaches with the same dimensionality. The SKICA-based representation has local sensitivity, non-linear independence and high class separability providing an effective method for local image descriptors.

  9. Feasibility of a semi-automated contrast-oriented algorithm for tumor segmentation in retrospectively gated PET images: phantom and clinical validation.

    Science.gov (United States)

    Carles, Montserrat; Fechter, Tobias; Nemer, Ursula; Nanko, Norbert; Mix, Michael; Nestle, Ursula; Schaefer, Andrea

    2015-12-21

    PET/CT plays an important role in radiotherapy planning for lung tumors. Several segmentation algorithms have been proposed for PET tumor segmentation. However, most of them do not take into account respiratory motion and are not well validated. The aim of this work was to evaluate a semi-automated contrast-oriented algorithm (COA) for PET tumor segmentation adapted to retrospectively gated (4D) images. The evaluation involved a wide set of 4D-PET/CT acquisitions of dynamic experimental phantoms and lung cancer patients. In addition, segmentation accuracy of 4D-COA was compared with four other state-of-the-art algorithms. In phantom evaluation, the physical properties of the objects defined the gold standard. In clinical evaluation, the ground truth was estimated by the STAPLE (Simultaneous Truth and Performance Level Estimation) consensus of three manual PET contours by experts. Algorithm evaluation with phantoms resulted in: (i) no statistically significant diameter differences for different targets and movements (Δφ = 0.3 ± 1.6 mm); (ii) reproducibility for heterogeneous and irregular targets independent of user initial interaction and (iii) good segmentation agreement for irregular targets compared to manual CT delineation in terms of Dice Similarity Coefficient (DSC = 0.66 ± 0.04), Positive Predictive Value (PPV = 0.81 ± 0.06) and Sensitivity (Sen. = 0.49 ± 0.05). In clinical evaluation, the segmented volume was in reasonable agreement with the consensus volume (difference in volume (%Vol) = 40 ± 30, DSC = 0.71 ± 0.07 and PPV = 0.90 ± 0.13). High accuracy in target tracking position (ΔME) was obtained for experimental and clinical data (ΔME(exp) = 0 ± 3 mm; ΔME(clin) = 0.3 ± 1.4 mm). In the comparison with other lung segmentation methods, 4D-COA has shown the highest volume accuracy in both experimental and clinical data. In conclusion, the accuracy in volume delineation, position tracking and its robustness on highly irregular target movements

  10. Feasibility of a semi-automated contrast-oriented algorithm for tumor segmentation in retrospectively gated PET images: phantom and clinical validation

    Science.gov (United States)

    Carles, Montserrat; Fechter, Tobias; Nemer, Ursula; Nanko, Norbert; Mix, Michael; Nestle, Ursula; Schaefer, Andrea

    2015-12-01

    PET/CT plays an important role in radiotherapy planning for lung tumors. Several segmentation algorithms have been proposed for PET tumor segmentation. However, most of them do not take into account respiratory motion and are not well validated. The aim of this work was to evaluate a semi-automated contrast-oriented algorithm (COA) for PET tumor segmentation adapted to retrospectively gated (4D) images. The evaluation involved a wide set of 4D-PET/CT acquisitions of dynamic experimental phantoms and lung cancer patients. In addition, segmentation accuracy of 4D-COA was compared with four other state-of-the-art algorithms. In phantom evaluation, the physical properties of the objects defined the gold standard. In clinical evaluation, the ground truth was estimated by the STAPLE (Simultaneous Truth and Performance Level Estimation) consensus of three manual PET contours by experts. Algorithm evaluation with phantoms resulted in: (i) no statistically significant diameter differences for different targets and movements (Δφ = 0.3 ± 1.6 mm); (ii) reproducibility for heterogeneous and irregular targets independent of user initial interaction and (iii) good segmentation agreement for irregular targets compared to manual CT delineation in terms of Dice Similarity Coefficient (DSC = 0.66 ± 0.04), Positive Predictive Value (PPV = 0.81 ± 0.06) and Sensitivity (Sen. = 0.49 ± 0.05). In clinical evaluation, the segmented volume was in reasonable agreement with the consensus volume (difference in volume (%Vol) = 40 ± 30, DSC = 0.71 ± 0.07 and PPV = 0.90 ± 0.13). High accuracy in target tracking position (ΔME) was obtained for experimental and clinical data (ΔME(exp) = 0 ± 3 mm; ΔME(clin) = 0.3 ± 1.4 mm). In the comparison with other lung segmentation methods, 4D-COA has shown the highest volume accuracy in both experimental and clinical data. In conclusion, the accuracy in volume

  11. Semi-supervised and unsupervised extreme learning machines.

    Science.gov (United States)

    Huang, Gao; Song, Shiji; Gupta, Jatinder N D; Wu, Cheng

    2014-12-01

    Extreme learning machines (ELMs) have proven to be efficient and effective learning mechanisms for pattern classification and regression. However, ELMs are primarily applied to supervised learning problems. Only a few existing research papers have used ELMs to explore unlabeled data. In this paper, we extend ELMs for both semi-supervised and unsupervised tasks based on the manifold regularization, thus greatly expanding the applicability of ELMs. The key advantages of the proposed algorithms are as follows: 1) both the semi-supervised ELM (SS-ELM) and the unsupervised ELM (US-ELM) exhibit learning capability and computational efficiency of ELMs; 2) both algorithms naturally handle multiclass classification or multicluster clustering; and 3) both algorithms are inductive and can handle unseen data at test time directly. Moreover, it is shown in this paper that all the supervised, semi-supervised, and unsupervised ELMs can actually be put into a unified framework. This provides new perspectives for understanding the mechanism of random feature mapping, which is the key concept in ELM theory. Empirical study on a wide range of data sets demonstrates that the proposed algorithms are competitive with the state-of-the-art semi-supervised or unsupervised learning algorithms in terms of accuracy and efficiency.
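
    For reference, a basic supervised ELM is just a random hidden layer followed by a least-squares output solve; the SS-ELM and US-ELM of the paper add manifold regularization on top of this. A minimal sketch of the basic form:

```python
# Minimal sketch of a basic (supervised) extreme learning machine:
# random hidden layer, then a least-squares output layer.
import numpy as np

class ELM:
    def __init__(self, n_hidden=64, seed=0):
        self.n_hidden, self.rng = n_hidden, np.random.default_rng(seed)

    def fit(self, X, y):
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)      # random feature map
        self.beta = np.linalg.pinv(H) @ y     # least-squares solve
        return self

    def predict(self, X):
        return np.tanh(X @ self.W + self.b) @ self.beta

X = np.random.randn(200, 3)
y = np.sin(X[:, 0]) + 0.1 * np.random.randn(200)
print(np.mean((ELM().fit(X, y).predict(X) - y) ** 2))  # training MSE
```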

  12. [Working under supervision].

    Science.gov (United States)

    Pafko, P; Mach, J

    2013-08-01

    A medical school graduate who has obtained only so-called adequate competence must work under the supervision of a doctor holding the relevant specialist certification in order to obtain that specialist competence himself. It is recommended to determine in writing which activities the doctor in training may perform alone during that time, which must be supervised by the responsible doctor, and which treatments he is not yet prepared to perform at all. The head doctor or the head of the clinic is responsible for ensuring full-time supervision of the doctor trainee by a certified specialist. If the trainee has obtained a certificate of completion of the basic-level training of the relevant specialist module, he can serve during emergency hours on the condition that the presence of a specialist doctor can be ensured within 20 minutes if needed. The head doctor, the head of the clinic, and the specialist medical supervisor have no universal responsibility for any possible misconduct of the trainee: everyone is responsible for his own conduct. The head doctor and the head of the clinic may be held responsible if they did not fulfil their duties as supervising doctors, or if the supervision was neglected. Responsibility for the trainee may arise if he did not respect the orders of the supervisor, or if the misconduct was such that it would be unacceptable even for a doctor without the relevant professional competence. If misconduct occurs despite absolutely correct supervision, and the misconduct was unavoidable, the supervising certified specialist cannot be held responsible for the consequences. Manual misconduct during surgery is usually regarded as accepted risk, not a violation of the rules of medical science.

  13. Computer-aided tumor detection based on multi-scale blob detection algorithm in automated breast ultrasound images.

    Science.gov (United States)

    Moon, Woo Kyung; Shen, Yi-Wei; Bae, Min Sun; Huang, Chiun-Sheng; Chen, Jeon-Hor; Chang, Ruey-Feng

    2013-07-01

    Automated whole breast ultrasound (ABUS) is an emerging screening tool for detecting breast abnormalities. In this study, a computer-aided detection (CADe) system based on multi-scale blob detection was developed for analyzing ABUS images. The performance of the proposed CADe system was tested using a database composed of 136 breast lesions (58 benign lesions and 78 malignant lesions) and 37 normal cases. After speckle noise reduction, Hessian analysis with multi-scale blob detection was applied for the detection of tumors. This method detected every tumor, but some nontumors were also detected. The tumor likelihoods for the remaining candidates were estimated using a logistic regression model based on blobness, internal echo, and morphology features. Tumor candidates with tumor likelihoods higher than a specific threshold (0.4) were considered tumors. By using the combination of blobness, internal echo, and morphology features with 10-fold cross-validation, the proposed CADe system showed sensitivities of 100%, 90%, and 70% with false positives per pass of 17.4, 8.8, and 2.7, respectively. Our results suggest that CADe systems based on multi-scale blob detection can be used to detect breast tumors in ABUS images.
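
    The multi-scale blob detection step can be illustrated with scikit-image's Laplacian-of-Gaussian detector as a stand-in for the Hessian analysis used above; the later logistic-regression likelihood model is omitted, and the synthetic image is hypothetical.

```python
# Sketch of multi-scale blob detection (LoG stand-in for the Hessian
# analysis described above) on a synthetic two-blob image.
import numpy as np
from skimage.feature import blob_log

yy, xx = np.mgrid[:128, :128]
img = np.exp(-((yy - 40) ** 2 + (xx - 40) ** 2) / (2 * 6.0 ** 2))          # large blob
img += 0.8 * np.exp(-((yy - 90) ** 2 + (xx - 80) ** 2) / (2 * 3.0 ** 2))   # small blob

# Laplacian-of-Gaussian responses over a range of scales
for y, x, sigma in blob_log(img, min_sigma=2, max_sigma=12, threshold=0.1):
    print(f"candidate at ({x:.0f}, {y:.0f}), radius ~ {sigma * 2 ** 0.5:.1f} px")
```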

  14. Clinical feasibility of 3D automated coronary atherosclerotic plaque quantification algorithm on coronary computed tomography angiography: Comparison with intravascular ultrasound

    Energy Technology Data Exchange (ETDEWEB)

    Park, Hyung-Bok [Yonsei University Health System, Yonsei-Cedar Sinai Integrative Cardiovascular Imaging Research Center, Seoul (Korea, Republic of); Myongji Hospital, Division of Cardiology, Cardiovascular Center, Goyang (Korea, Republic of); Lee, Byoung Kwon [Yonsei University College of Medicine, Division of Cardiology, Gangnam Severance Hospital, Seoul (Korea, Republic of); Shin, Sanghoon [Yonsei University Health System, Yonsei-Cedar Sinai Integrative Cardiovascular Imaging Research Center, Seoul (Korea, Republic of); National Health Insurance Corporation Ilsan Hospital, Division of Cardiology, Goyang (Korea, Republic of); Heo, Ran; Chang, Hyuk-Jae; Chung, Namsik [Yonsei University Health System, Yonsei-Cedar Sinai Integrative Cardiovascular Imaging Research Center, Seoul (Korea, Republic of); Yonsei University Health System, Division of Cardiology, Severance Cardiovascular Hospital, Seoul (Korea, Republic of); Arsanjani, Reza [Cedars-Sinai Medical Center, Departments of Imaging and Medicine, Cedars-Sinai Heart Institute, Los Angeles, CA (United States); Kitslaar, Pieter H. [Leiden University Medical Center, Department of Radiology, Division of Image Processing, Leiden (Netherlands); Medis medical Imaging Systems B.V., Leiden (Netherlands); Broersen, Alexander; Dijkstra, Jouke [Leiden University Medical Center, Department of Radiology, Division of Image Processing, Leiden (Netherlands); Ahn, Sung Gyun [Yonsei University Wonju Severance Christian Hospital, Division of Cardiology, Wonju (Korea, Republic of); Min, James K. [New York-Presbyterian Hospital, Institute for Cardiovascular Imaging, Weill-Cornell Medical College, New York, NY (United States); Hong, Myeong-Ki; Jang, Yangsoo [Yonsei University Health System, Division of Cardiology, Severance Cardiovascular Hospital, Seoul (Korea, Republic of)

    2015-10-15

    To evaluate the diagnostic performance of automated coronary atherosclerotic plaque quantification (QCT) by different users (expert/non-expert/automatic), one hundred fifty coronary artery segments from 142 patients who underwent coronary computed tomography angiography (CCTA) and intravascular ultrasound (IVUS) were analyzed. Minimal lumen area (MLA), maximal lumen area stenosis percentage (%AS), mean plaque burden percentage (%PB), and plaque volume were measured semi-automatically by expert, non-expert, and fully automatic QCT analyses, and then compared to IVUS. Between IVUS and expert QCT analysis, the correlation coefficients (r) for the MLA, %AS, %PB, and plaque volume were excellent: 0.89 (p < 0.001), 0.84 (p < 0.001), 0.91 (p < 0.001), and 0.94 (p < 0.001), respectively. There were no significant differences in the mean parameters (all p values >0.05) except %AS (p = 0.01). The automatic QCT analysis showed performance comparable to non-expert QCT analysis, with correlation coefficients (r) against IVUS for the MLA (0.80 vs. 0.82), %AS (0.82 vs. 0.80), %PB (0.84 vs. 0.73), and plaque volume (0.84 vs. 0.79), respectively. Fully automatic QCT analysis showed clinical utility compared with IVUS, as well as a compelling performance when compared with semiautomatic analyses. (orig.)

  15. Multi-objective genetic algorithm for the automated planning of a wireless sensor network to monitor a critical facility

    Science.gov (United States)

    Jourdan, Damien B.; de Weck, Olivier L.

    2004-09-01

    This paper examines the optimal placement of nodes for a Wireless Sensor Network (WSN) designed to monitor a critical facility in a hostile region. The sensors are dropped from an aircraft, and they must be connected (directly or via hops) to a High Energy Communication Node (HECN), which serves as a relay from the ground to a satellite or a high-altitude aircraft. The sensors are assumed to have fixed communication and sensing ranges. The facility is modeled as circular and served by two roads. This simple model is used to benchmark the performance of the optimizer (a Multi-Objective Genetic Algorithm, or MOGA) in creating WSN designs that provide clear assessments of movements in and out of the facility, while minimizing both the likelihood of sensors being discovered and the number of sensors to be dropped. The algorithm is also tested on two other scenarios; in the first one the WSN must detect movements in and out of a circular area, and in the second one it must cover uniformly a square region. The MOGA is shown again to perform well on those scenarios, which shows its flexibility and possible application to more complex mission scenarios with multiple and diverse targets of observation.

  16. A cloud pattern recognition algorithm to automate the estimation of mass eruption rates from an umbrella cloud or downwind plume observed via satellite imagery

    Science.gov (United States)

    Jansons, E.; Pouget, S.; Bursik, M. I.; Patra, A. K.; Pitman, E. B.; Tupper, A.

    2013-12-01

    The eruption of Eyjafjallajökull, Iceland, in April and May 2010 brought to light the importance of Volcanic Ash Transport and Dispersion (VATD) models for estimating the position and concentration of ash over time, and how vital it is for Volcanic Ash Advisory Centers (VAACs) to be able to detect and track ash clouds with both observations and models. A VATD model needs Eruption Source Parameters (ESP), including the mass eruption rate through time, as input, which ultimately relies on detecting the eruption regardless of the meteorological conditions. Volcanic cloud recognition is especially difficult when meteorological clouds are also present, which is typically the case in the tropics. Because meteorological clouds and volcanic clouds behave differently, we developed an agent-based pattern definition algorithm to detect and define volcanic clouds in satellite imagery. We have combined this with a plume growth rate methodology to automate the estimation of volumetric and mass growth with time, using plume geometry provided by satellite imagery. This allows an estimation of the mass eruption rate (MER) with time. To test our approach, we used two eruptions of different source strength, in two different climatic regimes, for which the weather during eruption was therefore quite different: Grímsvötn (Iceland), May 21, 2011, which produced an umbrella cloud readily seen above the cloud deck, and Manam (Papua New Guinea), October 24, 2004, which produced a stratospheric umbrella cloud that rapidly turned into a downwind plume and was difficult to distinguish from meteorological clouds. The new methods may in the future allow fast, easy, and automated detection of volcanic clouds as well as a remote assessment of the mass eruption rate with time, even for inaccessible volcanoes. The methods may thus provide an additional path to the estimation of the ESP and the forecasting of ash cloud propagation.

  17. Automated glioblastoma segmentation based on a multiparametric structured unsupervised classification.

    Science.gov (United States)

    Juan-Albarracín, Javier; Fuster-Garcia, Elies; Manjón, José V; Robles, Montserrat; Aparici, F; Martí-Bonmatí, L; García-Gómez, Juan M

    2015-01-01

    Automatic brain tumour segmentation has become a key component for the future of brain tumour treatment. Currently, most brain tumour segmentation approaches arise from the supervised learning standpoint, which requires a labelled training dataset from which to infer the models of the classes. The performance of these models is directly determined by the size and quality of the training corpus, whose retrieval becomes a tedious and time-consuming task. On the other hand, unsupervised approaches avoid these limitations but often do not reach results comparable to those of supervised methods. In this sense, we propose an automated unsupervised method for brain tumour segmentation based on anatomical Magnetic Resonance (MR) images. Four unsupervised classification algorithms, grouped by their structured or non-structured condition, were evaluated within our pipeline. Considering the non-structured algorithms, we evaluated K-means, Fuzzy K-means and the Gaussian Mixture Model (GMM), whereas as structured classification algorithms we evaluated the Gaussian Hidden Markov Random Field (GHMRF). An automated postprocess based on a statistical approach supported by tissue probability maps is proposed to automatically identify the tumour classes after the segmentations. We evaluated our brain tumour segmentation method with the public BRAin Tumor Segmentation (BRATS) 2013 Test and Leaderboard datasets. Our approach based on the GMM model improves the results obtained by most of the supervised methods evaluated with the Leaderboard set and reaches the second position in the ranking. Our variant based on the GHMRF achieves the first position in the Test ranking of the unsupervised approaches and the seventh position in the general Test ranking, which confirms the method as a viable alternative for brain tumour segmentation.

  18. Automated glioblastoma segmentation based on a multiparametric structured unsupervised classification.

    Directory of Open Access Journals (Sweden)

    Javier Juan-Albarracín

    Full Text Available Automatic brain tumour segmentation has become a key component for the future of brain tumour treatment. Currently, most brain tumour segmentation approaches arise from the supervised learning standpoint, which requires a labelled training dataset from which to infer the models of the classes. The performance of these models is directly determined by the size and quality of the training corpus, whose retrieval becomes a tedious and time-consuming task. On the other hand, unsupervised approaches avoid these limitations but often do not reach results comparable to those of supervised methods. In this sense, we propose an automated unsupervised method for brain tumour segmentation based on anatomical Magnetic Resonance (MR) images. Four unsupervised classification algorithms, grouped by their structured or non-structured condition, were evaluated within our pipeline. Considering the non-structured algorithms, we evaluated K-means, Fuzzy K-means and the Gaussian Mixture Model (GMM), whereas as structured classification algorithms we evaluated the Gaussian Hidden Markov Random Field (GHMRF). An automated postprocess based on a statistical approach supported by tissue probability maps is proposed to automatically identify the tumour classes after the segmentations. We evaluated our brain tumour segmentation method with the public BRAin Tumor Segmentation (BRATS) 2013 Test and Leaderboard datasets. Our approach based on the GMM model improves the results obtained by most of the supervised methods evaluated with the Leaderboard set and reaches the second position in the ranking. Our variant based on the GHMRF achieves the first position in the Test ranking of the unsupervised approaches and the seventh position in the general Test ranking, which confirms the method as a viable alternative for brain tumour segmentation.
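
    The unsupervised core of the method, fitting a Gaussian mixture to multiparametric voxel intensities and labelling each voxel with its most probable component, can be sketched as follows; the three-channel feature choice and synthetic data are assumptions, and the tissue-probability-map postprocessing is omitted.

```python
# Sketch of the GMM-based core: cluster multiparametric voxel
# intensities and take the most probable component per voxel.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# rows = voxels, cols = e.g. (T1, T2, FLAIR) intensities (hypothetical)
voxels = np.vstack([rng.normal(m, 0.3, (500, 3)) for m in (0.2, 0.5, 0.9)])

gmm = GaussianMixture(n_components=3, covariance_type="full").fit(voxels)
labels = gmm.predict(voxels)        # candidate tissue classes per voxel
print(np.bincount(labels))          # voxels assigned to each class
```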

  19. Supervised classification of solar features using prior information

    Science.gov (United States)

    De Visscher, Ruben; Delouille, Véronique; Dupont, Pierre; Deledalle, Charles-Alban

    2015-10-01

    Context: The Sun as seen by Extreme Ultraviolet (EUV) telescopes exhibits a variety of large-scale structures. Of particular interest for space-weather applications is the extraction of active regions (AR) and coronal holes (CH). The next generation of GOES-R satellites will provide continuous monitoring of the solar corona in six EUV bandpasses that are similar to the ones provided by the SDO-AIA EUV telescope since May 2010. Supervised segmentations of EUV images that are consistent with manual segmentations by, for example, space-weather forecasters help in extracting useful information from the raw data. Aims: We present a supervised segmentation method that is based on the Maximum A Posteriori rule. Our method allows integrating manually segmented images as well as other types of information. It is applied to SDO-AIA images to segment them into AR, CH, and the remaining Quiet Sun (QS) part. Methods: A Bayesian classifier is applied on training masks provided by the user. The noise structure in EUV images is non-trivial, and this suggests the use of a non-parametric kernel density estimator to fit the intensity distribution within each class. Under the Naive Bayes assumption we can add information such as the latitude distribution and total coverage of each class in a consistent manner. This information can be prescribed by an expert or estimated with an Expectation-Maximization algorithm. Results: The segmentation masks are in line with the training masks given as input and show consistency over time. The introduction of additional information besides pixel intensity improves the quality of the final segmentation. Conclusions: Such a tool can aid in building automated segmentations that are consistent with some 'ground truth' defined by the users.

  20. Supervised classification of solar features using prior information

    Directory of Open Access Journals (Sweden)

    De Visscher Ruben

    2015-01-01

    Full Text Available Context: The Sun as seen by Extreme Ultraviolet (EUV) telescopes exhibits a variety of large-scale structures. Of particular interest for space-weather applications is the extraction of active regions (AR) and coronal holes (CH). The next generation of GOES-R satellites will provide continuous monitoring of the solar corona in six EUV bandpasses that are similar to the ones provided by the SDO-AIA EUV telescope since May 2010. Supervised segmentations of EUV images that are consistent with manual segmentations by, for example, space-weather forecasters help in extracting useful information from the raw data. Aims: We present a supervised segmentation method that is based on the Maximum A Posteriori rule. Our method allows integrating manually segmented images as well as other types of information. It is applied to SDO-AIA images to segment them into AR, CH, and the remaining Quiet Sun (QS) part. Methods: A Bayesian classifier is applied on training masks provided by the user. The noise structure in EUV images is non-trivial, and this suggests the use of a non-parametric kernel density estimator to fit the intensity distribution within each class. Under the Naive Bayes assumption we can add information such as the latitude distribution and total coverage of each class in a consistent manner. This information can be prescribed by an expert or estimated with an Expectation-Maximization algorithm. Results: The segmentation masks are in line with the training masks given as input and show consistency over time. The introduction of additional information besides pixel intensity improves the quality of the final segmentation. Conclusions: Such a tool can aid in building automated segmentations that are consistent with some 'ground truth' defined by the users.
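
    The classifier described above, per-class kernel density estimates of pixel intensity combined with class priors under the MAP rule, can be sketched as follows; the intensity distributions and priors are invented for illustration, and the extra latitude/coverage terms are omitted.

```python
# Sketch of MAP classification with per-class KDE likelihoods plus
# class priors, as in the method described above (intensity only).
import numpy as np
from sklearn.neighbors import KernelDensity

def fit_kde_classes(samples_by_class, bandwidth=0.05):
    return {c: KernelDensity(bandwidth=bandwidth).fit(x.reshape(-1, 1))
            for c, x in samples_by_class.items()}

def map_classify(intensities, kdes, priors):
    x = np.asarray(intensities).reshape(-1, 1)
    scores = np.column_stack([kdes[c].score_samples(x) + np.log(priors[c])
                              for c in kdes])
    return np.array(list(kdes))[np.argmax(scores, axis=1)]

rng = np.random.default_rng(2)
train = {"CH": rng.normal(0.1, 0.05, 300),    # dark coronal holes
         "QS": rng.normal(0.4, 0.10, 300),    # quiet Sun
         "AR": rng.normal(0.9, 0.15, 300)}    # bright active regions
kdes = fit_kde_classes(train)
print(map_classify([0.05, 0.45, 1.0], kdes,
                   {"CH": 0.1, "QS": 0.85, "AR": 0.05}))
```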

  1. Researching online supervision

    DEFF Research Database (Denmark)

    Bengtsen, Søren S. E.; Mathiasen, Helle

    2014-01-01

    Online supervision and the use of digital media in supervisory dialogues is a fast increasing practice in higher education today. However, the concepts in our pedagogical repertoire often reflect the digital tools used for supervision purposes as either a prolongation of the face-to-face contact......, or a poor substitution of such. This one-sidedness on the conceptual level makes it challenging to empirically study the deeper implications digital tools have for the supervisory dialogue. Drawing on phenomenology and systems theory we argue that we need new concepts in qualitative methodology that allow...

  2. Collective academic supervision

    DEFF Research Database (Denmark)

    Nordentoft, Helle Merete; Thomsen, Rie; Wichmann-Hansen, Gitte

    2013-01-01

    are interconnected. Collective Academic Supervision provides possibilities for systematic interaction between individual master students in their writing process. In this process they learn core academic competencies, such as the ability to assess theoretical and practical problems in their practice and present them...... process. This article fills these gaps by discussing potentials and challenges in “Collective Academic Supervision”, a model for supervision at the Master of Education in Guidance at Aarhus University in Denmark. The pedagogical rationale behind the model is that students’ participation and learning...

  3. Scalable Automated Model Search

    Science.gov (United States)

    2014-05-20

    related to GHOSTFACE is Auto-Weka [38]. As the name suggests, Auto-Weka aims to automate the use of Weka [10] by applying recent derivative-free...algorithm is one of the many optimization algorithms we use as part of GHOSTFACE. However, in contrast to GHOSTFACE, Auto-Weka focuses on single node performance and does not optimize the parallel execution of algorithms. Moreover, Auto-Weka treats algorithms as black boxes to be executed and

  4. Automated source classification of new transient sources

    Science.gov (United States)

    Oertel, M.; Kreikenbohm, A.; Wilms, J.; DeLuca, A.

    2017-10-01

    The EXTraS project harvests the hitherto unexplored temporal domain information buried in the serendipitous data collected by the European Photon Imaging Camera (EPIC) onboard the ESA XMM-Newton mission since its launch. This includes a search for fast transients, missed by standard image analysis, and a search and characterization of variability in hundreds of thousands of sources. We present an automated classification scheme for new transient sources in the EXTraS project. The method is as follows: source classification features of a training sample are used to train machine learning algorithms (performed in R; randomForest (Breiman, 2001) in supervised mode) which are then tested on a sample of known source classes and used for classification.
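
    The project trains random forests in R (randomForest); an equivalent sketch in Python with scikit-learn is shown below, with made-up feature dimensions and class names standing in for the EXTraS source classification features.

```python
# Equivalent sketch of supervised random-forest classification of
# transient sources; features and class names are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(3)
X_train = rng.normal(size=(500, 6))             # source features
y_train = rng.choice(["AGN", "star", "XRB"], size=500)

clf = RandomForestClassifier(n_estimators=300, oob_score=True)
clf.fit(X_train, y_train)
print(clf.oob_score_)                           # out-of-bag accuracy
print(clf.predict(rng.normal(size=(1, 6))))     # classify a new transient
```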

  5. Slotting optimization of automated storage and retrieval system (AS/RS) for efficient delivery of parts in an assembly shop using genetic algorithm: A case Study

    Science.gov (United States)

    Yue, L.; Guan, Z.; He, C.; Luo, D.; Saif, U.

    2017-06-01

    In recent years, competitive pressure has shifted manufacturing companies from mass production to mass customization, producing a large variety of products. Meeting customized demand on time with a mixed-flow mode of production is a great challenge for companies today. Because of the large variety of products, the storage system that delivers parts to the production lines influences the timely production of this variety, as shown by a simulation study of an inefficient storage system at a real company in the current research. Therefore, the current research proposes a slotting optimization model that considers the mixed-model assembly sequence of the final flow lines, to optimize the whole automated storage and retrieval system (AS/RS) and distribution system in the case company. The research aims to simultaneously minimize the vertical height of the centre of gravity of the AS/RS and the total time spent retrieving materials from it. A genetic algorithm is adopted to solve the proposed problem, and computational results show significant improvements in the stability and efficiency of the AS/RS compared with the existing method used in the case company.
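
    A toy sketch of GA-based slotting follows: items are assigned to AS/RS slots so that frequently picked items end up in quick-access slots. The single-objective fitness, permutation encoding, and operators are simplified assumptions, not the paper's bi-objective model.

```python
# Toy GA for slotting: a permutation maps items to slots; fitness is
# total travel time. All data and operator choices are illustrative.
import random
random.seed(0)

items = list(range(8))
demand = [30, 5, 12, 50, 8, 20, 3, 9]   # picks per period (hypothetical)
slot_time = [1, 2, 3, 4, 5, 6, 7, 8]    # access time of each slot

def cost(perm):
    """Total travel time when item perm[s] is stored in slot s."""
    return sum(demand[item] * slot_time[s] for s, item in enumerate(perm))

def order_crossover(p1, p2):
    a, b = sorted(random.sample(range(len(p1)), 2))
    middle = p1[a:b]
    rest = [g for g in p2 if g not in middle]
    return rest[:a] + middle + rest[a:]

def mutate(p):
    i, j = random.sample(range(len(p)), 2)
    p[i], p[j] = p[j], p[i]

pop = [random.sample(items, len(items)) for _ in range(30)]
for _ in range(200):
    pop.sort(key=cost)
    kids = [order_crossover(random.choice(pop[:10]), random.choice(pop[:10]))
            for _ in range(20)]
    for kid in kids:
        if random.random() < 0.3:
            mutate(kid)
    pop = pop[:10] + kids

best = min(pop, key=cost)
print(best, cost(best))   # high-demand items pushed into fast slots
```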

  6. Abdominal adipose tissue quantification on water-suppressed and non-water-suppressed MRI at 3T using semi-automated FCM clustering algorithm

    Science.gov (United States)

    Valaparla, Sunil K.; Peng, Qi; Gao, Feng; Clarke, Geoffrey D.

    2014-03-01

    Accurate measurements of human body fat distribution are desirable because excessive body fat is associated with impaired insulin sensitivity, type 2 diabetes mellitus (T2DM) and cardiovascular disease. In this study, we hypothesized that the performance of water suppressed (WS) MRI is superior to non-water suppressed (NWS) MRI for volumetric assessment of abdominal subcutaneous (SAT), intramuscular (IMAT), visceral (VAT), and total (TAT) adipose tissues. We acquired T1-weighted images on a 3T MRI system (TIM Trio, Siemens), which was analyzed using semi-automated segmentation software that employs a fuzzy c-means (FCM) clustering algorithm. Sixteen contiguous axial slices, centered at the L4-L5 level of the abdomen, were acquired in eight T2DM subjects with water suppression (WS) and without (NWS). Histograms from WS images show improved separation of non-fatty tissue pixels from fatty tissue pixels, compared to NWS images. Paired t-tests of WS versus NWS showed a statistically significant lower volume of lipid in the WS images for VAT (145.3 cc less, p=0.006) and IMAT (305 cc less, p2). There is strong correlation between WS and NWS quantification methods for SAT measurements (r=0.999), but poorer correlation for VAT studies (r=0.845). These results suggest that NWS pulse sequences may overestimate adipose tissue volumes and that WS pulse sequences are more desirable due to the higher contrast generated between fatty and non-fatty tissues.
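
    The segmentation core, fuzzy c-means clustering of intensities, can be sketched directly from its update rules; the one-dimensional synthetic intensities below stand in for image voxels, which is our simplification.

```python
# Minimal NumPy sketch of the fuzzy c-means (FCM) update rules used in
# such segmentation tools; m is the usual fuzzifier exponent.
import numpy as np

def fcm(x, c=2, m=2.0, iters=100, seed=0):
    """Fuzzy c-means on 1-D data: returns cluster centers and memberships."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(x), c))
    u /= u.sum(axis=1, keepdims=True)           # initial fuzzy memberships
    for _ in range(iters):
        w = u ** m                              # fuzzified weights
        centers = (w * x[:, None]).sum(axis=0) / w.sum(axis=0)
        d = np.abs(x[:, None] - centers) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))
        u = inv / inv.sum(axis=1, keepdims=True)
    return centers, u

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0.2, 0.05, 200),   # lean-tissue intensities
                    rng.normal(0.8, 0.05, 100)])  # fat intensities
centers, u = fcm(x)
print(np.sort(centers))                           # ~[0.2, 0.8]
```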

  7. Optimization of automated segmentation of monkeypox virus-induced lung lesions from normal lung CT images using hard C-means algorithm

    Science.gov (United States)

    Castro, Marcelo A.; Thomasson, David; Avila, Nilo A.; Hufton, Jennifer; Senseney, Justin; Johnson, Reed F.; Dyall, Julie

    2013-03-01

    Monkeypox virus is an emerging zoonotic pathogen that results in up to 10% mortality in humans. Knowledge of clinical manifestations and the temporal progression of monkeypox disease is limited to data collected from rare outbreaks in remote regions of Central and West Africa. Clinical observations show that monkeypox infection resembles variola infection. Given the limited capability to study monkeypox disease in humans, characterization of the disease in animal models is required. Previous work focused on the identification of inflammatory patterns using the PET/CT image modality in two non-human primates previously inoculated with the virus. In this work we extended techniques used in the computer-aided detection of lung tumors to identify inflammatory lesions from monkeypox virus infection and their progression using CT images. Accurate estimation of partial volumes of lung lesions via segmentation is difficult because of poor discrimination between blood vessels, diseased regions, and outer structures. We used the hard C-means algorithm in conjunction with landmark-based registration to estimate the extent of monkeypox-virus-induced disease before inoculation and after disease progression. The automated estimation is in close agreement with manual segmentation.

  8. Optimal Installation Locations for Automated External Defibrillators in Taipei 7-Eleven Stores: Using GIS and a Genetic Algorithm with a New Stirring Operator

    Directory of Open Access Journals (Sweden)

    Chung-Yuan Huang

    2014-01-01

    Full Text Available Immediate treatment with an automated external defibrillator (AED) increases out-of-hospital cardiac arrest (OHCA) patient survival potential. While considerable attention has been given to determining optimal public AED locations, spatial and temporal factors such as time of day and distance from emergency medical services (EMS) are understudied. Here we describe a geocomputational genetic algorithm with a new stirring operator (GANSO) that considers spatial and temporal cardiac arrest occurrence factors when assessing the feasibility of using Taipei 7-Eleven stores as installation locations for AEDs. Our model is based on two AED conveyance modes, walking/running and driving, involving service distances of 100 and 300 meters, respectively. Our results suggest different AED allocation strategies involving convenience stores in urban settings. In commercial areas, such installations can compensate for temporal gaps in EMS locations when responding to nighttime OHCA incidents. In residential areas, store installations can compensate for long distances from fire stations, where AEDs are currently held in Taipei.

  9. Spatiotemporal patterns of High Mountain Asia's snowmelt season identified with an automated snowmelt detection algorithm, 1987–2016

    Directory of Open Access Journals (Sweden)

    T. Smith

    2017-10-01

    Full Text Available High Mountain Asia (HMA) – encompassing the Tibetan Plateau and surrounding mountain ranges – is the primary water source for much of Asia, serving more than a billion downstream users. Many catchments receive the majority of their yearly water budget in the form of snow, which is poorly monitored by sparse in situ weather networks. Both the timing and volume of snowmelt play critical roles in downstream water provision, as many applications – such as agriculture, drinking-water generation, and hydropower – rely on consistent and predictable snowmelt runoff. Here, we examine passive microwave data across HMA with five sensors (SSMI, SSMIS, AMSR-E, AMSR2, and GPM) from 1987 to 2016 to track the timing of the snowmelt season – defined here as the time between maximum passive microwave signal separation and snow clearance. We validated our method against climate model surface temperatures, optical remote-sensing snow-cover data, and a manual control dataset (n = 2100, 3 variables at 25 locations over 28 years); our algorithm is generally accurate within 3–5 days. Using the algorithm-generated snowmelt dates, we examine the spatiotemporal patterns of the snowmelt season across HMA. The climatically short (29-year) time series, along with complex interannual snowfall variations, makes determining trends in snowmelt dates at a single point difficult. We instead identify trends in snowmelt timing by using hierarchical clustering of the passive microwave data to determine trends in self-similar regions. We make the following four key observations. (1) The end of the snowmelt season is trending almost universally earlier in HMA (negative trends). Changes in the end of the snowmelt season are generally between 2 and 8 days decade−1 over the 29-year study period (5–25 days total). The length of the snowmelt season is thus shrinking in many, though not all, regions of HMA. Some areas exhibit later peak signal separation (positive

  10. Spatiotemporal patterns of High Mountain Asia's snowmelt season identified with an automated snowmelt detection algorithm, 1987-2016

    Science.gov (United States)

    Smith, Taylor; Bookhagen, Bodo; Rheinwalt, Aljoscha

    2017-10-01

High Mountain Asia (HMA) - encompassing the Tibetan Plateau and surrounding mountain ranges - is the primary water source for much of Asia, serving more than a billion downstream users. Many catchments receive the majority of their yearly water budget in the form of snow, which is poorly monitored by sparse in situ weather networks. Both the timing and volume of snowmelt play critical roles in downstream water provision, as many applications - such as agriculture, drinking-water generation, and hydropower - rely on consistent and predictable snowmelt runoff. Here, we examine passive microwave data across HMA with five sensors (SSMI, SSMIS, AMSR-E, AMSR2, and GPM) from 1987 to 2016 to track the timing of the snowmelt season - defined here as the time between maximum passive microwave signal separation and snow clearance. We validated our method against climate model surface temperatures, optical remote-sensing snow-cover data, and a manual control dataset (n = 2100, 3 variables at 25 locations over 28 years); our algorithm is generally accurate within 3-5 days. Using the algorithm-generated snowmelt dates, we examine the spatiotemporal patterns of the snowmelt season across HMA. The climatically short (29-year) time series, along with complex interannual snowfall variations, makes determining trends in snowmelt dates at a single point difficult. We instead identify trends in snowmelt timing by using hierarchical clustering of the passive microwave data to determine trends in self-similar regions. We make the following four key observations. (1) The end of the snowmelt season is trending almost universally earlier in HMA (negative trends). Changes in the end of the snowmelt season are generally between 2 and 8 days per decade over the 29-year study period (5-25 days total). The length of the snowmelt season is thus shrinking in many, though not all, regions of HMA. Some areas exhibit later peak signal separation (positive trends), but with generally smaller magnitudes
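As a loose illustration of the trend-by-region step described in this record, the sketch below clusters synthetic per-pixel melt-date series with Ward linkage and fits a robust Theil-Sen trend to each cluster's mean series. The data, cluster count, and noise levels are placeholders, not the 1987-2016 passive microwave record.

```python
# Illustrative only: hierarchical clustering of melt-date series, then a robust
# per-cluster trend. Synthetic data stands in for the passive microwave record.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.stats import theilslopes

rng = np.random.default_rng(1)
n_pixels, n_years = 300, 29
years = np.arange(n_years)
base = rng.uniform(120, 200, n_pixels)            # mean melt day-of-year per pixel
trend = rng.choice([-0.6, -0.2, 0.1], n_pixels)   # days per year, mixed signs
series = base[:, None] + trend[:, None] * years + rng.normal(0, 6, (n_pixels, n_years))

# Group self-similar pixels, then estimate one trend per region.
labels = fcluster(linkage(series, method="ward"), t=5, criterion="maxclust")
for k in np.unique(labels):
    mean_series = series[labels == k].mean(axis=0)
    slope, _, lo, hi = theilslopes(mean_series, years)
    print(f"cluster {k}: {10 * slope:+.1f} days/decade (95% CI {10 * lo:+.1f} to {10 * hi:+.1f})")
```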

  11. Clinical Supervision in Denmark

    DEFF Research Database (Denmark)

    Jacobsen, Claus Haugaard

    2011-01-01

on giving and receiving clinical supervision as reported by therapists in Denmark. Method: Currently, the Danish sample consists of 350 clinical psychologists doing psychotherapy who completed the DPCCQ. Data are currently being prepared for statistical analysis. Results: This paper will focus primarily...

  12. Supervision and group dynamics

    DEFF Research Database (Denmark)

    Hansen, Søren; Jensen, Lars Peter

    2004-01-01

An important aspect of the problem-based and project-organized study at Aalborg University is the supervision of the project groups. At the basic education (first year) it is stated in the curriculum that part of the supervisors' job is to deal with group dynamics. This is due to the experience ...

  13. Graph-based semi-supervised learning

    CERN Document Server

    Subramanya, Amarnag

    2014-01-01

While labeled data is expensive to prepare, ever-increasing amounts of unlabeled data are becoming widely available. In order to adapt to this phenomenon, several semi-supervised learning (SSL) algorithms, which learn from labeled as well as unlabeled data, have been developed. In a separate line of work, researchers have started to realize that graphs provide a natural way to represent data in a variety of domains. Graph-based SSL algorithms, which bring together these two lines of work, have been shown to outperform the state-of-the-art in many applications in speech processing, computer visi
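A minimal stand-in for the family of graph-based SSL algorithms this book surveys (not code from the book itself): labels are propagated over an RBF affinity graph, with the labeled nodes clamped each iteration. The data and kernel width are arbitrary.

```python
# Graph-based SSL sketch: iterative label propagation with clamped labeled nodes.
import numpy as np

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(4, 1, (50, 2))])
y = np.full(100, -1)                      # -1 marks unlabeled points
y[0], y[50] = 0, 1                        # one labeled point per class

# Dense RBF affinity graph (a sparse k-NN graph is the usual alternative).
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
W = np.exp(-d2 / 2.0)
np.fill_diagonal(W, 0.0)

F = np.zeros((100, 2))                    # per-node label distributions
labeled = y >= 0
F[labeled, y[labeled]] = 1.0
for _ in range(100):
    F = W @ F                             # propagate along graph edges
    F /= F.sum(axis=1, keepdims=True)     # renormalize rows
    F[labeled] = 0.0                      # clamp the labeled nodes
    F[labeled, y[labeled]] = 1.0

print("predicted class sizes:", np.bincount(F.argmax(axis=1)))
```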

  14. Spectral Learning for Supervised Topic Models.

    Science.gov (United States)

    Ren, Yong; Wang, Yining; Zhu, Jun

    2017-03-15

Supervised topic models simultaneously model the latent topic structure of large collections of documents and a response variable associated with each document. Existing inference methods are based on variational approximation or Monte Carlo sampling, which often suffer from local minima. Spectral methods have been applied to learn unsupervised topic models, such as latent Dirichlet allocation (LDA), with provable guarantees. This paper investigates the possibility of applying spectral methods to recover the parameters of supervised LDA (sLDA). We first present a two-stage spectral method, which recovers the parameters of LDA followed by a power update method to recover the regression model parameters. Then, we further present a single-phase spectral algorithm to jointly recover the topic distribution matrix as well as the regression weights. Our spectral algorithms are provably correct and computationally efficient. We prove a sample complexity bound for each algorithm and subsequently derive a sufficient condition for the identifiability of sLDA. Thorough experiments on synthetic and real-world datasets verify the theory and demonstrate the practical effectiveness of the spectral algorithms. In fact, our results on a large-scale review rating dataset demonstrate that our single-phase spectral algorithm alone gets comparable or even better performance than state-of-the-art methods, while previous work on spectral methods has rarely reported such promising performance.

  15. A New Method for Solving Supervised Data Classification Problems

    Directory of Open Access Journals (Sweden)

    Parvaneh Shabanzadeh

    2014-01-01

Full Text Available Supervised data classification is one of the techniques used to extract nontrivial information from data. Classification is a widely used technique in various fields, including data mining, industry, medicine, science, and law. This paper considers a new algorithm for supervised data classification problems associated with cluster analysis. The mathematical formulations for this algorithm are based on nonsmooth, nonconvex optimization. A new algorithm for solving this optimization problem is utilized; it uses a robust and efficient derivative-free technique. To improve classification performance and the efficiency of generating the classification model, a new feature selection algorithm based on convex programming techniques is suggested. The proposed methods are tested on real-world datasets. Results of numerical experiments are presented which demonstrate the effectiveness of the proposed algorithms.

  16. SU-E-I-89: Assessment of CT Radiation Dose and Image Quality for An Automated Tube Potential Selection Algorithm Using Pediatric Anthropomorphic and ACR Phantoms

    Energy Technology Data Exchange (ETDEWEB)

    Mahmood, U; Erdi, Y; Wang, W [Memorial Sloan Kettering Cancer Center, NY, NY (United States)

    2014-06-01

Purpose: To assess the impact of General Electric's automated tube potential algorithm, kV assist (kVa), on radiation dose and image quality, with an emphasis on optimizing protocols based on noise texture. Methods: Radiation dose was assessed by inserting optically stimulated luminescence dosimeters (OSLs) throughout the body of a pediatric anthropomorphic phantom (CIRS). The baseline protocol was: 120 kVp, 80 mA, 0.7 s rotation time. Image quality was assessed by calculating the contrast-to-noise ratio (CNR) and noise power spectrum (NPS) from the ACR CT accreditation phantom. CNRs were calculated according to the steps described in the ACR CT phantom testing document. NPS was determined by taking the 3D FFT of the uniformity section of the ACR phantom. NPS and CNR were evaluated with and without kVa and for all available adaptive statistical iterative reconstruction (ASiR) settings, ranging from 0 to 100%. Each NPS was also evaluated for its peak frequency difference (PFD) with respect to the baseline protocol. Results: For the baseline protocol, CNR was found to decrease from 0.460 ± 0.182 to 0.420 ± 0.057 when kVa was activated. When compared against the baseline protocol, the PFD at ASiR of 40% yielded a decrease in noise magnitude, as realized by the increase in CNR to 0.620 ± 0.040. The liver dose decreased by 30% with kVa activation. Conclusion: Application of kVa reduces the liver dose by up to 30%. However, a reduction in image quality for abdominal scans occurs when using the automated tube voltage selection feature at the baseline protocol. As demonstrated by the CNR and NPS analysis, the texture and magnitude of the noise in reconstructed images at ASiR 40% were found to be the same as in our baseline images. We have demonstrated that a 30% dose reduction is possible when using 40% ASiR with kVa in pediatric patients.
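For readers unfamiliar with the NPS measurement used in this record (and in the companion adult-phantom study below), here is a minimal sketch of the standard recipe: mean-subtract uniform-region ROIs, take the FFT, and average the squared magnitude. The ROI shapes, voxel sizes, and normalization are illustrative, not the ACR- or scanner-specific procedure.

```python
# Back-of-the-envelope NPS sketch on synthetic uniform-water ROIs.
import numpy as np

def noise_power_spectrum(rois, dx, dy, dz):
    """rois: (n, nz, ny, nx) noise-only ROIs from the uniformity section."""
    n, nz, ny, nx = rois.shape
    nps = np.zeros((nz, ny, nx))
    for roi in rois:
        noise = roi - roi.mean()                    # remove the DC component
        nps += np.abs(np.fft.fftn(noise)) ** 2
    # Normalization: voxel volume over voxel count, averaged over ROIs.
    return nps * (dx * dy * dz) / (nx * ny * nz) / n

rng = np.random.default_rng(2)
rois = rng.normal(0, 10, (8, 16, 64, 64))           # synthetic "uniform water" ROIs
nps3d = noise_power_spectrum(rois, dx=0.49, dy=0.49, dz=0.625)
fx = np.fft.fftfreq(64, d=0.49)                     # spatial frequencies along x
profile = nps3d.mean(axis=(0, 1))                   # crude 1D profile along x
# The PFD quoted in the abstract compares such peak frequencies between protocols.
print("NPS peak frequency (1/mm):", abs(fx[int(np.argmax(profile))]))
```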

  17. Automated wholeslide analysis of multiplex-brightfield IHC images for cancer cells and carcinoma-associated fibroblasts

    Science.gov (United States)

    Lorsakul, Auranuch; Andersson, Emilia; Vega Harring, Suzana; Sade, Hadassah; Grimm, Oliver; Bredno, Joerg

    2017-03-01

Multiplex-brightfield immunohistochemistry (IHC) staining and quantitative measurement of multiple biomarkers can support therapeutic targeting of carcinoma-associated fibroblasts (CAF). This paper presents an automated digital-pathology solution to simultaneously analyze multiple biomarker expressions within a single tissue section stained with an IHC duplex assay. Our method was verified against ground truth provided by expert pathologists. In the first stage, the automated method quantified epithelial-carcinoma cells expressing cytokeratin (CK) using robust nucleus detection and supervised cell-by-cell classification algorithms with a combination of nucleus and contextual features. Using fibroblast activation protein (FAP) as a biomarker for CAFs, the algorithm was trained, based on ground truth obtained from pathologists, to automatically identify tumor-associated stroma using a supervised-generation rule. The algorithm reported the distance to the nearest neighbor in the populations of tumor cells and activated-stromal fibroblasts as a whole-slide measure of spatial relationships. A total of 45 slides from six indications (breast, pancreatic, colorectal, lung, ovarian, and head-and-neck cancers) were included for training and verification. CK-positive cells detected by the algorithm were verified by a pathologist with good agreement (R2=0.98) to the ground-truth count. For the area occupied by FAP-positive cells, the inter-observer agreement between two sets of ground-truth measurements was R2=0.93, whereas the algorithm reproduced the pathologists' areas with R2=0.96. The proposed methodology enables automated image analysis to measure spatial relationships of cells stained in an IHC-multiplex assay. Our proof-of-concept results show an automated algorithm can be trained to reproduce the expert assessment and provide quantitative readouts that potentially support a cutoff determination in hypothesis testing related to CAF-targeting-therapy decisions.

  18. Datamining the NOAO NVO Portal: Automated Image Classification

    Science.gov (United States)

    Vaswani, Pooja; Miller, C. J.; Barg, I.; Smith, R. C.

    2006-12-01

    Image metadata describes the properties of an image and can be used for classification, e.g., galactic, extra-galactic, solar system, standard star, among others. We are developing a data mining application to automate such a classification process based on supervised learning using decision trees. We are applying this application to the NOAO NVO Portal (www.nvo.noao.edu). The core concepts of Quinlan's C4.5 decision tree induction algorithm are used to train, build a decision tree, and generate classification rules. These rules are then used to classify previously unseen image metadata. We utilize a collection of decision trees instead of a single classifier and average the classification probabilities. The concept of ``Bagging'' was used to create the collection of classifiers. The classification algorithm also facilitates the addition of weights to the probability estimate of the classes when prior knowledge of the class distribution is known.
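A small sketch of the classification scheme this record describes, with scikit-learn's entropy-criterion CART trees standing in for C4.5 and invented metadata features; class probabilities are averaged over the bagged collection, as in the text.

```python
# Bagged decision trees with averaged class probabilities (toy metadata features).
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(3)
# Invented "image metadata": e.g. exposure time, filter id, airmass, source density.
X = rng.normal(size=(1000, 4))
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)       # 0 = galactic, 1 = extra-galactic (toy)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = BaggingClassifier(DecisionTreeClassifier(criterion="entropy"),
                        n_estimators=25, random_state=0).fit(X_tr, y_tr)
proba = clf.predict_proba(X_te)                     # averaged over the tree collection
print("first record class probabilities:", proba[0])
print("held-out accuracy:", round(clf.score(X_te, y_te), 3))
```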

  19. Ethics in education supervision

    Directory of Open Access Journals (Sweden)

    Fatma ÖZMEN

    2008-06-01

Full Text Available Supervision in education plays a crucial role in attaining educational goals. In addition to determining the present situation, it has a theoretical and practical function regarding the actions to be taken in general, and the achievement of teacher development in particular, to meet educational goals in the most effective way. When education supervisors act ethically while carrying out this vital mission, they build trust and enhance collaboration and sharing, and thus contribute to organizational effectiveness. Ethics is an essential component of educational supervision, yet its application can be vague, as it depends on conditions, persons, and situations. It is therefore difficult to develop ethical standards in institutions. This study aims to clarify the concept of ethics, to emphasize its importance, and to make recommendations for more effective supervision from the standpoint of ethics, based on a literature review, research results, and sample cases reported by teachers and supervisors.

  20. Supervision af psykoterapi via Skype

    DEFF Research Database (Denmark)

    Jacobsen, Claus Haugaard; Grünbaum, Liselotte

    2011-01-01

The article accounts for the situations in which distance supervision may be necessary. Skype™ is introduced as a way to make audiovisual distance supervision easily accessible. The literature on empirical studies of clinical supervision mediated by video is reviewed. Furthermore, the authors' own...... clinical experience of Skype™ in supervision, mainly of psychoanalytic child psychotherapy, is presented and reflected upon. Finally, the reluctance of the Danish Board for Psychologists to recognize audiovisual distance supervision as part of the required training is discussed. It is concluded...

  1. Resistance to group clinical supervision

    DEFF Research Database (Denmark)

    Buus, Niels; Delgado, Cynthia; Traynor, Michael

    2018-01-01

This present study is a report of an interview study exploring personal views on participating in group clinical supervision among mental health nursing staff members who do not participate in supervision. There is a paucity of empirical research on resistance to supervision, which has traditionally been theorized as a supervisee's maladaptive coping with anxiety in the supervision process. The aim of the present study was to examine resistance to group clinical supervision by interviewing nurses who did not participate in supervision. In 2015, we conducted semistructured interviews with 24...... between group members. Many informants perceived group clinical supervision as an unacceptable intrusion, which could indicate a need for developing more acceptable types of post-registration clinical education and reflective practice for this group.

  2. Extracting PICO Sentences from Clinical Trial Reports using Supervised Distant Supervision.

    Science.gov (United States)

    Wallace, Byron C; Kuiper, Joël; Sharma, Aakash; Zhu, Mingxi Brian; Marshall, Iain J

    2016-01-01

Systematic reviews underpin Evidence Based Medicine (EBM) by addressing precise clinical questions via comprehensive synthesis of all relevant published evidence. Authors of systematic reviews typically define a Population/Problem, Intervention, Comparator, and Outcome (the PICO criteria) of interest, and then retrieve, appraise and synthesize results from all reports of clinical trials that meet these criteria. Identifying PICO elements in the full-texts of trial reports is thus a critical yet time-consuming step in the systematic review process. We seek to expedite evidence synthesis by developing machine learning models to automatically extract sentences from articles relevant to PICO elements. Collecting a large corpus of training data for this task would be prohibitively expensive. Therefore, we derive distant supervision (DS) with which to train models using previously conducted reviews. DS entails heuristically deriving 'soft' labels from an available structured resource. However, we have access only to unstructured, free-text summaries of PICO elements for corresponding articles; we must derive from these the desired sentence-level annotations. To this end, we propose a novel method - supervised distant supervision (SDS) - that uses a small amount of direct supervision to better exploit a large corpus of distantly labeled instances by learning to pseudo-annotate articles using the available DS. We show that this approach tends to outperform existing methods with respect to automated PICO extraction.
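To make the distant-supervision idea concrete, here is a deliberately tiny sketch (not the paper's SDS model): sentences receive noisy "Population" pseudo-labels from TF-IDF similarity to a review's free-text summary, and a classifier is then fit on those pseudo-labels. All texts, the threshold, and the feature choice are invented.

```python
# Simplified distant supervision: pseudo-label sentences by similarity to a
# free-text PICO summary, then train on the noisy labels.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics.pairwise import cosine_similarity

population_summary = "adults with type 2 diabetes on metformin"
sentences = [
    "We enrolled 240 adults with type 2 diabetes receiving metformin.",
    "The primary outcome was HbA1c change at 26 weeks.",
    "Patients were randomized to intervention or placebo arms.",
    "Baseline characteristics were similar between groups.",
]

vec = TfidfVectorizer().fit(sentences + [population_summary])
S = vec.transform(sentences)
sim = cosine_similarity(S, vec.transform([population_summary])).ravel()

# Distant labels: call a sentence "Population" if it resembles the summary
# (the 0.2 threshold is purely illustrative).
labels = (sim > 0.2).astype(int)
clf = LogisticRegression().fit(S, labels)
print("pseudo-labels:", labels.tolist(), "predictions:", clf.predict(S).tolist())
```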

  3. Towards a Framework for Testing Drivers' Interaction with Partially Automated Driving

    NARCIS (Netherlands)

    van den Beukel, Arie Paul; van der Voort, Mascha C.; Eger, Arthur O.

    2015-01-01

    Partially automated driving takes away driving control from the driver in situations which allow complete automation, but leaves final responsibility for safe driving at the human operator. Accordingly, the driver's role changes to supervision, and - occasionally - intervention. For testing required

  4. ScreenClust: Advanced statistical software for supervised and unsupervised high resolution melting (HRM) analysis.

    Science.gov (United States)

    Reja, Valin; Kwok, Alister; Stone, Glenn; Yang, Linsong; Missel, Andreas; Menzel, Christoph; Bassam, Brant

    2010-04-01

High resolution melting (HRM) is an emerging new method for interrogating and characterizing DNA samples. An important aspect of this technology is data analysis. Traditional HRM curves can be difficult to interpret and the method has been criticized for lack of statistical interrogation and arbitrary interpretation of results. Here we report the basic principles and first applications of a new statistical approach to HRM analysis addressing these concerns. Our method allows automated genotyping of unknown samples coupled with formal statistical information on the likelihood that an unknown sample is of a known genotype (by discriminant analysis or "supervised learning"). It can also determine the assortment of alleles present (by cluster analysis or "unsupervised learning") without a priori knowledge of the genotypes present. The new algorithms provide highly sensitive and specific auto-calling of genotypes from HRM data in both supervised and unsupervised analysis modes. The method is based on pure statistical interrogation of the data set with a high degree of standardization. The hypothesis-free unsupervised mode offers various possibilities for de novo HRM applications such as mutation discovery. Copyright 2010. Published by Elsevier Inc.

  5. A Semi-Supervised Learning Approach to Enhance Health Care Community-Based Question Answering: A Case Study in Alcoholism.

    Science.gov (United States)

    Wongchaisuwat, Papis; Klabjan, Diego; Jonnalagadda, Siddhartha Reddy

    2016-08-02

Community-based question answering (CQA) sites play an important role in addressing health information needs. However, a significant number of posted questions remain unanswered. Automatically answering the posted questions can provide a useful source of information for Web-based health communities. In this study, we developed an algorithm to automatically answer health-related questions based on past questions and answers (QA). We also aimed to understand information embedded within Web-based health content that are good features in identifying valid answers. Our proposed algorithm uses information retrieval techniques to identify candidate answers from resolved QA. To rank these candidates, we implemented a semi-supervised learning algorithm that extracts the best answer to a question. We assessed this approach on a curated corpus from Yahoo! Answers and compared against a rule-based string similarity baseline. On our dataset, the semi-supervised learning algorithm has an accuracy of 86.2%. Unified medical language system-based (health related) features used in the model enhance the algorithm's performance by approximately 8%. A reasonably high rate of accuracy is obtained given that the data are considerably noisy. Important features distinguishing a valid answer from an invalid answer include text length, number of stop words contained in a test question, a distance between the test question and other questions in the corpus, and a number of overlapping health-related terms between questions. Overall, our automated QA system based on historical QA pairs is shown to be effective according to the dataset in this case study. It is developed for general use in the health care domain, which can also be applied to other CQA sites.
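The following is a generic self-training loop of the kind covered by "semi-supervised learning" here, not the authors' exact ranking model: a classifier trained on a few labeled QA pairs repeatedly absorbs its most confident predictions on the unlabeled pairs. The four features echo the ones the study highlights, but the data and thresholds are synthetic.

```python
# Self-training sketch: iteratively pseudo-label the most confident QA pairs.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
# Toy features per QA pair: answer length, stop-word count, question distance,
# and overlapping health-term count (placeholders for the features named above).
X = rng.normal(size=(500, 4))
true_y = (X[:, 0] - X[:, 2] + 0.8 * X[:, 3] > 0).astype(int)
labeled = np.zeros(500, dtype=bool)
labeled[:20] = True                                  # a small labeled seed set

y = np.where(labeled, true_y, -1)
for _ in range(10):
    clf = LogisticRegression().fit(X[labeled], y[labeled])
    proba = clf.predict_proba(X[~labeled])
    conf = proba.max(axis=1)
    take = np.where(~labeled)[0][conf > 0.95]        # only very confident pairs
    if take.size == 0:
        break
    y[take] = clf.predict(X[take])
    labeled[take] = True

print("accuracy on all pairs:", (clf.predict(X) == true_y).mean())
```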

  6. Supervised Transfer Sparse Coding

    KAUST Repository

    Al-Shedivat, Maruan

    2014-07-27

A combination of the sparse coding and transfer learning techniques was shown to be accurate and robust in classification tasks where training and testing objects have a shared feature space but are sampled from different underlying distributions, i.e., belong to different domains. The key assumption in such case is that in spite of the domain disparity, samples from different domains share some common hidden factors. Previous methods often assumed that all the objects in the target domain are unlabeled, and thus the training set solely comprised objects from the source domain. However, in real world applications, the target domain often has some labeled objects, or one can always manually label a small number of them. In this paper, we explore such possibility and show how a small number of labeled data in the target domain can significantly leverage classification accuracy of the state-of-the-art transfer sparse coding methods. We further propose a unified framework named supervised transfer sparse coding (STSC) which simultaneously optimizes sparse representation, domain transfer and classification. Experimental results on three applications demonstrate that a little manual labeling and then learning the model in a supervised fashion can significantly improve classification accuracy.
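A drastically simplified sketch of the pipeline's flavor: STSC optimizes coding, domain transfer, and classification jointly, whereas below the stages are merely chained. A dictionary is learned over both domains, everything is encoded sparsely, and the few target labels join the source labels in the supervised step. Dimensions, shifts, and hyperparameters are arbitrary.

```python
# Chained (not joint) stand-in for supervised transfer sparse coding.
import numpy as np
from sklearn.decomposition import DictionaryLearning
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
X_source = rng.normal(size=(200, 30))
y_source = (X_source[:, 0] > 0).astype(int)
X_target = rng.normal(0.5, 1.0, size=(20, 30))       # small, shifted labeled target set
y_target = (X_target[:, 0] > 0.5).astype(int)

# Shared dictionary across domains (the "common hidden factors" assumption).
dico = DictionaryLearning(n_components=15, alpha=1.0, max_iter=20, random_state=0)
dico.fit(np.vstack([X_source, X_target]))
Z_src, Z_tgt = dico.transform(X_source), dico.transform(X_target)

# Supervised step: the few target labels join the source labels.
clf = LogisticRegression(max_iter=1000).fit(np.vstack([Z_src, Z_tgt]),
                                            np.hstack([y_source, y_target]))
print("target training accuracy:", clf.score(Z_tgt, y_target))
```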

  7. SU-E-I-81: Assessment of CT Radiation Dose and Image Quality for An Automated Tube Potential Selection Algorithm Using Adult Anthropomorphic and ACR Phantoms

    Energy Technology Data Exchange (ETDEWEB)

    Mahmood, U; Erdi, Y; Wang, W [Memorial Sloan Kettering Cancer Center, NY, NY (United States)

    2014-06-01

Purpose: To assess the impact of General Electric's (GE) automated tube potential algorithm, kV assist (kVa), on radiation dose and image quality, with an emphasis on optimizing protocols based on noise texture. Methods: Radiation dose was assessed by inserting optically stimulated luminescence dosimeters (OSLs) throughout the body of an adult anthropomorphic phantom (CIRS). The baseline protocol was: 120 kVp, Auto mA (180 to 380 mA), noise index (NI) = 14, adaptive statistical iterative reconstruction (ASiR) of 20%, 0.8 s rotation time. Image quality was evaluated by calculating the contrast-to-noise ratio (CNR) and noise power spectrum (NPS) from the ACR CT accreditation phantom. CNRs were calculated according to the steps described in the ACR CT phantom testing document. NPS was determined by taking the 3D FFT of the uniformity section of the ACR phantom. NPS and CNR were evaluated with and without kVa and for all available ASiR settings, ranging from 0 to 100%. Each NPS was also evaluated for its peak frequency difference (PFD) with respect to the baseline protocol. Results: The CNR for the adult male was found to decrease from CNR = 0.912 ± 0.045 for the baseline protocol without kVa to CNR = 0.756 ± 0.049 with kVa activated. When compared against the baseline protocol, the PFD at ASiR of 40% yielded a decrease in noise magnitude, as realized by the increase in CNR to 0.903 ± 0.023. The difference in the central liver dose with and without kVa was found to be 0.07%. Conclusion: Dose reduction was insignificant in the adult phantom. As determined by NPS analysis, ASiR of 40% produced images with noise texture similar to the baseline protocol. However, the CNR at ASiR of 40% with kVa fails to meet the current ACR CNR passing requirement of 1.0.

  8. An Automated Treatment Plan Quality Control Tool for Intensity-Modulated Radiation Therapy Using a Voxel-Weighting Factor-Based Re-Optimization Algorithm.

    Directory of Open Access Journals (Sweden)

    Ting Song

Full Text Available Intensity-modulated radiation therapy (IMRT) currently plays an important role in radiotherapy, but its treatment plan quality can vary significantly among institutions and planners. Treatment plan quality control (QC) is a necessary component for individual clinics to ensure that patients receive treatments with high therapeutic gain ratios. The voxel-weighting factor-based plan re-optimization mechanism has been proven able to explore a larger Pareto surface (solution domain) and therefore increase the possibility of finding an optimal treatment plan. In this study, we incorporated additional modules into an in-house developed voxel-weighting factor-based re-optimization algorithm, which was enhanced as a highly automated and accurate IMRT plan QC tool (TPS-QC tool). After importing an under-assessment plan, the TPS-QC tool was able to generate a QC report within 2 minutes. This QC report contains the plan quality determination as well as information supporting the determination. Finally, the IMRT plan quality can be controlled by approving quality-passed plans and replacing quality-failed plans using the TPS-QC tool. The feasibility and accuracy of the proposed TPS-QC tool were evaluated using 25 clinically approved cervical cancer patient IMRT plans and 5 manually created poor-quality IMRT plans. The results showed high consistency between the QC report quality determinations and the actual plan quality. In the 25 clinically approved cases that the TPS-QC tool identified as passed, a greater difference could be observed for dosimetric endpoints for organs at risk (OAR) than for the planning target volume (PTV), implying that better dose sparing could be achieved in OAR than in PTV. In addition, the dose-volume histogram (DVH) curves of the TPS-QC tool re-optimized plans satisfied the dosimetric criteria more frequently than did the under-assessment plans. In addition, the criteria for unsatisfied dosimetric endpoints in the 5 poor-quality plans could

  9. Advanced Music Therapy Supervision Training

    DEFF Research Database (Denmark)

    Pedersen, Inge Nygaard

    2009-01-01

The presentation will illustrate training models in supervision for experienced music therapists where transference/counter-transference issues are in focus. Musical, verbal and body-related tools will be illustrated from supervision practice by the presenters. A possibility to experience small supervision training excerpts live in the workshop will be offered. The workshop will include demonstrating a variety of supervision methods and techniques used in a) post-graduate music therapy training programs and b) a variety of work contexts such as psychiatry and somatic music psychotherapy.

  10. Towards harmonized seismic analysis across Europe using supervised machine learning approaches

    Science.gov (United States)

    Zaccarelli, Riccardo; Bindi, Dino; Cotton, Fabrice; Strollo, Angelo

    2017-04-01

In the framework of the Thematic Core Services for Seismology of EPOS-IP (European Plate Observing System-Implementation Phase), a service for disseminating a regionalized logic-tree of ground motion models for Europe is under development. While for the Mediterranean area the large availability of strong-motion data, qualified and disseminated through the Engineering Strong Motion database (ESM-EPOS), supports the development of both selection criteria and ground motion models, for the low-to-moderate seismicity regions of continental Europe the development of ad-hoc models using weak-motion recordings of moderate earthquakes is unavoidable. The aim of this work is to present a platform for creating application-oriented earthquake databases by retrieving information from EIDA (European Integrated Data Archive) and applying supervised learning models for earthquake record selection and processing, suitable for any specific application of interest. Supervised learning, i.e. the task of inferring a function from labelled training data, has been used extensively in fields such as spam detection, speech and image recognition, and pattern recognition in general. The suitability of such models for detecting anomalies and performing semi- to fully-automated filtering of large waveform datasets, easing the effort of (or replacing) human expertise, is therefore straightforward. Because supervised learning algorithms are capable of learning from a relatively small training set to predict and categorize unseen data, their advantage when processing large amounts of data is crucial. Moreover, their intrinsic ability to make data-driven predictions makes them suitable (and preferable) in those cases where explicit algorithms for detection might be unfeasible or too heuristic. In this study, we consider relatively simple statistical classifiers (e.g., Naive Bayes, Logistic Regression, Random Forest, SVMs) where labels are assigned to waveform data based on "recognized classes" needed for our use case
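As an illustration of the statistical-classifier step (not the EPOS platform itself), the sketch below computes a few generic waveform descriptors and cross-validates a Random Forest on synthetic "usable" versus "noise-dominated" records. The features, labels, and signal model are placeholders for the expert-assigned classes mentioned above.

```python
# Toy waveform record selection with a Random Forest on simple descriptors.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(6)

def features(trace):
    """Generic descriptors: peak amplitude, RMS, and a crude pre-event SNR."""
    peak = np.abs(trace).max()
    rms = np.sqrt((trace ** 2).mean())
    snr = peak / (np.abs(trace[:100]).mean() + 1e-9)   # pre-event window as "noise"
    return [peak, rms, snr]

onset = np.r_[np.zeros(500), 5 * np.exp(-np.arange(500) / 80.0)]
good = [rng.normal(0, 0.1, 1000) + onset for _ in range(100)]
bad = [rng.normal(0, 1.0, 1000) for _ in range(100)]   # noise-dominated records

X = np.array([features(t) for t in good + bad])
y = np.array([1] * 100 + [0] * 100)                    # 1 = usable, 0 = reject
clf = RandomForestClassifier(n_estimators=100, random_state=0)
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```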

  11. Public Supervision over Private Relationships : Towards European Supervision Private Law?

    NARCIS (Netherlands)

    Cherednychenko, O.O.

    2014-01-01

    The rise of public supervision over private relationships in many areas of private law has led to the development of what, in the author’s view, could be called ‘European supervision private law’. This emerging body of law forms part of European regulatory private law and is made up of

  12. Supervising PETE Candidates Using the Situational Supervision Model

    Science.gov (United States)

    Levy, Linda S.; Johnson, Lynn V.

    2012-01-01

    Physical education teacher candidates (PETCs) often, as part of their curricular requirements, engage in early field experiences that prepare them for student teaching. Matching the PETC's developmental level with the mentor's supervision style enhances this experience. The situational supervision model, based on the situational leadership model,…

  13. Validation of an active shape model-based semi-automated segmentation algorithm for the analysis of thigh muscle and adipose tissue cross-sectional areas.

    Science.gov (United States)

    Kemnitz, Jana; Eckstein, Felix; Culvenor, Adam G; Ruhdorfer, Anja; Dannhauer, Torben; Ring-Dimitriou, Susanne; Sänger, Alexandra M; Wirth, Wolfgang

    2017-04-28

    To validate a semi-automated method for thigh muscle and adipose tissue cross-sectional area (CSA) segmentation from MRI. An active shape model (ASM) was trained using 113 MRI CSAs from the Osteoarthritis Initiative (OAI) and combined with an active contour model and thresholding-based post-processing steps. This method was applied to 20 other MRIs from the OAI and to baseline and follow-up MRIs from a 12-week lower-limb strengthening or endurance training intervention (n = 35 females). The agreement of semi-automated vs. previous manual segmentation was assessed using the Dice similarity coefficient and Bland-Altman analyses. Longitudinal changes observed in the training intervention were compared between semi-automated and manual segmentations. High agreement was observed between manual and semi-automated segmentations for subcutaneous fat, quadriceps and hamstring CSAs. With strength training, both the semi-automated and manual segmentation method detected a significant reduction in adipose tissue CSA and a significant gain in quadriceps, hamstring and adductor CSAs. With endurance training, a significant reduction in adipose tissue CSAs was observed with both methods. The semi-automated approach showed high agreement with manual segmentation of thigh muscle and adipose tissue CSAs and showed longitudinal training effects similar to that observed using manual segmentation.
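The Dice similarity coefficient used for the agreement assessment above is simple to state; here is a few-line sketch with toy masks (the shapes and offsets are invented):

```python
# Dice similarity between a "manual" and a "semi-automated" binary mask.
import numpy as np

def dice(a, b):
    """Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

manual = np.zeros((128, 128), dtype=bool)
manual[30:90, 40:100] = True              # toy "manual" quadriceps CSA
auto = np.zeros((128, 128), dtype=bool)
auto[33:92, 42:101] = True                # slightly shifted "semi-automated" CSA

print(f"Dice = {dice(manual, auto):.3f}")
```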

  14. Semi-supervised Text Classification Using RBF Networks

    Science.gov (United States)

    Jiang, Eric P.

    Semi-supervised text classification has numerous applications and is particularly applicable to the problems where large quantities of unlabeled data are readily available while only a small number of labeled training samples are accessible. The paper proposes a semi-supervised classifier that integrates a clustering based Expectation Maximization (EM) algorithm into radial basis function (RBF) neural networks and can learn for classification from a very small number of labeled training samples and a large pool of unlabeled data effectively. A generalized centroid clustering algorithm is also investigated in this work to balance predictive values between labeled and unlabeled training data and to improve classification accuracy. Experimental results with three popular text classification corpora show that the proper use of additional unlabeled data in this semi-supervised approach can reduce classification errors by up to 26%.
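A generic sketch of the labeled-plus-unlabeled EM idea (the paper embeds EM in RBF networks on text corpora; here class-conditional Gaussians on toy 2-D data stand in): labeled points stay clamped while unlabeled points enter through soft responsibilities.

```python
# Semi-supervised EM stand-in: 10 clamped labels, 390 unlabeled points.
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(7)
X = np.vstack([rng.normal(0, 1, (200, 2)), rng.normal(3, 1, (200, 2))])
truth = np.r_[np.zeros(200, int), np.ones(200, int)]
y = np.full(400, -1)
y[:5], y[200:205] = 0, 1                  # only 10 labeled samples in total

means = np.array([X[y == k].mean(axis=0) for k in (0, 1)])
for _ in range(20):
    # E-step: posterior responsibilities under unit-covariance Gaussians.
    dens = np.column_stack(
        [multivariate_normal(means[k], np.eye(2)).pdf(X) for k in (0, 1)]
    )
    resp = dens / dens.sum(axis=1, keepdims=True)
    resp[y == 0] = [1.0, 0.0]             # clamp the labeled points
    resp[y == 1] = [0.0, 1.0]
    # M-step: responsibility-weighted class means.
    means = (resp.T @ X) / resp.sum(axis=0)[:, None]

pred = resp.argmax(axis=1)
print("accuracy with 10 labels + EM:", (pred == truth).mean())
```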

  15. Supervised hub-detection for brain connectivity

    Science.gov (United States)

    Kasenburg, Niklas; Liptrot, Matthew; Reislev, Nina Linde; Garde, Ellen; Nielsen, Mads; Feragen, Aasa

    2016-03-01

    A structural brain network consists of physical connections between brain regions. Brain network analysis aims to find features associated with a parameter of interest through supervised prediction models such as regression. Unsupervised preprocessing steps like clustering are often applied, but can smooth discriminative signals in the population, degrading predictive performance. We present a novel hub-detection optimized for supervised learning that both clusters network nodes based on population level variation in connectivity and also takes the learning problem into account. The found hubs are a low-dimensional representation of the network and are chosen based on predictive performance as features for a linear regression. We apply our method to the problem of finding age-related changes in structural connectivity. We compare our supervised hub-detection (SHD) to an unsupervised hub-detection and a linear regression using the original network connections as features. The results show that the SHD is able to retain regression performance, while still finding hubs that represent the underlying variation in the population. Although here we applied the SHD to brain networks, it can be applied to any network regression problem. Further development of the presented algorithm will be the extension to other predictive models such as classification or non-linear regression.

  16. Design of Supervision Systems: Theory and Practice

    Science.gov (United States)

    Bouamama, Belkacem Ould

    2008-06-01

    The term "supervision" means a set of tools and methods used to operate an industrial process in normal situation as well as in the presence of failures or undesired disturbances. The activities concerned with the supervision are the Fault Detection and Isolation (FDI) in the diagnosis level, and the Fault Tolerant Control (FTC) through necessary reconfiguration, whenever possible, in the fault accommodation level. The final goal of a supervision platform is to provide the operator a set of tools that helps to safely run the process and to take appropriate decision in the presence of faults. Different approaches to the design of such decision making tools have been developed in the past twenty years, depending on the kind of knowledge (structural, statistical, fuzzy, expert rules, functional, behavioural…) used to describe the plant operation. How to elaborate models for FDI design, how to develop the FDI algorithm, how to avoid false alarms, how to improve the diagnosability of the faults for alarm management design, how to continue to control the process in failure mode, what are the limits of each method,…?. Such are the main purposes concerned by the presented plenary session from an industrial and theoretical point of view.

  17. FIGENIX: Intelligent automation of genomic annotation: expertise integration in a new software platform

    Directory of Open Access Journals (Sweden)

    Pontarotti Pierre

    2005-08-01

Full Text Available Abstract Background Two of the main objectives of the genomic and post-genomic era are to structurally and functionally annotate genomes, which consists of detecting genes' positions and structures and inferring their functions (as well as other features of genomes). Structural and functional annotation both require the complex chaining of numerous different software tools, algorithms and methods under the supervision of a biologist. The automation of these pipelines is necessary to manage the huge amounts of data released by sequencing projects. Several pipelines already automate some of this complex chaining but still require a substantial contribution from biologists for supervising and controlling the results at various steps. Results Here we propose an innovative automated platform, FIGENIX, which includes an expert system capable of substituting for human expertise at several key steps. FIGENIX currently automates complex pipelines of structural and functional annotation under the supervision of the expert system (which allows, for example, making key decisions, checking intermediate results or refining the dataset). The quality of the results produced by FIGENIX is comparable to those obtained by expert biologists, with a drastic gain in terms of time costs and avoidance of errors due to the human manipulation of data. Conclusion The core engine and expert system of the FIGENIX platform currently handle complex annotation processes of broad interest for the genomic community. They could be easily adapted to new or more specialized pipelines, such as the annotation of miRNAs, the classification of complex multigenic families, the annotation of regulatory elements, and other genomic features of interest.

  18. African Journal of Science and Technology (AJST) SUPERVISED ...

    African Journals Online (AJOL)

    NORBERT OPIYO AKECH

ABSTRACT: This paper proposes a new method for supervised color image classification by the Kohonen map, based on LVQ algorithms. The sample of observations, constituted by image pixels with 3 color components in the color space, is at first projected into a Kohonen map. This map is represented in the ...

  19. Supervised color image segmentation, using LVQ networks and K ...

    African Journals Online (AJOL)

    This paper proposes a new method for supervised color image classification by the. Kohonen map, based on LVQ algorithms. The sample of observations, constituted by image pixels with 3 color components in the color space, is at first projected into a Kohonen map. This map is represented in the 3-dimensional space, ...

  20. Semi-supervised prediction of gene regulatory networks using ...

    Indian Academy of Sciences (India)

    2015-09-28

Sep 28, 2015 ... [Patel N and Wang JTL 2015 Semi-supervised prediction of gene regulatory networks using machine learning algorithms. J. Biosci. 40 731–740]. DOI 10.1007/s12038-015-9558-9. 1. Introduction. 1.1 Background. Using gene expression data to infer gene regulatory networks (GRNs) is a key approach to ...

  1. Supervision Duty of School Principals

    Directory of Open Access Journals (Sweden)

    Kürşat YILMAZ

    2009-04-01

Full Text Available Supervision by school administrators is becoming more and more important. The change in the roles of school administrators has a great effect on that increase. At present, school administrators are considered not merely technical directors but instructional leaders. This has increased the importance of school administrators' expected supervision acts. In this respect, the aim of this study is to make a conceptual analysis of school administrators' supervision duties. For this reason, a literature review related to supervision and contemporary supervision approaches was done, and the official documents concerning supervision were examined. As a result, it can be said that school administrators' supervision duties have become very important, and these duties must certainly be carried out by school administrators.

  2. Learning Dynamics in Doctoral Supervision

    DEFF Research Database (Denmark)

    Kobayashi, Sofie

This doctoral research explores doctoral supervision within life science research in a Danish university. From one angle it investigates doctoral students' experiences with strengthening the relationship with their supervisors through a structured meeting with the supervisor, prepared as part of...... From another angle it investigates learning opportunities in supervision with multiple supervisors. This was investigated through observations and recording of supervision, and subsequent analysis of transcripts. The analyses used different perspectives on learning: learning as participation, positioning theory and variation theory. The research illuminates how learning opportunities are created in the interaction through the scientific discussions. It also shows how multiple supervisors can contribute to supervision by providing new perspectives and opinions that have a potential for creating new understandings. The combination of different theoretical frameworks, from the perspective of learning as individual acquisition and a sociocultural perspective on learning, contributed to a nuanced illustration of the otherwise implicit practices of supervision.

  3. Group supervision for general practitioners

    DEFF Research Database (Denmark)

    Galina Nielsen, Helena; Sofie Davidsen, Annette; Dalsted, Rikke

    2013-01-01

AIM: Group supervision is a sparsely researched method for professional development in general practice. The aim of this study was to explore general practitioners' (GPs') experiences of the benefits of group supervision for improving the treatment of mental disorders. METHODS: One long-established supervision group was studied closely for six months by observing the group sessions, and by interviewing GPs and their supervisors, individually and collectively. The interviews were recorded digitally and transcribed verbatim. The data were analysed using systematic text condensation. RESULTS: The GPs found...... to the communication with local community psychiatry centres. Furthermore, the GPs experienced that supervision had a positive 'spill-over effect' on everyday consultations, and that the supervision group became a forum for coping with other difficulties in their professional life as well. Trust and continuity were......

  4. Indirect Instruments of Prudential Supervision

    Directory of Open Access Journals (Sweden)

    Nicolae Dardac

    2006-10-01

Full Text Available The qualifying of supervision as "prudential" is used to differentiate it from other forms of supervision, which regard issues related to banking consumer protection. In order to achieve its goal, prudential supervision needs relevant information, provided mostly by the credit institutions themselves. Hence, the existence of a reporting system is essential, a system capable of ensuring, on the one hand, the homogeneity of the provided data and, on the other hand, its efficient processing. One indirect instrument used more and more in banking supervision during the last decade is the credit register. The first system is directly associated either with the central bank or with the supervision authority, and is, in most cases, managed by the latter. The second system, the so-called credit bureau, is mostly operated by private banks.

  5. An Automated Artificial Neural Network System for Land Use/Land Cover Classification from Landsat TM Imagery

    Directory of Open Access Journals (Sweden)

    Siamak Khorram

    2009-07-01

Full Text Available This paper focuses on an automated ANN classification system consisting of two modules: an unsupervised Kohonen Self-Organizing Map (SOM) neural network module, and a supervised Multilayer Perceptron (MLP) neural network module using the Backpropagation (BP) training algorithm. Two training algorithms were provided for the SOM network module: the standard SOM, and a refined SOM learning algorithm which incorporated Simulated Annealing (SA). The ability of our automated ANN system to perform Land-Use/Land-Cover (LU/LC) classifications of a Landsat Thematic Mapper (TM) image was tested using a supervised MLP network, an unsupervised SOM network, and a combination of the SOM with the SA network. Our case study demonstrated that the ANN classification system fulfilled the tasks of network training pattern creation, network training, and network generalization. The results from the three networks were assessed via a comparison with reference data derived from the high spatial resolution Digital Colour Infrared (CIR) Digital Orthophoto Quarter Quad (DOQQ) data. The supervised MLP network obtained the highest classification accuracy, as compared to the two unsupervised SOM networks. Additionally, the classification performance of the refined SOM network was found to be significantly better than that of the standard SOM network, essentially due to the incorporation of SA and its scheduled cooling scheme. It is concluded that our automated ANN classification system can be utilized for LU/LC applications and will be particularly useful when traditional statistical classification methods are not suitable due to a statistically abnormal distribution of the input data.

  6. Automics: an integrated platform for NMR-based metabonomics spectral processing and data analysis

    Directory of Open Access Journals (Sweden)

    Qu Lijia

    2009-03-01

Full Text Available Abstract Background Spectral processing and post-experimental data analysis are the major tasks in NMR-based metabonomics studies. While there are commercial and free licensed software tools available to assist these tasks, researchers usually have to use multiple software packages for their studies because software packages generally focus on specific tasks. It would be beneficial to have a highly integrated platform, in which these tasks can be completed within one package. Moreover, with open source architecture, newly proposed algorithms or methods for spectral processing and data analysis can be implemented much more easily and accessed freely by the public. Results In this paper, we report an open source software tool, Automics, which is specifically designed for NMR-based metabonomics studies. Automics is a highly integrated platform that provides functions covering almost all the stages of NMR-based metabonomics studies. Automics provides high throughput automatic modules with most recently proposed algorithms and powerful manual modules for 1D NMR spectral processing. In addition to spectral processing functions, powerful features for data organization, data pre-processing, and data analysis have been implemented. Nine statistical methods can be applied to analyses including: feature selection (Fisher's criterion), data reduction (PCA, LDA, ULDA), unsupervised clustering (K-Means) and supervised regression and classification (PLS/PLS-DA, KNN, SIMCA, SVM). Moreover, Automics has a user-friendly graphical interface for visualizing NMR spectra and data analysis results. The functional ability of Automics is demonstrated with an analysis of a type 2 diabetes metabolic profile. Conclusion Automics facilitates high throughput 1D NMR spectral processing and high dimensional data analysis for NMR-based metabonomics applications. Using Automics, users can complete spectral processing and data analysis within one software package in most cases

  7. Verification of automata-based programs (supervised by Anatoly Shalyto)

    OpenAIRE

    Evgeny, Kurbatsky

    2008-01-01

This paper describes a verification method for automata-based programs [1] based on symbolic model checking algorithms [2]. The author attempts to develop a verification method that can automate the process of verification and be useful for people unacquainted with model checking algorithms or tools.

  8. A Semi-Supervised Learning Approach to Enhance Health Care Community–Based Question Answering: A Case Study in Alcoholism

    Science.gov (United States)

    Klabjan, Diego; Jonnalagadda, Siddhartha Reddy

    2016-01-01

Background Community-based question answering (CQA) sites play an important role in addressing health information needs. However, a significant number of posted questions remain unanswered. Automatically answering the posted questions can provide a useful source of information for Web-based health communities. Objective In this study, we developed an algorithm to automatically answer health-related questions based on past questions and answers (QA). We also aimed to understand information embedded within Web-based health content that are good features in identifying valid answers. Methods Our proposed algorithm uses information retrieval techniques to identify candidate answers from resolved QA. To rank these candidates, we implemented a semi-supervised learning algorithm that extracts the best answer to a question. We assessed this approach on a curated corpus from Yahoo! Answers and compared against a rule-based string similarity baseline. Results On our dataset, the semi-supervised learning algorithm has an accuracy of 86.2%. Unified medical language system–based (health related) features used in the model enhance the algorithm’s performance by approximately 8%. A reasonably high rate of accuracy is obtained given that the data are considerably noisy. Important features distinguishing a valid answer from an invalid answer include text length, number of stop words contained in a test question, a distance between the test question and other questions in the corpus, and a number of overlapping health-related terms between questions. Conclusions Overall, our automated QA system based on historical QA pairs is shown to be effective according to the dataset in this case study. It is developed for general use in the health care domain, which can also be applied to other CQA sites. PMID:27485666

  9. Comparison of two automated instruments for Epstein-Barr virus serology in a large adult hospital and implementation of an Epstein-Barr virus nuclear antigen-based testing algorithm.

    Science.gov (United States)

    Al Sidairi, Hilal; Binkhamis, Khalifa; Jackson, Colleen; Roberts, Catherine; Heinstein, Charles; MacDonald, Jimmy; Needle, Robert; Hatchette, Todd F; LeBlanc, Jason J

    2017-11-01

    Serology remains the mainstay for diagnosis of Epstein-Barr virus (EBV) infection. This study compared two automated platforms (BioPlex 2200 and Architect i2000SR) to test three EBV serological markers: viral capsid antigen (VCA) immunoglobulins of class M (IgM), VCA immunoglobulins of class G (IgG) and EBV nuclear antigen-1 (EBNA-1) IgG. Using sera from 65 patients at various stages of EBV disease, BioPlex demonstrated near-perfect agreement for all EBV markers compared to a consensus reference. The agreement for Architect was near-perfect for VCA IgG and EBNA-1 IgG, and substantial for VCA IgM despite five equivocal results. Since the majority of testing in our hospital was from adults with EBNA-1 IgG positive results, post-implementation analysis of an EBNA-based algorithm showed advantages over parallel testing of the three serologic markers. This small verification demonstrated that both automated systems for EBV serology had good performance for all EBV markers, and an EBNA-based testing algorithm is ideal for an adult hospital.

  10. Online supervision at the university

    DEFF Research Database (Denmark)

    Bengtsen, Søren Smedegaard; Jensen, Gry Sandholm

    2015-01-01

and development (Bengtsen & Jensen 2013a; Bengtsen & Jensen 2013b; Jensen & Bengtsen 2014). Through an empirical study of supervision and feedback on student assignments at the university, across face-to-face and online settings, we show firstly that the traditional dichotomy between face-to-face and online...... supervision proves unhelpful when trying to understand how online supervision and feedback is a pedagogical phenomenon in its own right, and irreducible to the face-to-face context. Secondly, we show that not enough attention has been given to the way different digital tools and platforms influence...

  11. Image Segmentation Algorithms Overview

    OpenAIRE

    Yuheng, Song; Hao, Yan

    2017-01-01

The technology of image segmentation is widely used in medical image processing, face recognition, pedestrian detection, etc. The current image segmentation techniques include region-based segmentation, edge detection segmentation, segmentation based on clustering, segmentation based on weakly-supervised learning in CNNs, etc. This paper analyzes and summarizes these image segmentation algorithms, and compares the advantages and disadvantages of the different algorithms. Finally, we make a predi...

  12. Supervised learning of probability distributions by neural networks

    Science.gov (United States)

    Baum, Eric B.; Wilczek, Frank

    1988-01-01

    Supervised learning algorithms for feedforward neural networks are investigated analytically. The back-propagation algorithm described by Werbos (1974), Parker (1985), and Rumelhart et al. (1986) is generalized by redefining the values of the input and output neurons as probabilities. The synaptic weights are then varied to follow gradients in the logarithm of likelihood rather than in the error. This modification is shown to provide a more rigorous theoretical basis for the algorithm and to permit more accurate predictions. A typical application involving a medical-diagnosis expert system is discussed.
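The modification described above can be seen in a single sigmoid unit: treating the output as a probability and following the gradient of the log-likelihood yields the familiar cross-entropy update, which drops the p(1-p) factor that slows squared-error learning on saturated units. A small numpy sketch with synthetic data (the learning rate and iteration count are arbitrary):

```python
# Likelihood-gradient vs squared-error-gradient training of one sigmoid unit.
import numpy as np

rng = np.random.default_rng(9)
X = rng.normal(size=(500, 3))
w_true = np.array([1.5, -2.0, 0.5])
y = (rng.random(500) < 1 / (1 + np.exp(-X @ w_true))).astype(float)

w = np.zeros(3)
for _ in range(2000):
    p = 1 / (1 + np.exp(-X @ w))
    grad_ll = X.T @ (y - p) / len(y)      # ascent direction of mean log-likelihood
    # Squared-error training would instead use the gradient below, whose extra
    # p*(1-p) factor vanishes on saturated units and slows learning:
    # grad_se = X.T @ ((y - p) * p * (1 - p)) / len(y)
    w += 0.5 * grad_ll

print("recovered:", np.round(w, 2), " true:", w_true)
```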

  13. Nondisclosure in psychotherapy group supervision

    DEFF Research Database (Denmark)

    Reichelt, Sissel; Gullestad, Siri Erika; Hansen, Bjørg Røed

    2009-01-01

    The aim of this study was to investigate aspects of nondisclosure in a sample of 55 student therapists, working within a group format of supervision. The study constituted one part of a larger study, with the other, parallel part addressing nondisclosure in supervisors. The participants were...... of students experienced that the groups became more closed throughout the supervision, and blamed their supervisors for inadequate handling of the group process. This is an issue that needs further exploration....

  14. Online co-regularized algorithms

    NARCIS (Netherlands)

    Ruijter, T. de; Tsivtsivadze, E.; Heskes, T.

    2012-01-01

    We propose an online co-regularized learning algorithm for classification and regression tasks. We demonstrate that by sequentially co-regularizing prediction functions on unlabeled data points, our algorithm provides improved performance in comparison to supervised methods on several UCI benchmarks
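A compact sketch of the online co-regularization idea under invented data: two linear predictors on two noisy views take SGD steps, combining the supervised loss on labeled points with a disagreement penalty on unlabeled ones. The views, rates, and label fraction are all illustrative.

```python
# Online co-regularized regression: two views, agreement enforced on unlabeled data.
import numpy as np

rng = np.random.default_rng(10)
n = 1000
z = rng.normal(size=(n, 2))
y = z @ np.array([1.0, -1.0]) + rng.normal(0, 0.1, n)
view1 = z + rng.normal(0, 0.3, (n, 2))           # two noisy views of the same signal
view2 = z + rng.normal(0, 0.3, (n, 2))
labeled = rng.random(n) < 0.05                   # only ~5% of the stream is labeled

w1, w2 = np.zeros(2), np.zeros(2)
eta, lam = 0.05, 0.5
for i in range(n):                               # one sequential pass over the stream
    p1, p2 = view1[i] @ w1, view2[i] @ w2
    if labeled[i]:                               # supervised squared-loss step
        w1 -= eta * (p1 - y[i]) * view1[i]
        w2 -= eta * (p2 - y[i]) * view2[i]
    else:                                        # co-regularization: pull views together
        w1 -= eta * lam * (p1 - p2) * view1[i]
        w2 -= eta * lam * (p2 - p1) * view2[i]

mse = ((0.5 * (view1 @ w1 + view2 @ w2) - y) ** 2).mean()
print("MSE of the averaged predictor:", round(mse, 3))
```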

  15. 20 CFR 656.21 - Supervised recruitment.

    Science.gov (United States)

    2010-04-01

20 CFR Employees' Benefits, § 656.21 Supervised recruitment. (a) Supervised recruitment. Where the Certifying Officer determines it appropriate, post-filing supervised recruitment may be required of the employer for the pending application or...

  16. Educational Supervision Appropriate for Psychiatry Trainee's Needs

    Science.gov (United States)

    Rele, Kiran; Tarrant, C. Jane

    2010-01-01

    Objective: The authors studied the regularity and content of supervision sessions in one of the U.K. postgraduate psychiatric training schemes (Mid-Trent). Methods: A questionnaire sent to psychiatry trainees assessed the timing and duration of supervision, content and protection of supervision time, and overall quality of supervision. The authors…

  17. Automated detection of cell nuclei in pap smear images using morphological reconstruction and clustering.

    Science.gov (United States)

    Plissiti, Marina E; Nikou, Christophoros; Charchanti, Antonia

    2011-03-01

In this paper, we present a fully automated method for cell nuclei detection in Pap smear images. The locations of the candidate nuclei centroids in the image are detected with morphological analysis and they are refined in a second step, which incorporates a priori knowledge about the circumference of each nucleus. The elimination of undesirable artifacts is achieved in two steps: the application of a distance-dependent rule on the resulting centroids, and the application of classification algorithms. In our method, we have examined the performance of an unsupervised (fuzzy C-means) and a supervised (support vector machines) classification technique. For both classification techniques, the refinement step improves the performance of the algorithm. The proposed method was evaluated using 38 cytological images of conventional Pap smears containing 5617 recognized squamous epithelial cells. The results are very promising, even in the case of images with a high degree of cell overlapping.

  18. Twelve tips for supervising research students.

    Science.gov (United States)

    Siddiqui, Zarrin Seema; Jonas-Dwyer, Diana R D

    2012-01-01

    Research supervision is a task that requires a set of abilities and skills. Many academics begin research supervision as novices and develop their abilities and skills through experience over time. We aim to provide advice about research supervision to prospective supervisors. We used critical reflection of our experiences, including feedback received from students under supervision as well as advice from the literature to develop these tips. Twelve tips are presented to assist faculty with research supervision. Research supervision is an important component of many medical academics' work. Beginning supervisors need to understand the dynamics and practicalities of supervision before they embark on this process.

  19. Weakly supervised semantic segmentation using fore-background priors

    Science.gov (United States)

    Han, Zheng; Xiao, Zhitao; Yu, Mingjun

    2017-07-01

    Weakly supervised semantic segmentation is a challenge in the field of computer vision. Most previous works utilize the labels of the whole training set and thereby need to construct a relationship graph over the image labels, which results in expensive computation. In this study, we tackle the problem from a different perspective. We propose a novel semantic segmentation algorithm based on background priors, which avoids the construction of a huge graph over the whole training dataset. Specifically, a random forest classifier is trained using the weakly supervised training data. Then, semantic texton forest (STF) features are extracted from image superpixels. Finally, a CRF-based optimization algorithm is proposed, whose unary potential is derived from the output probability of the random forest classifier and from a robust saliency map serving as background prior. Experiments on the MSRC21 dataset show that the new algorithm outperforms several previously influential weakly supervised segmentation algorithms. Furthermore, the use of an efficient decision forest classifier and parallel computation of the saliency map significantly accelerate the implementation.
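
    A hedged sketch of how the unary potentials described above could be assembled, using scikit-learn's random forest as a stand-in for the paper's classifier; the feature matrix, saliency scores, and the convention that class 0 is background are all illustrative assumptions.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(1)
    features = rng.normal(size=(200, 16))       # stand-in STF descriptors per superpixel
    weak_labels = rng.integers(0, 3, size=200)  # weak (image-level) labels
    saliency = rng.random(200)                  # stand-in saliency per superpixel

    rf = RandomForestClassifier(n_estimators=50, random_state=0)
    rf.fit(features, weak_labels)
    prob = rf.predict_proba(features)           # (n_superpixels, n_classes)

    # Fold the saliency map in as a background prior (class 0 = background).
    n_fg = prob.shape[1] - 1
    prior = np.column_stack([1.0 - saliency] + [saliency / n_fg] * n_fg)
    unary = -np.log(np.clip(prob * prior, 1e-9, None))   # CRF unary potentials
    ```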

  20. Supervised hub-detection for brain connectivity

    DEFF Research Database (Denmark)

    Kasenburg, Niklas; Liptrot, Matthew George; Reislev, Nina Linde

    2016-01-01

    A structural brain network consists of physical connections between brain regions. Brain network analysis aims to find features associated with a parameter of interest through supervised prediction models such as regression. Unsupervised preprocessing steps like clustering are often applied, but ... hubs are a low-dimensional representation of the network and are chosen based on predictive performance as features for a linear regression. We apply our method to the problem of finding age-related changes in structural connectivity. We compare our supervised hub-detection (SHD) to an unsupervised hub-detection and a linear regression using the original network connections as features. The results show that the SHD is able to retain regression performance, while still finding hubs that represent the underlying variation in the population. Although here we applied the SHD to brain networks, it can be applied to any ... network regression problem. Further development of the presented algorithm will be the extension to other predictive models such as classification or non-linear regression.

  1. Psychoanalytic supervision: the intersubjective development.

    Science.gov (United States)

    Berman, E

    2000-04-01

    The author argues that an intersubjective perspective on the analytic process makes the notion of purely didactic supervision, avoiding countertransference issues, untenable and that countertransference is both a clue to the analysand's psychic reality and a factor in its evolution. Supervision is seen as a highly personal learning process for both supervisor and supervisee and its emotional climate as a crucial factor in its evolution into a transitional space, generating new meanings. Supervision is portrayed as the crossroads of a matrix of object relations of three persons, of a complex network of transference/countertransference patterns. The avoidance or denial of the supervisor's subjective role in it, maintaining 'a myth of the supervisory situation', may make supervision stilted or even oppressive and stand in the way of resolving supervisory crises and stalemates. It is argued that several factors contribute to the conflictuality of supervision for all partners (often including the analysand): the continuous process of mutual evaluation, the reciprocal fears of exposing one's weaknesses, the impact of the institute as a setting and the transferences it arouses and the inherent conflicts of loyalty for each participant in the analytic/supervisory triad. The resulting dynamics and relational patterns could become a legitimate and freeing topic in supervisory discourse.

  2. Blinking supervision in a working environment.

    Science.gov (United States)

    Morcego, Bernardo; Argilés, Marc; Cabrerizo, Marc; Cardona, Genís; Pérez, Ramon; Pérez-Cabré, Elisabet; Gispets, Joan

    2016-02-01

    The health of the ocular surface requires blinks of the eye to be frequent in order to provide moisture and to renew the tear film. However, blinking frequency has been shown to decrease in certain conditions, such as when subjects are conducting tasks with high cognitive and visual demands. These conditions are becoming more common as people work or spend their leisure time in front of video display terminals. Supervision of blinking frequency in such environments is possible thanks to the availability of computer-integrated cameras. Therefore, the aim of the present study is to develop an algorithm for the detection of eye blinks and to test it on a number of videos captured while subjects were conducting a variety of tasks in front of the computer. The sensitivity of the algorithm for blink detection was found to be 87.54% (range 30% to 100%), with a mean false-positive rate of 0.19% (range 0% to 1.7%), depending on the illumination conditions during image capture and other computer–user spatial configurations. The current automatic process is based on partly modified pre-existing eye detection and image processing algorithms and consists of four stages aimed at eye detection, eye tracking, iris detection and segmentation, and iris height/width ratio assessment.
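
    The final stage of the pipeline, thresholding the iris height/width ratio, can be sketched in a few lines. The upstream eye detection, tracking, and iris segmentation stages are assumed to supply per-frame bounding boxes; the threshold value is illustrative, not the study's.

    ```python
    def detect_blinks(iris_boxes, ratio_thresh=0.35):
        """Flag blink onsets from per-frame iris (height, width) boxes."""
        blinks, closed = [], False
        for i, (h, w) in enumerate(iris_boxes):
            ratio = h / w if w else 0.0
            if ratio < ratio_thresh and not closed:
                blinks.append(i)          # blink onset at frame i
                closed = True
            elif ratio >= ratio_thresh:
                closed = False            # eye open again
        return blinks

    print(detect_blinks([(10, 20), (9, 20), (3, 20), (2, 20), (9, 20)]))  # -> [2]
    ```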

  3. Integrating the Supervised Information into Unsupervised Learning

    Directory of Open Access Journals (Sweden)

    Ping Ling

    2013-01-01

    This paper presents an assembled unsupervised learning framework that adopts information coming from a supervised learning process and gives the corresponding implementation algorithm. The algorithm consists of two phases: extracting and clustering data representatives (DRs) to obtain labeled training data, and then classifying non-DRs based on the labeled DRs. The implementation algorithm is called SDSN since it employs the tuning-scaled Support vector domain description to collect DRs, uses a spectrum-based method to cluster DRs, and adopts the nearest neighbor classifier to label non-DRs. The validity of the clustering procedure of the first phase is analyzed theoretically. In the second phase, a new metric is defined in a data-dependent way to allow the nearest neighbor classifier to work with the supervised information. A fast training approach for DR extraction is provided for greater efficiency. Experimental results on synthetic and real datasets verify the correctness and performance of the proposed idea and show that SDSN outperforms the traditional pure clustering procedure in practice.
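
    A rough two-phase sketch of the SDSN idea using off-the-shelf scikit-learn stand-ins: a one-class SVM approximates the tuning-scaled support vector domain description for collecting DRs, spectral clustering labels them, and a nearest-neighbour classifier labels the non-DRs. The paper's data-dependent metric is not reproduced.

    ```python
    import numpy as np
    from sklearn.svm import OneClassSVM
    from sklearn.cluster import SpectralClustering
    from sklearn.neighbors import KNeighborsClassifier

    rng = np.random.default_rng(2)
    X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(5, 1, (100, 2))])

    # Phase 1: collect and cluster data representatives (DRs).
    svdd = OneClassSVM(nu=0.2, gamma=0.5).fit(X)
    dr_mask = svdd.predict(X) == 1                 # "inlier" points act as DRs
    drs, rest = X[dr_mask], X[~dr_mask]
    dr_labels = SpectralClustering(n_clusters=2, random_state=0).fit_predict(drs)

    # Phase 2: label the remaining (non-DR) points by nearest neighbours.
    knn = KNeighborsClassifier(n_neighbors=3).fit(drs, dr_labels)
    rest_labels = knn.predict(rest)
    ```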

  4. Magnetic Resonance Parkinsonism Index: diagnostic accuracy of a fully automated algorithm in comparison with the manual measurement in a large Italian multicentre study in patients with progressive supranuclear palsy

    Energy Technology Data Exchange (ETDEWEB)

    Nigro, Salvatore [National Research Council, Institute of Bioimaging and Molecular Physiology, Catanzaro (Italy); Arabia, Gennarina [University "Magna Graecia", Institute of Neurology, Department of Medical and Surgical Sciences, Catanzaro (Italy); Antonini, Angelo; Weis, Luca; Marcante, Andrea ["Fondazione Ospedale San Camillo" - I.R.C.C.S, Parkinson's Disease and Movement Disorders Unit, Venice-Lido (Italy); Tessitore, Alessandro; Cirillo, Mario; Tedeschi, Gioacchino [Second University of Naples, Department of Medical, Surgical, Neurological, Metabolic and Aging Sciences, Naples (Italy); Second University of Naples, MRI Research Center SUN-FISM, Naples (Italy); Zanigni, Stefano; Tonon, Caterina [Policlinico S. Orsola - Malpighi, Functional MR Unit, Bologna (Italy); University of Bologna, Department of Biomedical and Neuromotor Sciences, Bologna (Italy); Calandra-Buonaura, Giovanna [University of Bologna, Department of Biomedical and Neuromotor Sciences, Bologna (Italy); IRCCS Istituto delle Scienze Neurologiche di Bologna, Bologna (Italy); Pezzoli, Gianni; Cilia, Roberto [ASST G.Pini - CTO, ex ICP, Parkinson Institute, Milano (Italy); Zappia, Mario; Nicoletti, Alessandra; Cicero, Calogero Edoardo [University of Catania, Department "G.F. Ingrassia", Section of Neurosciences, Catania (Italy); Tinazzi, Michele; Tocco, Pierluigi [University Hospital of Verona, Department of Neurological and Movement Sciences, Verona (Italy); Cardobi, Nicolo [University Hospital of Verona, Institute of Radiology, Verona (Italy); Quattrone, Aldo [National Research Council, Institute of Bioimaging and Molecular Physiology, Catanzaro (Italy); University "Magna Graecia", Institute of Neurology, Department of Medical and Surgical Sciences, Catanzaro (Italy)

    2017-06-15

    To investigate the reliability of a new in-house automatic algorithm for calculating the Magnetic Resonance Parkinsonism Index (MRPI), in a large multicentre study population of patients affected by progressive supranuclear palsy (PSP) or Parkinson's disease (PD), and healthy controls (HC), and to compare the diagnostic accuracy of the automatic and manual MRPI values. The study included 88 PSP patients, 234 PD patients and 117 controls. MRI was performed using both 3T and 1.5T scanners. Automatic and manual MRPI values were evaluated, and accuracy of both methods in distinguishing PSP from PD and controls was calculated. No statistical differences were found between automated and manual MRPI values in all groups. The automatic MRPI values differentiated PSP from PD with an accuracy of 95 % (manual MRPI accuracy 96 %) and 97 % (manual MRPI accuracy 100 %) for 1.5T and 3T scanners, respectively. Our study showed that the new in-house automated method for MRPI calculation was highly accurate in distinguishing PSP from PD. Our automatic approach allows a widespread use of MRPI in clinical practice and in longitudinal research studies. (orig.)
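
    The MRPI itself is a simple product of ratios (pons area over midbrain area, times middle over superior cerebellar peduncle width); the contribution of the automated pipeline is measuring those four quantities without an operator. A sketch with invented values:

    ```python
    def mrpi(pons_area, midbrain_area, mcp_width, scp_width):
        """MRPI = (pons area / midbrain area) * (MCP width / SCP width)."""
        return (pons_area / midbrain_area) * (mcp_width / scp_width)

    # Invented measurements (mm^2 and mm); not from the study.
    print(mrpi(pons_area=520.0, midbrain_area=95.0, mcp_width=8.1, scp_width=2.5))
    # Cut-offs reported in the literature for separating PSP from PD are
    # around 13.5, though they vary by study and scanner.
    ```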

  5. Supervised and Unsupervised Classification Using Mixture Models

    Science.gov (United States)

    Girard, S.; Saracco, J.

    2016-05-01

    This chapter is dedicated to model-based supervised and unsupervised classification. Probability distributions are defined over possible labels as well as over the observations given the labels. To this end, the basic tools are mixture models. This methodology yields a posterior distribution over the labels given the observations, which makes it possible to quantify the uncertainty of the classification. The role of Gaussian mixture models is emphasized, leading to the Linear Discriminant Analysis and Quadratic Discriminant Analysis methods. Some links with Fisher Discriminant Analysis and logistic regression are also established. The Expectation-Maximization algorithm is introduced and compared to the K-means clustering method. The methods are illustrated on both simulated and real datasets using the R software.
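
    A small illustration of the chapter's toolbox, using scikit-learn in place of the R software the chapter uses: an EM-fitted Gaussian mixture yields posterior label probabilities (quantified uncertainty), unlike the hard assignments of K-means, and LDA serves as the supervised counterpart. Data are synthetic.

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture
    from sklearn.cluster import KMeans
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    rng = np.random.default_rng(3)
    X = np.vstack([rng.normal(0, 1, (60, 2)), rng.normal(4, 1, (60, 2))])
    y = np.repeat([0, 1], 60)

    gmm = GaussianMixture(n_components=2, random_state=0).fit(X)  # fitted by EM
    posteriors = gmm.predict_proba(X)      # soft labels quantify uncertainty
    hard = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

    lda = LinearDiscriminantAnalysis().fit(X, y)   # supervised counterpart
    print(posteriors[:2], lda.score(X, y))
    ```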

  6. Sensitivity and specificity of automated analysis of single-field non-mydriatic fundus photographs by the Bosch DR Algorithm - comparison with mydriatic fundus photography (ETDRS) for screening in undiagnosed diabetic retinopathy.

    Directory of Open Access Journals (Sweden)

    Pritam Bawankar

    Diabetic retinopathy (DR) is a leading cause of blindness among working-age adults. Early diagnosis through effective screening programs is likely to improve vision outcomes. The ETDRS seven-standard-field 35-mm stereoscopic color retinal imaging (ETDRS) of the dilated eye is elaborate, requires mydriasis, and is unsuitable for screening. We evaluated an image analysis application for the automated diagnosis of DR from non-mydriatic single-field images. Patients suffering from diabetes for at least 5 years were included if they were 18 years or older. Patients already diagnosed with DR were excluded. Physiologic mydriasis was achieved by placing the subjects in a dark room. Images were captured using a Bosch Mobile Eye Care fundus camera. The images were analyzed by the Retinal Imaging Bosch DR Algorithm for the diagnosis of DR. All subjects also subsequently underwent pharmacological mydriasis and ETDRS imaging. The non-mydriatic and mydriatic images were read by ophthalmologists. The ETDRS readings were used as the gold standard for calculating the sensitivity and specificity of the software. 564 consecutive subjects (1128 eyes) were recruited from six centers in India. Each subject was evaluated at a single outpatient visit. Forty-four of 1128 images (3.9%) could not be read by the algorithm and were categorized as inconclusive. In four subjects, neither eye provided an acceptable image; these four subjects were excluded from the analysis. This left 560 subjects for analysis (1084 eyes). The algorithm correctly diagnosed 531 of 560 cases. The sensitivity, specificity, and positive and negative predictive values were 91%, 97%, 94%, and 95%, respectively. The Bosch DR Algorithm shows favorable sensitivity and specificity in diagnosing DR from non-mydriatic images and can greatly simplify screening for DR. This also has major implications for telemedicine in screening for retinopathy in patients with diabetes mellitus.
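
    The four reported metrics follow directly from confusion-matrix counts. The sketch below shows the standard definitions; the counts are illustrative values chosen to land near the reported percentages, not the study's raw table.

    ```python
    def screening_metrics(tp, fp, tn, fn):
        return {
            "sensitivity": tp / (tp + fn),   # true positive rate
            "specificity": tn / (tn + fp),   # true negative rate
            "ppv": tp / (tp + fp),           # positive predictive value
            "npv": tn / (tn + fn),           # negative predictive value
        }

    # Illustrative counts chosen to land near the reported 91/97/94/95%.
    print(screening_metrics(tp=182, fp=11, tn=349, fn=18))
    ```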

  7. Challenges for Better thesis supervision.

    Science.gov (United States)

    Ghadirian, Laleh; Sayarifard, Azadeh; Majdzadeh, Reza; Rajabi, Fatemeh; Yunesian, Masoud

    2014-01-01

    Conducting a thesis is one of students' major academic activities. Thesis quality and the experience acquired are highly dependent on the supervision. Our study aimed to identify the challenges in thesis supervision from both the students' and the faculty members' points of view. The study was conducted using individual in-depth interviews and Focus Group Discussions (FGD). The participants were 43 students and faculty members selected by purposive sampling. It was carried out at Tehran University of Medical Sciences in 2012. Data analysis was done concurrently with data gathering using the content analysis method. Our data analysis resulted in 162 codes, 17 subcategories and 4 major categories: "supervisory knowledge and skills", "atmosphere", "bylaws and regulations relating to supervision" and "monitoring and evaluation". This study showed that more attention and planning is needed for modifying related rules and regulations, qualitative and quantitative improvement in mentorship training, improvement of the research atmosphere, and effective monitoring and evaluation in the supervisory area.

  8. Evaluation of a semi-automated computer algorithm for measuring total fat and visceral fat content in lambs undergoing in vivo whole body computed tomography.

    Science.gov (United States)

    Rosenblatt, Alana J; Scrivani, Peter V; Boisclair, Yves R; Reeves, Anthony P; Ramos-Nieves, Jose M; Xie, Yiting; Erb, Hollis N

    2017-10-01

    Computed tomography (CT) is a suitable tool for measuring body fat, since it is non-destructive and can be used to differentiate metabolically active visceral fat from total body fat. Whole body analysis of body fat is likely to be more accurate than single CT slice estimates of body fat. The aim of this study was to assess the agreement between semi-automated computer analysis of whole body volumetric CT data and conventional proximate (chemical) analysis of body fat in lambs. Data were collected prospectively from 12 lambs that underwent duplicate whole body CT, followed by slaughter and carcass analysis by dissection and chemical analysis. Agreement between methods for quantification of total and visceral fat was assessed by Bland-Altman plot analysis. The repeatability of CT was assessed for these measures using the mean difference of duplicated measures. When compared to chemical analysis, CT systematically underestimated total and visceral fat contents by more than 10% of the mean fat weight. Therefore, carcass analysis and semi-automated CT computer measurements were not interchangeable for quantifying body fat content without the use of a correction factor. CT acquisition was repeatable, with a mean difference of repeated measures being close to zero. Therefore, uncorrected whole body CT might have an application for assessment of relative changes in fat content, especially in growing lambs.
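
    A minimal sketch of the Bland-Altman computation used to assess agreement: the bias (mean difference between methods) and the 95% limits of agreement. The paired arrays are synthetic stand-ins, not the study's measurements.

    ```python
    import numpy as np

    def bland_altman(a, b):
        diff = a - b
        bias = diff.mean()                   # systematic difference between methods
        loa = 1.96 * diff.std(ddof=1)        # half-width of 95% limits of agreement
        return bias, bias - loa, bias + loa

    ct = np.array([2.1, 3.4, 4.0, 2.8, 3.1])    # kg fat, CT estimate (synthetic)
    chem = np.array([2.5, 3.9, 4.4, 3.2, 3.3])  # kg fat, chemical analysis (synthetic)
    print(bland_altman(ct, chem))               # negative bias: CT underestimates
    ```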

  9. Metrology automation reliability

    Science.gov (United States)

    Chain, Elizabeth E.

    1996-09-01

    At Motorola's MOS-12 facility, automated measurements on 200-mm diameter wafers proceed in a hands-off 'load-and-go' mode requiring only wafer loading, measurement recipe loading, and a 'run' command for processing. Upon completion of all sample measurements, the data is uploaded to the factory's data collection software system via a SECS II interface, eliminating the requirement of manual data entry. The scope of in-line measurement automation has been extended to the entire metrology scheme, from job file generation to measurement and data collection. Data analysis and comparison to part specification limits is also carried out automatically. Successful integration of automated metrology into the factory measurement system requires that automated functions, such as autofocus and pattern recognition algorithms, display a high degree of reliability. In the 24-hour factory, reliability data can be collected automatically on every part measured. This reliability data is then uploaded to the factory data collection software system at the same time as the measurement data. Analysis of the metrology reliability data permits improvements to be made as needed, and provides an accurate accounting of automation reliability. This reliability data has so far been collected for the CD-SEM (critical dimension scanning electron microscope) metrology tool, and examples are presented. This analysis method can be applied to such automated in-line measurements as CD, overlay, particle and film thickness measurements.

  10. Towards designing an email classification system using multi-view based semi-supervised learning

    NARCIS (Netherlands)

    Li, Wenjuan; Meng, Weizhi; Tan, Zhiyuan; Xiang, Yang

    2014-01-01

    The goal of email classification is to classify user emails into spam and legitimate ones. Many supervised learning algorithms have been invented in this domain to accomplish the task, and these algorithms require a large amount of labeled training data. However, data labeling is a labor-intensive task.

  11. Semi-supervised learning and domain adaptation in natural language processing

    CERN Document Server

    Søgaard, Anders

    2013-01-01

    This book introduces basic supervised learning algorithms applicable to natural language processing (NLP) and shows how the performance of these algorithms can often be improved by exploiting the marginal distribution of large amounts of unlabeled data. One reason for that is data sparsity, i.e., the limited amounts of data we have available in NLP. However, in most real-world NLP applications our labeled data is also heavily biased. This book introduces extensions of supervised learning algorithms to cope with data sparsity and different kinds of sampling bias.This book is intended to be both

  12. On the design and implementation of an automated astronomical image analyzer

    Science.gov (United States)

    Zhao, Lei; Sharif, Gabe; Orellana, Sonny; Boussalis, Helen; Liu, Charles; Rad, Khosrow; Dong, Jane

    2006-10-01

    Sponsored by the National Aeronautics and Space Administration (NASA), the Synergetic Education and Research in Enabling NASA-centered Academic Development of Engineers and Space Scientists (SERENADES) Laboratory was established at California State University, Los Angeles (CSULA). An important ongoing research activity in this lab is to develop easy-to-use image analysis software with automated object detection capability to facilitate astronomical research. This paper presents the design and implementation of an automated astronomical image analyzer. The core of this software is the automated object detection algorithm developed in our previous research, which is capable of detecting objects in near-galaxy images, including objects located within clouds. In addition to functionality, human factors were considered in the system design, and tremendous effort has been devoted to enhancing user friendliness. Instead of using a command line or static menus, graphical methods are enabled in our software system to allow users to directly manipulate the objects they want to investigate. Comprehensive tests were conducted by users with and without astronomical backgrounds. Compared to current software tools such as IRAF and Skyview, our software has the following advantages: 1) no pre-training is required; 2) the amount of human supervision is significantly reduced by automated object detection; 3) batch processing capability is supported for fast operation; and 4) a high degree of human-computer interaction is realized for better usability.

  13. Algorithms and Algorithmic Languages.

    Science.gov (United States)

    Veselov, V. M.; Koprov, V. M.

    This paper is intended as an introduction to a number of problems connected with the description of algorithms and algorithmic languages, particularly the syntaxes and semantics of algorithmic languages. The terms "letter, word, alphabet" are defined and described. The concept of the algorithm is defined and the relation between the algorithm and…

  14. Supervised Object Class Colour Normalisation

    DEFF Research Database (Denmark)

    Riabchenko, Ekatarina; Lankinen, Jukka; Buch, Anders Glent

    2013-01-01

    In this work, we develop such a colour normalisation technique, where true colours are not important per se but where examples of the same classes have photometrically consistent appearance. This is achieved by supervised estimation of a class-specific canonical colour space where the examples have minimal variation...

  15. Consultative Instructor Supervision and Evaluation

    Science.gov (United States)

    Lee, William W.

    2010-01-01

    Organizations vary greatly in how they monitor training instructors. The methods used in monitoring vary greatly. This article presents a systematic process for improving instructor skills that result in better teaching and better learning, which results in better-prepared employees for the workforce. The consultative supervision and evaluation…

  16. Supervised query modeling using Wikipedia

    NARCIS (Netherlands)

    Meij, E.; de Rijke, M.; Chen, H.-H.; Efthimiadis, E.N.; Savoy, J.; Crestani, F.; Marchand-Millet, S.

    2010-01-01

    We use Wikipedia articles to semantically inform the generation of query models. To this end, we apply supervised machine learning to automatically link queries to Wikipedia articles and sample terms from the linked articles to re-estimate the query model. On a recent large web corpus, we observe

  17. Automated measurements of metabolic tumor volume and metabolic parameters in lung PET/CT imaging

    Science.gov (United States)

    Orologas, F.; Saitis, P.; Kallergi, M.

    2017-11-01

    Patients with lung tumors or inflammatory lung disease could greatly benefit in terms of treatment and follow-up from PET/CT quantitative imaging, namely measurements of metabolic tumor volume (MTV), standardized uptake values (SUVs) and total lesion glycolysis (TLG). The purpose of this study was the development of an unsupervised or partially supervised algorithm using standard image processing tools for measuring MTV, SUV, and TLG from lung PET/CT scans. Automated metabolic lesion volume and metabolic parameter measurements were achieved through a 5-step algorithm: (i) the segmentation of the lung areas on the CT slices, (ii) the registration of the CT segmented lung regions on the PET images to define the anatomical boundaries of the lungs on the functional data, (iii) the segmentation of the regions of interest (ROIs) on the PET images based on adaptive thresholding and clinical criteria, (iv) the estimation of the number of pixels and pixel intensities in the PET slices of the segmented ROIs, (v) the estimation of MTV, SUVs, and TLG from the previous step and DICOM header data. Whole body PET/CT scans of patients with sarcoidosis were used for training and testing the algorithm. Lung area segmentation on the CT slices was better achieved with semi-supervised techniques that reduced false positive detections significantly. Lung segmentation results agreed with the lung volumes published in the literature, while the agreement between experts and algorithm in the segmentation of the lesions was around 88%. Segmentation results depended on the image resolution selected for processing. The clinical parameters, SUV (either mean, max, or peak) and TLG, estimated from the segmented ROIs and DICOM header data, provided a way to correlate imaging data to clinical and demographic data. In conclusion, automated MTV, SUV, and TLG measurements offer powerful analysis tools in PET/CT imaging of the lungs. Custom-made algorithms are often a better approach than the manufacturer's software.
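
    Step (v), turning a segmented PET ROI into the three metrics, can be sketched under the standard definitions (SUV as activity concentration normalised by injected dose per body weight, MTV as segmented volume, TLG as SUVmean times MTV); all values below are synthetic.

    ```python
    import numpy as np

    def roi_metrics(roi_bq_per_ml, voxel_ml, injected_dose_bq, weight_g):
        suv = roi_bq_per_ml / (injected_dose_bq / weight_g)  # per-voxel SUV
        mtv_ml = roi_bq_per_ml.size * voxel_ml               # metabolic volume
        return {"MTV_ml": mtv_ml,
                "SUVmean": suv.mean(),
                "SUVmax": suv.max(),
                "TLG": suv.mean() * mtv_ml}                  # total lesion glycolysis

    roi = np.random.rand(500) * 8000 + 2000   # Bq/mL inside a segmented ROI (synthetic)
    print(roi_metrics(roi, voxel_ml=0.064, injected_dose_bq=3.7e8, weight_g=75000))
    ```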

  18. Blastocyst microinjection automation.

    Science.gov (United States)

    Mattos, Leonardo S; Grant, Edward; Thresher, Randy; Kluckman, Kimberly

    2009-09-01

    Blastocyst microinjections are routinely involved in the process of creating genetically modified mice for biomedical research, but their efficiency is highly dependent on the skills of the operators. As a consequence, much time and resources are required for training microinjection personnel. This situation has been aggravated by the rapid growth of genetic research, which has increased the demand for mutant animals. Therefore, increased productivity and efficiency in this area are highly desired. Here, we pursue these goals through the automation of a previously developed teleoperated blastocyst microinjection system. This included the design of a new system setup to facilitate automation, the definition of rules for automatic microinjections, the implementation of video processing algorithms to extract feedback information from microscope images, and the creation of control algorithms for process automation. Experimentation conducted with this new system, with operator assistance during the cell delivery phase, demonstrated a 75% microinjection success rate. In addition, implantation of the successfully injected blastocysts resulted in a 53% birth rate and a 20% yield of chimeras. These results proved that the developed system was capable of automatic blastocyst penetration and retraction, demonstrating the success of major steps toward full process automation.

  19. Health science students' conceptions of group supervision.

    Science.gov (United States)

    Kangasniemi, Mari; Ahonen, Sanna-Mari; Liikanen, Eeva; Utriainen, Kati

    2011-02-01

    The aim of this study was to describe health science university students' conceptions of group supervision during work on a bachelor's thesis. This study is qualitative research. Data were collected with an open data collection form from health science students (N=77) and analysed using inductive content analysis, conducted by a multidisciplinary research team. Appropriate ethical principles and scientific practice were followed. All the participants provided informed consent. Students' conceptions of group supervision consisted of the organization of group supervision, the nature of supervision, the interaction between students, the role of the supervisor, and learning results. Group supervision is a student-centred and problem-based method of supervision for achieving a common target. It consists of interaction between students and supervisor. The supervisor's role is characterized by scientific and substantive expertise. Group supervision is a suitable supervision method for achieving theoretical and practical scientific skills.

  20. Automated Digital Dental Articulation

    OpenAIRE

    Xia, James J.; Chang, Yu-Bing; Gateno, Jaime; Xiong, Zixiang; Zhou, Xiaobo

    2010-01-01

    Articulating digital dental models is often inaccurate and very time-consuming. This paper presents an automated approach to efficiently articulate digital dental models to maximum intercuspation (MI). There are two steps in our method. The first step is to position the models to an initial position based on dental curves and a point matching algorithm. The second step is to finally position the models to the MI position based on our novel approach of using iterative surface-based minimum dis...

  1. Automated Speech Rate Measurement in Dysarthria

    Science.gov (United States)

    Martens, Heidi; Dekens, Tomas; Van Nuffelen, Gwen; Latacz, Lukas; Verhelst, Werner; De Bodt, Marc

    2015-01-01

    Purpose: In this study, a new algorithm for automated determination of speech rate (SR) in dysarthric speech is evaluated. We investigated how reliably the algorithm calculates the SR of dysarthric speech samples when compared with calculation performed by speech-language pathologists. Method: The new algorithm was trained and tested using Dutch…

  2. Postgraduate research supervision in a socially distributed ...

    African Journals Online (AJOL)

    Postgraduate supervision is a higher education practice with a long history. Through the conventional "apprenticeship" model postgraduate supervision has served as an important vehicle of intellectual inheritance between generations. However, this model of supervision has come under scrutiny as a consequence of the ...

  3. Developing a Critical Practice of Clinical Supervision.

    Science.gov (United States)

    Smyth, W. John

    1985-01-01

    The etymology of the term "clinical supervision" is discussed. How clinical supervision can be used with teachers as an active force toward reform and change is then examined. Through clinical supervision teachers can assist each other to gain control over their own professional lives and destinies. (RM)

  4. Determining the Usages of Clinical Supervision.

    Science.gov (United States)

    Pavan, Barbara Nelson

    Clinical supervision at its best is a collaborative process whereby teacher and observer work together for instructional improvement. The Snyder-Pavan Supervision Practices Questionnaire seeks to obtain a description of the clinical supervision practiced by administrators, supervisors, and teachers. The majority of items are scored five through…

  5. Identity, Authority, and the Heart of Supervision.

    Science.gov (United States)

    Waite, Duncan

    2000-01-01

    Discusses types of authority (bureaucratic, personal, technical-rational, professional, moral, cultural, and ideological) and their implications for teacher supervision. Supervision is a helping profession, in service of the teacher. The heart of supervision lies in its relationships and its mission to improve the total teaching/learning…

  6. 20 CFR 655.30 - Supervised recruitment.

    Science.gov (United States)

    2010-04-01

    ... 20 Employees' Benefits 3 2010-04-01 2010-04-01 false Supervised recruitment. 655.30 Section 655.30... Workers) § 655.30 Supervised recruitment. (a) Supervised recruitment. Where an employer is found to have... failed to adequately conduct recruitment activities or failed in any obligation of this part, the CO may...

  7. Curriculum Supervision: A Process for Improving Instruction.

    Science.gov (United States)

    Payne, Ruby K.

    This paper advocates a more active role for administrators in curriculum supervision, claiming that two of the most neglected areas in supervision are the content and the amount of time allocated to that content and its objectives. An essential task of curriculum supervision should therefore be to make sure that content and corresponding time…

  8. Weakly Supervised Multilabel Clustering and its Applications in Computer Vision.

    Science.gov (United States)

    Xia, Yingjie; Nie, Liqiang; Zhang, Luming; Yang, Yi; Hong, Richang; Li, Xuelong

    2016-12-01

    Clustering is a useful statistical tool in computer vision and machine learning. It is generally accepted that introducing supervised information brings remarkable performance improvement to clustering. However, assigning accurate labels is expensive when the amount of training data is huge. Existing supervised clustering methods handle this problem by transferring the bag-level labels onto the instance-level descriptors. However, the assumption that each bag has a single label seriously limits the application scope. In this paper, we propose weakly supervised multilabel clustering, which allows assigning multiple labels to a bag. Based on this, the instance-level descriptors can be clustered with the guidance of bag-level labels. The key technique is a weakly supervised random forest that infers the model parameters. A deterministic annealing strategy is developed to optimize the nonconvex objective function. The proposed algorithm is efficient in both the training and the testing stages. We apply it to three popular computer vision tasks: 1) image clustering; 2) semantic image segmentation; and 3) multiple object localization. Impressive performance on state-of-the-art image data sets is achieved in our experiments.

  9. BANKING SUPERVISION IN EUROPEAN UNION

    Directory of Open Access Journals (Sweden)

    Lavinia Mihaela GUȚU

    2013-10-01

    The need for the prudential supervision imposed on banks by law arises from the workings of the banking market's basic factors, that is, from banks' role in the economy. The normal functioning of banks in all their important duties maintains the stability of the banking system, and the stability of the entire economy in turn depends on the stability of the banking system. Under conditions of treasury or liquidity imbalance, banks face unmanageable crises, and the consequences can be fatal. To ensure the long-term stability of the banking system, supervisory regulations were constituted to prevent banks from focusing on achieving high profits rapidly and to protect the interests of depositors. Starting from this point, this paper carries out a study of the existing models of supervision in the European Union's Member States. A comparison between them will support identifying the advantages and disadvantages of each.

  10. PIXiE: an algorithm for automated ion mobility arrival time extraction and collision cross section calculation using global data association

    Energy Technology Data Exchange (ETDEWEB)

    Ma, Jian; Casey, Cameron P.; Zheng, Xueyun; Ibrahim, Yehia M.; Wilkins, Christopher S.; Renslow, Ryan S.; Thomas, Dennis G.; Payne, Samuel H.; Monroe, Matthew E.; Smith, Richard D.; Teeguarden, Justin G.; Baker, Erin S.; Metz, Thomas O.

    2017-05-15

    Motivation: Drift tube ion mobility spectrometry (DTIMS) is increasingly implemented in high throughput omics workflows, and new informatics approaches are necessary for processing the associated data. To automatically extract arrival times for molecules measured by DTIMS coupled with mass spectrometry and compute their associated collisional cross sections (CCS), we created the PNNL Ion Mobility Cross Section Extractor (PIXiE). The primary application presented for this algorithm is the extraction of information necessary to create a reference library containing accurate masses, DTIMS arrival times and CCSs for use in high throughput omics analyses. Results: We demonstrate the utility of this approach by automatically extracting arrival times and calculating the associated CCSs for a set of endogenous metabolites and xenobiotics. The PIXiE-generated CCS values were identical to those calculated by hand and within error of those calculated using commercially available instrument vendor software.

  11. Library Automation

    OpenAIRE

    Dhakne, B. N.; Giri, V. V; Waghmode, S. S.

    2010-01-01

    New technologies provide libraries with several new materials, media and modes of storing and communicating information. Library automation reduces the drudgery of repeated manual efforts in library routines. Library automation supports collection, storage, administration, processing, preservation, communication, etc.

  12. Skærpet bevidsthed om supervision

    DEFF Research Database (Denmark)

    Pedersen, Inge Nygaard

    2002-01-01

    This article presents a historical survey of the initiatives which have taken place in European music therapy towards developing a deeper consciousness about supervision: supervision as a discipline in music therapy training, as maintenance of the music therapy profession, and as postgraduate training for examined music therapists. Definitions are presented, and methods developed by working groups in European music therapy supervision are presented.

  13. [Software version and medical device software supervision].

    Science.gov (United States)

    Peng, Liang; Liu, Xiaoyan

    2015-01-01

    The importance of the software version in medical device software supervision does not receive enough attention at present. First, the effect of the software version in medical device software supervision is discussed; then the necessity of considering the software version is analyzed, based on a discussion of common misunderstandings about software versions. Finally, concrete suggestions are proposed on software version naming rules, software version supervision for the software in medical devices, and a software version supervision scheme.

  14. Semi-Supervised Affinity Propagation with Soft Instance-Level Constraints.

    Science.gov (United States)

    Arzeno, Natalia M; Vikalo, Haris

    2015-05-01

    Soft-constraint semi-supervised affinity propagation (SCSSAP) adds supervision to the affinity propagation (AP) clustering algorithm without strictly enforcing instance-level constraints. Constraint violations lead to an adjustment of the AP similarity matrix at every iteration of the proposed algorithm and to addition of a penalty to the objective function. This formulation is particularly advantageous in the presence of noisy labels or noisy constraints since the penalty parameter of SCSSAP can be tuned to express our confidence in instance-level constraints. When the constraints are noiseless, SCSSAP outperforms unsupervised AP and performs at least as well as the previously proposed semi-supervised AP and constrained expectation maximization. In the presence of label and constraint noise, SCSSAP results in a more accurate clustering than either of the aforementioned established algorithms. Finally, we present an extension of SCSSAP which incorporates metric learning in the optimization objective and can further improve the performance of clustering.
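
    A hedged approximation of the soft-constraint idea using scikit-learn's standard affinity propagation on a penalty-adjusted similarity matrix. Note that SCSSAP adjusts similarities inside every iteration; applying the penalty once up front, as here, only sketches the mechanism, and all pairs and weights are invented.

    ```python
    import numpy as np
    from sklearn.cluster import AffinityPropagation
    from sklearn.metrics import pairwise_distances

    rng = np.random.default_rng(4)
    X = rng.normal(size=(60, 2))
    S = -pairwise_distances(X, metric="sqeuclidean")   # AP similarity matrix

    penalty = 5.0                                      # confidence in the constraints
    must_link, cannot_link = [(0, 1), (2, 3)], [(0, 10)]
    for i, j in must_link:
        S[i, j] += penalty; S[j, i] += penalty         # encourage a shared exemplar
    for i, j in cannot_link:
        S[i, j] -= penalty; S[j, i] -= penalty         # discourage it

    ap = AffinityPropagation(affinity="precomputed", random_state=0).fit(S)
    print(ap.labels_[:12])
    ```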

  15. Semi-supervised classification via local spline regression.

    Science.gov (United States)

    Xiang, Shiming; Nie, Feiping; Zhang, Changshui

    2010-11-01

    This paper presents local spline regression for semi-supervised classification. The core idea in our approach is to introduce splines developed in Sobolev space to map the data points directly to class labels. The spline is composed of polynomials and Green's functions. It is smooth, nonlinear, and able to interpolate the scattered data points with high accuracy. Specifically, in each neighborhood, an optimal spline is estimated via regularized least squares regression. With this spline, each of the neighboring data points is mapped to a class label. Then, the regularized loss is evaluated and further formulated in terms of the class label vector. Finally, all of the losses evaluated in local neighborhoods are accumulated together to measure the global consistency on the labeled and unlabeled data. To achieve the goal of semi-supervised classification, an objective function is constructed by combining the global loss of the local spline regressions with the squared errors of the class labels of the labeled data. In this way, a transductive classification algorithm is developed in which a globally optimal classification can finally be obtained. In the semi-supervised learning setting, the proposed algorithm is analyzed and cast into the Laplacian regularization framework. Comparative classification experiments on many public data sets and applications to interactive image segmentation and image matting illustrate the validity of our method.

  16. Technological process supervising using vision systems cooperating with the LabVIEW vision builder

    Science.gov (United States)

    Hryniewicz, P.; Banaś, W.; Gwiazda, A.; Foit, K.; Sękala, A.; Kost, G.

    2015-11-01

    One of the most important tasks in a production process is to supervise its proper functioning. Lack of the required supervision over the production process can lead to incorrect manufacturing of the final element, to production line downtime, and hence to financial losses; the worst outcome is damage to the equipment involved in the manufacturing process. Engineers supervising the correctness of the production flow use a great range of sensors to support the supervision of a manufactured element. Vision systems are one such family of sensors. In recent years, thanks to the accelerated development of electronics as well as easier access to electronic products and attractive prices, they have become a cheap and universal type of sensor. These sensors detect practically all objects, regardless of their shape or even state of matter. The only problems arise with transparent or mirrored objects detected from the wrong angle. By integrating a vision system with LabVIEW Vision and the LabVIEW Vision Builder, it is possible to determine not only the position of a given element but also its orientation relative to any point in the analyzed space. The paper presents an example of automated inspection of the manufacturing process in a production workcell using the vision supervising system. The aim of the work is to elaborate a vision system that could integrate different applications and devices used in different production systems to control the manufacturing process.

  17. Nursing supervision for care comprehensiveness.

    Science.gov (United States)

    Chaves, Lucieli Dias Pedreschi; Mininel, Vivian Aline; Silva, Jaqueline Alcântara Marcelino da; Alves, Larissa Roberta; Silva, Maria Ferreira da; Camelo, Silvia Helena Henriques

    2017-01-01

    To reflect on nursing supervision as a management tool for care comprehensiveness by nurses, considering its potential and limits in the current scenario. A reflective study based on discourse about nursing supervision, presenting theoretical and practical concepts and approaches. Limits on the exercise of supervision are related to the organization of healthcare services based on the functional and clinical model of care, in addition to possible gaps in the nurse training process and work overload. Regarding the potential, researchers emphasize that supervision is a tool for coordinating care and management actions, which may favor care comprehensiveness, and stimulate positive attitudes toward cooperation and contribution within teams, co-responsibility, and educational development at work. Nursing supervision may help enhance care comprehensiveness by implying continuous reflection on including the dynamics of the healthcare work process and user needs in care networks.

  18. Algorithms for Reinforcement Learning

    CERN Document Server

    Szepesvari, Csaba

    2010-01-01

    Reinforcement learning is a learning paradigm concerned with learning to control a system so as to maximize a numerical performance measure that expresses a long-term objective. What distinguishes reinforcement learning from supervised learning is that only partial feedback is given to the learner about the learner's predictions. Further, the predictions may have long term effects through influencing the future state of the controlled system. Thus, time plays a special role. The goal in reinforcement learning is to develop efficient learning algorithms, as well as to understand the algorithms'

  19. Semi-supervised Machine Learning for Analysis of Hydrogeochemical Data and Models

    Science.gov (United States)

    Vesselinov, Velimir; O'Malley, Daniel; Alexandrov, Boian; Moore, Bryan

    2017-04-01

    Data- and model-based analyses such as uncertainty quantification, sensitivity analysis, and decision support using complex physics models with numerous model parameters typically require a huge number of model evaluations (on the order of 10^6). Furthermore, simulations of complex physics may require substantial computational time. For example, accounting for simultaneously occurring physical processes such as fluid flow and biogeochemical reactions in a heterogeneous porous medium may require several hours of wall-clock computational time. To address these issues, we have developed a novel methodology for semi-supervised machine learning based on Non-negative Matrix Factorization (NMF) coupled with customized k-means clustering. The algorithm allows for automated, robust Blind Source Separation (BSS) of groundwater types (contamination sources) based on model-free analyses of observed hydrogeochemical data. We have also developed reduced order modeling tools that couple support vector regression (SVR), genetic algorithms (GA), and artificial and convolutional neural networks (ANN/CNN). SVR is applied to predict the model behavior within the prior uncertainty ranges associated with the model parameters. ANN and CNN procedures are applied to upscale the heterogeneity of the porous medium. In the upscaling process, fine-scale high-resolution models of heterogeneity are applied to inform coarse-resolution models, which have improved computational efficiency while capturing the impact of fine-scale effects at the coarse scale of interest. These techniques are tested independently on a series of synthetic problems. We also present a decision analysis related to contaminant remediation where the developed reduced order models are applied to reproduce groundwater flow and contaminant transport in a synthetic heterogeneous aquifer. The tools are coded in Julia and are part of the MADS high-performance computational framework (https://github.com/madsjulia/Mads.jl).
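
    The NMF-plus-clustering core of the methodology can be sketched with scikit-learn stand-ins (the actual tools are written in Julia within MADS and use a customized k-means that is not reproduced here); the mixing matrix and source signatures below are synthetic.

    ```python
    import numpy as np
    from sklearn.decomposition import NMF
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(6)
    true_sources = rng.random((3, 12))        # 3 groundwater "types" x 12 analytes
    mixing = rng.random((40, 3))              # per-well mixing fractions
    data = mixing @ true_sources              # observed hydrogeochemistry

    nmf = NMF(n_components=3, init="nndsvda", max_iter=500, random_state=0)
    W = nmf.fit_transform(data)               # per-well source contributions
    H = nmf.components_                       # recovered source signatures

    wells = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(W)
    print(wells)                              # grouping of wells by dominant source
    ```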

  20. PIXiE: an algorithm for automated ion mobility arrival time extraction and collision cross section calculation using global data association.

    Science.gov (United States)

    Ma, Jian; Casey, Cameron P; Zheng, Xueyun; Ibrahim, Yehia M; Wilkins, Christopher S; Renslow, Ryan S; Thomas, Dennis G; Payne, Samuel H; Monroe, Matthew E; Smith, Richard D; Teeguarden, Justin G; Baker, Erin S; Metz, Thomas O

    2017-09-01

    Drift tube ion mobility spectrometry coupled with mass spectrometry (DTIMS-MS) is increasingly implemented in high throughput omics workflows, and new informatics approaches are necessary for processing the associated data. To automatically extract arrival times for molecules measured by DTIMS at multiple electric fields and compute their associated collisional cross sections (CCS), we created the PNNL Ion Mobility Cross Section Extractor (PIXiE). The primary application presented for this algorithm is the extraction of data that can then be used to create a reference library of experimental CCS values for use in high throughput omics analyses. We demonstrate the utility of this approach by automatically extracting arrival times and calculating the associated CCSs for a set of endogenous metabolites and xenobiotics. The PIXiE-generated CCS values were within error of those calculated using commercially available instrument vendor software. PIXiE is an open-source tool, freely available on Github. The documentation, source code of the software, and a GUI can be found at https://github.com/PNNL-Comp-Mass-Spec/PIXiE and the source code of the backend workflow library used by PIXiE can be found at https://github.com/PNNL-Comp-Mass-Spec/IMS-Informed-Library . erin.baker@pnnl.gov or thomas.metz@pnnl.gov. Supplementary data are available at Bioinformatics online.

  1. Classification of Automated Search Traffic

    Science.gov (United States)

    Buehrer, Greg; Stokes, Jack W.; Chellapilla, Kumar; Platt, John C.

    As web search providers seek to improve both relevance and response times, they are challenged by the ever-increasing tax of automated search query traffic. Third party systems interact with search engines for a variety of reasons, such as monitoring a web site's rank, augmenting online games, or possibly maliciously altering click-through rates. In this paper, we investigate automated traffic (sometimes referred to as bot traffic) in the query stream of a large search engine provider. We define automated traffic as any search query not generated by a human in real time. We first provide examples of different categories of query logs generated by automated means. We then develop many different features that distinguish between queries generated by people searching for information and those generated by automated processes. We categorize these features into two classes: interpretations of the physical model of human interactions, and behavioral patterns of automated interactions. Using these detection features, we next classify the query stream using multiple binary classifiers. In addition, a multiclass classifier is then developed to identify subclasses of both normal and automated traffic. An active learning algorithm is used to suggest which user sessions to label in order to improve the accuracy of the multiclass classifier, while also seeking to discover new classes of automated traffic. A performance analysis is then provided. Finally, the multiclass classifier is used to predict the subclass distribution for the search query stream.
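
    A tiny stand-in for the binary human-versus-automated session classifier: a handful of physical-model and behavioral features per session feed a standard classifier. The feature names, values, and labels are invented for illustration; none come from the paper.

    ```python
    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier

    rng = np.random.default_rng(9)
    X = np.column_stack([
        rng.exponential(2.0, 500),   # mean seconds between queries (physical model)
        rng.random(500),             # fraction of queries followed by clicks (behavioral)
        rng.poisson(3, 500),         # distinct query categories per session (behavioral)
    ])
    y = rng.integers(0, 2, 500)      # 1 = automated session (labels invented)

    clf = GradientBoostingClassifier(random_state=0).fit(X, y)
    print(clf.predict_proba(X[:3]))  # per-session probability of being a bot
    ```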

  2. Semi-Supervised Eigenbasis Novelty Detection

    Science.gov (United States)

    Wagstaff, Kiri L.; Thompson, David R.

    2013-01-01

    Recent discoveries in high-time-resolution radio astronomy data have focused attention on a new class of events. Fast transients are rare pulses of radio frequency energy lasting from microseconds to seconds that might be produced by a variety of exotic astrophysical phenomena. For example, X-ray bursts, neutron stars, and active galactic nuclei are all possible sources of short-duration, transient radio signals. It is difficult to anticipate where such signals might appear, and they are most commonly discovered through analysis of high-time- resolution data that had been collected for other purposes. Transients are often faint and difficult to detect, so improved detection algorithms can directly benefit the science yield of all such commensal monitoring. A new detection algorithm learns a low-dimensional linear manifold for describing the normal data. High reconstruction error indicates a novel signal that does not match the patterns of normal data. One unsupervised portion of the manifold model adapts its representation in response to recent data. A second supervised portion of the model is made of a basis trained in advance using labeled examples of RFI; this prevents false positives due to these events. For a linear model, an orthonormalization operation is used to combine these bases prior to the anomaly detection decision. Another novel aspect of the approach lies in combining basis vectors learned in an unsupervised, online fashion from the data stream with supervised basis vectors learned in advance from known examples of false alarms. Adaptive, data-driven detection is achieved that is also informed by existing domain knowledge about signals that may be statistically anomalous, but are not interesting and should therefore be ignored. The method was evaluated using data from the Parkes Multibeam Survey. This data set was originally collected to search for pulsars, which are astronomical sources that emit radio pulses at regular periods. However, several
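
    The detection rule itself is compact: stack the supervised basis (trained offline on known false-alarm RFI) with the unsupervised, adaptively learned basis, orthonormalise them jointly, and score each new vector by its reconstruction error. A NumPy sketch with random stand-in bases:

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    B_sup = rng.normal(size=(128, 4))    # basis trained offline on labelled RFI
    B_unsup = rng.normal(size=(128, 6))  # basis adapted online to recent data

    # Orthonormalise the combined basis before the detection decision.
    Q, _ = np.linalg.qr(np.hstack([B_sup, B_unsup]))

    def novelty_score(x, Q):
        recon = Q @ (Q.T @ x)             # projection onto the "normal" manifold
        return np.linalg.norm(x - recon)  # high reconstruction error => novel signal

    x = rng.normal(size=128)              # one observed spectrogram/time-series vector
    print(novelty_score(x, Q))
    ```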

  3. Automated Reference Toolset (ART)—Data

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — These environmental raster covariate, geospatial vector data, and tabular data were compiled as input data for the Automated Reference Toolset (ART) algorithm.

  4. Electron beam welding complex diagnostics automated system

    Directory of Open Access Journals (Sweden)

    Є. В. Нікітенко

    2013-07-01

    The structure of a technical diagnostics system is investigated. An algorithm for the technical diagnostics of an electron beam welding complex, which serves as the basis for the creation of an automated technical diagnostics system, is proposed.

  5. Learning Supervised Topic Models for Classification and Regression from Crowds.

    Science.gov (United States)

    Rodrigues, Filipe; Lourenco, Mariana; Ribeiro, Bernardete; Pereira, Francisco C

    2017-12-01

    The growing need to analyze large collections of documents has led to great developments in topic modeling. Since documents are frequently associated with other related variables, such as labels or ratings, much interest has been placed on supervised topic models. However, the nature of most annotation tasks, prone to ambiguity and noise, often with high volumes of documents, makes learning under a single-annotator assumption unrealistic or impractical for most real-world applications. In this article, we propose two supervised topic models, one for classification and another for regression problems, which account for the heterogeneity and biases among different annotators that are encountered in practice when learning from crowds. We develop an efficient stochastic variational inference algorithm that is able to scale to very large datasets, and we empirically demonstrate the advantages of the proposed model over state-of-the-art approaches.

  6. Cognitive Inference Device for Activity Supervision in the Elderly

    Science.gov (United States)

    2014-01-01

    Human activity, life span, and quality of life are enhanced by innovations in science and technology. Aging individuals need to take advantage of these developments to lead a self-regulated life. However, maintaining a self-regulated life at old age involves a high degree of risk, and the elderly often fail at this goal. Thus, the objective of our study is to investigate the feasibility of implementing a cognitive inference device (CI-device) for effective activity supervision in the elderly. To frame the CI-device, we propose a device design framework along with an inference algorithm, and implement the designs through an artificial neural model with different configurations, mapping the CI-device's functions to minimise the device's prediction error. An analysis and discussion are then provided to validate the feasibility of CI-device implementation for activity supervision in the elderly. PMID:25405211

  7. Using Supervised Deep Learning for Human Age Estimation Problem

    Science.gov (United States)

    Drobnyh, K. A.; Polovinkin, A. N.

    2017-05-01

    Automatic facial age estimation is a challenging task that has attracted growing attention in recent years. In this paper, we propose using supervised deep learning features to improve the accuracy of existing age estimation algorithms. Among the many approaches to the problem, active appearance models and bio-inspired features are two that have shown the best accuracy. For the experiments we chose the popular, publicly available FG-NET database, which contains 1002 images with a broad variety of light, pose, and expression. The LOPO (leave-one-person-out) method was used to estimate accuracy. The experiments demonstrated that adding supervised deep learning features improved the accuracy of some basic models. For example, adding the features to an active appearance model gave a 4% gain (the error decreased from 4.59 to 4.41).

  8. Antenna arrays: waveguide layout designing automation

    OpenAIRE

    Anamova, R. R.

    2014-01-01

    Automation of waveguide layout design for large phased antenna arrays is studied. A new automation methodology and algorithms based on the flexible connection routing method are proposed. The results are implemented in the software module WDS (Waveguide Design Solution), built on the SolidWorks system. This module makes it possible to decrease design and engineering time and costs.

  9. Classifier Directed Data Hybridization for Geographic Sample Supervised Segment Generation

    Directory of Open Access Journals (Sweden)

    Christoff Fourie

    2014-11-01

    Full Text Available Quality segment generation is a well-known challenge and research objective within Geographic Object-based Image Analysis (GEOBIA). Although methodological avenues within GEOBIA are diverse, segmentation commonly plays a central role in most approaches, influencing and being influenced by surrounding processes. A general approach using supervised quality measures, specifically user-provided reference segments, suggests casting the parameters of a given segmentation algorithm as a multidimensional search problem. In such a sample supervised segment generation approach, spatial metrics observing the user-provided reference segments may drive the search process. The search is commonly performed by metaheuristics. A novel sample supervised segment generation approach is presented in this work, where the spectral content of provided reference segments is queried. A one-class classification process using spectral information from inside the provided reference segments is used to generate a probability image, which in turn is employed to direct a hybridization of the original input imagery. Segmentation is performed on such a hybrid image. These processes are adjustable, interdependent and form part of the search problem. Results are presented detailing the performance of four method variants compared to the generic sample supervised segment generation approach, under various conditions, in terms of resultant segment quality, required computing time and search process characteristics. Multiple metrics, metaheuristics and segmentation algorithms are tested with this approach. Using the spectral data contained within user-provided reference segments to tailor the output generally improves the results in the investigated problem contexts, but at the expense of additional computing time.

  10. A review of supervised machine learning applied to ageing research.

    Science.gov (United States)

    Fabris, Fabio; Magalhães, João Pedro de; Freitas, Alex A

    2017-04-01

    Broadly speaking, supervised machine learning is the computational task of learning correlations between variables in annotated data (the training set), and using this information to create a predictive model capable of inferring annotations for new data whose annotations are not known. Ageing is a complex process that affects nearly all animal species. This process can be studied at several levels of abstraction, in different organisms and with different objectives in mind. Not surprisingly, the diversity of the supervised machine learning algorithms applied to answer biological questions reflects the complexities of the underlying ageing processes being studied. Many works using supervised machine learning to study the ageing process have been published recently, so it is timely to review these works and discuss their main findings and weaknesses. In summary, the main findings of the reviewed papers are: the link between specific types of DNA repair and ageing; ageing-related proteins tend to be highly connected and seem to play a central role in molecular pathways; ageing/longevity is linked with autophagy and apoptosis, nutrient receptor genes, and copper and iron ion transport. Additionally, several biomarkers of ageing were found by machine learning. Despite some interesting machine learning results, we also identified a weakness of current works on this topic: only one of the reviewed papers has corroborated the computational results of machine learning algorithms through wet-lab experiments. In conclusion, supervised machine learning has contributed to advancing our knowledge and has provided novel insights on ageing, yet future work should place greater emphasis on validating the predictions.

  11. SUPERVISED MACHINE LEARNING MODEL FOR MICRORNA EXPRESSION DATA IN CANCER

    Directory of Open Access Journals (Sweden)

    Indra Waspada

    2017-06-01

    Full Text Available Cancer cell gene expression data generally have a very large number of features and require analysis to find out which genes strongly influence a specific disease, for diagnosis and drug discovery. In this paper, several supervised learning methods (decision tree, naïve Bayes, neural network, and deep learning) are used to classify cancer cells based on microRNA gene expression, to identify the best method for gene analysis. In this study there is no optimization or tuning of the algorithms, in order to test the capability of the general algorithms. There are 1881 microRNA gene expression features across 25 cancer classes based on tissue location. A simple feature selection method is used in the comparison of the algorithms. Experiments were conducted under various scenarios to test classification accuracy.
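    To make this kind of comparison concrete, here is a minimal sketch that benchmarks three scikit-learn classifiers behind a simple filter-based feature selector, on synthetic stand-in data shaped like the study's (1881 features, 25 classes); the models, the value of k, and the data are assumptions for illustration only.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(250, 1881))           # stand-in expression matrix
y = np.repeat(np.arange(25), 10)           # 25 tissue-based cancer classes

models = {
    "decision tree": DecisionTreeClassifier(random_state=0),
    "naive Bayes": GaussianNB(),
    "neural network": MLPClassifier(hidden_layer_sizes=(64,), max_iter=500),
}
for name, clf in models.items():
    # simple univariate filter feature selection before each classifier
    pipe = make_pipeline(StandardScaler(), SelectKBest(f_classif, k=200), clf)
    scores = cross_val_score(pipe, X, y, cv=5)
    print(f"{name}: {scores.mean():.3f}")
```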

  12. Sundhedsfaglig supervision som klinisk metode

    DEFF Research Database (Denmark)

    Nordentoft, Helle Merete

    2011-01-01

    The chapter reviews and discusses clinical supervision in health care (SFS) as a method. The aim of the chapter is to give the reader insight into, and a critical view of, the method's possibilities and limitations. The chapter first elaborates on the historical background of SFS and how the method unfolds in practice ... the method's future prospects, seen in light of the many new methods competing for the same target groups in the guidance landscape, such as coaching, debriefing and mentoring.

  13. Marketing automation

    National Research Council Canada - National Science Library

    Raluca Dania Todor

    2016-01-01

    The automation of the marketing process seems nowadays to be the only solution for facing the major changes brought by the fast evolution of technology and the continuous increase in supply and demand...

  14. An Effective Big Data Supervised Imbalanced Classification Approach for Ortholog Detection in Related Yeast Species

    Directory of Open Access Journals (Sweden)

    Deborah Galpert

    2015-01-01

    Full Text Available Orthology detection requires more effective scaling algorithms. In this paper, a set of gene pair features based on similarity measures (alignment scores, sequence length, gene membership to conserved regions, and physicochemical profiles) are combined in a supervised pairwise ortholog detection approach to improve effectiveness considering low ortholog ratios in relation to the possible pairwise comparison between two genomes. In this scenario, big data supervised classifiers managing imbalance between ortholog and nonortholog pair classes allow for an effective scaling solution built from two genomes and extended to other genome pairs. The supervised approach was compared with RBH, RSD, and OMA algorithms by using the following yeast genome pairs: Saccharomyces cerevisiae-Kluyveromyces lactis, Saccharomyces cerevisiae-Candida glabrata, and Saccharomyces cerevisiae-Schizosaccharomyces pombe as benchmark datasets. Because of the large amount of imbalanced data, the building and testing of the supervised model were only possible by using big data supervised classifiers managing imbalance. Evaluation metrics taking low ortholog ratios into account were applied. From the effectiveness perspective, MapReduce Random Oversampling combined with Spark SVM outperformed RBH, RSD, and OMA, probably because of the consideration of gene pair features beyond alignment similarities combined with the advances in big data supervised classification.

  16. Development of Raman microspectroscopy for automated detection and imaging of basal cell carcinoma

    Science.gov (United States)

    Larraona-Puy, Marta; Ghita, Adrian; Zoladek, Alina; Perkins, William; Varma, Sandeep; Leach, Iain H.; Koloydenko, Alexey A.; Williams, Hywel; Notingher, Ioan

    2009-09-01

    We investigate the potential of Raman microspectroscopy (RMS) for automated evaluation of excised skin tissue during Mohs micrographic surgery (MMS). The main aim is to develop an automated method for imaging and diagnosis of basal cell carcinoma (BCC) regions. Selected Raman bands responsible for the largest spectral differences between BCC and normal skin regions and linear discriminant analysis (LDA) are used to build a multivariate supervised classification model. The model is based on 329 Raman spectra measured on skin tissue obtained from 20 patients. BCC is discriminated from healthy tissue with 90 ± 9% sensitivity and 85 ± 9% specificity under a 70%/30% split cross-validation procedure. This multivariate model is then applied to tissue sections from new patients to image tumor regions. The RMS images show excellent correlation with the gold standard of histopathology sections, BCC being detected in all positive sections. We demonstrate the potential of RMS as an automated objective method for tumor evaluation during MMS. The replacement of current histopathology during MMS by a "generalization" of the proposed technique may improve the feasibility and efficacy of MMS, leading to wider use according to clinical need.
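    A hedged sketch of the band-selection-plus-LDA idea: train a linear discriminant on a few selected band intensities and score it on a 70/30 split. The spectra, labels, and band indices below are synthetic placeholders, not the paper's data or bands.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
spectra = rng.normal(size=(329, 1000))   # stand-in Raman spectra
labels = rng.integers(0, 2, size=329)    # 1 = BCC, 0 = normal skin
band_idx = [120, 340, 560, 780]          # stand-ins for discriminative bands
X = spectra[:, band_idx]                 # keep only the selected band intensities

# 70%/30% split evaluation, as in the abstract
Xtr, Xte, ytr, yte = train_test_split(X, labels, test_size=0.3, random_state=0)
lda = LinearDiscriminantAnalysis().fit(Xtr, ytr)
print("test accuracy:", lda.score(Xte, yte))
```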

  17. Target Localization in Wireless Sensor Networks Using Online Semi-Supervised Support Vector Regression

    Directory of Open Access Journals (Sweden)

    Jaehyun Yoo

    2015-05-01

    Full Text Available Machine learning has been successfully used for target localization in wireless sensor networks (WSNs) due to its accurate and robust estimation against highly nonlinear and noisy sensor measurements. For efficient and adaptive learning, this paper introduces online semi-supervised support vector regression (OSS-SVR). The first advantage of the proposed algorithm is that, based on a semi-supervised learning framework, it can reduce the required amount of labeled training data while maintaining accurate estimation. Second, with the extension to online learning, the proposed OSS-SVR automatically tracks changes in the system to be learned, such as varying noise characteristics. We compare the proposed algorithm with semi-supervised manifold learning, an online Gaussian process and online semi-supervised colocalization. The algorithms are evaluated for estimating the unknown location of a mobile robot in a WSN. The experimental results show that the proposed algorithm is more accurate with smaller amounts of labeled training data and is robust to varying noise. Moreover, the proposed algorithm is computationally fast while achieving the best localization performance in comparison with the other methods.

  18. Identifying Active Travel Behaviors in Challenging Environments Using GPS, Accelerometers, and Machine Learning Algorithms.

    Science.gov (United States)

    Ellis, Katherine; Godbole, Suneeta; Marshall, Simon; Lanckriet, Gert; Staudenmayer, John; Kerr, Jacqueline

    2014-01-01

    Active travel is an important area in physical activity research, but objective measurement of active travel is still difficult. Automated methods to measure travel behaviors will improve research in this area. In this paper, we present a supervised machine learning method for transportation mode prediction from global positioning system (GPS) and accelerometer data. We collected a dataset of about 150 h of GPS and accelerometer data from two research assistants following a protocol of prescribed trips consisting of five activities: bicycling, riding in a vehicle, walking, sitting, and standing. We extracted 49 features from 1-min windows of this data. We compared the performance of several machine learning algorithms and chose a random forest algorithm to classify the transportation mode. We used a moving average output filter to smooth the output predictions over time. The random forest algorithm achieved 89.8% cross-validated accuracy on this dataset. Adding the moving average filter to smooth output predictions increased the cross-validated accuracy to 91.9%. Machine learning methods are a viable approach for automating measurement of active travel, particularly for measuring travel activities that traditional accelerometer data processing methods misclassify, such as bicycling and vehicle travel.
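    The classify-then-smooth pipeline can be sketched as follows: a random forest predicts per-window class probabilities, and a moving average smooths those probabilities over consecutive minutes before taking the argmax. The features and labels here are synthetic stand-ins for the 49 windowed GPS/accelerometer features.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 49))            # per-minute feature windows
y = rng.integers(0, 5, size=600)          # 5 modes: bike, vehicle, walk, sit, stand

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X[:400], y[:400])
proba = clf.predict_proba(X[400:])        # per-window class probabilities

def moving_average(p, w=3):
    """Smooth each class-probability column over w consecutive windows."""
    kernel = np.ones(w) / w
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, p)

smoothed = moving_average(proba)
pred = smoothed.argmax(axis=1)            # final transportation mode per minute
```

    Smoothing exploits the fact that travel modes persist over several minutes, which is why it lifted accuracy from 89.8% to 91.9% in the study.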

  19. Identifying active travel behaviors in challenging environments using GPS, accelerometers and machine learning algorithms

    Directory of Open Access Journals (Sweden)

    Katherine eEllis

    2014-04-01

    Full Text Available Background: Active travel is an important area in physical activity research, but objective measurement of active travel is still difficult. Automated methods to measure travel behaviors will improve research in this area. In this paper we present a supervised machine learning method for transportation mode prediction from GPS and accelerometer data. Methods: We collected a dataset of about 150 hours of GPS and accelerometer data from two research assistants following a protocol of prescribed trips consisting of five activities: bicycling, riding in a vehicle, walking, sitting, and standing. We extracted 49 features from 1-minute windows of this data. We compared the performance of several machine learning algorithms and chose a random forest algorithm to classify the transportation mode. We used a moving average output filter to smooth the output predictions over time. Results: The random forest algorithm achieved 89.8% cross-validated accuracy on this dataset. Adding the moving average filter to smooth output predictions increased the cross-validated accuracy to 91.9%. Conclusions: Machine learning methods are a viable approach for automating measurement of active travel, particularly for measuring travel activities that traditional accelerometer data processing methods misclassify, such as bicycling and vehicle travel.

  20. Subsampled Hessian Newton Methods for Supervised Learning.

    Science.gov (United States)

    Wang, Chien-Chih; Huang, Chun-Heng; Lin, Chih-Jen

    2015-08-01

    Newton methods can be applied in many supervised learning approaches. However, for large-scale data, the use of the whole Hessian matrix can be time-consuming. Recently, subsampled Newton methods have been proposed to reduce the computational time by using only a subset of data for calculating an approximation of the Hessian matrix. Unfortunately, we find that in some situations, the running speed is worse than the standard Newton method because cheaper but less accurate search directions are used. In this work, we propose some novel techniques to improve the existing subsampled Hessian Newton method. The main idea is to solve a two-dimensional subproblem per iteration to adjust the search direction to better minimize the second-order approximation of the function value. We prove the theoretical convergence of the proposed method. Experiments on logistic regression, linear SVM, maximum entropy, and deep networks indicate that our techniques significantly reduce the running time of the subsampled Hessian Newton method. The resulting algorithm becomes a compelling alternative to the standard Newton method for large-scale data classification.
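    The core subsampling idea, a full-data gradient but Hessian-vector products computed on a random subset inside a conjugate-gradient solve, can be sketched for L2-regularized logistic regression as below; this simplification omits the paper's two-dimensional subproblem adjustment and line search, so it is an illustration of the general technique, not the proposed method.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad(w, X, y, lam):
    """Full-data gradient of the regularized logistic loss."""
    return X.T @ (sigmoid(X @ w) - y) / len(y) + lam * w

def hess_vec(w, v, Xs, lam):
    """Hessian-vector product estimated on the subsample Xs only."""
    s = sigmoid(Xs @ w)
    return Xs.T @ ((s * (1 - s)) * (Xs @ v)) / len(Xs) + lam * v

def cg(hv, b, iters=20, tol=1e-6):
    """Conjugate gradient for H d = b, given a Hessian-vector oracle hv."""
    x = np.zeros_like(b); r = b.copy(); p = r.copy()
    for _ in range(iters):
        Ap = hv(p); alpha = (r @ r) / (p @ Ap)
        x += alpha * p; r_new = r - alpha * Ap
        if np.linalg.norm(r_new) < tol:
            break
        p = r_new + ((r_new @ r_new) / (r @ r)) * p; r = r_new
    return x

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 20)); w_true = rng.normal(size=20)
y = (X @ w_true > 0).astype(float)
w, lam = np.zeros(20), 1e-3
for _ in range(10):                                     # Newton iterations
    idx = rng.choice(len(X), size=500, replace=False)   # 10% Hessian subsample
    w += cg(lambda v: hess_vec(w, v, X[idx], lam), -grad(w, X, y, lam))
```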

  1. Estimating travel and service times for automated route planning and service certification in municipal waste management.

    Science.gov (United States)

    Ghiani, Gianpaolo; Guerrieri, Antonio; Manni, Andrea; Manni, Emanuele

    2015-12-01

    Nowadays, route planning algorithms are commonly used to generate detailed work schedules for solid waste collection vehicles. However, the reliability of such schedules relies heavily on the accuracy of a number of parameters, such as the actual service time at each collection location and the traversal times of the streets (which depend on the specific day of the week and the time of day). In this paper, we propose an automated classification and estimation algorithm that, based on Global Positioning System data collected by the fleet, estimates such parameters in a timely and accurate fashion. In particular, our approach is able to automatically classify events such as stops due to traffic jams, stops at traffic lights and stops at collection sites. The system can also be used for automated fleet supervision and to report on a web site whether certain services have actually been provided on a given day, thus making waste management more accountable to citizens. Experiments carried out in an Italian municipality show the advantages of our approach.

  2. Supervised filters for EEG signal in naturally occurring epilepsy forecasting.

    Directory of Open Access Journals (Sweden)

    Francisco Javier Muñoz-Almaraz

    Full Text Available Nearly 1% of the global population has epilepsy. Forecasting epileptic seizures with an acceptable confidence level could improve the treatment of the disease and thus the lifestyle of the people who suffer from it. To do that, the electroencephalogram (EEG) signal is usually studied through spectral power band filtering, but this paper proposes an alternative novel method of preprocessing the EEG signal based on supervised filters. Such filters have been employed in a machine learning algorithm, such as the K-Nearest Neighbor (KNN), to improve the prediction of seizures. With this novel approach, the proposed solution extends an algorithm that won third prize in an international Data Science challenge hosted on the Kaggle platform and promoted by the American Epilepsy Society, the Epilepsy Foundation, the National Institutes of Health (NIH) and the Mayo Clinic. A formal description of these preprocessing methods is presented, and a detailed analysis in terms of the Receiver Operating Characteristic (ROC) curve and the Area Under the ROC Curve is performed. The obtained results show statistically significant improvements when compared with the typical spectral power band filtering (PBF) baseline. A trend between performance and dataset size is observed, suggesting that the supervised filters bring better information than the conventional PBF filters as the dataset grows in terms of monitored variables (sensors) and time length. The paper demonstrates better forecasting accuracy when the new filters are employed, and its main contribution is in the field of machine learning algorithms for developing more accurate predictive systems.
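    For orientation, the PBF baseline that the supervised filters are compared against can be approximated as band-power features fed to a KNN classifier. The sampling rate, band edges, window length, and labels below are illustrative assumptions, not the challenge's exact configuration.

```python
import numpy as np
from scipy.signal import welch
from sklearn.neighbors import KNeighborsClassifier

FS = 400                                   # sampling rate in Hz (assumed)
BANDS = [(0.5, 4), (4, 8), (8, 13), (13, 30), (30, 70)]  # delta..gamma

def band_powers(window):
    """Average spectral power in each band for one single-channel window."""
    f, pxx = welch(window, fs=FS, nperseg=FS)
    return [pxx[(f >= lo) & (f < hi)].mean() for lo, hi in BANDS]

rng = np.random.default_rng(0)
windows = rng.normal(size=(300, 10 * FS))  # 300 ten-second EEG windows
labels = rng.integers(0, 2, size=300)      # 1 = preictal, 0 = interictal

X = np.array([band_powers(w) for w in windows])
knn = KNeighborsClassifier(n_neighbors=5).fit(X[:200], labels[:200])
print("held-out accuracy:", knn.score(X[200:], labels[200:]))
```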

  3. Applying active learning to supervised word sense disambiguation in MEDLINE.

    Science.gov (United States)

    Chen, Yukun; Cao, Hongxin; Mei, Qiaozhu; Zheng, Kai; Xu, Hua

    2013-01-01

    The aim of this study was to assess whether active learning strategies can be integrated with supervised word sense disambiguation (WSD) methods, thus reducing the number of annotated samples while keeping or improving the quality of disambiguation models. We developed support vector machine (SVM) classifiers to disambiguate 197 ambiguous terms and abbreviations in the MSH WSD collection. Three different uncertainty sampling-based active learning algorithms were implemented with the SVM classifiers and compared with a passive learner (PL) based on random sampling. For each ambiguous term and each learning algorithm, a learning curve was generated that plots the accuracy computed from the test set as a function of the number of annotated samples used in the model. The area under the learning curve (ALC) was used as the primary metric for evaluation. Our experiments demonstrated that active learners (ALs) significantly outperformed the PL, showing better performance for 177 out of 197 (89.8%) WSD tasks. Further analysis showed that to achieve an average accuracy of 90%, the PL needed 38 annotated samples, while the ALs needed only 24, a 37% reduction in annotation effort. Moreover, we analyzed cases where active learning algorithms did not achieve superior performance and identified three causes: (1) poor models in the early learning stage; (2) easy WSD cases; and (3) difficult WSD cases, which provide useful insight for future improvements. This study demonstrated that integrating active learning strategies with supervised WSD methods can effectively reduce annotation cost and improve disambiguation models.
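    The uncertainty-sampling loop at the heart of such a setup can be sketched as: fit an SVM on the current labeled set, score the unlabeled pool by how close its predicted probability is to 0.5, and query the most uncertain instance. The pool, seed set, and query budget below are synthetic assumptions.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_pool = rng.normal(size=(500, 30))
y_pool = (X_pool[:, 0] + 0.3 * rng.normal(size=500) > 0).astype(int)

# seed set with both classes represented
labeled = list(np.where(y_pool == 0)[0][:5]) + list(np.where(y_pool == 1)[0][:5])
for _ in range(40):                                    # 40 oracle queries
    clf = SVC(probability=True).fit(X_pool[labeled], y_pool[labeled])
    unlabeled = [i for i in range(len(X_pool)) if i not in labeled]
    proba = clf.predict_proba(X_pool[unlabeled])
    margin = np.abs(proba[:, 1] - 0.5)                 # small = uncertain
    labeled.append(unlabeled[int(margin.argmin())])    # query the most uncertain
print("labeled after querying:", len(labeled))
```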

  4. The Science of Home Automation

    Science.gov (United States)

    Thomas, Brian Louis

    Smart home technologies and the concept of home automation have become more popular in recent years. This popularity has been accompanied by social acceptance of passive sensors installed throughout the home. The subsequent increase in smart homes facilitates the creation of home automation strategies. We believe that home automation strategies can be generated intelligently by utilizing smart home sensors and activity learning. In this dissertation, we hypothesize that home automation can benefit from activity awareness. To test this, we develop our activity-aware smart automation system, CARL (CASAS Activity-aware Resource Learning). CARL learns the associations between activities and device usage from historical data and utilizes the activity-aware capabilities to control the devices. To help validate CARL we deploy and test three different versions of the automation system in a real-world smart environment. To provide a foundation of activity learning, we integrate existing activity recognition and activity forecasting into CARL home automation. We also explore two alternatives to using human-labeled data to train the activity learning models. The first unsupervised method is Activity Detection, and the second is a modified DBSCAN algorithm that utilizes Dynamic Time Warping (DTW) as a distance metric. We compare the performance of activity learning with human-defined labels and with automatically-discovered activity categories. To provide evidence in support of our hypothesis, we evaluate CARL automation in a smart home testbed. Our results indicate that home automation can be boosted through activity awareness. We also find that the resulting automation has a high degree of usability and comfort for the smart home resident.
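    Of the two unsupervised alternatives mentioned, the DBSCAN-with-DTW idea can be sketched as follows: a plain dynamic-programming DTW fills a precomputed distance matrix over variable-length sensor sequences, which DBSCAN then clusters. The sequences and the eps value are assumptions; this is not CARL's implementation.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def dtw(a, b):
    """Classic O(nm) dynamic time warping distance between two 1-D sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

rng = np.random.default_rng(0)
# variable-length sensor sequences from three notional activities
seqs = [rng.normal(loc=k, size=rng.integers(40, 60)) for k in (0, 0, 3, 3, 6)]
n = len(seqs)
dist = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        dist[i, j] = dist[j, i] = dtw(seqs[i], seqs[j])

labels = DBSCAN(eps=60.0, min_samples=2, metric="precomputed").fit_predict(dist)
print(labels)      # discovered activity categories; -1 marks noise sequences
```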

  5. Clinical supervision in a challenging behaviour unit.

    Science.gov (United States)

    Carney, Stewart

    As part of an ongoing service development programme at St Andrew's Hospital, Northampton, it was identified that it would be beneficial to explore whether qualified nursing staff in the hospital's five clinical divisions were satisfied with the clinical supervision they received. The survey also examined whether supervision was of good quality, was suitable for different specialist environments, and whether it affected motivation, skills, confidence and stress levels. It further explored whether there was a difference between D- or E-grade nurses and nurses at F grade and above regarding their perception of clinical supervision. The study used a Likert-scaled questionnaire (Ladany et al, 1996) and a retrospective (ex post facto) cross-sectional survey design. A questionnaire and information sheet were dispatched to 50 qualified nursing staff; ten nurses from each of the five divisions were invited to participate. After one month, 35 (70 per cent) had returned the questionnaires. Senior staff benefit more from, and are more satisfied with, regular supervision than junior staff. The survey shows that clinical supervision is in quite good shape, with most nurses receiving regular supervision within a limited time span. Large numbers of qualified nurses receive supervision in the hospital, which is extremely positive. However, there are a number of discrepancies regarding who receives supervision, within what time frame, and why so many qualified nurses feel the supervision is not helping them work more effectively.

  6. The Optimization of setting the Supervision in Organization

    OpenAIRE

    Vyštejnová, Markéta

    2015-01-01

    The Optimization of setting the Supervision in Organization Abstract This diploma thesis aims to optimize supervision in an organization providing social services. The theoretical part contains chapters about supervision; it explains the function and types of supervision, the supervisor's personality, the content of the supervision contract and ethical principles in supervision. This part also describes the organization's culture and its impact on supervision. The theoretical part concludes with a chapt...

  7. Abusive Supervision and Subordinate Performance : Instrumentality Considerations in the Emergence and Consequences of Abusive Supervision

    NARCIS (Netherlands)

    Walter, Frank; Lam, Catherine K.; van der Vegt, Geert; Huang, X.; Miao, Q.

    Drawing from moral exclusion theory, this article examines outcome dependence and interpersonal liking as key boundary conditions for the linkage between perceived subordinate performance and abusive supervision. Moreover, it investigates the role of abusive supervision for subordinates' subsequent,

  8. Semi-Supervised Half-Quadratic Nonnegative Matrix Factorization for Face Recognition

    KAUST Repository

    Alghamdi, Masheal M.

    2014-05-01

    Face recognition is a challenging problem in computer vision. Difficulties such as slight differences between similar faces of different people, changes in facial expression, light and illumination conditions, and pose variations add extra complications to face recognition research. Many algorithms are devoted to solving the face recognition problem, among which the family of nonnegative matrix factorization (NMF) algorithms has been widely used as a compact data representation method. Different versions of NMF have been proposed. Wang et al. proposed the graph-based semi-supervised nonnegative learning (S2N2L) algorithm that uses labeled data in constructing intrinsic and penalty graphs to enforce separability of labeled data, which leads to greater discriminating power. Moreover, the geometrical structure of labeled and unlabeled data is preserved by applying the smoothness assumption: a similarity graph is created that conserves the neighboring information of all labeled and unlabeled data. However, S2N2L is sensitive to light changes, illumination, and partial occlusion. In this thesis, we propose a Semi-Supervised Half-Quadratic NMF (SSHQNMF) algorithm that combines the benefits of S2N2L and the robust NMF by half-quadratic minimization (HQNMF) algorithm. Our algorithm improves upon S2N2L by replacing the Frobenius norm with a robust M-estimator loss function. A multiplicative update solution for our SSHQNMF algorithm is derived using half-quadratic (HQ) theory. Extensive experiments on the ORL, Yale-A, and a subset of the PIE data sets, covering nine M-estimator loss functions for both the SSHQNMF and HQNMF algorithms, are conducted and compared with several state-of-the-art supervised and unsupervised algorithms, along with the original S2N2L algorithm, in the context of classification, clustering, and robustness against partial occlusion. The proposed algorithm outperformed the other algorithms. Furthermore, SSHQNMF with Maximum Correntropy
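    For context, the multiplicative-update machinery referred to above builds on the standard Lee-Seung NMF iteration for the Frobenius objective, sketched below on synthetic nonnegative data; the label graph and half-quadratic loss terms of SSHQNMF are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
V = np.abs(rng.normal(size=(100, 50)))    # data matrix (e.g., vectorized faces)
r = 10                                     # rank of the factorization
W = np.abs(rng.normal(size=(100, r)))      # basis
H = np.abs(rng.normal(size=(r, 50)))       # coefficients

eps = 1e-9                                 # avoids division by zero
for _ in range(200):
    H *= (W.T @ V) / (W.T @ W @ H + eps)   # multiplicative update of H
    W *= (V @ H.T) / (W @ H @ H.T + eps)   # multiplicative update of W
print("reconstruction error:", np.linalg.norm(V - W @ H))
```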

  9. Algorithming the Algorithm

    DEFF Research Database (Denmark)

    Mahnke, Martina; Uprichard, Emma

    2014-01-01

    changes: it’s not the ocean, it’s the internet we’re talking about, and it’s not a TV show producer, but algorithms that constitute a sort of invisible wall. Building on this assumption, most research is trying to ‘tame the algorithmic tiger’. While this is a valuable and often inspiring approach, we...... would like to emphasize another side to the algorithmic everyday life. We argue that algorithms can instigate and facilitate imagination, creativity, and frivolity, while saying something that is simultaneously old and new, always almost repeating what was before but never quite returning. We show...... this by threading together stimulating quotes and screenshots from Google’s autocomplete algorithms. In doing so, we invite the reader to re-explore Google’s autocomplete algorithms in a creative, playful, and reflexive way, thereby rendering more visible some of the excitement and frivolity that comes from being...

  10. Greenhouse automation with the OMRON C200HX programmable controller

    OpenAIRE

    Lampret, Dejan

    2012-01-01

    This thesis describes greenhouse automation. With automation we want to create and maintain suitable climatic conditions in the greenhouse, while monitoring and supervising the operation through a graphical interface on a PC. We use an OMRON controller that is able to manage the entire process: while the program is running, it reads sensor measurements and powers up devices as needed. The program running on the controller is t...

  11. Teaching the computer to code frames in news: comparing two supervised machine learning approaches to frame analysis

    NARCIS (Netherlands)

    Burscher, B.; Odijk, D.; Vliegenthart, R.; de Rijke, M.; de Vreese, C.H.

    2014-01-01

    We explore the application of supervised machine learning (SML) to frame coding. By automating the coding of frames in news, SML facilitates the incorporation of large-scale content analysis into framing research, even if financial resources are scarce. This furthers a more integrated investigation

  12. Primary health care supervision in developing countries.

    Science.gov (United States)

    Bosch-Capblanch, Xavier; Garner, Paul

    2008-03-01

    To (a) summarise opinion about what supervision of primary health care is by those advocating it; (b) compare these features with reports describing supervision in practice; and (c) appraise the evidence of its effects on sector performance. Systematic review. Reports were classified into three groups and summarised using appropriate methods: policy and opinion papers (narrative summary), descriptive studies (systematically summarised) and experimental or quasi-experimental studies (design and outcomes systematically summarised). Data are presented as narrative summaries and tables. 74 reports were included. In eight policy and opinion papers, supervision was conceptualised as the link between the district and the peripheral health staff; it is considered important for performance and staff motivation; it often includes problem solving, reviewing records, and observing clinical practice; and it is usually undertaken by visiting the supervisee's place of work. In 54 descriptive studies, the setting was primary health care (PHC) or specific services and programmes. Supervisor-supervisee dyads were generally district personnel supervising health facilities or lay health workers. Supervision mostly meant visiting supervisees, but also included meetings in the centre; it appeared to focus on administration and checking, sometimes with checklists. Problem solving, feedback and clinical supervision, training, and consultation with the community were less commonly described in the descriptive studies. In the studies that reported costs, supervision appears expensive. In 12 quasi-experimental trials, supervision interventions generally showed small positive effects on some of the outcomes assessed. However, trial quality was mixed, and outcomes varied greatly between studies. Supervision is widely recommended, but it is a complex intervention and is implemented in different ways. There is some evidence of benefit for health care performance, but the studies are generally limited in the rigor

  13. Empirical study of supervised gene screening

    Directory of Open Access Journals (Sweden)

    Ma Shuangge

    2006-12-01

    Full Text Available Abstract Background Microarray studies provide a way of linking variations in phenotypes with their genetic causes. Constructing predictive models using high dimensional microarray measurements usually consists of three steps: (1) unsupervised gene screening; (2) supervised gene screening; and (3) statistical model building. Supervised gene screening based on marginal gene ranking is commonly used to reduce the number of genes in the model building. Various simple statistics, such as the t-statistic or signal-to-noise ratio, have been used to rank genes in the supervised screening. Despite its extensive usage, statistical study of supervised gene screening remains scarce. Our study is partly motivated by the differences in gene discovery results caused by using different supervised gene screening methods. Results We investigate the concordance and reproducibility of supervised gene screening based on eight commonly used marginal statistics. Concordance is assessed by the relative fraction of overlap between top-ranked genes screened using different marginal statistics. We propose a Bootstrap Reproducibility Index, which measures the reproducibility of individual genes under the supervised screening. Empirical studies are based on four public microarray data sets. We consider the cases where the top 20%, 40% and 60% of genes are screened. Conclusion From a gene discovery point of view, the effect of supervised gene screening based on different marginal statistics cannot be ignored. Empirical studies show that (1) genes passing different supervised screenings may be considerably different; (2) concordance may vary, depending on the underlying data structure and percentage of selected genes; (3) evaluated with the Bootstrap Reproducibility Index, genes passing supervised screenings are only moderately reproducible; and (4) concordance cannot be improved by supervised screening based on reproducibility.
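    The overlap-based concordance idea can be sketched in a few lines: rank genes by two marginal statistics (here an absolute t-statistic and a signal-to-noise ratio), keep the top 20% under each, and report the fraction of overlap. The data are synthetic stand-ins.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 2000))            # 60 samples x 2000 genes
y = rng.integers(0, 2, size=60)            # two phenotype groups

# two marginal statistics for ranking genes
t_stat = np.abs(stats.ttest_ind(X[y == 1], X[y == 0], axis=0).statistic)
snr = np.abs(X[y == 1].mean(0) - X[y == 0].mean(0)) / (
    X[y == 1].std(0) + X[y == 0].std(0))

k = int(0.2 * X.shape[1])                  # screen the top 20% of genes
top_t = set(np.argsort(-t_stat)[:k])
top_snr = set(np.argsort(-snr)[:k])
print("overlap fraction:", len(top_t & top_snr) / k)
```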

  14. ICT Strategies and Tools for the Improvement of Instructional Supervision. The Virtual Supervision

    Science.gov (United States)

    Cano, Esteban Vazquez; Garcia, Ma. Luisa Sevillano

    2013-01-01

    This study aims to evaluate and analyze strategies, proposals, and ICT tools to promote a paradigm shift in educational supervision that reaches the schools of this century, which are involved not only in face-to-face teaching but also in e-learning and blended learning. Traditional models of educational supervision do not guarantee adequate supervision of the…

  15. Cognitive Radio for Smart Grid: Theory, Algorithms, and Security

    Directory of Open Access Journals (Sweden)

    Raghuram Ranganathan

    2011-01-01

    Full Text Available Recently, cognitive radio and smart grid are two areas which have received considerable research impetus. Cognitive radios are intelligent software defined radios (SDRs) that efficiently utilize the unused regions of the spectrum to achieve higher data rates. The smart grid is an automated electric power system that monitors and controls grid activities. In this paper, the novel concept of incorporating a cognitive radio network as the communications infrastructure for the smart grid is presented. A brief overview of the cognitive radio, the IEEE 802.22 standard and the smart grid is provided. Experimental results obtained by using dimensionality reduction techniques such as principal component analysis (PCA), kernel PCA, and landmark maximum variance unfolding (LMVU) on Wi-Fi signal measurements are presented in a spectrum sensing context. Furthermore, compressed sensing algorithms such as Bayesian compressed sensing and the compressed sensing Kalman filter are employed for recovering the sparse smart meter transmissions. From the power system point of view, a supervised learning method called the support vector machine (SVM) is used for the automated classification of power system disturbances. The impending problem of securing the smart grid is also addressed, in addition to the possibility of applying FPGA-based fuzzy logic intrusion detection for the smart grid.

  16. Automated Wormscan

    Science.gov (United States)

    Puckering, Timothy; Thompson, Jake; Sathyamurthy, Sushruth; Sukumar, Sinduja; Shapira, Tirosh; Ebert, Paul

    2017-01-01

    There has been a recent surge of interest in computer-aided rapid data acquisition to increase the potential throughput and reduce the labour costs of large scale Caenorhabditis elegans studies. We present Automated WormScan, a low-cost, high-throughput automated system using commercial photo scanners, which is extremely easy to implement and use, capable of scoring tens of thousands of organisms per hour with minimal operator input, and is scalable. The method does not rely on software training for image recognition, but uses the generation of difference images from sequential scans to identify moving objects. This approach results in robust identification of worms with little computational demand. We demonstrate the utility of the system by conducting toxicity, growth and fecundity assays, which demonstrate the consistency of our automated system, the quality of the data relative to manual scoring methods and congruity with previously published results. PMID:28413617
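    The difference-image step can be sketched with NumPy and SciPy: subtract sequential scans, threshold the absolute difference, and count connected components as candidate moving animals. The images, the "worm", and the threshold are synthetic assumptions, not Automated WormScan's code.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
scan1 = rng.normal(size=(200, 200))        # stand-ins for grayscale scans
scan2 = scan1.copy()
scan2[50:55, 80:90] += 5.0                 # a "worm" that moved between scans

diff = np.abs(scan2 - scan1)               # difference image of sequential scans
mask = diff > 3.0                          # keep only strong changes
labels, n_objects = ndimage.label(mask)    # connected-component analysis
print("moving objects detected:", n_objects)
```

    Because only pixels that change between scans survive the threshold, stationary debris is ignored, which is what lets this approach avoid trained image-recognition models.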

  17. Multiclass semi-supervised learning for animal behavior recognition from accelerometer data

    NARCIS (Netherlands)

    Tanha, J.; van Someren, M.; de Bakker, M.; Bouten, W.; Shamoun-Baranes, J.; Afsarmanesh, H.

    2012-01-01

    In this paper we present a new Multiclass semi-supervised learning algorithm that uses a base classifier in combination with a similarity function applied to all data to find a classifier that maximizes the margin and consistency over all data. A novel multiclass loss function is presented and used

  18. Automated Testing of Event-Driven Applications

    DEFF Research Database (Denmark)

    Jensen, Casper Svenning

    the process of automated testing of web applications that depend on client-server communication, and we present a learning algorithm for inferring such server interface descriptions from concrete observations. We implement tools for web applications and Android mobile applications using the above algorithms...

  19. Teacher Supervision Practices and Principals' Characteristics

    Science.gov (United States)

    April, Daniel; Bouchamma, Yamina

    2015-01-01

    A questionnaire was used to determine the individual and collective teacher supervision practices of school principals and vice-principals in Québec (n = 39) who participated in a research-action study on pedagogical supervision. These practices were then analyzed in terms of the principals' sociodemographic and socioprofessional characteristics…

  20. Supportive supervision for medicines management in government ...

    African Journals Online (AJOL)

    Introduction: effective supportive supervision is widely recognized as essential for optimal management of medicines in government health facilities and for contributing to improved access to and utilization of health services. This study sought to examine the extent to which supportive supervision for medicines management ...

  1. Gender and Power in Counselling and Supervision.

    Science.gov (United States)

    Taylor, Maye

    1994-01-01

    Addresses the need to reflect on how the dynamics of gender and power can be articulated together and adversely affect counseling and supervision relationships. Suggests incorporating a social analysis into supervision to help counselors clarify the political nature of some therapeutic issues, thus addressing gender stereotypes. Supports a…

  2. The Elements: A Model of Mindful Supervision

    Science.gov (United States)

    Sturm, Deborah C.; Presbury, Jack; Echterling, Lennis G.

    2012-01-01

    Mindfulness, based on an ancient spiritual practice, is a core quality and way of being that can deepen and enrich the supervision of counselors. This model of mindful supervision incorporates Buddhist and Hindu conceptualizations of the roles of the five elements--space, earth, water, fire, air--as they relate to adhikara or studentship, the…

  3. Supervision of Psychotherapy: Models, Issues, and Recommendations

    Science.gov (United States)

    Westefeld, John S.

    2009-01-01

    Current models and issues related to psychotherapy supervision are examined. These include ethical and legal issues, problems of interpersonal competence, and multicultural issues. As a part of this analysis, interviews about supervision with five prominent counseling psychologists are included to provide their perspectives. Implications for the…

  4. Ethical Issues in the Conduct of Supervision.

    Science.gov (United States)

    Sherry, Patrick

    1991-01-01

    Uses American Psychological Association code of ethics to understand ethical issues present in the conduct of supervision. Discusses ethical issues of responsibility, client and supervisee welfare, confidentiality, competency, moral and legal standards, public statements, and professional relationships in relation to supervision. (Author/NB)

  5. A Gestalt Approach to Group Supervision

    Science.gov (United States)

    Melnick, Joseph; Fall, Marijane

    2008-01-01

    The authors define and then describe the practice of group supervision. The role of creative experiment in assisting supervisees who perceive themselves as confused, moving in circles, or immobilized is described. Fictional case examples illustrate these issues in supervision. The authors posit the "good fit" of Gestalt theory and techniques with…

  6. 19 CFR 146.3 - Customs supervision.

    Science.gov (United States)

    2010-04-01

    19 CFR § 146.3 (2010): Customs supervision. U.S. Customs and Border Protection, Department of Homeland Security; Department of the Treasury (Continued); Foreign Trade Zones; General Provisions. (a) Assignment of Customs...

  7. Applying Services Marketing Principles to Postgraduate Supervision

    Science.gov (United States)

    Dann, Stephen

    2008-01-01

    Purpose: The paper aims to describe the application of two key service quality frameworks for improving the delivery of postgraduate research supervision. The services quality frameworks are used to identify key areas of overlap between services marketing practice and postgraduate supervision that can be used by the supervisor to improve research…

  8. On Cavellian scepticism and postgraduate student supervision ...

    African Journals Online (AJOL)

    In this essay I offer an imaginary of postgraduate student supervision focusing on sceptical encounters with the other. Drawing on the seminal thoughts of Harvard philosopher Stanley Cavell (1997), particularly on his ideas on 'living with scepticism', I argue that postgraduate student supervision ought to be an encounter ...

  9. Algorithms Introduction to Algorithms

    Indian Academy of Sciences (India)

    R K Shyamasundar. Series Article. Resonance – Journal of Science Education, Volume 1, Issue 1, January 1996, pp. 20-27. Permanent link: http://www.ias.ac.in/article/fulltext/reso/001/01/0020-0027

  10. Developing an Automated Machine Learning Marine Oil Spill Detection System with Synthetic Aperture Radar

    Science.gov (United States)

    Pinales, J. C.; Graber, H. C.; Hargrove, J. T.; Caruso, M. J.

    2016-02-01

    Previous studies have demonstrated the ability to detect and classify marine hydrocarbon films with spaceborne synthetic aperture radar (SAR) imagery. The dampening effects of hydrocarbon discharges on small surface capillary-gravity waves renders the ocean surface "radar dark" compared with the standard wind-borne ocean surfaces. Given the scope and impact of events like the Deepwater Horizon oil spill, the need for improved, automated and expedient monitoring of hydrocarbon-related marine anomalies has become a pressing and complex issue for governments and the extraction industry. The research presented here describes the development, training, and utilization of an algorithm that detects marine oil spills in an automated, semi-supervised manner, utilizing X-, C-, or L-band SAR data as the primary input. Ancillary datasets include related radar-borne variables (incidence angle, etc.), environmental data (wind speed, etc.) and textural descriptors. Shapefiles produced by an experienced human-analyst served as targets (validation) during the training portion of the investigation. Training and testing datasets were chosen for development and assessment of algorithm effectiveness as well as optimal conditions for oil detection in SAR data. The algorithm detects oil spills by following a 3-step methodology: object detection, feature extraction, and classification. Previous oil spill detection and classification methodologies such as machine learning algorithms, artificial neural networks (ANN), and multivariate classification methods like partial least squares-discriminant analysis (PLS-DA) are evaluated and compared. Statistical, transform, and model-based image texture techniques, commonly used for object mapping directly or as inputs for more complex methodologies, are explored to determine optimal textures for an oil spill detection system. The influence of the ancillary variables is explored, with a particular focus on the role of strong vs. weak wind forcing.
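    One building block named above, texture descriptors feeding a supervised classifier, can be sketched with gray-level co-occurrence matrix (GLCM) features and a random forest; the patches, labels, and GLCM settings below are illustrative assumptions rather than the study's configuration (scikit-image 0.19+ spells these functions graycomatrix/graycoprops).

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.ensemble import RandomForestClassifier

def glcm_features(patch):
    """Contrast/homogeneity/energy textures from a quantized 8-bit SAR patch."""
    glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    return [graycoprops(glcm, p).mean()
            for p in ("contrast", "homogeneity", "energy")]

rng = np.random.default_rng(0)
patches = rng.integers(0, 256, size=(80, 32, 32), dtype=np.uint8)
labels = rng.integers(0, 2, size=80)       # 1 = oil, 0 = look-alike dark spot

X = np.array([glcm_features(p) for p in patches])
clf = RandomForestClassifier(random_state=0).fit(X[:60], labels[:60])
print("held-out accuracy:", clf.score(X[60:], labels[60:]))
```

    In a real system, these texture features would be concatenated with the radar and environmental variables (incidence angle, wind speed, etc.) before classification.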

  11. Remote Supervision and Control of Air Conditioning Systems in Different Modes

    Science.gov (United States)

    Rafeeq, Mohammed; Afzal, Asif; Rajendra, Sree

    2018-01-01

    In the era of automation, most applications in engineering and science involve interconnected systems that must be controlled for optimal operation. To obtain efficient operation and the desired response, interconnected systems should be controlled by directing, regulating and commanding. Here, an air conditioning (AC) system is considered for experimentation, to supervise and control its functioning in both automated and manual modes. This paper reports work intended to design and develop an automated and manual AC system working in remote and local modes, to increase the level of comfort, ease operation, and reduce human intervention and faults occurring in the system. A Programmable Logic Controller (PLC) and a Supervisory Control and Data Acquisition (SCADA) system were used for remote supervision and monitoring of the AC system, using the Series Ninety protocol and the remote terminal unit Modbus protocol as communication modules to operate in remote mode. The PLC was used as a remote terminal for continuous supervision and control of the AC system. The SCADA software was used as a tool for designing a user-friendly graphical interface. The proposed SCADA AC system successfully monitors and controls parameters such as temperature, pressure, humidity and voltage within their limits. With all these features, the designed system is capable of efficiently handling resources such as the compressor and humidifier, with full safety and durability. The system also maintains the temperature and controls the humidity of the remote location, and monitors the health of the compressor.

  12. Effective use of technology in clinical supervision

    Directory of Open Access Journals (Sweden)

    Priya Martin

    2017-06-01

    Full Text Available Clinical supervision is integral to the continuing professional development of health professionals. With advances in technology, clinical supervision can also be undertaken using mediums such as videoconference, email and teleconference; this mode of clinical supervision is termed telesupervision. While telesupervision could be useful in any context, its value is amplified for health professionals working in rural and remote areas, where access to supervisors within the local work environment is often diminished. While telesupervision offers an innovative means of undertaking clinical supervision, there remain gaps in the literature regarding its parameters of use in clinical practice. This article outlines ten evidence-informed, practical tips, stemming from a review of the literature, that will enable health care stakeholders to use technology effectively and efficiently while undertaking clinical supervision. By highlighting the “how to” aspect, these tips help ensure telesupervision is delivered in the right way, to the right health professional, at the right time.

  13. Conduction Delay Learning Model for Unsupervised and Supervised Classification of Spatio-Temporal Spike Patterns

    Directory of Open Access Journals (Sweden)

    Takashi Matsubara

    2017-11-01

    Full Text Available Precise spike timing is considered to play a fundamental role in communications and signal processing in biological neural networks. Understanding the mechanism of spike timing adjustment would deepen our understanding of biological systems and enable advanced engineering applications such as efficient computational architectures. However, the biological mechanisms that adjust and maintain spike timing remain unclear. Existing algorithms adopt a supervised approach, which adjusts the axonal conduction delay and synaptic efficacy until the spike timings approximate the desired timings. This study proposes a spike timing-dependent learning model that adjusts the axonal conduction delay and synaptic efficacy in both unsupervised and supervised manners. The proposed learning algorithm approximates the Expectation-Maximization algorithm and classifies input data encoded into spatio-temporal spike patterns. Even in the supervised classification, unlike existing algorithms, the algorithm requires no external spikes indicating the desired spike timings. Furthermore, because the algorithm is consistent with biological models and hypotheses found in existing biological studies, it could capture the mechanism underlying biological delay learning.

  14. Library Automation.

    Science.gov (United States)

    Husby, Ole

    1990-01-01

    The challenges and potential benefits of automating university libraries are reviewed, with special attention given to cooperative systems. Aspects discussed include database size, the role of the university computer center, storage modes, multi-institutional systems, resource sharing, cooperative system management, networking, and intelligent…

  15. Noise-enhanced clustering and competitive learning algorithms.

    Science.gov (United States)

    Osoba, Osonde; Kosko, Bart

    2013-01-01

    Noise can provably speed up convergence in many centroid-based clustering algorithms. This includes the popular k-means clustering algorithm. The clustering noise benefit follows from the general noise benefit for the expectation-maximization algorithm because many clustering algorithms are special cases of the expectation-maximization algorithm. Simulations show that noise also speeds up convergence in stochastic unsupervised competitive learning, supervised competitive learning, and differential competitive learning.
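    A minimal sketch of the noise-benefit idea, assuming a simple annealed schedule rather than the paper's exact sufficient condition: inject decaying Gaussian noise into the k-means centroid updates.

```python
import numpy as np

def noisy_kmeans(X, k, iters=50, noise0=0.5, seed=0):
    """k-means with annealed noise added to each centroid update."""
    rng = np.random.default_rng(seed)
    C = X[rng.choice(len(X), k, replace=False)].copy()
    for t in range(iters):
        # assign each point to its nearest centroid
        assign = np.argmin(((X[:, None] - C[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(assign == j):
                C[j] = X[assign == j].mean(0)
        # annealed noise injection; scale decays as 1/(t+1)
        C += rng.normal(scale=noise0 / (t + 1), size=C.shape)
    return C, assign

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(m, 0.3, size=(100, 2)) for m in (0, 3, 6)])
centroids, assign = noisy_kmeans(X, k=3)
print(np.round(centroids, 2))
```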

  16. Phenotype classification of zebrafish embryos by supervised learning.

    Directory of Open Access Journals (Sweden)

    Nathalie Jeanray

    Full Text Available Zebrafish is increasingly used to assess the biological properties of chemical substances and is thus becoming a specific tool for toxicological and pharmacological studies. The effects of chemical substances on embryo survival and development are generally evaluated manually through microscopic observation by an expert and documented by several typical photographs. Here, we present a methodology to automatically classify brightfield images of wild-type zebrafish embryos according to their defects by using an image analysis approach based on supervised machine learning. We show that, compared to manual classification, automatic classification results in 90 to 100% agreement with consensus voting of biological experts for nine out of eleven considered defects in 3-day-old zebrafish larvae. Automation of the analysis and classification of zebrafish embryo pictures reduces the workload and time required of the biological expert and increases the reproducibility and objectivity of this classification.

  17. Supervised Learning with Complex-valued Neural Networks

    CERN Document Server

    Suresh, Sundaram; Savitha, Ramasamy

    2013-01-01

    Recent advancements in the fields of telecommunications, medical imaging and signal processing deal with signals that are inherently time varying, nonlinear and complex-valued. The time-varying, nonlinear characteristics of these signals can be effectively analyzed using artificial neural networks. Furthermore, to efficiently preserve the physical characteristics of these complex-valued signals, it is important to develop complex-valued neural networks and derive their learning algorithms to represent these signals at every step of the learning process. This monograph comprises a collection of new supervised learning algorithms along with novel architectures for complex-valued neural networks. Meta-cognition equipped with self-regulated learning is known to be among the best human learning strategies. In this monograph, the principles of meta-cognition have been introduced for complex-valued neural networks in both the batch and sequential learning modes. For applications where the computati...

  18. Sparse Markov chain-based semi-supervised multi-instance multi-label method for protein function prediction.

    Science.gov (United States)

    Han, Chao; Chen, Jian; Wu, Qingyao; Mu, Shuai; Min, Huaqing

    2015-10-01

    Automated assignment of protein function has received considerable attention in recent years for genome-wide study. With the rapid accumulation of genome sequencing data produced by high-throughput experimental techniques, the process of manually predicting functional properties of proteins has become increasingly cumbersome. Such large genomics data sets can only be annotated computationally. However, automated assignment of functions to unknown protein is challenging due to its inherent difficulty and complexity. Previous studies have revealed that solving problems involving complicated objects with multiple semantic meanings using the multi-instance multi-label (MIML) framework is effective. For the protein function prediction problems, each protein object in nature may associate with distinct structural units (instances) and multiple functional properties (class labels) where each unit is described by an instance and each functional property is considered as a class label. Thus, it is convenient and natural to tackle the protein function prediction problem by using the MIML framework. In this paper, we propose a sparse Markov chain-based semi-supervised MIML method, called Sparse-Markov. A sparse transductive probability graph is constructed to encode the affinity information of the data based on ensemble of Hausdorff distance metrics. Our goal is to exploit the affinity between protein objects in the sparse transductive probability graph to seek a sparse steady state probability of the Markov chain model to do protein function prediction, such that two proteins are given similar functional labels if they are close to each other in terms of an ensemble Hausdorff distance in the graph. Experimental results on seven real-world organism data sets covering three biological domains show that our proposed Sparse-Markov method is able to achieve better performance than four state-of-the-art MIML learning algorithms.

  19. Descriptor Learning via Supervised Manifold Regularization for Multioutput Regression.

    Science.gov (United States)

    Zhen, Xiantong; Yu, Mengyang; Islam, Ali; Bhaduri, Mousumi; Chan, Ian; Li, Shuo

    2017-09-01

    Multioutput regression has recently shown great ability to solve challenging problems in both computer vision and medical image analysis. However, due to the huge image variability and ambiguity, it is fundamentally challenging to handle the highly complex input-target relationship of multioutput regression, especially with indiscriminate high-dimensional representations. In this paper, we propose a novel supervised descriptor learning (SDL) algorithm for multioutput regression, which can establish discriminative and compact feature representations to improve the multivariate estimation performance. The SDL is formulated as generalized low-rank approximations of matrices with a supervised manifold regularization. The SDL is able to simultaneously extract discriminative features closely related to multivariate targets and remove irrelevant and redundant information by transforming raw features into a new low-dimensional space aligned to targets. The resulting discriminative yet compact descriptor largely reduces the variability and ambiguity of multioutput regression, which enables more accurate and efficient multivariate estimation. We conduct extensive evaluation of the proposed SDL on both synthetic data and real-world multioutput regression tasks for both computer vision and medical image analysis. Experimental results have shown that the proposed SDL can achieve high multivariate estimation accuracy on all tasks and largely outperforms state-of-the-art algorithms. Our method establishes a novel SDL framework for multioutput regression, which can be widely used to boost the performance in different applications.

  20. Advances in projection of climate change impacts using supervised nonlinear dimensionality reduction techniques

    Science.gov (United States)

    Sarhadi, Ali; Burn, Donald H.; Yang, Ge; Ghodsi, Ali

    2017-02-01

    One of the main challenges in climate change studies is accurate projection of the global warming impacts on the probabilistic behaviour of hydro-climate processes. Due to the complexity of climate-associated processes, identification of predictor variables from high-dimensional atmospheric variables is considered a key factor for improvement of climate change projections in statistical downscaling approaches. For this purpose, the present paper adopts a new approach of supervised dimensionality reduction, called "Supervised Principal Component Analysis (Supervised PCA)", for regression-based statistical downscaling. This method is a generalization of PCA, extracting a sequence of principal components of atmospheric variables that have maximal dependence on the response hydro-climate variable. To capture the nonlinear variability between hydro-climatic response variables and predictors, a kernelized version of Supervised PCA is also applied for nonlinear dimensionality reduction. The effectiveness of the Supervised PCA methods, in comparison with some state-of-the-art dimensionality reduction algorithms, is evaluated in the statistical downscaling of precipitation at a specific site using two soft-computing nonlinear machine learning methods, Support Vector Regression and Relevance Vector Machine. The results demonstrate a significant improvement achieved by the Supervised PCA methods in terms of performance accuracy.
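    The core of Supervised PCA is compact: project onto the eigenvectors of a response-weighted covariance matrix. A minimal sketch following the HSIC formulation (Barshan et al.), assuming a linear kernel on the response; the kernelized variant replaces K with a nonlinear kernel. Data here are synthetic stand-ins for atmospheric predictors and a hydro-climate response.

```python
# Sketch only: Supervised PCA with a linear response kernel.
import numpy as np

def supervised_pca(X, y, n_components=2):
    n = X.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    K = np.outer(y, y)                       # linear kernel on the response
    Q = X.T @ H @ K @ H @ X                  # dependence-weighted covariance
    eigvals, eigvecs = np.linalg.eigh(Q)
    U = eigvecs[:, np.argsort(eigvals)[::-1][:n_components]]
    return X @ U                             # projected predictors

X = np.random.rand(100, 20)                  # stand-in atmospheric predictors
y = 2 * X[:, 0] + 0.1 * np.random.rand(100)  # stand-in response variable
Z = supervised_pca(X, y, n_components=3)
print(Z.shape)  # (100, 3)
```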

  1. Videoconference for psychotherapy training and supervision: two case examples.

    Science.gov (United States)

    Rousmaniere, Tony; Abbass, Allan; Frederickson, Jon; Henning, Inés; Taubner, Svenja

    2014-01-01

    Psychotherapy supervision and training are now widely available online. However, many supervisors may still be unclear about how online supervision works, or what it looks like in practice. In this article, two case examples of online videoconference-based supervision programs are described, and partial transcripts from two online supervision sessions are provided. The benefits and limitations of online supervision are discussed, including supervision process, ethics, privacy, and security.

  2. Providing effective supervision in clinical neuropsychology.

    Science.gov (United States)

    Stucky, Kirk J; Bush, Shane; Donders, Jacobus

    2010-01-01

    A specialty like clinical neuropsychology is shaped by its selection of trainees, educational standards, expected competencies, and the structure of its training programs. The development of individual competency in this specialty is dependent to a considerable degree on the provision of competent supervision to its trainees. In clinical neuropsychology, as in other areas of professional health-service psychology, supervision is the most frequently used method for teaching a variety of skills, including assessment, report writing, differential diagnosis, and treatment. Although much has been written about the provision of quality supervision in clinical and counseling psychology, very little published guidance is available regarding the teaching and provision of supervision in clinical neuropsychology. The primary focus of this article is to provide a framework and guidance for the development of suggested competency standards for training of neuropsychological supervisors, particularly at the residency level. In this paper we outline important components of supervision for neuropsychology trainees and suggest ways in which clinicians can prepare for supervisory roles. Similar to Falender and Shafranske (2004), we propose a competency-based approach to supervision that advocates for a science-informed, formalized, and objective process that clearly delineates the competencies required for good supervisory practice. As much as possible, supervisory competencies are related to foundational and functional competencies in professional psychology, as well as recent legislative initiatives mandating training in supervision. It is our hope that this article will foster further discussion regarding this complex topic, and eventually enhance training in clinical neuropsychology.

  3. Automation and control trends in the upstream sector of the oil industry

    Energy Technology Data Exchange (ETDEWEB)

    Plucenio, Agustinho; Pagano, Daniel J. [Universidade Federal de Santa Catarina (UFSC), Florianopolis, SC (Brazil). Programa de Recursos Humanos da ANP em Automacao, Controle e Instrumentacao para a Industria do Petroleo e Gas, PRH-34

    2004-07-01

    The need to continuously improve Health, Safety and Environment conditions for operators, installation security, and the optimization of oil reservoir recovery in wells operating with different artificial lift methods and subject to different secondary recovery techniques has motivated the development of automation and control technologies for the upstream sector of the oil industry. While the application of control and automation techniques is well established in the downstream sector of the oil industry, that is not the case in the upstream sector. One tendency in this sector is the use of control via fieldbus networks. This technology uses devices that communicate with each other over a two-wire digital network and can be programmed to execute function-block algorithms that implement a designed control strategy. The most noticeable benefits are improvements in process performance and in equipment reusability and interoperability. Proprietary solutions can be replaced by systems composed of equipment from different manufacturers connected to the same network. This equipment operates according to a strategy designed by automation and control engineers under the supervision of professionals working at computer terminals in different company departments. Other gains are a better understanding of industry processes, application of optimization techniques, fault detection, equipment maintenance follow-up, and improved operator working conditions and worker qualification. Other trends include: permanent well monitoring, either with downhole sensors based on fiber gratings or surface sensors using embedded electronic processors; development of instrumentation technology for low-cost multiphase flow measurement; and application of control techniques for flow-regime control and optimization of reservoir recovery through better identification, optimization and Model Based Predictive Control.

  4. A Supervised Machine Learning Study of Online Discussion Forums about Type-2 Diabetes

    DEFF Research Database (Denmark)

    Reichert, Jonathan-Raphael; Kristensen, Klaus Langholz; Mukkamala, Raghava Rao

    2017-01-01

    supervised machine learning techniques to analyze the online conversations. In order to analyse these online textual conversations, we have chosen four domain specific models (Emotions, Sentiment, Personality Traits and Patient Journey). As part of text classification, we employed the ensemble learning...... method by using 5 different supervised machine learning algorithms to build a set of text classifiers by using the voting method to predict most probable label for a given textual conversation from the online discussion forums. Our findings show that there is a high amount of trust expressed by a subset...
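    A minimal sketch of the voting-ensemble idea the record describes: five scikit-learn classifiers hard-voting over TF-IDF features. The toy posts, labels, and the particular five learners are illustrative assumptions, not necessarily those used in the study.

```python
# Sketch only: majority-vote text classification with five learners.
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC
from sklearn.tree import DecisionTreeClassifier

posts = ["my blood sugar is stable", "worried about insulin dose",
         "great support from this forum", "new diet works well"]
labels = ["trust", "anxiety", "trust", "trust"]

ensemble = VotingClassifier(
    estimators=[("nb", MultinomialNB()),
                ("lr", LogisticRegression(max_iter=1000)),
                ("svm", LinearSVC()),
                ("dt", DecisionTreeClassifier()),
                ("rf", RandomForestClassifier())],
    voting="hard")                       # majority vote over predicted labels

model = make_pipeline(TfidfVectorizer(), ensemble)
model.fit(posts, labels)
print(model.predict(["insulin makes me nervous"]))
```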

  5. Using a Mixed Model to Explore Evaluation Criteria for Bank Supervision: A Banking Supervision Law Perspective.

    Directory of Open Access Journals (Sweden)

    Sang-Bing Tsai

    Full Text Available Financial supervision means that monetary authorities have the power to supervise and manage financial institutions according to laws. Monetary authorities have this power because of the requirements of improving financial services, protecting the rights of depositors, adapting to industrial development, ensuring financial fair trade, and maintaining stable financial order. To establish evaluation criteria for bank supervision in China, this study integrated fuzzy theory and the decision making trial and evaluation laboratory (DEMATEL) and proposes a fuzzy-DEMATEL model. First, fuzzy theory was applied to examine bank supervision criteria and analyze fuzzy semantics. Second, the fuzzy-DEMATEL model was used to calculate the degree to which financial supervision criteria mutually influenced one another and their causal relationship. Finally, an evaluation criteria model for evaluating bank and financial supervision was established.
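    The DEMATEL computation itself is short: normalize an expert-rated direct-influence matrix and accumulate indirect influence through a matrix inverse. A hedged numpy sketch on a toy four-criterion matrix; the ratings are invented and the fuzzy-semantics step of the paper is not shown.

```python
# Sketch only: the core DEMATEL calculation on a toy direct-influence matrix.
import numpy as np

A = np.array([[0, 3, 2, 1],      # expert-rated direct influence (0-4 scale)
              [1, 0, 3, 2],
              [2, 1, 0, 3],
              [1, 2, 1, 0]], dtype=float)

N = A / max(A.sum(axis=1).max(), A.sum(axis=0).max())   # normalized matrix
T = N @ np.linalg.inv(np.eye(4) - N)                     # total-relation matrix

D, R = T.sum(axis=1), T.sum(axis=0)
print("prominence (D+R):", D + R)   # how central each criterion is
print("relation   (D-R):", D - R)   # net cause (+) versus net effect (-)
```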

  6. Using a Mixed Model to Explore Evaluation Criteria for Bank Supervision: A Banking Supervision Law Perspective.

    Science.gov (United States)

    Tsai, Sang-Bing; Chen, Kuan-Yu; Zhao, Hongrui; Wei, Yu-Min; Wang, Cheng-Kuang; Zheng, Yuxiang; Chang, Li-Chung; Wang, Jiangtao

    2016-01-01

    Financial supervision means that monetary authorities have the power to supervise and manage financial institutions according to laws. Monetary authorities have this power because of the requirements of improving financial services, protecting the rights of depositors, adapting to industrial development, ensuring financial fair trade, and maintaining stable financial order. To establish evaluation criteria for bank supervision in China, this study integrated fuzzy theory and the decision making trial and evaluation laboratory (DEMATEL) and proposes a fuzzy-DEMATEL model. First, fuzzy theory was applied to examine bank supervision criteria and analyze fuzzy semantics. Second, the fuzzy-DEMATEL model was used to calculate the degree to which financial supervision criteria mutually influenced one another and their causal relationship. Finally, an evaluation criteria model for evaluating bank and financial supervision was established.

  7. Using a Mixed Model to Explore Evaluation Criteria for Bank Supervision: A Banking Supervision Law Perspective

    Science.gov (United States)

    Tsai, Sang-Bing; Chen, Kuan-Yu; Zhao, Hongrui; Wei, Yu-Min; Wang, Cheng-Kuang; Zheng, Yuxiang; Chang, Li-Chung; Wang, Jiangtao

    2016-01-01

    Financial supervision means that monetary authorities have the power to supervise and manage financial institutions according to laws. Monetary authorities have this power because of the requirements of improving financial services, protecting the rights of depositors, adapting to industrial development, ensuring financial fair trade, and maintaining stable financial order. To establish evaluation criteria for bank supervision in China, this study integrated fuzzy theory and the decision making trial and evaluation laboratory (DEMATEL) and proposes a fuzzy-DEMATEL model. First, fuzzy theory was applied to examine bank supervision criteria and analyze fuzzy semantics. Second, the fuzzy-DEMATEL model was used to calculate the degree to which financial supervision criteria mutually influenced one another and their causal relationship. Finally, an evaluation criteria model for evaluating bank and financial supervision was established. PMID:27992449

  8. Supervised Filter Learning for Representation Based Face Recognition.

    Directory of Open Access Journals (Sweden)

    Chao Bi

    Full Text Available Representation based classification methods, such as Sparse Representation Classification (SRC) and Linear Regression Classification (LRC), have been developed successfully for the face recognition problem. However, most of these methods use the original face images without any preprocessing for recognition. Thus, their performance may be affected by problematic factors (such as illumination and expression variations) in the face images. In order to overcome this limitation, a novel supervised filter learning algorithm is proposed for representation based face recognition in this paper. The underlying idea of our algorithm is to learn a filter so that the within-class representation residuals of the faces' Local Binary Pattern (LBP) features are minimized and the between-class representation residuals of the LBP features are maximized. Therefore, the LBP features of filtered face images are more discriminative for representation based classifiers. Furthermore, we also extend our algorithm to the heterogeneous face recognition problem. Extensive experiments are carried out on five databases and the experimental results verify the efficacy of the proposed algorithm.
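    The LBP descriptor referred to above can be computed with scikit-image. A minimal sketch of that feature step only; the supervised filter learning of the paper is not shown, and the image is a random stand-in.

```python
# Sketch only: Local Binary Pattern histogram as a face descriptor.
import numpy as np
from skimage.feature import local_binary_pattern

face = np.random.rand(64, 64)                  # stand-in for a face image
lbp = local_binary_pattern(face, P=8, R=1, method="uniform")
hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
print(hist)                                    # per-image LBP feature vector
```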

  9. Supervising and Writing a Good Undergraduate Dissertation

    OpenAIRE

    Donnelly, Roisin; Dallat, John

    2013-01-01

    The considerable increase in numbers of students required to complete undergraduate dissertations as part of their curricula demonstrates a clear need for supporting academic staff from a wide variety of disciplines in this area. There has been limited research published in the realm of postgraduate supervision. Therefore, supervision of academic dissertations in an undergraduate setting still remains to be addressed in a comprehensive manner. The overarching theme of this reference work is t...

  10. Using a Mixed Model to Explore Evaluation Criteria for Bank Supervision: A Banking Supervision Law Perspective

    OpenAIRE

    Tsai, Sang-Bing; Chen, Kuan-Yu; Zhao, Hongrui; Wei, Yu-Min; Wang, Cheng-Kuang; Zheng, Yuxiang; Chang, Li-Chung; Wang, Jiangtao

    2016-01-01

    Financial supervision means that monetary authorities have the power to supervise and manage financial institutions according to laws. Monetary authorities have this power because of the requirements of improving financial services, protecting the rights of depositors, adapting to industrial development, ensuring financial fair trade, and maintaining stable financial order. To establish evaluation criteria for bank supervision in China, this study integrated fuzzy theory and the decision maki...

  11. Statistical-Mechanical Analysis Connecting Supervised Learning and Semi-Supervised Learning

    Science.gov (United States)

    Fujii, Takashi; Ito, Hidetaka; Miyoshi, Seiji

    2017-06-01

    The generalization performance of semi-supervised learning is analyzed in the framework of online learning using the statistical-mechanical method. We derive deterministically formed simultaneous differential equations that describe the dynamical behaviors of order parameters using the self-averaging property under the thermodynamic limit. By generalizing the number of labeled data, the derived theory connects supervised learning and semi-supervised learning.

  12. On psychoanalytic supervision as signature pedagogy.

    Science.gov (United States)

    Watkins, C Edward

    2014-04-01

    What is signature pedagogy in psychoanalytic education? This paper examines that question, considering why psychoanalytic supervision best deserves that designation. In focusing on supervision as signature pedagogy, I accentuate its role in building psychoanalytic habits of mind, habits of hand, and habits of heart, and transforming theory and self-knowledge into practical product. Other facets of supervision as signature pedagogy addressed in this paper include its features of engagement, uncertainty, formation, and pervasiveness, as well as levels of surface, deep, and implicit structure. Epistemological, ontological, and axiological in nature, psychoanalytic supervision engages trainees in learning to do, think, and value what psychoanalytic practitioners in the field do, think, and value: It is, most fundamentally, professional preparation for competent, "good work." In this paper, effort is made to shine a light on and celebrate the pivotal role of supervision in "making" or developing budding psychoanalysts and psychoanalytic psychotherapists. Now over a century old, psychoanalytic supervision remains unparalleled in (1) connecting and integrating conceptualization and practice, (2) transforming psychoanalytic theory and self-knowledge into an informed analyzing instrument, and (3) teaching, transmitting, and perpetuating the traditions, practice, and culture of psychoanalytic treatment.

  13. Automatic Algorithm Selection for Complex Simulation Problems

    CERN Document Server

    Ewald, Roland

    2012-01-01

    To select the most suitable simulation algorithm for a given task is often difficult. This is due to intricate interactions between model features, implementation details, and runtime environment, which may strongly affect the overall performance. An automated selection of simulation algorithms supports users in setting up simulation experiments without demanding expert knowledge on simulation. Roland Ewald analyzes and discusses existing approaches to solve the algorithm selection problem in the context of simulation. He introduces a framework for automatic simulation algorithm selection and

  14. Detect and Avoid (DAA) Automation Maneuver Study

    Science.gov (United States)

    2017-02-01

    ...adequate transparency and thus proper automation use (Parasuraman & Riley, 1997). Bihrle's Jointly Optimal Conflict Avoidance (JOCA) algorithm is one... A factorial design was used to compare the effect of display type and automation threshold on UAS pilots' performance in maintaining Well Clear from other...

  15. Intelligent Case Based Decision Support System for Online Diagnosis of Automated Production System

    Science.gov (United States)

    Ben Rabah, N.; Saddem, R.; Ben Hmida, F.; Carre-Menetrier, V.; Tagina, M.

    2017-01-01

    Diagnosis of an Automated Production System (APS) is a decision-making process designed to detect, locate and identify a particular failure caused by the control law. In the literature, there are three major types of reasoning for industrial diagnosis: the first is model-based, the second is rule-based and the third is case-based. The common and major limitation of the first two types of reasoning is that they have no automated learning ability. This paper presents an interactive and effective Case Based Decision Support System for online Diagnosis (CB-DSSD) of an APS. It offers a synergy between Case Based Reasoning (CBR) and a Decision Support System (DSS) in order to support and assist the Human Operator of Supervision (HOS) in his/her decision process. Indeed, the experimental evaluation performed on an Interactive Training System for PLC (ITS PLC), which allows the control of a Programmable Logic Controller (PLC), simulating sensor and/or actuator failures and validating the control algorithm through a real-time interactive experience, showed the efficiency of our approach.

  16. Automated classification of cell morphology by coherence-controlled holographic microscopy

    Science.gov (United States)

    Strbkova, Lenka; Zicha, Daniel; Vesely, Pavel; Chmelik, Radim

    2017-08-01

    In the last few years, classification of cells by machine learning has become widely used in biology. However, most of the approaches are based on morphometric (MO) features, which are not quantitative in terms of cell mass. This may result in poor classification accuracy. Here, we study the potential contribution of coherence-controlled holographic microscopy, which enables quantitative phase imaging, to the classification of cell morphologies. We compare our approach with the commonly used method based on MO features. We tested both classification approaches in an experiment with nutritionally deprived cancer tissue cells, while employing several supervised machine learning algorithms. Most of the classifiers provided higher performance when quantitative phase features were employed. Based on the results, it can be concluded that the quantitative phase features played an important role in improving the performance of the classification. The methodology could be a valuable aid in refining the monitoring of live cells in an automated fashion. We believe that coherence-controlled holographic microscopy, as a tool for quantitative phase imaging, offers all the preconditions for accurate automated analysis of live cell behavior while enabling noninvasive, label-free imaging with sufficient contrast and high spatiotemporal phase sensitivity.

  17. Automated computation of arbor densities: a step toward identifying neuronal cell types

    Directory of Open Access Journals (Sweden)

    Uygar Sümbül

    2014-11-01

    Full Text Available The shape and position of a neuron convey information regarding its molecular and functional identity. The identification of cell types from structure, a classic method, relies on the time-consuming step of arbor tracing. However, as genetic tools and imaging methods make data-driven approaches to neuronal circuit analysis feasible, the need for automated processing increases. Here, we first establish that mouse retinal ganglion cell types can be as precise about distributing their arbor volumes across the inner plexiform layer as they are about distributing the skeletons of the arbors. Then, we describe an automated approach to computing the spatial distribution of the dendritic arbors, or arbor density, with respect to a global depth coordinate based on this observation. Our method involves three-dimensional reconstruction of neuronal arbors by a supervised machine learning algorithm, post-processing of the enhanced stacks to remove somata and isolate the neuron of interest, and registration of neurons to each other using automatically detected arbors of the starburst amacrine interneurons as fiducial markers. In principle, this method could be generalizable to other structures of the CNS, provided that they allow sparse labeling of the cells and contain a reliable axis of spatial reference.

  18. Automated System for Early Breast Cancer Detection in Mammograms

    Science.gov (United States)

    Bankman, Isaac N.; Kim, Dong W.; Christens-Barry, William A.; Weinberg, Irving N.; Gatewood, Olga B.; Brody, William R.

    1993-01-01

    The increasing demand on mammographic screening for early breast cancer detection, and the subtlety of early breast cancer signs on mammograms, suggest an automated image processing system that can serve as a diagnostic aid in radiology clinics. We present a fully automated algorithm for detecting clusters of microcalcifications that are the most common signs of early, potentially curable breast cancer. By using the contour map of the mammogram, the algorithm circumvents some of the difficulties encountered with standard image processing methods. The clinical implementation of an automated instrument based on this algorithm is also discussed.

  19. Automated protein design: Landmarks and operational principles.

    Science.gov (United States)

    Kumar, Anil; Ranbhor, Ranjit; Patel, Kirti; Ramakrishnan, Vibin; Durani, Susheel

    2017-05-01

    Protein design has an eventful history spanning over three decades, with a handful of success stories reported and numerous failures unreported. Design practices have benefited tremendously from improvements in computer hardware and advances in scientific algorithms. Though the protein folding problem remains unsolved, the possibility of multiple sequence solutions for a single fold makes protein design a more tractable problem than protein folding. One of the most significant advancements in this area is the implementation of automated design algorithms on pre-defined templates or completely new folds, optimized through deterministic and heuristic search algorithms. This progress report provides a succinct presentation of important landmarks in automated design attempts, followed by a brief account of the operational principles of automated design methods. Copyright © 2016 Elsevier Ltd. All rights reserved.

  20. Semi-supervised eigenvectors for large-scale locally-biased learning

    DEFF Research Database (Denmark)

    Hansen, Toke Jansen; Mahoney, Michael W.

    2014-01-01

    be interested in the clustering structure of a data graph near a prespecified seed set of nodes, or one might be interested in finding partitions in an image that are near a prespecified ground truth set of pixels. Locally-biased problems of this sort are particularly challenging for popular eigenvector...... a methodology to construct semi-supervised eigenvectors of a graph Laplacian, and we illustrate how these locally-biased eigenvectors can be used to perform locally-biased machine learning. These semi-supervised eigenvectors capture successively-orthogonalized directions of maximum variance, conditioned...... improved scaling properties. We provide several empirical examples demonstrating how these semi-supervised eigenvectors can be used to perform locally-biased learning; and we discuss the relationship between our results and recent machine learning algorithms that use global eigenvectors of the graph...

  1. An efficient flow-based botnet detection using supervised machine learning

    DEFF Research Database (Denmark)

    Stevanovic, Matija; Pedersen, Jens Myrup

    2014-01-01

    Botnet detection represents one of the most crucial prerequisites of successful botnet neutralization. This paper explores how accurate and timely detection can be achieved by using supervised machine learning as the tool of inferring about malicious botnet traffic. In order to do so, the paper...... to accurately and timely detect botnet traffic using purely flow-based traffic analysis and supervised machine learning. Additionally, the results show that in order to achieve accurate detection traffic flows need to be monitored for only a limited time period and number of packets per flow. This indicates...... introduces a novel flow-based detection system that relies on supervised machine learning for identifying botnet network traffic. For use in the system we consider eight highly regarded machine learning algorithms, indicating the best performing one. Furthermore, the paper evaluates how much traffic needs...

  2. "First generation" automated DNA sequencing technology.

    Science.gov (United States)

    Slatko, Barton E; Kieleczawa, Jan; Ju, Jingyue; Gardner, Andrew F; Hendrickson, Cynthia L; Ausubel, Frederick M

    2011-10-01

    Beginning in the 1980s, automation of DNA sequencing has greatly increased throughput, reduced costs, and enabled large projects to be completed more easily. The development of automation technology paralleled the development of other aspects of DNA sequencing: better enzymes and chemistry, separation and imaging technology, sequencing protocols, robotics, and computational advancements (including base-calling algorithms with quality scores, database developments, and sequence analysis programs). Despite the emergence of high-throughput sequencing platforms, automated Sanger sequencing technology remains useful for many applications. This unit provides background and a description of the "First-Generation" automated DNA sequencing technology. It also includes protocols for using the current Applied Biosystems (ABI) automated DNA sequencing machines. © 2011 by John Wiley & Sons, Inc.

  3. Demonstration of automated proximity and docking technology

    Science.gov (United States)

    Anderson, Robert L.; Tsugawa, Roy K.; Bryan, Thomas C.

    1991-01-01

    Automated spacecraft docking operations are being performed using a full scale motion based simulator and an optical sensor. This presentation will discuss the work in progress at TRW and MSFC facilities to study the problem of automated proximity and docking operations. The docking sensor used is the MSFC Optical Sensor, and simulation runs are performed using the MSFC Flat Floor Facility. The control algorithms and six degrees of freedom (6DOF) simulation software were developed at TRW and integrated into the MSFC facility. Key issues being studied are the quantification of docking sensor requirements and operational constraints necessary to perform automated docking maneuvers, control algorithms capable of performing automated docking in the presence of sensitive and noisy sensor data, and sensor technologies for automated proximity and docking operations. As part of this study the MSFC sensor characteristics were analyzed and modeled so that offline simulation runs can be performed for control algorithm testing. Our goal is to develop and demonstrate full 6DOF docking capabilities with actual sensors on the MSFC motion based simulator. We present findings from actual docking simulation runs which show sensor and control loop performance, as well as problem areas that require close attention. The evolution of various control algorithms using both phase plane and Clohessy-Wiltshire techniques is discussed. In addition, 6DOF target acquisition and control strategies are described.
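    The Clohessy-Wiltshire equations mentioned above admit a closed-form solution for relative motion about a circular orbit, which is what proximity-control loops typically propagate. A hedged sketch with illustrative initial conditions, not MSFC data.

```python
# Sketch only: closed-form Clohessy-Wiltshire relative-motion propagation.
import numpy as np

def cw_propagate(x0, y0, z0, vx0, vy0, vz0, n, t):
    """Chaser position relative to a target in circular orbit.
    x: radial, y: along-track, z: cross-track; n is the orbital mean motion."""
    s, c = np.sin(n * t), np.cos(n * t)
    x = (4 - 3 * c) * x0 + (vx0 / n) * s + (2 * vy0 / n) * (1 - c)
    y = (6 * (s - n * t) * x0 + y0 - (2 * vx0 / n) * (1 - c)
         + (vy0 / n) * (4 * s - 3 * n * t))
    z = z0 * c + (vz0 / n) * s
    return x, y, z

n = 0.0011  # rad/s, roughly a low Earth orbit
print(cw_propagate(100.0, 0.0, 0.0, 0.0, -0.05, 0.0, n, t=600.0))
```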

  4. Technical and economical aspects of the electric power systems automation; Aspectos tecnicos e economicos da automacao dos sistemas eletricos

    Energy Technology Data Exchange (ETDEWEB)

    Siqueira, Gustavo F.G.; Achkar, Luiz Roberto; Yamanaka, Alberto S.

    1997-07-01

    This paper presents an overview of the technical and economic aspects of electric power automation. The document analyses the need for automation, anomalies in the distribution system, noise generators, systems for supervision, control and energy quality analysis, and technical and financial aspects.

  5. Toward automated detection of malignant melanoma

    Science.gov (United States)

    Huang, Billy; Gareau, Daniel S.

    2009-02-01

    In vivo reflectance confocal microscopy shows promise for the early detection of malignant melanoma (MM). Two hallmarks of MM have been identified: the presence of pagetoid melanocytes in the epidermis and the breakdown of the dermal papillae. For detection of MM, these features must be identified either qualitatively by the clinician or quantitatively through automated pattern recognition. A machine vision algorithm was developed for automated detection. The algorithm detected pagetoid melanocytes and breakdown of the dermal/epidermal junction in a pre-selected set of five MMs and five benign nevi, for correct diagnosis.

  6. Group Supervision in Graduate Education: A Process of Supervision Skill Development and Text Improvement

    Science.gov (United States)

    Samara, Akylina

    2006-01-01

    This paper is an investigation of group supervision of the Master of Education thesis at the University of Bergen, Norway. Four recorded group supervision sessions are analysed. The group participants are five students and three supervisors. The sessions are analysed from a qualitative, phenomenological perspective. The results show that group…

  7. Classroom Supervision and Informal Analysis of Behavior. A Manual for Supervision.

    Science.gov (United States)

    Hull, Ray; Hansen, John

    This manual for supervision addresses itself to those with responsibility for helping teachers develop into skilled professionals through use of a rational plan of feedback and assistance. It describes the supervision cycle and outlines simple and practical techniques to collect useful data that will assist the classroom teacher. The manual has…

  8. Diversifying Supervision for Maximum Professional Growth: Is a Well-Supervised Teacher a Satisfied Teacher?

    Science.gov (United States)

    Robinson, Sylvia G.

    This paper examines the relationship between various characterizations of the clinical supervision model and teacher job satisfaction. The first part of the paper describes teacher job satisfaction and looks at the history and meaning of clinical supervision. The next part of the paper describes Barbara Pavan's (1993) revised clinical supervision…

  9. Webly-supervised Fine-grained Visual Categorization via Deep Domain Adaptation.

    Science.gov (United States)

    Xu, Zhe; Huang, Shaoli; Zhang, Ya; Tao, Dacheng

    2016-12-08

    Learning visual representations from web data has recently attracted attention for object recognition. Previous studies have mainly focused on overcoming label noise and data bias and have shown promising results by learning directly from web data. However, we argue that it might be better to transfer knowledge from existing human labeling resources to improve performance at nearly no additional cost. In this paper, we propose a new semi-supervised method for learning via web data. Our method has the unique design of exploiting strong supervision, i.e., in addition to standard image-level labels, our method also utilizes detailed annotations including object bounding boxes and part landmarks. By transferring as much knowledge as possible from existing strongly supervised datasets to weakly supervised web images, our method can benefit from sophisticated object recognition algorithms and overcome several typical problems found in webly-supervised learning. We consider the problem of fine-grained visual categorization, in which existing training resources are scarce, as our main research objective. Comprehensive experimentation and extensive analysis demonstrate encouraging performance of the proposed approach, which, at the same time, delivers a new pipeline for fine-grained visual categorization that is likely to be highly effective for real-world applications.

  10. Extracting microRNA-gene relations from biomedical literature using distant supervision.

    Directory of Open Access Journals (Sweden)

    Andre Lamurias

    Full Text Available Many biomedical relation extraction approaches are based on supervised machine learning, requiring an annotated corpus. Distant supervision aims at training a classifier by combining a knowledge base with a corpus, reducing the amount of manual effort necessary. This is particularly useful for biomedicine because many databases and ontologies have been made available for many biological processes, while the availability of annotated corpora is still limited. We studied the extraction of microRNA-gene relations from text. MicroRNA regulation is an important biological process due to its close association with human diseases. The proposed method, IBRel, is based on distantly supervised multi-instance learning. We evaluated IBRel on three datasets, and the results were compared with a co-occurrence approach as well as a supervised machine learning algorithm. While supervised learning outperformed on two of those datasets, IBRel obtained an F-score 28.3 percentage points higher on the dataset for which there was no training set developed specifically. To demonstrate the applicability of IBRel, we used it to extract 27 miRNA-gene relations from recently published papers about cystic fibrosis. Our results demonstrate that our method can be successfully used to extract relations from literature about a biological process without an annotated corpus. The source code and data used in this study are available at https://github.com/AndreLamurias/IBRel.

  11. Pervasive Sound Sensing: A Weakly Supervised Training Approach.

    Science.gov (United States)

    Kelly, Daniel; Caulfield, Brian

    2016-01-01

    Modern smartphones present an ideal device for pervasive sensing of human behavior. Microphones have the potential to reveal key information about a person's behavior. However, they have been utilized to a significantly lesser extent than other smartphone sensors in the context of human behavior sensing. We postulate that, in order for microphones to be useful in behavior sensing applications, the analysis techniques must be flexible and allow easy modification of the types of sounds to be sensed. A simplification of the training data collection process could allow a more flexible sound classification framework. We hypothesize that detailed training, a prerequisite for the majority of sound sensing techniques, is not necessary and that a significantly less detailed and time-consuming data collection process can be carried out, allowing even a nonexpert to conduct the collection, labeling, and training process. To test this hypothesis, we implement a diverse density-based multiple instance learning framework, to identify a target sound, and a bag trimming algorithm, which, using the target sound, automatically segments weakly labeled sound clips to construct an accurate training set. Experiments reveal that our hypothesis is a valid one and results show that classifiers, trained using the automatically segmented training sets, were able to accurately classify unseen sound samples with accuracies comparable to supervised classifiers, achieving an average F-measure of 0.969 and 0.87 for two weakly supervised datasets.

  12. "SmartMonitor"--an intelligent security system for the protection of individuals and small properties with the possibility of home automation.

    Science.gov (United States)

    Frejlichowski, Dariusz; Gościewska, Katarzyna; Forczmański, Paweł; Hofman, Radosław

    2014-06-05

    "SmartMonitor" is an intelligent security system based on image analysis that combines the advantages of alarm, video surveillance and home automation systems. The system is a complete solution that automatically reacts to every learned situation in a pre-specified way and has various applications, e.g., home and surrounding protection against unauthorized intrusion, crime detection or supervision over ill persons. The software is based on well-known and proven methods and algorithms for visual content analysis (VCA) that were appropriately modified and adopted to fit specific needs and create a video processing model which consists of foreground region detection and localization, candidate object extraction, object classification and tracking. In this paper, the "SmartMonitor" system is presented along with its architecture, employed methods and algorithms, and object analysis approach. Some experimental results on system operation are also provided. In the paper, focus is put on one of the aforementioned functionalities of the system, namely supervision over ill persons.

  13. “SmartMonitor” — An Intelligent Security System for the Protection of Individuals and Small Properties with the Possibility of Home Automation

    Science.gov (United States)

    Frejlichowski, Dariusz; Gościewska, Katarzyna; Forczmański, Paweł; Hofman, Radosław

    2014-01-01

    “SmartMonitor” is an intelligent security system based on image analysis that combines the advantages of alarm, video surveillance and home automation systems. The system is a complete solution that automatically reacts to every learned situation in a pre-specified way and has various applications, e.g., home and surrounding protection against unauthorized intrusion, crime detection or supervision over ill persons. The software is based on well-known and proven methods and algorithms for visual content analysis (VCA) that were appropriately modified and adopted to fit specific needs and create a video processing model which consists of foreground region detection and localization, candidate object extraction, object classification and tracking. In this paper, the “SmartMonitor” system is presented along with its architecture, employed methods and algorithms, and object analysis approach. Some experimental results on system operation are also provided. In the paper, focus is put on one of the aforementioned functionalities of the system, namely supervision over ill persons. PMID:24905854

  14. “SmartMonitor”— An Intelligent Security System for the Protection of Individuals and Small Properties with the Possibility of Home Automation

    Directory of Open Access Journals (Sweden)

    Dariusz Frejlichowski

    2014-06-01

    Full Text Available “SmartMonitor” is an intelligent security system based on image analysis that combines the advantages of alarm, video surveillance and home automation systems. The system is a complete solution that automatically reacts to every learned situation in a pre-specified way and has various applications, e.g., home and surrounding protection against unauthorized intrusion, crime detection or supervision over ill persons. The software is based on well-known and proven methods and algorithms for visual content analysis (VCA) that were appropriately modified and adopted to fit specific needs and create a video processing model which consists of foreground region detection and localization, candidate object extraction, object classification and tracking. In this paper, the “SmartMonitor” system is presented along with its architecture, employed methods and algorithms, and object analysis approach. Some experimental results on system operation are also provided. In the paper, focus is put on one of the aforementioned functionalities of the system, namely supervision over ill persons.

  15. Intuitive expertise in ICT graduate supervision

    Directory of Open Access Journals (Sweden)

    Jill Jameson

    2002-12-01

    Full Text Available Intuitive expertise in the application of advanced interdisciplinary facilitation is the subject of this personal reflection on the graduate supervisory style of Professor David Squires in computers in education. This single-case reflective study examines the characteristics of effective supervision observed during master's and doctoral supervision at King's College in the years 1990-9. Interdisciplinarity in ICT graduate studies particularly requires a fluency of supervisory expertise in enabling supervisees to combine multiple complex perspectives from a number of fields of knowledge. Intuitive combinatory aspects of supervision are highlighted in this reflection on the role carried out by an academic expert in facilitating student success. This is examined from a perspective incorporating affective as well as intellectual elements, informed by characteristics identified in professional sports and performing arts coaching/mentoring. Key characteristics comprising a model of intuitive expertise in ICT graduate supervision are outlined. The resultant portrait aims to complement the existing literature on graduate supervision, with reference to the field of ICT/computers in education and student hypermedia composition.

  16. Automated Budget System -

    Data.gov (United States)

    Department of Transportation — The Automated Budget System (ABS) automates management and planning of the Mike Monroney Aeronautical Center (MMAC) budget by providing enhanced capability to plan,...

  17. Automating quantum experiment control

    Science.gov (United States)

    Stevens, Kelly E.; Amini, Jason M.; Doret, S. Charles; Mohler, Greg; Volin, Curtis; Harter, Alexa W.

    2017-03-01

    The field of quantum information processing is rapidly advancing. As the control of quantum systems approaches the level needed for useful computation, the physical hardware underlying the quantum systems is becoming increasingly complex. It is already becoming impractical to manually code control for the larger hardware implementations. In this chapter, we will employ an approach to the problem of system control that parallels compiler design for a classical computer. We will start with a candidate quantum computing technology, the surface electrode ion trap, and build a system instruction language which can be generated from a simple machine-independent programming language via compilation. We incorporate compile time generation of ion routing that separates the algorithm description from the physical geometry of the hardware. Extending this approach to automatic routing at run time allows for automated initialization of qubit number and placement and additionally allows for automated recovery after catastrophic events such as qubit loss. To show that these systems can handle real hardware, we present a simple demonstration system that routes two ions around a multi-zone ion trap and handles ion loss and ion placement. While we will mainly use examples from transport-based ion trap quantum computing, many of the issues and solutions are applicable to other architectures.

  18. Automated Test Case Generation

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    I would like to present the concept of automated test case generation. I work on it as part of my PhD and I think it would be interesting also for other people. It is also the topic of a workshop paper that I am introducing in Paris. (abstract below) Please note that the talk itself would be more general and not about the specifics of my PhD, but about the broad field of Automated Test Case Generation. I would introduce the main approaches (combinatorial testing, symbolic execution, adaptive random testing) and their advantages and problems. (oracle problem, combinatorial explosion, ...) Abstract of the paper: Over the last decade code-based test case generation techniques such as combinatorial testing or dynamic symbolic execution have seen growing research popularity. Most algorithms and tool implementations are based on finding assignments for input parameter values in order to maximise the execution branch coverage. Only few of them consider dependencies from outside the Code Under Test’s scope such...

  19. Oscillatory neural network for pattern recognition: trajectory based classification and supervised learning.

    Science.gov (United States)

    Miller, Vonda H; Jansen, Ben H

    2008-12-01

    Computer algorithms that match human performance in recognizing written text or spoken conversation remain elusive. The reasons why the human brain far exceeds any existing recognition scheme in the ability to generalize and to extract invariant characteristics relevant to category matching are not clear. However, it has been postulated that the dynamic distribution of brain activity (spatiotemporal activation patterns) is the mechanism by which stimuli are encoded and matched to categories. This research focuses on supervised learning for category discrimination in an oscillatory neural network model, with classification accomplished using a trajectory-based distance metric. Since the distance metric is differentiable, a supervised learning algorithm based on gradient descent is demonstrated. Classification of spatiotemporal frequency transitions and their relation to a priori assessed categories is shown, along with the improved classification results after supervised training. The results indicate that this spatiotemporal representation of stimuli and the associated distance metric are useful for simple pattern recognition tasks and that supervised learning improves classification results.

  20. Multisensor data fusion algorithm development

    Energy Technology Data Exchange (ETDEWEB)

    Yocky, D.A.; Chadwick, M.D.; Goudy, S.P.; Johnson, D.K.

    1995-12-01

    This report presents a two-year LDRD research effort into multisensor data fusion. We approached the problem by addressing the available types of data, preprocessing that data, and developing fusion algorithms using that data. The report reflects these three distinct areas. First, the possible data sets for fusion are identified. Second, automated registration techniques for imagery data are analyzed. Third, two fusion techniques are presented. The first fusion algorithm is based on the two-dimensional discrete wavelet transform. Using test images, the wavelet algorithm is compared against intensity modulation and intensity-hue-saturation image fusion algorithms that are available in commercial software. The wavelet approach outperforms the other two fusion techniques by preserving spectral/spatial information more precisely. The wavelet fusion algorithm was also applied to Landsat Thematic Mapper and SPOT panchromatic imagery data. The second algorithm is based on a linear-regression technique. We analyzed the technique using the same Landsat and SPOT data.
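    A minimal sketch of wavelet-based image fusion in the spirit described above: average the coarse approximation coefficients and keep the stronger detail coefficients, using PyWavelets. The fusion rule is a common textbook choice, not necessarily the report's exact algorithm, and the images are random stand-ins for multispectral and panchromatic bands.

```python
# Sketch only: single-level 2-D discrete wavelet transform fusion.
import numpy as np
import pywt

def wavelet_fuse(img_a, img_b, wavelet="db2"):
    cA_a, details_a = pywt.dwt2(img_a, wavelet)
    cA_b, details_b = pywt.dwt2(img_b, wavelet)
    cA = (cA_a + cA_b) / 2.0                      # blend coarse content
    details = tuple(np.where(np.abs(da) >= np.abs(db), da, db)
                    for da, db in zip(details_a, details_b))
    return pywt.idwt2((cA, details), wavelet)     # reconstruct fused image

low_res_ms = np.random.rand(128, 128)             # stand-in multispectral band
high_res_pan = np.random.rand(128, 128)           # stand-in panchromatic image
print(wavelet_fuse(low_res_ms, high_res_pan).shape)
```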

  1. Automation 2017

    CERN Document Server

    Zieliński, Cezary; Kaliczyńska, Małgorzata

    2017-01-01

    This book consists of papers presented at Automation 2017, an international conference held in Warsaw from March 15 to 17, 2017. It discusses research findings associated with the concepts behind INDUSTRY 4.0, with a focus on offering a better understanding of and promoting participation in the Fourth Industrial Revolution. Each chapter presents a detailed analysis of a specific technical problem, in most cases followed by a numerical analysis, simulation and description of the results of implementing the solution in a real-world context. The theoretical results, practical solutions and guidelines presented are valuable for both researchers working in the area of engineering sciences and practitioners looking for solutions to industrial problems.

  2. Marketing automation

    Directory of Open Access Journals (Sweden)

    TODOR Raluca Dania

    2017-01-01

    Full Text Available The automation of the marketing process seems to be, nowadays, the only solution to face the major changes brought by the fast evolution of technology and the continuous increase in supply and demand. In order to achieve the desired marketing results, businesses have to employ digital marketing and communication services. These services are efficient and measurable thanks to the marketing technology used to track, score and implement each campaign. Due to technical progress, marketing fragmentation, and demand for customized products and services on one side, and the need to achieve constructive dialogue with customers, immediate and flexible response, and the necessity to measure investments and results on the other side, the classical marketing approach has changed and continues to improve substantially.

  3. Supervised DNA Barcodes species classification: analysis, comparisons and results.

    Science.gov (United States)

    Weitschek, Emanuel; Fiscon, Giulia; Felici, Giovanni

    2014-04-11

    Specific fragments, coming from short portions of DNA (e.g., mitochondrial, nuclear, and plastid sequences), have been defined as DNA Barcode and can be used as markers for organisms of the main life kingdoms. Species classification with DNA Barcode sequences has been proven effective on different organisms. Indeed, specific gene regions have been identified as Barcode: COI in animals, rbcL and matK in plants, and ITS in fungi. The classification problem assigns an unknown specimen to a known species by analyzing its Barcode. This task has to be supported with reliable methods and algorithms. In this work the efficacy of supervised machine learning methods to classify species with DNA Barcode sequences is shown. The Weka software suite, which includes a collection of supervised classification methods, is adopted to address the task of DNA Barcode analysis. Classifier families are tested on synthetic and empirical datasets belonging to the animal, fungus, and plant kingdoms. In particular, the function-based method Support Vector Machines (SVM), the rule-based RIPPER, the decision tree C4.5, and the Naïve Bayes method are considered. Additionally, the classification results are compared with respect to ad-hoc and well-established DNA Barcode classification methods. A software that converts the DNA Barcode FASTA sequences to the Weka format is released, to adapt different input formats and to allow the execution of the classification procedure. The analysis of results on synthetic and real datasets shows that SVM and Naïve Bayes outperform on average the other considered classifiers, although they do not provide a human interpretable classification model. Rule-based methods have slightly inferior classification performances, but deliver the species specific positions and nucleotide assignments. On synthetic data the supervised machine learning methods obtain superior classification performances with respect to the traditional DNA Barcode classification methods. On
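    A hedged sketch of the classification task: k-mer counts from Barcode sequences feeding an SVM and a Naive Bayes model. scikit-learn is used here for brevity (the study itself used the Weka suite), and the sequences and species labels are toy placeholders.

```python
# Sketch only: DNA Barcode species classification from 4-mer counts.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

barcodes = ["ACGTACGTGGCA", "ACGTTCGTGGAA", "TTGACCGTAACG", "TTGACCGGAACG"]
species = ["sp_A", "sp_A", "sp_B", "sp_B"]

kmers = CountVectorizer(analyzer="char", ngram_range=(4, 4))  # 4-mer counts
for clf in (LinearSVC(), MultinomialNB()):
    model = make_pipeline(kmers, clf).fit(barcodes, species)
    print(type(clf).__name__, model.predict(["ACGTACGTGGAA"]))
```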

  4. Algorithm design

    CERN Document Server

    Kleinberg, Jon

    2006-01-01

    Algorithm Design introduces algorithms by looking at the real-world problems that motivate them. The book teaches students a range of design and analysis techniques for problems that arise in computing applications. The text encourages an understanding of the algorithm design process and an appreciation of the role of algorithms in the broader field of computer science.

  5. Genetic algorithms

    Science.gov (United States)

    Wang, Lui; Bayer, Steven E.

    1991-01-01

    Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithms concepts are introduced, genetic algorithm applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.
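    A minimal illustrative genetic algorithm showing the basic loop of selection, crossover, and mutation on bit strings. The all-ones target, population size, and mutation rate are invented for demonstration.

```python
# Sketch only: evolve 20-bit strings toward an all-ones target.
import random

TARGET = [1] * 20

def fitness(bits):
    return sum(b == t for b, t in zip(bits, TARGET))

pop = [[random.randint(0, 1) for _ in range(20)] for _ in range(50)]
for gen in range(100):
    # tournament selection: the fitter of two random individuals is kept
    parents = [max(random.sample(pop, 2), key=fitness) for _ in range(50)]
    pop = []
    for a, b in zip(parents[::2], parents[1::2]):
        cut = random.randrange(1, 20)             # single-point crossover
        for child in (a[:cut] + b[cut:], b[:cut] + a[cut:]):
            pop.append([bit ^ (random.random() < 0.01) for bit in child])
    if max(map(fitness, pop)) == len(TARGET):     # stop once target reached
        break
print("generation", gen, "best fitness", max(map(fitness, pop)))
```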

  6. A multimedia retrieval framework based on semi-supervised ranking and relevance feedback.

    Science.gov (United States)

    Yang, Yi; Nie, Feiping; Xu, Dong; Luo, Jiebo; Zhuang, Yueting; Pan, Yunhe

    2012-04-01

    We present a new framework for multimedia content analysis and retrieval which consists of two independent algorithms. First, we propose a new semi-supervised algorithm called ranking with Local Regression and Global Alignment (LRGA) to learn a robust Laplacian matrix for data ranking. In LRGA, for each data point, a local linear regression model is used to predict the ranking scores of its neighboring points. A unified objective function is then proposed to globally align the local models from all the data points so that an optimal ranking score can be assigned to each data point. Second, we propose a semi-supervised long-term Relevance Feedback (RF) algorithm to refine the multimedia data representation. The proposed long-term RF algorithm utilizes both the multimedia data distribution in multimedia feature space and the history RF information provided by users. A trace ratio optimization problem is then formulated and solved by an efficient algorithm. The algorithms have been applied to several content-based multimedia retrieval applications, including cross-media retrieval, image retrieval, and 3D motion/pose data retrieval. Comprehensive experiments on four data sets have demonstrated its advantages in precision, robustness, scalability, and computational efficiency.
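    A hedged sketch of the Laplacian-regularized ranking family that LRGA belongs to: given a graph Laplacian (here built from a plain kNN graph, not the locally learned LRGA Laplacian), ranking scores for a query come from a single linear system minimizing f'Lf + lam * ||f - y||^2.

```python
# Sketch only: graph-based ranking with a generic Laplacian.
import numpy as np
from sklearn.neighbors import kneighbors_graph

X = np.random.rand(200, 10)                  # stand-in multimedia features
W = kneighbors_graph(X, n_neighbors=8, mode="connectivity").toarray()
W = np.maximum(W, W.T)                       # symmetrize the kNN graph
L = np.diag(W.sum(axis=1)) - W               # combinatorial Laplacian

y = np.zeros(200)
y[0] = 1.0                                   # the query item
lam = 0.1
f = np.linalg.solve(L + lam * np.eye(200), lam * y)
print(np.argsort(-f)[:5])                    # top-ranked items for the query
```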

  7. Finding combinatorial histone code by semi-supervised biclustering

    Directory of Open Access Journals (Sweden)

    Teng Li

    2012-07-01

    Full Text Available Background: Combinatorial histone modification is an important epigenetic mechanism for regulating chromatin state and gene expression. Given the rapid accumulation of genome-wide histone modification maps, there is a pressing need for computational methods capable of joint analysis of multiple maps to reveal combinatorial modification patterns. Results: We present the Semi-Supervised Coherent and Shifted Bicluster Identification algorithm (SS-CoSBI). It uses prior knowledge of combinatorial histone modifications to guide the biclustering process. Specifically, co-occurrence frequencies of histone modifications characterized by mass spectrometry are used as probabilistic priors to adjust the similarity measure in the biclustering process. Using a high-quality set of transcriptional enhancers and associated histone marks, we demonstrate that SS-CoSBI outperforms its predecessor by finding histone modification and genomic locus biclusters with higher enrichment of enhancers. We apply SS-CoSBI to identify multiple cell-type-specific combinatorial histone modification states associated with human enhancers. We show enhancer histone modification states are correlated with the expression of nearby genes. Further, we find that enhancers with the histone mark H3K4me1 have higher levels of DNA methylation and decreased expression of nearby genes, suggesting a functional interplay between H3K4me1 and DNA methylation that can modulate enhancer activities. Conclusions: The analysis presented here provides a systematic characterization of combinatorial histone codes of enhancers across three human cell types using a novel semi-supervised biclustering algorithm. As epigenomic maps accumulate, SS-CoSBI will become increasingly useful for understanding combinatorial chromatin modifications by taking advantage of existing knowledge. Availability and implementation: SS-CoSBI is implemented in C. The source code is freely available at http://www.healthcare.uiowa.edu/labs/tan/SS-CoSBI.gz.

  8. Finding combinatorial histone code by semi-supervised biclustering.

    Science.gov (United States)

    Teng, Li; Tan, Kai

    2012-07-03

    Combinatorial histone modification is an important epigenetic mechanism for regulating chromatin state and gene expression. Given the rapid accumulation of genome-wide histone modification maps, there is a pressing need for computational methods capable of joint analysis of multiple maps to reveal combinatorial modification patterns. We present the Semi-Supervised Coherent and Shifted Bicluster Identification algorithm (SS-CoSBI). It uses prior knowledge of combinatorial histone modifications to guide the biclustering process. Specifically, co-occurrence frequencies of histone modifications characterized by mass spectrometry are used as probabilistic priors to adjust the similarity measure in the biclustering process. Using a high-quality set of transcriptional enhancers and associated histone marks, we demonstrate that SS-CoSBI outperforms its predecessor by finding histone modification and genomic locus biclusters with higher enrichment of enhancers. We apply SS-CoSBI to identify multiple cell-type-specific combinatorial histone modification states associated with human enhancers. We show enhancer histone modification states are correlated with the expression of nearby genes. Further, we find that enhancers with the histone mark H3K4me1 have higher levels of DNA methylation and decreased expression of nearby genes, suggesting a functional interplay between H3K4me1 and DNA methylation that can modulate enhancer activities. The analysis presented here provides a systematic characterization of combinatorial histone codes of enhancers across three human cell types using a novel semi-supervised biclustering algorithm. As epigenomic maps accumulate, SS-CoSBI will become increasingly useful for understanding combinatorial chromatin modifications by taking advantage of existing knowledge. SS-CoSBI is implemented in C. The source code is freely available at http://www.healthcare.uiowa.edu/labs/tan/SS-CoSBI.gz.
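
    The prior-adjustment idea shared by the two records above can be shown in a few lines. This is a hedged sketch of our reading of it: a data-driven similarity between two histone-mark profiles is blended with a probabilistic co-occurrence prior. The convex-combination rule, the alpha weight, and the toy prior value are our assumptions, not the published formulation.

        # Blend Pearson correlation with a mass-spectrometry co-occurrence prior.
        import numpy as np

        def prior_adjusted_similarity(profile_a, profile_b, prior, alpha=0.5):
            corr = np.corrcoef(profile_a, profile_b)[0, 1]
            return alpha * corr + (1.0 - alpha) * prior  # convex combination

        # Toy example: two modification signals over 100 genomic bins, and a
        # (hypothetical) prior saying the marks co-occur 80% of the time in MS data.
        rng = np.random.default_rng(0)
        a = rng.random(100)
        b = a + 0.1 * rng.random(100)                    # correlated with a
        print(prior_adjusted_similarity(a, b, prior=0.8))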

  9. Musical Instrument Classification Based on Nonlinear Recurrence Analysis and Supervised Learning

    Directory of Open Access Journals (Sweden)

    R. Rui

    2013-04-01

    In this paper, the phase space reconstruction of time series produced by different instruments is discussed on the basis of nonlinear dynamic theory. The dense ratio, a novel quantitative recurrence parameter, is proposed to distinguish wind, stringed, and keyboard instruments in phase space by analyzing the recurrence properties of each instrument. Furthermore, a novel supervised learning algorithm for automatic classification of individual musical instrument signals is presented, derived from the idea of the supervised non-negative matrix factorization (NMF) algorithm. In our approach, the orthogonal basis matrix is obtained without iterative matrix updates, which standard NMF cannot do. The experimental results indicate that the accuracy of the proposed method improves by 3% compared with conventional features in individual instrument classification.
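
    The recurrence-analysis front end described above can be sketched as follows: time-delay embedding reconstructs the phase space of an audio frame, and the share of recurrent point pairs serves as a crude stand-in for the paper's dense ratio. The embedding parameters and the threshold eps are illustrative, and the authors' exact definition of the dense ratio may differ.

        # Phase-space reconstruction and recurrence density for a 1-D signal.
        import numpy as np

        def delay_embed(x, dim=3, tau=4):
            n = len(x) - (dim - 1) * tau
            return np.column_stack([x[i * tau: i * tau + n] for i in range(dim)])

        def recurrence_density(x, dim=3, tau=4, eps=0.1):
            Y = delay_embed(x, dim, tau)
            d = np.linalg.norm(Y[:, None, :] - Y[None, :, :], axis=-1)
            R = (d < eps).astype(int)   # recurrence matrix
            return R.mean()             # fraction of recurrent point pairs

        t = np.linspace(0, 1, 512)
        print(recurrence_density(np.sin(2 * np.pi * 220 * t)))  # toy 220 Hz tone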

  10. EEM™ wireless supervision

    Energy Technology Data Exchange (ETDEWEB)

    Bilic, H. [Ericsson-Nikola Tesla d.d. Zagreb (Croatia)

    2000-07-01

    By adding the GSM network to the communication level of Energy Management systems, energy operating centres (EOC) can offer wireless access to the supervised equipment. Furthermore, EOCs can profit from the rapid service development in GSM networks. With the implementation of GPRS in the GSM network, EOCs can instantly offer wireless access to external IP-based networks such as the Internet and corporate intranets. The author describes the architecture and key characteristics of the Ericsson EnergyMaster™ (EEM™) system for Energy Management, how and where to implement wireless supervision and wireless access to IP addresses, and how to implement new services provided by the GSM network. (orig.)

  11. Framing doctoral supervision as formative assessment

    DEFF Research Database (Denmark)

    Kobayashi, Sofie

    Doctoral supervision has been described through a number of models useful for understanding different aspects of supervision. None of these is all-encompassing; each emphasizes a particular perspective, such as the relationship, personal vs. structural support, or process vs. product orientation.... In running courses for doctoral supervisors, one aspect that escapes attention when using these mental models is assessment, and such a model can support new supervisors in reflecting on how to build autonomy. There is a body of research into the PhD examination, but this has not been translated into formative...

  12. Guidelines for clinical supervision in health service psychology.

    Science.gov (United States)

    2015-01-01

    This document outlines guidelines for supervision of students in health service psychology education and training programs. The goal was to capture optimal performance expectations for psychologists who supervise. It is based on the premises that supervisors (a) strive to achieve competence in the provision of supervision and (b) employ a competency-based, meta-theoretical approach to the supervision process. The Guidelines on Supervision were developed as a resource to inform education and training regarding the implementation of competency-based supervision. They build on the robust literatures on competency-based education and clinical supervision and are organized around seven domains: supervisor competence; diversity; relationships; professionalism; assessment/evaluation/feedback; problems of professional competence; and ethical, legal, and regulatory considerations. The Guidelines on Supervision represent the collective effort of a task force convened by the American Psychological Association (APA) Board of Educational Affairs (BEA).

  13. Automated test data generation for branch testing using incremental ...

    Indian Academy of Sciences (India)

    The cost of software testing can be reduced by automated test data generation that finds a minimal set of data with maximum coverage. Search-based software testing (SBST) is one of the techniques recently used for this task. SBST makes use of a control flow graph (CFG) and meta-heuristic search algorithms to ...
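
    The core of SBST is easy to demonstrate: a branch-distance fitness function turns "cover this branch" into a numeric minimization that a meta-heuristic can search. The sketch below uses a simple hill climb and an invented predicate a == 42; real SBST tools derive such distances from an instrumented CFG, so everything here is illustrative.

        # Branch distance for the predicate `a == target`; 0 means covered.
        import random

        def branch_distance(a, target):
            return abs(a - target)

        def search_input(target=42, iters=1000):
            best = random.randint(-1000, 1000)
            for _ in range(iters):
                cand = best + random.randint(-10, 10)    # local move
                if branch_distance(cand, target) < branch_distance(best, target):
                    best = cand
                if branch_distance(best, target) == 0:
                    break
            return best

        print(search_input())   # typically converges to 42, covering the branch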

  14. Instance-specific algorithm configuration

    CERN Document Server

    Malitsky, Yuri

    2014-01-01

    This book presents a modular and expandable technique in the rapidly emerging research area of automatic configuration and selection of the best algorithm for the instance at hand. The author presents the basic model behind ISAC and then details a number of modifications and practical applications. In particular, he addresses automated feature generation, offline algorithm configuration for portfolio generation, algorithm selection, adaptive solvers, online tuning, and parallelization. The author's related thesis received an honorable mention (runner-up) for the ACP Dissertation Award in 2014,

  15. Vinayaka : A Semi-Supervised Projected Clustering Method Using Differential Evolution

    OpenAIRE

    Satish Gajawada; Durga Toshniwal

    2012-01-01

    Differential Evolution (DE) is an algorithm for evolutionary optimization. Clustering problems have been solved by using DE-based clustering methods, but these methods may fail to find clusters hidden in subspaces of high-dimensional datasets. Subspace and projected clustering methods have been proposed in the literature to find the subspace clusters that are present in subspaces of a dataset. In this paper we propose VINAYAKA, a semi-supervised projected clustering method based on DE. In this method DE opt...
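
    The DE-for-clustering core is straightforward to sketch with SciPy: differential evolution searches for k cluster centers that minimize the sum of distances from points to their nearest center. The projected and semi-supervised aspects of VINAYAKA are not reproduced here; the cost function, bounds, and toy data are our illustrative choices.

        # Differential evolution over flattened cluster-center coordinates.
        import numpy as np
        from scipy.optimize import differential_evolution

        X = np.vstack([np.random.randn(50, 2) + c for c in ([0, 0], [5, 5], [0, 5])])
        k, d = 3, X.shape[1]

        def within_cluster_cost(flat_centers):
            centers = flat_centers.reshape(k, d)
            dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=-1)
            return dists.min(axis=1).sum()   # each point joins its nearest center

        bounds = [(X.min(), X.max())] * (k * d)   # one (lo, hi) pair per coordinate
        result = differential_evolution(within_cluster_cost, bounds, seed=0)
        print(result.x.reshape(k, d))             # recovered cluster centers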

  16. Semi-Supervised Transductive Hot Spot Predictor Working on Multiple Assumptions

    KAUST Repository

    Wang, Jim Jing-Yan

    2014-05-23

    Protein-protein interactions are critically dependent on just a few residues (“hot spots”) at the interfaces. Hot spots make a dominant contribution to the binding free energy and if mutated they can disrupt the interaction. As mutagenesis studies require significant experimental efforts, there exists a need for accurate and reliable computational hot spot prediction methods. Compared to supervised hot spot prediction algorithms, semi-supervised prediction methods can take both the labeled and unlabeled residues in the dataset into consideration during the prediction procedure. The transductive support vector machine has been utilized for this task and has demonstrated better prediction performance. To the best of our knowledge, however, none of the transductive semi-supervised algorithms takes all three semi-supervised assumptions, i.e., the smoothness, cluster, and manifold assumptions, into account during learning. In this paper, we propose a novel semi-supervised method for hot spot residue prediction that considers all three semi-supervised assumptions using nonlinear models. Our algorithm, IterPropMCS, works in an iterative manner. In each iteration, the algorithm first propagates the labels of the labeled residues to the unlabeled ones along the shortest path between them on a graph, assuming that they lie on a nonlinear manifold. Then it selects the most confident residues as the labeled ones for the next iteration, according to the cluster and smoothness criteria, implemented by a nonlinear density estimator. Experiments on a benchmark dataset, using protein structure-based features, demonstrate that our approach is effective in predicting hot spots and compares favorably to other available methods. The results also show that our method outperforms the state-of-the-art transductive learning methods.
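
    The iterative propagate-then-select loop can be sketched as follows, with geodesic closeness standing in for the paper's density-based confidence criteria; the graph construction, batch size, and confidence rule are our simplifications of IterPropMCS, not the authors' exact procedure.

        # Labels spread along shortest paths on a kNN graph; only the most
        # confident (geodesically closest) points join the labeled set per round.
        import numpy as np
        from sklearn.neighbors import kneighbors_graph
        from scipy.sparse.csgraph import shortest_path

        def iterative_propagation(X, labels, rounds=5, batch=10, k=8):
            labels = labels.copy()                    # -1 marks unlabeled points
            G = kneighbors_graph(X, k, mode='distance')
            D = shortest_path(G, directed=False)      # all-pairs geodesic distances
            for _ in range(rounds):
                lab = np.where(labels >= 0)[0]
                unlab = np.where(labels < 0)[0]
                if len(unlab) == 0:
                    break
                d = D[np.ix_(unlab, lab)]             # geodesics to labeled points
                nearest = d.argmin(axis=1)
                conf = d.min(axis=1)                  # shorter path = more confident
                take = np.argsort(conf)[:batch]       # most confident this round
                labels[unlab[take]] = labels[lab[nearest[take]]]
            return labels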

  17. Automated segmentation of thyroid gland on CT images with multi-atlas label fusion and random classification forest

    Science.gov (United States)

    Liu, Jiamin; Chang, Kevin; Kim, Lauren; Turkbey, Evrim; Lu, Le; Yao, Jianhua; Summers, Ronald

    2015-03-01

    The thyroid gland plays an important role in clinical practice, especially for radiation therapy treatment planning. For patients with head and neck cancer, radiation therapy requires a precise delineation of the thyroid gland to be spared on the pre-treatment planning CT images to avoid thyroid dysfunction. In the current clinical workflow, the thyroid gland is normally delineated manually by radiologists or radiation oncologists, which is time-consuming and error-prone. Therefore, a system for automated segmentation of the thyroid is desirable. However, automated segmentation of the thyroid is challenging because the thyroid is inhomogeneous and surrounded by structures that have similar intensities. In this work, the thyroid gland segmentation is initially estimated by a multi-atlas label fusion (MALF) algorithm. The segmentation is then refined by supervised, statistical-learning-based voxel labeling with a random forest (RF) algorithm. MALF transfers expert-labeled thyroids from atlases to a target image using deformable registration; errors produced by label transfer are reduced by label fusion, which combines the results produced by all atlases into a consensus solution. The RF step employs an ensemble of decision trees trained on labeled thyroids to recognize features. The trained forest classifier is applied to the thyroid estimated by MALF, scanning voxel by voxel to assign class-conditional probabilities. Voxels from the expert-labeled thyroids in CT volumes are treated as the positive class and background non-thyroid voxels as the negative class. We applied this automated thyroid segmentation system to CT scans of 20 patients. The results showed that MALF achieved an overall Dice Similarity Coefficient (DSC) of 0.75 and that the RF classification further improved the DSC to 0.81.
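
    The refinement stage lends itself to a compact sketch: a random forest is trained on expert-labeled thyroid and background voxels, then scores candidate voxels inside the MALF estimate with a class-conditional probability. Feature extraction is reduced to four toy columns (intensity plus coordinates) purely for illustration; the authors' actual feature set is richer.

        # Random-forest voxel relabeling on toy (intensity, x, y, z) features.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        def refine_segmentation(feat_train, y_train, feat_candidate):
            rf = RandomForestClassifier(n_estimators=100, random_state=0)
            rf.fit(feat_train, y_train)              # 1 = thyroid, 0 = background
            return rf.predict_proba(feat_candidate)[:, 1]

        rng = np.random.default_rng(0)
        train = rng.random((1000, 4))
        y = (train[:, 0] > 0.5).astype(int)          # toy labels from intensity
        probs = refine_segmentation(train, y, rng.random((200, 4)))
        mask = probs > 0.5                           # refined thyroid voxels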

  18. Automated asteroseismic peak detections

    Science.gov (United States)

    de Montellano, A. García Saravia Ortiz; Hekker, S.; Themeßl, N.

    2018-01-01

    Space observatories such as Kepler have provided data that can potentially revolutionise our understanding of stars. Through detailed asteroseismic analyses we are capable of determining fundamental stellar parameters and reveal the stellar internal structure with unprecedented accuracy. However, such detailed analyses, known as peak bagging, have so far been obtained for only a small percentage of the observed stars while most of the scientific potential of the available data remains unexplored. One of the major challenges in peak bagging is identifying how many solar-like oscillation modes are visible in a power density spectrum. Identification of oscillation modes is usually done by visual inspection which is time-consuming and has a degree of subjectivity. Here, we present a peak detection algorithm specially suited for the detection of solar-like oscillations. It reliably characterises the solar-like oscillations in a power density spectrum and estimates their parameters without human intervention. Furthermore, we provide a metric to characterise the false positive and false negative rates to provide further information about the reliability of a detected oscillation mode or the significance of a lack of detected oscillation modes. The algorithm presented here opens the possibility for detailed and automated peak bagging of the thousands of solar-like oscillators observed by Kepler.
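
    A generic version of this detection step is easy to sketch, with SciPy's general-purpose peak finder standing in for the authors' purpose-built detector; the prominence threshold and the toy Lorentzian spectrum are our illustrative assumptions.

        # Candidate oscillation modes as prominent peaks in a power spectrum.
        import numpy as np
        from scipy.signal import find_peaks

        def detect_oscillation_peaks(freq, power, prominence=5.0):
            idx, props = find_peaks(power, prominence=prominence)
            return freq[idx], props["prominences"]

        # Toy spectrum: noise plus three narrow Lorentzian-like mode peaks.
        f = np.linspace(0, 300, 3000)
        spec = np.random.rand(3000)
        for nu in (100.0, 135.0, 170.0):
            spec += 8.0 / (1.0 + ((f - nu) / 0.5) ** 2)
        peaks, prom = detect_oscillation_peaks(f, spec)   # ~100, 135, 170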

  19. NEED OF FINANCIAL INSTITUTIONS SUPERVISION THROUGH A SINGLE FRAMEWORK OF MACRO-PRUDENTIAL SUPERVISION

    Directory of Open Access Journals (Sweden)

    MEDAR LUCIAN-ION

    2013-12-01

    The Joint Committee of the European Supervisory Authorities has required Member States to implement new macro-prudential indicators. Through the national prudential supervision authorities, supplementary supervision will be performed over credit institutions, insurance or reinsurance companies, investment services firms, and investment management firms that belong to a financial conglomerate. The most common ways of giving stability to the financial system relate to the normal functioning of markets, to ensuring the execution of payments in the economy, and especially to achieving quality financial intermediation. Macro-prudential supervision activities concern, first of all, the managerial strengthening of internal control and the assessment and management of risks.

  20. A Graph-Based Semi-Supervised k Nearest-Neighbor Method for Nonlinear Manifold Distributed Data Classification

    OpenAIRE

    Tu, Enmei; Zhang, Yaqian; Zhu, Lin; Yang, Jie; Kasabov, Nikola

    2016-01-01

    $k$ Nearest Neighbors ($k$NN) is one of the most widely used supervised learning algorithms for classifying Gaussian-distributed data, but it does not achieve good results when applied to nonlinear manifold-distributed data, especially when only a very limited number of labeled samples is available. In this paper, we propose a new graph-based $k$NN algorithm which can effectively handle both Gaussian-distributed data and nonlinear manifold-distributed data. To achieve this goal, we first propos...
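
    One way to make kNN manifold-aware, in the spirit of the record above, is to vote over geodesic (graph shortest-path) distances instead of straight Euclidean ones. This is our simplification, not the authors' full algorithm; labels are assumed to be integers 0..C-1, and the graph parameters are illustrative.

        # kNN voting over geodesic distances on a kNN graph.
        import numpy as np
        from sklearn.neighbors import kneighbors_graph
        from scipy.sparse.csgraph import shortest_path

        def geodesic_knn_predict(X, labeled_idx, labeled_y, k_graph=8, k_vote=5):
            G = kneighbors_graph(X, k_graph, mode='distance')
            D = shortest_path(G, directed=False)     # geodesic distance matrix
            preds = np.empty(X.shape[0], dtype=labeled_y.dtype)
            for i in range(X.shape[0]):
                order = np.argsort(D[i, labeled_idx])[:k_vote]     # nearest labeled
                preds[i] = np.bincount(labeled_y[order]).argmax()  # majority vote
            return preds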

  1. Bayesian Safety Risk Modeling of Human-Flightdeck Automation Interaction

    Science.gov (United States)

    Ancel, Ersin; Shih, Ann T.

    2015-01-01

    Since their introduction in the late 1980s, automatic systems in airliners have increased fuel efficiency, added extra capabilities, enhanced safety and reliability, and improved passenger comfort. However, the original automation benefits, including reduced flight crew workload, human error, and training requirements, were not achieved as expected. Instead, automation introduced new failure modes, redistributed and sometimes increased workload, brought in new cognitive and attention demands, and increased training requirements. Modern airliners have numerous flight modes, providing more flexibility (and inherently more complexity) to the flight crew. However, the price to pay for the increased flexibility is the need for increased mode awareness, as well as the need to supervise, understand, and predict automated system behavior. Also, over-reliance on automation is linked to manual flight skill degradation and complacency in commercial pilots. As a result, recent accidents involving human errors are often caused by the interactions between humans and the automated systems (e.g., the breakdown in man-machine coordination), deteriorated manual flying skills, and/or loss of situational awareness due to heavy dependence on automated systems. This paper describes the development of a baseline model of increased complexity and reliance on automation, named FLAP, for FLightdeck Automation Problems. The model development process starts with a comprehensive literature review followed by the construction of a framework comprising high-level causal factors leading to an automation-related flight anomaly. The framework was then converted into a Bayesian Belief Network (BBN) using the Hugin Software v7.8. The effects of automation on the flight crew are incorporated into the model, including flight skill degradation and increased cognitive demand and training requirements, along with their interactions. Besides flight crew deficiencies, automation system

  2. Spirituality and School Counselor Education and Supervision

    Science.gov (United States)

    Gallo, Laura L.

    2014-01-01

    Spirituality is an area that has not received a great deal of attention in supervision, yet it can have substantial effects on the counseling process. A definition of spirituality that allows for a variety of worldviews can be useful to both counselor and client as it helps strengthen the counseling relationship and lessen differences between…

  3. Implementation of Instructional Supervision in Secondary School ...

    African Journals Online (AJOL)

    Teachers may perceive supervision as a worthwhile activity if supervisors give teachers security by backing their judgments even though at times a teacher's judgment can be wrong. Teachers must feel that the supervisor is there to serve them and to help them become more effective teachers. Indeed in secondary schools ...

  4. Keys to Successful Community Health Worker Supervision

    Science.gov (United States)

    Duthie, Patricia; Hahn, Janet S.; Philippi, Evelyn; Sanchez, Celeste

    2012-01-01

    For many years community health workers (CHW) have been important to the implementation of many of our health system's community health interventions. Through this experience, we have recognized some unique challenges in community health worker supervision and have highlighted what we have learned in order to help other organizations effectively…

  5. Treatment and Supervision of Probated Felons.

    Science.gov (United States)

    Moore, James B.; Shearer, Robert A.

    1979-01-01

    Recently published results of a major personality study on probated felons furnish some directions for case-load treatment and supervision. Probation programming guidelines based on these results--the specific personality constructs of manipulation, self-control, and anxiety in probated felons--could enhance probation programming effectiveness.…

  6. How to Supervise a Ph.D.

    Science.gov (United States)

    Connell, R. W.

    1985-01-01

    A discussion of the process and problems of supervising a doctoral candidate examines the nature of the task and relationship, the stages of research (defining the topic, design, gathering material, writing, defense, and dissemination), criticism and intellectual growth, and references and sponsorship. (MSE)

  7. Supervision and Evaluation: The Wyoming Perspective

    Science.gov (United States)

    Range, Bret G.; Scherz, Susan; Holt, Carleton R.; Young, Suzanne

    2011-01-01

    The intent of this study was to assess the perceptions and actions of Wyoming principals concerning their role in supervising and evaluating teachers. A survey was sent to all 286 principals in the state of Wyoming, of whom 143 responded, a response rate of 50%. Findings suggested that principals utilized supervisory behaviors more often…

  8. Making Supervision Relationships Accountable: Graduate Student Logs.

    Science.gov (United States)

    Yeatman, Anna

    1995-01-01

    Graduate student journals of research projects and their supervision are suggested as a means of structuring the supervisory process, making it more accountable, and facilitating students' successful completion of their academic and research tasks. However, the method also requires skill in successful thesis production on the supervisor's part.…

  9. Postgraduate supervision and academic support: students ...

    African Journals Online (AJOL)

    Among other factors, quality is determined by the extent to which students' expectations are met. Data about students' perceptions of supervision provide important information about their expectations and whether these are satisfied. Survey research was employed to determine distance education students' perceptions of their ...

  10. Parallelprocesser og deres tilblivelse i supervision

    DEFF Research Database (Denmark)

    2006-01-01

    The chapter deals with "parallel processes and their emergence in supervision." First, the concept of the parallel process is delineated in its many variations. It is a key concept in psychoanalytic supervision that, broadly speaking, refers to a relational positioning or theme in psychotherap...

  11. Experiencing Variation: Learning Opportunities in Doctoral Supervision

    Science.gov (United States)

    Kobayashi, Sofie; Berge, Maria; Grout, Brian W. W.; Rump, Camilla Østerberg

    2017-01-01

    This study contributes towards a better understanding of learning dynamics in doctoral supervision by analysing how learning opportunities are created in the interaction between supervisors and PhD students, using the notion of experiencing variation as a key to learning. Empirically, we have based the study on four video-recorded sessions, with…

  12. Establish Best Practices for Supervision of Instructors

    Science.gov (United States)

    2012-09-01

    ...directing, and controlling organizational resources” (Daft, 2008, p. 14). More specifically, instructional supervision is focused on the improvement of... Effective supervisors are also active listeners (Daft, 2008; Hughes et al., 2012; Thomas, 1974). Active listening involves focusing on what is being... asking questions, paraphrasing, and repeating the message back to the speaker (Daft, 2008...

  13. Clinical Supervision: Does It Make a Difference?

    Science.gov (United States)

    Pavan, Barbara Nelson

    This paper summarizes research on teacher attitudes and performance under clinical supervision. The literature for comparative studies of inservice personnel (not student teachers) in elementary and secondary schools was extensively searched. The 19 complete and 4 in-progress studies reported are grouped as follows: (1) attitudes toward…

  14. Effective School Management and Supervision: Imperative for ...

    African Journals Online (AJOL)

    DR Nneka

    The education manager is described as the individual in a school setting who directs the affairs of the school in such a way as to achieve its primary goals and objectives. He/she is involved in effective planning, organizing, supervision, controlling and evaluation. Therefore, for a school manager to successfully accomplish ...

  15. Research Supervision: The Research Management Matrix

    Science.gov (United States)

    Maxwell, T. W.; Smyth, Robyn

    2010-01-01

    We briefly make a case for re-conceptualising research project supervision/advising as the consideration of three inter-related areas: the learning and teaching process; developing the student; and producing the research project/outcome as a social practice. We use this as our theoretical base for a heuristic tool, "the research management…

  16. Mentoring and Supervision for Teacher Development.

    Science.gov (United States)

    Reiman, Alan J.; Thies-Sprinthall, Lois

    The fields of instructional supervision, adult development, teacher education and mentoring, and ongoing professional development are synthesized in this text. Examples and case studies are drawn from local systems in North Carolina as well as state, national, and international public school/university consortia to identify emerging trends in…

  17. Cybersupervision: Conducting Supervision on the Information Superhighway.

    Science.gov (United States)

    Coursol, Diane

    The internship experience is an integral part of the graduate program for counselor education students. The APA Code of Ethics and Standards of Practice and the ACPA code of ethics require that students receive regular supervision from site and faculty supervisors during the practicum and internship experiences. However, when student counselors…

  18. School Superintendent's Organizational Structures for Supervision.

    Science.gov (United States)

    Champagne, David W.; Cobbett, Allen A.

    Based on a theoretical paper on effective supervision (reproduced here), this study attempted to investigate superintendents' definitions of, knowledge of, assumptions about, and perceived effects of the supervisory structures and organization they have established in their school districts. Semistructured interviews of six superintendents in…

  19. Bilingual Supervision: Does It Make a Difference?

    Science.gov (United States)

    Farmer, Stephen S.; And Others

    The study attempted to determine whether monolingual English-speaking supervisors and bilingual-bicultural supervisors would provide markedly different supervision management to bilingual (English-Spanish) student clinicians in audiology/speech pathology. After viewing each of 10 video-tapes of Spanish language therapy sessions for 2 preschool…

  20. Response of Vocational Students to Supervision.

    Science.gov (United States)

    Stogdill, Ralph M.

    The subjects were boys in two vocational high schools who had been rated by their teachers as responding favorably or unfavorably to supervision. Small groups were shown one film daily over a period of five days and discussed the role behavior of the supervisor shown in the film. In one group of experimental subjects, positive attitudes toward…