WorldWideScience

Sample records for instance segment version

  1. System administrator's manual (SAM) for the enhanced logistics intratheater support tool (ELIST) database instance segment version 8.1.0.0 for solaris 7.; TOPICAL

    International Nuclear Information System (INIS)

    Dritz, K.

    2002-01-01

    This document is the System Administrator's Manual (SAM) for the Enhanced Logistics Intratheater Support Tool (ELIST) Database Instance Segment. It covers errors that can arise during the segment's installation and deinstallation, and it outlines appropriate recovery actions. It also tells how to change the password for the SYSTEM account of the database instance after the instance is created, and it discusses the creation of a suitable database instance for ELIST by means other than the installation of the segment. The latter subject is covered in more depth than its introductory discussion in the Installation Procedures (IP) for the Enhanced Logistics Intratheater Support Tool (ELIST) Global Data Segment, Database Instance Segment, Database Fill Segment, Database Segment, Database Utility Segment, Software Segment, and Reference Data Segment (referred to in portions of this document as the ELIST IP). The information in this document is expected to be of use only rarely. Other than errors arising from the failure to follow instructions, difficulties are not expected to be encountered during the installation or deinstallation of the segment. By the same token, the need to create a database instance for ELIST by means other than the installation of the segment is expected to be the exception, rather than the rule. Most administrators will only need to be aware of the help that is provided in this document and will probably not actually need to read and make use of it.

  2. Software test plan/description/report (STP/STD/STR) for the enhanced logistics intratheater support tool (ELIST) global data segment. Version 8.1.0.0, Database Instance Segment Version 8.1.0.0, ...[elided] and Reference Data Segment Version 8.1.0.0 for Solaris 7; TOPICAL

    International Nuclear Information System (INIS)

    Dritz, K.; Absil-Mills, M.; Jacobs, K.

    2002-01-01

    This document is the Software Test Plan/Description/Report (STP/STD/STR) for the DII COE Enhanced Logistics Intratheater Support Tool (ELIST) mission application. It combines in one document the information normally presented separately in a Software Test Plan, a Software Test Description, and a Software Test Report; it also presents this information in one place for all the segments of the ELIST mission application. The primary purpose of this document is to show that ELIST has been tested by the developer and found, by that testing, to install, deinstall, and work properly. The information presented here is detailed enough to allow the reader to repeat the testing independently. The remainder of this document is organized as follows. Section 1.1 identifies the ELIST mission application. Section 2 is the list of all documents referenced in this document. Section 3, the Software Test Plan, outlines the testing methodology and scope, the latter by way of a concise summary of the tests performed. Section 4 presents detailed descriptions of the tests, along with the expected and observed results; that section therefore combines the information normally found in a Software Test Description and a Software Test Report. The remaining small sections present supplementary information. Throughout this document, the phrase ELIST IP refers to the Installation Procedures (IP) for the Enhanced Logistics Intratheater Support Tool (ELIST) Global Data Segment, Database Instance Segment, Database Fill Segment, Database Segment, Database Utility Segment, Software Segment, and Reference Data Segment.

  3. Installation procedures (IP) for the enhanced logistics intratheater support tool (ELIST) global data segment version 8.1.0.0, database instance segment version 8.1.0.0, ...[elided] and reference data segment version 8.1.0.0 for solaris 7.; TOPICAL

    International Nuclear Information System (INIS)

    Dritz, K.

    2002-01-01

    This document is the Installation Procedures (IP) for the DII COE Enhanced Logistics Intratheater Support Tool (ELIST) mission application. It tells how to install and deinstall the seven segments of the mission application.

  4. Enhanced Sensitivity to Subphonemic Segments in Dyslexia: A New Instance of Allophonic Perception

    Science.gov (United States)

    Serniclaes, Willy; Seck, M’ballo

    2018-01-01

    Although dyslexia can be individuated in many different ways, it has only three discernable sources: a visual deficit that affects the perception of letters, a phonological deficit that affects the perception of speech sounds, and an audio-visual deficit that disturbs the association of letters with speech sounds. However, the very nature of each of these core deficits remains debatable. The phonological deficit in dyslexia, which is generally attributed to a deficit of phonological awareness, might result from a specific mode of speech perception characterized by the use of allophonic (i.e., subphonemic) units. Here we will summarize the available evidence and present new data in support of the “allophonic theory” of dyslexia. Previous studies have shown that the deficit in the categorical perception of phonemic features (e.g., the voicing contrast between /t/ and /d/) seen in dyslexia is due to an enhanced sensitivity to allophonic features (e.g., the difference between two variants of /d/). Another consequence of allophonic perception is that it should also give rise to an enhanced sensitivity to allophonic segments, such as those that take place within a consonant cluster. This latter prediction is validated by the data presented in this paper. PMID:29587419

  5. Instance annotation for multi-instance multi-label learning

    Science.gov (United States)

    F. Briggs; X.Z. Fern; R. Raich; Q. Lou

    2013-01-01

    Multi-instance multi-label learning (MIML) is a framework for supervised classification where the objects to be classified are bags of instances associated with multiple labels. For example, an image can be represented as a bag of segments and associated with a list of objects it contains. Prior work on MIML has focused on predicting label sets for previously unseen...
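
    The MIML setting described in this abstract can be made concrete with a minimal data-structure sketch (the class and field names below are illustrative, not from the paper): each object is a bag of instance feature vectors paired with a set of labels.

```python
import numpy as np

# Illustrative MIML bag: a set of instance feature vectors (e.g., image
# segments) associated with a set of labels (e.g., objects in the image).
class MIMLBag:
    def __init__(self, instances, labels):
        self.instances = np.asarray(instances)  # shape: (n_instances, n_features)
        self.labels = set(labels)               # label set, not a single label

# An image represented as a bag of 3 segment feature vectors,
# labelled with the objects it contains.
bag = MIMLBag([[0.1, 0.9], [0.4, 0.4], [0.8, 0.2]], ["sky", "tree"])
print(bag.instances.shape)  # (3, 2)
print(sorted(bag.labels))   # ['sky', 'tree']
```

    The uncertainty that MIML methods must resolve is which instances in the bag are responsible for which labels.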

  6. Unwinding focal segmental glomerulosclerosis [version 1; referees: 3 approved]

    Directory of Open Access Journals (Sweden)

    Vasil Peev

    2017-04-01

    Focal segmental glomerulosclerosis (FSGS) represents the most common primary glomerular disease responsible for the development of end-stage renal disease (ESRD) in the United States (US). The disease progresses from podocyte injury to chronic kidney disease (CKD), ultimately leading to total nephron degeneration. Extensive basic science research has been conducted to unwind the mechanisms of FSGS and, with those insights, understand major contributors of CKD in general. As a result, several putative molecules and pathways have been studied, all implicated in the disease; some serve, in addition, as early biomarkers. The ongoing research is currently focusing on understanding how these molecules and pathways can interplay and be utilized as potential diagnostic and therapeutic targets. Among these molecules, the soluble urokinase plasminogen activating receptor (suPAR) has been studied in detail, both clinically and from a basic science perspective. By now, it has emerged as the earliest and most robust marker of future CKD. Other circulating factors harming podocytes include anti-CD40 auto-antibody and possibly cardiotrophin-like cytokine factor-1. Understanding these factors will aid our efforts to ultimately cure FSGS and possibly treat a larger portion of CKD patients much more effectively.

  7. Introduction to the enhanced logistics intratheater support tool (ELIST) mission application and its segments : global data segment version 8.1.0.0, database instance segment version 8.1.0.0, ...[elided] and reference data segment version 8.1.0.0 for solaris 7.; TOPICAL

    International Nuclear Information System (INIS)

    Dritz, K.

    2002-01-01

    The ELIST mission application simulates and evaluates the feasibility of intratheater transportation logistics primarily for the theater portion of a course of action. It performs a discrete event simulation of a series of movement requirements over a constrained infrastructure network using specified transportation assets. ELIST addresses the question of whether transportation infrastructures and lift allocations are adequate to support the movements of specific force structures and supplies to their destinations on time.

  8. System administrator's manual (SAM) for the enhanced logistics intratheater support tool (ELIST) database segment version 8.1.0.0 for solaris 7.; TOPICAL

    International Nuclear Information System (INIS)

    Dritz, K.

    2002-01-01

    This document is the System Administrator's Manual (SAM) for the Enhanced Logistics Intratheater Support Tool (ELIST) Database Segment. It covers errors that can arise during the segment's installation and deinstallation, and it outlines appropriate recovery actions. It also tells how to extend the database storage available to Oracle if a datastore becomes filled during the use of ELIST. The latter subject builds on some of the actions that must be performed when installing this segment, as documented in the Installation Procedures (IP) for the Enhanced Logistics Intratheater Support Tool (ELIST) Global Data Segment, Database Instance Segment, Database Fill Segment, Database Segment, Database Utility Segment, Software Segment, and Reference Data Segment (referred to in portions of this document as the ELIST IP). The information in this document is expected to be of use only rarely. Other than errors arising from the failure to follow instructions, difficulties are not expected to be encountered during the installation or deinstallation of the segment. The need to extend database storage likewise typically arises infrequently. Most administrators will only need to be aware of the help that is provided in this document and will probably not actually need to read and make use of it.

  9. Instance-specific algorithm configuration

    CERN Document Server

    Malitsky, Yuri

    2014-01-01

    This book presents a modular and expandable technique in the rapidly emerging research area of automatic configuration and selection of the best algorithm for the instance at hand. The author presents the basic model behind ISAC and then details a number of modifications and practical applications. In particular, he addresses automated feature generation, offline algorithm configuration for portfolio generation, algorithm selection, adaptive solvers, online tuning, and parallelization. The author's related thesis was honorably mentioned (runner-up) for the ACP Dissertation Award in 2014.

  10. Boosting instance prototypes to detect local dermoscopic features.

    Science.gov (United States)

    Situ, Ning; Yuan, Xiaojing; Zouridakis, George

    2010-01-01

    Local dermoscopic features are useful in many dermoscopic criteria for skin cancer detection. We address the problem of detecting local dermoscopic features from epiluminescence (ELM) microscopy skin lesion images. We formulate the recognition of local dermoscopic features as a multi-instance learning (MIL) problem. We employ the method of diverse density (DD) and evidence confidence (EC) function to convert MIL to a single-instance learning (SIL) problem. We apply Adaboost to improve the classification performance with support vector machines (SVMs) as the base classifier. We also propose to boost the selection of instance prototypes through changing the data weights in the DD function. We validate the methods on detecting ten local dermoscopic features from a dataset with 360 images. We compare the performance of the MIL approach, its boosting version, and a baseline method without using MIL. Our results show that boosting can provide performance improvement compared to the other two methods.
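
    The MIL-to-SIL conversion this abstract relies on can be sketched in a simplified form (the function and variable names are illustrative; the paper's exact evidence-confidence function is not reproduced here): each bag is reduced to a single instance by selecting the instance most similar to a prototype, in the spirit of the diverse-density approach.

```python
import numpy as np

# Hedged sketch: collapse a MIL bag to a single representative instance by
# picking the instance nearest to a given prototype point in feature space.
def bag_to_single_instance(bag, prototype):
    bag = np.asarray(bag, dtype=float)
    d = np.linalg.norm(bag - prototype, axis=1)  # distance of each instance
    return bag[np.argmin(d)]                     # most prototype-like instance

proto = np.array([1.0, 1.0])                     # a candidate concept point
bag = [[0.0, 0.0], [0.9, 1.1], [2.0, 2.0]]
print(bag_to_single_instance(bag, proto))        # [0.9 1.1]
```

    Once every bag is a single vector, any standard classifier (an SVM in the paper) can be trained and boosted on the converted data.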

  11. Multi-instance learning based on instance consistency for image retrieval

    Science.gov (United States)

    Zhang, Miao; Wu, Zhize; Wan, Shouhong; Yue, Lihua; Yin, Bangjie

    2017-07-01

    Multiple-instance learning (MIL) has been successfully utilized in image retrieval. Existing approaches cannot select positive instances correctly from positive bags, which may result in low accuracy. In this paper, we propose a new image retrieval approach called multiple instance learning based on instance-consistency (MILIC) to mitigate this issue. First, we select potential positive instances effectively in each positive bag by ranking instance-consistency (IC) values of instances. Then, we design a feature representation scheme, which can represent the relationship among bags and instances, based on potential positive instances to convert a bag into a single instance. Finally, we can use a standard single-instance learning strategy, such as the support vector machine, for performing object-based image retrieval. Experimental results on two challenging data sets show the effectiveness of our proposal in terms of accuracy and run time.

  12. Learning concept mappings from instance similarity

    NARCIS (Netherlands)

    Wang, S.; Englebienne, G.; Schlobach, S.

    2008-01-01

    Finding mappings between compatible ontologies is an important but difficult open problem. Instance-based methods for solving this problem have the advantage of focusing on the most active parts of the ontologies and reflect concept semantics as they are actually being used. However, such methods...

  13. An instance theory of associative learning.

    Science.gov (United States)

    Jamieson, Randall K; Crump, Matthew J C; Hannah, Samuel D

    2012-03-01

    We present and test an instance model of associative learning. The model, Minerva-AL, treats associative learning as cued recall. Memory preserves the events of individual trials in separate traces. A probe presented to memory contacts all traces in parallel and retrieves a weighted sum of the traces, a structure called the echo. Learning of a cue-outcome relationship is measured by the cue's ability to retrieve a target outcome. The theory predicts a number of associative learning phenomena, including acquisition, extinction, reacquisition, conditioned inhibition, external inhibition, latent inhibition, discrimination, generalization, blocking, overshadowing, overexpectation, superconditioning, recovery from blocking, recovery from overshadowing, recovery from overexpectation, backward blocking, backward conditioned inhibition, and second-order retrospective revaluation. We argue that associative learning is consistent with an instance-based approach to learning and memory.
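
    The cued-recall mechanism described above can be sketched briefly in the style of MINERVA-type instance models (a minimal sketch, not the full Minerva-AL specification): a probe activates every stored trace in proportion to its similarity, and the echo is the activation-weighted sum of the traces.

```python
import numpy as np

# A probe contacts all traces in parallel; activation is similarity cubed
# (as in MINERVA 2), and the echo is the activation-weighted sum of traces.
def echo(probe, traces):
    probe = np.asarray(probe, dtype=float)
    traces = np.asarray(traces, dtype=float)
    sims = traces @ probe / (np.linalg.norm(traces, axis=1) * np.linalg.norm(probe))
    acts = sims ** 3                 # cubing sharpens retrieval
    return acts @ traces             # the echo

# Features 0-3 encode cues, features 4-5 encode an outcome.
traces = np.array([[1, 1, 0, 0, 1, 1],    # trial: cue A followed by outcome X
                   [0, 0, 1, 1, 0, 0]])   # trial: cue B, no outcome
probe = np.array([1, 1, 0, 0, 0, 0])      # present cue A alone
e = echo(probe, traces)
print(e[4] > e[2])  # the echo retrieves outcome X, not cue-B features
```

    Learning of a cue-outcome relationship then corresponds to the cue's growing ability to retrieve the outcome features in the echo.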

  14. Dissimilarity-based multiple instance learning

    DEFF Research Database (Denmark)

    Sørensen, Lauge; Loog, Marco; Tax, David M. J.

    2010-01-01

    In this paper, we propose to solve multiple instance learning problems using a dissimilarity representation of the objects. Once the dissimilarity space has been constructed, the problem is turned into a standard supervised learning problem that can be solved with a general purpose supervised cla...... between distributions of within- and between set point distances, thereby taking relations within and between sets into account. Experiments on five publicly available data sets show competitive performance in terms of classification accuracy compared to previously published results....
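
    The dissimilarity-space construction mentioned above can be sketched as follows (a minimal illustration; the paper compares several set dissimilarity measures, of which the mean nearest-neighbour distance used here is only one option): each bag is embedded as a vector of its dissimilarities to a set of prototype bags, turning MIL into a standard supervised problem.

```python
import numpy as np

# One common set dissimilarity: the mean, over instances in bag a, of the
# distance to the nearest instance in bag b.
def bag_dissimilarity(a, b):
    a, b = np.asarray(a, float), np.asarray(b, float)
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)  # pairwise distances
    return d.min(axis=1).mean()

# Embed each bag by its dissimilarities to prototype bags (here, all bags).
bags = [[[0, 0], [1, 1]], [[0.1, 0.1], [0.9, 1.0]], [[5, 5], [6, 6]]]
X = np.array([[bag_dissimilarity(b, p) for p in bags] for b in bags])
print(X.shape)  # (3, 3) -- one row per bag, ready for any vector classifier
```

    Rows of X can now be fed to a general-purpose supervised classifier, which is the point of the dissimilarity representation.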

  15. Multiple Time-Instances Features of Degraded Speech for Single Ended Quality Measurement

    Directory of Open Access Journals (Sweden)

    Rajesh Kumar Dubey

    2017-01-01

    The use of single time-instance features, where the entire speech utterance is used for feature computation, is neither accurate nor adequate in capturing the time-localized information of short-time transient distortions and their distinction from plosive sounds of speech, particularly speech degraded by impulsive noise. Hence, the importance of estimating features at multiple time-instances is sought. In this work, only active speech segments of degraded speech are used for feature computation at multiple time-instances on a per-frame basis. Here, active speech means both voiced and unvoiced frames, excluding silence. The features of different combinations of multiple contiguous active speech segments are computed and called multiple time-instances features. The joint GMM training has been done using these features along with the subjective MOS of the corresponding speech utterance to obtain the parameters of the GMM. These parameters of the GMM and the multiple time-instances features of test speech are used to compute the objective MOS values of different combinations of multiple contiguous active speech segments. The overall objective MOS of the test speech utterance is obtained by assigning equal weight to the objective MOS values of the different combinations of multiple contiguous active speech segments. This algorithm outperforms the Recommendation ITU-T P.563 and recently published algorithms.

  16. Basics of XBRL Instance for Financial Reporting

    Directory of Open Access Journals (Sweden)

    Mihaela Enachi

    2011-10-01

    The development of XBRL (eXtensible Business Reporting Language) for financial reporting has significantly changed the way in which financial statements are presented to different users and, implicitly, the quantity and quality of information provided through such a modern format. Following a standard structure, yet adaptable to the regulations of different countries or regions of the world, we can communicate and process financial accounting information more efficiently and effectively. This paper tries to clarify the manner of preparation and presentation of the financial statements when using XBRL as a reporting tool. Keywords: XML, XBRL, financial reporting, specification, taxonomy, instance

  17. Object instance recognition using motion cues and instance specific appearance models

    Science.gov (United States)

    Schumann, Arne

    2014-03-01

    In this paper we present an object instance retrieval approach. The baseline approach consists of a pool of image features which are computed on the bounding boxes of a query object track and compared to a database of tracks in order to find additional appearances of the same object instance. We improve over this simple baseline approach in multiple ways: 1) we include motion cues to achieve improved robustness to viewpoint and rotation changes, 2) we include operator feedback to iteratively re-rank the resulting retrieval lists and 3) we use operator feedback and location constraints to train classifiers and learn an instance specific appearance model. We use these classifiers to further improve the retrieval results. The approach is evaluated on two popular public datasets for two different applications. We evaluate person re-identification on the CAVIAR shopping mall surveillance dataset and vehicle instance recognition on the VIVID aerial dataset and achieve significant improvements over our baseline results.

  18. Speaker segmentation and clustering

    OpenAIRE

    Kotti, M; Moschou, V; Kotropoulos, C

    2008-01-01

    This survey focuses on two challenging speech processing topics, namely: speaker segmentation and speaker clustering. Speaker segmentation aims at finding speaker change points in an audio stream, whereas speaker clustering aims at grouping speech segments based on speaker characteristics. Model-based, metric-based, and hybrid speaker segmentation algorithms are reviewed. Concerning speaker...

  19. A critique of medicalisation: three instances.

    Science.gov (United States)

    Ryang, Sonia

    2017-12-01

    By briefly exploring three different examples in which the existence of mental illness and developmental delay has been presumed, this paper sheds light on the emergence of what Foucault calls a regime of truth, i.e. the process by which something that does not exist is made to exist through the construction of a system of truth around it. The first example concerns the direct marketing of pharmaceutical products to consumers in the US, the second the use of psychology in semi-post-Cold War Korea, and the third the persisting authority of psychology in the treatment of the developmentally delayed. While these instances are not innately connected, looking at them as part of the process by which authoritative knowledge is established will help us understand, albeit partially, the mechanism by which mental illness penetrates our lives as truth, and how this regime of truth is supported by the authority of psychology, psychiatry and psychoanalysis, what Foucault calls the 'psy-function,' reinforcing the medicalisation of our lives.

  20. User's manual (UM) for the enhanced logistics intratheater support tool (ELIST) database utility segment version 8.1.0.0 for solaris 7.; TOPICAL

    International Nuclear Information System (INIS)

    Dritz, K.

    2002-01-01

    This document is the User's Manual (UM) for the Enhanced Logistics Intratheater Support Tool (ELIST) Database Utility Segment. It tells how to use its features to administer ELIST database user accounts.

  1. Parallel multiple instance learning for extremely large histopathology image analysis.

    Science.gov (United States)

    Xu, Yan; Li, Yeshu; Shen, Zhengyang; Wu, Ziwei; Gao, Teng; Fan, Yubo; Lai, Maode; Chang, Eric I-Chao

    2017-08-03

    Histopathology images are critical for medical diagnosis, e.g., cancer and its treatment. A standard histopathology slice can be easily scanned at a high resolution of, say, 200,000×200,000 pixels. These high resolution images can make most existing image processing tools infeasible or less effective when operated on a single machine with limited memory, disk space and computing power. In this paper, we propose an algorithm tackling this newly emerging "big data" problem utilizing parallel computing on High-Performance-Computing (HPC) clusters. Experimental results on a large-scale data set (1318 images at a scale of 10 billion pixels each) demonstrate the efficiency and effectiveness of the proposed algorithm for low-latency real-time applications. The proposed framework is an effective and efficient system for extremely large histopathology image analysis. It is based on the multiple instance learning formulation for weakly-supervised learning for image classification, segmentation and clustering. When a max-margin concept is adopted for different clusters, we obtain further improvement in clustering performance.

  2. Multiple-instance learning as a classifier combining problem

    DEFF Research Database (Denmark)

    Li, Yan; Tax, David M. J.; Duin, Robert P. W.

    2013-01-01

    In multiple-instance learning (MIL), an object is represented as a bag consisting of a set of feature vectors called instances. In the training set, the labels of bags are given, while the uncertainty comes from the unknown labels of instances in the bags. In this paper, we study MIL with the ass...

  3. Sparse Representation Based Multi-Instance Learning for Breast Ultrasound Image Classification.

    Science.gov (United States)

    Bing, Lu; Wang, Wei

    2017-01-01

    We propose a novel method based on sparse representation for breast ultrasound image classification under the framework of multi-instance learning (MIL). After image enhancement and segmentation, concentric circle is used to extract the global and local features for improving the accuracy in diagnosis and prediction. The classification problem of ultrasound image is converted to sparse representation based MIL problem. Each instance of a bag is represented as a sparse linear combination of all basis vectors in the dictionary, and then the bag is represented by one feature vector which is obtained via sparse representations of all instances within the bag. The sparse and MIL problem is further converted to a conventional learning problem that is solved by relevance vector machine (RVM). Results of single classifiers are combined to be used for classification. Experimental results on the breast cancer datasets demonstrate the superiority of the proposed method in terms of classification accuracy as compared with state-of-the-art MIL methods.

  4. User's manual (UM) for the enhanced logistics intratheater support tool (ELIST) software segment version 8.1.0.0 for solaris 7.; TOPICAL

    International Nuclear Information System (INIS)

    Dritz, K.

    2002-01-01

    This document is the User's Manual (UM) for the Enhanced Logistics Intratheater Support Tool (ELIST) Software Segment. It tells how to use the end-user and administrative features of the segment. The instructions in Sections 4.2.1, 5.3.1, and 5.3.2 for the end-user features (Run ELIST and Run ETEdit) only cover the launching of those features in the DII COE environment; full details on the operation of ELIST and ETEdit in any environment can be found in the documents listed in Section 2.1.3 and referenced elsewhere in this document. On the other hand, complete instructions for the administrative features (Add Map Data and Delete Map Data) are presented in Sections 4.2.2, 5.3.3, and 5.3.4 of this document.

  5. Experimental Matching of Instances to Heuristics for Constraint Satisfaction Problems.

    Science.gov (United States)

    Moreno-Scott, Jorge Humberto; Ortiz-Bayliss, José Carlos; Terashima-Marín, Hugo; Conant-Pablos, Santiago Enrique

    2016-01-01

    Constraint satisfaction problems are of special interest for the artificial intelligence and operations research community due to their many applications. Although heuristics involved in solving these problems have largely been studied in the past, little is known about the relation between instances and the respective performance of the heuristics used to solve them. This paper focuses on both the exploration of the instance space to identify relations between instances and good performing heuristics and how to use such relations to improve the search. Firstly, the document describes a methodology to explore the instance space of constraint satisfaction problems and evaluate the corresponding performance of six variable ordering heuristics for such instances in order to find regions on the instance space where some heuristics outperform the others. Analyzing such regions favors the understanding of how these heuristics work and contribute to their improvement. Secondly, we use the information gathered from the first stage to predict the most suitable heuristic to use according to the features of the instance currently being solved. This approach proved to be competitive when compared against the heuristics applied in isolation on both randomly generated and structured instances of constraint satisfaction problems.
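
    The second stage described above, predicting a suitable heuristic from instance features, can be sketched with a nearest-neighbour rule (a minimal illustration; the feature names and heuristic labels are made up, and the paper's actual prediction model may differ).

```python
import numpy as np

# Predict a variable-ordering heuristic for a new CSP instance by looking
# up the nearest known instance in feature space.
def predict_heuristic(features, known_features, known_best):
    d = np.linalg.norm(np.asarray(known_features, float) - np.asarray(features, float), axis=1)
    return known_best[int(d.argmin())]

# Toy instance space: two illustrative features per instance
# (e.g., constraint density and constraint tightness).
known_features = [[0.2, 0.1], [0.8, 0.9], [0.7, 0.8]]
known_best = ["min-domain", "max-degree", "max-degree"]  # best heuristic observed
print(predict_heuristic([0.75, 0.85], known_features, known_best))  # max-degree
```

    In this scheme, the exploration stage supplies the labelled (features, best heuristic) pairs; prediction is then a cheap lookup at solve time.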

  6. Multi-instance dictionary learning via multivariate performance measure optimization

    KAUST Repository

    Wang, Jim Jing-Yan

    2016-12-29

    The multi-instance dictionary plays a critical role in multi-instance data representation. Meanwhile, different multi-instance learning applications are evaluated by specific multivariate performance measures. For example, multi-instance ranking reports the precision and recall. It is not difficult to see that to obtain different optimal performance measures, different dictionaries are needed. This observation motivates us to learn performance-optimal dictionaries for this problem. In this paper, we propose a novel joint framework for learning the multi-instance dictionary and the classifier to optimize a given multivariate performance measure, such as the F1 score and precision at rank k. We propose to represent the bags as bag-level features via the bag-instance similarity, and learn a classifier in the bag-level feature space to optimize the given performance measure. We propose to minimize the upper bound of a multivariate loss corresponding to the performance measure, the complexity of the classifier, and the complexity of the dictionary, simultaneously, with regard to both the dictionary and the classifier parameters. In this way, the dictionary learning is regularized by the performance optimization, and a performance-optimal dictionary is obtained. We develop an iterative algorithm to solve this minimization problem efficiently using a cutting-plane algorithm and a coordinate descent method. Experiments on multi-instance benchmark data sets show its advantage over both traditional multi-instance learning and performance optimization methods.

  7. Multi-instance dictionary learning via multivariate performance measure optimization

    KAUST Repository

    Wang, Jim Jing-Yan; Tsang, Ivor Wai-Hung; Cui, Xuefeng; Lu, Zhiwu; Gao, Xin

    2016-01-01

    The multi-instance dictionary plays a critical role in multi-instance data representation. Meanwhile, different multi-instance learning applications are evaluated by specific multivariate performance measures. For example, multi-instance ranking reports the precision and recall. It is not difficult to see that to obtain different optimal performance measures, different dictionaries are needed. This observation motivates us to learn performance-optimal dictionaries for this problem. In this paper, we propose a novel joint framework for learning the multi-instance dictionary and the classifier to optimize a given multivariate performance measure, such as the F1 score and precision at rank k. We propose to represent the bags as bag-level features via the bag-instance similarity, and learn a classifier in the bag-level feature space to optimize the given performance measure. We propose to minimize the upper bound of a multivariate loss corresponding to the performance measure, the complexity of the classifier, and the complexity of the dictionary, simultaneously, with regard to both the dictionary and the classifier parameters. In this way, the dictionary learning is regularized by the performance optimization, and a performance-optimal dictionary is obtained. We develop an iterative algorithm to solve this minimization problem efficiently using a cutting-plane algorithm and a coordinate descent method. Experiments on multi-instance benchmark data sets show its advantage over both traditional multi-instance learning and performance optimization methods.

  8. Comparative analysis of instance selection algorithms for instance-based classifiers in the context of medical decision support

    International Nuclear Information System (INIS)

    Mazurowski, Maciej A; Tourassi, Georgia D; Malof, Jordan M

    2011-01-01

    When constructing a pattern classifier, it is important to make best use of the instances (a.k.a. cases, examples, patterns or prototypes) available for its development. In this paper we present an extensive comparative analysis of algorithms that, given a pool of previously acquired instances, attempt to select those that will be the most effective to construct an instance-based classifier in terms of classification performance, time efficiency and storage requirements. We evaluate seven previously proposed instance selection algorithms and compare their performance to simple random selection of instances. We perform the evaluation using k-nearest neighbor classifier and three classification problems: one with simulated Gaussian data and two based on clinical databases for breast cancer detection and diagnosis, respectively. Finally, we evaluate the impact of the number of instances available for selection on the performance of the selection algorithms and conduct initial analysis of the selected instances. The experiments show that for all investigated classification problems, it was possible to reduce the size of the original development dataset to less than 3% of its initial size while maintaining or improving the classification performance. Random mutation hill climbing emerges as the superior selection algorithm. Furthermore, we show that some previously proposed algorithms perform worse than random selection. Regarding the impact of the number of instances available for the classifier development on the performance of the selection algorithms, we confirm that the selection algorithms are generally more effective as the pool of available instances increases. In conclusion, instance selection is generally beneficial for instance-based classifiers as it can improve their performance, reduce their storage requirements and improve their response time. However, choosing the right selection algorithm is crucial.
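
    The winning algorithm in this comparison, random mutation hill climbing, can be sketched compactly (a minimal sketch with illustrative parameter values, not the authors' exact protocol): repeatedly flip one instance in or out of the selected subset and keep the flip only if 1-NN accuracy on a validation set does not drop.

```python
import random
import numpy as np

def one_nn_accuracy(train_X, train_y, val_X, val_y):
    # 1-nearest-neighbour accuracy of the selected training instances
    d = np.linalg.norm(val_X[:, None, :] - train_X[None, :, :], axis=2)
    return float(np.mean(train_y[d.argmin(axis=1)] == val_y))

def rmhc_select(X, y, val_X, val_y, iters=200, seed=0):
    rng = random.Random(seed)
    mask = np.ones(len(X), dtype=bool)          # start with all instances
    best = one_nn_accuracy(X[mask], y[mask], val_X, val_y)
    for _ in range(iters):
        i = rng.randrange(len(X))
        mask[i] = ~mask[i]                      # mutate: flip one instance
        if mask.sum() == 0:
            mask[i] = ~mask[i]                  # keep at least one instance
            continue
        acc = one_nn_accuracy(X[mask], y[mask], val_X, val_y)
        if acc >= best:
            best = acc                          # accept neutral/improving flips
        else:
            mask[i] = ~mask[i]                  # revert harmful flips
    return mask, best

# Toy run on two well-separated Gaussian classes.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (20, 2)), rng.normal(4, 1, (20, 2))])
y = np.array([0] * 20 + [1] * 20)
mask, acc = rmhc_select(X, y, X, y)
print(int(mask.sum()), round(acc, 2))
```

    In practice the validation set should be held out from the selection pool; it is reused here only to keep the sketch short.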

  9. Towards End-to-End Lane Detection: an Instance Segmentation Approach

    OpenAIRE

    Neven, Davy; De Brabandere, Bert; Georgoulis, Stamatios; Proesmans, Marc; Van Gool, Luc

    2018-01-01

    Modern cars are incorporating an increasing number of driver assist features, among which automatic lane keeping. The latter allows the car to properly position itself within the road lanes, which is also crucial for any subsequent lane departure or trajectory planning decision in fully autonomous cars. Traditional lane detection methods rely on a combination of highly-specialized, hand-crafted features and heuristics, usually followed by post-processing techniques, that are computationally e...

  10. Multimodality and children's participation in classrooms: Instances of ...

    African Journals Online (AJOL)

    Multimodality and children's participation in classrooms: Instances of research. ... deficit models of children, drawing on their everyday experiences and their existing ... It outlines the theoretical framework supporting the pedagogical approach, ...

  11. Memory for Instances and Categories in Children and Adults

    Science.gov (United States)

    Tighe, Thomas J.; And Others

    1975-01-01

    Two studies of 7-year-olds and college students tested the hypothesis of a developmental difference in the degree to which subjects' memory performance was controlled by categorical properties vs. specific instance properties of test items. (GO)

  12. Feature Subset Selection and Instance Filtering for Cross-project Defect Prediction - Classification and Ranking

    Directory of Open Access Journals (Sweden)

    Faimison Porto

    2016-12-01

    Full Text Available Defect prediction models can be a good tool for organizing a project's test resources. The models can be constructed with two main goals: 1) to classify the software parts as defective or not; or 2) to rank the most defective parts in decreasing order. However, not all companies maintain an appropriate set of historical defect data. In this case, a company can build an appropriate dataset from known external projects - called Cross-project Defect Prediction (CPDP). CPDP models, however, present low prediction performance due to the heterogeneity of the data. Recently, Instance Filtering methods were proposed in order to reduce this heterogeneity by selecting the most similar instances from the training dataset. Originally, the similarity is calculated based on all the available dataset features (or independent variables). We propose that using only the most relevant features in the similarity calculation can result in more accurate filtered datasets and better prediction performance. In this study we extend our previous work. We analyse both prediction goals - Classification and Ranking. We present an empirical evaluation of 41 different methods by associating Instance Filtering methods with Feature Selection methods. We used 36 versions of 11 open source projects in the experiments. The results show similar evidence for both prediction goals. First, the defect prediction performance of CPDP models can be improved by associating Feature Selection and Instance Filtering. Second, no evaluated method presented generally better performance. Indeed, the most appropriate method can vary according to the characteristics of the project being predicted.
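The instance-filtering idea described above, keeping only the external training instances most similar to the test set while measuring similarity on a chosen feature subset, can be sketched as follows. The function name, the k-nearest selection rule, and the toy rows are assumptions for illustration; the paper evaluates many concrete method combinations.

```python
from math import dist

def filter_instances(train, test, k=2, features=None):
    """For each test instance, keep its k nearest training instances,
    measuring distance only on the given feature indices, i.e. feature
    selection applied inside the similarity calculation."""
    def proj(row):
        return [row[i] for i in features] if features else list(row)
    kept = set()
    for t in test:
        ranked = sorted(range(len(train)),
                        key=lambda i: dist(proj(train[i]), proj(t)))
        kept.update(ranked[:k])
    return [train[i] for i in sorted(kept)]

# Rows are (metric_a, metric_b); suppose only metric_a (index 0) is relevant.
train = [(1.0, 9.0), (1.1, 0.0), (5.0, 5.0), (9.0, 9.0)]
test = [(1.05, 4.0)]
subset = filter_instances(train, test, k=2, features=[0])
```

Restricting the distance to the relevant feature keeps the two rows that agree with the test instance on that metric, even though they disagree wildly on the irrelevant one.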

  13. Segmentation-Driven Tomographic Reconstruction

    DEFF Research Database (Denmark)

    Kongskov, Rasmus Dalgas

    The tomographic reconstruction problem is concerned with creating a model of the interior of an object from some measured data, typically projections of the object. After reconstructing an object it is often desired to segment it, either automatically or manually. For computed tomography (CT...... such that the segmentation subsequently can be carried out by use of a simple segmentation method, for instance just a thresholding method. We tested the advantages of going from a two-stage reconstruction method to a one-stage segmentation-driven reconstruction method for the phase contrast tomography reconstruction......

  14. Resource Planning for Massive Number of Process Instances

    Science.gov (United States)

    Xu, Jiajie; Liu, Chengfei; Zhao, Xiaohui

    Resource allocation has been recognised as an important topic for business process execution. In this paper, we focus on planning resources for a massive number of process instances, to meet the process requirements and cater for rational utilisation of resources before execution. After a motivating example, we present a model for planning resources for process instances. Then we design a set of heuristic rules that take both optimised planning at build time and instance dependencies at run time into account. Based on these rules we propose two strategies for resource planning, one called holistic and the other called batched. Both strategies target a lower cost; however, the holistic strategy can achieve an earlier deadline, while the batched strategy aims at rational use of resources. We discuss how to find a balance between them, with a comprehensive experimental study of the two approaches.

  15. Time and activity sequence prediction of business process instances

    DEFF Research Database (Denmark)

    Polato, Mirko; Sperduti, Alessandro; Burattin, Andrea

    2018-01-01

    The ability to know in advance the trend of running process instances, with respect to different features such as the expected completion time, would allow business managers to counteract undesired situations in a timely manner, in order to prevent losses. Therefore, the ability to accurately predict...... future features of running business process instances would be a very helpful aid when managing processes, especially under service level agreement constraints. However, making such accurate forecasts is not easy: many factors may influence the predicted features. Many approaches have been proposed...

  16. Sparse Representation Based Multi-Instance Learning for Breast Ultrasound Image Classification

    Directory of Open Access Journals (Sweden)

    Lu Bing

    2017-01-01

    Full Text Available We propose a novel method based on sparse representation for breast ultrasound image classification under the framework of multi-instance learning (MIL). After image enhancement and segmentation, concentric circles are used to extract the global and local features for improving the accuracy of diagnosis and prediction. The classification problem of ultrasound images is converted to a sparse representation based MIL problem. Each instance of a bag is represented as a sparse linear combination of all basis vectors in the dictionary, and then the bag is represented by one feature vector which is obtained via sparse representations of all instances within the bag. The sparse and MIL problem is further converted to a conventional learning problem that is solved by a relevance vector machine (RVM). Results of single classifiers are combined for classification. Experimental results on the breast cancer datasets demonstrate the superiority of the proposed method in terms of classification accuracy as compared with state-of-the-art MIL methods.

  17. TNO at TRECVID 2013: Multimedia Event Detection and Instance Search

    NARCIS (Netherlands)

    Bouma, H.; Azzopardi, G.; Spitters, M.M.; Wit, J.J. de; Versloot, C.A.; Zon, R.W.L. van der; Eendebak, P.T.; Baan, J.; Hove, R.J.M. ten; Eekeren, A.W.M. van; Haar, F.B. ter; Hollander, R.J.M. den; Huis, R.J. van; Boer, M.H.T. de; Antwerpen, G. van; Broekhuijsen, B.J.; Daniele, L.M.; Brandt, P.; Schavemaker, J.G.M.; Kraaij, W.; Schutte, K.

    2013-01-01

    We describe the TNO system and the evaluation results for TRECVID 2013 Multimedia Event Detection (MED) and instance search (INS) tasks. The MED system consists of a bag-of-word (BOW) approach with spatial tiling that uses low-level static and dynamic visual features, an audio feature and high-level

  18. Locality in Generic Instance Search from One Example

    NARCIS (Netherlands)

    Tao, R.; Gavves, E.; Snoek, C.G.M.; Smeulders, A.W.M.

    2014-01-01

    This paper aims for generic instance search from a single example. Where the state-of-the-art relies on global image representation for the search, we proceed by including locality at all steps of the method. As the first novelty, we consider many boxes per database image as candidate targets to

  19. Data-aware remaining time prediction of business process instances

    NARCIS (Netherlands)

    Polato, M.; Sperduti, A.; Burattin, A.; Leoni, de M.

    2014-01-01

    Accurate prediction of the completion time of a business process instance would constitute a valuable tool when managing processes under service level agreement constraints. Such prediction, however, is a very challenging task. A wide variety of factors could influence the trend of a process

  20. First instance competence of the Higher Administrative Court

    International Nuclear Information System (INIS)

    Anon.

    1988-01-01

    (1) An interlocutory judgement can determine the admissibility of a legal action, also with regard to single procedural prerequisites (following BVerwG decision 14, 273). (2) The first instance competence for disputes about the dismantling of a decommissioned nuclear installation lies with the administrative courts and not with the higher administrative courts. Federal Administrative Court, decision of May 19, 1988 - 7 C 43.88 - (VGH Munich). (orig.) [de]

  1. SIFT Meets CNN: A Decade Survey of Instance Retrieval.

    Science.gov (United States)

    Zheng, Liang; Yang, Yi; Tian, Qi

    2018-05-01

    In the early days, content-based image retrieval (CBIR) was studied with global features. Since 2003, image retrieval based on local descriptors (de facto SIFT) has been extensively studied for over a decade due to the advantage of SIFT in dealing with image transformations. Recently, image representations based on the convolutional neural network (CNN) have attracted increasing interest in the community and demonstrated impressive performance. Given this time of rapid evolution, this article provides a comprehensive survey of instance retrieval over the last decade. Two broad categories, SIFT-based and CNN-based methods, are presented. For the former, according to the codebook size, we organize the literature into using large/medium-sized/small codebooks. For the latter, we discuss three lines of methods, i.e., using pre-trained or fine-tuned CNN models, and hybrid methods. The first two perform a single pass of an image through the network, while the last category employs a patch-based feature extraction scheme. This survey presents milestones in modern instance retrieval, reviews a broad selection of previous works in different categories, and provides insights on the connection between SIFT and CNN-based methods. After analyzing and comparing retrieval performance of different categories on several datasets, we discuss promising directions towards generic and specialized instance retrieval.

  2. Domain Adaptation for Machine Translation with Instance Selection

    Directory of Open Access Journals (Sweden)

    Biçici Ergun

    2015-04-01

    Full Text Available Domain adaptation for machine translation (MT) can be achieved by selecting training instances close to the test set from a larger set of instances. We consider 7 different domain adaptation strategies and answer 7 research questions, which give us a recipe for domain adaptation in MT. We perform English to German statistical MT (SMT) experiments in a setting where test and training sentences can come from different corpora, and one of our goals is to learn the parameters of the sampling process. Domain adaptation with training instance selection can obtain a 22% increase in target 2-gram recall and can gain up to 3.55 BLEU points compared with random selection. Domain adaptation with the feature decay algorithm (FDA) not only achieves the highest target 2-gram recall and BLEU performance but also perfectly learns the test sample distribution parameter with correlation 0.99. Moses SMT systems built with 10K FDA-selected training sentences are able to obtain F1 results as good as the baselines that use up to 2M sentences. Moses SMT systems built with 50K FDA-selected training sentences are able to obtain better F1 results than the baselines.
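A minimal sketch of feature-decay-style instance selection, under the simplifying assumptions that the features are unigrams and that a covered feature's weight is simply halved (the actual FDA decay schedule and n-gram features differ):

```python
def fda_select(train, test, n_select):
    """Feature decay selection: score each candidate training sentence by
    the test-set features (here, unigrams) it covers; each time a feature
    is covered by a selected sentence, its weight decays, so later picks
    favour features not yet covered."""
    weights = {}
    for sent in test:
        for w in sent.split():
            weights[w] = 1.0
    selected = []
    pool = list(train)
    for _ in range(n_select):
        best = max(pool,
                   key=lambda s: sum(weights.get(w, 0.0) for w in set(s.split())))
        selected.append(best)
        pool.remove(best)
        for w in set(best.split()):
            if w in weights:
                weights[w] *= 0.5  # decay covered features
    return selected

test = ["the cat sat", "the dog ran"]
train = ["the cat sat down", "a bird flew", "the dog ran fast", "the the the"]
picked = fda_select(train, test, 2)
```

Because "the" decays after the first pick, the second pick is the sentence covering the still-unseen test words rather than another "the"-heavy sentence, which is how the method drives up target n-gram recall.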

  3. Instance Selection for Classifier Performance Estimation in Meta Learning

    Directory of Open Access Journals (Sweden)

    Marcin Blachnik

    2017-11-01

    Full Text Available Building an accurate prediction model is challenging and requires appropriate model selection. This process is very time consuming but can be accelerated with meta-learning: automatic model recommendation by estimating the performance of given prediction models without training them. Meta-learning utilizes metadata extracted from the dataset to effectively estimate the accuracy of the model in question. To achieve that goal, metadata descriptors must be gathered efficiently and must be informative enough to allow precise estimation of prediction accuracy. In this paper, a new type of metadata descriptor is analyzed. These descriptors are based on the compression level obtained from instance selection methods at the data-preprocessing stage. To verify their suitability, two types of experiments on real-world datasets have been conducted. In the first one, 11 instance selection methods were examined in order to validate the compression-accuracy relation for three classifiers: k-nearest neighbors (kNN), support vector machine (SVM), and random forest. From this analysis, two methods are recommended (instance-based learning type 2 (IB2) and edited nearest neighbor (ENN)), which are then compared with the state-of-the-art metaset descriptors. The obtained results confirm that the two suggested compression-based meta-features help to predict the accuracy of the base model much more accurately than the state-of-the-art solution.
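Of the two recommended methods, edited nearest neighbor (ENN) is the simpler to sketch: it removes every instance misclassified by its k nearest neighbours, and the resulting compression level is the kind of meta-feature studied here. The toy data and choice of k below are assumptions for illustration.

```python
from math import dist

def enn(data, k=3):
    """Edited nearest neighbour: drop every instance whose label disagrees
    with the majority label of its k nearest neighbours."""
    kept = []
    for i, (x, y) in enumerate(data):
        others = [p for j, p in enumerate(data) if j != i]
        neigh = sorted(others, key=lambda p: dist(p[0], x))[:k]
        labels = [lab for _, lab in neigh]
        if max(set(labels), key=labels.count) == y:
            kept.append((x, y))
    return kept

data = [((0.0,), 0), ((0.1,), 0), ((0.2,), 0), ((0.15,), 1),  # (0.15,) is noise
        ((1.0,), 1), ((1.1,), 1), ((1.2,), 1)]
clean = enn(data)
# Compression level: the fraction of instances removed, usable as a
# meta-feature describing the dataset.
compression = 1 - len(clean) / len(data)
```

Only the mislabeled point inside the class-0 cluster is edited out, so the compression meta-feature directly reflects how noisy the dataset is.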

  4. Le contentieux camerounais devant les instances sportives internationales

    Directory of Open Access Journals (Sweden)

    Dikoume François Claude

    2016-01-01

    Sports actors have since then made use of the avenues of appeal available at the international level, either before the international sports federations or, above all, before the TAS (the Court of Arbitration for Sport), which handles disputes in all sporting disciplines. The aim here is to draw up a descriptive, non-exhaustive, case-by-case inventory of the Cameroonian contentious claims brought before the various international sports bodies; this will make it possible to examine the procedural spirit and the technical quality of their legal claims in sporting matters.

  5. Clustering with Instance and Attribute Level Side Information

    Directory of Open Access Journals (Sweden)

    Jinlong Wang

    2010-12-01

    Full Text Available Selecting a suitable proximity measure is one of the fundamental tasks in clustering. How to effectively utilize all available side information, including instance-level information in the form of pair-wise constraints and attribute-level information in the form of attribute order preferences, is an essential problem in metric learning. In this paper, we propose a learning framework in which both the pair-wise constraints and the attribute order preferences can be incorporated simultaneously. The theory behind it and the related parameter-adjusting technique are described in detail. Experimental results on benchmark data sets demonstrate the effectiveness of the proposed method.

  6. Dual-Layer Density Estimation for Multiple Object Instance Detection

    Directory of Open Access Journals (Sweden)

    Qiang Zhang

    2016-01-01

    Full Text Available This paper introduces a dual-layer density estimation-based architecture for multiple object instance detection in robot inventory management applications. The approach consists of raw scale-invariant feature transform (SIFT) feature matching and key point projection. The dominant scale ratio and a reference clustering threshold are estimated using the first layer of the density estimation. A cascade of filters is applied after feature template reconstruction and refined feature matching to eliminate false matches. Before the second layer of density estimation, the adaptive threshold is finalized by multiplying the reference value by an empirical coefficient, which is identified experimentally. Adaptive threshold-based grid voting is applied to find all candidate object instances. Erroneous detections are eliminated by a final geometric verification based on Random Sample Consensus (RANSAC). The detection results of the proposed approach are evaluated on a self-built dataset collected in a supermarket. The results demonstrate that the approach provides high robustness and low latency for the inventory management application.

  7. Kernel Methods for Mining Instance Data in Ontologies

    Science.gov (United States)

    Bloehdorn, Stephan; Sure, York

    The amount of ontologies and meta data available on the Web is constantly growing. The successful application of machine learning techniques for learning ontologies from textual data, i.e. mining for the Semantic Web, contributes to this trend. However, no principled approaches exist so far for mining from the Semantic Web. We investigate how machine learning algorithms can be made amenable for directly taking advantage of the rich knowledge expressed in ontologies and associated instance data. Kernel methods have been successfully employed in various learning tasks and provide a clean framework for interfacing between non-vectorial data and machine learning algorithms. In this spirit, we express the problem of mining instances in ontologies as the problem of defining valid corresponding kernels. We present a principled framework for designing such kernels by means of decomposing the kernel computation into specialized kernels for selected characteristics of an ontology, which can be flexibly assembled and tuned. Initial experiments on real-world Semantic Web data show promising results and the usefulness of our approach.

  8. Entropy-Weighted Instance Matching Between Different Sourcing Points of Interest

    Directory of Open Access Journals (Sweden)

    Lin Li

    2016-01-01

    Full Text Available The crucial problem for integrating geospatial data is finding the corresponding objects (the counterparts) from different sources. Most current studies focus on object matching with individual attributes such as spatial, name, or other attributes, which avoids the difficulty of integrating those attributes, but at the cost of ineffective matching. In this study, we propose an approach for matching instances by integrating heterogeneous attributes with the allocation of suitable attribute weights via information entropy. First, a normalized similarity formula is developed, which can simplify the calculation of spatial attribute similarity. Second, sound-based and word segmentation-based methods are adopted to eliminate the semantic ambiguity when there is a lack of a normative coding standard in geospatial data to express the name attribute. Third, category mapping is established to address the heterogeneity among different classifications. Finally, to address the non-linear characteristic of attribute similarity, the weights of the attributes are calculated from the entropy of the attributes. Experiments demonstrate that the Entropy-Weighted Approach (EWA) has good performance in terms of both precision and recall for instance matching from different data sets.
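The entropy-weighting step can be illustrated on a toy similarity table. The function below is the standard entropy-weight method (an attribute whose similarity scores barely vary across candidate pairs carries little information and gets a weight near zero); the concrete numbers are assumptions, not the paper's data.

```python
from math import log

def entropy_weights(sim_table):
    """Entropy weighting: for each attribute (column), compute the entropy
    of its normalized similarity scores across candidate pairs; weight is
    proportional to 1 - entropy, then normalized to sum to 1."""
    n = len(sim_table)
    raw = []
    for col in zip(*sim_table):
        total = sum(col)
        probs = [v / total for v in col]
        h = -sum(p * log(p) for p in probs if p > 0) / log(n)
        raw.append(1 - h)  # degree of diversification of this attribute
    s = sum(raw)
    return [w / s for w in raw]

# Rows: candidate instance pairs; columns: [spatial, name, category] similarity.
sims = [[0.9, 0.8, 1.0],
        [0.2, 0.7, 1.0],
        [0.1, 0.6, 1.0]]
w = entropy_weights(sims)
# Weighted overall similarity per candidate pair:
scores = [sum(wi * si for wi, si in zip(w, row)) for row in sims]
```

The constant category column gets (numerically) zero weight, the highly discriminative spatial column dominates, and the first candidate pair comes out on top, which is the intended effect of letting entropy allocate the attribute weights.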

  9. Breaking Newton’s third law: electromagnetic instances

    International Nuclear Information System (INIS)

    Kneubil, Fabiana B

    2016-01-01

    In this work, three instances are discussed within electromagnetism which highlight failures in the validity of Newton’s third law, all of them related to moving charged particles. It is well known that electromagnetic theory paved the way for relativity and that it disclosed new phenomena which were not compatible with the laws of mechanics. However, even if widely known in its generality, this issue is not clearly approached in introductory textbooks and it is difficult for students to perceive by themselves. Three explicit concrete situations involving the breaking of Newton’s third law are presented in this paper, together with a didactical procedure for graphically constructing the configurations of electric field lines, which allows pictures produced by interactive radiation simulators available on websites to be better understood. (paper)

  10. Image annotation based on positive-negative instances learning

    Science.gov (United States)

    Zhang, Kai; Hu, Jiwei; Liu, Quan; Lou, Ping

    2017-07-01

    Automatic image annotation is now a tough task in computer vision; its main purpose is to help manage the massive number of images on the Internet and to assist intelligent retrieval. This paper designs a new image annotation model based on a visual bag of words, using low-level features like color and texture information as well as mid-level features such as SIFT, and combining the pic2pic, label2pic and label2label correlations to measure the degree of correlation between labels and images. We aim to prune the specific features for each single label and formalize the annotation task as a learning process based on Positive-Negative Instance Learning. Experiments are performed using the Corel5K dataset, and provide quite promising results when compared with other existing methods.

  11. MaMiCo: Transient multi-instance molecular-continuum flow simulation on supercomputers

    Science.gov (United States)

    Neumann, Philipp; Bian, Xin

    2017-11-01

    We present extensions of the macro-micro-coupling tool MaMiCo, which was designed to couple continuum fluid dynamics solvers with discrete particle dynamics. To enable local extraction of smooth flow field quantities, especially on rather short time scales, sampling over an ensemble of molecular dynamics simulations is introduced. We provide details on these extensions, including the transient coupling algorithm, open boundary forcing, and multi-instance sampling. Furthermore, we validate the coupling in Couette flow using different particle simulation software packages and particle models, i.e. molecular dynamics and dissipative particle dynamics. Finally, we demonstrate the parallel scalability of the molecular-continuum simulations by using up to 65 536 compute cores of the supercomputer Shaheen II located at KAUST.
    Program Files doi: http://dx.doi.org/10.17632/w7rgdrhb85.1
    Licensing provisions: BSD 3-clause
    Programming language: C, C++
    External routines/libraries: For compiling: SCons, MPI (optional)
    Subprograms used: ESPResSo, LAMMPS, ls1 mardyn, waLBerla. For installation procedures of the MaMiCo interfaces, see the README files in the respective code directories located in coupling/interface/impl.
    Journal reference of previous version: P. Neumann, H. Flohr, R. Arora, P. Jarmatz, N. Tchipev, H.-J. Bungartz. MaMiCo: Software design for parallel molecular-continuum flow simulations, Computer Physics Communications 200: 324-335, 2016
    Does the new version supersede the previous version?: Yes. The functionality of the previous version is completely retained in the new version.
    Nature of problem: Coupled molecular-continuum simulation for multi-resolution fluid dynamics: parts of the domain are resolved by molecular dynamics or another particle-based solver, whereas large parts are covered by a mesh-based CFD solver, e.g. a lattice Boltzmann automaton.
    Solution method: We couple existing MD and CFD solvers via MaMiCo (macro-micro coupling tool). Data exchange and

  12. Active Segmentation.

    Science.gov (United States)

    Mishra, Ajay; Aloimonos, Yiannis

    2009-01-01

    The human visual system observes and understands a scene/image by making a series of fixations. Every fixation point lies inside a particular region of arbitrary shape and size in the scene, which can either be an object or just a part of it. We define as a basic segmentation problem the task of segmenting the region containing the fixation point. Segmenting the region containing the fixation is equivalent to finding the enclosing contour - a connected set of boundary edge fragments in the edge map of the scene - around the fixation. This enclosing contour should be a depth boundary. We present here a novel algorithm that finds this bounding contour and achieves the segmentation of one object, given the fixation. The proposed segmentation framework combines monocular cues (color/intensity/texture) with stereo and/or motion, in a cue-independent manner. The semantic robots of the immediate future will be able to use this algorithm to automatically find objects in any environment. The capability of automatically segmenting objects in their visual field can bring visual processing to the next level. Our approach is different from current approaches. While existing work attempts to segment the whole scene at once into many areas, we segment only one image region, specifically the one containing the fixation point. Experiments with real imagery collected by our active robot and from known databases demonstrate the promise of the approach.

  13. Multiple instance learning tracking method with local sparse representation

    KAUST Repository

    Xie, Chengjun

    2013-10-01

    When objects undergo large pose change, illumination variation or partial occlusion, most existed visual tracking algorithms tend to drift away from targets and even fail in tracking them. To address this issue, in this study, the authors propose an online algorithm by combining multiple instance learning (MIL) and local sparse representation for tracking an object in a video system. The key idea in our method is to model the appearance of an object by local sparse codes that can be formed as training data for the MIL framework. First, local image patches of a target object are represented as sparse codes with an overcomplete dictionary, where the adaptive representation can be helpful in overcoming partial occlusion in object tracking. Then MIL learns the sparse codes by a classifier to discriminate the target from the background. Finally, results from the trained classifier are input into a particle filter framework to sequentially estimate the target state over time in visual tracking. In addition, to decrease the visual drift because of the accumulative errors when updating the dictionary and classifier, a two-step object tracking method combining a static MIL classifier with a dynamical MIL classifier is proposed. Experiments on some publicly available benchmarks of video sequences show that our proposed tracker is more robust and effective than others. © The Institution of Engineering and Technology 2013.

  14. Marx, Materialism and the Brain: Determination in the Last Instance?

    Directory of Open Access Journals (Sweden)

    Joss Hands

    2018-05-01

    Full Text Available It is well acknowledged that there is not one but many Marxes, and one area where this has been most evident is in the question of technological and economic determinism. This article traces some key moments in this debate and attempts to locate their most recent iteration in disagreements over the place of the human brain in both historical agency and value creation in so called ‘cognitive’ or ‘post-Fordist’ capitalism. Of significant interest in the current configuration – or rather composition – of capital is the place of the digitisation of the labour process and its relation to, and integration with, human cognition and volition. Arguments over the attention economy and the power of post-Fordist capitalism to distract and direct is a significant variation of the question of ideology and the latest variation of the base/superstructure debate. This article will unpack the aforesaid issues to offer an articulated perspective in order to make the argument that taking a balanced view of determination will allow us to acknowledge that – drawing on the argument of determination in the last instance – we can hold both of these ‘Marxes’ to be simultaneously valid. Here, a revisiting of Marx’s concept of General Intellect will be undertaken, wherein the productive capacity of living labour is employed in both active agency and the capture of value, in which the plasticity of the living brain becomes the pivot point for both exploitation by, and resistance to, capital.

  15. Segmentation: Identification of consumer segments

    DEFF Research Database (Denmark)

    Høg, Esben

    2005-01-01

    It is very common to categorise people, especially in the advertising business. Also traditional marketing theory has taken in consumer segments as a favorite topic. Segmentation is closely related to the broader concept of classification. From a historical point of view, classification has its...... origin in other sciences, as for example biology, anthropology etc. From an economic point of view, it is called segmentation when specific scientific techniques are used to classify consumers into different characteristic groupings. What is the purpose of segmentation? For example, to be able to obtain...... a basic understanding of grouping people. Advertising agencies may use segmentation to target advertisements, while food companies may use segmentation to develop products for various groups of consumers. MAPP has for example investigated the positioning of fish in relation to other food products...

  16. Segmental Vitiligo.

    Science.gov (United States)

    van Geel, Nanja; Speeckaert, Reinhart

    2017-04-01

    Segmental vitiligo is characterized by its early onset, rapid stabilization, and unilateral distribution. Recent evidence suggests that segmental and nonsegmental vitiligo could represent variants of the same disease spectrum. Observational studies with respect to its distribution pattern point to a possible role of cutaneous mosaicism, whereas the original stated dermatomal distribution seems to be a misnomer. Although the exact pathogenic mechanism behind the melanocyte destruction is still unknown, increasing evidence has been published on the autoimmune/inflammatory theory of segmental vitiligo. Copyright © 2016 Elsevier Inc. All rights reserved.

  17. Marketing ambulatory care to women: a segmentation approach.

    Science.gov (United States)

    Harrell, G D; Fors, M F

    1985-01-01

    Although significant changes are occurring in health care delivery, in many instances the new offerings are not based on a clear understanding of market segments being served. This exploratory study suggests that important differences may exist among women with regard to health care selection. Five major women's segments are identified for consideration by health care executives in developing marketing strategies. Additional research is suggested to confirm this segmentation hypothesis, validate segmental differences and quantify the findings.

  18. Horror Image Recognition Based on Context-Aware Multi-Instance Learning.

    Science.gov (United States)

    Li, Bing; Xiong, Weihua; Wu, Ou; Hu, Weiming; Maybank, Stephen; Yan, Shuicheng

    2015-12-01

    Horror content sharing on the Web is a growing phenomenon that can interfere with our daily life and affect the mental health of those involved. As an important form of expression, horror images have their own characteristics that can evoke extreme emotions. In this paper, we present a novel context-aware multi-instance learning (CMIL) algorithm for horror image recognition. The CMIL algorithm identifies horror images and picks out the regions that cause the sensation of horror in these horror images. It obtains contextual cues among adjacent regions in an image using a random walk on a contextual graph. Borrowing the strength of the fuzzy support vector machine (FSVM), we define a heuristic optimization procedure based on the FSVM to search for the optimal classifier for the CMIL. To improve the initialization of the CMIL, we propose a novel visual saliency model based on tensor analysis. The average saliency value of each segmented region is set as its initial fuzzy membership in the CMIL. The advantage of the tensor-based visual saliency model is that it not only adaptively selects features, but also dynamically determines fusion weights for saliency value combination from different feature subspaces. The effectiveness of the proposed CMIL model is demonstrated by its use in horror image recognition on two large-scale image sets collected from the Internet.

  19. An Efficient Metric of Automatic Weight Generation for Properties in Instance Matching Technique

    OpenAIRE

    Seddiqui, Md. Hanif; Nath, Rudra Pratap Deb; Aono, Masaki

    2015-01-01

    The proliferation of heterogeneous data sources in semantic knowledge bases intensifies the need for an automatic instance matching technique. However, the efficiency of instance matching is often influenced by the weight of a property associated with instances. Automatic weight generation is a non-trivial but important task in instance matching. Therefore, identifying an appropriate metric for generating weight for a property automatically is nevertheless a formidab...

  20. Stochastic Learning of Multi-Instance Dictionary for Earth Mover's Distance based Histogram Comparison

    OpenAIRE

    Fan, Jihong; Liang, Ru-Ze

    2016-01-01

    Dictionary plays an important role in multi-instance data representation. It maps bags of instances to histograms. Earth mover's distance (EMD) is the most effective histogram distance metric for the application of multi-instance retrieval. However, up to now, there are no existing multi-instance dictionary learning methods designed for EMD-based histogram comparison. To fill this gap, we develop the first EMD-optimal dictionary learning method using a stochastic optimization method. In the stoc...

  1. 28 CFR 51.46 - Reconsideration of objection at the instance of the Attorney General.

    Science.gov (United States)

    2010-07-01

    ... instance of the Attorney General. 51.46 Section 51.46 Judicial Administration DEPARTMENT OF JUSTICE... Processing of Submissions § 51.46 Reconsideration of objection at the instance of the Attorney General. (a... may be reconsidered, if it is deemed appropriate, at the instance of the Attorney General. (b) Notice...

  2. Mixed segmentation

    DEFF Research Database (Denmark)

    Hansen, Allan Grutt; Bonde, Anders; Aagaard, Morten

    content analysis and audience segmentation in a single-source perspective. The aim is to explain and understand target groups in relation to, on the one hand, emotional response to commercials or other forms of audio-visual communication and, on the other hand, living preferences and personality traits...

  3. Adapting Mask-RCNN for Automatic Nucleus Segmentation

    OpenAIRE

    Johnson, Jeremiah W.

    2018-01-01

    Automatic segmentation of microscopy images is an important task in medical image processing and analysis. Nucleus detection is an important example of this task. Mask-RCNN is a recently proposed state-of-the-art algorithm for object detection, object localization, and object instance segmentation of natural images. In this paper we demonstrate that Mask-RCNN can be used to perform highly effective and efficient automatic segmentations of a wide range of microscopy images of cell nuclei, for ...

  4. Time efficient optimization of instance based problems with application to tone onset detection

    OpenAIRE

    Bauer, Nadja; Friedrichs, Klaus; Weihs, Claus

    2016-01-01

    A time efficient optimization technique for instance based problems is proposed, where for each parameter setting the target function has to be evaluated on a large set of problem instances. Computational time is reduced by beginning with a performance estimation based on the evaluation of a representative subset of instances. Subsequently, only promising settings are evaluated on the whole data set. As application a comprehensive music onset detection algorithm is introduce...

  5. Brookhaven segment interconnect

    International Nuclear Information System (INIS)

    Morse, W.M.; Benenson, G.; Leipuner, L.B.

    1983-01-01

    We have performed a high energy physics experiment using a multisegment Brookhaven FASTBUS system. The system was composed of three crate segments and two cable segments. We discuss the segment interconnect module which permits communication between the various segments

  6. Compliance with Segment Disclosure Initiatives

    DEFF Research Database (Denmark)

    Arya, Anil; Frimor, Hans; Mittendorf, Brian

    2013-01-01

    Regulatory oversight of capital markets has intensified in recent years, with a particular emphasis on expanding financial transparency. A notable instance is efforts by the Financial Accounting Standards Board that push firms to identify and report performance of individual business units...... (segments). This paper seeks to address short-run and long-run consequences of stringent enforcement of and uniform compliance with these segment disclosure standards. To do so, we develop a parsimonious model wherein a regulatory agency promulgates disclosure standards and either permits voluntary...... by increasing transparency and leveling the playing field. However, our analysis also demonstrates that in the long run, if firms are unable to use discretion in reporting to maintain their competitive edge, they may seek more destructive alternatives. Accounting for such concerns, in the long run, voluntary...

  7. Automatic detection and recognition of multiple macular lesions in retinal optical coherence tomography images with multi-instance multilabel learning

    Science.gov (United States)

    Fang, Leyuan; Yang, Liumao; Li, Shutao; Rabbani, Hossein; Liu, Zhimin; Peng, Qinghua; Chen, Xiangdong

    2017-06-01

    Detection and recognition of macular lesions in optical coherence tomography (OCT) are very important for retinal diseases diagnosis and treatment. As one kind of retinal disease (e.g., diabetic retinopathy) may contain multiple lesions (e.g., edema, exudates, and microaneurysms) and eye patients may suffer from multiple retinal diseases, multiple lesions often coexist within one retinal image. Therefore, one single-lesion-based detector may not support the diagnosis of clinical eye diseases. To address this issue, we propose a multi-instance multilabel-based lesions recognition (MIML-LR) method for the simultaneous detection and recognition of multiple lesions. The proposed MIML-LR method consists of the following steps: (1) segment the regions of interest (ROIs) for different lesions, (2) compute descriptive instances (features) for each lesion region, (3) construct multilabel detectors, and (4) recognize each ROI with the detectors. The proposed MIML-LR method was tested on 823 clinically labeled OCT images with normal macular and macular with three common lesions: epiretinal membrane, edema, and drusen. For each input OCT image, our MIML-LR method can automatically identify the number of lesions and assign the class labels, achieving the average accuracy of 88.72% for the cases with multiple lesions, which better assists macular disease diagnosis and treatment.
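
    As a rough illustration of the four MIML-LR steps, the sketch below applies one binary detector per lesion label to per-ROI features and takes the union of ROI-level labels as the image-level result. The feature names, thresholds, and detector rules are invented stand-ins, not the paper's classifiers.

```python
# Minimal skeleton of the pipeline: (1) ROIs are assumed already segmented,
# (2) each ROI is a feature dict, (3) one binary detector per lesion label,
# (4) per-ROI labels plus their union as the image-level diagnosis.

def recognize_lesions(rois, detectors):
    """rois: list of feature dicts; detectors: {label: predicate(features)}."""
    image_labels = set()
    per_roi = []
    for features in rois:
        labels = {name for name, detect in detectors.items() if detect(features)}
        per_roi.append(labels)
        image_labels |= labels
    return per_roi, image_labels

detectors = {
    "edema":  lambda f: f["thickness"] > 0.7,   # invented feature/threshold
    "drusen": lambda f: f["bumpiness"] > 0.5,   # invented feature/threshold
}
rois = [{"thickness": 0.9, "bumpiness": 0.2},
        {"thickness": 0.1, "bumpiness": 0.8}]
per_roi, overall = recognize_lesions(rois, detectors)
print(per_roi, overall)  # one lesion per ROI, both lesions at image level
```

    The point of the structure is that the image-level label set needs no single multi-class decision: it falls out of the union over independently detected ROI labels.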

  8. Scheduling jobs in the cloud using on-demand and reserved instances

    NARCIS (Netherlands)

    Shen, S.; Deng, K.; Iosup, A.; Epema, D.H.J.; Wolf, F.; Mohr, B.; Mey, an D.

    2013-01-01

    Deploying applications in leased cloud infrastructure is increasingly considered by a variety of business and service integrators. However, the challenge of selecting the leasing strategy — larger or faster instances? on-demand or reserved instances? etc.— and to configure the leasing strategy with

  9. Stochastic learning of multi-instance dictionary for earth mover’s distance-based histogram comparison

    KAUST Repository

    Fan, Jihong; Liang, Ru-Ze

    2016-01-01

    Dictionary plays an important role in multi-instance data representation. It maps bags of instances to histograms. Earth mover’s distance (EMD) is the most effective histogram distance metric for the application of multi-instance retrieval. However

  10. THE EXECUTION INSTANCE OF THE JUDICIAL JUDGEMENTS SENTENCED IN THE LITIGATIONS OF ADMINISTRATIVE CONTENTIOUS

    Directory of Open Access Journals (Sweden)

    ADRIANA ELENA BELU

    2012-05-01

    Full Text Available The instance which solved the merits of a litigation arising from an administrative contract differs depending on the material competence established by law, in contrast to the matter of commercial law, where the execution instance is the court. In this matter the High Court stated in a decision that, in a first case, the competence to solve both the legal contest against the forced execution itself and the legal contest seeking to clarify the meaning, scope and application of the enforceable title, when that title does not proceed from a jurisdictional organ, belongs to the court. The Law of the Administrative Contentious no. 554/2004 defines in Article 2 paragraph 1 letter t the notion of execution instance, providing that this is the instance which solved the merits of the litigation of administrative contentious; thus even in the case of administrative contracts the execution instance is the one which solved the litigation arising from the contract. Corroborating this provision with those of Articles 22 and 25 of the Law, it follows that no matter which instance's decision constitutes the enforceable title, execution will be carried out by the instance which solved the merits of the litigation regarding the administrative contentious.

  11. Polarimetric Segmentation Using Wishart Test Statistic

    DEFF Research Database (Denmark)

    Skriver, Henning; Schou, Jesper; Nielsen, Allan Aasbjerg

    2002-01-01

    A newly developed test statistic for equality of two complex covariance matrices following the complex Wishart distribution, and an associated asymptotic probability for the test statistic, have been used in a segmentation algorithm. The segmentation algorithm is based on the MUM (merge using moments......) approach, which is a merging algorithm for single channel SAR images. The polarimetric version described in this paper uses the above-mentioned test statistic for merging. The segmentation algorithm has been applied to polarimetric SAR data from the Danish dual-frequency, airborne polarimetric SAR, EMISAR...

  12. Coarse-Grain QoS-Aware Dynamic Instance Provisioning for Interactive Workload in the Cloud

    Directory of Open Access Journals (Sweden)

    Jianxiong Wan

    2014-01-01

    Full Text Available The cloud computing paradigm gives Internet service providers (ISPs) a new approach to deliver their services at lower cost. ISPs can rent virtual machines from the Infrastructure-as-a-Service (IaaS) offerings of the cloud rather than purchasing them. In addition, commercial cloud providers (CPs) offer diverse VM instance rental services in various time granularities, which provide another opportunity for ISPs to reduce cost. We investigate a Coarse-grain QoS-aware Dynamic Instance Provisioning (CDIP) problem for interactive workload in the cloud from the perspective of ISPs. We formulate the CDIP problem as an optimization problem where the objective is to minimize the VM instance rental cost and the constraint is the percentile delay bound. Since Internet traffic shows a strong self-similar property, it is hard to obtain an analytical form of the percentile delay constraint. To address this issue, we propose a lookup table structure together with a learning algorithm to estimate the performance of the instance provisioning policy. This approach is further extended with two function approximations to enhance the scalability of the learning algorithm. We also present an efficient dynamic instance provisioning algorithm, which takes full advantage of the rental service diversity, to determine the instance rental policy. Extensive simulations are conducted to validate the effectiveness of the proposed algorithms.
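
    The rental-diversity trade-off that CDIP exploits can be illustrated with a back-of-envelope comparison: a reserved instance is cheaper per hour but billed for the whole term, so it only pays off above a break-even utilization. The prices and term lengths below are invented for illustration.

```python
# Sketch of the reserved-vs-on-demand decision for one instance over one term.
# Reserved pricing bills the full term regardless of use; on-demand bills only
# the hours actually consumed.

def cheaper_option(expected_hours, term_hours, on_demand_rate, reserved_rate):
    """Return the cheaper purchasing option and its cost for the term."""
    on_demand_cost = expected_hours * on_demand_rate
    reserved_cost = term_hours * reserved_rate
    if reserved_cost < on_demand_cost:
        return ("reserved", reserved_cost)
    return ("on-demand", on_demand_cost)

# 720-hour month, $0.10/h on demand vs $0.04/h reserved: break-even sits at
# 0.04 * 720 / 0.10 = 288 expected hours of use.
print(cheaper_option(200, 720, 0.10, 0.04))  # → ('on-demand', 20.0)
print(cheaper_option(400, 720, 0.10, 0.04))  # → ('reserved', 28.8)
```

    A real provisioning policy must make this choice under uncertain demand and a delay constraint, which is what motivates the learned lookup-table estimate in the abstract above.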

  13. Improving Multi-Instance Multi-Label Learning by Extreme Learning Machine

    Directory of Open Access Journals (Sweden)

    Ying Yin

    2016-05-01

    Full Text Available Multi-instance multi-label learning is a learning framework, where every object is represented by a bag of instances and associated with multiple labels simultaneously. The existing degeneration strategy-based methods often suffer from some common drawbacks: (1) the user-specified parameter for the number of clusters may affect the method's effectiveness; (2) SVM may bring a high computational cost when utilized as the classifier builder. In this paper, we propose an algorithm, namely multi-instance multi-label (MIML) extreme learning machine (ELM), to address these problems. To the best of our knowledge, we are the first to utilize ELM in the MIML problem and to conduct a comparison of ELM and SVM on MIML. Extensive experiments have been conducted on real datasets and synthetic datasets. The results show that MIMLELM tends to achieve better generalization performance at a higher learning speed.

  14. Instances or Sequences? Improving the State of the Art of Qualitative Research

    Directory of Open Access Journals (Sweden)

    David Silverman

    2005-09-01

    Full Text Available Numbers apparently talk. With few numbers, qualitative researchers appear to rely on examples or instances to support their analysis. Hence research reports routinely display data extracts which serve as telling instances of some claimed phenomenon. However, the use of such an evidential base rightly provokes the charge of (possible) anecdotalism, i.e. choosing just those extracts which support your argument. I suggest that this methodological problem is best addressed by returning to those features of our theoretical roots which tend to distinguish what we do from the work of quantitative social scientists. Although SAUSSURE is most cited in linguistics and structural anthropology, he provides a simple rule that applies to us all. In a rebuke to our reliance on instances, SAUSSURE tells us "no meaning exists in a single item". Everything depends upon how single items (elements) are articulated. One everyday activity in which the social world is articulated is through the construction of sequences. Just as participants attend to the sequential placing of interactional "events", so should social scientists. Using examples drawn from focus groups, fieldnotes and audiotapes, I argue that the identification of such sequences rather than the citing of instances should constitute a prime test for the adequacy of any claim about qualitative data. URN: urn:nbn:de:0114-fqs0503301

  15. A practical approximation algorithm for solving massive instances of hybridization number

    NARCIS (Netherlands)

    Iersel, van L.J.J.; Kelk, S.M.; Lekic, N.; Scornavacca, C.; Raphael, B.; Tang, J.

    2012-01-01

    Reticulate events play an important role in determining evolutionary relationships. The problem of computing the minimum number of such events to explain discordance between two phylogenetic trees is a hard computational problem. In practice, exact solvers struggle to solve instances with

  16. Anaphoric Reference to Instances, Instantiated and Non-Instantiated Categories: A Reading Time Study.

    Science.gov (United States)

    Garnham, Alan

    1981-01-01

    Experiments using memory paradigms have shown that general terms receive context-dependent encodings. This experiment investigates the encoding of category and instance nouns. The results indicate that representations set up during reading are the product of both the linguistic input and of general knowledge. (Author/KC)

  17. Microsoft and the Court of First Instance: What Does it All Mean?

    OpenAIRE

    Renata Hesse

    2007-01-01

    As someone who has spent a considerable portion of the last five years working on issues involving Microsoft’s conduct and the competition laws, I read with interest the commentary that followed the issuance of the Court of First Instance’s decision on September 17.

  18. Feature selection is the ReliefF for multiple instance learning

    NARCIS (Netherlands)

    Zafra, A.; Pechenizkiy, M.; Ventura, S.

    2010-01-01

    Dimensionality reduction and feature selection in particular are known to be of a great help for making supervised learning more effective and efficient. Many different feature selection techniques have been proposed for the traditional settings, where each instance is expected to have a label. In

  19. HyDR-MI : A hybrid algorithm to reduce dimensionality in multiple instance learning

    NARCIS (Netherlands)

    Zafra, A.; Pechenizkiy, M.; Ventura, S.

    2013-01-01

    Feature selection techniques have been successfully applied in many applications for making supervised learning more effective and efficient. These techniques have been widely used and studied in traditional supervised learning settings, where each instance is expected to have a label. In multiple

  20. Increasing the detection of minority class instances in financial statement fraud

    CSIR Research Space (South Africa)

    Moepya, Stephen

    2017-04-01

    Full Text Available Asian Conference on Intelligent Information and Database Systems, 3-5 April 2017, Kanazawa, Japan. Increasing the detection of minority class instances in financial statement fraud, by Stephen Obakeng Moepya and Fulufhelo V. Nelwamondo.

  1. On Combining Multiple-Instance Learning and Active Learning for Computer-Aided Detection of Tuberculosis

    NARCIS (Netherlands)

    Melendez Rodriguez, J.C.; Ginneken, B. van; Maduskar, P.; Philipsen, R.H.H.M.; Ayles, H.; Sanchez, C.I.

    2016-01-01

    The major advantage of multiple-instance learning (MIL) applied to a computer-aided detection (CAD) system is that it allows optimizing the latter with case-level labels instead of accurate lesion outlines as traditionally required for a supervised approach. As shown in previous work, a MIL-based

  2. Incorporating Edge Information into Best Merge Region-Growing Segmentation

    Science.gov (United States)

    Tilton, James C.; Pasolli, Edoardo

    2014-01-01

    We have previously developed a best merge region-growing approach that integrates nonadjacent region object aggregation with the neighboring region merge process usually employed in region growing segmentation approaches. This approach has been named HSeg, because it provides a hierarchical set of image segmentation results. Up to this point, HSeg considered only global region feature information in the region growing decision process. We present here three new versions of HSeg that include local edge information in the region growing decision process at different levels of rigor. We then compare the effectiveness and processing times of these new versions of HSeg with each other and with the original version of HSeg.

  3. Solving large instances of the quadratic cost of partition problem on dense graphs by data correcting algorithms

    NARCIS (Netherlands)

    Goldengorin, Boris; Vink, Marius de

    1999-01-01

    The Data-Correcting Algorithm (DCA) corrects the data of a hard problem instance in such a way that we obtain an instance of a well solvable special case. For a given prescribed accuracy of the solution, the DCA uses a branch and bound scheme to make sure that the solution of the corrected instance

  4. Segmented trapped vortex cavity

    Science.gov (United States)

    Grammel, Jr., Leonard Paul (Inventor); Pennekamp, David Lance (Inventor); Winslow, Jr., Ralph Henry (Inventor)

    2010-01-01

    An annular trapped vortex cavity assembly segment includes a cavity forward wall, a cavity aft wall, and a cavity radially outer wall therebetween defining a cavity segment therein. A cavity opening extends between the forward and aft walls at a radially inner end of the assembly segment. Radially spaced apart pluralities of air injection first and second holes extend through the forward and aft walls respectively. The segment may include first and second expansion joint features at distal first and second ends respectively of the segment. The segment may include a forward subcomponent including the cavity forward wall attached to an aft subcomponent including the cavity aft wall. The forward and aft subcomponents include forward and aft portions of the cavity radially outer wall respectively. A ring of the segments may be circumferentially disposed about an axis to form an annular segmented vortex cavity assembly.

  5. Pavement management segment consolidation

    Science.gov (United States)

    1998-01-01

    Dividing roads into "homogeneous" segments has been a major problem for all areas of highway engineering. SDDOT uses Deighton Associates Limited software, dTIMS, to analyze life-cycle costs for various rehabilitation strategies on each segment of roa...

  6. AN ITERATIVE SEGMENTATION METHOD FOR REGION OF INTEREST EXTRACTION

    Directory of Open Access Journals (Sweden)

    Volkan CETIN

    2013-01-01

    Full Text Available In this paper, a method is presented for applications that include mammographic image segmentation and region of interest extraction. Segmentation is a very critical and difficult stage to accomplish in computer-aided detection systems. Although the presented segmentation method is developed for mammographic images, it can be used for any medical image that shares the same statistical characteristics with mammograms. Fundamentally, the method consists of iterative automatic thresholding and masking operations, which are applied to the original or enhanced mammograms. The effect of image enhancement on the segmentation process was also observed; a version of histogram equalization was applied to the images for enhancement. Finally, the results show that the enhanced version of the proposed segmentation method is preferable because of its better success rate.
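
    Iterative automatic thresholding of the kind the abstract describes is commonly done in the spirit of classic isodata-style threshold selection: split the pixels at a threshold, recompute the two class means, and move the threshold to their midpoint until it settles. The sketch below uses invented pixel values and is an illustration of that general idea, not the paper's exact procedure.

```python
# Isodata-style iterative threshold selection on a flat list of intensities.

def iterative_threshold(pixels, tol=0.5):
    t = sum(pixels) / len(pixels)          # start from the global mean
    while True:
        low = [p for p in pixels if p <= t]
        high = [p for p in pixels if p > t]
        if not low or not high:            # degenerate split: keep current t
            return t
        # Midpoint of the two class means becomes the new threshold.
        new_t = (sum(low) / len(low) + sum(high) / len(high)) / 2
        if abs(new_t - t) < tol:
            return new_t
        t = new_t

# Two clear intensity populations (background ~10, tissue ~200):
print(iterative_threshold([8, 10, 12, 9, 198, 200, 202, 199]))  # → 104.75
```

    The resulting threshold lands between the two populations; masking the image at this value yields the foreground region used in subsequent region-of-interest extraction.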

  7. Spinal segmental dysgenesis

    Directory of Open Access Journals (Sweden)

    N Mahomed

    2009-06-01

    Full Text Available Spinal segmental dysgenesis is a rare congenital spinal abnormality, seen in neonates and infants, in which a segment of the spine and spinal cord fails to develop normally. The condition is segmental, with normal vertebrae above and below the malformation. This condition is commonly associated with various abnormalities that affect the heart, genitourinary system, gastrointestinal tract and skeletal system. We report two cases of spinal segmental dysgenesis and the associated abnormalities.

  8. Efficient Algorithms for Segmentation of Item-Set Time Series

    Science.gov (United States)

    Chundi, Parvathi; Rosenkrantz, Daniel J.

    We propose a special type of time series, which we call an item-set time series, to facilitate the temporal analysis of software version histories, email logs, stock market data, etc. In an item-set time series, each observed data value is a set of discrete items. We formalize the concept of an item-set time series and present efficient algorithms for segmenting a given item-set time series. Segmentation of a time series partitions the time series into a sequence of segments where each segment is constructed by combining consecutive time points of the time series. Each segment is associated with an item set that is computed from the item sets of the time points in that segment, using a function which we call a measure function. We then define a concept called the segment difference, which measures the difference between the item set of a segment and the item sets of the time points in that segment. The segment difference values are required to construct an optimal segmentation of the time series. We describe novel and efficient algorithms to compute segment difference values for each of the measure functions described in the paper. We outline a dynamic programming based scheme to construct an optimal segmentation of the given item-set time series. We use the item-set time series segmentation techniques to analyze the temporal content of three different data sets—Enron email, stock market data, and a synthetic data set. The experimental results show that an optimal segmentation of item-set time series data captures much more temporal content than a segmentation constructed based on the number of time points in each segment, without examining the item set data at the time points, and can be used to analyze different types of temporal data.
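
    The scheme described above can be sketched concretely. In this toy version the measure function is set union, the segment difference counts the items by which each time point differs from its segment's item set, and a small dynamic program finds the k-segment split minimizing total difference. The data and function names are invented; this is an illustration of the general approach, not the paper's implementation.

```python
# Optimal k-segmentation of an item-set time series by dynamic programming.

def seg_diff(points, i, j):
    """Segment difference of points[i:j] under the union measure function."""
    union = set().union(*points[i:j])
    return sum(len(union - p) + len(p - union) for p in points[i:j])

def best_segmentation(points, k):
    n = len(points)
    INF = float("inf")
    # cost[m][j] = best total difference for points[:j] split into m segments.
    cost = [[INF] * (n + 1) for _ in range(k + 1)]
    back = [[0] * (n + 1) for _ in range(k + 1)]
    cost[0][0] = 0
    for m in range(1, k + 1):
        for j in range(1, n + 1):
            for i in range(m - 1, j):       # last segment is points[i:j]
                c = cost[m - 1][i] + seg_diff(points, i, j)
                if c < cost[m][j]:
                    cost[m][j], back[m][j] = c, i
    # Recover the segment boundaries by walking the backpointers.
    cuts, j = [], n
    for m in range(k, 0, -1):
        cuts.append((back[m][j], j))
        j = back[m][j]
    return list(reversed(cuts)), cost[k][n]

points = [{"a"}, {"a", "b"}, {"x"}, {"x", "y"}]
print(best_segmentation(points, 2))  # → ([(0, 2), (2, 4)], 2)
```

    The split correctly separates the "a/b" epoch from the "x/y" epoch, which is exactly the temporal content a count-based segmentation (equal numbers of time points per segment, ignoring the item sets) could miss on uneven data.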

  9. Automatic Melody Segmentation

    NARCIS (Netherlands)

    Rodríguez López, Marcelo

    2016-01-01

    The work presented in this dissertation investigates music segmentation. In the field of Musicology, segmentation refers to a score analysis technique, whereby notated pieces or passages of these pieces are divided into “units” referred to as sections, periods, phrases, and so on. Segmentation

  10. Multi-Instance Quotation System (SaaS) Based on Docker Containerizing Platform

    OpenAIRE

    Shekhamis, Anass

    2016-01-01

    This thesis covers the development of a quotation system that is built as a multi-instance SaaS. Quotation systems usually come as part of customer relationship management systems, but are not necessarily included in them. They also tend to offer invoicing alongside the original functionality of creating quotations for customers. The system uses a Microservices Architecture where each service is a replaceable and upgradeable component that achieves certain functionality and easily integrates with other third-par...

  11. FCNN-MR: A Parallel Instance Selection Method Based on Fast Condensed Nearest Neighbor Rule

    OpenAIRE

    Lu Si; Jie Yu; Shasha Li; Jun Ma; Lei Luo; Qingbo Wu; Yongqi Ma; Zhengji Liu

    2017-01-01

    Instance selection (IS) technique is used to reduce the data size to improve the performance of data mining methods. Recently, to process very large data set, several proposed methods divide the training set into some disjoint subsets and apply IS algorithms independently to each subset. In this paper, we analyze the limitation of these methods and give our viewpoint about how to divide and conquer in IS procedure. Then, based on fast condensed nearest neighbor (FCNN) rul...

  12. Extension of instance search technique by geometric coding and quantization error compensation

    OpenAIRE

    García Del Molino, Ana

    2013-01-01

    [ENGLISH] This PFC analyzes two ways of improving video retrieval techniques for the instance search problem. On the one hand, "Pairing Interest Points for a better Signature using Sparse Detector's Spatial Information" allows the Bag-of-Words model to keep some spatial information. On the other, "Study of the Hamming Embedding Signature Symmetry in Video Retrieval" provides binary signatures that refine the matching based on visual words, and aims to find the best way of matching taking into acc...

  13. Instances of Use of United States Armed Forces Abroad, 1798-2014

    Science.gov (United States)

    2014-09-15

    Garcia, and Thomas J. Nicola. Instances of Use of United States Armed Forces Abroad, 1798-2014, Congressional Research Service. Contents...landing zones near the U.S. Embassy in Saigon and the Tan Son Nhut Airfield. Mayaguez incident. On May 15, 1975, President Ford reported he had ordered...Report R41989, Congressional Authority to Limit Military Operations, by Jennifer K. Elsea, Michael John Garcia and Thomas J. Nicola. CRS Report R43344

  14. A Hybrid Instance Selection Using Nearest-Neighbor for Cross-Project Defect Prediction

    Institute of Scientific and Technical Information of China (English)

    Duksan Ryu; Jong-In Jang; Jongmoon Baik

    2015-01-01

    Software defect prediction (SDP) is an active research field in software engineering to identify defect-prone modules. Thanks to SDP, limited testing resources can be effectively allocated to defect-prone modules. Although SDP requires sufficient local data within a company, there are cases where local data are not available, e.g., pilot projects. Companies without local data can employ cross-project defect prediction (CPDP) using external data to build classifiers. The major challenge of CPDP is the different distributions between training and test data. To tackle this, instances of source data similar to target data are selected to build classifiers. Software datasets have a class imbalance problem, meaning the ratio of the defective class to the clean class is very low. It usually lowers the performance of classifiers. We propose a Hybrid Instance Selection Using Nearest-Neighbor (HISNN) method that performs a hybrid classification selectively learning local knowledge (via k-nearest neighbor) and global knowledge (via naïve Bayes). Instances having strong local knowledge are identified via nearest-neighbors with the same class label. Previous studies showed low PD (probability of detection) or high PF (probability of false alarm), which is impractical to use. The experimental results show that HISNN produces high overall performance as well as high PD and low PF.
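
    The selective local/global idea in HISNN can be caricatured as follows: trust the k nearest neighbours when they agree unanimously (strong local knowledge), otherwise fall back to a global model. In this sketch the global model is simplified to a majority-class vote instead of the paper's naive Bayes, and the data and distance metric are invented.

```python
# Toy hybrid classifier: unanimous k-NN locally, majority class globally.

def predict(train, query, k=3):
    """train: list of (features, label) pairs; query: feature tuple."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    neighbours = sorted(train, key=lambda t: dist(t[0], query))[:k]
    labels = {label for _, label in neighbours}
    if len(labels) == 1:                      # unanimous local evidence
        return labels.pop()
    counts = {}                               # global fallback: majority class
    for _, label in train:
        counts[label] = counts.get(label, 0) + 1
    return max(counts, key=counts.get)

train = [((0, 0), "clean"), ((0, 1), "clean"), ((1, 0), "clean"),
         ((5, 5), "defect"), ((5, 6), "defect"), ((6, 5), "defect"),
         ((9, 9), "clean")]
print(predict(train, (5, 5)))   # → defect (unanimous neighbours)
print(predict(train, (3, 3)))   # → clean  (mixed neighbours, global fallback)
```

    Splitting the decision this way keeps local structure where it is reliable while letting the global model absorb the ambiguous regions between classes.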

  15. A medley of meanings: Insights from an instance of gameplay in League of Legends

    Directory of Open Access Journals (Sweden)

    Max Watson

    2015-06-01

    Full Text Available This article engages with the notion of insightful gameplay. It recounts debates about what, if anything, makes play meaningful. Through these, it contends that while some games are explicitly designed to foster insightful gameplay, most are not and many might even be considered utterly meaningless. It notes how discussions about what makes playing games meaningful raise concomitant questions about what playing means. It then strives to reconcile these two interrelated questions by offering the notion of a medley of meanings. A medley of meanings is the notion that each player brings their own subjective disposition to playing to a particular instance of gameplay; no participant to gameplay should be considered as in a state that is “not playing”. Because these subjective dispositions to playing can be quite divergent, players can and often do clash in instances of gameplay. This article then contends that these clashes can in turn render the most seemingly meaningless games potential hotbeds of insightful gameplay. The second half of this article discusses the ethnographic example of an instance of gameplay in the digital game League of Legends in order to explicate the notion of a medley of meanings.

  16. A practical approximation algorithm for solving massive instances of hybridization number for binary and nonbinary trees.

    Science.gov (United States)

    van Iersel, Leo; Kelk, Steven; Lekić, Nela; Scornavacca, Celine

    2014-05-05

    Reticulate events play an important role in determining evolutionary relationships. The problem of computing the minimum number of such events to explain discordance between two phylogenetic trees is a hard computational problem. Even for binary trees, exact solvers struggle to solve instances with reticulation number larger than 40-50. Here we present CycleKiller and NonbinaryCycleKiller, the first methods to produce solutions verifiably close to optimality for instances with hundreds or even thousands of reticulations. Using simulations, we demonstrate that these algorithms run quickly for large and difficult instances, producing solutions that are very close to optimality. As a spin-off from our simulations we also present TerminusEst, which is the fastest exact method currently available that can handle nonbinary trees: this is used to measure the accuracy of the NonbinaryCycleKiller algorithm. All three methods are based on extensions of previous theoretical work (SIDMA 26(4):1635-1656, TCBB 10(1):18-25, SIDMA 28(1):49-66) and are publicly available. We also apply our methods to real data.

  17. Instance selection in digital soil mapping: a study case in Rio Grande do Sul, Brazil

    Directory of Open Access Journals (Sweden)

    Elvio Giasson

    2015-09-01

    Full Text Available A critical issue in digital soil mapping (DSM) is the selection of the data sampling method for model training. One emerging approach applies instance selection to reduce the size of the dataset by drawing only relevant samples, in order to obtain a representative subset that is still large enough to preserve relevant information, but small enough to be easily handled by learning algorithms. Although there are suggestions to distribute data sampling as a function of the soil map unit (MU) boundaries location, there are still contradictions among research recommendations for locating samples either closer to or more distant from soil MU boundaries. A study was conducted to evaluate instance selection methods based on spatially-explicit data collection, using location in relation to soil MU boundaries as the main criterion. Decision tree analysis was performed for modeling digital soil class mapping using two different sampling schemes: (a) selecting sampling points located outside buffers near soil MU boundaries, and (b) selecting sampling points located within buffers near soil MU boundaries. Data were prepared for generating classification trees to include only data points located within or outside buffers with widths of 60, 120, 240, 360, 480, and 600 m near MU boundaries. Both spatial instance selection methods were effective in reducing the size of the dataset used for calibrating classification tree models, but failed to provide advantages for digital soil mapping because of the potential reduction in the accuracy of the classification tree models.

  18. Synergistic Instance-Level Subspace Alignment for Fine-Grained Sketch-Based Image Retrieval.

    Science.gov (United States)

    Li, Ke; Pang, Kaiyue; Song, Yi-Zhe; Hospedales, Timothy M; Xiang, Tao; Zhang, Honggang

    2017-08-25

    We study the problem of fine-grained sketch-based image retrieval. By performing instance-level (rather than category-level) retrieval, it embodies a timely and practical application, particularly with the ubiquitous availability of touchscreens. Three factors contribute to the challenging nature of the problem: (i) free-hand sketches are inherently abstract and iconic, making visual comparisons with photos difficult, (ii) sketches and photos lie in two different visual domains, i.e. black-and-white lines vs. color pixels, and (iii) fine-grained distinctions are especially challenging when made across domains and abstraction levels. To address these challenges, we propose to bridge the image-sketch gap both at the high level, via parts and attributes, and at the low level, by introducing a new domain alignment method. More specifically, (i) we contribute a dataset with 304 photos and 912 sketches, where each sketch and image is annotated with its semantic parts and associated part-level attributes. With the help of this dataset, we investigate (ii) how strongly-supervised deformable part-based models can be learned to enable automatic detection of part-level attributes and provide pose-aligned sketch-image comparisons. To reduce the sketch-image gap when comparing low-level features, we also (iii) propose a novel method for instance-level domain alignment that exploits both subspace and instance-level cues to better align the two domains. Finally, (iv) these are combined in a matching framework integrating aligned low-level features, mid-level geometric structure and high-level semantic attributes. Extensive experiments conducted on our new dataset demonstrate the effectiveness of the proposed method.

  19. On the instance of misuse of unprofitable energy prices under cartel law

    International Nuclear Information System (INIS)

    Schoening, M.

    1993-01-01

    The practice of fixing prices which do not cover costs cannot in principle be considered an instance of misuse pursuant to Articles 22 Section 4 Clause 2 No. 2 and 103 Section 5 Clause 2 No. 2 of the GWB (cartel law). If the authority for the supervision of cartels takes action against companies operating with unprofitable prices, this constitutes a violation not only of cartel law, but also of the constitution. The cartel authorities have no right to dismiss a dominant company's reference to poor business prospects on the ground that its business report is theoretically manipulable. Rather, the burden of proof of concealment is on the authorities. (orig.)

  20. Automatic provisioning, deployment and orchestration for load-balancing THREDDS instances

    Science.gov (United States)

    Cofino, A. S.; Fernández-Tejería, S.; Kershaw, P.; Cimadevilla, E.; Petri, R.; Pryor, M.; Stephens, A.; Herrera, S.

    2017-12-01

    THREDDS is a widely used web server that provides different scientific communities with data access and discovery. Due to THREDDS's lack of horizontal scalability and automatic configuration management and deployment, this service often suffers from downtimes and time-consuming configuration tasks, especially under the intensive use that is usual within scientific communities (e.g. climate). Instead of the typical installation and configuration of one or multiple independent, manually configured THREDDS servers, this work presents automatic provisioning, deployment and orchestration of a cluster of THREDDS servers. The solution is based on Ansible playbooks, used to automatically control the deployment and configuration of the infrastructure and to manage the datasets available in the THREDDS instances. The playbooks are built from modules (or roles) for different backend and frontend load-balancing setups and solutions. The frontend load-balancing system enables horizontal scalability by delegating requests to backend workers, consisting of a variable number of THREDDS server instances. This implementation allows different infrastructure and deployment scenarios to be configured, as more workers are easily added to the cluster by simply declaring them as Ansible variables and executing the playbooks; it also provides fault tolerance and better reliability, since if any worker fails, another instance in the cluster can take over. To test the proposed solution, two real scenarios are analyzed in this contribution: the JASMIN Group Workspaces at CEDA and the User Data Gateway (UDG) at the Data Climate Service of the University of Cantabria. On the one hand, the proposed configuration has provided CEDA with a higher-level and more scalable Group Workspaces (GWS) service than the previous one based on Unix permissions, also improving the data discovery and data access experience. 
On the other hand, the UDG has improved its

  1. Automatic analysis of online image data for law enforcement agencies by concept detection and instance search

    Science.gov (United States)

    de Boer, Maaike H. T.; Bouma, Henri; Kruithof, Maarten C.; ter Haar, Frank B.; Fischer, Noëlle M.; Hagendoorn, Laurens K.; Joosten, Bart; Raaijmakers, Stephan

    2017-10-01

    The information available on-line and off-line, from open as well as from private sources, is growing at an exponential rate and places an increasing demand on the limited resources of Law Enforcement Agencies (LEAs). The absence of appropriate tools and techniques to collect, process, and analyze the volumes of complex and heterogeneous data has created a severe information overload. If a solution is not found, the impact on law enforcement will be dramatic, e.g. because important evidence is missed or the investigation time is too long. Furthermore, there is an uneven level of capabilities to deal with the large volumes of complex and heterogeneous data that come from multiple open and private sources at national level across the EU, which hinders cooperation and information sharing. Consequently, there is a pertinent need to develop tools, systems and processes which expedite online investigations. In this paper, we describe a suite of analysis tools to identify and localize generic concepts, instances of objects and logos in images, which constitutes a significant portion of everyday law enforcement data. We describe how incremental learning based on only a few examples and large-scale indexing are addressed in both concept detection and instance search. Our search technology allows querying of the database by visual examples and by keywords. Our tools are packaged in a Docker container to guarantee easy deployment on a system and our tools exploit possibilities provided by open source toolboxes, contributing to the technical autonomy of LEAs.

  2. Towards a new procreation ethic: the exemplary instance of cleft lip and palate.

    Science.gov (United States)

    Le Dref, Gaëlle; Grollemund, Bruno; Danion-Grilliat, Anne; Weber, Jean-Christophe

    2013-08-01

    The improvement of ultrasound scan techniques is enabling ever earlier prenatal diagnosis of developmental anomalies. In France, apart from cases where the mother's life is endangered, the detection of "particularly serious" conditions and of conditions that are "incurable at the time of diagnosis" are the only instances in which a therapeutic abortion can be performed, this applying up to the 9th month of pregnancy. Thus numerous conditions, despite the fact that they cause distress or pain or are socially disabling, do not qualify for therapeutic abortion, despite sometimes pressing demands from parents aware of the difficulties in store for their child and themselves, in a society that is not very favourable towards the integration and self-fulfilment of people with a disability. Cleft lip and palate (CLP), although it can be completely treated, is one of the conditions that considerably complicates the lives of child and parents. Nevertheless, the recent scope for making very early diagnosis of CLP, before the deadline for legal voluntary abortion, has not led to any wave of abortions. CLP in France has the benefit of an exceptional care plan, targeting both the health and the integration of the individuals affected. This article sets out, via the emblematic instance of CLP, to show how present fears of an emerging "domestic" or liberal eugenic trend could become redundant if disability is addressed politically and medically, so that individuals with a disability have the same social rights as any other citizen.

  3. Reversing Kristeva's first instance of abjection: the formation of self reconsidered.

    Science.gov (United States)

    McCabe, Janet L; Holmes, Dave

    2011-03-01

    Psychoanalyst Julia Kristeva defines the theoretical concept of abjection as an unconscious defence mechanism used to protect the self against threats to one's subjectivity. Kristeva suggests that the first instance of abjection in an individual's life occurs when the child abjects the mother. However, the instance of abjection addressed within this paper is the reverse of this: the abjection of the child, with a disability, by the parent, and more broadly society. Using the contemporary example of prenatal testing, the authors explore how parents of children with disabilities may be influenced in abjecting the child. The implications of abjection of the child are then used to explore normalization, routinization of care and the development of standardized care practices within health-care. Prenatal screening practices and standardized care permeate medical obstetric care and social discourses regarding pregnancy and childbirth, thereby affecting not only healthcare professionals but also parents in their position as consumers of health-care. In a time when the focus of health-care is increasingly placed on disease prevention and broader medical and social discourses glorify normalcy and consistency, the unconscious abjection of those that do not fit within these standards must be identified and addressed. © 2011 Blackwell Publishing Ltd.

  4. Seeing is believing: video classification for computed tomographic colonography using multiple-instance learning.

    Science.gov (United States)

    Wang, Shijun; McKenna, Matthew T; Nguyen, Tan B; Burns, Joseph E; Petrick, Nicholas; Sahiner, Berkman; Summers, Ronald M

    2012-05-01

    In this paper, we present development and testing results for a novel colonic polyp classification method for use as part of a computed tomographic colonography (CTC) computer-aided detection (CAD) system. Inspired by the interpretative methodology of radiologists using 3-D fly-through mode in CTC reading, we have developed an algorithm which utilizes sequences of images (referred to here as videos) for classification of CAD marks. For each CAD mark, we created a video composed of a series of intraluminal, volume-rendered images visualizing the detection from multiple viewpoints. We then framed the video classification question as a multiple-instance learning (MIL) problem. Since a positive (negative) bag may contain negative (positive) instances, which in our case depends on the viewing angles and camera distance to the target, we developed a novel MIL paradigm to accommodate this class of problems. We solved the new MIL problem by maximizing an L2-norm soft margin using semidefinite programming, which can optimize relevant parameters automatically. We tested our method by analyzing a CTC data set obtained from 50 patients from three medical centers. Our proposed method showed significantly better performance compared with several traditional MIL methods.
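
    A bag-level decision under the standard MIL assumption can be sketched as max-pooling of per-instance scores. This illustrates only the general MIL setting, not the authors' semidefinite-programming formulation; the score values stand in for hypothetical classifier outputs, one per rendered viewpoint of a CAD mark:

```python
def bag_score(instance_scores):
    """Pool per-instance scores into a bag score. Under the standard
    MIL assumption a bag is positive if at least one instance is
    positive, so the max is taken over the instances (here, the
    viewpoint-specific scores of one CAD mark's video)."""
    return max(instance_scores)

def classify_bag(instance_scores, threshold=0.5):
    """Label the whole bag (CAD mark) as positive or negative."""
    return bag_score(instance_scores) >= threshold
```
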

  5. Emancipation through the Artistic Experience and the Meaning of Handicap as Instance of Otherness

    Directory of Open Access Journals (Sweden)

    Robi Kroflič

    2014-03-01

    Full Text Available The key hypothesis of the article is that successful intermediation of art to vulnerable groups of people (including children) depends on the correct identification of the nature of an artistic act and on the meaning that handicap—as an instance of otherness—has in the life of artists and spectators. Just access to the artistic experience is basically not a question of the distribution of artistic production (since even if the artistic object is in principle accessible to all people, it will not necessarily reach vulnerable groups of spectators), but of ensuring artistic creativity and presentation. This presupposes a spectator as a competent being who is able to interact with the artistic object without our interpretative explanation and who is sensitive to the instance of otherness (handicap is merely a specific form of otherness). The theory of emancipation of J. Rancière, the theory of recognition of A. Honneth, and the theory of narration of P. Ricoeur and R. Kearney, as well as our experiences with a comprehensive inductive approach and artistic experience as one of its basic educational methods, offer a theoretical framework for such a model of art intermediation.

  6. Poly-Pattern Compressive Segmentation of ASTER Data for GIS

    Science.gov (United States)

    Myers, Wayne; Warner, Eric; Tutwiler, Richard

    2007-01-01

    Pattern-based segmentation of multi-band image data, such as ASTER, produces one-byte and two-byte approximate compressions. This is a dual segmentation consisting of nested coarser- and finer-level pattern mappings called poly-patterns. The coarser A-level version is structured for direct incorporation into geographic information systems in the manner of a raster map. GIS renderings of this A-level approximation are called pattern pictures, which have the appearance of color-enhanced images. The two-byte version, consisting of thousands of B-level segments, provides a capability for approximate restoration of the multi-band data in selected areas or entire scenes. Poly-patterns are especially useful for purposes of change detection and landscape analysis at multiple scales. The primary author has implemented the segmentation methodology in a public domain software suite.

  7. Segmentation, advertising and prices

    NARCIS (Netherlands)

    Galeotti, Andrea; Moraga González, José

    This paper explores the implications of market segmentation on firm competitiveness. In contrast to earlier work, here market segmentation is minimal in the sense that it is based on consumer attributes that are completely unrelated to tastes. We show that when the market is comprised by two

  8. Sipunculans and segmentation

    DEFF Research Database (Denmark)

    Wanninger, Andreas; Kristof, Alen; Brinkmann, Nora

    2009-01-01

    mechanisms may act on the level of gene expression, cell proliferation, tissue differentiation and organ system formation in individual segments. Accordingly, in some polychaete annelids the first three pairs of segmental peripheral neurons arise synchronously, while the metameric commissures of the ventral...

  9. Iatrogenic alterations in the biodistribution of radiotracers as a result of drug therapy: Reported instances

    International Nuclear Information System (INIS)

    Hladik, W.B. III; Ponto, J.A.; Lentle, B.C.; Laven, D.L.

    1987-01-01

    This chapter is a compilation of reported instances in which the biodistribution of a radiopharmaceutical has been (or could be) modified by the administration of a therapeutic nonradioactive drug or contrast agent in such a way as to potentially interfere with the interpretation of the nuclear medicine study in question. This type of phenomenon is commonly referred to as a drug-radiopharmaceutical interaction. In this chapter, interactions are arranged according to the radiopharmaceutical involved; each interaction is characterized by use of the following descriptors: 1. Interfering drug: the interfering nonradioactive drug that alters the kinetics of the radiopharmaceutical and thus changes the resulting diagnostic data obtained from the study. 2. Nuclear medicine study affected: the nuclear medicine study in which the interaction is likely to occur. 3. Effect on image: the appearance of the image (or the effect on diagnostic data) which results from the interaction. 4. Significance: the potential clinical significance of the interaction

  10. Prediction of Ionizing Radiation Resistance in Bacteria Using a Multiple Instance Learning Model.

    Science.gov (United States)

    Aridhi, Sabeur; Sghaier, Haïtham; Zoghlami, Manel; Maddouri, Mondher; Nguifo, Engelbert Mephu

    2016-01-01

    Ionizing-radiation-resistant bacteria (IRRB) are important in biotechnology. In this context, in silico methods of phenotypic prediction and genotype-phenotype relationship discovery are limited. In this work, we analyzed basal DNA repair proteins of most known proteome sequences of IRRB and ionizing-radiation-sensitive bacteria (IRSB) in order to learn a classifier that correctly predicts this bacterial phenotype. We formulated the problem of predicting bacterial ionizing radiation resistance (IRR) as a multiple-instance learning (MIL) problem, and we proposed a novel approach for this purpose. We provide a MIL-based prediction system that classifies a bacterium as either IRRB or IRSB. The experimental results of the proposed system are satisfactory, with 91.5% successful predictions.

  11. Applying Mathematical Optimization Methods to an ACT-R Instance-Based Learning Model.

    Science.gov (United States)

    Said, Nadia; Engelhart, Michael; Kirches, Christian; Körkel, Stefan; Holt, Daniel V

    2016-01-01

    Computational models of cognition provide an interface to connect advanced mathematical tools and methods to empirically supported theories of behavior in psychology, cognitive science, and neuroscience. In this article, we consider a computational model of instance-based learning, implemented in the ACT-R cognitive architecture. We propose an approach for obtaining mathematical reformulations of such cognitive models that improve their computational tractability. For the well-established Sugar Factory dynamic decision making task, we conduct a simulation study to analyze central model parameters. We show how mathematical optimization techniques can be applied to efficiently identify optimal parameter values with respect to different optimization goals. Beyond these methodological contributions, our analysis reveals the sensitivity of this particular task with respect to initial settings and yields new insights into how average human performance deviates from potential optimal performance. We conclude by discussing possible extensions of our approach as well as future steps towards applying more powerful derivative-based optimization methods.
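
    Before derivative-based methods, the simplest optimization baseline for such parameter studies is an exhaustive grid search over the model parameters. The sketch below is generic; `run_model` is a hypothetical stand-in for simulating the cognitive model on the task and scoring its performance, and the parameter names are illustrative only:

```python
import itertools

def grid_search(run_model, grid):
    """Evaluate every combination in a parameter grid and return the
    best-scoring setting. `grid` maps parameter names to lists of
    candidate values; `run_model` maps a parameter dict to a score."""
    names = list(grid)
    best_params, best_score = None, float("-inf")
    for values in itertools.product(*grid.values()):
        params = dict(zip(names, values))
        score = run_model(params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score
```
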

  12. The Impact of Culture On Smart Community Technology: The Case of 13 Wikipedia Instances

    Directory of Open Access Journals (Sweden)

    Zinayida Petrushyna

    2014-11-01

    Full Text Available Smart communities provide technologies for monitoring social behaviors inside communities. Technologies that support knowledge building should consider the cultural background of community members, yet studies of the influence of culture on knowledge building are limited. Just a few works consider the digital traces of individuals and explain them using cultural values and beliefs. In this work, we analyze 13 Wikipedia instances in which users with different cultural backgrounds build knowledge in different ways. We compare the edits of users. Using social network analysis, we build and analyze co-authorship networks and track their evolution. We explain the differences we have found using Hofstede's dimensions and Schwartz's cultural values, and discuss implications for the design of smart community technologies. Our findings provide insights into the requirements for technologies used for smart communities in different cultures.

  13. Retrieving clinically relevant diabetic retinopathy images using a multi-class multiple-instance framework

    Science.gov (United States)

    Chandakkar, Parag S.; Venkatesan, Ragav; Li, Baoxin

    2013-02-01

    Diabetic retinopathy (DR) is a vision-threatening complication from diabetes mellitus, a medical condition that is rising globally. Unfortunately, many patients are unaware of this complication because of absence of symptoms. Regular screening of DR is necessary to detect the condition for timely treatment. Content-based image retrieval, using archived and diagnosed fundus (retinal) camera DR images, can improve the screening efficiency of DR. This content-based image retrieval study focuses on two DR clinical findings, microaneurysm and neovascularization, which are clinical signs of non-proliferative and proliferative diabetic retinopathy. The authors propose a multi-class multiple-instance image retrieval framework which deploys a modified color correlogram and statistics of steerable Gaussian filter responses for retrieving clinically relevant images from a database of DR fundus images.

  14. An anomaly detection and isolation scheme with instance-based learning and sequential analysis

    International Nuclear Information System (INIS)

    Yoo, T. S.; Garcia, H. E.

    2006-01-01

    This paper presents an online anomaly detection and isolation (FDI) technique using an instance-based learning method combined with a sequential change detection and isolation algorithm. The proposed method uses kernel density estimation techniques to build statistical models of the given empirical data (the null hypothesis). The null hypothesis is associated with a set of alternative hypotheses modeling the abnormalities of the system. The decision procedure involves a sequential change detection and isolation algorithm. Notably, the proposed method enjoys asymptotic optimality, as the applied change detection and isolation algorithm is optimal in minimizing the worst mean detection/isolation delay for a given mean time before a false alarm or a false isolation. The applicability of this methodology is illustrated with a redundant sensor data set, demonstrating its performance. (authors)
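
    The two ingredients can be sketched with a Gaussian kernel density estimate for the hypothesis models and a CUSUM-style sequential test. This is a simplified single-alternative illustration only; the scheme described above handles a whole set of alternative hypotheses and isolation among them, and all names and constants here are hypothetical:

```python
import math

def gaussian_kde(data, bandwidth):
    """Kernel density estimate built from empirical data; this plays
    the role of the null (or an alternative) hypothesis model."""
    n = len(data)
    norm = n * bandwidth * math.sqrt(2 * math.pi)
    def density(x):
        return sum(math.exp(-0.5 * ((x - xi) / bandwidth) ** 2)
                   for xi in data) / norm
    return density

def cusum_detect(stream, p_null, p_alt, threshold):
    """CUSUM-style sequential change detection: accumulate the
    log-likelihood ratio (clipped at zero) and raise an alarm when
    it exceeds the threshold. Returns the alarm index or None."""
    s = 0.0
    for i, x in enumerate(stream):
        s = max(0.0, s + math.log(p_alt(x) / p_null(x)))
        if s > threshold:
            return i
    return None
```
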

  15. Interpreting Black-Box Classifiers Using Instance-Level Visual Explanations

    Energy Technology Data Exchange (ETDEWEB)

    Tamagnini, Paolo; Krause, Josua W.; Dasgupta, Aritra; Bertini, Enrico

    2017-05-14

    To realize the full potential of machine learning in diverse real-world domains, it is necessary for model predictions to be readily interpretable and actionable for the human in the loop. Analysts, who are the users but not the developers of machine learning models, often do not trust a model because of the lack of transparency in associating predictions with the underlying data space. To address this problem, we propose Rivelo, a visual analytic interface that enables analysts to understand the causes behind predictions of binary classifiers by interactively exploring a set of instance-level explanations. These explanations are model-agnostic, treating a model as a black box, and they help analysts in interactively probing the high-dimensional binary data space for detecting features relevant to predictions. We demonstrate the utility of the interface with a case study analyzing a random forest model on the sentiment of Yelp reviews about doctors.

  16. Automated detection of age-related macular degeneration in OCT images using multiple instance learning

    Science.gov (United States)

    Sun, Weiwei; Liu, Xiaoming; Yang, Zhou

    2017-07-01

    Age-related macular degeneration (AMD) is a macular disease which mostly occurs in older people, and it may cause decreased vision or even lead to permanent blindness. Drusen are an important clinical indicator for AMD, which can help a doctor diagnose the disease and decide the strategy of treatment. Optical coherence tomography (OCT) is widely used in the diagnosis of ophthalmic diseases, including AMD. In this paper, we propose a classification method based on multiple instance learning (MIL) to detect AMD. Drusen can exist in only a few slices of the OCT images, so MIL is utilized in our method. We divide the method into two phases: a training phase and a testing phase. We cluster the initial training features to create a codebook, and employ the trained classifier on the test set. Experimental results show that our method achieved high accuracy and effectiveness.

  17. [External cephalic version].

    Science.gov (United States)

    Navarro-Santana, B; Duarez-Coronado, M; Plaza-Arranz, J

    2016-08-01

    To analyze the rate of successful external cephalic versions in our center and the caesarean sections that would be avoided with the use of external cephalic version. From January 2012 to March 2016, a total of 52 external cephalic versions were carried out at our center. We collected data on maternal age, gestational age at the time of the external cephalic version, maternal body mass index (BMI), fetal position and lie, fetal weight, parity, location of the placenta, amniotic fluid index (AFI), tocolysis, analgesia, newborn weight at birth, minor adverse effects (dizziness, hypotension and maternal pain) and major adverse effects (tachycardia, bradycardia, decelerations and emergency cesarean section). 45% of the versions were unsuccessful and 55% were successful. The percentage of successful vaginal delivery after version was 84% (4% instrumental), with 15% caesarean sections. With respect to the variables studied, significant differences were found only in birth weight, suggesting that birth weight is related to the outcome of external cephalic version. We probably did not find further significant differences due to the number of patients studied. For women with breech presentation, we recommend external cephalic version before expectant management or a cesarean section. External cephalic version increases the proportion of fetuses in cephalic presentation and also decreases the rate of caesarean sections.

  18. Multidimensional Brain MRI segmentation using graph cuts

    International Nuclear Information System (INIS)

    Lecoeur, Jeremy

    2010-01-01

    This thesis deals with the segmentation of multimodal brain MRIs by the graph cuts method. First, we propose a method that utilizes three MRI modalities by merging them. The border information given by the spectral gradient is then challenged by region information, given by the seeds selected by the user, using a graph cut algorithm. Then, we propose three enhancements of this method. The first consists in finding an optimal spectral space, because the spectral gradient is designed for natural images and is therefore inadequate for multimodal medical images; this results in a learning-based segmentation method. We then explore the automation of the graph cut method: here, the various pieces of information usually given by the user are inferred from a robust expectation-maximization algorithm. We show the performance of these two enhanced versions on multiple sclerosis lesions. Finally, we integrate atlases for the automatic segmentation of deep brain structures. These three new techniques show the adaptability of our method to various problems. Our segmentation methods outperform most current techniques in terms of computation time and segmentation accuracy. (authors)

  19. Quantitative Comparison of SPM, FSL, and Brainsuite for Brain MR Image Segmentation

    Directory of Open Access Journals (Sweden)

    Kazemi K

    2014-03-01

    Full Text Available Background: Accurate brain tissue segmentation from magnetic resonance (MR) images is an important step in the analysis of cerebral images. There are software packages which are used for brain segmentation. These packages usually contain a set of skull stripping, intensity non-uniformity (bias) correction and segmentation routines. Thus, assessment of the quality of the segmented gray matter (GM), white matter (WM) and cerebrospinal fluid (CSF) is needed for neuroimaging applications. Methods: In this paper, a performance evaluation of three widely used brain segmentation software packages, SPM8, FSL and Brainsuite, is presented. Segmentation with SPM8 has been performed in three frameworks: (i) default segmentation, (ii) SPM8 New Segmentation and (iii) a modified version using hidden Markov random fields as implemented in the SPM8 VBM toolbox. Results: The accuracy of the segmented GM, WM and CSF and the robustness of the tools against changes in image quality have been assessed using BrainWeb simulated MR images and IBSR real MR images. The calculated similarity between the tissues segmented using the different tools and the corresponding ground truth shows variations in segmentation results. Conclusion: Only a few studies have investigated GM, WM and CSF segmentation; in these studies, skull stripping and bias correction were performed separately and only the segmentation itself was evaluated. Thus, in this study, an assessment of the complete segmentation framework, consisting of the pre-processing and segmentation of these packages, is performed. The obtained results can assist users in choosing an appropriate segmentation software package for the neuroimaging application of interest.

  20. Stochastic learning of multi-instance dictionary for earth mover’s distance-based histogram comparison

    KAUST Repository

    Fan, Jihong

    2016-09-17

    A dictionary plays an important role in multi-instance data representation: it maps bags of instances to histograms. Earth mover's distance (EMD) is the most effective histogram distance metric for the application of multi-instance retrieval. However, until now there has been no multi-instance dictionary learning method designed for EMD-based histogram comparison. To fill this gap, we develop the first EMD-optimal dictionary learning method, using a stochastic optimization approach. In the stochastic learning framework, we have a triplet of bags: one basic bag, one positive bag, and one negative bag. These bags are mapped to histograms using a multi-instance dictionary. We argue that the EMD between the basic histogram and the positive histogram should be smaller than that between the basic histogram and the negative histogram. Based on this condition, we design a hinge loss. By minimizing this hinge loss and some regularization terms of the dictionary, we update the dictionary instances. Experiments on multi-instance retrieval applications show its effectiveness compared with other dictionary learning methods on the problems of medical image retrieval and natural language relation classification. © 2016 The Natural Computing Applications Forum
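
    For equal-width 1-D histograms, EMD reduces to a sum of absolute cumulative differences, and the triplet condition becomes a hinge loss. The sketch below covers the loss only; the dictionary itself (mapping bags to histograms) and the stochastic gradient updates are omitted, and the function names are hypothetical:

```python
def emd_1d(h1, h2):
    """Earth mover's distance between two normalized 1-D histograms
    with identical bins: the sum of absolute cumulative differences."""
    cum, total = 0.0, 0.0
    for a, b in zip(h1, h2):
        cum += a - b
        total += abs(cum)
    return total

def triplet_hinge(basic, pos, neg, margin=1.0):
    """Hinge loss that is zero only when the basic histogram is at
    least `margin` closer (in EMD) to the positive histogram than to
    the negative one."""
    return max(0.0, margin + emd_1d(basic, pos) - emd_1d(basic, neg))
```
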

  1. Versioning Complex Data

    Energy Technology Data Exchange (ETDEWEB)

    Macduff, Matt C.; Lee, Benno; Beus, Sherman J.

    2014-06-29

    Using the history of ARM data files, we designed and demonstrated a feasible data versioning paradigm. Assigning versions to sets of modified files, under some special assumptions and domain-specific rules, was effective in the case of ARM data, which comprises more than 5000 datastreams and 500 TB of data.
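
    One simple versioning policy of this kind, minting a new per-file version whenever a file's content hash changes, can be sketched as follows. This is an illustration only; it does not reproduce ARM's actual domain-specific rules or datastream conventions:

```python
import hashlib

def assign_versions(history):
    """Assign per-file version numbers over a chronological history of
    (filename, content_bytes) events: the version is bumped only when
    the file's content hash actually changes."""
    last_hash, counter, out = {}, {}, []
    for name, data in history:
        digest = hashlib.sha256(data).hexdigest()
        if last_hash.get(name) != digest:
            counter[name] = counter.get(name, 0) + 1
            last_hash[name] = digest
        out.append((name, counter[name]))
    return out
```
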

  2. Pancreas and cyst segmentation

    Science.gov (United States)

    Dmitriev, Konstantin; Gutenko, Ievgeniia; Nadeem, Saad; Kaufman, Arie

    2016-03-01

    Accurate segmentation of abdominal organs from medical images is an essential part of surgical planning and computer-aided disease diagnosis. Many existing algorithms are specialized for the segmentation of healthy organs. Cystic pancreas segmentation is especially challenging due to its low contrast boundaries, variability in shape, location and the stage of the pancreatic cancer. We present a semi-automatic segmentation algorithm for pancreata with cysts. In contrast to existing automatic segmentation approaches for healthy pancreas segmentation which are amenable to atlas/statistical shape approaches, a pancreas with cysts can have even higher variability with respect to the shape of the pancreas due to the size and shape of the cyst(s). Hence, fine results are better attained with semi-automatic steerable approaches. We use a novel combination of random walker and region growing approaches to delineate the boundaries of the pancreas and cysts with respective best Dice coefficients of 85.1% and 86.7%, and respective best volumetric overlap errors of 26.0% and 23.5%. Results show that the proposed algorithm for pancreas and pancreatic cyst segmentation is accurate and stable.
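
    The region-growing half of such a combination can be sketched as a breadth-first flood fill from a user-placed seed. This is a generic sketch under assumed 4-connectivity and a fixed intensity tolerance; it is not the authors' exact formulation, and the random-walker half is omitted:

```python
from collections import deque

def region_grow(image, seed, tol):
    """Grow a region from `seed` over a 2-D intensity grid, adding
    4-connected neighbours whose intensity is within `tol` of the
    seed intensity. Returns the set of (row, col) pixels."""
    rows, cols = len(image), len(image[0])
    seed_val = image[seed[0]][seed[1]]
    region, frontier = {seed}, deque([seed])
    while frontier:
        r, c = frontier.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and (nr, nc) not in region
                    and abs(image[nr][nc] - seed_val) <= tol):
                region.add((nr, nc))
                frontier.append((nr, nc))
    return region
```
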

  3. Segmentation of consumer's markets and evaluation of market's segments

    OpenAIRE

    ŠVECOVÁ, Iveta

    2013-01-01

    The goal of this bachelor thesis was to explain a possible segmentation of consumer's markets for a chosen company and to present a goods offer suited to the needs of the selected segments. The work is divided into a theoretical and a practical part. The first part describes marketing, segmentation, segmentation of consumer's markets, the consumer's market, market segments, and other terms. The second part describes an evaluation of a questionnaire survey, discovering of market's segment...

  4. Utilising Tree-Based Ensemble Learning for Speaker Segmentation

    DEFF Research Database (Denmark)

    Abou-Zleikha, Mohamed; Tan, Zheng-Hua; Christensen, Mads Græsbøll

    2014-01-01

    In audio and speech processing, accurate detection of the changing points between multiple speakers in speech segments is an important stage for several applications such as speaker identification and tracking. Bayesian Information Criterion (BIC)-based approaches are the most traditionally used...... for a certain condition, the model becomes biased to the data used for training, limiting the model's generalisation ability. In this paper, we propose a BIC-based, tuning-free approach for speaker segmentation through the use of ensemble-based learning. A forest of segmentation trees is constructed in which each...... tree is trained using a sampled version of the speech segment. During the tree construction process, a set of randomly selected points in the input sequence is examined as potential segmentation points. The point that yields the highest ΔBIC is chosen and the same process is repeated for the resultant...
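
    The ΔBIC score that each tree evaluates at a candidate point can be sketched as follows. This is a common full-covariance Gaussian formulation written for illustration; the feature layout, split indexing, and penalty weight are assumptions, not necessarily the paper's exact variant.

```python
# ΔBIC for a candidate speaker-change point: positive values favor modeling
# the two sides with separate Gaussians, i.e. declaring a change point.
import numpy as np

def delta_bic(X, t, lam=1.0):
    """X: (N, d) frame features; t: candidate split index (2 <= t <= N-2)."""
    N, d = X.shape
    def logdet(cov):
        # small ridge keeps the log-determinant finite for near-singular covariances
        return np.linalg.slogdet(cov + 1e-6 * np.eye(d))[1]
    full = logdet(np.cov(X.T))
    left = logdet(np.cov(X[:t].T))
    right = logdet(np.cov(X[t:].T))
    penalty = 0.5 * (d + 0.5 * d * (d + 1)) * np.log(N)
    return 0.5 * (N * full - t * left - (N - t) * right) - lam * penalty
```

    In the forest described above, each tree would evaluate such a score at its randomly sampled candidate points and split at the maximizer.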

  5. Segmental tuberculosis verrucosa cutis

    Directory of Open Access Journals (Sweden)

    Hanumanthappa H

    1994-01-01

    Full Text Available A case of segmental tuberculosis verrucosa cutis is reported in a 10-year-old boy. The condition resembled the ascending lymphangitic type of sporotrichosis. The lesions cleared on treatment with INH 150 mg daily for 6 months.

  6. Chromosome condensation and segmentation

    International Nuclear Information System (INIS)

    Viegas-Pequignot, E.M.

    1981-01-01

    Some aspects of chromosome condensation in mammals, especially humans, were studied by means of cytogenetic techniques of chromosome banding. Two further approaches were adopted: a study of normal condensation as early as prophase, and an analysis of chromosome segmentation induced by physical (temperature and γ-rays) or chemical agents (base analogues, antibiotics, ...) in order to bring out the factors liable to affect condensation. Here 'segmentation' means an abnormal chromosome condensation appearing systematically and reproducibly. The study of normal condensation was made possible by the development of a technique based on cell synchronization by thymidine, yielding prophasic and prometaphasic cells. Besides, the possibility of inducing R-banding segmentations on these cells by BrdU (5-bromodeoxyuridine) allowed a much finer analysis of karyotypes. Another technique, using 5-ACR (5-azacytidine), was developed; it allowed inducing a segmentation similar to the one obtained with BrdU and identifying heterochromatic areas rich in G-C base pairs [fr

  7. International EUREKA: Initialization Segment

    International Nuclear Information System (INIS)

    1982-02-01

    The Initialization Segment creates the starting description of the uranium market. The starting description includes the international boundaries of trade, the geologic provinces, resources, reserves, production, uranium demand forecasts, and existing market transactions. The Initialization Segment is designed to accept information of various degrees of detail, depending on what is known about each region. It must transform this information into the specific data structure required by the Market Segment of the model, filling in gaps in the information through a predetermined sequence of defaults and built-in assumptions. A principal function of the Initialization Segment is to create diagnostic messages indicating any inconsistencies in the data and explaining which assumptions were used to organize the data base. This permits the user to manipulate the data base until the user is satisfied that all the assumptions used are reasonable and that any inconsistencies are resolved in a satisfactory manner

  8. Landmark-based deep multi-instance learning for brain disease diagnosis.

    Science.gov (United States)

    Liu, Mingxia; Zhang, Jun; Adeli, Ehsan; Shen, Dinggang

    2018-01-01

    In conventional Magnetic Resonance (MR) image based methods, two stages are often involved to capture brain structural information for disease diagnosis, i.e., 1) manually partitioning each MR image into a number of regions-of-interest (ROIs), and 2) extracting pre-defined features from each ROI for diagnosis with a certain classifier. However, these pre-defined features often limit the performance of the diagnosis, due to challenges in 1) defining the ROIs and 2) extracting effective disease-related features. In this paper, we propose a landmark-based deep multi-instance learning (LDMIL) framework for brain disease diagnosis. Specifically, we first adopt a data-driven learning approach to discover disease-related anatomical landmarks in the brain MR images, along with their nearby image patches. Then, our LDMIL framework learns an end-to-end MR image classifier for capturing both the local structural information conveyed by image patches located by landmarks and the global structural information derived from all detected landmarks. We have evaluated our proposed framework on 1526 subjects from three public datasets (i.e., ADNI-1, ADNI-2, and MIRIAD), and the experimental results show that our framework can achieve superior performance over state-of-the-art approaches. Copyright © 2017 Elsevier B.V. All rights reserved.

  9. Applying Mathematical Optimization Methods to an ACT-R Instance-Based Learning Model.

    Directory of Open Access Journals (Sweden)

    Nadia Said

    Full Text Available Computational models of cognition provide an interface to connect advanced mathematical tools and methods to empirically supported theories of behavior in psychology, cognitive science, and neuroscience. In this article, we consider a computational model of instance-based learning, implemented in the ACT-R cognitive architecture. We propose an approach for obtaining mathematical reformulations of such cognitive models that improve their computational tractability. For the well-established Sugar Factory dynamic decision making task, we conduct a simulation study to analyze central model parameters. We show how mathematical optimization techniques can be applied to efficiently identify optimal parameter values with respect to different optimization goals. Beyond these methodological contributions, our analysis reveals the sensitivity of this particular task with respect to initial settings and yields new insights into how average human performance deviates from potential optimal performance. We conclude by discussing possible extensions of our approach as well as future steps towards applying more powerful derivative-based optimization methods.

  10. When palliative treatment achieves more than palliation: Instances of long-term survival after palliative radiotherapy

    Directory of Open Access Journals (Sweden)

    Madhup Rastogi

    2012-01-01

    Full Text Available Context: Palliative radiotherapy aims at symptom alleviation and improvement of quality of life. It may be effective in conferring a reasonable quantum of local control, as well as possibly prolonging survival on the short term. However, there can be rare instances where long-term survival, or even cure, results from palliative radiotherapy, which mostly uses sub-therapeutic doses. Aim: To categorize and characterize the patients with long-term survival and/or cure after palliative radiotherapy. Materials and Methods: This study is a retrospective analysis of hospital records of patients treated with palliative radiotherapy from 2001 to 2006 at the Regional Cancer Centre, Shimla. Results: Of the analyzed 963 patients who received palliative radiotherapy, 2.4% (n = 23 survived at least 5 years, with a large majority of these surviving patients (73.9%, n = 17 being free of disease. Conclusions: In addition to providing valuable symptom relief, palliative radiotherapy utilizing sub-therapeutic doses may, in a small proportion of patients, bestow long-term survival, and possibly cure. Rationally, such a favorable, but rare outcome cannot be expected with supportive care alone.

  11. Mass detection in digital breast tomosynthesis data using convolutional neural networks and multiple instance learning.

    Science.gov (United States)

    Yousefi, Mina; Krzyżak, Adam; Suen, Ching Y

    2018-05-01

    Digital breast tomosynthesis (DBT) was developed in the field of breast cancer screening as a new tomographic technique to minimize the limitations of conventional digital mammography breast screening methods. A computer-aided detection (CAD) framework for mass detection in DBT has been developed and is described in this paper. The proposed framework operates on a set of two-dimensional (2D) slices. With plane-to-plane analysis on corresponding 2D slices from each DBT, it automatically learns complex patterns of 2D slices through a deep convolutional neural network (DCNN). It then applies multiple instance learning (MIL) with a randomized trees approach to classify DBT images based on extracted information from 2D slices. This CAD framework was developed and evaluated using 5040 2D image slices derived from 87 DBT volumes. The empirical results demonstrate that this proposed CAD framework achieves much better performance than CAD systems that use hand-crafted features and deep cardinality-restricted Boltzmann machines to detect masses in DBTs. Copyright © 2018 Elsevier Ltd. All rights reserved.

  12. The role of inertia in modeling decisions from experience with instance-based learning.

    Science.gov (United States)

    Dutt, Varun; Gonzalez, Cleotilde

    2012-01-01

    One form of inertia is the tendency to repeat the last decision irrespective of the obtained outcomes while making decisions from experience (DFE). A number of computational models based upon the Instance-Based Learning Theory, a theory of DFE, have included different inertia implementations and have shown to simultaneously account for both risk-taking and alternations between alternatives. The role that inertia plays in these models, however, is unclear as the same model without inertia is also able to account for observed risk-taking quite well. This paper demonstrates the predictive benefits of incorporating one particular implementation of inertia in an existing IBL model. We use two large datasets, estimation and competition, from the Technion Prediction Tournament involving a repeated binary-choice task to show that incorporating an inertia mechanism in an IBL model enables it to account for the observed average risk-taking and alternations. Including inertia, however, does not help the model to account for the trends in risk-taking and alternations over trials compared to the IBL model without the inertia mechanism. We generalize the two IBL models, with and without inertia, to the competition set by using the parameters determined in the estimation set. The generalization process demonstrates both the advantages and disadvantages of including inertia in an IBL model.
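
    The inertia mechanism itself is simple to state. The sketch below is a schematic of that idea only, not the published IBL model: with some probability the agent repeats its previous choice regardless of outcomes, and otherwise maximizes over (hypothetical, precomputed) blended values.

```python
# One decision step in a repeated binary-choice task with an inertia mechanism.
# Schematic illustration; parameter names and values are assumptions.
import random

def choose(blended_values, last_choice, p_inertia=0.4, rng=random.Random(7)):
    """blended_values: dict option -> experienced (blended) value."""
    if last_choice is not None and rng.random() < p_inertia:
        return last_choice          # inertia: repeat, ignoring obtained outcomes
    return max(blended_values, key=blended_values.get)
```

    Setting `p_inertia = 0` recovers the no-inertia variant the paper compares against.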

  13. A Fisher Kernel Approach for Multiple Instance Based Object Retrieval in Video Surveillance

    Directory of Open Access Journals (Sweden)

    MIRONICA, I.

    2015-11-01

    Full Text Available This paper presents an automated surveillance system that exploits the Fisher Kernel representation in the context of a multiple-instance object retrieval task. The proposed algorithm has the main purpose of tracking a list of persons in several video sources, using only a few training examples. In the first step, the Fisher Kernel representation describes a set of features as the derivative with respect to the log-likelihood of the generative probability distribution that models the feature distribution. Then, we learn the generative probability distribution over all features extracted from a reduced set of relevant frames. The proposed approach shows significant improvements and we demonstrate that Fisher kernels are well suited for this task. We demonstrate the generality of our approach in terms of features by conducting an extensive evaluation with a broad range of keypoint features. Also, we evaluate our method on two standard video surveillance datasets, attaining superior results compared to state-of-the-art object recognition algorithms.

  14. The boundaries of instance-based learning theory for explaining decisions from experience.

    Science.gov (United States)

    Gonzalez, Cleotilde

    2013-01-01

    Most demonstrations of how people make decisions in risky situations rely on decisions from description, where outcomes and their probabilities are explicitly stated. But recently, more attention has been given to decisions from experience where people discover these outcomes and probabilities through exploration. More importantly, risky behavior depends on how decisions are made (from description or experience), and although prospect theory explains decisions from description, a comprehensive model of decisions from experience is yet to be found. Instance-based learning theory (IBLT) explains how decisions are made from experience through interactions with dynamic environments (Gonzalez et al., 2003). The theory has shown robust explanations of behavior across multiple tasks and contexts, but it is becoming unclear what the theory is able to explain and what it does not. The goal of this chapter is to start addressing this problem. I will introduce IBLT and a recent cognitive model based on this theory: the IBL model of repeated binary choice; then I will discuss the phenomena that the IBL model explains and those that the model does not. The argument is for the theory's robustness but also for clarity in terms of concrete effects that the theory can or cannot account for. Copyright © 2013 Elsevier B.V. All rights reserved.

  15. Multiple-instance learning for computer-aided detection of tuberculosis

    Science.gov (United States)

    Melendez, J.; Sánchez, C. I.; Philipsen, R. H. H. M.; Maduskar, P.; van Ginneken, B.

    2014-03-01

    Detection of tuberculosis (TB) on chest radiographs (CXRs) is a hard problem. Therefore, to help radiologists or even take their place when they are not available, computer-aided detection (CAD) systems are being developed. In order to reach a performance comparable to that of human experts, the pattern recognition algorithms of these systems are typically trained on large CXR databases that have been manually annotated to indicate the abnormal lung regions. However, manually outlining those regions constitutes a time-consuming process that, moreover, is prone to inconsistencies and errors introduced by interobserver variability and the absence of an external reference standard. In this paper, we investigate an alternative pattern classification method, namely multiple-instance learning (MIL), that does not require such detailed information for a CAD system to be trained. We have applied this alternative approach to a CAD system aimed at detecting textural lesions associated with TB. Only the case (or image) condition (normal or abnormal) was provided in the training stage. We compared the resulting performance with those achieved by several variations of a conventional system trained with detailed annotations. A database of 917 CXRs was constructed for experimentation. It was divided into two roughly equal parts that were used as training and test sets. The area under the receiver operating characteristic curve was utilized as a performance measure. Our experiments show that, by applying the investigated MIL approach, comparable results as with the aforementioned conventional systems are obtained in most cases, without requiring condition information at the lesion level.
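
    The weak supervision described here can be sketched with a minimal generic MIL baseline, not the paper's classifier: every instance inherits its bag's label for training, and a bag is scored by its most suspicious instance.

```python
# Naive "single-instance" MIL baseline: instances inherit bag labels at
# training time; a bag's score is its maximum instance probability.
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_mil(bags, bag_labels):
    """bags: list of (n_i, d) instance arrays; bag_labels: 0/1 per bag."""
    X = np.vstack(bags)
    y = np.concatenate([[lab] * len(b) for lab, b in zip(bag_labels, bags)])
    return LogisticRegression(max_iter=1000).fit(X, y)

def score_bag(clf, bag):
    """Bag-level abnormality score = highest instance probability."""
    return clf.predict_proba(bag)[:, 1].max()
```

    In the TB setting, a CXR would be a bag and candidate lesion patches its instances; only the image-level condition enters training, which is exactly the weak supervision the abstract highlights.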

  16. Event recognition in personal photo collections via multiple instance learning-based classification of multiple images

    Science.gov (United States)

    Ahmad, Kashif; Conci, Nicola; Boato, Giulia; De Natale, Francesco G. B.

    2017-11-01

    Over the last few years, a rapid growth has been witnessed in the number of digital photos produced per year. This rapid process poses challenges in the organization and management of multimedia collections, and one viable solution consists of arranging the media on the basis of the underlying events. However, album-level annotation and the presence of irrelevant pictures in photo collections make event-based organization of personal photo albums a more challenging task. To tackle these challenges, in contrast to conventional approaches relying on supervised learning, we propose a pipeline for event recognition in personal photo collections relying on a multiple instance-learning (MIL) strategy. MIL is a modified form of supervised learning and fits well for such applications with weakly labeled data. The experimental evaluation of the proposed approach is carried out on two large-scale datasets including a self-collected and a benchmark dataset. On both, our approach significantly outperforms the existing state-of-the-art.

  17. Fluence map segmentation

    International Nuclear Information System (INIS)

    Rosenwald, J.-C.

    2008-01-01

    The lecture addressed the following topics: 'Interpreting' the fluence map; The sequencer; Reasons for difference between desired and actual fluence map; Principle of 'Step and Shoot' segmentation; Large number of solutions for given fluence map; Optimizing 'step and shoot' segmentation; The interdigitation constraint; Main algorithms; Conclusions on segmentation algorithms (static mode); Optimizing intensity levels and monitor units; Sliding window sequencing; Synchronization to avoid the tongue-and-groove effect; Accounting for physical characteristics of MLC; Importance of corrections for leaf transmission and offset; Accounting for MLC mechanical constraints; The 'complexity' factor; Incorporating the sequencing into optimization algorithm; Data transfer to the treatment machine; Interface between R and V and accelerator; and Conclusions on fluence map segmentation (Segmentation is part of the overall inverse planning procedure; 'Step and Shoot' and 'Dynamic' options are available for most TPS (depending on accelerator model); The segmentation phase tends to come into the optimization loop; The physical characteristics of the MLC have a large influence on final dose distribution; The IMRT plans (MU and relative dose distribution) must be carefully validated). (P.A.)

  18. Gamifying Video Object Segmentation.

    Science.gov (United States)

    Spampinato, Concetto; Palazzo, Simone; Giordano, Daniela

    2017-10-01

    Video object segmentation can be considered as one of the most challenging computer vision problems. Indeed, so far, no existing solution is able to effectively deal with the peculiarities of real-world videos, especially in cases of articulated motion and object occlusions; limitations that appear more evident when we compare the performance of automated methods with the human one. However, manually segmenting objects in videos is largely impractical as it requires a lot of time and concentration. To address this problem, in this paper we propose an interactive video object segmentation method, which exploits, on one hand, the capability of humans to identify correctly objects in visual scenes, and on the other hand, the collective human brainpower to solve challenging and large-scale tasks. In particular, our method relies on a game with a purpose to collect human inputs on object locations, followed by an accurate segmentation phase achieved by optimizing an energy function encoding spatial and temporal constraints between object regions as well as human-provided location priors. Performance analysis carried out on complex video benchmarks, and exploiting data provided by over 60 users, demonstrated that our method shows a better trade-off between annotation times and segmentation accuracy than interactive video annotation and automated video object segmentation approaches.

  19. Strategic market segmentation

    Directory of Open Access Journals (Sweden)

    Maričić Branko R.

    2015-01-01

    Full Text Available Strategic planning of marketing activities is the basis of business success in the modern business environment. Customers are not homogeneous in their preferences and expectations. Formulating an adequate marketing strategy, focused on realization of the company's strategic objectives, requires a segmented approach to the market that appreciates differences in expectations and preferences of customers. One of the significant activities in strategic planning of marketing activities is market segmentation. Strategic planning imposes a need to plan marketing activities according to strategically important segments on a long-term basis. At the same time, there is a need to revise and adapt marketing activities on a short-term basis. There are a number of criteria based on which market segmentation is performed. The paper will consider the effectiveness and efficiency of different market segmentation criteria based on empirical research of customer expectations and preferences. The analysis will include traditional criteria and criteria based on a behavioral model. The research implications will be analyzed from the perspective of selection of the most adequate market segmentation criteria in strategic planning of marketing activities.

  20. A Streamlined Artificial Variable Free Version of Simplex Method

    OpenAIRE

    Inayatullah, Syed; Touheed, Nasir; Imtiaz, Muhammad

    2015-01-01

    This paper proposes a streamlined form of simplex method which provides some great benefits over traditional simplex method. For instance, it does not need any kind of artificial variables or artificial constraints; it could start with any feasible or infeasible basis of an LP. This method follows the same pivoting sequence as of simplex phase 1 without showing any explicit description of artificial variables which also makes it space efficient. Later in this paper, a dual version of the new ...

  1. EU adoption of the IFRS 8 standard on operating segments

    OpenAIRE

    Nicolas Véron

    2007-01-01

    In this paper, presented to the Economic and Monetary Affairs Committee of the European Parliament, Nicolas Véron discusses whether the EU should adopt the controversial IFRS 8 standard, a convergence project on how companies should report the performance of their individual business segments. Véron's recommendation is for the European Union not to adopt the current version of IFRS 8.

  2. Segmented block copolymers with monodisperse aramide end-segments

    NARCIS (Netherlands)

    Araichimani, A.; Gaymans, R.J.

    2008-01-01

    Segmented block copolymers were synthesized using monodisperse diaramide (TT) as hard segments and PTMO with a molecular weight of 2 900 g · mol-1 as soft segments. The aramide: PTMO segment ratio was increased from 1:1 to 2:1 thereby changing the structure from a high molecular weight multi-block

  3. Salience Assignment for Multiple-Instance Data and Its Application to Crop Yield Prediction

    Science.gov (United States)

    Wagstaff, Kiri L.; Lane, Terran

    2010-01-01

    An algorithm was developed to generate crop yield predictions from orbital remote sensing observations, by analyzing thousands of pixels per county and the associated historical crop yield data for those counties. The algorithm determines which pixels contain which crop. Since each known yield value is associated with thousands of individual pixels, this is a multiple instance learning problem. Because individual crop growth is related to the resulting yield, this relationship has been leveraged to identify pixels that are individually related to corn, wheat, cotton, and soybean yield. Those that have the strongest relationship to a given crop's yield values are most likely to contain fields with that crop. Remote sensing time series data (a new observation every 8 days) was examined for each pixel, which contains information for that pixel's growth curve, peak greenness, and other relevant features. An alternating-projection (AP) technique was used to first estimate the "salience" of each pixel, with respect to the given target (crop yield), and then those estimates were used to build a regression model that relates input data (remote sensing observations) to the target. This is achieved by constructing an exemplar for each crop in each county that is a weighted average of all the pixels within the county; the pixels are weighted according to the salience values. The new regression model estimate then informs the next estimate of the salience values. By iterating between these two steps, the algorithm converges to a stable estimate of both the salience of each pixel and the regression model. The salience values indicate which pixels are most relevant to each crop under consideration.
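
    The alternating loop between salience estimates and the regression model can be sketched as follows. Everything here is illustrative: the exponential reweighting rule, the plain least-squares fit, and the array layout are assumptions standing in for the paper's actual AP formulation.

```python
# Alternate between (1) salience-weighted county exemplars, (2) a linear
# yield regression, and (3) salience re-estimation from per-pixel fit error.
import numpy as np

def ap_salience(pixels_per_county, yields, iters=20):
    """pixels_per_county: list of (n_i, d) feature arrays; yields: (C,)."""
    w = [np.ones(len(p)) / len(p) for p in pixels_per_county]  # salience per pixel
    beta = None
    for _ in range(iters):
        # county exemplar = salience-weighted average of its pixels
        E = np.array([wi @ pi for wi, pi in zip(w, pixels_per_county)])
        # regression model relating exemplars to yields
        beta, *_ = np.linalg.lstsq(E, yields, rcond=None)
        # pixels whose own prediction matches the county yield gain salience
        for i, (pi, yi) in enumerate(zip(pixels_per_county, yields)):
            err = (pi @ beta - yi) ** 2
            wi = np.exp(-err / (err.mean() + 1e-12))
            w[i] = wi / wi.sum()
    return w, beta
```

    Pixels that individually track a crop's yield end up with high salience, which is the signal used to decide which pixels contain that crop.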

  4. Cyber situation awareness: modeling detection of cyber attacks with instance-based learning theory.

    Science.gov (United States)

    Dutt, Varun; Ahn, Young-Suk; Gonzalez, Cleotilde

    2013-06-01

    To determine the effects of an adversary's behavior on the defender's accurate and timely detection of network threats. Cyber attacks cause major work disruption. It is important to understand how a defender's behavior (experience and tolerance to threats), as well as adversarial behavior (attack strategy), might impact the detection of threats. In this article, we use cognitive modeling to make predictions regarding these factors. Different model types representing a defender, based on Instance-Based Learning Theory (IBLT), faced different adversarial behaviors. A defender's model was defined by experience of threats: threat-prone (90% threats and 10% nonthreats) and nonthreat-prone (10% threats and 90% nonthreats); and different tolerance levels to threats: risk-averse (model declares a cyber attack after perceiving one threat out of eight total) and risk-seeking (model declares a cyber attack after perceiving seven threats out of eight total). Adversarial behavior is simulated by considering different attack strategies: patient (threats occur late) and impatient (threats occur early). For an impatient strategy, risk-averse models with threat-prone experiences show improved detection compared with risk-seeking models with nonthreat-prone experiences; however, the same is not true for a patient strategy. Based upon model predictions, a defender's prior threat experiences and his or her tolerance to threats are likely to predict detection accuracy; but considering the nature of adversarial behavior is also important. Decision-support tools that consider the role of a defender's experience and tolerance to threats along with the nature of adversarial behavior are likely to improve a defender's overall threat detection.

  5. Instance-based Policy Learning by Real-coded Genetic Algorithms and Its Application to Control of Nonholonomic Systems

    Science.gov (United States)

    Miyamae, Atsushi; Sakuma, Jun; Ono, Isao; Kobayashi, Shigenobu

    The stabilization control of nonholonomic systems has been extensively studied because it is essential for nonholonomic robot control problems. The difficulty in this problem is that a theoretical derivation of the control policy is not guaranteed to be achievable. In this paper, we present a reinforcement learning (RL) method with instance-based policy (IBP) representation, in which control policies for this class are optimized with respect to user-defined cost functions. Direct policy search (DPS) is an approach for RL; the policy is represented by parametric models and the model parameters are directly searched by optimization techniques including genetic algorithms (GAs). In IBP representation an instance consists of a state-action pair, and a policy consists of a set of instances. Several DPSs with IBP have been previously proposed. These methods sometimes fail to obtain optimal control policies when state-action variables are continuous. In this paper, we present a real-coded GA for DPSs with IBP. Our method is specifically designed for continuous domains. Optimization of an IBP faces three difficulties: high dimensionality, epistasis, and multi-modality. Our solution is designed to overcome these difficulties. Policy search with IBP representation appears to be a high-dimensional optimization problem; however, the instances which can improve the fitness are often limited to active instances (instances used for the evaluation). In fact, the number of active instances is small. Therefore, we treat the search problem as a low-dimensional problem by restricting the search variables to active instances. It is commonly known that functions with epistasis can be efficiently optimized with crossovers which satisfy the inheritance of statistics. For efficient search of IBPs, we propose extended crossover-like mutation (extended XLM), which generates a new instance around an existing instance while satisfying the inheritance of statistics.
For overcoming multi-modality, we

  6. Rediscovering market segmentation.

    Science.gov (United States)

    Yankelovich, Daniel; Meer, David

    2006-02-01

    In 1964, Daniel Yankelovich introduced in the pages of HBR the concept of nondemographic segmentation, by which he meant the classification of consumers according to criteria other than age, residence, income, and such. The predictive power of marketing studies based on demographics was no longer strong enough to serve as a basis for marketing strategy, he argued. Buying patterns had become far better guides to consumers' future purchases. In addition, properly constructed nondemographic segmentations could help companies determine which products to develop, which distribution channels to sell them in, how much to charge for them, and how to advertise them. But more than 40 years later, nondemographic segmentation has become just as unenlightening as demographic segmentation had been. Today, the technique is used almost exclusively to fulfill the needs of advertising, which it serves mainly by populating commercials with characters that viewers can identify with. It is true that psychographic types like "High-Tech Harry" and "Joe Six-Pack" may capture some truth about real people's lifestyles, attitudes, self-image, and aspirations. But they are no better than demographics at predicting purchase behavior. Thus they give corporate decision makers very little idea of how to keep customers or capture new ones. Now, Daniel Yankelovich returns to these pages, with consultant David Meer, to argue the case for a broad view of nondemographic segmentation. They describe the elements of a smart segmentation strategy, explaining how segmentations meant to strengthen brand identity differ from those capable of telling a company which markets it should enter and what goods to make. And they introduce their "gravity of decision spectrum", a tool that focuses on the form of consumer behavior that should be of the greatest interest to marketers--the importance that consumers place on a product or product category.

  7. Scorpion image segmentation system

    Science.gov (United States)

    Joseph, E.; Aibinu, A. M.; Sadiq, B. A.; Bello Salau, H.; Salami, M. J. E.

    2013-12-01

    Death as a result of scorpion stings has been a major public health problem in developing countries. Despite the high death rate from scorpion stings, few reports exist in the literature on intelligent devices and systems for the automatic detection of scorpions. This paper proposes a digital image processing approach, based on the fluorescence of scorpions under ultraviolet (UV) light, for the automatic detection and identification of scorpions. The acquired UV-based images undergo pre-processing to equalize uneven illumination, followed by colour space channel separation. The extracted channels are then segmented into two non-overlapping classes. It has been observed that simple thresholding of the green channel of the acquired RGB UV-based image is sufficient for segmenting the scorpion from other background components in the image. Two approaches to image segmentation are also proposed in this work: a simple average segmentation technique and K-means image segmentation. The proposed algorithm has been tested on over 40 UV scorpion images obtained from different parts of the world, and the results show an average accuracy of 97.7% in correctly classifying pixels into two non-overlapping clusters. The proposed system will eliminate the problems associated with some of the existing manual approaches presently in use for scorpion detection.
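
    The green-channel thresholding step described above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' exact pipeline: the synthetic image, array shapes, and the use of the channel mean as the default threshold (the simple-average variant) are all assumptions.

```python
import numpy as np

def segment_green_channel(rgb_uv_image, threshold=None):
    """Binary segmentation by thresholding the green channel of an RGB
    UV-based image; the default threshold is the channel mean (the
    simple-average variant)."""
    green = rgb_uv_image[..., 1].astype(float)
    if threshold is None:
        threshold = green.mean()
    return green > threshold    # True = fluorescing foreground

# Synthetic stand-in for a UV image: a bright green patch on a dark background.
img = np.zeros((4, 4, 3))
img[1:3, 1:3, 1] = 200.0
mask = segment_green_channel(img)
```

    A K-means clustering of the pixel values, as in the paper's second approach, would replace the fixed threshold with two learned cluster centres.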

  8. Version pressure feedback mechanisms for speculative versioning caches

    Science.gov (United States)

    Eichenberger, Alexandre E.; Gara, Alan; O'Brien, Kathryn M.; Ohmacht, Martin; Zhuang, Xiaotong

    2013-03-12

    Mechanisms are provided for controlling version pressure on a speculative versioning cache. Raw version pressure data is collected based on one or more threads accessing cache lines of the speculative versioning cache. One or more statistical measures of version pressure are generated based on the collected raw version pressure data. A determination is made as to whether one or more modifications to an operation of a data processing system are to be performed based on the one or more statistical measures of version pressure, the one or more modifications affecting version pressure exerted on the speculative versioning cache. An operation of the data processing system is modified based on the one or more determined modifications, in response to a determination that one or more modifications to the operation of the data processing system are to be performed, to affect the version pressure exerted on the speculative versioning cache.

  9. Unsupervised motion-based object segmentation refined by color

    Science.gov (United States)

    Piek, Matthijs C.; Braspenning, Ralph; Varekamp, Chris

    2003-06-01

    For various applications, such as data compression, structure from motion, medical imaging and video enhancement, there is a need for an algorithm that divides video sequences into independently moving objects. Because our focus is on video enhancement and structure from motion for consumer electronics, we strive for a low-complexity solution. For still images, several colour-based approaches exist, but these fall short in both speed and segmentation quality. For instance, colour-based watershed algorithms produce a so-called oversegmentation, with many segments covering each single physical object. Other colour segmentation approaches limit the number of segments to reduce this oversegmentation problem; however, this often results in inaccurate edges or even missed objects. Most likely, colour is an inherently insufficient cue for real-world object segmentation, because real-world objects can display complex combinations of colours. For video sequences, however, an additional cue is available, namely the motion of objects. When different objects in a scene have different motion, the motion cue alone is often enough to reliably distinguish objects from one another and from the background. However, because efficient motion estimators, like the 3DRS block matcher, lack sufficient resolution, the resulting segmentation is not at pixel resolution but at block resolution. Existing pixel-resolution motion estimators are more sensitive to noise, suffer more from aperture problems, or correspond less well to the true motion of objects than block-based approaches, or are too computationally expensive. From its tendency to oversegmentation it is apparent that colour segmentation is particularly effective near edges of homogeneously coloured areas. On the other hand, block-based true motion estimation is particularly effective in heterogeneous areas, because heterogeneous areas improve the chance a block is unique and thus decrease the

  10. Multi scales based sparse matrix spectral clustering image segmentation

    Science.gov (United States)

    Liu, Zhongmin; Chen, Zhicai; Li, Zhanming; Hu, Wenjin

    2018-04-01

    In image segmentation, spectral clustering algorithms must adopt an appropriate scaling parameter to calculate the similarity matrix between pixels, which can have a great impact on the clustering result. Moreover, when the number of data instances is large, the computational complexity and memory use of the algorithm increase greatly. To solve these two problems, we propose a new spectral clustering image segmentation algorithm based on multiple scales and a sparse matrix. We first devise a new feature extraction method, then extract image features on different scales, and finally use the feature information to construct a sparse similarity matrix, which improves operating efficiency. Compared with the traditional spectral clustering algorithm, image segmentation experiments show that our algorithm has better accuracy and robustness.
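
    The core idea — sparsifying the similarity matrix before spectral clustering — can be illustrated on toy 2-D points. This is a stand-in, not the paper's multi-scale method: the Gaussian kernel, the `sigma` and `cutoff` values (tuned to the toy data), and the sign-based split of the Fiedler vector are all illustrative assumptions.

```python
import numpy as np

def sparse_spectral_bipartition(points, sigma=3.0, cutoff=7.1):
    """Two-way spectral clustering on a sparsified similarity matrix.
    Similarities for pairs farther apart than `cutoff` are zeroed -- the
    sparsification -- and the sign of the Fiedler vector gives the split."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    W = np.exp(-d**2 / (2.0 * sigma**2))   # Gaussian similarity
    W[d > cutoff] = 0.0                    # drop weak long-range links
    L = np.diag(W.sum(axis=1)) - W         # unnormalised graph Laplacian
    _, vecs = np.linalg.eigh(L)            # eigenvalues in ascending order
    return (vecs[:, 1] > 0).astype(int)    # sign of the Fiedler vector

# Two well-separated point clouds standing in for pixel feature vectors:
pts = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
                [5.0, 5.0], [5.1, 5.0], [5.0, 5.1]])
labels = sparse_spectral_bipartition(pts)
```

    In a real image-sized problem, the zeroed entries would be stored in an actual sparse-matrix format so that eigenvector computation scales.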

  11. Japanese migration in contemporary Japan: economic segmentation and interprefectural migration.

    Science.gov (United States)

    Fukurai, H

    1991-01-01

    This paper examines the economic segmentation model in explaining 1985-86 Japanese interregional migration. The analysis takes advantage of statistical graphic techniques to illustrate the following substantive issues of interregional migration: (1) to examine whether economic segmentation significantly influences Japanese regional migration and (2) to explain socioeconomic characteristics of prefectures for both in- and out-migration. Analytic techniques include a latent structural equation (LISREL) methodology and statistical residual mapping. The residual dispersion patterns, for instance, suggest the extent to which socioeconomic and geopolitical variables explain migration differences by showing unique clusters of unexplained residuals. The analysis further points out that extraneous factors such as high residential land values, significant commuting populations, and regional-specific cultures and traditions need to be incorporated in the economic segmentation model in order to assess the extent of the model's reliability in explaining the pattern of interprefectural migration.

  12. Segmentation of complex document

    Directory of Open Access Journals (Sweden)

    Souad Oudjemia

    2014-06-01

    Full Text Available In this paper we present a method for the segmentation of document images with complex structure. The technique, based on the GLCM (Grey Level Co-occurrence Matrix), is used to segment this type of document into three regions, namely 'graphics', 'background' and 'text'. Very briefly, the method divides the document image into blocks of a size chosen after a series of tests, and then applies the co-occurrence matrix to each block in order to extract five textural parameters: energy, entropy, sum entropy, difference entropy and standard deviation. These parameters are then used to classify the image into three regions using the k-means algorithm; the last step of segmentation is obtained by grouping connected pixels. Two performance measurements are performed for both the graphics and text zones; we obtained a classification rate of 98.3% and a misclassification rate of 1.79%.
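
    Two of the five GLCM parameters (energy and entropy) can be computed per block roughly as follows. The quantisation to 8 grey levels and the horizontal-neighbour pairing are illustrative choices, not necessarily the authors'; a homogeneous block scores high energy and low entropy, a textured block the reverse.

```python
import numpy as np

def glcm_features(block, levels=8):
    """Energy and entropy of the grey-level co-occurrence matrix of a
    block, built over horizontally adjacent pixel pairs."""
    q = np.clip((block.astype(float) / 256.0 * levels).astype(int), 0, levels - 1)
    glcm = np.zeros((levels, levels))
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[a, b] += 1                    # count co-occurring level pairs
    p = glcm / glcm.sum()                  # normalise to a joint distribution
    energy = float((p ** 2).sum())
    nz = p[p > 0]
    entropy = float(-(nz * np.log2(nz)).sum())
    return energy, entropy

flat = np.full((8, 8), 100, dtype=np.uint8)                        # homogeneous block
noisy = (np.arange(64).reshape(8, 8) * 37 % 256).astype(np.uint8)  # textured block
energy_flat, entropy_flat = glcm_features(flat)
energy_noisy, entropy_noisy = glcm_features(noisy)
```

    Feeding such per-block feature vectors to k-means, as the paper does, would then group blocks into the three region types.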

  13. Connecting textual segments

    DEFF Research Database (Denmark)

    Brügger, Niels

    2017-01-01

    In “Connecting textual segments: A brief history of the web hyperlink” Niels Brügger investigates the history of one of the most fundamental features of the web: the hyperlink. Based on the argument that the web hyperlink is best understood if it is seen as another step in a much longer and broader history than just the years of the emergence of the web, the chapter traces the history of how segments of text have deliberately been connected to each other by the use of specific textual and media features, from clay tablets, manuscripts on parchment, and print, among others, to hyperlinks on stand...

  14. Media Memories in Focus Group Discussions - Methodological Reflections Instancing the Global Media Generations Project

    Directory of Open Access Journals (Sweden)

    Theo Hug

    2010-06-01

    Full Text Available Media events in general and the introduction and divulgence of new media technologies and formats in particular implicate various (new) ways of “media entering life.” In the Global Media Generations (GMG) research project, the articulation of individuals’ memories of childhood experiences with the media was afforded by the context of focus groups of three generations in different countries of six continents. In this project, media-related knowledge segments of different age cohorts have been analyzed and interpreted. The article deals with methodological questions of the project and the complex processes of ‘remembering’ past events. It explores commonalities and differences of the GMG approach with Ralf Bohnsack’s documentary approach, both rooted in the sociology of knowledge of Karl Mannheim. Furthermore, mediality is taken into consideration as a basic methodological category, which means that it is perceived not only as subject matter to

  15. Gaussian Multiple Instance Learning Approach for Mapping the Slums of the World Using Very High Resolution Imagery

    Energy Technology Data Exchange (ETDEWEB)

    Vatsavai, Raju [ORNL

    2013-01-01

    In this paper, we present a computationally efficient algorithm based on multiple instance learning for mapping informal settlements (slums) using very high-resolution remote sensing imagery. From a remote sensing perspective, informal settlements share unique spatial characteristics that distinguish them from other urban structures like industrial, commercial, and formal residential settlements. However, regular pattern recognition and machine learning methods, which are predominantly single-instance or per-pixel classifiers, often fail to accurately map informal settlements because they do not capture the complex spatial patterns. To overcome these limitations we employed a multiple instance based machine learning approach, where groups of contiguous pixels (image patches) are modeled as generated by a Gaussian distribution. We have conducted several experiments on very high-resolution satellite imagery representing four unique geographic regions across the world. Our method showed consistent improvement in accurately identifying informal settlements.
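
    The per-patch Gaussian scoring underlying such a multiple-instance classifier might look like the following sketch. The synthetic 3-band pixel features, the class means, and the maximum-likelihood decision rule are assumptions for illustration, not the exact ORNL model.

```python
import numpy as np

def patch_loglik(patch_pixels, mean, cov):
    """Total log-likelihood of a patch (a bag of pixel feature vectors)
    under one multivariate Gaussian class model."""
    d = patch_pixels - mean
    inv = np.linalg.inv(cov)
    _, logdet = np.linalg.slogdet(cov)
    quad = np.einsum('ij,jk,ik->i', d, inv, d)   # Mahalanobis term per pixel
    k = patch_pixels.shape[1]
    return float((-0.5 * (quad + logdet + k * np.log(2 * np.pi))).sum())

rng = np.random.default_rng(0)
informal = rng.normal(0.0, 1.0, (50, 3))   # training pixels, class "informal"
formal = rng.normal(5.0, 1.0, (50, 3))     # training pixels, class "formal"
m_i, c_i = informal.mean(axis=0), np.cov(informal.T)
m_f, c_f = formal.mean(axis=0), np.cov(formal.T)

# Classify an unseen patch by whichever class Gaussian explains it better:
test_patch = rng.normal(0.0, 1.0, (20, 3))
label = ('informal' if patch_loglik(test_patch, m_i, c_i) >
         patch_loglik(test_patch, m_f, c_f) else 'formal')
```

    Scoring whole patches rather than single pixels is what lets the spatial statistics of a neighbourhood, not just one pixel's spectrum, drive the decision.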

  16. Determining Optimal Decision Version

    Directory of Open Access Journals (Sweden)

    Olga Ioana Amariei

    2014-06-01

    Full Text Available In this paper we start from the calculation of the product cost, applying the machine-hour cost method (THM) on each of three cutting machines, namely: a plasma cutting machine, a combined (plasma and water jet) cutting machine, and a water-jet cutting machine. Following the cost calculation, and taking into account the manufacturing precision of each machine as well as the quality of the processed surface, the optimal decision version for manufacturing the product must be determined. To determine the optimal decision version, we first calculate the optimal version on each criterion, and then overall, using multi-attribute decision methods.

  17. Version 2 of RSXMULTI

    International Nuclear Information System (INIS)

    Heinicke, P.; Berg, D.; Constanta-Fanourakis, P.; Quigg, E.K.

    1985-01-01

    MULTI is a general-purpose, high-speed data acquisition and data investigation system for high-energy physics that runs on PDP-11 and VAX architectures. This paper describes the latest version of MULTI, which runs under RSX-11M version 4.1 and supports a modular approach to the separate tasks that interface to it, allowing the same system to be used in single-CPU test-beam experiments as well as large-scale experiments with multiple interconnected CPUs. MULTI uses CAMAC (IEEE-583) for control and monitoring of an experiment, and is written in FORTRAN-77 and assembler. The design of this version, which simplified the interface between tasks and eliminated the need for a hard-to-maintain homegrown I/O system, is also discussed.

  18. Segmentation in cinema perception.

    Science.gov (United States)

    Carroll, J M; Bever, T G

    1976-03-12

    Viewers perceptually segment moving picture sequences into their cinematically defined units: excerpts that follow short film sequences are recognized faster when the excerpt originally came after a structural cinematic break (a cut or change in the action) than when it originally came before the break.

  19. Dictionary Based Image Segmentation

    DEFF Research Database (Denmark)

    Dahl, Anders Bjorholm; Dahl, Vedrana Andersen

    2015-01-01

    We propose a method for weakly supervised segmentation of natural images, which may contain both textured and non-textured regions. Our texture representation is based on a dictionary of image patches. To divide an image into separated regions with similar texture we use an implicit level sets

  20. Unsupervised Image Segmentation

    Czech Academy of Sciences Publication Activity Database

    Haindl, Michal; Mikeš, Stanislav

    2014-01-01

    Roč. 36, č. 4 (2014), s. 23-23 R&D Projects: GA ČR(CZ) GA14-10911S Institutional support: RVO:67985556 Keywords : unsupervised image segmentation Subject RIV: BD - Theory of Information http://library.utia.cas.cz/separaty/2014/RO/haindl-0434412.pdf

  1. Metrics for image segmentation

    Science.gov (United States)

    Rees, Gareth; Greenway, Phil; Morray, Denise

    1998-07-01

    An important challenge in mapping image-processing techniques onto applications is the lack of quantitative performance measures. From a systems engineering perspective these are essential if system level requirements are to be decomposed into sub-system requirements which can be understood in terms of algorithm selection and performance optimization. Nowhere in computer vision is this more evident than in the area of image segmentation. This is a vigorous and innovative research activity, but even after nearly two decades of progress, it remains almost impossible to answer the question 'what would the performance of this segmentation algorithm be under these new conditions?' To begin to address this shortcoming, we have devised a well-principled metric for assessing the relative performance of two segmentation algorithms. This allows meaningful objective comparisons to be made between their outputs. It also estimates the absolute performance of an algorithm given ground truth. Our approach is an information theoretic one. In this paper, we describe the theory and motivation of our method, and present practical results obtained from a range of state of the art segmentation methods. We demonstrate that it is possible to measure the objective performance of these algorithms, and to use the information so gained to provide clues about how their performance might be improved.
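
    As one concrete example of an information-theoretic comparison between two segmentations (a stand-in for illustration, not necessarily the authors' metric), Meila's variation of information scores two label images and is zero exactly when they agree up to a relabelling of segments:

```python
import numpy as np

def variation_of_information(seg_a, seg_b):
    """VI = H(A) + H(B) - 2*I(A;B) between two label arrays; lower is
    more similar, and zero means identical partitions."""
    a, b = np.asarray(seg_a).ravel(), np.asarray(seg_b).ravel()
    n = a.size
    # joint distribution over (label_a, label_b) pairs
    joint = {}
    for pair in zip(a.tolist(), b.tolist()):
        joint[pair] = joint.get(pair, 0) + 1
    p_ab = np.array(list(joint.values()), dtype=float) / n
    _, ca = np.unique(a, return_counts=True)
    _, cb = np.unique(b, return_counts=True)
    p_a, p_b = ca / n, cb / n
    entropy = lambda p: float(-(p * np.log2(p)).sum())
    mi = entropy(p_a) + entropy(p_b) - entropy(p_ab)
    return entropy(p_a) + entropy(p_b) - 2.0 * mi

identical = variation_of_information([0, 0, 1, 1], [1, 1, 0, 0])    # same partition, relabelled
independent = variation_of_information([0, 0, 1, 1], [0, 1, 0, 1])  # unrelated partitions
```

    Comparing an algorithm's output against ground truth with such a metric gives the kind of absolute performance estimate the paper argues for.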

  2. User-assisted Object Detection by Segment Based Similarity Measures in Mobile Laser Scanner Data

    NARCIS (Netherlands)

    Oude Elberink, S.J.; Kemboi, B.J.

    2014-01-01

    This paper describes a method that aims to find all instances of a certain object in Mobile Laser Scanner (MLS) data. In a user-assisted approach, a sample segment of an object is selected, and all similar objects are to be found. By selecting samples from multiple classes, a classification can be

  3. Versioning of printed products

    Science.gov (United States)

    Tuijn, Chris

    2005-01-01

    During the definition of a printed product in an MIS system, a lot of attention is paid to the production process. MIS systems typically gather all process-related parameters at such a level of detail that they can determine the exact cost of making a specific product. This information can then be used to make a quote for the customer. Considerably less attention is paid to the content of the products, since this does not have an immediate impact on production costs (assuming that the number of inks or plates is known in advance). Content management is typically carried out either by the prepress systems themselves or by dedicated workflow servers uniting all the people who contribute to the manufacturing of a printed product. Special care must be taken with versioned products. By versioned products we mean distinct products that have a number of pages or page layers in common. Typical examples are comic books that have to be printed in different languages; in this case, the color plates can be shared over the different versions and the black plate will differ. Other examples are nationwide magazines or newspapers with regional pages, or advertising leaflets in different languages or currencies. For versioned products, the content becomes an important cost factor. First, content management (and the associated proofing and approval cycles) becomes much more complex, and therefore the risk of mistakes increases considerably. Second, the real production costs are very much content-dependent, because the content determines whether plates can be shared across different versions and how many press runs will be needed. In this paper, we present a way to manage different versions of a printed product. First, we introduce a data model for version management. Next, we show how the content of the different versions can be supplied by the customer

  4. COSY INFINITY Version 9

    International Nuclear Information System (INIS)

    Makino, Kyoko; Berz, Martin

    2006-01-01

    In this paper, we review the features of the newly released version of COSY INFINITY, which currently has a base of more than 1000 registered users, focusing on topics which are new and on some topics which became available after the first releases of the previous versions 8 and 8.1. The recent main enhancements of the code are devoted to the reliability and efficiency of the computation, to verified integration, and to rigorous global optimization. Various data types are available in COSY INFINITY to support these goals, and the paper also reviews the features and usage of those data types.

  5. A Pareto-based Ensemble with Feature and Instance Selection for Learning from Multi-Class Imbalanced Datasets.

    Science.gov (United States)

    Fernández, Alberto; Carmona, Cristobal José; José Del Jesus, María; Herrera, Francisco

    2017-09-01

    Imbalanced classification is related to those problems that have an uneven distribution among classes. In addition to the former, when instances are located into the overlapped areas, the correct modeling of the problem becomes harder. Current solutions for both issues are often focused on the binary case study, as multi-class datasets require an additional effort to be addressed. In this research, we overcome these problems by carrying out a combination between feature and instance selections. Feature selection will allow simplifying the overlapping areas easing the generation of rules to distinguish among the classes. Selection of instances from all classes will address the imbalance itself by finding the most appropriate class distribution for the learning task, as well as possibly removing noise and difficult borderline examples. For the sake of obtaining an optimal joint set of features and instances, we embedded the searching for both parameters in a Multi-Objective Evolutionary Algorithm, using the C4.5 decision tree as baseline classifier in this wrapper approach. The multi-objective scheme allows taking a double advantage: the search space becomes broader, and we may provide a set of different solutions in order to build an ensemble of classifiers. This proposal has been contrasted versus several state-of-the-art solutions on imbalanced classification showing excellent results in both binary and multi-class problems.
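
    The non-dominated (Pareto) selection at the heart of such a multi-objective wrapper can be sketched as follows; the two objectives and the candidate scores below are hypothetical, standing in for per-class accuracies of candidate feature/instance subsets:

```python
import numpy as np

def pareto_front(objectives):
    """Return indices of non-dominated candidates (all objectives are
    maximised). A candidate is dominated if some other candidate is at
    least as good on every objective and strictly better on one."""
    obj = np.asarray(objectives, dtype=float)
    front = []
    for i, o in enumerate(obj):
        dominated = bool(np.any(np.all(obj >= o, axis=1) & np.any(obj > o, axis=1)))
        if not dominated:
            front.append(i)
    return front

# Hypothetical (minority-class accuracy, majority-class accuracy) pairs
# for four candidate feature/instance subsets:
candidates = [[0.9, 0.4], [0.7, 0.7], [0.5, 0.9], [0.6, 0.6]]
front = pareto_front(candidates)   # the last candidate is dominated by [0.7, 0.7]
```

    Keeping the whole front rather than a single winner is what yields the set of diverse classifiers the paper combines into an ensemble.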

  6. A guide to evaluation for the bodies that ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    Evaluations are a key element of research for development, since they ... Understanding the role of the bodies that commission evaluations ... With IDRC's support, the Instituto de la Salud, Medio Ambiente, Economia y ...

  7. Status of the segment interconnect, cable segment ancillary logic, and the cable segment hybrid driver projects

    International Nuclear Information System (INIS)

    Swoboda, C.; Barsotti, E.; Chappa, S.; Downing, R.; Goeransson, G.; Lensy, D.; Moore, G.; Rotolo, C.; Urish, J.

    1985-01-01

    The FASTBUS Segment Interconnect (SI) provides a communication path between two otherwise independent, asynchronous bus segments. In particular, the Segment Interconnect links a backplane crate segment to a cable segment. All standard FASTBUS address and data transactions can be passed through the SI or any number of SIs and segments in a path. Thus systems of arbitrary connection complexity can be formed, allowing simultaneous independent processing, yet still permitting devices associated with one segment to be accessed from others. The model S1 Segment Interconnect and the Cable Segment Ancillary Logic covered in this report comply with all the mandatory features stated in the FASTBUS specification document DOE/ER-0189. A block diagram of the SI is shown

  8. Jansen-MIDAS: A multi-level photomicrograph segmentation software based on isotropic undecimated wavelets.

    Science.gov (United States)

    de Siqueira, Alexandre Fioravante; Cabrera, Flávio Camargo; Nakasuga, Wagner Massayuki; Pagamisse, Aylton; Job, Aldo Eloizo

    2018-01-01

    Image segmentation, the process of separating the elements within a picture, is frequently used for obtaining information from photomicrographs. Segmentation methods should be used with reservations, since incorrect results can be misleading when interpreting regions of interest (ROI), decreasing the success rate of subsequent procedures. Multi-Level Starlet Segmentation (MLSS) and Multi-Level Starlet Optimal Segmentation (MLSOS) were developed as an alternative to general segmentation tools. These methods gave rise to Jansen-MIDAS, an open-source software package that scientists can use to obtain several segmentations of their photomicrographs. It is a reliable alternative for processing different types of photomicrographs: previous versions of Jansen-MIDAS were used to segment ROI in photomicrographs of two different materials with an accuracy superior to 89%. © 2017 Wiley Periodicals, Inc.

  9. Market segmentation: Venezuelan ADRs

    Directory of Open Access Journals (Sweden)

    Urbi Garay

    2012-12-01

    Full Text Available The foreign exchange controls imposed by Venezuela in 2003 constitute a natural experiment that allows researchers to observe the effects of exchange controls on stock market segmentation. This paper provides empirical evidence that although the Venezuelan capital market as a whole was highly segmented before the controls were imposed, shares in the firm CANTV were, through their American Depositary Receipts (ADRs), partially integrated with the global market. Following the imposition of the exchange controls, this integration was lost. The paper also documents the spectacular and apparently contradictory rise experienced by the Caracas Stock Exchange during the serious economic crisis of 2003. It is argued that, as happened in Argentina in 2002, the rise in share prices occurred because the depreciation of the bolívar in the parallel currency market increased the local price of the stocks that had associated ADRs, which were negotiated in dollars.

  10. Scintillation counter, segmented shield

    International Nuclear Information System (INIS)

    Olson, R.E.; Thumim, A.D.

    1975-01-01

    A scintillation counter, particularly for counting gamma ray photons, includes a massive lead radiation shield surrounding a sample-receiving zone. The shield is disassembleable into a plurality of segments to allow facile installation and removal of a photomultiplier tube assembly, the segments being so constructed as to prevent straight-line access of external radiation through the shield into radiation-responsive areas. Provisions are made for accurately aligning the photomultiplier tube with respect to one or more sample-transmitting bores extending through the shield to the sample receiving zone. A sample elevator, used in transporting samples into the zone, is designed to provide a maximum gamma-receiving aspect to maximize the gamma detecting efficiency. (U.S.)

  11. Head segmentation in vertebrates

    OpenAIRE

    Kuratani, Shigeru; Schilling, Thomas

    2008-01-01

    Classic theories of vertebrate head segmentation clearly exemplify the idealistic nature of comparative embryology prior to the 20th century. Comparative embryology aimed at recognizing the basic, primary structure that is shared by all vertebrates, either as an archetype or an ancestral developmental pattern. Modern evolutionary developmental (Evo-Devo) studies are also based on comparison, and therefore have a tendency to reduce complex embryonic anatomy into overly simplified patterns. Her...

  12. Video segmentation using keywords

    Science.gov (United States)

    Ton-That, Vinh; Vong, Chi-Tai; Nguyen-Dao, Xuan-Truong; Tran, Minh-Triet

    2018-04-01

    At the DAVIS-2016 Challenge, many state-of-the-art video segmentation methods achieved promising results, but they still depend heavily on annotated frames to distinguish between background and foreground, and creating these frames precisely takes a lot of time and effort. In this paper, we introduce a method to segment objects from video based on keywords given by the user. First, we use a real-time object detection system, YOLOv2, to identify regions in the first frame containing objects whose labels match the given keywords. Then, for each region identified in the previous step, we use the Pyramid Scene Parsing Network to assign each pixel as foreground or background. These frames can be used as input frames for the Object Flow algorithm to perform segmentation on the entire video. We conduct experiments on a subset of the DAVIS-2016 dataset at half its original size, which shows that our method can handle many popular classes in the PASCAL VOC 2012 dataset with acceptable accuracy, about 75.03%. We suggest wider testing, combining other methods, to improve this result in the future.

  13. Market segmentation in behavioral perspective.

    OpenAIRE

    Wells, V.K.; Chang, S.W.; Oliveira-Castro, J.M.; Pallister, J.

    2010-01-01

    A segmentation approach is presented using both traditional demographic segmentation bases (age, social class/occupation, and working status) and a segmentation by benefits sought. The benefits sought in this case are utilitarian and informational reinforcement, variables developed from the Behavioral Perspective Model (BPM). Using data from 1,847 consumers and from a total of 76,682 individual purchases, brand choice and price and reinforcement responsiveness were assessed for each segment a...

  14. Segmenting the Adult Education Market.

    Science.gov (United States)

    Aurand, Tim

    1994-01-01

    Describes market segmentation and how the principles of segmentation can be applied to the adult education market. Indicates that applying segmentation techniques to adult education programs results in programs that are educationally and financially satisfying and serve an appropriate population. (JOW)

  15. Market Segmentation for Information Services.

    Science.gov (United States)

    Halperin, Michael

    1981-01-01

    Discusses the advantages and limitations of market segmentation as strategy for the marketing of information services made available by nonprofit organizations, particularly libraries. Market segmentation is defined, a market grid for libraries is described, and the segmentation of information services is outlined. A 16-item reference list is…

  16. Version control with Git

    CERN Document Server

    Loeliger, Jon

    2012-01-01

    Get up to speed on Git for tracking, branching, merging, and managing code revisions. Through a series of step-by-step tutorials, this practical guide takes you quickly from Git fundamentals to advanced techniques, and provides friendly yet rigorous advice for navigating the many functions of this open source version control system. This thoroughly revised edition also includes tips for manipulating trees, extended coverage of the reflog and stash, and a complete introduction to the GitHub repository. Git lets you manage code development in a virtually endless variety of ways, once you understand how to harness the system's flexibility. This book shows you how. Learn how to use Git for several real-world development scenarios; gain insight into Git's common-use cases, initial tasks, and basic functions; use the system for both centralized and distributed version control; learn how to manage merges, conflicts, patches, and diffs; apply advanced techniques such as rebasing, hooks, and ways to handle submodu...

  17. Global Historical Climatology Network (GHCN), Version 1 (Version Superseded)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Please note, this dataset has been superseded by a newer version (see below). Users should not use this version except in rare cases (e.g., when reproducing previous...

  18. COSY INFINITY version 8

    International Nuclear Information System (INIS)

    Makino, Kyoko; Berz, Martin

    1999-01-01

    The latest version of the particle optics code COSY INFINITY is presented. Using Differential Algebraic (DA) methods, the code allows the computation of aberrations of arbitrary field arrangements to in principle unlimited order. Besides providing a general overview of the code, several recent techniques developed for specific applications are highlighted. These include new features for the direct utilization of detailed measured fields as well as rigorous treatment of remainder bounds

  19. EASI graphics - Version II

    International Nuclear Information System (INIS)

    Allensworth, J.A.

    1984-04-01

    EASI (Estimate of Adversary Sequence Interruption) is an analytical technique for measuring the effectiveness of physical protection systems. EASI Graphics is a computer graphics extension of EASI which provides a capability for performing sensitivity and trade-off analyses of the parameters of a physical protection system. This document reports on the implementation of the Version II of EASI Graphics and illustrates its application with some examples. 5 references, 15 figures, 6 tables
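
    The core EASI calculation can be sketched in a few lines. In this hedged sketch, the per-opportunity detection probabilities and the probabilities of a timely response given detection are supplied directly as inputs; the full EASI model derives the response terms from task-delay and response-time distributions, which is omitted here.

```python
def probability_of_interruption(p_detect, p_response_given_detect):
    """Simplified EASI-style calculation: the adversary passes a
    sequence of detection opportunities, and interruption requires a
    first detection at some step i followed by a timely response:
        P(I) = sum_i P(first detection at i) * P(response in time | i)
    Treating the per-step response probabilities as given inputs is a
    simplification of the full EASI model."""
    p_interrupt = 0.0
    p_undetected = 1.0  # probability the adversary is still undetected
    for p_d, p_r in zip(p_detect, p_response_given_detect):
        p_interrupt += p_undetected * p_d * p_r
        p_undetected *= 1 - p_d
    return p_interrupt
```

    A sensitivity analysis of the kind EASI Graphics supports amounts to re-running this calculation while sweeping one of the input probabilities.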

  20. Albedo estimation for scene segmentation

    Energy Technology Data Exchange (ETDEWEB)

    Lee, C H; Rosenfeld, A

    1983-03-01

    Standard methods of image segmentation do not take into account the three-dimensional nature of the underlying scene. For example, histogram-based segmentation tacitly assumes that the image intensity is piecewise constant, and this is not true when the scene contains curved surfaces. This paper introduces a method of taking 3d information into account in the segmentation process. The image intensities are adjusted to compensate for the effects of estimated surface orientation; the adjusted intensities can be regarded as reflectivity estimates. When histogram-based segmentation is applied to these new values, the image is segmented into parts corresponding to surfaces of constant reflectivity in the scene. 7 references.
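
    The two-step idea, compensating intensities for estimated surface orientation and then segmenting the histogram of the resulting reflectivity estimates, can be sketched as follows. The Lambertian shading model and the simple 1-D k-means split are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def albedo_segment(intensity, cos_angle, n_iter=10):
    """Two-class segmentation after compensating for surface
    orientation, assuming a Lambertian model I = albedo * cos_angle.
    Returns per-pixel labels and the adjusted (reflectivity) values."""
    # Adjusted intensities serve as reflectivity (albedo) estimates.
    albedo = intensity / np.clip(cos_angle, 1e-6, None)

    # Two-class split of the albedo histogram via 1-D k-means.
    c0, c1 = albedo.min(), albedo.max()
    for _ in range(n_iter):
        labels = (np.abs(albedo - c1) < np.abs(albedo - c0)).astype(int)
        if labels.any():
            c1 = albedo[labels == 1].mean()
        if (labels == 0).any():
            c0 = albedo[labels == 0].mean()
    return labels, albedo
```

    On a curved surface the raw intensities vary with orientation, but the adjusted values group pixels by underlying reflectivity.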

  1. FRAMEWORK FOR COMPARING SEGMENTATION ALGORITHMS

    Directory of Open Access Journals (Sweden)

    G. Sithole

    2015-05-01

    Full Text Available The notion of a ‘Best’ segmentation does not exist. A segmentation algorithm is chosen based on the features it yields, the properties of the segments (point sets) it generates, and the complexity of its algorithm. The segmentation is then assessed based on a variety of metrics such as homogeneity, heterogeneity, fragmentation, etc. Even after an algorithm is chosen its performance is still uncertain, because the landscapes/scenarios represented in a point cloud have a strong influence on the eventual segmentation. Selecting an appropriate segmentation algorithm is thus a process of trial and error. Automating the selection of segmentation algorithms and their parameters first requires methods to evaluate segmentations. Three common approaches for evaluating segmentation algorithms are ‘goodness methods’, ‘discrepancy methods’ and ‘benchmarks’. Benchmarks are considered the most comprehensive method of evaluation. In this paper shortcomings in current benchmark methods are identified, and a framework is proposed that permits both a visual and numerical evaluation of segmentations for different algorithms, algorithm parameters and evaluation metrics. The concept of the framework is demonstrated on a real point cloud. Current results are promising and suggest that it can be used to predict the performance of segmentation algorithms.
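
    A ‘discrepancy method’ of the kind the paper classifies can be sketched numerically: score a candidate segmentation against a reference by the best Jaccard overlap found for each reference segment. The metric and its averaging are illustrative assumptions, not the proposed framework itself.

```python
def mean_best_jaccard(reference, candidate):
    """Discrepancy-style score: for each reference segment (a set of
    point ids), find the candidate segment with the highest Jaccard
    overlap and average those scores. 1.0 means a perfect match;
    fragmentation or under-segmentation lowers the score."""
    scores = []
    for ref in reference:
        best = 0.0
        for cand in candidate:
            union = len(ref | cand)
            if union:
                best = max(best, len(ref & cand) / union)
        scores.append(best)
    return sum(scores) / len(scores)
```

    Sweeping an algorithm's parameters and plotting this score against them gives the kind of numerical comparison the framework aims to automate.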

  2. Instance Analysis for the Error of Three-pivot Pressure Transducer Static Balancing Method for Hydraulic Turbine Runner

    Science.gov (United States)

    Weng, Hanli; Li, Youping

    2017-04-01

    The working principle, process device and test procedure of runner static balancing test method by weighting with three-pivot pressure transducers are introduced in this paper. Based on an actual instance of a V hydraulic turbine runner, the error and sensitivity of the three-pivot pressure transducer static balancing method are analysed. Suggestions about improving the accuracy and the application of the method are also proposed.

  3. Correspondence of Concept Hierarchies in Semantic Web Based upon Global Instances and its Application to Facility Management Database

    Science.gov (United States)

    Takahashi, Hiroki; Nishi, Yuusuke; Gion, Tomohiro; Minami, Shinichi; Fukunaga, Tatsuya; Ogata, Jiro; Yoshie, Osamu

    Semantic Web is the technology which determines the relevance of data over the Web using meta-data, enabling advanced search of global information. It is now desirable to develop and apply this technology to many situations in facility management. In facility management, vocabulary should be unified so that facility databases can be shared, for instance to generate optimal maintenance schedules. Ontology databases are usually used to describe the composition or hierarchy of facility parts. However, the vocabularies used in these databases are not unified, even between factories of the same company, which hinders communication between them. Moreover, concepts in different hierarchies cannot be matched to each other. Some methods exist for corresponding concepts across different hierarchies, but they have defects because they attend only to the target hierarchies themselves and the number of instances. We propose an improved method for corresponding concepts between different concept hierarchies, which uses other hierarchies from across the Web and the distance between instances to identify their relations. Our method works even if the sets of instances belonging to the concepts are not identical.
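
    The use of instance distance to relate concepts can be illustrated with a minimal sketch: treat each concept as the set of instances attached to it and match concepts across hierarchies by Jaccard distance. This is a simplification; the paper's method additionally exploits other hierarchies from across the Web, and the function and concept names here are hypothetical.

```python
def instance_distance(concept_a, concept_b):
    """Jaccard distance between the instance sets attached to two
    concepts; 0.0 means the concepts share all their instances."""
    union = concept_a | concept_b
    if not union:
        return 0.0
    return 1 - len(concept_a & concept_b) / len(union)

def best_match(concept, other_hierarchy):
    """Pair a concept (a set of instances) with the closest concept in
    another hierarchy, given as (name, instance_set) tuples."""
    return min(other_hierarchy,
               key=lambda entry: instance_distance(concept, entry[1]))
```

    Note that the match succeeds even though the instance sets are not identical, which is the property the abstract emphasizes.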

  4. Multi-View Multi-Instance Learning Based on Joint Sparse Representation and Multi-View Dictionary Learning.

    Science.gov (United States)

    Li, Bing; Yuan, Chunfeng; Xiong, Weihua; Hu, Weiming; Peng, Houwen; Ding, Xinmiao; Maybank, Steve

    2017-12-01

    In multi-instance learning (MIL), the relations among instances in a bag convey important contextual information in many applications. Previous studies on MIL either ignore such relations or simply model them with a fixed graph structure, so that the overall performance inevitably degrades in complex environments. To address this problem, this paper proposes a novel multi-view multi-instance learning algorithm (M²IL) that combines multiple context structures in a bag into a unified framework. The novel aspects are: (i) we propose a sparse ε-graph model that can generate different graphs with different parameters to represent various context relations in a bag; (ii) we propose a multi-view joint sparse representation that integrates these graphs into a unified framework for bag classification; and (iii) we propose a multi-view dictionary learning algorithm to obtain a multi-view graph dictionary that considers cues from all views simultaneously to improve the discrimination of M²IL. Experiments and analyses in many practical applications prove the effectiveness of M²IL.

  5. Reproducibility of myelin content-based human habenula segmentation at 3 Tesla.

    Science.gov (United States)

    Kim, Joo-Won; Naidich, Thomas P; Joseph, Joshmi; Nair, Divya; Glasser, Matthew F; O'halloran, Rafael; Doucet, Gaelle E; Lee, Won Hee; Krinsky, Hannah; Paulino, Alejandro; Glahn, David C; Anticevic, Alan; Frangou, Sophia; Xu, Junqian

    2018-03-26

    In vivo morphological study of the human habenula, a pair of small epithalamic nuclei adjacent to the dorsomedial thalamus, has recently gained significant interest for its role in reward and aversion processing. However, segmenting the habenula from in vivo magnetic resonance imaging (MRI) is challenging due to the habenula's small size and low anatomical contrast. Although manual and semi-automated habenula segmentation methods have been reported, the test-retest reproducibility of the segmented habenula volume and the consistency of the boundaries of habenula segmentation have not been investigated. In this study, we evaluated the intra- and inter-site reproducibility of in vivo human habenula segmentation from 3T MRI (0.7-0.8 mm isotropic resolution) using our previously proposed semi-automated myelin contrast-based method and its fully-automated version, as well as a previously published manual geometry-based method. The habenula segmentation using our semi-automated method showed consistent boundary definition (high Dice coefficient, low mean distance, and moderate Hausdorff distance) and reproducible volume measurement (low coefficient of variation). Furthermore, the habenula boundary in our semi-automated segmentation from 3T MRI agreed well with that in the manual segmentation from 7T MRI (0.5 mm isotropic resolution) of the same subjects. Overall, our proposed semi-automated habenula segmentation showed reliable and reproducible habenula localization, while its fully-automated version offers an efficient way for large sample analysis. © 2018 Wiley Periodicals, Inc.
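
    The reported reproducibility measures are standard and easy to state in code. Below is a minimal sketch of the Dice coefficient (boundary agreement between two segmentations) and the coefficient of variation (volume reproducibility across repeated scans); the mean and Hausdorff surface distances mentioned in the abstract are omitted for brevity.

```python
import numpy as np

def dice(a, b):
    """Dice coefficient between two binary masks: 2|A∩B| / (|A|+|B|)."""
    a, b = a.astype(bool), b.astype(bool)
    return 2 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def coefficient_of_variation(volumes):
    """CV of repeated volume measurements: sample std / mean."""
    v = np.asarray(volumes, dtype=float)
    return v.std(ddof=1) / v.mean()
```

    A high Dice coefficient and a low CV together are what the abstract means by consistent boundary definition and reproducible volume measurement.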

  6. The Unified Extensional Versioning Model

    DEFF Research Database (Denmark)

    Asklund, U.; Bendix, Lars Gotfred; Christensen, H. B.

    1999-01-01

    Versioning of components in a system is a well-researched field where various adequate techniques have already been established. In this paper, we look at how versioning can be extended to cover also the structural aspects of a system. There exist two basic techniques for versioning - intentional...

  7. Optimally segmented magnetic structures

    DEFF Research Database (Denmark)

    Insinga, Andrea Roberto; Bahl, Christian; Bjørk, Rasmus

    We present a semi-analytical algorithm for magnet design problems, which calculates the optimal way to subdivide a given design region into uniformly magnetized segments. The availability of powerful rare-earth magnetic materials such as Nd-Fe-B has broadened the range of applications of permanent magnets[1][2]. However, the powerful rare-earth magnets are generally expensive, so both the scientific and industrial communities have devoted a lot of effort into developing suitable design methods. Even so, many magnet optimization algorithms either are based on heuristic approaches[3…] … is not available. We will illustrate the results for magnet design problems from different areas, such as electric motors/generators (as the example in the picture), beam focusing for particle accelerators and magnetic refrigeration devices.

  8. Proportional crosstalk correction for the segmented clover at iThemba LABS

    International Nuclear Information System (INIS)

    Bucher, T D; Noncolela, S P; Lawrie, E A; Dinoko, T R S; Easton, J L; Erasmus, N; Lawrie, J J; Mthembu, S H; Mtshali, W X; Shirinda, O; Orce, J N

    2017-01-01

    Reaching new depths in nuclear structure investigations requires new experimental equipment and new techniques of data analysis. The modern γ-ray spectrometers, like AGATA and GRETINA, are now built of new-generation segmented germanium detectors. These most advanced detectors are able to reconstruct the trajectory of a γ-ray inside the detector. They are powerful detectors, but they need careful characterization, since their output signals are more complex. For instance, for each γ-ray interaction that occurs in a segment of such a detector, additional output signals (called proportional crosstalk), falsely appearing as independent (often negative) energy depositions, are registered on the non-interacting segments. A failure to implement crosstalk correction results in incorrectly measured energies on the segments for two- and higher-fold events. It affects all experiments which rely on the recorded segment energies. Furthermore, incorrectly recorded energies on the segments cause a failure to reconstruct the γ-ray trajectories using Compton scattering analysis. The proportional crosstalk for the iThemba LABS segmented clover was measured and a crosstalk correction was successfully implemented. The measured crosstalk-corrected energies show good agreement with the true γ-ray energies independent of the number of hit segments, and an improved energy resolution for the segment sum energy was obtained. (paper)
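
    Proportional crosstalk, as described here, is linear in the deposited energies, so a correction can be sketched as a matrix inversion: build a matrix whose off-diagonal entries are the measured proportional crosstalk coefficients and solve for the true segment energies. The matrix form is an assumption consistent with the linear behaviour described; the paper's calibration procedure is not reproduced.

```python
import numpy as np

def build_crosstalk_matrix(n_segments, coeffs):
    """Linear crosstalk model: the energy recorded on segment i is
    e_rec[i] = e_true[i] + sum_j coeffs[i, j] * e_true[j], where
    `coeffs` holds the small (possibly negative) proportional
    crosstalk factors with zeros on the diagonal."""
    return np.eye(n_segments) + coeffs

def correct_energies(recorded, crosstalk_matrix):
    """Recover per-segment energies by inverting the linear model."""
    return np.linalg.solve(crosstalk_matrix, recorded)
```

    In the toy test below, a two-fold event induces a spurious negative deposit on the non-interacting segment, and solving the linear system restores the true energies.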

  9. PVWatts Version 5 Manual

    Energy Technology Data Exchange (ETDEWEB)

    Dobos, A. P.

    2014-09-01

    The NREL PVWatts calculator is a web application developed by the National Renewable Energy Laboratory (NREL) that estimates the electricity production of a grid-connected photovoltaic system based on a few simple inputs. PVWatts combines a number of sub-models to predict overall system performance, and includes several built-in parameters that are hidden from the user. This technical reference describes the sub-models, documents assumptions and hidden parameters, and explains the sequence of calculations that yield the final system performance estimate. This reference is applicable to the significantly revised version of PVWatts released by NREL in 2014.

  10. Segmentation of the Infant Food Market

    OpenAIRE

    Hrůzová, Daniela

    2015-01-01

    The theoretical part covers general market segmentation, namely the marketing importance of differences among consumers, the essence of market segmentation, its main conditions and the process of segmentation, which consists of four consecutive phases - defining the market, determining important criteria, uncovering segments and developing segment profiles. The segmentation criteria, segmentation approaches, methods and techniques for the process of market segmentation are also described in t...

  11. URGENCES NOUVELLE VERSION

    CERN Multimedia

    Medical Service

    2002-01-01

    The table of emergency numbers that appeared in Bulletin 10/2002 is out of date. The updated version provided by the Medical Service appears on the following page. Please disregard the previous version.
    URGENT NEED OF A DOCTOR
    GENEVA
    PATIENT NOT FIT TO BE MOVED: call your family doctor, or SOS MEDECINS (24H/24H) 748 49 50, or ASSOC. OF GENEVA DOCTORS (7H-23H) 322 20 20
    PATIENT CAN BE MOVED:
    HOPITAL CANTONAL, 24 Micheli du Crest: 372 33 11 / 382 33 11
    CHILDREN'S HOSPITAL, 6 rue Willy Donzé: 382 68 18 / 382 45 55
    MATERNITY, 24 Micheli du Crest: 382 68 16 / 382 33 11
    OPHTHALMOLOGY, 22 Alcide Jentzer: 382 84 00
    HOPITAL DE LA TOUR, Meyrin: 719 61 11
    CENTRE MEDICAL DE MEYRIN, Champs Fréchets: 719 74 00
    EMERGENCIES:
    FIRE BRIGADE: 118
    FIRE BRIGADE CERN: 767 44 44
    URGENT NEED OF AN AMBULANCE (GENEVA AND VAUD): 144
    POLICE: 117
    ANTI-POISON CENTRE (24H/24H): 01 251 51 510
    EUROPEAN EMERGENCY CALL: 112
    FRANCE
    PATIENT NOT FIT TO BE MOVED: call your family doctor
    PATIENT CAN BE MOVED: ST. JULIE…

  12. Deformable segmentation via sparse representation and dictionary learning.

    Science.gov (United States)

    Zhang, Shaoting; Zhan, Yiqiang; Metaxas, Dimitris N

    2012-10-01

    "Shape" and "appearance", the two pillars of a deformable model, complement each other in object segmentation. In many medical imaging applications, while the low-level appearance information is weak or misleading, shape priors play a more important role in guiding a correct segmentation, thanks to the strong shape characteristics of biological structures. Recently a novel shape prior modeling method has been proposed based on sparse learning theory. Instead of learning a generative shape model, shape priors are incorporated on-the-fly through the sparse shape composition (SSC). SSC is robust to non-Gaussian errors and still preserves individual shape characteristics even when such characteristics are not statistically significant. Although it seems straightforward to incorporate SSC into a deformable segmentation framework as shape priors, the large-scale sparse optimization of SSC has low runtime efficiency, which cannot satisfy clinical requirements. In this paper, we design two strategies to decrease the computational complexity of SSC, making a robust, accurate and efficient deformable segmentation system. (1) When the shape repository contains a large number of instances, which is often the case in 2D problems, K-SVD is used to learn a more compact but still informative shape dictionary. (2) If the derived shape instance has a large number of vertices, which often appears in 3D problems, an affinity propagation method is used to partition the surface into small sub-regions, on which the sparse shape composition is performed locally. Both strategies dramatically decrease the scale of the sparse optimization problem and hence speed up the algorithm. Our method is applied to a diverse set of biomedical image analysis problems. Compared to the original SSC, these two newly-proposed modules not only significantly reduce the computational complexity, but also improve the overall accuracy. Copyright © 2012 Elsevier B.V. All rights reserved.
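
    The sparse optimization at the heart of SSC can be illustrated with a generic orthogonal matching pursuit routine that approximates an input shape vector as a sparse combination of dictionary columns. This is a stand-in sketch, not the authors' solver; in their pipeline K-SVD would additionally learn the compact dictionary from training shapes.

```python
import numpy as np

def omp(D, y, n_nonzero):
    """Orthogonal matching pursuit: greedily select dictionary atoms
    (columns of D) and re-fit least squares on the selected support,
    until n_nonzero atoms approximate y."""
    residual = y.astype(float).copy()
    support = []
    coef = np.zeros(D.shape[1])
    for _ in range(n_nonzero):
        # Pick the atom most correlated with the current residual.
        idx = int(np.argmax(np.abs(D.T @ residual)))
        if idx not in support:
            support.append(idx)
        # Re-fit least squares on the selected atoms.
        sol, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        coef[:] = 0.0
        coef[support] = sol
        residual = y - D @ coef
    return coef
```

    Restricting the representation to a few atoms is what makes the composition robust to gross (non-Gaussian) errors in the input shape.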

  13. Just how literal is the King James Version?

    OpenAIRE

    Jan (JH) Kroeze; Manie (CM) van den Heever; Bertus (AJ) van Rooy

    2010-01-01

    Many scholars have the perception that the King James Version (KJV) is a literal translation. However, it is not so easy to define the concept of "literal translation". The simplest definition may be to regard it as word-for-word translation. However, when one compares the KJV carefully with the original Hebrew Bible, there are numerous instances where lexical items are changed to adapt the idiom to that of the target language. In this article, a measuring instrument will be proposed and u...

  14. Phasing multi-segment undulators

    International Nuclear Information System (INIS)

    Chavanne, J.; Elleaume, P.; Vaerenbergh, P. Van

    1996-01-01

    An important issue in the manufacture of multi-segment undulators as a source of synchrotron radiation or as a free-electron laser (FEL) is the phasing between successive segments. The state of the art is briefly reviewed, after which a novel pure permanent magnet phasing section that is passive and does not require any current is presented. The phasing section allows the introduction of a 6 mm longitudinal gap between each segment, resulting in complete mechanical independence and reduced magnetic interaction between segments. The tolerance of the longitudinal positioning of one segment with respect to the next is found to be 2.8 times lower than that of conventional phasing. The spectrum at all gaps and useful harmonics is almost unchanged when compared with a single-segment undulator of the same total length. (au) 3 refs

  15. The LOFT Ground Segment

    DEFF Research Database (Denmark)

    Bozzo, E.; Antonelli, A.; Argan, A.

    2014-01-01

    …targets per orbit (~90 minutes), providing roughly ~80 GB of proprietary data per day (the proprietary period will be 12 months). The WFM continuously monitors about 1/3 of the sky at a time and provides data for about ~100 sources a day, resulting in a total of ~20 GB of additional telemetry. The LOFT Burst alert System additionally identifies on-board bright impulsive events (e.g., Gamma-ray Bursts, GRBs) and broadcasts the corresponding position and trigger time to the ground using a dedicated system of ~15 VHF receivers. All WFM data are planned to be made public immediately. In this contribution we summarize the planned organization of the LOFT ground segment (GS), as established in the mission Yellow Book 1. We describe the expected GS contributions from ESA and the LOFT consortium. A review is provided of the planned LOFT data products and the details of the data flow, archiving…

  16. Segmented heat exchanger

    Science.gov (United States)

    Baldwin, Darryl Dean; Willi, Martin Leo; Fiveland, Scott Byron; Timmons, Kristine Ann

    2010-12-14

    A segmented heat exchanger system for transferring heat energy from an exhaust fluid to a working fluid. The heat exchanger system may include a first heat exchanger for receiving incoming working fluid and the exhaust fluid. The working fluid and exhaust fluid may travel through at least a portion of the first heat exchanger in a parallel flow configuration. In addition, the heat exchanger system may include a second heat exchanger for receiving working fluid from the first heat exchanger and exhaust fluid from a third heat exchanger. The working fluid and exhaust fluid may travel through at least a portion of the second heat exchanger in a counter flow configuration. Furthermore, the heat exchanger system may include a third heat exchanger for receiving working fluid from the second heat exchanger and exhaust fluid from the first heat exchanger. The working fluid and exhaust fluid may travel through at least a portion of the third heat exchanger in a parallel flow configuration.

  17. International EUREKA: Market Segment

    International Nuclear Information System (INIS)

    1982-03-01

    The purpose of the Market Segment of the EUREKA model is to simultaneously project uranium market prices, uranium supply and purchasing activities. The regional demands are extrinsic. However, annual forward contracting activities to meet these demands as well as inventory requirements are calculated. The annual price forecast is based on relatively short term, forward balances between available supply and desired purchases. The forecasted prices and extrapolated price trends determine decisions related to exploration and development, new production operations, and the operation of existing capacity. Purchasing and inventory requirements are also adjusted based on anticipated prices. The calculation proceeds one year at a time. Conditions calculated at the end of one year become the starting conditions for the calculation in the subsequent year
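
    The year-at-a-time structure of the model can be sketched as a simple loop in which each year's price responds to the balance between desired purchases and available supply, and capacity decisions follow the resulting price signal. All function names and coefficients below are illustrative assumptions, not EUREKA's calibrated parameters.

```python
def simulate_market(demand, initial_price, initial_capacity,
                    price_sensitivity=0.5, capacity_response=0.1):
    """Year-by-year forecast in the spirit of the Market Segment:
    conditions at the end of one year become the starting conditions
    for the next. Returns (year, price, capacity) tuples."""
    price, capacity = initial_price, initial_capacity
    history = []
    for year, d in enumerate(demand):
        # Short-term balance between desired purchases and supply.
        imbalance = (d - capacity) / capacity
        price = max(price * (1 + price_sensitivity * imbalance), 0.0)
        # Anticipated prices feed back into new production decisions.
        capacity *= 1 + capacity_response * imbalance
        history.append((year, round(price, 2), round(capacity, 1)))
    return history
```

    With demand held above capacity, prices rise year over year while capacity slowly expands toward demand, mirroring the feedback the abstract describes.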

  18. Probabilistic retinal vessel segmentation

    Science.gov (United States)

    Wu, Chang-Hua; Agam, Gady

    2007-03-01

    Optic fundus assessment is widely used for diagnosing vascular and non-vascular pathology. Inspection of the retinal vasculature may reveal hypertension, diabetes, arteriosclerosis, cardiovascular disease and stroke. Due to various imaging conditions retinal images may be degraded. Consequently, the enhancement of such images and vessels in them is an important task with direct clinical applications. We propose a novel technique for vessel enhancement in retinal images that is capable of enhancing vessel junctions in addition to linear vessel segments. This is an extension of vessel filters we have previously developed for vessel enhancement in thoracic CT scans. The proposed approach is based on probabilistic models which can discern vessels and junctions. Evaluation shows the proposed filter is better than several known techniques and is comparable to the state of the art when evaluated on a standard dataset. A ridge-based vessel tracking process is applied on the enhanced image to demonstrate the effectiveness of the enhancement filter.

  19. Segmented rail linear induction motor

    Science.gov (United States)

    Cowan, Jr., Maynard; Marder, Barry M.

    1996-01-01

    A segmented rail linear induction motor has a segmented rail consisting of a plurality of nonferrous electrically conductive segments aligned along a guideway. The motor further includes a carriage including at least one pair of opposed coils fastened to the carriage for moving the carriage. A power source applies an electric current to the coils to induce currents in the conductive surfaces to repel the coils from adjacent edges of the conductive surfaces.

  20. Automated medical image segmentation techniques

    Directory of Open Access Journals (Sweden)

    Sharma Neeraj

    2010-01-01

    Full Text Available Accurate segmentation of medical images is a key step in contouring during radiotherapy planning. Computed tomography (CT) and magnetic resonance (MR) imaging are the most widely used radiographic techniques in diagnosis, clinical studies and treatment planning. This review provides details of automated segmentation methods, specifically discussed in the context of CT and MR images. The motive is to discuss the problems encountered in segmentation of CT and MR images, and the relative merits and limitations of methods currently available for segmentation of medical images.

  1. ADVANCED CLUSTER BASED IMAGE SEGMENTATION

    Directory of Open Access Journals (Sweden)

    D. Kesavaraja

    2011-11-01

    Full Text Available This paper presents efficient and portable implementations of a useful image segmentation technique based on a faster variant of the conventional connected components algorithm, which we call parallel components. In the modern world, many doctors need image segmentation as a service for various purposes, and they expect the system to run fast and securely. Conventional image segmentation algorithms are usually slow, and despite several ongoing research efforts they may not run fast enough. We therefore propose a cluster computing environment for parallel image segmentation to provide faster results. This paper describes a real-time implementation of distributed image segmentation on a cluster of nodes. We demonstrate the effectiveness and feasibility of our method on a set of medical CT scan images. Our general framework is a single address space, distributed memory programming model. We use efficient techniques for distributing and coalescing data as well as efficient combinations of task and data parallelism. The image segmentation algorithm makes use of an efficient cluster process which uses a novel approach for parallel merging. Our experimental results are consistent with the theoretical analysis, and show faster execution times for segmentation compared with the conventional method. Our test data are different CT scan images from a medical database. More efficient implementations of image segmentation will likely result in even faster execution times.
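
    The sequential primitive being parallelized here, connected components labelling, can be sketched as follows; the cluster distribution and parallel merging described in the paper are not reproduced.

```python
from collections import deque

def connected_components(image):
    """4-connected component labelling of a binary image (a list of
    lists of 0/1) by breadth-first search. Returns a label image and
    the number of components found."""
    rows, cols = len(image), len(image[0])
    labels = [[0] * cols for _ in range(rows)]
    current = 0
    for r in range(rows):
        for c in range(cols):
            if image[r][c] and not labels[r][c]:
                current += 1  # start a new component
                queue = deque([(r, c)])
                labels[r][c] = current
                while queue:
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and image[ny][nx] and not labels[ny][nx]):
                            labels[ny][nx] = current
                            queue.append((ny, nx))
    return labels, current
```

    A distributed version would label sub-images on separate nodes and then merge labels along the shared boundaries, which is where the paper's parallel merging step comes in.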

  2. Multiple Instance Fuzzy Inference

    Science.gov (United States)

    2015-12-02

    [Only fragments of the report's tables survive extraction: a per-category classification-error table over image classes such as Fashion models (5.19), Sunset scenes (3.52), Cars (4.93), Waterfalls (2.56), Antique furniture (2.30), Battle ships (4.32), Skiing (3.34) and Desserts, plus a note that 10% of the Desserts category images were confused with Beach and 20.9% of Mountains and glaciers images were misclassified as…]

  3. Eurochemic: failure or instance

    International Nuclear Information System (INIS)

    Busekist, O. von

    1982-01-01

    In this article, the author draws up a balance sheet of twenty-three years of good and bad luck of the European venture Eurochemic. It turns out that an important number of errors of judgement are the source of the present difficult situation of the European market for the nuclear reprocessing. (AF)

  4. Theory, For Instance

    DEFF Research Database (Denmark)

    Balle, Søren Hattesen

    This paper takes its starting point in a short poem by Wallace Stevens from 1917, which incidentally bears the title “Theory”. The poem can be read as a parable of theory, i.e., as something literally ’thrown beside’ theory (cf. OED: “...“). In the philosophical tradition this is also how the style of theory has been figured, that is to say: as something that is incidental to it or just happens to be around as so much paraphernalia. In my reading of Stevens’ poem I shall argue that this is exactly the position from which Stevens takes off when he assumes the task of writing a personified portrait of theory. Theory emerges as always beside(s) itself in what constitutes its style, but the poem also suggests that theory’s style is what gives theory both its power and its contingency. Figured as a duchess, Theoria is only capable of retaining her power…

  5. CLIPS 6.0 - C LANGUAGE INTEGRATED PRODUCTION SYSTEM, VERSION 6.0 (IBM PC VERSION)

    Science.gov (United States)

    Donnell, B.

    1994-01-01

    COOL (that is, a rule can pattern match on objects created using COOL). CLIPS 6.0 provides the capability to define functions, overloaded functions, and global variables interactively. In addition, CLIPS can be embedded within procedural code, called as a subroutine, and integrated with languages such as C, FORTRAN and Ada. CLIPS can be easily extended by a user through the use of several well-defined protocols. CLIPS provides several delivery options for programs including the ability to generate stand alone executables or to load programs from text or binary files. CLIPS 6.0 provides support for the modular development and execution of knowledge bases with the defmodule construct. CLIPS modules allow a set of constructs to be grouped together such that explicit control can be maintained over restricting the access of the constructs by other modules. This type of control is similar to global and local scoping used in languages such as C or Ada. By restricting access to deftemplate and defclass constructs, modules can function as blackboards, permitting only certain facts and instances to be seen by other modules. Modules are also used by rules to provide execution control. The CRSV (Cross-Reference, Style, and Verification) utility included with previous version of CLIPS is no longer supported. The capabilities provided by this tool are now available directly within CLIPS 6.0 to aid in the development, debugging, and verification of large rule bases. COSMIC offers four distribution versions of CLIPS 6.0: UNIX (MSC-22433), VMS (MSC-22434), MACINTOSH (MSC-22429), and IBM PC (MSC-22430). Executable files, source code, utilities, documentation, and examples are included on the program media. All distribution versions include identical source code for the command line version of CLIPS 6.0. This source code should compile on any platform with an ANSI C compiler. 
Each distribution version of CLIPS 6.0, except that for the Macintosh platform, includes an executable for the

  6. CLIPS 6.0 - C LANGUAGE INTEGRATED PRODUCTION SYSTEM, VERSION 6.0 (UNIX VERSION)

    Science.gov (United States)

    Donnell, B.

    1994-01-01

    COOL (that is, a rule can pattern match on objects created using COOL). CLIPS 6.0 provides the capability to define functions, overloaded functions, and global variables interactively. In addition, CLIPS can be embedded within procedural code, called as a subroutine, and integrated with languages such as C, FORTRAN and Ada. CLIPS can be easily extended by a user through the use of several well-defined protocols. CLIPS provides several delivery options for programs including the ability to generate stand alone executables or to load programs from text or binary files. CLIPS 6.0 provides support for the modular development and execution of knowledge bases with the defmodule construct. CLIPS modules allow a set of constructs to be grouped together such that explicit control can be maintained over restricting the access of the constructs by other modules. This type of control is similar to global and local scoping used in languages such as C or Ada. By restricting access to deftemplate and defclass constructs, modules can function as blackboards, permitting only certain facts and instances to be seen by other modules. Modules are also used by rules to provide execution control. The CRSV (Cross-Reference, Style, and Verification) utility included with previous version of CLIPS is no longer supported. The capabilities provided by this tool are now available directly within CLIPS 6.0 to aid in the development, debugging, and verification of large rule bases. COSMIC offers four distribution versions of CLIPS 6.0: UNIX (MSC-22433), VMS (MSC-22434), MACINTOSH (MSC-22429), and IBM PC (MSC-22430). Executable files, source code, utilities, documentation, and examples are included on the program media. All distribution versions include identical source code for the command line version of CLIPS 6.0. This source code should compile on any platform with an ANSI C compiler. 
Each distribution version of CLIPS 6.0, except that for the Macintosh platform, includes an executable for the

  7. CLIPS 6.0 - C LANGUAGE INTEGRATED PRODUCTION SYSTEM, VERSION 6.0 (MACINTOSH VERSION)

    Science.gov (United States)

    Riley, G.

    1994-01-01

    COOL (that is, a rule can pattern match on objects created using COOL). CLIPS 6.0 provides the capability to define functions, overloaded functions, and global variables interactively. In addition, CLIPS can be embedded within procedural code, called as a subroutine, and integrated with languages such as C, FORTRAN and Ada. CLIPS can be easily extended by a user through the use of several well-defined protocols. CLIPS provides several delivery options for programs including the ability to generate stand-alone executables or to load programs from text or binary files. CLIPS 6.0 provides support for the modular development and execution of knowledge bases with the defmodule construct. CLIPS modules allow a set of constructs to be grouped together such that explicit control can be maintained over restricting the access of the constructs by other modules. This type of control is similar to global and local scoping used in languages such as C or Ada. By restricting access to deftemplate and defclass constructs, modules can function as blackboards, permitting only certain facts and instances to be seen by other modules. Modules are also used by rules to provide execution control. The CRSV (Cross-Reference, Style, and Verification) utility included with previous versions of CLIPS is no longer supported. The capabilities provided by this tool are now available directly within CLIPS 6.0 to aid in the development, debugging, and verification of large rule bases. COSMIC offers four distribution versions of CLIPS 6.0: UNIX (MSC-22433), VMS (MSC-22434), MACINTOSH (MSC-22429), and IBM PC (MSC-22430). Executable files, source code, utilities, documentation, and examples are included on the program media. All distribution versions include identical source code for the command line version of CLIPS 6.0. This source code should compile on any platform with an ANSI C compiler.
Each distribution version of CLIPS 6.0, except that for the Macintosh platform, includes an executable for the

  8. Constrained Deep Weak Supervision for Histopathology Image Segmentation.

    Science.gov (United States)

    Jia, Zhipeng; Huang, Xingyi; Chang, Eric I-Chao; Xu, Yan

    2017-11-01

    In this paper, we develop a new weakly supervised learning algorithm to learn to segment cancerous regions in histopathology images. The approach operates within a multiple instance learning (MIL) framework with a new formulation, deep weak supervision (DWS); we also propose an effective way to introduce constraints to our neural networks to assist the learning process. The contributions of our algorithm are threefold: 1) we build an end-to-end learning system that segments cancerous regions with fully convolutional networks (FCNs) in which image-to-image weakly-supervised learning is performed; 2) we develop a DWS formulation to exploit multi-scale learning under weak supervision within FCNs; and 3) constraints about positive instances are introduced in our approach to effectively explore additional weakly supervised information that is easy to obtain and gives a significant boost to the learning process. The proposed algorithm, abbreviated as DWS-MIL, is easy to implement and can be trained efficiently. Our system demonstrates state-of-the-art results on large-scale histopathology image data sets and can be applied to various applications in medical imaging beyond histopathology images, such as MRI, CT, and ultrasound images.
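    The MIL principle described above — an image-level (bag) label supervising pixel-level (instance) predictions, plus a constraint on the fraction of positive instances — can be sketched in a few lines. This is a hypothetical NumPy illustration of the general idea, not the paper's FCN implementation; the aggregation exponent `r`, the area bounds, and the weight `lam` are invented parameters.

```python
import numpy as np

def bag_probability(instance_probs, r=5.0):
    # Generalized-mean aggregation of per-pixel cancer probabilities into
    # one image-level probability; large r approaches the max over pixels.
    p = np.asarray(instance_probs, dtype=float)
    return (np.mean(p ** r)) ** (1.0 / r)

def mil_loss(instance_probs, bag_label, area_lo=0.1, area_hi=0.7, lam=1.0):
    # Bag term: image-level cross-entropy against the weak (image) label.
    pb = np.clip(bag_probability(instance_probs), 1e-7, 1 - 1e-7)
    bag_term = -(bag_label * np.log(pb) + (1 - bag_label) * np.log(1 - pb))
    # Constraint term: for positive images, penalize predicted cancerous
    # area fractions outside a plausible [area_lo, area_hi] range.
    area = float(np.mean(np.asarray(instance_probs) > 0.5))
    if bag_label == 1:
        violation = max(0.0, area_lo - area) + max(0.0, area - area_hi)
    else:
        violation = area  # negative images should contain no positives
    return bag_term + lam * violation
```

    A training loop would minimize this loss over the network's pixel probabilities; the constraint term is what injects the "additional weakly supervised information" mentioned in the abstract.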

  9. Region segmentation along image sequence

    International Nuclear Information System (INIS)

    Monchal, L.; Aubry, P.

    1995-01-01

    A method to extract regions in a sequence of images is proposed. Regions are not matched from one image to the next; instead, the result of a region segmentation is used as an initialization to segment the following image and to track the region along the sequence. The image sequence is exploited as a spatio-temporal event. (authors). 12 refs., 8 figs
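    The idea of using the previous image's segmentation as the initialization for the next one can be sketched as a seeded region-growing step: the old mask supplies the region's intensity statistics and the seed pixels for the new frame. This is an illustrative reconstruction of the general strategy, not the authors' algorithm; the tolerance `tol` and the 4-connected growth rule are assumptions.

```python
from collections import deque
import numpy as np

def propagate_region(prev_mask, prev_frame, next_frame, tol=0.2):
    # Mean intensity of the region in the previous image.
    mu = prev_frame[prev_mask].mean()
    h, w = next_frame.shape
    # Seeds: previous-region pixels that still look like the region.
    seeds = [(y, x) for y, x in zip(*np.nonzero(prev_mask))
             if abs(next_frame[y, x] - mu) <= tol]
    mask = np.zeros_like(prev_mask, dtype=bool)
    q = deque(seeds)
    for y, x in seeds:
        mask[y, x] = True
    while q:  # 4-connected region growing in the next image
        y, x = q.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx] \
                    and abs(next_frame[ny, nx] - mu) <= tol:
                mask[ny, nx] = True
                q.append((ny, nx))
    return mask
```

    Applied frame after frame, each output mask becomes the initialization for the next, which is how the region is tracked without explicit matching.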

  10. Market segmentation using perceived constraints

    Science.gov (United States)

    Jinhee Jun; Gerard Kyle; Andrew Mowen

    2008-01-01

    We examined the practical utility of segmenting potential visitors to Cleveland Metroparks using their constraint profiles. Our analysis identified three segments based on their scores on the dimensions of constraints: Other priorities--visitors who scored the highest on 'other priorities' dimension; Highly Constrained--visitors who scored relatively high on...

  11. Market Segmentation: An Instructional Module.

    Science.gov (United States)

    Wright, Peter H.

    A concept-based introduction to market segmentation is provided in this instructional module for undergraduate and graduate transportation-related courses. The material can be used in many disciplines including engineering, business, marketing, and technology. The concept of market segmentation is primarily a transportation planning technique by…

  12. IFRS 8 – OPERATING SEGMENTS

    Directory of Open Access Journals (Sweden)

    BOCHIS LEONICA

    2009-05-01

    Full Text Available Segment reporting in accordance with IFRS 8 will be mandatory for annual financial statements covering periods beginning on or after 1 January 2009. The standard replaces IAS 14, Segment Reporting, from that date. The objective of IFRS 8 is to require

  13. Reduplication Facilitates Early Word Segmentation

    Science.gov (United States)

    Ota, Mitsuhiko; Skarabela, Barbora

    2018-01-01

    This study explores the possibility that early word segmentation is aided by infants' tendency to segment words with repeated syllables ("reduplication"). Twenty-four nine-month-olds were familiarized with passages containing one novel reduplicated word and one novel non-reduplicated word. Their central fixation times in response to…

  14. The Importance of Marketing Segmentation

    Science.gov (United States)

    Martin, Gillian

    2011-01-01

    The rationale behind marketing segmentation is to allow businesses to focus on their consumers' behaviors and purchasing patterns. If done effectively, marketing segmentation allows an organization to achieve its highest return on investment (ROI) in turn for its marketing and sales expenses. If an organization markets its products or services to…

  15. Essays in international market segmentation

    NARCIS (Netherlands)

    Hofstede, ter F.

    1999-01-01

    The primary objective of this thesis is to develop and validate new methodologies to improve the effectiveness of international segmentation strategies. The current status of international market segmentation research is reviewed in an introductory chapter, which provides a number of

  16. Orthographic Transparency Enhances Morphological Segmentation in Children Reading Hebrew Words

    Science.gov (United States)

    Haddad, Laurice; Weiss, Yael; Katzir, Tami; Bitan, Tali

    2018-01-01

    Morphological processing of derived words develops simultaneously with reading acquisition. However, the reader’s engagement in morphological segmentation may depend on the language morphological richness and orthographic transparency, and the readers’ reading skills. The current study tested the common idea that morphological segmentation is enhanced in non-transparent orthographies to compensate for the absence of phonological information. Hebrew’s rich morphology and the dual version of the Hebrew script (with and without diacritic marks) provides an opportunity to study the interaction of orthographic transparency and morphological segmentation on the development of reading skills in a within-language design. Hebrew speaking 2nd (N = 27) and 5th (N = 29) grade children read aloud 96 noun words. Half of the words were simple mono-morphemic words and half were bi-morphemic derivations composed of a productive root and a morphemic pattern. In each list half of the words were presented in the transparent version of the script (with diacritic marks), and half in the non-transparent version (without diacritic marks). Our results show that in both groups, derived bi-morphemic words were identified more accurately than mono-morphemic words, but only for the transparent, pointed, script. For the un-pointed script the reverse was found, namely, that bi-morphemic words were read less accurately than mono-morphemic words, especially in second grade. Second grade children also read mono-morphemic words faster than bi-morphemic words. Finally, correlations with a standardized measure of morphological awareness were found only for second grade children, and only in bi-morphemic words. These results, showing greater morphological effects in second grade compared to fifth grade children suggest that for children raised in a language with a rich morphology, common and easily segmented morphemic units may be more beneficial for younger compared to older readers. Moreover

  17. Orthographic Transparency Enhances Morphological Segmentation in Children Reading Hebrew Words

    Directory of Open Access Journals (Sweden)

    Laurice Haddad

    2018-01-01

    Full Text Available Morphological processing of derived words develops simultaneously with reading acquisition. However, the reader’s engagement in morphological segmentation may depend on the language morphological richness and orthographic transparency, and the readers’ reading skills. The current study tested the common idea that morphological segmentation is enhanced in non-transparent orthographies to compensate for the absence of phonological information. Hebrew’s rich morphology and the dual version of the Hebrew script (with and without diacritic marks) provides an opportunity to study the interaction of orthographic transparency and morphological segmentation on the development of reading skills in a within-language design. Hebrew speaking 2nd (N = 27) and 5th (N = 29) grade children read aloud 96 noun words. Half of the words were simple mono-morphemic words and half were bi-morphemic derivations composed of a productive root and a morphemic pattern. In each list half of the words were presented in the transparent version of the script (with diacritic marks), and half in the non-transparent version (without diacritic marks). Our results show that in both groups, derived bi-morphemic words were identified more accurately than mono-morphemic words, but only for the transparent, pointed, script. For the un-pointed script the reverse was found, namely, that bi-morphemic words were read less accurately than mono-morphemic words, especially in second grade. Second grade children also read mono-morphemic words faster than bi-morphemic words. Finally, correlations with a standardized measure of morphological awareness were found only for second grade children, and only in bi-morphemic words. These results, showing greater morphological effects in second grade compared to fifth grade children suggest that for children raised in a language with a rich morphology, common and easily segmented morphemic units may be more beneficial for younger compared to older

  18. GEODESIC RECONSTRUCTION, SADDLE ZONES & HIERARCHICAL SEGMENTATION

    Directory of Open Access Journals (Sweden)

    Serge Beucher

    2011-05-01

    Full Text Available The morphological reconstruction based on geodesic operators is a powerful tool in mathematical morphology. The general definition of this reconstruction supposes the use of a marker function f which is not necessarily related to the function g to be built. However, this paper deals with operations where the marker function is defined from given characteristic regions of the initial function f, as it is the case, for instance, for the extrema (maxima or minima) but also for the saddle zones. Firstly, we show that the intuitive definition of a saddle zone is not easy to handle, especially when digitised images are involved. However, some of these saddle zones (regional ones, also called overflow zones) can be defined, this definition providing a simple algorithm to extract them. The second part of the paper is devoted to the use of these overflow zones as markers in image reconstruction. This reconstruction provides a new function which exhibits a new hierarchy of extrema. This hierarchy is equivalent to the hierarchy produced by the so-called waterfall algorithm. We explain why the waterfall algorithm can be achieved by performing a watershed transform of the function reconstructed by its initial watershed lines. Finally, some examples of use of this hierarchical segmentation are described.
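    Geodesic reconstruction by dilation, the basic operator behind the method above, can be sketched directly: repeatedly dilate the marker and clip it under the mask until stability. A minimal NumPy version with 4-connectivity (unit structuring element) might look like this; it is a didactic sketch, not an efficient production implementation.

```python
import numpy as np

def geodesic_reconstruction(marker, mask):
    # Reconstruction by dilation: iterate "dilate the marker, then clip it
    # under the mask" until stability (requires marker <= mask pointwise).
    r = marker.copy()
    while True:
        d = r.copy()
        d[1:, :] = np.maximum(d[1:, :], r[:-1, :])   # propagate down
        d[:-1, :] = np.maximum(d[:-1, :], r[1:, :])  # propagate up
        d[:, 1:] = np.maximum(d[:, 1:], r[:, :-1])   # propagate right
        d[:, :-1] = np.maximum(d[:, :-1], r[:, 1:])  # propagate left
        d = np.minimum(d, mask)                      # geodesic step
        if np.array_equal(d, r):
            return r
        r = d
```

    Seeding the marker inside one component reconstructs only that component, which is the mechanism exploited when overflow zones are used as markers.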

  19. Human-Like Room Segmentation for Domestic Cleaning Robots

    Directory of Open Access Journals (Sweden)

    David Fleer

    2017-11-01

    Full Text Available Autonomous mobile robots have recently become a popular solution for automating cleaning tasks. In one application, the robot cleans a floor space by traversing and covering it completely. While fulfilling its task, such a robot may create a map of its surroundings. For domestic indoor environments, these maps often consist of rooms connected by passageways. Segmenting the map into these rooms has several uses, such as hierarchical planning of cleaning runs by the robot, or the definition of cleaning plans by the user. Especially in the latter application, the robot-generated room segmentation should match the human understanding of rooms. Here, we present a novel method that solves this problem for the graph of a topo-metric map: first, a classifier identifies those graph edges that cross a border between rooms. This classifier utilizes data from multiple robot sensors, such as obstacle measurements and camera images. Next, we attempt to segment the map at these room–border edges using graph clustering. By training the classifier on user-annotated data, this produces a human-like room segmentation. We optimize and test our method on numerous realistic maps generated by our cleaning-robot prototype and its simulated version. Overall, we find that our method produces more human-like room segmentations compared to mere graph clustering. However, unusual room borders that differ from the training data remain a challenge.
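    The two-stage scheme above — classify graph edges as room borders, then segment the map graph — reduces, in its simplest form, to deleting the border edges and labeling the connected components that remain. A minimal sketch of that second stage (the trained border classifier is stubbed out as a user-supplied predicate, an assumption):

```python
from collections import defaultdict, deque

def segment_rooms(nodes, edges, is_border):
    # Drop edges classified as room borders, then label the connected
    # components of the remaining map graph: one component per room.
    adj = defaultdict(list)
    for u, v in edges:
        if not is_border((u, v)):
            adj[u].append(v)
            adj[v].append(u)
    label, rooms = {}, 0
    for start in nodes:
        if start in label:
            continue
        rooms += 1
        q = deque([start])
        label[start] = rooms
        while q:  # breadth-first flood fill of one room
            u = q.popleft()
            for v in adj[u]:
                if v not in label:
                    label[v] = rooms
                    q.append(v)
    return label
```

    In the paper's setting, `is_border` would be the sensor-data classifier trained on user-annotated maps, and the clustering step is more elaborate than plain connected components.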

  20. Semantic Segmentation of Real-time Sensor Data Stream for Complex Activity Recognition

    OpenAIRE

    Triboan, Darpan; Chen, Liming; Chen, Feng; Wang, Zumin

    2016-01-01

    Department of Information Engineering, Dalian University, China The file attached to this record is the author's final peer reviewed version. The Publisher's final version can be found by following the DOI link. Data segmentation plays a critical role in performing human activity recognition (HAR) in ambient assisted living (AAL) systems. It is particularly important for complex activity recognition when the events occur in short bursts with attributes of multiple sub-tasks. Althou...

  1. ERRATUM - French version only

    CERN Multimedia

    The following text replaces the French version of the box published on page 2 of Bulletin 28/2003: On 1 July 1953, the representatives of the twelve founding Member States of CERN signed the Organization's Convention. Today, CERN has twenty European Member States: Germany, Austria, Belgium, Bulgaria, Denmark, Spain, Finland, France, Greece, Hungary, Italy, Norway, the Netherlands, Poland, Portugal, the Slovak Republic, the Czech Republic, the United Kingdom, Sweden, and Switzerland. The United States, India, Israel, Japan, the Russian Federation, Turkey, the European Commission and UNESCO have observer status.

  2. Segmental vitiligo with segmental morphea: An autoimmune link?

    Directory of Open Access Journals (Sweden)

    Pravesh Yadav

    2014-01-01

    Full Text Available An 18-year-old girl with segmental vitiligo involving the left side of the trunk and left upper limb with segmental morphea involving the right side of the trunk and right upper limb without any deeper involvement is illustrated. There was no history of preceding drug intake, vaccination, trauma, radiation therapy, infection, or hormonal therapy. Family history of stable vitiligo in her brother and a history of type II diabetes mellitus in the father were elicited. Screening for autoimmune diseases and antithyroid antibody was negative. An autoimmune link explaining the co-occurrence has been proposed. Cutaneous mosaicism could explain the presence of both the pathologies in a segmental distribution.

  3. Topology and robustness in the Drosophila segment polarity network.

    Directory of Open Access Journals (Sweden)

    Nicholas T Ingolia

    2004-06-01

    Full Text Available A complex hierarchy of genetic interactions converts a single-celled Drosophila melanogaster egg into a multicellular embryo with 14 segments. Previously, von Dassow et al. reported that a mathematical model of the genetic interactions that defined the polarity of segments (the segment polarity network) was robust (von Dassow et al. 2000). As quantitative information about the system was unavailable, parameters were sampled randomly. A surprisingly large fraction of these parameter sets allowed the model to maintain and elaborate on the segment polarity pattern. This robustness is due to the positive feedback of gene products on their own expression, which induces individual cells in a model segment to adopt different stable expression states (bistability) corresponding to different cell types in the segment polarity pattern. A positive feedback loop will only yield multiple stable states when the parameters that describe it satisfy a particular inequality. By testing which random parameter sets satisfy these inequalities, I show that bistability is necessary to form the segment polarity pattern and serves as a strong predictor of which parameter sets will succeed in forming the pattern. Although the original model was robust to parameter variation, it could not reproduce the observed effects of cell division on the pattern of gene expression. I present a modified version that incorporates recent experimental evidence and does successfully mimic the consequences of cell division. The behavior of this modified model can also be understood in terms of bistability in positive feedback of gene expression. I discuss how this topological property of networks provides robust pattern formation and how large changes in parameters can change the specific pattern produced by a network.
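    The link between a parameter inequality and bistability can be illustrated on a minimal, hypothetical positive-feedback model (not the von Dassow network itself): dx/dt = a·x²/(K² + x²) − d·x, a Hill-type self-activation with linear decay. Solving for the nonzero fixed points shows two stable states exist exactly when a > 2dK; the sketch below simply counts stable fixed points numerically.

```python
import numpy as np

def stable_states(a, K, d, xmax=10.0, n=100000):
    # Count stable fixed points of dx/dt = a*x^2/(K^2 + x^2) - d*x.
    # Bistable parameter sets (a > 2*d*K) have an "off" state at x = 0 and
    # an "on" state at high x, separated by an unstable fixed point.
    x = np.linspace(0.0, xmax, n)
    f = a * x**2 / (K**2 + x**2) - d * x
    stable = 1  # x = 0 is always stable here: f < 0 just above the origin
    s = np.sign(f[1:])
    # Sign changes of f from + to - mark additional stable fixed points.
    stable += int(np.sum((s[:-1] > 0) & (s[1:] < 0)))
    return stable
```

    Sampling (a, K, d) at random and checking the inequality against `stable_states` mimics, in miniature, the paper's test of which random parameter sets yield bistability.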

  4. Using Predictability for Lexical Segmentation.

    Science.gov (United States)

    Çöltekin, Çağrı

    2017-09-01

    This study investigates a strategy based on predictability of consecutive sub-lexical units in learning to segment a continuous speech stream into lexical units using computational modeling and simulations. Lexical segmentation is one of the early challenges during language acquisition, and it has been studied extensively through psycholinguistic experiments as well as computational methods. However, despite strong empirical evidence, the explicit use of predictability of basic sub-lexical units in models of segmentation is underexplored. This paper presents an incremental computational model of lexical segmentation for exploring the usefulness of predictability for lexical segmentation. We show that the predictability cue is a strong cue for segmentation. Contrary to earlier reports in the literature, the strategy yields state-of-the-art segmentation performance with an incremental computational model that uses only this particular cue in a cognitively plausible setting. The paper also reports an in-depth analysis of the model, investigating the conditions affecting the usefulness of the strategy. Copyright © 2016 Cognitive Science Society, Inc.
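    The predictability cue can be sketched with transitional probabilities over syllable bigrams: posit a word boundary wherever the forward transitional probability dips to a local minimum. This toy batch model illustrates the general strategy only; the paper's model is incremental and more principled, and the local-minimum rule here is an assumption.

```python
from collections import Counter

def segment_by_predictability(utterances):
    # Estimate forward transitional "probabilities" P(b | a) from syllable
    # unigram and bigram counts over the whole corpus.
    pair, uni = Counter(), Counter()
    for u in utterances:
        for a in u:
            uni[a] += 1
        for a, b in zip(u, u[1:]):
            pair[(a, b)] += 1

    def tp(a, b):
        return pair[(a, b)] / uni[a]

    segmented = []
    for u in utterances:
        probs = [tp(a, b) for a, b in zip(u, u[1:])]
        words, start = [], 0
        for i in range(1, len(probs)):
            # Boundary where predictability is a local minimum.
            if probs[i] < probs[i - 1] and (i + 1 == len(probs) or probs[i] <= probs[i + 1]):
                words.append(u[start:i + 1])
                start = i + 1
        words.append(u[start:])
        segmented.append(words)
    return segmented
```

    On an artificial stream built from fixed syllable "words", within-word transitions are highly predictable while across-word transitions are not, so boundaries fall between words.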

  5. The Hierarchy of Segment Reports

    Directory of Open Access Journals (Sweden)

    Danilo Dorović

    2015-05-01

    Full Text Available The article presents an attempt to find the connection between reports created for managers responsible for different business segments. With this purpose, the hierarchy of the business reporting segments is proposed. This can lead to better understanding of the expenses under common responsibility of more than one manager since these expenses should be in more than one report. The structure of cost defined per business segment hierarchy with the aim of new, unusual but relevant cost structure for management can be established. Both could potentially bring new information benefits for management in the context of profit reporting.

  6. Segmental dilatation of the ileum

    Directory of Open Access Journals (Sweden)

    Tune-Yie Shih

    2017-01-01

    Full Text Available A 2-year-old boy was sent to the emergency department with the chief problem of abdominal pain for 1 day. He had just been discharged from the pediatric ward with the diagnosis of mycoplasmal pneumonia and paralytic ileus. After initial examinations and radiographic investigations, midgut volvulus was suspected. An emergency laparotomy was performed. Segmental dilatation of the ileum with volvulus was found. The operative procedure was resection of the dilated ileal segment with anastomosis. The postoperative recovery was uneventful. The unique abnormality of the gastrointestinal tract – segmental dilatation of the ileum – is described in detail and the literature is reviewed.

  7. Accounting for segment correlations in segmented gamma-ray scans

    International Nuclear Information System (INIS)

    Sheppard, G.A.; Prettyman, T.H.; Piquette, E.C.

    1994-01-01

    In a typical segmented gamma-ray scanner (SGS), the detector's field of view is collimated so that a complete horizontal slice or segment of the desired thickness is visible. Ordinarily, the collimator is not deep enough to exclude gamma rays emitted from sample volumes above and below the segment aligned with the collimator. This can lead to assay biases, particularly for certain radioactive-material distributions. Another consequence of the collimator's low aspect ratio is that segment assays at the top and bottom of the sample are biased low because the detector's field of view is not filled. This effect is ordinarily countered by placing the sample on a low-Z pedestal and scanning one or more segment thicknesses below and above the sample. This takes extra time, however. We have investigated a number of techniques that both account for correlated segments and correct for end effects in SGS assays. Also, we have developed an algorithm that facilitates estimates of assay precision. Six calculation methods have been compared by evaluating the results of thousands of simulated assays for three types of gamma-ray source distribution and ten masses. We will report on these computational studies and their experimental verification
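    The segment-correlation problem described above lends itself to a linear-unfolding illustration: if a response matrix gives the fraction of each segment's emission seen at each collimator position, inverting the system removes the inter-segment cross-talk that biases naive per-segment assays. The matrix values below are invented for illustration, not measured detector responses.

```python
import numpy as np

# Illustrative 4-segment response matrix: row i gives the fraction of each
# segment's emission seen while the collimator is aligned with segment i
# (the off-diagonal terms are the inter-segment cross-talk).
R = np.array([
    [0.80, 0.15, 0.00, 0.00],
    [0.15, 0.80, 0.15, 0.00],
    [0.00, 0.15, 0.80, 0.15],
    [0.00, 0.00, 0.15, 0.80],
])

true_activity = np.array([0.0, 10.0, 0.0, 5.0])
measured = R @ true_activity  # what the scanner would record per segment

# Naive per-segment assay ignores cross-talk and is biased; solving the
# coupled linear system recovers the segment activities.
naive = measured / 0.80
unfolded = np.linalg.solve(R, measured)
```

    In practice the response matrix must itself be measured or modeled, and counting statistics propagate through the inversion, which is where the precision-estimation algorithm mentioned above comes in.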

  8. What are Segments in Google Analytics

    Science.gov (United States)

    Segments find all sessions that meet a specific condition. You can then apply this segment to any report in Google Analytics (GA). Segments are a way of identifying sessions and users while filters identify specific events, like pageviews.
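    The session-versus-hit distinction can be made concrete with a toy example (plain Python, not the Google Analytics API): a segment keeps the whole of every session that contains a qualifying hit, while a filter keeps only the qualifying hits themselves. The field names are invented for the illustration.

```python
# Toy hit-level data: each dict is one event (hit) within a session.
hits = [
    {"session": "A", "page": "/home"},
    {"session": "A", "page": "/pricing"},
    {"session": "B", "page": "/home"},
    {"session": "C", "page": "/pricing"},
]

def segment_sessions(hits, condition):
    # A segment keeps *all* hits of every session in which at least one
    # hit meets the condition (session-scoped selection).
    keep = {h["session"] for h in hits if condition(h)}
    return [h for h in hits if h["session"] in keep]

def filter_hits(hits, condition):
    # A filter keeps only the individual hits that meet the condition.
    return [h for h in hits if condition(h)]

saw_pricing = lambda h: h["page"] == "/pricing"
```

    Here the segment returns sessions A and C in full (three hits), whereas the filter returns just the two /pricing hits.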

  9. CLG for Automatic Image Segmentation

    OpenAIRE

    Christo Ananth; S.Santhana Priya; S.Manisha; T.Ezhil Jothi; M.S.Ramasubhaeswari

    2017-01-01

    This paper proposes an automatic segmentation method which effectively combines the Active Contour Model, the Live Wire method and the Graph Cut approach (CLG). The aim of the Live Wire method is to give the user control over the segmentation process during execution. The Active Contour Model provides a statistical model of object shape and appearance, built during a training phase, that is fitted to a new image. In the graph cut technique, each pixel is represented as a node and the distance between those nodes is rep...

  10. Market segmentation, targeting and positioning

    OpenAIRE

    Camilleri, Mark Anthony

    2017-01-01

    Businesses may not be in a position to satisfy all of their customers, every time. It may prove difficult to meet the exact requirements of each individual customer. People do not have identical preferences, so rarely does one product completely satisfy everyone. Many companies may usually adopt a strategy that is known as target marketing. This strategy involves dividing the market into segments and developing products or services to these segments. A target marketing strategy is focused on ...

  11. Enigma Version 12

    Science.gov (United States)

    Shores, David; Goza, Sharon P.; McKeegan, Cheyenne; Easley, Rick; Way, Janet; Everett, Shonn; Guerra, Mark; Kraesig, Ray; Leu, William

    2013-01-01

    Enigma Version 12 software combines model building, animation, and engineering visualization into one concise software package. Enigma employs a versatile user interface to allow average users access to even the most complex pieces of the application. Using Enigma eliminates the need to buy and learn several software packages to create an engineering visualization. Models can be created and/or modified within Enigma down to the polygon level. Textures and materials can be applied for additional realism. Within Enigma, these models can be combined to create systems of models that have a hierarchical relationship to one another, such as a robotic arm. Then these systems can be animated within the program or controlled by an external application programming interface (API). In addition, Enigma provides the ability to use plug-ins. Plug-ins allow the user to create custom code for a specific application and access the Enigma model and system data, but still use the Enigma drawing functionality. CAD files can be imported into Enigma and combined to create systems of computer graphics models that can be manipulated with constraints. An API is available so that an engineer can write a simulation and drive the computer graphics models with no knowledge of computer graphics. An animation editor allows an engineer to set up sequences of animations generated by simulations or by conceptual trajectories in order to record these to high-quality media for presentation.

  12. Recognition Using Classification and Segmentation Scoring

    National Research Council Canada - National Science Library

    Kimball, Owen; Ostendorf, Mari; Rohlicek, Robin

    1992-01-01

    .... We describe an approach to connected word recognition that allows the use of segmental information through an explicit decomposition of the recognition criterion into classification and segmentation scoring...

  13. An Investigation into the Factors Affecting the Design of Nature-Compatible Recreational-Residential Complexes- Instance Analysis

    Directory of Open Access Journals (Sweden)

    Seyyedeh Fatemeh Safavi Mirmahalleh

    2017-02-01

    Full Text Available Explaining tourism concepts, standards of recreational-residential complexes and the methods for accurate treating with nature, this paper tries to study the suitable instances of recreational-residential complexes and to adopt their positive aspects as a design strategy. SWOT matrix was built based on the weaknesses and strengths of the project site and design principles were derived by observing the extracted influential factors. Considering land topography, for example, different areas of buildings were shifted and combined with the nature. - Residential and public zones gained a nice view towards the nature by keeping their orientation and extending them in east-west direction - Service section connected more appropriately with residential and public areas - Sections which do not need extra light such as W.Cs, storages and installation rooms, were considered in a side of the building which was adjacent to soil - In residential section, rhythm was implemented in ceilings and windows - Golden values and proportions were used to design the plane and façade of the complex

  14. Impact of extracorporeal shock waves on the human skin with cellulite: A case study of a unique instance

    Science.gov (United States)

    Kuhn, Christoph; Angehrn, Fiorenzo; Sonnabend, Ortrud; Voss, Axel

    2008-01-01

    In this case study of a unique instance, the effects of medium-energy, highly focused, extracorporeally generated shock waves (ESW) on the skin and the underlying fat tissue of a cellulite-afflicted, 50-year-old woman were investigated. The treatment consisted of four ESW applications within 21 days. Diagnostic high-resolution ultrasound (Collagenoson) was performed before and after treatment. Directly after the last ESW application, skin samples were taken for histopathological analysis from the treated and from the contra-lateral untreated area of skin with cellulite. No damage to the treated skin tissue, in particular no mechanical destruction of the subcutaneous fat, could be demonstrated by histopathological analysis. However, an astounding induction of neocollageno- and neoelastino-genesis within the scaffolding fabric of the dermis and subcutis was observed. The dermis increased in thickness, as did the scaffolding within the subcutaneous fat tissue. Optimization of critical application parameters may turn ESW into a noninvasive cellulite therapy. PMID:18488890

  15. Predicting Multiple Functions of Sustainable Flood Retention Basins under Uncertainty via Multi-Instance Multi-Label Learning

    Directory of Open Access Journals (Sweden)

    Qinli Yang

    2015-03-01

    Full Text Available The ambiguity of diverse functions of sustainable flood retention basins (SFRBs) may lead to conflict and risk in water resources planning and management. How can someone provide an intuitive yet efficient strategy to uncover and distinguish the multiple potential functions of SFRBs under uncertainty? In this study, by exploiting both input and output uncertainties of SFRBs, the authors developed a new data-driven framework to automatically predict the multiple functions of SFRBs by using multi-instance multi-label (MIML) learning. A total of 372 sustainable flood retention basins, characterized by 40 variables associated with confidence levels, were surveyed in Scotland, UK. A Gaussian model with Monte Carlo sampling was used to capture the variability of variables (i.e., input uncertainty), and the MIML-support vector machine (SVM) algorithm was subsequently applied to predict the potential functions of SFRBs that have not yet been assessed, allowing for one basin belonging to different types (i.e., output uncertainty). Experiments demonstrated that the proposed approach enables effective automatic prediction of the potential functions of SFRBs (e.g., accuracy >93%). The findings suggest that the functional uncertainty of SFRBs under investigation can be better assessed in a more comprehensive and cost-effective way, and the proposed data-driven approach provides a promising method of doing so for water resources management.
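    The MIML setup described above can be caricatured in a few lines: Monte Carlo sampling turns each uncertain basin record into a bag of instances, and a multi-label decision rule lets one bag receive several function labels at once. This sketch substitutes an invented nearest-prototype rule for the paper's MIML-SVM; all names, prototypes, and thresholds are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_bag(mean, std, n=50):
    # Monte Carlo sampling: turn one uncertain basin record (variable
    # means plus confidence-derived standard deviations) into a bag of
    # instances, capturing the input uncertainty.
    return rng.normal(mean, std, size=(n, len(mean)))

def predict_labels(bag, prototypes, threshold=1.0):
    # Hypothetical multi-label rule: assign every function whose prototype
    # lies within `threshold` of some instance in the bag, so a basin may
    # receive several function labels (output uncertainty).
    labels = []
    for name, proto in prototypes.items():
        dists = np.linalg.norm(bag - proto, axis=1)
        if dists.min() <= threshold:
            labels.append(name)
    return labels
```

    A basin whose sampled instances straddle two function prototypes would receive both labels, which is the behavior the multi-label formulation is meant to capture.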

  16. TYPOLOGY OF THE REALITY STATUS CATEGORY IN SELECTED LANGUAGES. IS THE HABITUAL IN POLISH AN INSTANCE OF (IR)REALIS OR MODALITY?

    Directory of Open Access Journals (Sweden)

    Paulina Pietras

    2017-10-01

    Full Text Available The present article is aimed at examining the category of reality status by discussing the dichotomy “realis / irrealis” in the context of the categories of modality, habituality and futurity. Prototype analysis is juxtaposed with scope analysis, and the category of the habitual is discussed from the typological perspective as well as from the perspective of its connection with the category of futurity. The paper presents the aspectual diversity of habituals (perfective and imperfective aspect) and its contextual implications as well as the differentiation between the habitual and modality. A special focus is on the prototype analysis and its application instances in Polish, English and Hebrew. The primary objective of the paper is to show that, although it is possible to treat irrealis as a notional category, the habituals in Polish and many other Slavic languages (e.g. Czech) should be identified with the modality domain rather than the irrealis category. The paper is also an attempt to provide an insight into the distinction between (ir)realis and encoding systems of modalities, as the habitual aspect displays modal category features in many languages (including Polish).

  17. Memory-Efficient Onboard Rock Segmentation

    Science.gov (United States)

    Burl, Michael C.; Thompson, David R.; Bornstein, Benjamin J.; deGranville, Charles K.

    2013-01-01

    Rockster-MER is an autonomous perception capability that was uploaded to the Mars Exploration Rover Opportunity in December 2009. This software provides the vision front end for a larger software system known as AEGIS (Autonomous Exploration for Gathering Increased Science), which was recently named 2011 NASA Software of the Year. As the first step in AEGIS, Rockster-MER analyzes an image captured by the rover, and detects and automatically identifies the boundary contours of rocks and regions of outcrop present in the scene. This initial segmentation step reduces the data volume from millions of pixels to hundreds (or fewer) of rock contours. Subsequent stages of AEGIS then prioritize the best rocks according to scientist-defined preferences and take high-resolution, follow-up observations. Rockster-MER has performed robustly from the outset on the Mars surface under challenging conditions. Rockster-MER is a specially adapted, embedded version of the original Rockster algorithm ("Rock Segmentation Through Edge Regrouping," (NPO-44417) Software Tech Briefs, September 2008, p. 25). Although the new version performs the same basic task as the original code, the software has been (1) significantly upgraded to overcome the severe onboard resource limitations (CPU, memory, power, time) and (2) "bulletproofed" through code reviews and extensive testing and profiling to avoid the occurrence of faults. Because of the limited computational power of the RAD6000 flight processor on Opportunity (roughly two orders of magnitude slower than a modern workstation), the algorithm was heavily tuned to improve its speed. Several functional elements of the original algorithm were removed as a result of an extensive cost/benefit analysis conducted on a large set of archived rover images. The algorithm was also required to operate below a stringent 4 MB high-water memory ceiling; hence, numerous tricks and strategies were introduced to reduce the memory footprint. Local filtering

  18. Simplified Model Surgery Technique for Segmental Maxillary Surgeries

    Directory of Open Access Journals (Sweden)

    Namit Nagar

    2011-01-01

    Full Text Available Model surgery is the dental cast version of the cephalometric prediction of surgical results. Patients having vertical maxillary excess with prognathism invariably require a Lefort I osteotomy with maxillary segmentation and maxillary first premolar extractions during surgery. Traditionally, model surgeries in these cases have been done by sawing the model through the first premolar interproximal area and removing that segment. This clinical innovation employed the use of X-ray film strips as separators in the maxillary first premolar interproximal area. The method advocated is a time-saving procedure in which no special clinical or laboratory tools, such as a plaster saw (with its accompanying plaster dust), were required, and reusable separators were made from old and discarded X-ray films.

  19. Methods of evaluating segmentation characteristics and segmentation of major faults

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Kie Hwa; Chang, Tae Woo; Kyung, Jai Bok [Seoul National Univ., Seoul (Korea, Republic of)] (and others)

    2000-03-15

    Seismological, geological, and geophysical studies were made for reasonable segmentation of the Ulsan fault, and the results are as follows. One- and two-dimensional electrical surveys clearly revealed that the fault fracture zone enlarges systematically northward and southward from the vicinity of Mohwa-ri, indicating that Mohwa-ri is at the seismic segment boundary. Field geological survey and microscope observation of fault gouge indicate that the Quaternary faults in the area are reactivated products of preexisting faults. Trench survey of the Chonbuk fault at Galgok-ri revealed thrust faults and cumulative vertical displacement due to faulting during the late Quaternary, with about 1.1-1.9 m displacement per event; the latest event occurred 14000 to 25000 yrs. BP. The seismic survey showed that the basement surface is cut by numerous reverse faults and indicated the possibility that the boundary between Kyeongsangbukdo and Kyeongsangnamdo may be a segment boundary.

  20. Segmentation by Large Scale Hypothesis Testing - Segmentation as Outlier Detection

    DEFF Research Database (Denmark)

    Darkner, Sune; Dahl, Anders Lindbjerg; Larsen, Rasmus

    2010-01-01

    a microscope, and we show how the method can handle transparent particles with significant glare points. The method generalizes to other problems. This is illustrated by applying the method to camera calibration images and MRI of the midsagittal plane for gray and white matter separation and segmentation......We propose a novel and efficient way of performing local image segmentation. For many applications a threshold of pixel intensities is sufficient, but determining the appropriate threshold value can be difficult. In cases with large global intensity variation the threshold value has to be adapted...... locally. We propose a method based on large scale hypothesis testing with a consistent method for selecting an appropriate threshold for the given data. By estimating the background distribution we characterize the segment of interest as a set of outliers with a certain probability based on the estimated...
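The idea of characterizing the segment of interest as outliers against an estimated background distribution can be sketched as follows. This is a generic large-scale-testing thresholding, not the authors' exact method; the robust Gaussian background model (median/MAD) and the Bonferroni-style correction are assumptions for illustration.

```python
import math
import numpy as np

def segment_outliers(img, alpha=0.01):
    """Segmentation by large-scale hypothesis testing: model the background
    with robust Gaussian estimates (median and MAD), compute a two-sided
    p-value per pixel, and flag pixels whose Bonferroni-corrected p-value
    falls below alpha; those outliers form the segment of interest."""
    med = np.median(img)
    sigma = 1.4826 * np.median(np.abs(img - med))   # MAD-based robust sigma
    z = np.abs(img - med) / sigma
    p = np.vectorize(math.erfc)(z / math.sqrt(2))   # two-sided tail probability
    return p < alpha / img.size                     # Bonferroni-corrected test
```

The per-pixel threshold adapts automatically to the estimated background spread, which is the appeal of the hypothesis-testing view over a fixed intensity cut.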

  1. Methods of evaluating segmentation characteristics and segmentation of major faults

    International Nuclear Information System (INIS)

    Lee, Kie Hwa; Chang, Tae Woo; Kyung, Jai Bok

    2000-03-01

    Seismological, geological, and geophysical studies were made for reasonable segmentation of the Ulsan fault and the results are as follows. One- and two- dimensional electrical surveys revealed clearly the fault fracture zone enlarges systematically northward and southward from the vicinity of Mohwa-ri, indicating Mohwa-ri is at the seismic segment boundary. Field Geological survey and microscope observation of fault gouge indicates that the Quaternary faults in the area are reactivated products of the preexisting faults. Trench survey of the Chonbuk fault Galgok-ri revealed thrust faults and cumulative vertical displacement due to faulting during the late Quaternary with about 1.1-1.9 m displacement per event; the latest event occurred from 14000 to 25000 yrs. BP. The seismic survey showed the basement surface os cut by numerous reverse faults and indicated the possibility that the boundary between Kyeongsangbukdo and Kyeongsannamdo may be segment boundary

  2. A streamlined artificial variable free version of simplex method.

    Directory of Open Access Journals (Sweden)

    Syed Inayatullah

    Full Text Available This paper proposes a streamlined form of the simplex method which provides some great benefits over the traditional simplex method. For instance, it does not need any kind of artificial variables or artificial constraints; it can start with any feasible or infeasible basis of an LP. This method follows the same pivoting sequence as simplex phase 1, without any explicit description of artificial variables, which also makes it space efficient. Later in the paper, a dual version of the new method is also presented, which provides a way to easily implement phase 1 of the traditional dual simplex method. For a problem having an initial basis that is both primal and dual infeasible, our methods give the user full freedom to choose whether to start with the primal artificial-free version or the dual artificial-free version, without any reformulation of the LP structure. Last but not least, it provides a teaching aid for teachers who want to teach feasibility achievement as a separate topic before teaching optimality achievement.

  3. A streamlined artificial variable free version of simplex method.

    Science.gov (United States)

    Inayatullah, Syed; Touheed, Nasir; Imtiaz, Muhammad

    2015-01-01

    This paper proposes a streamlined form of the simplex method which provides some great benefits over the traditional simplex method. For instance, it does not need any kind of artificial variables or artificial constraints; it can start with any feasible or infeasible basis of an LP. This method follows the same pivoting sequence as simplex phase 1, without any explicit description of artificial variables, which also makes it space efficient. Later in the paper, a dual version of the new method is also presented, which provides a way to easily implement phase 1 of the traditional dual simplex method. For a problem having an initial basis that is both primal and dual infeasible, our methods give the user full freedom to choose whether to start with the primal artificial-free version or the dual artificial-free version, without any reformulation of the LP structure. Last but not least, it provides a teaching aid for teachers who want to teach feasibility achievement as a separate topic before teaching optimality achievement.

  4. Automatic segmentation of psoriasis lesions

    Science.gov (United States)

    Ning, Yang; Shi, Chenbo; Wang, Li; Shu, Chang

    2014-10-01

    The automatic segmentation of psoriatic lesions has been widely researched in recent years. It is an important step in computer-aided calculation of PASI for the estimation of lesions. Current algorithms can only handle single erythema or only deal with scaling segmentation; in practice, scaling and erythema are often mixed together. In order to segment the lesion area, this paper proposes an algorithm based on random forests with color and texture features. The algorithm has three steps. In the first step, polarized light is applied, exploiting the skin's Tyndall effect in the imaging, to eliminate reflection, and the Lab color space is used to fit human perception. In the second step, a sliding window and its sub-windows are used to extract texture and color features. In this step, an image-roughness feature is defined, so that scaling can easily be separated from normal skin. In the end, random forests are used to ensure the generalization ability of the algorithm. This algorithm gives reliable segmentation results even when images have different lighting conditions and skin types. On the data set offered by Union Hospital, more than 90% of images can be segmented accurately.
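The sliding-window feature step can be sketched with numpy. The paper does not specify its roughness measure, so the standard deviation of a discrete Laplacian is used here as a plausible stand-in; the function name and window size are illustrative assumptions.

```python
import numpy as np

def window_features(img, win=8):
    """Per-tile features for lesion segmentation: for each win x win tile of a
    single-channel image, return (mean intensity, roughness). Roughness here
    is the standard deviation of a discrete Laplacian inside the tile, a
    stand-in for an image-roughness feature that separates flaky scaling
    (high local variation) from smooth skin (low local variation)."""
    h, w = img.shape
    lap = np.zeros_like(img, dtype=float)
    lap[1:-1, 1:-1] = (img[:-2, 1:-1] + img[2:, 1:-1] +
                       img[1:-1, :-2] + img[1:-1, 2:] - 4 * img[1:-1, 1:-1])
    feats = []
    for i in range(0, h - win + 1, win):
        for j in range(0, w - win + 1, win):
            tile = img[i:i + win, j:j + win]
            feats.append((tile.mean(), lap[i:i + win, j:j + win].std()))
    return np.array(feats)  # rows ordered tile-by-tile, row-major
```

Feature vectors like these, one per window, would then be fed to a random-forest classifier trained on labeled tiles.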

  5. GPU-based relative fuzzy connectedness image segmentation

    International Nuclear Information System (INIS)

    Zhuge Ying; Ciesielski, Krzysztof C.; Udupa, Jayaram K.; Miller, Robert W.

    2013-01-01

    Purpose: Recently, clinical radiological research and practice are becoming increasingly quantitative. Further, images continue to increase in size and volume. For quantitative radiology to become practical, it is crucial that image segmentation algorithms and their implementations are rapid and yield practical run time on very large data sets. The purpose of this paper is to present a parallel version of an algorithm that belongs to the family of fuzzy connectedness (FC) algorithms, to achieve an interactive speed for segmenting large medical image data sets. Methods: The most common FC segmentations, optimizing an ℓ∞-based energy, are known as relative fuzzy connectedness (RFC) and iterative relative fuzzy connectedness (IRFC). Both RFC and IRFC objects (of which IRFC contains RFC) can be found via linear time algorithms, linear with respect to the image size. The new algorithm, P-ORFC (for parallel optimal RFC), which is implemented by using NVIDIA’s Compute Unified Device Architecture (CUDA) platform, considerably improves the computational speed of the above mentioned CPU based IRFC algorithm. Results: Experiments based on four data sets of small, medium, large, and super data size, achieved speedup factors of 32.8×, 22.9×, 20.9×, and 17.5×, correspondingly, on the NVIDIA Tesla C1060 platform. Although the output of P-ORFC need not precisely match that of IRFC output, it is very close to it and, as the authors prove, always lies between the RFC and IRFC objects. Conclusions: A parallel version of a top-of-the-line algorithm in the family of FC has been developed on the NVIDIA GPUs. An interactive speed of segmentation has been achieved, even for the largest medical image data set. Such GPU implementations may play a crucial role in automatic anatomy recognition in clinical radiology.

  6. GPU-based relative fuzzy connectedness image segmentation

    Energy Technology Data Exchange (ETDEWEB)

    Zhuge Ying; Ciesielski, Krzysztof C.; Udupa, Jayaram K.; Miller, Robert W. [Radiation Oncology Branch, National Cancer Institute, National Institutes of Health, Bethesda, Maryland 20892 (United States); Department of Mathematics, West Virginia University, Morgantown, West Virginia 26506 (United States) and Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania 19104 (United States); Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania 19104 (United States); Radiation Oncology Branch, National Cancer Institute, National Institutes of Health, Bethesda, Maryland 20892 (United States)

    2013-01-15

    Purpose: Recently, clinical radiological research and practice are becoming increasingly quantitative. Further, images continue to increase in size and volume. For quantitative radiology to become practical, it is crucial that image segmentation algorithms and their implementations are rapid and yield practical run time on very large data sets. The purpose of this paper is to present a parallel version of an algorithm that belongs to the family of fuzzy connectedness (FC) algorithms, to achieve an interactive speed for segmenting large medical image data sets. Methods: The most common FC segmentations, optimizing an ℓ∞-based energy, are known as relative fuzzy connectedness (RFC) and iterative relative fuzzy connectedness (IRFC). Both RFC and IRFC objects (of which IRFC contains RFC) can be found via linear time algorithms, linear with respect to the image size. The new algorithm, P-ORFC (for parallel optimal RFC), which is implemented by using NVIDIA's Compute Unified Device Architecture (CUDA) platform, considerably improves the computational speed of the above mentioned CPU based IRFC algorithm. Results: Experiments based on four data sets of small, medium, large, and super data size, achieved speedup factors of 32.8×, 22.9×, 20.9×, and 17.5×, correspondingly, on the NVIDIA Tesla C1060 platform. Although the output of P-ORFC need not precisely match that of IRFC output, it is very close to it and, as the authors prove, always lies between the RFC and IRFC objects. Conclusions: A parallel version of a top-of-the-line algorithm in the family of FC has been developed on the NVIDIA GPUs. An interactive speed of segmentation has been achieved, even for the largest medical image data set. Such GPU implementations may play a crucial role in automatic anatomy recognition in clinical radiology.

  7. GPU-based relative fuzzy connectedness image segmentation.

    Science.gov (United States)

    Zhuge, Ying; Ciesielski, Krzysztof C; Udupa, Jayaram K; Miller, Robert W

    2013-01-01

    Recently, clinical radiological research and practice are becoming increasingly quantitative. Further, images continue to increase in size and volume. For quantitative radiology to become practical, it is crucial that image segmentation algorithms and their implementations are rapid and yield practical run time on very large data sets. The purpose of this paper is to present a parallel version of an algorithm that belongs to the family of fuzzy connectedness (FC) algorithms, to achieve an interactive speed for segmenting large medical image data sets. The most common FC segmentations, optimizing an ℓ∞-based energy, are known as relative fuzzy connectedness (RFC) and iterative relative fuzzy connectedness (IRFC). Both RFC and IRFC objects (of which IRFC contains RFC) can be found via linear time algorithms, linear with respect to the image size. The new algorithm, P-ORFC (for parallel optimal RFC), which is implemented by using NVIDIA's Compute Unified Device Architecture (CUDA) platform, considerably improves the computational speed of the above mentioned CPU based IRFC algorithm. Experiments based on four data sets of small, medium, large, and super data size, achieved speedup factors of 32.8×, 22.9×, 20.9×, and 17.5×, correspondingly, on the NVIDIA Tesla C1060 platform. Although the output of P-ORFC need not precisely match that of IRFC output, it is very close to it and, as the authors prove, always lies between the RFC and IRFC objects. A parallel version of a top-of-the-line algorithm in the family of FC has been developed on the NVIDIA GPUs. An interactive speed of segmentation has been achieved, even for the largest medical image data set. Such GPU implementations may play a crucial role in automatic anatomy recognition in clinical radiology.

  8. GPU-based relative fuzzy connectedness image segmentation

    Science.gov (United States)

    Zhuge, Ying; Ciesielski, Krzysztof C.; Udupa, Jayaram K.; Miller, Robert W.

    2013-01-01

    Purpose: Recently, clinical radiological research and practice are becoming increasingly quantitative. Further, images continue to increase in size and volume. For quantitative radiology to become practical, it is crucial that image segmentation algorithms and their implementations are rapid and yield practical run time on very large data sets. The purpose of this paper is to present a parallel version of an algorithm that belongs to the family of fuzzy connectedness (FC) algorithms, to achieve an interactive speed for segmenting large medical image data sets. Methods: The most common FC segmentations, optimizing an ℓ∞-based energy, are known as relative fuzzy connectedness (RFC) and iterative relative fuzzy connectedness (IRFC). Both RFC and IRFC objects (of which IRFC contains RFC) can be found via linear time algorithms, linear with respect to the image size. The new algorithm, P-ORFC (for parallel optimal RFC), which is implemented by using NVIDIA’s Compute Unified Device Architecture (CUDA) platform, considerably improves the computational speed of the above mentioned CPU based IRFC algorithm. Results: Experiments based on four data sets of small, medium, large, and super data size, achieved speedup factors of 32.8×, 22.9×, 20.9×, and 17.5×, correspondingly, on the NVIDIA Tesla C1060 platform. Although the output of P-ORFC need not precisely match that of IRFC output, it is very close to it and, as the authors prove, always lies between the RFC and IRFC objects. Conclusions: A parallel version of a top-of-the-line algorithm in the family of FC has been developed on the NVIDIA GPUs. An interactive speed of segmentation has been achieved, even for the largest medical image data set. Such GPU implementations may play a crucial role in automatic anatomy recognition in clinical radiology. PMID:23298094

  9. Skip segment Hirschsprung disease and Waardenburg syndrome

    Directory of Open Access Journals (Sweden)

    Erica R. Gross

    2015-04-01

    Full Text Available Skip segment Hirschsprung disease describes a segment of ganglionated bowel between two segments of aganglionated bowel. It is a rare phenomenon that is difficult to diagnose. We describe a recent case of skip segment Hirschsprung disease in a neonate with a family history of Waardenburg syndrome and the genetic profile that was identified.

  10. U.S. Army Custom Segmentation System

    Science.gov (United States)

    2007-06-01

    segmentation is individual or intergroup differences in response to marketing-mix variables. Presumptions about segments: •different demands in a...product or service category, •respond differently to changes in the marketing mix. Criteria for segments: •the segments must exist in the environment

  11. Skip segment Hirschsprung disease and Waardenburg syndrome

    OpenAIRE

    Gross, Erica R.; Geddes, Gabrielle C.; McCarrier, Julie A.; Jarzembowski, Jason A.; Arca, Marjorie J.

    2015-01-01

    Skip segment Hirschsprung disease describes a segment of ganglionated bowel between two segments of aganglionated bowel. It is a rare phenomenon that is difficult to diagnose. We describe a recent case of skip segment Hirschsprung disease in a neonate with a family history of Waardenburg syndrome and the genetic profile that was identified.

  12. A Novel Iris Segmentation Scheme

    Directory of Open Access Journals (Sweden)

    Chen-Chung Liu

    2014-01-01

    Full Text Available One of the key steps in an iris recognition system is the accurate segmentation of the iris from its surrounding noise, including the pupil, sclera, eyelashes, and eyebrows, in a captured eye-image. This paper presents a novel iris segmentation scheme which utilizes the orientation matching transform to outline the outer and inner iris boundaries initially. It then employs Delogne-Kåsa circle fitting (instead of the traditional Hough transform) to further eliminate outlier points and extract a more precise iris area from the eye-image. In the extracted iris region, the proposed scheme further utilizes the differences in the intensity and positional characteristics of the iris, eyelid, and eyelashes to detect and delete these noises. The scheme is then applied to the iris image database UBIRIS.v1. The experimental results show that the presented scheme provides more effective and efficient iris segmentation than other conventional methods.
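Kåsa-style circle fitting reduces to a linear least-squares problem, which is why it is far cheaper than a Hough transform for refining a circular boundary. A minimal sketch (the function name and parameterization are illustrative):

```python
import numpy as np

def kasa_circle_fit(x, y):
    """Kåsa algebraic circle fit: least-squares solution of
    x^2 + y^2 + D*x + E*y + F = 0 for (D, E, F), then recovery of the
    centre (cx, cy) = (-D/2, -E/2) and the radius r."""
    A = np.column_stack([x, y, np.ones_like(x)])
    b = -(x**2 + y**2)
    (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = -D / 2.0, -E / 2.0
    return cx, cy, np.sqrt(cx**2 + cy**2 - F)
```

Boundary points that lie far from the fitted circle can then be discarded as outliers and the fit repeated.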

  13. Adaptation of the Maracas algorithm for carotid artery segmentation and stenosis quantification on CT images

    International Nuclear Information System (INIS)

    Maria A Zuluaga; Maciej Orkisz; Edgar J F Delgado; Vincent Dore; Alfredo Morales Pinzon; Marcela Hernandez Hoyos

    2010-01-01

    This paper describes the adaptations of the Maracas algorithm to the segmentation and quantification of vascular structures in CTA images of the carotid artery. The Maracas algorithm, which is based on an elastic model and on a multi-scale eigen-analysis of the inertia matrix, was originally designed to segment a single artery in MRA images. The modifications are primarily aimed at addressing the specificities of CT images and the bifurcations. The algorithms implemented in this new version are classified into two levels. 1. Low-level processing (filtering of noise and directional artifacts, enhancement and pre-segmentation) to improve the quality of the image and to pre-segment it. These techniques are based on a priori information about noise, artifacts and the typical gray-level ranges of lumen, background and calcifications. 2. High-level processing to extract the centerline of the artery, to segment the lumen and to quantify the stenosis. At this level, we apply a priori knowledge of the shape and anatomy of vascular structures. The method was evaluated on 31 datasets from the carotid lumen segmentation and stenosis grading grand challenge 2009. The segmentation results obtained an average Dice similarity score of 80.4%, compared to reference segmentations, and the mean stenosis quantification error was 14.4%.

  14. Document segmentation via oblique cuts

    Science.gov (United States)

    Svendsen, Jeremy; Branzan-Albu, Alexandra

    2013-01-01

    This paper presents a novel solution for the layout segmentation of graphical elements in Business Intelligence documents. We propose a generalization of the recursive X-Y cut algorithm, which allows for cutting along arbitrary oblique directions. An intermediate processing step consisting of line and solid region removal is also necessary due to the presence of decorative elements. The output of the proposed segmentation is a hierarchical structure which allows for the identification of primitives in pie and bar charts. The algorithm was tested on a database composed of charts from business documents. Results are very promising.
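The oblique generalization is not reproduced here, but the classic recursive X-Y cut that it extends can be sketched as follows: trim surrounding whitespace, split the page at the widest internal run of empty rows or columns, and recurse until no gap remains. Names and the `min_gap` threshold are illustrative choices.

```python
import numpy as np

def xy_cut(mask, min_gap=2):
    """Classic recursive X-Y cut. mask: 2D bool array, True = ink.
    Returns a list of (top, left, bottom, right) leaf blocks (half-open)."""
    def recurse(top, left, bottom, right):
        sub = mask[top:bottom, left:right]
        rows = sub.any(axis=1)
        if not rows.any():
            return []                                   # empty region
        cols = sub.any(axis=0)
        # trim surrounding whitespace
        r0, r1 = int(np.argmax(rows)), len(rows) - int(np.argmax(rows[::-1]))
        c0, c1 = int(np.argmax(cols)), len(cols) - int(np.argmax(cols[::-1]))
        top, bottom, left, right = top + r0, top + r1, left + c0, left + c1
        sub = mask[top:bottom, left:right]
        # find the widest internal all-empty run of rows, then of columns
        for axis in (0, 1):
            empty = ~sub.any(axis=1 - axis)
            best_len, best_at, run = 0, 0, 0
            for k, e in enumerate(empty):
                run = run + 1 if e else 0
                if run > best_len:
                    best_len, best_at = run, k - run + 1
            if best_len >= min_gap:
                cut = best_at + best_len // 2           # cut mid-gap
                if axis == 0:
                    return (recurse(top, left, top + cut, right) +
                            recurse(top + cut, left, bottom, right))
                return (recurse(top, left, bottom, left + cut) +
                        recurse(top, left + cut, bottom, right))
        return [(top, left, bottom, right)]             # no gap: leaf block
    return recurse(0, 0, mask.shape[0], mask.shape[1])
```

The recursion order (rows before columns here) and the hierarchy of cuts give exactly the kind of tree structure the abstract mentions; the paper's contribution is to allow the cut direction to be oblique rather than axis-aligned.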

  15. Optimally segmented permanent magnet structures

    DEFF Research Database (Denmark)

    Insinga, Andrea Roberto; Bjørk, Rasmus; Smith, Anders

    2016-01-01

    We present an optimization approach which can be employed to calculate the globally optimal segmentation of a two-dimensional magnetic system into uniformly magnetized pieces. For each segment the algorithm calculates the optimal shape and the optimal direction of the remanent flux density vector......, with respect to a linear objective functional. We illustrate the approach with results for magnet design problems from different areas, such as a permanent magnet electric motor, a beam focusing quadrupole magnet for particle accelerators and a rotary device for magnetic refrigeration....

  16. Intercalary bone segment transport in treatment of segmental tibial defects

    International Nuclear Information System (INIS)

    Iqbal, A.; Amin, M.S.

    2002-01-01

    Objective: To evaluate the results and complications of intercalary bone segment transport in the treatment of segmental tibial defects. Design: This is a retrospective analysis of patients with segmental tibial defects who were treated with the intercalary bone segment transport method. Place and Duration of Study: The study was carried out at Combined Military Hospital, Rawalpindi from September 1997 to April 2001. Subjects and methods: Thirteen patients were included in the study who had developed tibial defects either due to open fractures with bone loss or subsequent to bone debridement of infected non-unions. The mean bone defect was 6.4 cms and there were eight associated soft tissue defects. A locally made unilateral 'Naseer-Awais' (NA) fixator was used for bone segment transport. The distraction was done at the rate of 1 mm/day after 7-10 days of osteotomy. The patients were followed up fortnightly during distraction and monthly thereafter. The mean follow-up duration was 18 months. Results: The mean time in external fixation was 9.4 months. The mean 'healing index' was 1.47 months/cm. Satisfactory union was achieved in all cases. Six cases (46.2%) required bone grafting at the target site and in one of them grafting was required at the level of regeneration as well. All the wounds healed well with no residual infection. There was no residual leg length discrepancy of more than 20 mm and one angular deformity of more than 5 degrees. The commonest complication encountered was pin track infection, seen in 38% of the Schanz screws applied. Loosening occurred in 6.8% of Schanz screws, requiring re-adjustment. Ankle joint contracture with equinus deformity and peroneal nerve paresis occurred in one case each. The functional results were graded as 'good' in seven, 'fair' in four, and 'poor' in two patients. Overall, thirteen patients had 31 (minor/major) complications, with a ratio of 2.38 complications per patient. 
To treat the bone defects and associated complications, a mean of

  17. GENII Version 2 Users’ Guide

    Energy Technology Data Exchange (ETDEWEB)

    Napier, Bruce A.

    2004-03-08

    The GENII Version 2 computer code was developed for the Environmental Protection Agency (EPA) at Pacific Northwest National Laboratory (PNNL) to incorporate the internal dosimetry models recommended by the International Commission on Radiological Protection (ICRP) and the radiological risk estimating procedures of Federal Guidance Report 13 into updated versions of existing environmental pathway analysis models. The resulting environmental dosimetry computer codes are compiled in the GENII Environmental Dosimetry System. The GENII system was developed to provide a state-of-the-art, technically peer-reviewed, documented set of programs for calculating radiation dose and risk from radionuclides released to the environment. The codes were designed with the flexibility to accommodate input parameters for a wide variety of generic sites. Operation of a new version of the codes, GENII Version 2, is described in this report. Two versions of the GENII Version 2 code system are available, a full-featured version and a version specifically designed for demonstrating compliance with the dose limits specified in 40 CFR 61.93(a), the National Emission Standards for Hazardous Air Pollutants (NESHAPS) for radionuclides. The only differences lie in the limitation of the capabilities of the user to change specific parameters in the NESHAPS version. This report describes the data entry, accomplished via interactive, menu-driven user interfaces. Default exposure and consumption parameters are provided for both the average (population) and maximum individual; however, these may be modified by the user. Source term information may be entered as radionuclide release quantities for transport scenarios, or as basic radionuclide concentrations in environmental media (air, water, soil). For input of basic or derived concentrations, decay of parent radionuclides and ingrowth of radioactive decay products prior to the start of the exposure scenario may be considered. A single code run can

  18. Hydrophilic segmented block copolymers based on poly(ethylene oxide) and monodisperse amide segments

    NARCIS (Netherlands)

    Husken, D.; Feijen, Jan; Gaymans, R.J.

    2007-01-01

    Segmented block copolymers based on poly(ethylene oxide) (PEO) flexible segments and monodisperse crystallizable bisester tetra-amide segments were made via a polycondensation reaction. The molecular weight of the PEO segments varied from 600 to 4600 g/mol and a bisester tetra-amide segment (T6T6T)

  19. ELIPGRID-PC: Upgraded version

    International Nuclear Information System (INIS)

    Davidson, J.R.

    1995-12-01

    Evaluating the need for and the effectiveness of remedial cleanup at waste sites often includes finding average contaminant concentrations and identifying pockets of contamination called hot spots. The standard tool for calculating the probability of detecting hot spots has been the ELIPGRID code of Singer and Wickman. The ELIPGRID-PC program has recently made this algorithm available for an IBM® personal computer (PC) or compatible. A new version of ELIPGRID-PC, incorporating Monte Carlo test results and simple graphics, is herein described. Various examples of how to use the program for both single and multiple hot spot cases are given. The code for an American National Standards Institute C version of the ELIPGRID algorithm is provided, and limitations and further work are noted. This version of ELIPGRID-PC reliably meets the goal of moving Singer's ELIPGRID algorithm to the PC.
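ELIPGRID computes hot-spot detection probabilities analytically for elliptical shapes and various grid geometries; the kind of Monte Carlo cross-check mentioned above can be sketched for the simplest case, a circular hot spot and a square grid. By symmetry, a random hot-spot centre can be dropped into a single grid cell and tested against the cell's four corner nodes. The function name and parameters are illustrative.

```python
import math
import random

def hit_probability(radius, spacing, n=100_000, seed=42):
    """Monte Carlo estimate of the probability that a square sampling grid
    (node spacing `spacing`) detects a circular hot spot of the given
    radius at a uniformly random location. A hit means at least one grid
    node falls inside the hot spot; by symmetry it suffices to drop the
    centre into one cell and test that cell's four corners."""
    rng = random.Random(seed)
    corners = ((0.0, 0.0), (spacing, 0.0), (0.0, spacing), (spacing, spacing))
    hits = 0
    for _ in range(n):
        x, y = rng.random() * spacing, rng.random() * spacing
        if any(math.hypot(x - cx, y - cy) <= radius for cx, cy in corners):
            hits += 1
    return hits / n
```

For a radius no larger than half the grid spacing the four corner quarter-circles neither overlap nor leave the cell, so the exact answer is pi * r^2 / g^2, which gives a convenient sanity check on the estimator.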

  20. [Fetal version as ambulatory intervention].

    Science.gov (United States)

    Nohe, G; Hartmann, W; Klapproth, C E

    1996-06-01

    The external cephalic version (ECV) of the fetus at term reduces the maternal and fetal risks of intrapartum breech presentation and Caesarean delivery. Since 1986, over 800 external cephalic versions have been performed in the outpatient Department of Obstetrics and Gynaecology of the Städtische Frauenklinik Stuttgart; 60.5% were successful. No severe complications occurred. Sufficient amniotic fluid as well as mobility of the fetal breech are major criteria for the success of the ECV. Management requires a technique that is safe for mother and fetus. This includes ultrasonography, electronic fetal monitoring and the ability to perform immediate Caesarean delivery, as well as the performance of ECV without analgesics and sedatives. More than 70% of the ECVs were successful without tocolysis. In unsuccessful cases the additional use of tocolysis improves the success rate only slightly. Therefore, routine use of tocolysis does not appear necessary. External cephalic version can be recommended as an outpatient treatment without tocolysis.

  1. Phenotype Instance Verification and Evaluation Tool (PIVET): A Scaled Phenotype Evidence Generation Framework Using Web-Based Medical Literature

    Science.gov (United States)

    Ke, Junyuan; Ho, Joyce C; Ghosh, Joydeep; Wallace, Byron C

    2018-01-01

    Background: Researchers are developing methods to automatically extract clinically relevant and useful patient characteristics from raw healthcare datasets. These characteristics, often capturing essential properties of patients with common medical conditions, are called computational phenotypes. Being generated by automated or semiautomated, data-driven methods, such potential phenotypes need to be validated as clinically meaningful (or not) before they are acceptable for use in decision making. Objective: The objective of this study was to present Phenotype Instance Verification and Evaluation Tool (PIVET), a framework that uses co-occurrence analysis on an online corpus of publicly available medical journal articles to build clinical relevance evidence sets for user-supplied phenotypes. PIVET adopts a conceptual framework similar to the pioneering prototype tool PheKnow-Cloud that was developed for the phenotype validation task. PIVET completely refactors each part of the PheKnow-Cloud pipeline to deliver vast improvements in speed without sacrificing the quality of the insights PheKnow-Cloud achieved. Methods: PIVET leverages indexing in NoSQL databases to efficiently generate evidence sets. Specifically, PIVET uses a succinct representation of the phenotypes that corresponds to the index on the corpus database and an optimized co-occurrence algorithm inspired by the Aho-Corasick algorithm. We compare PIVET’s phenotype representation with PheKnow-Cloud’s by using PheKnow-Cloud’s experimental setup. In PIVET’s framework, we also introduce a statistical model trained on domain expert–verified phenotypes to automatically classify phenotypes as clinically relevant or not. Additionally, we show how the classification model can be used to examine user-supplied phenotypes in an online, rather than batch, manner. Results: PIVET maintains the discriminative power of PheKnow-Cloud in terms of identifying clinically relevant phenotypes for the same corpus with

  2. Phenotype Instance Verification and Evaluation Tool (PIVET): A Scaled Phenotype Evidence Generation Framework Using Web-Based Medical Literature.

    Science.gov (United States)

    Henderson, Jette; Ke, Junyuan; Ho, Joyce C; Ghosh, Joydeep; Wallace, Byron C

    2018-05-04

    Researchers are developing methods to automatically extract clinically relevant and useful patient characteristics from raw healthcare datasets. These characteristics, often capturing essential properties of patients with common medical conditions, are called computational phenotypes. Being generated by automated or semiautomated, data-driven methods, such potential phenotypes need to be validated as clinically meaningful (or not) before they are acceptable for use in decision making. The objective of this study was to present Phenotype Instance Verification and Evaluation Tool (PIVET), a framework that uses co-occurrence analysis on an online corpus of publicly available medical journal articles to build clinical relevance evidence sets for user-supplied phenotypes. PIVET adopts a conceptual framework similar to the pioneering prototype tool PheKnow-Cloud that was developed for the phenotype validation task. PIVET completely refactors each part of the PheKnow-Cloud pipeline to deliver vast improvements in speed without sacrificing the quality of the insights PheKnow-Cloud achieved. PIVET leverages indexing in NoSQL databases to efficiently generate evidence sets. Specifically, PIVET uses a succinct representation of the phenotypes that corresponds to the index on the corpus database and an optimized co-occurrence algorithm inspired by the Aho-Corasick algorithm. We compare PIVET's phenotype representation with PheKnow-Cloud's by using PheKnow-Cloud's experimental setup. In PIVET's framework, we also introduce a statistical model trained on domain expert-verified phenotypes to automatically classify phenotypes as clinically relevant or not. Additionally, we show how the classification model can be used to examine user-supplied phenotypes in an online, rather than batch, manner. 
PIVET maintains the discriminative power of PheKnow-Cloud in terms of identifying clinically relevant phenotypes for the same corpus with which PheKnow-Cloud was originally developed, but
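
The abstract credits an optimized co-occurrence step "inspired by the Aho-Corasick algorithm." As a rough, hypothetical sketch (not PIVET's actual code, which pairs this with NoSQL indexing), a minimal Aho-Corasick automaton that counts occurrences of several phenotype terms in a single pass over a document could look like:

```python
from collections import deque

def build_automaton(patterns):
    """Build a trie with failure links (Aho-Corasick) over the patterns."""
    goto = [{}]        # goto[state] maps character -> next state
    out = [set()]      # out[state] holds indices of patterns ending here
    for idx, pat in enumerate(patterns):
        state = 0
        for ch in pat:
            if ch not in goto[state]:
                goto.append({})
                out.append(set())
                goto[state][ch] = len(goto) - 1
            state = goto[state][ch]
        out[state].add(idx)
    fail = [0] * len(goto)
    queue = deque(goto[0].values())          # depth-1 states fail to the root
    while queue:
        s = queue.popleft()
        for ch, t in goto[s].items():
            queue.append(t)
            f = fail[s]
            while f and ch not in goto[f]:   # follow failure links
                f = fail[f]
            fail[t] = goto[f].get(ch, 0)
            out[t] |= out[fail[t]]           # inherit matches from the suffix
    return goto, fail, out

def count_matches(text, patterns):
    """Count occurrences of every pattern in one pass over the text."""
    goto, fail, out = build_automaton(patterns)
    counts = [0] * len(patterns)
    state = 0
    for ch in text:
        while state and ch not in goto[state]:
            state = fail[state]
        state = goto[state].get(ch, 0)
        for idx in out[state]:
            counts[idx] += 1
    return counts
```

The single-pass property is what makes this family of algorithms attractive when many phenotype terms must be matched against a large corpus.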

  3. Dictionary Based Segmentation in Volumes

    DEFF Research Database (Denmark)

    Emerson, Monica Jane; Jespersen, Kristine Munk; Jørgensen, Peter Stanley

    Method for supervised segmentation of volumetric data. The method is trained from manual annotations, and these annotations make the method very flexible, which we demonstrate in our experiments. Our method infers label information locally by matching the pattern in a neighborhood around a voxel ... to a dictionary, and hereby accounts for the volume texture ...

  4. Multiple Segmentation of Image Stacks

    DEFF Research Database (Denmark)

    Smets, Jonathan; Jaeger, Manfred

    2014-01-01

    We propose a method for the simultaneous construction of multiple image segmentations by combining a recently proposed “convolution of mixtures of Gaussians” model with a multi-layer hidden Markov random field structure. The resulting method constructs for a single image several alternative...

  5. Segmenting Trajectories by Movement States

    NARCIS (Netherlands)

    Buchin, M.; Kruckenberg, H.; Kölzsch, A.; Timpf, S.; Laube, P.

    2013-01-01

    Dividing movement trajectories according to different movement states of animals has become a challenge in movement ecology, as well as in algorithm development. In this study, we revisit and extend a framework for trajectory segmentation based on spatio-temporal criteria for this purpose. We adapt

  6. Segmental Colitis Complicating Diverticular Disease

    Directory of Open Access Journals (Sweden)

    Guido Ma Van Rosendaal

    1996-01-01

    Two cases of idiopathic colitis affecting the sigmoid colon in elderly patients with underlying diverticulosis are presented. Segmental resection has permitted close review of the histopathology in this syndrome which demonstrates considerable similarity to changes seen in idiopathic ulcerative colitis. The reported experience with this syndrome and its clinical features are reviewed.

  7. Leaf segmentation in plant phenotyping

    NARCIS (Netherlands)

    Scharr, Hanno; Minervini, Massimo; French, Andrew P.; Klukas, Christian; Kramer, David M.; Liu, Xiaoming; Luengo, Imanol; Pape, Jean Michel; Polder, Gerrit; Vukadinovic, Danijela; Yin, Xi; Tsaftaris, Sotirios A.

    2016-01-01

    Image-based plant phenotyping is a growing application area of computer vision in agriculture. A key task is the segmentation of all individual leaves in images. Here we focus on the most common rosette model plants, Arabidopsis and young tobacco. Although leaves do share appearance and shape

  8. Joint shape segmentation with linear programming

    KAUST Repository

    Huang, Qixing

    2011-01-01

    We present an approach to segmenting shapes in a heterogenous shape database. Our approach segments the shapes jointly, utilizing features from multiple shapes to improve the segmentation of each. The approach is entirely unsupervised and is based on an integer quadratic programming formulation of the joint segmentation problem. The program optimizes over possible segmentations of individual shapes as well as over possible correspondences between segments from multiple shapes. The integer quadratic program is solved via a linear programming relaxation, using a block coordinate descent procedure that makes the optimization feasible for large databases. We evaluate the presented approach on the Princeton segmentation benchmark and show that joint shape segmentation significantly outperforms single-shape segmentation techniques. © 2011 ACM.

  9. EOS MLS Level 2 Data Processing Software Version 3

    Science.gov (United States)

    Livesey, Nathaniel J.; VanSnyder, Livesey W.; Read, William G.; Schwartz, Michael J.; Lambert, Alyn; Santee, Michelle L.; Nguyen, Honghanh T.; Froidevaux, Lucien; wang, Shuhui; Manney, Gloria L.; hide

    2011-01-01

    This software accepts the EOS MLS calibrated measurements of microwave radiances products and operational meteorological data, and produces a set of estimates of atmospheric temperature and composition. This version has been designed to be as flexible as possible. The software is controlled by a Level 2 Configuration File that controls all aspects of the software: defining the contents of state and measurement vectors, defining the configurations of the various forward models available, reading appropriate a priori spectroscopic and calibration data, performing retrievals, post-processing results, computing diagnostics, and outputting results in appropriate files. In production mode, the software operates in a parallel form, with one instance of the program acting as a master, coordinating the work of multiple slave instances on a cluster of computers, each computing the results for individual chunks of data. In addition to performing conventional retrieval calculations and producing geophysical products, the Level 2 Configuration File can instruct the software to produce files of simulated radiances based on a state vector formed from a set of geophysical product files taken as input. Combining both the retrieval and simulation tasks in a single piece of software makes it far easier to ensure that identical forward model algorithms and parameters are used in both tasks. This also dramatically reduces the complexity of the code maintenance effort.
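
The master/slave chunking scheme described above can be sketched generically. Here the retrieval itself is reduced to a placeholder function and threads stand in for the slave instances on a cluster, so this illustrates only the coordination pattern, not the MLS Level 2 software:

```python
from concurrent.futures import ThreadPoolExecutor

def retrieve_chunk(chunk):
    """Stand-in for one slave instance: turn a chunk of radiances into
    geophysical estimates. The real retrieval is a full inverse problem;
    this hypothetical version just scales the inputs."""
    return [0.5 * r for r in chunk]

def master(radiances, chunk_size=4, workers=3):
    """Master instance: split the data into chunks, farm them out to
    workers, and reassemble the per-chunk results in their original order."""
    chunks = [radiances[i:i + chunk_size]
              for i in range(0, len(radiances), chunk_size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = pool.map(retrieve_chunk, chunks)   # map preserves order
    return [value for chunk in results for value in chunk]
```

Because each chunk is independent, the same structure scales from threads on one machine to processes spread across a cluster.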

  10. MEASUREMENT AND PRECISION, EXPERIMENTAL VERSION.

    Science.gov (United States)

    Harvard Univ., Cambridge, MA. Harvard Project Physics.

    THIS DOCUMENT IS AN EXPERIMENTAL VERSION OF A PROGRAMED TEXT ON MEASUREMENT AND PRECISION. PART I CONTAINS 24 FRAMES DEALING WITH PRECISION AND SIGNIFICANT FIGURES ENCOUNTERED IN VARIOUS MATHEMATICAL COMPUTATIONS AND MEASUREMENTS. PART II BEGINS WITH A BRIEF SECTION ON EXPERIMENTAL DATA, COVERING SUCH POINTS AS (1) ESTABLISHING THE ZERO POINT, (2)…

  11. Montage Version 3.0

    Science.gov (United States)

    Jacob, Joseph; Katz, Daniel; Prince, Thomas; Berriman, Graham; Good, John; Laity, Anastasia

    2006-01-01

    The final version (3.0) of the Montage software has been released. To recapitulate from previous NASA Tech Briefs articles about Montage: This software generates custom, science-grade mosaics of astronomical images on demand from input files that comply with the Flexible Image Transport System (FITS) standard and contain image data registered on projections that comply with the World Coordinate System (WCS) standards. This software can be executed on single-processor computers, multi-processor computers, and such networks of geographically dispersed computers as the National Science Foundation's TeraGrid or NASA's Information Power Grid. The primary advantage of running Montage in a grid environment is that computations can be done on a remote supercomputer for efficiency. Multiple computers at different sites can be used for different parts of a computation, a significant advantage in cases of computations for large mosaics that demand more processor time than is available at any one site. Version 3.0 incorporates several improvements over prior versions. The most significant improvement is that this version is accessible to scientists located anywhere, through operational Web services that provide access to data from several large astronomical surveys and construct mosaics on either local workstations or remote computational grids as needed.

  12. Dictionary Based Segmentation in Volumes

    DEFF Research Database (Denmark)

    Emerson, Monica Jane; Jespersen, Kristine Munk; Jørgensen, Peter Stanley

    2015-01-01

    We present a method for supervised volumetric segmentation based on a dictionary of small cubes composed of pairs of intensity and label cubes. Intensity cubes are small image volumes where each voxel contains an image intensity. Label cubes are volumes with voxelwise probabilities for a given ... label. The segmentation process is done by matching a cube from the volume, of the same size as the dictionary intensity cubes, to the most similar intensity dictionary cube, and from the associated label cube we get voxel-wise label probabilities. Probabilities from overlapping cubes are averaged ... and hereby we obtain a robust label probability encoding. The dictionary is computed from labeled volumetric image data based on weighted clustering. We experimentally demonstrate our method using two data sets from material science – a phantom data set of a solid oxide fuel cell simulation for detecting...
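
A naive version of the cube-matching and overlap-averaging step can be written down directly. The function name, cube size, and brute-force nearest-neighbor search below are illustrative assumptions, not the authors' weighted-clustering implementation:

```python
import numpy as np

def dictionary_segment(volume, intensity_cubes, label_cubes, c=3):
    """Slide a c*c*c window over the volume; match each patch to the closest
    intensity cube (L2 distance) and accumulate the paired label cube.
    Overlapping contributions are averaged into voxel-wise probabilities."""
    prob = np.zeros(volume.shape, dtype=float)
    weight = np.zeros(volume.shape, dtype=float)
    Z, Y, X = volume.shape
    for z in range(Z - c + 1):
        for y in range(Y - c + 1):
            for x in range(X - c + 1):
                patch = volume[z:z+c, y:y+c, x:x+c]
                # squared L2 distance to every dictionary intensity cube
                d = ((intensity_cubes - patch) ** 2).sum(axis=(1, 2, 3))
                best = int(np.argmin(d))
                prob[z:z+c, y:y+c, x:x+c] += label_cubes[best]
                weight[z:z+c, y:y+c, x:x+c] += 1.0
    return prob / np.maximum(weight, 1.0)
```

A real implementation would replace the exhaustive search with an approximate nearest-neighbor index, but the averaging of overlapping label cubes is exactly the robustness mechanism the abstract describes.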

  13. Segmental osteotomies of the maxilla.

    Science.gov (United States)

    Rosen, H M

    1989-10-01

    Multiple segment Le Fort I osteotomies provide the maxillofacial surgeon with the capabilities to treat complex dentofacial deformities existing in all three planes of space. Sagittal, vertical, and transverse maxillomandibular discrepancies as well as three-dimensional abnormalities within the maxillary arch can be corrected simultaneously. Accordingly, optimal aesthetic enhancement of the facial skeleton and a functional, healthy occlusion can be realized. What may be perceived as elaborate treatment plans are in reality conservative in terms of osseous stability and treatment time required. The close cooperation of an orthodontist well-versed in segmental orthodontics and orthognathic surgery is critical to the success of such surgery. With close attention to surgical detail, the complication rate inherent in such surgery can be minimized and the treatment goals achieved in a timely and predictable fashion.

  14. Korean WA-DGNSS User Segment Software Design

    Directory of Open Access Journals (Sweden)

    Sayed Chhattan Shah

    2013-03-01

    Korean WA-DGNSS is a large-scale research project funded by the Ministry of Land, Transport and Maritime Affairs of Korea. It aims to augment the Global Navigation Satellite System (GNSS) by broadcasting additional signals from geostationary satellites and providing differential correction messages and integrity data for the GNSS satellites. The project is being carried out by a consortium of universities and research institutes. The research team at the Electronics and Telecommunications Research Institute is involved in the design and development of data processing software for the wide area reference station and the user segment. This paper focuses on user segment software design. Korean WA-DGNSS user segment software is designed to perform several functions such as calculation of pseudorange, ionosphere and troposphere delays, application of fast and slow correction messages, and data verification. It is based on a layered architecture that provides a model to develop flexible and reusable software and is divided into several independent, interchangeable and reusable components to reduce complexity and maintenance cost. The current version is designed to collect and process GPS and WA-DGNSS data; however, it is flexible enough to accommodate future GNSS systems such as GLONASS and Galileo.
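
The correction-application layer can be illustrated with a deliberately simplified function. The parameter names and the sign convention are assumptions for the sketch; real WA-DGNSS processing additionally applies degradation factors and integrity checks that are omitted here:

```python
def apply_corrections(pseudorange, fast_corr, slow_corr, iono_delay, tropo_delay):
    """Hypothetical correction step for one satellite: add the broadcast
    fast and slow (long-term) corrections to the raw pseudorange and
    subtract the modeled ionospheric and tropospheric delays (meters)."""
    return pseudorange + fast_corr + slow_corr - iono_delay - tropo_delay
```

Keeping each correction source as a separate argument mirrors the layered design the paper describes: each layer (atmospheric models, correction message decoding, verification) can be replaced independently.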

  15. Segmented fuel and moderator rod

    International Nuclear Information System (INIS)

    Doshi, P.K.

    1987-01-01

    This patent describes a continuous segmented fuel and moderator rod for use with a water cooled and moderated nuclear fuel assembly. The rod comprises: a lower fuel region containing a column of nuclear fuel; a moderator region, disposed axially above the fuel region. The moderator region has means for admitting and passing the water moderator therethrough for moderating an upper portion of the nuclear fuel assembly. The moderator region is separated from the fuel region by a watertight separator

  16. Segmentation of sows in farrowing pens

    DEFF Research Database (Denmark)

    Tu, Gang Jun; Karstoft, Henrik; Pedersen, Lene Juul

    2014-01-01

    The correct segmentation of a foreground object in video recordings is an important task for many surveillance systems. The development of an effective and practical algorithm to segment sows in grayscale video recordings captured under commercial production conditions is described...

  17. Segmentation in local hospital markets.

    Science.gov (United States)

    Dranove, D; White, W D; Wu, L

    1993-01-01

    This study examines evidence of market segmentation on the basis of patients' insurance status, demographic characteristics, and medical condition in selected local markets in California in the years 1983 and 1989. Substantial differences exist in the probability patients may be admitted to particular hospitals based on insurance coverage, particularly Medicaid, and race. Segmentation based on insurance and race is related to hospital characteristics, but not the characteristics of the hospital's community. Medicaid patients are more likely to go to hospitals with lower costs and fewer service offerings. Privately insured patients go to hospitals offering more services, although cost concerns are increasing. Hispanic patients also go to low-cost hospitals, ceteris paribus. Results indicate little evidence of segmentation based on medical condition in either 1983 or 1989, suggesting that "centers of excellence" have yet to play an important role in patient choice of hospital. The authors found that distance matters, and that patients prefer nearby hospitals, more so for some medical conditions than others, in ways consistent with economic theories of consumer choice.

  18. Roentgenological diagnosis of central segmental lung cancer

    International Nuclear Information System (INIS)

    Gurevich, L.A.; Fedchenko, G.G.

    1984-01-01

    Based on an analysis of the results of clinicoroentgenological examination of 268 patients, the roentgenological semiotics of segmental lung cancer is presented. Some peculiarities of the X-ray picture of cancer of different segments of the lungs were revealed, depending on tumor site and growth type. For the syndrome of segmental darkening, comprehensive X-ray methods, chief among them tomography of the segmental bronchi, are proposed

  19. Review of segmentation process in consumer markets

    OpenAIRE

    Veronika Jadczaková

    2013-01-01

    Although there has been a considerable debate on market segmentation over five decades, attention was merely devoted to single stages of the segmentation process. In doing so, stages such as segmentation base selection or segment profiling have been heavily covered in the extant literature, whereas stages such as implementation of the marketing strategy or market definition were of comparably lower interest. Capitalizing on this shortcoming, this paper strives to close the gap and provide each step...

  20. Classification and Weakly Supervised Pain Localization using Multiple Segment Representation.

    Science.gov (United States)

    Sikka, Karan; Dhall, Abhinav; Bartlett, Marian Stewart

    2014-10-01

    Automatic pain recognition from videos is a vital clinical application and, owing to its spontaneous nature, poses interesting challenges to automatic facial expression recognition (AFER) research. Previous pain vs no-pain systems have highlighted two major challenges: (1) ground truth is provided for the sequence, but the presence or absence of the target expression for a given frame is unknown, and (2) the time point and the duration of the pain expression event(s) in each video are unknown. To address these issues we propose a novel framework (referred to as MS-MIL) where each sequence is represented as a bag containing multiple segments, and multiple instance learning (MIL) is employed to handle this weakly labeled data in the form of sequence level ground-truth. These segments are generated via multiple clustering of a sequence or running a multi-scale temporal scanning window, and are represented using a state-of-the-art Bag of Words (BoW) representation. This work extends the idea of detecting facial expressions through 'concept frames' to 'concept segments' and argues through extensive experiments that algorithms such as MIL are needed to reap the benefits of such representation. The key advantages of our approach are: (1) joint detection and localization of painful frames using only sequence-level ground-truth, (2) incorporation of temporal dynamics by representing the data not as individual frames but as segments, and (3) extraction of multiple segments, which is well suited to signals with uncertain temporal location and duration in the video. Extensive experiments on UNBC-McMaster Shoulder Pain dataset highlight the effectiveness of the approach by achieving competitive results on both tasks of pain classification and localization in videos. We also empirically evaluate the contributions of different components of MS-MIL. The paper also includes the visualization of discriminative facial patches, important for pain detection, as discovered by our
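
The multiple-segment representation and MIL-style inference above can be sketched compactly. The per-frame scores, window scales, and mean-pooled instance score below are stand-ins for the paper's learned BoW segment classifiers, so this shows only the bag construction and max-over-instances decision rule:

```python
def multiscale_segments(n_frames, scales=(4, 8)):
    """Candidate segments from multi-scale temporal scanning windows
    (half-window stride), i.e. the multiple-segment representation."""
    segments = []
    for w in scales:
        for start in range(0, n_frames - w + 1, max(1, w // 2)):
            segments.append((start, start + w))
    return segments

def classify_and_localize(frame_scores, scales=(4, 8), threshold=0.5):
    """MIL-style inference: score each segment (here, the mean of assumed
    per-frame expression scores) and label the whole video by its best
    segment; that best segment also localizes the expression event."""
    scored = [(sum(frame_scores[a:b]) / (b - a), (a, b))
              for a, b in multiscale_segments(len(frame_scores), scales)]
    best_score, best_segment = max(scored)
    return best_score > threshold, best_segment
```

The max over segments is what lets sequence-level labels supervise frame-level localization: only the most confident instance in the bag has to account for the positive label.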

  1. SPAM- SPECTRAL ANALYSIS MANAGER (UNIX VERSION)

    Science.gov (United States)

    Solomon, J. E.

    1994-01-01

    The Spectral Analysis Manager (SPAM) was developed to allow easy qualitative analysis of multi-dimensional imaging spectrometer data. Imaging spectrometers provide sufficient spectral sampling to define unique spectral signatures on a per pixel basis. Thus direct material identification becomes possible for geologic studies. SPAM provides a variety of capabilities for carrying out interactive analysis of the massive and complex datasets associated with multispectral remote sensing observations. In addition to normal image processing functions, SPAM provides multiple levels of on-line help, a flexible command interpretation, graceful error recovery, and a program structure which can be implemented in a variety of environments. SPAM was designed to be visually oriented and user friendly with the liberal employment of graphics for rapid and efficient exploratory analysis of imaging spectrometry data. SPAM provides functions to enable arithmetic manipulations of the data, such as normalization, linear mixing, band ratio discrimination, and low-pass filtering. SPAM can be used to examine the spectra of an individual pixel or the average spectra over a number of pixels. SPAM also supports image segmentation, fast spectral signature matching, spectral library usage, mixture analysis, and feature extraction. High speed spectral signature matching is performed by using a binary spectral encoding algorithm to separate and identify mineral components present in the scene. The same binary encoding allows automatic spectral clustering. Spectral data may be entered from a digitizing tablet, stored in a user library, compared to the master library containing mineral standards, and then displayed as a time-sequence spectral movie. The output plots, histograms, and stretched histograms produced by SPAM can be sent to a lineprinter, stored as separate RGB disk files, or sent to a Quick Color Recorder. 
SPAM is written in C for interactive execution and is available for two different
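
Binary spectral encoding of the kind described can be sketched in a few lines. The one-bit-per-band rule (band above or below the spectrum's own mean) and Hamming-distance matching used here are a common textbook formulation and only an assumption about SPAM's exact scheme; the library entries are likewise hypothetical:

```python
def binary_encode(spectrum):
    """One bit per band: set the bit where the band's value exceeds the
    spectrum's own mean. Codes are cheap to store and compare."""
    mean = sum(spectrum) / len(spectrum)
    code = 0
    for i, value in enumerate(spectrum):
        if value > mean:
            code |= 1 << i
    return code

def best_match(pixel_spectrum, library):
    """Return the library entry whose binary code is closest to the
    pixel's code in Hamming distance (popcount of the XOR)."""
    pixel_code = binary_encode(pixel_spectrum)
    return min(library,
               key=lambda name: bin(binary_encode(library[name]) ^ pixel_code).count("1"))
```

Because matching reduces to an XOR and a popcount, large spectral libraries can be scanned very quickly, which is the point of the encoding.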

  2. Market Segmentation from a Behavioral Perspective

    Science.gov (United States)

    Wells, Victoria K.; Chang, Shing Wan; Oliveira-Castro, Jorge; Pallister, John

    2010-01-01

    A segmentation approach is presented using both traditional demographic segmentation bases (age, social class/occupation, and working status) and a segmentation by benefits sought. The benefits sought in this case are utilitarian and informational reinforcement, variables developed from the Behavioral Perspective Model (BPM). Using data from 1,847…

  3. Parallel fuzzy connected image segmentation on GPU

    OpenAIRE

    Zhuge, Ying; Cao, Yong; Udupa, Jayaram K.; Miller, Robert W.

    2011-01-01

    Purpose: Image segmentation techniques using fuzzy connectedness (FC) principles have shown their effectiveness in segmenting a variety of objects in several large applications. However, one challenge in these algorithms has been their excessive computational requirements when processing large image datasets. Nowadays, commodity graphics hardware provides a highly parallel computing environment. In this paper, the authors present a parallel fuzzy connected image segmentation algorithm impleme...

  4. LIFE-STYLE SEGMENTATION WITH TAILORED INTERVIEWING

    NARCIS (Netherlands)

    KAMAKURA, WA; WEDEL, M

    The authors present a tailored interviewing procedure for life-style segmentation. The procedure assumes that a life-style measurement instrument has been designed. A classification of a sample of consumers into life-style segments is obtained using a latent-class model. With these segments, the

  5. The Process of Marketing Segmentation Strategy Selection

    OpenAIRE

    Ionel Dumitru

    2007-01-01

    The process of marketing segmentation strategy selection represents the essence of strategic marketing. We present hereinafter the main forms of marketing segmentation strategy: undifferentiated marketing, differentiated marketing, concentrated marketing and personalized marketing. In practice, companies use a mix of these marketing segmentation methods in order to maximize profit and to satisfy consumers’ needs.

  6. NCDC International Best Track Archive for Climate Stewardship (IBTrACS) Project, Version 2 (Version Superseded)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Version 2 of the dataset has been superseded by a newer version. Users should not use version 2 except in rare cases (e.g., when reproducing previous studies that...

  7. NCDC International Best Track Archive for Climate Stewardship (IBTrACS) Project, Version 1 (Version Superseded)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Version 1 of the dataset has been superseded by a newer version. Users should not use version 1 except in rare cases (e.g., when reproducing previous studies that...

  8. The FORM version of MINCER

    International Nuclear Information System (INIS)

    Larin, S.A.; Academy of Sciences of the USSR, Moscow; Tkachov, F.V.; McGill Univ., Montreal, PQ; Academy of Sciences of the USSR, Moscow; Vermaseren, J.A.M.

    1991-01-01

    The program MINCER for massless three-loop Feynman diagrams of the propagator type has been reprogrammed in the language of FORM. The new version is thoroughly optimized and can be run from a utility like the UNIX make, which allows one to conveniently process large numbers of diagrams. It has been used for some calculations that were previously not practical. (author). 22 refs.; 14 figs

  9. CLIPS 6.0 - C LANGUAGE INTEGRATED PRODUCTION SYSTEM, VERSION 6.0 (DEC VAX VMS VERSION)

    Science.gov (United States)

    Donnell, B.

    1994-01-01

    COOL (that is, a rule can pattern match on objects created using COOL). CLIPS 6.0 provides the capability to define functions, overloaded functions, and global variables interactively. In addition, CLIPS can be embedded within procedural code, called as a subroutine, and integrated with languages such as C, FORTRAN and Ada. CLIPS can be easily extended by a user through the use of several well-defined protocols. CLIPS provides several delivery options for programs including the ability to generate stand alone executables or to load programs from text or binary files. CLIPS 6.0 provides support for the modular development and execution of knowledge bases with the defmodule construct. CLIPS modules allow a set of constructs to be grouped together such that explicit control can be maintained over restricting the access of the constructs by other modules. This type of control is similar to global and local scoping used in languages such as C or Ada. By restricting access to deftemplate and defclass constructs, modules can function as blackboards, permitting only certain facts and instances to be seen by other modules. Modules are also used by rules to provide execution control. The CRSV (Cross-Reference, Style, and Verification) utility included with previous versions of CLIPS is no longer supported. The capabilities provided by this tool are now available directly within CLIPS 6.0 to aid in the development, debugging, and verification of large rule bases. COSMIC offers four distribution versions of CLIPS 6.0: UNIX (MSC-22433), VMS (MSC-22434), MACINTOSH (MSC-22429), and IBM PC (MSC-22430). Executable files, source code, utilities, documentation, and examples are included on the program media. All distribution versions include identical source code for the command line version of CLIPS 6.0. This source code should compile on any platform with an ANSI C compiler. 
Each distribution version of CLIPS 6.0, except that for the Macintosh platform, includes an executable for the

  10. Energy Statistics Manual [Arabic version

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2011-07-01

    Detailed, complete, timely and reliable statistics are essential to monitor the energy situation at a country level as well as at an international level. Energy statistics on supply, trade, stocks, transformation and demand are indeed the basis for any sound energy policy decision. For instance, the market of oil -- which is the largest traded commodity worldwide -- needs to be closely monitored in order for all market players to know at any time what is produced, traded, stocked and consumed and by whom. In view of the role and importance of energy in world development, one would expect basic energy information to be readily available and reliable. This is not always the case, and one can even observe a decline in the quality, coverage and timeliness of energy statistics over the last few years.

  11. Forsmark - site descriptive model version 0

    International Nuclear Information System (INIS)

    2002-10-01

    biosphere, is sufficiently advanced for some initial modelling exercises. The available information on the geosphere in the Forsmark regional model area is quite extensive, at least locally (especially SFR). In order to develop and test the modelling procedures, this information has been collected and transformed into appropriate formats under four separate headings: Geology, Rock mechanics, Hydrogeology, and Hydrogeochemistry. In the areas of rock engineering, hydrogeology and hydrogeochemistry, modelling activities were mainly confined to parametrisation exercises, using presently available data from the Forsmark regional model area to put limits on, for instance, the in situ stress field, the mechanical properties of the rock mass, the hydraulic properties of the fracture zones and rock mass between them, and the hydrogeochemical evolution. The site descriptive model, version 0, is intended as the basic platform and natural starting point for all groups involved in the site investigations at Forsmark, especially for the regional model area. The main results of the present project were to focus attention on the strengths and weaknesses in the available data coverage and data storage and processing systems, and to provide a basis for developing and testing ways of transforming diverse types of geoscientific information into a form appropriate for modelling. At the same time, the project provided concrete guidelines for the planning of the initial site investigations at Forsmark

  12. FORM version 4.0

    Science.gov (United States)

    Kuipers, J.; Ueda, T.; Vermaseren, J. A. M.; Vollinga, J.

    2013-05-01

    We present version 4.0 of the symbolic manipulation system FORM. The most important new features are manipulation of rational polynomials and the factorization of expressions. Many other new functions and commands are also added; some of them are very general, while others are designed for building specific high level packages, such as one for Gröbner bases. New is also the checkpoint facility, that allows for periodic backups during long calculations. Finally, FORM 4.0 has become available as open source under the GNU General Public License version 3. Program summary: Program title: FORM. Catalogue identifier: AEOT_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEOT_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: GNU General Public License, version 3. No. of lines in distributed program, including test data, etc.: 151599. No. of bytes in distributed program, including test data, etc.: 1 078 748. Distribution format: tar.gz. Programming language: The FORM language. FORM itself is programmed in a mixture of C and C++. Computer: All. Operating system: UNIX, LINUX, Mac OS, Windows. Classification: 5. Nature of problem: FORM defines a symbolic manipulation language in which the emphasis lies on fast processing of very large formulas. It has been used successfully for many calculations in Quantum Field Theory and mathematics. In speed and size of formulas that can be handled it outperforms other systems typically by an order of magnitude. Special in this version: The version 4.0 contains many new features. Most important are factorization and rational arithmetic. The program has also become open source under the GPL. The code in CPC is for reference. You are encouraged to upload the most recent sources from www.nikhef.nl/form/formcvs.php because of frequent bug fixes. Solution method: See "Nature of Problem", above. Additional comments: NOTE: The code in CPC is for reference. 
You are encouraged

  13. Automatic segmentation of vertebrae from radiographs

    DEFF Research Database (Denmark)

    Mysling, Peter; Petersen, Peter Kersten; Nielsen, Mads

    2011-01-01

    Segmentation of vertebral contours is an essential task in the design of automatic tools for vertebral fracture assessment. In this paper, we propose a novel segmentation technique which does not require operator interaction. The proposed technique solves the segmentation problem in a hierarchical...... is constrained by a conditional shape model, based on the variability of the coarse spine location estimates. The technique is evaluated on a data set of manually annotated lumbar radiographs. The results compare favorably to the previous work in automatic vertebra segmentation, in terms of both segmentation...

  14. Color image Segmentation using automatic thresholding techniques

    International Nuclear Information System (INIS)

    Harrabi, R.; Ben Braiek, E.

    2011-01-01

    In this paper, entropy- and between-class-variance-based thresholding methods for color image segmentation are studied. The maximization of the between-class variance (MVI) and of the entropy (ME) have been used as criterion functions to determine an optimal threshold that segments images into nearly homogeneous regions. Segmentation results from the two methods are validated, the segmentation sensitivity for the available test data is evaluated, and a comparative study between these methods in different color spaces is presented. The experimental results demonstrate the superiority of the MVI method for color image segmentation.
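The between-class variance criterion above is the classic Otsu method. As a hedged illustration (a single-channel sketch in plain Python; the paper applies such criteria per color component and also studies an entropy criterion), maximizing the between-class variance over a histogram might look like:

```python
def otsu_threshold(histogram):
    """Return the threshold maximizing between-class variance for a
    grayscale histogram (list of pixel counts per intensity level).
    Levels <= threshold form one class, levels above it the other."""
    total = sum(histogram)
    sum_all = sum(i * h for i, h in enumerate(histogram))
    best_t, best_var = 0, -1.0
    w_bg, sum_bg = 0, 0.0
    for t, count in enumerate(histogram):
        w_bg += count                 # background (class 1) weight
        if w_bg == 0:
            continue
        w_fg = total - w_bg           # foreground (class 2) weight
        if w_fg == 0:
            break
        sum_bg += t * count
        mu_bg = sum_bg / w_bg
        mu_fg = (sum_all - sum_bg) / w_fg
        var_between = w_bg * w_fg * (mu_bg - mu_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```

For a bimodal histogram the maximizing threshold falls in the valley between the two modes.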

  15. MOVING WINDOW SEGMENTATION FRAMEWORK FOR POINT CLOUDS

    Directory of Open Access Journals (Sweden)

    G. Sithole

    2012-07-01

    Full Text Available As lidar point clouds become larger, streamed processing becomes more attractive. This paper presents a framework for the streamed segmentation of point clouds, with the intention of segmenting unstructured point clouds in real time. The framework is composed of two main components. The first component segments points within a window shifting over the point cloud. The second component stitches the segments within the windows together. In this fashion a point cloud can be streamed through these two components in sequence, thus producing a segmentation. The algorithm has been tested on an airborne lidar point cloud, and some results on the performance of the framework are presented.
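The two-component design (segment within a moving window, then stitch across window boundaries) can be sketched in miniature. This is a hypothetical 1-D toy, not the authors' point-cloud implementation: points are sorted coordinates, a segment breaks at large gaps, and adjacent windows are stitched when the boundary gap is small:

```python
def segment_window(points, gap):
    """Cluster a sorted 1-D window of coordinates: a new segment
    starts wherever the gap to the previous point exceeds `gap`."""
    segments, current = [], [points[0]]
    for p in points[1:]:
        if p - current[-1] > gap:
            segments.append(current)
            current = [p]
        else:
            current.append(p)
    segments.append(current)
    return segments

def streamed_segmentation(points, window_size, gap):
    """Segment a sorted point stream window by window, stitching a
    window's first segment onto the previous window's last segment
    when the gap at the window boundary is small enough."""
    segments = []
    for start in range(0, len(points), window_size):
        window = points[start:start + window_size]
        new_segs = segment_window(window, gap)
        # stitch: merge across the window boundary if points are close
        if segments and new_segs[0][0] - segments[-1][-1] <= gap:
            segments[-1].extend(new_segs.pop(0))
        segments.extend(new_segs)
    return segments
```

Because each window is processed independently and only the last segment is carried forward, the stream never needs to hold the whole cloud in memory.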

  16. Automatic speech signal segmentation based on the innovation adaptive filter

    Directory of Open Access Journals (Sweden)

    Makowski Ryszard

    2014-06-01

    Full Text Available Speech segmentation is an essential stage in designing automatic speech recognition systems, and one can find several algorithms proposed in the literature. It is a difficult problem, as speech is immensely variable. The aim of the authors’ studies was to design an algorithm that could be employed at the stage of automatic speech recognition. This would make it possible to avoid some problems related to speech signal parametrization. Posing the problem in such a way requires the algorithm to be capable of working in real time. The only such algorithm was proposed by Tyagi et al. (2006), and it is a modified version of Brandt’s algorithm. The article presents a new algorithm for unsupervised automatic speech signal segmentation. It performs segmentation without access to information about the phonetic content of the utterances, relying exclusively on second-order statistics of the speech signal. The starting point for the proposed method is the time-varying Schur coefficients of an innovation adaptive filter. The Schur algorithm is known to be fast, precise, stable and capable of rapidly tracking changes in second-order signal statistics. A transition from one phoneme to another in the speech signal always indicates a change in signal statistics caused by vocal tract changes. In order to allow for the properties of human hearing, detection of inter-phoneme boundaries is performed based on statistics defined on the mel spectrum determined from the reflection coefficients. The paper presents the structure of the algorithm, defines its properties, lists parameter values, describes detection efficiency results, and compares them with those for another algorithm. The obtained segmentation results are satisfactory.
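The idea of placing boundaries where second-order statistics change can be illustrated with a toy detector. The sketch below uses short-term frame energy as a stand-in for the Schur/reflection-coefficient statistics the paper actually uses; the frame length and threshold are illustrative assumptions:

```python
def detect_boundaries(signal, frame, threshold):
    """Toy change detector: mark a boundary between adjacent frames
    when their short-term energies differ by more than `threshold`
    (energy here stands in for richer second-order statistics)."""
    def energy(seg):
        return sum(s * s for s in seg) / len(seg)
    boundaries = []
    prev = energy(signal[0:frame])
    for start in range(frame, len(signal) - frame + 1, frame):
        cur = energy(signal[start:start + frame])
        if abs(cur - prev) > threshold:
            boundaries.append(start)   # statistics changed here
        prev = cur
    return boundaries
```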

  17. Versions of the Waste Reduction Model (WARM)

    Science.gov (United States)

    This page provides a brief chronology of changes made to EPA’s Waste Reduction Model (WARM), organized by WARM version number. The page includes brief summaries of changes and updates since the previous version.

  18. Unsupervised Performance Evaluation of Image Segmentation

    Directory of Open Access Journals (Sweden)

    Chabrier Sebastien

    2006-01-01

    Full Text Available We present in this paper a study of unsupervised evaluation criteria that enable the quantification of the quality of an image segmentation result. These evaluation criteria compute some statistics for each region or class in a segmentation result. Such an evaluation criterion can be useful for different applications: the comparison of segmentation results, the automatic choice of the best-fitted parameters of a segmentation method for a given image, or the definition of new segmentation methods by optimization. We first present the state of the art of unsupervised evaluation, and then we compare six unsupervised evaluation criteria. For this comparative study, we use a database composed of 8400 synthetic gray-level images segmented in four different ways. Vinet's measure (correct classification rate) is used as an objective criterion to compare the behavior of the different criteria. Finally, we present the experimental results on the segmentation evaluation of a few gray-level natural images.
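A minimal example of such a per-region criterion is intra-region uniformity: the lower the within-region gray-level variance, the better the segmentation is judged to be. This sketch is a generic illustration of the idea, not one of the six criteria compared in the paper:

```python
def intra_region_uniformity(image, labels):
    """Unsupervised segmentation quality score in [0, 1]: one minus
    the sum of within-region gray-level variances, normalized by a
    worst-case bound (higher is better). `image` and `labels` are
    flat lists of equal length."""
    regions = {}
    for g, l in zip(image, labels):
        regions.setdefault(l, []).append(g)
    gray_range = (max(image) - min(image)) or 1
    total_var = 0.0
    for vals in regions.values():
        mu = sum(vals) / len(vals)
        total_var += sum((v - mu) ** 2 for v in vals)
    # worst case: every pixel a full gray-range away from its region mean
    worst = len(image) * gray_range ** 2
    return 1.0 - total_var / worst
```

A perfect partition of a piecewise-constant image scores 1.0; mixing the two gray levels inside each region lowers the score.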

  19. Efficient graph-cut tattoo segmentation

    Science.gov (United States)

    Kim, Joonsoo; Parra, Albert; Li, He; Delp, Edward J.

    2015-03-01

    Law enforcement is interested in exploiting tattoos as an information source to identify, track and prevent gang-related crimes. Many tattoo image retrieval systems have been described. In a retrieval system, tattoo segmentation is an important step for retrieval accuracy, since segmentation removes background information in a tattoo image. Existing segmentation methods do not extract the tattoo very well when the background includes textures and colors similar to skin tones. In this paper we describe a tattoo segmentation approach based on determining skin pixels in regions near the tattoo. In these regions, graph-cut segmentation using a skin color model and a visual saliency map is used to find skin pixels. After segmentation we determine which sets of skin pixels are connected with each other and form a closed contour enclosing a tattoo. The regions surrounded by these closed contours are considered tattoo regions. Our method segments tattoos well even when the background includes textures and colors similar to skin.

  20. Metric Learning for Hyperspectral Image Segmentation

    Science.gov (United States)

    Bue, Brian D.; Thompson, David R.; Gilmore, Martha S.; Castano, Rebecca

    2011-01-01

    We present a metric learning approach to improve the performance of unsupervised hyperspectral image segmentation. Unsupervised spatial segmentation can assist both user visualization and automatic recognition of surface features. Analysts can use spatially continuous segments to decrease noise levels and/or localize feature boundaries. However, existing segmentation methods use task-agnostic measures of similarity. Here we learn task-specific similarity measures from training data, improving segment fidelity to classes of interest. Multiclass Linear Discriminant Analysis produces a linear transform that optimally separates a labeled set of training classes. This defines a distance metric that generalizes to new scenes, enabling graph-based segmentation that emphasizes key spectral features. We describe tests based on data from the Compact Reconnaissance Imaging Spectrometer (CRISM) in which learned metrics improve segment homogeneity with respect to mineralogical classes.
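The core step, learning a linear transform that separates labeled classes and using it as a distance metric, can be sketched for the simplest case: two classes in two dimensions, i.e. Fisher's discriminant rather than the paper's multiclass LDA. All names below are illustrative:

```python
def fisher_direction(class0, class1):
    """Two-class, 2-D Fisher discriminant: w = S_W^{-1} (mu1 - mu0),
    where S_W is the pooled within-class scatter. Projecting onto w
    gives a task-specific 1-D distance metric."""
    def mean(pts):
        n = len(pts)
        return [sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n]
    def scatter(pts, mu):
        s = [[0.0, 0.0], [0.0, 0.0]]
        for x, y in pts:
            dx, dy = x - mu[0], y - mu[1]
            s[0][0] += dx * dx; s[0][1] += dx * dy
            s[1][0] += dy * dx; s[1][1] += dy * dy
        return s
    m0, m1 = mean(class0), mean(class1)
    s0, s1 = scatter(class0, m0), scatter(class1, m1)
    sw = [[s0[i][j] + s1[i][j] for j in range(2)] for i in range(2)]
    det = sw[0][0] * sw[1][1] - sw[0][1] * sw[1][0]
    inv = [[sw[1][1] / det, -sw[0][1] / det],
           [-sw[1][0] / det, sw[0][0] / det]]
    d = [m1[0] - m0[0], m1[1] - m0[1]]
    return [inv[0][0] * d[0] + inv[0][1] * d[1],
            inv[1][0] * d[0] + inv[1][1] * d[1]]

def learned_distance(w, a, b):
    """Distance between two spectra under the learned projection."""
    return abs(w[0] * (a[0] - b[0]) + w[1] * (a[1] - b[1]))
```

Distances measured along w separate the training classes while ignoring directions in which they overlap, which is what makes the learned metric useful for graph-based segmentation.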

  1. Interferon Induced Focal Segmental Glomerulosclerosis

    Directory of Open Access Journals (Sweden)

    Yusuf Kayar

    2016-01-01

    Full Text Available Behçet’s disease is an inflammatory disease of unknown etiology which involves recurring oral and genital aphthous ulcers and ocular lesions, as well as articular, vascular, and nervous system involvement. Focal segmental glomerulosclerosis (FSGS) is usually seen in viral infections, immune deficiency syndrome, sickle cell anemia, and hyperfiltration, and may occur secondary to interferon therapy. Here, we present a case of FSGS identified by kidney biopsy in a patient who had been diagnosed with Behçet’s disease, had received interferon-alpha treatment for uveitis, and presented with acute renal failure and nephrotic syndrome associated with interferon.

  2. A contrario line segment detection

    CERN Document Server

    von Gioi, Rafael Grompone

    2014-01-01

    The reliable detection of low-level image structures is an old and still challenging problem in computer vision. This book leads a detailed tour through the LSD algorithm, a line segment detector designed to be fully automatic. Based on the a contrario framework, the algorithm works efficiently without the need of any parameter tuning. The design criteria are thoroughly explained and the algorithm's good and bad results are illustrated on real and synthetic images. The issues involved, as well as the strategies used, are common to many geometrical structure detection problems and some possible

  3. Did Globalization Lead to Segmentation?

    DEFF Research Database (Denmark)

    Di Vaio, Gianfranco; Enflo, Kerstin Sofia

    Economic historians have stressed that income convergence was a key feature of the 'OECD-club' and that globalization was among the accelerating forces of this process in the long-run. This view has however been challenged, since it suffers from an ad hoc selection of countries. In the paper......, a mixture model is applied to a sample of 64 countries to endogenously analyze the cross-country growth behavior over the period 1870-2003. Results show that growth patterns were segmented in two worldwide regimes, the first one being characterized by convergence, and the other one denoted by divergence...

  4. Inclusion in the Workplace - Text Version | NREL

    Science.gov (United States)

    This is the text version for the Inclusion: Leading by Example video. I'm Martin Keller, the director of the laboratory. Another very important element in inclusion is diversity. Because if we have a

  5. A constructive version of AIP revisited

    NARCIS (Netherlands)

    Barros, A.; Hou, T.

    2008-01-01

    In this paper, we review a constructive version of the Approximation Induction Principle. This version states that bisimilarity of regular processes can be decided by observing only a part of their behaviour. We use this constructive version to formulate a complete inference system for the Algebra

  6. Embrittlement data base, version 1

    International Nuclear Information System (INIS)

    Wang, J.A.

    1997-08-01

    The aging and degradation of light-water-reactor (LWR) pressure vessels is of particular concern because of their relevance to plant integrity and the magnitude of the expected irradiation embrittlement. The radiation embrittlement of reactor pressure vessel (RPV) materials depends on many different factors such as flux, fluence, fluence spectrum, irradiation temperature, and preirradiation material history and chemical composition. These factors must be considered to reliably predict pressure vessel embrittlement and to ensure the safe operation of the reactor. Based on embrittlement predictions, decisions must be made concerning operating parameters and issues such as low-leakage-fuel management, possible life extension, and the need for annealing the pressure vessel. Large amounts of data from surveillance capsules and test reactor experiments, comprising many different materials and different irradiation conditions, are needed to develop generally applicable damage prediction models that can be used for industry standards and regulatory guides. Version 1 of the Embrittlement Data Base (EDB) is such a comprehensive collection of data, resulting from merging version 2 of the Power Reactor Embrittlement Data Base (PR-EDB). Fracture toughness data were also integrated into Version 1 of the EDB. For power reactor data, the current EDB lists 1,029 Charpy transition-temperature shift data points, which include 321 from plates, 125 from forgings, 115 from correlation monitor materials, 246 from welds, and 222 from heat-affected-zone (HAZ) materials that were irradiated in 271 capsules from 101 commercial power reactors. For test reactor data, information is available for 1,308 different irradiated sets (352 from plates, 186 from forgings, 303 from correlation monitor materials, 396 from welds and 71 from HAZs) and 268 different irradiated-plus-annealed data sets.

  7. Pathogenesis of Focal Segmental Glomerulosclerosis

    Directory of Open Access Journals (Sweden)

    Beom Jin Lim

    2016-11-01

    Full Text Available Focal segmental glomerulosclerosis (FSGS is characterized by focal and segmental obliteration of glomerular capillary tufts with increased matrix. FSGS is classified as collapsing, tip, cellular, perihilar and not otherwise specified variants according to the location and character of the sclerotic lesion. Primary or idiopathic FSGS is considered to be related to podocyte injury, and the pathogenesis of podocyte injury has been actively investigated. Several circulating factors affecting podocyte permeability barrier have been proposed, but not proven to cause FSGS. FSGS may also be caused by genetic alterations. These genes are mainly those regulating slit diaphragm structure, actin cytoskeleton of podocytes, and foot process structure. The mode of inheritance and age of onset are different according to the gene involved. Recently, the role of parietal epithelial cells (PECs has been highlighted. Podocytes and PECs have common mesenchymal progenitors, therefore, PECs could be a source of podocyte repopulation after podocyte injury. Activated PECs migrate along adhesion to the glomerular tuft and may also contribute to the progression of sclerosis. Markers of activated PECs, including CD44, could be used to distinguish FSGS from minimal change disease. The pathogenesis of FSGS is very complex; however, understanding basic mechanisms of podocyte injury is important not only for basic research, but also for daily diagnostic pathology practice.

  8. Strong versions of Bell's theorem

    International Nuclear Information System (INIS)

    Stapp, H.P.

    1994-01-01

    Technical aspects of a recently constructed strong version of Bell's theorem are discussed. The theorem assumes neither hidden variables nor factorization, and neither determinism nor counterfactual definiteness. It deals directly with logical connections. Hence its relationship with modal logic needs to be described. It is shown that the proof can be embedded in an orthodox modal logic, and hence its compatibility with modal logic assured, but that this embedding weakens the theorem by introducing as added assumptions the conventionalities of the particular modal logic that is adopted. This weakening is avoided in the recent proof by using directly the set-theoretic conditions entailed by the locality assumption

  9. ASPEN Version 3.0

    Science.gov (United States)

    Rabideau, Gregg; Chien, Steve; Knight, Russell; Schaffer, Steven; Tran, Daniel; Cichy, Benjamin; Sherwood, Robert

    2006-01-01

    The Automated Scheduling and Planning Environment (ASPEN) computer program has been updated to version 3.0. ASPEN is a modular, reconfigurable, application software framework for solving batch problems that involve reasoning about time, activities, states, and resources. Applications of ASPEN can include planning spacecraft missions, scheduling of personnel, and managing supply chains, inventories, and production lines. ASPEN 3.0 can be customized for a wide range of applications and for a variety of computing environments that include various central processing units and random access memories.

  10. Placental fetal stem segmentation in a sequence of histology images

    Science.gov (United States)

    Athavale, Prashant; Vese, Luminita A.

    2012-02-01

    Recent research in perinatal pathology argues that analyzing properties of the placenta may reveal important information on how certain diseases progress. One important property is the structure of the placental fetal stems. Analysis of the fetal stems in a placenta could be useful in the study and diagnosis of some diseases like autism. To study the fetal stem structure effectively, we need to automatically and accurately track fetal stems through a sequence of digitized hematoxylin and eosin (H&E) stained histology slides. There are many problems in successfully achieving this goal, among them: the large size of the images, misalignment of consecutive H&E slides, unpredictable inaccuracies of manual tracing, and very complicated texture patterns of various tissue types without clear characteristics. In this paper we propose a novel algorithm to achieve automatic tracing of the fetal stem in a sequence of H&E images, based on an inaccurate manual segmentation of a fetal stem in one of the images. This algorithm combines global affine registration, local non-affine registration and a novel 'dynamic' version of the active contours model without edges. We first use global affine image registration of all the images based on displacement, scaling and rotation. This gives us the approximate location of the corresponding fetal stem in the image that needs to be traced. We then use the affine registration algorithm "locally" near this location. At this point, we use a fast non-affine registration based on an L2-similarity measure and diffusion regularization to get a better location of the fetal stem. Finally, we have to take into account inaccuracies in the initial tracing. This is achieved through a novel dynamic version of the active contours model without edges, where the coefficients of the fitting terms are computed iteratively to ensure that we obtain a unique stem in the segmentation. The segmentation thus obtained can then be used as an

  11. Spotting Separator Points at Line Terminals in Compressed Document Images for Text-line Segmentation

    OpenAIRE

    R, Amarnath; Nagabhushan, P.

    2017-01-01

    Line separators are used to segregate text-lines from one another in document image analysis. Finding the separator points at every line terminal in a document image would enable text-line segmentation. In particular, identifying the separators in handwritten text could be a thrilling exercise. Obviously it would be challenging to perform this in the compressed version of a document image and that is the proposed objective in this research. Such an effort would prevent the computational burde...

  12. Forsmark - site descriptive model version 0

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2002-10-01

    During 2002, the Swedish Nuclear Fuel and Waste Management Company (SKB) is starting investigations at two potential sites for a deep repository in the Precambrian basement of the Fennoscandian Shield. The present report concerns one of those sites, Forsmark, which lies in the municipality of Oesthammar, on the east coast of Sweden, about 150 kilometres north of Stockholm. The site description should present all collected data and interpreted parameters of importance for the overall scientific understanding of the site, for the technical design and environmental impact assessment of the deep repository, and for the assessment of long-term safety. The site description will have two main components: a written synthesis of the site, summarising the current state of knowledge, as documented in the databases containing the primary data from the site investigations, and one or several site descriptive models, in which the collected information is interpreted and presented in a form which can be used in numerical models for rock engineering, environmental impact and long-term safety assessments. The site descriptive models are devised and stepwise updated as the site investigations proceed. The point of departure for this process is the regional site descriptive model, version 0, which is the subject of the present report. Version 0 is developed out of the information available at the start of the site investigation. This information, with the exception of data from tunnels and drill holes at the sites of the Forsmark nuclear reactors and the underground low-middle active radioactive waste storage facility, SFR, is mainly 2D in nature (surface data), and is general and regional, rather than site-specific, in content. For this reason, the Forsmark site descriptive model, version 0, as detailed in the present report, has been developed at a regional scale. 
It covers a rectangular area, 15 km in a southwest-northeast and 11 km in a northwest-southeast direction, around the

  13. Brain Tumor Image Segmentation in MRI Image

    Science.gov (United States)

    Peni Agustin Tjahyaningtijas, Hapsari

    2018-04-01

    Brain tumor segmentation plays an important role in medical image processing. Treatment of patients with brain tumors is highly dependent on early detection of these tumors, and early detection will improve a patient’s life chances. Diagnosis of brain tumors by experts usually relies on manual segmentation, which is difficult and time consuming, making automatic segmentation necessary. Nowadays automatic segmentation is very popular and can be a solution to the problem of brain tumor segmentation with better performance. The purpose of this paper is to provide a review of MRI-based brain tumor segmentation methods. There are a number of existing review papers focusing on traditional methods for MRI-based brain tumor image segmentation. In this paper, we focus on the recent trend of automatic segmentation in this field. First, an introduction to brain tumors and methods for brain tumor segmentation is given. Then, the state-of-the-art algorithms, with a focus on the recent trend of fully automatic segmentation, are discussed. Finally, an assessment of the current state is presented and future developments to standardize MRI-based brain tumor segmentation methods into daily clinical routine are addressed.

  14. A new framework for interactive images segmentation

    International Nuclear Information System (INIS)

    Ashraf, M.; Sarim, M.; Shaikh, A.B.

    2017-01-01

    Image segmentation has become a widely studied research problem in image processing. There exist different graph-based solutions for interactive image segmentation, but the domain of image segmentation still needs persistent improvement. The segmentation quality of existing techniques generally depends on the manual input provided at the beginning; therefore, these algorithms may not produce quality segmentation from the initial seed labels provided by a novice user. In this work we investigated the use of cellular automata in image segmentation and proposed a new algorithm that follows a cellular automaton in label propagation. It incorporates both the pixel's local and global information in the segmentation process. We introduced novel global constraints in the automata evolution rules; hence the proposed scheme of automata evolution is more effective than earlier automata-based evolution schemes. The global constraints are also effective in decreasing the sensitivity to small changes made in the manual input; therefore the proposed approach is less dependent on seed label marks. It can produce quality segmentation with modest user effort. Segmentation results indicate that the proposed algorithm performs better than the earlier segmentation techniques. (author)
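Label propagation by a cellular automaton can be illustrated with a GrowCut-style toy on a 1-D image (the paper's algorithm additionally injects global constraints into the evolution rules, which this sketch omits):

```python
def grow_cut(image, seeds, steps=20):
    """GrowCut-style cellular automaton on a 1-D image: each cell holds
    (label, strength); a neighbour conquers a cell when its strength,
    attenuated by the intensity difference, exceeds the cell's own.
    `seeds` maps pixel index to a nonzero label; 0 means unlabeled."""
    n = len(image)
    max_diff = (max(image) - min(image)) or 1
    labels = [seeds.get(i, 0) for i in range(n)]
    strength = [1.0 if i in seeds else 0.0 for i in range(n)]
    for _ in range(steps):
        new_l, new_s = labels[:], strength[:]
        for i in range(n):
            for j in (i - 1, i + 1):          # 1-D neighbourhood
                if 0 <= j < n and labels[j]:
                    g = 1.0 - abs(image[i] - image[j]) / max_diff
                    attack = g * strength[j]
                    if attack > new_s[i]:
                        new_s[i], new_l[i] = attack, labels[j]
        labels, strength = new_l, new_s
    return labels
```

Labels flow outward from the seeds and stop at strong intensity edges, where the attenuation factor g drops to zero.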

  15. School version of ESTE EU

    International Nuclear Information System (INIS)

    Carny, P.; Suchon, D.; Chyly, M.; Smejkalova, E.; Fabova, V.

    2008-01-01

    ESTE EU is an information system and software package for assessing the radiological impacts on the territory of a country in the case of a radiation accident inside or outside that country. The program enables modelling of the dispersion of radioactive clouds at small and meso scales. The system enables the user to estimate a prediction of the source term (release to the atmosphere) for any point of a radiation/nuclear accident in Europe (for any point of release, but especially for the sites of European power reactors). The system makes it possible to utilize results of real radiological monitoring in the process of source term estimation. Radiological impacts of a release to the atmosphere are modelled and calculated across Europe and displayed in the geographical information system (GIS). The school version of ESTE EU is intended for university students who are interested in, or could work in, the field of emergency response, radiological and nuclear accidents, dispersion modelling, radiological impact calculation, and urgent or preventive protective measure implementation. The school version of ESTE EU is planned to be donated to specialized departments of faculties in Slovakia, the Czech Republic, etc. The system can be fully operated in Slovak, Czech or English. (authors)

  17. Simulating the 2012 High Plains drought using three single column versions (SCM) of BUGS5

    Science.gov (United States)

    Medina, I. D.; Denning, S.

    2013-12-01

    The impact of changes in the frequency and severity of drought on fresh water sustainability is a great concern for many regions of the world. One such location is the High Plains, where the local economy is primarily driven by fresh water withdrawals from the Ogallala Aquifer, which accounts for approximately 30% of total irrigation withdrawals from all U.S. aquifers combined. Modeling studies that focus on the feedback mechanisms that control the climate and eco-hydrology during times of drought are limited, and have used conventional General Circulation Models (GCMs) with grid length scales ranging from one hundred to several hundred kilometers. Additionally, these models utilize crude statistical parameterizations of cloud processes for estimating sub-grid fluxes of heat and moisture and have a poor representation of land surface heterogeneity. For this research, we will focus on the 2012 High Plains drought and will perform numerical simulations using three single-column versions (SCM) of BUGS5 (the Colorado State University (CSU) GCM coupled to the Simple Biosphere Model (SiB3)) at multiple sites overlying the Ogallala Aquifer for the 2011-2012 period. In the first version of BUGS5, the model will be used in its standard bulk setting (a single atmospheric column coupled to a single instance of SiB3); in the second, the Super-Parameterized Community Atmospheric Model (SP-CAM), a cloud-resolving model (CRM) consisting of 64 atmospheric columns, will replace the single CSU GCM atmospheric parameterization and will be coupled to a single instance of SiB3; and in the third version of BUGS5, an instance of SiB3 will be coupled to each CRM column of the SP-CAM (64 CRM columns coupled to 64 instances of SiB3). To assess the physical realism of the land-atmosphere feedbacks simulated at each site by all versions of BUGS5, differences in simulated energy and moisture fluxes will be computed between the 2011 and 2012 periods and will be compared to differences calculated using

  18. Hierarchical image segmentation for learning object priors

    Energy Technology Data Exchange (ETDEWEB)

    Prasad, Lakshman [Los Alamos National Laboratory; Yang, Xingwei [TEMPLE UNIV.; Latecki, Longin J [TEMPLE UNIV.; Li, Nan [TEMPLE UNIV.

    2010-11-10

    The proposed segmentation approach naturally combines experience-based and image-based information. The experience-based information is obtained by training a classifier for each object class. For a given test image, the result of each classifier is represented as a probability map. The final segmentation is obtained with a hierarchical image segmentation algorithm that considers both the probability maps and image features such as color and edge strength. We also utilize the image region hierarchy to obtain not only local but also semi-global features as input to the classifiers. Moreover, to get robust probability maps, we take into account region context information by averaging the probability maps over different levels of the hierarchical segmentation algorithm. The obtained segmentation results are superior to those of state-of-the-art supervised image segmentation algorithms.
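The averaging step is straightforward. Assuming each hierarchy level yields one per-pixel probability map for a class, a minimal sketch of combining them into a context-robust map is:

```python
def average_probability_maps(maps):
    """Average per-level probability maps (equal-length flat lists of
    per-pixel probabilities) into a single, more robust map."""
    n = len(maps)
    return [sum(col) / n for col in zip(*maps)]
```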

  19. Image Segmentation Using Minimum Spanning Tree

    Science.gov (United States)

    Dewi, M. P.; Armiati, A.; Alvini, S.

    2018-04-01

    This research aims to segment digital images. The purpose of segmentation is to separate the object from the background, so that the main object can be processed for other purposes. Along with the development of digital image processing applications, the segmentation process becomes increasingly necessary. The segmented image resulting from the segmentation process should be accurate, since the next step needs to interpret the information in the image. This article discusses the application of the minimum spanning tree of a graph in the segmentation of digital images. This method is able to separate an object from the background, turning the image into a binary image. In this case, the object in focus is set to white, while the background is black, or vice versa.
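A minimum-spanning-tree segmentation in the spirit described above can be sketched with Kruskal's algorithm and a fixed merge threshold (an illustrative simplification; the fixed threshold is an assumption, not the article's exact procedure):

```python
def mst_segmentation(width, height, pixels, threshold):
    """Segment a grayscale image (flat row-major list) by building a
    minimum spanning forest: process 4-neighbour edges in order of
    intensity difference and merge components only while the edge
    weight stays below `threshold`. Returns a component label per pixel."""
    parent = list(range(width * height))
    def find(i):                         # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    edges = []
    for y in range(height):
        for x in range(width):
            i = y * width + x
            if x + 1 < width:
                edges.append((abs(pixels[i] - pixels[i + 1]), i, i + 1))
            if y + 1 < height:
                edges.append((abs(pixels[i] - pixels[i + width]), i, i + width))
    for w, a, b in sorted(edges):        # Kruskal's order
        if w >= threshold:
            break                        # remaining edges are heavier
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb
    return [find(i) for i in range(width * height)]
```

With two components found, mapping one label to white and the other to black yields the binary image the article describes.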

  20. Toxic Anterior Segment Syndrome (TASS)

    Directory of Open Access Journals (Sweden)

    Özlem Öner

    2011-12-01

    Full Text Available Toxic anterior segment syndrome (TASS) is a sterile intraocular inflammation caused by noninfectious substances, resulting in extensive toxic damage to the intraocular tissues. Possible etiologic factors of TASS include surgical trauma, bacterial endotoxin, intraocular solutions with inappropriate pH and osmolality, preservatives, denatured ophthalmic viscosurgical devices (OVDs), inadequate sterilization, cleaning and rinsing of surgical devices, intraocular lenses, and polishing and sterilizing compounds related to intraocular lenses. The characteristic signs and symptoms, such as blurred vision, corneal edema, hypopyon and a nonreactive pupil, usually occur 24 hours after cataract surgery. The differential diagnosis of TASS from infectious endophthalmitis is important. The main treatment for TASS is prevention. TASS is a cataract surgery complication that is seen more commonly nowadays. In this article, the possible underlying causes as well as treatment and prevention methods of TASS are summarized. (Turk J Ophthalmol 2011; 41: 407-13)

  1. Communication with market segments - travel agencies' perspective

    OpenAIRE

    Lorena Bašan; Jasmina Dlačić; Željko Trezner

    2013-01-01

    Purpose – The purpose of this paper is to research the travel agencies’ communication with market segments. Communication with market segments takes into account marketing communication means as well as the implementation of different business orientations. Design – Special emphasis is placed on the use of different marketing communication means and their efficiency. Research also explores business orientation adaptation when approaching different market segments. Methodology – In explo...

  2. Distance measures for image segmentation evaluation

    OpenAIRE

    Monteiro, Fernando C.; Campilho, Aurélio

    2012-01-01

    In this paper we present a study of evaluation measures that enable the quantification of the quality of an image segmentation result. Despite significant advances in image segmentation techniques, evaluation of these techniques thus far has been largely subjective. Typically, the effectiveness of a new algorithm is demonstrated only by the presentation of a few segmented images and is otherwise left to subjective evaluation by the reader. Such an evaluation criterion can be useful for differ...

  3. IFRS 8 Operating Segments - A Closer Look

    OpenAIRE

    Muthupandian, K S

    2008-01-01

    The International Accounting Standards Board issued International Financial Reporting Standard 8, Operating Segments. Segment information is one of the most vital aspects of financial reporting for investors and other users. IFRS 8 requires an entity to adopt the ‘management approach’ to reporting on the financial performance of its operating segments. This article presents a closer look at the standard (objective, scope, and disclosures).

  4. MRI Brain Tumor Segmentation Methods- A Review

    OpenAIRE

    Gursangeet, Kaur; Jyoti, Rani

    2016-01-01

    Medical image processing and its segmentation is an active and interesting area for researchers. It has gained a prominent place in diagnosing tumors since the advent of CT and MRI. MRI is a useful tool for detecting brain tumors, and segmentation is performed to extract the useful portion of an image. The purpose of this paper is to provide an overview of different image segmentation methods like watershed algorithm, morphological operations, neutrosophic sets, thresholding, K-...

  5. Speaker Segmentation and Clustering Using Gender Information

    Science.gov (United States)

    2006-02-01

    AFRL-HE-WP-TP-2006-0026, Air Force Research Laboratory, February 2006 (proceedings): Speaker Segmentation and Clustering Using Gender Information, by Brian M. Ore, General Dynamics. Gender information is used in the first stages of segmentation and in the clustering of opposite-gender files for speaker diarization of news broadcasts.

  6. Benchmarking of Remote Sensing Segmentation Methods

    Czech Academy of Sciences Publication Activity Database

    Mikeš, Stanislav; Haindl, Michal; Scarpa, G.; Gaetano, R.

    2015-01-01

    Roč. 8, č. 5 (2015), s. 2240-2248 ISSN 1939-1404 R&D Projects: GA ČR(CZ) GA14-10911S Institutional support: RVO:67985556 Keywords : benchmark * remote sensing segmentation * unsupervised segmentation * supervised segmentation Subject RIV: BD - Theory of Information Impact factor: 2.145, year: 2015 http://library.utia.cas.cz/separaty/2015/RO/haindl-0445995.pdf

  7. Track segment synthesis method for NTA film

    International Nuclear Information System (INIS)

    Kumazawa, Shigeru

    1980-03-01

    A method is presented for synthesizing track segments extracted from a gray-level digital picture of NTA film in an automatic counting system. In order to detect each track in an arbitrary direction, even if it has some gaps, as a set of track segments, the method successively links extracted segments along the track to the already-linked track segments, according to whether each extracted segment is similar in direction to the track and whether it is connected with the linked track segments. In the case of a large digital picture, the method is applied to each subpicture, which is a strip of the picture, and then concatenates the subsets of track segments linked in each subpicture into a set of track segments belonging to a track. The method was applied to detecting tracks in various directions over eight 364 x 40-pixel subpictures, with a gray scale of 127 levels per pixel (picture element), of a microphotograph of NTA film. It proved able to synthesize track segments correctly for every track in the picture. (author)
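    The linking criterion in the abstract — direction similarity plus connectivity across small gaps — can be illustrated with a small sketch; the tolerances, the greedy chaining, and all names here are hypothetical, not taken from the paper:

```python
# Hypothetical sketch of the track-linking idea: extracted segments
# (endpoint pairs) are chained when their directions are similar and the
# gap between their endpoints is small enough to bridge.
import math

def angle(seg):
    (x1, y1), (x2, y2) = seg
    return math.atan2(y2 - y1, x2 - x1)

def can_link(a, b, max_angle=0.2, max_gap=5.0):
    da = abs(angle(a) - angle(b)) % math.pi
    da = min(da, math.pi - da)      # direction is orientation-only
    gap = math.dist(a[1], b[0])     # tail of a to head of b
    return da <= max_angle and gap <= max_gap

def link_tracks(segments):
    """Greedily chain segments into tracks."""
    tracks = []
    for seg in segments:
        for tr in tracks:
            if can_link(tr[-1], seg):
                tr.append(seg)
                break
        else:
            tracks.append([seg])
    return tracks

segs = [((0, 0), (10, 1)), ((12, 1), (20, 2)),   # one track with a gap
        ((0, 20), (10, 20))]                     # a separate track
print(len(link_tracks(segs)))
```

    The first two segments share a direction and a 2-pixel gap, so they form one track; the third starts a new one.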

  8. Segmenting hospitals for improved management strategy.

    Science.gov (United States)

    Malhotra, N K

    1989-09-01

    The author presents a conceptual framework for the a priori and clustering-based approaches to segmentation and evaluates them in the context of segmenting institutional health care markets. An empirical study is reported in which the hospital market is segmented on three state-of-being variables. The segmentation approach also takes into account important organizational decision-making variables. The sophisticated Thurstone Case V procedure is employed. Several marketing implications for hospitals, other health care organizations, hospital suppliers, and donor publics are identified.

  9. Prototype implementation of segment assembling software

    Directory of Open Access Journals (Sweden)

    Pešić Đorđe

    2018-01-01

    Full Text Available IT education is very important, and a lot of effort is put into the development of tools for helping students acquire programming knowledge and for helping teachers automate the examination process. This paper describes a prototype of program segment assembling software used in the context of making tests in the field of algorithmic complexity. The proposed program segment assembling model uses rules and templates. A template is a simple program segment. A rule defines the combining method and the data dependencies, if any. An example of program segment assembling by the proposed system is given. The graphical user interface is also described.
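    A minimal sketch of the rule-and-template idea (the template strings and the single "nest" combining rule are invented for illustration; the prototype's actual model is richer):

```python
# Templates are simple program segments; a rule names which templates to
# combine and how. Here the only combining method is nesting one
# template's body inside another (hypothetical toy model).
TEMPLATES = {
    "loop": "for i in range(n):\n    {body}",
    "accum": "total += i",
}

def assemble(rule, templates):
    """Combine two templates by nesting, per a (outer, inner) rule."""
    outer, inner = rule
    return templates[outer].format(body=templates[inner])

print(assemble(("loop", "accum"), TEMPLATES))
```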

  10. Probabilistic Segmentation of Folk Music Recordings

    Directory of Open Access Journals (Sweden)

    Ciril Bohak

    2016-01-01

    Full Text Available The paper presents a novel method for automatic segmentation of folk music field recordings. The method is based on a distance measure that uses dynamic time warping to cope with tempo variations and a dynamic programming approach to handle pitch drifting for finding similarities and estimating the length of the repeating segment. A probabilistic framework based on HMM is used to find segment boundaries, searching for an optimal match between the expected segment length, between-segment similarities, and likely locations of segment beginnings. Evaluation of several current state-of-the-art approaches for segmentation of commercial music is presented and their weaknesses when dealing with folk music are exposed, such as intolerance to pitch drift and variable tempo. The proposed method is evaluated and its performance analyzed on a collection of 206 folk songs of different ensemble types: solo, two- and three-voiced, choir, instrumental, and instrumental with singing. It outperforms current commercial music segmentation methods for noninstrumental music and is on a par with the best for instrumental recordings. The method is also comparable to a more specialized method for segmentation of solo singing folk music recordings.
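    The tempo-tolerant distance at the heart of such methods is dynamic time warping; a textbook version over toy 1D feature sequences (the real method works on melodic features of field recordings and adds pitch-drift handling) looks like:

```python
# Classic dynamic-time-warping distance between two sequences. Tempo
# changes become repeated elements, which DTW absorbs at no cost.
def dtw(a, b):
    INF = float("inf")
    n, m = len(a), len(b)
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i-1][j], D[i][j-1], D[i-1][j-1])
    return D[n][m]

# The same phrase sung more slowly (repeated values) still matches exactly:
print(dtw([1, 2, 3, 2], [1, 2, 2, 3, 3, 2]))  # → 0.0
```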

  11. Review of segmentation process in consumer markets

    Directory of Open Access Journals (Sweden)

    Veronika Jadczaková

    2013-01-01

    Full Text Available Although there has been considerable debate on market segmentation over five decades, attention was mostly devoted to single stages of the segmentation process. In doing so, stages such as segmentation base selection or segment profiling have been heavily covered in the extant literature, whereas stages such as implementation of the marketing strategy or market definition were of comparably lower interest. Addressing this shortcoming, this paper strives to close the gap and give each step of the segmentation process equal treatment. Hence, the objective of this paper is two-fold. First, a snapshot of the segmentation process will be provided in a step-by-step fashion. Second, each step (where possible) will be evaluated on chosen criteria by means of description, comparison, analysis and synthesis of 32 academic papers and 13 commercial typology systems. Ultimately, the segmentation stages will be discussed in light of empirical findings prevalent in the segmentation studies, and last but not least suggestions calling for further investigation will be presented. This seven-step framework may assist when segmenting in practice, allowing for more confident targeting which in turn might prepare the ground for creating a differential advantage.

  12. Interactive segmentation techniques algorithms and performance evaluation

    CERN Document Server

    He, Jia; Kuo, C-C Jay

    2013-01-01

    This book focuses on interactive segmentation techniques, which have been extensively studied in recent decades. Interactive segmentation emphasizes clear extraction of objects of interest, whose locations are roughly indicated by human interactions based on high level perception. This book will first introduce classic graph-cut segmentation algorithms and then discuss state-of-the-art techniques, including graph matching methods, region merging and label propagation, clustering methods, and segmentation methods based on edge detection. A comparative analysis of these methods will be provided.

  13. Simpevarp - site descriptive model version 0

    International Nuclear Information System (INIS)

    2002-11-01

    During 2002, SKB is starting detailed investigations at two potential sites for a deep repository in the Precambrian rocks of the Fennoscandian Shield. The present report concerns one of those sites, Simpevarp, which lies in the municipality of Oskarshamn, on the southeast coast of Sweden, about 250 kilometres south of Stockholm. The site description will have two main components: a written synthesis of the site, summarising the current state of knowledge, as documented in the databases containing the primary data from the site investigations, and one or several site descriptive models, in which the collected information is interpreted and presented in a form which can be used in numerical models for rock engineering, environmental impact and long-term safety assessments. SKB maintains two main databases at the present time, a site characterisation database called SICADA and a geographic information system called SKB GIS. The site descriptive model will be developed and presented with the aid of the SKB GIS capabilities, and with SKB's Rock Visualisation System (RVS), which is also linked to SICADA. The version 0 model forms an important framework for subsequent model versions, which are developed successively, as new information from the site investigations becomes available. Version 0 is developed out of the information available at the start of the site investigation. In the case of Simpevarp, this is essentially the information which was compiled for the Oskarshamn feasibility study, which led to the choice of that area as a favourable object for further study, together with information collected since its completion. This information, with the exception of the extensive database from the nearby Aespoe Hard Rock Laboratory, is mainly 2D in nature (surface data), and is general and regional, rather than site-specific, in content.
Against this background, the present report consists of the following components: an overview of the present content of the databases

  14. Fast Segmentation From Blurred Data in 3D Fluorescence Microscopy.

    Science.gov (United States)

    Storath, Martin; Rickert, Dennis; Unser, Michael; Weinmann, Andreas

    2017-10-01

    We develop a fast algorithm for segmenting 3D images from linear measurements based on the Potts model (or piecewise constant Mumford-Shah model). To that end, we first derive suitable space discretizations of the 3D Potts model, which are capable of dealing with 3D images defined on non-cubic grids. Our discretization allows us to utilize a specific splitting approach, which results in decoupled subproblems of moderate size. The crucial point in the 3D setup is that the number of independent subproblems is so large that we can reasonably exploit the parallel processing capabilities of the graphics processing units (GPUs). Our GPU implementation is up to 18 times faster than the sequential CPU version. This allows even large volumes to be processed in acceptable runtimes. As a further contribution, we extend the algorithm in order to deal with non-negativity constraints. We demonstrate the efficiency of our method for combined image deconvolution and segmentation on simulated data and on real 3D wide field fluorescence microscopy data.
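    The decoupled subproblems the splitting produces are essentially 1D Potts problems; a toy dynamic-programming solver for one such line (assuming a squared-error data term; the function name and example are hypothetical, and the paper's GPU solver handles many such lines in parallel) can be sketched as:

```python
# 1D Potts model: minimize squared error of a piecewise-constant fit
# plus gamma per jump. O(n^2) dynamic programming with prefix sums.

def potts_1d(data, gamma):
    n = len(data)
    ps, pss = [0.0], [0.0]              # prefix sums of values and squares
    for v in data:
        ps.append(ps[-1] + v)
        pss.append(pss[-1] + v * v)

    def seg_cost(i, j):
        # squared deviation of data[i..j] from its mean, in O(1)
        s = ps[j + 1] - ps[i]
        m = j - i + 1
        return (pss[j + 1] - pss[i]) - s * s / m

    best = [0.0] * (n + 1)              # best[j] = optimal cost of data[0..j-1]
    jump = [0] * (n + 1)                # start index of the last segment
    for j in range(1, n + 1):
        best[j], jump[j] = min(
            (best[i] + seg_cost(i, j - 1) + (gamma if i > 0 else 0.0), i)
            for i in range(j))
    bounds, j = [], n                   # backtrack the segment boundaries
    while j > 0:
        bounds.append((jump[j], j - 1))
        j = jump[j]
    return bounds[::-1]

print(potts_1d([1, 1, 1, 5, 5, 5], 1.0))  # → [(0, 2), (3, 5)]
```

    Raising `gamma` makes jumps more expensive, eventually collapsing the fit to a single segment.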

  15. Segmentation of liver tumors on CT images

    International Nuclear Information System (INIS)

    Pescia, D.

    2011-01-01

    This thesis is dedicated to 3D segmentation of liver tumors in CT images. This is a task of great clinical interest, since it gives physicians reproducible and reliable methods for segmenting such lesions. Accurate segmentation would indeed help them during the evaluation of the lesions, the choice of treatment and treatment planning. Such a complex segmentation task must cope with three main scientific challenges: (i) the highly variable shape of the structures being sought, (ii) their similarity of appearance compared with their surrounding medium and finally (iii) the low signal-to-noise ratio observed in these images. This problem is addressed in a clinical context through a two-step approach, consisting of the segmentation of the entire liver envelope, before segmenting the tumors present within the envelope. We begin by proposing an atlas-based approach for computing pathological liver envelopes. Initially, images are pre-processed to compute envelopes that wrap around binary masks, in an attempt to obtain liver envelopes from an estimated segmentation of healthy liver parenchyma. A new statistical atlas is then introduced and used for segmentation through its diffeomorphic registration to the new image. This segmentation is achieved through the combination of image matching costs as well as spatial and appearance priors, using a multi-scale approach with MRF. The second step of our approach is dedicated to the segmentation of lesions contained within the envelopes, using a combination of machine learning techniques and graph-based methods. First, an appropriate feature space is considered, involving texture descriptors determined through filtering at various scales and orientations. Then, state-of-the-art machine learning techniques are used to determine the most relevant features, as well as the hyperplane that separates the feature space of tumoral voxels from that of healthy tissues. 
Segmentation is then

  16. Segment-Tube: Spatio-Temporal Action Localization in Untrimmed Videos with Per-Frame Segmentation

    OpenAIRE

    Le Wang; Xuhuan Duan; Qilin Zhang; Zhenxing Niu; Gang Hua; Nanning Zheng

    2018-01-01

    Inspired by the recent spatio-temporal action localization efforts with tubelets (sequences of bounding boxes), we present a new spatio-temporal action localization detector Segment-tube, which consists of sequences of per-frame segmentation masks. The proposed Segment-tube detector can temporally pinpoint the starting/ending frame of each action category in the presence of preceding/subsequent interference actions in untrimmed videos. Simultaneously, the Segment-tube detector produces per-fr...

  17. Model-based version management system framework

    International Nuclear Information System (INIS)

    Mehmood, W.

    2016-01-01

    In this paper we present a model-based version management system. A version management system (VMS), a branch of software configuration management (SCM), aims to provide a controlling mechanism for the evolution of software artifacts created during the software development process. Controlling the evolution requires many activities, such as construction and creation of versions, identification of differences between versions, conflict detection and merging. Traditional VMS systems are file-based and consider software systems as a set of text files. File-based VMS systems are not adequate for performing software configuration management activities, such as version control, on software artifacts produced in earlier phases of the software life cycle. New challenges of model differencing, merging, and evolution control arise when using models as the central artifact. The goal of this work is to present a generic framework for model-based VMS which can be used to overcome the problems of traditional file-based VMS systems and provide model versioning services. (author)

  18. Detection and analysis of ancient segmental duplications in mammalian genomes.

    Science.gov (United States)

    Pu, Lianrong; Lin, Yu; Pevzner, Pavel A

    2018-05-07

    Although segmental duplications (SDs) represent hotbeds for genomic rearrangements and the emergence of new genes, there are still no easy-to-use tools for identifying SDs. Moreover, while most previous studies focused on recently emerged SDs, detection of ancient SDs remains an open problem. We developed the SDquest algorithm for SD finding and applied it to analyzing SDs in the human, gorilla, and mouse genomes. Our results demonstrate that previous studies missed many SDs in these genomes and show that SDs account for at least 6.05% of the human genome (version hg19), a 17% increase over the previous estimate. Moreover, SDquest classified 6.42% of the latest GRCh38 version of the human genome as SDs, a large increase as compared to previous studies. We thus propose to re-evaluate the evolution of SDs based on their accurate representation across multiple genomes. Toward this goal, we analyzed the complex mosaic structure of SDs and decomposed mosaic SDs into elementary SDs, a prerequisite for follow-up evolutionary analysis. We also introduced the concept of the breakpoint graph of mosaic SDs, which revealed SD hotspots and suggested that some SDs may have originated from circular extrachromosomal DNA (ecDNA), not unlike the ecDNA that contributes to accelerated evolution in cancer. © 2018 Pu et al.; Published by Cold Spring Harbor Laboratory Press.

  19. Relationship between acoustic voice onset and offset and selected instances of oscillatory onset and offset in young healthy males and females

    Science.gov (United States)

    Patel, Rita; Forrest, Karen; Hedges, Drew

    2016-01-01

    Objective To investigate the relationship between (1) the onset of the acoustic signal and the pre-phonatory phases associated with oscillatory onset and (2) the offset of the acoustic signal and the post-phonatory events associated with oscillatory offset across vocally healthy adults. Subjects and Methods High-speed videoendoscopy was captured simultaneously with the acoustic signal during repeated production of /hi.hi.hi/ at typical pitch and loudness from 56 vocally healthy adults (age 20–42 years; 21 male, 35 female). The relationship between the acoustic sound pressure signal and oscillatory onset/offset events from the glottal area waveforms (GAW) was statistically investigated using a multivariate linear regression analysis. Results The onset of the acoustic signal (X1a) is a significant predictor of the onset of first oscillations (X1g) and the onset of sustained oscillations (X2g). X1a as well as gender are significant predictors of the first instance of medial contact (X1.5g). The offset of the acoustic signal (X2a) is a significant predictor of the first instance of oscillatory offset (X3g), the first instance of incomplete glottal closure (X3.5g), and the cessation of vocal fold motion (X4g). Conclusions The acoustic signal onset is closely related to the first medial contact of the vocal folds, but the latency between these events is longer for females than for males. The offset of the acoustic signal occurs immediately after incomplete glottal adduction. The emerging normative group latencies between the onset/offset of the acoustic signal and the GAW from this study appear promising for future investigations. PMID:27769696

  20. Mild toxic anterior segment syndrome mimicking delayed onset toxic anterior segment syndrome after cataract surgery

    Directory of Open Access Journals (Sweden)

    Su-Na Lee

    2014-01-01

    Full Text Available Toxic anterior segment syndrome (TASS) is an acute sterile postoperative anterior segment inflammation that may occur after anterior segment surgery. I report herein a case of mild TASS that developed in one eye after bilateral uneventful cataract surgery; it was masked during the early postoperative period under a steroid eye drop and mimicked delayed-onset TASS after switching to a weaker steroid eye drop.

  1. GeoSegmenter: A statistically learned Chinese word segmenter for the geoscience domain

    Science.gov (United States)

    Huang, Lan; Du, Youfu; Chen, Gongyang

    2015-03-01

    Unlike English, the Chinese language has no spaces between words. Segmenting texts into words, known as the Chinese word segmentation (CWS) problem, thus becomes a fundamental issue for processing Chinese documents and the first step in many text mining applications, including information retrieval, machine translation and knowledge acquisition. However, for the geoscience subject domain, the CWS problem remains unsolved. Although a generic segmenter can be applied to process geoscience documents, it lacks the domain-specific knowledge and consequently its segmentation accuracy drops dramatically. This motivated us to develop a segmenter specifically for the geoscience subject domain: the GeoSegmenter. We first proposed a generic two-step framework for domain-specific CWS. Following this framework, we built GeoSegmenter using conditional random fields, a principled statistical framework for sequence learning. Specifically, GeoSegmenter first identifies general terms by using a generic baseline segmenter. Then it recognises geoscience terms by learning and applying a model that can transform the initial segmentation into the goal segmentation. Empirical experimental results on geoscience documents and benchmark datasets showed that GeoSegmenter could effectively recognise both geoscience terms and general terms.
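    The two-step framework can be caricatured without the CRF machinery: a generic baseline segmentation is transformed toward the goal segmentation, here by a hypothetical domain lexicon that merges adjacent tokens into domain terms (the real GeoSegmenter learns this transformation with conditional random fields):

```python
# Toy second step: merge adjacent baseline tokens that together form a
# known domain term. GEO_LEXICON and the sample tokens are invented for
# illustration; they are not from the paper.
GEO_LEXICON = {"花岗岩", "沉积盆地"}   # hypothetical geoscience terms

def refine(tokens, lexicon, max_len=3):
    out, i = [], 0
    while i < len(tokens):
        # try the longest merge first (greedy longest-match)
        for k in range(min(max_len, len(tokens) - i), 1, -1):
            cand = "".join(tokens[i:i + k])
            if cand in lexicon:
                out.append(cand)
                i += k
                break
        else:
            out.append(tokens[i])
            i += 1
    return out

# Baseline split "花岗 / 岩 / 形成" is corrected to "花岗岩 / 形成":
print(refine(["花岗", "岩", "形成"], GEO_LEXICON))
```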

  2. NUCLEAR SEGMENTATION IN MICROSCOPE CELL IMAGES: A HAND-SEGMENTED DATASET AND COMPARISON OF ALGORITHMS

    OpenAIRE

    Coelho, Luís Pedro; Shariff, Aabid; Murphy, Robert F.

    2009-01-01

    Image segmentation is an essential step in many image analysis pipelines and many algorithms have been proposed to solve this problem. However, they are often evaluated subjectively or based on a small number of examples. To fill this gap, we hand-segmented a set of 97 fluorescence microscopy images (a total of 4009 cells) and objectively evaluated some previously proposed segmentation algorithms.

  3. Robust shape regression for supervised vessel segmentation and its application to coronary segmentation in CTA

    DEFF Research Database (Denmark)

    Schaap, Michiel; van Walsum, Theo; Neefjes, Lisan

    2011-01-01

    This paper presents a vessel segmentation method which learns the geometry and appearance of vessels in medical images from annotated data and uses this knowledge to segment vessels in unseen images. Vessels are segmented in a coarse-to-fine fashion. First, the vessel boundaries are estimated...

  4. Instance-Based Question Answering

    Science.gov (United States)

    2006-12-01

    Keywords: cluster-based query expansion, learning answering strategies, machine learning in NLP. During recent years, question...process is typically tedious and involves expertise in crafting and implementing these models (e.g. rule-based), utilizing NLP resources, and...questions. For languages that use capitalization (e.g. not Chinese or Arabic) for named entities, IBQA can make use of NE classing (e.g. “Bob Marley

  5. Limb-segment selection in drawing behaviour

    NARCIS (Netherlands)

    Meulenbroek, R G; Rosenbaum, D A; Thomassen, A.J.W.M.; Schomaker, L R

    How do we select combinations of limb segments to carry out physical tasks? Three possible determinants of limb-segment selection are hypothesized here: (1) optimal amplitudes and frequencies of motion for the effectors; (2) preferred movement axes for the effectors; and (3) a tendency to continue

  6. LIMB-SEGMENT SELECTION IN DRAWING BEHAVIOR

    NARCIS (Netherlands)

    MEULENBROEK, RGJ; ROSENBAUM, DA; THOMASSEN, AJWM; SCHOMAKER, LRB; Schomaker, Lambertus

    How do we select combinations of limb segments to carry out physical tasks? Three possible determinants of limb-segment selection are hypothesized here: (1) optimal amplitudes and frequencies of motion for the effectors; (2) preferred movement axes for the effectors; and (3) a tendency to continue

  7. Handwriting segmentation of unconstrained Oriya text

    Indian Academy of Sciences (India)

    Based on vertical projection profiles and structural features of Oriya characters, text lines are segmented into words. For character segmentation, at first, the isolated and connected (touching) characters in a word are detected. Using structural, topological and water reservoir concept-based features, characters of the word ...

  8. Reflection symmetry-integrated image segmentation.

    Science.gov (United States)

    Sun, Yu; Bhanu, Bir

    2012-09-01

    This paper presents a new symmetry-integrated region-based image segmentation method. The method is developed to obtain improved image segmentation by exploiting image symmetry. It is realized by constructing a symmetry token that can be flexibly embedded into segmentation cues. Interest points are initially extracted from an image by the SIFT operator and they are further refined for detecting the global bilateral symmetry. A symmetry affinity matrix is then computed using the symmetry axis and it is used explicitly as a constraint in a region growing algorithm in order to refine the symmetry of the segmented regions. A multi-objective genetic search finds the segmentation result with the highest performance for both segmentation and symmetry, which is close to the global optimum. The method has been investigated experimentally in challenging natural images and images containing man-made objects. It is shown that the proposed method outperforms current segmentation methods both with and without exploiting symmetry. A thorough experimental analysis indicates that symmetry plays an important role as a segmentation cue, in conjunction with other attributes like color and texture.

  9. Segmentation precedes face categorization under suboptimal conditions

    NARCIS (Netherlands)

    Van Den Boomen, Carlijn; Fahrenfort, Johannes J; Snijders, Tineke M; Kemner, Chantal

    2015-01-01

    Both categorization and segmentation processes play a crucial role in face perception. However, the functional relation between these subprocesses is currently unclear. The present study investigates the temporal relation between segmentation-related and category-selective responses in the brain,

  10. Bayesian segmentation of brainstem structures in MRI

    DEFF Research Database (Denmark)

    Iglesias, Juan Eugenio; Van Leemput, Koen; Bhatt, Priyanka

    2015-01-01

    the brainstem structures in novel scans. Thanks to the generative nature of the scheme, the segmentation method is robust to changes in MRI contrast or acquisition hardware. Using cross validation, we show that the algorithm can segment the structures in previously unseen T1 and FLAIR scans with great accuracy...

  11. Congenital segmental dilatation of the colon

    African Journals Online (AJOL)

    Congenital segmental dilatation of the colon is a rare cause of intestinal obstruction in neonates. We report a case of congenital segmental dilatation of the colon and highlight the clinical, radiological, and histopathological features of this entity. Proper surgical treatment was initiated on the basis of preoperative radiological ...

  12. 47 CFR 101.1505 - Segmentation plan.

    Science.gov (United States)

    2010-10-01

    ... 47 Telecommunication 5 (2010-10-01). Segmentation plan, § 101.1505. FEDERAL COMMUNICATIONS COMMISSION (CONTINUED), SAFETY AND SPECIAL RADIO SERVICES, FIXED MICROWAVE SERVICES, Service and Technical Rules for the 70/80/90 GHz Bands. (a) An entity...

  13. Market Segmentation Using Bayesian Model Based Clustering

    NARCIS (Netherlands)

    Van Hattum, P.

    2009-01-01

    This dissertation deals with two basic problems in marketing, that are market segmentation, which is the grouping of persons who share common aspects, and market targeting, which is focusing your marketing efforts on one or more attractive market segments. For the grouping of persons who share

  14. Convolutional Neural Networks for SAR Image Segmentation

    DEFF Research Database (Denmark)

    Malmgren-Hansen, David; Nobel-Jørgensen, Morten

    2015-01-01

    Segmentation of Synthetic Aperture Radar (SAR) images has several uses, but it is a difficult task due to a number of properties related to SAR images. In this article we show how Convolutional Neural Networks (CNNs) can easily be trained for SAR image segmentation with good results. Besides...

  15. Storing tooth segments for optimal esthetics

    NARCIS (Netherlands)

    Tuzuner, T.; Turgut, S.; Özen, B.; Kılınç, H.; Bagis, B.

    2016-01-01

    Objective: A fractured whole crown segment can be reattached to its remnant; crowns from extracted teeth may be used as pontics in splinting techniques. We aimed to evaluate the effect of different storage solutions on tooth segment optical properties after different durations. Study design: Sixty

  16. Benefit segmentation of the fitness market.

    Science.gov (United States)

    Brown, J D

    1992-01-01

    While considerable attention is being paid to the fitness and wellness needs of people by healthcare and related marketing organizations, little research attention has been directed to identifying the market segments for fitness based upon consumers' perceived benefits of fitness. This article describes three distinct segments of fitness consumers comprising an estimated 50 percent of households. Implications for marketing strategies are also presented.

  17. Moving window segmentation framework for point clouds

    NARCIS (Netherlands)

    Sithole, G.; Gorte, B.G.H.

    2012-01-01

    As lidar point clouds become larger streamed processing becomes more attractive. This paper presents a framework for the streamed segmentation of point clouds with the intention of segmenting unstructured point clouds in real-time. The framework is composed of two main components. The first

  18. Current segmented gamma-ray scanner technology

    International Nuclear Information System (INIS)

    Bjork, C.W.

    1987-01-01

    A new generation of segmented gamma-ray scanners has been developed at Los Alamos for scrap and waste measurements at the Savannah River Plant and the Los Alamos Plutonium Facility. The new designs are highly automated and exhibit special features such as good segmentation and thorough shielding to improve performance

  19. Creating Web Area Segments with Google Analytics

    Science.gov (United States)

    Segments allow you to quickly access data for a predefined set of Sessions or Users, such as government or education users, or sessions in a particular state. You can then apply this segment to any report within the Google Analytics (GA) interface.

  20. Unsupervised Retinal Vessel Segmentation Using Combined Filters.

    Directory of Open Access Journals (Sweden)

    Wendeson S Oliveira

    Full Text Available Image segmentation of retinal blood vessels is a process that can help to predict and diagnose cardiovascular-related diseases, such as hypertension and diabetes, which are known to affect the appearance of the retinal blood vessels. This work proposes an unsupervised method for the segmentation of retinal vessel images, using a combined matched filter, Frangi's filter and Gabor wavelet filter to enhance the images. The combination of these three filters to improve the segmentation is the main motivation of this work. We investigate two approaches to perform the filter combination: weighted mean and median ranking. Segmentation methods are tested after the vessel enhancement. Enhanced images with median ranking are segmented using a simple threshold criterion. Two segmentation procedures are applied when considering enhanced retinal images using the weighted mean approach. The first method is based on deformable models and the second uses fuzzy C-means for the image segmentation. The procedure is evaluated using two public image databases, DRIVE and STARE. The experimental results demonstrate that the proposed methods perform well for vessel segmentation in comparison with state-of-the-art methods.
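
    The two combination schemes named in the abstract (weighted mean and median ranking) can be sketched generically. The three filter responses below are random stand-ins for the matched, Frangi, and Gabor wavelet outputs, and the weights and threshold are illustrative assumptions, not the authors' values:

    ```python
    import numpy as np

    # Toy stand-ins for the three enhancement responses; in practice these
    # come from filtering the retinal image with the matched, Frangi and
    # Gabor wavelet filters.
    rng = np.random.default_rng(0)
    responses = [rng.random((64, 64)) for _ in range(3)]

    def weighted_mean(maps, weights):
        """Combine enhancement maps by a per-pixel weighted mean."""
        w = np.asarray(weights, dtype=float)
        w = w / w.sum()
        return sum(wi * m for wi, m in zip(w, maps))

    def median_ranking(maps):
        """Rank each map's pixels, then take the per-pixel median rank."""
        ranked = []
        for m in maps:
            flat = m.ravel()
            ranks = np.empty_like(flat)
            ranks[np.argsort(flat)] = np.arange(flat.size)
            ranked.append(ranks.reshape(m.shape))
        return np.median(ranked, axis=0)

    combined = weighted_mean(responses, [0.5, 0.3, 0.2])  # assumed weights
    ranked = median_ranking(responses)
    vessels = combined > combined.mean()  # simple threshold criterion
    ```

    The real pipeline would follow the enhanced image with deformable models or fuzzy C-means, as the abstract describes.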

  1. A NEW APPROACH TO SEGMENT HANDWRITTEN DIGITS

    NARCIS (Netherlands)

    Oliveira, L.S.; Lethelier, E.; Bortolozzi, F.; Sabourin, R.

    2004-01-01

    This article presents a new segmentation approach applied to unconstrained handwritten digits. The novelty of the proposed algorithm is based on the combination of two types of structural features in order to provide the best segmentation path between connected entities. In this article, we first

  2. Spinal cord grey matter segmentation challenge.

    Science.gov (United States)

    Prados, Ferran; Ashburner, John; Blaiotta, Claudia; Brosch, Tom; Carballido-Gamio, Julio; Cardoso, Manuel Jorge; Conrad, Benjamin N; Datta, Esha; Dávid, Gergely; Leener, Benjamin De; Dupont, Sara M; Freund, Patrick; Wheeler-Kingshott, Claudia A M Gandini; Grussu, Francesco; Henry, Roland; Landman, Bennett A; Ljungberg, Emil; Lyttle, Bailey; Ourselin, Sebastien; Papinutto, Nico; Saporito, Salvatore; Schlaeger, Regina; Smith, Seth A; Summers, Paul; Tam, Roger; Yiannakas, Marios C; Zhu, Alyssa; Cohen-Adad, Julien

    2017-05-15

    An important image processing step in spinal cord magnetic resonance imaging is the ability to reliably and accurately segment grey and white matter for tissue-specific analysis. There are several semi- or fully-automated segmentation methods for cervical cord cross-sectional area measurement with excellent performance, close or equal to manual segmentation. However, grey matter segmentation is still challenging due to the small cross-sectional size and shape, and active research is being conducted by several groups around the world in this field. Therefore a grey matter spinal cord segmentation challenge was organised to test the capabilities of various methods using the same multi-centre and multi-vendor dataset, acquired with distinct 3D gradient-echo sequences. This challenge aimed to characterize the state-of-the-art in the field as well as to identify new opportunities for future improvements. Six spinal cord grey matter segmentation methods, developed independently by research groups across the world, were compared to manual segmentation outcomes, the present gold standard. All algorithms provided good overall results for detecting the grey matter butterfly, albeit with variable performance in certain quality-of-segmentation metrics. The data have been made publicly available and the challenge web site remains open to new submissions. No modifications were introduced to any of the presented methods as a result of this challenge for the purposes of this publication. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  3. Scale selection for supervised image segmentation

    DEFF Research Database (Denmark)

    Li, Yan; Tax, David M J; Loog, Marco

    2012-01-01

    schemes are usually unsupervised, as they do not take into account the actual segmentation problem at hand. In this paper, we consider the problem of selecting scales, which aims at an optimal discrimination between user-defined classes in the segmentation. We show the deficiency of the classical...

  4. Performing the processing required to automatically get a PDF/A version of the CERN Library documentation

    CERN Document Server

    Molina Garcia-Retamero, Antonio

    2015-01-01

    The aim of the project was to perform the processing required to automatically produce a PDF/A version of the CERN Library documentation. For this, it is necessary to extract as much metadata as possible from the source files and to inject the required data into the original source files, creating new ones ready to be compiled with all related dependencies. In addition, I proposed the creation of an HTML version consistent with the PDF and navigable for easy access; I experimented with Natural Language Processing for extracting metadata; and I proposed injecting the CERN Library documentation into the HTML version of the long write-ups where it is referenced (for instance, when a CERN Library function is referenced in sample code). Finally, I designed and implemented a Graphical User Interface in order to simplify the process for the user.

  5. Improving image segmentation by learning region affinities

    Energy Technology Data Exchange (ETDEWEB)

    Prasad, Lakshman [Los Alamos National Laboratory; Yang, Xingwei [TEMPLE UNIV.; Latecki, Longin J [TEMPLE UNIV.

    2010-11-03

    We utilize the context information of other regions in hierarchical image segmentation to learn new region affinities. It is well known that a single choice of quantization of an image space is highly unlikely to be a common optimal quantization level for all categories. Each level of quantization has its own benefits. Therefore, we utilize the hierarchical information among different quantizations as well as the spatial proximity of their regions. The proposed affinity learning takes into account higher-order relations among image regions, both local and long range, making it robust to instabilities and errors in the original, pairwise region affinities. Once the learnt affinities are obtained, we use a standard image segmentation algorithm to get the final segmentation. Moreover, the learnt affinities can be naturally utilized in interactive segmentation. Experimental results on the Berkeley Segmentation Dataset and the MSRC Object Recognition Dataset are comparable to, and in some aspects better than, state-of-the-art methods.

  6. A Hybrid Technique for Medical Image Segmentation

    Directory of Open Access Journals (Sweden)

    Alamgir Nyma

    2012-01-01

    Full Text Available Medical image segmentation is an essential and challenging aspect of computer-aided diagnosis and pattern recognition research. This paper proposes a hybrid method for magnetic resonance (MR) image segmentation. We first remove impulsive noise inherent in MR images by utilizing a vector median filter. Subsequently, Otsu thresholding is used as an initial coarse segmentation method that finds the homogeneous regions of the input image. Finally, an enhanced suppressed fuzzy c-means is used to partition brain MR images into multiple segments, employing an optimal suppression factor for clustering the given data set. To evaluate the robustness of the proposed approach in noisy environments, we add different types and amounts of noise to T1-weighted brain MR images. Experimental results show that the proposed algorithm outperforms other FCM-based algorithms in terms of segmentation accuracy for both noise-free and noise-inserted MR images.
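
    The Otsu coarse-segmentation step can be sketched as a generic between-class-variance maximization (a textbook implementation on a toy bimodal image, not the authors' code):

    ```python
    import numpy as np

    def otsu_threshold(img, nbins=256):
        """Return the gray level that maximizes between-class variance (Otsu)."""
        hist, edges = np.histogram(img.ravel(), bins=nbins)
        hist = hist.astype(float)
        centers = (edges[:-1] + edges[1:]) / 2
        weight1 = np.cumsum(hist)              # pixels below each cut
        weight2 = hist.sum() - weight1         # pixels above each cut
        cum_mean = np.cumsum(hist * centers)
        mean1 = cum_mean / np.maximum(weight1, 1e-12)
        mean2 = (cum_mean[-1] - cum_mean) / np.maximum(weight2, 1e-12)
        var_between = weight1 * weight2 * (mean1 - mean2) ** 2
        # exclude the last cut, which leaves the upper class empty
        return centers[np.argmax(var_between[:-1])]

    # Bimodal toy image: dark background plus a bright square
    img = np.zeros((32, 32)); img[8:24, 8:24] = 1.0
    t = otsu_threshold(img)
    mask = img > t
    ```

    The resulting coarse mask would then seed the suppressed fuzzy c-means refinement described in the abstract.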

  7. Monitoring fish distributions along electrofishing segments

    Science.gov (United States)

    Miranda, Leandro E.

    2014-01-01

    Electrofishing is widely used to monitor fish species composition and relative abundance in streams and lakes. According to standard protocols, multiple segments are selected in a body of water to monitor population relative abundance as the ratio of total catch to total sampling effort. The standard protocol provides an assessment of fish distribution at a macrohabitat scale among segments, but not within segments. An ancillary protocol was developed for assessing fish distribution at a finer scale within electrofishing segments. The ancillary protocol was used to estimate spacing, dispersion, and association of two species along shore segments in two local reservoirs. The added information provided by the ancillary protocol may be useful for assessing fish distribution relative to fish of the same species, to fish of different species, and to environmental or habitat characteristics.
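
    Under hypothetical capture positions, the spacing and dispersion statistics that such an ancillary protocol reports might be computed as follows (positions, bin width, and values are illustrative assumptions only):

    ```python
    import numpy as np

    # Hypothetical capture positions (metres along a shore segment) for
    # one species, as might be recorded by the finer-scale protocol.
    pos = np.array([3.0, 5.0, 6.0, 14.0, 15.0, 30.0])

    # Spacing: gaps between successive fish along the segment
    spacing = np.diff(np.sort(pos))

    # Dispersion: variance-to-mean ratio of counts per 10 m bin
    # (>1 suggests clumping, ~1 random, <1 even spacing)
    counts, _ = np.histogram(pos, bins=np.arange(0, 41, 10))
    dispersion = counts.var() / counts.mean()
    ```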

  8. Aging and the segmentation of narrative film.

    Science.gov (United States)

    Kurby, Christopher A; Asiala, Lillian K E; Mills, Steven R

    2014-01-01

    The perception of event structure in continuous activity is important for everyday comprehension. Although the segmentation of experience into events is a normal concomitant of perceptual processing, previous research has shown age differences in the ability to perceive structure in naturalistic activity, such as a movie of someone washing a car. However, past research has also shown that older adults have a preserved ability to comprehend events in narrative text, which suggests that narrative may improve the event processing of older adults. This study tested whether there are age differences in event segmentation at the intersection of continuous activity and narrative: narrative film. Younger and older adults watched and segmented a narrative film, The Red Balloon, into coarse and fine events. Changes in situational features, such as changes in characters, goals, and objects, predicted segmentation. Analyses revealed little age difference in segmentation behavior. This suggests the possibility that narrative structure supports event understanding for older adults.

  9. Colour application on mammography image segmentation

    Science.gov (United States)

    Embong, R.; Aziz, N. M. Nik Ab.; Karim, A. H. Abd; Ibrahim, M. R.

    2017-09-01

    The segmentation process is one of the most important steps in image processing and computer vision since it is vital in the initial stage of image analysis. Segmentation of medical images involves complex structures and requires precise segmentation results, which are necessary for clinical diagnosis such as the detection of tumour, oedema, and necrotic tissues. Since mammography images are grayscale, researchers are looking at the effect of colour in the segmentation process of medical images. Colour is known to play a significant role in the perception of object boundaries in non-medical colour images. Processing colour images requires handling more data, hence providing a richer description of objects in the scene. Colour images contain ten percent (10%) additional edge information as compared to their grayscale counterparts. Nevertheless, edge detection in colour images is more challenging than in grayscale images, as colour space is considered a vector space. In this study, we applied red, green, yellow, and blue colour maps to grayscale mammography images with the purpose of testing the effect of colours on the segmentation of abnormality regions in the mammography images. We applied the segmentation process using the Fuzzy C-means algorithm and evaluated the percentage of average relative error of area for each colour type. The results showed that segmentation with each colour map can be done successfully, even for blurred and noisy images. The size of the segmented abnormality region is also reduced compared with segmentation without a colour map. The green colour map produced the smallest percentage of average relative error (10.009%), while the yellow colour map gave the largest (11.367%).
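
    The "percentage of average relative error of area" metric used in the evaluation can be reproduced on made-up area counts (the segmented and reference areas below are illustrative assumptions, not the study's data):

    ```python
    # Hypothetical abnormality-region areas (pixel counts) for four images:
    # 'auto' from the colour-map + fuzzy C-means segmentation, 'ref' from
    # the reference delineation.
    auto = [1040, 980, 1515, 720]
    ref  = [1000, 1000, 1400, 800]

    # Relative error of area per image, as a percentage
    rel_errors = [abs(a - r) / r * 100 for a, r in zip(auto, ref)]

    # Average over the image set
    avg_rel_error = sum(rel_errors) / len(rel_errors)
    ```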

  10. SEGMENTATION OF SME PORTFOLIO IN BANKING SYSTEM

    Directory of Open Access Journals (Sweden)

    Namolosu Simona Mihaela

    2013-07-01

    Full Text Available Small and medium enterprises (SMEs) represent an important target market for commercial banks. Finding the best methods for designing and implementing optimal marketing strategies for this target, and the most suitable service model for these companies, is a continuous concern for marketing specialists and researchers in the banking system. The SME portfolio of a bank is not homogeneous; different characteristics and behaviours can be identified. The current paper presents empirical evidence about SME portfolio characteristics and the segmentation methods used in the banking system. Its purpose is to identify whether segmentation has an impact on finding optimal marketing strategies and service models, and whether this hypothesis is applicable to any commercial bank, irrespective of country or region. Some banks segment the SME portfolio by a single criterion: the annual (official) company turnover; others also consider profitability and other financial indicators of the company. In some cases, even banking behaviour becomes a criterion. In all cases, creating scenarios with different thresholds and estimating the impact on profitability and volumes are two mandatory steps in establishing the final segmentation criteria matrix. Details about each of these segmentation methods may be found in the paper. Testing the final matrix of criteria is also detailed, with the purpose of making realistic estimations. An example for lending products is provided; the product offer is presented as responding to the needs of the targeted sub-segment and is therefore correlated with the sub-segment's characteristics. Identifying key issues and trends leads to a proposed action plan. Depending on the overall strategy and commercial targets of the bank, the focus may shift, with one or more sub-segments becoming high priority (for acquisition/ activation/ retention/ cross sell/ up sell/ increase profitability etc., while

  11. Robust segmentation of medical images using a competitive Hopfield neural network as a clustering tool

    International Nuclear Information System (INIS)

    Golparvar Roozbahani, R.; Ghassemian, M. H.; Sharafat, A. R.

    2001-01-01

    This paper presents the application of a competitive Hopfield neural network to medical image segmentation. Our proposed approach consists of two steps: 1) translating segmentation of the given medical image into an optimization problem, and 2) solving this problem by a version of the Hopfield network known as the competitive Hopfield neural network. Segmentation is considered as a clustering problem whose validity criterion is based on both intra-set distance and inter-set distance. The algorithm proposed in this paper is based on gray-level features only. This leads to near-optimal solutions if both intra-set distance and inter-set distance are considered at the same time. If only one of these distances is considered, the result of the segmentation process by the competitive Hopfield neural network will be far from the optimal solution, and incorrect even for very simple cases. Furthermore, the algorithm sometimes arrives at unacceptable states. Both problems may be solved by including both intra-set and inter-set distances in the segmentation (optimization) process. The performance of the proposed algorithm is tested on both phantom and real medical images. The promising results and the robustness of the algorithm to system noise indicate near-optimal solutions
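
    As an illustration of the underlying clustering objective only (plain 1-D k-means on gray levels, not the competitive Hopfield formulation itself), minimizing intra-set distance on gray-level features might look like:

    ```python
    import numpy as np

    def kmeans_gray(img, k=2, iters=20):
        """Cluster pixel gray levels to minimize intra-set distance (1-D k-means)."""
        x = img.ravel().astype(float)
        centers = np.linspace(x.min(), x.max(), k)  # spread initial centers
        for _ in range(iters):
            # assign each pixel to the nearest cluster center
            labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
            # move each center to the mean of its cluster
            for j in range(k):
                if np.any(labels == j):
                    centers[j] = x[labels == j].mean()
        return labels.reshape(img.shape), centers

    # Toy image with two flat gray-level regions
    img = np.zeros((16, 16)); img[:, 8:] = 200.0
    labels, centers = kmeans_gray(img, k=2)
    ```

    The paper's point is that an objective using only intra-set distance (as here) can fail; its validity criterion adds inter-set distance as well.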

  12. Semi-automatic geographic atrophy segmentation for SD-OCT images.

    Science.gov (United States)

    Chen, Qiang; de Sisternes, Luis; Leng, Theodore; Zheng, Luoluo; Kutzscher, Lauren; Rubin, Daniel L

    2013-01-01

    Geographic atrophy (GA) is a condition that is associated with retinal thinning and loss of the retinal pigment epithelium (RPE) layer. It appears in advanced stages of non-exudative age-related macular degeneration (AMD) and can lead to vision loss. We present a semi-automated GA segmentation algorithm for spectral-domain optical coherence tomography (SD-OCT) images. The method first identifies and segments a surface between the RPE and the choroid to generate retinal projection images in which the projection region is restricted to a sub-volume of the retina where the presence of GA can be identified. Subsequently, a geometric active contour model is employed to automatically detect and segment the extent of GA in the projection images. Two image data sets, consisting of 55 SD-OCT scans from twelve eyes in eight patients with GA and 56 SD-OCT scans from 56 eyes in 56 patients with GA, respectively, were utilized to qualitatively and quantitatively evaluate the proposed GA segmentation method. Experimental results suggest that the proposed algorithm can achieve high segmentation accuracy. The mean GA overlap ratios between our proposed method and outlines drawn in the SD-OCT scans, our method and outlines drawn in the fundus auto-fluorescence (FAF) images, and the commercial software (Carl Zeiss Meditec proprietary software, Cirrus version 6.0) and outlines drawn in FAF images were 72.60%, 65.88% and 59.83%, respectively.
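
    The overlap ratio used in the evaluation is a standard intersection-over-union; a sketch on two hypothetical GA masks (shapes and placement are made up for illustration):

    ```python
    import numpy as np

    # Two hypothetical binary GA masks on a projection image: one from the
    # semi-automated method, one drawn manually.
    a = np.zeros((20, 20), bool); a[2:12, 2:12] = True   # 100 px
    b = np.zeros((20, 20), bool); b[4:14, 4:14] = True   # 100 px, shifted

    # Overlap ratio: |A ∩ B| / |A ∪ B|
    overlap = np.logical_and(a, b).sum() / np.logical_or(a, b).sum()
    overlap_pct = round(overlap * 100, 2)
    ```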

  13. Brain tumor segmentation in multi-spectral MRI using convolutional neural networks (CNN).

    Science.gov (United States)

    Iqbal, Sajid; Ghani, M Usman; Saba, Tanzila; Rehman, Amjad

    2018-04-01

    A tumor can be found in any area of the brain and can be of any size, shape, and contrast. Multiple tumors of different types may exist in a human brain at the same time. Accurate tumor area segmentation is considered a primary step in the treatment of brain tumors. Deep learning is a set of promising techniques that can provide better results than non-deep-learning techniques for segmenting the tumorous part inside a brain. This article presents a deep convolutional neural network (CNN) to segment brain tumors in MRIs. The proposed network uses the BRATS segmentation challenge dataset, which is composed of images obtained through four different modalities. Accordingly, we present an extended version of an existing network to solve the segmentation problem. The network architecture consists of multiple neural network layers connected in sequential order, with convolutional feature maps fed in at the peer level. Experimental results on the BRATS 2015 benchmark data show the usability of the proposed approach and its superiority over other approaches in this area of research. © 2018 Wiley Periodicals, Inc.
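
    The basic CNN building block, a convolution followed by a ReLU, can be sketched in plain NumPy. The toy image, hand-set kernel, and sizes below are assumptions for illustration; a real segmentation network learns its kernels and stacks many such layers across the four modalities:

    ```python
    import numpy as np

    def conv2d(x, kernel):
        """Valid 2-D convolution of a single-channel image with one kernel."""
        kh, kw = kernel.shape
        H, W = x.shape
        out = np.zeros((H - kh + 1, W - kw + 1))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                out[i, j] = np.sum(x[i:i + kh, j:j + kw] * kernel)
        return out

    # One modality of a toy input with a bright "tumor" blob
    img = np.zeros((10, 10)); img[3:7, 3:7] = 1.0
    edge = np.array([[1.0, 0.0, -1.0]] * 3)   # hand-set vertical-edge kernel
    fmap = np.maximum(conv2d(img, edge), 0)   # ReLU activation
    ```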

  14. First validation of the new continuous energy version of the MORET5 Monte Carlo code

    International Nuclear Information System (INIS)

    Miss, Joachim; Bernard, Franck; Forestier, Benoit; Haeck, Wim; Richet, Yann; Jacquet, Olivier

    2008-01-01

    The 5.A.1 version is the next release of the MORET Monte Carlo code dedicated to criticality and reactor calculations. This new version combines all the capabilities that are already available in the multigroup version with many new and enhanced features. The main capabilities of the previous version are the powerful association of a deterministic and Monte Carlo approach (like, for instance, APOLLO-MORET), the modular geometry, five source sampling techniques and two simulation strategies. The major advance in MORET5 is the ability to perform either multigroup or continuous-energy simulations. Thanks to these new developments, we now have better control over the whole process of criticality calculations, from reading the basic nuclear data to the Monte Carlo simulation itself. Moreover, this new capability enables us to better validate the deterministic-Monte Carlo multigroup calculations by performing continuous-energy calculations with the same code, using the same geometry and tracking algorithms. The aim of this paper is to describe the main options available in this new release, and to present the first results. Comparisons of the MORET5 continuous-energy results with experimental measurements and against another continuous-energy Monte Carlo code are provided in terms of validation and time performance. Finally, an analysis of the benefit of using a unified energy grid for continuous-energy Monte Carlo calculations is presented. (authors)

  15. Clinical validation of a non-heteronormative version of the Social Interaction Anxiety Scale (SIAS).

    Science.gov (United States)

    Lindner, Philip; Martell, Christopher; Bergström, Jan; Andersson, Gerhard; Carlbring, Per

    2013-12-19

    Despite welcomed changes in societal attitudes and practices towards sexual minorities, instances of heteronormativity can still be found within healthcare and research. The Social Interaction Anxiety Scale (SIAS) is a valid and reliable self-rating scale of social anxiety, which includes one item (number 14) with an explicit heteronormative assumption about the respondent's sexual orientation. This heteronormative phrasing may confuse, insult or alienate sexual minority respondents. A clinically validated version of the SIAS featuring a non-heteronormative phrasing of item 14 is thus needed. 129 participants with diagnosed social anxiety disorder, enrolled in an Internet-based intervention trial, were randomly assigned to respond to the SIAS featuring either the original or a novel non-heteronormative phrasing of item 14, and then answered the other item version. Within subjects, the correlation between item versions was calculated, and the two scores were statistically compared. The two items' correlations with the other SIAS items and other psychiatric rating scales were also statistically compared. Item versions were highly correlated and scores did not differ statistically. The two items' correlations with other measures did not differ statistically either. The SIAS can be revised with a non-heteronormative formulation of item 14 with psychometric equivalence at item and scale level. Implications for other psychiatric instruments with heteronormative phrasings are discussed.

  16. StreamStats, version 4

    Science.gov (United States)

    Ries, Kernell G.; Newson, Jeremy K.; Smith, Martyn J.; Guthrie, John D.; Steeves, Peter A.; Haluska, Tana L.; Kolb, Katharine R.; Thompson, Ryan F.; Santoro, Richard D.; Vraga, Hans W.

    2017-10-30

    Introduction: StreamStats version 4, available at https://streamstats.usgs.gov, is a map-based web application that provides an assortment of analytical tools that are useful for water-resources planning and management, and for engineering purposes. Developed by the U.S. Geological Survey (USGS), the primary purpose of StreamStats is to provide estimates of streamflow statistics for user-selected ungaged sites on streams and for USGS streamgages, which are locations where streamflow data are collected. Streamflow statistics, such as the 1-percent flood, the mean flow, and the 7-day 10-year low flow, are used by engineers, land managers, biologists, and many others to help guide decisions in their everyday work. For example, estimates of the 1-percent flood (which is exceeded, on average, once in 100 years and has a 1-percent chance of exceedance in any year) are used to create flood-plain maps that form the basis for setting insurance rates and land-use zoning. This and other streamflow statistics also are used for dam, bridge, and culvert design; water-supply planning and management; permitting of water withdrawals and wastewater and industrial discharges; hydropower facility design and regulation; and setting of minimum allowed streamflows to protect freshwater ecosystems. Streamflow statistics can be computed from available data at USGS streamgages, depending on the type of data collected at the stations. Most often, however, streamflow statistics are needed at ungaged sites, where no streamflow data are available to determine the statistics.
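
    The "1-percent flood" arithmetic follows directly from the annual exceedance probability p: the chance of at least one exceedance in n years is 1 - (1 - p)^n. A quick check:

    ```python
    # Annual exceedance probability of the 1-percent flood
    p = 0.01

    # Probability of at least one exceedance over various horizons
    risks = {n: 1 - (1 - p) ** n for n in (1, 30, 100)}
    # e.g. over a 30-year horizon the 1-percent flood has about a 26%
    # chance of occurring at least once.
    ```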

  17. MCNP(trademark) Version 5

    International Nuclear Information System (INIS)

    Cox, Lawrence J.; Barrett, Richard F.; Booth, Thomas Edward; Briesmeister, Judith F.; Brown, Forrest B.; Bull, Jeffrey S.; Giesler, Gregg Carl; Goorley, John T.; Mosteller, Russell D.; Forster, R. Arthur; Post, Susan E.; Prael, Richard E.; Selcow, Elizabeth Carol; Sood, Avneet

    2002-01-01

    The Monte Carlo transport workhorse, MCNP, is undergoing a massive renovation at Los Alamos National Laboratory (LANL) in support of the Eolus Project of the Advanced Simulation and Computing (ASCI) Program. MCNP Version 5 (V5) (expected to be released to RSICC in Spring, 2002) will consist of a major restructuring from FORTRAN-77 (with extensions) to ANSI-standard FORTRAN-90 with support for all of the features available in the present release (MCNP-4C2/4C3). To most users, the look-and-feel of MCNP will not change much except for the improvements (improved graphics, easier installation, better online documentation). For example, even with the major format change, full support for incremental patching will still be provided. In addition to the language and style updates, MCNP V5 will have various new user features. These include improved photon physics, neutral particle radiography, enhancements and additions to variance reduction methods, new source options, and improved parallelism support (PVM, MPI, OpenMP).

  18. APGEN Version 5.0

    Science.gov (United States)

    Maldague, Pierre; Page, Dennis; Chase, Adam

    2005-01-01

    Activity Plan Generator (APGEN), now at version 5.0, is a computer program that assists in generating an integrated plan of activities for a spacecraft mission that does not oversubscribe spacecraft and ground resources. APGEN generates an interactive display, through which the user can easily create or modify the plan. The display summarizes the plan by means of a time line, whereon each activity is represented by a bar stretched between its beginning and ending times. Activities can be added, deleted, and modified via simple mouse and keyboard actions. The use of resources can be viewed on resource graphs. Resource and activity constraints can be checked. Types of activities, resources, and constraints are defined by simple text files, which the user can modify. In one of two modes of operation, APGEN acts as a planning expert assistant, displaying the plan and identifying problems in the plan. The user is in charge of creating and modifying the plan. In the other mode, APGEN automatically creates a plan that does not oversubscribe resources. The user can then manually modify the plan. APGEN is designed to interact with other software that generates sequences of timed commands for implementing details of planned activities.

  19. Method of manufacturing a large-area segmented photovoltaic module

    Science.gov (United States)

    Lenox, Carl

    2013-11-05

    One embodiment of the invention relates to a segmented photovoltaic (PV) module which is manufactured from laminate segments. The segmented PV module includes rectangular-shaped laminate segments formed from rectangular-shaped PV laminates and further includes non-rectangular-shaped laminate segments formed from rectangular-shaped and approximately-triangular-shaped PV laminates. The laminate segments are mechanically joined and electrically interconnected to form the segmented module. Another embodiment relates to a method of manufacturing a large-area segmented photovoltaic module from laminate segments of various shapes. Other embodiments relate to processes for providing a photovoltaic array for installation at a site. Other embodiments and features are also disclosed.

  20. Study of the morphology exhibited by linear segmented polyurethanes

    International Nuclear Information System (INIS)

    Pereira, I.M.; Orefice, R.L.

    2009-01-01

    Five series of segmented polyurethanes with different hard-segment contents were prepared by the prepolymer mixing method. The nano-morphology of the obtained polyurethanes and their microphase separation were investigated by infrared spectroscopy, modulated differential scanning calorimetry and small-angle X-ray scattering. Although highly hydrogen-bonded hard segments were formed, high hard-segment contents promoted phase mixing and decreased chain mobility, reducing hard-segment domain precipitation and soft-segment crystallization. The applied techniques showed that hard-segment content and hard-segment interactions were the two controlling factors determining the structure of segmented polyurethanes. (author)

  1. NOAA Climate Data Record (CDR) of Ocean Near Surface Atmospheric Properties, Version 1 (Version Superseded)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Please note, this dataset has been superseded by a newer version (see below). Users should not use this version except in rare cases (e.g., when reproducing previous...

  2. NOAA Climate Data Record (CDR) of Ocean Heat Fluxes, Version 1.0 (Version Superseded)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Please note, this dataset has been superseded by a newer version (see below). Users should not use this version except in rare cases (e.g., when reproducing previous...

  3. Global Historical Climatology Network - Daily (GHCN-Daily), Version 2 (Version Superseded)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Please note, this dataset has been superseded by a newer version (see below). Users should not use this version except in rare cases (e.g., when reproducing previous...

  4. NOAA Climate Data Record (CDR) of Sea Surface Temperature - WHOI, Version 1.0 (Version Superseded)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Please note, this dataset has been superseded by a newer version (see below). Users should not use this version except in rare cases (e.g., when reproducing previous...

  5. Performance Analysis of Segmentation of Hyperspectral Images Based on Color Image Segmentation

    Directory of Open Access Journals (Sweden)

    Praveen Agarwal

    2017-06-01

    Full Text Available Image segmentation is a fundamental approach in the field of image processing, and its use depends on the user's application. This paper proposes an original and simple segmentation strategy, based on the EM approach, that resolves many practical problems with hyperspectral images observed by airborne sensors. In a first step, the input color textured image is simplified into a color image without texture. The final segmentation is then achieved by a spatial color segmentation using a feature vector built from the set of color values contained around the pixel to be classified. The spatial constraint takes into account the inherent spatial relationships of any image and its color. This approach provides an effective PSNR for the segmented image. The results compare favorably with the Watershed and Region Growing algorithms, and the approach provides effective segmentation for spectral and medical images.
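
    The PSNR figure of merit mentioned above is standard; a minimal sketch, with synthetic stand-ins for the reference and segmented images:

    ```python
    import numpy as np

    def psnr(reference, result, peak=255.0):
        """Peak signal-to-noise ratio between two images, in dB."""
        mse = np.mean((reference.astype(float) - result.astype(float)) ** 2)
        return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

    # Synthetic stand-ins: a random 8-bit image and a mildly perturbed copy
    rng = np.random.default_rng(1)
    img = rng.integers(0, 256, (32, 32))
    noisy = np.clip(img + rng.normal(0.0, 5.0, img.shape), 0, 255)
    score = psnr(img, noisy)   # mild distortion yields a high PSNR
    ```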

  6. Polyether based segmented copolymers with uniform aramid units

    NARCIS (Netherlands)

    Niesten, M.C.E.J.

    2000-01-01

    Segmented copolymers with short, glassy or crystalline hard segments and long, amorphous soft segments (multi-block copolymers) are thermoplastic elastomers (TPE’s). The hard segments form physical crosslinks for the amorphous (rubbery) soft segments. As a result, this type of materials combines

  7. INDC/NEANDC nuclear standards file 1980 version

    International Nuclear Information System (INIS)

    1981-09-01

    This working document of the Nuclear Standards Subcommittee of the International Nuclear Data Committee (INDC) summarizes the status of nuclear standards as of the 11th INDC meeting (6/'80) with selective updating to approximately 5/'81. This version of the file is presented in two sections as per the following. The first section (A) consists of numerical tabulations of the respective quantities generally including quantitative definition of the uncertainties. Most of these numerical values are taken from the ENDF/B-V file which is available on a world-wide basis through the 4-Center network. Some guidelines as to appropriate usage are also given. The objective is the provision of a concise and readily used reference guide to essential standard-nuclear quantities useful for a diversity of basic and applied endeavors. The second section (B) briefly summarizes the contemporary status of each of the standards tabulated in Section A and additional items, including recent relevant work and areas of continuing uncertainty. These brief reviews were prepared under the auspices of the Committee by outstanding specialists in the respective fields. In many instances they are new statements but, where review indicates that the previous statement (see INDC-30/L+sp) remains appropriate, the previous summaries were retained; often with additional remarks by the editor

  8. HANFORD TANK WASTE OPERATIONS SIMULATOR VERSION DESCRIPTION DOCUMENT

    International Nuclear Information System (INIS)

    ALLEN, G.K.

    2003-01-01

This document describes the software version controls established for the Hanford Tank Waste Operations Simulator (HTWOS). It defines the methods employed to control the configuration of HTWOS; the version of each of the 26 separate modules for version 1.0 of HTWOS; the numbering rules for incrementing the version number of each module; and a requirement to include module version numbers in the documentation of each case's results. Version 1.0 of HTWOS is the first version under formal software version control. HTWOS carries a separate revision number for each of its 26 modules; individual module version numbers do not reflect the configured version number of a major HTWOS release

  9. Schema Versioning for Multitemporal Relational Databases.

    Science.gov (United States)

    De Castro, Cristina; Grandi, Fabio; Scalas, Maria Rita

    1997-01-01

    Investigates new design options for extended schema versioning support for multitemporal relational databases. Discusses the improved functionalities they may provide. Outlines options and basic motivations for the new design solutions, as well as techniques for the management of proposed schema versioning solutions, includes algorithms and…

  10. Several versions of forward gas ionization calorimeter

    International Nuclear Information System (INIS)

    Babintsev, V.V.; Kholodenko, A.G.; Rodnov, Yu.V.

    1994-01-01

    The properties of several versions of a gas ionization calorimeter are analyzed by means of the simulation with the GEANT code. The jet energy and coordinate resolutions are evaluated. Some versions of the forward calorimeter meet the ATLAS requirements. 13 refs., 15 figs., 7 tabs

  11. Cluster Ensemble-Based Image Segmentation

    Directory of Open Access Journals (Sweden)

    Xiaoru Wang

    2013-07-01

    Full Text Available Image segmentation is the foundation of computer vision applications. In this paper, we propose a new cluster ensemble-based image segmentation algorithm, which overcomes several problems of traditional methods. We make two main contributions in this paper. First, we introduce the cluster ensemble concept to fuse the segmentation results from different types of visual features effectively, which can deliver a better final result and achieve a much more stable performance for broad categories of images. Second, we exploit the PageRank idea from Internet applications and apply it to the image segmentation task. This can improve the final segmentation results by combining the spatial information of the image and the semantic similarity of regions. Our experiments on four public image databases validate the superiority of our algorithm over conventional single type of feature or multiple types of features-based algorithms, since our algorithm can fuse multiple types of features effectively for better segmentation results. Moreover, our method is also proved to be very competitive in comparison with other state-of-the-art segmentation algorithms.
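The cluster-ensemble idea of fusing several base clusterings can be sketched with a co-association matrix: points that most base labelings place together are merged. This is a minimal illustration, not the paper's algorithm (the union-find fusion rule and all names are ours):

```python
import numpy as np

def coassociation_fuse(labelings, threshold=0.5):
    """Fuse several base clusterings: count how often each pair of
    points is co-clustered, then merge (via union-find) pairs that
    are co-clustered in more than `threshold` of the labelings."""
    labelings = np.asarray(labelings)            # shape (m runs, n points)
    m, n = labelings.shape
    co = np.zeros((n, n))
    for lab in labelings:
        co += lab[:, None] == lab[None, :]
    co /= m
    parent = list(range(n))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]        # path compression
            a = parent[a]
        return a
    for i in range(n):
        for j in range(i + 1, n):
            if co[i, j] > threshold:
                parent[find(i)] = find(j)
    roots = [find(i) for i in range(n)]
    # Relabel roots as consecutive consensus-cluster ids.
    ids = {r: k for k, r in enumerate(dict.fromkeys(roots))}
    return [ids[r] for r in roots]
```

In the paper's setting, each base labeling would come from clustering a different visual feature type; the consensus step is what stabilizes the final segmentation.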

  12. An unsupervised strategy for biomedical image segmentation

    Directory of Open Access Journals (Sweden)

    Roberto Rodríguez

    2010-09-01

Full Text Available Roberto Rodríguez¹, Rubén Hernández² (¹Digital Signal Processing Group, Institute of Cybernetics, Mathematics, and Physics, Havana, Cuba; ²Interdisciplinary Professional Unit of Engineering and Advanced Technology, IPN, Mexico). Abstract: Many segmentation techniques have been published, and some of them have been widely used in different application problems. Most of these segmentation techniques were motivated by specific application purposes. Unsupervised methods, which do not assume that any prior scene knowledge can be learned to help the segmentation process, are obviously more challenging than supervised ones. In this paper, we present an unsupervised strategy for biomedical image segmentation using an algorithm based on recursively applying mean shift filtering, where entropy is used as a stopping criterion. This strategy is evaluated on many real images, and a comparison is carried out with manual segmentation. With the proposed strategy, errors of less than 20% for false positives and 0% for false negatives are obtained. Keywords: segmentation, mean shift, unsupervised segmentation, entropy
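The core loop described here — recursively apply mean shift filtering until the image entropy stops changing — can be sketched as follows. This is a simplified range-only (intensity-domain) variant under assumed names; the actual filter also uses the spatial domain:

```python
import numpy as np

def entropy(img, bins=64):
    """Shannon entropy of the intensity histogram (values assumed in [0, 1])."""
    hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def mean_shift_once(img, bandwidth=0.1):
    """One range-domain mean-shift pass: each value moves to the mean
    of all values lying within `bandwidth` of it."""
    flat = img.ravel()
    out = np.empty_like(flat)
    for i, v in enumerate(flat):
        out[i] = flat[np.abs(flat - v) <= bandwidth].mean()
    return out.reshape(img.shape)

def recursive_mean_shift(img, bandwidth=0.1, eps=1e-3, max_iter=20):
    """Apply mean-shift filtering recursively; stop once the entropy
    of the filtered image stabilizes (the paper's stopping criterion)."""
    h_prev = entropy(img)
    for _ in range(max_iter):
        img = mean_shift_once(img, bandwidth)
        h = entropy(img)
        if abs(h_prev - h) < eps:
            break
        h_prev = h
    return img
```

Each pass flattens intra-region intensity variation, so the histogram (and hence the entropy) settles once the image has collapsed into piecewise-constant regions.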

  13. Segmentation precedes face categorization under suboptimal conditions

    Directory of Open Access Journals (Sweden)

    Carlijn eVan Den Boomen

    2015-05-01

Full Text Available Both categorization and segmentation processes play a crucial role in face perception. However, the functional relation between these subprocesses is currently unclear. The present study investigates the temporal relation between segmentation-related and category-selective responses in the brain, using electroencephalography (EEG). Surface segmentation and category content were both manipulated using texture-defined objects, including faces. This allowed us to study brain activity related to segmentation and to categorization. In the main experiment, participants viewed texture-defined objects for a duration of 800 ms. EEG results revealed that segmentation-related responses precede category-selective responses. Three additional experiments revealed that the presence and timing of categorization depends on stimulus properties and presentation duration. Photographic objects were presented for a long and short (92 ms) duration and evoked fast category-selective responses in both cases. On the other hand, presentation of texture-defined objects for a short duration only evoked segmentation-related but no category-selective responses. Category-selective responses were much slower when evoked by texture-defined than by photographic objects. We suggest that in case of categorization of objects under suboptimal conditions, such as when low-level stimulus properties are not sufficient for fast object categorization, segmentation facilitates the slower categorization process.

  14. Incorporation of squalene into rod outer segments

    International Nuclear Information System (INIS)

    Keller, R.K.; Fliesler, S.J.

    1990-01-01

We have reported previously that squalene is the major radiolabeled nonsaponifiable lipid product derived from [3H]acetate in short term incubations of frog retinas. In the present study, we demonstrate that newly synthesized squalene is incorporated into rod outer segments under similar in vitro conditions. We show further that squalene is an endogenous constituent of frog rod outer segment membranes; its concentration is approximately 9.5 nmol/μmol of phospholipid, or about 9% of the level of cholesterol. Pulse-chase experiments with radiolabeled precursors revealed no metabolism of outer segment squalene to sterols in up to 20 h of chase. Taken together with our previous absolute rate studies, these results suggest that most, if not all, of the squalene synthesized by the frog retina is transported to rod outer segments. Synthesis of protein is not required for squalene transport, since puromycin had no effect on squalene incorporation into outer segments. Conversely, inhibition of isoprenoid synthesis with mevinolin had no effect on the incorporation of opsin into the outer segment. These latter results support the conclusion that the de novo synthesis and subsequent intracellular trafficking of opsin and isoprenoid lipids destined for the outer segment occur via independent mechanisms

  15. A Kalman Filtering Perspective for Multiatlas Segmentation*

    Science.gov (United States)

    Gao, Yi; Zhu, Liangjia; Cates, Joshua; MacLeod, Rob S.; Bouix, Sylvain; Tannenbaum, Allen

    2016-01-01

    In multiatlas segmentation, one typically registers several atlases to the novel image, and their respective segmented label images are transformed and fused to form the final segmentation. In this work, we provide a new dynamical system perspective for multiatlas segmentation, inspired by the following fact: The transformation that aligns the current atlas to the novel image can be not only computed by direct registration but also inferred from the transformation that aligns the previous atlas to the image together with the transformation between the two atlases. This process is similar to the global positioning system on a vehicle, which gets position by inquiring from the satellite and by employing the previous location and velocity—neither answer in isolation being perfect. To solve this problem, a dynamical system scheme is crucial to combine the two pieces of information; for example, a Kalman filtering scheme is used. Accordingly, in this work, a Kalman multiatlas segmentation is proposed to stabilize the global/affine registration step. The contributions of this work are twofold. First, it provides a new dynamical systematic perspective for standard independent multiatlas registrations, and it is solved by Kalman filtering. Second, with very little extra computation, it can be combined with most existing multiatlas segmentation schemes for better registration/segmentation accuracy. PMID:26807162
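The fusion step this abstract describes is, at heart, the standard Kalman update: blend a prediction (the transform chained through the previous atlas) with a measurement (direct registration), weighting each by its uncertainty. A scalar illustration follows; the actual method operates on global/affine transform parameters, so this is only the underlying arithmetic:

```python
def kalman_fuse(pred, pred_var, meas, meas_var):
    """Scalar Kalman update: optimally blend a predicted value with a
    noisy direct measurement, weighting by their variances."""
    gain = pred_var / (pred_var + meas_var)   # Kalman gain in [0, 1]
    est = pred + gain * (meas - pred)         # fused estimate
    est_var = (1.0 - gain) * pred_var         # fused variance (always smaller)
    return est, est_var
```

With equal variances the fused estimate is the midpoint, and its variance is halved — which is exactly why combining the chained and direct transforms beats either source alone.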

  16. Segmentation-less Digital Rock Physics

    Science.gov (United States)

    Tisato, N.; Ikeda, K.; Goldfarb, E. J.; Spikes, K. T.

    2017-12-01

In the last decade, Digital Rock Physics (DRP) has become an avenue to investigate physical and mechanical properties of geomaterials. DRP offers the advantage of simulating laboratory experiments on numerical samples that are obtained from analytical methods. Potentially, DRP could allow sparing part of the time and resources that are allocated to perform complicated laboratory tests. Like classic laboratory tests, the goal of DRP is to accurately estimate physical properties of rocks, such as hydraulic permeability or elastic moduli. Nevertheless, the physical properties of samples imaged using micro-computed tomography (μCT) are estimated through segmentation of the μCT dataset. Segmentation proves to be a challenging and arbitrary procedure that typically leads to inaccurate estimates of physical properties. Here we present a novel technique to extract physical properties from a μCT dataset without the use of segmentation. We show examples in which we use the segmentation-less method to simulate elastic wave propagation and pressure wave diffusion to estimate elastic properties and permeability, respectively. The proposed method takes advantage of effective medium theories and uses the density and the porosity that are measured in the laboratory to constrain the results. We discuss the results and highlight that segmentation-less DRP is more accurate than segmentation-based DRP approaches and theoretical modeling for the studied rock. In conclusion, the segmentation-less approach presented here seems to be a promising method to improve accuracy and to ease the overall workflow of DRP.

  17. Segmentation precedes face categorization under suboptimal conditions.

    Science.gov (United States)

    Van Den Boomen, Carlijn; Fahrenfort, Johannes J; Snijders, Tineke M; Kemner, Chantal

    2015-01-01

    Both categorization and segmentation processes play a crucial role in face perception. However, the functional relation between these subprocesses is currently unclear. The present study investigates the temporal relation between segmentation-related and category-selective responses in the brain, using electroencephalography (EEG). Surface segmentation and category content were both manipulated using texture-defined objects, including faces. This allowed us to study brain activity related to segmentation and to categorization. In the main experiment, participants viewed texture-defined objects for a duration of 800 ms. EEG results revealed that segmentation-related responses precede category-selective responses. Three additional experiments revealed that the presence and timing of categorization depends on stimulus properties and presentation duration. Photographic objects were presented for a long and short (92 ms) duration and evoked fast category-selective responses in both cases. On the other hand, presentation of texture-defined objects for a short duration only evoked segmentation-related but no category-selective responses. Category-selective responses were much slower when evoked by texture-defined than by photographic objects. We suggest that in case of categorization of objects under suboptimal conditions, such as when low-level stimulus properties are not sufficient for fast object categorization, segmentation facilitates the slower categorization process.

  18. Bayesian automated cortical segmentation for neonatal MRI

    Science.gov (United States)

    Chou, Zane; Paquette, Natacha; Ganesh, Bhavana; Wang, Yalin; Ceschin, Rafael; Nelson, Marvin D.; Macyszyn, Luke; Gaonkar, Bilwaj; Panigrahy, Ashok; Lepore, Natasha

    2017-11-01

Several attempts have been made in the past few years to develop and implement automated segmentation of neonatal brain structural MRI. However, accurate automated MRI segmentation remains challenging in this population because of the low signal-to-noise ratio, large partial volume effects, and inter-individual anatomical variability of the neonatal brain. In this paper, we propose a learning method for segmenting the whole brain cortical grey matter on neonatal T2-weighted images. We trained our algorithm using a neonatal dataset composed of 3 fullterm and 4 preterm infants scanned at term equivalent age. Our segmentation pipeline combines the FAST algorithm from the FSL library software and a Bayesian segmentation approach to create a threshold matrix that minimizes the error of mislabeling brain tissue types. Our method shows promising results with our pilot training set. In both preterm and full-term neonates, automated Bayesian segmentation generates a smoother and more consistent parcellation compared to FAST, while successfully removing the subcortical structure and cleaning the edges of the cortical grey matter. This method shows promising refinement of the FAST segmentation, considerably reducing the manual input and editing required from the user while improving the reliability and processing time of neonatal MR image analysis. Further improvements will include a larger dataset of training images acquired on scanners from different manufacturers.

  19. Deformable segmentation via sparse shape representation.

    Science.gov (United States)

    Zhang, Shaoting; Zhan, Yiqiang; Dewan, Maneesh; Huang, Junzhou; Metaxas, Dimitris N; Zhou, Xiang Sean

    2011-01-01

Appearance and shape are two key elements exploited in medical image segmentation. However, in some medical image analysis tasks, appearance cues are weak/misleading due to disease/artifacts and often lead to erroneous segmentation. In this paper, a novel deformable model is proposed for robust segmentation in the presence of weak/misleading appearance cues. Owing to the less trustworthy appearance information, this method focuses on effective shape modeling, with two contributions. First, a shape composition method is designed to incorporate shape priors on-the-fly. Based on two sparsity observations, this method is robust to false appearance information and adaptive to statistically insignificant shape modes. Second, shape priors are modeled and used in a hierarchical fashion. More specifically, by using the affinity propagation method, our deformable surface is divided into multiple partitions, on which local shape models are built independently. This scheme facilitates a more compact shape prior modeling and hence a more robust and efficient segmentation. Our deformable model is applied to two very diverse segmentation problems: liver segmentation in PET-CT images and rodent brain segmentation in MR images. Compared to state-of-the-art methods, our method achieves better performance in both studies.

  20. Global Precipitation Climatology Project (GPCP) - Monthly, Version 2.2 (Version Superseded)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Version 2.2 of the dataset has been superseded by a newer version. Users should not use version 2.2 except in rare cases (e.g., when reproducing previous studies...

  1. Moxibustion for Cephalic Version of Breech Presentation.

    Science.gov (United States)

    Schlaeger, Judith M; Stoffel, Cynthia L; Bussell, Jeanie L; Cai, Hui Yan; Takayama, Miho; Yajima, Hiroyoshi; Takakura, Nobuari

    2018-05-01

    Moxibustion, a form of traditional Chinese medicine (TCM), is the burning of the herb moxa (Folium Artemisiae argyi or mugwort) over acupuncture points. It is often used in China to facilitate cephalic version of breech presentation. This article reviews the history, philosophy, therapeutic use, possible mechanisms of action, and literature pertaining to its use for this indication. For moxibustion, moxa can be rolled into stick form, placed directly on the skin, or placed on an acupuncture needle and ignited to warm acupuncture points. Studies have demonstrated that moxibustion may promote cephalic version of breech presentation and may facilitate external cephalic version. However, there is currently a paucity of research on the effects of moxibustion on cephalic version of breech presentation, and thus there is a need for further studies. Areas needing more investigation include efficacy, safety, optimal technique, and best protocol for cephalic version of breech presentation. © 2018 by the American College of Nurse-Midwives.

  2. Unsupervised Segmentation Methods of TV Contents

    Directory of Open Access Journals (Sweden)

    Elie El-Khoury

    2010-01-01

Full Text Available We present a generic algorithm to address various temporal segmentation topics of audiovisual content, such as speaker diarization and shot or program segmentation. Based on a GLR approach involving the ΔBIC criterion, this algorithm requires the values of only a few parameters to produce segmentation results at a desired scale and on most typical low-level features used in the field of content-based indexing. Results obtained on various corpora are of the same quality level as those obtained by other dedicated, state-of-the-art methods.
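ΔBIC-based boundary detection of the kind this abstract relies on can be sketched on 1-D features: compare the likelihood of one Gaussian over a window against two Gaussians split at a candidate point, minus a complexity penalty. The constants and function names below are ours (single-Gaussian segment models, d = 1), not the paper's:

```python
import numpy as np

def delta_bic(x, i, lam=1.0):
    """ΔBIC for splitting the 1-D feature sequence x at index i.
    Positive values mean two separate Gaussians fit better than one,
    i.e. a segment boundary at i is plausible."""
    n = len(x)
    def half_n_log_var(seg):
        return 0.5 * len(seg) * np.log(seg.var() + 1e-12)
    # Model-complexity penalty: 2 extra parameters (mean, variance) for d=1.
    penalty = 0.5 * lam * 2.0 * np.log(n)
    return half_n_log_var(x) - half_n_log_var(x[:i]) - half_n_log_var(x[i:]) - penalty

def best_boundary(x, margin=10):
    """Scan candidate split points and return the ΔBIC-maximizing one."""
    scores = [delta_bic(x, i) for i in range(margin, len(x) - margin)]
    i = int(np.argmax(scores)) + margin
    return i, scores[i - margin]
```

In practice the same test is applied in a sliding window over MFCCs or other low-level features; λ is the tunable penalty weight the abstract alludes to.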

  3. Contractual Incompleteness, Unemployment, and Labour Market Segmentation

    DEFF Research Database (Denmark)

    Altmann, Steffen; Falk, Armin; Grunewald, Andreas

    2014-01-01

This article provides evidence that involuntary unemployment, and the segmentation of labour markets into firms offering "good" and "bad" jobs, may both arise as a consequence of contractual incompleteness. We provide a simple model that illustrates how unemployment and market segmentation may jointly emerge as part of a market equilibrium in environments where work effort is not third-party verifiable. Using experimental labour markets that differ only in the verifiability of effort, we demonstrate empirically that contractual incompleteness can cause unemployment and segmentation. Our data...

  4. Multi-scale Modelling of Segmentation

    DEFF Research Database (Denmark)

    Hartmann, Martin; Lartillot, Olivier; Toiviainen, Petri

    2016-01-01

While listening to music, people often unwittingly break down musical pieces into constituent chunks such as verses and choruses. Music segmentation studies have suggested that some consensus regarding boundary perception exists, despite individual differences. However, neither the effects... pieces. In a second experiment on non-real-time segmentation, musicians indicated boundaries and their strength for six examples. Kernel density estimation was used to develop multi-scale segmentation models. Contrary to previous research, no relationship was found between boundary strength and boundary...

  5. Multi-Class Simultaneous Adaptive Segmentation and Quality Control of Point Cloud Data

    Directory of Open Access Journals (Sweden)

    Ayman Habib

    2016-01-01

Full Text Available 3D modeling of a given site is an important activity for a wide range of applications including urban planning, as-built mapping of industrial sites, heritage documentation, military simulation, and outdoor/indoor analysis of airflow. Point clouds, which could be either derived from passive or active imaging systems, are an important source for 3D modeling. Such point clouds need to undergo a sequence of data processing steps to derive the necessary information for the 3D modeling process. Segmentation is usually the first step in the data processing chain. This paper presents a region-growing multi-class simultaneous segmentation procedure, where planar, pole-like, and rough regions are identified while considering the internal characteristics (i.e., local point density/spacing and noise level) of the point cloud in question. The segmentation starts with point cloud organization into a kd-tree data structure and a characterization process to estimate the local point density/spacing. Then, proceeding from randomly-distributed seed points, a set of seed regions is derived through distance-based region growing, which is followed by modeling of such seed regions into planar and pole-like features. Starting from optimally-selected seed regions, planar and pole-like features are then segmented. The paper also introduces a list of hypothesized artifacts/problems that might take place during the region-growing process. Finally, a quality control process is devised to detect, quantify, and mitigate instances of partially/fully misclassified planar and pole-like features. Experimental results from airborne and terrestrial laser scanning as well as image-based point clouds are presented to illustrate the performance of the proposed segmentation and quality control framework.
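The distance-based region-growing step can be sketched as follows. A brute-force neighbour scan stands in for the paper's kd-tree queries, and the subsequent modeling of seed regions into planar/pole-like features is omitted; all names are ours:

```python
import numpy as np

def region_grow(points, seeds, radius):
    """Distance-based region growing: breadth-first flood from each seed,
    attaching any unlabeled point closer than `radius` to a region member.
    Returns one region label per point (-1 = never reached)."""
    pts = np.asarray(points, dtype=float)
    labels = np.full(len(pts), -1)
    for region, seed in enumerate(seeds):
        if labels[seed] != -1:       # seed already absorbed by another region
            continue
        labels[seed] = region
        stack = [seed]
        while stack:
            i = stack.pop()
            # Brute-force neighbour search; a kd-tree replaces this at scale.
            dist = np.linalg.norm(pts - pts[i], axis=1)
            for j in np.where((dist <= radius) & (labels == -1))[0]:
                labels[j] = region
                stack.append(j)
    return labels
```

The radius would be driven by the locally estimated point spacing, which is why the paper characterizes point density before growing regions.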

  6. AssesSeg—A Command Line Tool to Quantify Image Segmentation Quality: A Test Carried Out in Southern Spain from Satellite Imagery

    Directory of Open Access Journals (Sweden)

    Antonio Novelli

    2017-01-01

Full Text Available This letter presents the capabilities of a command line tool created to assess the quality of segmented digital images. The executable source code, called AssesSeg, was written in Python 2.7 using open source libraries. AssesSeg (University of Almeria, Almeria, Spain; Politecnico di Bari, Bari, Italy) implements a modified version of the supervised discrepancy measure named Euclidean Distance 2 (ED2) and was tested on different satellite images (Sentinel-2, Landsat 8, and WorldView-2). The segmentation was applied to plastic covered greenhouse detection in the south of Spain (Almería). AssesSeg outputs were utilized to find the best band combinations for the performed segmentations of the images and showed a clear positive correlation between segmentation accuracy and the quantity of available reference data. This demonstrates the importance of a high number of reference data in supervised segmentation accuracy assessment problems.
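Assuming ED2 follows the commonly cited formulation — the Euclidean distance between a geometric discrepancy term (potential segmentation error, PSE) and an arithmetic term (number-of-segments ratio, NSR) — the measure reduces to a few lines. AssesSeg's exact modification is not given in the abstract, so this is only a sketch of the base measure, with our own function names:

```python
import math

def pse(undersegmented_area, total_reference_area):
    """Potential Segmentation Error: reference area left outside the
    corresponding segments, as a fraction of total reference area."""
    return undersegmented_area / total_reference_area

def nsr(n_reference, n_corresponding):
    """Number-of-Segments Ratio: relative mismatch between the counts of
    reference polygons and of segments matched to them."""
    return abs(n_reference - n_corresponding) / n_reference

def ed2(pse_value, nsr_value):
    """Euclidean Distance 2: combines geometric (PSE) and arithmetic
    (NSR) discrepancy into one score; 0 is a perfect segmentation."""
    return math.hypot(pse_value, nsr_value)
```

Sweeping segmentation-scale parameters and picking the one minimizing ED2 is the typical use, which matches how the letter applies it to band-combination selection.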

  7. Leisure market segmentation : an integrated preferences/constraints-based approach

    NARCIS (Netherlands)

    Stemerding, M.P.; Oppewal, H.; Beckers, T.A.M.; Timmermans, H.J.P.

    1996-01-01

    Traditional segmentation schemes are often based on a grouping of consumers with similar preference functions. The research steps, ultimately leading to such segmentation schemes, are typically independent. In the present article, a new integrated approach to segmentation is introduced, which

  8. Improving iris recognition performance using segmentation, quality enhancement, match score fusion, and indexing.

    Science.gov (United States)

    Vatsa, Mayank; Singh, Richa; Noore, Afzel

    2008-08-01

    This paper proposes algorithms for iris segmentation, quality enhancement, match score fusion, and indexing to improve both the accuracy and the speed of iris recognition. A curve evolution approach is proposed to effectively segment a nonideal iris image using the modified Mumford-Shah functional. Different enhancement algorithms are concurrently applied on the segmented iris image to produce multiple enhanced versions of the iris image. A support-vector-machine-based learning algorithm selects locally enhanced regions from each globally enhanced image and combines these good-quality regions to create a single high-quality iris image. Two distinct features are extracted from the high-quality iris image. The global textural feature is extracted using the 1-D log polar Gabor transform, and the local topological feature is extracted using Euler numbers. An intelligent fusion algorithm combines the textural and topological matching scores to further improve the iris recognition performance and reduce the false rejection rate, whereas an indexing algorithm enables fast and accurate iris identification. The verification and identification performance of the proposed algorithms is validated and compared with other algorithms using the CASIA Version 3, ICE 2005, and UBIRIS iris databases.

  9. Developing a Procedure for Segmenting Meshed Heat Networks of Heat Supply Systems without Outflows

    Science.gov (United States)

    Tokarev, V. V.

    2018-06-01

The heat supply systems of cities have, as a rule, a ring structure with the possibility of redistributing the flows. Although a ring structure is more reliable than a radial one, the operators of heat networks prefer to run them in normal modes according to a scheme without overflows of the heat carrier between the heat mains. With such a scheme, it is easier to adjust the networks and to detect and locate faults in them. The article proposes a formulation of the heat network segmenting problem. The problem is set in terms of optimization, with the heat supply system's excess hydraulic power used as the optimization criterion. The heat supply system computer model has a hierarchically interconnected multilevel structure. Since iterative calculations are only carried out for the level of trunk heat networks, decomposing the entire system into levels allows the dimensionality of the solved subproblems to be reduced by an order of magnitude. Solving the problem by fully enumerating possible segmentation versions is not feasible for systems of realistic size. The article therefore suggests a procedure for finding a rational segmentation of heat supply networks, limiting the search to versions that divide the system into segments near the flow-convergence nodes and then refining the solution. The refinement is performed in two stages according to the total excess hydraulic power criterion. At the first stage, the loads are redistributed among the sources. After that, the heat networks are divided into independent fragments, and the possibility of increasing the excess hydraulic power in the obtained fragments is checked by shifting the division places inside a fragment. The proposed procedure has been tested on a municipal heat supply system involving six heat mains fed from a common source, 24 loops within the feeding mains plane, and more than 5000 consumers. Application of the proposed

  10. The Syriac versions of Old Testament quotations in Matthew

    Directory of Open Access Journals (Sweden)

    Herrie F. van Rooy

    2015-12-01

Full Text Available In the Gospel of Matthew, 10 quotations from the Old Testament are introduced by a formula containing the verb πληροῦν. This article explores the rendering of 9 of these 10 quotations in 3 Syriac versions of the New Testament, namely the Peshitta and the 2 versions of the Old Syriac Gospels (Sinaiticus and Curetonianus). The question addressed is the relationship of the Syriac versions to one another, to the Peshitta of the Old Testament and to the Greek Gospel. For the quotations in Matthew, their relationship to the Hebrew and Greek Old Testament is very important. In the quotations discussed, the Greek New Testament did not make much use of the Septuagint as it is known today. The Old Testament Peshitta influenced the Old Syriac, but not to the same extent in all instances. This influence could have been through Tatian’s Diatessaron. Tatian probably used the text of the Old Testament Peshitta for the quotations of the Old Testament in the gospels. In instances where the Curetonianus and the Sinaiticus differ, it could demonstrate attempts to bring the text closer to the Greek New Testament. The New Testament Peshitta normally started with a text close to the Old Syriac, but frequently adapted it to bring it closer to the New Testament Greek.

  11. Ultrasonographic evaluation of myometrial thickness and prediction of a successful external cephalic version.

    Science.gov (United States)

    Buhimschi, Catalin S; Buhimschi, Irina A; Wehrum, Mark J; Molaskey-Jones, Sherry; Sfakianaki, Anna K; Pettker, Christian M; Thung, Stephen; Campbell, Katherine H; Dulay, Antonette T; Funai, Edmund F; Bahtiyar, Mert O

    2011-10-01

To test the hypothesis that myometrial thickness predicts the success of external cephalic version. Abdominal ultrasonographic scans were performed in 114 consecutive pregnant women with breech singletons before an external cephalic version maneuver. Myometrial thickness was measured by a standardized protocol at three sites: the lower segment, midanterior wall, and the fundal uterine wall. Independent variables analyzed in conjunction with myometrial thickness were: maternal age, parity, body mass index, abdominal wall thickness, estimated fetal weight, amniotic fluid index, placental thickness and location, fetal spine position, breech type, and delivery outcomes such as final mode of delivery and birth weight. Successful version was associated with a thicker ultrasonographic fundal myometrium (unsuccessful: 6.7 [5.5-8.4] compared with successful: 7.4 [6.6-9.7] mm, P=.037). Multivariate regression analysis showed that increased fundal myometrial thickness, high amniotic fluid index, and nonfrank breech presentation were the strongest independent predictors of external cephalic version success. Fundal myometrial thickness and amniotic fluid index remained independently associated with successful external cephalic versions (fundal myometrial thickness: odds ratio [OR] 2.4, 95% confidence interval [CI] 1.1-5.2, P=.029; amniotic fluid index: OR 2.8, 95% CI 1.3-6.0, P=.008). Combining the two variables resulted in an absolute risk reduction for a failed version of 27.6% (95% CI 7.1-48.1) and a number needed to treat of four (95% CI 2.1-14.2). Fundal myometrial thickness and amniotic fluid index contribute to the success of external cephalic version, and their evaluation can be easily incorporated in algorithms before the procedure. III.
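The abstract's number needed to treat follows directly from its absolute risk reduction (NNT = 1/ARR): 1/0.276 ≈ 3.6, conventionally rounded up to four. As a one-line arithmetic check:

```python
import math

def number_needed_to_treat(absolute_risk_reduction):
    """NNT is the reciprocal of the absolute risk reduction,
    rounded up to a whole patient."""
    return math.ceil(1.0 / absolute_risk_reduction)
```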

  12. Skin Segmentation Based on Graph Cuts

    Institute of Scientific and Technical Information of China (English)

    HU Zhilan; WANG Guijin; LIN Xinggang; YAN Hong

    2009-01-01

Skin segmentation is widely used in many computer vision tasks to improve automated visualization. This paper presents a graph cuts algorithm to segment arbitrary skin regions from images. The detected face is used to determine the foreground skin seeds and the background non-skin seeds, with the color probability distributions for the foreground represented by a single Gaussian model and for the background by a Gaussian mixture model. The probability distribution of the image is used for noise suppression to alleviate the influence of background regions having skin-like colors. Finally, the skin is segmented by graph cuts, with the regional parameter γ optimally selected to adapt to different images. Tests of the algorithm on many real-world photographs show that the scheme accurately segments skin regions and is robust against illumination variations, individual skin variations, and cluttered backgrounds.
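The foreground/background colour modelling described above can be sketched with a one-dimensional colour feature: a single Gaussian for skin and a two-component mixture for the background. All parameter values below are hypothetical, and the graph-cut step itself is omitted:

```python
import math

def gauss(x, mu, var):
    """Univariate Gaussian density."""
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def p_skin(x, fg, bg):
    """Skin score: single-Gaussian foreground vs mixture background."""
    num = gauss(x, *fg)
    den = num + sum(w * gauss(x, mu, var) for w, mu, var in bg)
    return num / den

# Hypothetical 1-D colour feature (e.g. a normalised chrominance value):
fg_model = (0.45, 0.004)                             # skin: tight single Gaussian
bg_model = [(0.6, 0.20, 0.010), (0.4, 0.75, 0.020)]  # non-skin: 2-component GMM

for x in (0.44, 0.20, 0.70):
    print(x, "skin" if p_skin(x, fg_model, bg_model) > 0.5 else "non-skin")
```

In the full method these per-pixel scores would feed the data term of the graph cut rather than being thresholded directly.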

  13. Measurements on a prototype segmented Clover detector

    CERN Document Server

    Shepherd, S L; Cullen, D M; Appelbe, D E; Simpson, J; Gerl, J; Kaspar, M; Kleinböhl, A; Peter, I; Rejmund, M; Schaffner, H; Schlegel, C; France, G D

    1999-01-01

    The performance of a segmented Clover germanium detector has been measured. The segmented Clover detector is a composite germanium detector, consisting of four individual germanium crystals in the configuration of a four-leaf Clover, housed in a single cryostat. Each crystal is electrically segmented on its outer surface into four quadrants, with separate energy read-outs from nine crystal zones. Signals are also taken from the inner contact of each crystal. This effectively produces a detector with 16 active elements. One of the purposes of this segmentation is to improve the overall spectral resolution when detecting gamma radiation emitted following a nuclear reaction, by minimising Doppler broadening caused by the opening angle subtended by each detector element. Results of the tests with sources and in beam will be presented. The improved granularity of the detector also leads to an improved isolated hit probability compared with an unsegmented Clover detector. (author)

  14. Constrained bidirectional propagation and stroke segmentation

    Energy Technology Data Exchange (ETDEWEB)

    Mori, S; Gillespie, W; Suen, C Y

    1983-03-01

    A new method for decomposing a complex figure into its constituent strokes is described. This method, based on constrained bidirectional propagation, is suitable for parallel processing. Examples of its application to the segmentation of Chinese characters are presented. 9 references.

  15. Neural Scene Segmentation by Oscillatory Correlation

    National Research Council Canada - National Science Library

    Wang, DeLiang

    2000-01-01

    The segmentation of a visual scene into a set of coherent patterns (objects) is a fundamental aspect of perception, which underlies a variety of important tasks such as figure/ground segregation, and scene analysis...

  16. Variational mesh segmentation via quadric surface fitting

    KAUST Repository

    Yan, Dongming

    2012-11-01

We present a new variational method for mesh segmentation by fitting quadric surfaces. Each component of the resulting segmentation is represented by a general quadric surface (including plane as a special case). A novel energy function is defined to evaluate the quality of the segmentation, which combines both L2 and L2,1 metrics from a triangle to a quadric surface. The Lloyd iteration is used to minimize the energy function, which repeatedly interleaves between mesh partition and quadric surface fitting. We also integrate feature-based and simplification-based techniques in the segmentation framework, which greatly improve the performance. The advantages of our algorithm are demonstrated by comparing with the state-of-the-art methods. © 2012 Elsevier Ltd. All rights reserved.
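The Lloyd-style alternation at the heart of such methods can be illustrated in one dimension, with constant proxies (means) standing in for quadric surfaces. This is only a structural sketch of the partition/fit interleaving, not the paper's algorithm:

```python
def lloyd(points, k, iters=20):
    """Interleave (1) partitioning points among proxies and (2) refitting
    each proxy to its partition; constant proxies (means) stand in for
    the quadric surfaces fitted in the paper."""
    proxies = points[:k]                          # crude initialisation
    parts = [[] for _ in range(k)]
    for _ in range(iters):
        parts = [[] for _ in range(k)]
        for p in points:                          # partition step
            j = min(range(k), key=lambda i: (p - proxies[i]) ** 2)
            parts[j].append(p)
        proxies = [sum(ps) / len(ps) if ps else proxies[j]
                   for j, ps in enumerate(parts)]  # fitting step
    return proxies, parts

proxies, parts = lloyd([0.9, 1.1, 1.0, 4.8, 5.2, 5.0], k=2)
print(sorted(round(p, 2) for p in proxies))       # -> [1.0, 5.0]
```

In the mesh setting the "points" are triangles, the distance is the L2/L2,1 metric to a quadric, and the fitting step solves a quadric least-squares problem.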

  17. Probabilistic segmentation of remotely sensed images

    NARCIS (Netherlands)

    Gorte, B.

    1998-01-01

    For information extraction from image data to create or update geographic information systems, objects are identified and labeled using an integration of segmentation and classification. This yields geometric and thematic information, respectively.

    Bayesian image

  18. Medical image segmentation using genetic algorithms.

    Science.gov (United States)

    Maulik, Ujjwal

    2009-03-01

    Genetic algorithms (GAs) have been found to be effective in the domain of medical image segmentation, since the problem can often be mapped to one of search in a complex and multimodal landscape. The challenges in medical image segmentation arise due to poor image contrast and artifacts that result in missing or diffuse organ/tissue boundaries. The resulting search space is therefore often noisy with a multitude of local optima. Not only does the genetic algorithmic framework prove to be effective in coming out of local optima, it also brings considerable flexibility into the segmentation procedure. In this paper, an attempt has been made to review the major applications of GAs to the domain of medical image segmentation.
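As a minimal illustration of a GA applied to segmentation (an illustrative toy, not a method from the reviewed literature), one can evolve a single global threshold that maximises an Otsu-style between-class variance on a bimodal intensity distribution:

```python
import random
random.seed(0)

pixels = [10] * 40 + [30] * 10 + [200] * 50     # toy bimodal "image"

def fitness(t):
    """Otsu-style between-class variance for threshold t (higher is better)."""
    lo = [p for p in pixels if p < t]
    hi = [p for p in pixels if p >= t]
    if not lo or not hi:
        return 0.0
    w0, w1 = len(lo) / len(pixels), len(hi) / len(pixels)
    m0, m1 = sum(lo) / len(lo), sum(hi) / len(hi)
    return w0 * w1 * (m0 - m1) ** 2

pop = [random.randint(1, 254) for _ in range(20)]   # chromosomes: thresholds
for _ in range(30):                                 # generations
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                              # elitist selection
    children = []
    for _ in range(10):
        a, b = random.sample(parents, 2)
        child = (a + b) // 2                        # arithmetic crossover
        if random.random() < 0.3:                   # mutation
            child = min(254, max(1, child + random.randint(-20, 20)))
        children.append(child)
    pop = parents + children

best = max(pop, key=fitness)
print(30 < best <= 200)   # evolved threshold separates the two modes -> True
```

Real medical-image GAs evolve far richer chromosomes (contour parameters, cluster centres, rule sets), but the select/crossover/mutate loop has this same shape.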

  19. Joint shape segmentation with linear programming

    KAUST Repository

    Huang, Qixing; Koltun, Vladlen; Guibas, Leonidas

    2011-01-01

    program is solved via a linear programming relaxation, using a block coordinate descent procedure that makes the optimization feasible for large databases. We evaluate the presented approach on the Princeton segmentation benchmark and show that joint shape

  20. Variational mesh segmentation via quadric surface fitting

    KAUST Repository

    Yan, Dongming; Wang, Wen Ping; Liu, Yang; Yang, Zhouwang

    2012-01-01

    We present a new variational method for mesh segmentation by fitting quadric surfaces. Each component of the resulting segmentation is represented by a general quadric surface (including plane as a special case). A novel energy function is defined to evaluate the quality of the segmentation, which combines both L2 and L2 ,1 metrics from a triangle to a quadric surface. The Lloyd iteration is used to minimize the energy function, which repeatedly interleaves between mesh partition and quadric surface fitting. We also integrate feature-based and simplification-based techniques in the segmentation framework, which greatly improve the performance. The advantages of our algorithm are demonstrated by comparing with the state-of-the-art methods. © 2012 Elsevier Ltd. All rights reserved.

  1. Pitch Synchronous Segmentation of Speech Signals

    Data.gov (United States)

National Aeronautics and Space Administration — The Pitch Synchronous Segmentation (PSS) method, which accelerates speech without changing its fundamental frequency, could be applied and evaluated for use at NASA....

  2. Field Sampling from a Segmented Image

    CSIR Research Space (South Africa)

    Debba, Pravesh

    2008-06-01

    Full Text Available This paper presents a statistical method for deriving the optimal prospective field sampling scheme on a remote sensing image to represent different categories in the field. The iterated conditional modes algorithm (ICM) is used for segmentation...
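The ICM algorithm named above can be sketched for a binary labelling with a Potts smoothness prior; the energy terms and parameter values here are illustrative assumptions, not those of the paper:

```python
def icm(obs, beta=1.5, iters=5):
    """Iterated conditional modes: greedily update each label to minimise
    a data-fidelity term plus a Potts smoothness term over 4-neighbours."""
    h, w = len(obs), len(obs[0])
    labels = [row[:] for row in obs]              # init from noisy observation
    for _ in range(iters):
        for i in range(h):
            for j in range(w):
                best, best_e = labels[i][j], float("inf")
                for lab in (0, 1):
                    e = (lab - obs[i][j]) ** 2            # data fidelity
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < h and 0 <= nj < w:
                            e += beta * (lab != labels[ni][nj])  # Potts prior
                    if e < best_e:
                        best, best_e = lab, e
                labels[i][j] = best
    return labels

noisy = [[1, 1, 1, 0],
         [1, 0, 1, 0],   # isolated 0 inside the "1" region
         [1, 1, 1, 0],
         [0, 0, 0, 0]]
print(icm(noisy))  # -> [[1, 1, 1, 0], [1, 1, 1, 0], [1, 1, 1, 0], [0, 0, 0, 0]]
```

ICM converges to a local minimum of the energy, which is why it is attractive for fast field-sampling and labelling tasks despite not being globally optimal.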

  3. Retina image–based optic disc segmentation

    Directory of Open Access Journals (Sweden)

    Ching-Lin Wang

    2016-05-01

    Full Text Available The change of optic disc can be used to diagnose many eye diseases, such as glaucoma, diabetic retinopathy and macular degeneration. Moreover, retinal blood vessel pattern is unique for human beings even for identical twins. It is a highly stable pattern in biometric identification. Since optic disc is the beginning of the optic nerve and main blood vessels in retina, it can be used as a reference point of identification. Therefore, optic disc segmentation is an important technique for developing a human identity recognition system and eye disease diagnostic system. This article hence presents an optic disc segmentation method to extract the optic disc from a retina image. The experimental results show that the optic disc segmentation method can give impressive results in segmenting the optic disc from a retina image.

  4. Handwriting segmentation of unconstrained Oriya text

    Indian Academy of Sciences (India)

    Segmentation of handwritten text into lines, words and characters .... We now discuss here some terms relating to water reservoirs that will be used in feature ..... is found. Next, based on the touching position, reservoir base-area points, ...

  5. Mixed raster content segmentation, compression, transmission

    CERN Document Server

    Pavlidis, George

    2017-01-01

    This book presents the main concepts in handling digital images of mixed content, traditionally referenced as mixed raster content (MRC), in two main parts. The first includes introductory chapters covering the scientific and technical background aspects, whereas the second presents a set of research and development approaches to tackle key issues in MRC segmentation, compression and transmission. The book starts with a review of color theory and the mechanism of color vision in humans. In turn, the second chapter reviews data coding and compression methods so as to set the background and demonstrate the complexity involved in dealing with MRC. Chapter three addresses the segmentation of images through an extensive literature review, which highlights the various approaches used to tackle MRC segmentation. The second part of the book focuses on the segmentation of color images for optimized compression, including multi-layered decomposition and representation of MRC and the processes that can be employed to op...

  6. A competition in unsupervised color image segmentation

    Czech Academy of Sciences Publication Activity Database

    Haindl, Michal; Mikeš, Stanislav

    2016-01-01

    Roč. 57, č. 9 (2016), s. 136-151 ISSN 0031-3203 R&D Projects: GA ČR(CZ) GA14-10911S Institutional support: RVO:67985556 Keywords : Unsupervised image segmentation * Segmentation contest * Texture analysis Subject RIV: BD - Theory of Information Impact factor: 4.582, year: 2016 http://library.utia.cas.cz/separaty/2016/RO/haindl-0459179.pdf

  7. Learning Semantic Segmentation with Diverse Supervision

    OpenAIRE

    Ye, Linwei; Liu, Zhi; Wang, Yang

    2018-01-01

    Models based on deep convolutional neural networks (CNN) have significantly improved the performance of semantic segmentation. However, learning these models requires a large amount of training images with pixel-level labels, which are very costly and time-consuming to collect. In this paper, we propose a method for learning CNN-based semantic segmentation models from images with several types of annotations that are available for various computer vision tasks, including image-level labels fo...

  8. Unfolding Implementation in Industrial Market Segmentation

    DEFF Research Database (Denmark)

    Bøjgaard, John; Ellegaard, Chris

    2011-01-01

    to pave the way towards closing this gap. The extent of implementation coverage is assessed and various notions of implementation are identified. Implementation as the task of converting segmentation plans into action (referred to as execution) is identified as a particularly beneficial focus area...... for marketing management. Three key elements and challenges connected to execution of market segmentation are identified — organization, motivation, and adaptation....

  9. Mounting and Alignment of IXO Mirror Segments

    Science.gov (United States)

    Chan, Kai-Wing; Zhang, William; Evans, Tyler; McClelland, Ryan; Hong, Melinda; Mazzarella, James; Saha, Timo; Jalota, Lalit; Olsen, Lawrence; Byron, Glenn

    2010-01-01

    A suspension-mounting scheme is developed for the IXO (International X-ray Observatory) mirror segments in which the figure of the mirror segment is preserved in each stage of mounting. The mirror, first fixed on a thermally compatible strongback, is subsequently transported, aligned and transferred onto its mirror housing. In this paper, we shall outline the requirement, approaches, and recent progress of the suspension mount processes.

  10. Active Mask Segmentation of Fluorescence Microscope Images

    OpenAIRE

    Srinivasa, Gowri; Fickus, Matthew C.; Guo, Yusong; Linstedt, Adam D.; Kovačević, Jelena

    2009-01-01

    We propose a new active mask algorithm for the segmentation of fluorescence microscope images of punctate patterns. It combines the (a) flexibility offered by active-contour methods, (b) speed offered by multiresolution methods, (c) smoothing offered by multiscale methods, and (d) statistical modeling offered by region-growing methods into a fast and accurate segmentation tool. The framework moves from the idea of the “contour” to that of “inside and outside”, or, masks, allowing for easy mul...

  11. Primary Segmental Volvulus Mimicking Ileal Atresia

    Science.gov (United States)

    Rao, Sadashiva; B Shetty, Kishan

    2013-01-01

Neonatal intestinal volvulus in the absence of malrotation is a rare occurrence, and rarer still is intestinal volvulus in the absence of any other predisposing factors. Primary segmental volvulus in neonates is a very rare entity, which can have a catastrophic outcome if it is not treated at the appropriate time. We report two such cases, which were preoperatively diagnosed as ileal atresia and intraoperatively revealed to be primary segmental volvulus of the ileum. PMID:26023426

  12. Hepatobiliary scintigraphy in the assessment of long-term complication after biliary-enteric anastomosis: role in the diagnosis of post-operative segmental or total biliary obstruction

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jae Seung; Moon, Dae Hyuk; Lee, Sung Gyu; Lee, Yung Joo; Park, Kwang Min; Shin, Jung Woo; Ryu, Jin Sook; Lee, Hee Kyung [Asan Medicial Center, Seoul (Korea, Republic of)

    1998-07-01

The purpose of this study was to investigate the accuracy of hepatobiliary scintigraphy (HBS) in the diagnosis of segmental or total biliary obstruction during the long-term follow-up period after curative radical surgery with biliary-enteric anastomosis. The study population included 80 patients who underwent biliary-enteric anastomoses for benign (n=33) or malignant (n=47) biliary disease. Fifty-six of these 80 patients also underwent curative hepatic resection. Ninety-eight hepatobiliary scintigrams using {sup 99m}Tc-DISIDA were performed at least 1 month after surgery (median 9 months). The scintigraphic criteria of total biliary obstruction we used were intestinal excretion beyond one hour or delayed hepatobiliary washout despite the presence of intestinal excretion. Segmental biliary obstruction was defined as delayed segmental excretion. The accuracy for biliary obstruction was evaluated according to different clinical situations. There were 9 instances with total biliary obstruction and 23 with segmental bile duct obstruction. Diagnosis of biliary obstruction was confirmed by percutaneous transhepatic cholangiography or surgery in 13 instances, and by follow-up clinical data for at least 6 months in 19 instances. Among the 32 instances with biliary symptoms and abnormal liver function, HBS allowed correct diagnosis in all 32 (9 total obstructions, 14 segmental obstructions, and 9 non-obstructions). Of the 40 with nonspecific symptoms or isolated elevation of serum alkaline phosphatase, HBS diagnosed 8 of the 9 segmental biliary obstructions and 30 of the 31 non-obstructions. There was no biliary obstruction and no false-positive scintigraphy result in the 26 instances that had no clinical symptoms or signs of biliary obstruction. Diagnostic sensitivity of HBS was 100% (9/9) for total biliary obstruction and 96% (22/23) for segmental biliary obstruction. Specificity was 98% (39/40) in patients who had abnormal symptoms or signs. Hepatobiliary scintigraphy is a highly accurate modality in the

  13. Hepatobiliary scintigraphy in the assessment of long-term complication after biliary-enteric anastomosis: role in the diagnosis of post-operative segmental or total biliary obstruction

    International Nuclear Information System (INIS)

    Kim, Jae Seung; Moon, Dae Hyuk; Lee, Sung Gyu; Lee, Yung Joo; Park, Kwang Min; Shin, Jung Woo; Ryu, Jin Sook; Lee, Hee Kyung

    1998-01-01

The purpose of this study was to investigate the accuracy of hepatobiliary scintigraphy (HBS) in the diagnosis of segmental or total biliary obstruction during the long-term follow-up period after curative radical surgery with biliary-enteric anastomosis. The study population included 80 patients who underwent biliary-enteric anastomoses for benign (n=33) or malignant (n=47) biliary disease. Fifty-six of these 80 patients also underwent curative hepatic resection. Ninety-eight hepatobiliary scintigrams using 99mTc-DISIDA were performed at least 1 month after surgery (median 9 months). The scintigraphic criteria of total biliary obstruction we used were intestinal excretion beyond one hour or delayed hepatobiliary washout despite the presence of intestinal excretion. Segmental biliary obstruction was defined as delayed segmental excretion. The accuracy for biliary obstruction was evaluated according to different clinical situations. There were 9 instances with total biliary obstruction and 23 with segmental bile duct obstruction. Diagnosis of biliary obstruction was confirmed by percutaneous transhepatic cholangiography or surgery in 13 instances, and by follow-up clinical data for at least 6 months in 19 instances. Among the 32 instances with biliary symptoms and abnormal liver function, HBS allowed correct diagnosis in all 32 (9 total obstructions, 14 segmental obstructions, and 9 non-obstructions). Of the 40 with nonspecific symptoms or isolated elevation of serum alkaline phosphatase, HBS diagnosed 8 of the 9 segmental biliary obstructions and 30 of the 31 non-obstructions. There was no biliary obstruction and no false-positive scintigraphy result in the 26 instances that had no clinical symptoms or signs of biliary obstruction. Diagnostic sensitivity of HBS was 100% (9/9) for total biliary obstruction and 96% (22/23) for segmental biliary obstruction. Specificity was 98% (39/40) in patients who had abnormal symptoms or signs. Hepatobiliary scintigraphy is a highly accurate modality in the evaluation of
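The reported accuracy figures can be reproduced directly from the counts given in the abstract:

```python
# Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP),
# using the instance counts reported in the abstract.
def pct(num, den):
    return round(100 * num / den)

print(pct(9, 9))     # total obstruction sensitivity: 100%
print(pct(22, 23))   # segmental obstruction sensitivity: 96%
print(pct(39, 40))   # specificity among symptomatic patients: 98%
```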

  14. Intensity-based hierarchical clustering in CT-scans: application to interactive segmentation in cardiology

    Science.gov (United States)

    Hadida, Jonathan; Desrosiers, Christian; Duong, Luc

    2011-03-01

The segmentation of anatomical structures in Computed Tomography Angiography (CTA) is a pre-operative task useful in image-guided surgery. Even though very robust and precise methods have been developed to help achieve a reliable segmentation (level sets, active contours, etc.), it remains very time-consuming both in terms of manual interaction and in terms of computation time. The goal of this study is to present a fast method to find coarse anatomical structures in CTA with few parameters, based on hierarchical clustering. The algorithm is organized as follows: first, a fast non-parametric histogram clustering method is proposed to compute a piecewise constant mask. A second step then indexes all the space-connected regions in the piecewise constant mask. Finally, a hierarchical clustering is achieved to build a graph representing the connections between the various regions in the piecewise constant mask. This step builds up structural knowledge about the image. Several interactive features for segmentation are presented, for instance the association or disassociation of anatomical structures. A comparison with the Mean-Shift algorithm is presented.
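The three steps (piecewise-constant mask, region indexing, region-adjacency graph) can be sketched as follows; uniform intensity binning stands in here for the paper's non-parametric histogram clustering:

```python
from collections import deque

def segment(img, n_bins=4, lo=0, hi=256):
    """Quantise intensities into bins (a stand-in for histogram clustering),
    index the space-connected regions, and build the region-adjacency
    graph that supports interactive merging/splitting."""
    h, w = len(img), len(img[0])
    width = (hi - lo) / n_bins
    mask = [[int((v - lo) // width) for v in row] for row in img]  # step 1

    region = [[-1] * w for _ in range(h)]
    n = 0
    for i in range(h):
        for j in range(w):
            if region[i][j] == -1:            # step 2: flood-fill one region
                q = deque([(i, j)])
                region[i][j] = n
                while q:
                    y, x = q.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w and region[ny][nx] == -1
                                and mask[ny][nx] == mask[y][x]):
                            region[ny][nx] = n
                            q.append((ny, nx))
                n += 1

    edges = set()                             # step 3: region-adjacency graph
    for i in range(h):
        for j in range(w):
            for di, dj in ((1, 0), (0, 1)):
                ni, nj = i + di, j + dj
                if ni < h and nj < w and region[i][j] != region[ni][nj]:
                    edges.add(tuple(sorted((region[i][j], region[ni][nj]))))
    return n, sorted(edges)

img = [[10, 10, 200, 200],
       [10, 10, 200, 200],
       [100, 100, 100, 100]]
print(segment(img))   # -> (3, [(0, 1), (0, 2), (1, 2)])
```

The adjacency graph is what enables the interactive association/disassociation of structures described in the abstract.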

  15. Object-based implicit learning in visual search: perceptual segmentation constrains contextual cueing.

    Science.gov (United States)

    Conci, Markus; Müller, Hermann J; von Mühlenen, Adrian

    2013-07-09

In visual search, detection of a target is faster when it is presented within a spatial layout of repeatedly encountered nontarget items, indicating that contextual invariances can guide selective attention (contextual cueing; Chun & Jiang, 1998). However, perceptual regularities may interfere with contextual learning; for instance, no contextual facilitation occurs when four nontargets form a square-shaped grouping, even though the square location predicts the target location (Conci & von Mühlenen, 2009). Here, we further investigated potential causes for this interference effect: we show that contextual cueing can reliably occur for targets located within the region of a segmented object, but not for targets presented outside of the object's boundaries. Four experiments demonstrate an object-based facilitation in contextual cueing, with a modulation of context-based learning by relatively subtle grouping cues including closure, symmetry, and spatial regularity. Moreover, the lack of contextual cueing for targets located outside the segmented region was due to an absence of (latent) learning of contextual layouts, rather than to an attentional bias towards the grouped region. Taken together, these results indicate that perceptual segmentation provides a basic structure within which contextual scene regularities are acquired. This in turn argues that contextual learning is constrained by object-based selection.

  16. Osmotic and Heat Stress Effects on Segmentation.

    Directory of Open Access Journals (Sweden)

    Julian Weiss

    Full Text Available During vertebrate embryonic development, early skin, muscle, and bone progenitor populations organize into segments known as somites. Defects in this conserved process of segmentation lead to skeletal and muscular deformities, such as congenital scoliosis, a curvature of the spine caused by vertebral defects. Environmental stresses such as hypoxia or heat shock produce segmentation defects, and significantly increase the penetrance and severity of vertebral defects in genetically susceptible individuals. Here we show that a brief exposure to a high osmolarity solution causes reproducible segmentation defects in developing zebrafish (Danio rerio embryos. Both osmotic shock and heat shock produce border defects in a dose-dependent manner, with an increase in both frequency and severity of defects. We also show that osmotic treatment has a delayed effect on somite development, similar to that observed in heat shocked embryos. Our results establish osmotic shock as an alternate experimental model for stress, affecting segmentation in a manner comparable to other known environmental stressors. The similar effects of these two distinct environmental stressors support a model in which a variety of cellular stresses act through a related response pathway that leads to disturbances in the segmentation process.

  17. SVM Pixel Classification on Colour Image Segmentation

    Science.gov (United States)

    Barui, Subhrajit; Latha, S.; Samiappan, Dhanalakshmi; Muthu, P.

    2018-04-01

The aim of image segmentation is to simplify the representation of an image by clustering pixels into something meaningful to analyze. Segmentation is typically used to locate boundaries and curves in an image, more precisely to label every pixel so that each pixel has an independent identity. SVM pixel classification for colour image segmentation is the topic of this paper. It has useful applications in concept-based image retrieval, machine vision, medical imaging, and object detection. The process is accomplished step by step. First, the colour and texture features used as input to the SVM classifier are extracted, via a local spatial similarity measure model and a steerable filter, also known as a Gabor filter. The classifier is then trained using FCM (fuzzy c-means). Both the pixel-level information of the image and the SVM classifier are combined through the segmentation algorithm to form the final image. The method yields a well-developed segmented image, with higher quality and faster processing than the segmentation methods proposed earlier. One recent application is the Light L16 camera.
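A minimal linear SVM pixel classifier can be sketched with sub-gradient descent on the hinge loss. The colour feature values below are hypothetical, and the paper's Gabor-filter features and FCM-based training are omitted:

```python
def train_linear_svm(X, y, eta=0.1, lam=0.001, epochs=500):
    """Sub-gradient descent on the L2-regularised hinge loss: a minimal
    linear SVM, standing in for the paper's pixel classifier."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            margin = yi * (sum(wj * xj for wj, xj in zip(w, xi)) + b)
            if margin < 1:                        # hinge sub-gradient step
                w = [wj + eta * (yi * xj - lam * wj) for wj, xj in zip(w, xi)]
                b += eta * yi
            else:                                 # only the L2 penalty acts
                w = [wj * (1 - eta * lam) for wj in w]
    return w, b

# Hypothetical normalised RGB features for "skin" (+1) vs background (-1):
X = [[0.90, 0.60, 0.50], [0.85, 0.55, 0.45], [0.80, 0.60, 0.50],
     [0.20, 0.40, 0.90], [0.10, 0.50, 0.30], [0.30, 0.30, 0.80]]
y = [1, 1, 1, -1, -1, -1]
w, b = train_linear_svm(X, y)
preds = [1 if sum(wj * xj for wj, xj in zip(w, x)) + b > 0 else -1 for x in X]
print(preds == y)
```

In practice one would use a kernel SVM from an established library and per-pixel texture features rather than raw colours; this sketch only shows the hinge-loss training loop.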

  18. Event segmentation ability uniquely predicts event memory.

    Science.gov (United States)

    Sargent, Jesse Q; Zacks, Jeffrey M; Hambrick, David Z; Zacks, Rose T; Kurby, Christopher A; Bailey, Heather R; Eisenberg, Michelle L; Beck, Taylor M

    2013-11-01

Memory for everyday events plays a central role in tasks of daily living, autobiographical memory, and planning. Event memory depends in part on segmenting ongoing activity into meaningful units. This study examined the relationship between event segmentation and memory in a lifespan sample to answer the following question: Is the ability to segment activity into meaningful events a unique predictor of subsequent memory, or is the relationship between event perception and memory accounted for by general cognitive abilities? Two hundred and eight adults ranging from 20 to 79 years old segmented movies of everyday events and attempted to remember the events afterwards. They also completed psychometric ability tests and tests measuring script knowledge for everyday events. Event segmentation and script knowledge both explained unique variance in event memory above and beyond the psychometric measures, and did so as strongly in older as in younger adults. These results suggest that event segmentation is a basic cognitive mechanism, important for memory across the lifespan. Copyright © 2013 Elsevier B.V. All rights reserved.

  19. Process Segmentation Typology in Czech Companies

    Directory of Open Access Journals (Sweden)

    Tucek David

    2016-03-01

    Full Text Available This article describes process segmentation typology during business process management implementation in Czech companies. Process typology is important for a manager’s overview of process orientation as well as for a manager’s general understanding of business process management. This article provides insight into a process-oriented organizational structure. The first part analyzes process segmentation typology itself as well as some original results of quantitative research evaluating process segmentation typology in the specific context of Czech company strategies. Widespread data collection was carried out in 2006 and 2013. The analysis of this data showed that managers have more options regarding process segmentation and its selection. In terms of practicality and ease of use, the most frequently used method of process segmentation (managerial, main, and supportive stems directly from the requirements of ISO 9001. Because of ISO 9001:2015, managers must now apply risk planning in relation to the selection of processes that are subjected to process management activities. It is for this fundamental reason that this article focuses on process segmentation typology.

  20. Fold distributions at clover, crystal and segment levels for segmented clover detectors

    International Nuclear Information System (INIS)

    Kshetri, R; Bhattacharya, P

    2014-01-01

Fold distributions at clover, crystal and segment levels have been extracted for an array of segmented clover detectors for various gamma energies. A simple analysis of the results based on a model-independent approach has been presented. For the first time, the clover fold distribution of an array and the associated array addback factor have been extracted. We have calculated the percentages of the number of crystals and segments that fire for a full-energy-peak event
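As a simple illustration of what a fold distribution is (not the paper's model-independent extraction), assume each of the four crystals of a clover fires independently with probability p; the fold distribution is then binomial:

```python
from math import comb

def fold_distribution(n, p):
    """P(exactly k of n detector elements fire), assuming independent
    firing with probability p -- an illustrative binomial model only."""
    return [comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)]

dist = fold_distribution(4, 0.3)       # e.g. the 4 crystals of one clover
print([round(x, 4) for x in dist])     # -> [0.2401, 0.4116, 0.2646, 0.0756, 0.0081]
print(round(sum(dist), 10))            # probabilities sum to 1 -> 1.0
```

Real fold distributions are extracted from measured coincidence data, where Compton scattering between elements makes firing strongly correlated; the binomial case is only a baseline.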

  1. Glenoid version by CT scan: an analysis of clinical measurement error and introduction of a protocol to reduce variability

    Energy Technology Data Exchange (ETDEWEB)

    Bunt, Fabian van de [VU University Medical Center, Amsterdam (Netherlands); Pearl, Michael L.; Lee, Eric K.; Peng, Lauren; Didomenico, Paul [Kaiser Permanente, Los Angeles, CA (United States)

    2015-11-15

Recent studies have challenged the accuracy of conventional measurements of glenoid version. Variability in the orientation of the scapula from individual anatomical differences and patient positioning, combined with differences in observer measurement practices, have been identified as sources of variability. The purpose of this study was to explore the utility and reliability of clinically available software that allows manipulation of three-dimensional images in order to bridge the variance between clinical and anatomic version in a clinical setting. Twenty CT scans of normal glenoids of patients who had proximal humerus fractures were measured for version. Four reviewers first measured version in a conventional manner (clinical version); measurements were made again (anatomic version) after employing a protocol for reformatting the CT data to align the coronal and sagittal planes with the superior-inferior axis of the glenoid, and the scapular body, respectively. The average value of clinical retroversion for all reviewers and all subjects was -1.4° (range, -16° to 21°), as compared to -3.2° (range, -21° to 6°) when measured from reformatted images. The mean difference between anatomical and clinical version was 1.9° ± 5.6° but ranged on individual measurements from -13° to 26°. In no instance did all four observers choose the same image slice from the sequence of images. This study confirmed the variation in glenoid version dependent on scapular orientation previously identified in other studies using scapular models, and presents a clinically accessible protocol to correct for scapular orientation from the patient's CT data. (orig.)

  2. Glenoid version by CT scan: an analysis of clinical measurement error and introduction of a protocol to reduce variability

    International Nuclear Information System (INIS)

    Bunt, Fabian van de; Pearl, Michael L.; Lee, Eric K.; Peng, Lauren; Didomenico, Paul

    2015-01-01

Recent studies have challenged the accuracy of conventional measurements of glenoid version. Variability in the orientation of the scapula from individual anatomical differences and patient positioning, combined with differences in observer measurement practices, have been identified as sources of variability. The purpose of this study was to explore the utility and reliability of clinically available software that allows manipulation of three-dimensional images in order to bridge the variance between clinical and anatomic version in a clinical setting. Twenty CT scans of normal glenoids of patients who had proximal humerus fractures were measured for version. Four reviewers first measured version in a conventional manner (clinical version); measurements were made again (anatomic version) after employing a protocol for reformatting the CT data to align the coronal and sagittal planes with the superior-inferior axis of the glenoid, and the scapular body, respectively. The average value of clinical retroversion for all reviewers and all subjects was -1.4° (range, -16° to 21°), as compared to -3.2° (range, -21° to 6°) when measured from reformatted images. The mean difference between anatomical and clinical version was 1.9° ± 5.6° but ranged on individual measurements from -13° to 26°. In no instance did all four observers choose the same image slice from the sequence of images. This study confirmed the variation in glenoid version dependent on scapular orientation previously identified in other studies using scapular models, and presents a clinically accessible protocol to correct for scapular orientation from the patient's CT data. (orig.)
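A version angle of the kind measured here can be illustrated in two dimensions as the angle between the glenoid line and the perpendicular to the scapular axis. The landmark coordinates and sign convention below are assumptions for illustration, not the study's protocol:

```python
import math

def version_angle(ant_rim, post_rim, medial_border, glenoid_centre):
    """Signed angle between the glenoid line (anterior -> posterior rim)
    and the perpendicular to the scapular axis (medial border ->
    glenoid centre), in degrees. 2-D sketch; sign convention assumed."""
    gx, gy = post_rim[0] - ant_rim[0], post_rim[1] - ant_rim[1]
    ax, ay = (glenoid_centre[0] - medial_border[0],
              glenoid_centre[1] - medial_border[1])
    # angle between glenoid line and scapular axis, minus 90 degrees
    return math.degrees(math.atan2(gx * ay - gy * ax, gx * ax + gy * ay)) - 90.0

# Toy landmarks: scapular axis along x; glenoid line tilted ~5 degrees.
print(round(version_angle((10.09, 1.0), (9.91, -1.0), (0.0, 0.0), (10.0, 0.0)), 1))  # -> 5.1
```

The study's point is precisely that such a measurement changes with the plane in which the landmarks are taken, which is why the reformatting protocol matters.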

  3. Deformable meshes for medical image segmentation: accurate automatic segmentation of anatomical structures

    CERN Document Server

    Kainmueller, Dagmar

    2014-01-01

    Segmentation of anatomical structures in medical image data is an essential task in clinical practice. Dagmar Kainmueller introduces methods for accurate fully automatic segmentation of anatomical structures in 3D medical image data. The author's core methodological contribution is a novel deformation model that overcomes limitations of state-of-the-art Deformable Surface approaches, hence allowing for accurate segmentation of tip- and ridge-shaped features of anatomical structures. As for practical contributions, she proposes application-specific segmentation pipelines for a range of anatom

  4. Integrated Global Radiosonde Archive (IGRA) Version 2

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Integrated Global Radiosonde Archive (IGRA) Version 2 consists of quality-controlled radiosonde observations of temperature, humidity, and wind at stations across...

  5. Integrated Procurement Management System, Version II

    Science.gov (United States)

    Collier, L. J.

    1985-01-01

    Integrated Procurement Management System, Version II (IPMS II) is an online/batch system for collecting, developing, managing, and disseminating procurement-related data at NASA Johnson Space Center. Portions of IPMS II are adaptable to other procurement situations.

  6. TJ-II Library Manual (Version 2)

    International Nuclear Information System (INIS)

    Tribaldos, V.; Milligen, B. Ph. van; Lopez-Fraguas, A.

    2001-01-01

    This is a user manual for the TJ2 Numerical Library, which has been developed for making numerical computations of different TJ-II configurations. This manual is a new version of the earlier manual, CIEMAT report 806. (Author)

  7. Fetomaternal hemorrhage during external cephalic version.

    Science.gov (United States)

    Boucher, Marc; Marquette, Gerald P; Varin, Jocelyne; Champagne, Josette; Bujold, Emmanuel

    2008-07-01

    To estimate the frequency and volume of fetomaternal hemorrhage during external cephalic version for term breech singleton fetuses and to identify risk factors involved with this complication. A prospective observational study was performed including all patients undergoing a trial of external cephalic version for a breech presentation of at least 36 weeks of gestation between 1987 and 2001 in our center. A search for fetal erythrocytes using the standard Kleihauer-Betke test was obtained before and after each external cephalic version. The frequency and volume of fetomaternal hemorrhage were calculated. Putative risk factors for fetomaternal hemorrhage were evaluated by the χ² test and the Mann-Whitney U test. A Kleihauer-Betke test result was available before and after 1,311 trials of external cephalic version. The Kleihauer-Betke test was positive in 67 (5.1%) before the procedure. Of the 1,244 women with a negative Kleihauer-Betke test before external cephalic version, 30 (2.4%) had a positive Kleihauer-Betke test after the procedure. Ten (0.8%) had an estimated fetomaternal hemorrhage greater than 1 mL, and one (0.08%) had an estimated fetomaternal hemorrhage greater than 30 mL. The risk of fetomaternal hemorrhage was not influenced by parity, gestational age, body mass index, number of attempts at version, placental location, or amniotic fluid index. The risk of detectable fetomaternal hemorrhage during external cephalic version was 2.4%, with fetomaternal hemorrhage more than 30 mL in less than 0.1% of cases. These data suggest that the performance of a Kleihauer-Betke test is unwarranted in uneventful external cephalic version and that in Rh-negative women, no further Rh immune globulin is necessary other than the routine 300-microgram dose at 28 weeks of gestation and postpartum. Level of evidence: II.

  8. Anesthetic management of external cephalic version.

    Science.gov (United States)

    Chalifoux, Laurie A; Sullivan, John T

    2013-09-01

    Breech presentation is common at term and its reduction through external cephalic version represents a noninvasive opportunity to avoid cesarean delivery and the associated maternal morbidity. In addition to uterine relaxants, neuraxial anesthesia is associated with increased success of version procedures when surgical anesthetic dosing is used. The intervention is likely cost effective given the effect size and the avoided high costs of cesarean delivery. Copyright © 2013 Elsevier Inc. All rights reserved.

  9. SHUFFLE. Windows 95/98/2000 version

    International Nuclear Information System (INIS)

    Slavic, S.; Zefran, B.

    2000-01-01

    The program package SHUFFLE was developed to help the user during fuel loading and unloading operations at a nuclear power plant. The first version, developed in 1992, was written in the CLIPPER programming language and ran under the DOS operating system. Since the DOS environment exhibits several drawbacks regarding code portability and flexibility, the recent SHUFFLE version has been transformed to run under the MS Windows operating system. (author)

  10. Ecodesign Directive version 2.0

    DEFF Research Database (Denmark)

    This report presents the main findings of the project Ecodesign Directive version 2.0 - from Energy Efficiency to Resource Efficiency. The project was financed by the Danish Environmental Protection Agency and ran from December 2012 to June 2014.

  11. Cubical version of combinatorial differential forms

    DEFF Research Database (Denmark)

    Kock, Anders

    2010-01-01

    The theory of combinatorial differential forms is usually presented in simplicial terms. We present here a cubical version; it depends on the possibility of forming affine combinations of mutual neighbour points in a manifold, in the context of synthetic differential geometry.

  12. Implementing version support for complex objects

    OpenAIRE

    Blanken, Henk

    1991-01-01

    New applications in the area of office information systems, computer aided design and manufacturing make new demands upon database management systems. Among others, highly structured objects and their history have to be represented and manipulated. The paper discusses some general problems concerning the access and storage of complex objects with their versions, and the solutions developed within the AIM/II project. Queries related to versions are distinguished in ASOF queries (asking informati...

  13. A kernel version of multivariate alteration detection

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg; Vestergaard, Jacob Schack

    2013-01-01

    Based on the established methods kernel canonical correlation analysis and multivariate alteration detection, we introduce a kernel version of multivariate alteration detection. A case study with SPOT HRV data shows that the kMAD variates focus on extreme change observations.
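    The record is terse, so a one-line sketch of the construction being kernelised may help. This is the standard MAD definition from the change-detection literature, not detail taken from this abstract: with two p-band acquisitions X and Y, canonical correlation analysis yields projection pairs (a_i, b_i), and the MAD variates are the paired differences

```latex
M_i \;=\; a_i^{\mathsf{T}} X \;-\; b_i^{\mathsf{T}} Y, \qquad i = 1, \dots, p ,
```

    where each pair (a_i, b_i) maximises the correlation between a_iᵀX and b_iᵀY under unit-variance constraints. The kernel version evaluates the inner products behind CCA through a kernel function, so the projections live in a (possibly nonlinear) feature space, and change is flagged where the kMAD variates take extreme values.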

  14. A method and software for segmentation of anatomic object ensembles by deformable m-reps

    International Nuclear Information System (INIS)

    Pizer, Stephen M.; Fletcher, P. Thomas; Joshi, Sarang; Gash, A. Graham; Stough, Joshua; Thall, Andrew; Tracton, Gregg; Chaney, Edward L.

    2005-01-01

    Deformable shape models (DSMs) comprise a general approach that shows great promise for automatic image segmentation. Published studies by others and our own research results strongly suggest that segmentation of a normal or near-normal object from 3D medical images will be most successful when the DSM approach uses (1) knowledge of the geometry of not only the target anatomic object but also the ensemble of objects providing context for the target object and (2) knowledge of the image intensities to be expected relative to the geometry of the target and contextual objects. The segmentation will be most efficient when the deformation operates at multiple object-related scales and uses deformations that include not just local translations but the biologically important transformations of bending and twisting, i.e., local rotation, and local magnification. In computer vision an important class of DSM methods uses explicit geometric models in a Bayesian statistical framework to provide a priori information used in posterior optimization to match the DSM against a target image. In this approach a DSM of the object to be segmented is placed in the target image data and undergoes a series of rigid and nonrigid transformations that deform the model to closely match the target object. The deformation process is driven by optimizing an objective function that has terms for the geometric typicality and model-to-image match for each instance of the deformed model. The success of this approach depends strongly on the object representation, i.e., the structural details and parameter set for the DSM, which in turn determines the analytic form of the objective function. This paper describes a form of DSM called m-reps that has or allows these properties, and a method of segmentation consisting of large to small scale posterior optimization of m-reps. 
Segmentation by deformable m-reps, together with the appropriate data representations, visualizations, and user interface, has been
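    The two-term objective described above (geometric typicality plus model-to-image match) can be sketched in miniature. The toy below is not the m-reps parameterization: the deformed model is reduced to a plain parameter vector, typicality is a Gaussian shape prior, `match_fn` is a hypothetical stand-in for the image-match term, and a naive coordinate ascent stands in for the paper's large-to-small-scale posterior optimization.

```python
import numpy as np

def objective(params, prior_mean, prior_icov, match_fn, alpha=1.0):
    # Geometric typicality: log-density of a Gaussian shape prior.
    d = params - prior_mean
    typicality = -0.5 * d @ prior_icov @ d
    # Model-to-image match, supplied by the caller (hypothetical here).
    return typicality + alpha * match_fn(params)

def fit_model(params, prior_mean, prior_icov, match_fn, step=0.1, iters=200):
    # Naive coordinate ascent on the posterior objective.
    for _ in range(iters):
        for i in range(len(params)):
            for delta in (step, -step):
                trial = params.copy()
                trial[i] += delta
                if (objective(trial, prior_mean, prior_icov, match_fn)
                        > objective(params, prior_mean, prior_icov, match_fn)):
                    params = trial
    return params

# Toy: the image match is best at (3, -1); the prior pulls toward 0,
# so the fitted parameters land slightly inside that point.
prior_mean = np.zeros(2)
prior_icov = 0.1 * np.eye(2)
match_fn = lambda p: -np.sum((p - np.array([3.0, -1.0])) ** 2)
fit = fit_model(np.zeros(2), prior_mean, prior_icov, match_fn)
```

    The balance between the two terms (here `alpha` and the prior precision) is what lets a strong prior keep the deformation anatomically plausible even where the image evidence is weak.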

  15. Single-segment and double-segment INTACS for post-LASIK ectasia.

    Directory of Open Access Journals (Sweden)

    Hassan Hashemi

    2014-09-01

    Full Text Available The objective of the present study was to compare single-segment and double-segment INTACS rings in the treatment of post-LASIK ectasia. In this interventional study, 26 eyes with post-LASIK ectasia were assessed. Ectasia was defined as progressive myopia regardless of astigmatism, along with topographic evidence of inferior steepening of the cornea after LASIK. We excluded those with a history of intraocular surgery, certain eye conditions, and immune disorders, as well as monocular, pregnant and lactating patients. A total of 11 eyes had double-ring and 15 eyes had single-ring implantation. Visual and refractive outcomes were compared with preoperative values based on the number of implanted INTACS rings. Pre- and postoperative spherical equivalents were -3.92 and -2.29 diopters (P=0.007). The spherical equivalent decreased by 1 ± 3.2 diopters in the single-segment group and 2.56 ± 1.58 diopters in the double-segment group (P=0.165). Mean preoperative astigmatism was 2.38 ± 1.93 diopters, which decreased to 2.14 ± 1.1 diopters after surgery (P=0.508); there was a 0.87 ± 1.98 diopter decrease in the single-segment group and a 0.67 ± 1.2 diopter increase in the double-segment group (P=0.025). Nineteen patients (75%) gained one or two lines, and only three, who were all in the double-segment group, lost one or two lines of best corrected visual acuity. The spherical equivalent and vision significantly decreased in all patients. In these post-LASIK ectasia patients, the spherical equivalent was corrected better with two segments compared to single-segment implantation; nonetheless, the level of astigmatism in the single-segment group was significantly better than that in the double-segment group.

  16. SALIENCY BASED SEGMENTATION OF SATELLITE IMAGES

    Directory of Open Access Journals (Sweden)

    A. Sharma

    2015-03-01

    Full Text Available Saliency captures the way humans see an image, and saliency-based segmentation can eventually be helpful in psychovisual image interpretation. With this in view, a few saliency models are used along with a segmentation algorithm, and only the salient segments are extracted from the image. The work is carried out for terrestrial images as well as for satellite images. The methodology used in this work extracts those segments from the segmented image which have a saliency value higher than or equal to a threshold value. Salient and non-salient regions of the image become foreground and background respectively, and thus the image gets separated. For carrying out this work, a dataset of terrestrial images and Worldview 2 satellite images (sample data) are used. Results show that those saliency models which work better for terrestrial images are not good enough for satellite images in terms of foreground and background separation. Foreground and background separation in terrestrial images is based on salient objects visible in the images, whereas in satellite images this separation is based on salient areas rather than salient objects.
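    The segment-selection rule described here (keep segments whose saliency meets a threshold) is simple to state in code. A minimal sketch, assuming an integer label map from any segmentation algorithm and a per-pixel saliency map in [0, 1]; the names and toy data are illustrative, not taken from the paper:

```python
import numpy as np

def salient_foreground(labels, saliency, threshold):
    # Keep a segment if its mean saliency meets the threshold.
    mask = np.zeros(labels.shape, dtype=bool)
    for seg_id in np.unique(labels):
        region = labels == seg_id
        if saliency[region].mean() >= threshold:
            mask |= region  # this segment joins the foreground
    return mask

# Toy 2x4 image with two segments: segment 0 is salient, segment 1 is not.
labels = np.array([[0, 0, 1, 1],
                   [0, 0, 1, 1]])
saliency = np.array([[0.9, 0.8, 0.1, 0.2],
                     [0.7, 0.9, 0.2, 0.1]])
fg = salient_foreground(labels, saliency, threshold=0.5)
```

    The resulting boolean mask is the foreground/background separation the paper evaluates; everything else (the choice of saliency model and of segmentation algorithm) happens upstream of this step.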

  17. Exploratory analysis of genomic segmentations with Segtools

    Directory of Open Access Journals (Sweden)

    Buske Orion J

    2011-10-01

    Full Text Available Abstract Background As genome-wide experiments and annotations become more prevalent, researchers increasingly require tools to help interpret data at this scale. Many functional genomics experiments involve partitioning the genome into labeled segments, such that segments sharing the same label exhibit one or more biochemical or functional traits. For example, a collection of ChIP-seq experiments yields a compendium of peaks, each labeled with one or more associated DNA-binding proteins. Similarly, manually or automatically generated annotations of functional genomic elements, including cis-regulatory modules and protein-coding or RNA genes, can also be summarized as genomic segmentations. Results We present a software toolkit called Segtools that simplifies and automates the exploration of genomic segmentations. The software operates as a series of interacting tools, each of which provides one mode of summarization. These various tools can be pipelined and summarized in a single HTML page. We describe the Segtools toolkit and demonstrate its use in interpreting a collection of human histone modification data sets and Plasmodium falciparum local chromatin structure data sets. Conclusions Segtools provides a convenient, powerful means of interpreting a genomic segmentation.

  18. Segmentation in reading and film comprehension.

    Science.gov (United States)

    Zacks, Jeffrey M; Speer, Nicole K; Reynolds, Jeremy R

    2009-05-01

    When reading a story or watching a film, comprehenders construct a series of representations in order to understand the events depicted. Discourse comprehension theories and a recent theory of perceptual event segmentation both suggest that comprehenders monitor situational features such as characters' goals, to update these representations at natural boundaries in activity. However, the converging predictions of these theories had previously not been tested directly. Two studies provided evidence that changes in situational features such as characters, their locations, their interactions with objects, and their goals are related to the segmentation of events in both narrative texts and films. A 3rd study indicated that clauses with event boundaries are read more slowly than are other clauses and that changes in situational features partially mediate this relation. A final study suggested that the predictability of incoming information influences reading rate and possibly event segmentation. Taken together, these results suggest that processing situational changes during comprehension is an important determinant of how one segments ongoing activity into events and that this segmentation is related to the control of processing during reading. (c) 2009 APA, all rights reserved.

  19. Experience with mechanical segmentation of reactor internals

    International Nuclear Information System (INIS)

    Carlson, R.; Hedin, G.

    2003-01-01

    Operating experience from BWR:s world-wide has shown that many plants experience initial cracking of the reactor internals after approximately 20 to 25 years of service life. This ''mid-life crisis'', considering a plant design life of 40 years, is now being addressed by many utilities. Successful resolution of these issues should give many more years of trouble-free operation. Replacement of reactor internals could be, in many cases, the most favourable option to achieve this. The proactive strategy of many utilities to replace internals in a planned way is a market-driven effort to minimize the overall costs for power generation, including time spent for handling contingencies and unplanned outages. Based on technical analyses and knowledge about component market prices and in-house costs, a cost-effective, optimized strategy for inspection, mitigation and replacements can be implemented. Also, decommissioning of nuclear plants has become a reality for many utilities, as numerous plants worldwide are closed due to age and/or other reasons. These facts underline the need for safe, fast and cost-effective methods for segmentation of internals. Westinghouse has over the last years developed methods for segmentation of internals and has also carried out successful segmentation projects. Our experience from the segmentation business for Nordic BWR:s is that the most important parameters to consider when choosing a method and equipment for a segmentation project are: - Safety, - Cost-effectiveness, - Cleanliness, - Reliability. (orig.)

  20. Segmentation and Visualisation of Human Brain Structures

    Energy Technology Data Exchange (ETDEWEB)

    Hult, Roger

    2003-10-01

    In this thesis the focus is mainly on the development of segmentation techniques for human brain structures and on the visualisation of such structures. The images of the brain are both anatomical images (magnetic resonance imaging (MRI) and autoradiography) and functional images that show blood flow (functional magnetic resonance imaging (fMRI), positron emission tomography (PET), and single photon emission computed tomography (SPECT)). When working with anatomical images, the structures segmented are visible as different parts of the brain, e.g. the brain cortex, the hippocampus, or the amygdala. In functional images, it is the activity or the blood flow that can be seen. Grey-level morphology methods are used in the segmentations to make tissue types in the images more homogeneous and to minimise difficulties with connections to outside tissue. A method for automatic histogram thresholding is also used. Furthermore, there are binary operations such as logic operations between masks and binary morphology operations. The visualisation of the segmented structures uses either surface rendering or volume rendering. For the visualisation of thin structures, surface rendering is the better choice since otherwise some voxels might be missed. It is possible to display activation from a functional image on the surface of a segmented cortex. A new method for autoradiographic images has been developed, which integrates registration, background compensation, and automatic thresholding to get faster and more reliable results than the standard techniques give.
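    The "automatic histogram thresholding" step mentioned above is commonly Otsu's method; the abstract does not name the thesis's exact algorithm, so the following is a generic sketch of that step: pick the grey level that maximises the between-class variance of the histogram.

```python
import numpy as np

def otsu_threshold(img, nbins=256):
    # Histogram, class probabilities, and bin centres.
    hist, edges = np.histogram(img, bins=nbins)
    p = hist / hist.sum()
    mids = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)          # probability mass of the low class
    mu = np.cumsum(p * mids)   # cumulative mean grey level
    mu_t = mu[-1]              # global mean
    # Between-class variance for every candidate split point.
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * w0 - mu) ** 2 / (w0 * (1 - w0))
    sigma_b = np.nan_to_num(sigma_b)  # empty classes score zero
    # Return the upper edge of the best bin: class 0 = values below it.
    return edges[np.argmax(sigma_b) + 1]

# Synthetic bimodal "image": two well-separated grey-level populations.
img = np.array([10.0] * 30 + [30.0] * 30 + [180.0] * 30 + [200.0] * 30)
t = otsu_threshold(img)
```

    In the thesis's pipeline a threshold like this would be combined with the grey-level morphology operations to produce the binary masks that the logic and binary-morphology operations then refine.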

  1. Segmentation and Visualisation of Human Brain Structures

    International Nuclear Information System (INIS)

    Hult, Roger

    2003-01-01

    In this thesis the focus is mainly on the development of segmentation techniques for human brain structures and on the visualisation of such structures. The images of the brain are both anatomical images (magnetic resonance imaging (MRI) and autoradiography) and functional images that show blood flow (functional magnetic resonance imaging (fMRI), positron emission tomography (PET), and single photon emission computed tomography (SPECT)). When working with anatomical images, the structures segmented are visible as different parts of the brain, e.g. the brain cortex, the hippocampus, or the amygdala. In functional images, it is the activity or the blood flow that can be seen. Grey-level morphology methods are used in the segmentations to make tissue types in the images more homogeneous and to minimise difficulties with connections to outside tissue. A method for automatic histogram thresholding is also used. Furthermore, there are binary operations such as logic operations between masks and binary morphology operations. The visualisation of the segmented structures uses either surface rendering or volume rendering. For the visualisation of thin structures, surface rendering is the better choice since otherwise some voxels might be missed. It is possible to display activation from a functional image on the surface of a segmented cortex. A new method for autoradiographic images has been developed, which integrates registration, background compensation, and automatic thresholding to get faster and more reliable results than the standard techniques give.

  2. Compound image segmentation of published biomedical figures.

    Science.gov (United States)

    Li, Pengyuan; Jiang, Xiangying; Kambhamettu, Chandra; Shatkay, Hagit

    2018-04-01

    Images convey essential information in biomedical publications. As such, there is a growing interest within the bio-curation and the bio-databases communities, to store images within publications as evidence for biomedical processes and for experimental results. However, many of the images in biomedical publications are compound images consisting of multiple panels, where each individual panel potentially conveys a different type of information. Segmenting such images into constituent panels is an essential first step toward utilizing images. In this article, we develop a new compound image segmentation system, FigSplit, which is based on Connected Component Analysis. To overcome shortcomings typically manifested by existing methods, we develop a quality assessment step for evaluating and modifying segmentations. Two methods are proposed to re-segment the images if the initial segmentation is inaccurate. Experimental results show the effectiveness of our method compared with other methods. The system is publicly available for use at: https://www.eecis.udel.edu/~compbio/FigSplit. The code is available upon request. shatkay@udel.edu. Supplementary data are available online at Bioinformatics.
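    FigSplit's core idea, connected component analysis, can be sketched as follows. This is a bare-bones illustration, not the published system (which adds the quality-assessment and re-segmentation steps described above): label the connected regions of non-background pixels and report one bounding box per region as a candidate panel.

```python
from collections import deque

def panel_bboxes(mask):
    # `mask` is a 2D list of 0/1, where 1 marks a non-background pixel.
    # Returns one (top, left, bottom, right) box per connected component.
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    boxes = []
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                # BFS flood fill to collect this component's extent.
                q, box = deque([(y, x)]), [y, x, y, x]
                seen[y][x] = True
                while q:
                    cy, cx = q.popleft()
                    box = [min(box[0], cy), min(box[1], cx),
                           max(box[2], cy), max(box[3], cx)]
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
                boxes.append(tuple(box))
    return boxes

# Two "panels" separated by a background gutter (toy compound figure):
figure = [[1, 1, 0, 1, 1],
          [1, 1, 0, 1, 1]]
panels = panel_bboxes(figure)
```

    On real figures the binarization step (deciding what counts as background) and the handling of touching or nested panels are where systems like FigSplit earn their keep; the quality-assessment step exists precisely because this naive labeling can over- or under-split.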

  3. Infants generalize representations of statistically segmented words

    Directory of Open Access Journals (Sweden)

    Katharine Graf Estes

    2012-10-01

    Full Text Available The acoustic variation in language presents learners with a substantial challenge. To learn by tracking statistical regularities in speech, infants must recognize words across tokens that differ based on characteristics such as the speaker’s voice, affect, or the sentence context. Previous statistical learning studies have not investigated how these types of surface form variation affect learning. The present experiments used tasks tailored to two distinct developmental levels to investigate the robustness of statistical learning to variation. Experiment 1 examined statistical word segmentation in 11-month-olds and found that infants can recognize statistically segmented words across a change in the speaker’s voice from segmentation to testing. The direction of infants’ preferences suggests that recognizing words across a voice change is more difficult than recognizing them in a consistent voice. Experiment 2 tested whether 17-month-olds can generalize the output of statistical learning across variation to support word learning. The infants were successful in their generalization; they associated referents with statistically defined words despite a change in voice from segmentation to label learning. Infants’ learning patterns also indicate that they formed representations of across-word syllable sequences during segmentation. Thus, low probability sequences can act as object labels in some conditions. The findings of these experiments suggest that the units that emerge during statistical learning are not perceptually constrained, but rather are robust to naturalistic acoustic variation.
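    The kind of statistical regularity these infants are thought to track is often modeled with forward transitional probabilities: syllable-to-syllable probabilities are high within a word and dip at word boundaries. A minimal sketch of that model (single characters stand in for syllables; the "words" and the threshold are invented for illustration, not taken from the experiments):

```python
from collections import Counter

def segment_by_tp(syllables, threshold):
    # Insert a word boundary wherever the forward transitional
    # probability P(next | current) dips below `threshold`.
    pair_counts = Counter(zip(syllables, syllables[1:]))
    start_counts = Counter(syllables[:-1])
    words, current = [], [syllables[0]]
    for a, b in zip(syllables, syllables[1:]):
        tp = pair_counts[(a, b)] / start_counts[a]
        if tp < threshold:  # low TP -> likely word boundary
            words.append("".join(current))
            current = []
        current.append(b)
    words.append("".join(current))
    return words

# Three invented "words" with high internal TP, concatenated in a
# fixed order to mimic a continuous familiarization stream:
stream = "".join(["tupiro", "gavelk", "tupiro", "bcdxyz", "gavelk",
                  "bcdxyz", "tupiro", "bcdxyz", "gavelk"])
words = segment_by_tp(list(stream), threshold=0.8)
```

    The experiments above go a step further than this sketch: they ask whether the chunks recovered this way survive surface variation (a new voice) and can then serve as labels in word learning.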

  4. Contour tracing for segmentation of mammographic masses

    International Nuclear Information System (INIS)

    Elter, Matthias; Held, Christian; Wittenberg, Thomas

    2010-01-01

    CADx systems have the potential to support radiologists in the difficult task of discriminating benign and malignant mammographic lesions. The segmentation of mammographic masses from the background tissue is an important module of CADx systems designed for the characterization of mass lesions. In this work, a novel approach to this task is presented. The segmentation is performed by automatically tracing the mass' contour in-between manually provided landmark points defined on the mass' margin. The performance of the proposed approach is compared to the performance of implementations of three state-of-the-art approaches based on region growing and dynamic programming. For an unbiased comparison of the different segmentation approaches, optimal parameters are selected for each approach by means of tenfold cross-validation and a genetic algorithm. Furthermore, segmentation performance is evaluated on a dataset of ROI and ground-truth pairs. The proposed method outperforms the three state-of-the-art methods. The benchmark dataset will be made available with publication of this paper and will be the first publicly available benchmark dataset for mass segmentation.
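    Contour tracing between landmark points is commonly posed as a minimum-cost path problem; the abstract does not give the paper's exact formulation, so the following sketches the generic version with Dijkstra's algorithm on a toy cost image, where cost is low along an "edge" and the cheapest path therefore follows the margin.

```python
import heapq

def trace_contour(cost, start, goal):
    # Minimum-cost 8-connected path between two landmark pixels.
    # The start pixel's cost is included in the path total.
    h, w = len(cost), len(cost[0])
    dist = {start: cost[start[0]][start[1]]}
    prev = {}
    pq = [(dist[start], start)]
    while pq:
        d, node = heapq.heappop(pq)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        y, x = node
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                ny, nx = y + dy, x + dx
                if (dy or dx) and 0 <= ny < h and 0 <= nx < w:
                    nd = d + cost[ny][nx]
                    if nd < dist.get((ny, nx), float("inf")):
                        dist[(ny, nx)] = nd
                        prev[(ny, nx)] = node
                        heapq.heappush(pq, (nd, (ny, nx)))
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]

# Cheap "edge" along the top row and right column, expensive interior:
cost = [[1, 1, 1, 1],
        [9, 9, 9, 1],
        [9, 9, 9, 1]]
path = trace_contour(cost, (0, 0), (2, 3))
```

    In mass segmentation the cost image would be derived from the mammogram (e.g. low where the gradient suggests a lesion margin), and a path like this would be traced between each consecutive pair of the manually provided landmarks.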

  5. Differences in axial segment reorientation during standing turns predict multiple falls in older adults

    OpenAIRE

    Wright, Rachel L.; Peters, Derek M.; Robinson, Paul D.; Sitch, Alice J.; Watt, Thomas N.; Hollands, Mark A.

    2012-01-01

    Author's version of an article in the journal: Gait and Posture. Also available from the publisher at: http://dx.doi.org/10.1016/j.gaitpost.2012.05.013 Background: The assessment of standing turning performance is proposed to predict fall risk in older adults. This study investigated differences in segmental coordination during a 360° standing turn task between older community-dwelling fallers and non-fallers. Methods: Thirty-five older adults age mean (SD) of 71 (5.4) years performed 360°...

  6. Poly(ether amide) segmented block copolymers with adipic acid based tetra amide segments

    NARCIS (Netherlands)

    Biemond, G.J.E.; Feijen, Jan; Gaymans, R.J.

    2007-01-01

    Poly(tetramethylene oxide)-based poly(ether ester amide)s with monodisperse tetraamide segments were synthesized. The tetraamide segment was based on adipic acid, terephthalic acid, and hexamethylenediamine. The synthesis method of the copolymers and the influence of the tetraamide concentration,

  7. Market Segmentation in Business Technology Base: The Case of Segmentation of Sparkling

    Directory of Open Access Journals (Sweden)

    Valéria Riscarolli

    2014-08-01

    Full Text Available A common market segmentation premise for products and services places consumer behavior at the center of segmentation. Would this be the segmentation logic used by small technology-based companies? In this article we aim to determine the principles of market segmentation used by a vitiwinery company, as the research object. This company is recognized for the excellence of its products, in the domestic as well as the foreign market, across 13 distinct countries. The research method used is a case study, drawing on information from the company's CEOs crossed with primary information from observation and from the company's formal registries and documents. In this research we look at sparkling wine market segmentation. Main results indicate that the winery studied considers only technological elements as the basis on which to build a market segment. One may conclude that market segmentation for this company is based upon technological dominion of sparkling wine production, aligned with a premium-price policy. The company's directors believe that, as the sparkling wine market is still incipient in the country, sparkling wine market segments will form and consolidate after the evolution of consumers' tasting preferences, depending on technologies that boost sparkling wine quality.

  8. POSTERIOR SEGMENT CAUSES OF BLINDNESS AMONG CHILDREN IN BLIND SCHOOLS

    Directory of Open Access Journals (Sweden)

    Sandhya

    2015-09-01

    Full Text Available BACKGROUND: It is estimated that there are 1.4 million irreversibly blind children in the world, of which 1 million are in Asia alone. India has more blind children than any other country. Nearly 70% of childhood blindness is avoidable. There is a paucity of data available on the causes of childhood blindness. This study focuses on the posterior segment causes of blindness among children attending blind schools in 3 adjacent districts of Andhra Pradesh. MATERIAL & METHODS: This is a cross-sectional study conducted among 204 blind children aged 6-16 years. Detailed eye examination was done by the same investigator to avoid bias. Posterior segment examination was done using a direct and/or indirect ophthalmoscope after dilating the pupil wherever necessary. The standard WHO/PBL blindness and low vision examination protocol was used to categorize the causes of blindness. A major anatomical site and underlying cause was selected for each child. The study was carried out during July 2014 to June 2015. The results were analyzed using MS Excel and Epi Info 7 statistical software. RESULTS: The majority of the children were aged 13-16 years (45.1%) and male (63.7%). A family history of blindness was noted in 26.0% and consanguinity was reported in 29.9% of cases. The majority fulfilled the WHO grade of blindness (73.0%), and in most cases the onset of blindness was from birth (83.7%). The etiology of blindness was unknown in the majority of cases (57.4%), while hereditary causes constituted 25.4% of cases. Posterior segment causes were responsible in 33.3% of cases, with the retina being the most commonly involved anatomical site (19.1%), followed by the optic nerve (14.2%). CONCLUSIONS: There is a need for mandatory ophthalmic evaluation, refraction, and assessment of low vision prior to admission into blind schools, with periodic evaluation every 2-3 years.

  9. Dynamics in international market segmentation of new product growth

    NARCIS (Netherlands)

    Lemmens, A.; Croux, C.; Stremersch, S.

    2012-01-01

    Prior international segmentation studies have been static in that they have identified segments that remain stable over time. This paper shows that country segments in new product growth are intrinsically dynamic. We propose a semiparametric hidden Markov model to dynamically segment countries based

  10. Why segmentation matters: experience-driven segmentation errors impair “morpheme” learning

    Science.gov (United States)

    Finn, Amy S.; Hudson Kam, Carla L.

    2015-01-01

    We ask whether an adult learner’s knowledge of their native language impedes statistical learning in a new language beyond just word segmentation (as previously shown). In particular, we examine the impact of native-language word-form phonotactics on learners’ ability to segment words into their component morphemes and learn phonologically triggered variation of morphemes. We find that learning is impaired when words and component morphemes are structured to conflict with a learner’s native-language phonotactic system, but not when native-language phonotactics do not conflict with morpheme boundaries in the artificial language. A learner’s native-language knowledge can therefore have a cascading impact affecting word segmentation and the morphological variation that relies upon proper segmentation. These results show that getting word segmentation right early in learning is deeply important for learning other aspects of language, even those (morphology) that are known to pose a great difficulty for adult language learners. PMID:25730305

  11. Why segmentation matters: Experience-driven segmentation errors impair "morpheme" learning.

    Science.gov (United States)

    Finn, Amy S; Hudson Kam, Carla L

    2015-09-01

    We ask whether an adult learner's knowledge of their native language impedes statistical learning in a new language beyond just word segmentation (as previously shown). In particular, we examine the impact of native-language word-form phonotactics on learners' ability to segment words into their component morphemes and learn phonologically triggered variation of morphemes. We find that learning is impaired when words and component morphemes are structured to conflict with a learner's native-language phonotactic system, but not when native-language phonotactics do not conflict with morpheme boundaries in the artificial language. A learner's native-language knowledge can therefore have a cascading impact affecting word segmentation and the morphological variation that relies upon proper segmentation. These results show that getting word segmentation right early in learning is deeply important for learning other aspects of language, even those (morphology) that are known to pose a great difficulty for adult language learners. (c) 2015 APA, all rights reserved).

  12. Rhythm-based segmentation of Popular Chinese Music

    DEFF Research Database (Denmark)

    Jensen, Karl Kristoffer

    2005-01-01

    We present a new method to segment popular music based on rhythm. By computing a shortest path based on the self-similarity matrix calculated from a model of rhythm, segmenting boundaries are found along the diagonal of the matrix. The cost of a new segment is optimized by matching manual...... and automatic segment boundaries. We compile a small song database of 21 randomly selected popular Chinese songs which come from Chinese Mainland, Taiwan and Hong Kong. The segmenting results on the small corpus show that 78% manual segmentation points are detected and 74% automatic segmentation points...
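
    The self-similarity computation mentioned above can be sketched as follows. This is an illustrative Python sketch, not the authors' implementation: the Euclidean distance metric and the toy one-dimensional feature vectors are assumptions made here for demonstration.

```python
import numpy as np

def self_similarity(features: np.ndarray) -> np.ndarray:
    """Pairwise Euclidean distances between per-frame rhythm features.

    features: (n_frames, d) array. The result is an (n_frames, n_frames)
    self-similarity matrix; internally coherent segments show up as
    low-cost blocks along its diagonal, which is where the shortest
    path is computed.
    """
    diff = features[:, None, :] - features[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))

# Toy example: two homogeneous "segments" of three frames each.
feats = np.array([[0.0], [0.0], [0.0], [1.0], [1.0], [1.0]])
ssm = self_similarity(feats)
```

    In this toy matrix the two 3x3 zero blocks on the diagonal correspond to the two segments, and the boundary between them appears as a jump in distance.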

  13. Underwater Object Segmentation Based on Optical Features

    Directory of Open Access Journals (Sweden)

    Zhe Chen

    2018-01-01

    Full Text Available Underwater optical environments are seriously affected by various optical inputs, such as artificial light, sky light, and ambient scattered light. The latter two can block underwater object segmentation tasks, since they inhibit the emergence of objects of interest and distort image information, while artificial light can contribute to segmentation. Artificial light often focuses on the object of interest, and, therefore, we can initially identify the region of target objects if the collimation of artificial light is recognized. Based on this concept, we propose an optical feature extraction, calculation, and decision method to identify the collimated region of artificial light as a candidate object region. Then, the second phase employs a level set method to segment the objects of interest within the candidate region. This two-phase structure largely removes background noise and highlights the outline of underwater objects. We test the performance of the method with diverse underwater datasets, demonstrating that it outperforms previous methods.

  14. Nuclear propulsion apparatus with alternate reactor segments

    International Nuclear Information System (INIS)

    Szekely, T.

    1979-01-01

    Nuclear propulsion apparatus comprising: (a) means for compressing incoming air; (b) nuclear fission reactor means for heating said air; (c) means for expanding a portion of the heated air to drive said compressing means; (d) said nuclear fission reactor means being divided into a plurality of radially extending segments; (e) means for directing a portion of the compressed air for heating through alternate segments of said reactor means and another portion of the compressed air for heating through the remaining segments of said reactor means; and (f) means for further expanding the heated air from said drive means and the remaining heated air from said reactor means through nozzle means to effect reactive thrust on said apparatus. 12 claims

  15. Segmental blood pressure after total hip replacement

    DEFF Research Database (Denmark)

    Gebuhr, Peter Henrik; Soelberg, M; Henriksen, Jens Henrik Sahl

    1992-01-01

    Twenty-nine patients due to have a total hip replacement had their systemic systolic and segmental blood pressures measured prior to operation and 1 and 6 weeks postoperatively. No patients had signs of ischemia. The segmental blood pressure was measured at the ankle and at the toes. A significant...... drop was found in all pressures 1 week postoperatively. The decrease followed the systemic pressure and was restored to normal after 6 weeks. In a group of six patients with preoperatively decreased ankle pressure, a significant transient further decrease in the ankle-toe gradient pressure was found...... on the operated side. None of the patients had symptoms from the lowered pressure. We conclude that in patients without signs of ischemia, the postoperative segmental pressure decrease is reversible and therefore not dangerous....

  16. Active mask segmentation of fluorescence microscope images.

    Science.gov (United States)

    Srinivasa, Gowri; Fickus, Matthew C; Guo, Yusong; Linstedt, Adam D; Kovacević, Jelena

    2009-08-01

    We propose a new active mask algorithm for the segmentation of fluorescence microscope images of punctate patterns. It combines the (a) flexibility offered by active-contour methods, (b) speed offered by multiresolution methods, (c) smoothing offered by multiscale methods, and (d) statistical modeling offered by region-growing methods into a fast and accurate segmentation tool. The framework moves from the idea of the "contour" to that of "inside and outside," or masks, allowing for easy multidimensional segmentation. It adapts to the topology of the image through the use of multiple masks. The algorithm is almost invariant under initialization, allowing for random initialization, and uses a few easily tunable parameters. Experiments show that the active mask algorithm matches the ground truth well and outperforms the algorithm widely used in fluorescence microscopy, seeded watershed, both qualitatively, as well as quantitatively.

  17. Natural color image segmentation using integrated mechanism

    Institute of Scientific and Technical Information of China (English)

    Jie Xu (徐杰); Pengfei Shi (施鹏飞)

    2003-01-01

    A new method for natural color image segmentation using an integrated mechanism is proposed in this paper. Edges are first detected in terms of the high phase congruency in the gray-level image. K-means clustering is used to label long edge lines based on the global color information to estimate roughly the distribution of objects in the image, while short ones are merged based on their positions and local color differences to eliminate the negative effects caused by texture or other trivial features in the image. A region growing technique is employed to achieve the final segmentation results. The proposed method unifies edges, global and local color distributions, as well as spatial information to solve the natural image segmentation problem. The feasibility and effectiveness of this method have been demonstrated by various experiments.

  18. Continuation of Sets of Constrained Orbit Segments

    DEFF Research Database (Denmark)

    Schilder, Frank; Brøns, Morten; Chamoun, George Chaouki

    Sets of constrained orbit segments of time continuous flows are collections of trajectories that represent a whole or parts of an invariant set. A non-trivial but simple example is a homoclinic orbit. A typical representation of this set consists of an equilibrium point of the flow and a trajectory...... that starts close and returns close to this fixed point within finite time. More complicated examples are hybrid periodic orbits of piecewise smooth systems or quasi-periodic invariant tori. Even though it is possible to define generalised two-point boundary value problems for computing sets of constrained...... orbit segments, this is very disadvantageous in practice. In this talk we will present an algorithm that allows the efficient continuation of sets of constrained orbit segments together with the solution of the full variational problem....

  19. CERES: A new cerebellum lobule segmentation method.

    Science.gov (United States)

    Romero, Jose E; Coupé, Pierrick; Giraud, Rémi; Ta, Vinh-Thong; Fonov, Vladimir; Park, Min Tae M; Chakravarty, M Mallar; Voineskos, Aristotle N; Manjón, Jose V

    2017-02-15

    The human cerebellum is involved in language, motor tasks and cognitive processes such as attention or emotional processing. Therefore, an automatic and accurate segmentation method is highly desirable to measure and understand the cerebellum role in normal and pathological brain development. In this work, we propose a patch-based multi-atlas segmentation tool called CERES (CEREbellum Segmentation) that is able to automatically parcellate the cerebellum lobules. The proposed method works with standard resolution magnetic resonance T1-weighted images and uses the Optimized PatchMatch algorithm to speed up the patch matching process. The proposed method was compared with related recent state-of-the-art methods showing competitive results in both accuracy (average DICE of 0.7729) and execution time (around 5 minutes). Copyright © 2016 Elsevier Inc. All rights reserved.

  20. Video-based noncooperative iris image segmentation.

    Science.gov (United States)

    Du, Yingzi; Arslanturk, Emrah; Zhou, Zhi; Belcher, Craig

    2011-02-01

    In this paper, we propose a video-based noncooperative iris image segmentation scheme that incorporates a quality filter to quickly eliminate images without an eye, employs a coarse-to-fine segmentation scheme to improve the overall efficiency, uses a direct least squares fitting of ellipses method to model the deformed pupil and limbic boundaries, and develops a window gradient-based method to remove noise in the iris region. A remote iris acquisition system is set up to collect noncooperative iris video images. An objective method is used to quantitatively evaluate the accuracy of the segmentation results. The experimental results demonstrate the effectiveness of this method. The proposed method would make noncooperative iris recognition or iris surveillance possible.
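
    The "direct least squares fitting of ellipses" step mentioned above can be sketched minimally with an ordinary least-squares conic fit. This is an illustrative simplification: the actual direct ellipse-fitting methods add an ellipse-specific constraint that this sketch omits, and the point set here is made up.

```python
import numpy as np

def fit_conic(x: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Least-squares fit of a*x^2 + b*x*y + c*y^2 + d*x + e*y = 1.

    Returns the coefficient vector [a, b, c, d, e]. Needs at least
    five points in general position on the boundary being modeled
    (here, a pupil or limbic boundary).
    """
    design = np.column_stack([x * x, x * y, y * y, x, y])
    coeffs, *_ = np.linalg.lstsq(design, np.ones_like(x), rcond=None)
    return coeffs

# Points on a circle of radius 2 (a circle is an ellipse, so the fit
# should recover x^2 + y^2 = 4, i.e. a = c = 0.25, b = d = e = 0):
t = np.linspace(0.0, 2.0 * np.pi, 8, endpoint=False)
a, b, c, d, e = fit_conic(2.0 * np.cos(t), 2.0 * np.sin(t))
```
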

  1. Interaction features for prediction of perceptual segmentation

    DEFF Research Database (Denmark)

    Hartmann, Martin; Lartillot, Olivier; Toiviainen, Petri

    2017-01-01

    As music unfolds in time, structure is recognised and understood by listeners, regardless of their level of musical expertise. A number of studies have found spectral and tonal changes to quite successfully model boundaries between structural sections. However, the effects of musical expertise...... and experimental task on computational modelling of structure are not yet well understood. These issues need to be addressed to better understand how listeners perceive the structure of music and to improve automatic segmentation algorithms. In this study, computational prediction of segmentation by listeners...... was investigated for six musical stimuli via a real-time task and an annotation (non real-time) task. The proposed approach involved computation of novelty curve interaction features and a prediction model of perceptual segmentation boundary density. We found that, compared to non-musicians’, musicians...

  2. SIDES - Segment Interconnect Diagnostic Expert System

    International Nuclear Information System (INIS)

    Booth, A.W.; Forster, R.; Gustafsson, L.; Ho, N.

    1989-01-01

    It is well known that the FASTBUS Segment Interconnect (SI) provides a communication path between two otherwise independent, asynchronous bus segments. The SI is probably the most important module in any FASTBUS data acquisition network, since its failure to function can cause whole segments of the network to be inaccessible and sometimes inoperable. This paper describes SIDES, an intelligent program designed to diagnose SIs both in situ as they operate in a data acquisition network, and in the laboratory in an acceptance/repair environment. The paper discusses important issues such as knowledge acquisition: extracting knowledge from human experts and other knowledge sources. SIDES can benefit high energy physics experiments, where SI problems can be diagnosed and solved more quickly. Equipment pool technicians can also benefit from SIDES, first by decreasing the number of SIs erroneously turned in for repair, and secondly as SIDES acts as an intelligent assistant to the technician in the diagnosis and repair process

  3. Parallel fuzzy connected image segmentation on GPU.

    Science.gov (United States)

    Zhuge, Ying; Cao, Yong; Udupa, Jayaram K; Miller, Robert W

    2011-07-01

    Image segmentation techniques using fuzzy connectedness (FC) principles have shown their effectiveness in segmenting a variety of objects in several large applications. However, one challenge in these algorithms has been their excessive computational requirements when processing large image datasets. Nowadays, commodity graphics hardware provides a highly parallel computing environment. In this paper, the authors present a parallel fuzzy connected image segmentation algorithm implementation on NVIDIA's Compute Unified Device Architecture (CUDA) platform for segmenting medical image data sets. In the FC algorithm, there are two major computational tasks: (i) computing the fuzzy affinity relations and (ii) computing the fuzzy connectedness relations. These two tasks are implemented as CUDA kernels and executed on GPU. A dramatic improvement in speed for both tasks is achieved as a result. Our experiments based on three data sets of small, medium, and large data size demonstrate the efficiency of the parallel algorithm, which achieves a speed-up factor of 24.4x, 18.1x, and 10.3x, correspondingly, for the three data sets on the NVIDIA Tesla C1060 over the implementation of the algorithm on CPU, and takes 0.25, 0.72, and 15.04 s, correspondingly, for the three data sets. The authors developed a parallel algorithm of the widely used fuzzy connected image segmentation method on NVIDIA GPUs, which are far more cost- and speed-effective than both clusters of workstations and multiprocessing systems. A near-interactive speed of segmentation has been achieved, even for the large data set.
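
    The two computational tasks named above, fuzzy affinity and fuzzy connectedness, can be sketched in scalar form. This is an illustrative CPU sketch under an assumed Gaussian homogeneity affinity (the specific affinity function and parameters are assumptions, not the paper's); the GPU version parallelizes these evaluations across voxels.

```python
import math

def affinity(a: float, b: float, sigma: float = 10.0) -> float:
    """Homogeneity-based fuzzy affinity between two adjacent voxel
    intensities: 1.0 for identical intensities, decaying with their
    difference (Gaussian form assumed here for illustration)."""
    return math.exp(-((a - b) ** 2) / (2.0 * sigma ** 2))

def path_strength(intensities: list, sigma: float = 10.0) -> float:
    """Strength of a path of voxel intensities: the weakest affinity
    along it. The fuzzy connectedness between two voxels is then the
    maximum strength over all paths joining them."""
    return min(
        affinity(intensities[i], intensities[i + 1], sigma)
        for i in range(len(intensities) - 1)
    )
```

    A homogeneous path such as [100, 100, 100] has strength 1.0, while a path crossing an intensity edge is only as strong as its weakest link, which is what makes FC robust to gradual intensity drift.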

  4. CT identification of bronchopulmonary segments: 50 normal subjects

    International Nuclear Information System (INIS)

    Osbourne, D.; Vock, P.; Godwin, J.D.; Silverman, P.M.

    1984-01-01

    A systematic evaluation of the fissures, segmental bronchi and arteries, bronchopulmonary segments, and peripheral pulmonary parenchyma was made from computed tomographic (CT) scans of 50 patients with normal chest radiographs. Seventy percent of the segmental bronchi and 76% of the segmental arteries were identified. Arteries could be traced to their sixth- and seventh-order branches; their orientation to the plane of the CT section allowed gross identification and localization of bronchopulmonary segments

  5. Segmentation of Lung Structures in CT

    DEFF Research Database (Denmark)

    Lo, Pechin Chien Pau

    This thesis proposes and evaluates new algorithms for segmenting various lung structures in computed tomography (CT) images, namely the lungs, airway trees and vessel trees. The main objective of these algorithms is to facilitate a better platform for studying Chronic Obstructive Pulmonary Disease......, 200 randomly selected CT scans were manually evaluated by medical experts, and only negligible or minor errors were found in nine scans. The proposed algorithm has been used to study how changes in smoking behavior affect CT based emphysema quantification. The algorithms for segmenting the airway...

  6. Coupled Shape Model Segmentation in Pig Carcasses

    DEFF Research Database (Denmark)

    Hansen, Mads Fogtmann; Larsen, Rasmus; Ersbøll, Bjarne Kjær

    2006-01-01

    levels inside the outline as well as in a narrow band outside the outline. The maximum a posteriori estimate of the outline is found by gradient descent optimization. In order to segment a group of mutually dependent objects we propose 2 procedures, 1) the objects are found sequentially by conditioning...... the initialization of the next search from already found objects; 2) all objects are found simultaneously and a repelling force is introduced in order to avoid overlap between outlines in the solution. The methods are applied to segmentation of cross sections of muscles in slices of CT scans of pig backs for quality...

  7. Self-assembling segmented coiled tubing

    Science.gov (United States)

    Raymond, David W.

    2016-09-27

    Self-assembling segmented coiled tubing is a concept that allows the strength of thick-wall rigid pipe, and the flexibility of thin-wall tubing, to be realized in a single design. The primary use is for a drillstring tubular, but it has potential for other applications requiring transmission of mechanical loads (forces and torques) through an initially coiled tubular. The concept uses a spring-loaded spherical `ball-and-socket` type joint to interconnect two or more short, rigid segments of pipe. Use of an optional snap ring allows the joint to be permanently made, in a `self-assembling` manner.

  8. Segmentation of the Indian photovoltaic market

    International Nuclear Information System (INIS)

    Srinivasan, S.

    2005-01-01

    This paper provides an analytical framework for studying the actors, networks and institutions, and examines the evolution of the Indian Solar Photovoltaic (PV) market. Different market segments, along the lines of demand and supply of PV equipment, i.e. on the basis of geography, end-use application, subsidy policy and other financing mechanisms, are detailed. The objective of this effort is to identify segments that require special attention from policy makers, donors and the Ministry of Non-Conventional Energy Sources. The paper also discusses the evolution of the commercial PV market in certain parts of the country and trends in the maturity of the market. (author)

  9. Unsupervised information extraction by text segmentation

    CERN Document Server

    Cortez, Eli

    2013-01-01

    A new unsupervised approach to the problem of Information Extraction by Text Segmentation (IETS) is proposed, implemented and evaluated herein. The authors' approach relies on information available in pre-existing data to learn how to associate segments in the input string with attributes of a given domain, relying on a very effective set of content-based features. The effectiveness of the content-based features is also exploited to directly learn, from test data, structure-based features, with no previous human-driven training, a feature unique to the presented approach. Based on the approach, a

  10. Innovative visualization and segmentation approaches for telemedicine

    Science.gov (United States)

    Nguyen, D.; Roehrig, Hans; Borders, Marisa H.; Fitzpatrick, Kimberly A.; Roveda, Janet

    2014-09-01

    In health care applications, we obtain, manage, store and communicate high-quality, large-volume image data through integrated devices. In this paper we propose several promising methods that can assist physicians in image data processing and communication. We design a new semi-automated segmentation approach for radiological images, such as CT and MRI, to clearly identify the areas of interest. This approach combines the advantages of both region-based and boundary-based methods. It is composed of three key steps: coarse segmentation using a fuzzy affinity and homogeneity operator; image division and reclassification using the Voronoi diagram; and refinement of boundary lines using the level set model.

  11. Strategies for regular segmented reductions on GPU

    DEFF Research Database (Denmark)

    Larsen, Rasmus Wriedt; Henriksen, Troels

    2017-01-01

    We present and evaluate an implementation technique for regular segmented reductions on GPUs. Existing techniques tend to be either consistent in performance but relatively inefficient in absolute terms, or optimised for specific workloads and thereby exhibiting bad performance for certain input...... is in the context of the Futhark compiler, the implementation technique is applicable to any library or language that has a need for segmented reductions. We evaluate the technique on four microbenchmarks, two of which we also compare to implementations in the CUB library for GPU programming, as well as on two...
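
    For regular segments (all of the same length), the semantics of a segmented reduction can be sketched with NumPy. This is an illustrative reference implementation only; the paper's contribution is an efficient GPU mapping of this operation, not this host-side code.

```python
import numpy as np

def segmented_sum(values: np.ndarray, seg_len: int) -> np.ndarray:
    """Reduce (sum) each contiguous segment of length seg_len independently.

    Regularity, i.e. equal-length segments, lets us reshape instead of
    carrying per-segment offsets; it is this regularity that the GPU
    implementation technique exploits.
    """
    assert values.size % seg_len == 0, "input must be a whole number of segments"
    return values.reshape(-1, seg_len).sum(axis=1)

# Two segments [1, 2, 3] and [4, 5, 6] reduce to one sum each:
sums = segmented_sum(np.arange(1, 7), 3)
# sums.tolist() == [6, 15]
```
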

  12. Actinic Granuloma with Focal Segmental Glomerulosclerosis

    Directory of Open Access Journals (Sweden)

    Ruedee Phasukthaworn

    2016-02-01

    Full Text Available Actinic granuloma is an uncommon granulomatous disease, characterized by annular erythematous plaque with central clearing, predominantly located on sun-damaged skin. The pathogenesis is not well understood; ultraviolet radiation is recognized as a precipitating factor. We report a case of a 52-year-old woman who presented with asymptomatic annular erythematous plaques on the forehead and both cheeks persisting for 2 years. The clinical presentation and histopathologic findings support the diagnosis of actinic granuloma. During that period of time, she also developed focal segmental glomerulosclerosis. The association between actinic granuloma and focal segmental glomerulosclerosis needs to be clarified by further studies.

  13. Segmentation and informality in Vietnam : a survey of the literature: country case study on labour market segmentation

    OpenAIRE

    Cling, Jean-Pierre; Razafindrakoto, Mireille; Roubaud, François

    2014-01-01

    Labour market segmentation is usually defined as the division of the labour market into separate sub-markets or segments, distinguished by different characteristics and behavioural rules (incomes, contracts, etc.). In developed countries, and especially in Europe, the economic debate on the segmentation issue has focused on contractual segmentation and dualism.

  14. London SPAN version 4 parameter file format

    International Nuclear Information System (INIS)

    2004-06-01

    Powernext SA is a Multilateral Trading Facility in charge of managing the French power exchange through an optional and anonymous organised trading system. Powernext SA collaborates with the clearing organization LCH.Clearnet SA to secure and facilitate the transactions. The French Standard Portfolio Analysis of Risk (SPAN) is a system used by LCH.Clearnet to calculate the initial margins from and for its clearing members. SPAN is a computerized system which calculates the impact of several possible variations of rates and volatility on by-product portfolios. The initial margin call is equal to the maximum probable loss calculated by the system. This document contains details of the format of the London SPAN version 4 parameter file. This file contains all the parameters and risk arrays required to calculate SPAN margins. London SPAN Version 4 is an upgrade from Version 3, which is also known as LME SPAN. This document contains the full revised file specification, highlighting the changes from Version 3 to Version 4
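
    The core SPAN idea stated above, that the initial margin call equals the maximum probable loss across the scenario risk arrays, can be sketched as follows. The function name, the zero floor, and the scenario values are illustrative assumptions for demonstration; this is not the London SPAN version 4 file format or calculation specification.

```python
def initial_margin(net_position: float, risk_array: list) -> float:
    """Maximum probable loss over the SPAN scenarios.

    risk_array holds the per-lot loss (positive) or gain (negative)
    under each rate/volatility scenario; the margin is the worst-case
    loss for the whole position, floored at zero (assumption: a
    portfolio that gains in every scenario owes no margin).
    """
    return max(0.0, max(net_position * loss for loss in risk_array))

# A 10-lot long position whose worst scenario loses 2.5 per lot:
margin = initial_margin(10, [-1.0, 0.5, 2.5, -0.25])
# margin == 25.0
```

    Note that a short position (negative net_position) automatically picks out the scenarios where the long side gains, since the per-lot figures flip sign.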

  15. Enhancements to the CALIOP Aerosol Subtyping and Lidar Ratio Selection Algorithms for Level II Version 4

    Science.gov (United States)

    Omar, A. H.; Tackett, J. L.; Vaughan, M. A.; Kar, J.; Trepte, C. R.; Winker, D. M.

    2016-12-01

    This presentation describes several enhancements planned for the version 4 aerosol subtyping and lidar ratio selection algorithms of the Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP) instrument. The CALIOP subtyping algorithm determines the most likely aerosol type from CALIOP measurements (attenuated backscatter, estimated particulate depolarization ratios δe, layer altitude) and surface type. The aerosol type, so determined, is associated with a lidar ratio (LR) from a discrete set of values. Some of these lidar ratios have been updated in the version 4 algorithms. In particular, the dust and polluted dust values will be adjusted to reflect the latest measurements and model studies of these types. Version 4 eliminates the confusion between smoke and clean marine aerosols seen in version 3 by modifications to the elevated layer flag definitions used to identify smoke aerosols over the ocean. In the subtyping algorithms, pure dust is determined by high estimated particulate depolarization ratios [δe > 0.20]. Mixtures of dust and other aerosol types are determined by intermediate values of the estimated depolarization ratio [0.075 < δe < 0.20]; in version 3 these were limited to mixtures of dust and smoke, the so-called polluted dust aerosol type. To differentiate between mixtures of dust and smoke and mixtures of dust and marine aerosols, a new aerosol type will be added in the version 4 data products. In the revised classification algorithms, polluted dust will still be defined as dust + smoke/pollution, but in the marine boundary layer instances of moderate depolarization will be typed as dusty marine aerosols, with a lower lidar ratio than polluted dust. The dusty marine type introduced in version 4 is modeled as a mixture of dust + marine aerosol. To account for fringes, the version 4 Level 2 algorithms implement the Subtype Coalescence Algorithm for AeRosol Fringes (SCAARF) routine to detect and classify fringes of aerosol plumes that are detected at 20 km or 80 km horizontal resolution at the plume base. These
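
    The depolarization-ratio thresholds quoted in the abstract can be sketched as a simple decision rule. This is illustrative only: the real version 4 algorithm also uses attenuated backscatter, layer altitude, and surface type, and separates polluted dust from dusty marine by whether the layer sits in the marine boundary layer.

```python
def dust_class(delta_e: float) -> str:
    """Coarse aerosol dust classification from the estimated particulate
    depolarization ratio, using only the thresholds given in the text."""
    if delta_e > 0.20:
        return "pure dust"
    if 0.075 < delta_e <= 0.20:
        # dust mixture: polluted dust (dust + smoke/pollution), or
        # dusty marine if in the marine boundary layer (version 4)
        return "dust mixture"
    return "non-dust"
```
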

  16. Lung segment geometry study: simulation of largest possible tumours that fit into bronchopulmonary segments.

    Science.gov (United States)

    Welter, S; Stöcker, C; Dicken, V; Kühl, H; Krass, S; Stamatis, G

    2012-03-01

    Segmental resection in stage I non-small cell lung cancer (NSCLC) has been well described and is considered to have survival rates similar to lobectomy, but with increased rates of local tumour recurrence due to inadequate parenchymal margins. Consequently, segmentectomy is today only performed when the tumour is smaller than 2 cm. Three-dimensional reconstructions from 11 thin-slice CT scans of bronchopulmonary segments were generated, and virtual spherical tumours were placed over the segments, respecting all segmental borders. As a next step, virtual parenchymal safety margins of 2 cm and 3 cm were subtracted and the size of the remaining tumour calculated. The maximum tumour diameters with a 30-mm parenchymal safety margin ranged from 26.1 mm in right-sided segments 7 + 8 to 59.8 mm in the left apical segments 1-3. Using a three-dimensional reconstruction of lung CT scans, we demonstrated that segmentectomy or resection of segmental groups should be feasible with adequate margins, even for larger tumours in selected cases.
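
    The geometric relationship behind these numbers can be sketched directly: a spherical tumour that must keep a safety margin on all sides can be at most the segment's largest inscribed sphere shrunk by twice the margin. This is an illustrative back-of-envelope sketch under that spherical simplification, not the paper's CT-based method, and the 86 mm figure below is a made-up example.

```python
def max_tumour_diameter(inscribed_sphere_mm: float, margin_mm: float) -> float:
    """Largest spherical tumour diameter that still leaves margin_mm of
    parenchyma on every side of an inscribed sphere, floored at zero."""
    return max(0.0, inscribed_sphere_mm - 2.0 * margin_mm)

# e.g. a segment whose largest inscribed sphere is 86 mm across leaves
# room for a 26 mm tumour with a 30 mm margin on all sides:
d = max_tumour_diameter(86.0, 30.0)
# d == 26.0
```
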

  17. New developments in program STANSOL version 3

    International Nuclear Information System (INIS)

    Gray, W.H.

    1981-10-01

    STANSOL is a computer program that applies a solution for the mechanical displacement, stress, and strain in rotationally-transversely isotropic, homogeneous, axisymmetric solenoids. Careful application of the solution permits the complex mechanical behavior of multilayered, nonhomogeneous solenoids to be examined, in which the loads may vary arbitrarily from layer to layer. Loads applied to the solenoid model by program STANSOL may consist of differential temperature, winding preload, internal and/or external surface pressure, and electromagnetic Lorentz body forces. STANSOL version 3, the latest update to the original version of the computer program, also permits structural analysis of solenoid magnets in which frictionless interlayer gaps may open or close. This paper presents the new theory coded into version 3 of the STANSOL program, as well as the new input data format and the graphical output display of the resulting analysis

  18. Nuclear criticality safety handbook. Version 2

    International Nuclear Information System (INIS)

    1999-03-01

    The Nuclear Criticality Safety Handbook, Version 2 essentially incorporates the Supplement Report to the Nuclear Criticality Safety Handbook, released in 1995, into the first version of the Handbook, published in 1988. The following two points are new: (1) exemplifying safety margins related to modelled dissolution and extraction processes, and (2) describing evaluation methods and an alarm system for criticality accidents. The chapter that treats modelling of the fuel system is revised based on previous studies: e.g., the fuel grain size below which the system can be regarded as homogeneous, the non-uniformity effect of fuel solution, and burnup credit. This revision resolves the inconsistencies found in the first version between the evaluation of errors in the JACS code system and the criticality condition data that were calculated based on that evaluation. (author)

  19. Automated gastric cancer diagnosis on H&E-stained sections; training a classifier on a large scale with multiple instance machine learning

    Science.gov (United States)

    Cosatto, Eric; Laquerre, Pierre-Francois; Malon, Christopher; Graf, Hans-Peter; Saito, Akira; Kiyuna, Tomoharu; Marugame, Atsushi; Kamijo, Ken'ichi

    2013-03-01

    We present a system that detects cancer on slides of gastric tissue sections stained with hematoxylin and eosin (H&E). At its heart is a classifier trained using the semi-supervised multi-instance learning framework (MIL), where each tissue is represented by a set of regions-of-interest (ROI) and a single label. Such labels are readily obtained because pathologists diagnose each tissue independently as part of the normal clinical workflow. From a large dataset of over 26K gastric tissue sections from over 12K patients, obtained from a clinical load spanning several months, we train a MIL classifier on a patient-level partition of the dataset (2/3 of the patients) and obtain a very high performance of 96% (AUC), tested on the remaining 1/3 never-seen-before patients (over 8K tissues). We show this level of performance to match the more costly supervised approach, where individual ROIs need to be labeled manually. The large amount of data used to train this system gives us confidence in its robustness and that it can be safely used in a clinical setting. We demonstrate how it can improve the clinical workflow when used for pre-screening or quality control. For pre-screening, the system can diagnose 47% of the tissues with a very low likelihood of cancer, thus halving the clinicians' caseload. For quality control, compared to random rechecking of 33% of the cases, the system achieves a three-fold increase in the likelihood of catching cancers missed by pathologists. The system is currently in regular use at independent pathology labs in Japan where it is used to double-check clinicians' diagnoses. At the end of 2012 it will have analyzed over 80,000 slides of gastric and colorectal samples (200,000 tissues).
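
    The multi-instance labelling assumption used above can be sketched minimally: a bag (tissue) is positive if at least one of its instances (ROIs) is positive. This is an illustrative sketch of the standard MIL bag-aggregation rule with made-up scores and a made-up threshold, not the authors' trained classifier.

```python
def bag_score(instance_scores: list) -> float:
    """Aggregate per-ROI cancer scores into a tissue-level score.

    Under the standard MIL assumption the bag score is the maximum
    instance score: a single cancerous ROI suffices to make the
    whole tissue cancerous."""
    return max(instance_scores)

def bag_label(instance_scores: list, threshold: float = 0.5) -> bool:
    """Tissue-level decision from ROI scores (threshold is illustrative)."""
    return bag_score(instance_scores) >= threshold

# A tissue with one suspicious ROI is flagged even if most ROIs are benign:
flagged = bag_label([0.05, 0.10, 0.92, 0.08])
# flagged is True
```

    This max-aggregation is also what lets tissue-level labels from routine diagnoses supervise ROI-level learning without any manual ROI annotation.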

  20. Representation of architectural artifacts: definition of an approach combining the complexity of the 3d digital instance with the intelligibility of the theoretical model.

    Directory of Open Access Journals (Sweden)

    David Lo Buglio

    2012-12-01

    Full Text Available En: With the arrival of digital technologies in the field of architectural documentation, many tools and methods for data acquisition have been considerably developed. However, these developments are primarily used for recording the colorimetric and dimensional properties of the objects processed. The actors of the disciplines concerned by 3D digitization of architectural heritage are faced with a large amount of data, leaving the survey far from its cognitive dimension. In this context, it seems necessary to provide innovative solutions in order to increase the informational value of the representations produced, by strengthening the relations between the "multiplicity" of data and the "intelligibility" of the theoretical model. To answer the lack of methodology we perceived, this article therefore offers an approach to the creation of representation systems that articulate the digital instance with the geometric/semantic model. It (translated): Thanks to the introduction of digital technologies in the field of architectural documentation, many acquisition tools and methods have undergone considerable development. However, these developments have mainly concentrated on recording and restituting the geometric and colorimetric properties of the objects of study. The disciplines concerned with the 3D digitization of architectural heritage can therefore produce large quantities of data through an evolution of documentation practices that could progressively make the cognitive dimension of the survey disappear. In this context, it appears necessary to provide innovative solutions to increase the informational value of digital representations by identifying the potential relations that can be built between the notions of "multiplicity" and "intelligibility". To respond to this methodological deficit, this article presents the foundations of an approach for the